Polynomial Calculation

 

1. Consider the following polynomial with n terms, where a_i is the ith coefficient:

P(x) = Σ_{i=0}^{n−1} a_i x^i = a_0 + a_1 x + a_2 x^2 + … + a_{n−1} x^{n−1}
Below is pseudocode of a naive algorithm that calculates P(x) for some input x:
Function pow(x, i):
    // x is a number, i is an integer; returns x^i
    exp := 1
    p := 1
    while p <= i do
        exp = exp * x
        p += 1
    return exp

Function calculatePoly1(x, a, n):
    // x is a number, a is a list of numbers, n is the length of a
    sum := 0
    i := 0
    while i < n do
        sum += a[i] * pow(x, i)
        i += 1
    return sum
(a) (4 points) Prove a tight bound for the runtime complexity of calculatePoly1 using summations.
(b) (4 points) Give a tight bound for the space complexity of calculatePoly1. Explain your
bound.
Assume that all integers and floats require the same, constant amount of memory.
Note: Unless otherwise specified, we ignore the space required to store the input arguments, since this will remain the same for all algorithms that solve the same problem.
(c) (10 points) Propose an algorithm calculatePoly2 with asymptotically better runtime
compared to calculatePoly1 and provide its pseudocode.
Give tight bounds for your algorithm’s runtime and space complexity.
Explain your bounds.
Your answer may not use a built-in exponentiation operator (like x^n or x**n). You may
use other basic operators like addition, multiplication, etc.
2. For each of the following questions, briefly explain your answer.
(a) (4 points) If an algorithm's best case runtime is Θ(n), is it possible that it takes Θ(n^2)
time on some inputs?
(b) (4 points) If an algorithm's best case runtime is Θ(n), is it possible that it takes Θ(n^2)
time on all inputs?
(c) (4 points) If an algorithm's worst case runtime is O(n^2), is it possible that it takes o(n^2)
time on some inputs?
(d) (4 points) If an algorithm's worst case runtime is O(n^2), is it possible that it takes o(n^2)
time on all inputs?
3. (6 points) Prove the following bound using the inequality definition of O: 18n lg(n) + 2n^2 = O(n^2).
4. (10 points) Consider all exponential functions of the form f(n) = k^n, where k is a positive
constant.
Do there exist two exponential functions of this form with different bases (e.g., f(n) = 2^n and
g(n) = 3^n) such that their growth rates are asymptotically equivalent?
Prove your answer.
5. (10 points) Prove the property of transitivity for O: For any two asymptotically non-negative
functions f and g, if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
6. (25 points) Order the following functions from least to greatest growth rate, such that for any
two functions f and g, if f comes before g, then f(n) = O(g(n)). You must prove each
ranking.
For example, if the functions were f, g, h and you gave the order g, h, f, then you would need
to prove that g(n) = O(h(n)), g(n) = O(f(n)), and h(n) = O(f(n)).
  • f1(n) = √n
  • f2(n) = n^n
  • f3(n) = n^0.00001
  • f4(n) = n!
  • f5(n) = lg(n)

 

Sample Answer

1. Polynomial Calculation
(a) Runtime Complexity of calculatePoly1
The runtime of calculatePoly1 is determined by its single while loop together with the cost of each call to pow.

The loop runs n times, once per coefficient. Iteration i performs a constant amount of work (one multiplication, one addition, one increment) plus a call to pow(x, i), whose own loop runs i times.

Summing over all iterations, the total work is

T(n) = Σ_{i=0}^{n−1} (c + i) = cn + (0 + 1 + 2 + … + (n−1)) = cn + n(n−1)/2

for some constant c. The dominant term n(n−1)/2 grows quadratically, and since the loop always runs to completion, the same summation bounds the runtime from below as well as above.

Thus the tight bound for the runtime complexity of calculatePoly1 is Θ(n^2).
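For concreteness, here is a minimal runnable Python sketch of the naive algorithm (a transliteration of the pseudocode above; the names are my own), instrumented with a multiplication counter so the quadratic growth is visible:

    def calculate_poly1(x, a):
        # Evaluate a[0] + a[1]*x + ... + a[n-1]*x^(n-1) naively,
        # returning the value and the number of multiplications performed.
        ops = 0

        def power(base, exponent):
            nonlocal ops
            result = 1
            for _ in range(exponent):  # i multiplications to compute x^i
                result *= base
                ops += 1
            return result

        total = 0
        for i, coeff in enumerate(a):
            total += coeff * power(x, i)
            ops += 1  # the coefficient multiplication
        return total, ops

    for n in (10, 20, 40):
        _, ops = calculate_poly1(2.0, [1.0] * n)
        print(f"n = {n}: {ops} multiplications")
    # The count is n(n+1)/2, roughly quadrupling when n doubles: Θ(n^2).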

(b) Space Complexity of calculatePoly1
The space complexity of calculatePoly1 is determined by the memory the algorithm allocates beyond its input.

The algorithm uses a fixed set of scalar variables: exp and p inside pow, and sum and i inside calculatePoly1. By assumption each number takes a constant amount of memory, and the number of variables does not grow with n.

Per the note in the problem statement, the space required to store the input arguments, including the coefficient list a, is ignored, and the algorithm allocates no other data structures.

Hence the tight bound for the space complexity of calculatePoly1 is Θ(1).

(c) Improved Algorithm calculatePoly2
To improve the runtime complexity of calculatePoly1, we can use a more efficient algorithm called Horner’s method. This method allows us to calculate the polynomial value with a linear number of operations.

Here is the pseudocode for calculatePoly2 using Horner’s method:

Function calculatePoly2(x, a, n):
    sum := a[n-1]
    i := n - 2
    while i >= 0 do
        sum = sum * x + a[i]
        i -= 1
    return sum

In this algorithm, we start from the highest-degree coefficient a[n-1] and repeatedly multiply the running sum by x and add the next coefficient a[i], working down to i = 0. This avoids computing powers of x separately, because it evaluates the nested form a[0] + x(a[1] + x(a[2] + … + x·a[n-1])).

The while loop iterates n−1 times, from index n−2 down to 0. Each iteration performs one multiplication and one addition, so the loop does 2(n−1) = 2n − 2 operations, plus constant setup work. The loop always runs all n−1 iterations, so this count is a lower bound as well as an upper bound.

Thus, the tight bound for the runtime complexity of calculatePoly2 is Θ(n).

As in calculatePoly1, only a constant number of scalar variables (sum and i) are used and the input is not counted, so the space complexity is Θ(1).
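Below is a minimal Python sketch of Horner's method (the names are my own, not part of the assignment), with a cross-check against direct evaluation. The ** operator appears only in the check; calculate_poly2 itself uses no exponentiation operator, as the problem requires:

    def calculate_poly2(x, a):
        # Evaluate the polynomial with coefficient list a at x via Horner's method.
        total = a[-1]
        for coeff in reversed(a[:-1]):  # from a[n-2] down to a[0]
            total = total * x + coeff
        return total

    coeffs = [3.0, -1.0, 0.5, 2.0]  # 3 - x + 0.5x^2 + 2x^3
    x = 1.5
    direct = sum(c * x**i for i, c in enumerate(coeffs))
    assert abs(calculate_poly2(x, coeffs) - direct) < 1e-9
    print(calculate_poly2(x, coeffs))  # 9.375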

2. Algorithm Runtimes
(a) Best Case Runtime Θ(n) vs. Θ(n^2) on Some Inputs
Yes, it is possible. The best case runtime describes only the most favorable inputs; other inputs may be slower. For example, insertion sort runs in Θ(n) time on an already-sorted array (its best case) but Θ(n^2) time on a reverse-sorted array.

(b) Best Case Runtime Θ(n) vs. Θ(n^2) on All Inputs
No, it is not possible. If the algorithm took Θ(n^2) time on every input, then in particular its most favorable inputs would take Θ(n^2) time, so its best case runtime would be Θ(n^2), contradicting a best case of Θ(n).

(c) Worst Case Runtime O(n^2) vs. o(n^2) on Some Inputs
Yes, it is possible. The worst case bound O(n^2) only caps the runtime over all inputs; particular inputs can be handled faster. Insertion sort again serves as an example: its worst case is O(n^2), yet on sorted input it takes Θ(n) = o(n^2) time.

(d) Worst Case Runtime O(n^2) vs. o(n^2) on All Inputs
Yes, it is possible. O gives an upper bound that need not be tight. An algorithm that runs in Θ(n) time on every input takes o(n^2) time on all inputs, and its worst case runtime is still (loosely) O(n^2), since any O(n) runtime is also O(n^2). Only a worst case of Θ(n^2) would rule this out.
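To make (a) and (c) concrete, here is a short Python sketch (my own illustration, not from the assignment) that counts insertion sort's comparisons on sorted versus reverse-sorted input:

    def insertion_sort(items):
        # Sort items in place; return the number of comparisons performed.
        comparisons = 0
        for i in range(1, len(items)):
            j = i
            while j > 0:  # shift the new element left past larger neighbors
                comparisons += 1
                if items[j - 1] <= items[j]:
                    break
                items[j - 1], items[j] = items[j], items[j - 1]
                j -= 1
        return comparisons

    n = 100
    print(insertion_sort(list(range(n))))         # sorted: n-1 = 99 comparisons
    print(insertion_sort(list(range(n, 0, -1))))  # reversed: n(n-1)/2 = 4950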

3. Proof of Bound: 18n lg(n) + 2n^2 = O(n^2)
To prove that 18n lg(n) + 2n^2 = O(n^2), we need to show that there exists a positive constant c and an input size n0 such that for all n >= n0, 18n lg(n) + 2n^2 <= c * n^2.

We bound each term by a multiple of n^2.

First term: since lg(n) <= n for all n >= 1, we have 18n lg(n) <= 18n * n = 18n^2 for all n >= 1.

Second term: trivially, 2n^2 <= 2n^2 for all n >= 1.

Combining both terms, for all n >= 1: 18n lg(n) + 2n^2 <= 18n^2 + 2n^2 = 20n^2.

Now we can choose c = 20 and n0 = 1:

For all n >= n0 = 1: 18n lg(n) + 2n^2 <= c * n^2.

Therefore, we have proven that 18n lg(n) + 2n^2 = O(n^2).
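A quick numeric spot-check of the inequality in Python (illustrative only; the proof above stands on its own):

    import math

    # Check 18*n*lg(n) + 2*n^2 <= 20*n^2 at a few sample sizes.
    for n in (1, 2, 16, 1024, 10**6):
        lhs = 18 * n * math.log2(n) + 2 * n * n
        rhs = 20 * n * n
        print(f"n = {n}: {lhs:.0f} <= {rhs:.0f} -> {lhs <= rhs}")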

4. Exponential Functions
Consider f(n) = k^n and g(n) = m^n, where k and m are positive constants with k != m. Assume without loss of generality that k < m.

For the growth rates to be asymptotically equivalent, each function must be big-O of the other; in particular we would need g(n) = O(f(n)), i.e., constants c > 0 and n0 such that m^n <= c * k^n for all n >= n0.

Consider the ratio of the two functions:

g(n)/f(n) = m^n / k^n = (m/k)^n

Since m/k > 1, this ratio grows without bound as n increases, so no constant c can satisfy m^n <= c * k^n for all large n. (Equivalently, f(n)/g(n) = (k/m)^n → 0 as n → ∞, so f(n) = o(g(n)).)

Therefore, no two exponential functions of this form with different bases have asymptotically equivalent growth rates.
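A numeric illustration with assumed sample bases k = 2 and m = 3 (my own example): the ratio (3/2)^n outgrows any fixed constant.

    # g(n)/f(n) = (3/2)^n for f(n) = 2^n, g(n) = 3^n. It exceeds any fixed
    # constant c, so m^n <= c * k^n eventually fails for every choice of c.
    for n in (1, 10, 50, 100):
        print(f"n = {n}: (3/2)^n = {1.5 ** n:.3e}")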

5. Property of Transitivity for O
To prove the property of transitivity for O:

Let’s assume we have three asymptotically non-negative functions f(n), g(n), and h(n):

Given f(n) = O(g(n)) and g(n) = O(h(n)), we want to prove f(n) = O(h(n)).

By definition, f(n) = O(g(n)) implies that there exist positive constants c1 and n1 such that for all n >= n1, f(n) <= c1 * g(n).

Similarly, g(n) = O(h(n)) implies that there exist positive constants c2 and n2 such that for all n >= n2, g(n) <= c2 * h(n).

Combining both inequalities:

For all n >= max(n1, n2): f(n) <= c1 * g(n) <= c1 * (c2 * h(n)) = (c1 * c2) * h(n)

Let c = c1 * c2 be a positive constant.

For all n >= max(n1, n2), we have: f(n) <= c * h(n)

Therefore, f(n) = O(h(n)).

Hence, we have proven the property of transitivity for O.
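As a concrete instance of the argument (my own example, with every constant spelled out):

f(n) = 3n + 1, g(n) = n, h(n) = n^2
3n + 1 <= 4n for all n >= 1, so f(n) = O(g(n)) with c1 = 4, n1 = 1
n <= 1 * n^2 for all n >= 1, so g(n) = O(h(n)) with c2 = 1, n2 = 1
f(n) <= 4n^2 for all n >= max(n1, n2) = 1, so f(n) = O(h(n)) with c = c1 * c2 = 4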

6. Ranking Functions by Growth Rate
The correct order, from least to greatest growth rate, is:

f5(n) = lg(n), f3(n) = n^0.00001, f1(n) = √n, f4(n) = n!, f2(n) = n^n

We prove each adjacent ranking; together with the transitivity property from question 5, this establishes every required pairwise bound.

f5(n) = O(f3(n)): lg(n) grows slower than any positive power of n. By L'Hopital's rule,

lim_{n→∞} lg(n) / n^0.00001 = lim_{n→∞} (1 / (n ln 2)) / (0.00001 * n^(0.00001 − 1)) = lim_{n→∞} 100000 / (ln(2) * n^0.00001) = 0,

so lg(n) = o(n^0.00001), which in particular gives lg(n) = O(n^0.00001).

f3(n) = O(f1(n)): √n = n^0.5 and 0.00001 < 0.5, so n^0.00001 <= n^0.5 for all n >= 1. Take c = 1 and n0 = 1.

f1(n) = O(f4(n)): for all n >= 2, √n <= n <= n!, since n! contains n as a factor and all other factors are at least 1. Take c = 1 and n0 = 2.

f4(n) = O(f2(n)): n! = 1 * 2 * … * n is a product of n factors, each at most n, so n! <= n^n for all n >= 1. Take c = 1 and n0 = 1.
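A small numeric spot-check in Python for two of the rankings (illustrative only; the proofs above are what the question asks for):

    import math

    # The ratios sqrt(n)/n! and n!/n^n should shrink toward 0 as n grows.
    for n in (2, 4, 8, 16):
        print(f"n = {n}: sqrt(n)/n! = {math.sqrt(n) / math.factorial(n):.3e}, "
              f"n!/n^n = {math.factorial(n) / n**n:.3e}")
    # Note: lg(n) vs. n^0.00001 cannot be checked at feasible n, because
    # n^0.00001 overtakes lg(n) only for astronomically large n; that is
    # exactly why the limit argument above is needed.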

 

 
