# Algorithm Analysis and Design Solved Questions

## National Institute of Technology Rourkela

Year 2012 Question Paper Solution

**Question 1: What do you understand by an algorithm? Explain the term definiteness with regard to algorithms. How can an algorithm be analysed? What factors directly affect the analysis of an algorithm?**

Ans 1: An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. More precisely, an algorithm is an effective method, expressed as a finite list of well-defined instructions, for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing output and terminating at a final state. Every algorithm must satisfy the following criteria.

- **Input** – Zero or more quantities are externally supplied.
- **Output** – At least one quantity is produced.
- **Definiteness** – Each instruction is clear and unambiguous.
- **Finiteness** – If we trace out the instructions of the algorithm, then for all cases the algorithm terminates after a finite number of steps.
- **Effectiveness** – Every instruction must be basic enough that it can be carried out, in principle, by a person using only pencil and paper. It is not enough that each operation be definite; it must also be feasible.

Note: You can remember the above five properties with the shortcut IO-DEF: I – Input, O – Output, D – Definiteness, E – Effectiveness, F – Finiteness.

**Definiteness**: Definiteness means each instruction must be clear, precisely defined, and unambiguous: the prescription is exact, leaves no room for arbitrariness, and is universally comprehensible.

**Factors Affecting Algorithm Analysis**: The goal of algorithm analysis is to make comparisons between algorithms.

- The relative performance of algorithms might depend on characteristics of the hardware, so one algorithm might be faster on Machine A, another on Machine B. The general solution to this problem is to specify a **machine model** and analyse the number of steps, or operations, an algorithm requires under that model.
- Relative performance might depend on the details of the dataset. For example, some sorting algorithms run faster if the data are already partially sorted; other algorithms run slower in this case. A common way to avoid this problem is to analyse the **worst case** scenario. It is sometimes useful to analyse average-case performance, but that is usually harder, and it might not be obvious what set of cases to average over.
- Relative performance also depends on the size of the problem. A sorting algorithm that is fast for small lists might be slow for long lists. The usual solution is to express run time (or number of operations) as a function of problem size, and to compare the functions **asymptotically** as the problem size increases.
- The order of growth therefore plays a central role in algorithm analysis.

**Question 2: Solve the following recurrence relation: T(n) = 3T(n/2) + n^{2}, T(1) = 1**

Ans 2: Given **T(n) = 3T(n/2) + n^{2}**, T(1) = 1.

Substituting small values: T(2) = 3T(1) + 2^{2} = 3 + 4 = 7, and T(4) = 3T(2) + 4^{2} = 21 + 16 = 37.

Expanding the recurrence for n = 2^{k}:

T(n) = 3T(n/2) + n^{2} = 3^{2}T(n/4) + 3(n/2)^{2} + n^{2} = … = 3^{k}T(1) + n^{2}[1 + 3/4 + (3/4)^{2} + … + (3/4)^{k−1}]

The geometric series is bounded by the constant 4, and 3^{k} = 3^{log_{2} n} = n^{log_{2} 3} with log_{2} 3 ≈ 1.585 < 2, so T(n) = Θ(n^{2}). (By the master theorem: a = 3, b = 2, k = 2, and a < b^{k}, which gives the same answer.) For n a power of two the exact solution is T(n) = 4n^{2} − 3n^{log_{2} 3}.
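The expansion can be checked numerically; this sketch compares the recurrence against the closed form derived above (which holds for n a power of two):

```python
# Check T(n) = 3*T(n//2) + n**2, T(1) = 1 against the closed form
# T(2**k) = 4**(k+1) - 3**(k+1), i.e. T(n) = 4*n**2 - 3*n**log2(3).

def T(n):
    if n == 1:
        return 1
    return 3 * T(n // 2) + n * n

for k in range(10):
    n = 2 ** k
    assert T(n) == 4 ** (k + 1) - 3 ** (k + 1)

# The n**2 term dominates, so T(n) = Theta(n**2).
print(T(2), T(4))  # 7 37
```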

**Question 3: What is the difference between performance analysis and performance measurement of an algorithm?**

Ans 3: Performance Analysis: Performance analysis is concerned with estimating the space and time requirements of a particular algorithm analytically (a priori), independent of any machine. It is used to compare two or more algorithms, assuming they would run on the same platform.

Performance Measurement: Performance measurement is concerned with obtaining the actual space and time requirements of a particular algorithm by running it (a posteriori). These quantities depend on the compiler and options used, as well as on the computer on which the algorithm is run.

**Question 4: How does the Greedy paradigm of algorithm design differ from Dynamic Programming?**

Ans 4: Dynamic Programming: Dynamic programming, like the divide-and-conquer method, solves problems by combining the solutions to subproblems. A dynamic programming algorithm solves each subproblem just once and saves its answer in a table, thereby avoiding the work of recomputing the answer every time the subproblem recurs.

Greedy Paradigm: A Greedy Algorithm always makes a choice that is best at the moment. The Greedy method suggests that one can devise an algorithm that works in stages, considering one input at a time. At each stage, a decision is made regarding whether a particular input is in an optimal solution. This is done by considering the input in an order determined by some selection procedure.

| Greedy method | Dynamic programming |
| --- | --- |
| Only one sequence of decisions is generated. | Many sequences of decisions may be generated. |
| It does not guarantee an optimal solution in every case. | It always gives an optimal solution. |


**Question 5: Explain how Prim’s algorithm can be implemented using heap. What would be the time complexity of the algorithm?**

Ans 5: A heap works well in this situation because it speeds up the repeated minimum computations, i.e. finding the minimum-weight edge to add to the growing spanning tree.

The pseudocode for Prim's algorithm using a heap reads like this:

- Let *X* = nodes covered so far, *V* = all the nodes in the graph, *E* = all the edges in the graph.
- Pick an arbitrary initial node *s* and put it into *X*.
- For each *v* ∈ *V* − *X*, set key[v] = cost of the cheapest edge *(u, v)* with *u* ∈ *X*.
- While *X* ≠ *V*:
  - Let *v* = extract-min(heap) *(i.e. v is the node with the minimal edge cost into X)*.
  - Add *v* to *X*.
  - For each edge *(v, w)* ∈ *E*:
    - If *w* ∈ *V* − *X* *(i.e. w is a node which hasn't yet been covered)*:
      - Delete *w* from the heap.
      - Recompute key[w] = min(key[w], weight(v, w)) *(key[w] changes only if the weight of the edge (v, w) is less than the current key)*.
      - Reinsert *w* into the heap.

We store the uncovered nodes in the heap, setting each node's priority to the cheapest edge from that node into the set of nodes already covered. Each of the O(E) key updates and O(V) extract-min operations costs O(log V), so the overall running time is O(E log V).
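The delete/recompute/reinsert steps above describe the eager version; as a sketch, the lazy variant below simply pushes candidate edges and skips stale heap entries (the graph and all names are illustrative, not from the question paper):

```python
import heapq

def prim_mst_weight(adj, start):
    """Lazy Prim's: push candidate edges into a heap, pop the cheapest
    edge that reaches a new vertex. adj maps vertex -> list of (weight, neighbor)."""
    covered = {start}
    heap = list(adj[start])          # (weight, neighbor) pairs leaving start
    heapq.heapify(heap)
    total = 0
    while heap and len(covered) < len(adj):
        w, v = heapq.heappop(heap)
        if v in covered:
            continue                 # stale entry: v was reached more cheaply already
        covered.add(v)
        total += w
        for edge in adj[v]:
            if edge[1] not in covered:
                heapq.heappush(heap, edge)
    return total

# 4-vertex example: MST is a-b (1), b-c (2), c-d (3), total weight 6
g = {
    'a': [(1, 'b'), (4, 'c')],
    'b': [(1, 'a'), (2, 'c'), (5, 'd')],
    'c': [(4, 'a'), (2, 'b'), (3, 'd')],
    'd': [(5, 'b'), (3, 'c')],
}
print(prim_mst_weight(g, 'a'))  # 6
```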


**Question 6: Is the minimum spanning tree generated by Kruskal's and Prim's algorithms unique?**

Ans 6: Not necessarily. If all edge weights are distinct, the minimum spanning tree is unique, so Kruskal's and Prim's algorithms return the same tree. If some weights are equal, the graph may have several minimum spanning trees (all of the same total weight), and the two algorithms may return different ones.

**Question 7: Show that the following equation is correct: 33n^{2} + 4n^{2} = Ω(n^{2})**

Ans 7: 33n^{2} + 4n^{2} = 37n^{2} ≥ 1 · n^{2} for all n ≥ 1, so the definition of Ω is satisfied with c = 1 and n_{0} = 1; hence 33n^{2} + 4n^{2} = Ω(n^{2}).

**Question 8: Define the problem classes P and NP. What are the basic steps to prove a problem NP-complete?**

Ans 8: P is the set of all decision problems solvable by deterministic algorithms in polynomial time. NP is the set of all decision problems solvable by non-deterministic algorithms in polynomial time. To prove a problem L NP-complete, the basic steps are: (1) show that L is in NP, i.e. a proposed solution can be verified in polynomial time; (2) choose a known NP-complete problem L′ and give a polynomial-time reduction from L′ to L. Together these show that L is both in NP and NP-hard, and hence NP-complete.

**Question 9: Find the complexity of the Merge Sort algorithm from its recurrence relation, solved by the method of substitution.**

Ans 9:

If the time for the merging operation is proportional to n, then the computing time of merge sort is described by the recurrence relation

T(n) = a, n = 1 (a a constant)

T(n) = 2T(n/2) + cn, n > 1 (c a constant)

Solving by repeated substitution, with n = 2^{k}:

T(n) = 2T(n/2) + cn = 4T(n/4) + 2cn = … = 2^{k}T(1) + kcn = an + cn log_{2} n

Hence T(n) = O(n log n).
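The recurrence above describes the following structure, sketched minimally in Python (the explicit merge loop is the linear-time step that contributes the cn term):

```python
def merge_sort(a):
    """Merge sort: T(n) = 2*T(n/2) + c*n, hence O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # merge step: linear in the total number of elements
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```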

**Question 10: Order the following functions by asymptotic complexity (i.e. Big-Oh).**

Ans 10: n log n < n^{1+ε} < n^{2}/log n < (n^{2} − n + 1)^{4} ≤ n^{8} < (1 + ε)^{n}, for any constant 0 < ε < 1. Note that (n^{2} − n + 1)^{4} and n^{8} in fact have the same order of growth, since (n^{2} − n + 1)^{4} = n^{8}(1 − 1/n + 1/n^{2})^{4} = Θ(n^{8}).

**Question 2: Give the control abstraction for Divide and Conquer. Find the solution to the recurrence T(n) = aT(n/b) + Θ(n^{k}), where a ≥ 1 and b > 1, in the case a > b^{k}**

Ans 2: By a control abstraction we mean a procedure whose flow of control is clear but whose primary operations are specified by other procedures whose precise meanings are left undefined. The control abstraction for divide and conquer is:

**Algorithm DivideAndConquer (P)**

{

if Small(P) then return S(P);

else

{

divide P into smaller instances P1, P2, …, Pk, k ≥ 1;

apply DivideAndConquer to each of these subproblems;

return Combine(DivideAndConquer(P1), DivideAndConquer(P2), …, DivideAndConquer(Pk));

}

}

For the recurrence T(n) = aT(n/b) + Θ(n^{k}) with a > b^{k}, the work per level of the recursion tree grows geometrically, so the leaves dominate and T(n) = Θ(n^{log_{b} a}).

**Question 3: Amend the QUICKSORT algorithm to find the k^{th} smallest element in an unsorted list of comparable elements. Find the time complexity of your algorithm from the corresponding recurrence relation.**

Ans 3:

Finding the kth smallest element of an array is a common interview question. When it is asked, there is no need to ask whether the input array is sorted. The question can appear in different forms, such as:

- Find the nth largest element in an unsorted array.
- Given two unsorted arrays, find the kth smallest element across both of them.

The most widely known solution to this problem is the QuickSelect algorithm, which is a modification of QuickSort itself.

A brute-force approach would be to sort the array and then pick the kth largest or smallest element. How do we improve on this without sorting the entire array?

**Quicksort:**

Before we get into QuickSelect, let us understand Quicksort in brief. Quicksort sorts n items in O(n log n) time in the average case; in the worst case it makes up to n^2 comparisons. With in-place partitioning it needs only O(log n) auxiliary space for the recursion. How does Quicksort work?

Simple Quicksort Algorithm:

a. Find a pivot. The pivot is an index within the bounds of the input array, chosen uniformly at random (this randomness is what makes Quicksort efficient in expectation). Let us call this step "pivot identification".

b. Group all the elements whose value is less than the element at index 'pivot' into one array, and all the elements greater than the value at index 'pivot' into another array. Let us call this step "partitioning".

c. Recursively call quicksort method on these smaller arrays and concatenate smaller array, pivot and greater array to make the sorted array.

Simple enough, but this uses unnecessary space for the arrays created while partitioning. So let's do the partitioning in place. An algorithm is called in-place when it works on the given data structure itself rather than allocating extra space.

To make partitioning in-space:

a. We need to rearrange the given array so that all values before the pivot's final index are less than the pivot value, and all values after it are greater.

b. This can be achieved by swapping every element smaller than the pivot value toward the start of the array, keeping a counter 'storeIndex' of how many elements have been moved. To simplify the swapping, first move the pivot element to the last position of the array; after the sweep in step a, swap the pivot back into position 'storeIndex'. At the end of partitioning, 'storeIndex' holds the pivot's final index.

Pseudocode for this, as given on Wikipedia:

*// left is the index of the leftmost element of the array*

*// right is the index of the rightmost element of the array (inclusive)*

*// number of elements in subarray = right-left+1*

**function** partition(array, 'left', 'right', 'pivotIndex')

'pivotValue' := array['pivotIndex']

swap array['pivotIndex'] and array['right'] *// Move pivot to end*

'storeIndex' := 'left'

**for** 'i' from 'left' to 'right' - 1 *// left ≤ i < right*

**if** array['i'] < 'pivotValue'

swap array['i'] and array['storeIndex']

'storeIndex' := 'storeIndex' + 1

end for

swap array['storeIndex'] and array['right'] *// Move pivot to its final place*

**return** 'storeIndex'
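As a cross-check, the pseudocode above translates directly into Python (this is the Lomuto partition scheme; the sample array is illustrative):

```python
def partition(array, left, right, pivot_index):
    """Lomuto partition matching the pseudocode above: move the pivot to the
    end, sweep smaller elements to the front, then drop the pivot into its
    final index, which is returned."""
    pivot_value = array[pivot_index]
    array[pivot_index], array[right] = array[right], array[pivot_index]
    store_index = left
    for i in range(left, right):
        if array[i] < pivot_value:
            array[i], array[store_index] = array[store_index], array[i]
            store_index += 1
    array[store_index], array[right] = array[right], array[store_index]
    return store_index

a = [7, 2, 9, 4, 3]
p = partition(a, 0, len(a) - 1, 2)   # pivot value is 9
print(p, a)  # 4 [7, 2, 3, 4, 9]
```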

Now let's write the in-place Quicksort algorithm:

Quicksort(array, start, end)

{

if start >= end return; // base case: sub-array of size 0 or 1

choose pivot such that start <= pivot <= end

pivotindex = partition(array, start, end, pivot);

Quicksort(array, start, pivotindex - 1);

Quicksort(array, pivotindex + 1, end); // the pivot is already in its final place

}

Let us end the discussion of Quicksort here and concentrate on the given problem: finding the kth smallest element. To do this we will use the partitioning and pivot selection we learnt above.

**Quickselect**

Quickselect is an alteration of quick sort, where we

- Find the pivot
- Partition based on pivot
- Check which of these partitions the required 'k' falls into, and recursively apply the steps above to locate the kth smallest element

The only alteration is the third step, where we must decide which part of the array to recurse into. The logic is:

"When we partition an array around pivot_index, there are pivot_index − 1 elements before the pivot and (length of array − pivot_index) elements after it (using 1-based indexing). To find the kth smallest element, check whether k falls before the pivot (k < pivot_index) or after it (k > pivot_index), and recurse into that subarray, adjusting k accordingly; if k = pivot_index, the pivot itself is the answer."

Below algorithm is from Yale university's wiki. Algorithm itself is self explanatory.

* Choosing the pivot at random makes the running-time analysis of the algorithm more involved.

* On average this algorithm runs in O(n) time (the proof is nontrivial precisely because the pivot is chosen at random). Even in the worst case it does not grow beyond O(n^2).

Hence it is better than sorting the whole array and picking the element at index k, which even in the best case takes O(n log n).

* Using the in-place version of the same algorithm keeps the auxiliary space at O(1).

QuickSelect(A, k)

{

let r be chosen uniformly at random in the range 1 to length(A)

let pivot = A[r]

let A1, A2 be new arrays

# split into a pile A1 of small elements and A2 of big elements

for i = 1 to length(A)

if A[i] < pivot then

append A[i] to A1

else if A[i] > pivot then

append A[i] to A2

else

# do nothing

end for

if k <= length(A1):

# it's in the pile of small elements

return QuickSelect(A1, k)

else if k > length(A) - length(A2):

# it's in the pile of big elements

return QuickSelect(A2, k - (length(A) - length(A2)))

else

# it's equal to the pivot

return pivot

}
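The same three-way split can be sketched compactly in Python (the test data is illustrative; expected O(n) time, worst case O(n^2)):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (1-indexed) of list a."""
    pivot = a[random.randrange(len(a))]
    smaller = [x for x in a if x < pivot]   # pile of small elements
    bigger = [x for x in a if x > pivot]    # pile of big elements
    if k <= len(smaller):
        return quickselect(smaller, k)
    if k > len(a) - len(bigger):
        return quickselect(bigger, k - (len(a) - len(bigger)))
    return pivot  # k falls among the copies of the pivot

data = [9, 1, 8, 2, 7, 3]
print(quickselect(data, 3))  # 3rd smallest -> 3
```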

**Question 4: Explain Strassen's method for matrix multiplication and prove that its time complexity is O(n^{2.81}). Suppose that someone found a way to multiply two n × n matrices using only 148 multiplications of n/6 × n/6 matrices and Θ(n^{2}) additional work. Assume we used this to produce a recursive divide-and-conquer algorithm for matrix multiplication, similar to Strassen's algorithm.**

- Give a recurrence for the number of operations required to multiply two n × n matrices by this method.
- Give Θ-notation for the number of operations required by this algorithm. Your formula may involve expressions like log_{3} 5 (though of course the numbers might be different from 3 and 5), without giving a numerical value.

Ans 4: Strassen’s algorithm can be viewed as an application of a familiar design technique: divide and conquer. Suppose we wish to compute the product *C *= *AB*, where each of *A*, *B*, and *C *are *n *× *n *matrices. Assuming that *n *is an exact power of 2, we divide each of *A*, *B*, and *C *into four *n**/*2 × *n**/*2 matrices, rewriting the equation

    A = | a  b |        B = | e  f |
        | c  d |            | g  h |

so that *C* = *AB* becomes

    | r  s |   | a  b | | e  f |
    | t  u | = | c  d | | g  h |    (28.8)

Equation (28.8) corresponds to the four equations

r = ae + bg, (28.9)

s = af + bh, (28.10)

t = ce + dg, (28.11)

u = cf + dh. (28.12)

Each of these four equations specifies two multiplications of n/2 × n/2 matrices and the addition of their n/2 × n/2 products. Using these equations to define a straightforward divide-and-conquer strategy, we derive the following recurrence for the time T(n) to multiply two n × n matrices:

T(n) = 8T(n/2) + Θ(n^{2}). (28.13)

Unfortunately, recurrence (28.13) has the solution T(n) = Θ(n^{3}), and thus this method is no faster than the ordinary one. Strassen discovered a different recursive approach that requires only 7 recursive multiplications of n/2 × n/2 matrices and Θ(n^{2}) scalar additions and subtractions, yielding the recurrence

T(n) = 7T(n/2) + Θ(n^{2}) (28.14)

= Θ(n^{lg 7})

= O(n^{2.81}).

For the hypothetical method in the question, which performs 148 multiplications of n/6 × n/6 matrices plus Θ(n^{2}) additional work, the recurrence is T(n) = 148T(n/6) + Θ(n^{2}). Since 148 > 6^{2} = 36, the recursive work dominates and T(n) = Θ(n^{log_{6} 148}).
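The two exponents can be compared numerically, as a quick sanity check on the conclusion (not part of the original derivation):

```python
import math

# Strassen:  T(n) = 7   T(n/2) + Theta(n^2)  -> Theta(n^{log2 7})
# Proposed:  T(n) = 148 T(n/6) + Theta(n^2)  -> Theta(n^{log6 148})
strassen = math.log(7, 2)     # ~2.807
proposed = math.log(148, 6)   # ~2.789
print(round(strassen, 3), round(proposed, 3))  # 2.807 2.789

# log6(148) < log2(7): the hypothetical algorithm is asymptotically faster.
assert proposed < strassen
```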

**Question 5: What are the characteristics of an optimization problem in the context of greedy algorithms? What do you mean by the greedy choice property? Write the greedy method for the SUBSET paradigm and the ORDERING paradigm. What is topological sorting? Use the topological sorting algorithm to find the topological order of the vertices of the given graph. Comment on the complexity of the topological sorting algorithm.**

Ans 5: The **greedy method** is an important design technique that makes the choice that looks best at the moment. Given n inputs, we are required to obtain a subset that satisfies some constraints; such a subset is a feasible solution. The greedy method suggests that one can devise an algorithm that works in stages, considering one input at a time.

**Greedy choice property**: a globally optimal solution can be arrived at by making a locally optimal (greedy) choice; each choice may depend on the choices made so far, but not on future choices or on the solutions to subproblems.

**Feasible and optimal solution**: Given n inputs, a subset that satisfies the given constraints is called a feasible solution. A feasible solution that maximizes or minimizes the given objective function is called an optimal solution.

| Subset paradigm | Ordering paradigm |
| --- | --- |
| At each stage a decision is made as to whether a particular input belongs in an optimal solution (building the solution up as a subset). | For problems that do not call for the selection of an optimal subset, decisions are made by considering the inputs in some order. |
| Examples: Knapsack, MST | Example: Optimal storage on tapes |

**Greedy method control abstraction for the subset paradigm**

SolType Greedy(Type a[], int n)

// a[1:n] contains the n inputs.

{

SolType solution = EMPTY; // Initialize the solution

for (int i = 1; i <= n; i++) {

Type x = Select(a);

if Feasible(solution, x)

solution = Union(solution, x);

}

return solution;

}

**Greedy method control abstraction for the ordering paradigm**

Algorithm Store(n, m)

// assign n programs to m tapes in round-robin order

{

j := 0;

for i := 1 to n do

{

write("append program", i, "to permutation for tape", j);

j := (j + 1) mod m;

}

}

**Topological ordering:** A topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge *uv* from vertex *u* to vertex *v*, *u* comes before *v* in the ordering. For instance, the vertices of the graph may represent tasks to be performed, and the edges may represent constraints that one task must be performed before another; in this application, a topological ordering is just a valid sequence for the tasks. A topological ordering is possible if and only if the graph has no directed cycles, that is, if it is a directed acyclic graph (DAG). Any DAG has at least one topological ordering, and one can be constructed in linear time: since each vertex and each edge is processed exactly once, the complexity of topological sorting is O(V + E).
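As a sketch of one linear-time method, Kahn's algorithm repeatedly removes a vertex of in-degree zero (the example DAG below is assumed, since the question's graph is defined by its figure):

```python
from collections import deque

def topological_order(adj):
    """Kahn's algorithm, O(V + E). adj maps each vertex to its successors."""
    indeg = {u: 0 for u in adj}
    for u in adj:
        for v in adj[u]:
            indeg[v] += 1
    queue = deque(u for u in adj if indeg[u] == 0)  # vertices with no prerequisites
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != len(adj):
        raise ValueError("graph has a cycle, no topological order exists")
    return order

dag = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(topological_order(dag))
```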

**Question 6: Define and differentiate between deterministic and Non-Deterministic algorithm. Explain the construction of Non-deterministic algorithm with an example to solve 0/1 knapsack problem? **

Ans 6: An algorithm is deterministic if, for a given input, the output generated is always the same, like a mathematical function; the state of the machine is known at every step of the algorithm.

An algorithm is non-deterministic if there is more than one path it can take, so one cannot determine the next state of the machine running it. An example would be a randomized function.

Decision problems solvable by non-deterministic machines in polynomial time form the class NP. Hence finding a solution to an NP problem may be hard, but verifying a proposed solution can be done in polynomial time.


**Question 7: Considering the 0/1 Knapsack problem, explain the application of Dynamic Programming and the Greedy algorithm to find an optimal solution. Give the two separate representations of the problem space and compare the time complexity of the algorithms under the said paradigms.**

Ans 7:

The 0/1 knapsack problem exhibits optimal substructure: let i be the highest-numbered item among items 1, …, n in an optimal solution S for capacity W, with value v(S). Then S′ = S − {i} is an optimal solution for capacity W − w_{i}, with value v(S′) = v(S) − v_{i}.

We can express this in the following recursion. Let c[i, w] denote the value of the solution for items 1, …, i and maximum weight w:

c[i, w] = 0, if i = 0 or w = 0

c[i, w] = c[i − 1, w], if w_{i} > w

c[i, w] = max(v_{i} + c[i − 1, w − w_{i}], c[i − 1, w]), if i > 0 and w ≥ w_{i}

Notice that the last case determines whether or not the ith item should be included in an optimal solution. We can use this recursion to create a straightforward dynamic programming algorithm.

**Input:** Two sequences v = ⟨v_{1}, …, v_{n}⟩ and w = ⟨w_{1}, …, w_{n}⟩, the number of items n, and the maximum weight W.

**Output:** The optimal value of the knapsack.

DYNAMIC-0-1-KNAPSACK(v, w, n, W)

{

**for** w ← 0 **to** W **do**

c[0, w] ← 0

**end for**

**for** i ← 1 **to** n **do**

c[i, 0] ← 0

**for** w ← 1 **to** W **do**

**if** w_{i} ≤ w **then**

**if** v_{i} + c[i − 1, w − w_{i}] > c[i − 1, w] **then**

c[i, w] ← v_{i} + c[i − 1, w − w_{i}]

**else**

c[i, w] ← c[i − 1, w]

**end if**

**else**

c[i, w] ← c[i − 1, w]

**end if**

**end for**

**end for**

**return** c[n, W]

}
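The pseudocode above translates almost line for line into Python; the three-item instance used below is illustrative. The double loop makes the O(nW) time (and space) cost of the dynamic programming approach explicit:

```python
def knapsack(v, w, W):
    """Bottom-up 0/1 knapsack: v[i] and w[i] are the value and weight of
    item i (0-indexed); returns the optimal value for capacity W."""
    n = len(v)
    c = [[0] * (W + 1) for _ in range(n + 1)]  # c[i][cap], row 0 = no items
    for i in range(1, n + 1):
        for cap in range(W + 1):
            c[i][cap] = c[i - 1][cap]          # skip item i
            if w[i - 1] <= cap:                # or take it, if it fits
                c[i][cap] = max(c[i][cap], v[i - 1] + c[i - 1][cap - w[i - 1]])
    return c[n][W]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```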

**Question 8: Give the algorithm for Binary Search and explain how it functions. Devise a ternary search algorithm that first tests the element at position n/3 for equality with the value x, then checks the element at 2n/3, and either discovers x or reduces the set size to one-third of the original. Compare this with binary search.**

Ans 8: Algorithm of Binary Search

Algorithm BinSearch(a,n,x)

//Given an array a[1:n] of elements in nondecreasing

// order, n>0, determine whether x is present

{

low : = 1;

high : = n;

while (low <= high) do

{

mid : = [(low+high)/2];

if(x < a[mid]) then high:= mid-1;

else if (x >a[mid]) then low:=mid + 1;

else return mid;

}

return 0;

}
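Both searches can be sketched in Python (the array contents are illustrative). Ternary search cuts the candidate set to one-third per round but may make two comparisons per probe, so its worst-case comparison count, roughly 2·log₃ n ≈ 1.26·log₂ n, is worse than binary search's log₂ n:

```python
def binary_search(a, x):
    """Return the index of x in sorted list a, or -1 if absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1
        elif x > a[mid]:
            low = mid + 1
        else:
            return mid
    return -1

def ternary_search(a, x):
    """Probe at 1/3 and 2/3 of the range, keeping one third each round."""
    low, high = 0, len(a) - 1
    while low <= high:
        third = (high - low) // 3
        m1, m2 = low + third, high - third
        if a[m1] == x:
            return m1
        if a[m2] == x:
            return m2
        if x < a[m1]:
            high = m1 - 1          # keep the lower third
        elif x > a[m2]:
            low = m2 + 1           # keep the upper third
        else:
            low, high = m1 + 1, m2 - 1  # keep the middle third
    return -1

a = [2, 4, 7, 9, 12, 15, 20]
print(binary_search(a, 9), ternary_search(a, 9))  # 3 3
```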

**Question 9: What are the general characteristics of a dynamic programming algorithm? In the longest-common-subsequence problem, we are given two sequences A = [a_{1}, a_{2}, …, a_{m}] and B = [b_{1}, b_{2}, …, b_{n}] and wish to find a maximum-length common subsequence X of A and B. (Please clearly define your notation.)**

- Describe a dynamic-programming algorithm to find a maximum-length common subsequence X of A and B. (Please clearly define your notation.)
- Analyse the running time and space requirement of your algorithm (in terms of big-O notation).
- If we only want to find the length of X, what is the space requirement? Why?

Ans 9:

**Dynamic Programming**

- Not a specific algorithm, but a technique (like divide-and-conquer).
- Developed back in the day when "programming" meant "tabular method" (like linear programming); it doesn't really refer to computer programming.
- Used for optimization problems: find *a* solution with *the* optimal value, either minimization or maximization (we'll see both).

**Four-step method**

- Characterize the structure of an optimal solution.
- Recursively define the value of an optimal solution.
- Compute the value of an optimal solution in a bottom-up fashion.
- Construct an optimal solution from computed information.
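For the LCS question, the four steps instantiate as the classic table L[i][j] = length of an LCS of the first i symbols of A and the first j of B; a minimal sketch (the strings are the usual textbook example, assumed here). Time and space are O(mn); if only the length is needed, O(min(m, n)) space suffices, since each row depends only on the previous one:

```python
def lcs_length(A, B):
    """Bottom-up LCS length: extend on a match, otherwise take the best of
    dropping the last symbol of A or of B."""
    m, n = len(A), len(B)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```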

**Question 10: You are interested in analysing some hard-to-obtain data from two separate databases. Each database contains n numerical values, so there are 2n values in total, and you may assume that no two values are the same. You would like to determine the median of this set of 2n values, which we will define here to be the n^{th} smallest value; the only way you can access the values is through queries to the databases. In a single query, you can specify a value k to one of the two databases, and the chosen database will return the k^{th} smallest value that it contains. Since queries are expensive, you would like to compute the median using as few queries as possible. Give an algorithm that finds the median value using at most O(log n) queries.**

Ans 10: Compare the medians of the two databases: query the ⌈n/2⌉^{th} smallest value in each. If median₁ < median₂, the overall median cannot lie in the lower half of database 1 or in the upper half of database 2, so both of those halves can be discarded; the search then continues on candidate ranges of half the size, adjusting the rank sought accordingly (symmetrically if median₁ > median₂). Each round uses two queries and halves the candidate range, so at most O(log n) queries are needed.
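A sketch of this rank-elimination strategy, modelling each database as a rank-query function (the database contents and query log are stand-ins, not from the paper; each round discards about half the remaining rank, so the query count is O(log n)):

```python
def kth_smallest(q1, n1, q2, n2, k):
    """k-th smallest of two sorted collections accessed only through rank
    queries: q(r) returns the r-th smallest value (1-indexed)."""
    i, j = 0, 0                      # elements already discarded from each side
    while True:
        if i == n1:                  # database 1 exhausted
            return q2(j + k)
        if j == n2:                  # database 2 exhausted
            return q1(i + k)
        if k == 1:
            return min(q1(i + 1), q2(j + 1))
        step = k // 2
        ni, nj = min(i + step, n1), min(j + step, n2)
        if q1(ni) <= q2(nj):         # the first ni values of db1 can't hold the answer alone
            k -= ni - i
            i = ni
        else:
            k -= nj - j
            j = nj

# hypothetical stand-ins for the two databases
db1, db2 = [1, 3, 5, 7], [2, 4, 6, 8]
queries = []

def q1(r):
    queries.append(('db1', r)); return db1[r - 1]

def q2(r):
    queries.append(('db2', r)); return db2[r - 1]

n = 4
median = kth_smallest(q1, n, q2, n, n)   # n-th smallest of the 2n values
print(median, len(queries))
```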

**OTHER IMPORTANT QUESTIONS:**

**Question : Define dynamic programming?**

Ans : Dynamic programming is an algorithm design method that can be used when the solution to a problem can be viewed as the result of a sequence of decisions. It is a technique for solving problems with overlapping subproblems.

**Question : What are the features of dynamic programming**

Ans :

- Optimal solutions to subproblems are retained so as to avoid recomputing their values.
- Decision sequences containing subsequences that are suboptimal are not considered.
- It always gives the optimal solution.

**Question : What are the drawbacks of dynamic programming?**

Ans :

- Time and space requirements are high, since storage is needed at every level.
- Optimality should be checked at all levels.

**Question : Write the general procedure of dynamic programming**

Ans : The development of dynamic programming algorithm can be broken into a sequence of 4 steps.

- Characterize the structure of an optimal solution.
- Recursively define the value of the optimal solution.
- Compute the value of an optimal solution in the bottom-up fashion.
- Construct an optimal solution from the computed information.

**Question: Write the control abstraction for greedy method.**

Ans :

**Algorithm Greedy(a, n)**

{

solution = EMPTY; // initialize the solution

for i=1 to n do

{

x = Select(a);

if Feasible(solution, x) then

solution = Union(solution, x);

}

return solution;

}
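As a concrete instance of this abstraction, activity selection by earliest finish time fills in Select, Feasible, and Union (the activity list below is illustrative; activities are (start, finish) pairs):

```python
def greedy_activity_selection(activities):
    """Instance of Greedy(a, n): Select = next activity by earliest finish
    time, Feasible = does not overlap the last chosen activity, Union =
    append to the solution."""
    solution = []
    for start, finish in sorted(activities, key=lambda act: act[1]):  # Select
        if not solution or start >= solution[-1][1]:                  # Feasible
            solution.append((start, finish))                          # Union
    return solution

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10)]
print(greedy_activity_selection(acts))  # [(1, 4), (5, 7)]
```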