Thursday, August 19, 2010

Merge Sort


Merge-sort is based on the divide-and-conquer paradigm. The Merge-sort algorithm can be described in general terms as consisting of the following three steps:

  1. Divide Step: If the given array A has zero or one element, return A; it is already sorted. Otherwise, divide A into two arrays, A1 and A2, each containing about half of the elements of A.

  2. Recursive Step: Recursively sort arrays A1 and A2.

  3. Conquer Step: Combine the elements back in A by merging the sorted arrays A1 and A2 into a sorted sequence.

The diagram below shows the divide and conquer (merging) steps stated above. We can visualize Merge-sort by means of a binary tree, where each internal node represents a recursive call and each external node represents an individual element of the given array A. Such a tree is called a Merge-sort tree. The heart of the Merge-sort algorithm is the conquer step, which merges two sorted sequences into a single sorted sequence. This simple but elegant algorithm and a straightforward C++ implementation are presented below, and some useful links are added in the "Reference" section at the end of the post.


Input: Array A[p...r], indices p, q, r (p ≤ q < r).
Output: Array A[p...r] in ascending order

MERGE-SORT(A, p, r):
  if p < r
    then q := ⌊(p + r) / 2⌋
         MERGE-SORT(A, p, q)
         MERGE-SORT(A, q+1, r)
         MERGE(A, p, q, r)

MERGE(A, p, q, r):
  n1 := q - p + 1
  n2 := r - q
  create arrays L[1...n1+1] and R[1...n2+1]
  for i := 1 to n1
    do L[i] := A[p + i - 1]
  for j := 1 to n2
    do R[j] := A[q + j]
  L[n1 + 1] := INF
  R[n2 + 1] := INF
  i := 1
  j := 1
  for k := p to r
    do if L[i] <= R[j]
         then A[k] := L[i]
              i := i + 1
         else A[k] := R[j]
              j := j + 1

Follow this link to see a JavaScript demonstration that simulates the above algorithm.


In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists) and the closed form follows from the master theorem.
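Unrolling the recurrence level by level makes the closed form easy to see (here c stands for the constant per-element merge cost):

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + cn \\
     &= 4\,T(n/4) + 2cn \\
     &= 8\,T(n/8) + 3cn \\
     &\;\;\vdots \\
     &= 2^{k}\,T(n/2^{k}) + kcn
\end{aligned}
```

Setting k = lg n (so that n/2^k = 1) gives T(n) = nT(1) + cn lg n = O(n lg n), agreeing with the master theorem.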

Simple proof:
The merge sort recursion forms a complete binary tree of depth d, and at each level of that tree a total of about n elements are processed.
So 2^d ≈ n, which implies d ≈ lg n.
Now the total number of operations in the merge sort algorithm is:
n * d ≈ n lg n = O(n lg n)

In the worst case, merge sort does a number of comparisons equal to or slightly smaller than (n ⌈lg n⌉ - 2^⌈lg n⌉ + 1), which is between (n lg n - n + 1) and (n lg n + n + O(lg n)).

In the worst case, merge sort does about 39% fewer comparisons than quicksort does in its average case. Merge sort always makes fewer comparisons than quicksort, except in extremely rare cases when they tie, where merge sort's worst case coincides with quicksort's best case. In terms of moves, merge sort's worst-case complexity is O(n log n), the same as quicksort's best case, and merge sort's best case takes about half as many iterations as its worst case. That said, depending on the machine's memory architecture, quicksort can sometimes outperform merge sort in practice.

Recursive implementations of merge sort make 2n - 1 method calls in the worst case, compared to quicksort's n, so merge sort has roughly twice as much recursive overhead as quicksort. However, iterative (non-recursive) implementations of merge sort, which avoid method-call overhead, are not difficult to code. Also, merge sort's most common implementation does not sort in place; therefore, extra memory the size of the input must be allocated for the sorted output.

Merge sort also has an often overlooked but practically important best-case property: if each merge step first checks whether the two halves are already in order, then on pre-sorted input the complexity falls to O(n). Specifically, n - 1 comparisons and zero moves are performed, which is the same as simply running through the input and checking whether it is pre-sorted.
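To make that best case concrete, here is a small sketch (my own, not part of the original post) of a merge sort whose merge step is skipped when the two halves are already in order; on sorted input every merge costs a single comparison, giving n - 1 comparisons overall:

```cpp
#include <vector>

// Sketch of a "checked" merge sort: before merging [p..q] and [q+1..r],
// test A[q] <= A[q+1]; if it holds, the whole range is already sorted
// and the merge is skipped entirely.
void merge_sort_checked(std::vector<int>& A, int p, int r) {
    if (p >= r) return;
    int q = p + (r - p) / 2;
    merge_sort_checked(A, p, q);
    merge_sort_checked(A, q + 1, r);
    if (A[q] <= A[q + 1]) return;      // halves already ordered: nothing to do
    std::vector<int> tmp;
    tmp.reserve(r - p + 1);
    int i = p, j = q + 1;
    while (i <= q && j <= r)
        tmp.push_back(A[i] <= A[j] ? A[i++] : A[j++]);
    while (i <= q) tmp.push_back(A[i++]);
    while (j <= r) tmp.push_back(A[j++]);
    for (int k = p; k <= r; k++) A[k] = tmp[k - p];
}
```

The single extra comparison per merge is negligible on random input but collapses the whole run to linear time when the input is pre-sorted.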

Although heap sort has the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(n), and it is often faster in practical implementations. In case in-place sorting is necessary, heap sort is the better choice (merge sort can be implemented as an in-place algorithm and still runs in O(n lg n) time, but the performance gain is not worth the added complexity of the program).

C++ Implementation:

Call Merge_Sort(A, start, end) to sort the closed range [start, end] of the array A.

void Merge(int A[], int p, int q, int r) {
    int i, j, k, n1 = q - p + 1, n2 = r - q;
    int L[n1], R[n2]; // variable-length arrays: a GCC extension
    for(i = 0; i < n1; i++)
        L[i] = A[p + i];
    for(j = 0; j < n2; j++)
        R[j] = A[q + j + 1];
    for(k = p, i = j = 0; k <= r; k++) {
        if(j >= n2 || (i < n1 && L[i] <= R[j])) A[k] = L[i++];
        else A[k] = R[j++];
    }
}

void Merge_Sort(int A[], int p, int r) {
    if(p < r) {
        int q = (p + r) / 2;
        Merge_Sort(A, p, q);
        Merge_Sort(A, q+1, r);
        Merge(A, p, q, r);
    }
}

For implementations in other languages, this page contains the implementation of merge sort procedure in almost all the living languages of the world:


Thanks for reading.

Wednesday, August 4, 2010

Simple Take Away Game

Combinatorial games are two-player games that always have a definite outcome (i.e. they cannot end in a draw), in which the players move alternately from one position to another and finally reach a terminal position. When a player reaches the terminal position, the winner is decided based on the type of game play.

There are two types of game play: normal play and misère play. In normal play, the one who moves the game to the terminal position is the winner; in misère play, the one who moves the game to the terminal position is the loser. If the rules of the game are the same for both players, then it is an impartial game; when the rules are different for the two players, it is a partisan game. A common example of a partisan game is chess.

Take Away Game :

A take-away game is an impartial combinatorial game in which there is a single pile of n chips and players remove a certain number of chips from the pile depending on the rules of the game. For ease of analysis, let us consider normal play, where the one who takes the game to the terminal position wins.

Consider a pile of n chips, and say the rules of the game are given as:

  • N = {0, 1, 2, 3, ..., n} is the set of positions the game can be in; 0 is the terminal position and n is the starting position.

  • S = {1, 3, 4} is the set of possible moves.

  • A player can take away x chips from the current pile, where x ∈ S.

Say we need to analyze the game given n and S. By analysis we mean: given n and S, and assuming optimal play, who wins the game? To analyze the game we define two kinds of positions, namely N positions, where the Next player to move wins, and P positions, where the Previous player to have moved wins. So a game may start in either a P or an N position, but it always ends in a P position.

So we arrive at the following rules which can be recursively used to define the P and N positions.

  1. All terminal positions are P positions in a normal play.

  2. From a P position, any move takes us to an N position.

  3. From an N position, there is at least one move that takes us to a P position, so that we can emerge the final winner.

If the game starts in a P position, the second player wins because, from a P position, the game can move only to an N position, and the second player wins by taking the game back to a P position according to Rule 3 above (this is called playing optimally).
If the game starts in an N position, the first player wins the game by always taking the game to a P position.

So let S be {1, 3, 4} for our analysis.
We use the following algorithm to classify the P and N positions of {0, 1, 2, ..., n}.

  • Step 1: Label every terminal position as a P-position.

  • Step 2: Label every position that can reach a labeled P-position in one move as an N-position.

  • Step 3: Find those positions whose only moves are to labeled N-positions; label such positions as P-positions.

  • Step 4: If no new P-positions were found in step 3, stop; otherwise return to step 2

So we begin by adding 0 to the P positions, and let the set P = {0} denote the currently recognized P positions. By Step 2, {1, 3, 4} are labeled N positions, and analyzing the next number in close proximity, we take 2.

2 has only one transition, and it is to an N position, so 2 gets added to P, while 1, 3, 4 get added to N in this iteration. Moving to the next iteration, 5 (2+3) and 6 (2+4) are N positions; analyzing 7, the possible transitions are to 6, 4, 3, which are all N positions, hence 7 gets added to P.
So at the end of the second iteration, P = {0, 2, 7} and N = {1, 3, 4, 5, 6}. Continuing further, we get the following:

P = {0, 2, 7, 9, 14, 16, 21, 23, ...} and N = {1, 3, 4, 5, 6, 8, 10, 11, 12, 13, 15, 17, 18, 19, 20, ...}

So, generalizing, all positions that leave a remainder of 0 or 2 on dividing by 7 are P positions, and the others are N positions.

So say we have 21 chips and are asked to name the winner assuming optimal game play. We can conclude from the above derivation that 21 is a P position, so the second player can win if he plays optimally. Say the first player starts from 21; the possible moves are to 20, 18, 17, which are all N positions. Say we move to one of these:

17 can move to 16, 14, 13 (14 and 16 are P positions)
18 can move to 17, 15, 14 (14 is a P position)
20 can move to 19, 17, 16 (16 is a P position)

For all the above cases, we observe that there is a move the second player can make so that the game proceeds to a P position. The cycle continues, and eventually the second player faces an N position, which will be 1, 3, or 4, from which he makes the suitable move to 0 and wins the game. If the second player at any stage makes a non-optimal move, the game goes temporarily out of his control and he may or may not win.

Another Example from SPOJ :

Let us analyze the problem NGM from SPOJ, which can be found here.

The problem states that a number n is given, and the players take turns subtracting from the number any one of its non-zero digits. The one who makes the number 0 wins the game. The two players are Nikofir and Trofim. This problem helps us understand the concepts discussed above.

Analysis :

We start by adding 0 to P. Each of 1, 2, 3, 4, 5, 6, 7, 8, 9 can move directly to 0, hence they are added to N. Next we analyze 10, which can move only to 9, an N position, so it gets added to P. So at the end of one iteration,

P = {0, 10} and N = {1, 2, 3, 4, 5, 6, 7, 8, 9}.

Next, the numbers {11, 12, 13, 14, 15, 16, 17, 18, 19} all have a possible move into {0, 10}, so they belong to N, and the next P number is 20. Proceeding, we find that P is the set of multiples of 10:

P={ x| x is a multiple of 10}

N={x | x is not a multiple of 10}

So given a number which is not a multiple of 10, it is an N position and the player who moves first always wins, i.e. Nikofir always wins.

If the given number is a multiple of 10, then it is a P position and Trofim wins.

Stress on Optimal Play :

Why do we need to analyze optimal play? The point of all this analysis is to make the computer play the best move from the current position so that it poses a challenge to the human player; and if the player gets lucky, he can take advantage of this analysis to make a winning move.

Binary Tree -- N Nodes & H Height

The problem:

You are told that you have N unique numbers. How many different binary trees are possible with those N numbers (i.e. nodes) that have exactly H height?


If we look closely, it is not hard to figure out that the problem actually asks how many different binary trees can be built with N nodes such that each has exactly H height, allowing rotations. Here, two trees are different when their structures are different. For example:

      @                @
     / \              / \
    @   @    and     @   @
   / \                  / \
  @   @                @   @


I don't know whether this problem has a closed-form mathematical solution or not; I have searched for it and found nothing... But if the values of N and H permit, this problem can be solved with a dynamic programming approach.

When we want to build a tree with N nodes and H height, we can select a node as the root; then we can put 1 to N-2 nodes in the left subtree of height H-1 or less and, correspondingly, N-2 down to 1 nodes in the right subtree of height H-1 or less, satisfying the condition that at least one of the subtrees has height exactly H-1. So, by using proper base conditions, we can find the number of different trees of exactly H height from this recursive formulation, and we can express the relation as follows:

if D == 1 then exactTrees( N, D ) = ( N == 1 )
if N <= 1 then exactTrees( N, D ) = 0
for all {L, R} such that L, R > 0 and L + R = N - 1:
    exactTrees( N, D ) += exactTrees( L, D - 1 ) * exactTrees( R, D - 1 )
    exactTrees( N, D ) += smallerTrees( L, D - 2 ) * exactTrees( R, D - 1 )
    exactTrees( N, D ) += exactTrees( L, D - 1 ) * smallerTrees( R, D - 2 )

This is pretty obvious: if we want a tree of height H rooted at some node, one or both of its subtrees must have height H - 1; and if we have N nodes at any step, we take 1 node as the root, so if we take i nodes for the left subtree, we have N - 1 - i nodes left for the right subtree.
When we count trees of an exact height, a subtree may also have a smaller height, so we need to calculate smallerTrees( N, D ), the number of trees with N nodes and height at most D, separately; fortunately, it is a bit easier:

if N < 0 or D <= 0 then smallerTrees( N, D ) = 0
if D == 1 then smallerTrees( N, D ) = ( N == 1 )
if N <= 1 then smallerTrees( N, D ) = 1
for all {L, R} such that L, R > 0 and L + R = N - 1:
    smallerTrees( N, D ) += smallerTrees( L, D - 1 ) * smallerTrees( R, D - 1 )

Sample program:

First comes the recursive algorithm (with DP):

// states: nodes, height
// note: both tables must be initialized to -1 (e.g. with memset) before the first call
int exact[202][101], smaller[202][101];

int smallTrees(int n, int k) {
    if(n<0 || k<=0) return 0;
    if(k==1) return n==1;
    if(n<=1) return 1;

    int &ret = smaller[n][k], l, r;
    if(ret!=-1) return ret;

    ret = 0;
    for(int i=1; i<n-1; i++) {
        l = i, r = n-1-i;
        ret += smallTrees(l, k-1)*smallTrees(r, k-1);
    }
    return ret;
}

int exactTrees(int n, int k) {
    if(k==1) return n==1;
    if(n<=1) return 0;

    int &ret = exact[n][k], l, r;
    if(ret!=-1) return ret;

    ret = 0;
    for(int i=1; i<n-1; i++) {
        l = i, r = n-1-i;
        ret += exactTrees(l, k-1)*exactTrees(r, k-1);
        ret += exactTrees(l, k-1)*smallTrees(r, k-2);
        ret += smallTrees(l, k-2)*exactTrees(r, k-1);
    }
    return ret;
}

Here is the iterative version (a little more complex):

// states: height, nodes
// note: global arrays start zero-initialized
int exactTrees[101][202], smallTrees[101][202];

void solve(int n, int k) {
    int d, i, j;
    exactTrees[1][1] = 1;
    for(d=2; d<=k; d++) { // depth
        for(i=1; i<=n; i+=2) { // number of nodes (only odd counts occur)
            for(j=1; j<=i-1; j+=2) { // nodes in the left subtree
                exactTrees[d][i] += exactTrees[d-1][j]*exactTrees[d-1][i-1-j];
                exactTrees[d][i] += exactTrees[d-1][j]*smallTrees[d-2][i-1-j];
                exactTrees[d][i] += smallTrees[d-2][j]*exactTrees[d-1][i-1-j];
            }
        }
        for(i=1; i<=n; i+=2) // update the smaller-tree table for the next depth
            smallTrees[d-1][i] += smallTrees[d-2][i]+exactTrees[d-1][i];
    }
}

[P.S. The result of this problem grows very rapidly with increasing N and H, so it may easily run out of range; normally, problems like this ask for the result with respect to some modulus.]

Similar problem:

USACO Section: 2.3, Problem: Cow Pedigrees, Code: nocows.
[P.S. The link won't work for USACO; you need to reach section 2.3 if you want to see the problem yourself.]
If you find any more problems regarding this, please post them as a comment below.