Wednesday, December 29, 2010

CSE 202 Pseudo Codes 1



Iterative Tree Traversal


There are three types of tree traversals, namely, inorder, preorder and postorder. For some strange reason, students @CSE-DU are forced to write iterative versions of these, so I am just noting them down here again.

A Node is composed of three fields: Info (data for this node), Left (pointer to the left child), and Right (pointer to the right child). The pseudo-codes are shown below.

Preorder



Preorder ( Info, Left, Right, root )
01. top := 1, Stack[top] := NULL, ptr := root
02. repeat while ptr != NULL
03.     apply process() to Info[ptr]
04.     if Right[ptr] != NULL then
05.         top := top + 1, Stack[top] := Right[ptr]
06.     if Left[ptr] != NULL then
07.         ptr := Left[ptr]
08.     else
09.         ptr := Stack[top], top := top - 1
10. exit

Inorder



Inorder ( Info, Left, Right, root )
01. top := 1, Stack[top] := NULL, ptr := root
02. repeat while ptr != NULL
03.     top := top + 1, Stack[top] := ptr, ptr := Left[ptr]
04. ptr := Stack[top], top := top - 1
05. repeat while ptr != NULL
06.     apply process() to Info[ptr]
07.     if Right[ptr] != NULL then
08.         ptr := Right[ptr]
09.         go to step 02
10.     ptr := Stack[top], top := top - 1
11. exit

Postorder



Postorder ( Info, Left, Right, root )
01. top := 1, Stack[top] := NULL, ptr := root
02. repeat while ptr != NULL
03.     top := top + 1, Stack[top] := ptr, ptr := Left[ptr]
04. repeat while Stack[top] != NULL and ptr = Right[Stack[top]]
05.     ptr := Stack[top], top := top - 1
06.     apply process() to Info[ptr]
07. if Stack[top] != NULL then
08.     ptr := Right[Stack[top]]
09.     go to step 02
10. exit
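For reference, the same algorithms translate almost line-for-line into C++ with std::stack. The Node layout and function names below are my own choices, not part of any course hand-out, and process() is replaced by collecting values into a vector so the output can be inspected:

```cpp
#include <cstddef>
#include <stack>
#include <vector>

// Node mirrors the Info/Left/Right fields of the pseudo-codes.
struct Node { int info; Node *left, *right; };

std::vector<int> preorder(Node *root) {
    std::vector<int> out;
    std::stack<Node*> st;
    Node *ptr = root;
    while (ptr != NULL) {
        out.push_back(ptr->info);                 // visit before children
        if (ptr->right != NULL) st.push(ptr->right); // save right child for later
        if (ptr->left != NULL) ptr = ptr->left;
        else if (!st.empty()) { ptr = st.top(); st.pop(); }
        else ptr = NULL;
    }
    return out;
}

std::vector<int> inorder(Node *root) {
    std::vector<int> out;
    std::stack<Node*> st;
    Node *ptr = root;
    while (ptr != NULL || !st.empty()) {
        while (ptr != NULL) { st.push(ptr); ptr = ptr->left; } // push left path
        ptr = st.top(); st.pop();
        out.push_back(ptr->info);                 // visit between subtrees
        ptr = ptr->right;                         // then the right subtree
    }
    return out;
}

std::vector<int> postorder(Node *root) {
    std::vector<int> out;
    std::stack<Node*> st;
    Node *ptr = root;
    do {
        while (ptr != NULL) { st.push(ptr); ptr = ptr->left; } // push left path
        while (!st.empty() && st.top()->right == ptr) {        // right done or absent
            ptr = st.top(); st.pop();
            out.push_back(ptr->info);             // visit after both subtrees
        }
        if (!st.empty()) ptr = st.top()->right;   // descend the right subtree
        else break;
    } while (true);
    return out;
}
```

The postorder version uses the same trick as the pseudo-code: a popped node's address in ptr marks its right subtree as finished.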

Really pointless though!

Friday, December 24, 2010

Facebook Like on Blogger


Add facebook "like" on every post of your blogger blog


Wherever you go today, you see Facebook! You might even go to the market to buy some fish and see "Like"s hanging around here and there... People today are far too eager to be "Like"d on Facebook or "Follow"ed on Twitter. However, this is really just an experiment with Blogger widget templates; who cares whether anyone likes it on Facebook or not. So, let's start.
If you look around on Google, you will see many tutorials for this... however, here is the summary, with a bit more detail.

Button Code: This is pretty much the button code:

<iframe
allowTransparency='true' frameborder='0' scrolling='no'
expr:src='&quot;http://www.facebook.com/plugins/like.php?href=&quot; + data:post.url + &quot;&amp;layout=standard&amp;show_faces=false&amp;width=500&amp;height=35&amp;action=like&amp;font=arial&amp;colorscheme=light&quot;'
style='border:none; overflow:hidden; width:500px; height:35px;'
/>

Customizing the button: There is not much to customize; the options are shown below.
If you look closely at the expr:src attribute, you will see the parameters you can customize. These are:

Parameter     Default    Other values
layout        standard   button_count, box_count
show_faces    false      true
action        like       recommend
font          arial      lucida+grande, segoe+ui, tahoma, trebuchet+ms, verdana
colorscheme   light      evil, dark

Placing the button: Go to "Design" from your dashboard and click "Edit Html"; you will then see the template code of your blog. Check "Expand Widget Templates". Before proceeding any further, first take a backup of your template so that if anything goes wrong, you can revert it.
In your template, find the line:
<div class='post-header-line-1'/>

Then you can paste the above code (or one modified according to the table above), and the "Like" button will appear just below the post header of every post.
Now, if you would like to add it in or after the post body instead, find the next section, which looks like:
<div class='post-body entry-content'>

Or, if you want to put it in the footer of the post, scroll down to the section which looks like:
<div class='post-footer'>

Preview: So, by exploring the template, you can put it anywhere... but make sure you keep a backup of every working template.
The preview below shows the button when the above code is used inside
<div class='post-body entry-content'>
right after the
<data:post.body/>
Have fun!

Monday, December 20, 2010

About recent design changes



Hello friends, I have noticed that adding loads of external widgets makes Blogger very slow, so I have decided to remove all of them, including SyntaxHighlighter 3.0, which had some other problems as well. I switched back to SyntaxHighlighter 2.1, which is hosted on my free server. Now the blog is significantly faster and looks prettier than it was before.

The blog stats widget I have added now comes from Google's own Blogger, and it is more accurate than any third-party free hit counter.

However, I was eager to know whether it is possible to add some right-panel items as separate pages; I am not sure if it can be done. A few more design changes are also on the way, as I am adding some more CSS to the template. I will be glad to get any feedback on that.

Thanks for understanding.

Friday, December 10, 2010

getch() & getche() in gcc/g++


Since we sometimes use the header "conio.h" on Windows (although deprecated), we occasionally feel the need for it on Linux as well. Unfortunately, gcc/g++ on Linux has nothing built in like "conio.h", or more precisely, the getch() and getche() functions. getch() reads directly from the console, without any buffering, and so does getche(). The difference between them is that getch() doesn't echo the pressed key on the screen, while getche() does.

Well, it's not very hard to write your own getch() and getche() routines on Linux if you know some system programming and the console attributes used in Linux. All you need to do is stop stdin buffering before reading the character, and then restore the old settings again. Here goes the simple "conio.h" for gcc/g++:

/*
AUTHOR: zobayer
INSTRUCTION:
just make this file a header like "conio.h"
and use the getch() and getche() functions.
*/
#include <termios.h>
#include <unistd.h>
#include <stdio.h>

/* reads from keypress, doesn't echo */
int getch(void) {
    struct termios oldattr, newattr;
    int ch;
    tcgetattr( STDIN_FILENO, &oldattr );
    newattr = oldattr;
    newattr.c_lflag &= ~( ICANON | ECHO );
    tcsetattr( STDIN_FILENO, TCSANOW, &newattr );
    ch = getchar();
    tcsetattr( STDIN_FILENO, TCSANOW, &oldattr );
    return ch;
}

/* reads from keypress, and echoes */
int getche(void) {
    struct termios oldattr, newattr;
    int ch;
    tcgetattr( STDIN_FILENO, &oldattr );
    newattr = oldattr;
    newattr.c_lflag &= ~( ICANON );
    tcsetattr( STDIN_FILENO, TCSANOW, &newattr );
    ch = getchar();
    tcsetattr( STDIN_FILENO, TCSANOW, &oldattr );
    return ch;
}

Hope this helps. Please let me know if any error is found.

[Update:] However, gcc does have a library covering other similar "conio.h" functions, known as the NCURSES library. Thanks to Muhammad Ridowan for letting me know this.


SPOJ Problem Classifier



It's a bit hard to find a good classifier for SPOJ compared to, for example, uvatoolkit for the UVa online judge. So far, this site is the best I have seen. Give it a try, although it only has a small number of problems classified.

Thursday, December 9, 2010

SPOJ - HORRIBLE



Problem 8002. Horrible Queries


Another nice problem by Iqram Mahmud.

This is a data-structure problem and can easily be solved with a segment tree / range tree. At each node of the tree, we keep sum, the sum of all elements that fall in the node's range (i.e. under the node's children), and add, the value that must be added to every element when we pass through this node.

So, for the update query, we find the exact interval, add the given value to the current sum (Node->sum += interval * value) and to the lazy field (Node->add += value). Then we recursively update the sums of all its ancestors (Node->sum = Left->sum + Right->sum + interval * Node->add). The second query is similar: we keep accumulating the add values of each node we pass (call it carry), and when we reach the exact range, we return the current sum + interval * carry.

Be careful: indexes are 1-based, and the calculation can easily overflow a 32-bit integer.
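The scheme can be sketched in C++ as below. The field names sum and add follow the post; the class name, array sizes and 1-based indexing are my own illustrative choices, and everything is kept in 64-bit integers because of the overflow issue mentioned above:

```cpp
#include <vector>

// Segment tree with range add / range sum, using the sum and add fields
// described in the post. update() applies a value lazily to an exact
// interval; query() carries the pending add values down the path.
struct LazySegTree {
    int n;
    std::vector<long long> sum, add;
    LazySegTree(int n) : n(n), sum(4 * n, 0), add(4 * n, 0) {}

    void update(int node, int lo, int hi, int l, int r, long long v) {
        if (r < lo || hi < l) return;
        if (l <= lo && hi <= r) {               // exact interval: apply lazily
            sum[node] += (long long)(hi - lo + 1) * v;
            add[node] += v;
            return;
        }
        int mid = (lo + hi) / 2;
        update(2 * node, lo, mid, l, r, v);
        update(2 * node + 1, mid + 1, hi, l, r, v);
        // parent sum = children sums + pending add spread over this interval
        sum[node] = sum[2 * node] + sum[2 * node + 1]
                  + (long long)(hi - lo + 1) * add[node];
    }

    long long query(int node, int lo, int hi, int l, int r, long long carry) {
        if (r < lo || hi < l) return 0;
        if (l <= lo && hi <= r)                 // exact interval: add carried lazies
            return sum[node] + (long long)(hi - lo + 1) * carry;
        int mid = (lo + hi) / 2;
        return query(2 * node, lo, mid, l, r, carry + add[node])
             + query(2 * node + 1, mid + 1, hi, l, r, carry + add[node]);
    }

    // 1-based public interface
    void update(int l, int r, long long v) { update(1, 1, n, l, r, v); }
    long long query(int l, int r) { return query(1, 1, n, l, r, 0); }
};
```

With an array of size 8 starting at all zeros, adding 5 over [1,4] makes the sum over [2,3] equal 10, and the full-range sum 20.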

Saturday, December 4, 2010

Setup connector/j for JDBC



Setting up MySQL driver connector/j for JDBC in Windows


This is basically not a hard task. Make sure you already have the following things installed and configured on your system (this demonstration targets the latest versions of the components):

If you don't have these components installed yet, you might take some online help on how to install and configure them; that is not hard either.

Now, to install the Java-to-MySQL connector, which is known as Connector/J, go to this page and download the Windows ZIP package. Unzip the archive anywhere on your PC; the location won't matter at all. Now, follow these steps:
  1. Copy the file "mysql-connector-java-5.1.13-bin.jar" to your java installation directory and place it in "..\jdk1.6.0_21\jre\lib\ext".
  2. Now, you have to add this runtime library to your system's CLASSPATH environment variable. Go to the properties of My Computer and select the Advanced tab; there you'll see Environment Variables, listing all your system variables. Find CLASSPATH and append the location of Connector/J, i.e. where you just pasted it. On my PC, it's "C:\Program Files\Java\jdk1.6.0_21\jre\lib\ext\mysql-connector-java-5.1.13-bin.jar". Don't forget to separate this entry from the previous one with a semicolon. Now click OK and close the dialogue boxes; you are done! But if you don't have a CLASSPATH variable, you need to create it yourself.

That's pretty much all of the work, now we are going to test if it works.

Let's assume that you already have a MySQL database named "mysqltest" and you are the "root" user with password "adminadmin", using the default host "localhost" and MySQL's default port 3306. The following code should then work:

import java.sql.*;

public class Main {
    public static void main(String[] args) {
        Connection conn = null;
        try {
            Class.forName("com.mysql.jdbc.Driver");
            System.out.println("MySQL JDBC driver loaded ok.");

            conn = DriverManager.getConnection("jdbc:mysql://localhost/mysqltest?user=root&password=adminadmin");
            System.out.println("Connected with default port.");

            DatabaseMetaData meta = conn.getMetaData();
            System.out.println("Server name: " + meta.getDatabaseProductName());
            System.out.println("Server version: " + meta.getDatabaseProductVersion());
            System.out.println("Driver name: " + meta.getDriverName());
            System.out.println("Driver version: " + meta.getDriverVersion());
            System.out.println("JDBC major version: " + meta.getJDBCMajorVersion());
            System.out.println("JDBC minor version: " + meta.getJDBCMinorVersion());

            Statement query = conn.createStatement();
            int count = query.executeUpdate(
                "CREATE TABLE test (" +
                "id INT PRIMARY KEY NOT NULL AUTO_INCREMENT," +
                "username VARCHAR(20) NOT NULL," +
                "password CHAR(40) NOT NULL )"
            );
            System.out.println("Table created.");

            conn.close();
        } catch(Exception e) {
            System.err.println("Exception: " + e.getMessage());
        }
    }
}

As you can see, you use the DriverManager.getConnection() method to connect via Connector/J, which you previously loaded with Class.forName("com.mysql.jdbc.Driver"). Make appropriate changes to test various things. Here, you will find some more methods of connecting and testing.

Well, if this doesn't work for you, then something is wrong that is beyond the scope of this post; otherwise, "congratulations!".

Tuesday, November 30, 2010

CRC - 32 (IEEE)



In our Data Communication Lab, we were supposed to write a CRC-32 implementation in C/C++. CRC stands for cyclic redundancy check, an error-detection algorithm based on modulo-2 division. We used the 32-bit IEEE polynomial 0x04C11DB7 to generate the CRC-32 of a given string, and then recheck the message against the generated CRC to see whether an error occurred.

However, we looked for it on the net but could not find much information, probably because many languages (like PHP) already include built-in library functions to generate CRC checksums. So, I guess, people don't do this stuff in C/C++ anymore.

The first tricky thing about generating CRC-32 is that the polynomial is 33 bits long, which is an unnatural size for computers, as it would need a longer data type. However, after a little observation, it becomes clear that the 33rd bit (the msb) is always 1, and thus it is not stored in the polynomial; rather, it is handled by the algorithm itself. Following the algorithm, the msb is always 0 after we perform the XOR operation, and it is shifted out of the remainder. For this reason, we neither waste a bit to store that value nor add complexity to the code.

Here is my implementation. It is not very efficient, because most of the implementations I found later on the internet use look-up tables to speed up the generation. However, this one is quite simple and follows the algorithm directly:

/*
zobayer hasan
03/03/2011
*/

#include <cstdio>
#include <cstring>
using namespace std;

typedef unsigned int int32;

// key is the crc-32 ieee 802.3 standard polynomial
const int32 poly = 0x04C11DB7;
const int MAXLEN = 1024;

// reverses the lowest 'bits' bits of 'data'
int32 reflect(int32 data, int bits) {
    int32 ret = 0;
    for(int i = bits-1; i >= 0; i--) {
        if(data & 1) ret |= (1 << i);
        data >>= 1;
    }
    return ret;
}

int32 crc32(const char* msg, int len) {
    int32 crc = 0xffffffff;
    int i, j;
    for(i = 0; i < len; i++) {
        // feed the reflected input byte into the top of the register
        crc ^= reflect((unsigned char)msg[i], 8) << 24;
        for(j = 8; j; j--) {
            crc = (crc << 1) ^ ((crc & 0x80000000) ? poly : 0x0);
        }
    }
    return reflect(crc, 32) ^ 0xffffffff;
}

int main() {
    int32 crc;
    int len;
    char msg[MAXLEN];

    // gets() is unsafe; read a bounded line and strip the newline instead
    if(fgets(msg, MAXLEN, stdin) == NULL) return 0;
    len = strlen(msg);
    if(len && msg[len-1] == '\n') msg[--len] = 0;

    // generate crc32 for msg
    crc = crc32(msg, len);
    printf("CRC: 0x%08X\n", crc);

    return 0;
}


Here, reflect() just reverses the lowest 'bits' bits (from the right) of the 'data' passed to it.

Now, how do we check consistency? It is simple: before sending the message over the wire, we generate the CRC and transmit it along with the original message. At the receiving end, the CRC is appended to the message and the CRC is computed again; if no error has occurred, the result is a known constant. For the plain algorithm without the initial and final inversions, that constant is simply 0; with the 0xFFFFFFFF pre- and post-inversion used above, the recomputed value is the fixed, well-known CRC-32 residue instead of 0.
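A quick way to gain confidence in the routine is the standard CRC-32 test vector: every correct implementation maps the ASCII string "123456789" to 0xCBF43926 (and the empty string to 0). Below is a self-contained copy of the routines above, with the console I/O omitted, for exactly that check:

```cpp
// Self-contained copy of reflect()/crc32() from the post, kept byte-for-byte
// equivalent so it can be checked against the standard test vector.
typedef unsigned int int32;
const int32 POLY = 0x04C11DB7;   // crc-32 ieee 802.3 polynomial

int32 reflect(int32 data, int bits) {
    int32 ret = 0;
    for(int i = bits - 1; i >= 0; i--) {
        if(data & 1) ret |= (1u << i);
        data >>= 1;
    }
    return ret;
}

int32 crc32(const char *msg, int len) {
    int32 crc = 0xffffffff;
    for(int i = 0; i < len; i++) {
        crc ^= reflect((unsigned char)msg[i], 8) << 24;
        for(int j = 8; j; j--)
            crc = (crc << 1) ^ ((crc & 0x80000000) ? POLY : 0);
    }
    return reflect(crc, 32) ^ 0xffffffff;
}
```

Reflecting each input byte and the final register while shifting msb-first with 0x04C11DB7 is equivalent to the usual lsb-first table-driven CRC-32, which is why the standard check value applies.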

Ahh, quite short code :)

Saturday, November 20, 2010

Matrix Exponentiation



Introduction:


Don't be confused by the title; this article has nothing to do with "how to calculate an exponent of a given matrix". Rather, it discusses how to use this technique to solve a specific class of problems.


Sometimes we face problems where we can easily derive a recursive relation (mostly suitable for a dynamic programming approach), but the given constraints make us want to cry; that is where the matrix exponentiation idea comes in. The situation can be made clearer with the following example:


Let a problem say: find f(n), the n'th Fibonacci number. When n is sufficiently small, we can solve the problem with a simple recursion, f(n) = f(n-1) + f(n-2), or we can follow a dynamic programming approach to avoid calculating the same function over and over again. But what will you do if the problem says: given 0 < n < 1000000000, find f(n) % 999983? No doubt plain dynamic programming will fail!


We'll develop the idea of how and why these types of problems can be solved by matrix exponentiation later; first, let's see how matrix exponentiation can help us represent recurrence relations.


Prerequisite:


Before continuing, you need to know:

  • Given two matrices, how to find their product; or, given the product matrix of two matrices and one of them, how to find the other matrix.
  • Given a d x d matrix, how to find its n'th power in O(d^3 log(n)).


Patterns:


What we need first is a recursive relation, and what we want to do is find a matrix M which can lead us to the desired state from a set of already known states. Say we know k states of a given recurrence relation and want to find the (k+1)'th state. Let M be a k x k matrix, and build a matrix A:[k x 1] from the known states of the recurrence relation. Now we want a matrix B:[k x 1] which will represent the set of next states, i.e. M x A = B, as shown below:


    | f(n)   |   | f(n+1)   |
    | f(n-1) |   | f(n)     |
M x | f(n-2) | = | f(n-1)   |
    | ...... |   | ......   |
    | f(n-k) |   | f(n-k+1) |

So, if we can design M accordingly, the job is done! The matrix will then represent the recurrence relation.


  • Type 1:

    Let's start with the simplest one: f(n) = f(n-1) + f(n-2).
    So, f(n+1) = f(n) + f(n-1).
    Say we know f(n) and f(n-1), and we want to get f(n+1).
    From the above situation, matrix A and B can be formed as shown below:





    Matrix A        Matrix B

    | f(n)   |      | f(n+1) |
    | f(n-1) |      | f(n)   |

    [Note: matrix A will always be designed in such a way that every state on which f(n+1) depends is present]
    So now we need to design a 2 x 2 matrix M such that it satisfies M x A = B as stated above.
    The first element of B is f(n+1), which is actually f(n) + f(n-1). To get this from matrix A, we need 1 of f(n) and 1 of f(n-1). So the 1st row of M will be [1 1].

    | 1 1 | x | f(n)   | = | f(n+1) |
    | --- |   | f(n-1) |   | ------ |

    [Note: --- means we are not concerned about this value]
    Similarly, the 2nd item of B is f(n), which we can get by simply taking 1 of f(n) from A. So the 2nd row of M is [1 0].

    | --- | x | f(n)   | = | ------ |
    | 1 0 |   | f(n-1) |   | f(n)   |

    [I hope you know how matrix multiplication is done and how the values are assigned!]
    Thus we get the desired 2 x 2 matrix M:

    | 1 1 | x | f(n)   | = | f(n+1) |
    | 1 0 |   | f(n-1) |   | f(n)   |


    If you are confused about how the above matrix is calculated, you might try doing it this way:


    We know that multiplying an n x n matrix M with an n x 1 matrix A generates an n x 1 matrix B, i.e. M x A = B. The k'th element of the product B is the product of the k'th row of M with the n x 1 matrix A.
    In the above example, the 1st element of B is f(n+1) = f(n) + f(n-1), so it is the product of the 1st row of matrix M and matrix A. Let the first row of M be [x y]. Then, according to matrix multiplication,


    x * f(n) + y * f(n-1) = f(n+1) = f(n) + f(n-1)
    ⇒ x = 1, y = 1

    Thus we can find the first row of matrix M is [1 1]. Similarly, let, the 2nd row of matrix M is [x y], and according to matrix multiplication:

    x * f(n) + y * f(n-1) = f(n)
    ⇒ x = 1, y = 0

    Thus we get the second row of M is [1 0].

  • Type 2:

    Now we make it a bit more complex: find f(n) = a * f(n-1) + b * f(n-2), where a and b are constants.
    This tells us f(n+1) = a * f(n) + b * f(n-1).
    By now it should be clear that the dimension of the matrices equals the number of dependencies, i.e. in this particular example, again 2. So, for A and B, we can build two matrices of size 2 x 1:





    Matrix A        Matrix B

    | f(n)   |      | f(n+1) |
    | f(n-1) |      | f(n)   |

    Now, for f(n+1) = a * f(n) + b * f(n-1), we need [a b] in the first row of the objective matrix M instead of [1 1] from the previous example, because now we need a of f(n) and b of f(n-1).

    | a b | x | f(n)   | = | f(n+1) |
    | --- |   | f(n-1) |   | ------ |

    And, for the 2nd item in B, i.e. f(n), we already have that in matrix A, so we just take it, which means the 2nd row of M will be [1 0] as before.

    | --- | x | f(n)   | = | ------ |
    | 1 0 |   | f(n-1) |   | f(n)   |

    So, this time we get:

    | a b | x | f(n)   | = | f(n+1) |
    | 1 0 |   | f(n-1) |   | f(n)   |

    Pretty simple as the previous one...


  • Type 3:

    We've already grown much older; now let's face a slightly more complex relation: find f(n) = a * f(n-1) + c * f(n-3).
    Oops! A few minutes ago, all we saw were contiguous states, but here the state f(n-2) is missing. Now what to do?


    Actually, this is not a problem anymore; we can rewrite the relation as f(n) = a * f(n-1) + 0 * f(n-2) + c * f(n-3), deducing f(n+1) = a * f(n) + 0 * f(n-1) + c * f(n-2). Now we see this is actually the form described in Type 2. So here the objective matrix M will be 3 x 3, and its elements are:


    | a 0 c |   | f(n)   |   | f(n+1) |
    | 1 0 0 | x | f(n-1) | = | f(n)   |
    | 0 1 0 |   | f(n-2) |   | f(n-1) |

    These are calculated in the same way as Type 2. [Note, if you find it difficult, try on pen and paper!]


  • Type 4:

    Life is getting complex as hell, and Mr. Problem now asks you to find f(n) = f(n-1) + f(n-2) + c, where c is any constant.


    Now, this is a new one: in everything we have seen so far, after the multiplication, each state in A transforms into its next state in B.
    f(n) = f(n-1) + f(n-2) + c
    f(n+1) = f(n) + f(n-1) + c
    f(n+2) = f(n+1) + f(n) + c
    ................................. and so on
    So we can't handle it in the previous fashion. But how about we add c as a state?


        | f(n)   |   | f(n+1) |
    M x | f(n-1) | = | f(n)   |
        | c      |   | c      |

    Now it is not much harder to design M in the previous fashion. Here it is done, but don't forget to verify it yourself:

    | 1 1 1 |   | f(n)   |   | f(n+1) |
    | 1 0 0 | x | f(n-1) | = | f(n)   |
    | 0 0 1 |   | c      |   | c      |


  • Type 5:

    Let's put it all together: find the matrix suitable for f(n) = a * f(n-1) + c * f(n-3) + d * f(n-4) + e.
    I leave it as an exercise for the reader. The final matrix is given here; check whether it matches your solution. Also find matrices A and B.


    | a 0 c d 1 |
    | 1 0 0 0 0 |
    | 0 1 0 0 0 |
    | 0 0 1 0 0 |
    | 0 0 0 0 1 |

    [Note: you may take a look back to Type 3 and 4]


  • Type 6:



    Sometimes, a recurrence is given like this:


    f(n) = f(n-1) if n is odd, f(n-2) otherwise
    In short:
    f(n) = (n&1) * f(n-1) + (!(n&1)) * f(n-2)

    Here, we can simply split the function on an odd/even basis, keep 2 different matrices for them, and calculate separately. Many different patterns can appear in practice, but these are the basic ones.


  • Type 7:

    Sometimes we may need to maintain more than one recurrence, where they are interrelated. For example, let a recurrence relation be:
    g(n) = 2g(n-1) + 2g(n-2) + f(n), where f(n) = 2f(n-1) + 2f(n-2). Here, recurrence g(n) depends on f(n); they can be calculated in the same matrix, but of increased dimensions. Let's design the matrices A and B, then we'll try to find matrix M.




    Matrix A        Matrix B

    | g(n)   |      | g(n+1) |
    | g(n-1) |      | g(n)   |
    | f(n+1) |      | f(n+2) |
    | f(n)   |      | f(n+1) |

    Here, g(n+1) = 2g(n) + 2g(n-1) + f(n+1) and f(n+2) = 2f(n+1) + 2f(n).
    Now, using the above process, we can generate the objective matrix M as follows:

    | 2 2 1 0 |
    | 1 0 0 0 |
    | 0 0 2 2 |
    | 0 0 1 0 |

So, these are the basic categories of recurrence relations that are usually solved with this simple technique.

Analysis:


Now that we have seen how matrix multiplication can be used to maintain recurrence relations, we are back to our first question: how does this help us solve recurrences over a huge range?


Recall the recurrence f(n) = f(n-1) + f(n-2).
We already know that:


M x | f(n)   | = | f(n+1) |      ............(1)
    | f(n-1) |   | f(n)   |

How about we multiply M multiple times? Like this:

M x M x | f(n)   | = ?
        | f(n-1) |

Replacing from (1):

M x M x | f(n)   | = M x | f(n+1) | = | f(n+2) |
        | f(n-1) |       | f(n)   |   | f(n+1) |

So, we finally get:

M^2 x | f(n)   | = | f(n+2) |
      | f(n-1) |   | f(n+1) |

Similarly we can show:




M^3 x | f(n)   | = | f(n+3) |
      | f(n-1) |   | f(n+2) |

M^4 x | f(n)   | = | f(n+4) |
      | f(n-1) |   | f(n+3) |

...............................

M^k x | f(n)   | = | f(n+k)   |
      | f(n-1) |   | f(n+k-1) |


Thus we can get any state f(n) by simply raising the objective matrix M to the power n-1, in O(d^3 log(n)) where d is the dimension of the square matrix M. So, even if n = 1000000000, this can still be calculated pretty easily, as long as d^3 is sufficiently small.
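Returning to the Fibonacci example from the beginning, the whole technique fits in a few lines of C++. This is a sketch under the post's assumptions (f(1) = f(2) = 1, answers taken mod 999983); the type and function names are my own:

```cpp
#include <vector>

// Fast exponentiation of M = | 1 1 | to compute f(n) = f(n-1) + f(n-2)
//                            | 1 0 | modulo 999983, in O(d^3 log n), d = 2.
typedef std::vector< std::vector<long long> > Mat;
const long long MOD = 999983;

Mat mul(const Mat &A, const Mat &B) {
    int d = A.size();
    Mat C(d, std::vector<long long>(d, 0));
    for(int i = 0; i < d; i++)
        for(int k = 0; k < d; k++)
            for(int j = 0; j < d; j++)
                C[i][j] = (C[i][j] + A[i][k] * B[k][j]) % MOD;
    return C;
}

Mat matpow(Mat M, long long p) {
    int d = M.size();
    Mat R(d, std::vector<long long>(d, 0));
    for(int i = 0; i < d; i++) R[i][i] = 1;   // identity matrix
    while(p > 0) {                            // square-and-multiply
        if(p & 1) R = mul(R, M);
        M = mul(M, M);
        p >>= 1;
    }
    return R;
}

long long fib(long long n) {                  // f(1) = f(2) = 1
    if(n <= 2) return 1;
    Mat M(2, std::vector<long long>(2, 0));
    M[0][0] = 1; M[0][1] = 1; M[1][0] = 1;    // the Type 1 matrix
    Mat P = matpow(M, n - 2);                 // M^(n-2) x [f(2) f(1)]^T
    return (P[0][0] + P[0][1]) % MOD;         // top entry of the product
}
```

The same skeleton handles every type above; only the contents of M and the base-state column change.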


Related problems:


UVa 10229 : Modular Fibonacci
UVa 10870 : Recurrences
UVa 11651 : Krypton Number System
UVa 10754 : Fantastic Sequence
UVa 11551 : Experienced Endeavour

Regards, Zobayer Hasan.



Merge Sort Improvement?



C++ STL uses a trick to improve the performance of merge sort: when the number of elements in a call goes down to around 50 or so, it sorts those elements with insertion sort instead of recursing further. However, this doesn't seem to be very helpful when we write the merge sort procedure manually.

This probably works for STL because, being object-oriented in nature, STL methods carry sizable overhead on almost every call, so allocating more recursive calls can turn out to be more costly than a small O(k^2) sub-routine. As we saw in the previous posts, the recursive calls of the merge sort procedure create a binary-tree-like structure, and the deeper it goes, the more dramatically the number of nodes increases. Moreover, pushing new function frames onto the call stack always has an overhead. So they just change the algorithm a bit so that the bottom few levels are cut from the tree, removing a huge number of nodes, i.e. a huge amount of overhead, which in turn improves performance.


So they change the algorithm of MERGE-SORT() a bit:

MERGE-SORT(A, p, r):
    if r - p < k                    ; k is the tolerance limit
        INSERTION-SORT(A, p, r)
        return
    if p < r
        then q := (r + p) / 2
             MERGE-SORT(A, p, q)
             MERGE-SORT(A, q+1, r)
             MERGE(A, p, q, r)

Now, what happens when we use it manually? I tried it over a huge range with k = 15, meaning that below 15 elements insertion sort is used, and found an interesting result: insertion sort is better up to a certain range, but beyond that it keeps getting slower. So the value k is a trade-off between the overhead of recursive calls and the running time of insertion sort. This graph shows the result of my experiment; it will certainly vary if you try the same.


Here is a simple java tester I used for this comparison. Have a look:

import java.util.*;
import java.io.*;

public class Main {

    static final boolean useInsertion = false;
    static final int limit = 15;

    public static void main(String[] args) throws IOException {
        Scanner stdin = new Scanner(new FileReader("in.txt"));
        PrintWriter out = new PrintWriter(new FileWriter("out.txt"));
        int n = stdin.nextInt();
        int[] a = new int[n];
        for(int i = 0; i < n; i++) a[i] = stdin.nextInt();
        out.println("Elements: " + n);
        long start = System.nanoTime();
        mergeSort(a, 0, n-1);
        long end = System.nanoTime();
        for(int i = 0; i < n; i++) out.println(a[i] + " ");
        out.println("Elapsed Time: " + (double)(end - start)/1000000.0 + "ms.");
        out.flush();
        stdin.close();
        out.close();
    }

    static void mergeSort(int[] a, int p, int r) {
        if(useInsertion && r-p < limit) {
            insertionSort(a, p, r);
            return;
        }
        if(p < r) {
            int q = (p + r) / 2;
            mergeSort(a, p, q);
            mergeSort(a, q+1, r);
            merge(a, p, q, r);
        }
    }

    static void merge(int[] a, int p, int q, int r) {
        int n1 = q-p+1, n2 = r-q;
        int[] L = new int[n1];
        int[] R = new int[n2];
        for(int i = 0; i < n1; i++) L[i] = a[p+i];
        for(int j = 0; j < n2; j++) R[j] = a[q+j+1];
        for(int k = p, i = 0, j = 0; k <= r; k++) {
            if(j >= n2 || (i < n1 && L[i] <= R[j])) a[k] = L[i++];
            else a[k] = R[j++];
        }
    }

    static void insertionSort(int[] a, int p, int r) {
        for(int i = p+1; i <= r; i++) {
            int t = a[i];
            int j = i - 1;          // declared outside the loop so it is
            for(; j >= p; j--) {    // still in scope after it ends
                if(t > a[j]) break;
                a[j+1] = a[j];
            }
            a[j+1] = t;
        }
    }
}

Change static final boolean useInsertion = false; to true to enable insertion sort, and set static final int limit = 15; to a suitable limit; this is the number of elements below which insertion sort is applied.

Keep digging!

Wednesday, September 29, 2010

Threaded Merge Sort



What?


In my previous post about merge sort, the algorithm was discussed. Although the pictures show something that appears to be parallel, it actually is not. In fact, since the procedure introduced there was recursive, it works in a post-order fashion: first it solves the left half, then the right half, and then it works on the current range. The following picture shows the actual work-flow of the standard recursive merge sort algorithm:


The red lines and numbers show the actual call sequence of the merge sort algorithm; for reference, the basic algorithm is shown here again:

MERGE-SORT(A, p, r):
    if p < r
        then q := (r + p) / 2
             MERGE-SORT(A, p, q)
             MERGE-SORT(A, q+1, r)
             MERGE(A, p, q, r)

So, from the above example, we can see that no two calls are concurrent, i.e. parallel. Each call is one of the three calls made in MERGE-SORT(A, p, r) procedure stated above, also shown below (according to the picture, p=0, r=6):

Step-00: MERGE-SORT(A, 0, 6)
Step-01: MERGE-SORT(A, 0, 3)
Step-02: MERGE-SORT(A, 0, 1)
Step-03: MERGE-SORT(A, 0, 0)
Step-04: MERGE-SORT(A, 1, 1)
Step-05: MERGE(A, 0, 0, 1)
Step-06: MERGE-SORT(A, 2, 3)
Step-07: MERGE-SORT(A, 2, 2)
Step-08: MERGE-SORT(A, 3, 3)
Step-09: MERGE(A, 2, 2, 3)
Step-10: MERGE(A, 0, 1, 3)
Step-11: MERGE-SORT(A, 4, 6)
Step-12: MERGE-SORT(A, 4, 5)
Step-13: MERGE-SORT(A, 4, 4)
Step-14: MERGE-SORT(A, 5, 5)
Step-15: MERGE(A, 4, 4, 5)
Step-16: MERGE-SORT(A, 6, 6)
Step-17: MERGE(A, 4, 5, 6)
Step-18: MERGE(A, 0, 3, 6)

Also note that, there are no overlapping calls at a same level of the tree, which is going to be a key factor.

How threading helps:


After a few observations, we can easily find some interesting properties of the merge sort algorithm. If we look closely at the merge sort tree structure, it becomes very clear that, in the recursive call tree, parent nodes depend on their children, but siblings are independent. This property lets us deduce a threaded version of merge sort where we really can solve the left and right sub-portions of a range in parallel by making the calls threaded.

As the algorithm here is recursive, it might look confusing at first glance, but it is not: nodes share data only with their ancestors, and in each call we wait until the child threads finish their work, so no concurrency problem appears here. We also don't need to worry about mutual exclusion of the shared data, because there is no such situation in this algorithm, as already stated above.

So, the threaded version of the algorithm will look like this:

THREADED-MERGE-SORT(A, p, r):
    if p < r
        then q := (r + p) / 2
             start_thread(left_thread, THREADED-MERGE-SORT(A, p, q))
             start_thread(right_thread, THREADED-MERGE-SORT(A, q+1, r))
             wait_for(left_thread)
             wait_for(right_thread)
             MERGE(A, p, q, r)

According to the above algorithm, the flow of program changes which is shown in the picture below:


Now, because multiple THREADED-MERGE-SORT() calls can run at a time, and thanks to the job scheduling features of modern computers, the effective number of iterations is dramatically reduced, resulting in a much more elegant solution for heavy-duty areas like manipulating large data on drives.

A simple implementation:


Here a simple implementation is presented using the pthread library in C:

#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

typedef struct { int i, j; } pair;
const int MAX = 1024;
int a[MAX];
pthread_attr_t attr;

void *threaded_merge_sort(void *param) {
    pair range = *(pair *)param, lrange, rrange;
    pthread_t lside, rside;
    int p = range.i, q, r = range.j, n1, n2, n = r-p+1, i, j, k;
    int *aleft, *aright;
    if(p < r) {
        q = (p + r) >> 1;
        lrange.i = p, lrange.j = q, rrange.i = q + 1, rrange.j = r;
        pthread_create(&lside, &attr, threaded_merge_sort, (void *)&lrange);
        pthread_create(&rside, &attr, threaded_merge_sort, (void *)&rrange);
        pthread_join(lside, NULL);
        pthread_join(rside, NULL);
        n1 = q - p + 1, n2 = r - q;
        aleft = (int *)malloc(sizeof(int) * n1);
        aright = (int *)malloc(sizeof(int) * n2);
        for(i = 0; i < n1; i++) aleft[i] = a[p+i];
        for(i = 0; i < n2; i++) aright[i] = a[q+1+i];
        for(k = i = j = 0; k < n; k++) {
            if(i >= n1 || (j < n2 && aleft[i] > aright[j])) a[k+p] = aright[j++];
            else a[k+p] = aleft[i++];
        }
        free(aleft);
        free(aright);
    }
    return NULL;
}

int main() {
    int n, i;
    pthread_t sorter;
    pair range;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
    while(scanf("%d", &n)==1 && n) {
        for(i = 0; i < n; i++) scanf("%d", &a[i]);
        range.i = 0, range.j = n-1;
        pthread_create(&sorter, &attr, threaded_merge_sort, (void *)&range);
        pthread_join(sorter, NULL);
        for(i = 0; i < n; i++) printf("%d%c", a[i], (i==n-1? '\n' : ' '));
    }
    pthread_attr_destroy(&attr);
    return 0;
}

This may look a bit messy, but if you compare it with the original, the only real difference is that we create threads instead of making plain recursive calls, that's it.
To learn more about pthread, check these out:

Give it a try! :)

Thursday, August 19, 2010

Merge Sort



Introduction:


Merge-sort is based on the divide-and-conquer paradigm. The Merge-sort algorithm can be described in general terms as consisting of the following three steps:

  1. Divide Step: If the given array A has zero or one element, return A; it is already sorted. Otherwise, divide A into two arrays, A1 and A2, each containing about half of the elements of A.

  2. Recursive Step: Recursively sort arrays A1 and A2.

  3. Conquer Step: Combine the elements back in A by merging the sorted arrays A1 and A2 into a sorted sequence.


The diagram below shows the divide and conquer (or merging) steps stated above. We can visualize merge sort by means of a binary tree where each internal node represents a recursive call and each external node represents an individual element of the given array A. Such a tree is called a merge-sort tree. The heart of the merge sort algorithm is the conquer step, which merges two sorted sequences into a single sorted sequence. This simple but amazing algorithm and a straightforward C++ implementation are presented below, and some useful links are added in the "Reference" section at the end of the post.

Algorithm:


Input: Array A[p...r], indices p, q, r (p ≤ q < r).
Output: Array A[p...r] in ascending order

MERGE-SORT(A, p, r):
if p < r
then q := (r + p) / 2
MERGE-SORT(A, p, q)
MERGE-SORT(A, q+1, r)
MERGE(A, p, q, r)

MERGE(A, p, q, r):
n1 := q - p + 1
n2 := r - q
create arrays L[1...n1+1] and R[1...n2+1]
for i := 1 to n1
do L[i] := A[p + i - 1]
for j := 1 to n2
do R[j] := A[q + j]
L[n1 + 1] := INF
R[n2 + 1] := INF
i := 1
j := 1
for k := p to r
do if L[i] <= R[j]
then A[k] := L[i]
i := i + 1
else
A[k] := R[j]
j := j + 1

Follow this link to see a javascript demonstration that simulates the above algorithm.

Analysis:


In sorting n objects, merge sort has an average and worst-case performance of O(n log n). If the running time of merge sort for a list of length n is T(n), then the recurrence T(n) = 2T(n/2) + n follows from the definition of the algorithm (apply the algorithm to two lists of half the size of the original list, and add the n steps taken to merge the resulting two lists) and the closed form follows from the master theorem.

Simple proof:
The merge sort recursion builds a (nearly) complete binary tree of depth d, and at each level a total of n elements are processed.
So, 2^d ≈ n, which implies d ≈ lg n.
The total number of operations in the merge sort algorithm is therefore:
n * d ≈ n lg n = O(n lg n)

In the worst case, merge sort performs a number of comparisons equal to or slightly smaller than (n⌈lg n⌉ - 2^⌈lg n⌉ + 1), which is between (n lg n - n + 1) and (n lg n + n + O(lg n)).

In the worst case, merge sort does about 39% fewer comparisons than quicksort does in the average case. Merge sort always makes fewer comparisons than quicksort, except in extremely rare cases when they tie, where merge sort's worst case coincides with quicksort's best case. In terms of moves, merge sort's worst case complexity is O(n log n) — the same complexity as quicksort's best case — and merge sort's best case takes about half as many iterations as its worst case. That said, depending on the machine's memory architecture, quicksort can sometimes outperform merge sort in practice.

Recursive implementations of merge sort make 2n - 1 method calls in the worst case, compared to quicksort's n, thus merge sort has roughly twice as much recursive overhead as quicksort. However, iterative, non-recursive, implementations of merge sort, avoiding method call overhead, are not difficult to code. Merge sort's most common implementation does not sort in place; therefore, the memory size of the input must be allocated for the sorted output to be stored in.

Merge sort also has an often overlooked, but practically important, best-case property: if the merge step first checks whether the two halves are already in order, then on pre-sorted input the complexity falls to O(n). Specifically, n-1 comparisons and zero moves are performed, which is the same as simply running through the input, checking whether it is pre-sorted.

Although heap sort has the same time bounds as merge sort, it requires only Θ(1) auxiliary space instead of merge sort's Θ(n), and is often faster in practical implementations. When in-place sorting is necessary, heap sort is a better choice (merge sort can be implemented as an in-place algorithm that still runs in O(n lg n) time, but the performance cost is rarely worth the added complexity).

C++ Implementation:


Call Merge_Sort(A, start, end) to sort the closed range [start, end] of the array A.

void Merge(int A[], int p, int q, int r) {
int i, j, k, n1 = q - p + 1, n2 = r - q;
int *L = new int[n1], *R = new int[n2]; // variable-length arrays are not standard C++
for(i = 0; i < n1; i++)
L[i] = A[p + i];
for(j = 0; j < n2; j++)
R[j] = A[q + j + 1];
for(k = p, i = j = 0; k <= r; k++) {
if(j >= n2 || (i < n1 && L[i] <= R[j])) A[k] = L[i++];
else A[k] = R[j++];
}
delete[] L;
delete[] R;
}

void Merge_Sort(int A[], int p, int r) {
if(p < r) {
int q = (p + r) / 2;
Merge_Sort(A, p, q);
Merge_Sort(A, q+1, r);
Merge(A, p, q, r);
}
}

For implementations in other languages, this page contains the implementation of merge sort procedure in almost all the living languages of the world: http://rosettacode.org/wiki/Sorting_algorithms/Merge_sort



Reference:



Thanks for reading.

Wednesday, August 4, 2010

Simple Take Away Game


Combinatorial games are two-player games with a definite outcome (i.e. they cannot end in a draw) where the players move alternately from one position to another, eventually reaching a terminal position. When a player reaches the terminal position, the winner is decided based on the type of game play.

There are two types of game play: normal play and misère play. In normal play, the one who moves the game to the terminal position is the winner; in misère play, the one who moves the game to the terminal position is the loser. If the rules of the game are the same for both players, it is an impartial game; when the rules are different for the two players, it is a partisan game. A common example of a partisan game is chess.

Take Away Game :


A take-away game is an impartial combinatorial game where there is a single pile of n chips and players remove a certain number of chips from the pile depending on the rules of the game. For ease of analysis, let us consider normal play, where the one who takes the game to the terminal position wins.

Consider a pile of n chips, and say the rules of the game are given as:

  • N = {0, 1, 2, 3, ..., n} is the set of positions the game can be in; 0 is the terminal position and n is the starting position.

  • S = {1, 3, 4} is the set of possible moves.

  • A player can take away x chips from the current pile, where x ∈ S.


Say we need to analyze the game given n and S. By analysis we mean: given n and S, and assuming optimal play, who wins the game? To analyze the game we define two kinds of positions, namely N positions, where the Next player to move wins, and P positions, where the Previous player to have moved wins. A game may start in either a P or an N position, but it always ends in a P position.

So we arrive at the following rules which can be recursively used to define the P and N positions.

  1. All terminal positions are P positions in a normal play.

  2. From a P position, any move takes us to an N position.

  3. From an N position, we can reach a P position in at least one way, so that we emerge the final winner.


If the game starts in a P position, the second player wins: from a P position the game can move only to an N position, and the second player wins by taking the game back to a P position, as Rule 3 above allows (this is called playing optimally).
If the game starts in an N position, the first player wins by always moving the game to a P position.

So let S = {1,3,4} for our analysis.
We use the following algorithm to classify the P and N positions among {0, 1, 2, ..., n}:

  • Step 1: Label every terminal position as a P-position.

  • Step 2: Label every position that can reach a labeled P-position in one move as an N-position.

  • Step 3: Find those positions whose only moves are to labeled N-positions; label such positions as P-positions.

  • Step 4: If no new P-positions were found in step 3, stop; otherwise return to step 2


So we begin by labeling 0 a P position, and let the set P = {0} denote the currently recognized P positions. By Step 2, {1, 3, 4} are N positions. Analyzing the next number in close proximity, we take 2: its only transition (removing 1 chip) leads to the N position 1, so 2 gets added to P while 1, 3, 4 get added to N in this iteration. Moving to the next iteration, 5 (2+3) and 6 (2+4) are N positions, and analyzing 7, its possible transitions are to 6, 4, 3, which are all N positions, hence 7 gets added to P.
So at the end of the second iteration, P = {0, 2, 7} and N = {1, 3, 4, 5, 6}. Continuing further, we get the following:

P={0,2,7,9,14,16,21,23….} and N={1,3,4,5,6,8,10,11,12,13,15,17,18,19,20….}

Generalizing: all positions which leave a remainder of 0 or 2 on dividing by 7 are P positions, and the others are N positions.

So say we have 21 chips and are asked to name the winner assuming optimal play. From the above derivation, 21 is a P position, so the second player can win if he plays optimally. Starting from 21, the possible moves are to 20, 18, 17, which are all N positions. Say we move to one of these:

17 can move to 16, 14, 13 (14 and 16 are both P positions)
18 can move to 17, 15, 14 (14 is a P position)
20 can move to 19, 17, 16 (16 is a P position)

In all the above cases, we observe that there is a move the second player can make such that the game returns to a P position, and the cycle continues until the first player is finally left at an N position such as 1, 3 or 4, from which the second player makes the suitable move to 0 and wins the game. If the second player at any stage makes a non-optimal move, the game goes temporarily out of his control and he may or may not win.

Another Example from SPOJ :


Let us analyze this Problem from SPOJ NGM which can actually be found here.

The problem states that a number n is given and the players take turns subtracting from the number any one of its nonzero digits. The one who makes the number 0 wins the game. The two players are Nikofir and Trofim. This problem helps us understand the concepts discussed above.

Analysis :


We start by adding 0 to P; each of 1, 2, 3, 4, 5, 6, 7, 8, 9 leads directly to 0 and hence is added to N. Next we analyze 10, whose only move is to 9, an N position, so it gets added to P. So at the end of one iteration,

P = {0, 10}, N = {1, 2, 3, 4, 5, 6, 7, 8, 9}.

Next, the numbers {11, 12, 13, 14, 15, 16, 17, 18, 19} each have a move to 10 (by subtracting the last digit), so they belong to N, and the next P number is 20. Proceeding, we find that P is the set of multiples of 10:

P={ x| x is a multiple of 10}

N={x | x is not a multiple of 10}

So, given a number that is not a multiple of 10, it is an N position and the player who moves first always wins, i.e. Nikofir always wins.

If the given number is a multiple of 10, it is a P position and Trofim wins.

Stress on Optimal Play :


Why do we need to analyze optimal play? The point of all this analysis is to let the computer make the best move at the current position, so that it poses a real challenge to the human player; and if the player gets lucky, he can use the same analysis to make a winning move.

Binary Tree -- N Nodes & H Height



The problem:


You are told that you have N unique numbers. How many different binary trees are possible with those N numbers (i.e. nodes) that have exactly H height?

Analysis:


If we look closely, it is not hard to figure out that the problem actually asks: how many structurally different binary trees can be built with N nodes such that each tree has exactly height H. Here, two trees are different when their structures are different. For example:

      @                 @
     / \               / \
    @   @    and      @   @
   / \                   / \
  @   @                 @   @


Solution:


I don't know whether this problem has a closed-form mathematical solution; I have searched for one and found nothing. But if the values of N and H permit, the problem can be solved with a dynamic programming approach.

When we want to build a tree with N nodes and H height, we select one node as the root; then we can put i nodes (1 ≤ i ≤ N-2) on the left subtree and N-1-i nodes on the right subtree, each subtree of height H-1 or less, satisfying that at least one of the subtrees has height exactly H-1. So, with proper base conditions, we can count the trees of exactly H height from this recursive formulation, and we can express the relation as follows:

if D == 1 then exactTrees( N, D ) = ( N == 1 )
if N <= 1 then exactTrees( N, D ) = 0
for all {L, R} such that L, R > 0 and L + R = N - 1:
exactTrees( N, D ) += exactTrees( L, D - 1 ) * exactTrees( R, D - 1 )
exactTrees( N, D ) += smallerTrees( L, D - 2 ) * exactTrees( R, D - 1 )
exactTrees( N, D ) += exactTrees( L, D - 1 ) * smallerTrees( R, D - 2 )

This is pretty obvious: if we want a tree of height H rooted at some node, at least one of its subtrees must have height H - 1; and if we have N nodes at any step, we take 1 node as the root, and then if we take i nodes for the left subtree, N - 1 - i nodes are left for the right subtree.
Because one subtree of an exact-height tree may be shorter, we also need smallerTrees( N, D ), the number of trees with N nodes and height at most D; this one is actually a bit easier:

if N < 0 or D <= 0 then smallerTrees( N, D ) = 0
if D == 1 then smallerTrees( N, D ) = ( N == 1 )
if N <= 1 then smallerTrees( N, D ) = 1
else
for all {L, R} such that L, R > 0 and L + R = N - 1:
smallerTrees( N, D ) += smallerTrees( L, D - 1 ) * smallerTrees( R, D - 1 )


Sample program:


First comes the recursive algorithm (with DP / memoization):

// states: nodes, height
// both arrays must be memset to -1 before the first call
int exact[202][101], smaller[202][101];

int smallTrees(int n, int k)
{
if(n<0 || k<=0) return 0;
if(k==1) return n==1;
if(n<=1) return 1;

int &ret = smaller[n][k], l, r;
if(ret!=-1) return ret;

ret = 0;
for(int i=1; i<n-1; i++)
{
l = i, r = n-1-i;
if(l && r)
{
ret += smallTrees(l, k-1)*smallTrees(r, k-1);
}
}
return ret;
}

int exactTrees(int n, int k)
{
if(k==1) return n==1;
if(n<=1) return 0;

int &ret = exact[n][k], l, r;
if(ret!=-1) return ret;
ret = 0;
for(int i=1; i<n-1; i++)
{
l = i, r = n-1-i;
if(l && r)
{
ret += exactTrees(l, k-1)*exactTrees(r, k-1);
ret += exactTrees(l, k-1)*smallTrees(r, k-2);
ret += smallTrees(l, k-2)*exactTrees(r, k-1);
}
}
return ret;
}

Here is the iterative version (a little more complex):

// states: height, nodes
int exactTrees[101][202], smallTrees[101][202];

void solve(int n, int k)
{
int d, i, j;
exactTrees[1][1] = 1;
for(d=2; d<=k; d++) // depth
{
for(i=1; i<=n; i+=2) // number of nodes
{
for(j=1; j<=i-1; j+=2) // left & right
{
exactTrees[d][i] += exactTrees[d-1][j]*exactTrees[d-1][i-1-j];
exactTrees[d][i] += exactTrees[d-1][j]*smallTrees[d-2][i-1-j];
exactTrees[d][i] += smallTrees[d-2][j]*exactTrees[d-1][i-1-j];
}
}
for(i=1; i<=n; i+=2) // update smaller tree
{
smallTrees[d-1][i] += smallTrees[d-2][i]+exactTrees[d-1][i];
}
}
}

[P.S. The result of this problem grows very rapidly with N and H, so it may easily overflow; problems like this normally ask for the result modulo some value.]

Similar problem:


USACO Section: 2.3, Problem: Cow Pedigrees, Code: nocows.
[P.S. Links won't work for usaco, you need to reach level 2.3 if you want to see the problem yourself.]
If you find any more problem regarding this, please post it as a comment below.

Friday, June 11, 2010

BrainF**k & C++


Here is my BF-to-C translator, so I can write some bf sh*ts and test them easily in my VC++08. To run the program with command-line arguments, _NO_ARGS must be set to 0.

#define _CRT_SECURE_NO_WARNINGS 1
#define _NO_ARGS 0
#include <stdio.h>

FILE *fin, *fout;

void print(char *str, int tab);
int unknown(char ch);

int main(int argc, char **argv) {
char ch, pr, temp[1024];
int tab = 0, cnt, carr;
if(_NO_ARGS) {
scanf("%s", temp);
fin = fopen(temp, "rb");
scanf("%s", temp);
fout = fopen(temp, "wb");
}
else if(argc < 2) {
printf("insufficient arguments...\n");
return 1;
}
else if(argc < 3) {
fin = fopen(argv[1], "rb");
fout = fopen("noname.c", "wb");
}
else {
fin = fopen(argv[1], "rb");
fout = fopen(argv[2], "wb");
}
if(!fin || !fout) {
printf("file error...\n");
return 2;
}
print("#include <stdio.h>", tab);
print("", tab);
print("char buff[1024];", tab);
print("", tab);
print("int main() {", tab++);
print("char *p = buff;", tab);
pr = 0;
cnt = 0;
carr = 0;
while((ch=fgetc(fin))!=EOF) {
if(unknown(ch)) continue;
if(ch=='.' || ch==',' || ch=='[' || ch==']') {
if(carr) {
if(pr=='+') sprintf(temp, "(*p) += %d;", cnt);
else if(pr=='-') sprintf(temp, "(*p) -= %d;", cnt);
else if(pr=='>') sprintf(temp, "p += %d;", cnt);
else if(pr=='<') sprintf(temp, "p -= %d;", cnt);
carr = cnt = 0;
print(temp, tab);
}
if(ch=='.') print("putchar(*p);", tab);
else if(ch==',') print("*p = getchar();", tab);
else if(ch=='[') print("while(*p) {", tab++);
else if(ch==']') print("}", --tab);
}
else if(ch==pr || !carr) {
carr = 1;
cnt++;
}
else {
if(pr=='+') sprintf(temp, "(*p) += %d;", cnt);
else if(pr=='-') sprintf(temp, "(*p) -= %d;", cnt);
else if(pr=='>') sprintf(temp, "p += %d;", cnt);
else if(pr=='<') sprintf(temp, "p -= %d;", cnt);
carr = cnt = 1;
print(temp, tab);
}
pr = ch;
}
if(carr) {
if(pr=='+') sprintf(temp, "(*p) += %d;", cnt);
else if(pr=='-') sprintf(temp, "(*p) -= %d;", cnt);
else if(pr=='>') sprintf(temp, "p += %d;", cnt);
else if(pr=='<') sprintf(temp, "p -= %d;", cnt);
print(temp, tab);
}
print("return 0;", tab);
print("}", --tab);
fclose(fin);
fclose(fout);
return 0;
}

void print(char *str, int tab) {
int i;
if(str[0]) {
for(i=0; i<tab; i++) fputc('\t', fout);
for(i=0; str[i]; i++) fputc(str[i], fout);
}
fputc('\n', fout);
}

int unknown(char ch) {
switch(ch) {
case '+':
case '-':
case '<':
case '>':
case '.':
case ',':
case '[':
case ']': return 0;
default : return 1;
}
return 1;
}

Here is a sample bf code I have written just now, and here is the translated C code.
But no matter the size, bf is really fun!!!

Friday, June 4, 2010

Euler Tour



A famous drawing problem for kids: "You are given a picture, can you draw it without lifting up your pen?"... Well, in graph theory, we can answer this by checking for an Euler tour in the graph.


Actually this is about Euler circuits and Euler paths. We know the conditions for an undirected graph, and we can extend them for directed graphs as well.

Undirected Graph


An undirected graph has an Eulerian tour (either a path or a circuit) if the following conditions hold:
  • The graph is connected (ignoring isolated vertices).
  • Every node has even degree, or exactly two nodes have odd degree.


Directed Graph


A directed graph has an Eulerian tour (either a path or a circuit) if the following conditions hold:
  • The undirected representation of the graph is connected (ignoring isolated vertices).
  • Either every node has equal indegree and outdegree (giving a circuit), or exactly two nodes differ: one with outdegree - indegree = 1 (the start of the path) and one with indegree - outdegree = 1 (the end).


So, we can check these conditions easily with the help of a BFS subroutine.
You might also like to read this and this.

Friday, May 28, 2010

MaxFlow :: Dinitz Algorithm


Here is a nice implementation of Dinitz's blocking-flow algorithm in C++ (with special thanks to Fahim vai). It works on large undirected graphs containing multiple edges and self loops. No STL is used, and this implementation is pretty fast.

Here, the input is: the number of nodes 2≤n≤5000, the number of edges 0≤e≤30000, then e undirected edges in the form (u, v, cap) (1≤u,v≤n and 1≤cap≤10^9). Source and sink are assumed to be 1 and n respectively; this can be changed in the init() call.


#define SET(p) memset(p, -1, sizeof(p))
#define CLR(p) memset(p, 0, sizeof(p))
#define i64 long long

const int INF = 0x7fffffff;
const int MAXN = 5005, MAXE = 60006;

int src, snk, nNode, nEdge;
int Q[MAXN], fin[MAXN], pro[MAXN], dist[MAXN];
int flow[MAXE], cap[MAXE], next[MAXE], to[MAXE];

inline void init(int _src, int _snk, int _n) {
src = _src, snk = _snk, nNode = _n, nEdge = 0;
SET(fin);
}

inline void add(int u, int v, int c) {
to[nEdge] = v, cap[nEdge] = c, flow[nEdge] = 0, next[nEdge] = fin[u], fin[u] = nEdge++;
to[nEdge] = u, cap[nEdge] = c, flow[nEdge] = 0, next[nEdge] = fin[v], fin[v] = nEdge++;
}

bool bfs() {
int st, en, i, u, v;
SET(dist);
dist[src] = st = en = 0;
Q[en++] = src;
while(st < en) {
u = Q[st++];
for(i=fin[u]; i>=0; i=next[i]) {
v = to[i];
if(flow[i] < cap[i] && dist[v]==-1) {
dist[v] = dist[u]+1;
Q[en++] = v;
}
}
}
return dist[snk]!=-1;
}

int dfs(int u, int fl) {
if(u==snk) return fl;
for(int &e=pro[u], v, df; e>=0; e=next[e]) {
v = to[e];
if(flow[e] < cap[e] && dist[v]==dist[u]+1) {
df = dfs(v, min(cap[e]-flow[e], fl));
if(df>0) {
flow[e] += df;
flow[e^1] -= df;
return df;
}
}
}
return 0;
}

i64 dinitz() {
i64 ret = 0;
int df;
while(bfs()) {
for(int i=1; i<=nNode; i++) pro[i] = fin[i];
while(true) {
df = dfs(src, INF);
if(df) ret += (i64)df;
else break;
}
}
return ret;
}

int main() {
int n, e, u, v, c, i;
scanf("%d%d", &n, &e);
init(1, n, n);
for(i=0; i<e; i++) {
scanf("%d%d%d", &u, &v, &c);
if(u!=v) add(u, v, c);
}
printf("%lld\n", dinitz());
return 0;
}

Using an adjacency matrix and/or STL makes it 4 to 10 times slower.

C++ :: Get Variable's Name



How do you print a variable's name instead of its value (not just by typing the name yourself)?

This can be done easily with a #define preprocessor directive: the # operator in a macro replaces its argument with a string literal containing the argument's spelling, so we can get the variable's name as a string. Look below:

#include <stdio.h>

#define getVarName(varName,holder) sprintf(holder, "%s", #varName)

int main() {
int var = 100; // just any type of identifier
char name[100]; // it will get the variables name
getVarName(var, name);
puts(name);
return 0;
}

It will print "var" instead of "100" or anything else. I saw this years ago somewhere I can't remember now; today it suddenly struck me, and I think it might be useful, who knows when...

Saturday, May 22, 2010

Maximum Matching (DFS)



Maximum bipartite matching in a bipartite graph


Although Hopcroft-Karp is faster and smarter, this one is pretty simple to code, especially at contest time and when the graph is relatively small. It uses a DFS subroutine to find augmenting paths and establish matches, and thus produces a maximum matching. This version of BPM takes the adjacency list of the left side of the bipartite graph and updates the Left[] and Right[] arrays with their respective matches.

Here, the DFS subroutine has two for loops: the first checks for right nodes that are still unmatched, and the second is the recursive DFS step. These two steps could be written as a single loop with the condition "( Right[v]==-1 || dfs(Right[v]) )", but separating them improves performance noticeably, because all the unmatched right nodes are checked before descending into DFS.

A sample C++ implementation:

#define SET(x) memset(x, -1, sizeof(x))
#define CLR(x) memset(x, 0, sizeof(x))
#define MAX 100

vector < int > edges[MAX];
bool visited[MAX];
int Left[MAX], Right[MAX];

bool dfs(int u) {
if(visited[u]) return false;
visited[u] = true;
int len = edges[u].size(), i, v;
for(i=0; i<len; i++) {
v = edges[u][i];
if(Right[v]==-1) {
Right[v] = u, Left[u] = v;
return true;
}
}
for(i=0; i<len; i++) {
v = edges[u][i];
if(dfs(Right[v])) {
Right[v] = u, Left[u] = v;
return true;
}
}
return false;
}

int match() {
SET(Left);
SET(Right);
int i, ret = 0;
bool done;
do {
done = true;
CLR(visited);
for(i=0; i<MAX; i++) {
if(Left[i]==-1 && dfs(i)) {
done = false;
}
}
} while(!done);
for(i=0; i<MAX; i++) ret += (Left[i]!=-1);
return ret;
}

Notes: Here, edges[MAX] is the adjacency list of the left side, implemented with vectors; Left[MAX] and Right[MAX] hold the matching, and the procedure match() returns the maximum matching. Pretty straightforward.

Saturday, May 15, 2010

Expression Evaluation



Infix to Postfix transformation and evaluation


Here, I would like to share a Java source for converting an infix expression to its postfix equivalent and evaluating the postfix expression. Postfix is also known as "Reverse Polish Notation". If you want to know more about this algorithm, this will be helpful.

Here is a simple Java implementation. (We could do it more easily in C++, but this also serves an academic purpose.) A few things to note:
  • Fixed format of input, as this is just a demonstration. Do not use spaces.
  • It doesn't check whether the given expression is well-formed.
  • No math errors (e.g. division by zero) are checked; you have to handle that yourself.
  • Check the sample execution for more details.
  • Only binary operators +, -, *, /, %, ^ and parentheses () are supported.


Java Source



//
// @author Zobayer
// @date May 10, 2010
//

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.IOException;
import java.util.List;
import java.util.ArrayList;
import java.util.Stack;

//
// Demonstrates Expression evaluation process.
// Doesn't take care of wrong input,
// You need to handle that on your own.
//

public class Expression {

// A sample main() method to demonstrate this process
public static void main(String[] args) throws IOException {
BufferedReader stdin = new BufferedReader(new InputStreamReader(System.in));
String expr;
List<String> inFix, postFix;
int result;

while((expr = stdin.readLine())!=null) {
expr = "(" + expr + ")";
inFix = getInFix(expr);
postFix = getPostFix(inFix);
result = evaluate(postFix);
System.out.println("Postfix form: " + postFix);
System.out.println("Result: " + result);
}
}

// Parse the input string and form an infix notation
static List<String> getInFix(String expr) {
List<String> list = new ArrayList<String>();
int n, i;
char ch;
boolean hasInt;

for(i = n = 0, hasInt = false; i < expr.length(); i++) {
ch = expr.charAt(i);
if(!isDigit(ch)) {
if(hasInt) {
list.add("" + n);
n = 0;
hasInt = false;
}
list.add("" + ch);
}
else {
n = n * 10 + (ch - 48);
hasInt = true;
}
}

return list;
}

// Enlist the tokens in a postfix notation
static List<String> getPostFix(List<String> inFix) {
List<String> list = new ArrayList<String>();
Stack<String> oper = new Stack<String>();
int i;
char ch;
String token, peek;

for(i = 0; i < inFix.size(); i++) {
token = inFix.get(i);
ch = token.charAt(0);
if(isDigit(ch)) list.add(token);
else if(ch=='(') oper.push("" + ch);
else if(ch==')') {
while(!oper.empty()) {
peek = oper.pop();
if(peek.charAt(0)!='(') list.add(peek);
else break;
}
}
else {
while(!oper.empty()) {
peek = oper.peek();
if(peek.charAt(0)!='(' && preced(ch) <= preced(peek.charAt(0))) {
list.add(peek);
oper.pop();
}
else {
oper.push(token);
break;
}
}
}
}

return list;
}

// Evaluate the postfix notation passed as a list
static int evaluate(List<String> postFix) {
Stack<Integer> stack = new Stack<Integer>();
int i, a, b;
String token;
char ch;

for(i = 0; i < postFix.size(); i++) {
token = postFix.get(i);
ch = token.charAt(0);
if(isDigit(ch)) stack.push(Integer.parseInt(token));
else {
b = stack.pop();
a = stack.pop();
switch(ch) {
case '+': a = a + b; break;
case '-': a = a - b; break;
case '*': a = a * b; break;
case '/': a = a / b; break;
case '%': a = a % b; break;
case '^': a = (int)Math.pow(a, b); break;
}
stack.push(a);
}
}

return stack.pop();
}

// Provides operator precedence
static int preced(char op) {
if(op=='^') return 3;
if(op=='*' || op=='/' || op=='%') return 2;
if(op=='+' || op=='-') return 1;
return 0;
}

// Checks if ch is a digit or not
static boolean isDigit(char ch) {
return (ch >= '0' && ch <= '9');
}
}

Sample run



(3+8-90*36)*((((89-5%6+2^3-10-10-10)))-8)+100
Postfix form: [3, 8, +, 90, 36, *, -, 89, 5, 6, %, -, 2, 3, ^, +, 10, -, 10, -, 10, -, 8, -, *, 100, +]
Result: -174266
90+0
Postfix form: [90, 0, +]
Result: 90
6+763-67*2367-(54/234)
Postfix form: [6, 763, +, 67, 2367, *, -, 54, 234, /, -]
Result: -157820

The algorithm implemented here is pretty simple, so I hope I haven't made a mistake, but who knows? Please check it...

Friday, May 7, 2010

Maximum Matching (Hopcroft)



Hopcroft-Karp is one of the fastest algorithms for finding a maximum cardinality matching in a bipartite graph. It has the best known worst-case time complexity. More details can be found here [courtesy of Wikipedia].

C++ Source Code:



#define MAX 100001
#define NIL 0
#define INF (1<<28)

vector< int > G[MAX];
int n, m, match[MAX], dist[MAX];
// n: number of nodes on left side, nodes are numbered 1 to n
// m: number of nodes on right side, nodes are numbered n+1 to n+m
// G = {NIL} ∪ G1 ∪ G2, where G1 = nodes 1..n and G2 = nodes n+1..n+m

bool bfs() {
int i, u, v, len;
queue< int > Q;
for(i=1; i<=n; i++) {
if(match[i]==NIL) {
dist[i] = 0;
Q.push(i);
}
else dist[i] = INF;
}
dist[NIL] = INF;
while(!Q.empty()) {
u = Q.front(); Q.pop();
if(u!=NIL) {
len = G[u].size();
for(i=0; i<len; i++) {
v = G[u][i];
if(dist[match[v]]==INF) {
dist[match[v]] = dist[u] + 1;
Q.push(match[v]);
}
}
}
}
return (dist[NIL]!=INF);
}

bool dfs(int u) {
int i, v, len;
if(u!=NIL) {
len = G[u].size();
for(i=0; i<len; i++) {
v = G[u][i];
if(dist[match[v]]==dist[u]+1) {
if(dfs(match[v])) {
match[v] = u;
match[u] = v;
return true;
}
}
}
dist[u] = INF;
return false;
}
return true;
}

int hopcroft_karp() {
int matching = 0, i;
// match[] is assumed NIL for all vertex in G
while(bfs())
for(i=1; i<=n; i++)
if(match[i]==NIL && dfs(i))
matching++;
return matching;
}

The implementation is quite straightforward, following the algorithm on the Wikipedia page. I am still looking for optimizations.

Monday, March 22, 2010

UVa, SPOJ & TopCoder




First of all, thanks to Fahim vai, he gave me the idea first...

This is a very easy way to add a handy bookmark item to your browser that can quickly locate UVa, SPOJ & TopCoder problems for you. After you follow these steps, you will be able to open any UVa / SPOJ problem with just a click; you won't even need to enter any web address!

UVa Locator:


Right-click on the bookmark toolbar of your browser and select "New Bookmark...", and the new-bookmark box will appear. In the name field, put whatever you want, like "UVa", and in the location field, paste the following javascript:

javascript:var%20pid=prompt('Enter%20problem%20number:');
location.href='http://uva.onlinejudge.org/external/'+pid.substring(0,pid.length-2)+'/'+pid+'.html';

Now, after clicking "Add", you will see a new bookmark item has been added to your browser's toolbar, just click on it and enter the problem number.. that's it!



Spoj Locator:


A similar process: add a new bookmark item, give it a different name, for example "SPOJ", and in the location field paste the following javascript:

javascript:var%20pid=prompt('Enter%20problem%20name:');
location.href='https://www.spoj.pl/problems/'+pid.toUpperCase()+'/'

Click "Add" and you are done! You will get a bookmark item similar to the previous one. But as you know, SPOJ problems are identified by their code names, not numbers. So enter a code name in the box (don't worry about case) and click OK; it will open the problem's page if it exists. For example, the problem "Longest Common Substring" has the code name LCS, so you would enter "LCS" (not necessarily in uppercase, since the script upper-cases it for you).


TopCoder Locator:


Similarly, TopCoder problems are known by their respective class names. You can add a simple search box for TopCoder problems by adding the following javascript in the same way as above:

javascript:var%20pid=prompt('Enter%20class%20name:');
location.href='http://www.topcoder.com/tc?module=ProblemArchive&class='+pid
It will not open the problem directly; it is a simple javascript that calls TopCoder's problem-archive search with the class name you provide.

As these open in the current window, make sure you open a new tab before using them, to save yourself some back/forward clicks.
Have Fun !!!

Sunday, March 21, 2010

Bigint Library by Jan vai


Thanks to Jane Alam Jan vai for this, it will be a great help... :)

/*
    Author       :    Jan
    Problem Name :    Big int for contest
    Algorithm    :
    Complexity   :
*/

#include <cstdio>
#include <string>
#include <algorithm>
using namespace std;

struct Bigint {
    string a;
    int sign;

    Bigint() {}
    Bigint( string b ) { (*this) = b; }
    int size() { return a.size(); }
    Bigint inverseSign() { sign *= -1; return (*this); }
    Bigint normalize( int newSign ) {
        sign = newSign;
        for( int i = a.size() - 1; i > 0 && a[i] == '0'; i-- ) a.erase(a.begin() + i);
        if( a.size() == 1 && a[0] == '0' ) sign = 1;
        return (*this);
    }
    void operator = ( string b ) {
        a = b[0] == '-' ? b.substr(1) : b;
        reverse( a.begin(), a.end() );
        this->normalize( b[0] == '-' ? -1 : 1 );
    }
    bool operator < ( const Bigint &b ) const {
        if( a.size() != b.a.size() ) return a.size() < b.a.size();
        for( int i = a.size() - 1; i >= 0; i-- ) if( a[i] != b.a[i] ) return a[i] < b.a[i];
        return false;
    }
    Bigint operator + ( Bigint b ) {
        if( sign != b.sign ) return (*this) - b.inverseSign();
        Bigint c;
        for( int i = 0, carry = 0; i < (int)a.size() || i < (int)b.size() || carry; i++ ) {
            carry += (i < (int)a.size() ? a[i] - 48 : 0) + (i < (int)b.a.size() ? b.a[i] - 48 : 0);
            c.a += (carry % 10 + 48);
            carry /= 10;
        }
        return c.normalize(sign);
    }
    Bigint operator - ( Bigint b ) {
        if( sign != b.sign ) return (*this) + b.inverseSign();
        if( (*this) < b ) return (b - (*this)).inverseSign();
        Bigint c;
        for( int i = 0, borrow = 0; i < (int)a.size(); i++ ) {
            borrow = a[i] - borrow - (i < b.size() ? b.a[i] : 48);
            c.a += borrow >= 0 ? borrow + 48 : borrow + 58;
            borrow = borrow >= 0 ? 0 : 1;
        }
        return c.normalize(sign);
    }
    Bigint operator * ( Bigint b ) {
        Bigint c("0");
        for( int i = 0; i < (int)a.size(); i++ ) {
            for( int k = a[i] - 48; k > 0; k-- ) c = c + b; // add shifted b, digit times
            b.a.insert(b.a.begin(), '0');
        }
        return c.normalize(sign * b.sign);
    }
    Bigint operator / ( Bigint b ) {
        if( b.size() == 1 && b.a[0] == '0' ) b.a[0] /= ( b.a[0] - 48 ); // deliberately divide by zero to flag the error
        Bigint c("0"), d;
        for( int j = 0; j < (int)a.size(); j++ ) d.a += "0";
        int dSign = sign * b.sign; b.sign = 1;
        for( int i = a.size() - 1; i >= 0; i-- ) {
            c.a.insert( c.a.begin(), '0');
            c = c + a.substr( i, 1 );
            while( !( c < b ) ) c = c - b, d.a[i]++;
        }
        return d.normalize(dSign);
    }
    Bigint operator % ( Bigint b ) {
        if( b.size() == 1 && b.a[0] == '0' ) b.a[0] /= ( b.a[0] - 48 ); // deliberately divide by zero to flag the error
        Bigint c("0");
        int cSign = sign * b.sign; b.sign = 1;
        for( int i = a.size() - 1; i >= 0; i-- ) {
            c.a.insert( c.a.begin(), '0');
            c = c + a.substr( i, 1 );
            while( !( c < b ) ) c = c - b;
        }
        return c.normalize(cSign);
    }
    void print() {
        if( sign == -1 ) putchar('-');
        for( int i = a.size() - 1; i >= 0; i-- ) putchar(a[i]);
    }
};

int main() {
    Bigint a, b, c;
    a = "511";
    b = "10";

    c = a + b;
    c.print();
    putchar('\n');

    c = a - b;
    c.print();
    putchar('\n');

    c = a * b;
    c.print();
    putchar('\n');

    c = a / b;
    c.print();
    putchar('\n');

    c = a % b;
    c.print();
    putchar('\n');

    return 0;
}

Hope this helps my friends as well. Many of us are afraid of big integers, and the above code may look hard, but it is really just what we do when performing these operations by hand... Use with caution though: these are not the fastest algorithms. Do not attempt to use *, /, or % if the numbers are 10^4 or more digits long.