Tuesday, November 30, 2010

CRC - 32 (IEEE)



In our Data Communication Lab, we were supposed to write a CRC-32 implementation in C/C++. CRC stands for cyclic redundancy check, an error-detection algorithm based on modulo-2 division. We used the IEEE 32-bit polynomial 0x04C11DB7 to generate the CRC-32 of a given string, then recompute it at the receiving end to check whether an error occurred.

However, we looked for it on the net but could not find much information, probably because many languages (like PHP) already include a built-in function to generate the CRC checksum. So, I guess, people don't do this stuff in C/C++ anymore.

The first tricky thing about generating CRC-32 is that the polynomial is 33 bits long, which is an unnatural size for computers as it would need a longer data type. However, after a little observation, it becomes clear that the 33rd bit (the msb) is always 1, so it need not be stored in the polynomial; it is handled by the algorithm itself. Following the algorithm, we only XOR with the polynomial when the msb of the remainder is 1, so after the XOR that bit is always 0 and is simply shifted out of the remainder. For this reason, we don't waste a bit storing that value or add complexity to the code.

Here is my implementation. It is not very efficient, because most of the implementations I found later on the internet use look-up tables to speed up the generation. However, this one is quite simple and follows the algorithm directly:

/*
zobayer hasan
03/03/2011
*/

#include <cstdio>
#include <cstring>
using namespace std;

typedef unsigned int int32;

// the crc-32 ieee 802.3 standard polynomial
const int32 poly = 0x04C11DB7;
const int MAXLEN = 1024;

// reverses the lowest 'bits' bits of 'data'
int32 reflect(int32 data, int bits) {
    int32 ret = 0;
    for(int i = bits - 1; i >= 0; i--) {
        if(data & 1) ret |= (1u << i);
        data >>= 1;
    }
    return ret;
}

int32 crc32(const char *msg, int len) {
    int32 crc = 0xffffffff;
    int i, j;
    for(i = 0; i < len; i++) {
        // cast to unsigned char to avoid sign extension before the shift
        crc ^= reflect((unsigned char)msg[i], 8) << 24;
        for(j = 8; j; j--) {
            crc = (crc << 1) ^ ((crc & 0x80000000) ? poly : 0x0);
        }
    }
    return reflect(crc, 32) ^ 0xffffffff;
}

int main() {
    int32 crc;
    int len;
    char msg[MAXLEN];

    // fgets instead of gets, which is unsafe (no bounds check)
    fgets(msg, MAXLEN, stdin);
    msg[strcspn(msg, "\n")] = 0;
    len = strlen(msg);

    // generate crc32 for msg
    crc = crc32(msg, len);
    printf("CRC: 0x%08X\n", crc);

    return 0;
}


Here, reflect() simply reverses the lowest 'bits' bits of 'data', i.e. the bit pattern is mirrored.

Now, how do we check consistency? It is simple: before sending a message, we generate the CRC and send it along with the original message. At the receiving end, the CRC is appended to the message and the CRC is computed again; this time, if no error has occurred, the remainder will be 0 (for the textbook algorithm without the initial and final XOR; variants like the one above instead yield a fixed residue).

Ahh, quite short code :)

Saturday, November 20, 2010

Matrix Exponentiation



Introduction:


Don't be confused by the title: this article has nothing to do with "how to calculate an exponent of a given matrix"; rather, it will discuss how to use this technique to solve a specific class of problems.


Sometimes we face problems where we can easily derive a recurrence relation (mostly suitable for a dynamic programming approach), but the given constraints are enough to make us cry. That's where the matrix exponentiation idea comes in. The situation can be made clearer with the following example:


Say a problem asks: find f(n), the n'th Fibonacci number. When n is sufficiently small, we can solve the problem with a simple recursion, f(n) = f(n-1) + f(n-2), or we can follow a dynamic programming approach to avoid calculating the same function over and over again. But what will you do if the problem says: given 0 < n < 1000000000, find f(n) % 999983? No doubt dynamic programming will fail!


We'll develop the idea of how and why these types of problems can be solved by matrix exponentiation later; first let's see how matrices can help us represent recurrence relations.


Prerequisite:


Before continuing, you need to know:

  • Given two matrices, how to find their product, or given the product matrix of two matrices, and one of them, how to find the other matrix.
  • Given a matrix of size d x d, how to find its n'th power in O(d^3 log n).


Patterns:


What we need first is a recurrence relation, and what we want to do is find a matrix M which can lead us to the desired state from a set of already-known states. Say we know k states of a given recurrence relation and want to find the (k+1)'th state. Let M be a k x k matrix, and build a matrix A:[k x 1] from the known states of the recurrence; we want to get a matrix B:[k x 1] which will represent the set of next states, i.e. M x A = B, as shown below:


    | f(n)   |   | f(n+1)   |
    | f(n-1) |   | f(n)     |
M x | f(n-2) | = | f(n-1)   |
    | ...... |   | ......   |
    | f(n-k) |   | f(n-k+1) |

So, if we can design M accordingly, the job's done! The matrix can then be used to represent the recurrence relation.


  • Type 1:

    Lets start by the simplest one, f(n) = f(n-1) + f(n-2).
    So, f(n+1) = f(n) + f(n-1)
    Let, we know, f(n) and f(n-1); we want to get f(n+1)
    From the above situation, matrix A and B can be formed as shown below:





    Matrix A        Matrix B

    | f(n)   |      | f(n+1) |
    | f(n-1) |      | f(n)   |

    [Note: matrix A will always be designed in such a way that every state on which f(n+1) depends is present]
    So, now, we need to design a 2x2 matrix M such that, it satisfies M x A = B as stated above.
    The first element of B is f(n+1), which is actually f(n) + f(n-1). To get this from matrix A, we need one f(n) and one f(n-1). So the 1st row of M will be [1 1].

    | 1 1   | x | f(n)   | = | f(n+1) |
    | ----- |   | f(n-1) |   | ------ |

    [Note: ----- means, we are not concerned about this value]
    Similarly, 2nd item of B is f(n) which we can get by simply taking 1 f(n) from A. So, the 2nd row of M is [1 0].

    | ----- | x | f(n)   | = | ------ |
    | 1 0   |   | f(n-1) |   | f(n)   |

    [I hope you know how matrix multiplication is done and how the values are assigned!]
    Thus we get the desired 2 x 2 matrix M:

    | 1 1 | x | f(n)   | = | f(n+1) |
    | 1 0 |   | f(n-1) |   | f(n)   |


    If you are confused about how the above matrix is calculated, you might try doing it this way:


    We know that the multiplication of an n x n matrix M with an n x 1 matrix A generates an n x 1 matrix B, i.e. M x A = B. The k'th element in the product matrix B is the product of the k'th row of M with the n x 1 matrix A.
    In the above example, the 1st element in B is f(n+1) = f(n) + f(n-1). So it's the product of the 1st row of matrix M and matrix A. Let the first row of M be [x y]. Then, according to matrix multiplication,


    x * f(n) + y * f(n-1) = f(n+1) = f(n) + f(n-1)
    ⇒ x = 1, y = 1

    Thus we find that the first row of matrix M is [1 1]. Similarly, let the 2nd row of matrix M be [x y]; according to matrix multiplication:

    x * f(n) + y * f(n-1) = f(n)
    ⇒ x = 1, y = 0

    Thus we get the second row of M is [1 0].

  • Type 2:

    Now we make it a bit more complex: find f(n) = a * f(n-1) + b * f(n-2), where a and b are some constants.
    This tells us, f(n+1) = a * f(n) + b * f(n-1).
    By now it should be clear that the dimension of the matrices will be equal to the number of dependencies, i.e. in this particular example, again 2. So for A and B we can build two matrices of size 2 x 1:





    Matrix A        Matrix B

    | f(n)   |      | f(n+1) |
    | f(n-1) |      | f(n)   |

    Now, for f(n+1) = a * f(n) + b * f(n-1), we need [a b] in the first row of the objective matrix M instead of [1 1] from the previous example, because now we need a copies of f(n) and b copies of f(n-1).

    | a b   | x | f(n)   | = | f(n+1) |
    | ----- |   | f(n-1) |   | ------ |

    And for the 2nd item in B, i.e. f(n), we already have that in matrix A, so we just take it, which means the 2nd row of matrix M will be [1 0] as before.

    | ----- | x | f(n)   | = | ------ |
    | 1 0   |   | f(n-1) |   | f(n)   |

    So, this time we get:

    | a b | x | f(n)   | = | f(n+1) |
    | 1 0 |   | f(n-1) |   | f(n)   |

    Pretty simple, just like the previous one...


  • Type 3:

    We've already grown a bit older; now let's face a more complex relation: find f(n) = a * f(n-1) + c * f(n-3).
    Oops! A few minutes ago, all we saw were contiguous states, but here the state f(n-2) is missing. Now what to do?


    Actually, this is not a problem anymore; we can rewrite the relation as follows: f(n) = a * f(n-1) + 0 * f(n-2) + c * f(n-3), giving f(n+1) = a * f(n) + 0 * f(n-1) + c * f(n-2). Now we see that this is actually the form described in Type 2. So here the objective matrix M will be 3 x 3, and the elements are:


    | a 0 c |   | f(n)   |   | f(n+1) |
    | 1 0 0 | x | f(n-1) | = | f(n)   |
    | 0 1 0 |   | f(n-2) |   | f(n-1) |

    These are calculated in the same way as in Type 2. [Note: if you find it difficult, try it on pen and paper!]


  • Type 4:

    Life is getting as complex as hell, and Mr. Problem now asks you to find f(n) = f(n-1) + f(n-2) + c, where c is any constant.


    Now, this is a new one. In everything we have seen so far, after the multiplication, each state in A transforms to its next state in B:
    f(n) = f(n-1) + f(n-2) + c
    f(n+1) = f(n) + f(n-1) + c
    f(n+2) = f(n+1) + f(n) + c
    ................................. and so on
    So we can't handle it in the previous fashion. But how about we add c itself as a state?


        | f(n)   |   | f(n+1) |
    M x | f(n-1) | = | f(n)   |
        | c      |   | c      |

    Now it's not that hard to design M in the previous fashion. Here it is, but don't forget to verify it yourself:

    | 1 1 1 |   | f(n)   |   | f(n+1) |
    | 1 0 0 | x | f(n-1) | = | f(n)   |
    | 0 0 1 |   | c      |   | c      |


  • Type 5:

    Let's put it all together: find the matrix suitable for f(n) = a * f(n-1) + c * f(n-3) + d * f(n-4) + e.
    I'll leave it as an exercise for the reader. The final matrix is given here; check whether it matches your solution. Also find matrices A and B.


    | a 0 c d 1 |
    | 1 0 0 0 0 |
    | 0 1 0 0 0 |
    | 0 0 1 0 0 |
    | 0 0 0 0 1 |

    [Note: you may take a look back at Types 3 and 4]


  • Type 6:



    Sometimes, a recurrence is given like this:


    f(n) = if n is odd, f(n-1) else, f(n-2)
    In short:
    f(n) = (n&1) * f(n-1) + (!(n&1)) * f(n-2)

    Here, we can just split the function on an odd/even basis, keep 2 different matrices for the two cases, and calculate them separately. Many different patterns can appear in practice, but these are the basic ones.


  • Type 7:

    Sometimes we may need to maintain more than one recurrence, where they are interrelated. For example, let a recurrence relation be:
    g(n) = 2g(n-1) + 2g(n-2) + f(n), where f(n) = 2f(n-1) + 2f(n-2). Here, recurrence g(n) depends on f(n), and they can be calculated in the same matrix, but with increased dimensions. Let's design the matrices A and B, then we'll try to find matrix M.




    Matrix A        Matrix B

    | g(n)   |      | g(n+1) |
    | g(n-1) |      | g(n)   |
    | f(n+1) |      | f(n+2) |
    | f(n)   |      | f(n+1) |

    Here, g(n+1) = 2g(n) + 2g(n-1) + f(n+1) and f(n+2) = 2f(n+1) + 2f(n).
    Now, using the above process, we can generate the objective matrix M as follows:

    | 2 2 1 0 |
    | 1 0 0 0 |
    | 0 0 2 2 |
    | 0 0 1 0 |

So, these are the basic categories of recurrence relations that can be solved with this simple technique.

Analysis:


Now that we have seen how matrix multiplication can be used to maintain recurrence relations, we are back to our first question: how does this help us solve recurrences over a huge range?


Recall the recurrence f(n) = f(n-1) + f(n-2).
We already know that:


M x | f(n)   | = | f(n+1) |
    | f(n-1) |   | f(n)   |
............(1)

How about we multiply both sides by M again? Like this:

M x M x | f(n)   | = M x | f(n+1) |
        | f(n-1) |       | f(n)   |

Applying (1) again, this time at n+1:

M x M x | f(n)   | = M x | f(n+1) | = | f(n+2) |
        | f(n-1) |       | f(n)   |   | f(n+1) |

So, we finally get:

M^2 x | f(n)   | = | f(n+2) |
      | f(n-1) |   | f(n+1) |

Similarly we can show:




M^3 x | f(n)   | = | f(n+3) |
      | f(n-1) |   | f(n+2) |

M^4 x | f(n)   | = | f(n+4) |
      | f(n-1) |   | f(n+3) |

...............................

M^k x | f(n)   | = | f(n+k)   |
      | f(n-1) |   | f(n+k-1) |


Thus we can get any state f(n) by simply raising the objective matrix M to the power n-1, in O(d^3 log n), where d is the dimension of the square matrix M. So, even if n = 1000000000, this can still be calculated pretty easily as long as d^3 is sufficiently small.


Related problems:


UVa 10229 : Modular Fibonacci
UVa 10870 : Recurrences
UVa 11651 : Krypton Number System
UVa 10754 : Fantastic Sequence
UVa 11551 : Experienced Endeavour

Regards, Zobayer Hasan.



Merge Sort Improvement?



C++ STL uses a trick to improve the performance of merge sort: when the number of elements in a call drops below a small threshold (around 50 or so), it uses insertion sort on those elements instead of recursing further. However, this doesn't seem to help much when we write the merge sort procedure manually.

This probably pays off for STL because, due to its OOP nature, STL methods carry overhead on almost every call, so spawning more recursive calls may turn out to be more costly than a small O(k^2) subroutine. As we have seen in previous posts, the recursive calls of the merge sort procedure form a binary-tree-like structure, and the deeper it goes, the more dramatically the number of nodes increases. Moreover, allocating new frames on the call stack always has overhead. So they just change the algorithm a bit: the bottom few levels are cut from the tree, removing a huge number of nodes, i.e. a huge amount of overhead, which in turn improves performance.


So they change the algorithm of MERGE-SORT() a bit:

MERGE-SORT(A, p, r):
    if r - p < k                ; k is the tolerance limit
        INSERTION-SORT(A, p, r)
        return
    if p < r
        then q := (r + p) / 2
             MERGE-SORT(A, p, q)
             MERGE-SORT(A, q+1, r)
             MERGE(A, p, q, r)

Now, what happens when we use it manually? I tried it over a huge range with k = 15, meaning insertion sort is used when the number of elements is below 15. I found an interesting result in this experiment: insertion sort seems better within a certain range, but beyond that, it keeps getting slower. So the value of k is a trade-off between the overhead of recursive calls and the running time of insertion sort. This graph shows the result of my experiment; it will certainly vary if you try the same.


Here is a simple Java tester I used for this comparison. Have a look:

import java.util.*;
import java.io.*;

public class Main {

    static final boolean useInsertion = false;
    static final int limit = 15;

    public static void main(String[] args) throws IOException {
        Scanner stdin = new Scanner(new FileReader("in.txt"));
        PrintWriter out = new PrintWriter(new FileWriter("out.txt"));
        int n = stdin.nextInt();
        int[] a = new int[n];
        for(int i = 0; i < n; i++) a[i] = stdin.nextInt();
        out.println("Elements: " + n);
        long start = System.nanoTime();
        mergeSort(a, 0, n-1);
        long end = System.nanoTime();
        for(int i = 0; i < n; i++) out.println(a[i]);
        out.println("Elapsed Time: " + (double)(end - start)/1000000.0 + "ms.");
        out.flush();
        stdin.close();
        out.close();
    }

    static void mergeSort(int[] a, int p, int r) {
        if(useInsertion && r-p < limit) {
            insertionSort(a, p, r);
            return;
        }
        if(p < r) {
            int q = (p + r) / 2;
            mergeSort(a, p, q);
            mergeSort(a, q+1, r);
            merge(a, p, q, r);
        }
    }

    static void merge(int[] a, int p, int q, int r) {
        int n1 = q-p+1, n2 = r-q;
        int[] L = new int[n1];
        int[] R = new int[n2];
        for(int i = 0; i < n1; i++) L[i] = a[p+i];
        for(int j = 0; j < n2; j++) R[j] = a[q+j+1];
        for(int k = p, i = 0, j = 0; k <= r; k++) {
            if(j >= n2 || (i < n1 && L[i] <= R[j])) a[k] = L[i++];
            else a[k] = R[j++];
        }
    }

    static void insertionSort(int[] a, int p, int r) {
        for(int i = p+1; i <= r; i++) {
            int t = a[i];
            int j = i - 1;      // declared outside the loop, since it is used after it
            while(j >= p && t < a[j]) {
                a[j+1] = a[j];
                j--;
            }
            a[j+1] = t;
        }
    }
}

Change static final boolean useInsertion = false; to 'true' to enable insertion sort, and set static final int limit = 15; to a suitable limit; this is the number of elements below which insertion sort is applied.

Keep digging!