## Friday, December 21, 2012

### Section 3.3: Riding the Fences

This is just another simple Euler path problem. Given an undirected, unweighted graph which may contain multiple edges between the same pair of nodes, you have to find the lexicographically smallest Euler path.

I don't know what the problem setter was thinking, but he made a big mistake. The problem statement says "Your program must output the path of intersections that, if interpreted as a base 500 number, would have the smallest magnitude," where nodes are numbered from 1 to 500. Clearly, 500 is not a base-500 digit. I made several wrong submissions trying foolish tricks to work around this; then I just used plain lexicographic ordering, and it passed. Pretty funny!

The problem statement guarantees there is no input for which an Euler path does not exist. This makes the solution easier: check whether all degrees are even; if so, start from the smallest-numbered node, otherwise start from the smallest-numbered odd-degree node.

```
algorithm for finding Euler path:
graph[u][v] holds the number of edges between u and v

find(node u):
    for each adjacent v of u in lexicographic order:
        while graph[u][v] > 0:
            graph[u][v]--
            graph[v][u]--
            find(v)
    stack.push(u)
```
Now the stack contains the Euler path in reverse order, so just keep popping!
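The pseudocode translates almost line-for-line into Java. Here is a minimal sketch (the class and method names are mine; the adjacency matrix and the start-node rule follow the discussion above):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class Fences {
    static int n;            // nodes are numbered 1..n
    static int[][] graph;    // graph[u][v] = number of edges between u and v
    static Deque<Integer> stk;

    static void find(int u) {
        for(int v = 1; v <= n; v++) {       // lexicographic order
            while(graph[u][v] > 0) {
                graph[u][v]--;              // remove one copy of edge u-v
                graph[v][u]--;
                find(v);
            }
        }
        stk.push(u);                        // post-order push
    }

    // Returns the lexicographically smallest Euler path, assuming one exists.
    static List<Integer> eulerPath() {
        int start = 1;                      // all degrees even: smallest node
        for(int u = n; u >= 1; u--) {
            int deg = 0;
            for(int v = 1; v <= n; v++) deg += graph[u][v];
            if(deg % 2 == 1) start = u;     // smallest odd-degree node wins
        }
        stk = new ArrayDeque<Integer>();
        find(start);
        List<Integer> path = new ArrayList<Integer>();
        while(!stk.isEmpty()) path.add(stk.pop());
        return path;
    }

    public static void main(String[] args) {
        n = 3;
        graph = new int[n + 1][n + 1];
        int[][] edges = {{1, 2}, {2, 3}, {3, 1}};
        for(int[] e : edges) { graph[e[0]][e[1]]++; graph[e[1]][e[0]]++; }
        System.out.println(eulerPath());    // triangle: 1 2 3 1
    }
}
```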

## Wednesday, December 19, 2012

### SPOJ: 348. Expedition

Nice problem! The first observation: to reach the city (say, point 0), every single point between the current position and the city must also be reachable. The task is to minimize the number of fuel stops; there are at most 10000 stations. So, we sort the fuel stations and start from the current position. To reach a fuel station at distance d, we must have fuel f greater than or equal to d. Also, drawing from the larger fuel amounts first always minimizes the number of stations we must stop at.

So, for each station along the way: if we can reach it with the fuel in hand, we push its fuel amount into a priority queue (a max heap in this case) for future use. If we cannot reach a particular station, we keep popping the queue and adding those amounts to the fuel currently available until we are able to reach the station, and then push its value. This strategy is optimal for reaching any particular station, as the priority queue holds the largest fuel amounts seen so far along the path but not yet used.

If at any time the queue becomes empty and the amount of fuel is still not sufficient, then that station cannot be reached, and hence it will be impossible to reach the city.
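A minimal Java sketch of this greedy, assuming the SPOJ input format (each station given as a (distance-to-town, fuel) pair, total distance L, initial fuel P, and 1 unit of fuel per unit of distance); the names are mine:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.PriorityQueue;

public class Expedition {
    // Returns the minimum number of fuel stops, or -1 if the town is unreachable.
    static int minStops(int[][] stations, int L, int P) {
        int n = stations.length;
        int[][] byPos = new int[n + 1][2];
        for(int i = 0; i < n; i++) {
            byPos[i][0] = L - stations[i][0];  // position measured from the start
            byPos[i][1] = stations[i][1];
        }
        byPos[n][0] = L;                       // treat the town as a final station
        byPos[n][1] = 0;
        Arrays.sort(byPos, new Comparator<int[]>() {
            public int compare(int[] a, int[] b) { return a[0] - b[0]; }
        });

        // max heap of fuel amounts passed but not yet used
        PriorityQueue<Integer> pq = new PriorityQueue<Integer>(n + 1, Collections.reverseOrder());
        int fuel = P, stops = 0;
        for(int[] s : byPos) {
            while(fuel < s[0] && !pq.isEmpty()) {  // top up from the largest skipped amount
                fuel += pq.remove();
                stops++;
            }
            if(fuel < s[0]) return -1;             // heap empty, still short of fuel
            pq.add(s[1]);
        }
        return stops;
    }

    public static void main(String[] args) {
        // sample from the SPOJ statement: answer is 2
        int[][] st = {{4, 4}, {5, 2}, {11, 5}, {15, 10}};
        System.out.println(minStops(st, 25, 10));
    }
}
```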

Happy Coding

### SPOJ: 11409. Fibonacci With a Twist

After spending hours on it, I was finally able to solve it. The problem statement is pretty clear: given a recursive function, find its nth term.

f(0) = 0
f(1) = 1
f(n) = f(n-1) + f(n-2) + (n-1)

The given range of n makes it obvious that the solution must be a logarithmic algorithm or a constant-time closed form. However, I don't know the closed form; mathematics was never my thing. I am also not sure if this can be solved directly using matrix exponentiation.

Looking at the recurrence, it is obvious that the equation can be written in the following form for higher n:

f(n) = a_n * f(0) + b_n * f(1) + c_n

Here, a_n, b_n and c_n are the coefficients for the nth term. We can also discard all the terms containing f(0), which is actually 0. So, if we continue writing this, we can find a really nice pattern:

f(0) = 0
f(1) = 1
f(2) = f(1) + (2-1) = 1 + (1)
f(3) = f(2) + f(1) + (3-1) = 2 + (1) + (2)
f(4) = f(3) + f(2) + (4-1) = 3 + 2(1) + (2) + (3)

similarly...
f(5) = 5 + 3(1) + 2(2) + (3) + (4)
f(6) = 8 + 5(1) + 3(2) + 2(3) + (4) + (5)
f(7) = 13 + 8(1) + 5(2) + 3(3) + 2(4) + (5) + (6)
f(8) = 21 + 13(1) + 8(2) + 5(3) + 3(4) + 2(5) + (6) + (7)

I think some Fibonacci numbers have already caught your eye. So, now we try to generalize. Just to note: we can do this because the coefficients of similar terms are consecutive Fibonacci numbers.
f(n) = fib(n) + fib(n-1)*1 + fib(n-2)*2 + fib(n-3)*3 + ... ... + fib(1)*(n-1)
f(n+1) = fib(n+1) + fib(n)*1 + fib(n-1)*2 + fib(n-2)*3 + ... ... + fib(1) * n

subtracting f(n) from f(n+1), we can get:
f(n+1)-f(n) = fib(n+1) + {fib(n-1) + ... ... + fib(1)}
f(n+1)-f(n) = fib(n+1) + fib(n+1) - 1 [sum(fib(n)) = fib(n+2)-1]
f(n+1) = f(n) + 2fib(n+1) - 1

This can be solved directly using matrix exponentiation. However, we will need a 4x4 matrix to do that. But, we can actually reduce this equation to a non recursive version:

f(n+1) = f(n) + 2fib(n+1) - 1
f(n+1) = f(n-1) + 2fib(n) + 2fib(n+1) - 2
f(n+1) = f(n-2) + 2fib(n-1) + 2fib(n) + 2fib(n+1) - 3
f(n+1) = f(0) + 2fib(1) + 2fib(2) + 2fib(3) + ... ... + 2fib(n+1) - (n+1)
f(n+1) = 2(fib(n+3) - 1) - (n+1)

So, rewriting for f(n):
f(n) = 2(fib(n+2) - 1) - n

Now, this is no longer a hard recursive function; we just need the (n+2)th Fibonacci term, which can be computed using only a 2x2 matrix. This post shows you how, if you need to know.

However, for this specific problem, be careful about the final result; you may need a modulus correction.
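To make the final step concrete, here is a minimal sketch: fib via 2x2 matrix exponentiation, then f(n) = 2(fib(n+2) - 1) - n with modulus correction on the subtractions. The modulus value below is only a placeholder; use whatever the problem specifies.

```java
public class FibTwist {
    static final long MOD = 1000000007L;   // placeholder modulus

    // 2x2 matrix product modulo MOD
    static long[][] mul(long[][] A, long[][] B) {
        long[][] C = new long[2][2];
        for(int i = 0; i < 2; i++)
            for(int j = 0; j < 2; j++)
                for(int t = 0; t < 2; t++)
                    C[i][j] = (C[i][j] + A[i][t] * B[t][j]) % MOD;
        return C;
    }

    // fib(n) = top-right entry of [[1,1],[1,0]]^n
    static long fib(long n) {
        long[][] r = {{1, 0}, {0, 1}};     // identity
        long[][] m = {{1, 1}, {1, 0}};
        for(; n > 0; n >>= 1) {
            if((n & 1) == 1) r = mul(r, m);
            m = mul(m, m);
        }
        return r[0][1];
    }

    // f(n) = 2*(fib(n+2) - 1) - n, kept non-negative under the modulus
    static long f(long n) {
        long res = 2 * ((fib(n + 2) - 1 + MOD) % MOD) % MOD;
        return (res - n % MOD + MOD) % MOD;
    }

    public static void main(String[] args) {
        // sanity check against the recurrence f(n) = f(n-1) + f(n-2) + (n-1)
        long a = 0, b = 1;
        for(int n = 2; n <= 20; n++) {
            long c = b + a + (n - 1);
            a = b; b = c;
            if(c != f(n)) throw new AssertionError("mismatch at n = " + n);
        }
        System.out.println(f(10));   // 276
    }
}
```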

## Friday, December 14, 2012

(Part 1)

### Apriori Algorithm:

The algorithm works as follows: first it generates all frequent itemsets of length 1 w.r.t. the given threshold (minimum support). Then it continues to generate itemsets of lengths 2, 3, ..., n while possible. Lots of improvements and prunings are possible in the implementation.

Do we need to sort frequently? The first observation is: if the items are taken in sorted order at step k = 1, all the frequent patterns generated later will also be in sorted order if maintained properly. This eliminates the need to sort, or to store itemsets in logarithmic data structures like map or set. Rather, we can store them in an arraylist- or vector-like data structure.

How do we generate itemsets of length k+1? We generate them by merging two itemsets of length k. If two itemsets I1 and I2 have a common prefix of length k-1 and I1[k] < I2[k], we can take I1[1 ... k-1], I1[k], I2[k], which is an itemset of length k+1. As our itemsets are sorted, and the frequent itemsets generated are also sorted, this can be done in O(N*K^2). The naive approach would take O(N*K^3); however, since we can pre-calculate the length of the common prefix of consecutive itemsets in O(N*K), we can later use this to perform the joining operation stated above and also terminate early rather than scanning all N itemsets. The approach is demonstrated in the source code.

What are the prunings? The most important observation about the Apriori algorithm is: if an itemset is not frequent, none of its supersets can be frequent. Equivalently, if an itemset is to be frequent, all of its subsets must be frequent. The first property is checked automatically by the algorithm, as it builds the itemsets of the current step from the frequent itemsets of the previous step. Now, how is the second rule checked? We don't really need to check all the subsets; all we need is to check the subsets of length k-1 at step k, which can easily be done with manual hashing or a Java HashMap. This saves a lot of hassle.

```/*
Author: Zobayer Hasan
*/
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

public class Apriori extends Thread {
    public static boolean debugger = false;

    private final Database db;
    private final List<Integer> itemset;
    private final List<List<Integer>> frequent;
    private final double minsup;

    public Apriori(String thrName, Database db, double minsup) {
        super(thrName);
        this.db = db;
        itemset = db.getItemset();
        frequent = new ArrayList<List<Integer>>();
        this.minsup = minsup;
    }

    @Override
    public void run() {
        double startTime = System.currentTimeMillis();

        int k = 1, n = db.dbSize();
        List<List<Integer>> Ck = new ArrayList<List<Integer>>();
        List<List<Integer>> Lk = new ArrayList<List<Integer>>();
        HashMap<List<Integer>, Integer> seenK = new HashMap<List<Integer>, Integer>();

        // candidates of length 1: one singleton itemset per distinct item
        for(Integer item : itemset) {
            List<Integer> temp = new ArrayList<Integer>();
            temp.add(item);
            Lk.add(temp);
        }

        while(k <= n && !Lk.isEmpty()) {
            if(debugger) {
                System.out.println("Step " + k);
                System.out.println("Lk: " + Lk);
            }

            // keep only the candidates that meet the minimum support
            seenK.clear();
            Ck.clear();
            for(List<Integer> kth : Lk) {
                int count = db.scanDatabase(kth);
                if((double)count < Math.ceil(minsup * (double)n / 100.0)) continue;
                Ck.add(kth);
                frequent.add(kth);
            }

            if(debugger) {
                System.out.println("Ck: " + Ck);
            }

            if(Ck.isEmpty()) break;

            for(List<Integer> freq : Ck) {
                seenK.put(freq, k);
            }

            // prefixlen[i] = common prefix length of consecutive frequent itemsets
            int[] prefixlen = new int[Ck.size()];
            prefixlen[0] = 0;
            for(int i = 1; i < Ck.size(); i++) {
                prefixlen[i] = prefixLen(Ck.get(i-1), Ck.get(i));
            }

            // join step: merge itemsets that share a (k-1)-length prefix
            List<List<Integer>> temp = new ArrayList<List<Integer>>();
            for(int i = 0; i < Ck.size(); i++) {
                for(int j = i + 1; j < Ck.size(); j++) {
                    if(prefixlen[j] == k-1) {
                        if(debugger) {
                            System.out.println("Joining: " + i + ":" + Ck.get(i) + " + " + j + ":" + Ck.get(j) + " Prefix Length " + prefixlen[j]);
                        }
                        temp.add(prefixJoin(Ck.get(i), Ck.get(j)));
                    }
                    else break;
                }
            }

            if(debugger) {
                System.out.println("Temporary: " + temp);
            }

            // prune step: every subset obtained by dropping one item must be frequent
            Lk.clear();
            for(List<Integer> list : temp) {
                boolean candid = true;
                if(k > 1) {
                    for(int i = 0; i < list.size(); i++) {
                        List<Integer> prev = new ArrayList<Integer>();
                        for(int j = 0; j < list.size(); j++) {
                            if(i != j) prev.add(list.get(j));
                        }
                        if(!seenK.containsKey(prev)) {
                            candid = false;
                            break;
                        }
                    }
                }
                if(candid) {
                    Lk.add(list);
                }
            }

            if(debugger) {
                System.out.println("Pruned: " + Lk);
            }

            k++;
        }

        double endTime = System.currentTimeMillis();
        System.out.println("Apriori completed in " + (endTime - startTime)/1000.0 + " seconds");
    }

    public void printPatterns() {
        System.out.println("Frequent Itemsets");
        for(List<Integer> pattern : frequent) {
            System.out.println(pattern);
        }
        System.out.println("Total " + frequent.size() + " itemsets");
    }

    private int prefixLen(List<Integer> left, List<Integer> right) {
        int len;
        for(len = 0; len < left.size() && len < right.size(); len++) {
            if(left.get(len).compareTo(right.get(len)) != 0) return len;
        }
        return len;
    }

    private List<Integer> prefixJoin(List<Integer> left, List<Integer> right) {
        // left and right share a (k-1)-length prefix; append right's last item
        List<Integer> ret = new ArrayList<Integer>();
        for(Integer i : left) {
            ret.add(i);
        }
        ret.add(right.get(right.size() - 1));
        return ret;
    }
}
```
This class is threaded, so it is possible to take advantage of multicore processors.

A sample test class is shown here.
```/*
Author: Zobayer Hasan
*/
public class FIMtest {

public static void main(String[] args) {
Database db = null;
try {
db = new Database("mushroom.dat");

} catch(Exception e) {
e.printStackTrace();
}

System.out.println("\nStarting Apriori");

Apriori test1 = new Apriori("test1", db, 40.0);
Apriori.debugger = true;
test1.start();
try {
test1.join();
test1.printPatterns();
} catch(Exception e) {
e.printStackTrace();
}
}
}
```

So, this is it: an efficient implementation of the Apriori algorithm in Java. It is possible that there are some shortcomings or mistakes in the source code or the analysis. If you find any, please let me know. Here is a sample run on mushroom for 40% minimum support.

Have fun...

## Thursday, December 13, 2012

### Introduction:

Apriori is a very basic and straightforward algorithm for frequent pattern mining. I will not be discussing the approach much, as it can already be studied from the various lectures/books available on the net. I will basically present an implementation of mine, which is an efficient implementation of the standard Apriori algorithm in Java. The target input files are taken from the Frequent Itemset Mining Dataset Repository, for example, the mushroom dataset. Due to its space and runtime complexity, Apriori is not suitable for larger files with 50,000 records or so. It may also take a huge time on very dense files.

### The Database:

First we need to read the files and store them in some data structure which is easy to access and efficient at the same time. Here, I am presenting my Java source code, and then an analysis explaining why this works better.
```/*
Author: Zobayer Hasan
*/

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

class Entry {
    public Integer first;
    public Integer second;
    Entry() {}
    Entry(Integer first, Integer second) {
        this.first = first;
        this.second = second;
    }
}

public class Database {
    public static boolean debugger = false;

    private final List<List<Integer>> transactions;
    private final List<Integer> items;

    public Database(String dataFileName) throws Exception {
        if(debugger) {
            System.out.println("Processing " + dataFileName);
        }

        transactions = new ArrayList<List<Integer>>();
        items = new ArrayList<Integer>();

        FileInputStream fin = new FileInputStream(dataFileName);
        BufferedReader stdin = new BufferedReader(new InputStreamReader(fin));

        String line;

        double startTime = System.currentTimeMillis();

        while((line = stdin.readLine()) != null) {
            List<Integer> transaction = new ArrayList<Integer>();
            String[] temp = line.split("\\s+");

            for(String num : temp) {
                if(num.length() > 0) transaction.add(Integer.parseInt(num));
            }

            if(transaction.isEmpty()) continue;

            Collections.sort(transaction);
            transactions.add(transaction);
        }

        stdin.close();
        fin.close();

        // k-way merge of the sorted transactions to collect the distinct items
        int n = transactions.size();
        PriorityQueue<Entry> pQ = new PriorityQueue<Entry>(n, new Comparator<Entry>() {
            public int compare(Entry item1, Entry item2) {
                if(item1.first.equals(item2.first)) {
                    return item1.second.compareTo(item2.second);
                } else {
                    return item1.first.compareTo(item2.first);
                }
            }
        });

        int[] ptr = new int[n];
        for(int i = 0; i < n; i++) {
            pQ.add(new Entry(transactions.get(i).get(0), i));
            ptr[i] = 1;
        }

        while(!pQ.isEmpty()) {
            Entry peek = pQ.remove();
            int val = peek.first;
            int idx = peek.second;
            if(items.isEmpty() || items.get(items.size()-1) < val) {
                items.add(val);
            }
            if(ptr[idx] < transactions.get(idx).size()) {
                pQ.add(new Entry(transactions.get(idx).get(ptr[idx]), idx));
                ptr[idx]++;
            }
        }

        double endTime = System.currentTimeMillis();
        System.out.println("Database created in " + (endTime - startTime)/1000.0 + " seconds");
    }

    public int scanDatabase(List<Integer> transaction) {
        // counts the rows that contain every item, via a lower-bound binary search
        int count = 0;
        for(List<Integer> row : transactions) {
            boolean found = true;
            for(Integer item : transaction) {
                int idx, stp, st = 0, en = row.size(), cnt = en - st;
                while(cnt > 0) {
                    stp = cnt >> 1; idx = st + stp;
                    if(row.get(idx).compareTo(item) < 0) {
                        st = ++idx;
                        cnt -= stp+1;
                    }
                    else {
                        cnt = stp;
                    }
                }
                if(st == row.size() || row.get(st).compareTo(item) != 0) {
                    found = false;
                    break;
                }
            }
            if(found) count++;
        }
        return count;
    }

    public List<Integer> getItemset() {
        return items;
    }

    public int dbSize() {
        return transactions.size();
    }

    public List<Integer> getRow(int row) {
        return transactions.get(row);
    }
}
```
Clearly it provides an interface to access the database, where the constructor must be called with a file name, for example 'mushroom.dat'. The per-transaction sorting in the constructor is not necessary if you know the transactions in the file are sorted in ascending order. All the data files in this repository are already sorted, so the sorting can be disabled.

Now, if we look at the constructor, what is this huge code actually doing? First it reads each transaction as a string, then parses it and inserts the transaction as a sorted list. Next, we need a list of the unique elements. This could be done by sorting all the records together and then eliminating duplicates. However, it can be done better with the help of a priority queue in O(NK log N) time, where we have N transactions and each transaction has K records on average. This works much better than the naive approach's O(NK log NK), which pays an extra sorting overhead.

Then comes the last important part, the scanDatabase() method, which searches each record in the database for a set of elements. It takes O(NK log K) time at most, where we have N records, each of average length K. This is much better than searching in O(N*K^2) with a naive approach. An improvement is possible here: if we kept a bit vector of item occurrences alongside each transaction, the query could be performed in O(NK). However, as the datasets are huge, I didn't go for it because of the memory required.

Part 2 contains the source code and explanation for the apriori algorithm.

## Saturday, October 13, 2012

### Static Routing In GNS3

Actually this is our first lab using GNS3. The previous post contains a bit of info on how to open and save a project along with configurations. In the previous post, we initiated RIP on the routers, which computed the routing table entries automatically. Here I will add the first experiment which required manual routing table entries.

Simply open a new project and create a simple topology like shown below. (Click on any image to enlarge)

Here we have 4 routers, where routers R1 and R4 use only one interface and R2, R3 use two interfaces each. First we need to configure each interface in use. So, after starting all the routers, we open their terminals and configure the interfaces as we did in the previous post. Configuring R1:

Configuring R2:

Configuring R3:

Configuring R4:

Now you can use the show ip route command to see the routing table on each router. You will see that the routing tables are incomplete. In fact, if you try to ping 192.168.3.1 from R1, it won't be possible, because ping packets need a round-trip route. So, on router R1, we will need to add routing table entries for networks 192.168.2.0 and 192.168.3.0. Similarly, router R2 will only need information for network 192.168.3.0, as the other two networks are directly connected to it. R3 also only needs information for network 192.168.1.0, while R4 needs information for networks 192.168.1.0 and 192.168.2.0. We will now add this information. In config mode, we can use the ip route command to add a routing table entry. The next screenshots show the entries for each router.

For router R1:

For router R2:

For router R3:

For router R4:

Now try to ping each interface from each router, and it works :D Also use show ip route on the routers to see the updated routing table entries we have just added. Have fun experimenting!

### Simple RIP Routing In GNS3

This is a very basic lab in GNS3 where we will create a simple ring topology of routers and ethernet switches and then run the RIP routing protocol on these routers. When we were experimenting in the network simulation lab, we found it very hard to find simple GNS3 examples showing the very basic things. So I am going to put them here as we follow the lab experiments. We will be using CISCO c2600 routers. It is possible to do a similar experiment using other routers for which you have an IOS image.

Designing a topology is very easy in GNS3. Just drag and drop from the left side dock. Make sure to open an empty project, and in the new blank project dialogue, make sure both checkboxes are ticked. Also, make sure the idle-pc values of the routers are set. If not set yet, right click on a router and select idle-pc. Here is a snapshot of the network we will be using here. (Click on any image to enlarge)

Here each router has two interfaces, namely f0/0 and f0/1. We need to set up IP addresses on each interface of all 5 routers. Here I will show the configuration commands for router 1. Make sure all the routers are started; to get the console, right click on a router and select console.

R1>enable
R1#conf t
R1(config)#int f0/0
R1(config-if)#ip address 192.168.1.1 255.255.255.0
R1(config-if)#no sh
R1(config-if)#exit
R1(config)#int f0/1
R1(config-if)#ip address 192.168.5.1 255.255.255.0
R1(config-if)#no sh
R1(config-if)#end
R1#copy running-config startup-config

(The ip address lines above are example addresses for R1's two networks, 192.168.1.0/24 and 192.168.5.0/24; use your own topology's addresses.)

Here is a screenshot

Some explanation of short forms: conf t stands for configure terminal, which is simply telling that we will be configuring this router via terminal. However there are other ways to do so. Then we specify the interface we wish to configure in this router, for example: interface f0/0. Then we add ip address and subnet mask. Next is no sh, which stands for no shutdown. It means, this interface will remain "up" the whole time. Here, we can bring it down by providing a shutdown command.

So after configuring all the interfaces, we save the project and copy running configuration to startup configuration file, so that if we close or stop it, we will be able to load the settings directly without having it configured manually again which is a great hassle.

In the same process, we configure all the interfaces in all the routers. This is the first step of this lab. Now we need to tell the routers that we are going to run RIP on them.

To do so, go into configure mode again (by giving conf t command).
R1(config)#router rip
R1(config-router)#network 192.168.1.0
R1(config-router)#network 192.168.5.0
R1(config-router)#end

As we can see, every router is associated with two networks, so you just provide the network addresses. As we did before, we copy the running configuration to startup configuration once again. After configuring all the routers, we can check their IP routing table to see if all the networks have been discovered by RIP. Snapshot of this step:

Now that all the paths have been discovered, we can test the network by pinging all the IPs to ensure that every node is reachable from every other. For example:

The final requirement was to test how RIP responds to a change in the network. For example, on router R1 we can see that it has two ways to reach network 192.168.3.0. What happens if we disable interface f0/1? It should no longer be able to forward packets via hop 192.168.1.2. So we enter configuration mode again, select interface f0/1 on router R1, and give the "shutdown" command, which stops interface f0/1. Then we wait 2-3 seconds (on slow machines we may need to wait longer, even minutes). Now if you check the routing table on R1 with the show ip route command, you will see that network 192.168.3.0 now has only one route. A snapshot is given below; compare it with what you got last time with show ip route. You can enable this interface again with the "no shutdown" command.

So, these are the simple commands we used to complete this task. Have fun with experimenting,

### 6906. Raining Parabolas

Problem Summary

Given an array of n items h_i (0 ≤ i < n), we have to perform an update operation for a given range [i, j] such that h_x = (h_x + f(x)) % M for i ≤ x ≤ j, where f(x) = ax^2 + bx + c. We also have to perform a query operation for a range [i, j] such that result = (h_i + h_{i+1} + h_{i+2} + … + h_j) % M.

Solution Idea

The solution idea with the help of a segment tree is quite simple. First, we need to observe what information we must store in each node of the tree. As we will be facing range updates, lazy propagation can be utilized, so we keep three carry variables, namely ca, cb, cc. If we look closely at the update procedure, the parts with x^2 add together, the parts with x add together, and the constants form another part. We can represent them with variables a, b, c, i.e. a holds the sum of the x^2 parts, b holds the sum of the x parts, and c holds the constant part. We can also keep pre-computed sums of x^2 and x over the tree-node ranges for ease of calculation.

Now, let’s dissect the update part. Say, on node p (covering i to j) we already have in a:
a_1*x_i^2 + a_2*x_{i+1}^2 + a_3*x_{i+2}^2 + … + a_n*x_j^2
Now we want to add a new parabola (a, b, c) on this range; as only the x^2 parts are added here, the new value of a on this node will be:
(a_1+a)*x_i^2 + (a_2+a)*x_{i+1}^2 + (a_3+a)*x_{i+2}^2 + … + (a_n+a)*x_j^2
So, we can easily see the extra value added to the previous value of variable a:
a * (sum of x^2 in range [i, j])

Similarly, for the x part, i.e. variable b, the added value is:
b * (sum of x in range [i, j])
And for the constant part, i.e. variable c, the added value is:
c * n, where n is the size of range [i, j]

So we can now perform these updates using lazy propagation, and a query is done in a similar way: the final result for a range is simply the sum contributed by variables a, b and c over that range. And obviously, the modulus operations must be handled properly.
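A small sketch of the node-update arithmetic above: the amount a lazy (a, b, c) update adds to a node covering [i, j], using closed forms for the sums of x and x^2. The helper names and the modulus are mine, and intermediate overflow is ignored, which is fine for small ranges:

```java
public class ParabolaDelta {
    static final long MOD = 1000000007L;   // placeholder for the problem's M

    static long tri(long n) { return n * (n + 1) / 2; }               // 1 + 2 + ... + n
    static long pyr(long n) { return n * (n + 1) * (2 * n + 1) / 6; } // 1^2 + ... + n^2

    static long sumX(long i, long j)  { return tri(j) - tri(i - 1); }
    static long sumX2(long i, long j) { return pyr(j) - pyr(i - 1); }

    // Extra value added to a node's range sum when parabola (a, b, c) covers [i, j]
    static long delta(long a, long b, long c, long i, long j) {
        long d = a % MOD * (sumX2(i, j) % MOD) % MOD;
        d = (d + b % MOD * (sumX(i, j) % MOD)) % MOD;
        d = (d + c % MOD * ((j - i + 1) % MOD)) % MOD;
        return d;
    }

    public static void main(String[] args) {
        // brute-force check: sum of 2x^2 + 3x + 5 over x = 3..7
        long brute = 0;
        for(long x = 3; x <= 7; x++) brute += 2 * x * x + 3 * x + 5;
        System.out.println(delta(2, 3, 5, 3, 7) + " " + brute);   // both are 370
    }
}
```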

Have fun!

## Monday, June 25, 2012

### CSEDU Training 2012 Week 5, 6, 7, 8

Week 5

The main focus was mathematics and a few more 2-SAT problems. We tried several problems on Gaussian elimination. Gaussian elimination problems are a bit difficult because there are lots of things to consider during the process, and sometimes there is room for optimization. The practice problemset contains a few Gaussian elimination problems to test your Gaussian elimination skills.

Problems from APIO 2009 were also discussed. Next week, we hope to discuss APIO 2010.

Week 6

The main focus was dynamic programming and a bit of computational geometry. We also discussed heavy-light decomposition problems (for example SPOJ GSS7 and problem A from the BUET contest). However, particular instances of heavy-light decomposition can be solved using the LCA-to-RMQ approach (tutorial on topcoder). APIO 2010 problems were postponed to next week because no one had their homework done in time :P

Week 7

The problemset consisted of APIO 2010 and a number of computational geometry problems. Vector geometry was discussed, along with the three types of transformation, i.e. rotation, translation and scaling, and how to perform them using matrices. For rotation, there is a simpler way where you do not need to remember the matrix.

Then, 3D convex hull gave us a lot of trouble. We could easily derive an O(n^4) idea, but this is too costly. Then we found an O(n^2) idea which was hard to code. The next schedule was APIO 2011, to be discussed in the first class after the IUT contest. Our instructor gave us a few hints on some of the problems, but the solutions were quite astonishing, and unfortunately none of us got near a correct solution.

Week 8
No practice contest was hosted due to the upcoming IUT contest. Not much was discussed either: a bit of chit-chat and several random problems.

In the mean time, we had a few team contests for IUT practice. Here are the links: (in no particular order)

However, we performed really badly in the IUT contest. It's not that we were unable to solve the problems; rather, we failed to manage time and team coordination. Better luck next time :)

## Monday, May 28, 2012

### CSEDU Training 2012 Week 3, 4

Ah, I am two days late!

Week 3 was off due to ACM ICPC World Finals 2012. It was really intense and SPbNRU ITMO won the contest. A practice contest was hosted and here is the link.

Problems from APIO 2008 were discussed. The main focus of the class was 2-SAT problems. Problems and strategies were discussed from infoarena.ro and lightoj.com. The main difficulty in solving a 2-SAT problem is realizing that the problem can be formulated as 2-SAT in the first place. Generally, 3-SAT and k-SAT problems are NP-complete, but 2-SAT is usually solvable by finding strongly connected components in a directed implication graph whose edges encode the clauses.

A little flashback on segment trees was also part of the class. We solved 56-E from codeforces.com, which was apparently the solution idea of Google Code Jam 2012 Round 2 Problem A.

This week's virtual contest was also based on data-structure based problems. Contest is already finished and the link is here.

## Friday, May 11, 2012

### CSEDU Training 2012 Week 2

Problems from APIO 2007 were discussed and the solutions seem to be not that hard (well that happens whenever someone tells you the solutions).  A few more data structure based solutions were discussed along with problems from last week's virtual contest.

The next day we will be discussing the APIO 2008 problems and the problem from tonight's virtual contest. A famous problem from SPOJ was discussed (problem code: CHAIN). I had seen this problem before but could not manage to solve it, and all I got was numerous infamous WAs (wrong answer). Shanto vai gave us a different, simpler kind of solution based on a weighted union-find structure (really, I heard of this for the first time in my life today).

From next week, the classes will be held from 2:30PM instead of 4PM as now, and they will be extended up to 5:30PM hopefully.

Edit: A virtual contest has been arranged. Visit this link.

## Monday, May 7, 2012

### GIT What I Need

Mainly I use GITHUB for my personal repository so that I can access some files when I am not at home, and every time I try to add something to github from my local repository, I almost always forget the commands. That's why I am writing down the few one-line commands I will always need, just for my convenience :)

git status -s

git config --global user.name "<my_name>"
git config --global user.email <my_email>

git add <my_files>
git commit -m "<my_commit_msg>"

git push origin master

That will do for now. A total reference can be found here.

## Thursday, May 3, 2012

### CSEDU Training 2012 Week 1

Today, we discussed several data-structure-dependent problems from this page. Most of them are classical, i.e. common problems, and we found the solution ideas quite interesting.

Here is the summary:
#0: Lazy propagation and slicing (compressing search space) ideas in segment tree.
#1: Finding number of intersecting points among axis parallel line segments.
#2: Perimeter and area of union of rectangles.
#3: Finding points in number of rectangles and rectangles covering points.
#4: Sliding window ideas, finding maximal number of points covered by a rectangle placed arbitrarily.
-- and a few others.

After the discussion, we solved this problem to test if we were able to learn anything at all.

Hopefully, in the next class we will have a discussion on the problems of APIO 2007.

Problems to be solved before the next class:

All are from http://www.spoj.pl/

Edit: A virtual contest has been arranged. Visit this link.

## Friday, March 30, 2012

### How to run a java executable or jar without popping up the command prompt window?

This can be done by running the following command instead of "java", assuming the jar file is named "Main.jar"

```javaw -jar -Xms1024m -Xmx1024m Main.jar
```

For example, let's start with the following simple snippet; it just creates a frame, and we will then run the program without the black console window. Let's name the file FacelessVoid.java.

```import javax.swing.*;

public class FacelessVoid {
public static void main(String[] args) throws Exception {
JFrame frame = new JFrame("FrameDemo");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(300, 300);
frame.setVisible(true);
}
}
```

Running the program from an IDE will straight away show you the window, which contains nothing at all. This is just for demonstration purposes, so feel free to try something else if you need. Now, we will create a jar file containing this class (we are on Windows).

```javac FacelessVoid.java
echo Main-Class: FacelessVoid > manifest.txt
jar cfm Main.jar manifest.txt FacelessVoid.class
```

Note: if you have more than one source (.java) file, you can write *.java instead of FacelessVoid.java in line 1. In line 2, the main class name should be the class which contains the main() method. In line 3, if there is more than one class file to be put into the jar, you can use *.class instead of just FacelessVoid.class in our example.

If everything goes fine, the jar file named Main.jar should be created without any problem. Now we can create a batch file in the same directory where the Main.jar is located, lets call it Main.bat, we just put the following line into it:

```start javaw -jar -Xms1024m -Xmx1024m Main.jar
```

Now double clicking on Main.bat will open the window without any command prompt window.

### How can I vanish the window totally but still keep the program running?

There are a few ways to do this; I am presenting a simple way using the previous files. The only thing that needs to change is frame.setVisible(true); to frame.setVisible(false);. This hides the window completely, but the program is still running. Note that you will need to create the jar file again, i.e. re-compile and make the jar again. Now if you click on the bat file, it will start executing, but you won't be able to see it any more.

This can be helpful for servers which remain idle and wait for clients. We can just put them beyond visibility and they keep executing. For example, this code is a small modification of the previous one; it waits for a client to connect on the specified port. However, whenever a client connects, we change the visibility just to show that the program was actually running even though we could not see it. So you may wish to remove the frame.setVisible(true); line from the while loop, write your own program logic, and make it permanently invisible.

```import javax.swing.*;
import java.net.*;

public class FacelessVoid {
public static void main(String[] args) throws Exception {
JFrame frame = new JFrame("FrameDemo");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setSize(300, 300);
frame.setVisible(false);
ServerSocket ss = new ServerSocket(8080);
Socket client = null;
while((client = ss.accept()) != null) {
frame.setVisible(true);
}
}
}
```

Now, we can keep the server running without being able to see it. Here is a sample client program at your disposal

```import java.net.*;

public class Client {
public static void main(String[] args) throws Exception {
Socket sock = new Socket("localhost", 8080);
}
}
```

You just compile and run it, and the server window will pop in from nowhere :D

Let me know if this works for you too...