Thursday, November 11, 2010

Hadoop Basics

Hadoop is an open source project for processing large datasets in parallel across clusters of low-cost commodity machines.

Hadoop is built on two main parts: a special file system called the Hadoop Distributed File System (HDFS) and the MapReduce framework.

HDFS is a file system optimized for the distributed processing of very large datasets on commodity hardware.

The MapReduce framework processes the data in two main phases: the Map phase and the Reduce phase.

To explain this, let's create a sample Hadoop application.

This application will take several dictionaries from English to other languages (English-Spanish, English-Italian, English-French) and create a single dictionary file in which each English word is followed by all its translations, pipe-separated.

- The first thing is of course downloading Hadoop. Go to the directory where you want to install Hadoop and download it: wget http://apache.favoritelinks.net//hadoop/core/stable/hadoop-0.20.2.tar.gz

Then unzip it: tar zxvf hadoop-0.20.2.tar.gz

- Now we get our dictionary files. I downloaded them from http://www.ilovelanguages.com/IDP/files/.txt

- The next thing will be to put our files in HDFS (this example doesn't strictly need it, but I'm doing it just to show how). For this we first need to format a filesystem as HDFS. This is done in the following way:

- We go to the bin directory of Hadoop and execute ./hadoop namenode -format. By default this formats the directory /tmp/hadoop-username/dfs/name.


- After the filesystem is formatted we need to put our dictionary files into it. Hadoop works better with one large file than with many small ones, so we'll merge the files into one before putting them there.

- Although this would better be done while writing to the Hadoop file system, using a PutMerge operation, we are merging the files first and then copying them to HDFS, which is easier, and our example files are small:

1. cat French.txt >> fulldictionary.txt

2. cat Italian.txt >> fulldictionary.txt

3. cat Spanish.txt >> fulldictionary.txt
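The three cat commands above simply append the files end to end. For completeness, the same merge can be sketched in plain Java (no Hadoop involved; the class name is invented for the illustration, and the demo writes throwaway files to a temp directory rather than assuming the real dictionary files exist):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.List;

public class MergeFiles
{
    // Appends each source file to the target, mirroring "cat source >> target".
    public static void merge(List<Path> sources, Path target) throws IOException
    {
        for (Path source : sources)
        {
            Files.write(target, Files.readAllBytes(source),
                        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }
    }

    public static void main(String[] args) throws IOException
    {
        // Demo with throwaway files standing in for French.txt, Italian.txt, Spanish.txt.
        Path dir = Files.createTempDirectory("dictmerge");
        Path french = dir.resolve("French.txt");
        Files.write(french, "dog chien\n".getBytes());
        Path spanish = dir.resolve("Spanish.txt");
        Files.write(spanish, "dog perro\n".getBytes());
        Path full = dir.resolve("fulldictionary.txt");
        merge(List.of(french, spanish), full);
        System.out.print(new String(Files.readAllBytes(full)));
    }
}
```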



- To copy the file to HDFS we execute the following command:
./hadoop fs -put /home/cscarioni/Documentos/hadooparticlestuff/fulldictionary.txt /tmp/hadoop-cscarioni/dfs/name/file

- Now we will create the actual MapReduce program to process the data. The program will be completely contained in a single Java file, holding both the Map and the Reduce algorithms. Let's look at the code and then explain how the MapReduce framework works.



import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.KeyValueTextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class Dictionary
{
    public static class WordMapper extends Mapper<Text, Text, Text, Text>
    {
        private Text word = new Text();
        public void map(Text key, Text value, Context context) throws IOException, InterruptedException
        {
            StringTokenizer itr = new StringTokenizer(value.toString(),",");
            while (itr.hasMoreTokens())
            {
                word.set(itr.nextToken());
                context.write(key, word);
            }
        }
    }
    public static class AllTranslationsReducer extends Reducer<Text, Text, Text, Text>
    {
        private Text result = new Text();
        public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException
        {
            // Concatenate all the translations for this key, pipe-separated.
            StringBuilder translations = new StringBuilder();
            for (Text val : values)
            {
                if (translations.length() > 0)
                {
                    translations.append("|");
                }
                translations.append(val.toString());
            }
            result.set(translations.toString());
            context.write(key, result);
        }
    }
    public static void main(String[] args) throws Exception
    {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "dictionary");
        job.setJarByClass(Dictionary.class);
        job.setMapperClass(WordMapper.class);
        job.setReducerClass(AllTranslationsReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setInputFormatClass(KeyValueTextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/hadoop-cscarioni/dfs/name/file"));
        FileOutputFormat.setOutputPath(job, new Path("output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}




Looking at the code, we can see that our class consists of three main parts: a static class that holds the mapper, another static class that holds the reducer, and the main method, which works as the driver of our application. Follow along with the code as you read the next few paragraphs.

First let’s talk about the mapper:

Our mapper is a very standard mapper. A mapper's main job is to produce a list of key-value pairs to be processed later. Ideally, the keys will be repeated across many elements of the list (produced by this same mapper or by another one whose results will be combined with this one's) so that the later phases of the MapReduce algorithm can make use of them. A mapper receives a key-value pair as parameters and, as said, produces a list of new key-value pairs.

The key-value pair received by the mapper depends on the InputFormat implementation used. In our example we are using KeyValueTextInputFormat. For each line of the input file, this implementation takes everything up to the first separator character (a tab by default, configurable) as the key, and the rest of the line as the value. So if a line contains aaa, a tab, then bbb,ccc,ddd, we'll get aaa as the key and bbb,ccc,ddd as the value.

For each input to the mapper, the generated list pairs the key with each of the comma-separated values. For example, for the input aaa bbb,ccc,ddd the output will be the list (aaa bbb), (aaa ccc), (aaa ddd), and likewise for every other input to the mapper.
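To make the mapper's behaviour concrete, here is the same tokenizing logic in plain Java, outside of Hadoop (the class and method names are made up for the illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class MapperIllustration
{
    // Same logic as WordMapper.map: emit one (key, token) pair per comma-separated token.
    public static List<String> map(String key, String value)
    {
        List<String> pairs = new ArrayList<>();
        StringTokenizer itr = new StringTokenizer(value, ",");
        while (itr.hasMoreTokens())
        {
            pairs.add(key + " " + itr.nextToken());
        }
        return pairs;
    }

    public static void main(String[] args)
    {
        System.out.println(map("aaa", "bbb,ccc,ddd")); // [aaa bbb, aaa ccc, aaa ddd]
    }
}
```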

The reducer

After the mapper and before the reducer, the shuffle and grouping phases take place. The shuffle phase ensures that every key-value pair with the same key goes to the same reducer, and the grouping step converts all the key-value pairs with the same key into the grouped form key, list(values), which is what the reducer ultimately receives.
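The grouping step can be pictured as building a map from each key to the list of its values. A minimal sketch in plain Java (names invented for the illustration; the real framework does this across machines, with sorting):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupingIllustration
{
    // Groups (key, value) pairs by key, like the shuffle/grouping between map and reduce.
    public static Map<String, List<String>> group(List<String[]> pairs)
    {
        Map<String, List<String>> grouped = new LinkedHashMap<>();
        for (String[] pair : pairs)
        {
            grouped.computeIfAbsent(pair[0], k -> new ArrayList<>()).add(pair[1]);
        }
        return grouped;
    }

    public static void main(String[] args)
    {
        List<String[]> pairs = List.of(
            new String[] { "dog", "chien" },
            new String[] { "dog", "perro" },
            new String[] { "cat", "gato" });
        System.out.println(group(pairs)); // {dog=[chien, perro], cat=[gato]}
    }
}
```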

A typical reducer's job is to take the key, list(values) pair, operate on the grouped values, and store the result somewhere. That is exactly what our reducer does: it takes the key, list(values) pair, loops through the values concatenating them into a pipe-separated string, and sends the new key-value pair to the output. So the pair aaa, list(aaa,bbb) is converted to aaa aaa|bbb and written out.
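The pipe-joining step the reducer performs can likewise be simulated in plain Java (illustration only, hypothetical class name):

```java
public class ReducerIllustration
{
    // Joins the grouped values pipe-separated: [aaa, bbb] -> "aaa|bbb".
    public static String reduce(Iterable<String> values)
    {
        StringBuilder translations = new StringBuilder();
        for (String val : values)
        {
            if (translations.length() > 0)
            {
                translations.append("|");
            }
            translations.append(val);
        }
        return translations.toString();
    }

    public static void main(String[] args)
    {
        System.out.println("aaa " + reduce(java.util.List.of("aaa", "bbb"))); // aaa aaa|bbb
    }
}
```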

To run our program, simply run it as a normal Java main class with the Hadoop libs on the classpath (all the jars in the Hadoop home directory plus all the jars in its lib directory; you can also run the hadoop command with the classpath option to print the full classpath needed). For this first test I used the DrJava IDE.

Running the program in my case generated a file called part-r-00000 with the expected result.

Distributing it:

The MapReduce framework's main reason for existence is to process large amounts of data in a distributed manner, on commodity machines. In fact, running it on only one machine doesn't have much more utility than teaching us how it works.
Distributing the application can be the subject of another, more advanced post.

Carlo.

22 comments:

Mahesh Lalwani said...

Hey ! Thanks for providing great info.

I was wondering for basic hadoop program as i m new into this nd you provide me that..

But still i m not able to run my java program through eclipse if you kn tht thn plaese rely

Carlo Scarioni said...

Hi Mahesh. Don't know exactly what you mean.
If you want to run the little hadoop program from eclipse, you just have to create a project, and copy the source code, and include all the hadoop libraries in the classpath.
Then run it as a Java Application.

Is that what you want?

Cheers,
Carlo

Mahesh Lalwani said...

I had tried the same what you mentioned but I am not able to even run program.

I also installed plugins required for it but still I am wondering for it

Senthil said...

Good Info.. I was searching for so many articles to learn hadoop basics. This post is made me to understand the basics. Can you explain mapreduce somewat deeper. Can you provide a simple example to show whats happening over HDFS. Nice post.

raj said...

g8... thanks for info :)

Kiran@what is hadoop said...

Thanks for detailed article on basics Hadoop. You need linux based simulation tools for working in java. You can use CYGWIN in windows

Please click here to know more on Hadoop instalation
setup hadoop in windows

Anonymous said...

Hi Carlo, My name is Ashok. I am Linux admin and wants to be Hadoop admin. Will that be possible & can I expect Hadoop Administrator profile specifically?

Anonymous said...

I would expect a list of hadoop command line instructions, as to copy a file from a native file system to hadoop, to list files on hadoop, to copy a file from hadoop to the native file system.

Ash said...

Carlo if we want to read a file in addition to the Input DataSet either from the local file system or from the HDFS, how do you do it?

Javin @ chmod command in linux said...

Great information on hadoop, never knew this much about it.Thanks

Anonymous said...

Hey ! can anyone send materials on hadoop briefly & also pl send me how to develop applications in hadoop.
i am new to hadoop

pranav said...

Very nice tutorial about hadoop basics

Sharry said...

what are the dependency jar files , which I need to load..
I've loaded apache common loggin, lang and common configuration... however still getting error:-


Exception in thread "main" java.lang.NoSuchMethodError: org.apache.commons.lang.StringUtils.uncapitalize(Ljava/lang/String;)Ljava/lang/String;

suzata said...

nice post !!
we have chosen hadoop as our final year project in undergrad. and we were trying to find a simple applications that can be developed in hadoop ..this is a lot helpful.
Keep updating !!
cheers ...

suzata said...

i have a query -- is it the same procedure if we do it in windows using cygwin ??
thanks ..

Ritendra said...

Sorry, but this article lacks completeness. Just bits n pieces

Ezhil Vathani said...

sir im doing my project in hadoop but i faced lot of difficulties everytime

Ezhil Vathani said...

hi sir im doing my pg.and i use hadoop for my project .while running java programs in that i faced lot of problem

Anonymous said...

Hi, thanks for this excellent post. The dictionary text files are no longer at the url you supply and I'm having trouble finding the new location. Are you able to give us a reference to where those text files are? Thanks again

Vyankatesh said...

Hi,
This Post is Reply for to Question of unable to find Dictionary Files.
http://www.ilovelanguages.com/IDP/files/.txt
that's the referenced URL you just have to append the language Name that u want the dictionary file
e.g.
http://www.ilovelanguages.com/IDP/files/Spanish.txt
as well as just replace with other languages such as French,Italian,Latin,German etc.
Thanks.

komal Hotwani said...

Hello All,

Thank you for your info for hadoop.

I am starting to learn Hadoop and have lot of interest for it.

Please post more such basic programs.

Thanks

Anonymous said...

Hi Carlo,

Can we get the complete source code for reference.

Thanks,
Amp