Hadoop CDH4 and HBase Installation on Ubuntu 12.04

Posted By : Abhimanyu Singh | 17-Jul-2013

Hadoop

I recently installed Hadoop CDH4 (Cloudera's distribution including Apache Hadoop) and HBase on my Ubuntu 12.04 machine. This post shares the steps I followed to install them. Before we move further, please refer here for basic information about Hadoop and the prerequisites for a Hadoop installation.

HBase - HBase is an open source, non-relational, distributed database modeled after Google's BigTable and written in Java. Use Apache HBase when you need random, realtime read/write access to your Big Data. HBase runs in two modes:

Standalone Mode - This is the default mode. In standalone mode, HBase does not use HDFS; it uses the local filesystem instead, and it runs all HBase daemons and a local ZooKeeper in the same JVM. ZooKeeper binds to a well-known port so clients can talk to HBase.

Distributed Mode - This covers pseudo-distributed and fully-distributed setups. In pseudo-distributed mode the HBase daemons use HDFS but run on a single machine, whereas in fully-distributed mode the HBase daemons are spread across all the nodes.

ZooKeeper - A centralized service for maintaining configuration information, naming, and providing distributed synchronization and group services.

Installation Steps

(I assume you have followed the steps given here for basic information and that your machine meets all the prerequisites.)

  • Create a new file

    $ sudo vi /etc/apt/sources.list.d/cloudera.list
    
  • Add the following lines to the file created above

    deb http://archive.cloudera.com/debian <release>-cdh4 contrib 
    deb-src http://archive.cloudera.com/debian <release>-cdh4 contrib 
    

    where <release> is the codename of your distribution. For example, to install CDH4 for Ubuntu Lucid, use lucid-cdh4 in the lines above. I used the release maverick-cdh4.
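
    With that substitution, my /etc/apt/sources.list.d/cloudera.list ends up containing:

    deb http://archive.cloudera.com/debian maverick-cdh4 contrib
    deb-src http://archive.cloudera.com/debian maverick-cdh4 contrib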

  • Now add the Cloudera repository GPG key using the following command

    
    curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | sudo apt-key add -
    
  • Now update the package lists

     

    
    sudo apt-get update
    
  • Install Hadoop in pseudo-distributed configuration and format the NameNode using the following commands

    sudo apt-get install hadoop-0.20-conf-pseudo
    sudo -u hdfs hdfs namenode -format
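
    Before going further you can sanity-check the installation; these are just quick optional checks:

    $ hadoop version
    $ dpkg -l | grep hadoop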
    
  • Start HDFS using the following commands

     

    $ for service in /etc/init.d/hadoop-hdfs-*
    > do
    > sudo $service start
    > done
    
  • Now hit the following Url http://localhost:50070/ to check whether HDFS is running
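
    If you prefer the command line, you can also ask HDFS for a status report (an extra check, in addition to the web UI):

    $ sudo -u hdfs hdfs dfsadmin -report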

  • Create the /tmp directory in HDFS and set the sticky bit (1777) so that any user can write to it but only owners can delete their own files

    $ sudo -u hdfs hadoop fs -mkdir /tmp
    $ sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
    
     
  • Create the MapReduce system directories

    $ sudo -u hdfs hadoop fs -mkdir /var
    $ sudo -u hdfs hadoop fs -mkdir /var/lib
    $ sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs
    $ sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache
    $ sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache/mapred
    $ sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache/mapred/mapred
    $ sudo -u hdfs hadoop fs -mkdir /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
    $ sudo -u hdfs hadoop fs -chmod 1777 /var/lib/hadoop-hdfs/cache/mapred/mapred/staging
    $ sudo -u hdfs hadoop fs -chown -R mapred /var/lib/hadoop-hdfs/cache/mapred
    
  • Verify the HDFS File Structure

    $ sudo -u hdfs hadoop fs -ls -R /
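
    If the previous steps worked, the listing should reflect the directories, owners and permissions set above, roughly along these lines (dates and sizes will differ):

    drwxrwxrwt   - hdfs   supergroup  0 ... /tmp
    drwxr-xr-x   - hdfs   supergroup  0 ... /var
    drwxr-xr-x   - hdfs   supergroup  0 ... /var/lib
    drwxr-xr-x   - hdfs   supergroup  0 ... /var/lib/hadoop-hdfs
    drwxr-xr-x   - hdfs   supergroup  0 ... /var/lib/hadoop-hdfs/cache
    drwxr-xr-x   - mapred supergroup  0 ... /var/lib/hadoop-hdfs/cache/mapred
    drwxr-xr-x   - mapred supergroup  0 ... /var/lib/hadoop-hdfs/cache/mapred/mapred
    drwxrwxrwt   - mapred supergroup  0 ... /var/lib/hadoop-hdfs/cache/mapred/mapred/staging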
    
  • Start MapReduce

    $ for service in /etc/init.d/hadoop-0.20-mapreduce-*
    > do
    > sudo $service start
    > done
    
    
     

    Hit the following URL in your browser to check whether MapReduce is running

    http://localhost:50030/
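
    You can also confirm from the command line that the MRv1 daemons are up; jps should list a JobTracker and a TaskTracker alongside the HDFS daemons:

    $ sudo jps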

  • Create User Directories

    
    $ sudo -u hdfs hadoop fs -mkdir /user/<user>
    $ sudo -u hdfs hadoop fs -chown <user> /user/<user>
    
    where <user> is the Linux username of each user.
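
    With a user directory in place you can run a quick test job as that user. This is a sketch based on the standard Hadoop examples jar, assuming it is installed at /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar (the exact path may vary with the CDH4 package version):

    $ hadoop fs -mkdir input
    $ hadoop fs -put /etc/hadoop/conf/*.xml input
    $ hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'
    $ hadoop fs -cat output/part-00000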
  • Stop the daemons using the following command

    
    $ for service in /etc/init.d/hadoop-hdfs-* /etc/init.d/hadoop-0.20-mapreduce-*
    > do
    > sudo $service stop
    > done
    
  • Install HBase and the HBase Master on your system using the following commands

    
    $ sudo apt-get install hbase
    $ sudo apt-get install hbase-master
    $ sudo jps
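
    On Ubuntu the hbase-master package normally starts the master right after installation (which is why the next step stops it), so the jps output should include an HMaster process, for example (the PID will differ):

    2915 HMaster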
    
     
  • Stop the HBase Master


    $ sudo /etc/init.d/hbase-master stop
  • Creating the /hbase Directory in HDFS

    
    $ sudo -u hdfs hadoop fs -mkdir /hbase
    $ sudo -u hdfs hadoop fs -chown hbase /hbase
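
    As mentioned in the introduction, in pseudo-distributed mode HBase stores its data in HDFS rather than on the local filesystem. A minimal sketch of the relevant settings in /etc/hbase/conf/hbase-site.xml, assuming the NameNode is listening on the default localhost:8020:

    <configuration>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:8020/hbase</value>
      </property>
    </configuration>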
    
  • Installing Zookeeper Server

     
    $ sudo apt-get install zookeeper-server
    $ sudo /etc/init.d/zookeeper-server init
    $ sudo /etc/init.d/zookeeper-server start
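
    To verify ZooKeeper is up, you can send it the ruok four-letter command (assuming it listens on the default port 2181); a healthy server answers imok:

    $ echo ruok | nc localhost 2181
    imok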
    
  • Now start hbase-master server on your machine

     $ sudo /etc/init.d/hbase-master start
    
  • To enable the HBase Region Server on Ubuntu and Debian systems

    
    $ sudo apt-get install hbase-regionserver
    

    Hit the following URL to check whether the HBase Master is running successfully on your system: http://localhost:60010/
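
    With the Master and a Region Server running, a quick smoke test from the HBase shell confirms that reads and writes work end to end. The table name 'test' and column family 'cf' below are just examples:

    $ hbase shell
    hbase> create 'test', 'cf'
    hbase> put 'test', 'row1', 'cf:a', 'value1'
    hbase> scan 'test'
    hbase> disable 'test'
    hbase> drop 'test'
    hbase> exit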

  • Installing and Starting the HBase Thrift Server

    
    $ sudo apt-get install hbase-thrift
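
    To start the Thrift server (assuming the hbase-thrift package installs the usual init script; the Thrift service listens on port 9090 by default):

    $ sudo /etc/init.d/hbase-thrift start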
    
  • Installing and Configuring REST

    
    $ sudo apt-get install hbase-rest
    
  • Stop all the services.
  • Start all the services again (one way to do this is sketched below).
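
    A sketch of one way to do this, reusing the init scripts from the steps above (stop HBase and ZooKeeper before Hadoop, then start everything again in the reverse order):

    $ sudo /etc/init.d/hbase-regionserver stop
    $ sudo /etc/init.d/hbase-master stop
    $ sudo /etc/init.d/zookeeper-server stop
    $ for service in /etc/init.d/hadoop-0.20-mapreduce-* /etc/init.d/hadoop-hdfs-*
    > do
    > sudo $service stop
    > done

    $ for service in /etc/init.d/hadoop-hdfs-* /etc/init.d/hadoop-0.20-mapreduce-*
    > do
    > sudo $service start
    > done
    $ sudo /etc/init.d/zookeeper-server start
    $ sudo /etc/init.d/hbase-master start
    $ sudo /etc/init.d/hbase-regionserver start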
  • Thanks Abhimanyu

 

About Author

Abhimanyu Singh

Abhimanyu is a seasoned technologist. He always keeps himself ahead in embracing and adapting new technologies and frameworks to solve business problems. He specialises in Blockchain technology, Video Content Management and enterprise software.
