Different ways to start Hadoop daemon processes and the differences among them
Posted By : Md Qasim Siddiqui | 11-Jun-2015
Hi friends, in this blog I would like to tell you about the different ways to start Hadoop daemon processes and the differences among them. Newbies usually know how to start the Hadoop processes, but they often do not know how the commands differ.
So basically, Hadoop daemons can be started or stopped in three ways:
1- start-all.sh and stop-all.sh
2- start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh
3- hadoop-daemon.sh start namenode/datanode and hadoop-daemon.sh stop namenode/datanode
1- start-all.sh and stop-all.sh: Used to start and stop all Hadoop daemons at once. Issuing either on the master machine will start/stop the daemons on all the nodes of the cluster.
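A minimal sketch of how this looks on the master node, assuming the Hadoop sbin directory is on your PATH (the exact location depends on your installation):

```shell
# Run on the master node: starts HDFS and YARN daemons on every node of the cluster
start-all.sh

# Later, stop all daemons across the cluster at once
stop-all.sh
```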
2- start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh: Same as above, but these start/stop the HDFS and YARN daemons separately, from the master machine, on all the nodes. It is now advisable to use these commands instead of start-all.sh & stop-all.sh, which are deprecated in recent Hadoop releases.
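As a sketch, again assuming the scripts are on the PATH of the master node, the separate start/stop sequence looks like this:

```shell
# Start only the HDFS daemons (NameNode, DataNodes, SecondaryNameNode)
start-dfs.sh

# Start only the YARN daemons (ResourceManager, NodeManagers)
start-yarn.sh

# Stop them again, typically in reverse order
stop-yarn.sh
stop-dfs.sh
```

Starting HDFS before YARN is the usual order, since YARN applications generally read from and write to HDFS.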
3- hadoop-daemon.sh start namenode/datanode and hadoop-daemon.sh stop namenode/datanode: Used to start or stop an individual daemon on a single machine manually. You need to log in to that particular node and issue the command there.
Use case: Suppose you have added a new DataNode to your cluster and you need to start the DataNode daemon only on that machine:
$HADOOP_HOME/bin/hadoop-daemon.sh start datanode
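Once started, you can confirm the daemon is running with jps (shipped with the JDK, it lists running JVM processes), and stop it with the same script. A sketch, assuming $HADOOP_HOME is set on the new node:

```shell
# On the new node only: verify the DataNode JVM is running
jps

# Stop the DataNode daemon on this node if needed
$HADOOP_HOME/bin/hadoop-daemon.sh stop datanode
```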
Md Qasim Siddiqui
Qasim is an experienced web app developer with expertise in Groovy and Grails, Hadoop, Hive, Mahout, AngularJS, and the Spring framework. He likes to listen to music in his idle time and plays Counter-Strike.