Published: 2019-06-23


Analysis of the Hadoop Startup Scripts

Author: 尹正杰

Copyright notice: this is an original work; reproduction without permission is forbidden and will be pursued legally.

 

 

    If you are reading this post, you presumably already have a systematic understanding of Hadoop; at the very least you should know the various ways of setting up a cluster. If not, that is fine too: you can refer to my notes, which cover the different deployment methods, haha~

 

[yinzhengjie@s101 ~]$ cat `which xcall.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "请输入参数"
        exit
fi

#Grab the command typed by the user
cmd=$@

for (( i=101;i<=104;i++ ))
do
        #Turn the terminal output green
        tput setaf 2
        echo ============= s$i $cmd ============
        #Switch the terminal back to the default (grey-white) color
        tput setaf 7
        #Run the command on the remote host
        ssh s$i $cmd
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "命令执行成功"
        fi
done
[yinzhengjie@s101 ~]$
Contents of the xcall.sh script; I use it all the time when testing.
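As a side note, if you would rather not hard-code the host range s101-s104, a variant of xcall.sh can read its host list from the Hadoop slaves file instead. The sketch below is not from the original script; it assumes the slaves file lives at /soft/hadoop/etc/hadoop/slaves (the path used later in this post) and therefore only covers the hosts listed there:

#!/bin/bash
# Hypothetical variant of xcall.sh: loop over the hosts listed in the
# Hadoop slaves file instead of the hard-coded range s101-s104.
# Assumption: the slaves file is /soft/hadoop/etc/hadoop/slaves.
SLAVES_FILE=/soft/hadoop/etc/hadoop/slaves

if [ $# -lt 1 ]; then
        echo "请输入参数"
        exit 1
fi
cmd=$@

for host in $(grep -v '^#' "$SLAVES_FILE"); do
        tput setaf 2                     # green banner
        echo "============= $host $cmd ============"
        tput setaf 7                     # back to the default color
        ssh "$host" "$cmd" && echo "命令执行成功"
done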

 

 

1. start-all.sh script analysis

[yinzhengjie@s101 ~]$ cat `which start-all.sh` | grep -v ^# | grep -v ^$
echo "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
if [ -f "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh --config $HADOOP_CONF_DIR
fi
if [ -f "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh ]; then
  "${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR
fi
[yinzhengjie@s101 ~]$

  The very first line of this script tells us that it is deprecated: "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh". It has been superseded by "start-dfs.sh" and "start-yarn.sh".
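In other words, once hadoop-config.sh has been sourced, start-all.sh does nothing more than call the two newer scripts. A minimal equivalent of what it runs, based on the script above (the libexec bootstrap is omitted for brevity):

# Equivalent of start-all.sh after the environment has been set up:
# start HDFS first, then YARN, each with the active configuration directory.
"${HADOOP_HDFS_HOME}"/sbin/start-dfs.sh  --config $HADOOP_CONF_DIR
"${HADOOP_YARN_HOME}"/sbin/start-yarn.sh --config $HADOOP_CONF_DIR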

 

2. start-dfs.sh script analysis

[yinzhengjie@s101 ~]$ more `which start-dfs.sh` | grep -v ^# | grep -v ^$
usage="Usage: start-dfs.sh [-upgrade|-rollback] [other options such as -clusterId]"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hdfs-config.sh
if [[ $# -ge 1 ]]; then
  startOpt="$1"
  shift
  case "$startOpt" in
    -upgrade)
      nameStartOpt="$startOpt"
    ;;
    -rollback)
      dataStartOpt="$startOpt"
    ;;
    *)
      echo $usage
      exit 1
    ;;
  esac
fi
nameStartOpt="$nameStartOpt $@"
NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)
echo "Starting namenodes on [$NAMENODES]"
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
  --config "$HADOOP_CONF_DIR" \
  --hostnames "$NAMENODES" \
  --script "$bin/hdfs" start namenode $nameStartOpt
if [ -n "$HADOOP_SECURE_DN_USER" ]; then
  echo \
    "Attempting to start secure cluster, skipping datanodes. " \
    "Run start-secure-dns.sh as root to complete startup."
else
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --script "$bin/hdfs" start datanode $dataStartOpt
fi
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)
if [ -n "$SECONDARY_NAMENODES" ]; then
  echo "Starting secondary namenodes [$SECONDARY_NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$SECONDARY_NAMENODES" \
      --script "$bin/hdfs" start secondarynamenode
fi
SHARED_EDITS_DIR=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.namenode.shared.edits.dir 2>&-)
case "$SHARED_EDITS_DIR" in
qjournal://*)
  JOURNAL_NODES=$(echo "$SHARED_EDITS_DIR" | sed 's,qjournal://\([^/]*\)/.*,\1,g; s/;/ /g; s/:[0-9]*//g')
  echo "Starting journal nodes [$JOURNAL_NODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
      --config "$HADOOP_CONF_DIR" \
      --hostnames "$JOURNAL_NODES" \
      --script "$bin/hdfs" start journalnode ;;
esac
AUTOHA_ENABLED=$($HADOOP_PREFIX/bin/hdfs getconf -confKey dfs.ha.automatic-failover.enabled)
if [ "$(echo "$AUTOHA_ENABLED" | tr A-Z a-z)" = "true" ]; then
  echo "Starting ZK Failover Controllers on NN hosts [$NAMENODES]"
  "$HADOOP_PREFIX/sbin/hadoop-daemons.sh" \
    --config "$HADOOP_CONF_DIR" \
    --hostnames "$NAMENODES" \
    --script "$bin/hdfs" start zkfc
fi
[yinzhengjie@s101 ~]$

  The comments have already been filtered out above. From the script we can roughly see that it starts the HDFS daemons, namely the NameNode, the DataNodes, and the SecondaryNameNode.
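Stripping away the -upgrade/-rollback handling, the secure-DataNode branch, the JournalNode case, and the ZKFC case, the core of start-dfs.sh boils down to three hadoop-daemons.sh calls. A simplified sketch, not the full script (the variables come from hdfs-config.sh as shown above):

# NameNode(s): hostnames are taken from the configuration via "hdfs getconf".
NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -namenodes)
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" --config "$HADOOP_CONF_DIR" \
    --hostnames "$NAMENODES" --script "$bin/hdfs" start namenode

# DataNodes: no --hostnames, so hadoop-daemons.sh falls back to the slaves file.
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" --config "$HADOOP_CONF_DIR" \
    --script "$bin/hdfs" start datanode

# SecondaryNameNode(s), if any are configured.
SECONDARY_NAMENODES=$($HADOOP_PREFIX/bin/hdfs getconf -secondarynamenodes 2>/dev/null)
"$HADOOP_PREFIX/sbin/hadoop-daemons.sh" --config "$HADOOP_CONF_DIR" \
    --hostnames "$SECONDARY_NAMENODES" --script "$bin/hdfs" start secondarynamenode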

1>. Starting just the NameNode, usage as follows:

[yinzhengjie@s101 ~]$ hadoop-daemon.sh --hostnames s101 start namenode
starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s101.out
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
11531 Jps
11453 NameNode
命令执行成功
============= s102 jps ============
3657 Jps
命令执行成功
============= s103 jps ============
3627 Jps
命令执行成功
============= s104 jps ============
3598 Jps
命令执行成功
[yinzhengjie@s101 ~]$

  The above shows how to start just the NameNode. If you want to start in batch, you can use the hadoop-daemons.sh command; however, since the cluster I deployed has only a single NameNode, you cannot really see any difference.

[yinzhengjie@s101 ~]$ hadoop-daemons.sh --hostnames `hdfs getconf -namenodes` start namenode
s101: starting namenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-namenode-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
13395 Jps
13318 NameNode
命令执行成功
============= s102 jps ============
3960 Jps
命令执行成功
============= s103 jps ============
3930 Jps
命令执行成功
============= s104 jps ============
3899 Jps
命令执行成功
[yinzhengjie@s101 ~]$

2>. Starting just the DataNode, usage as follows:

[yinzhengjie@s101 ~]$ hadoop-daemon.sh start datanode
starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
12119 Jps
12045 DataNode
命令执行成功
============= s102 jps ============
3779 Jps
命令执行成功
============= s103 jps ============
3750 Jps
命令执行成功
============= s104 jps ============
3719 Jps
命令执行成功
[yinzhengjie@s101 ~]$

  The above shows how to start just the DataNode. To start DataNodes in batch, use the hadoop-daemons.sh command; since I have three worker nodes, the effect is quite obvious.

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
14482 Jps
命令执行成功
============= s102 jps ============
4267 Jps
命令执行成功
============= s103 jps ============
4238 Jps
命令执行成功
============= s104 jps ============
4206 Jps
命令执行成功
[yinzhengjie@s101 ~]$ hadoop-daemons.sh start datanode
s102: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s102.out
s104: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s104.out
s103: starting datanode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-datanode-s103.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
14552 Jps
命令执行成功
============= s102 jps ============
4386 Jps
4316 DataNode
命令执行成功
============= s103 jps ============
4357 Jps
4287 DataNode
命令执行成功
============= s104 jps ============
4325 Jps
4255 DataNode
命令执行成功
[yinzhengjie@s101 ~]$

3>. Starting just the SecondaryNameNode

[yinzhengjie@s101 ~]$ hadoop-daemon.sh --hostnames s101 start secondarynamenode
starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
15127 SecondaryNameNode
15179 Jps
命令执行成功
============= s102 jps ============
4541 Jps
命令执行成功
============= s103 jps ============
4513 Jps
命令执行成功
============= s104 jps ============
4480 Jps
命令执行成功
[yinzhengjie@s101 ~]$

  The above shows how to start just the SecondaryNameNode. To start them in batch, use the hadoop-daemons.sh command; since I have three worker nodes, the effect is quite obvious.

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
17273 Jps
命令执行成功
============= s102 jps ============
4993 Jps
命令执行成功
============= s103 jps ============
4965 Jps
命令执行成功
============= s104 jps ============
4929 Jps
命令执行成功
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ for i in `cat /soft/hadoop/etc/hadoop/slaves | grep -v ^#` ;do hadoop-daemons.sh --hostnames $i start secondarynamenode ;done
s102: starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s102.out
s103: starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s103.out
s104: starting secondarynamenode, logging to /soft/hadoop-2.7.3/logs/hadoop-yinzhengjie-secondarynamenode-s104.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
17394 Jps
命令执行成功
============= s102 jps ============
5089 Jps
5042 SecondaryNameNode
命令执行成功
============= s103 jps ============
5061 Jps
5014 SecondaryNameNode
命令执行成功
============= s104 jps ============
5026 Jps
4979 SecondaryNameNode
命令执行成功
[yinzhengjie@s101 ~]$

 

3. start-yarn.sh script analysis

[yinzhengjie@s101 ~]$ cat /soft/hadoop/sbin/start-yarn.sh | grep -v ^# | grep -v ^$
echo "starting yarn daemons"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/yarn-config.sh
"$bin"/yarn-daemon.sh --config $YARN_CONF_DIR  start resourcemanager
"$bin"/yarn-daemons.sh --config $YARN_CONF_DIR  start nodemanager
[yinzhengjie@s101 ~]$

  The usage is much the same as above. Starting a single daemon looks like this:

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18290 Jps
命令执行成功
============= s102 jps ============
5314 Jps
命令执行成功
============= s103 jps ============
5288 Jps
命令执行成功
============= s104 jps ============
5249 Jps
命令执行成功
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ yarn-daemon.sh start nodemanager
starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s101.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18344 NodeManager
18474 Jps
命令执行成功
============= s102 jps ============
5337 Jps
命令执行成功
============= s103 jps ============
5311 Jps
命令执行成功
============= s104 jps ============
5273 Jps
命令执行成功
[yinzhengjie@s101 ~]$
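For completeness (this command is not part of the original terminal capture), the ResourceManager can be started on its own in exactly the same way, mirroring the yarn-daemon.sh call inside start-yarn.sh:

# Start only the ResourceManager on the local node; the log file ends up
# under the Hadoop logs directory, just like in the NodeManager example above.
yarn-daemon.sh start resourcemanager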

  If you want to start them in batch, the hands-on steps are as follows:

[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18570 Jps
命令执行成功
============= s102 jps ============
5383 Jps
命令执行成功
============= s103 jps ============
5357 Jps
命令执行成功
============= s104 jps ============
5319 Jps
命令执行成功
[yinzhengjie@s101 ~]$ yarn-daemons.sh start nodemanager
s102: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s102.out
s104: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s104.out
s103: starting nodemanager, logging to /soft/hadoop-2.7.3/logs/yarn-yinzhengjie-nodemanager-s103.out
[yinzhengjie@s101 ~]$ xcall.sh jps
============= s101 jps ============
18645 Jps
命令执行成功
============= s102 jps ============
5562 Jps
5436 NodeManager
命令执行成功
============= s103 jps ============
5536 Jps
5410 NodeManager
命令执行成功
============= s104 jps ============
5498 Jps
5372 NodeManager
命令执行成功
[yinzhengjie@s101 ~]$

 

4. stop-all.sh script analysis

[yinzhengjie@s101 ~]$ cat `which stop-all.sh` | grep -v ^# | grep -v ^$
echo "This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh"
bin=`dirname "${BASH_SOURCE-$0}"`
bin=`cd "$bin"; pwd`
DEFAULT_LIBEXEC_DIR="$bin"/../libexec
HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh
if [ -f "${HADOOP_HDFS_HOME}"/sbin/stop-dfs.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/stop-dfs.sh --config $HADOOP_CONF_DIR
fi
if [ -f "${HADOOP_HDFS_HOME}"/sbin/stop-yarn.sh ]; then
  "${HADOOP_HDFS_HOME}"/sbin/stop-yarn.sh --config $HADOOP_CONF_DIR
fi
[yinzhengjie@s101 ~]$

  Seeing the first line, echo "This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh", you can probably already guess what is going on: it is simply the above with every start argument replaced by stop. This script, too, is deprecated and has been superseded by "stop-dfs.sh" and "stop-yarn.sh".
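Accordingly, each of the start commands demonstrated earlier has a stop counterpart. The lines below are not taken from the original capture; they are just the same commands with start replaced by stop:

# Stop analogues of the commands used above (run from s101):
hadoop-daemon.sh  stop namenode            # stop the local NameNode
hadoop-daemons.sh stop datanode            # stop DataNodes on all hosts in the slaves file
hadoop-daemon.sh  stop secondarynamenode   # stop the local SecondaryNameNode
yarn-daemon.sh    stop resourcemanager     # stop the local ResourceManager
yarn-daemons.sh   stop nodemanager         # stop NodeManagers on all hosts in the slaves file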

 

5. Summary

  To sum up, we arrive at the following four equations:

1>.start-all.sh = start-dfs.sh + start-yarn.sh

2>.stop-all.sh = stop-dfs.sh + stop-yarn.sh

3>.hadoop-daemons.sh = hadoop-daemon.sh + slaves (see the sketch after this list)

4>.yarn-daemons.sh = yarn-daemon.sh + slaves
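The last two equations deserve one clarification: the plural *-daemons.sh scripts contain no daemon logic of their own; they use slaves.sh to ssh into every host in the slaves file (or the hosts given via --hostnames) and run the singular *-daemon.sh script there. A conceptual sketch, not the actual Hadoop source:

# Rough idea of what hadoop-daemons.sh / yarn-daemons.sh do:
# run the per-host daemon script on every worker over ssh.
HADOOP_SLAVES=/soft/hadoop/etc/hadoop/slaves   # path assumed from this post

for host in $(grep -v '^#' "$HADOOP_SLAVES"); do
    ssh "$host" "hadoop-daemon.sh $*" &
done
wait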

 

查看>>