HDFS and YARN Start and Stop Commands on a Hadoop Cluster
Published: 2019-06-29




Suppose we have only three Linux virtual machines, with hostnames hadoop01, hadoop02, and hadoop03. The Hadoop cluster is deployed across them as follows:

hadoop01: 1 NameNode, 1 DataNode, 1 JournalNode, 1 ZKFC, 1 ResourceManager, 1 NodeManager;
hadoop02: 1 NameNode, 1 DataNode, 1 JournalNode, 1 ZKFC, 1 ResourceManager, 1 NodeManager;
hadoop03: 1 DataNode, 1 JournalNode, 1 NodeManager.

 

Below are the commands for starting and stopping HDFS and YARN.

 

1. Start the HDFS cluster (using Hadoop's batch start script)

/root/apps/hadoop/sbin/start-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-dfs.sh
Starting namenodes on [hadoop01 hadoop02]
hadoop01: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
hadoop02: starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
hadoop03: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
hadoop01: starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
Starting journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
hadoop02: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
hadoop01: starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
Starting ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
hadoop02: starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out
[root@hadoop01 ~]#

As the startup log shows, the start-dfs.sh script uses ssh to batch-start the namenode, datanode, journalnode, and zkfc processes on the various nodes.
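The batch-start loop that start-dfs.sh effectively performs can be sketched in a few lines of shell. This is only a simplified illustration, not the real script: the host lists are hard-coded assumptions (the actual script derives them from the cluster configuration), and daemon_cmd is a hypothetical helper that merely builds each ssh command line instead of running it.

```shell
#!/bin/sh
# Simplified sketch of the batch-start loop (NOT the real start-dfs.sh).
# Host lists are hard-coded assumptions; the real script reads them from
# the cluster configuration. daemon_cmd is a hypothetical helper that only
# prints the ssh command each iteration would run.
HADOOP_SBIN=/root/apps/hadoop/sbin
NAMENODES="hadoop01 hadoop02"
DATANODES="hadoop01 hadoop02 hadoop03"

daemon_cmd() {  # daemon_cmd <host> <action> <daemon>
  printf 'ssh %s %s/hadoop-daemon.sh %s %s\n' "$1" "$HADOOP_SBIN" "$2" "$3"
}

for h in $NAMENODES; do daemon_cmd "$h" start namenode; done
for h in $DATANODES; do daemon_cmd "$h" start datanode; done
```

Printing the commands instead of executing them keeps the sketch safe to run anywhere; piping its output to sh (with the same passwordless ssh the batch scripts rely on) would perform an actual batch start.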

 

2. Stop the HDFS cluster (using Hadoop's batch stop script)

/root/apps/hadoop/sbin/stop-dfs.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-dfs.sh
Stopping namenodes on [hadoop01 hadoop02]
hadoop02: stopping namenode
hadoop01: stopping namenode
hadoop02: stopping datanode
hadoop03: stopping datanode
hadoop01: stopping datanode
Stopping journal nodes [hadoop01 hadoop02 hadoop03]
hadoop03: stopping journalnode
hadoop02: stopping journalnode
hadoop01: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [hadoop01 hadoop02]
hadoop01: stopping zkfc
hadoop02: stopping zkfc
[root@hadoop01 ~]#

3. Start individual processes

[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /root/apps/hadoop/logs/hadoop-root-namenode-hadoop02.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /root/apps/hadoop/logs/hadoop-root-datanode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop02.out
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /root/apps/hadoop/logs/hadoop-root-journalnode-hadoop03.out
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop01.out
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /root/apps/hadoop/logs/hadoop-root-zkfc-hadoop02.out

Check the processes on each of the three virtual machines after startup:

[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
6879 DFSZKFailoverController
7035 Jps
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]#

 

[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
6541 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]#

 

[root@hadoop03 apps]# jps
5331 Jps
5103 DataNode
5204 JournalNode
2258 QuorumPeerMain
[root@hadoop03 apps]#
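Rather than eyeballing each jps listing, a small helper can report which expected daemons are missing from a node's output. The expected_missing function below is hypothetical, not part of Hadoop:

```shell
#!/bin/sh
# Hypothetical helper: print every expected daemon name that does NOT
# appear in the given jps output. Empty output means the node is running
# everything it should.
expected_missing() {  # expected_missing "<jps output>" <daemon>...
  out=$1; shift
  for p in "$@"; do
    case "$out" in
      *"$p"*) ;;                 # daemon present in the jps output
      *) printf '%s\n' "$p" ;;   # daemon missing: report it
    esac
  done
}

# Using the hadoop03 listing above:
jps_out="5103 DataNode
5204 JournalNode
2258 QuorumPeerMain"
expected_missing "$jps_out" DataNode JournalNode NodeManager
# prints NodeManager, since YARN has not been started yet
```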

 

4. Stop individual processes

[root@hadoop01 ~]# jps
6695 DataNode
2002 QuorumPeerMain
8486 Jps
6879 DFSZKFailoverController
6800 JournalNode
6580 NameNode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop01 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop01 ~]# jps
2002 QuorumPeerMain
8572 Jps
[root@hadoop01 ~]#

 

[root@hadoop02 ~]# jps
6360 JournalNode
6436 DFSZKFailoverController
2130 QuorumPeerMain
7378 Jps
6255 DataNode
6155 NameNode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop zkfc
stopping zkfc
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop02 ~]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop namenode
stopping namenode
[root@hadoop02 ~]# jps
7455 Jps
2130 QuorumPeerMain
[root@hadoop02 ~]#

 

[root@hadoop03 apps]# jps
5103 DataNode
5204 JournalNode
5774 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop journalnode
stopping journalnode
[root@hadoop03 apps]# /root/apps/hadoop/sbin/hadoop-daemon.sh stop datanode
stopping datanode
[root@hadoop03 apps]# jps
5818 Jps
2258 QuorumPeerMain
[root@hadoop03 apps]#

 

 

5. Start the YARN cluster (using Hadoop's batch start script)

/root/apps/hadoop/sbin/start-yarn.sh

 

[root@hadoop01 ~]# /root/apps/hadoop/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /root/apps/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop03.out
hadoop02: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /root/apps/hadoop/logs/yarn-root-nodemanager-hadoop01.out
[root@hadoop01 ~]#

 

As the startup log shows, the start-yarn.sh script starts only one ResourceManager process, locally, while the nodemanagers on all three machines are started over ssh. The ResourceManager on hadoop02 therefore has to be started manually.

6. Start the ResourceManager on hadoop02

/root/apps/hadoop/sbin/yarn-daemon.sh start resourcemanager
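With both ResourceManagers running, you can ask YARN which one is active using yarn rmadmin -getServiceState. The rm1/rm2 IDs below are assumptions; they must match the yarn.resourcemanager.ha.rm-ids values configured in yarn-site.xml. A minimal sketch, dry-run by default:

```shell
#!/bin/sh
# Check which ResourceManager is active after starting both. rm1/rm2 are
# ASSUMED HA IDs -- they must match yarn.resourcemanager.ha.rm-ids in
# yarn-site.xml. By default this only prints the commands (dry run); set
# CHECK= on a cluster node to actually execute them.
CHECK=${CHECK:-echo}
for id in rm1 rm2; do
  $CHECK yarn rmadmin -getServiceState "$id"
done
```

On a healthy HA pair, one of the two queries reports "active" and the other "standby".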

 

 

7. Stop YARN

/root/apps/hadoop/sbin/stop-yarn.sh
[root@hadoop01 ~]# /root/apps/hadoop/sbin/stop-yarn.sh
stopping yarn daemons
stopping resourcemanager
hadoop01: stopping nodemanager
hadoop03: stopping nodemanager
hadoop02: stopping nodemanager
no proxyserver to stop
[root@hadoop01 ~]#

 

As the stop log shows, the stop-yarn.sh script stops only the local ResourceManager process, so the resourcemanager on hadoop02 has to be stopped separately.

 

8. Stop the ResourceManager on hadoop02

/root/apps/hadoop/sbin/yarn-daemon.sh stop resourcemanager

 

 

Note: individual HDFS-related processes are started and stopped with the "hadoop-daemon.sh" script, while YARN processes are started and stopped with the "yarn-daemon.sh" script.
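That note can be captured in a small wrapper: a hypothetical daemon_script function (not part of Hadoop) that maps each daemon name to the sbin script responsible for it.

```shell
#!/bin/sh
# Hypothetical wrapper around the note above: pick hadoop-daemon.sh for
# HDFS-related daemons and yarn-daemon.sh for YARN daemons.
HADOOP_SBIN=/root/apps/hadoop/sbin

daemon_script() {  # daemon_script <daemon> -> path of the managing script
  case "$1" in
    namenode|datanode|journalnode|zkfc)
      echo "$HADOOP_SBIN/hadoop-daemon.sh" ;;
    resourcemanager|nodemanager)
      echo "$HADOOP_SBIN/yarn-daemon.sh" ;;
    *)
      echo "unknown daemon: $1" >&2; return 1 ;;
  esac
}

daemon_script zkfc             # prints /root/apps/hadoop/sbin/hadoop-daemon.sh
daemon_script resourcemanager  # prints /root/apps/hadoop/sbin/yarn-daemon.sh
```

It could then be used as, e.g., "$(daemon_script zkfc)" start zkfc.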

 

http://www.cnblogs.com/jun1019/p/6266615.html

Reposted from: https://my.oschina.net/xiaominmin/blog/1599753
