- Starting the HDFS module:
$ sbin/hadoop-daemon.sh start namenode
$ sbin/hadoop-daemon.sh start datanode
$ sbin/hadoop-daemon.sh start secondarynamenode
- Starting the YARN module:
$ sbin/yarn-daemon.sh start resourcemanager
$ sbin/yarn-daemon.sh start nodemanager
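Started one at a time like this, it is easy to miss a daemon. The five commands above can be wrapped in a small script; a minimal sketch, assuming $HADOOP_HOME points at the install and everything runs on one node:

```shell
#!/usr/bin/env bash
# Sketch: start the Hadoop 2.x daemons on this node in dependency order.
# Assumes $HADOOP_HOME is set; the service names are the ones listed above.
cd "$HADOOP_HOME" || exit 1
for svc in namenode datanode secondarynamenode; do
  sbin/hadoop-daemon.sh start "$svc"
done
for svc in resourcemanager nodemanager; do
  sbin/yarn-daemon.sh start "$svc"
done
jps   # verify the daemons actually came up
```

Checking with jps afterwards catches the common case where one daemon silently fails to start.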
Notes:
start-all.sh is deprecated; running it just prints "This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh".
start-dfs.sh starts the NameNode and SecondaryNameNode on the master, and a DataNode on each slave.
start-yarn.sh starts the ResourceManager on the master, and a NodeManager on each slave.
Hadoop 3 commands
1. Single-node operations
Start|stop the HDFS daemons on a single node:
hdfs --daemon start|stop datanode
hdfs --daemon start|stop namenode
Start|stop the NodeManager on a single node:
yarn --daemon start|stop nodemanager
Start|stop the ResourceManager:
yarn --daemon start|stop resourcemanager
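With the per-daemon Hadoop 3 syntax, restarting the worker daemons on one machine is just a stop followed by a start; a sketch, assuming hdfs and yarn are on $PATH:

```shell
#!/usr/bin/env bash
# Sketch: restart the local worker daemons using Hadoop 3 syntax.
restart() {   # $1 = hdfs|yarn command, $2 = daemon name
  "$1" --daemon stop  "$2"
  "$1" --daemon start "$2"
}
restart hdfs datanode
restart yarn nodemanager
```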
On the difference between start-all.sh and hadoop-daemon.sh:
the former starts daemons across the whole cluster;
the latter starts a daemon only on the local machine.
- YARN history server (i.e. the MapReduce JobHistory server)
[hadoop@hadoop1 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
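On Hadoop 3, the per-daemon *-daemon.sh scripts are deprecated in favor of the unified commands, so the history server is managed through mapred instead:

```shell
# Hadoop 3.x equivalent of the mr-jobhistory-daemon.sh command above:
mapred --daemon start historyserver
# and the matching stop:
mapred --daemon stop historyserver
```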
- View the rack topology:
hadoop dfsadmin -printTopology
(hadoop dfsadmin is deprecated in newer releases; hdfs dfsadmin -printTopology is the current form)
Problems you may run into when writing startup scripts, and their fixes:
when hadoop-daemon.sh does not do the job, switch to hadoop-daemons.sh (the plural form runs the same command on every host in the slaves file);
do not prefix the script with sh when invoking it.
For example, start-hadoop.sh:
#!/usr/bin/env bash
cd $HADOOP_HOME
#hadoop namenode -format        # run once only, when first formatting HDFS
# hadoop-daemon.sh acts on the local machine only; hadoop-daemons.sh fans the
# same command out to every host in the slaves file, so inside a per-host ssh
# loop the singular form is the one you want.
sbin/hadoop-daemon.sh start namenode
sbin/yarn-daemon.sh start resourcemanager
for (( VAR = 1; VAR < 7; ++VAR )); do
  ssh hadoop-slave$VAR "$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode"
  ssh hadoop-slave$VAR "$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager"
done
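A matching stop script can mirror the start script in reverse order. A sketch, carrying over the hadoop-slave1..6 hostnames from the start script above:

```shell
#!/usr/bin/env bash
# Sketch of a stop-hadoop.sh: stop workers first, then the master daemons.
cd "$HADOOP_HOME" || exit 1
for (( VAR = 1; VAR < 7; ++VAR )); do
  ssh hadoop-slave$VAR "$HADOOP_HOME/sbin/yarn-daemon.sh stop nodemanager"
  ssh hadoop-slave$VAR "$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode"
done
sbin/yarn-daemon.sh stop resourcemanager
sbin/hadoop-daemon.sh stop namenode
```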
- journalnode (used by NameNode HA):
$ sbin/hadoop-daemon.sh start journalnode
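In an HA setup the JournalNode must run on every quorum host, so the command above is usually fanned out over ssh. A sketch; jn1..jn3 are placeholder hostnames (HA deployments typically use an odd number of JournalNodes):

```shell
#!/usr/bin/env bash
# Sketch: start a JournalNode on each quorum host.
# jn1..jn3 are placeholders; substitute your own hostnames.
for host in jn1 jn2 jn3; do
  ssh "$host" "$HADOOP_HOME/sbin/hadoop-daemon.sh start journalnode"
done
```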