Services
Host | Services |
---|---|
01 | NameNode, ResourceManager, ZooKeeper, ZKFC, JournalNode |
02 | NameNode, ResourceManager, ZooKeeper, ZKFC, JournalNode |
03 | ZooKeeper, JournalNode, JobHistory, SparkHistory, HAProxy (history), balance, trash |
04 | HiveServer2, MetaStore, HAProxy (hs2) |
05 | HiveServer2, MetaStore, HAProxy (hs2) |
06 | DataNode, NodeManager |
07 | DataNode, NodeManager |
08 | DataNode, NodeManager |
09 | DataNode, NodeManager |
Configuration
ZooKeeper
Edit the configuration file zoo.cfg.
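A minimal zoo.cfg sketch for the three-node ensemble; the host names 01/02/03 come from the service table above, while the ports and timing values are common defaults, not values taken from this deployment.

```
tickTime=2000
initLimit=10
syncLimit=5
# data directory created below
dataDir=/home/hadoop/zkdata/zookeeper
clientPort=2181
# one server.N line per ensemble member; N must match that node's myid
server.1=01:2888:3888
server.2=02:2888:3888
server.3=03:2888:3888
```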
Create the data directory:
mkdir -p /home/hadoop/zkdata/zookeeper
Set the myid on each node, one value per host:
echo '1' >> zkdata/zookeeper/myid;
echo '2' >> zkdata/zookeeper/myid;
echo '3' >> zkdata/zookeeper/myid
Start:
zookeeper-current/bin/zkServer.sh start
JournalNode
Edit the configuration files core-site.xml and hdfs-site.xml.
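On the JournalNode side the main setting is the edits directory; a sketch for hdfs-site.xml, matching the data directory created below:

```xml
<!-- hdfs-site.xml: where the JournalNode stores edit logs -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoopdata/journaldata</value>
</property>
```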
Create the data directory:
sudo mkdir -p /data/hadoopdata/journaldata
sudo chown -R hadoop:hadoop /data/hadoopdata
Start:
journalnode-current/bin/hdfs --daemon start journalnode
NameNode
Edit the configuration files: hadoop-env.sh (JAVA_HOME, heap sizes, and related settings), core-site.xml, hdfs-site.xml, and so on.
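A sketch of the HA-related properties, assuming the nameservice is the yourNs placeholder used later in this post, the NameNodes run on hosts 01/02 as nn1/nn2, and the JournalNodes run on 01/02/03 as in the service table; the ports are the usual defaults, not values taken from this cluster.

```xml
<!-- core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://yourNs</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.nameservices</name>
  <value>yourNs</value>
</property>
<property>
  <name>dfs.ha.namenodes.yourNs</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.yourNs.nn1</name>
  <value>01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.yourNs.nn2</name>
  <value>02:8020</value>
</property>
<!-- shared edits stored on the JournalNodes (hosts 01/02/03) -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://01:8485;02:8485;03:8485/yourNs</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.yourNs</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```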
Also configure the slaves file (the worker host list).
Do not start the NameNodes yet; wait for ZKFC.
ZKFC
First make sure the NameNodes are stopped.
Edit the configuration (sketched below this list):
core-site.xml: ha.zookeeper.quorum
hdfs-site.xml
hadoop-env.sh: JAVA_HOME, heap sizes, etc.
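A sketch of the failover-controller properties, assuming the ZooKeeper ensemble on hosts 01/02/03 from the table; the fencing method shown is just one common choice, not necessarily what this cluster uses.

```xml
<!-- core-site.xml -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>01:2181,02:2181,03:2181</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<!-- a fencing method must be configured; shell(/bin/true) is one common no-op choice -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>shell(/bin/true)</value>
</property>
```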
Format ZKFC (on the active NameNode):
/home/hadoop/zkfc-current/bin/hdfs zkfc -formatZK
Format HDFS (on the active NameNode):
/home/hadoop/hadoop-current/bin/hdfs namenode -format
Bootstrap the standby NameNode:
/home/hadoop/hadoop-current/bin/hdfs namenode -bootstrapStandby
Start the NameNode:
/home/hadoop/hadoop-current/bin/hdfs --daemon start namenode
Start ZKFC:
/home/hadoop/zkfc-current/bin/hdfs --daemon start zkfc
DataNode
Edit core-site.xml, hdfs-site.xml, and hadoop-env.sh.
sudo mkdir -p /var/lib/hadoop-hdfs
sudo chown -R hadoop:hadoop /var/lib/hadoop-hdfs
Note: the directory above is the short-circuit read configuration directory on the DataNodes.
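A sketch of the short-circuit read properties that rely on this directory; the socket file name dn_socket is an assumption.

```xml
<!-- hdfs-site.xml: short-circuit reads over a domain socket under /var/lib/hadoop-hdfs -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
```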
ResourceManager
Edit core-site.xml, hadoop-env.sh, hdfs-site.xml, mapred-site.xml, and yarn-site.xml. Key yarn-site.xml properties (a sketch follows the list below):
yarn.resourcemanager.hostname.rm1, yarn.resourcemanager.hostname.rm2
yarn.resourcemanager.zk-address
yarn.resourcemanager.mapreduce.history.url
yarn.resourcemanager.spark.history.url
yarn.historyproxy.webapp.http.address
yarn.timeline-service.webapp.address
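A sketch of the RM HA block of yarn-site.xml, assuming the two ResourceManagers run on hosts 01/02 and ZooKeeper on 01/02/03; the cluster id is a placeholder, and the history/proxy URL properties listed above look site-specific, so they are only noted in a comment.

```xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-cluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>02</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>01:2181,02:2181,03:2181</value>
</property>
<!-- yarn.resourcemanager.mapreduce.history.url, yarn.resourcemanager.spark.history.url,
     yarn.historyproxy.webapp.http.address and yarn.timeline-service.webapp.address
     point at the history/proxy endpoints of this deployment; set them to your own URLs. -->
```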
Create the HDFS directory and change its ownership:
~/hadoop-current/bin/hdfs dfs -mkdir -p /user/yarn
~/hadoop-current/bin/hadoop fs -chown -R yarn:yarn hdfs://yourNs/user/yarn
Start command:
/home/yarn/hadoop-current/sbin/yarn-daemon.sh start resourcemanager
JobHistory
Edit core-site.xml and hdfs-site.xml.
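The JobHistory endpoints themselves normally live in mapred-site.xml; a sketch, assuming the server runs on host 03 as in the service table and uses the default ports.

```xml
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>03:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>03:19888</value>
</property>
```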
Start command:
~/jobHistory-current/sbin/mr-jobhistory-daemon.sh start historyserver
SparkHistory
mkdir -p /tmp/spark-events
Start command:
sh /home/hadoop/sparkhistory-current/sbin/start-history-server.sh
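The history server picks up its log location from spark-defaults.conf (or SPARK_HISTORY_OPTS); a sketch, assuming the HDFS path listed in the notes at the end of this post and the yourNs nameservice placeholder.

```
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://yourNs/tmp/spark/staging/historylog
spark.history.fs.logDirectory    hdfs://yourNs/tmp/spark/staging/historylog
```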
hsproxyser
Edit core-site.xml, hdfs-site.xml, and historyproxy-site.xml (yarn.historyproxy.appstore.zk.addr; a sketch follows).
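historyproxy appears to be an in-house component, so only the property named above is sketched; the ZooKeeper address is an assumption reusing the ensemble from the table.

```xml
<!-- historyproxy-site.xml: ZooKeeper address for the history proxy app store (assumed value) -->
<property>
  <name>yarn.historyproxy.appstore.zk.addr</name>
  <value>01:2181,02:2181,03:2181</value>
</property>
```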
Start command:
/home/hadoop/hsproxyser-current/sbin/yarn-daemon.sh start historyproxy
NodeManager
Edit core-site.xml, hdfs-site.xml, and yarn-site.xml (a sketch of typical NodeManager properties follows).
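A sketch of commonly set NodeManager properties in yarn-site.xml; the directories and resource sizes below are illustrative assumptions, not values from this post.

```xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<!-- local and log directories: assumed paths -->
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/hadoopdata/yarn/local</value>
</property>
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/data/hadoopdata/yarn/logs</value>
</property>
<!-- per-node resources: size to the actual hardware -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>65536</value>
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>16</value>
</property>
```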
Miscellaneous
- HDFS audit log
- fsimage table
- YARN audit log
- Spark history log, HDFS paths: /tmp/spark/staging/historylog and /tmp/spark/staging/historylog_archive/
- Hive history log, HDFS paths: /tmp/hadoop-yarn/staging/history/done/xxx//.xml and /tmp/hadoop-yarn/staging/history/done/xxx.jhist
- Hive metastore API: thrift://ip:port
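For the fsimage table above, one common approach is to dump the fsimage to delimited text and load it into a table; a sketch using standard HDFS tooling (the downloaded file name is a placeholder).

```bash
# download the most recent fsimage from the NameNode into /tmp (saved as fsimage_<txid>)
~/hadoop-current/bin/hdfs dfsadmin -fetchImage /tmp
# convert it to tab-delimited text for loading into an analysis table
~/hadoop-current/bin/hdfs oiv -p Delimited -i /tmp/fsimage_0000000000000000000 -o /tmp/fsimage.tsv
```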