Offline Architecture: Hadoop/Hive/Spark Server-Side Environment

Services

Host    Services
01      NameNode, ResourceManager, Zookeeper, ZKFC, JournalNode
02      NameNode, ResourceManager, Zookeeper, ZKFC, JournalNode
03      Zookeeper, JournalNode, JobHistory, SparkHistory, Haproxy(history), balance, trash
04      HiveServer2, MetaStore, HaProxy(hs2)
05      HiveServer2, MetaStore, HaProxy(hs2)
06      DataNode, NodeManager
07      DataNode, NodeManager
08      DataNode, NodeManager
09      DataNode, NodeManager

Configuration

zookeeper

Edit the configuration file: zoo.cfg
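A minimal zoo.cfg sketch for the three-node ensemble on hosts 01–03 (host01–host03 are placeholder hostnames; dataDir matches the directory created below):

```ini
tickTime=2000
initLimit=10
syncLimit=5
# Data directory created in the next step
dataDir=/home/hadoop/zkdata/zookeeper
clientPort=2181
# server.N must match the number written into each node's myid file
server.1=host01:2888:3888
server.2=host02:2888:3888
server.3=host03:2888:3888
```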

Create the data directory:
mkdir -p /home/hadoop/zkdata/zookeeper

Set myid, one value per node respectively (the file must contain only the server number, so use > rather than >>):

echo '1' > zkdata/zookeeper/myid
echo '2' > zkdata/zookeeper/myid
echo '3' > zkdata/zookeeper/myid

Start:

zookeeper-current/bin/zkServer.sh start

journalnode

Edit the configuration files:

core-site.xml, hdfs-site.xml
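A hedged hdfs-site.xml fragment for the quorum journal setup (the nameservice ID yourNs and hostnames host01–host03 are placeholders; the edits directory points at the data directory created in the next step):

```xml
<!-- Sketch only: nameservice ID and hostnames are placeholders -->
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://host01:8485;host02:8485;host03:8485/yourNs</value>
</property>
<!-- Local directory where each JournalNode stores edits -->
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/data/hadoopdata/journaldata</value>
</property>
```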

Create the data directory:

sudo mkdir -p /data/hadoopdata/journaldata
sudo chown -R hadoop:hadoop /data/hadoopdata

Start:

journalnode-current/bin/hdfs --daemon start journalnode   

namenode

Edit the configuration files:
hadoop-env.sh: set JAVA_HOME, heap sizes, and related options
core-site.xml
hdfs-site.xml, etc.
slaves file.

Do not start the NameNode yet; set up ZKFC first.

ZKFC

Stop the NameNode first.
Edit the configuration:
core-site.xml: ha.zookeeper.quorum
hdfs-site.xml
hadoop-env.sh: JAVA_HOME, heap sizes, etc.
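A sketch of the failover-related properties, assuming the three ZooKeeper nodes run on hosts 01–03 (hostnames and port are placeholders):

```xml
<!-- core-site.xml: ZooKeeper quorum used by ZKFC -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>host01:2181,host02:2181,host03:2181</value>
</property>
<!-- hdfs-site.xml: enable automatic failover for the nameservice -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```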

Format ZKFC (on the active NameNode):

/home/hadoop/zkfc-current/bin/hdfs zkfc -formatZK

Format HDFS (on the active NameNode):

/home/hadoop/hadoop-current/bin/hdfs namenode -format

Bootstrap HDFS (on the standby NameNode):

/home/hadoop/hadoop-current/bin/hdfs namenode -bootstrapStandby

Start the NameNode:

/home/hadoop/hadoop-current/bin/hdfs --daemon start namenode

Start ZKFC:

/home/hadoop/zkfc-current/bin/hdfs --daemon start zkfc

DataNode

Edit the configuration files:
core-site.xml
hdfs-site.xml
hadoop-env.sh

sudo mkdir -p /var/lib/hadoop-hdfs
sudo chown -R hadoop:hadoop /var/lib/hadoop-hdfs

Note: the directory above is used by the short-circuit read configuration on the DataNodes.
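A sketch of the short-circuit read properties that this directory supports (the dn_socket file name is a conventional choice, not taken from this document):

```xml
<!-- hdfs-site.xml on the DataNodes: enable short-circuit local reads -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<!-- Domain socket under the directory created above -->
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/lib/hadoop-hdfs/dn_socket</value>
</property>
```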

ResourceManager

Edit the configuration files:
core-site.xml
hadoop-env.sh
hdfs-site.xml
mapred-site.xml
yarn-site.xml

yarn.resourcemanager.hostname.rm1, yarn.resourcemanager.hostname.rm2
yarn.resourcemanager.zk-address
yarn.resourcemanager.mapreduce.history.url
yarn.resourcemanager.spark.history.url
yarn.historyproxy.webapp.http.address
yarn.timeline-service.webapp.address
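The ResourceManager HA side of the yarn-site.xml properties above can be sketched as follows; the cluster ID, rm IDs, and hostnames are placeholder assumptions, and the ha.enabled/cluster-id/rm-ids properties are standard companions not explicitly listed in this document:

```xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>yarn-cluster</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<!-- RMs run on hosts 01 and 02 per the service table -->
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>host01</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>host02</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>host01:2181,host02:2181,host03:2181</value>
</property>
```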

Create the HDFS directory and set its ownership:

~/hadoop-current/bin/hdfs dfs -mkdir -p /user/yarn
~/hadoop-current/bin/hadoop fs -chown -R yarn:yarn hdfs://yourNs/user/yarn

Start command:

/home/yarn/hadoop-current/sbin/yarn-daemon.sh start resourcemanager

JobHistory

Edit the configuration files:
core-site.xml
hdfs-site.xml
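A hedged mapred-site.xml fragment for the history server endpoints (host03 and the default ports 10020/19888 are assumptions; JobHistory runs on host 03 per the service table):

```xml
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>host03:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>host03:19888</value>
</property>
```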

Start command:

~/jobHistory-current/sbin/mr-jobhistory-daemon.sh start  historyserver

SparkHistory

Create the local event directory:

mkdir -p /tmp/spark-events
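A spark-defaults.conf sketch for the history server; the HDFS log directory is an assumption based on the Spark history paths listed at the end of this document, and yourNs is the placeholder nameservice:

```properties
# Applications write event logs here; the history server reads the same directory
spark.eventLog.enabled           true
spark.eventLog.dir               hdfs://yourNs/tmp/spark/staging/historylog
spark.history.fs.logDirectory    hdfs://yourNs/tmp/spark/staging/historylog
```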

Start command:

sh /home/hadoop/sparkhistory-current/sbin/start-history-server.sh

hsproxyser

Edit the configuration files:
core-site.xml
hdfs-site.xml
historyproxy-site.xml:

yarn.historyproxy.appstore.zk.addr

Start command:

/home/hadoop/hsproxyser-current/sbin/yarn-daemon.sh start historyproxy

NodeManager

Edit the configuration files:
core-site.xml
hdfs-site.xml
yarn-site.xml

Miscellaneous

  1. HDFS audit log

  2. fsimage table

  3. YARN audit log

  4. Spark history logs (HDFS paths):
    /tmp/spark/staging/historylog
    /tmp/spark/staging/historylog_archive/

  5. Hive history logs (HDFS paths):
    /tmp/hadoop-yarn/staging/history/done/xxx//.xml
    /tmp/hadoop-yarn/staging/history/done/xxx.jhist

  6. Hive MetaStore API: thrift://ip:port
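The MetaStore thrift endpoint above is what clients set in hive-site.xml; a sketch assuming the two MetaStore instances on hosts 04/05 and the default port 9083:

```xml
<!-- Client-side hive-site.xml: comma-separated list of MetaStore URIs -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://host04:9083,thrift://host05:9083</value>
</property>
```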
