Hadoop consists of two main parts: the distributed file system HDFS and the MapReduce computation model. Below I walk through how I set up a Hadoop environment.

Hadoop test environment

4 test machines in total: 1 namenode and 3 datanodes
OS: RHEL 5.5 x86_64
Hadoop: 0.20.203.0
JDK: jdk1.7.0

Role       IP address
namenode   192.168.57.75
datanode1  192.168.57.76
datanode2  192.168.57.78
datanode3  192.168.57.79
Part 1: Preparation before deploying Hadoop
1. Hadoop depends on Java and SSH. Java 1.5.x or later must be installed. SSH must be installed, and sshd must be kept running, so that the Hadoop scripts can manage the remote Hadoop daemons.
2. Create a common Hadoop account. All nodes should use the same username; add it with:
useradd hadoop
passwd hadoop
3. Configure hostnames in /etc/hosts:
tail -n 4 /etc/hosts
192.168.57.75 namenode
192.168.57.76 datanode1
192.168.57.78 datanode2
192.168.57.79 datanode3
4. All of the above must be configured identically on every node (namenode and datanodes).
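The /etc/hosts entries above can be generated from one node list instead of typed by hand on each machine; a minimal sketch, using the hostname/IP pairs from the table above (append the output to /etc/hosts on every node):

```shell
# Hostname:IP pairs for this cluster (values from the table above).
nodes="namenode:192.168.57.75 datanode1:192.168.57.76 datanode2:192.168.57.78 datanode3:192.168.57.79"

# Emit one "IP hostname" line per node, in /etc/hosts format.
hosts_entries() {
  for n in $nodes; do
    printf '%s %s\n' "${n#*:}" "${n%%:*}"
  done
}

hosts_entries
```

Keeping the pair list in one variable means a changed IP only has to be edited in one place before re-appending.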
Part 2: SSH configuration
1. Generate the private key id_rsa and public key id_rsa.pub
[hadoop@hadoop1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
d6:63:76:43:e2:5b:8e:85:ab:67:a2:7c:a6:8f:23:f9 hadoop@hadoop1.test.com
2. Confirm the private key id_rsa and public key id_rsa.pub files were created
[hadoop@hadoop1 ~]$ ls .ssh/
authorized_keys id_rsa id_rsa.pub known_hosts
3. Copy the public key to the datanode servers
[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode1
hadoop@datanode1's password:
Now try logging into the machine, with "ssh 'hadoop@datanode1'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode2
hadoop@datanode2's password:
Now try logging into the machine, with "ssh 'hadoop@datanode2'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode3
hadoop@datanode3's password:
Now try logging into the machine, with "ssh 'hadoop@datanode3'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost
hadoop@localhost's password:
Now try logging into the machine, with "ssh 'hadoop@localhost'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
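The four ssh-copy-id invocations above can be collapsed into one loop; a sketch that prints the commands it would run, assuming the hostnames resolve as configured in /etc/hosts (remove the leading echo to actually execute them):

```shell
# Node names as configured in /etc/hosts; localhost covers the namenode itself.
nodes="datanode1 datanode2 datanode3 localhost"

# Print one ssh-copy-id command per node; drop the echo to run them for real.
# Each real run will prompt once for the hadoop user's password on that node.
for node in $nodes; do
  echo ssh-copy-id -i ~/.ssh/id_rsa.pub "hadoop@$node"
done
```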
4. Verify passwordless login
[hadoop@hadoop1 ~]$ ssh datanode1
Last login: Thu Feb 2 09:01:16 2012 from 192.168.57.71
[hadoop@hadoop2 ~]$ exit
logout
[hadoop@hadoop1 ~]$ ssh datanode2
Last login: Thu Feb 2 09:01:18 2012 from 192.168.57.71
[hadoop@hadoop3 ~]$ exit
logout
[hadoop@hadoop1 ~]$ ssh datanode3
Last login: Thu Feb 2 09:01:20 2012 from 192.168.57.71
[hadoop@hadoop4 ~]$ exit
logout
[hadoop@hadoop1 ~]$ ssh localhost
Last login: Thu Feb 2 09:01:24 2012 from 192.168.57.71
[hadoop@hadoop1 ~]$ exit
logout
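The interactive check above can also be scripted: with BatchMode=yes, ssh fails instead of prompting for a password, so any node that still asks for one shows up as FAILED rather than hanging. A sketch, using the same node names as above:

```shell
# BatchMode=yes disables password prompts; ConnectTimeout bounds dead hosts.
for node in datanode1 datanode2 datanode3 localhost; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "hadoop@$node" hostname >/dev/null 2>&1; then
    echo "$node: passwordless OK"
  else
    echo "$node: FAILED"
  fi
done
```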
Part 3: Java environment configuration
1. Download a suitable JDK
# This RPM package is for 64-bit Linux systems
wget http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.rpm
2. Install the JDK
rpm -ivh jdk-7-linux-x64.rpm
3. Verify the Java installation
[root@hadoop1 ~]# java -version
java version "1.7.0"
Java(TM) SE Runtime Environment (build 1.7.0-b147)
Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode)
[root@hadoop1 ~]# ls /usr/java/
default jdk1.7.0 latest
4. Configure Java environment variables
# vim /etc/profile    # add the following lines to the profile file:
#add for hadoop
export JAVA_HOME=/usr/java/jdk1.7.0
export CLASSPATH=.:
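Changes to /etc/profile only take effect in a new login shell or after sourcing the file. A quick sanity check, assuming the RPM installed to /usr/java/jdk1.7.0 as shown above (check_java_home is a helper name introduced here for illustration):

```shell
# Confirm JAVA_HOME is set and points at a real java binary before
# moving on to the Hadoop configuration itself.
check_java_home() {
  if [ -n "$JAVA_HOME" ] && [ -x "$JAVA_HOME/bin/java" ]; then
    echo "JAVA_HOME OK: $JAVA_HOME"
  else
    echo "JAVA_HOME not configured correctly"
  fi
}

# Reload the profile in the current shell, then run the check.
. /etc/profile
check_java_home
```

Run this check on every node: the Hadoop scripts start daemons over SSH, so each datanode needs a working JAVA_HOME of its own.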