Hadoop 2.2 and HBase 0.94.18 Cluster Installation

Here we install a three-node cluster, IPs 192.168.157.132-134:

master:192.168.157.132

slaves:192.168.157.133-134

HBase depends on ZooKeeper; install that on your own (not covered here).

1. Configure passwordless SSH login

1> Install the SSH client on 132, 133, and 134:

yum install openssh-clients

2> Set up passwordless login on 132, 133, and 134:

[root@localhost ~]# ssh-keygen -t rsa        (press Enter through all the prompts)
[root@localhost ~]# cd /root/.ssh/
[root@localhost .ssh]# cat id_rsa.pub >> authorized_keys
[root@localhost .ssh]# scp authorized_keys root@192.168.157.132:/root/.ssh
[root@localhost .ssh]# scp authorized_keys root@192.168.157.133:/root/.ssh
[root@localhost .ssh]# scp authorized_keys root@192.168.157.134:/root/.ssh

Note: for all three machines to reach each other, every node's public key must end up in authorized_keys. Append each node's id_rsa.pub into one authorized_keys file before copying it out, rather than letting a later scp overwrite an earlier node's key.

Fix the permissions (all three machines):

[root@localhost ~]# chmod 700 ~/.ssh/
[root@localhost ~]# chmod 600 ~/.ssh/authorized_keys
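Each node should now be able to log in to the others without a password prompt; a quick check (my addition, any of the three IPs will do):

[root@localhost ~]# ssh root@192.168.157.133 date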

2. Change the hostnames

1> Change it with a command (lost after a reboot):

[root@localhost .ssh]# hostname dev-157-132
[root@localhost .ssh]# hostname
dev-157-132

2> Edit the config file (takes effect permanently):

[root@localhost .ssh]# vim /etc/sysconfig/network
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=dev-157-132
GATEWAY=192.168.248.254

3> Apply both steps on each of the three machines, using the names dev-157-132, dev-157-133, and dev-157-134.

4> Edit /etc/hosts on all three machines:

[root@localhost .ssh]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.132 dev-157-132
192.168.157.133 dev-157-133
192.168.157.134 dev-157-134
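A quick sanity check (my addition): each node should now resolve the others by hostname:

[root@dev-157-132 ~]# ping -c 1 dev-157-133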

3. Disable the firewall (all three machines)

service iptables stop
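Note that `service iptables stop` only lasts until the next reboot; on CentOS-style systems you would also disable the service permanently (my addition):

chkconfig iptables off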

4. Install the JDK (omitted)

5. Install hadoop 2.2.0

[root@dev-157-132 servers]# tar -xf hadoop-2.2.0.tar.gz
[root@dev-157-132 servers]# cd hadoop-2.2.0/etc/hadoop

1> Edit hadoop-env.sh

[root@dev-157-132 hadoop]# vim hadoop-env.sh
export JAVA_HOME=/export/servers/jdk1.6.0_25

(set this to your own JAVA_HOME; leave the rest at the defaults)
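To confirm the JDK path is correct (a check I've added), the JDK should report its version:

[root@dev-157-132 hadoop]# /export/servers/jdk1.6.0_25/bin/java -version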

2> Edit core-site.xml

[root@dev-157-132 hadoop]# vim core-site.xml

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://dev-157-132:9100</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/servers/hadoop-2.2.0/data/hadoop_tmp</value>
  </property>
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>
</configuration>
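A quick way to confirm the setting took effect (my addition; run from the hadoop-2.2.0 directory), which should print hdfs://dev-157-132:9100:

[root@dev-157-132 hadoop-2.2.0]# bin/hdfs getconf -confKey fs.defaultFS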

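One detail the original skips: the Hadoop 2.2.0 tarball ships only mapred-site.xml.template, so create the file before editing it in the next step:

[root@dev-157-132 hadoop]# cp mapred-site.xml.template mapred-site.xml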
3> Edit mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

4> Edit yarn-site.xml

[root@dev-157-132 hadoop]# vim yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>dev-157-132:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>dev-157-132:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>dev-157-132:8032</value>
    <description>the host is the hostname of the ResourceManager and the port is the port on which the clients can talk to the Resource Manager.</description>
  </property>
</configuration>

5> Edit hdfs-site.xml

[root@dev-157-132 hadoop]# vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/export/servers/hadoop-2.2.0/data/nn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/export/servers/hadoop-2.2.0/data/dfs</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
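Two additions worth considering that the original omits: running MapReduce on YARN in Hadoop 2.2 requires the shuffle auxiliary service in yarn-site.xml, and with only two DataNodes the default replication factor of 3 cannot be satisfied, so dfs.replication can be lowered in hdfs-site.xml:

<!-- yarn-site.xml: enable the MapReduce shuffle service -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<!-- hdfs-site.xml: match replication to the two available DataNodes -->
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>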

6> Edit slaves

[root@dev-157-132 hadoop]# vim slaves
dev-157-133
dev-157-134

7> scp to the slave machines

scp -r hadoop-2.2.0 root@192.168.157.133:/export/servers
scp -r hadoop-2.2.0 root@192.168.157.134:/export/servers
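To confirm the copy landed (my addition), list the directory on a slave:

[root@dev-157-132 servers]# ssh root@192.168.157.133 ls /export/servers/hadoop-2.2.0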

8> Set environment variables on all three machines
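The original post breaks off here. As a minimal sketch (my assumption, reusing the paths from the steps above), append the following to /etc/profile on each of the three nodes and then run source /etc/profile:

export JAVA_HOME=/export/servers/jdk1.6.0_25
export HADOOP_HOME=/export/servers/hadoop-2.2.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

With the variables in place, the usual next steps would be to format the NameNode once on dev-157-132 (hdfs namenode -format) and bring the cluster up with start-dfs.sh and start-yarn.sh; jps should then show NameNode and ResourceManager on the master and DataNode and NodeManager on the slaves.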
