Installing Hadoop in Fully Distributed Mode

  In a typical cluster, the NameNode runs on one machine and the JobTracker on another; these serve as the master nodes. The remaining machines each run a DataNode and a TaskTracker and serve as slave nodes.

  My machine does not have much memory, so I set up only three virtual Linux systems: the NameNode and JobTracker share one machine as the master, and the other two machines each run a DataNode and TaskTracker as slaves.

          Hostname    IP
Master    h1          192.168.0.129
Slave     h2          192.168.0.130
Slave     h3          192.168.0.131

  

  1. Configuring each node

    1) Make sure hostnames and IP addresses resolve correctly on every machine by editing the /etc/hosts file.

      If the machine is a master node, add the IPs and corresponding hostnames of every machine in the cluster to the file;

      if the machine is a slave node, it only needs its own IP and hostname plus the master's IP and hostname.

      Here, /etc/hosts is configured identically on all nodes:

[root@h1 ~]# vi /etc/hosts
192.168.0.128 centoshadoop
127.0.0.1     localhost localhost.localdomain localhost4 localhost4.localdomain4
::1           localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.129 h1
192.168.0.130 h2
192.168.0.131 h3

      Stop the firewall on every node (on CentOS 6, chkconfig iptables off additionally keeps it from starting at boot):

[root@h1 ~]# service iptables stop

      Disable SELinux on all nodes by editing /etc/selinux/config (the change takes effect on the next reboot; setenforce 0 switches to permissive mode immediately):

[root@h1 ~]# vi /etc/selinux/config
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

    2) Create the same user, coder, on all machines

[root@h1 ~]# useradd coder
[root@h1 ~]# passwd coder
Changing password for user coder.
New password: 
BAD PASSWORD: it is based on a dictionary word
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@h1 ~]#

    3) Log in as coder on every node and go to coder's home directory

[coder@h1 ~]$ pwd
/home/coder
[coder@h1 ~]$

    4) SSH configuration

      4.1) Generate a key pair on every node.

[coder@h1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/coder/.ssh/id_rsa): 
Created directory '/home/coder/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/coder/.ssh/id_rsa.
Your public key has been saved in /home/coder/.ssh/id_rsa.pub.
The key fingerprint is:
29:1c:df:59:0d:5f:ee:28:07:c0:57:21:15:af:a3:88 coder@h1
The key's randomart image is:
+--[ RSA 2048]----+
|.. oo=o.         |
|...= +           |
|. .o o o         |
|. o o o . +      |
|o S o . = .      |
|. . . + .        |
|E . .            |
|                 |
|                 |
+-----------------+
[coder@h1 ~]$
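When repeating this step on several nodes it can be handy to generate the key without answering prompts. A minimal sketch, assuming OpenSSH's ssh-keygen is available (it writes to a throwaway path here purely for illustration; on the real nodes you would keep the default ~/.ssh/id_rsa):

```shell
# Generate an RSA key pair non-interactively.
# -N "" : empty passphrase   -q : quiet   -f : output file path
KEYFILE="$(mktemp -d)/id_rsa"
ssh-keygen -t rsa -N "" -q -f "$KEYFILE"
# Two files are produced: the private key and the .pub public key.
ls -l "$KEYFILE" "$KEYFILE.pub"
```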

      4.2) Then enter the .ssh directory and copy the public key into authorized_keys

[coder@h1 ~]$ cd .ssh
[coder@h1 .ssh]$ ls
id_rsa  id_rsa.pub
[coder@h1 .ssh]$ cp id_rsa.pub authorized_keys

      4.3) Distribute the SSH public keys: every node's public key must end up in every other node's authorized_keys file, so that the machines can ssh into each other without a password. Make sure authorized_keys has mode 600, or sshd will ignore it.

        One way to do this is to scp the authorized_keys files from h2 and h3 to the master h1, merge them into a single authorized_keys file there, and then copy that file back to each node.

[root@h2 .ssh]# scp authorized_keys h1:/softs

        To merge them into one file, append each copied file to the master's key file, e.g. cat /softs/authorized_keys >> ~/.ssh/authorized_keys. Once the merge is done, copy the combined authorized_keys back to ~/.ssh/ on every node.
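The merge step above can be scripted. A small sketch, using a throwaway staging directory and two stand-in key lines so it can run anywhere; on the real cluster the staged files would be the ones scp'd over from h2 and h3:

```shell
# Stage two per-node key files, as scp would have left them (contents are fakes).
STAGE="$(mktemp -d)"
printf 'ssh-rsa AAAA...fake1 coder@h2\n' > "$STAGE/authorized_keys.h2"
printf 'ssh-rsa AAAA...fake2 coder@h3\n' > "$STAGE/authorized_keys.h3"

# Concatenate every staged key file into one merged file and tighten permissions:
# sshd refuses an authorized_keys file that is group- or world-writable.
cat "$STAGE"/authorized_keys.* > "$STAGE/authorized_keys"
chmod 600 "$STAGE/authorized_keys"
wc -l < "$STAGE/authorized_keys"
```

In practice you would also append the master's own id_rsa.pub before distributing the merged file back out.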

         The merged file on h1 ends up as follows; once it has been copied back to every node, running ssh h2 from h1 should log in without a password prompt:

[root@h1 .ssh]# cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwVeTrgPPwMlI5l8cBVafmx3SSnDF/ad62xdJgnRFtzcBbwGpDcciAcPTLrjwbrodhDS/jWk1CIwQegPziHK94+Z9D9hGmyzJg3qRc9iH9tJF8BxZnsiM5zaXvU921mHdbQO/eeXGROvlX1VmkeoAZFXamzfPSXPL/ooyWxBvNiG8j8G4mxd2Cm/UpaaEI/C+gBB5hgerKJCCpyHudNuiqwz7SDZxIOOCU1hEG4xnMZJtbZg39QMPuLOYcodSMI3cGHb+zdwct62IxMMV/ZupQW2h5rXN0SmVTNDB5dsd5NIDCdsGEJU59ZSid2d71yek8iUk9t497cwZvKwrd7lVTw== coder@h1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA8mrNyz17fHLzqyPWcmDhNwCu3iQPOS9gH4yK/10kNNu0Lly6G+OxzfB93+mfOQK2ejGjmYwNYKeSkDqyvHBw/A7Gyb+r2WZBlq/hNwrQsulUo57EUPLvF4g3cTlPAznhBGu4fFgSE7VXR1YZ6R0qUBwLRqvnZODhHzOklIH4Jasyc3oE1bHkEeixwo+V9MuwlnTLmy2R9HgixFCCzNQqiaIRgdi+/10FLQH18QGYTP2CQMpvtWoFXmOLL3VbHQXMlosfVYSXg3/wJA1X6KqYXrWkX5FAPMpeVyFl1OpHC+oH1SNf7FcVsAJ2E8QjQZ3UQxjN+wOzwe8AauLkyNhnbw== coder@h2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAtQkOcpp9/s3v+jyX3T8jO7XiTqW0KxoUGl9ZdIcToish57tzJq1ajkKDMFDVBXXb8h3m+T9dPc6lhYtb1r6WBF6HB0fQ8Cwt8Sg6WxkhJDGhNzZTYL6U1rLWDLXQ6Y0NVub5mktu1ToDzHJw8GHri125b0RuRjwx12eo1kt0E3hP6DCEFtQfEyO/24dFOlbVTqF+/LT5HIA7lJFwlWZcRx0WrpB/w3lzQ3qKShAqo5MiCMJ7F5oEzgIeNcTQIqn4TJxci3NVG3VLga/MR2K9O2OZQjKhBUxMKPaZUlQefkbrxPBcKSfS1khqdAuXyTYfeSD0QPzrtSBxo9bLB7+urQ== coder@h3
[root@h1 .ssh]#

 

  2. Installing and configuring Hadoop

    1) Unpack the Hadoop tarball into /home/coder/ (e.g. tar -xzf hadoop-x.y.z.tar.gz -C /home/coder).

    2) Configuration on the master node

      2.1) In conf/hadoop-env.sh, find the export JAVA_HOME line and set it to the JDK installation path:

export JAVA_HOME=/usr/java/jdk1.6.0_38

      2.2) In conf/core-site.xml, fs.default.name must point to the IP (or hostname) of the master node running the NameNode:

[coder@h1 conf]$ vi core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.0.129:9000</value>
  </property>
</configuration>

      2.3) In conf/hdfs-site.xml, since there are two slave nodes, the data replication factor can be set to 2.
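The original post breaks off at this point. Based on the sentence above, a minimal hdfs-site.xml for this two-slave cluster would look something like the following; dfs.replication is the only value the text specifies, and the property-block layout matches the core-site.xml shown earlier:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
```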
