Setting Up Hadoop 2.7.4 + ZooKeeper 3.4.10 in HA Mode

I. Overview

This walkthrough uses VMware virtual machines running CentOS 7.

Since the five machines required are almost identically configured, one machine is set up first and then cloned four times; the clones then get host-specific tweaks.

Some steps have been configured in earlier posts; they are only noted here, and the details can be found in those posts.

II. Lab Environment

1. Disable SELinux and the firewall on all nodes (a sketch follows this list)

2. Software: hadoop-2.7.4.tar.gz; zookeeper-3.4.10.tar.gz; jdk-8u131-linux-x64.tar.gz
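
A minimal sketch of item 1 on CentOS 7 (run on every node; setenforce only lasts until the next reboot, so the config file is edited as well):

[root@hadoop1 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@hadoop1 ~]# setenforce 0    # SELinux off for the running system
[root@hadoop1 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # persist across reboots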

III. Host Plan

IP               Host      Processes
192.168.100.11   hadoop1   NameNode, ResourceManager, DFSZKFailoverController
192.168.100.12   hadoop2   NameNode, ResourceManager, DFSZKFailoverController
192.168.100.13   hadoop3   DataNode, NodeManager, JournalNode, QuorumPeerMain
192.168.100.14   hadoop4   DataNode, NodeManager, JournalNode, QuorumPeerMain
192.168.100.15   hadoop5   DataNode, NodeManager, JournalNode, QuorumPeerMain

IV. Environment Preparation

1. Set the IP address: 192.168.100.11

2. Set the hostname: hadoop1 (a sketch of steps 1 and 2 follows)
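
One way to do steps 1 and 2 on CentOS 7 (a sketch; the connection name ens33 and the gateway address are assumptions about this particular VM):

[root@localhost ~]# nmcli connection modify ens33 ipv4.method manual ipv4.addresses 192.168.100.11/24 ipv4.gateway 192.168.100.2
[root@localhost ~]# nmcli connection up ens33    # re-activate with the new address
[root@localhost ~]# hostnamectl set-hostname hadoop1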

3. Map the IPs to hostnames

[root@hadoop1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.11 hadoop1
192.168.100.12 hadoop2
192.168.100.13 hadoop3
192.168.100.14 hadoop4
192.168.100.15 hadoop5

4. Set up the SSH key-distribution script
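
The script itself was covered in an earlier post; the one prerequisite is a key pair on hadoop1, which the clones made in section VI will inherit (a sketch):

[root@hadoop1 ~]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa    # empty passphrase, so scripts can ssh non-interactively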

5. Unpack the JDK

[root@hadoop1 ~]# tar -zxf jdk-8u131-linux-x64.tar.gz
[root@hadoop1 ~]# cp -r jdk1.8.0_131/ /usr/local/jdk

6. Unpack Hadoop

[root@hadoop1 ~]# tar -zxf hadoop-2.7.4.tar.gz 
[root@hadoop1 ~]# cp -r hadoop-2.7.4 /usr/local/hadoop

7. Unpack ZooKeeper

[root@hadoop1 ~]# tar -zxf zookeeper-3.4.10.tar.gz 
[root@hadoop1 ~]# cp -r zookeeper-3.4.10 /usr/local/hadoop/zookeeper
[root@hadoop1 ~]# cd /usr/local/hadoop/zookeeper/conf/
[root@hadoop1 conf]# cp zoo_sample.cfg zoo.cfg
[root@hadoop1 conf]# vim zoo.cfg 
# change dataDir
dataDir=/usr/local/hadoop/zookeeper/data
# add the following three lines
server.1=hadoop3:2888:3888
server.2=hadoop4:2888:3888
server.3=hadoop5:2888:3888
[root@hadoop1 conf]# cd ..
[root@hadoop1 zookeeper]# mkdir data
# more setup (the myid files) remains, but hadoop1 itself does not run ZooKeeper, so this is finished later on hadoop3-5

8. Configure environment variables

[root@hadoop1 ~]# tail -4 /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export ZOOKEEPER_HOME=/usr/local/hadoop/zookeeper
export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
[root@hadoop1 ~]# source /etc/profile

9. Verify that the environment variables work

[root@hadoop1 ~]# java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@hadoop1 ~]# hadoop version
Hadoop 2.7.4
Subversion Unknown -r Unknown
Compiled by root on 2017-08-28T09:30Z
Compiled with protoc 2.5.0
From source with checksum 50b0468318b4ce9bd24dc467b7ce1148
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.4.jar

V. Configuring Hadoop

All of the following files live in /usr/local/hadoop/etc/hadoop/.

1. core-site.xml

<configuration>
    <!-- default filesystem: the HDFS nameservice defined in hdfs-site.xml -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master/</value>
    </property>
    <!-- base directory for Hadoop temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <!-- ZooKeeper quorum used for automatic NameNode failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value>
    </property>
</configuration>

2. hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>

    <!-- logical name of the HDFS nameservice -->
    <property>
        <name>dfs.nameservices</name>
        <value>master</value>
    </property>
    <!-- the two NameNodes under nameservice "master" -->
    <property>
        <name>dfs.ha.namenodes.master</name>
        <value>nn1,nn2</value>
    </property>

    <!-- RPC addresses of the two NameNodes -->
    <property>
        <name>dfs.namenode.rpc-address.master.nn1</name>
        <value>hadoop1:9000</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.master.nn2</name>
        <value>hadoop2:9000</value>
    </property>

    <!-- HTTP (web UI) addresses of the two NameNodes -->
    <property>
        <name>dfs.namenode.http-address.master.nn1</name>
        <value>hadoop1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.master.nn2</name>
        <value>hadoop2:50070</value>
    </property>

    <!-- JournalNode listen addresses -->
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <!-- where the NameNodes read/write shared edits: the JournalNode quorum -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop3:8485;hadoop4:8485;hadoop5:8485/master</value>
    </property>
    <!-- local directory where each JournalNode stores its edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop/dfs/journal</value>
    </property>
    <!-- client-side class that locates the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.master</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <!-- fence the old active NameNode over ssh, falling back to a no-op -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
shell(/bin/true)</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>

    <!-- automatic failover via ZKFC -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value>
    </property>
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>2000</value>
    </property>
</configuration>

3. yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>
    <!-- enable ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <!-- cluster id for RM HA -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
    </property>
    <!-- logical ids of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop1</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop2</value>
    </property>
    <!-- automatic RM failover -->
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <!-- RM state recovery -->
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <!-- persist RM state in ZooKeeper -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop3:2181,hadoop4:2181,hadoop5:2181</value>
    </property>
    <!-- scheduler addresses -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>hadoop1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>hadoop2:8030</value>
    </property>
    <!-- resource-tracker addresses -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>hadoop1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>hadoop2:8031</value>
    </property>
    <!-- client addresses -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>hadoop1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>hadoop2:8032</value>
    </property>
    <!-- admin addresses -->
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>hadoop1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>hadoop2:8033</value>
    </property>
    <!-- web UI addresses -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop1:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>hadoop2:8088</value>
    </property>
</configuration>

4. mapred-site.xml

<configuration>
    <!-- run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- job history server -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
    </property>
</configuration>

5. slaves

[root@hadoop1 hadoop]# cat slaves 
hadoop3
hadoop4
hadoop5

6. hadoop-env.sh

export JAVA_HOME=/usr/local/jdk    # add this line

VI. Cloning the Virtual Machines

1. Clone four VMs from the hadoop1 template and regenerate each clone's MAC address

2. Change the hostnames to hadoop2 through hadoop5

3. Change the IP addresses

4. Set up passwordless SSH logins between all machines (public-key distribution); a sketch of steps 2-4 follows
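
A sketch of steps 2-4, shown for the hadoop2 clone (ens33 is again an assumption; repeat with the matching hostname and IP on each clone):

[root@hadoop2 ~]# hostnamectl set-hostname hadoop2
[root@hadoop2 ~]# nmcli connection modify ens33 ipv4.addresses 192.168.100.12/24
[root@hadoop2 ~]# nmcli connection up ens33
# then, on every host, push the public key to all five hosts:
[root@hadoop2 ~]# for h in hadoop1 hadoop2 hadoop3 hadoop4 hadoop5; do ssh-copy-id -i /root/.ssh/id_rsa.pub root@$h; done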

VII. Configuring ZooKeeper

[root@hadoop3 ~]# echo 1 > /usr/local/hadoop/zookeeper/data/myid    # on hadoop3; must match server.1 in zoo.cfg
[root@hadoop4 ~]# echo 2 > /usr/local/hadoop/zookeeper/data/myid    # on hadoop4; must match server.2
[root@hadoop5 ~]# echo 3 > /usr/local/hadoop/zookeeper/data/myid    # on hadoop5; must match server.3

VIII. Starting the Cluster

1. Start ZooKeeper on hadoop3-5

[root@hadoop3 ~]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /usr/local/hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@hadoop3 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/hadoop/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@hadoop3 ~]# jps
2184 QuorumPeerMain
2237 Jps
# same operation on hadoop4 and hadoop5

2. Initialize the HA state in ZooKeeper from hadoop1

[root@hadoop1 ~]# hdfs zkfc -formatZK
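
To confirm the initialization, the znode can be inspected with the ZooKeeper CLI from any of hadoop3-5; the root listing should now include a hadoop-ha entry:

[root@hadoop3 ~]# zkCli.sh -server hadoop3:2181
[zk: hadoop3:2181(CONNECTED) 0] ls /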

3. Start the JournalNodes on hadoop3-5

[root@hadoop3 ~]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-hadoop3.out
[root@hadoop3 ~]# jps
2244 JournalNode
2293 Jps
2188 QuorumPeerMain

4. Format the NameNode on hadoop1

[root@hadoop1 ~]# hdfs namenode -format
...
17/08/29 22:53:30 INFO util.ExitUtil: Exiting with status 0
17/08/29 22:53:30 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.100.11
************************************************************/

5. Start the freshly formatted NameNode on hadoop1

[root@hadoop1 ~]# hadoop-daemon.sh start namenode
starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop1.out
[root@hadoop1 ~]# jps
2422 Jps
2349 NameNode

6. On hadoop2, sync the metadata from nn1 (hadoop1) to nn2 (hadoop2)

[root@hadoop2 ~]# hdfs namenode -bootstrapStandby
...
17/08/29 22:55:45 INFO namenode.TransferFsImage: Image Transfer timeout configured to 60000 milliseconds
17/08/29 22:55:45 INFO namenode.TransferFsImage: Transfer took 0.00s at 0.00 KB/s
17/08/29 22:55:45 INFO namenode.TransferFsImage: Downloaded file fsimage.ckpt_0000000000000000000 size 321 bytes.
17/08/29 22:55:45 INFO util.ExitUtil: Exiting with status 0
17/08/29 22:55:45 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop2/192.168.100.12
************************************************************/

7. Start the NameNode on hadoop2

[root@hadoop2 ~]# hadoop-daemon.sh start namenode

8. Start all remaining cluster services (start-all.sh runs start-dfs.sh and then start-yarn.sh)

[root@hadoop1 ~]# start-all.sh

9. Start YARN's second ResourceManager on hadoop2 (start-all.sh only starts the ResourceManager on the node it runs on)

[root@hadoop2 ~]# yarn-daemon.sh start resourcemanager
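
The ResourceManager HA state can be checked from the command line; rm1 and rm2 are the ids defined in yarn-site.xml, and one should report active, the other standby:

[root@hadoop1 ~]# yarn rmadmin -getServiceState rm1
[root@hadoop1 ~]# yarn rmadmin -getServiceState rm2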

10. Start the history server

[root@hadoop1 ~]# mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /usr/local/hadoop/logs/mapred-root-historyserver-hadoop1.out
[root@hadoop1 ~]# jps
3026 DFSZKFailoverController
3110 ResourceManager
3894 JobHistoryServer
3927 Jps
2446 NameNode

11. Check the processes on hadoop3-5

[root@hadoop3 ~]# jps
2480 DataNode
2722 Jps
2219 JournalNode
2174 QuorumPeerMain
2606 NodeManager
[root@hadoop4 ~]# jps
2608 NodeManager
2178 QuorumPeerMain
2482 DataNode
2724 Jps
2229 JournalNode
[root@hadoop5 ~]# jps
2178 QuorumPeerMain
2601 NodeManager
2475 DataNode
2717 Jps
2223 JournalNode
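
As a quick sanity check before the failover test, HDFS should accept writes through the HA nameservice (the path /test is just an example):

[root@hadoop1 ~]# hdfs dfs -mkdir /test
[root@hadoop1 ~]# hdfs dfs -put /etc/hosts /test/
[root@hadoop1 ~]# hdfs dfs -ls /test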

IX. Testing

1. Connect

Open the two NameNode web UIs at http://hadoop1:50070 and http://hadoop2:50070; one NameNode should report active and the other standby. [Original screenshots of the two web UIs omitted.]

2. Kill the NameNode on hadoop2

[root@hadoop2 ~]# jps
2742 NameNode
3016 DFSZKFailoverController
4024 JobHistoryServer
4057 Jps
3133 ResourceManager
[root@hadoop2 ~]# kill -9 2742
[root@hadoop2 ~]# jps
3016 DFSZKFailoverController
3133 ResourceManager
4205 Jps
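
The failover can also be verified from the command line; with hadoop2's NameNode dead, nn1 (the id from hdfs-site.xml) should now report active:

[root@hadoop1 ~]# hdfs haadmin -getServiceState nn1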

[Original screenshots omitted: after the kill, the surviving NameNode on hadoop1 reports active.]

