The Way of Spark (Advanced) - From Beginner to Expert, Section 15: Setting Up a Kafka 0.8.2.1 Cluster

This section lays the groundwork for the next one, which covers using Kafka with Spark Streaming.

Main contents

1. Kafka cluster setup

1. Kafka Cluster Setup

Kafka installation and configuration

Download the Scala 2.10 build, kafka_2.10-0.8.2.1.tgz, from the Apache Kafka download page. After the download completes, run

tar -zxvf kafka_2.10-0.8.2.1.tgz

to extract it. The extracted directory has the following layout.
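As a rough sketch of what to expect (taken from a stock 0.8.2.1 binary download; the exact listing may differ slightly):

root@sparkmaster:/hadoopLearning# ls kafka_2.10-0.8.2.1
LICENSE  NOTICE  bin  config  libs

The bin directory holds the start scripts and command-line tools, config holds server.properties and the other property files, and libs holds the Kafka and Scala jars.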

Go into the config directory and edit server.properties so that its contents look like this:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkmaster

# ... settings in between are omitted here; the defaults are fine ...

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
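Note that zookeeper.connect points at a ZooKeeper ensemble on sparkmaster, sparkslave01 and sparkslave02, which this walkthrough assumes is already up and running before the brokers are started. As a quick sanity check, ZooKeeper's ruok four-letter command should return imok from every node; the nc invocation below is just one way to send it and assumes netcat is installed:

echo ruok | nc sparkmaster 2181
echo ruok | nc sparkslave01 2181
echo ruok | nc sparkslave02 2181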

Then copy the whole installation directory to the other two machines:

root@sparkmaster:/hadoopLearning# scp -r kafka_2.10-0.8.2.1/ sparkslave01:/hadoopLearning/
root@sparkmaster:/hadoopLearning# scp -r kafka_2.10-0.8.2.1/ sparkslave02:/hadoopLearning/

On sparkslave01, edit server.properties so that its contents look like this:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkslave01

# ... settings in between are omitted here; the defaults are fine ...

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

On sparkslave02, edit server.properties so that its contents look like this:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=sparkslave02

# ... settings in between are omitted here; the defaults are fine ...

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=sparkmaster:2181,sparkslave01:2181,sparkslave02:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
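With server.properties adjusted on all three machines, each broker can be started from its own installation directory and the cluster checked with a test topic. The commands below are only a sketch: the log file name and the topic name test are arbitrary choices, while kafka-server-start.sh and kafka-topics.sh are the scripts shipped in the bin directory of Kafka 0.8.2.1.

# Run on sparkmaster, sparkslave01 and sparkslave02, from /hadoopLearning/kafka_2.10-0.8.2.1
nohup bin/kafka-server-start.sh config/server.properties > kafka-server.log 2>&1 &

# On sparkmaster: create a topic replicated across the three brokers, then list topics to confirm
bin/kafka-topics.sh --create --zookeeper sparkmaster:2181 --replication-factor 3 --partitions 3 --topic test
bin/kafka-topics.sh --list --zookeeper sparkmaster:2181

If everything is wired up correctly, the list command should print test, and the next section can build on this cluster from Spark Streaming.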
