Flume NG failover and load balancing: tests and caveats

It's been a while since my last post. Lately I've been digging into Storm, Flume, and Kafka. Today I'll walk through my tests of Flume failover and load balancing: the scenarios and some conclusions.

The test environment uses five configuration files, i.e. five agents.

One of them is the main configuration file, the one where the failover and load-balance relationships are defined (flume-sink.properties). That file changes from scenario to scenario, so I won't list it here; it is shown in full in each scenario below.

The other four configuration files are all similar:

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
a2.sources.r1.type = avro
a2.sources.r1.channels = c1
a2.sources.r1.bind = 192.168.220.159
a2.sources.r1.port = 44411

# Describe the sink
a2.sinks.k1.type = logger
a2.sinks.k1.channel = c1

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

The other three need only the agent name (and the listening port) changed.
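Before wiring the main agent to these collectors, it's handy to confirm each avro source is actually listening. A quick sketch (the host and ports come from the configs above; this is my own helper, not part of the original test):

```python
import socket

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports taken from the four collector configs; the host is the test box
# used in this post. On any other machine these will print "closed".
for port in (44411, 44422, 44433, 44444):
    print(port, "open" if port_open("192.168.220.159", port) else "closed")
```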

Scenario 1: idea (run failover and load balancing at the same time, to get both load sharing and fault tolerance and make the setup more resilient)

Configuration file (flume-sink.properties):

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2 k3
a1.channels = c1

# Describe the sinkgroups
a1.sinkgroups = g1 g2
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
a1.sinkgroups.g1.processor.maxpenalty = 10000
a1.sinkgroups.g2.sinks = k1 k3
a1.sinkgroups.g2.processor.type = load_balance
a1.sinkgroups.g2.processor.backoff = true
a1.sinkgroups.g2.processor.selector = round_robin

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1

# Describe the sinks
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.220.159
a1.sinks.k1.port = 44411
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = 192.168.220.159
a1.sinks.k2.port = 44422
a1.sinks.k3.type = avro
a1.sinks.k3.channel = c1
a1.sinks.k3.hostname = 192.168.220.159
a1.sinks.k3.port = 44433

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

Test results:

K1, K2, and K3 all received data. K2, however, has the lowest priority and should not have received anything unless K1 had gone down.
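To drive these tests I pushed events at the syslogtcp source on port 5140. A minimal sketch of such a sender (the helper name and the throwaway local server are mine, added so the snippet runs standalone; against the real agent you would call it with localhost and 5140):

```python
import socket
import threading

def send_syslog_event(host, port, message):
    """Send one newline-terminated syslog-style line over TCP.
    <13> is the priority header (facility user, severity notice);
    Flume's syslogtcp source parses it off and keeps the body."""
    with socket.create_connection((host, port)) as s:
        s.sendall(f"<13>{message}\n".encode("utf-8"))

# Stand-in for the Flume source so the sketch is self-contained.
received = []
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of 5140
server.listen(1)

def accept_one():
    conn, _ = server.accept()
    with conn:
        data = b""
        while True:                     # read until the sender closes
            chunk = conn.recv(1024)
            if not chunk:
                break
            data += chunk
        received.append(data.decode("utf-8"))

t = threading.Thread(target=accept_one)
t.start()
send_syslog_event("127.0.0.1", server.getsockname()[1], "hello flume")
t.join()
server.close()
print(received[0].strip())
```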

When I first ran this test I had forgotten to declare k3, which produced a pile of errors and made me believe a sink could not be shared between sinkgroups. I even went and asked on the community mailing list whether sinkgroups could share a sink, and someone replied that they could not. Now I know both of us were wrong. Ha.

Mailing-list thread:

There is actually one more thing missing from the configuration above: K1 has a failover partner, but K3 does not. So we need to add K4 as a failover node for K3.

This was pointed out to me by @晨色星空, and it clicked right away. Ha.

# Describe the sinkgroups
a1.sinkgroups = g1 g2 g3
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
a1.sinkgroups.g1.processor.maxpenalty = 10000
a1.sinkgroups.g2.sinks = k1 k3
a1.sinkgroups.g2.processor.type = load_balance
a1.sinkgroups.g2.processor.backoff = true
a1.sinkgroups.g2.processor.selector = round_robin
a1.sinkgroups.g3.sinks = k3 k4
a1.sinkgroups.g3.processor.type = failover
a1.sinkgroups.g3.processor.priority.k3 = 10
a1.sinkgroups.g3.processor.priority.k4 = 5
a1.sinkgroups.g3.processor.maxpenalty = 10000
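For intuition about what g1 and g3 should do, the failover processor's selection rule can be modeled roughly like this (a toy model of my own, not Flume's actual FailoverSinkProcessor; the real one also applies a backoff penalty, capped by maxpenalty, to sinks that have failed):

```python
def failover_pick(priorities, alive):
    """Pick the live sink with the highest priority, as a failover
    sink processor would (toy model; ignores backoff penalties)."""
    candidates = [s for s in priorities if s in alive]
    if not candidates:
        raise RuntimeError("no live sink in the group")
    return max(candidates, key=lambda s: priorities[s])

# Group g1: k1 has priority 10, k2 has 5.
g1 = {"k1": 10, "k2": 5}
print(failover_pick(g1, {"k1", "k2"}))  # k1 while both are up
print(failover_pick(g1, {"k2"}))        # k2 only once k1 is down
```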

So let's add g3 and k4, send data again, and see what happens.

Well. All four sinks received the data.

Next, let's kill K1 and see what happens.

This time K2, K3, and K4 all received data, which is genuinely odd: K4 is K3's failover node and should not be receiving anything, while K2 getting data is expected.

Then we bring K1 back up and kill K3 instead.

The result was similar to the case above. I can only say the test results are a bit strange. Ha.

Scenario 2: idea (keep failover and load balancing on separate, non-overlapping sinks)

Configuration file (flume-sink.properties):

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2 k3 k4
a1.channels = c1

# Describe the sinkgroups
a1.sinkgroups = g1 g2
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
a1.sinkgroups.g1.processor.maxpenalty = 10000
a1.sinkgroups.g2.sinks = k3 k4
a1.sinkgroups.g2.processor.type = load_balance
a1.sinkgroups.g2.processor.backoff = true
a1.sinkgroups.g2.processor.selector = round_robin

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = localhost
a1.sources.r1.channels = c1

# Describe the sinks
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = 192.168.220.159
a1.sinks.k1.port = 44411
a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = 192.168.220.159
a1.sinks.k2.port = 44422
a1.sinks.k3.type = avro
a1.sinks.k3.channel = c1
a1.sinks.k3.hostname = 192.168.220.159
a1.sinks.k3.port = 44433
a1.sinks.k4.type = avro
a1.sinks.k4.channel = c1
a1.sinks.k4.hostname = 192.168.220.159
a1.sinks.k4.port = 44444

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

Test results:
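The round_robin selector configured for g2 can likewise be sketched as a toy model (my own illustration, not Flume's LoadBalancingSinkProcessor; with backoff = true the real processor also temporarily blacklists sinks that fail):

```python
import itertools

class RoundRobin:
    """Toy round-robin selector over a sink group (e.g. g2 = k3, k4)."""
    def __init__(self, sinks):
        self._cycle = itertools.cycle(sinks)

    def next_sink(self):
        # Each event batch goes to the next sink in the cycle.
        return next(self._cycle)

g2 = RoundRobin(["k3", "k4"])
picks = [g2.next_sink() for _ in range(4)]
print(picks)  # ['k3', 'k4', 'k3', 'k4']
```

This is why, in scenario 2, traffic through g2 should alternate between k3 and k4 while both are healthy.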

