LVS+Keepalived: Building a Highly Available Load Balancer (Testing)

1. Starting the Services

First, start the LVS real server script on each real server node:

[root@localhost ~]# /etc/init.d/lvsrs start
start LVS of REALServer

Then start the Keepalived service on the master and the backup Director Server, and check the LVS routing table:

[root@DR1 ~]# /etc/init.d/keepalived start
[root@DR1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  bogon:http rr
  -> real-server1:http            Route   1      1          0
  -> real-server2:http            Route   1      1          0
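The /etc/init.d/lvsrs script comes from the configuration part of this series and is not reproduced in this article. For orientation, a minimal sketch of what such an LVS-DR real server script usually contains is shown below; the VIP value is taken from the 192.168.12.135 address used throughout this test, everything else is the standard DR-mode loopback/ARP setup and should be checked against your own script.

#!/bin/bash
# lvsrs (sketch): bind the VIP to lo:0 and suppress ARP replies for it,
# so that only the Director answers ARP requests for the VIP.
VIP=192.168.12.135                      # VIP used in this test
case "$1" in
start)
        /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        /sbin/route add -host $VIP dev lo:0
        echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
        echo "start LVS of REALServer"
        ;;
stop)
        /sbin/ifconfig lo:0 down
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
        echo "close LVS of REALServer"
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac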

At this point the system log for the Keepalived service looks like this:

[root@localhost ~]# tail -f /var/log/messages
Feb 28 10:01:56 localhost Keepalived: Starting Keepalived v1.1.19 (02/27,2011)
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Configuration is using : 12063 Bytes
Feb 28 10:01:56 localhost Keepalived: Starting Healthcheck child process, pid=4623
Feb 28 10:01:56 localhost Keepalived_vrrp: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived: Starting VRRP child process, pid=4624
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.246:80]
Feb 28 10:01:56 localhost Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.237:80]
Feb 28 10:01:57 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:01:58 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:01:58 localhost avahi-daemon[2778]: Registering new address record for 192.168.12.135 on eth0.

The log shows the healthcheck and VRRP child processes starting, the health checkers being activated for the two real servers, and the VRRP instance entering the MASTER state and binding the virtual IP 192.168.12.135 to eth0.
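The log refers to /etc/keepalived/keepalived.conf, which was built in the configuration part of this series. A minimal sketch that is consistent with the addresses, the rr scheduler and the TCP health checks seen above could look like the following; the interface name, virtual_router_id, priorities and timeouts are assumptions and must be taken from your own configuration.

vrrp_instance VI_1 {
    state MASTER                  # BACKUP on the standby Director
    interface eth0                # assumed interface name
    virtual_router_id 51          # assumed, must match on both Directors
    priority 100                  # assumed, lower value on the standby
    advert_int 1
    virtual_ipaddress {
        192.168.12.135
    }
}

virtual_server 192.168.12.135 80 {
    delay_loop 6
    lb_algo rr                    # matches the "rr" scheduler shown by ipvsadm
    lb_kind DR
    protocol TCP

    real_server 192.168.12.246 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
        }
    }
    real_server 192.168.12.237 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            connect_port 80
        }
    }
}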

2. High Availability Test

High availability is provided by the two LVS Director Servers. To simulate a failure, we first stop the Keepalived service on the master Director Server and then watch the Keepalived log on the backup Director Server:

Feb 28 10:08:52 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup avahi-daemon[3349]: Registering new address record for 192.168.12.135 on eth0.
Feb 28 10:08:59 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135

The log shows that as soon as the master fails, the backup detects it, switches to the MASTER role, takes over the virtual IP, and binds it to eth0.
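The whole exercise only requires the init scripts already used above; a sketch of the two commands involved:

# on the master Director: simulate a failure by stopping Keepalived
[root@DR1 ~]# /etc/init.d/keepalived stop

# on the backup Director: watch the takeover happen in real time
[root@lvs-backup ~]# tail -f /var/log/messages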

Next, restart the Keepalived service on the master Director Server and keep watching the log on the backup Director Server:

Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup avahi-daemon[3349]: Withdrawing address record for 192.168.12.135 on eth0.

The log shows that once the backup receives an advertisement with a higher priority, meaning the master is healthy again, it returns to the BACKUP role and releases the virtual IP.
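To double-check which node currently owns the VIP after the switch back, the address can be inspected on both Directors; a quick sketch, with the device name taken from the log above:

# on the master Director: the VIP should be listed again
[root@DR1 ~]# ip addr show dev eth0 | grep 192.168.12.135

# on the backup Director: the same command should print nothing
[root@lvs-backup ~]# ip addr show dev eth0 | grep 192.168.12.135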

3. Load Balancing Test

Assume that the web document root on both real server nodes is /webdata/www. Run the following on each node:

On real server 1:
echo "This is real server1" > /webdata/www/index.html
On real server 2:
echo "This is real server2" > /webdata/www/index.html

Then open a browser, go to http://192.168.12.135 and keep refreshing the page. If the responses alternate between "This is real server1" and "This is real server2", LVS is load balancing the requests.
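Instead of refreshing a browser, the round-robin behaviour can also be verified from any test client with a small curl loop; a sketch, assuming curl is installed on the client:

# send ten requests to the VIP; with the rr scheduler and no persistence,
# the responses should alternate between the two pages
for i in $(seq 1 10); do curl -s http://192.168.12.135/; done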

4. Failover Test

The failover test checks whether, when a node fails, the Keepalived health-check module detects it in time, removes the failed node, and lets the remaining healthy node carry the service. Here we stop the service on the real server 1 node to simulate a node failure, then check the logs on the master and backup Directors:

Feb 28 10:14:12 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] failed !!!
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Removing service [192.168.12.246:80] from VS [192.168.12.135:80]
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:14:12 localhost Keepalived_healthcheckers: SMTP alert successfully sent.

The log shows that after the Keepalived health-check module detects that 192.168.12.246 has failed, it removes the node from the cluster. If you now visit http://192.168.12.135, you should only see "This is real server2", because node 1 has failed and has been taken out of the pool.
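Assuming the web service on the real servers is Apache httpd (the article only says to stop the service on the real server 1 node), the fault can be injected and confirmed like this; a sketch:

# on real server 1: simulate a node failure by stopping the web service
[root@localhost ~]# /etc/init.d/httpd stop

# on the active Director: real server 1 should disappear from the LVS table
[root@DR1 ~]# ipvsadm -Ln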

Now restart the service on the real server 1 node; the Keepalived log shows the following:

Feb 28 10:15:48 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] success.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Adding service [192.168.12.246:80] to VS [192.168.12.135:80]
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: SMTP alert successfully sent.

The log shows that once the health-check module detects that 192.168.12.246 is healthy again, it adds the node back into the cluster. Visit http://192.168.12.135 again and keep refreshing: both "This is real server1" and "This is real server2" should reappear, confirming that real server 1 has rejoined the pool.
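The "SMTP alert successfully sent" lines in the logs above come from Keepalived's built-in mail notification. A minimal global_defs sketch that would produce such alerts might look like the following; the mail addresses and router_id are placeholders, only the smtp_server address 192.168.12.1 appears in the log, and smtp_alert typically also has to be enabled for the real_server checks.

global_defs {
    notification_email {
        admin@example.com                            # placeholder recipient
    }
    notification_email_from keepalived@example.com   # placeholder sender
    smtp_server 192.168.12.1                         # matches the SMTP server in the log
    smtp_connect_timeout 30
    router_id LVS_DEVEL                              # placeholder identifier
}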
