Building High-Availability Load Balancing with LVS+Keepalived (Testing)

2015-07-23
Summary: This article walks through testing an LVS+Keepalived high-availability load-balancing setup; readers who need it can use it as a reference.
1. Starting the LVS High-Availability Cluster Services

First, start the service on each real server node:
[root@localhost ~]# /etc/init.d/lvsrs start
start LVS of REALServer
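The lvsrs script comes from the companion configuration article and is not reproduced here. As a rough sketch, a real-server script of this kind in LVS-DR mode binds the VIP to the loopback interface and suppresses ARP replies for it; everything below except the VIP 192.168.12.135 (taken from the logs later in this article) is an assumption:
#!/bin/bash
# /etc/init.d/lvsrs -- minimal sketch of an LVS-DR real-server script
VIP=192.168.12.135
case "$1" in
start)
    # Bind the VIP to lo:0 so this host accepts packets addressed to the VIP...
    /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
    # ...but never answers ARP for it (the Director owns the VIP on the LAN).
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "start LVS of REALServer"
    ;;
stop)
    /sbin/ifconfig lo:0 down
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "close LVS of REALServer"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
esac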
Then start the Keepalived service on the master and backup Director Servers respectively:
[root@DR1 ~]# /etc/init.d/keepalived start
[root@DR1 ~]# ipvsadm -L
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP bogon:http rr
-> real-server1:http Route 1 1 0
-> real-server2:http Route 1 1 0
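The names above ("bogon", "real-server1") come from reverse DNS lookups; ipvsadm -Ln skips the lookups and prints numeric addresses and ports, which is usually easier to check against the Keepalived configuration:
[root@DR1 ~]# ipvsadm -Ln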
At this point the system log for the Keepalived service looks like this:
[root@localhost ~]# tail -f /var/log/messages
Feb 28 10:01:56 localhost Keepalived: Starting Keepalived v1.1.19 (02/27,2011)
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Configuration is using : 12063 Bytes
Feb 28 10:01:56 localhost Keepalived: Starting Healthcheck child process, pid=4623
Feb 28 10:01:56 localhost Keepalived_vrrp: Netlink reflector reports IP 192.168.12.25 added
Feb 28 10:01:56 localhost Keepalived: Starting VRRP child process, pid=4624
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.246:80]
Feb 28 10:01:56 localhost Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
Feb 28 10:01:56 localhost Keepalived_healthcheckers: Activating healtchecker for service [192.168.12.237:80]
Feb 28 10:01:57 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:01:58 localhost Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:01:58 localhost Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:01:58 localhost avahi-daemon[2778]: Registering new address record for 192.168.12.135 on eth0.
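You can also confirm directly that the master now holds the VIP (192.168.12.135, per the log above); if the command prints a matching inet line, Keepalived has bound the address:
[root@DR1 ~]# ip addr show eth0 | grep 192.168.12.135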

2. High-Availability Test

High availability is provided by the two LVS Director Servers. To simulate a failure, we first stop the Keepalived service on the master Director Server and watch the Keepalived log on the backup Director Server:
Feb 28 10:08:52 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
Feb 28 10:08:54 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 added
Feb 28 10:08:54 lvs-backup avahi-daemon[3349]: Registering new address record for 192.168.12.135 on eth0.
Feb 28 10:08:59 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.12.135
The log shows that the backup detected the master's failure immediately: it switched to the MASTER role, took over the master's virtual IP resource, and bound the virtual IP to its eth0 device.
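For reference, the failure was simulated and observed with nothing more than the following (the original does not show these commands):
[root@DR1 ~]# /etc/init.d/keepalived stop        # on the master Director Server
[root@lvs-backup ~]# tail -f /var/log/messages   # on the backup Director Server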
Next, restart the Keepalived service on the master Director Server and keep watching the log on the backup Director Server:
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.
Feb 28 10:12:11 lvs-backup Keepalived_vrrp: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup Keepalived_healthcheckers: Netlink reflector reports IP 192.168.12.135 removed
Feb 28 10:12:11 lvs-backup avahi-daemon[3349]: Withdrawing address record for 192.168.12.135 on eth0.
The log shows that once the backup detected that the master had recovered, it returned to the BACKUP role and released the virtual IP resource.
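To confirm that clients see little or no interruption across both transitions, you can poll the VIP from a third machine while stopping and restarting Keepalived on the master. A minimal sketch (the address is this article's VIP; the 2-second timeout is arbitrary):
while true; do
    # Print a timestamp and whichever real server answered; report FAILED on timeout.
    echo "$(date '+%H:%M:%S') $(curl -s -m 2 http://192.168.12.135/ || echo FAILED)"
    sleep 1
done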

3. Load-Balancing Test

Assume the web document root on both real server nodes is /webdata/www; then run the following on each node:
On real server 1:
echo "This is real server1" > /webdata/www/index.html
On real server 2:
echo "This is real server2" > /webdata/www/index.html
Next, open a browser, visit http://192.168.12.135, and refresh the page repeatedly. If you alternately see "This is real server1" and "This is real server2", LVS is balancing the load between the two nodes.
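The same check works from the command line: a few consecutive requests should alternate between the two pages under the rr (round-robin) scheduler, assuming no session persistence (persistence_timeout) is configured in keepalived.conf:
[root@client ~]# for i in $(seq 1 6); do curl -s http://192.168.12.135/; done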

4. Failover Test

The failover test verifies that when a node fails, the Keepalived health-check module detects it promptly, removes the failed node from service, and shifts the traffic to the healthy nodes.
Here we stop the service on the real server 1 node to simulate a failure, then check the logs on the master and backup machines:
Feb 28 10:14:12 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] failed !!!
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Removing service [192.168.12.246:80] from VS [192.168.12.135:80]
Feb 28 10:14:12 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:14:12 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
The log shows that after the Keepalived health-check module detected the failure of host 192.168.12.246, it removed that node from the cluster.
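For reference, the "failure" can be simulated simply by stopping the web service on the node (httpd is an assumption; the article does not name the web server), and the removal verified on the Director:
[root@real-server1 ~]# /etc/init.d/httpd stop
[root@DR1 ~]# ipvsadm -Ln    # 192.168.12.246 should no longer be listed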
Visiting http://192.168.12.135 now should show only "This is real server2", because node 1 has failed and the Keepalived health-check module has removed it from the cluster.
Now restart the service on the real server 1 node; the Keepalived log shows:
Feb 28 10:15:48 localhost Keepalived_healthcheckers: TCP connection to [192.168.12.246:80] success.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Adding service [192.168.12.246:80] to VS [192.168.12.135:80]
Feb 28 10:15:48 localhost Keepalived_healthcheckers: Remote SMTP server [192.168.12.1:25] connected.
Feb 28 10:15:48 localhost Keepalived_healthcheckers: SMTP alert successfully sent.
The log shows that once the Keepalived health-check module detected that host 192.168.12.246 had recovered, it added the node back into the cluster.
Visiting http://192.168.12.135 again and refreshing repeatedly, you should once more see both "This is real server1" and "This is real server2", confirming that Keepalived re-added real server 1 to the cluster after it recovered.

This article is from the "技术成就梦想" (Technology Achieves Dreams) blog.
 