Building a virtual Linux cluster on VMware with LVS + keepalived

2017-02-05

There are plenty of blog posts about using keepalived to give an LVS director active/standby failover plus load balancing across real servers, and I consulted several of them while building this setup. Since every author's lab environment differs, note that this article uses a virtual cluster built on VMware: all VM NICs are in NAT mode and on the same subnet.

Environment:

Four VMs running Red Hat Enterprise Linux 6 (kernel 2.6.32-431.el6.x86_64), with networking configured so that all of them sit on the same subnet.

Configure each host's own IP for now; the virtual IP is set later when configuring LVS.

Installing and configuring LVS

1. Install ipvsadm and keepalived on the master and backup LVS directors

LVS is implemented by the kernel's IPVS module, so first check that the kernel supports IPVS, then install the ipvsadm management tool:

[root@rex ~]# modprobe -l | grep ipvs

kernel/net/netfilter/ipvs/ip_vs.ko

kernel/net/netfilter/ipvs/ip_vs_rr.ko

kernel/net/netfilter/ipvs/ip_vs_wrr.ko

kernel/net/netfilter/ipvs/ip_vs_lc.ko

kernel/net/netfilter/ipvs/ip_vs_wlc.ko

kernel/net/netfilter/ipvs/ip_vs_lblc.ko

kernel/net/netfilter/ipvs/ip_vs_lblcr.ko

kernel/net/netfilter/ipvs/ip_vs_dh.ko

kernel/net/netfilter/ipvs/ip_vs_sh.ko

kernel/net/netfilter/ipvs/ip_vs_sed.ko

kernel/net/netfilter/ipvs/ip_vs_nq.ko

kernel/net/netfilter/ipvs/ip_vs_ftp.ko

kernel/net/netfilter/ipvs/ip_vs_pe_sip.ko

Install ipvsadm:

[root@rex ~]# yum install ipvsadm

If ipvsadm --help prints its usage information, the installation succeeded.

Install keepalived

Install the build dependencies: yum install -y openssl openssl-devel (these were the two I was missing; if the build reports other missing packages, install whatever it asks for).

Download keepalived:

http://www.keepalived.org/software/keepalived-1.2.6.tar.gz

Build and install:

# tar zxvf keepalived-1.2.6.tar.gz

# cd keepalived-1.2.6

# ./configure --sysconf=/etc --with-kernel-dir=/lib/modules/2.6.32-431.el6.x86_64  # put the config files under /etc and build against the running kernel's headers

# make

# make install

# ln -s /usr/local/sbin/keepalived /sbin/  # symlink the binary into /sbin so it can be run directly

If keepalived --help prints its usage information, the installation succeeded.

Configure LVS

Master LVS configuration (/etc/keepalived/keepalived.conf):

! Configuration File for keepalived

global_defs {
    notification_email {        # alert e-mail recipients
        XXXX@qq.com
    }
    notification_email_from Keepalived@localhost
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER                # marks this as the master LVS
    interface eth1              # interface that serves client traffic
    virtual_router_id 51
    priority 100                # higher priority becomes the master
    advert_int 1                # advertisement interval in seconds
    authentication {            # auth type and password; must match on master and backup
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {         # the virtual service IP (VIP)
        192.168.153.110
    }
}

virtual_server 192.168.153.110 80 {
    delay_loop 6                # health-check interval in seconds
    lb_algo rr                  # load-balancing algorithm: round robin
    lb_kind DR                  # forwarding mode: DR (direct routing) is the fastest,
                                # but all machines must be on one subnet; NAT and TUN
                                # are the other two modes
    protocol TCP

    real_server 192.168.153.131 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.153.135 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Backup LVS configuration: copy the configuration above and change two things: 1. state MASTER becomes state BACKUP; 2. lower priority 100, here to 80.
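For reference, the only section that actually differs on the backup director is the vrrp_instance block; a sketch of it with the two changes applied (everything else is copied verbatim from the master):

```
vrrp_instance VI_1 {
    state BACKUP        # was MASTER
    interface eth1
    virtual_router_id 51
    priority 80         # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.153.110
    }
}
```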

Configure the real servers

Repeat the following steps on every real server.

Create a script named lvsrs under /etc/init.d:

#!/bin/bash
VIP=192.168.153.110
. /etc/rc.d/init.d/functions
case "$1" in
start)
        echo "Start LVS of Realserver!"
        /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
;;
stop)
        /sbin/ifconfig lo:0 down
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
        echo "RealServer Stopped"
;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac
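The four echo lines are the heart of the script. In DR mode every real server holds the VIP on lo:0, so it must be stopped from answering ARP for it: arp_ignore=1 makes the host reply only to ARP queries for addresses on the receiving interface, and arp_announce=2 makes it announce only its best real source address, never the VIP on lo. If you want the settings to survive a reboot independently of the script, they can also go into /etc/sysctl.conf, for example:

```
# /etc/sysctl.conf (excerpt) - suppress ARP replies for the VIP held on lo
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```

Apply with sysctl -p.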

Make it executable: chmod 755 /etc/init.d/lvsrs

Run the script:

# service lvsrs start

Check the interfaces with ifconfig:

lo        Link encap:Local Loopback  

inet addr:127.0.0.1  Mask:255.0.0.0

inet6 addr: ::1/128 Scope:Host

UP LOOPBACK RUNNING  MTU:16436  Metric:1

RX packets:8 errors:0 dropped:0 overruns:0 frame:0

TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

collisions:0 txqueuelen:0

RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

lo:0      Link encap:Local Loopback  

inet addr:192.168.153.110  Mask:255.255.255.255

UP LOOPBACK RUNNING  MTU:16436  Metric:1

Install and start Apache on every real server.

Add an index.html test page under /var/www/html/, using a different page on each server so they can be told apart.
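A small sketch of creating such a page; the helper name and the label are made up for illustration, and on a real server the docroot would be /var/www/html:

```shell
#!/bin/sh
# Hypothetical helper: write a page that identifies the real server, so the
# two back ends can be distinguished in the browser during the test.
make_test_page() {   # make_test_page <docroot> <label>
  mkdir -p "$1"
  printf '<h1>served by %s</h1>\n' "$2" > "$1/index.html"
}

make_test_page /tmp/demo-docroot realserver1   # use /var/www/html on a real server
cat /tmp/demo-docroot/index.html               # -> <h1>served by realserver1</h1>
```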

Start LVS and test:

Start keepalived on both the master and the backup: /etc/init.d/keepalived start

Check the startup status in the logs:

Master LVS:

Feb  4 20:11:41 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Transition to MASTER STATE

Feb  4 20:11:42 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Entering MASTER STATE

Feb  4 20:11:42 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) setting protocol VIPs.

Feb  4 20:11:42 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth2 for 192.168.153.110

Feb  4 20:11:42 rex Keepalived_healthcheckers[9521]: Netlink reflector reports IP 192.168.153.110 added

Feb  4 20:11:47 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth2 for 192.168.153.110

Backup LVS:

Feb  4 20:18:16 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Received higher prio advert

Feb  4 20:18:16 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Entering BACKUP STATE

Feb  4 20:18:16 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) removing protocol VIPs.

Feb  4 20:18:16 rex Keepalived_healthcheckers[9521]: Netlink reflector reports IP 192.168.153.110 removed

Feb  4 20:19:06 rex dhclient[1265]: DHCPREQUEST on eth2 to 192.168.153.254 port 67 (xid=0x6f9b7b38)
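The backup log shows the election rule at work: an instance that hears a higher-priority advert drops to BACKUP. A toy sketch of just that comparison (real VRRP additionally breaks priority ties on IP address and supports a non-preempt mode):

```shell
#!/bin/sh
# Toy model of the VRRP election seen in the logs: higher priority wins.
elect() {   # elect <my_priority> <peer_priority>  -> MASTER or BACKUP
  if [ "$1" -gt "$2" ]; then
    echo MASTER
  else
    echo BACKUP
  fi
}

elect 100 80   # the director configured with priority 100 -> MASTER
elect 80 100   # the director configured with priority 80  -> BACKUP
```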

Use ip addr and ipvsadm to check the VIP and the forwarding table.

Master LVS# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:0c:29:3e:ce:ce brd ff:ff:ff:ff:ff:ff

inet 192.168.153.133/24 brd 192.168.153.255 scope global eth1

inet 192.168.153.110/32 scope global eth1

inet6 fe80::20c:29ff:fe3e:cece/64 scope link

valid_lft forever preferred_lft forever

Backup LVS# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN

link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

inet 127.0.0.1/8 scope host lo

inet6 ::1/128 scope host

valid_lft forever preferred_lft forever

2: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

link/ether 00:0c:29:0e:6c:b0 brd ff:ff:ff:ff:ff:ff

inet 192.168.153.134/24 brd 192.168.153.255 scope global eth2

inet6 fe80::20c:29ff:fe0e:6cb0/64 scope link

valid_lft forever preferred_lft forever

[root@rex ~]# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port           Forward Weight ActiveConn InActConn

TCP  bogon:http rr

  -> bogon:http                   Route   1      0          0

  -> bogon:http                   Route   1      0          0

Tests:

1. High availability: master/backup failover

Stop keepalived on the master, watch the backup's log; then bring the master back and watch the backup's log again:

Feb  4 20:11:41 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Transition to MASTER STATE

Feb  4 20:11:42 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Entering MASTER STATE

Feb  4 20:11:42 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) setting protocol VIPs.

Feb  4 20:11:42 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth2 for 192.168.153.110

Feb  4 20:11:42 rex Keepalived_healthcheckers[9521]: Netlink reflector reports IP 192.168.153.110 added

Feb  4 20:11:47 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth2 for 192.168.153.110

Feb  4 20:18:16 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Received higher prio advert

Feb  4 20:18:16 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) Entering BACKUP STATE

Feb  4 20:18:16 rex Keepalived_vrrp[9522]: VRRP_Instance(VI_1) removing protocol VIPs.

Feb  4 20:18:16 rex Keepalived_healthcheckers[9521]: Netlink reflector reports IP 192.168.153.110 removed

Feb  4 20:19:06 rex dhclient[1265]: DHCPREQUEST on eth2 to 192.168.153.254 port 67 (xid=0x6f9b7b38)

Feb  4 20:19:06 rex dhclient[1265]: DHCPACK from 192.168.153.254 (xid=0x6f9b7b38)

Feb  4 20:19:06 rex dhclient[1265]: bound to 192.168.153.134 -- renewal in 737 seconds.

Feb  4 20:19:06 rex NetworkManager[1241]: <info> (eth2): DHCPv4 state changed renew -> renew

Feb  4 20:19:06 rex NetworkManager[1241]: <info>   address 192.168.153.134

Feb  4 20:19:06 rex NetworkManager[1241]: <info>   prefix 24 (255.255.255.0)

Feb  4 20:19:06 rex NetworkManager[1241]: <info>   gateway 192.168.153.2

Feb  4 20:19:06 rex NetworkManager[1241]: <info>   nameserver '192.168.153.2'

Feb  4 20:19:06 rex NetworkManager[1241]: <info>   domain name 'localdomain'

2. Load balancing

Open 192.168.153.110 in a local browser and keep refreshing; the distinct pages of the different real servers appear in turn.
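The alternation seen in the browser is simply the rr scheduler cycling through the real-server list. A toy sketch of the round-robin choice, using this article's two real-server IPs (a real IPVS scheduler also skips servers that failed their health check or carry weight 0):

```shell
#!/bin/sh
# Toy round-robin: request number n goes to server (n mod server-count).
SERVERS="192.168.153.131 192.168.153.135"

schedule() {        # schedule <request-number>  -> chosen server
  req=$1
  set -- $SERVERS   # load the server list into the positional parameters
  shift $(( req % $# ))
  echo "$1"
}

schedule 0   # -> 192.168.153.131
schedule 1   # -> 192.168.153.135
schedule 2   # -> 192.168.153.131 (wraps around)
```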


3. Real-server failover

Stop the web service on realserver1 and watch the LVS log; then restore it and watch again:

Feb  4 20:25:59 rex Keepalived_healthcheckers[9588]: Netlink reflector reports IP 192.168.153.110 added

Feb  4 20:26:04 rex Keepalived_vrrp[9589]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth1 for 192.168.153.110

Feb  4 20:27:53 rex Keepalived_healthcheckers[9588]: TCP connection to [192.168.153.135]:80 failed !!!

Feb  4 20:27:53 rex Keepalived_healthcheckers[9588]: Removing service [192.168.153.135]:80 from VS [192.168.153.110]:80

Feb  4 20:27:53 rex Keepalived_healthcheckers[9588]: Remote SMTP server [192.168.200.1]:25 connected.

Feb  4 20:28:14 rex Keepalived_healthcheckers[9588]: Error reading data from remote SMTP server [192.168.200.1]:25.

Feb  4 20:28:29 rex Keepalived_healthcheckers[9588]: TCP connection to [192.168.153.135]:80 success.

Feb  4 20:28:29 rex Keepalived_healthcheckers[9588]: Adding service [192.168.153.135]:80 to VS [192.168.153.110]:80

Feb  4 20:28:29 rex Keepalived_healthcheckers[9588]: Remote SMTP server [192.168.200.1]:25 connected.
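The remove/add cycle in this log is driven by the TCP_CHECK configured earlier: every delay_loop seconds keepalived tries to open a TCP connection to each real server within connect_timeout seconds. A minimal shell approximation of one such probe (it leans on bash's /dev/tcp device and the coreutils timeout command; keepalived itself does this in C with non-blocking sockets):

```shell
#!/bin/sh
# Minimal approximation of keepalived's TCP_CHECK probe: a server is UP if a
# TCP connection to host:port succeeds within the timeout, DOWN otherwise.
check_rs() {   # check_rs <host> <port> [timeout-seconds]
  host=$1; port=$2; tmo=${3:-3}
  if timeout "$tmo" bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
    echo UP
  else
    echo DOWN
  fi
}

check_rs 192.168.153.135 80   # probe a real server, as the director would
```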

 