Linux Clustering
LB: Load Balancing
scales concurrent processing capacity
HA: High Availability
availability = uptime / (uptime + fault-recovery time)
99%,99.9%,99.99%
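Each extra nine cuts the allowed downtime by a factor of ten; a quick sketch of the arithmetic using the availability formula above:

```shell
# Allowed downtime per year for a given availability level:
# downtime = (1 - availability) * minutes_per_year
downtime_min() {
    awk -v a="$1" 'BEGIN { printf "%.2f\n", (1 - a) * 365 * 24 * 60 }'
}

for a in 0.99 0.999 0.9999; do
    printf '%s%%  ->  %s minutes/year\n' \
        "$(awk -v a="$a" 'BEGIN { printf "%g", a * 100 }')" \
        "$(downtime_min "$a")"
done
```

So 99% availability still allows roughly 3.6 days of downtime a year, while 99.99% allows under an hour.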
HP (HPC): High Performance Computing
parallel processing clusters
distributed storage: distributed file systems
the mechanism of splitting a large task into small tasks that are processed separately
rsync+inotify: file synchronization
sersync: file synchronization
fencing: isolation
node fencing: STONITH
resource fencing:
LVS
Linux Virtual Server
Hardware
F5 BIG-IP
Citrix NetScaler
A10
Software
Layer 4
LVS
Layer 7: reverse proxy
Nginx
http, smtp, pop3, imap
haproxy
http, tcp (mysql, smtp)
LVS
ipvsadm: command-line tool for managing cluster services
ipvs: the kernel module
CIP-->VIP-->DIP-->RIP
Three forwarding types
NAT: network address translation
cluster nodes and the director must be on the same IP network
RIPs are usually private addresses, used only for communication among cluster nodes
the director sits between clients and real servers and handles all inbound and outbound traffic
real servers must point their gateway at the DIP
supports port mapping
real servers may run any operating system
in larger deployments the director easily becomes the bottleneck
DR: direct routing
cluster nodes and the director must be on the same physical network
RIPs may be public addresses, allowing convenient remote management and monitoring
the director handles only inbound requests; response packets are sent by the real servers directly to the clients
real servers must not point their gateway at the DIP
port mapping is not supported
TUN: tunneling
cluster nodes may span the Internet
RIPs must be public addresses
the director handles only inbound requests; response packets are sent by the real servers directly to the clients
real server gateways must not point at the director
only operating systems that support tunneling can be used as real servers
port mapping is not supported
Fixed (static) scheduling
rr: round robin
wrr: weighted round robin
sh: source hash
dh: destination hash
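sh and dh both hash an address so that the same client (or destination) keeps landing on the same RS. A toy bash sketch of the idea (ipvs uses its own internal hash table; cksum here is only a stand-in hash):

```shell
# Toy illustration of source-hash scheduling: the same client IP
# always maps to the same real server.
RS=("192.168.10.7" "192.168.10.8")

pick_rs() {
    local h
    h=$(printf '%s' "$1" | cksum | cut -d' ' -f1)   # stand-in hash
    echo "${RS[$((h % ${#RS[@]}))]}"
}

pick_rs 10.0.0.1   # the same client IP ...
pick_rs 10.0.0.1   # ... always gets the same RS
```

This determinism is what gives sh its session-affinity behavior.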
Dynamic scheduling
lc: least connection
overhead = active*256 + inactive
the server with the smallest value is chosen
wlc: weighted least connection
overhead = (active*256 + inactive)/weight
sed: shortest expected delay
overhead = (active+1)*256/weight
nq: never queue
lblc: locality-based least connection
dynamic counterpart of dh
lblcr: locality-based least connection with replication
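The difference between wlc and sed shows up with idle servers: wlc ties at zero overhead regardless of weight, while sed's +1 lets the heavier server win. A small sketch of the two formulas:

```shell
# Overhead formulas from the notes:
#   wlc: (active*256 + inactive) / weight
#   sed: (active+1)*256 / weight
wlc_overhead() { awk -v a="$1" -v i="$2" -v w="$3" 'BEGIN { printf "%.2f\n", (a*256 + i)/w }'; }
sed_overhead() { awk -v a="$1" -v w="$2" 'BEGIN { printf "%.2f\n", (a + 1)*256/w }'; }

# Two idle servers, weights 3 and 1: wlc ties at 0.00 for both,
# while sed gives the weight-3 server the lower (winning) overhead.
wlc_overhead 0 0 3   # 0.00
wlc_overhead 0 0 1   # 0.00
sed_overhead 0 3     # 85.33
sed_overhead 0 1     # 256.00
```

In both schemes the server with the smallest overhead gets the next connection.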
ipvsadm:
Managing cluster services
add: -A -t|u|f service-address [-s scheduler]
-t: TCP cluster service
-u: UDP cluster service
-f: fwm, firewall mark
service-address is then a mark number
#ipvsadm -A -t 172.16.100.1:80 -s rr
modify: -E
delete: -D -t|u|f service-address
Managing the RSes of a cluster service
add: -a -t|u|f service-address -r server-address [-g|-i|-m] [-w weight] [-x upper] [-y lower]
-t|u|f service-address: a previously defined cluster service
-r server-address: the address of an RS; in the NAT model, IP:port can be used for port mapping
[-g|-i|-m]: LVS forwarding type
-g: DR (the default)
-i: TUN
-m: NAT
[-w weight]: server weight
modify: -e
delete: -d -t|u|f service-address -r server-address
# ipvsadm -a -t 172.16.100.1:80 -r 192.168.10.8 -m
# ipvsadm -a -t 172.16.100.1:80 -r 192.168.10.9 -m
Viewing:
-L|-l
-n: show host addresses and ports numerically
--stats: statistics
--rate: rates
--timeout: show the tcp, tcpfin and udp session timeouts
--sort: sort the output
-c: show current ipvs connections
Deleting all cluster services:
-C: flush all ipvs rules
Saving rules:
-S
#ipvsadm -S > /path/to/somefile
Reloading previously saved rules:
-R
# ipvsadm -R < /path/to/somefile
Clock skew between nodes should not exceed one second
NTP: Network Time Protocol
ntpdate timeserver (synchronize with a time server; the server must run an ntp service)
NAT configuration example
director: VIP 172.16.100.1, DIP 192.168.10.7
# yum install ipvsadm -y
# ipvsadm -A -t 172.16.100.1:80 -s rr
# ipvsadm -a -t 172.16.100.1:80 -r 192.168.10.8 -m
# ipvsadm -a -t 172.16.100.1:80 -r 192.168.10.9 -m
# service ipvsadm save
# echo 1 > /proc/sys/net/ipv4/ip_forward
Change the scheduler to wrr
# ipvsadm -E -t 172.16.100.1:80 -s wrr
# ipvsadm -e -t 172.16.100.1:80 -r 192.168.10.8 -m -w 3
# ipvsadm -e -t 172.16.100.1:80 -r 192.168.10.9 -m -w 1
realserver1: 192.168.10.8, gateway 192.168.10.7
# yum install httpd -y
# ntpdate 192.168.10.7
# echo "my test one" > /var/www/html/index.html
# service httpd restart
realserver2: 192.168.10.9, gateway 192.168.10.7
# yum install httpd -y
# ntpdate 192.168.10.7
# echo "my test two" > /var/www/html/index.html
# service httpd restart
DR:
arptables:
kernel parameters
arp_ignore: defines how ARP requests are answered
0: respond if the target address is configured on any local interface
1: respond only if the target address is configured on the interface the request arrived on
arp_announce: defines how local addresses are announced
0: announce any local address on any interface
1: try to announce only addresses whose network matches the target network
2: announce only addresses that match the network of the outgoing interface
DR configuration example
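To make these ARP settings survive a reboot on a DR real server, they can also be written to /etc/sysctl.conf and applied with sysctl -p (a sketch; eth0 is an assumption about the interface name):

```
# /etc/sysctl.conf fragment for a DR real server
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
```

The interactive sysctl -w commands in the configuration below set the same parameters at runtime only.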
Director:
eth0, DIP: 172.16.100.2
eth0:0, VIP: 172.16.100.1
#ifconfig eth0:0 172.16.100.1/16
#route add -host 172.16.100.1 dev eth0:0
#ipvsadm -A -t 172.16.100.1:80 -s wlc
#ipvsadm -a -t 172.16.100.1:80 -r 172.16.100.7 -g -w 2
#ipvsadm -a -t 172.16.100.1:80 -r 172.16.100.8 -g -w 1
RS1:
eth0, RIP1: 172.16.100.7
lo:0, VIP: 172.16.100.1
#sysctl -w net.ipv4.conf.eth0.arp_announce=2
#sysctl -w net.ipv4.conf.all.arp_announce=2
#sysctl -w net.ipv4.conf.eth0.arp_ignore=1
#sysctl -w net.ipv4.conf.all.arp_ignore=1
#ifconfig lo:0 172.16.100.1 broadcast 172.16.100.1 netmask 255.255.255.255
#route add -host 172.16.100.1 dev lo:0
RS2:
eth0, RIP2: 172.16.100.8
lo:0, VIP: 172.16.100.1
#sysctl -w net.ipv4.conf.eth0.arp_announce=2
#sysctl -w net.ipv4.conf.all.arp_announce=2
#sysctl -w net.ipv4.conf.eth0.arp_ignore=1
#sysctl -w net.ipv4.conf.all.arp_ignore=1
#ifconfig lo:0 172.16.100.1 broadcast 172.16.100.1 netmask 255.255.255.255
#route add -host 172.16.100.1 dev lo:0
Example Director and RealServer scripts for the DR type:
Director script:
#!/bin/bash
#
# LVS script for VS/DR
# chkconfig: - 90 10
#
. /etc/rc.d/init.d/functions
#
VIP=172.16.100.1
DIP=172.16.100.2
RIP1=172.16.100.7
RIP2=172.16.100.8
PORT=80
RSWEIGHT1=2
RSWEIGHT2=5
#
case "$1" in
start)
/sbin/ifconfig eth0:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev eth0:0
# Since this is the Director we must be able to forward packets
echo 1 > /proc/sys/net/ipv4/ip_forward
# Clear all iptables rules.
/sbin/iptables -F
# Reset iptables counters.
/sbin/iptables -Z
# Clear all ipvsadm rules/services.
/sbin/ipvsadm -C
# Add an IP virtual service for the VIP on port 80
# In this recipe, we will use the round-robin scheduling method.
# In production, however, you should use a weighted, dynamic scheduling method.
/sbin/ipvsadm -A -t $VIP:80 -s wlc
# Now direct packets for this VIP to
# the real server IP (RIP) inside the cluster
/sbin/ipvsadm -a -t $VIP:80 -r $RIP1 -g -w $RSWEIGHT1
/sbin/ipvsadm -a -t $VIP:80 -r $RIP2 -g -w $RSWEIGHT2
/bin/touch /var/lock/subsys/ipvsadm &> /dev/null
;;
stop)
# Stop forwarding packets
echo 0 > /proc/sys/net/ipv4/ip_forward
# Reset ipvsadm
/sbin/ipvsadm -C
# Bring down the VIP interface
/sbin/ifconfig eth0:0 down
/sbin/route del -host $VIP
/bin/rm -f /var/lock/subsys/ipvsadm
echo "ipvs is stopped..."
;;
status)
if [ ! -e /var/lock/subsys/ipvsadm ]; then
echo "ipvsadm is stopped ..."
else
echo "ipvs is running ..."
ipvsadm -L -n
fi
;;
*)
echo "Usage: $0 {start|stop|status}"
;;
esac
RealServer script:
#!/bin/bash
#
# Script to start LVS DR real server.
# chkconfig: - 90 10
# description: LVS DR real server
#
. /etc/rc.d/init.d/functions
VIP=172.16.100.1
host=`/bin/hostname`
case "$1" in
start)
# Start LVS-DR real server on this machine.
/sbin/ifconfig lo down
/sbin/ifconfig lo up
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
/sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
/sbin/route add -host $VIP dev lo:0
;;
stop)
# Stop LVS-DR real server loopback device(s).
/sbin/ifconfig lo:0 down
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
;;
status)
# Status of LVS-DR real server.
islothere=`/sbin/ifconfig lo:0 | grep $VIP`
isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
if [ ! "$islothere" -o ! "$isrothere" ]; then
# Either the route or the lo:0 device
# not found.
echo "LVS-DR real server Stopped."
else
echo "LVS-DR real server Running."
fi
;;
*)
# Invalid entry.
echo "$0: Usage: $0 {start|status|stop}"
exit 1
;;
esac
Example RS health-check script, version 1:
#!/bin/bash
#
VIP=192.168.10.3
CPORT=80
FAIL_BACK=127.0.0.1
FBSTATUS=0
RS=("192.168.10.7" "192.168.10.8")
RSTATUS=("1" "1")
RW=("2" "1")
RPORT=80
TYPE=g
add() {
ipvsadm -a -t $VIP:$CPORT -r $1:$RPORT -$TYPE -w $2
[ $? -eq 0 ] && return 0 || return 1
}
del() {
ipvsadm -d -t $VIP:$CPORT -r $1:$RPORT
[ $? -eq 0 ] && return 0 || return 1
}
while :; do
let COUNT=0
for I in ${RS[*]}; do
if curl --connect-timeout 1 http://$I &> /dev/null; then
if [ ${RSTATUS[$COUNT]} -eq 0 ]; then
add $I ${RW[$COUNT]}
[ $? -eq 0 ] && RSTATUS[$COUNT]=1
fi
else
if [ ${RSTATUS[$COUNT]} -eq 1 ]; then
del $I
[ $? -eq 0 ] && RSTATUS[$COUNT]=0
fi
fi
let COUNT++
done
sleep 5
done
Example RS health-check script, version 2:
#!/bin/bash
#
VIP=192.168.10.3
CPORT=80
FAIL_BACK=127.0.0.1
RS=("192.168.10.7" "192.168.10.8")
declare -a RSSTATUS
RW=("2" "1")
RPORT=80
TYPE=g
CHKLOOP=3
LOG=/var/log/ipvsmonitor.log
addrs() {
ipvsadm -a -t $VIP:$CPORT -r $1:$RPORT -$TYPE -w $2
[ $? -eq 0 ] && return 0 || return 1
}
delrs() {
ipvsadm -d -t $VIP:$CPORT -r $1:$RPORT
[ $? -eq 0 ] && return 0 || return 1
}
checkrs() {
local I=1
while [ $I -le $CHKLOOP ]; do
if curl --connect-timeout 1 http://$1 &> /dev/null; then
return 0
fi
let I++
done
return 1
}
initstatus() {
local I
local COUNT=0;
for I in ${RS[*]}; do
if ipvsadm -L -n | grep "$I:$RPORT" &> /dev/null; then
RSSTATUS[$COUNT]=1
else
RSSTATUS[$COUNT]=0
fi
let COUNT++
done
}
initstatus
while :; do
let COUNT=0
for I in ${RS[*]}; do
if checkrs $I; then
if [ ${RSSTATUS[$COUNT]} -eq 0 ]; then
addrs $I ${RW[$COUNT]}
[ $? -eq 0 ] && RSSTATUS[$COUNT]=1 && echo "`date +'%F %H:%M:%S'`, $I is back." >> $LOG
fi
else
if [ ${RSSTATUS[$COUNT]} -eq 1 ]; then
delrs $I
[ $? -eq 0 ] && RSSTATUS[$COUNT]=0 && echo "`date +'%F %H:%M:%S'`, $I is gone." >> $LOG
fi
fi
let COUNT++
done
sleep 5
done
LVS persistent connections:
Regardless of the scheduling algorithm, LVS persistence dispatches requests from the same client to the previously selected RS for a certain period of time.
persistence template (an in-memory table):
maps each client to the RS assigned to it;
ipvsadm -A|E ... -p timeout
timeout: persistence duration in seconds; default 300;
persistence is needed for SSL-based services, for example;
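For example, a persistent HTTPS service might look like this (the addresses and the one-hour timeout are illustrative; requires root and the ip_vs module):

```
#ipvsadm -A -t 172.16.100.1:443 -s rr -p 3600
#ipvsadm -a -t 172.16.100.1:443 -r 172.16.100.7 -g -w 2
#ipvsadm -a -t 172.16.100.1:443 -r 172.16.100.8 -g -w 1
```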
PPC (persistent port connections): requests from the same client to the same cluster service always go to the previously selected RS
PCC (persistent client connections): requests from the same client to any port always go to the previously selected RS
PNMPP: persistent netfilter-marked port connections
PREROUTING
port 80: mark 8
port 23: mark 8
#iptables -t mangle -A PREROUTING -d 192.168.10.3 -i eth0 -p tcp --dport 80 -j MARK --set-mark 8
#iptables -t mangle -A PREROUTING -d 192.168.10.3 -i eth0 -p tcp --dport 23 -j MARK --set-mark 8
#ipvsadm -A -f 8 -s rr
#ipvsadm -a -f 8 -r 192.168.10.7 -g -w 2
#ipvsadm -a -f 8 -r 192.168.10.8 -g -w 5
High-availability cluster internals
FailOver: failover
resource stickiness: location
how strongly a resource prefers to run on a given node, defined via a score
CRM: Cluster Resource Manager (crmd, port 5560)
LRM: Local Resource Manager
RA: Resource Agent (a script)
Resource constraints: Constraint
colocation constraint:
whether resources may run on the same node
score:
positive: may run together
negative: must not run together
location constraint: score
positive: prefers this node
negative: prefers to leave this node
order constraint:
defines the order in which resources are started and stopped
e.g. vip, ipvs
ipvs --> vip
-inf: negative infinity
inf: positive infinity
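In the pacemaker-era crm shell these three constraint types look roughly as follows (a sketch only; the resource names webip and webserver are hypothetical):

```
# crmsh sketch: a VIP and a web server that must stay together,
# with the VIP started before the web server
location l-prefer-node1 webip 100: node1.name
colocation c-web-with-ip inf: webserver webip
order o-ip-before-web inf: webip webserver
```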
Resource isolation (fencing):
node level:
STONITH
resource level:
e.g. an FC SAN switch can deny a given node access at the storage level
split-brain: occurs when cluster nodes can no longer reliably learn each other's state
one consequence: contention for shared storage
active/active: high availability
IDE (ATA): 133MB/s
SATA: 600MB/s
7200 rpm
IOPS: ~100
SCSI: 320MB/s
SAS:
15000 rpm
IOPS: ~200
USB 3.0: 400MB/s
mechanical disks:
random reads/writes
sequential reads/writes
solid state disks:
IDE, SCSI: parallel interfaces
SATA, SAS, USB: serial interfaces
DAS:
Direct Attached Storage
attached directly to the host bus (BUS)
access is at the block level
NAS:
Network Attached Storage
a file server: file-level access
SAN:
Storage Area Network
FC SAN
IP SAN: iSCSI
SCSI: Small Computer System Interface
without_quorum_policy (what to do when quorum is lost):
freeze: freeze resource management
stop: stop all resources
ignore: ignore the loss of quorum
Messaging Layer
heartbeat (v1, v2, v3): UDP port 694
heartbeat v1: ships its own resource manager
haresources
heartbeat v2: ships its own resource managers
haresources
crm
heartbeat v3: the crm resource manager was split out into an independent project, pacemaker
packages: heartbeat, pacemaker, cluster-glue
corosync
cman
keepalived
ultramonkey
CRM
haresources, crm (heartbeat v1/v2)
pacemaker (heartbeat v3 / corosync)
rgmanager (cman)
Resource types:
primitive (native)
clone
STONITH
Cluster Filesystem
dlm: Distributed Lock Manager
group
master/slave
RA: Resource Agent
RA Classes:
Legacy heartbeat v1 RA
LSB (/etc/rc.d/init.d)
OCF (Open Cluster Framework)
pacemaker
linbit (drbd)
stonith
Fencing levels:
node level
STONITH
resource level
FC SAN switch
STONITH devices
1. Power Distribution Units (PDU)
Power Distribution Units are an essential element in managing power capacity and functionality for critical network, server and data center equipment. They can provide remote load monitoring of connected equipment and individual outlet power control for remote power recycling.
2. Uninterruptible Power Supplies (UPS)
A UPS provides emergency power to connected equipment by supplying power from a separate source in the event of utility power failure.
3. Blade Power Control Devices
If you are running a cluster on a set of blades, then the power control device in the blade enclosure is the only candidate for fencing. Of course, this device must be
capable of managing single blade computers.
4. Lights-out Devices
Lights-out devices (IBM RSA, HP iLO, Dell DRAC) are becoming increasingly popular and may even become standard in off-the-shelf computers. However, they are inferior to UPS devices, because they share a power supply with their host (a cluster node). If a node stays without power, the device supposed to control it would be just as useless. In that case, the CRM would continue its attempts to fence the node indefinitely while all other resource operations would wait for the fencing/STONITH operation to complete.
5. Testing Devices
Testing devices are used exclusively for testing purposes. They are usually more gentle on the hardware. Once the cluster goes into production, they must be replaced
with real fencing devices.
e.g. ssh 172.16.100.1 'reboot'
meatware (requires a human operator to confirm the fencing)
How STONITH is implemented:
stonithd
stonithd is a daemon which can be accessed by local processes or over the network. It accepts the commands which correspond to fencing operations: reset, power-off, and power-on. It can also check the status of the fencing device.
The stonithd daemon runs on every node in the CRM HA cluster. The stonithd instance running on the DC node receives a fencing request from the CRM. It is up to this and other stonithd programs to carry out the desired fencing operation.
STONITH Plug-ins
For every supported fencing device there is a STONITH plug-in which is capable of controlling said device. A STONITH plug-in is the interface to the fencing device.
On each node, all STONITH plug-ins reside in /usr/lib/stonith/plugins (or in /usr/lib64/stonith/plugins for 64-bit architectures). All STONITH plug-ins look the same to stonithd, but are quite different on the other side reflecting the nature of the fencing device.
Some plug-ins support more than one device. A typical example is ipmilan (or external/ipmi) which implements the IPMI protocol and can control any device which supports this protocol.
CIB: Cluster Information Base (XML format)
Installing and configuring heartbeat
heartbeat v2
HA web service
node1, node2
node names resolve via /etc/hosts
each node name must match the output of uname -n
passwordless ssh between the nodes
time synchronization
1. Configure IPs
node1 172.16.100.6
node2 172.16.100.7
2. Configure hostnames
on node1:
#hostname node1.name
#vim /etc/sysconfig/network
on node2:
#hostname node2.name
#vim /etc/sysconfig/network
3. Set up passwordless ssh
on node1:
#ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
#ssh-copy-id -i .ssh/id_rsa.pub root@172.16.100.7
on node2:
#ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''
#ssh-copy-id -i .ssh/id_rsa.pub root@172.16.100.6
4. Configure /etc/hosts
#vim /etc/hosts
172.16.100.6 node1.name node1
172.16.100.7 node2.name node2
#scp /etc/hosts node2:/etc
5. Time synchronization
#ntpdate server_ip
#crontab -e
*/5 * * * * /sbin/ntpdate server_ip &>/dev/null
#scp /var/spool/cron/root node2:/var/spool/cron/
6. Install heartbeat
heartbeat - Heartbeat subsystem for High-Availability Linux (core package)
heartbeat-devel - Heartbeat development package
heartbeat-gui - Provides a GUI interface to manage heartbeat clusters
heartbeat-ldirectord - Monitor daemon for maintaining high-availability resources; generates ipvs rules automatically and health-checks back-end real servers
heartbeat-pils - Provides a general plugin and interface loading library
heartbeat-stonith - Provides an interface to Shoot The Other Node In The Head
7. Configure heartbeat
Three configuration files:
1. the authentication key file (mode 600): authkeys
2. the heartbeat service configuration file: ha.cf
3. the resource management configuration file: haresources
#cp -p /usr/share/doc/heartbeat/{authkeys,ha.cf,haresources} /etc/ha.d/
#dd if=/dev/random count=1 bs=512 | md5sum    //generate a random string
#vim /etc/ha.d/authkeys
auth 1
1 md5 <random string generated above>
#vim /etc/ha.d/ha.cf
node node1.name
node node2.name
bcast eth0    //uncomment to enable broadcast heartbeats
#chkconfig httpd off
#vim /etc/ha.d/haresources
node1.name IPaddr::172.16.100.1/16/eth0 httpd
#scp -p authkeys haresources ha.cf node2:/etc/ha.d/
#service heartbeat start
#ssh node2 'service heartbeat start'
#/usr/lib/heartbeat/hb_standby    //fail over to the standby node
8. Shared storage (172.16.100.10)
#vim /etc/exports
/web/htdocs 172.16.0.0/255.255.0.0(ro)
node1~#ssh node2 '/etc/init.d/heartbeat stop'
node1~#service heartbeat stop
node1~#vim /etc/ha.d/haresources
node1.name IPaddr::172.16.100.1/16/eth0 Filesystem::172.16.100.10:/web/htdocs::/var/www/html::nfs httpd
node1~#scp /etc/ha.d/haresources node2:/etc/ha.d/
node1~#service heartbeat start
node1~#ssh node2 'service heartbeat start'
Managing resources with crm
node1~#ssh node2 '/etc/init.d/heartbeat stop'
node1~#service heartbeat stop
node1~#vim /etc/ha.d/ha.cf
mcast eth0 225.0.0.15 694 1 0    //uncomment to enable multicast heartbeats
crm respawn
node1~#/usr/lib/heartbeat/ha_propagate
node1~#service heartbeat start
node1~#ssh node2 'service heartbeat start'
An HA MySQL cluster based on heartbeat v2 with crm
shared storage options: nfs, samba, iscsi
NFS: holds the MySQL application and data
/etc/my.cnf --> /etc/mysql/my.cnf
$MYSQL_BASE
--defaults-extra-file=
Shared storage (172.16.100.10):
#pvcreate /dev/sdb2
#vgcreate myvg /dev/sdb2
#lvcreate -L 10G -n mydata myvg
#mke2fs -j /dev/myvg/mydata
#groupadd -g 3306 mysql
#useradd -u 3306 -g mysql -s /sbin/nologin -M mysql
#vim /etc/fstab
/dev/myvg/mydata /mydata ext3 defaults 0 0
#mkdir /mydata/data
#chown -R mysql.mysql /mydata/data/
#vim /etc/exports
/mydata 172.16.0.0/255.255.0.0(no_root_squash,rw)
#exportfs -arv
node1~#ssh node2 'service heartbeat stop'
node1~#service heartbeat stop
node1~#groupadd -g 3306 mysql
node1~#useradd -g 3306 -s /sbin/nologin -M mysql
node1~#mount -t nfs 172.16.100.10:/mydata /mydata    //after mounting, verify that the mysql user can write
node1~#umount /mydata
node2~#groupadd -g 3306 mysql
node2~#useradd -g 3306 -s /sbin/nologin -M mysql
node2~#mount -t nfs 172.16.100.10:/mydata /mydata    //after mounting, verify that the mysql user can write
node2~#umount /mydata
Install MySQL on node1 and node2, setting the database data directory to /mydata/data
node1~#service heartbeat start
node1~#ssh node2 'service heartbeat start'
RHEL 6.x RHCS: corosync
RHEL 5.x RHCS: openais, cman, rgmanager
corosync: Messaging Layer
openais: AIS
corosync --> pacemaker
SUSE Linux Enterprise Server: Hawk, WebGUI
LCMC: Linux Cluster Management Console
RHCS: Conga (luci/ricci)
WebGUI
keepalived: VRRP, 2 nodes
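keepalived implements VRRP directly; a minimal 2-node sketch for the VIP used above (the interface, router id, priority and password are assumptions; the backup node uses state BACKUP and a lower priority):

```
! keepalived.conf sketch for the MASTER node
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        172.16.100.1/16
    }
}
```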