Category archive: Linux

2024 Excellent Open Source Mirror Sites

Enterprise mirrors
Tencent Cloud //mirrors.tencent.com/
Alibaba Cloud //mirrors.aliyun.com/
Netease //mirrors.163.com/
Huawei //mirrors.huaweicloud.com/
Capital Online //mirrors.yun-idc.com/
University mirrors
Tsinghua University //mirrors.tuna.tsinghua.edu.cn/
University of Science and Technology of China //mirrors.ustc.edu.cn/
Beijing Jiaotong University //mirror.bjtu.edu.cn/cn/
Zhejiang University //mirrors.zju.edu.cn/
Official mirror lists
CentOS //mirror-status.centos.org/#cn
Ubuntu //launchpad.net/ubuntu/+cdmirrors
Archlinux //www.archlinux.org/mirrors/status/
Debian //www.debian.org/mirror/list
Fedora //admin.fedoraproject.org/mirrormanager/mirrors

Shell script for batch disk cleanup with DingTalk push notifications

 # BLOG: //renwole.com/
 # This script is meant to be run from a jump host or a machine with
 # passwordless SSH access to the targets, but it can also be adapted to run
 # on a single machine; just comment out the login phase.
 title=$(date "+%Y-%m-%d %H:%M:%S")
 # Host list file: one IP per line, no quotes, e.g. /mnt/renwolecom
 hostnamelist="/tmp/renwole_com"
 # Log of failed cleanups
 renwole_com_disk_clean_log="/tmp/renwole_clean_failure.log"
 #for envNamelist in ${Name[@]}; do
 #  If you fetch the host list from an API, you can filter the JSON with jq
 #  (requires jq 1.5+: yum install epel-release && yum install jq; the EPEL
 #  package on CentOS 7 is 1.5+, on CentOS 6 it is 1.3).
 #  curl -s <url> | jq .data | jq -r '.[].hostname'
 #done > $hostnamelist
 # DingTalk notification; more alerts can be added with further if conditions
 dingding_Notice() {
     dingtalk_openapi_url="//oapi.dingtalk.com"
     dingtalk_openapi_token="Token"
     if [[ "$(disk_use)" -ge 80 ]]; then
         curl ''$dingtalk_openapi_url'/robot/send?access_token='$dingtalk_openapi_token'' \
         -H 'Content-Type: application/json' \
         -d '{"msgtype": "text", "text": { "content": "System logs cleaned successfully, but used space still exceeds 80%. Please fix in time. Machine: '"$hostname"'" } }'
     fi
 }
 # Check whether the machine can be logged in to. Try at most 5 times, waiting
 # 5 seconds each time (on an extremely stuck machine, exhausting the retries
 # may cause a false alarm).
 check_login() {
     for i in $(seq 1 5); do
         $login_docker exit 2>/dev/null && antl=0 && break || antl=$?
         sleep 5
     done
     echo $antl
 }
 # After logging in, check that the needed command is available; retry 3 times
 # on failure (some extremely stuck machines time out when fetching the result).
 check_login_command() {
     for i in $(seq 1 3); do
         $login_docker "type du" >/dev/null 2>&1 && antc=0 && break || antc=$?
         sleep 5
     done
     echo $antc
 }
 # Check whether the machine has normal network connectivity; retry 5 times on
 # failure, waiting 10 seconds per retry (optional).
 check_login_command_network() {
     # If the network is abnormal, check this host from the machine itself
     gateway="www.renwole.com"
     curl_timeout="curl -I -s --connect-timeout 16"
     for i in $(seq 1 5); do
         $login_docker "$curl_timeout $gateway 2>/dev/null -w %{http_code} | tail -n1" && antn=200 && break || antn="$?"
         sleep 10
     done
 }
 # Get the current disk usage remotely
 disk_use() {
     disk_uses=$($login_docker "df -P | sort -n | grep /dev | grep -v -E '(shm|tmp|boot)'")
     disk_status=$(echo $disk_uses | awk -F'[ %]+' '{print $5}')
     echo $disk_status
 }
 # Find the log directories and file types to clean; here, files over 20M
 system_log_file() {
     # Multiple directories supported, space-separated; each must end with '/'
     renwole_log_dir=(/var/log/ /usr/local/nginx/logs/)
     for log_dirs in ${renwole_log_dir[*]}; do
         # Log in remotely, match the file types to clean, and truncate them
         $login_docker "find $log_dirs 2>/dev/null -type f -size +20M \( -name '*.data*' \
         -o -name 'message*' -o -iname 'wtm*' -o -name 'vsa*' -o -name 'secu*' \
         -o -name 'cron*' -o -name '*.log*' \) -exec cp /dev/null {} \; "
     done
 }
 # Find which directories or files take up the most space: sort by size and
 # keep two lines of the largest entries
 system_file_size_check() {
     home_file_size=$($login_docker "du -h --max-depth=5 /home/* | sort -h | tail -n5 | head -n2")
     echo $home_file_size
 }
 # Pipe tokens | process concurrency limit
 renwole_com="/tmp/renwolcomfile"
 [ -e "$renwole_com" ] || mkfifo $renwole_com
 exec 3<>$renwole_com
 rm -rf $renwole_com
 for ((i=1; i<=1000; i++)); do
     echo >&3
 done
 # Loop over the machines, check disk usage, and alert when out of bounds
 for hostname in `cat $hostnamelist`; do
 read -u3
 {
     # Check remotely whether the machine can be logged in to, 30s at most;
     # machines without passwordless login are skipped automatically
     login_docker="timeout 30 ssh -o BatchMode=yes $hostname"
     if [[ $(check_login) -ne 0 ]]; then
         echo "Login failed: $hostname" >> $renwole_com_disk_clean_log
         # Return the token and move on to the next host
         echo >&3
         exit
     fi
     if [[ $(check_login_command) -ne 0 ]]; then
         echo "Cannot find the du command: $hostname" >> $renwole_com_disk_clean_log
         echo >&3
         exit
     fi
     if [[ $(disk_use) -gt 80 ]]; then
         # Disk usage above 80% triggers the cleanup
         system_log_file
     else
         echo "Current disk usage $(disk_use)% does not exceed 80%: $hostname"
         echo >&3
         exit
     fi
     if [[ $(disk_use) -gt 80 ]]; then
         echo "$title $hostname disk cleanup failed. Current usage: $(disk_use)%. File sizes: $(system_file_size_check)" >> $renwole_com_disk_clean_log
         echo "Disk cleanup failed. Current usage: $(disk_use)%: $hostname"
     else
         echo "Disk cleaned successfully. Usage after cleanup: $(disk_use)%: $hostname"
     fi
     echo >&3
 } &
 done
 wait
 # Close the pipe file descriptor
 exec 3<&-
 exec 3>&-
 # DingTalk notification, disabled by default
 # dingding_Notice
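The mkfifo/exec 3<> token trick used in the script above is a general bash pattern for capping concurrency. A minimal standalone sketch (the token count, job list, and file paths are illustrative, not taken from the script):

```shell
#!/usr/bin/env bash
# Limit background jobs to 3 at a time using a FIFO as a token bucket.
fifo=$(mktemp -u)           # path for the FIFO (removed after opening)
results=$(mktemp)           # illustrative output file
mkfifo "$fifo"
exec 3<>"$fifo"             # bind FD 3 to the FIFO for reading and writing
rm -f "$fifo"               # the open FD stays valid after unlinking
for ((i = 0; i < 3; i++)); do echo >&3; done   # preload 3 tokens

for job in 1 2 3 4 5 6; do
    read -u3                # take a token; blocks while 3 jobs are running
    {
        sleep 0.2           # simulated work
        echo "job $job done" >> "$results"
        echo >&3            # return the token
    } &
done
wait
exec 3<&- 3>&-
cat "$results"
```

Each `read -u3` consumes one token and each finished job returns one, so at most three jobs run concurrently.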

Shell script to monitor CPU / memory / disk usage on Linux (DingTalk notification)

 #!/bin/bash
 # BLOG : //renwole.com
 now_time=$(date -u -d"+8 hour" +'%Y-%m-%d %H:%M:%S')
 # Hostname
 hostnamelist=$(hostname)
 # Alert when CPU usage exceeds this threshold
 cpu_warn="60"
 # Alert when free memory drops below this many MB
 mem_warn="2048"
 # Alert when disk usage exceeds this threshold
 disk_warn="80"
 # Each run appends to this log on the machine
 renwole_check_log="/tmp/renwole_check_mem_cpu_disk.log"
 # DingTalk alert token
 dingtalk_openapi="//oapi.dingtalk.com"
 dingtalk_openapi_token="Token"
 # CPU usage
 item_cpu () {
     cpu_idle=$(top -b -n 1 | grep Cpu | awk '{print $2}' | cut -f 1 -d ".")
     echo "$now_time Current CPU usage is $cpu_idle" >> $renwole_check_log
     if [[ "$cpu_idle" -gt "$cpu_warn" ]]; then
         curl ''$dingtalk_openapi'/robot/send?access_token='$dingtalk_openapi_token'' \
         -H 'Content-Type: application/json' \
         -d '{"msgtype": "text", "text": { "content": "Warning: CPU usage on '$hostnamelist' has reached 60%, please take note." } }'
     else
         echo "The CPU is healthy"
     fi
 }
 # Memory usage
 item_mem () {
     mem_free=$(free -m | grep "Mem" | awk '{print $4+$6}')
     echo "$now_time Current free memory is ${mem_free}MB" >> $renwole_check_log
     if [[ "$mem_free" -lt "$mem_warn" ]]; then
         curl ''$dingtalk_openapi'/robot/send?access_token='$dingtalk_openapi_token'' \
         -H 'Content-Type: application/json' \
         -d '{"msgtype": "text", "text": { "content": "Warning: free memory on '$hostnamelist' is below 2048MB, please take note." } }'
     else
         echo "Memory usage is normal"
     fi
 }
 # Disk usage
 item_disk () {
     disk_use=$(df -P | grep /dev/sdb1 | grep -v -E '(tmp|boot)' | awk '{print $5}' | cut -f 1 -d "%")
     echo "$now_time Current disk usage is $disk_use" >> $renwole_check_log
     if [[ "$disk_use" -gt "$disk_warn" ]]; then
         curl ''$dingtalk_openapi'/robot/send?access_token='$dingtalk_openapi_token'' \
         -H 'Content-Type: application/json' \
         -d '{"msgtype": "text", "text": { "content": "Warning: disk usage on '$hostnamelist' has reached 80%, please take note." } }'
     else
         echo "Disk usage does not exceed 80%"
     fi
 }
 item_cpu
 item_mem
 item_disk

Shell script variables and test operators: study notes

I have been studying shell scripting in depth recently, starting with a review of the basics. These operators are easy to forget after a long time without use, so I am taking notes here for future reference.

1. System variables

 $n   Arguments passed to the script or function; n is a digit: the first argument is $1, the second $2
 $?   Exit status of the last command, or return value of a function; 0 on success, non-zero on failure
 $#   Number of arguments passed to the script or function
 $*   All arguments; when double-quoted, a single word: if a script receives two arguments, "$*" equals "$1 $2"
 $0   Name of the command being executed; for a shell script, the path it was invoked with
 $@   When double-quoted, slightly different from $*: if a script receives two arguments, "$@" is equivalent to "$1" "$2"
 $$   Process ID of the current shell; for a shell script, the PID it is running under
 $!   Process ID of the most recent background command
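The parameters above can be seen in action with a short demonstration (the function name `demo` is illustrative):

```shell
#!/usr/bin/env bash
# Demonstrate $#, $1, $*, $? and $$ with a throwaway function.
demo() {
    echo "count: $#"        # number of arguments
    echo "first: $1"        # first argument
    echo "all: $*"          # all arguments as one word when quoted
}
demo alpha beta             # prints count: 2 / first: alpha / all: alpha beta
true
echo "status: $?"           # exit status of the last command: prints status: 0
echo "pid: $$"              # PID of the current shell
```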

2. File and directory tests

 -b file   True if the file exists and is a block special file
 -c file   True if the file exists and is a character special file
 -d file   True if the file exists and is a directory
 -e file   True if the file exists
 -f file   True if the file exists and is a regular file
 -g file   True if the file exists and its SGID bit is set
 -h file   True if the file exists and is a symbolic link
 -k file   True if the file exists and its sticky bit is set
 -L file   True if the file exists and is a symbolic link
 -p file   True if the file exists and is a named pipe
 -r file   True if the file exists and is readable
 -s file   True if the file exists and is non-empty
 -S file   True if the file exists and is a socket
 -t fd     True if file descriptor fd (1 by default) is open and refers to a terminal
 -u file   True if the file exists and its SUID bit is set
 -w file   True if the file exists and is writable
 -x file   True if the file exists and is executable
 [ file1 -nt file2 ]   True if file1 is newer than file2, or file1 exists and file2 does not
 [ file1 -ot file2 ]   True if file1 is older than file2, or file2 exists and file1 does not
 [ file1 -ef file2 ]   True if file1 and file2 refer to the same device and inode number
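A few of these operators in a self-contained example (the temporary paths are created on the fly):

```shell
#!/usr/bin/env bash
# Exercise -d, -f and -s on freshly created paths.
tmpdir=$(mktemp -d)
touch "$tmpdir/a.txt"                    # empty file
echo "data" > "$tmpdir/b.txt"            # non-empty file
[ -d "$tmpdir" ]        && echo "tmpdir is a directory"
[ -f "$tmpdir/a.txt" ]  && echo "a.txt is a regular file"
[ -s "$tmpdir/a.txt" ]  || echo "a.txt is empty"
[ -s "$tmpdir/b.txt" ]  && echo "b.txt is non-empty"
rm -r "$tmpdir"
```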

3. Integer comparisons

 -eq   True if the two numbers are equal          Example: [ "$a" -eq "$b" ]
 -ne   True if they are not equal                 Example: [ "$a" -ne "$b" ]
 -gt   True if a is greater than b                Example: [ "$a" -gt "$b" ]
 -ge   True if a is greater than or equal to b    Example: [ "$a" -ge "$b" ]
 -lt   True if a is less than b                   Example: [ "$a" -lt "$b" ]
 -le   True if a is less than or equal to b       Example: [ "$a" -le "$b" ]
 <     Less than (requires double parentheses)                Example: (("$a" < "$b"))
 <=    Less than or equal (requires double parentheses)       Example: (("$a" <= "$b"))
 >     Greater than (requires double parentheses)             Example: (("$a" > "$b"))
 >=    Greater than or equal (requires double parentheses)    Example: (("$a" >= "$b"))

For small numeric comparisons, awk can also be used.
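The same comparison written three ways: with test, with (( )), and, as suggested above, with awk (the variable values are illustrative):

```shell
#!/usr/bin/env bash
a=3
b=7
[ "$a" -lt "$b" ] && echo "test: $a is less than $b"
(( a < b ))       && echo "(( )): $a is less than $b"
awk -v a="$a" -v b="$b" 'BEGIN { if (a < b) print "awk: " a " is less than " b }'
```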

4. Logical operators

 !    Logical NOT: true if the condition is false              Example: [ ! false ] returns true
 -a   Logical AND: true only if both expressions are true      Example: [ $a -lt 2 -a $b -gt 5 ]
 -o   Logical OR: true if at least one expression is true      Example: [ $a -lt 2 -o $b -gt 5 ]
 [ ] || [ ]   Combine two test commands with OR
 [ ] && [ ]   Combine two test commands with AND
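The -a/-o forms and the ||/&& forms are interchangeable here (the values are illustrative):

```shell
#!/usr/bin/env bash
a=1
b=9
[ "$a" -lt 2 -a "$b" -gt 5 ]      && echo "-a: both conditions hold"
[ "$a" -lt 2 ] && [ "$b" -gt 5 ]  && echo "&&: both conditions hold"
[ "$a" -lt 2 -o "$b" -gt 50 ]     && echo "-o: at least one condition holds"
[ ! "$a" -gt 2 ]                  && echo "!: negation holds"
```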

5. String tests

 ==   True if the two strings are equal (equivalent to =)     Example: [ "str1" = "str2" ]
 !=   True if the strings are not equal                       Example: [ "str1" != "str2" ]
 <    True if str1 sorts before str2 lexicographically        Example: [[ "str1" < "str2" ]]
 >    True if str1 sorts after str2 lexicographically         Example: [[ "str1" > "str2" ]]
 -n   True if the string length is non-zero (not empty)       Example: [ -n "str1" ]
 -z   True if the string length is zero (empty)               Example: [ -z "str1" ]

Note: inside the [ ] construct, "<" must be escaped, for example: [ "str1" \< "str2" ]; inside double brackets no escaping is needed.
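A short demonstration of the escaping difference (the strings are illustrative):

```shell
#!/usr/bin/env bash
s1="apple"
s2="banana"
[ "$s1" = "$s2" ]    || echo "the strings differ"
[ -n "$s1" ]         && echo "s1 is not empty"
[ "$s1" \< "$s2" ]   && echo "single brackets: s1 sorts first (escaped <)"
[[ "$s1" < "$s2" ]]  && echo "double brackets: s1 sorts first (no escape needed)"
```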

Summary:

When testing with -n in the [ ] construct, the variable must be enclosed in double quotes. Using ! -z, or an unquoted variable, may work, but it is unsafe; enclosing the variable in double quotes in string tests is a good habit.
Also, the [[ ]] construct is more general than [ ].

Using memory to speed up disk cache read/write on CentOS 7

On Linux, the /dev/shm directory lives in memory, not on disk. Using /dev/shm/ as a read/write cache for disk files is surprisingly efficient.

By default, the /dev/shm directory is not listed in /etc/fstab; to control its size you need to add the mount manually.

Add the following line at the end of this file:

 $ vim /etc/fstab
 tmpfs /dev/shm tmpfs defaults,size=1G 0 0

Set the size according to your physical memory, generally about 10-50% of it.

Mount the /dev/shm/ directory:

 $ mount -o remount /dev/shm/
 $ mkdir /dev/shm/tmp
 $ chmod 755 /dev/shm/tmp
 $ mount -B /dev/shm/tmp /tmp

Note:

The /dev/shm/tmp mount is lost after a system restart and needs to be re-created. Here is a shell script you can add to startup:

 $ vim /etc/init.d/shmtmp.sh
 #!/bin/bash
 mkdir /dev/shm/tmp
 chmod 755 /dev/shm/tmp
 mount -B /dev/shm/tmp/ /tmp

Then add the following line at the end of this file:

 $ vim /etc/rc.local
 sh /etc/init.d/shmtmp.sh

With this in place, the mount is restored automatically after a reboot and memory improves read/write performance, for example for sessions and other caches under the /tmp directory; speed and efficiency improve considerably.

Let's Encrypt SSL certificate renewal fails: ascii codec cannot encode

While reviewing the server's SSL certificates today, I found that the Let's Encrypt certificate was about to expire, even though the crontab log showed the scheduled task running normally. For example:

 $ cat /var/log/cron
 ...
 CROND[31471]: (root) CMD ( /usr/bin/certbot renew --quiet && /bin/systemctl restart nginx )
 CROND[31470]: (root) MAIL (mailed 375 bytes of output but got status 0x004b#012)
 CROND[31482]: (root) CMD (run-parts /etc/cron.hourly)
 ...

Strangely, the certificate had not been renewed. Why? I then renewed it manually:

 $ /usr/bin/certbot renew --quiet
 Attempting to renew cert from /etc/letsencrypt/renewal/renwole.com.conf produced an unexpected error:
 'ascii' codec can't encode characters in position 247-248: ordinal not in range(128). Skipping.
 All renewal attempts failed. The following certs could not be renewed:
 /etc/letsencrypt/live/renwole.com.conf/fullchain.pem (failure)
 1 renew failure(s), 0 parse failure(s)

The renewal failed with the message that the "ascii" codec cannot encode characters.

After some analysis it turned out that a developer had changed the website's root directory, so Let's Encrypt could not find the relevant configuration file.
PS: Alas, whenever something goes wrong, it falls on operations.

Solution

Modify the site root directory in the following configuration file:

 $ vim /etc/letsencrypt/renewal/renwole.com.conf
 ...
 # Options used in the renewal process
 [renewalparams]
 authenticator = webroot
 installer = None
 account = a07a7160ea489g586aeaada1368ce0d6
 [[webroot_map]]
 renwole.com = /apps/data/www/renwolecom
 ...

Change the webroot path to match the root directory configured in Nginx, then save.

The certificate then renewed successfully.

Use the following command to view the renewal status:

 $ certbot certificates

How to back up and restore Redis data on CentOS 7

What is Redis?

Redis is an in-memory key-value cache and store (i.e. a database) that can also persist data to disk. In this article you will learn how to back up and restore your Redis database on CentOS 7.

Backup Restore Instructions

By default, Redis saves its data to an .rdb file on disk, which is a point-in-time snapshot of the Redis dataset, taken at configured intervals; this makes it well suited for backups.

1. Data backup

On CentOS 7 and other Linux distributions, the default Redis database directory is /var/lib/redis. If you have changed the Redis storage location, you can find it with:

 [root@renwolecom ~]# find / -name "*rdb"

Use the redis-cli management tool to access the database:

 [root@renwolecom ~]# redis-cli

Since most of the data lives in memory, Redis only saves it to disk periodically. To get an up-to-date copy, run:

 10.10.204.64:6379> save
 OK
 (1.02s)

In addition, if Redis has authentication enabled, you need to authenticate before saving, for example:

 10.10.204.64:6379> auth RenwoleQxl5qpKHrh9khuTW
 10.10.204.64:6379> save

Then back up the file, for example:

 [root@renwolecom ~]# cp /var/lib/redis/dump.rdb /apps/redis-backup-20180129
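The save-then-copy steps above can be wrapped into a small script. A minimal sketch, assuming a local Redis without a password and the default RDB path; the backup destination here is illustrative:

```shell
#!/usr/bin/env bash
# Sketch: trigger a synchronous RDB snapshot, then copy it aside.
# Assumptions: local Redis, no AUTH, default dump location.
backup_dir="$(mktemp -d)/redis-backup-$(date +%Y%m%d)"   # illustrative destination
rdb_file="/var/lib/redis/dump.rdb"                       # default RDB location
mkdir -p "$backup_dir"
if command -v redis-cli >/dev/null 2>&1 && redis-cli save >/dev/null 2>&1; then
    cp -p "$rdb_file" "$backup_dir/"                     # -p preserves owner/mode/mtime
    echo "backup written to $backup_dir"
else
    echo "redis not reachable; nothing copied"
fi
```

If AUTH is enabled, pass the password with `redis-cli -a <password> save` before copying.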

2. Data restoration

To restore a backup, replace the existing Redis database file with the backed-up file. To ensure the original data files are not damaged, we recommend restoring to a new Redis server whenever possible.

Stop the Redis database; once stopped, it is offline.

 [root@renwolecom ~]# systemctl stop redis

If restoring to the original Redis server, rename the current data file first:

 [root@renwolecom ~]# mv /var/lib/redis/dump.rdb /var/lib/redis/dump.rdb.old
 [root@renwolecom ~]# cp -p /apps/redis-backup-20180129/dump.rdb /var/lib/redis/dump.rdb

Set the dump.rdb file permissions. The copied data file may not be owned or readable by the redis user, so grant this manually:

 [root@renwolecom ~]# chown redis:redis /var/lib/redis/dump.rdb
 [root@renwolecom ~]# chmod 660 /var/lib/redis/dump.rdb

Start Redis

 [root@renwolecom ~]# systemctl start redis

Done! You can now log in to Redis and verify the data.

Note:

Disable AOF if necessary. AOF logs every write operation to the Redis database; since we are restoring from a point-in-time backup, we do not want Redis to replay the operations stored in its AOF file.

You can tell whether AOF is enabled by listing the directory:

 [root@renwolecom ~]# ls /var/lib/redis/

If you see a file with the .aof suffix, AOF is enabled.

Rename the .aof file:

 [root@renwolecom ~]# mv /var/lib/redis/*.aof /var/lib/redis/appendonly.aof.old

If there are multiple .aof files, rename each of them.

Edit your Redis configuration file and temporarily disable AOF:

 [root@renwolecom ~]# vim /etc/redis/redis.conf
 appendonly no

If you have any questions about the backup process, please leave a comment.

Adding and removing a swap partition on CentOS 7

Swap Introduction:

Linux divides physical memory into segments called pages. Swapping is the process of copying memory pages to a preset area of the hard disk (the swap space) in order to free up memory. The total of physical memory and swap space is the total amount of virtual memory available.

Swap, the swap partition, is similar to virtual memory on Windows: when physical memory is insufficient, part of the hard disk is used as virtual memory, working around the limited physical memory capacity.

Advantages: cost saving.
Disadvantages: lower performance.

This method is not limited to CentOS 7 and can be used on other Linux systems.

Operating user: root.

1. Add swap partition space

Use the dd command to create the swap file /dev/mapper/centos-swap with a size of 2 GB:

 $ dd if=/dev/zero of=/dev/mapper/centos-swap bs=1024 count=2048000

Format the swap partition:

 $ mkswap /dev/mapper/centos-swap

Set the swap partition:

 $ mkswap -f /dev/mapper/centos-swap

Activate the swap partition:

 $ swapon /dev/mapper/centos-swap
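After activation, it is worth confirming that the kernel now reports the new swap space; a quick check using standard tools:

```shell
# Show active swap areas and the overall swap totals.
swapon -s                                                # summary of active swap areas
free -m | awk '/^Swap/ {print "swap total: " $2 " MB, used: " $3 " MB"}'
```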

Set it to mount automatically at boot:

 $ vim /etc/fstab

Add the following at the bottom of the file:

 /dev/mapper/centos-swap swap swap defaults 0 0

2. Delete the swap partition

Deactivate the swap partition in use:

 $ swapoff /dev/mapper/centos-swap

Delete the swap partition file:

 $ rm /dev/mapper/centos-swap

Delete or comment out the following line in the /etc/fstab file so that it is not mounted automatically after reboot:

 /dev/mapper/centos-swap swap swap defaults 0 0

Done!

Keepalived + Nginx dual-network (internal and external) independent failover, active-active dual-master mode (hands-on)

Introduction:

With a high-performance combination like Keepalived + LVS available, why use Keepalived + Nginx? Keepalived was designed for LVS. LVS is a layer-4 load balancer: it has the advantage of high performance, but no back-end health check mechanism of its own. Keepalived provides a series of health checks for LVS, such as TCP_CHECK, UDP_CHECK, and HTTP_GET; LVS can also use custom health check scripts, or combine with ldirectord for back-end health detection. However, LVS remains a layer-4 device and cannot parse upper-layer protocols. Nginx is different: it is a layer-7 device that can parse layer-7 protocols, filter requests, and cache responses; these are its unique advantages. However, Keepalived provides no health detection for Nginx, so you need to write scripts to do the health checks.

The following covers the Keepalived + Nginx mode only, without LVS; unless the load is very large, LVS is generally unnecessary. You can also refer to the article "Keepalived LVS-DR Nginx single-network active-active dual-master configuration (hands-on)".
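As noted above, keepalived has no built-in Nginx health check. A common approach (a sketch for reference, not part of this article's configuration; the script command and weight are illustrative) is a vrrp_script block that probes the Nginx process and lowers the node's priority when the probe fails, referenced from each vrrp_instance via track_script:

```
vrrp_script chk_nginx {
    script "killall -0 nginx"    # exits 0 while an nginx process exists (killall comes from psmisc)
    interval 2                   # probe every 2 seconds
    weight -20                   # subtract 20 from priority while the check fails
}
vrrp_instance External_1 {
    # ... instance options as configured below ...
    track_script {
        chk_nginx
    }
}
```

With the priority lowered below the peer's, the VIP drifts to the healthy node even though keepalived itself is still running.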

Prepare four servers or virtual machines:

Web Nginx Intranet: 10.16.8.8/10.16.8.9

Keepalived Intranet: 10.16.8.10 (ka67)/10.16.8.11 (ka68)
Keepalived public network: 172.16.8.10/172.16.8.11

Keepalived Intranet VIP: 10.16.8.100/10.16.8.101
Keepalived public network VIP: 172.16.8.100/172.16.8.101

OS:CentOS Linux release 7.4.1708 (Core)

Prerequisites:

Install keepalived.
Synchronize time.
Configure SELinux and the firewall.
Add each peer's hostname to the other's /etc/hosts file (optional).
Confirm that the network interface supports multicast.

For the above deployment steps, see "Keepalived installation and configuration file explained".

1. ka67 configuration file

 global_defs {
     notification_email {
         root@localhost
     }
     notification_email_from ka@localhost
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     vrrp_mcast_group4 224.0.0.111
 }
 vrrp_instance External_1 {
     state MASTER
     interface eth1
     virtual_router_id 171
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole0
     }
     virtual_ipaddress {
         10.16.8.100
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance External_2 {
     state BACKUP
     interface eth1
     virtual_router_id 172
     priority 95
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole1
     }
     virtual_ipaddress {
         10.16.8.101
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance Internal_1 {
     state MASTER
     interface eth0
     virtual_router_id 191
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole2
     }
     virtual_ipaddress {
         172.16.8.100
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance Internal_2 {
     state BACKUP
     interface eth0
     virtual_router_id 192
     priority 95
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole3
     }
     virtual_ipaddress {
         172.16.8.101
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }

2. ka68 configuration file

 global_defs {
     notification_email {
         root@localhost
     }
     notification_email_from ka@localhost
     smtp_server 127.0.0.1
     smtp_connect_timeout 30
     vrrp_mcast_group4 224.0.0.111
 }
 vrrp_instance External_1 {
     state BACKUP
     interface eth1
     virtual_router_id 171
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole0
     }
     virtual_ipaddress {
         10.16.8.100
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance External_2 {
     state MASTER
     interface eth1
     virtual_router_id 172
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole1
     }
     virtual_ipaddress {
         10.16.8.101
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance Internal_1 {
     state BACKUP
     interface eth0
     virtual_router_id 191
     priority 95
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole2
     }
     virtual_ipaddress {
         172.16.8.100
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance Internal_2 {
     state MASTER
     interface eth0
     virtual_router_id 192
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole3
     }
     virtual_ipaddress {
         172.16.8.101
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }

3. Create a shared notification script on both nodes

 $ vim /usr/local/keepalived/etc/keepalived/notify.sh
 #!/bin/bash
 #
 contact='root@localhost'
 notify() {
     local mailsubject="$(hostname) to be $1, vip floating"
     local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
     echo "$mailbody" | mail -s "$mailsubject" $contact
 }
 case $1 in
 master)
     notify master
     ;;
 backup)
     notify backup
     # With this line in place, Nginx is restarted automatically if it goes down
     systemctl start nginx
     ;;
 fault)
     notify fault
     ;;
 *)
     echo "Usage: $(basename $0) {master|backup|fault}"
     exit 1
     ;;
 esac

4. Start the keepalived service and test

Check the network card status after starting ka67:

 [root@ka67 ~]# systemctl start keepalived
 [root@ka67 ~]# ip a
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
     inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
        valid_lft forever preferred_lft forever
     inet 172.16.8.100/32 scope global eth0
        valid_lft forever preferred_lft forever
     inet 172.16.8.101/32 scope global eth0
        valid_lft forever preferred_lft forever
     inet6 fe80::436e:b837:43b:797c/64 scope link
        valid_lft forever preferred_lft forever
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:15:5d:ae:02:84 brd ff:ff:ff:ff:ff:ff
     inet 10.16.8.10/24 brd 10.16.8.255 scope global eth1
        valid_lft forever preferred_lft forever
     inet 10.16.8.100/32 scope global eth1
        valid_lft forever preferred_lft forever
     inet 10.16.8.101/32 scope global eth1
        valid_lft forever preferred_lft forever
     inet6 fe80::1261:7633:b595:7719/64 scope link
        valid_lft forever preferred_lft forever

With ka68 not yet started, ka67 holds all four VIPs:

Public network eth0:

172.16.8.100/32
172.16.8.101/32

Intranet eth1:

10.16.8.100/32
10.16.8.101/32

Start ka68 and check its interface state:

 [root@ka68 ~]# systemctl start keepalived
 [root@ka68 ~]# ip a
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:15:5d:ae:02:79 brd ff:ff:ff:ff:ff:ff
     inet 172.16.8.11/24 brd 103.28.204.255 scope global eth0
        valid_lft forever preferred_lft forever
     inet 172.16.8.101/32 scope global eth0
        valid_lft forever preferred_lft forever
     inet6 fe80::3d2c:ecdc:5e6d:70ba/64 scope link
        valid_lft forever preferred_lft forever
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:15:5d:ae:02:82 brd ff:ff:ff:ff:ff:ff
     inet 10.16.8.11/24 brd 10.16.8.255 scope global eth1
        valid_lft forever preferred_lft forever
     inet 10.16.8.101/32 scope global eth1
        valid_lft forever preferred_lft forever
     inet6 fe80::4fb3:d0a8:f08c:4536/64 scope link
        valid_lft forever preferred_lft forever

ka68 now holds two VIPs, namely:

Public network eth0:

172.16.8.101/32

Intranet eth1:

10.16.8.101/32

Check ka67's interface state again:

 [root@ka67 ~]# ip a
 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
     inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:15:5d:ae:02:78 brd ff:ff:ff:ff:ff:ff
     inet 172.16.8.10/24 brd 172.16.8.255 scope global eth0
        valid_lft forever preferred_lft forever
     inet 172.16.8.100/32 scope global eth0
        valid_lft forever preferred_lft forever
     inet6 fe80::436e:b837:43b:797c/64 scope link
        valid_lft forever preferred_lft forever
 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
     link/ether 00:15:5d:ae:02:84 brd ff:ff:ff:ff:ff:ff
     inet 10.16.8.10/24 brd 10.16.8.255 scope global eth1
        valid_lft forever preferred_lft forever
     inet 10.16.8.100/32 scope global eth1
        valid_lft forever preferred_lft forever
     inet6 fe80::1261:7633:b595:7719/64 scope link
        valid_lft forever preferred_lft forever

Note that 172.16.8.101 and 10.16.8.101 have been removed from ka67. From this point on, if either server stops, none of the four VIPs will lose connectivity.

In addition, on ka67/ka68 you can watch the heartbeat on the multicast address with the following command:

 [root@ka67 ~]# tcpdump -nn -i eth1 host 224.0.0.111
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
 02:00:15.690389 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20
 02:00:15.692654 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 172, prio 100, authtype simple, intvl 1s, length 20
 02:00:16.691552 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20
 02:00:16.693814 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 172, prio 100, authtype simple, intvl 1s, length 20
 02:00:17.692710 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 171, prio 100, authtype simple, intvl 1s, length 20

At this point the VRRP high-availability configuration and testing are complete. Let's continue by configuring the Nginx web service.

5. Install and configure Nginx

Install Nginx on the back-end servers 10.16.8.8/10.16.8.9:

To compile Nginx from source, see "CentOS 7 source code compilation and installation of Nginx".

Alternatively, install Nginx via yum, which is simple and fast:

 $ yum install epel-release -y
 $ yum install nginx -y

To tell the machines apart in the test environment, each web page is set to its server's IP address; in production the content served would be identical.

Execute the corresponding command on 10.16.8.8 and 10.16.8.9 respectively:

 $ echo "Server 10.16.8.8" > /usr/share/nginx/html/index.html
 $ echo "Server 10.16.8.9" > /usr/share/nginx/html/index.html

Test whether the access is normal:

 $ curl http://10.16.8.8
 Server 10.16.8.8

Install Nginx on ka67/ka68 respectively; here I use yum:

 $ yum install nginx psmisc -y

Note: psmisc provides fuser, killall, pstree, and related tools.

Configure Nginx on ka67/ka68:

Backup default configuration file:

 $ mv /etc/nginx/conf.d/default.conf{,.bak}
 $ mv /etc/nginx/nginx.conf{,.bak}

On ka67/ka68 respectively, add the following content to the Nginx main configuration file:

 $ vim /etc/nginx/nginx.conf
 user nginx;
 worker_processes auto;
 error_log /var/log/nginx/error.log;
 pid /run/nginx.pid;

 include /usr/share/nginx/modules/*.conf;

 events {
     worker_connections 1024;
 }

 http {
     log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

     access_log /var/log/nginx/access.log main;

     sendfile            on;
     tcp_nopush          on;
     tcp_nodelay         on;
     keepalive_timeout   65;
     types_hash_max_size 2048;

     include             /etc/nginx/mime.types;
     default_type        application/octet-stream;

     include /etc/nginx/conf.d/*.conf;

     upstream webserverapps {
         server 10.16.8.8:80;
         server 10.16.8.9:80;
         #server 127.0.0.1:8080 backup;
     }

     server {
         listen 80;
         server_name _;

         location / {
             proxy_pass http://webserverapps;
             proxy_redirect off;
             proxy_set_header Host $host;
             proxy_set_header X-Real-IP $remote_addr;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             client_max_body_size 10m;
             client_body_buffer_size 128k;
             proxy_connect_timeout 90;
             proxy_send_timeout 90;
             proxy_read_timeout 90;
             proxy_buffer_size 4k;
             proxy_buffers 4 32k;
             proxy_busy_buffers_size 64k;
             proxy_temp_file_write_size 64k;
             add_header Access-Control-Allow-Origin *;
         }
     }
 }

Note: the upstream and proxy-related server blocks are the parts added on top of the default configuration; the rest are defaults and only for testing. Adjust the configuration of your production environment according to your own needs.

Restart the Nginx service on ka67/ka68:

 $ systemctl restart nginx

Test on ka67/ka68 respectively:

 [root@ka67 ~]# for i in `seq 10`; do curl 10.16.8.10; done
 Server 10.16.8.8
 Server 10.16.8.9
 Server 10.16.8.8
 Server 10.16.8.9
 Server 10.16.8.8
 Server 10.16.8.9
 Server 10.16.8.8
 Server 10.16.8.9
 Server 10.16.8.9
 Server 10.16.8.9

So far, the Nginx reverse proxy is also working. Next we combine Nginx with Keepalived to make Nginx highly available.

6. Configure Keepalived + Nginx high availability

On ka67/ka68 respectively, in the configuration file /usr/local/keepalived/etc/keepalived/keepalived.conf, add a vrrp_script block below the global_defs global configuration block:

 vrrp_script chk_nginx {
     script "killall -0 nginx"
     interval 2
     weight -10
     fall 2
     rise 2
 }
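The weight value interacts with the instance priorities: once the check command has failed `fall` times in a row, keepalived subtracts 10 from this node's effective priority. A minimal sketch of the arithmetic, using the MASTER/BACKUP priorities from this article (100 and 95):

```shell
# Effective VRRP priority after a failed chk_nginx check: the MASTER's
# 100 drops by the script weight (-10) to 90, which is below the
# BACKUP's 95, so the BACKUP preempts the VIP.
master_prio=100
backup_prio=95
weight=-10
effective=$((master_prio + weight))
if [ "$effective" -lt "$backup_prio" ]; then
    echo "failover: effective priority $effective < $backup_prio"
fi
```

When the check succeeds `rise` times again, the penalty is removed and the original MASTER (priority 100) takes the VIP back.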

In every vrrp_instance block, add a track_script block:

 track_script {
     chk_nginx
 }

For example:

 ...
 vrrp_instance External_1 {
     state BACKUP
     interface eth1
     virtual_router_id 171
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole0
     }
     virtual_ipaddress {
         10.16.8.100
     }
     track_script {
         chk_nginx
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 ...

After configuring, restart the keepalived service on ka67/ka68:

 $ systemctl stop keepalived
 $ systemctl start keepalived

Summary:

During configuration I ran into problems with VIPs that would not drift and with crossing network segments. To solve such problems, read the logs and analyze patiently; the cause usually surfaces. In any case, since you have chosen Keepalived, stay committed to it.
If you hit any problem during configuration, please leave a comment and we will work it out together.

Keepalived + LVS-DR + Nginx single-network dual-active (dual-master) configuration (hands-on)

What is LVS/DR mode?

LVS is the abbreviation of Linux Virtual Server. It is a virtual server cluster system. LVS currently implements three IP load-balancing techniques (VS/NAT, VS/TUN, and VS/DR) and ten scheduling algorithms (rr, wrr, lc, wlc, lblc, lblcr, dh, sh, sed, nq).

In Unix-like systems, LVS acts as the front end (Director), also called the scheduler. It provides no service itself; it only accepts requests coming in from the Internet and forwards them to the real servers (RealServer) running behind it, which then respond to the client.

The LVS cluster adopts IP load balancing and content-based request distribution. The scheduler has good throughput: it spreads requests evenly across the servers and automatically masks server failures, turning a group of servers into one high-performance, highly available virtual server. The structure of the whole cluster is transparent to the client, and neither the client nor the server programs need modification. The design therefore has to take transparency, scalability, high availability, and manageability into account.

LVS has two important components: IPVS and ipvsadm. IPVS is the core of LVS; it is only a framework, similar to iptables, and works in kernel space. ipvsadm is used to define LVS forwarding rules and works in user space.

LVS has three forwarding types:

LVS-NAT mode:

LVS-NAT stands for network address translation and is relatively simple to implement. All RealServer cluster nodes and the front-end Director must be on the same subnet. This mode supports port mapping, and the RealServers can run any operating system. The Director has to process not only the requests from clients but also the RealServers' responses, forwarding each response back to the client, so it easily becomes the performance bottleneck of the whole cluster. The RealServer IP addresses (hereafter RIPs) are usually private, to ease communication between cluster nodes. The Director typically has two addresses: a VIP, the virtual IP that clients send requests to, and a DIP, the Director's real IP. Each RIP's gateway must point to the Director's DIP.

LVS-DR mode:

DR: direct routing mode. This mode forwards by rewriting MAC addresses. All RealServer cluster nodes and the front-end Director must be on the same physical network, and port mapping is not supported. Performance is better than LVS-NAT. RIPs can be public IPs, and the RIP gateway must not point to the DIP.

Advantages:

Compared with LVS/NAT, DR mode does not route the returned data through the load balancer. DR mode pays off when responses are far larger and more numerous than the requests. Fortunately, most web services have exactly this property: responses and requests are asymmetric, so typical web services suit this mode well.

The load balancer then stops being the system bottleneck: even if it has only a 100 Mb/s full-duplex NIC, horizontal expansion of the cluster can still push the whole system to 1 Gb/s of traffic.
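The 100M-to-1G claim follows directly from the asymmetry: in DR mode only requests traverse the Director while responses leave the real servers directly. A rough arithmetic sketch, assuming a 10:1 response-to-request size ratio (the ratio is an illustrative assumption, not a figure from the article):

```shell
# With 100 Mb/s of request traffic through the director and responses
# bypassing it, an assumed 10:1 response/request ratio gives roughly
# 1000 Mb/s of traffic leaving the real servers.
request_mbps=100
ratio=10   # assumed response:request size ratio
echo "approximate cluster egress: $((request_mbps * ratio)) Mb/s"
```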

Test results published on the official LVS site also show that LVS-DR can drive more than 100 real application servers, which is more than enough for typical services.

Disadvantages:

In DR mode, data cannot be forwarded across network segments. If you must balance load across segments, use LVS/TUN mode.

LVS-TUN mode:

LVS-TUN is known as the tunnel mode. The RealServers and the front-end Director can be on different networks. This mode does not support port mapping either, and the RealServers must run operating systems that support IP tunneling. The Director only processes the clients' requests and forwards them to the RealServers; the RealServers respond to the client directly, without passing through the Director, so the RIPs must not be private IPs. In both DR and TUN modes, packets return directly to the user, so the VIP must be configured on the Director Server and on every node of the cluster. On the Real Servers this IP is usually bound to the loopback interface, such as lo:0; on the Director Server the virtual IP is bound to a real network interface, such as eth0:0.

Start deployment:

Prepare four servers or virtual machines:

Web Nginx:10.16.8.8/10.16.8.9
Keepalived:10.16.8.10/10.16.8.11
Keepalived VIP:10.16.8.100/10.16.8.101
OS:CentOS Linux release 7.4.1708 (Core)

Prerequisites:

Install keepalived.
Synchronize time.
Configure SELinux and the firewall.
Add each other's hostname to the /etc/hosts file (optional).
Confirm that the network interfaces support multicast (they do by default).

For the above, see "Keepalived installation and configuration file explanation".

1. ka67 configuration file

 $ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
 global_defs {
     notification_email {
         root@localhost
     }
     notification_email_from ka@localhost
     smtp_server 127.0.0.1
     smtp_connect_timeout 60
     vrrp_mcast_group4 224.0.0.111
 }
 vrrp_instance VI_1 {
     state MASTER
     interface eth1
     virtual_router_id 191
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole0
     }
     virtual_ipaddress {
         10.16.8.100
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance VI_2 {
     state BACKUP
     interface eth1
     virtual_router_id 192
     priority 95
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole1
     }
     virtual_ipaddress {
         10.16.8.101
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }

2. ka68 configuration file

 $ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
 global_defs {
     notification_email {
         root@localhost
     }
     notification_email_from ka@localhost
     smtp_server 127.0.0.1
     smtp_connect_timeout 60
     vrrp_mcast_group4 224.0.0.111
 }
 vrrp_instance VI_1 {
     state BACKUP
     interface eth1
     virtual_router_id 191
     priority 95
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole0
     }
     virtual_ipaddress {
         10.16.8.100
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }
 vrrp_instance VI_2 {
     state MASTER
     interface eth1
     virtual_router_id 192
     priority 100
     advert_int 1
     authentication {
         auth_type PASS
         auth_pass renwole1
     }
     virtual_ipaddress {
         10.16.8.101
     }
     notify_master "/usr/local/keepalived/etc/keepalived/notify.sh master"
     notify_backup "/usr/local/keepalived/etc/keepalived/notify.sh backup"
     notify_fault "/usr/local/keepalived/etc/keepalived/notify.sh fault"
 }

3. Create a general notify.sh detection script

Create this script separately:

 $ vim /usr/local/keepalived/etc/keepalived/notify.sh
 #!/bin/bash
 # mail recipient for VRRP state-change notifications
 contact='root@localhost'
 notify() {
     local mailsubject="$(hostname) to be $1, vip floating"
     local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
     echo "$mailbody" | mail -s "$mailsubject" $contact
 }
 case $1 in
 master)
     notify master
     ;;
 backup)
     notify backup
     ;;
 fault)
     notify fault
     ;;
 *)
     echo "Usage: $(basename $0) {master|backup|fault}"
     exit 1
     ;;
 esac
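To sanity-check the dispatch logic without a configured MTA, the script can be dry-run by hand. The snippet below reproduces the case dispatch with the mail call replaced by a plain echo; "ka67" stands in for `$(hostname)`:

```shell
# Dry run of the notify.sh dispatch: same case structure, but the
# mail command is replaced with echo so it works without an MTA.
state=master   # try: master | backup | fault
case "$state" in
master|backup|fault)
    echo "vrrp transition, ka67 changed to be $state"
    ;;
*)
    echo "Usage: notify.sh {master|backup|fault}"
    ;;
esac
```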

4. Start the keepalived service

 $ systemctl start keepalived
 $ systemctl enable keepalived

5. View multicast status

We can also view the multicast heartbeat status on any keepalived node through the tcpdump command, for example:

 $ tcpdump -nn -i eth1 host 224.0.0.111
 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
 listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
 00:32:31.714987 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
 00:32:31.715739 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20
 00:32:32.716150 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
 00:32:32.716292 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20
 00:32:33.717327 IP 10.16.8.10 > 224.0.0.111: VRRPv2, Advertisement, vrid 191, prio 100, authtype simple, intvl 1s, length 20
 00:32:33.721361 IP 10.16.8.11 > 224.0.0.111: VRRPv2, Advertisement, vrid 192, prio 100, authtype simple, intvl 1s, length 20

If an error is prompted: -bash: tcpdump: command not found.

Install tcpdump:

 $ yum install tcpdump -y

6. Configure LVS

Install the LVS management tool on each node. CentOS 7 already ships the LVS core (IPVS) in the kernel, so only the management tool needs to be installed:

 $ yum -y install ipvsadm

Stop the keepalived service of ka67/ka68 respectively:

 $ systemctl stop keepalived

Add the Virtual Server configuration at the end of the ka67/ka68 configuration file:

 $ vim /usr/local/keepalived/etc/keepalived/keepalived.conf
 virtual_server 10.16.8.100 80 {
     delay_loop 3
     lb_algo rr
     lb_kind DR
     protocol TCP
     # sorry_server 127.0.0.1 80
     real_server 10.16.8.8 80 {
         weight 1
         HTTP_GET {
             url {
                 path /
                 status_code 200
             }
             connect_timeout 1
             nb_get_retry 3
             delay_before_retry 1
         }
     }
     real_server 10.16.8.9 80 {
         weight 1
         HTTP_GET {
             url {
                 path /
                 status_code 200
             }
             connect_timeout 1
             nb_get_retry 3
             delay_before_retry 1
         }
     }
 }
 virtual_server 10.16.8.101 80 {
     delay_loop 3
     lb_algo rr
     lb_kind DR
     protocol TCP
     # sorry_server 127.0.0.1 80
     real_server 10.16.8.8 80 {
         weight 1
         HTTP_GET {
             url {
                 path /
                 status_code 200
             }
             connect_timeout 1
             nb_get_retry 3
             delay_before_retry 1
         }
     }
     real_server 10.16.8.9 80 {
         weight 1
         HTTP_GET {
             url {
                 path /
                 status_code 200
             }
             connect_timeout 1
             nb_get_retry 3
             delay_before_retry 1
         }
     }
 }
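For reference, the virtual_server blocks above correspond to IPVS rules that could also be created by hand with ipvsadm (keepalived normally manages them for you, and also adds the health checks). The sketch below only prints the equivalent commands rather than executing them, since applying them requires root on a Linux host:

```shell
# Print (not execute) the ipvsadm equivalents of the rr/DR virtual
# servers above: -A adds a TCP virtual service with the rr scheduler,
# -a adds a real server with -g (gatewaying, i.e. DR mode), weight 1.
for vip in 10.16.8.100 10.16.8.101; do
    printf 'ipvsadm -A -t %s:80 -s rr\n' "$vip"
    for rip in 10.16.8.8 10.16.8.9; do
        printf 'ipvsadm -a -t %s:80 -r %s:80 -g -w 1\n' "$vip" "$rip"
    done
done
```

Once keepalived is running, the rules it installed can be inspected with `ipvsadm -Ln`.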

7. Configure RS (Real Server) Web service

Install Apache httpd or Nginx as the web service on the web servers; here we install Nginx.

To compile Nginx from source, see "CentOS 7 source code compilation and installation of Nginx".

Or install Nginx in the following ways, which is simple and fast:

 $ yum install epel-release -y
 $ yum install nginx -y

To tell the machines apart in the test environment, the displayed page is set to the server's IP address; in production the content served would be identical.

Execute the following commands on web8/web9 respectively:

 $ echo "Server 10.16.8.8" > /usr/share/nginx/html/index.html
 $ echo "Server 10.16.8.9" > /usr/share/nginx/html/index.html

Test whether the access is normal:

 $ curl http://127.0.0.1
 Server 10.16.8.8

8. Add RS script

Because some commands used by this script are missing from a CentOS 7 minimal installation, install the network toolkit first:

 $ yum install net-tools -y

Add the rs.sh script on each web server:

 $ vim /tmp/rs.sh
 #!/bin/bash
 vip1=10.16.8.100
 vip2=10.16.8.101
 dev1=lo:1
 dev2=lo:2
 case $1 in
 start)
     echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
     echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
     echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
     echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
     ifconfig $dev1 $vip1 netmask 255.255.255.255 broadcast $vip1 up
     ifconfig $dev2 $vip2 netmask 255.255.255.255 broadcast $vip2 up
     echo "VS Server is Ready!"
     ;;
 stop)
     ifconfig $dev1 down
     ifconfig $dev2 down
     echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
     echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
     echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
     echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
     echo "VS Server is Cancel!"
     ;;
 *)
     echo "Usage: `basename $0` start|stop"
     exit 1
     ;;
 esac

Then start the script separately:

 $ /tmp/rs.sh start

If you need to stop, execute the following command:

 $ /tmp/rs.sh stop
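What the start branch changes: arp_ignore=1 makes a RealServer answer ARP requests only for addresses configured on the incoming interface (so it will not claim the loopback-bound VIPs on the LAN), and arp_announce=2 makes it pick the best local source address when sending ARP. The dry-run sketch below just prints the sysctl keys and values rs.sh writes, without touching /proc:

```shell
# Print the kernel ARP settings rs.sh applies per mode: start hides
# the loopback-bound VIPs from ARP, stop restores the defaults.
print_arp_settings() {    # $1 = start|stop
    case "$1" in
        start) ign=1; ann=2 ;;
        stop)  ign=0; ann=0 ;;
    esac
    for iface in all lo; do
        echo "net.ipv4.conf.$iface.arp_ignore = $ign"
        echo "net.ipv4.conf.$iface.arp_announce = $ann"
    done
}
print_arp_settings start
```

On a real RealServer, the live values can be confirmed with `sysctl net.ipv4.conf.lo.arp_ignore` after running the script.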

9. Test

Test from another server whether the VIPs are reachable:

 [root@localhost ~]# for i in `seq 5`; do
 >     curl 10.16.8.100
 >     curl 10.16.8.101
 > done
 Server 10.16.8.9
 Server 10.16.8.8
 Server 10.16.8.8
 Server 10.16.8.9
 Server 10.16.8.9
 Server 10.16.8.8
 Server 10.16.8.8
 Server 10.16.8.9
 Server 10.16.8.9
 Server 10.16.8.8

Judging from the test results, the Keepalived + LVS-DR + Nginx high-availability failover setup is working.