Overview
Contents
- Load balancing
- Lab environment
- server1
- server4
- High availability (implemented with roles)
- Lab environment
- server1
Note: this lab builds on the previous posts. Hostname resolution must be in place, ansible installed on server1, passwordless SSH from server1 to server2 and server3 configured, and the unprivileged user devops created and granted sudo rights.
Load Balancing
Lab Environment

| Hostname (IP) | Service |
|---|---|
| server1 (172.25.11.1) | haproxy |
| server2 (172.25.11.2) | apache |
| server3 (172.25.11.3) | apache |
| server4 (172.25.11.4) | apache (joined dynamically) |
server1
- Grant sudo rights to the unprivileged user devops (around line 92 of the sudoers file):
[root@server1 ~]# vim /etc/sudoers
devops ALL=(ALL) NOPASSWD: ALL
- Install haproxy (the point is to obtain the stock configuration file haproxy.cfg to use as a template):
[devops@server1 ansible]$ yum list haproxy
[devops@server1 ansible]$ sudo yum install -y haproxy
- Copy the stock configuration file as a template and edit it: add a stats (monitoring) page, change the listen port, and add the backend hosts server2 and server3:
[devops@server1 ansible]$ cp /etc/haproxy/haproxy.cfg template/haproxy.cfg.j2
[devops@server1 ansible]$ vim template/haproxy.cfg.j2
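The exact edits to the template are not shown in the original post; the following is a minimal sketch of the relevant sections, assuming the stock RHEL haproxy.cfg layout. The stats port, stats URI, and backend name are illustrative choices, not taken from the source:

```
# --- template/haproxy.cfg.j2, relevant sections only (illustrative) ---

# monitoring page (port and URI are assumptions)
listen stats *:8080
    stats enable
    stats uri /status

# listen on port 80 instead of the stock port
frontend  main *:80
    default_backend          app

# backend hosts server2 and server3, round-robin
backend app
    balance     roundrobin
    server  server2 172.25.11.2:80 check
    server  server3 172.25.11.3:80 check
```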
- Write the playbook:
[devops@server1 ansible]$ vim playbook.yml
---
- hosts: webserver # the two backend hosts, server2 and server3
vars:
http_port: 80
tasks:
- name: install httpd
yum:
name: httpd
state: present
- name: copy index.html
copy:
content: "{{ ansible_facts['hostname'] }}" # use each host's hostname as the page content so the round robin is easy to see
dest: /var/www/html/index.html
- name: configure httpd
template:
src: template/httpd.conf.j2
dest: /etc/httpd/conf/httpd.conf
owner: root
group: root
mode: '0644'
notify: restart httpd
- name: start httpd and firewalld
service:
name: "{{ item }}"
state: started
loop:
- httpd
- firewalld
- name: configure firewalld
firewalld:
service: http
permanent: yes
immediate: yes
state: enabled
handlers:
- name: restart httpd
service:
name: httpd
state: restarted
- hosts: localhost # install haproxy on the control node itself
tasks:
- name: install haproxy
yum:
name: haproxy
state: present
- name: configure haproxy # deploy the template file; restart the service on change
template:
src: template/haproxy.cfg.j2
dest: /etc/haproxy/haproxy.cfg
notify: restart haproxy
- name: start haproxy
service:
name: haproxy
state: started
handlers:
- name: restart haproxy
service:
name: haproxy
state: restarted
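The playbook also templates template/httpd.conf.j2, which was prepared in an earlier post and is not shown here. Presumably the only templated directive is the listen port, driven by the play's http_port variable:

```
# template/httpd.conf.j2 — the one templated line (an assumption;
# the rest of the file is the stock httpd.conf)
Listen {{ http_port }}
```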
- List the tasks, run the playbook, then browse to the scheduler (server1) to test:
[devops@server1 ansible]$ ansible-playbook playbook.yml --list-tasks
[devops@server1 ansible]$ ansible-playbook playbook.yml
- The responses alternate between server2 and server3, so the load balancing works: we have round robin.
- This setup is static, though: a new host running httpd cannot join the rotation on its own. The configuration below fixes that by generating the backend list dynamically.
server4
- Add the new host server4, and create the unprivileged user devops on it with sudo rights:
[root@server4 ~]# useradd devops
[root@server4 ~]# passwd devops
[root@server4 ~]# vim /etc/sudoers
devops ALL=(ALL) NOPASSWD: ALL
- Set up passwordless SSH from server1 to server4:
[devops@server1 ansible]$ ssh-copy-id server4
[devops@server1 ansible]$ ssh server4
- Edit the inventory file and add server4:
[devops@server1 ansible]$ vim inventory
[test]
server2 http_host=172.25.24.2
[prod]
server3 http_host=172.25.24.3
server4 http_host=172.25.24.4
[webserver:children]
test
prod
- Edit the haproxy template so the backend server list is generated from the [webserver] group. Note that only variables go inside the curly braces; literal text stays outside them:
[devops@server1 ansible]$ vim template/haproxy.cfg.j2
{% for host in groups['webserver'] %}
server {{ hostvars[host]['ansible_facts']['hostname'] }} {{ hostvars[host]['ansible_facts']['eth0']['ipv4']['address'] }}:80 check
{% endfor %}
- Run the playbook and test again in the browser:
[devops@server1 ansible]$ ansible-playbook playbook.yml
- server4 now appears in the rotation: a node running the service was added to the cluster dynamically.
- Check the rendered configuration file /etc/haproxy/haproxy.cfg:
[devops@server1 ansible]$ vim /etc/haproxy/haproxy.cfg
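With the inventory above, the for loop should render roughly the following backend entries, one per host in the [webserver] group. The addresses come from each host's eth0 fact; here they are shown matching the lab-environment table, and the surrounding backend stanza is omitted:

```
# rendered by the {% for %} loop in haproxy.cfg.j2
server server2 172.25.11.2:80 check
server server3 172.25.11.3:80 check
server server4 172.25.11.4:80 check
```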
High Availability (implemented with roles)
Lab Environment

| Hostname (IP) | Service |
|---|---|
| server1 (172.25.11.1) | haproxy + keepalived (master) |
| server2 (172.25.11.2) | apache |
| server3 (172.25.11.3) | apache |
| server4 (172.25.11.4) | haproxy + keepalived (backup) |
server1
- Create a keepalived role skeleton:
[devops@server1 ansible]$ cd roles/
[devops@server1 roles]$ ls
apache haproxy
[devops@server1 roles]$ ansible-galaxy init keepalived
[devops@server1 roles]$ cd keepalived/
[devops@server1 keepalived]$ rm -fr README.md tests
- Edit the role's tasks/main.yml:
---
- name: install keepalived
yum:
name: keepalived
state: present
- name: configure keepalived
template:
src: keepalived.conf.j2 # resolved relative to roles/keepalived/templates/
dest: /etc/keepalived/keepalived.conf
notify: restart keepalived
- name: start keepalived
service:
name: keepalived
state: started
- Copy the template file into the role and edit it:
[devops@server1 keepalived]$ cp ~/ansible/templates/keepalived.conf.j2 templates/
[devops@server1 keepalived]$ ls templates/
keepalived.conf.j2
[devops@server1 keepalived]$ vim templates/keepalived.conf.j2
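The template's contents are not shown in the original post. A minimal sketch, assuming it consumes the STATE, VRID, and PRIORITY host variables defined in the inventory below and advertises the VIP 172.25.11.100 used in the browser test (interface name and auth settings are assumptions):

```
! roles/keepalived/templates/keepalived.conf.j2 (illustrative sketch)
vrrp_instance VI_1 {
    state {{ STATE }}              ! MASTER on server1, BACKUP on server4
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}        ! higher priority wins the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.11.100              ! the VIP clients connect to
    }
}
```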
- Edit the role's handlers/main.yml:
---
- name: restart keepalived
service:
name: keepalived
state: restarted
- Add the load-balancer group and the keepalived variables to the inventory:
[devops@server1 ansible]$ vim inventory
[lb]
server1 STATE=MASTER VRID=11 PRIORITY=100
server4 STATE=BACKUP VRID=11 PRIORITY=50
[test]
server2
[prod]
server3
[webserver:children]
test
prod
- Edit apache.yml:
[devops@server1 ansible]$ vim apache.yml
---
- hosts: all
tasks:
- import_role:
name: apache
when: inventory_hostname in groups['webserver'] # inventory_hostname is safer than ansible_hostname here
- import_role:
name: haproxy
when: inventory_hostname in groups['lb']
- import_role:
name: keepalived
when: inventory_hostname in groups['lb']
- Run it:
[devops@server1 ansible]$ ansible-playbook apache.yml
- Browse to http://172.25.11.100 (the VIP) to test.
- When we stop the keepalived service on server1 (the master), the VIP fails over to server4 (the backup):
[devops@server1 ansible]$ sudo systemctl stop keepalived
[devops@server1 ansible]$ ip a
- Start keepalived on server1 again, and the VIP fails back:
[devops@server1 ansible]$ sudo systemctl start keepalived
[devops@server1 ansible]$ ip a
- With that, both load balancing and high availability are deployed successfully.