
Early last year I posted a walkthrough for setting up OpenStack Xena across multiple nodes.
Following the OpenStack release cycle, I have rebuilt those servers and am posting an updated guide for installing the OpenStack Antelope release.

Installing OpenStack Multi-Node with Kolla-ansible (previous post)

 

 


 

OpenStack Modules and Installation Environment

Installation Overview

OpenStack Modules

This guide installs the following components.

  • Identity (Keystone): Provides a central directory of users mapped to the OpenStack services they can access.
  • Compute (Nova): The management and access tool for OpenStack compute resources; handles scheduling, creation, and deletion.
  • Image (Glance): Stores and retrieves virtual machine disk images from a variety of locations.
  • Networking (Neutron): Provides network connectivity across the other OpenStack services.
  • Block Storage (Cinder): Persistent block storage accessible through a self-service API.
  • Dashboard (Horizon): A graphical interface for administrators and users to access, provision, and automate cloud resources.
  • Orchestration (Heat): Orchestrates composite cloud applications through the OpenStack-native REST API and a CloudFormation-compatible query API.
  • Load Balancer (Octavia): Provides load balancers for exposing applications running on virtual machines or containers to external traffic.
  • Masakari: Detects infrastructure failures and recovers from them automatically.
  • Manila: Provides and manages shared file storage.

Installation Plan

Physical server information

 
Host | IP | Net. Interface | Role
devcon01 | 192.168.140.51 | enp7s0 | Controller #1, NFS Master
  Spec: Intel(R) Core(TM) i7-9700F CPU @ 3.00GHz, Memory 32G, SSD 500G, HDD 1T
devcon02 | 192.168.140.52 | enp2s0 | Controller #2, NFS Slave
  Spec: Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz, Memory 16G, SSD 200G, HDD 1T
devcom01 | 192.168.140.53 | enp3s0 | Compute #1
  Spec: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, Memory 16G, SSD 256G + 256G, HDD 1T
devcom02 | 192.168.140.54 | enp0s31f6 | Compute #2
  Spec: Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz, Memory 32G, SSD 240G, HDD 1T
devcom03 | 192.168.140.55 | enp0s31f6 | Compute #3
  Spec: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz, Memory 16G, SSD 256G, HDD -

Operating system information

As of November 20, 2023, the OpenStack Antelope (2023.1) release can be installed on Ubuntu 22.04 or later.

  • OS: Ubuntu Desktop 22.04.3 LTS
  • OS: Ubuntu Server 22.04.3 LTS (Live Server)

Network plan

  • External network
    • VIP address: 192.168.140.50
    • This VIP is the representative address through which OpenStack is reached from the external network.

Storage plan

  • NFS layout
    • Because the local disks are not shared storage by default, the shared data must reside on a single device (served over NFS).
    • Separate shares are configured for glance, nova, cinder, and cinder_backup.
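The storage plan above amounts to four NFS exports on the management network. As a quick sketch (share names and network taken from this plan), the export list can be previewed like this:

```shell
# Preview the four planned NFS exports (paths and network from the plan above)
exports=$(for share in GLANCE NOVA CINDER CINDER_BACKUP; do
  echo "/NAS/$share 192.168.140.0/24(rw,no_root_squash,sync,no_subtree_check)"
done)
printf '%s\n' "$exports"
```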

Preparing for Installation

The servers were installed with Ubuntu Server 22.04 LTS. Because the OS was installed with the minimal profile, the dependency packages it needs must be added separately. Commands may differ on other operating systems (e.g., apt, yum).

Network configuration

If you are using the Desktop edition, disable NetworkManager first and then configure netplan as shown below.

# systemctl disable NetworkManager.service
# systemctl stop NetworkManager.service

netplan configuration

# vi /etc/netplan/00-installer-config.yaml

# devcon01 
network: 
  ethernets: 
    enp1s0f0: 
      dhcp4: no 
      addresses:
        - 192.168.140.51/24
      routes:
      - to: default
        via: 192.168.140.254 
      nameservers: 
        addresses:
        - 8.8.8.8
        - 168.126.63.1
        search:
        - devcon01
  version: 2 
  renderer: networkd
  

# netplan apply
# ping 192.168.140.254

Use ping to check that the nodes can reach one another.

 

root@devcon01:~# ping 192.168.140.51
PING 192.168.140.51 (192.168.140.51) 56(84) bytes of data.
64 bytes from 192.168.140.51: icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from 192.168.140.51: icmp_seq=2 ttl=64 time=0.041 ms
^C
--- 192.168.140.51 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.037/0.039/0.041/0.002 ms

root@devcon01:~# ping 192.168.140.52
PING 192.168.140.52 (192.168.140.52) 56(84) bytes of data.
64 bytes from 192.168.140.52: icmp_seq=1 ttl=64 time=0.702 ms
64 bytes from 192.168.140.52: icmp_seq=2 ttl=64 time=0.493 ms
^C
--- 192.168.140.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 0.493/0.597/0.702/0.104 ms

root@devcon01:~# ping 192.168.140.53
PING 192.168.140.53 (192.168.140.53) 56(84) bytes of data.
64 bytes from 192.168.140.53: icmp_seq=1 ttl=64 time=0.671 ms
64 bytes from 192.168.140.53: icmp_seq=2 ttl=64 time=0.404 ms
^C
--- 192.168.140.53 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1027ms
rtt min/avg/max/mdev = 0.404/0.537/0.671/0.133 ms

root@devcon01:~# ping 192.168.140.54
PING 192.168.140.54 (192.168.140.54) 56(84) bytes of data.
64 bytes from 192.168.140.54: icmp_seq=1 ttl=64 time=0.414 ms
64 bytes from 192.168.140.54: icmp_seq=2 ttl=64 time=0.211 ms
^C
--- 192.168.140.54 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1018ms
rtt min/avg/max/mdev = 0.211/0.312/0.414/0.101 ms

root@devcon01:~# ping 192.168.140.55
PING 192.168.140.55 (192.168.140.55) 56(84) bytes of data.
64 bytes from 192.168.140.55: icmp_seq=1 ttl=64 time=0.414 ms
64 bytes from 192.168.140.55: icmp_seq=2 ttl=64 time=0.211 ms
^C
--- 192.168.140.55 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1018ms
rtt min/avg/max/mdev = 0.211/0.312/0.414/0.101 ms

root@devcon01:~# 

Installing and Configuring Basic Programs

Install and configure the SSH daemon

# devcon01, devcon02, devcom01, devcom02, devcom03 
# sudo apt-get update 
# sudo apt-get install ssh 
# sudo systemctl enable ssh  

Allow root login over SSH

# devcon01, devcon02, devcom01, devcom02, devcom03 
# vi /etc/ssh/sshd_config 
PermitRootLogin yes 

# devcon01, devcon02, devcom01, devcom02, devcom03 
# systemctl restart sshd 

Set the root password and test SSH root login

# devcon01, devcon02, devcom01, devcom02, devcom03 
# sudo -i 
[sudo] password for openstack: 

# devcon01, devcon02, devcom01, devcom02, devcom03 
# passwd 
New password: 
Retype new password: 
passwd: password updated successfully 

# devcon01, devcon02, devcom01, devcom02, devcom03
# ssh root@localhost 
root@localhost's password:  
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.4.0-139-generic x86_64)

Install basic tools

# devcon01, devcon02, devcom01, devcom02, devcom03 
# apt-get install net-tools xfsprogs -y

Preparing the OpenStack Installation

Host configuration

Register each server's information in the hosts file so the servers can communicate with one another by name.

Add the management nodes to /etc/hosts.

# devcon01, devcon02, devcom01, devcom02, devcom03
# vi /etc/hosts 

Remove the existing 127.0.0.1 devcon01 entry, then add:
192.168.140.51 devcon01 
192.168.140.52 devcon02 
192.168.140.53 devcom01 
192.168.140.54 devcom02 
192.168.140.55 devcom03 
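The same five entries can be appended with a short idempotent loop instead of editing each node by hand. This is a sketch that writes to a temporary copy so it is safe to run as-is; on a real node, point HOSTS_FILE at /etc/hosts:

```shell
# Append the node entries only if an entry for that IP is not already present.
# Writing to a temp copy here; on a real node use HOSTS_FILE=/etc/hosts.
HOSTS_FILE=$(mktemp)
while read -r ip name; do
  grep -q "^$ip " "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.140.51 devcon01
192.168.140.52 devcon02
192.168.140.53 devcom01
192.168.140.54 devcom02
192.168.140.55 devcom03
EOF
cat "$HOSTS_FILE"
```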

Generate and distribute SSH keys

Create an SSH key and distribute it so the servers can communicate without passwords.

# devcon01
root@devcon01:~# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase):        // leave empty
Enter same passphrase again:                       // leave empty
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:cfOnbJVLqlnQ6xv0xgMRlH9y/9UZqFqDa4J4Tq7lPtk root@devcon01
The key's randomart image is:
+---[RSA 3072]----+
|           .o.   |
|            ..   |
|        . o ...  |
|         o + oooo|
|        S o * =+=|
|         . B @ o+|
|    .o+   + X * o|
|   .== E + * o ..|
|   .==. o o o.   |
+----[SHA256]-----+


# devcon01
root@devcon01:~# for node in {devcon01,devcon02,devcom01,devcom02,devcom03}; do ssh-copy-id root@$node; done

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'devcon01 (192.168.140.51)' can't be established.
ECDSA key fingerprint is SHA256:JQrsWk4MqRC26wphUyqcrSS5YlyRhmfeOPD5GuTqVJc.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@devcon01's password:

Number of key(s) added: 1

......

Now try logging into the machine, with:   "ssh 'root@devcom02'"
and check to make sure that only the key(s) you wanted were added.

Install Chrony, verify synchronization, and set the Korean time zone

Install chrony for time synchronization.

# devcon01, devcon02, devcom01, devcom02, devcom03 
# apt install chrony -y
 
# devcon01, devcon02, devcom01, devcom02, devcom03
# systemctl enable chrony 
 
# devcon01, devcon02, devcom01, devcom02, devcom03
# systemctl start chrony 
 
# devcon01, devcon02, devcom01, devcom02, devcom03
# chronyc sources -v 
210 Number of sources = 10 
 
  .-- Source mode  '^' = server, '=' = peer, '#' = local clock. 
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined, 
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable. 
||                                                 .- xxxx [ yyyy ] +/- zzzz 
||      Reachability register (octal) -.           |  xxxx = adjusted offset, 
||      Log2(Polling interval) --.      |          |  yyyy = measured offset, 
||                                \     |          |  zzzz = estimated error. 
||                                 |    |           \ 
MS Name/IP address         Stratum Poll Reach LastRx Last sample 
=============================================================================== 
^- prod-ntp-5.ntp1.ps5.cano>     2   9   377   923  +7478us[+7394us] +/-  124ms 
^- pugot.canonical.com           2   9   377   143  -4571us[-4571us] +/-  162ms 
^- prod-ntp-3.ntp4.ps5.cano>     2   9   377   407    +15ms[  +15ms] +/-  130ms 
^- alphyn.canonical.com          2   9   377   416  -9816us[-9860us] +/-  151ms 
^* 121.174.142.81                3   9   377   157   +290us[ +239us] +/-   37ms 
^- 121.162.54.1                  2   7   377    87  -5377us[-5377us] +/-   52ms 
^+ ap-northeast-2.clearnet.>     2   8   327   216   -561us[ -610us] +/-   30ms 
^? any.time.nl                   0   6     0     -     +0ns[   +0ns] +/-    0ns 
^? ap-northeast-2.clearnet.>     0   6     0     -     +0ns[   +0ns] +/-    0ns 
^? 2406:da12:b86:2c32:3045:>     0   6     0     -     +0ns[   +0ns] +/-    0ns


# Set the Korean time zone (-f because /etc/localtime usually already exists)
# devcon01, devcon02, devcom01, devcom02, devcom03
# ln -sf /usr/share/zoneinfo/Asia/Seoul /etc/localtime

Configuring the OpenStack NFS Storage

Configure the GLANCE / NOVA / CINDER / CINDER_BACKUP shares.

Installing an NFS server on a controller node lets you use it like a NAS.

Disk initialization (if needed)

To wipe the partitions on a disk, use the following commands.

# wipefs -af /dev/sda
# wipefs -af /dev/sda1
# reboot

NFS Master configuration

# devcon01
# apt install nfs-kernel-server -y

# devcon01
# systemctl status nfs-kernel-server
● nfs-server.service - NFS server and services
     Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
     Active: active (exited) since Tue 2023-11-21 12:50:53 KST; 3s ago
    Process: 19783 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
    Process: 19784 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
   Main PID: 19784 (code=exited, status=0/SUCCESS)

# Device names differ per server; find the target disk with the lsblk command.
# mkfs.xfs /dev/sdb

# Mount the storage
# mkdir /NAS
# mount /dev/sdb /NAS/ 
#  df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             16G     0   16G   0% /dev
......
/dev/sdb        932G  6.6G  925G   1% /NAS


# Add an fstab entry so the mount persists across reboots
# vi /etc/fstab 
/dev/sdb /NAS xfs defaults 0 0  

# Mount without rebooting
# mount -a


# Create a directory for each purpose
# cd /NAS
# mkdir GLANCE
# mkdir NOVA
# mkdir CINDER CINDER_BACKUP
# ls -al
total 4
drwxr-xr-x  6 root root   67 11월 21 12:37 .
drwxr-xr-x 21 root root 4096 11월 21 12:32 ..
drwxr-xr-x  2 root root    6 11월 21 12:37 CINDER
drwxr-xr-x  2 root root    6 11월 21 12:37 CINDER_BACKUP
drwxr-xr-x  2 root root    6 11월 21 12:37 GLANCE
drwxr-xr-x  2 root root    6 11월 21 12:37 NOVA


# Configure the exported directories
# vim /etc/exports 
# /NAS/[directory name] [management network for storage]
/NAS/NOVA 192.168.140.0/24(rw,no_root_squash,sync,no_subtree_check)
/NAS/GLANCE 192.168.140.0/24(rw,no_root_squash,sync,no_subtree_check)
/NAS/CINDER 192.168.140.0/24(rw,no_root_squash,sync,no_subtree_check)
/NAS/CINDER_BACKUP 192.168.140.0/24(rw,no_root_squash,sync,no_subtree_check)


# Restart the service
# systemctl restart nfs-kernel-server 
# exportfs -v
/NAS/NOVA       192.168.140.0/24(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/NAS/GLANCE     192.168.140.0/24(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/NAS/CINDER     192.168.140.0/24(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)
/NAS/CINDER_BACKUP
                192.168.140.0/24(rw,wdelay,no_root_squash,no_subtree_check,sec=sys,rw,secure,no_root_squash,no_all_squash)

NFS Slave configuration

# devcon02, devcom01, devcom02, devcom03 (every node that mounts the shares)
# apt install nfs-common

# Device names differ per server; find the target disk with the lsblk command.
# mkfs.xfs /dev/sda

# Mount the storage
# mkdir /NAS
# mount /dev/sda /NAS/ 
#  df -h
Filesystem      Size  Used Avail Use% Mounted on
udev                               7.7G     0  7.7G   0% /dev
......
/dev/sda        932G  6.6G  925G   1% /NAS


# Add an fstab entry so the mount persists across reboots
# vi /etc/fstab 
/dev/sda                          /NAS                  xfs   defaults   0   0

# Mount without rebooting
# mount -a


# Create a directory for each purpose
# cd /NAS
# mkdir GLANCE
# mkdir NOVA
# mkdir CINDER CINDER_BACKUP
# ls -al

# showmount -e 192.168.140.51
Export list for 192.168.140.51:
/NAS/CINDER_BACKUP 192.168.140.0/24
/NAS/CINDER        192.168.140.0/24
/NAS/GLANCE        192.168.140.0/24
/NAS/NOVA          192.168.140.0/24

# Connect to the master
# mount -t nfs 192.168.140.51:/NAS/NOVA /NAS/NOVA
# mount -t nfs 192.168.140.51:/NAS/GLANCE /NAS/GLANCE
# mount -t nfs 192.168.140.51:/NAS/CINDER /NAS/CINDER
# mount -t nfs 192.168.140.51:/NAS/CINDER_BACKUP /NAS/CINDER_BACKUP


# Configure the mounts to come up at boot
# vi /etc/fstab
/dev/sda                          /NAS                  xfs   defaults   0   0
192.168.140.51:/NAS/NOVA          /NAS/NOVA             nfs   defaults   0   0
192.168.140.51:/NAS/GLANCE        /NAS/GLANCE           nfs   defaults   0   0
192.168.140.51:/NAS/CINDER        /NAS/CINDER           nfs   defaults   0   0
192.168.140.51:/NAS/CINDER_BACKUP /NAS/CINDER_BACKUP    nfs   defaults   0   0

# Mount without rebooting
# mount -a

# Verify the mounts
# ls -al CINDER
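To confirm the mounts without eyeballing ls output, a small check against findmnt can verify that all four shares are mounted. This is a sketch; findmnt ships with util-linux:

```shell
# Verify that all four NFS shares are mounted.
# check_mounts: required mount targets as args, mounted targets on stdin.
check_mounts() {
  mounted=$(cat)
  missing=0
  for t in "$@"; do
    printf '%s\n' "$mounted" | grep -qx "$t" || { echo "not mounted: $t"; missing=1; }
  done
  [ "$missing" -eq 0 ] && echo "all NFS shares mounted"
  return 0
}
# findmnt lists the live NFS mount points on this node
findmnt -rn -t nfs,nfs4 -o TARGET | check_mounts /NAS/NOVA /NAS/GLANCE /NAS/CINDER /NAS/CINDER_BACKUP
```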

Reference: https://vittorio-lee.tistory.com/17

Installing and Deploying Kolla-Ansible

Installing Kolla-Ansible

Install kolla-ansible from the repository

# devcon01
root@openstack:~# apt-get install -y python3-pip

# devcon01
root@openstack:~# pip3 install pip

# devcon01
root@openstack:~# pip install git+https://opendev.org/openstack/kolla-ansible@stable/2023.1
Collecting git+https://opendev.org/openstack/kolla-ansible@stable/2023.1
  Cloning https://opendev.org/openstack/kolla-ansible (to revision stable/2023.1) to /tmp/pip-req-build-1miswxkc
  Running command git clone --filter=blob:none --quiet https://opendev.org/openstack/kolla-ansible /tmp/pip-req-build-1miswxkc
  Running command git checkout -b stable/2023.1 --track origin/stable/2023.1
  Switched to a new branch 'stable/2023.1'
  Branch 'stable/2023.1' set up to track remote branch 'stable/2023.1' from 'origin'.
  Resolved https://opendev.org/openstack/kolla-ansible to commit 10857e95014c29d26de158b586a4a7006638ef3f
  Preparing metadata (setup.py) ... done
......
Building wheels for collected packages: kolla-ansible
  Building wheel for kolla-ansible (setup.py) ... done
  Created wheel for kolla-ansible: filename=kolla_ansible-16.5.1.dev13-py3-none-any.whl size=1426583 sha256=febd510aa3229d798a357942e49b2c69721519212f62c3ebf72bbf45b8ad2541
  Stored in directory: /tmp/pip-ephem-wheel-cache-mrnowwf4/wheels/94/ea/de/b27ea7f7f73eb0341a85de0a16d85e3b3c84a8d5f1e8f99878
Successfully built kolla-ansible
Installing collected packages: wrapt, tzdata, rfc3986, pbr, packaging, netaddr, jmespath, iso8601, charset-normalizer, stevedore, requests, oslo.i18n, debtcollector, oslo.utils, oslo.config, hvac, kolla-ansible
  Attempting uninstall: requests
    Found existing installation: requests 2.25.1
    Not uninstalling requests at /usr/lib/python3/dist-packages, outside environment /usr
    Can't uninstall 'requests'. No files were found to uninstall.
Successfully installed charset-normalizer-3.3.2 debtcollector-3.0.0 hvac-2.1.0 iso8601-2.1.0 jmespath-1.0.1 kolla-ansible-16.5.1.dev13 netaddr-1.2.1 oslo.config-9.4.0 oslo.i18n-6.3.0 oslo.utils-7.1.0 packaging-24.0 pbr-6.0.0 requests-2.31.0 rfc3986-2.0.0 stevedore-5.2.0 tzdata-2024.1 wrapt-1.16.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv


# devcon01
root@openstack:~# sudo apt update
Hit:1 http://kr.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Get:3 http://kr.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]
Hit:4 http://kr.archive.ubuntu.com/ubuntu jammy-backports InRelease 
Fetched 229 kB in 2s (132 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
9 packages can be upgraded. Run 'apt list --upgradable' to see them.


// Check whether kolla-ansible runs (it reports the required Ansible version)
root@openstack:~# kolla-ansible install-deps
ERROR: Ansible is not installed in the current (virtual) environment.
Ansible version should be between 2.13 and 2.14.

Install Ansible and check the version

Kolla-ansible requires Ansible 2.13 or 2.14. With pip, you can install 2.13 by specifying 'ansible>=6,<7' and 2.14 by specifying 'ansible>=7,<8'.

# devcon01
root@openstack:~# pip install --no-cache-dir 'ansible>=7,<8'
Collecting ansible<8,>=7
  Downloading ansible-7.7.0-py3-none-any.whl.metadata (7.9 kB)
Collecting ansible-core~=2.14.7 (from ansible<8,>=7)
  Downloading ansible_core-2.14.15-py3-none-any.whl.metadata (6.9 kB)
Requirement already satisfied: jinja2>=3.0.0 in /usr/lib/python3/dist-packages (from ansible-core~=2.14.7->ansible<8,>=7) (3.0.3)
Requirement already satisfied: PyYAML>=5.1 in /usr/lib/python3/dist-packages (from ansible-core~=2.14.7->ansible<8,>=7) (5.4.1)
Requirement already satisfied: cryptography in /usr/lib/python3/dist-packages (from ansible-core~=2.14.7->ansible<8,>=7) (3.4.8)
Requirement already satisfied: packaging in /usr/local/lib/python3.10/dist-packages (from ansible-core~=2.14.7->ansible<8,>=7) (24.0)
Requirement already satisfied: resolvelib<0.9.0,>=0.5.3 in /usr/local/lib/python3.10/dist-packages (from ansible-core~=2.14.7->ansible<8,>=7) (0.5.4)
Downloading ansible-7.7.0-py3-none-any.whl (46.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.0/46.0 MB 4.2 MB/s eta 0:00:00
Downloading ansible_core-2.14.15-py3-none-any.whl (2.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2/2.2 MB 15.3 MB/s eta 0:00:00
Installing collected packages: ansible-core, ansible
Successfully installed ansible-7.7.0 ansible-core-2.14.15
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv


# devcon01
root@openstack:~# ansible --version
ansible [core 2.14.18]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (/usr/bin/python3)
  jinja version = 3.0.3
  libyaml = True
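As a sanity check, the reported ansible-core version can be compared against the supported range programmatically. This is a sketch using sort -V for the version comparison; the 2.13 to (below) 2.15 bounds come from the kolla-ansible error message shown earlier:

```shell
# in_range VERSION MIN MAX_EXCLUSIVE -> true when MIN <= VERSION < MAX_EXCLUSIVE
in_range() {
  v=$1 min=$2 max=$3
  [ "$(printf '%s\n%s\n' "$min" "$v" | sort -V | head -n1)" = "$min" ] &&
  [ "$(printf '%s\n%s\n' "$v" "$max" | sort -V | head -n1)" = "$v" ] &&
  [ "$v" != "$max" ]
}
# Read the installed ansible-core version; "unknown" when ansible is absent
core=$(python3 -c 'import ansible.release; print(ansible.release.__version__)' 2>/dev/null || echo unknown)
if in_range "$core" 2.13 2.15; then
  echo "ansible-core $core is in the supported range"
else
  echo "ansible-core $core is outside 2.13-2.14"
fi
```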

Install the modules Kolla-ansible requires

# devcon01
root@openstack:~# kolla-ansible install-deps
Installing Ansible Galaxy dependencies
Starting galaxy collection install process
Process install dependency map
Cloning into '/root/.ansible/tmp/ansible-local-5515z318obnk/tmpsmm_7t0p/ansible-collection-kollavqef4wzg'...
remote: Enumerating objects: 882, done.
remote: Counting objects: 100% (318/318), done.
remote: Compressing objects: 100% (120/120), done.
remote: Total 882 (delta 276), reused 198 (delta 198), pack-reused 564
Receiving objects: 100% (882/882), 146.51 KiB | 491.00 KiB/s, done.
Resolving deltas: 100% (413/413), done.
Branch 'stable/2023.1' set up to track remote branch 'stable/2023.1' from 'origin'.
Switched to a new branch 'stable/2023.1'
Starting collection install process
Installing 'openstack.kolla:1.0.0' to '/root/.ansible/collections/ansible_collections/openstack/kolla'
Created collection for openstack.kolla:1.0.0 at /root/.ansible/collections/ansible_collections/openstack/kolla
openstack.kolla:1.0.0 was installed successfully

Create the Kolla-Ansible configuration path and copy the key files

# devcon01
root@openstack:~# mkdir /etc/kolla

# devcon01
root@openstack:~# cp -r /usr/local/share/kolla-ansible/etc_examples/kolla/* /etc/kolla/

# devcon01
root@openstack:~# cp -r /usr/local/share/kolla-ansible/ansible/inventory .

# devcon01
root@openstack:~/inventory# ll
total 32
drwxr-xr-x 2 root root 4096 Apr  4 06:38 ./
drwx------ 8 root root 4096 Apr  4 06:38 ../
-rw-r--r-- 1 root root 9040 Apr  4 06:38 all-in-one
-rw-r--r-- 1 root root 9492 Apr  4 06:38 multinode

Generate and edit the OpenStack passwords

# devcon01
root@openstack:~# kolla-genpwd

WARNING: Passwords file "/etc/kolla/passwords.yml" is world-readable. The permissions will be changed.


// Pin the database and keystone passwords
# devcon01
root@aio01:~# vi /etc/kolla/passwords.yml
database_password: openstack
keystone_admin_password: openstack
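Instead of editing passwords.yml by hand, the two passwords can be pinned with sed. A sketch; it falls back to a temporary demo file when /etc/kolla/passwords.yml is not writable, so it is safe to run anywhere:

```shell
# Pin database_password and keystone_admin_password non-interactively.
PASSWORDS_FILE=/etc/kolla/passwords.yml
if [ ! -w "$PASSWORDS_FILE" ]; then
  # demo fallback so the sketch runs outside the deploy host
  PASSWORDS_FILE=$(mktemp)
  printf 'database_password: GENERATED\nkeystone_admin_password: GENERATED\n' > "$PASSWORDS_FILE"
fi
sed -i -e 's/^database_password:.*/database_password: openstack/' \
       -e 's/^keystone_admin_password:.*/keystone_admin_password: openstack/' "$PASSWORDS_FILE"
grep -E '^(database_password|keystone_admin_password):' "$PASSWORDS_FILE"
```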

Inventory Configuration

Reference: https://docs.openstack.org/kolla-ansible/latest/admin/production-architecture-guide.html

The network interfaces must be set to match each server exactly.

  • Check the network interfaces with the ifconfig command and add the entries below. If no interface is specified, eth0 is used by default, and installation fails when that interface does not exist.
  • If the controller nodes also act as compute nodes, add hacluster_remote entries so that the compute-only nodes form a separate group. Without this change, when Kolla-ansible builds the hacluster it tries to configure corosync/pacemaker/pacemaker_remote together on the control nodes, and pacemaker collides with pacemaker_remote (a network port conflict).
  • Find the hacluster-remote:children section and change its value from compute to hacluster_remote.
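Since the same interface settings repeat on every host line, generating them reduces typos. A sketch (the host/interface pairs are taken from the hardware table above):

```shell
# gen_host_line HOST IFACE -> one inventory host line with all interface
# variables pointed at the same NIC
gen_host_line() {
  h=$1 i=$2
  echo "$h network_interface=$i api_interface=$i neutron_external_interface=$i kolla_external_vip_interface=$i migration_interface=$i"
}
gen_host_line devcon01 enp7s0
gen_host_line devcon02 enp2s0
```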
# devcon01 
# vi inventory/multinode 


# These initial groups are the only groups required to be modified. The
# additional groups are for more control of the environment.
[control]
# These hostname must be resolvable from your deployment host
devcon01 network_interface=enp7s0 api_interface=enp7s0 neutron_external_interface=enp7s0 kolla_external_vip_interface=enp7s0 migration_interface=enp7s0
devcon02 network_interface=enp2s0 api_interface=enp2s0 neutron_external_interface=enp2s0 kolla_external_vip_interface=enp2s0 migration_interface=enp2s0

# The above can also be specified as follows:
#control[01:03]     ansible_user=kolla

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
devcon01 network_interface=enp7s0 api_interface=enp7s0 neutron_external_interface=enp7s0 kolla_external_vip_interface=enp7s0 migration_interface=enp7s0
devcon02 network_interface=enp2s0 api_interface=enp2s0 neutron_external_interface=enp2s0 kolla_external_vip_interface=enp2s0 migration_interface=enp2s0

[compute]
devcon01 network_interface=enp7s0 api_interface=enp7s0 neutron_external_interface=enp7s0 kolla_external_vip_interface=enp7s0 migration_interface=enp7s0
devcon02 network_interface=enp2s0 api_interface=enp2s0 neutron_external_interface=enp2s0 kolla_external_vip_interface=enp2s0 migration_interface=enp2s0
devcom01 network_interface=enp4s0 api_interface=enp4s0 storage_interface=enp4s0 tunnel_interface=enp4s0 migration_interface=enp4s0 neutron_external_interface=enp4s0
devcom02 network_interface=enp0s31f6 api_interface=enp0s31f6 storage_interface=enp0s31f6 tunnel_interface=enp0s31f6 migration_interface=enp0s31f6 neutron_external_interface=enp0s31f6

[monitoring]
devcon01 network_interface=enp7s0 api_interface=enp7s0 neutron_external_interface=enp7s0 kolla_external_vip_interface=enp7s0 migration_interface=enp7s0
devcon02 network_interface=enp2s0 api_interface=enp2s0 neutron_external_interface=enp2s0 kolla_external_vip_interface=enp2s0 migration_interface=enp2s0

# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1

[storage]
devcon01 network_interface=enp7s0 api_interface=enp7s0 neutron_external_interface=enp7s0 kolla_external_vip_interface=enp7s0 migration_interface=enp7s0
devcon02 network_interface=enp2s0 api_interface=enp2s0 neutron_external_interface=enp2s0 kolla_external_vip_interface=enp2s0 migration_interface=enp2s0

[hacluster_control]
devcon01 network_interface=enp7s0 api_interface=enp7s0 neutron_external_interface=enp7s0 kolla_external_vip_interface=enp7s0 migration_interface=enp7s0
devcon02 network_interface=enp2s0 api_interface=enp2s0 neutron_external_interface=enp2s0 kolla_external_vip_interface=enp2s0 migration_interface=enp2s0

[hacluster_compute]
devcom01 network_interface=enp4s0 api_interface=enp4s0 storage_interface=enp4s0 tunnel_interface=enp4s0 migration_interface=enp4s0 neutron_external_interface=enp4s0
devcom02 network_interface=enp0s31f6 api_interface=enp0s31f6 storage_interface=enp0s31f6 tunnel_interface=enp0s31f6 migration_interface=enp0s31f6 neutron_external_interface=enp0s31f6

......

[hacluster:children]
hacluster_control

[hacluster-remote:children]
hacluster_compute

......

Verifying the inventory

Check that each server responds correctly.

# devcon01 
# ansible -i /root/inventory/multinode all -m ping
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details
localhost | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
devcon01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
devcom02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
devcon02 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
devcom01 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}

Editing globals.yml

Select the services to install and configure the appropriate settings for them.

For TLS settings, see https://docs.openstack.org/kolla-ansible/latest/admin/tls.html.

# devcon01 
# cat /etc/kolla/globals.yml | grep -v ^# | grep -v ^$ 
--- 
workaround_ansible_issue_8743: yes
kolla_base_distro: "ubuntu"
openstack_release: "2023.1"
kolla_internal_vip_address: "192.168.140.50"
kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
om_enable_rabbitmq_tls: "{{ rabbitmq_enable_tls | bool }}"
om_rabbitmq_cacert: "{{ rabbitmq_cacert }}"
neutron_plugin_agent: "ovn"
kolla_enable_tls_internal: "yes"
kolla_enable_tls_external: "{{ kolla_enable_tls_internal if kolla_same_external_internal_vip | bool else 'yes' }}"
kolla_certificates_dir: "{{ node_config }}/certificates"
kolla_copy_ca_into_containers: "yes"
haproxy_backend_cacert: "{{ 'ca-certificates.crt' if kolla_base_distro in ['debian', 'ubuntu'] else 'ca-bundle.trust.crt' }}"
haproxy_backend_cacert_dir: "/etc/ssl/certs"
openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"
keystone_enable_tls_backend: "yes"
kolla_enable_tls_backend: "yes"
kolla_verify_tls_backend: "no"
enable_openstack_core: "yes"
enable_glance: "{{ enable_openstack_core | bool }}"
enable_hacluster: "no"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_keystone: "{{ enable_openstack_core | bool }}"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "{{ enable_openstack_core | bool }}"
enable_nova: "{{ enable_openstack_core | bool }}"
enable_rabbitmq: "{{ 'yes' if om_rpc_transport == 'rabbit' or om_notify_transport == 'rabbit' else 'no' }}"
enable_cinder: "yes"
enable_cinder_backup: "yes"
enable_cinder_backend_nfs: "yes"
enable_fluentd: "yes"
enable_heat: "{{ enable_openstack_core | bool }}"
enable_horizon: "{{ enable_openstack_core | bool }}"
enable_horizon_magnum: "{{ enable_magnum | bool }}"
enable_horizon_manila: "{{ enable_manila | bool }}"
enable_horizon_masakari: "{{ enable_masakari | bool }}"
enable_horizon_octavia: "{{ enable_octavia | bool }}"
enable_magnum: "yes"
enable_manila: "yes"
enable_manila_backend_generic: "yes"
enable_masakari: "yes"
enable_neutron_provider_networks: "yes"
enable_nova_ssh: "yes"
enable_octavia: "yes"
enable_octavia_driver_agent: "{{ enable_octavia | bool and neutron_plugin_agent == 'ovn' }}"
enable_ovn: "{{ enable_neutron | bool and neutron_plugin_agent == 'ovn' }}"
enable_placement: "{{ enable_nova | bool or enable_zun | bool }}"
rabbitmq_enable_tls: "yes"
rabbitmq_cacert: "/etc/ssl/certs/{{ 'ca-certificates.crt' if kolla_base_distro in ['debian', 'ubuntu'] else 'ca-bundle.trust.crt' }}"
glance_backend_file: "yes"
glance_file_datadir_volume: "/NAS/GLANCE"                                // Add
cinder_backup_driver: "nfs"
cinder_backup_share: "192.168.140.51:/NAS/CINDER_BACKUP"
cinder_backup_mount_options_nfs: "vers=4"
nova_instance_datadir_volume: "/NAS/NOVA"                                // Add
nova_console: "novnc"
neutron_ovn_distributed_fip: "yes"
octavia_auto_configure: "yes"
octavia_amp_flavor:
  name: "amphora"
  is_public: no
  vcpus: 1
  ram: 1024
  disk: 5
octavia_amp_security_groups:
    mgmt-sec-grp:
      name: "lb-mgmt-sec-grp"
      enabled: "yes"
      rules:
        - protocol: icmp
        - protocol: tcp
          src_port: 22
          dst_port: 22
        - protocol: tcp
          src_port: "{{ octavia_amp_listen_port }}"
          dst_port: "{{ octavia_amp_listen_port }}"
octavia_amp_network:
  name: lb-mgmt-net
  shared: false
  subnet:
    name: lb-mgmt-subnet
    cidr: "{{ octavia_amp_network_cidr }}"
    no_gateway_ip: yes
    enable_dhcp: yes
octavia_amp_network_cidr: 10.1.0.0/24
octavia_amp_router:
  name: lb-mgmt-router
  subnet: "{{ octavia_amp_network['subnet']['name'] }}"
octavia_amp_image_tag: "amphora"
octavia_loadbalancer_topology: "ACTIVE_STANDBY"
octavia_certs_country: KR                                       // Add
octavia_certs_state: Seoul                                      // Add
octavia_certs_organization: Openstack                           // Add
octavia_certs_organizational_unit: Octavia                      // Add
hacluster_corosync_port: 5405

Reference: https://docs.openstack.org/kolla-ansible/rocky/reference/manila-guide.html

Reference: https://atl.kr/dokuwiki/doku.php/octavia_security_group_%EC%83%9D%EC%84%B1_%EC%8B%A4%ED%8C%A8

Set the Nova path

# vim /etc/kolla/globals.yml
# Add the nova_instance_datadir_volume property to the nova options
nova_instance_datadir_volume: "/NAS/NOVA"

Set the Glance path

# vim /etc/kolla/globals.yml
glance_file_datadir_volume: "/NAS/GLANCE"

Set the Cinder path

# mkdir /etc/kolla/config
# vim /etc/kolla/config/nfs_shares
192.168.140.51:/NAS/CINDER

Kolla-Ansible Bug Patches

RabbitMQ bug patch

# vi /usr/local/share/kolla-ansible/ansible/roles/service-rabbitmq/tasks/main.yml
          user: "{{ item.user }}"
          password: "{{ item.password }}"        // add this line
          node: "rabbit@{{ ansible_f

NOVA + NFS bug patch

Fix the duplicate-mount bug as follows.

*This applies only when the storage is backed by NFS.

Reference: https://review.opendev.org/c/openstack/kolla-ansible/+/825514

Edit the kolla-ansible main.yml

# vi /usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml

Change nova/mnt => nova-mnt  // fix 3 occurrences
"{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"
=>
"{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova-mnt:shared{% endif %}"
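The same three-occurrence substitution can be applied with one sed command instead of hand-editing. A sketch; it falls back to a temporary demo file when the kolla-ansible role file is not present on this host:

```shell
# Rename the container-side mount from /var/lib/nova/mnt to /var/lib/nova-mnt.
MAIN_YML=/usr/local/share/kolla-ansible/ansible/roles/nova-cell/defaults/main.yml
if [ ! -w "$MAIN_YML" ]; then
  # demo fallback with one representative line so the sketch runs anywhere
  MAIN_YML=$(mktemp)
  echo '"{% if enable_shared_var_lib_nova_mnt | bool %}/var/lib/nova/mnt:/var/lib/nova/mnt:shared{% endif %}"' > "$MAIN_YML"
fi
sed -i 's|/var/lib/nova/mnt:/var/lib/nova/mnt:shared|/var/lib/nova/mnt:/var/lib/nova-mnt:shared|g' "$MAIN_YML"
grep -n 'nova-mnt' "$MAIN_YML"
```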
 

Edit libvirt.conf.j2

# vi /usr/local/share/kolla-ansible/ansible/roles/nova-cell/templates/nova.conf.d/libvirt.conf.j2 

# After line 29, add the following content starting at line 30, then save
{% if enable_shared_var_lib_nova_mnt | bool and enable_cinder_backend_nfs | bool %}
nfs_mount_point_base = /var/lib/nova-mnt
{% endif %}
{% if enable_shared_var_lib_nova_mnt | bool and enable_cinder_backend_quobyte | bool %}
quobyte_mount_point_base = /var/lib/nova-mnt
{% endif %}
 

Add nova-mnt

# vi /usr/local/share/kolla-ansible/ansible/roles/nova-cell/templates/nova-compute.json.j2
    "permissions": [
        {
            "path": "/var/log/kolla/nova",
            "owner": "nova:nova",
            "recurse": true
        },
        {
            "path": "/var/lib/nova",
            "owner": "nova:nova",
            "recurse": true
        },
        {
            "path": "/var/lib/nova-mnt",
            "owner": "nova:nova",
            "recurse": true
        }
    ]

Glance + NFS bug patch

Move the Glance data-file location onto NFS.

*This applies only when the storage is backed by NFS.

Edit the kolla-ansible glance main.yml

# vi /etc/kolla/globals.yml
glance_file_datadir_volume: /NAS/GLANCE

// This change is on hold for now
# vi /usr/local/share/kolla-ansible/ansible/roles/glance/defaults/main.yml
#  - "{{ glance_file_datadir_volume }}:/var/lib/glance/"
#=>
#  - "{{ glance_file_datadir_volume }}:/NAS/GLANCE/"