
Installing Db2 pureScale on Nutanix HCI

Db2 for LUW

by 파란디비 2024. 11. 19. 12:24


Traditionally, on-premises databases have been deployed on bare metal to secure high performance. Recently, however, a growing number of customers want to run their databases in virtualized environments as well, to take advantage of simpler deployment, easier management, and faster recovery. In this post I will show how to build Db2 pureScale on Nutanix HCI, one of the virtualization environments that Db2 supports.

 

In this Nutanix environment, a CF and a member were configured together on each of two VMs.

 

 

Most of the setup follows the same procedure as on bare metal, but there are a few points to note. First, the GPFS file systems must be configured before running the pureScale installer (db2setup), because a virtual disk cannot be designated as the GPFS tiebreaker and the installation would otherwise fail. Second, SCSI-3 PR must be enabled so the GPFS file systems can support fast I/O fencing. Finally, because virtual disks have no WWID, the TSA tiebreaker is assigned with TSA commands rather than with the db2cluster command.
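Before relying on SCSI-3 PR, it is worth confirming that the virtual disks actually support it. A minimal check, assuming the sg3_utils package is already installed and /dev/sdb is one of the shared virtual disks used later in this post (step 8 of the task table below points to the IBM and Nutanix references for this check):

sg_persist -c /dev/sdb   (report the device's persistent reservation capabilities)
sg_persist -k /dev/sdb   (read any registered reservation keys)

If the device reports persistent reservation capabilities, it should be usable for fast I/O fencing.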

 

1) Configure the GPFS file system: do this before running the pureScale installer (db2setup).

/opt/ibm/db2/V11.5/bin/db2cluster -cfs -create -host dbhost1 -domain db2psdomain

[root@dbhost1 /]# mmlscluster

GPFS cluster information

========================

  GPFS cluster name:         db2psdomain.kolon.com

  GPFS cluster id:           13939910878862097013

  GPFS UID domain:           db2psdomain.kolon.com

  Remote shell command:      /var/db2/db2ssh/db2locssh

  Remote file copy command:  /var/db2/db2ssh/db2scp

  Repository type:           CCR

 Node  Daemon node name   IP address      Admin node name    Designation

-------------------------------------------------------------------------

   1   dbhost1.kolon.com  172.18.229.181  dbhost1.kolon.com  quorum-manager

/opt/ibm/db2/V11.5/bin/db2cluster -cfs -add -host dbhost2

[root@dbhost1 /]# /opt/ibm/db2/V11.5/bin/db2cluster -cfs -add -host dbhost2

Adding node 'dbhost2' to the shared file system cluster ...

Host 'dbhost2' has been successfully added to the shared file system cluster

mmstartup -a

[root@dbhost1 /]# mmgetstate -aL

 Node number  Node name  Quorum  Nodes up  Total nodes  GPFS state    Remarks

---------------------------------------------------------------------------------

           1  dbhost1       2         2          2      active        quorum node

           3  dbhost2       2         2          2      active        quorum node


[root@dbhost1 /]# /usr/lpp/mmfs/bin/tsprinquiry sdb

NUTANIX :VDISK           :0

[root@dbhost1 /]# vi /var/mmfs/etc/prcapdevices (create the file)

[root@dbhost1 /]# cat /var/mmfs/etc/prcapdevices (contains the output of the tsprinquiry sdb command above)

NUTANIX:VDISK:0

[root@dbhost1 /]# scp /var/mmfs/etc/prcapdevices dbhost2:/var/mmfs/etc/

prcapdevices                                                                                           100%   16    36.8KB/s   00:00

[root@dbhost1 /]# /usr/lpp/mmfs/bin/mmcommon startCcrMonitor
[root@dbhost1 /]# ssh dbhost2 /usr/lpp/mmfs/bin/mmcommon startCcrMonitor
[root@dbhost1 /]# mmshutdown -a
[root@dbhost1 /]# /usr/lpp/mmfs/bin/mmchconfig usePersistentReserve=yes

Verifying GPFS is stopped on all nodes ...

mmchconfig: Processing disk gpfs2nsd

mmchconfig: Processing disk gpfs3nsd

mmchconfig: Processing disk gpfs1nsd

mmchconfig: Command successfully completed

mmchconfig: Propagating the cluster configuration data to all

  affected nodes.  This is an asynchronous process.

[root@dbhost1 /]# mmlsnsd -X

 Disk name       NSD volume ID      Device          Devtype  Node name or Class       Remarks

-------------------------------------------------------------------------------------------------------

 gpfs1nsd        AC12E5B5672DC25F   /dev/sdb        generic  dbhost1.kolon.com        pr=yes

 gpfs2nsd        AC12E5B5672DC5A7   /dev/sdc        generic  dbhost1.kolon.com        pr=yes

 gpfs3nsd        AC12E5B5672DC5A8   /dev/sdd        generic  dbhost1.kolon.com        pr=yes
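With persistent reserve enabled, GPFS can be started again and the setting double-checked. A small verification sketch, assuming the cluster was stopped with mmshutdown -a as above (mmlsconfig accepts an attribute name, and mmgetstate should show both nodes active again, matching steps 20-21 in the task table below):

mmstartup -a
/usr/lpp/mmfs/bin/mmlsconfig usePersistentReserve
mmgetstate -aL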

 

 

References: How to install Db2 pureScale on Nutanix; Enabling SCSI-3 persistent reserve for Db2 pureScale shared file systems

 

2) Assign the GPFS/TSA tiebreakers: do this after the pureScale installation (db2setup) has completed.

 

For GPFS,

/opt/ibm/db2/V11.5/bin/db2cluster -cfs -set -tiebreaker -disk /dev/sdb

The quorum type has been successfully changed to 'disk'.

/opt/ibm/db2/V11.5/bin/db2cluster -cfs -list -tiebreaker

The current quorum device is of type Disk with the following specifics: /dev/sdb.
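As a side note, db2cluster presumably maps this disk quorum device to the GPFS tiebreakerDisks attribute, so the same information should also be visible directly from GPFS (an optional check, not part of the required procedure):

/usr/lpp/mmfs/bin/mmlsconfig tiebreakerDisks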

 

For TSA,

/opt/ibm/db2/V11.5/bin/db2cluster -cm -list -tiebreaker

The current quorum device is of type Operator.

mkrsrc IBM.TieBreaker Name="mySCSIPRTieBreaker" Type=SCSIPR DeviceInfo="DEVICE=/dev/sde" HeartbeatPeriod=5
chrsrc -c IBM.PeerNode OpQuorumTieBreaker="mySCSIPRTieBreaker"
/opt/ibm/db2/V11.5/bin/db2cluster -cm -list -tiebreaker

The current quorum device is of type Disk with the following specifics: DEVICE=/dev/sde.
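The TSA side can also be inspected with the RSCT commands themselves. A quick check, assuming the RSCT tools installed with the pureScale TSA component are on the PATH:

lsrsrc -c IBM.PeerNode OpQuorumTieBreaker   (shows which tiebreaker the peer domain is using)
lsrsrc IBM.TieBreaker Name Type DeviceInfo  (lists the defined tiebreaker resources)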

 

 

The full sequence of tasks, the host(s) where each is performed, and the commands (or reference links) used are summarized below.

H/W preparation

1. Prepare the systems (VMs and virtual disks) [dbhost1, dbhost2]
   - VMs: 2 VMs, 12 cores and 128 GB of memory each
   - Virtual disks:
     /dev/sdb (2 TB) - db2home
     /dev/sdc (2 TB), /dev/sdd (2 TB) - db2data
     /dev/sde (2 GB) - CS tiebreaker

S/W preparation

2. /etc/hosts [dbhost1, dbhost2]
   172.18.xxx.181 dbhost1.kolon.com dbhost1
   192.168.10.101 dbhost1-p.kolon.com dbhost1-p
   172.18.xxx.182 dbhost2.kolon.com dbhost2
   192.168.10.102 dbhost2-p.kolon.com dbhost2-p

3. /etc/services [dbhost1]
   DB2_db2inst1      20022/tcp
   DB2_db2inst1_1    20023/tcp
   DB2_db2inst1_2    20024/tcp
   DB2_db2inst1_3    20025/tcp
   DB2_db2inst1_4    20026/tcp
   DB2_db2inst1_END  20027/tcp
   db2c_db2inst1     25020/tcp

4. Prepare the Db2 image [dbhost1, dbhost2]
   (in /images) tar -xvf special_47198_v11.5.9_linuxx64_server_dec.tar

5. Check the SAM prerequisites [dbhost1, dbhost2]
   /images/server_dec/db2/linuxamd64/tsamp/prereqSAM

6. Check the Db2 prerequisites [dbhost1, dbhost2]
   /images/server_dec/db2prereqcheck -p -l

7. Install the Linux packages reported by db2prereqcheck [dbhost1, dbhost2]
   yum install -y ksh elfutils-libelf-devel patch make perl kernel-headers kernel-devel NetworkManager-config-server m4 gcc-c++ cpp binutils gcc pam-1.5.1-14.el9.i686 libstdc++-11.3.1-4.3.el9.i686 mksh sg3_utils sg_persist
   dnf install libtirpc compat-openssl11

8. Check whether the virtual disks support SCSI-3 PR (sg_persist) [dbhost1]
   https://www.ibm.com/docs/en/db2/11.5?topic=aix-shared-storage-support
   (Nutanix blog: https://blog.ntnx.jp/entry/2024/04/14/043154)

9. Set up X Windows (GUI) [dbhost1]
   dnf groupinstall "Server with GUI"
   systemctl set-default graphical.target
   systemctl start gdm

10. Disable SELinux [dbhost1, dbhost2]
    vi /etc/selinux/config
    SELINUX=disabled
    reboot

11. Create the db2fenc1, db2inst1, and db2sshid accounts (home directories on local disk) [dbhost1, dbhost2]
    groupadd -g 1000 db2igrp
    groupadd -g 1010 db2fgrp
    useradd -g 1000 -u 1000 -d /home/db2inst1 db2inst1
    useradd -g 1010 -u 1010 -d /home/db2fenc1 db2fenc1
    useradd -g 1010 -u 1001 -d /home/db2sshid db2sshid

12. Install the Db2 pureScale image [dbhost1, dbhost2]
    /images/server_dec/db2_install  (pureScale, GPFS, TSA)

13. Configure db2locssh [dbhost1, dbhost2]
    https://www.ibm.com/docs/en/db2/11.5?topic=environment-setting-up-db2locssh
    /images/server_dec/db2/linuxamd64/utilities/setup_db2locssh db2sshid

14. Open the firewall ports [dbhost1, dbhost2]
    (1191 (GPFS), 56000 and 56001 (CF), 25020 (Db2))
    firewall-cmd --permanent --add-port=1191/tcp --add-port=56000/tcp --add-port=56001/tcp --add-port=25020/tcp
    firewall-cmd --reload

15. Run TSA preprpnode [dbhost1]
    preprpnode dbhost1 dbhost2

GPFS (shared) file system configuration

16. Create the GPFS cluster [dbhost1]
    /opt/ibm/db2/V11.5/bin/db2cluster -cfs -create -host dbhost1 -domain db2psdomain
    /opt/ibm/db2/V11.5/bin/db2cluster -cfs -add -host dbhost2

17. Create the GPFS file systems [dbhost1]
    mmstartup -a
    /opt/ibm/db2/V11.5/bin/db2cluster -cfs -create -filesystem db2home -DISK /dev/sdb -MOUNT /db2home
    /opt/ibm/db2/V11.5/bin/db2cluster -cfs -create -filesystem db2data -DISK /dev/sdc,/dev/sdd -MOUNT /db2data
    chown db2inst1.db2igrp /db2home
    chown db2inst1.db2igrp /db2data

18. Change the GPFS configuration [dbhost1]
    mmchconfig totalPingTimeout=45
    mmchconfig sharedMemLimit=2047M

19. Enable SCSI-3 PR for GPFS [dbhost1, dbhost2]
    https://www.ibm.com/docs/en/db2/11.5?topic=ppit-enabling-scsi-3-persistent-reserve-db2-purescale-shared-file-systems

20. Set GPFS usePersistentReserve [dbhost1]
    /usr/lpp/mmfs/bin/mmchconfig usePersistentReserve=yes

21. Start the GPFS cluster [dbhost1]
    mmstartup -a
    (verify) mmgetstate -aL

pureScale setup

22. Run db2setup [dbhost1]
    /opt/ibm/db2/V11.5/sd/db2setup

Tiebreaker assignment

23. Set the TSA tie-breaker [dbhost1]
    /opt/ibm/db2/V11.5/bin/db2cluster -cm -list -tiebreaker
    mkrsrc IBM.TieBreaker Name="mySCSIPRTieBreaker" Type=SCSIPR DeviceInfo="DEVICE=/dev/sde" HeartbeatPeriod=5
    chrsrc -c IBM.PeerNode OpQuorumTieBreaker="mySCSIPRTieBreaker"
    /opt/ibm/db2/V11.5/bin/db2cluster -cm -list -tiebreaker

24. Set the GPFS tie-breaker [dbhost1]
    /opt/ibm/db2/V11.5/bin/db2cluster -cfs -list -tiebreaker
    /opt/ibm/db2/V11.5/bin/db2cluster -cfs -set -tiebreaker -disk /dev/sdb
    /opt/ibm/db2/V11.5/bin/db2cluster -cfs -list -tiebreaker

Post-installation tasks

25. Create the Db2 HA scripts [dbhost1, dbhost2]
    /opt/ibm/db2/V11.5/install/tsamp/db2cptsa

26. export CT_MANAGEMENT_SCOPE=2 [dbhost1, dbhost2]
    Lets TSA changes be applied from any node.
    Add "export CT_MANAGEMENT_SCOPE=2" to the root account's .profile.

27. Check netmon.cf [dbhost1, dbhost2]
    Confirm the gateway IP is listed in /var/ct/cfg/netmon.cf:
    [root@dbhost1 ~]# cat /var/ct/cfg/netmon.cf
    !IBQPORTONLY !ALL
    !LINK_STATE_REQD_ONLY !ALL
    !REQD ens3 172.18.228.1

28. Add the Db2 license [dbhost1, dbhost2]
    /opt/ibm/db2/V11.5/adm/db2licm -a db2adv_vpc.lic

pureScale cluster health check

29. Verify the pureScale cluster configuration [dbhost1]
    /opt/ibm/db2/V11.5/bin/db2cluster -verify

Start Db2 and create a database

30. Start the Db2 instance [dbhost1]
    su - db2inst1; db2start

31. Check the instance [dbhost1]
    db2instance -list

32. Create the SAMPLE database [dbhost1]
    db2sampl on /db2data

33. Activate the database [dbhost1]
    db2 activate db sample
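Once the SAMPLE database is active, a quick smoke test from either member confirms that the cluster is serving SQL. A minimal check run as the instance owner (EMPLOYEE is one of the tables that db2sampl creates):

su - db2inst1
db2 connect to sample
db2 "select count(*) from employee"
db2 terminate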

 

This concludes our detailed look at how to set up Db2 pureScale in a Nutanix virtualized environment.
