Intern:Cluster/xora

From the StuRa HTW Dresden wiki
Version as of 2 October 2018, 10:16

Cluster/xora is intended to be a cluster of servers running Proxmox VE. For now this is a test; if the test turns out well, the entire server operation is to be run on such a cluster.

Test

Fundamentally, the goal is to test a cluster of servers running Proxmox VE that is as failure-tolerant as possible.

To be tested:

  • data backup
    • replication
  • failover

Name

xora is the artificially chosen name for the cluster of several servers.

Cluster

A cluster of servers running Proxmox VE probably requires at least 3 servers; with fewer, the cluster loses quorum as soon as a single node fails.
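This minimum follows from corosync's majority-vote quorum: a cluster of n nodes stays quorate only while more than n/2 votes remain. A quick sanity check with plain shell arithmetic (no cluster needed):

```shell
# Majority quorum for small cluster sizes: with 2 nodes, losing one
# node already loses quorum; 3 nodes is the smallest size that
# tolerates a single failure.
for n in 2 3 4 5; do
  majority=$(( n / 2 + 1 ))
  tolerated=$(( n - majority ))
  echo "$n nodes: quorum at $majority votes, tolerates $tolerated failure(s)"
done
```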

The cluster (theoretically) includes

at a minimum
for completeness also
as an optional addition also

Devices

individual devices

27090 (cora)
  IPv4 (OS): 141.56.51.123
  DNS (A) (OS): cora.stura-dresden.de
  WUI (OS): https://cora.stura-dresden.de:8006/
  Mail (OS): cora@stura.htw-dresden.de
  setup help ("Geburtshilfe"): James
  start (duration): 3 min
  IPv4 (IPMI): 141.56.51.113
  DNS (A) (IPMI): irmc.cora.stura-dresden.de
  WUI (IPMI): https://irmc.cora.stura-dresden.de/
  Mail (IPMI): irmc.cora@stura.htw-dresden.de
  network interfaces: M 2 1 X
  mass storage: 2 × 3.5 ″, 2 × 2 TB
  ownership: bsd.services:user:vater:hw#rx300_s6_2709_0

27091 (dora)
  IPv4 (OS): 141.56.51.124
  DNS (A) (OS): dora.stura-dresden.de
  WUI (OS): https://dora.stura-dresden.de:8006/
  Mail (OS): dora@stura.htw-dresden.de
  setup help ("Geburtshilfe"): Fullforce
  start (duration): 3 min
  IPv4 (IPMI): 141.56.51.114
  DNS (A) (IPMI): irmc.dora.stura-dresden.de
  WUI (IPMI): https://irmc.dora.stura-dresden.de/
  Mail (IPMI): irmc.dora@stura.htw-dresden.de
  network interfaces: M 2 1 X
  mass storage: 2 × 3.5 ″, 2 × 2 TB
  ownership: bsd.services:user:vater:hw#rx300_s6_2709_1

8529 (lora)
  IPv4 (OS): 141.56.51.127
  DNS (A) (OS): lora.stura-dresden.de
  WUI (OS): https://lora.stura-dresden.de:8006/
  Mail (OS): lora@stura.htw-dresden.de
  start (duration):  min
  IPv4 (IPMI): 141.56.51.117
  DNS (A) (IPMI): irmc.lora.stura-dresden.de
  WUI (IPMI): https://irmc.lora.stura-dresden.de/
  Mail (IPMI): irmc.lora@stura.htw-dresden.de
  network interfaces: M 2 1
  mass storage: 2 × 3.5 ″, 2 × 2 TB
  ownership: bsd.services:user:vater:hw#rx300_s6_8529

8 (nora)
  IPv4 (OS): 141.56.51.128
  DNS (A) (OS): nora.stura-dresden.de
  WUI (OS): https://nora.stura-dresden.de:8006/
  Mail (OS): nora@stura.htw-dresden.de
  start (duration): 2 min
  IPv4 (IPMI): 141.56.51.118
  DNS (A) (IPMI): irmc.nora.stura-dresden.de
  WUI (IPMI): https://irmc.nora.stura-dresden.de/
  Mail (IPMI): irmc.nora@stura.htw-dresden.de
  network interfaces: M 2 1 X
  mass storage: 2 × 3.5 ″, 2 × 2 TB
  ownership: StuRa (srs3008)

5100 (zora)
  IPv4 (OS): 141.56.51.129
  DNS (A) (OS): zora.stura-dresden.de
  WUI (OS): https://zora.stura-dresden.de:8006/
  Mail (OS): zora@stura.htw-dresden.de
  start (duration):  min
  IPv4 (IPMI): 141.56.51.119
  DNS (A) (IPMI): drac.cora.stura-dresden.de
  WUI (IPMI): https://drac.cora.stura-dresden.de/
  Mail (IPMI): drac.zora@stura.htw-dresden.de
  network interfaces: M 1 2
  mass storage: 4 × 3.5 ″, 4 × 2 TB
  ownership: bsd.services:user:vater:hw#dell_poweredge_r510

BIOS

F2

Boot
  everything else
  USB KEY: …
  PCI SCSI: #0100 ID000 LN0 HGST H
  PCI SCSI: #0100 ID004 LN0 HGST H

Installing the operating system

Install Proxmox VE
Loading Proxmox Installer ...
Loading initial ramdisk ...
Proxmox startup
End User License Agreement (EULA)
  I agree
Proxmox Virtualization Environment (PVE)
  Options
    Filesystem
      ext4 → zfs (RAID1)
    Disk Setup
      Harddisk 0: /dev/sda (1863GB, HUS726020ALS214)
      Harddisk 1: /dev/sdb (1863GB, HUS726020ALS214)
      Harddisk 2: -- do not use --
    Advanced Options
      ashift: 12
      compress: on
      checksum: on
      copies: 1
    OK
    Target: zfs (RAID1)
  Next
Location and Time Zone selection
  Country: Germany
  Time zone: Europe/Berlin
  Keyboard Layout: German
  Next
Administration Password and E-Mail Address
  Password: 8
  Confirm: 8
  E-Mail: see #individual devices
  Next
Management Network Configuration
  Management Interface: enp8s0f0 - … (igb)
  Hostname (FQDN): see #individual devices
  IP Address: see #individual devices
  Netmask: 255.255.255.0
  Gateway: 141.56.51.254
  DNS Server: 141.56.1.1
  Next
Installation successful!
  Reboot
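The netmask entered above, 255.255.255.0, is the /24 prefix used throughout this page; converting between the two notations is just a bit count (a generic shell sketch, nothing Proxmox-specific):

```shell
# Count the set bits in a dotted-quad netmask to get the CIDR prefix.
mask=255.255.255.0
bits=0
for octet in ${mask//./ }; do
  while [ "$octet" -gt 0 ]; do
    bits=$(( bits + (octet & 1) ))
    octet=$(( octet >> 1 ))
  done
done
echo "$mask = /$bits"
```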

after the installation

first update
  • Update (WUI)
    • Refresh (WUI)
      restart (WUI)
    • Upgrade (WUI)
(optional) inspecting ZFS
zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda2    ONLINE       0     0     0
	    sdb2    ONLINE       0     0     0

errors: No known data errors
zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             9.40G  1.75T   104K  /rpool
rpool/ROOT         919M  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1   919M  1.75T   919M  /
rpool/data          96K  1.75T    96K  /rpool/data
rpool/swap        8.50G  1.75T    56K  -
(optional) inspecting the partitioning
fdisk -l /dev/sd{a,b}
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A3CA01C-D0CE-4750-A26A-C07C1541EF1D

Device          Start        End    Sectors  Size Type
/dev/sda1          34       2047       2014 1007K BIOS boot
/dev/sda2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sda9  3907012750 3907029134      16385    8M Solaris reserved 1


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C0D3B0CA-C966-4B00-B367-EEDBD04872F7

Device          Start        End    Sectors  Size Type
/dev/sdb1          34       2047       2014 1007K BIOS boot
/dev/sdb2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012750 3907029134      16385    8M Solaris reserved 1
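As a cross-check of the fdisk output above: the ZFS partition sda2 spans 3907010702 sectors of 512 bytes, which matches the reported 1.8T:

```shell
# 3907010702 sectors × 512 bytes ≈ 1863 GiB ≈ 1.8 TiB
sectors=3907010702
bytes=$(( sectors * 512 ))
echo "$bytes bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"
```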
Backing up the initial state of PVE (including the update just applied)

Creating a recursive snapshot of the entire pool

zfs snapshot -r rpool@fresh-installed-pve-and-updated

Beyond that, it is worth considering also backing up /dev/sd{a,b}1 (the small BIOS boot partitions).
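A minimal sketch of such a partition backup, assuming a hypothetical target directory /root/backup (any other storage location works just as well); to be run on the PVE host:

```shell
# Hedged sketch: image the two small BIOS boot partitions.
# /root/backup is an assumed path, not part of the original setup.
mkdir -p /root/backup
dd if=/dev/sda1 of=/root/backup/sda1.img bs=1M
dd if=/dev/sdb1 of=/root/backup/sdb1.img bs=1M
```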

Creating the cluster

optional inspection prior to creating a cluster

cora, dora and nora (outputs in that order):
less /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.123
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

iface enp8s0f1 inet manual
auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.124
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

iface enp8s0f1 inet manual
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.128
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp2s0f2 inet manual

iface enp2s0f3 inet manual

iface eno2 inet manual
less /etc/hosts
127.0.0.1 localhost.localdomain localhost
141.56.51.123 cora.stura-dresden.de cora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost
141.56.51.124 dora.stura-dresden.de dora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost
141.56.51.128 nora.stura-dresden.de nora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Creating the cluster xora

on one of the servers that are to belong to the cluster

carried out on #8 (nora)
(alternatively) graphical interface / command line
  • Datacenter -> Cluster -> Create Cluster
    Create Cluster
    Cluster Name
    xora
    Ring 0 Address
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
TASK OK

pvecm create xora

Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
less /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 1
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
pvecm status
Quorum information
------------------
Date:             Fri Mmm dd HH:MM:SS yyyy
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 141.56.51.128 (local)

Creating an internal network

As a freely chosen subnet we use 10.10.10.0/24.
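The internal addresses chosen below simply reuse each host's last octet from the public 141.56.51.0/24 network, which keeps the mapping easy to remember:

```shell
# Derive the internal 10.10.10.0/24 address from the public one by
# keeping the host part (the last octet) unchanged.
for public in 141.56.51.123 141.56.51.124 141.56.51.128; do
  echo "$public -> 10.10.10.${public##*.}"
done
```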

cora, dora and nora (configurations in that order):

graphical interface (with the required restart)

less /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.123
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.123
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.124
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.124
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp2s0f2 inet manual

iface enp2s0f3 inet manual

auto eno2
iface eno2 inet static
        address  10.10.10.128
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.128
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

Adding the other nodes independently of DNS

cora, dora and nora:
less /etc/hosts

####    members of the cluster xora
10.10.10.123 cora.xora.stura-dresden.de cora.xora
10.10.10.124 dora.xora.stura-dresden.de dora.xora
10.10.10.128 nora.xora.stura-dresden.de nora.xora
# The following lines are desirable for IPv6 capable hosts

Adding other servers

(also following https://pve.proxmox.com/wiki/Separate_Cluster_Network)
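The page does not record the join command itself; on a stock Proxmox VE install the usual way (a sketch, to be run on each joining node, not on nora) would be:

```shell
# Hedged sketch: join an existing cluster from a new node (e.g. cora).
# The IP is nora's address, i.e. a node that is already in the cluster.
pvecm add 141.56.51.128
# afterwards, check membership from any node
pvecm status
```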

/etc/pve/corosync.conf (intermediate state: nodes addressed via public IPv4, config_version 3)
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: cora
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 141.56.51.123
  }
  node {
    name: dora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 141.56.51.124
  }
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 3
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
/etc/pve/corosync.conf (final state: nodes addressed via the internal names, config_version 4)
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: cora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: cora.xora
  }
  node {
    name: dora
    nodeid: 4
    quorum_votes: 1
    ring0_addr: dora.xora
  }
  node {
    name: nora
    nodeid: 8
    quorum_votes: 1
    ring0_addr: nora.xora
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 4
  interface {
    bindnetaddr: nora.xora
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
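/etc/pve/corosync.conf lives on the pmxcfs cluster filesystem; the Proxmox documentation recommends editing a copy, increasing config_version (as done above: 3 → 4), and moving it back, roughly:

```shell
# Hedged sketch of the recommended edit cycle for the cluster-wide
# corosync configuration (see the Proxmox VE cluster documentation).
cp /etc/pve/corosync.conf /etc/pve/corosync.conf.new
$EDITOR /etc/pve/corosync.conf.new   # edit, and increase config_version
mv /etc/pve/corosync.conf.new /etc/pve/corosync.conf
```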

See also