Intern:Cluster/xora: Unterschied zwischen den Versionen

Aus Wiki StuRa HTW Dresden
Zur Navigation springen Zur Suche springen
 
(60 dazwischenliegende Versionen von 2 Benutzern werden nicht angezeigt)
Zeile 1: Zeile 1:
[[Server/sc0]] soll ein Verbund (Cluster) von Servern mit [[Proxmox VE]] sein. Es handelt sich vorerst um einen Test. Wenn der Test zufriedenstellend verläuft, soll damit dann der gesamte Betrieb zu [[Server]]n damit realisiert werden.  
[[{{FULLPAGENAME}}|{{PAGENAME}}]] war ein erster Test zu einem Verbund (Cluster) von Servern mit [[Proxmox VE]].
 
Später wurde der Test erneut bei der [[Server/Aktualisierung/2019]] als [[Intern:ora]] vorgenommen.


== Test ==
== Test ==
Zeile 47: Zeile 49:
! Geburtshilfe
! Geburtshilfe
! Start (Dauer)
! Start (Dauer)
! Eigentum
! IPv4 IPMI
! IPv4 IPMI
! DNS (A) IPMI
! DNS (A) IPMI
! WUI IPMI
! WUI IPMI
! Mail IPMI
! Mail IPMI
! Netzwerkschnittstellen
! Massenspeicher
! Eigentum
|-
|-
|
|
===== 27090 =====
<h5>27090</h5>
| cora
| cora
| 141.56.51.123
| 141.56.51.123
Zeile 62: Zeile 66:
| James
| James
| 3&nbsp;min
| 3&nbsp;min
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#rx300_s6_2709_0 bsd.services:user:vater:hw#rx300_s6_2709_0]
| 141.56.51.113
| 141.56.51.113
| irmc.cora.stura-dresden.de
| irmc.cora.stura-dresden.de
| https://irmc.cora.stura-dresden.de/
| https://irmc.cora.stura-dresden.de/
| <s>irmc.cora@stura.htw-dresden.de</s>
| <s>irmc.cora@stura.htw-dresden.de</s>
|
{| class="wikitable"
|-
! M
! 2
! 1
|-
|}
{| class="wikitable"
|-
! X
|-
|
----
|-
|
----
|-
|-
|
|
===== 27091 =====
----
|-
|}
|
{| class="wikitable"
|-
! 3.5&nbsp;″
! 3.5&nbsp;″
|-
|
----
|
----
|-
|
----
|
----
|-
| 2&nbsp;TB
| 2&nbsp;TB
|-
|}
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#rx300_s6_2709_0 bsd.services:user:vater:hw#rx300_s6_2709_0]
|-
|
<h5>27091</h5>
| dora
| dora
| 141.56.51.124
| 141.56.51.124
Zeile 77: Zeile 123:
| Fullforce
| Fullforce
| 3&nbsp;min
| 3&nbsp;min
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#rx300_s6_2709_1 bsd.services:user:vater:hw#rx300_s6_2709_1]
| 141.56.51.114
| 141.56.51.114
| irmc.dora.stura-dresden.de
| irmc.dora.stura-dresden.de
| https://irmc.dora.stura-dresden.de/
| https://irmc.dora.stura-dresden.de/
| <s>irmc.dora@stura.htw-dresden.de</s>
| <s>irmc.dora@stura.htw-dresden.de</s>
|
{| class="wikitable"
|-
! M
! 2
! 1
|-
|}
{| class="wikitable"
|-
! X
|-
|
----
|-
|
----
|-
|
----
|-
|}
|
{| class="wikitable"
|-
! 3.5&nbsp;″
! 3.5&nbsp;″
|-
|
----
|
----
|-
|
----
|
----
|-
| 2&nbsp;TB
| 2&nbsp;TB
|-
|}
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#rx300_s6_2709_1 bsd.services:user:vater:hw#rx300_s6_2709_1]
|-
|-
|
|
===== 8529 =====
<h5>8529</h5>
| <s>lora</s>
| <s>lora</s>
| 141.56.51.127
| 141.56.51.127
Zeile 91: Zeile 179:
| lora@stura.htw-dresden.de
| lora@stura.htw-dresden.de
|  
|  
| ?&nbsp;min
| &nbsp;min
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#rx300_s6_2709_0 bsd.services:user:vater:hw#rx300_s6_8529]
| 141.56.51.117
| 141.56.51.117
| irmc.lora.stura-dresden.de
| irmc.lora.stura-dresden.de
| https://irmc.lora.stura-dresden.de/
| https://irmc.lora.stura-dresden.de/
| <s>irmc.lora@stura.htw-dresden.de</s>
| <s>irmc.lora@stura.htw-dresden.de</s>
|
{| class="wikitable"
|-
| M
| 2
| 1
|-
|}
{| class="wikitable"
|-
|-
|
|
===== 8 =====
----
| <s>nora</s>
|-
|
----
|-
|
----
|-
|
----
|-
|}
|
{| class="wikitable"
|-
! 3.5&nbsp;″
! 3.5&nbsp;″
|-
|
----
|
----
|-
|
----
|
----
|-
| 2&nbsp;TB
| 2&nbsp;TB
|-
|}
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#rx300_s6_2709_0 bsd.services:user:vater:hw#rx300_s6_8529]
|-
|
<h5>8</h5>
| nora
| 141.56.51.128
| 141.56.51.128
| nora.stura-dresden.de
| nora.stura-dresden.de
Zeile 106: Zeile 237:
| nora@stura.htw-dresden.de
| nora@stura.htw-dresden.de
|  
|  
| ?&nbsp;min
| 2&nbsp;min
| [[StuRa]] ([[srs3008]])
| 141.56.51.118
| 141.56.51.118
| irmc.nora.stura-dresden.de
| irmc.nora.stura-dresden.de
| https://irmc.nora.stura-dresden.de/
| https://irmc.nora.stura-dresden.de/
| <s>irmc.nora@stura.htw-dresden.de</s>
| <s>irmc.nora@stura.htw-dresden.de</s>
|
{| class="wikitable"
|-
! M
! 2
! 1
|-
|}
{| class="wikitable"
|-
! X
|-
|
----
|-
|
----
|-
|
----
|-
|}
|
{| class="wikitable"
|-
! 3.5&nbsp;″
! 3.5&nbsp;″
|-
|
----
|
----
|-
|
----
|
----
|-
| 2&nbsp;TB
| 2&nbsp;TB
|-
|}
| [[StuRa]] ([[srs3008]])
|-
|-
|
|
===== 5100 =====
<h5>5100</h5>
| <s>zora</s>
| <s>zora</s>
| 141.56.51.129
| 141.56.51.129
Zeile 121: Zeile 294:
| zora@stura.htw-dresden.de
| zora@stura.htw-dresden.de
|  
|  
| ?&nbsp;min
| &nbsp;min
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#dell_poweredge_r510 bsd.services:user:vater:hw#dell_poweredge_r510]
| 141.56.51.119
| 141.56.51.119
| drac.cora.stura-dresden.de
| drac.cora.stura-dresden.de
| https://drac.cora.stura-dresden.de/
| https://drac.cora.stura-dresden.de/
| <s>drac.zora@stura.htw-dresden.de</s>
| <s>drac.zora@stura.htw-dresden.de</s>
|
{| class="wikitable"
|-
| <s>M</s>
|-
|}
{| class="wikitable"
|-
! 1
|-
! 2
|-
|}
|
{| class="wikitable"
|-
! 3.5&nbsp;″
! 3.5&nbsp;″
! 3.5&nbsp;″
! 3.5&nbsp;″
|-
| 2&nbsp;TB
| 2&nbsp;TB
|
----
|
----
|-
| 2&nbsp;TB
| 2&nbsp;TB
|
----
|
----
|-
|}
| [https://bsd.services/wiki/doku.php?id=user:vater:hw#dell_poweredge_r510 bsd.services:user:vater:hw#dell_poweredge_r510]
|-
|-
|}
|}
Zeile 143: Zeile 352:


=== Installation Betriebssystem ===
=== Installation Betriebssystem ===
==== Vorbereitung Installation Betreibssystem ====
==== Durchführung Installation Betreibssystem ====


  '''Install Proxmox VE'''
  '''Install Proxmox VE'''
Zeile 194: Zeile 407:
: <u>R</u>eboot
: <u>R</u>eboot


=== nach der Installation ===
==== Nachbereitung Installation Betreibssystem ====


; erste Aktualisierung:
; erste Aktualisierung:
Zeile 201: Zeile 414:
**: Neustart (WUI)
**: Neustart (WUI)
** Upgrade (WUI)
** Upgrade (WUI)
; (optionales) ZFS anschauen:
: <code>zpool status</code>
<pre>
  pool: rpool
state: ONLINE
  scan: none requested
config:
NAME        STATE    READ WRITE CKSUM
rpool      ONLINE      0    0    0
  mirror-0  ONLINE      0    0    0
    sda2    ONLINE      0    0    0
    sdb2    ONLINE      0    0    0
errors: No known data errors
</pre>
: <code>zfs list</code>
<pre>
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool            9.40G  1.75T  104K  /rpool
rpool/ROOT        919M  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1  919M  1.75T  919M  /
rpool/data          96K  1.75T    96K  /rpool/data
rpool/swap        8.50G  1.75T    56K  -
</pre>
; (optionales) Anschauen der Partitionierung
: <code>fdisk -l /dev/sd{a,b}</code>
<pre>
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A3CA01C-D0CE-4750-A26A-C07C1541EF1D
Device          Start        End    Sectors  Size Type
/dev/sda1          34      2047      2014 1007K BIOS boot
/dev/sda2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sda9  3907012750 3907029134      16385    8M Solaris reserved 1
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C0D3B0CA-C966-4B00-B367-EEDBD04872F7
Device          Start        End    Sectors  Size Type
/dev/sdb1          34      2047      2014 1007K BIOS boot
/dev/sdb2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012750 3907029134      16385    8M Solaris reserved 1
</pre>
; Sicherung des initialen Zustandes von PVE (mit der erfolgten Aktualisierung):
Erstellen eines Schnappschusses vom gesamten Pool
: <code>zfs snapshot -r rpool@fresh-installed-pve-and-updated</code>
Im Übrigen kann überlegt werden, dass darüber hinaus auch eine Sicherung von /dev/sd{a,b}1 vorgenommen wird.
==== Anpassung der Quellen ====
; Siehe auch: [[Server/Proxmox#sources.list]]
===== Ergänzung der Quelle pve-no-subscription =====
----
auf schnell!
: <code>echo 'deb http://download.proxmox.com/debian/pve stretch pve-no-subscription' > /etc/apt/sources.list.d/pve-download.list && apt update</code>
----
<code>cat /etc/apt/sources.list.d/pve-enterprise.list</code>
<pre>
cat: /etc/apt/sources.list.d/pve-enterprise.list: No such file or directory
</pre>
<code>$EDITOR /etc/apt/sources.list.d/pve-enterprise.list</code>
<pre>
</pre>
<code>cat /etc/apt/sources.list.d/pve-enterprise.list</code>
<pre>
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
</pre>
===== Entfernung der Quelle pve-enterprise =====
<code>cat /etc/apt/sources.list.d/pve-enterprise.list</code>
<pre>
deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
</pre>
<code>$EDITOR /etc/apt/sources.list.d/pve-enterprise.list</code>
<pre>
</pre>
<code>cat /etc/apt/sources.list.d/pve-enterprise.list</code>
<pre>
####deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
</pre>
=== Erstellung vom Cluster ===
==== optionales Begutachten für die Erstellung von einem Cluster ====
{|
!
! cora
! dora
! nora
|-
|
: <code>less /etc/network/interfaces</code>
|-
|
|
<pre>
auto lo
iface lo inet loopback
iface enp8s0f0 inet manual
auto vmbr0
iface vmbr0 inet static
        address 141.56.51.123
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
iface ens4f0 inet manual
iface ens4f1 inet manual
iface ens4f2 inet manual
iface ens4f3 inet manual
iface enp8s0f1 inet manual
</pre>
|
<pre>
auto lo
iface lo inet loopback
iface enp8s0f0 inet manual
auto vmbr0
iface vmbr0 inet static
        address 141.56.51.124
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
iface ens4f0 inet manual
iface ens4f1 inet manual
iface ens4f2 inet manual
iface ens4f3 inet manual
iface enp8s0f1 inet manual
</pre>
|
<pre>
auto lo
iface lo inet loopback
iface eno1 inet manual
auto vmbr0
iface vmbr0 inet static
        address 141.56.51.128
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
iface enp2s0f0 inet manual
iface enp2s0f1 inet manual
iface enp2s0f2 inet manual
iface enp2s0f3 inet manual
iface eno2 inet manual
</pre>
|-
|
: <code>less /etc/hosts</code>
|-
|
|
<pre>
127.0.0.1 localhost.localdomain localhost
141.56.51.123 cora.stura-dresden.de cora pvelocalhost
# The following lines are desirable for IPv6 capable hosts
::1    ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
</pre>
|
<pre>
127.0.0.1 localhost.localdomain localhost
141.56.51.124 dora.stura-dresden.de dora pvelocalhost
# The following lines are desirable for IPv6 capable hosts
::1    ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
</pre>
|
<pre>
127.0.0.1 localhost.localdomain localhost
141.56.51.128 nora.stura-dresden.de nora pvelocalhost
# The following lines are desirable for IPv6 capable hosts
::1    ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
</pre>
|-
|}
==== Erzeugung vom Cluster ''xora'' ====
auf einer der Server, die zum Cluster gehören sollen
: vorgenommen auf [[#8]] (nora)
{|
|-
! (alternativ) grafische Oberfläche
! Kommandozeile
|-
|
* Datacenter -> Cluster -> Create Cluster
*: Create Cluster
*:; Cluster Name: ''xora''
*:; Ring 0 Address:
<pre>
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
TASK OK
</pre>
|
<code>pvecm create xora</code>
<pre>
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
</pre>
|-
|}
: <code>less /etc/pve/corosync.conf</code>
<pre>
logging {
  debug: off
  to_syslog: yes
}
nodelist {
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}
quorum {
  provider: corosync_votequorum
}
totem {
  cluster_name: xora
  config_version: 1
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
</pre>
: <code>pvecm status</code>
<pre>
Quorum information
------------------
Date:            Fri Mmm dd HH:MM:SS yyyy
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/12
Quorate:          Yes
Votequorum information
----------------------
Expected votes:  1
Highest expected: 1
Total votes:      1
Quorum:          1 
Flags:            Quorate
Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 141.56.51.128 (local)
</pre>
==== Erstellung von einem internen Netz für das Cluster von Proxmox ====
frei gewählt verwenden wir 10.10.10.0/24
{|
!
! cora
! dora
! nora
|-
| colspan="4" |
grafische Oberfläche (mit notwendigen Neustart)
|-
|
: <code>less /etc/network/interfaces</code>
|-
|
|
<pre>
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
iface enp8s0f0 inet manual
iface ens4f0 inet manual
iface ens4f1 inet manual
iface ens4f2 inet manual
iface ens4f3 inet manual
auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.123
        netmask  255.255.255.0
auto vmbr0
iface vmbr0 inet static
        address  141.56.51.123
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
</pre>
|
<pre>
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
iface enp8s0f0 inet manual
iface ens4f0 inet manual
iface ens4f1 inet manual
iface ens4f2 inet manual
iface ens4f3 inet manual
auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.124
        netmask  255.255.255.0
auto vmbr0
iface vmbr0 inet static
        address  141.56.51.124
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
</pre>
|
<pre>
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
iface eno1 inet manual
iface enp2s0f0 inet manual
iface enp2s0f1 inet manual
iface enp2s0f2 inet manual
iface enp2s0f3 inet manual
auto eno2
iface eno2 inet static
        address  10.10.10.128
        netmask  255.255.255.0
auto vmbr0
iface vmbr0 inet static
        address  141.56.51.128
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
</pre>
|-
|}
==== Eintragung der anderen Nodes unabhängig von DNS ====
{|
!
! cora
! dora
! nora
|-
|
: <code>less /etc/hosts</code>
|-
|
| colspan="3" |
<pre></pre>
<pre>
####    members of the cluster xora
10.10.10.123 cora.xora.stura-dresden.de cora.xora
10.10.10.124 dora.xora.stura-dresden.de dora.xora
10.10.10.128 nora.xora.stura-dresden.de nora.xora
</pre>
<pre>
# The following lines are desirable for IPv6 capable hosts
</pre>
<pre></pre>
|-
|}
==== Hinzufügen von anderen Servern ====
{|
|-
|
! cora
! dora
! nora
|-
|
|
: <code>pvecm add nora.xora</code>
<pre>
Please enter superuser (root) password for 'nora.xora':
</pre>
<pre>
                                                      Password for root@nora.xora: ********
</pre>
<pre>
Etablishing API connection with host 'nora.xora'
The authenticity of host 'nora.xora' can't be established.
X509 SHA256 key fingerprint is F3:A9:2D:9E:D5:59:DA:AE:5E:76:71:1E:02:D9:49:B6:67:5C:40:B0:0C:C0:05:FF:C5:D7:62:37:00:D8:CA:DD.
</pre>
<pre>
Are you sure you want to continue connecting (yes/no)? yes
</pre>
<pre>
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1538462961.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'cora' to cluster.
</pre>
|
: <code>pvecm add nora.xora</code>
<pre>
Please enter superuser (root) password for 'nora.xora':
</pre>
<pre>
                                                      Password for root@nora.xora: ********
</pre>
<pre>
Etablishing API connection with host 'nora.xora'
The authenticity of host 'nora.xora' can't be established.
X509 SHA256 key fingerprint is F3:A9:2D:9E:D5:59:DA:AE:5E:76:71:1E:02:D9:49:B6:67:5C:40:B0:0C:C0:05:FF:C5:D7:62:37:00:D8:CA:DD.
</pre>
<pre>
Are you sure you want to continue connecting (yes/no)? yes
</pre>
<pre>
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1538462994.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'dora' to cluster.
</pre>
|
|-
|
|
|
|
: <code>/etc/pve/corosync.conf</code>
<pre>
logging {
  debug: off
  to_syslog: yes
}
nodelist {
  node {
    name: cora
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 141.56.51.123
  }
  node {
    name: dora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 141.56.51.124
  }
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}
quorum {
  provider: corosync_votequorum
}
totem {
  cluster_name: xora
  config_version: 3
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
</pre>
(auch in Anlehnung an https://pve.proxmox.com/wiki/Separate_Cluster_Network)
: <code>/etc/pve/corosync.conf</code>
<pre>
logging {
  debug: off
  to_syslog: yes
}
nodelist {
  node {
    name: cora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: cora.xora
  }
  node {
    name: dora
    nodeid: 4
    quorum_votes: 1
    ring0_addr: dora.xora
  }
  node {
    name: nora
    nodeid: 8
    quorum_votes: 1
    ring0_addr: nora.xora
  }
}
quorum {
  provider: corosync_votequorum
}
totem {
  cluster_name: xora
  config_version: 4
  interface {
    bindnetaddr: nora.xora
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}
</pre>
|-
|}
==== Prüfen vom Verbund ====
{|
|-
|
! cora
! dora
! nora
|-
|
: <code>pvecm status</code>
|
|
|
<pre>
Quorum information
------------------
Date:            Fri Mmm dd HH:MM:SS yyyy
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000008
Ring ID:          3/52
Quorate:          Yes
Votequorum information
----------------------
Expected votes:  3
Highest expected: 3
Total votes:      3
Quorum:          2 
Flags:            Quorate
Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 10.10.10.123
0x00000004          1 10.10.10.124
0x00000008          1 10.10.10.128 (local)
</pre>
|-
|}
=== Erstellen vom Cluster für Ceph ===
==== Erstellung von einem internen Netz für das Cluster von Ceph ====
frei gewählt verwenden wir 10.10.11.0/24
* Netzwerkschnittstellen
*: 10.10.11.0/24
*:: 10.10.11.123/24
*:: 10.10.11.124/24
*:: 10.10.11.128/24
Neustart (wegen Netzwerkschnittstellen)
<!--
---- nur vorlage, um die inhalte nachzutragen
{|
!
! cora
! dora
! nora
|-
| colspan="4" |
grafische Oberfläche (mit notwendigen Neustart)
|-
|
: <code>less /etc/network/interfaces</code>
|-
|
|
<pre>
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
iface enp8s0f0 inet manual
iface ens4f0 inet manual
iface ens4f1 inet manual
iface ens4f2 inet manual
iface ens4f3 inet manual
auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.123
        netmask  255.255.255.0
auto vmbr0
iface vmbr0 inet static
        address  141.56.51.123
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
</pre>
|
<pre>
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
iface enp8s0f0 inet manual
iface ens4f0 inet manual
iface ens4f1 inet manual
iface ens4f2 inet manual
iface ens4f3 inet manual
auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.124
        netmask  255.255.255.0
auto vmbr0
iface vmbr0 inet static
        address  141.56.51.124
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
</pre>
|
<pre>
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!
auto lo
iface lo inet loopback
iface eno1 inet manual
iface enp2s0f0 inet manual
iface enp2s0f1 inet manual
iface enp2s0f2 inet manual
iface enp2s0f3 inet manual
auto eno2
iface eno2 inet static
        address  10.10.10.128
        netmask  255.255.255.0
auto vmbr0
iface vmbr0 inet static
        address  141.56.51.128
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
</pre>
|-
|}
-->
Testweises Verbinden (zu den jeweils anderen beiden Servern im Cluster) per <code>ssh</code>
: <code>ssh root@10.10.11.123</code>
<!--
<pre>
The authenticity of host '10.10.11.123 (10.10.11.123)' can't be established.
</pre>
<pre></pre>
<pre>
Are you sure you want to continue connecting (yes/no)? yes
</pre>
<pre>
Warning: Permanently added '10.10.11.123' (ECDSA) to the list of known hosts.
Linux nora 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
</pre>
<pre></pre>
-->
: <code>ssh root@10.10.11.124</code>
<!--
<pre>
The authenticity of host '10.10.11.124 (10.10.11.124)' can't be established.
</pre>
<pre></pre>
<pre>
Are you sure you want to continue connecting (yes/no)? yes
</pre>
<pre>
Warning: Permanently added '10.10.11.124' (ECDSA) to the list of known hosts.
Linux nora 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
</pre>
<pre></pre>
-->
: <code>ssh root@10.10.11.128</code>
<!--
<pre>
The authenticity of host '10.10.11.128 (10.10.11.128)' can't be established.
</pre>
<pre></pre>
<pre>
Are you sure you want to continue connecting (yes/no)? yes
</pre>
<pre>
Warning: Permanently added '10.10.11.128' (ECDSA) to the list of known hosts.
Linux nora 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200) x86_64
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
</pre>
<pre></pre>
-->
==== Installation von Ceph ====
: <code>pveceph install</code>
<!--
<pre>
update available package list
Reading package lists... Done
Building dependency tree     
Reading state information... Done
gdisk is already the newest version (1.0.1-1).
The following additional packages will be installed:
  binutils ceph-base ceph-mgr ceph-mon ceph-osd cryptsetup-bin libcephfs2 libcurl3 libgoogle-perftools4
  libjs-jquery libjs-sphinxdoc libjs-underscore libleveldb1v5 liblttng-ust-ctl2 liblttng-ust0 libparted2 librados2
  libradosstriper1 librbd1 librgw2 libtcmalloc-minimal4 libunwind8 parted python-bs4 python-cephfs
  python-cffi-backend python-cherrypy3 python-click python-colorama python-cryptography python-dnspython
  python-enum34 python-flask python-formencode python-idna python-ipaddress python-itsdangerous python-jinja2
  python-logutils python-mako python-markupsafe python-openssl python-paste python-pastedeploy
  python-pastedeploy-tpl python-pecan python-prettytable python-pyasn1 python-rados python-rbd python-repoze.lru
  python-rgw python-routes python-setuptools python-simplegeneric python-singledispatch python-tempita
  python-waitress python-webob python-webtest python-werkzeug
Suggested packages:
  binutils-doc ceph-mds libparted-dev libparted-i18n parted-doc python-cryptography-doc python-cryptography-vectors
  python-enum34-doc python-flask-doc python-egenix-mxdatetime python-jinja2-doc python-beaker python-mako-doc
  python-openssl-doc python-openssl-dbg httpd-wsgi libapache2-mod-python libapache2-mod-scgi python-pastescript
  python-pastewebkit doc-base python-setuptools-doc python-waitress-doc python-webob-doc python-webtest-doc ipython
  python-genshi python-lxml python-greenlet python-redis python-pylibmc | python-memcache python-werkzeug-doc
Recommended packages:
  ceph-mds ntp | time-daemon javascript-common python-lxml | python-html5lib python-blinker python-simplejson
  libjs-mochikit python-openid python-scgi python-pastescript python-lxml python-pyquery python-pyinotify
The following packages will be REMOVED:
  libpve-guest-common-perl libpve-storage-perl proxmox-ve pve-container pve-ha-manager pve-manager qemu-server
The following NEW packages will be installed:
  binutils ceph ceph-base ceph-mgr ceph-mon ceph-osd cryptsetup-bin libcephfs2 libcurl3 libgoogle-perftools4
  libjs-jquery libjs-sphinxdoc libjs-underscore libleveldb1v5 liblttng-ust-ctl2 liblttng-ust0 libparted2
  libtcmalloc-minimal4 libunwind8 parted python-bs4 python-cffi-backend python-cherrypy3 python-click
  python-colorama python-cryptography python-dnspython python-enum34 python-flask python-formencode python-idna
  python-ipaddress python-itsdangerous python-jinja2 python-logutils python-mako python-markupsafe python-openssl
  python-paste python-pastedeploy python-pastedeploy-tpl python-pecan python-prettytable python-pyasn1
  python-repoze.lru python-rgw python-routes python-setuptools python-simplegeneric python-singledispatch
  python-tempita python-waitress python-webob python-webtest python-werkzeug
The following packages will be upgraded:
  ceph-common librados2 libradosstriper1 librbd1 librgw2 python-cephfs python-rados python-rbd
8 upgraded, 55 newly installed, 7 to remove and 1 not upgraded.
Need to get 54.8 MB of archives.
After this operation, 169 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://ftp.de.debian.org/debian stretch/main amd64 binutils amd64 2.28-5 [3,770 kB]
Get:2 http://security.debian.org stretch/updates/main amd64 libcurl3 amd64 7.52.1-5+deb9u7 [292 kB]               
Get:3 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 ceph-common amd64 12.2.8-pve1 [12.9 MB] 
Get:4 http://ftp.de.debian.org/debian stretch/main amd64 liblttng-ust-ctl2 amd64 2.9.0-2+deb9u1 [99.4 kB]
Get:5 http://ftp.de.debian.org/debian stretch/main amd64 liblttng-ust0 amd64 2.9.0-2+deb9u1 [174 kB]         
Get:6 http://ftp.de.debian.org/debian stretch/main amd64 python-prettytable all 0.7.2-3 [22.4 kB]       
Get:7 http://ftp.de.debian.org/debian stretch/main amd64 libtcmalloc-minimal4 amd64 2.5-2.2 [121 kB]         
Get:8 http://ftp.de.debian.org/debian stretch/main amd64 libunwind8 amd64 1.1-4.1 [48.7 kB]                   
Get:9 http://ftp.de.debian.org/debian stretch/main amd64 libgoogle-perftools4 amd64 2.5-2.2 [224 kB]   
Get:10 http://ftp.de.debian.org/debian stretch/main amd64 libleveldb1v5 amd64 1.18-5 [136 kB]                 
Get:11 http://ftp.de.debian.org/debian stretch/main amd64 cryptsetup-bin amd64 2:1.7.3-4 [221 kB]       
Get:12 http://ftp.de.debian.org/debian stretch/main amd64 python-repoze.lru all 0.6-6 [12.3 kB]           
Get:13 http://ftp.de.debian.org/debian stretch/main amd64 libjs-jquery all 3.1.1-2 [154 kB]                   
Get:14 http://ftp.de.debian.org/debian stretch/main amd64 libjs-underscore all 1.8.3~dfsg-1 [63.8 kB]   
Get:15 http://ftp.de.debian.org/debian stretch/main amd64 libjs-sphinxdoc all 1.4.9-2 [69.5 kB]             
Get:16 http://ftp.de.debian.org/debian stretch/main amd64 python-routes all 2.3.1-2 [100 kB]               
Get:17 http://ftp.de.debian.org/debian stretch/main amd64 python-cherrypy3 all 3.5.0-2 [1,321 kB]       
Get:18 http://ftp.de.debian.org/debian stretch/main amd64 python-markupsafe amd64 0.23-3 [14.4 kB]           
Get:19 http://ftp.de.debian.org/debian stretch/main amd64 python-jinja2 all 2.8-1 [111 kB]                   
Get:20 http://ftp.de.debian.org/debian stretch/main amd64 python-cffi-backend amd64 1.9.1-2 [69.0 kB]   
Get:21 http://ftp.de.debian.org/debian stretch/main amd64 python-enum34 all 1.1.6-1 [35.0 kB]                   
Get:22 http://ftp.de.debian.org/debian stretch/main amd64 python-idna all 2.2-1 [32.6 kB]                 
Get:23 http://ftp.de.debian.org/debian stretch/main amd64 python-ipaddress all 1.0.17-1 [18.1 kB]       
Get:24 http://ftp.de.debian.org/debian stretch/main amd64 python-pyasn1 all 0.1.9-2 [51.8 kB]                 
Get:25 http://ftp.de.debian.org/debian stretch/main amd64 python-setuptools all 33.1.1-1 [297 kB]           
Get:26 http://ftp.de.debian.org/debian stretch/main amd64 python-cryptography amd64 1.7.1-3 [211 kB]         
Get:27 http://ftp.de.debian.org/debian stretch/main amd64 python-openssl all 16.2.0-1 [43.7 kB]                 
Get:28 http://ftp.de.debian.org/debian stretch/main amd64 python-logutils all 0.3.3-5 [17.2 kB]             
Get:29 http://ftp.de.debian.org/debian stretch/main amd64 python-mako all 1.0.6+ds1-2 [62.1 kB]         
Get:30 http://ftp.de.debian.org/debian stretch/main amd64 python-simplegeneric all 0.8.1-1 [11.9 kB]     
Get:31 http://ftp.de.debian.org/debian stretch/main amd64 python-singledispatch all 3.4.0.3-2 [9,690 B]           
Get:32 http://ftp.de.debian.org/debian stretch/main amd64 python-webob all 1:1.6.2-2 [63.7 kB]                     
Get:33 http://ftp.de.debian.org/debian stretch/main amd64 python-bs4 all 4.5.3-1 [86.7 kB]                 
Get:34 http://ftp.de.debian.org/debian stretch/main amd64 python-waitress all 1.0.1-1 [54.3 kB]           
Get:35 http://ftp.de.debian.org/debian stretch/main amd64 python-dnspython all 1.15.0-1 [102 kB]               
Get:36 http://ftp.de.debian.org/debian stretch/main amd64 python-formencode all 1.3.0-2 [140 kB]               
Get:37 http://ftp.de.debian.org/debian stretch/main amd64 python-tempita all 0.5.2-2 [13.8 kB]                 
Get:38 http://ftp.de.debian.org/debian stretch/main amd64 python-paste all 2.0.3+dfsg-4 [474 kB]               
Get:39 http://ftp.de.debian.org/debian stretch/main amd64 python-pastedeploy-tpl all 1.5.2-4 [8,024 B]   
Get:40 http://ftp.de.debian.org/debian stretch/main amd64 python-pastedeploy all 1.5.2-4 [30.4 kB]                 
Get:41 http://ftp.de.debian.org/debian stretch/main amd64 python-webtest all 2.0.24-1 [34.0 kB]                   
Get:42 http://ftp.de.debian.org/debian stretch/main amd64 python-pecan all 1.1.2-3 [104 kB]                   
Get:43 http://ftp.de.debian.org/debian stretch/main amd64 python-werkzeug all 0.11.15+dfsg1-1 [173 kB]     
Get:44 http://ftp.de.debian.org/debian stretch/main amd64 python-colorama all 0.3.7-1 [25.7 kB]               
Get:45 http://ftp.de.debian.org/debian stretch/main amd64 python-click all 6.6-1 [56.1 kB]                     
Get:46 http://ftp.de.debian.org/debian stretch/main amd64 python-itsdangerous all 0.24+dfsg1-2 [13.0 kB]   
Get:47 http://ftp.de.debian.org/debian stretch/main amd64 python-flask all 0.12.1-1 [62.2 kB]                     
Get:48 http://ftp.de.debian.org/debian stretch/main amd64 libparted2 amd64 3.2-17 [276 kB]                 
Get:49 http://ftp.de.debian.org/debian stretch/main amd64 parted amd64 3.2-17 [194 kB]                   
Get:50 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 python-rados amd64 12.2.8-pve1 [289 kB]
Get:51 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 librgw2 amd64 12.2.8-pve1 [1,806 kB]
Get:52 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 libradosstriper1 amd64 12.2.8-pve1 [320 kB]
Get:53 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 python-rbd amd64 12.2.8-pve1 [155 kB]
Get:54 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 librbd1 amd64 12.2.8-pve1 [993 kB]
Get:55 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 librados2 amd64 12.2.8-pve1 [2,723 kB]
Get:56 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 libcephfs2 amd64 12.2.8-pve1 [410 kB]
Get:57 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 python-cephfs amd64 12.2.8-pve1 [94.2 kB]
Get:58 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 python-rgw amd64 12.2.8-pve1 [98.2 kB]
Get:59 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 ceph-base amd64 12.2.8-pve1 [3,330 kB]
Get:60 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 ceph-mgr amd64 12.2.8-pve1 [3,520 kB]
Get:61 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 ceph-mon amd64 12.2.8-pve1 [4,474 kB]
Get:62 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 ceph-osd amd64 12.2.8-pve1 [14.0 MB]
Get:63 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 ceph amd64 12.2.8-pve1 [7,164 B]
Fetched 54.8 MB in 2s (19.6 MB/s)
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really you want to permanently remove 'proxmox-ve' from your system, run the following command
W: (pve-apt-hook) touch '/please-remove-proxmox-ve'
W: (pve-apt-hook) and repeat your apt-get/apt invocation.
W: (pve-apt-hook)
W: (pve-apt-hook) If you are unsure why 'proxmox-ve' would be removed, please verify
W: (pve-apt-hook) - your APT repository settings
W: (pve-apt-hook) - that you are using 'apt-get dist-upgrade' or 'apt full-upgrade' to upgrade your system
E: Sub-process /usr/share/proxmox-ve/pve-apt-hook returned an error code (1)
E: Failure running script /usr/share/proxmox-ve/pve-apt-hook
</pre>
-->
[[#Ergänzung der Quelle pve-no-subscription]]
: <code>pveceph install</code>
<!--
<pre>
update available package list
Reading package lists... Done
Building dependency tree     
Reading state information... Done
gdisk is already the newest version (1.0.1-1).
The following additional packages will be installed:
  binutils ceph-base ceph-fuse ceph-mgr ceph-mon ceph-osd cryptsetup-bin libcephfs2 libcurl3 libgoogle-perftools4
  libjs-jquery libjs-sphinxdoc libjs-underscore libleveldb1v5 liblttng-ust-ctl2 liblttng-ust0 libparted2
  libpve-common-perl libpve-guest-common-perl libpve-storage-perl librados2 libradosstriper1 librbd1 librgw2
  libtcmalloc-minimal4 libunwind8 parted pve-manager python-bs4 python-cephfs python-cffi-backend python-cherrypy3
  python-click python-colorama python-cryptography python-dnspython python-enum34 python-flask python-formencode
  python-idna python-ipaddress python-itsdangerous python-jinja2 python-logutils python-mako python-markupsafe
  python-openssl python-paste python-pastedeploy python-pastedeploy-tpl python-pecan python-prettytable
  python-pyasn1 python-rados python-rbd python-repoze.lru python-rgw python-routes python-setuptools
  python-simplegeneric python-singledispatch python-tempita python-waitress python-webob python-webtest
  python-werkzeug qemu-server
Suggested packages:
  binutils-doc ceph-mds libparted-dev libparted-i18n parted-doc python-cryptography-doc python-cryptography-vectors
  python-enum34-doc python-flask-doc python-egenix-mxdatetime python-jinja2-doc python-beaker python-mako-doc
  python-openssl-doc python-openssl-dbg httpd-wsgi libapache2-mod-python libapache2-mod-scgi python-pastescript
  python-pastewebkit doc-base python-setuptools-doc python-waitress-doc python-webob-doc python-webtest-doc ipython
  python-genshi python-lxml python-greenlet python-redis python-pylibmc | python-memcache python-werkzeug-doc
Recommended packages:
  ceph-mds ntp | time-daemon javascript-common python-lxml | python-html5lib python-blinker python-simplejson
  libjs-mochikit python-openid python-scgi python-pastescript python-lxml python-pyquery python-pyinotify
The following NEW packages will be installed:
  binutils ceph ceph-base ceph-fuse ceph-mgr ceph-mon ceph-osd cryptsetup-bin libcephfs2 libcurl3
  libgoogle-perftools4 libjs-jquery libjs-sphinxdoc libjs-underscore libleveldb1v5 liblttng-ust-ctl2 liblttng-ust0
  libparted2 libtcmalloc-minimal4 libunwind8 parted python-bs4 python-cffi-backend python-cherrypy3 python-click
  python-colorama python-cryptography python-dnspython python-enum34 python-flask python-formencode python-idna
  python-ipaddress python-itsdangerous python-jinja2 python-logutils python-mako python-markupsafe python-openssl
  python-paste python-pastedeploy python-pastedeploy-tpl python-pecan python-prettytable python-pyasn1
  python-repoze.lru python-rgw python-routes python-setuptools python-simplegeneric python-singledispatch
  python-tempita python-waitress python-webob python-webtest python-werkzeug
The following packages will be upgraded:
  ceph-common libpve-common-perl libpve-guest-common-perl libpve-storage-perl librados2 libradosstriper1 librbd1
  librgw2 pve-manager python-cephfs python-rados python-rbd qemu-server
13 upgraded, 56 newly installed, 0 to remove and 24 not upgraded.
Need to get 4,734 kB/59.5 MB of archives.
After this operation, 189 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 qemu-server amd64 5.0-36 [169 kB]
Get:2 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 libpve-guest-common-perl all 2.0-18 [16.7 kB]
Get:3 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 pve-manager amd64 5.2-9 [1,922 kB]
Get:4 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 libpve-common-perl all 5.0-40 [90.8 kB]
Get:5 http://download.proxmox.com/debian/ceph-luminous stretch/main amd64 ceph-fuse amd64 12.2.8-pve1 [2,448 kB]
Get:6 http://download.proxmox.com/debian/pve stretch/pve-no-subscription amd64 libpve-storage-perl all 5.0-29 [88.3 kB]
Fetched 4,734 kB in 0s (13.2 MB/s)           
apt-listchanges: Reading changelogs...
Extracting templates from packages: 100%
Selecting previously unselected package binutils.
(Reading database ... 40422 files and directories currently installed.)
Preparing to unpack .../00-binutils_2.28-5_amd64.deb ...
Unpacking binutils (2.28-5) ...
Preparing to unpack .../01-qemu-server_5.0-36_amd64.deb ...
Unpacking qemu-server (5.0-36) over (5.0-26) ...
Preparing to unpack .../02-libpve-guest-common-perl_2.0-18_all.deb ...
Unpacking libpve-guest-common-perl (2.0-18) over (2.0-16) ...
Preparing to unpack .../03-pve-manager_5.2-9_amd64.deb ...
Unpacking pve-manager (5.2-9) over (5.2-1) ...
Preparing to unpack .../04-libpve-common-perl_5.0-40_all.deb ...
Unpacking libpve-common-perl (5.0-40) over (5.0-31) ...
Selecting previously unselected package libtcmalloc-minimal4.
Preparing to unpack .../05-libtcmalloc-minimal4_2.5-2.2_amd64.deb ...
Unpacking libtcmalloc-minimal4 (2.5-2.2) ...
Selecting previously unselected package libunwind8.
Preparing to unpack .../06-libunwind8_1.1-4.1_amd64.deb ...
Unpacking libunwind8 (1.1-4.1) ...
Selecting previously unselected package libgoogle-perftools4.
Preparing to unpack .../07-libgoogle-perftools4_2.5-2.2_amd64.deb ...
Unpacking libgoogle-perftools4 (2.5-2.2) ...
Selecting previously unselected package ceph-fuse.
Preparing to unpack .../08-ceph-fuse_12.2.8-pve1_amd64.deb ...
Unpacking ceph-fuse (12.2.8-pve1) ...
Preparing to unpack .../09-libpve-storage-perl_5.0-29_all.deb ...
Unpacking libpve-storage-perl (5.0-29) over (5.0-23) ...
Preparing to unpack .../10-ceph-common_12.2.8-pve1_amd64.deb ...
Unpacking ceph-common (12.2.8-pve1) over (10.2.5-7.2) ...
Preparing to unpack .../11-python-rados_12.2.8-pve1_amd64.deb ...
Unpacking python-rados (12.2.8-pve1) over (10.2.5-7.2) ...
Selecting previously unselected package libcurl3:amd64.
Preparing to unpack .../12-libcurl3_7.52.1-5+deb9u7_amd64.deb ...
Unpacking libcurl3:amd64 (7.52.1-5+deb9u7) ...
Preparing to unpack .../13-librgw2_12.2.8-pve1_amd64.deb ...
Unpacking librgw2 (12.2.8-pve1) over (10.2.5-7.2) ...
Preparing to unpack .../14-libradosstriper1_12.2.8-pve1_amd64.deb ...
Unpacking libradosstriper1 (12.2.8-pve1) over (10.2.5-7.2) ...
Selecting previously unselected package liblttng-ust-ctl2:amd64.
Preparing to unpack .../15-liblttng-ust-ctl2_2.9.0-2+deb9u1_amd64.deb ...
Unpacking liblttng-ust-ctl2:amd64 (2.9.0-2+deb9u1) ...
Selecting previously unselected package liblttng-ust0:amd64.
Preparing to unpack .../16-liblttng-ust0_2.9.0-2+deb9u1_amd64.deb ...
Unpacking liblttng-ust0:amd64 (2.9.0-2+deb9u1) ...
Preparing to unpack .../17-python-rbd_12.2.8-pve1_amd64.deb ...
Unpacking python-rbd (12.2.8-pve1) over (10.2.5-7.2) ...
Preparing to unpack .../18-librbd1_12.2.8-pve1_amd64.deb ...
Unpacking librbd1 (12.2.8-pve1) over (10.2.5-7.2) ...
Preparing to unpack .../19-librados2_12.2.8-pve1_amd64.deb ...
Unpacking librados2 (12.2.8-pve1) over (10.2.5-7.2) ...
Selecting previously unselected package libcephfs2.
Preparing to unpack .../20-libcephfs2_12.2.8-pve1_amd64.deb ...
Unpacking libcephfs2 (12.2.8-pve1) ...
Preparing to unpack .../21-python-cephfs_12.2.8-pve1_amd64.deb ...
Unpacking python-cephfs (12.2.8-pve1) over (10.2.5-7.2) ...
Selecting previously unselected package python-prettytable.
Preparing to unpack .../22-python-prettytable_0.7.2-3_all.deb ...
Unpacking python-prettytable (0.7.2-3) ...
Selecting previously unselected package python-rgw.
Preparing to unpack .../23-python-rgw_12.2.8-pve1_amd64.deb ...
Unpacking python-rgw (12.2.8-pve1) ...
Selecting previously unselected package libleveldb1v5:amd64.
Preparing to unpack .../24-libleveldb1v5_1.18-5_amd64.deb ...
Unpacking libleveldb1v5:amd64 (1.18-5) ...
Selecting previously unselected package cryptsetup-bin.
Preparing to unpack .../25-cryptsetup-bin_2%3a1.7.3-4_amd64.deb ...
Unpacking cryptsetup-bin (2:1.7.3-4) ...
Selecting previously unselected package ceph-base.
Preparing to unpack .../26-ceph-base_12.2.8-pve1_amd64.deb ...
Unpacking ceph-base (12.2.8-pve1) ...
Selecting previously unselected package python-repoze.lru.
Preparing to unpack .../27-python-repoze.lru_0.6-6_all.deb ...
Unpacking python-repoze.lru (0.6-6) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../28-libjs-jquery_3.1.1-2_all.deb ...
Unpacking libjs-jquery (3.1.1-2) ...
Selecting previously unselected package libjs-underscore.
Preparing to unpack .../29-libjs-underscore_1.8.3~dfsg-1_all.deb ...
Unpacking libjs-underscore (1.8.3~dfsg-1) ...
Selecting previously unselected package libjs-sphinxdoc.
Preparing to unpack .../30-libjs-sphinxdoc_1.4.9-2_all.deb ...
Unpacking libjs-sphinxdoc (1.4.9-2) ...
Selecting previously unselected package python-routes.
Preparing to unpack .../31-python-routes_2.3.1-2_all.deb ...
Unpacking python-routes (2.3.1-2) ...
Selecting previously unselected package python-cherrypy3.
Preparing to unpack .../32-python-cherrypy3_3.5.0-2_all.deb ...
Unpacking python-cherrypy3 (3.5.0-2) ...
Selecting previously unselected package python-markupsafe.
Preparing to unpack .../33-python-markupsafe_0.23-3_amd64.deb ...
Unpacking python-markupsafe (0.23-3) ...
Selecting previously unselected package python-jinja2.
Preparing to unpack .../34-python-jinja2_2.8-1_all.deb ...
Unpacking python-jinja2 (2.8-1) ...
Selecting previously unselected package python-cffi-backend.
Preparing to unpack .../35-python-cffi-backend_1.9.1-2_amd64.deb ...
Unpacking python-cffi-backend (1.9.1-2) ...
Selecting previously unselected package python-enum34.
Preparing to unpack .../36-python-enum34_1.1.6-1_all.deb ...
Unpacking python-enum34 (1.1.6-1) ...
Selecting previously unselected package python-idna.
Preparing to unpack .../37-python-idna_2.2-1_all.deb ...
Unpacking python-idna (2.2-1) ...
Selecting previously unselected package python-ipaddress.
Preparing to unpack .../38-python-ipaddress_1.0.17-1_all.deb ...
Unpacking python-ipaddress (1.0.17-1) ...
Selecting previously unselected package python-pyasn1.
Preparing to unpack .../39-python-pyasn1_0.1.9-2_all.deb ...
Unpacking python-pyasn1 (0.1.9-2) ...
Selecting previously unselected package python-setuptools.
Preparing to unpack .../40-python-setuptools_33.1.1-1_all.deb ...
Unpacking python-setuptools (33.1.1-1) ...
Selecting previously unselected package python-cryptography.
Preparing to unpack .../41-python-cryptography_1.7.1-3_amd64.deb ...
Unpacking python-cryptography (1.7.1-3) ...
Selecting previously unselected package python-openssl.
Preparing to unpack .../42-python-openssl_16.2.0-1_all.deb ...
Unpacking python-openssl (16.2.0-1) ...
Selecting previously unselected package python-logutils.
Preparing to unpack .../43-python-logutils_0.3.3-5_all.deb ...
Unpacking python-logutils (0.3.3-5) ...
Selecting previously unselected package python-mako.
Preparing to unpack .../44-python-mako_1.0.6+ds1-2_all.deb ...
Unpacking python-mako (1.0.6+ds1-2) ...
Selecting previously unselected package python-simplegeneric.
Preparing to unpack .../45-python-simplegeneric_0.8.1-1_all.deb ...
Unpacking python-simplegeneric (0.8.1-1) ...
Selecting previously unselected package python-singledispatch.
Preparing to unpack .../46-python-singledispatch_3.4.0.3-2_all.deb ...
Unpacking python-singledispatch (3.4.0.3-2) ...
Selecting previously unselected package python-webob.
Preparing to unpack .../47-python-webob_1%3a1.6.2-2_all.deb ...
Unpacking python-webob (1:1.6.2-2) ...
Selecting previously unselected package python-bs4.
Preparing to unpack .../48-python-bs4_4.5.3-1_all.deb ...
Unpacking python-bs4 (4.5.3-1) ...
Selecting previously unselected package python-waitress.
Preparing to unpack .../49-python-waitress_1.0.1-1_all.deb ...
Unpacking python-waitress (1.0.1-1) ...
Selecting previously unselected package python-dnspython.
Preparing to unpack .../50-python-dnspython_1.15.0-1_all.deb ...
Unpacking python-dnspython (1.15.0-1) ...
Selecting previously unselected package python-formencode.
Preparing to unpack .../51-python-formencode_1.3.0-2_all.deb ...
Unpacking python-formencode (1.3.0-2) ...
Selecting previously unselected package python-tempita.
Preparing to unpack .../52-python-tempita_0.5.2-2_all.deb ...
Unpacking python-tempita (0.5.2-2) ...
Selecting previously unselected package python-paste.
Preparing to unpack .../53-python-paste_2.0.3+dfsg-4_all.deb ...
Unpacking python-paste (2.0.3+dfsg-4) ...
Selecting previously unselected package python-pastedeploy-tpl.
Preparing to unpack .../54-python-pastedeploy-tpl_1.5.2-4_all.deb ...
Unpacking python-pastedeploy-tpl (1.5.2-4) ...
Selecting previously unselected package python-pastedeploy.
Preparing to unpack .../55-python-pastedeploy_1.5.2-4_all.deb ...
Unpacking python-pastedeploy (1.5.2-4) ...
Selecting previously unselected package python-webtest.
Preparing to unpack .../56-python-webtest_2.0.24-1_all.deb ...
Unpacking python-webtest (2.0.24-1) ...
Selecting previously unselected package python-pecan.
Preparing to unpack .../57-python-pecan_1.1.2-3_all.deb ...
Unpacking python-pecan (1.1.2-3) ...
Selecting previously unselected package python-werkzeug.
Preparing to unpack .../58-python-werkzeug_0.11.15+dfsg1-1_all.deb ...
Unpacking python-werkzeug (0.11.15+dfsg1-1) ...
Selecting previously unselected package ceph-mgr.
Preparing to unpack .../59-ceph-mgr_12.2.8-pve1_amd64.deb ...
Unpacking ceph-mgr (12.2.8-pve1) ...
Selecting previously unselected package python-colorama.
Preparing to unpack .../60-python-colorama_0.3.7-1_all.deb ...
Unpacking python-colorama (0.3.7-1) ...
Selecting previously unselected package python-click.
Preparing to unpack .../61-python-click_6.6-1_all.deb ...
Unpacking python-click (6.6-1) ...
Selecting previously unselected package python-itsdangerous.
Preparing to unpack .../62-python-itsdangerous_0.24+dfsg1-2_all.deb ...
Unpacking python-itsdangerous (0.24+dfsg1-2) ...
Selecting previously unselected package python-flask.
Preparing to unpack .../63-python-flask_0.12.1-1_all.deb ...
Unpacking python-flask (0.12.1-1) ...
Selecting previously unselected package ceph-mon.
Preparing to unpack .../64-ceph-mon_12.2.8-pve1_amd64.deb ...
Unpacking ceph-mon (12.2.8-pve1) ...
Selecting previously unselected package libparted2:amd64.
Preparing to unpack .../65-libparted2_3.2-17_amd64.deb ...
Unpacking libparted2:amd64 (3.2-17) ...
Selecting previously unselected package parted.
Preparing to unpack .../66-parted_3.2-17_amd64.deb ...
Unpacking parted (3.2-17) ...
Selecting previously unselected package ceph-osd.
Preparing to unpack .../67-ceph-osd_12.2.8-pve1_amd64.deb ...
Unpacking ceph-osd (12.2.8-pve1) ...
Selecting previously unselected package ceph.
Preparing to unpack .../68-ceph_12.2.8-pve1_amd64.deb ...
Unpacking ceph (12.2.8-pve1) ...
Setting up python-dnspython (1.15.0-1) ...
Setting up python-idna (2.2-1) ...
Setting up libjs-jquery (3.1.1-2) ...
Setting up python-repoze.lru (0.6-6) ...
Setting up python-setuptools (33.1.1-1) ...
Setting up python-prettytable (0.7.2-3) ...
Setting up libparted2:amd64 (3.2-17) ...
Setting up python-simplegeneric (0.8.1-1) ...
Setting up libjs-underscore (1.8.3~dfsg-1) ...
Setting up liblttng-ust-ctl2:amd64 (2.9.0-2+deb9u1) ...
Setting up libpve-common-perl (5.0-40) ...
Setting up libcurl3:amd64 (7.52.1-5+deb9u7) ...
Setting up python-pyasn1 (0.1.9-2) ...
Setting up libjs-sphinxdoc (1.4.9-2) ...
Setting up python-webob (1:1.6.2-2) ...
Setting up python-colorama (0.3.7-1) ...
Setting up python-waitress (1.0.1-1) ...
update-alternatives: using /usr/bin/waitress-serve-python2 to provide /usr/bin/waitress-serve (waitress-serve) in auto mode
Setting up parted (3.2-17) ...
Setting up libleveldb1v5:amd64 (1.18-5) ...
Setting up python-markupsafe (0.23-3) ...
Setting up python-werkzeug (0.11.15+dfsg1-1) ...
Setting up python-cffi-backend (1.9.1-2) ...
Setting up libtcmalloc-minimal4 (2.5-2.2) ...
Setting up libunwind8 (1.1-4.1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up python-bs4 (4.5.3-1) ...
Setting up python-mako (1.0.6+ds1-2) ...
Setting up python-enum34 (1.1.6-1) ...
Processing triggers for systemd (232-25+deb9u4) ...
Setting up cryptsetup-bin (2:1.7.3-4) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up binutils (2.28-5) ...
Setting up python-singledispatch (3.4.0.3-2) ...
Setting up python-pastedeploy-tpl (1.5.2-4) ...
Setting up python-itsdangerous (0.24+dfsg1-2) ...
Setting up python-tempita (0.5.2-2) ...
Setting up python-ipaddress (1.0.17-1) ...
Setting up liblttng-ust0:amd64 (2.9.0-2+deb9u1) ...
Setting up python-logutils (0.3.3-5) ...
Setting up python-routes (2.3.1-2) ...
Setting up python-formencode (1.3.0-2) ...
Setting up python-jinja2 (2.8-1) ...
Setting up python-click (6.6-1) ...
Setting up python-paste (2.0.3+dfsg-4) ...
Setting up librados2 (12.2.8-pve1) ...
Setting up libcephfs2 (12.2.8-pve1) ...
Setting up libgoogle-perftools4 (2.5-2.2) ...
Setting up python-cryptography (1.7.1-3) ...
Setting up python-flask (0.12.1-1) ...
Setting up python-rados (12.2.8-pve1) ...
Setting up python-cherrypy3 (3.5.0-2) ...
Setting up python-cephfs (12.2.8-pve1) ...
Setting up libradosstriper1 (12.2.8-pve1) ...
Setting up python-openssl (16.2.0-1) ...
Setting up librgw2 (12.2.8-pve1) ...
Setting up python-rgw (12.2.8-pve1) ...
Setting up python-pastedeploy (1.5.2-4) ...
Setting up ceph-fuse (12.2.8-pve1) ...
Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target.
Setting up librbd1 (12.2.8-pve1) ...
Setting up python-rbd (12.2.8-pve1) ...
Setting up ceph-common (12.2.8-pve1) ...
Installing new version of config file /etc/default/ceph ...
Installing new version of config file /etc/logrotate.d/ceph-common ...
Setting system user ceph properties..usermod: no changes
..done
Fixing /var/run/ceph ownership....done
Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.
Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.
Setting up libpve-storage-perl (5.0-29) ...
Setting up python-webtest (2.0.24-1) ...
Setting up ceph-base (12.2.8-pve1) ...
Setting up libpve-guest-common-perl (2.0-18) ...
Setting up python-pecan (1.1.2-3) ...
update-alternatives: using /usr/bin/gunicorn_pecan-python2 to provide /usr/bin/gunicorn_pecan (gunicorn_pecan) in auto mode
update-alternatives: using /usr/bin/pecan-python2 to provide /usr/bin/pecan (pecan) in auto mode
Setting up ceph-osd (12.2.8-pve1) ...
chown: cannot access '/var/lib/ceph/osd/*/block*': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.
Setting up qemu-server (5.0-36) ...
Setting up pve-manager (5.2-9) ...
Setting up ceph-mon (12.2.8-pve1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.
Processing triggers for pve-ha-manager (2.0-5) ...
Setting up ceph-mgr (12.2.8-pve1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.
Setting up ceph (12.2.8-pve1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for systemd (232-25+deb9u4) ...
replacing ceph init script with own ceph.service
'/usr/share/doc/pve-manager/examples/ceph.service' -> '/etc/systemd/system/ceph.service'
Synchronizing state of ceph.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable ceph
</pre>
-->
==== Initialisieren von Ceph ====
gemäß doku ''10.10.10.0/24'', aber das wurde (versehentlich schon für die [[#Erstellung von einem internen Netz für das Cluster von Proxmox]] verwendet)
: <code>pveceph init --network 10.10.11.0/24</code>
WUI
<pre>
rados_connect failed - No such file or directory (500)
</pre>
: <code>pveceph createmon</code>
<pre>
creating /etc/pve/priv/ceph.client.admin.keyring
monmaptool: monmap file /tmp/monmap
monmaptool: generated fsid 4571c8c1-89c2-44e5-8527-247470f74809
epoch 0
fsid 4571c8c1-89c2-44e5-8527-247470f74809
last_changed 2018-10-02 13:30:43.174771
created 2018-10-02 13:30:43.174771
0: 10.10.11.128:6789/0 mon.nora
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@nora.service -> /lib/systemd/system/ceph-mon@.service.
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
INFO:ceph-create-keys:Key exists already: /etc/ceph/ceph.client.admin.keyring
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-nora'
creating keys for 'mgr.nora'
setting owner for directory
enabling service 'ceph-mgr@nora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@nora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@nora.service'
</pre>
: <code>less /etc/pve/storage.cfg</code>
<pre>
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
</pre>
: <code>less /etc/pve/ceph.conf</code>
<pre>
[global]
        auth client required = cephx
        auth cluster required = cephx
        auth service required = cephx
        cluster network = 10.10.11.0/24
        fsid = 5df2a0f7-2362-488e-9c5a-4b9ed2a16bfe
        keyring = /etc/pve/priv/$cluster.$name.keyring
        mon allow pool delete = true
        osd journal size = 5120
        osd pool default min size = 2
        osd pool default size = 3
        public network = 10.10.11.0/24
[osd]
        keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.nora]
        host = nora
        mon addr = 10.10.11.128:6789
</pre>
==== Hinzufügen von anderen Servern zum Pool von Ceph ====
* nora -> Ceph -> Monitor -> Create
*: Create Ceph Monitor/Manager
*:; Host: cora
: <tt>Task viewer: Ceph Monitor mon.cora - Create</tt>
WUI
: Output
<pre>
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@cora.service -> /lib/systemd/system/ceph-mon@.service.
INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:Talking to monitor...
exported keyring for client.admin
updated caps for client.admin
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-cora'
creating keys for 'mgr.cora'
setting owner for directory
enabling service 'ceph-mgr@cora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@cora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@cora.service'
TASK OK
</pre>
* nora -> Ceph -> Monitor -> Create
*: Create Ceph Monitor/Manager
*:; Host: dora
*:: Create
: <tt>Task viewer: Ceph Monitor mon.cora - Create</tt>
WUI
: Output
<pre>
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@dora.service -> /lib/systemd/system/ceph-mon@.service.
INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:Talking to monitor...
exported keyring for client.admin
updated caps for client.admin
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-dora'
creating keys for 'mgr.dora'
setting owner for directory
enabling service 'ceph-mgr@dora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@dora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@dora.service'
TASK OK
</pre>
==== Erstellung vom Pool für das Cluster von Ceph ====
WUI
* nora -> Ceph -> Pools -> Create
*: Create Ceph Monitor/Manager
*:; Name: xora
*:; Size: 3
*:; Min. Size: 2
*:; Crush Rule: replicated_rule
*:; pg_num: 64
*:; Add Storage: X
*:: Create
=== Containerisierung ===
=== Proxmox CT ===
==== Proxmox CT Erstellung ====
===== Proxmox CT Erstellung alle =====
; Create CT:
:; General:
::; Node:
::; Hostname: <tt>(stage)</tt> <tt>software</tt>_<tt>zweck</tt>_<tt>organisation</tt>
::: <tt>(stage)</tt> ist beispielsweise ''test'' oder ''dev'', wenn es nicht produktiv sein soll.
::: <tt>software</tt> ist beispielsweise ''plone'' oder ''openldap''. (Software, nicht Dienst! also nicht ''cms'' oder ''ldap'', oder ''www'' oder ''acc''!)
::: <tt>zweck</tt> ist beispielsweise ''website-2010'' oder ''nothilfe-2020''.
::: <tt>organisation</tt> ist beispielsweise ''stura-htw-dresden'' oder ''kss-sachsen''.
::; Unprivileged container:
::: [X]
::; Nesting:
::: [?]
::; Password: <small>nach Lebensdauer</small>
::: bei Projekten mindestens ''8''
:; Template:
::; Storage: ''cephfs''
:::; Type: rdb
::; Template:
:; Root Disk:
::; Storage: ''storage''
::; Disk size (GiB): <small>nach Bedarf</small>
:; CPU:
::; Cores: <small>2, oder nach Bedarf</small>
::: Die Verwendung von nur einen Prozessor oder mehr als zwei Prozessor ist zu begründen.
:; Memory:
::: <small>nach Bedarf</small>
::; Memory (MiB): <small>nach Bedarf</small>
::; Swap (MiB): <small>nach Bedarf</small>
::: Die Größe soll der Größe, die bei ''Memory (MiB)'' verwendet wird, entsprechen. (Die Verwendung von mehr oder weniger Größe ist zu begründen.)
::: Die Größe soll höchstens die Hälfte der Größe, die bei (''Root Disk'' ->) ''Disk size (GiB)'' verwendet wird, betragen.
:; Network:
::; Bridge: ''vmbr1''
::; IPv4: Static
::; IPv4/CIDR: 141.56.51.<tt>321</tt>/24
::; Gateway (IPv4): 141.56.51.254
::: <tt>321</tt> ist die "verwendbare" Adresse für IPv4, die unverzüglich bei [[Intern:Server#Verwendung von IP-Adressen]] einzutragen ist.
:; DNS:
::; DNS domain:
::; DNS servers: 141.56.1.1
:; Confirm:
::; Start after created: [ ]
::; Finish:
; Datacenter (cluster):
:; HA:
::; Resources:
:::; Add:
::::; VM: <tt>110</tt>
::::: <tt>110</tt> ist die ID der Instanz innerhalb von Proxmox.
::::; Group: ''HA_cluster''
::::; Add:
===== Proxmox CT Erstellung TurnKey =====
; Create CT:
:; General:
::; Unprivileged conatainer: [ ]
: <code>turnkey-init</code>
==== Proxmox CT Verwaltung ====
Auflisten aller Dateien für die Konfiguration der Container auf dem jeweiligen node
: <code>ls /etc/pve/lxc/</code>
=== Virtualisierung ===
=== Proxmox VM ===
=== Ausfallsicherheit ===
==== Ausfallsicherheit CT ====
: <code>ping 141.56.51.321</code>
<pre>
PING 141.56.51.321 (141.56.51.321) 56(84) bytes of data.
64 bytes from 141.56.51.321: icmp_seq=1 ttl=64 time=0.283 ms
64 bytes from 141.56.51.321: icmp_seq=2 ttl=64 time=0.213 ms
64 bytes from 141.56.51.321: icmp_seq=3 ttl=64 time=0.286 ms
From 141.56.51.456 icmp_seq=4 Destination Host Unreachable
From 141.56.51.456 icmp_seq=5 Destination Host Unreachable
</pre>
<pre></pre>
<pre>
From 141.56.51.456 icmp_seq=78 Destination Host Unreachable
From 141.56.51.456 icmp_seq=79 Destination Host Unreachable
64 bytes from 141.56.51.321: icmp_seq=80 ttl=64 time=2609 ms
64 bytes from 141.56.51.321: icmp_seq=81 ttl=64 time=1585 ms
64 bytes from 141.56.51.321: icmp_seq=82 ttl=64 time=561 ms
64 bytes from 141.56.51.321: icmp_seq=83 ttl=64 time=0.260 ms
64 bytes from 141.56.51.321: icmp_seq=84 ttl=64 time=0.295 ms
64 bytes from 141.56.51.157: icmp_seq=85 ttl=64 time=0.200 ms
64 bytes from 141.56.51.157: icmp_seq=86 ttl=64 time=0.274 ms
</pre>
<pre></pre>
==== Ausfallsicherheit Fehler ====
===== Fehler bei der Ausfallsicherheit bei der Verwendung von ZFS, statt RDB =====
<pre>
2020-04-29 09:14:46 starting migration of CT 110 to node 'n1' (10.1.0.31)
2020-04-29 09:14:46 found local volume 'local-zfs:subvol-110-disk-0' (in current VM config)
cannot open 'rpool/data/subvol-110-disk-0': dataset does not exist
usage:
snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
For the property list, run: zfs set|get
2020-04-29 09:14:46 ERROR: zfs error: For the delegated permission list, run: zfs allow|unallow
2020-04-29 09:14:46 aborting phase 1 - cleanup resources
2020-04-29 09:14:46 ERROR: found stale volume copy 'local-zfs:subvol-110-disk-0' on node 'n1'
2020-04-29 09:14:46 start final cleanup
2020-04-29 09:14:46 ERROR: migration aborted (duration 00:00:01): zfs error: For the delegated permission list, run: zfs allow|unallow
TASK ERROR: migration aborted
</pre>
== Anpassungen für Produktivebetrieb ==
* Backup Limit auf 10 erhöht.
* Vorbereitung für Plone 5 Umzug (101) [[Plone 5]]


== Siehe auch ==
== Siehe auch ==

Aktuelle Version vom 17. Oktober 2022, 15:06 Uhr

Cluster/xora war ein erster Test zu einem Verbund (Cluster) von Servern mit Proxmox VE.

Später wurde der Test erneut bei der Server/Aktualisierung/2019 als Intern:ora vorgenommen.

Grundsätzlich soll ein möglichst ausfallsicherer Verbund an Servern mit Proxmox VE getestet werden.

Getestet werden soll:

  • Datensicherung
    • Replikation
  • Ausfallüberbrückung

Bezeichnung

[Bearbeiten]

Es ist die künstlich geschaffene Bezeichnung für den Verbund von mehreren Servern.

Verbund

[Bearbeiten]

Für einen Verbund an Servern mit Proxmox VE bedarf es wohl mindestens 3 Servern.

Zu dem Verbund gehören (theoretisch)

mindestens
zur Vervollständigung auch
zur Ergänzung eventuell auch

.

Geräte

[Bearbeiten]

einzelne Geräte

[Bearbeiten]
Gerät Name IPv4 OS DNS (A) OS WUI OS Mail OS Geburtshilfe Start (Dauer) IPv4 IPMI DNS (A) IPMI WUI IPMI Mail IPMI Netzwerkschnittstellen Massenspeicher Eigentum
27090
cora 141.56.51.123 cora.stura-dresden.de https://cora.stura-dresden.de:8006/ cora@stura.htw-dresden.de James 3 min 141.56.51.113 irmc.cora.stura-dresden.de https://irmc.cora.stura-dresden.de/ irmc.cora@stura.htw-dresden.de
M 2 1
X



3.5 ″ 3.5 ″




2 TB 2 TB
bsd.services:user:vater:hw#rx300_s6_2709_0
27091
dora 141.56.51.124 dora.stura-dresden.de https://dora.stura-dresden.de:8006/ dora@stura.htw-dresden.de Fullforce 3 min 141.56.51.114 irmc.dora.stura-dresden.de https://irmc.dora.stura-dresden.de/ irmc.dora@stura.htw-dresden.de
M 2 1
X



3.5 ″ 3.5 ″




2 TB 2 TB
bsd.services:user:vater:hw#rx300_s6_2709_1
8529
lora 141.56.51.127 lora.stura-dresden.de https://lora.stura-dresden.de:8006/ lora@stura.htw-dresden.de  min 141.56.51.117 irmc.lora.stura-dresden.de https://irmc.lora.stura-dresden.de/ irmc.lora@stura.htw-dresden.de
M 2 1




3.5 ″ 3.5 ″




2 TB 2 TB
bsd.services:user:vater:hw#rx300_s6_8529
8
nora 141.56.51.128 nora.stura-dresden.de https://nora.stura-dresden.de:8006/ nora@stura.htw-dresden.de 2 min 141.56.51.118 irmc.nora.stura-dresden.de https://irmc.nora.stura-dresden.de/ irmc.nora@stura.htw-dresden.de
M 2 1
X



3.5 ″ 3.5 ″




2 TB 2 TB
StuRa (srs3008)
5100
zora 141.56.51.129 zora.stura-dresden.de https://zora.stura-dresden.de:8006/ zora@stura.htw-dresden.de  min 141.56.51.119 drac.cora.stura-dresden.de https://drac.cora.stura-dresden.de/ drac.zora@stura.htw-dresden.de
M
1
2
3.5 ″ 3.5 ″ 3.5 ″ 3.5 ″
2 TB 2 TB

2 TB 2 TB

bsd.services:user:vater:hw#dell_poweredge_r510

F2

Boot
alles andere
USB KEY: …
PCI SCSI: #0100 ID000 LN0 HGST H
PCI SCSI: #0100 ID004 LN0 HGST H

Installation Betriebssystem

[Bearbeiten]

Vorbereitung Installation Betreibssystem

[Bearbeiten]

Durchführung Installation Betreibssystem

[Bearbeiten]
Install Proxmox VE
Loading Proxmox Installer ...
Loading initial ramdisk ...
Proxmox startup
End User License Agreement (EULA)
I agree
Proxmox Virtualiszation Environment (PVE)
Options
Filesystem
ext4zfs (RAID1)
Disk Setup
Harddisk 0
/dev/sda (1863GB, HUS726020ALS214)
Harddisk 1
/dev/sdb (1863GB, HUS726020ALS214)
Harddisk 2
-- do not use --
Advanced Options
ashift
12
compress
on
checksum
on
copies
1
OK
Target
zfs (RAID1)
Next
Location and Time Zone selection
Country
Germany
Time zone
Europe/Berlin
Keyboard Layout
German
Next
Administration Password and E-Mail Address
Password
8
Confirm
8
E-Mail
siehe #einzelne Geräte
Next
Management Network Configuration
Management Interface
enp8s0f0 - … (igb)
Hostname (FQDN)
siehe #einzelne Geräte
IP Address
siehe #einzelne Geräte
Netmask
255.255.255.0
Gateway
141.56.51.254
DNS Server
141.56.1.1
Next
Installation successful!
Reboot

Nachbereitung Installation Betreibssystem

[Bearbeiten]
erste Aktualisierung
  • Update (WUI)
    • Refresh (WUI)
      Neustart (WUI)
    • Upgrade (WUI)
(optionales) ZFS anschauen
zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda2    ONLINE       0     0     0
	    sdb2    ONLINE       0     0     0

errors: No known data errors
zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             9.40G  1.75T   104K  /rpool
rpool/ROOT         919M  1.75T    96K  /rpool/ROOT
rpool/ROOT/pve-1   919M  1.75T   919M  /
rpool/data          96K  1.75T    96K  /rpool/data
rpool/swap        8.50G  1.75T    56K  -
(optionales) Anschauen der Partitionierung
fdisk -l /dev/sd{a,b}
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A3CA01C-D0CE-4750-A26A-C07C1541EF1D

Device          Start        End    Sectors  Size Type
/dev/sda1          34       2047       2014 1007K BIOS boot
/dev/sda2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sda9  3907012750 3907029134      16385    8M Solaris reserved 1


Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C0D3B0CA-C966-4B00-B367-EEDBD04872F7

Device          Start        End    Sectors  Size Type
/dev/sdb1          34       2047       2014 1007K BIOS boot
/dev/sdb2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012750 3907029134      16385    8M Solaris reserved 1
Sicherung des initialen Zustandes von PVE (mit der erfolgten Aktualisierung)

Erstellen eines Schnappschusses vom gesamten Pool

zfs snapshot -r rpool@fresh-installed-pve-and-updated

Im Übrigen kann überlegt werden, dass darüber hinaus auch eine Sicherung von /dev/sd{a,b}1 vorgenommen wird.

Anpassung der Quellen

[Bearbeiten]
Siehe auch
Server/Proxmox#sources.list
Ergänzung der Quelle pve-no-subscription
[Bearbeiten]

auf schnell!

echo 'deb http://download.proxmox.com/debian/pve stretch pve-no-subscription' > /etc/apt/sources.list.d/pve-download.list && apt update

cat /etc/apt/sources.list.d/pve-enterprise.list

cat: /etc/apt/sources.list.d/pve-enterprise.list: No such file or directory

$EDITOR /etc/apt/sources.list.d/pve-enterprise.list


cat /etc/apt/sources.list.d/pve-enterprise.list

deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
Entfernung der Quelle pve-enterprise
[Bearbeiten]

cat /etc/apt/sources.list.d/pve-enterprise.list

deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise

$EDITOR /etc/apt/sources.list.d/pve-enterprise.list


cat /etc/apt/sources.list.d/pve-enterprise.list

####deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise

Erstellung vom Cluster

[Bearbeiten]

optionales Begutachten für die Erstellung von einem Cluster

[Bearbeiten]
cora dora nora
less /etc/network/interfaces
auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.123
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

iface enp8s0f1 inet manual
auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.124
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

iface enp8s0f1 inet manual
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 141.56.51.128
        netmask 255.255.255.0
        gateway 141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp2s0f2 inet manual

iface enp2s0f3 inet manual

iface eno2 inet manual
less /etc/hosts
127.0.0.1 localhost.localdomain localhost
141.56.51.123 cora.stura-dresden.de cora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost
141.56.51.124 dora.stura-dresden.de dora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 localhost.localdomain localhost
141.56.51.128 nora.stura-dresden.de nora pvelocalhost

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

Erzeugung vom Cluster xora

[Bearbeiten]

auf einer der Server, die zum Cluster gehören sollen

vorgenommen auf #8 (nora)
(alternativ) grafische Oberfläche Kommandozeile
  • Datacenter -> Cluster -> Create Cluster
    Create Cluster
    Cluster Name
    xora
    Ring 0 Address
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
TASK OK

pvecm create xora

Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
less /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 1
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
pvecm status
Quorum information
------------------
Date:             Fri Mmm dd HH:MM:SS yyyy
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 141.56.51.128 (local)

Erstellung von einem internen Netz für das Cluster von Proxmox

[Bearbeiten]

frei gewählt verwenden wir 10.10.10.0/24

cora dora nora

grafische Oberfläche (mit notwendigen Neustart)

less /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.123
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.123
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp8s0f0 inet manual

iface ens4f0 inet manual

iface ens4f1 inet manual

iface ens4f2 inet manual

iface ens4f3 inet manual

auto enp8s0f1
iface enp8s0f1 inet static
        address  10.10.10.124
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.124
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports enp8s0f0
        bridge_stp off
        bridge_fd 0
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eno1 inet manual

iface enp2s0f0 inet manual

iface enp2s0f1 inet manual

iface enp2s0f2 inet manual

iface enp2s0f3 inet manual

auto eno2
iface eno2 inet static
        address  10.10.10.128
        netmask  255.255.255.0

auto vmbr0
iface vmbr0 inet static
        address  141.56.51.128
        netmask  255.255.255.0
        gateway  141.56.51.254
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0

Eintragung der anderen Nodes unabhängig von DNS

[Bearbeiten]
cora dora nora
less /etc/hosts

####    members of the cluster xora
10.10.10.123 cora.xora.stura-dresden.de cora.xora
10.10.10.124 dora.xora.stura-dresden.de dora.xora
10.10.10.128 nora.xora.stura-dresden.de nora.xora
# The following lines are desirable for IPv6 capable hosts

Hinzufügen von anderen Servern

[Bearbeiten]
cora dora nora
pvecm add nora.xora
Please enter superuser (root) password for 'nora.xora':
                                                       Password for root@nora.xora: ********
Etablishing API connection with host 'nora.xora'
The authenticity of host 'nora.xora' can't be established.
X509 SHA256 key fingerprint is F3:A9:2D:9E:D5:59:DA:AE:5E:76:71:1E:02:D9:49:B6:67:5C:40:B0:0C:C0:05:FF:C5:D7:62:37:00:D8:CA:DD.
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1538462961.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'cora' to cluster.
pvecm add nora.xora
Please enter superuser (root) password for 'nora.xora':
                                                       Password for root@nora.xora: ********
Etablishing API connection with host 'nora.xora'
The authenticity of host 'nora.xora' can't be established.
X509 SHA256 key fingerprint is F3:A9:2D:9E:D5:59:DA:AE:5E:76:71:1E:02:D9:49:B6:67:5C:40:B0:0C:C0:05:FF:C5:D7:62:37:00:D8:CA:DD.
Are you sure you want to continue connecting (yes/no)? yes
Login succeeded.
Request addition of this node
Join request OK, finishing setup locally
stopping pve-cluster service
backup old database to '/var/lib/pve-cluster/backup/config-1538462994.sql.gz'
waiting for quorum...OK
(re)generate node files
generate new node certificate
merge authorized SSH keys and known hosts
generated new node certificate, restart pveproxy and pvedaemon services
successfully added node 'dora' to cluster.
/etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: cora
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 141.56.51.123
  }
  node {
    name: dora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 141.56.51.124
  }
  node {
    name: nora
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 141.56.51.128
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 3
  interface {
    bindnetaddr: 141.56.51.128
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

(auch in Anlehnung an https://pve.proxmox.com/wiki/Separate_Cluster_Network)

/etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: cora
    nodeid: 3
    quorum_votes: 1
    ring0_addr: cora.xora
  }
  node {
    name: dora
    nodeid: 4
    quorum_votes: 1
    ring0_addr: dora.xora
  }
  node {
    name: nora
    nodeid: 8
    quorum_votes: 1
    ring0_addr: nora.xora
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: xora
  config_version: 4
  interface {
    bindnetaddr: nora.xora
    ringnumber: 0
  }
  ip_version: ipv4
  secauth: on
  version: 2
}

Prüfen vom Verbund

[Bearbeiten]
cora dora nora
pvecm status
Quorum information
------------------
Date:             Fri Mmm dd HH:MM:SS yyyy
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          0x00000008
Ring ID:          3/52
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   3
Highest expected: 3
Total votes:      3
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 10.10.10.123
0x00000004          1 10.10.10.124
0x00000008          1 10.10.10.128 (local)

Erstellen vom Cluster für Ceph

[Bearbeiten]

Erstellung von einem internen Netz für das Cluster von Ceph

[Bearbeiten]

frei gewählt verwenden wir 10.10.11.0/24

  • Netzwerkschnittstellen
    10.10.11.0/24
    10.10.11.123/24
    10.10.11.124/24
    10.10.11.128/24

Neustart (wegen Netzwerkschnittstellen)


Testweises Verbinden (zu den jeweils anderen beiden Servern im Cluster) per ssh

ssh root@10.10.11.123
ssh root@10.10.11.124
ssh root@10.10.11.128

Installation von Ceph

[Bearbeiten]
pveceph install

#Ergänzung der Quelle pve-no-subscription

pveceph install

Initialisieren von Ceph

[Bearbeiten]

gemäß doku 10.10.10.0/24, aber das wurde (versehentlich schon für die #Erstellung von einem internen Netz für das Cluster von Proxmox verwendet)

pveceph init --network 10.10.11.0/24

WUI

rados_connect failed - No such file or directory (500)
pveceph createmon
creating /etc/pve/priv/ceph.client.admin.keyring
monmaptool: monmap file /tmp/monmap
monmaptool: generated fsid 4571c8c1-89c2-44e5-8527-247470f74809
epoch 0
fsid 4571c8c1-89c2-44e5-8527-247470f74809
last_changed 2018-10-02 13:30:43.174771
created 2018-10-02 13:30:43.174771
0: 10.10.11.128:6789/0 mon.nora
monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@nora.service -> /lib/systemd/system/ceph-mon@.service.
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
INFO:ceph-create-keys:ceph-mon admin socket not ready yet.
INFO:ceph-create-keys:Key exists already: /etc/ceph/ceph.client.admin.keyring
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-nora'
creating keys for 'mgr.nora'
setting owner for directory
enabling service 'ceph-mgr@nora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@nora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@nora.service'
less /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
less /etc/pve/ceph.conf
[global]
         auth client required = cephx
         auth cluster required = cephx
         auth service required = cephx
         cluster network = 10.10.11.0/24
         fsid = 5df2a0f7-2362-488e-9c5a-4b9ed2a16bfe
         keyring = /etc/pve/priv/$cluster.$name.keyring
         mon allow pool delete = true
         osd journal size = 5120
         osd pool default min size = 2
         osd pool default size = 3
         public network = 10.10.11.0/24

[osd]
         keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.nora]
         host = nora
         mon addr = 10.10.11.128:6789

Hinzufügen von anderen Servern zum Pool von Ceph

[Bearbeiten]
  • nora -> Ceph -> Monitor -> Create
    Create Ceph Monitor/Manager
    Host
    cora
Task viewer: Ceph Monitor mon.cora - Create

WUI

Output
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@cora.service -> /lib/systemd/system/ceph-mon@.service.
INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:Talking to monitor...
exported keyring for client.admin
updated caps for client.admin
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-cora'
creating keys for 'mgr.cora'
setting owner for directory
enabling service 'ceph-mgr@cora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@cora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@cora.service'
TASK OK
  • nora -> Ceph -> Monitor -> Create
    Create Ceph Monitor/Manager
    Host
    dora
    Create
Task viewer: Ceph Monitor mon.cora - Create

WUI

Output
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@dora.service -> /lib/systemd/system/ceph-mon@.service.
INFO:ceph-create-keys:ceph-mon is not in quorum: u'synchronizing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'probing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:ceph-mon is not in quorum: u'electing'
INFO:ceph-create-keys:Talking to monitor...
exported keyring for client.admin
updated caps for client.admin
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
INFO:ceph-create-keys:Talking to monitor...
creating manager directory '/var/lib/ceph/mgr/ceph-dora'
creating keys for 'mgr.dora'
setting owner for directory
enabling service 'ceph-mgr@dora.service'
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@dora.service -> /lib/systemd/system/ceph-mgr@.service.
starting service 'ceph-mgr@dora.service'
TASK OK

Erstellung vom Pool für das Cluster von Ceph

[Bearbeiten]

WUI

  • nora -> Ceph -> Pools -> Create
    Create Ceph Monitor/Manager
    Name
    xora
    Size
    3
    Min. Size
    2
    Crush Rule
    replicated_rule
    pg_num
    64
    Add Storage
    X
    Create

Containerisierung

[Bearbeiten]

Proxmox CT

[Bearbeiten]

Proxmox CT Erstellung

[Bearbeiten]
Proxmox CT Erstellung alle
[Bearbeiten]
Create CT
General
Node
Hostname
(stage) software_zweck_organisation
(stage) ist beispielsweise test oder dev, wenn es nicht produktiv sein soll.
software ist beispielsweise plone oder openldap. (Software, nicht Dienst! also nicht cms oder ldap, oder www oder acc!)
zweck ist beispielsweise website-2010 oder nothilfe-2020.
organisation ist beispielsweise stura-htw-dresden oder kss-sachsen.
Unprivileged container
[X]
Nesting
[?]
Password
nach Lebensdauer
bei Projekten mindestens 8
Template
Storage
cephfs
Type
rdb
Template
Root Disk
Storage
storage
Disk size (GiB)
nach Bedarf
CPU
Cores
2, oder nach Bedarf
Die Verwendung von nur einen Prozessor oder mehr als zwei Prozessor ist zu begründen.
Memory
nach Bedarf
Memory (MiB)
nach Bedarf
Swap (MiB)
nach Bedarf
Die Größe soll der Größe, die bei Memory (MiB) verwendet wird, entsprechen. (Die Verwendung von mehr oder weniger Größe ist zu begründen.)
Die Größe soll höchstens die Hälfte der Größe, die bei (Root Disk ->) Disk size (GiB) verwendet wird, betragen.
Network
Bridge
vmbr1
IPv4
Static
IPv4/CIDR
141.56.51.321/24
Gateway (IPv4)
141.56.51.254
321 ist die "verwendbare" Adresse für IPv4, die unverzüglich bei Intern:Server#Verwendung von IP-Adressen einzutragen ist.
DNS
DNS domain
DNS servers
141.56.1.1
Confirm
Start after created
[ ]
Finish
Datacenter (cluster)
HA
Resources
Add
VM
110
110 ist die ID der Instanz innerhalb von Proxmox.
Group
HA_cluster
Add
Proxmox CT Erstellung TurnKey
[Bearbeiten]
Create CT
General
Unprivileged conatainer
[ ]
turnkey-init

Proxmox CT Verwaltung

[Bearbeiten]

Auflisten aller Dateien für die Konfiguration der Container auf dem jeweiligen node

ls /etc/pve/lxc/

Virtualisierung

[Bearbeiten]

Proxmox VM

[Bearbeiten]

Ausfallsicherheit

[Bearbeiten]

Ausfallsicherheit CT

[Bearbeiten]
ping 141.56.51.321
PING 141.56.51.321 (141.56.51.321) 56(84) bytes of data.
64 bytes from 141.56.51.321: icmp_seq=1 ttl=64 time=0.283 ms
64 bytes from 141.56.51.321: icmp_seq=2 ttl=64 time=0.213 ms
64 bytes from 141.56.51.321: icmp_seq=3 ttl=64 time=0.286 ms
From 141.56.51.456 icmp_seq=4 Destination Host Unreachable
From 141.56.51.456 icmp_seq=5 Destination Host Unreachable

From 141.56.51.456 icmp_seq=78 Destination Host Unreachable
From 141.56.51.456 icmp_seq=79 Destination Host Unreachable
64 bytes from 141.56.51.321: icmp_seq=80 ttl=64 time=2609 ms
64 bytes from 141.56.51.321: icmp_seq=81 ttl=64 time=1585 ms
64 bytes from 141.56.51.321: icmp_seq=82 ttl=64 time=561 ms
64 bytes from 141.56.51.321: icmp_seq=83 ttl=64 time=0.260 ms
64 bytes from 141.56.51.321: icmp_seq=84 ttl=64 time=0.295 ms
64 bytes from 141.56.51.157: icmp_seq=85 ttl=64 time=0.200 ms
64 bytes from 141.56.51.157: icmp_seq=86 ttl=64 time=0.274 ms

Ausfallsicherheit Fehler

[Bearbeiten]
Fehler bei der Ausfallsicherheit bei der Verwendung von ZFS, statt RDB
[Bearbeiten]
2020-04-29 09:14:46 starting migration of CT 110 to node 'n1' (10.1.0.31)
2020-04-29 09:14:46 found local volume 'local-zfs:subvol-110-disk-0' (in current VM config)
cannot open 'rpool/data/subvol-110-disk-0': dataset does not exist
usage:
	snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...
For the property list, run: zfs set|get
2020-04-29 09:14:46 ERROR: zfs error: For the delegated permission list, run: zfs allow|unallow
2020-04-29 09:14:46 aborting phase 1 - cleanup resources
2020-04-29 09:14:46 ERROR: found stale volume copy 'local-zfs:subvol-110-disk-0' on node 'n1'
2020-04-29 09:14:46 start final cleanup
2020-04-29 09:14:46 ERROR: migration aborted (duration 00:00:01): zfs error: For the delegated permission list, run: zfs allow|unallow
TASK ERROR: migration aborted

Anpassungen für Produktivebetrieb

[Bearbeiten]
  • Backup Limit auf 10 erhöht.
  • Vorbereitung für Plone 5 Umzug (101) Plone 5

Siehe auch

[Bearbeiten]