Building a Cluster of 2 Nodes with Proxmox VE 4
This method is an adaptation of https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster that avoids hanging at “Waiting for quorum…” during the “pvecm add” of the second node.
The problem occurs when the nodes have no multicast connectivity between them (as is the case with a cheap, non-configurable switch).
The cluster communication must therefore be configured as unicast; a multicast test with omping is sketched below.
On the first node (proxmox1, which appears as proxmox-miniitx in some outputs below), install omping and edit /etc/hosts:

aptitude install omping

/etc/hosts:

127.0.0.1 localhost.localdomain localhost
192.168.2.160 proxmox1.domain.tld proxmox1 pvelocalhost
192.168.2.150 proxmox2.domain.tld proxmox2

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
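With omping installed on both nodes, you can confirm that multicast is the culprit. A minimal sketch, assuming the two IPs from the hosts file above; run the same command on both nodes at the same time:

# run simultaneously on proxmox1 and proxmox2
omping -c 600 -i 1 -q 192.168.2.160 192.168.2.150

If the unicast statistics look fine but the multicast loss is near 100%, the switch is dropping multicast and the unicast (udpu) setup below is needed.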
root@proxmox1:~# pvecm create cluster-GuedeL
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
root@proxmox-miniitx:~# ls /etc/corosync/
authkey  corosync.conf
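As a quick sanity check (not part of the original write-up), pvecm can show that the one-node cluster is already up and quorate:

pvecm status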
Opening /etc/corosync/corosync.conf and /etc/pve/corosync.conf (e.g. with nano) shows the same content:
totem {
  version: 2
  secauth: on
  cluster_name: cluster-GuedeL
  config_version: 1
  ip_version: ipv4
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.2.160
  }
}

nodelist {
  node {
    ring0_addr: proxmox-miniitx
    name: proxmox-miniitx
    nodeid: 1
    quorum_votes: 1
  }
}

quorum {
  provider: corosync_votequorum
}

logging {
  to_syslog: yes
  debug: off
}
ls -l /etc/pve
Back up the original configuration and edit a working copy:

cp /etc/corosync/corosync.conf /etc/pve/corosync.conf.orig
cp /etc/corosync/corosync.conf /etc/pve/corosync.conf.modif
nano /etc/pve/corosync.conf.modif
Add “expected_votes: 1” and “two_node: 1” to the “quorum” section, “transport: udpu” to the “totem” section, and the second node to the “nodelist”; “config_version” is also incremented from 1 to 2. The result:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: proxmox-miniitx
    nodeid: 1
    quorum_votes: 1
    ring0_addr: proxmox-miniitx
  }
  node {
    name: proxmox-asrock
    nodeid: 2
    quorum_votes: 1
    ring0_addr: proxmox-asrock
  }
}

quorum {
  expected_votes: 1
  provider: corosync_votequorum
  two_node: 1
}

totem {
  cluster_name: cluster-GuedeL
  config_version: 2
  ip_version: ipv4
  secauth: on
  transport: udpu
  version: 2
  interface {
    bindnetaddr: 192.168.2.160
    ringnumber: 0
  }
}
Activate the modified configuration and reboot:

cp /etc/pve/corosync.conf.modif /etc/pve/corosync.conf
reboot
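After the reboot, it is worth confirming that the new transport is active (a quick check, assuming the paths used above):

grep transport /etc/corosync/corosync.conf
pvecm status

The grep should print “transport: udpu”, and thanks to “expected_votes: 1” the node should be quorate even while it is still alone.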
On the second node (proxmox2, also called proxmox-asrock), do the same preparation:

aptitude install omping

/etc/hosts:

127.0.0.1 localhost.localdomain localhost
192.168.2.150 proxmox2.domain.tld proxmox2 pvelocalhost
192.168.2.160 proxmox1.domain.tld proxmox1

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
Then join the cluster, pointing at the first node's IP:

pvecm add 192.168.2.160
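Once the join completes, the membership can be verified from either node (illustrative commands, not from the original write-up):

pvecm nodes
pvecm status

Both nodes should be listed and the cluster should be quorate.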
The method from https://forum.proxmox.com/threads/waiting-for-quorum-in-proxmox-ve-4-0-beta-2.23551/page-2 (post #27) did not work for me: I was still stuck at “Waiting for quorum…”.
To remove a node: shut it down, then run the following from another node that stays in the cluster:
pvecm delnode node_to_be_removed
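For example, with the names from this walkthrough (check the exact name first, since delnode must match what the cluster knows):

pvecm nodes
pvecm delnode proxmox-asrock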
Then, on the removed node, clean it up so it can run standalone again. Delete the SSH keys (this works if everything in the directory can be deleted) and the corosync authentication key:

rm /root/.ssh/*
rm /etc/corosync/authkey
service pve-cluster stop
pmxcfs -l
rm /etc/pve/corosync.conf
service pve-cluster stop
service pve-cluster start    ## some errors can occur!!
service pvedaemon restart
service pveproxy restart
service pvestatd restart
systemctl status pve-cluster.service    ## should be without errors now
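Optionally (an extra step not in the original write-up): after pvecm delnode, a directory for the removed node usually remains under /etc/pve/nodes/ on the surviving nodes; once you are sure nothing in it (e.g. VM configs) is still needed, it can be removed:

ls /etc/pve/nodes/
rm -r /etc/pve/nodes/node_to_be_removed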