**Building a cluster of 2 nodes with Proxmox VE 4**

This method is an adaptation of [[https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster]] in order to avoid getting stuck at "Waiting for quorum..." during the "pvecm add" of the second node. \\ The problem occurs when the nodes don't have a multicast connection (as is the case with a cheap, non-configurable switch). \\ Therefore the connection must be configured as unicast.

===== On Proxmox1: =====

  * Install "omping" (optional, useful for testing multicast):
<code>
aptitude install omping
</code>
  * Configure /etc/hosts as follows (replacing "proxmox1" and "proxmox2" by the real host names, "domain.tld" by the real domain, and using the real IPs), then reboot:
<code>
127.0.0.1     localhost.localdomain localhost
192.168.2.160 proxmox1.domain.tld proxmox1 pvelocalhost
192.168.2.150 proxmox2.domain.tld proxmox2

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
</code>
  * Build the cluster:
<code>
root@proxmox1:~# pvecm create cluster-GuedeL
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
</code>
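Since the same /etc/hosts edit has to be repeated on every node, it can be scripted. A minimal sketch, assuming a hypothetical ''add_host_entry'' helper (not part of Proxmox) and demonstrated on a scratch file rather than the real /etc/hosts:

```shell
# Hypothetical helper: append a hosts entry only if the short name
# is not already present, so repeated runs stay idempotent.
add_host_entry() {
  local file="$1" ip="$2" fqdn="$3" short="$4"
  grep -qw "$short" "$file" || printf '%s\t%s %s\n' "$ip" "$fqdn" "$short" >>"$file"
}

# Demo on a scratch file instead of the real /etc/hosts:
demo=$(mktemp)
printf '127.0.0.1\tlocalhost.localdomain localhost\n' >"$demo"
add_host_entry "$demo" 192.168.2.160 proxmox1.domain.tld proxmox1
add_host_entry "$demo" 192.168.2.150 proxmox2.domain.tld proxmox2
add_host_entry "$demo" 192.168.2.150 proxmox2.domain.tld proxmox2  # skipped, already there
cat "$demo"
```

On a real node you would point the helper at /etc/hosts instead of the scratch file.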
  * Check that /etc/corosync isn't empty any more:
<code>
root@proxmox-miniitx:~# ls /etc/corosync/
authkey  corosync.conf
</code>
    Opening /etc/corosync/corosync.conf and /etc/pve/corosync.conf (e.g. with nano) shows the same content:
<code>
totem {
  version: 2
  secauth: on
  cluster_name: cluster-GuedeL
  config_version: 1
  ip_version: ipv4
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.2.160
  }
}

nodelist {
  node {
    ring0_addr: proxmox-miniitx
    name: proxmox-miniitx
    nodeid: 1
    quorum_votes: 1
  }
}

quorum {
  provider: corosync_votequorum
}

logging {
  to_syslog: yes
  debug: off
}
</code>
  * Configure unicast transport (see [[https://forum.proxmox.com/threads/proxmox4-cluster-2-nodes-problem-with-quorum.25445/]]):
    * Check that /etc/pve/corosync.conf is writable (see [[https://www.guedel.eu/dokuwiki/doku.php?id=Welcome:Proxmox:Cluster#Delete%20the%20cluster%20config%20of%20the%20removed%20node]]):
<code>
ls -l /etc/pve
</code>
    * Save a copy of the original file:
<code>
cp /etc/corosync/corosync.conf /etc/pve/corosync.conf.orig
</code>
    * Create a modified file:
<code>
cp /etc/corosync/corosync.conf /etc/pve/corosync.conf.modif
nano /etc/pve/corosync.conf.modif
</code>
      and add "expected_votes: 1" and "two_node: 1" to the "quorum" section and "transport: udpu" to the "totem" section, in order to obtain:
<code>
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: proxmox-miniitx
    nodeid: 1
    quorum_votes: 1
    ring0_addr: proxmox-miniitx
  }
  node {
    name: proxmox-asrock
    nodeid: 2
    quorum_votes: 1
    ring0_addr: proxmox-asrock
  }
}

quorum {
  expected_votes: 1
  provider: corosync_votequorum
  two_node: 1
}

totem {
  cluster_name: cluster-GuedeL
  config_version: 2
  ip_version: ipv4
  secauth: on
  transport: udpu
  version: 2
  interface {
    bindnetaddr: 192.168.2.160
    ringnumber: 0
  }
}
</code>
      Don't forget to increment the value of "config_version" in the "totem" section each time /etc/pve/corosync.conf is modified.
    * Activate the modified file:
<code>
cp /etc/pve/corosync.conf.modif /etc/pve/corosync.conf
</code>
      and reboot.
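The "config_version" bump can be automated with sed. A throwaway sketch on a temp file (the file content is a trimmed stand-in for the real cluster config, not the full file):

```shell
# Demo config with only the relevant line (stand-in for /etc/pve/corosync.conf):
conf=$(mktemp)
cat >"$conf" <<'EOF'
totem {
  cluster_name: cluster-GuedeL
  config_version: 2
  transport: udpu
}
EOF

# Read the current value, then rewrite the line with value + 1:
cur=$(sed -n 's/.*config_version: *\([0-9]*\).*/\1/p' "$conf")
sed -i "s/config_version: *$cur/config_version: $((cur + 1))/" "$conf"
grep config_version "$conf"   # now shows config_version: 3
```

On a real node you would run the two sed commands against /etc/pve/corosync.conf itself.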
===== On Proxmox2: =====

  * Install "omping" (optional):
<code>
aptitude install omping
</code>
  * Configure /etc/hosts as follows (replacing "proxmox1" and "proxmox2" by the real host names, "domain.tld" by the real domain, and using the real IPs), then reboot:
<code>
127.0.0.1     localhost.localdomain localhost
192.168.2.150 proxmox2.domain.tld proxmox2 pvelocalhost
192.168.2.160 proxmox1.domain.tld proxmox1

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
</code>
  * Add this machine to the cluster (replace with the IP of Proxmox1):
<code>
pvecm add 192.168.2.160
</code>

===== Troubleshooting / general infos =====

==== Links ====

  * Proxmox cluster: [[https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster]]
  * Multicast (for Proxmox V3): [[https://pve.proxmox.com/wiki/Multicast_notes#Use_unicast_instead_of_multicast_.28if_all_else_fails.29]]
  * Cluster file system: [[http://pve.proxmox.com/wiki/Proxmox_Cluster_file_system_%28pmxcfs%29]]
  * Manpage of corosync.conf: [[http://manpages.ubuntu.com/manpages/maverick/man5/corosync.conf.5.html]]

==== Configure broadcast instead of multicast ====

This method ([[https://forum.proxmox.com/threads/waiting-for-quorum-in-proxmox-ve-4-0-beta-2.23551/page-2]], post #27) didn't work for me: I still got "waiting for quorum...".

==== Remove a node from the cluster ====

Shut down the node to be removed, then run from another node remaining in the cluster:
<code>
pvecm delnode node_to_be_removed
</code>

==== Delete the cluster config of the removed node ====

  * Delete the ssh keys and known hosts from /root/.ssh:
<code>
rm /root/.ssh/*
</code>
    This works if everything in there can be deleted.
  * Delete the corosync key:
<code>
rm /etc/corosync/authkey
</code>
  * Delete /etc/pve/corosync.conf:
    * if /etc/pve/corosync.conf is writable, simply delete it
    * if /etc/pve/corosync.conf is not writable ([[http://pve.proxmox.com/wiki/Proxmox_Cluster_file_system_%28pmxcfs%29]]):
<code>
service pve-cluster stop
pmxcfs -l
rm /etc/pve/corosync.conf
service pve-cluster stop
service pve-cluster start   ## some errors can occur!!
</code>
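The cleanup steps above can be collected into one script and rehearsed before touching a real node. A sketch; the mktemp prefix and the fake files are assumptions for a safe dry run, and on a real node ROOT would be empty so the paths resolve to the real filesystem:

```shell
# Scratch prefix so the sequence can be rehearsed safely;
# on a real node set ROOT="" and run as root.
ROOT=$(mktemp -d)

# Fake the leftovers a removed node would still carry (demo only):
mkdir -p "$ROOT/root/.ssh" "$ROOT/etc/corosync" "$ROOT/etc/pve"
touch "$ROOT/root/.ssh/known_hosts" "$ROOT/etc/corosync/authkey" "$ROOT/etc/pve/corosync.conf"

rm -f "$ROOT"/root/.ssh/*            # old ssh keys and known hosts
rm -f "$ROOT/etc/corosync/authkey"   # corosync key
rm -f "$ROOT/etc/pve/corosync.conf"  # only works while the file is writable
```

If /etc/pve/corosync.conf is not writable, the pmxcfs local-mode detour shown above is still needed before the last step.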
<code>
service pvedaemon restart
service pveproxy restart
service pvestatd restart
systemctl status pve-cluster.service   ## should be without errors now
</code>

Modifying or deleting /etc/corosync/corosync.conf seems to have no influence on /etc/pve/corosync.conf.
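The three restarts can also be run as one fail-fast loop. A sketch for a dry run; the systemctl stub is only there so the loop can be rehearsed off a Proxmox node and must be deleted before real use:

```shell
# Stub so the loop can be dry-run anywhere; delete this line on a real node.
systemctl() { echo "systemctl $*"; }

log=$(
  for svc in pvedaemon pveproxy pvestatd; do
    systemctl restart "$svc" || exit 1   # stop at the first failed restart
  done
  systemctl status pve-cluster.service   # should be without errors now
)
echo "$log"
```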