Difficulty: Hard
Restoring a Proxmox from its snapshots (zfs file system)

Source: https://www.youtube.com/watch?v=6ayd2NHkBXk&t=931s :-X

Configuration:

Proxmox 5, running on only 2 mirrored 240 GB SSDs (ZFS RAID1 pool).
In this case Proxmox runs on the dataset rpool/ROOT/pve-1 and the VMs are on datasets rpool/data/VM#######.
Restoring Proxmox therefore means restoring “pve-1” and restoring the VMs.

Preparatory tasks:

  • stop the VMs
  • if you can, take a recursive snapshot of the entire pool “rpool” to capture the current state of the OS and of the VMs, and send it e.g. to a FreeNAS if a USB disk does not offer enough storage capacity (a quick verification sketch follows below the commands). From the Proxmox:
    # zfs snapshot -r rpool@complete
    # zfs send -Rpv rpool@complete | ssh root@FreeNAS.domain.tld zfs recv -vF pool/backup/Proxmox 
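
    To check that everything arrived, the received snapshots can be listed on the FreeNAS (just a sketch, using the target path pool/backup/Proxmox from the command above):
    root@FreeNAS $ zfs list -t snapshot -r pool/backup/Proxmox    ### every dataset of rpool should show an @complete snapshot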

Restore:

Starting point:

  • A freshly installed new Proxmox with ZFS RAID1
  • The Proxmox installation USB stick.

The Proxmox OS and the VMs can be restored independently of each other.
As the restore is done from a USB device containing the snapshots, I think the easiest way is to restore only the OS first and the VMs afterwards, once the system is running again. In that case a simple USB stick is sufficient.

Step 1:

Getting the snapshot of rpool/ROOT/pve-1 onto the USB stick:

  • plug the stick into the FreeNAS, create a “restore” pool on it and send the snapshot onto it (a sketch of creating the pool follows after this list):
  • root@FreeNAS $ zfs send -pv pool/backup/Proxmox/rpool/ROOT/pve-1@complete | zfs recv restore/pve-1
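
The creation of the “restore” pool itself is not shown above. A minimal sketch, assuming the USB stick shows up as /dev/da1 on the FreeNAS (the device name is only an assumption, check it first; the command wipes the stick):

    root@FreeNAS $ zpool create -f restore /dev/da1    ### /dev/da1 is an assumed device name, all data on the stick is lost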

Step 2:

  • Plug in the restore USB stick.
  • Then start a normal installation of Proxmox with the installation stick.
  • When the screen with the terms of use is displayed, press Ctrl-Alt-F1 to switch to the shell and press Ctrl-C to stop the installer.
    From this state, there is enough OS in live mode to manage ZFS. This trick is magic, isn't it??? 8-)
    Note: the keyboard has the US layout!
  •  $ zpool import

    shows the pool recreated during the new install and the restore pool on the USB stick.

  •  $ zpool import -f rpool
    $ zfs list

    shows the datasets created during the fresh installation of Proxmox, present on the RAID1 pool.
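
    Roughly, that listing should contain something like this (a sketch only; exact names and sizes depend on the Proxmox version and install options):
    $ zfs list -o name
    NAME
    rpool
    rpool/ROOT
    rpool/ROOT/pve-1
    rpool/data
    rpool/swap         ### may not exist, depending on the install options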

Step 3:

  • The next step is to free up the dataset name “rpool/ROOT/pve-1” and the mountpoint “/” for the data to be restored:
    $ zfs rename rpool/ROOT/pve-1 rpool/ROOT/pve-2
    $ zfs get mountpoint rpool/ROOT/pve-2
    NAME              PROPERTY    VALUE      SOURCE
    rpool/ROOT/pve-2  mountpoint  /          local      ### this confirms that rpool/ROOT/pve-2 is mounted on "/"
    $ zfs set mountpoint=/rpool/ROOT/pve-2 rpool/ROOT/pve-2 ### or the mountpoint you want
    $ zfs get mountpoint rpool/ROOT/pve-2
    NAME              PROPERTY    VALUE              SOURCE
    rpool/ROOT/pve-2  mountpoint  /rpool/ROOT/pve-2  local          ### => OK 
  • import the pool “restore”.
    $ zpool import restore
  • Have a look at the datasets and check that the snapshot for restoration is present:
    $ zfs list
    $ zfs list -t snap
  • Now we copy the data from the “restore” pool into a newly created rpool/ROOT/pve-1 and set its mountpoint to “/”:
    $ zfs send -pv restore/pve-1@complete | zfs recv -dvF rpool/ROOT    ### -d recreates it as rpool/ROOT/pve-1


    The transfer of data should be visible.

  • When this is over:
    $ zfs set mountpoint=/ rpool/ROOT/pve-1     ##### It is possible that “/” is already mounted because Proxmox has already done the mounting automatically.
    $ zfs get mountpoint rpool/ROOT/pve-1  ## will confirm 
  • Remove the restore stick:
    $ zpool export restore
  • Have a look and reboot:
    $ zfs list
    $ exit

Step 4:

I had some minor issues at the reboot:

  • device (= the “old” dataset for pve-1) not found, but the boot process did not stop there. In case of problems, use the “Rescue Boot” function of the installation USB stick.
  • zfs: the first boot stops because the import of the pool has to be forced by hand (“-f”) the first time, since it was last mounted on another system (= the temporary OS used for the restore); see the sketch after this list.
  • nfs: nfs was not working and there were some error messages during the boot. Another reboot solved it. 8-)
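
For the forced import, this is roughly what it looks like from the emergency shell that the interrupted boot drops into (a sketch only; the exact prompt depends on the setup):

    # zpool import -f rpool    ### force the import, the pool was last used by the temporary restore OS
    # exit                     ### leave the emergency shell, the boot then continues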

After the OS runs:

# update-grub2

and reboot to get rid of the error messages at boot.
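
Once the restored system runs cleanly, the root dataset left over from the fresh installation (renamed to pve-2 in step 3) is no longer needed and can optionally be removed:

# zfs destroy rpool/ROOT/pve-2    ### optional clean-up, only after the restored system has been verified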

Restoring the VMs

Restore the disks of the VMs:
From the FreeNAS:

# zfs send -pv pool/backup/Proxmox/rpool/data/vm-100-disk-0@complete | ssh root@proxmox.domain.tld zfs recv rpool/data/vm-100-disk-0 

and so on…
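
If there are several VM disks, the individual sends can be wrapped in a small loop on the FreeNAS (a sketch only, assuming a Bourne-style shell and that all VM disks sit directly under pool/backup/Proxmox/rpool/data):

# for ds in $(zfs list -H -o name -r pool/backup/Proxmox/rpool/data | tail -n +2); do
>   zfs send -pv "${ds}@complete" | ssh root@proxmox.domain.tld zfs recv "rpool/data/${ds##*/}"
> done

The ${ds##*/} part strips the path so the disks keep their original names on the Proxmox side.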
