Upgrade from 2 to 3: Difference between revisions

= Introduction =


Proxmox Backup Server 3 is based on Debian 12 Bookworm, a new major release, and introduces several new major features and changes.
You should plan the upgrade carefully, '''make and verify backups''' before beginning, and test extensively.
Depending on the existing configuration, several manual steps — including some downtime — may be required.
'''Note:''' A valid and tested backup is ''always'' required before starting the upgrade process.
You can test the backup beforehand, for example, in a (virtualized) test lab setup.
In case the system is customized and/or uses additional packages or any other third party repositories/packages, ensure those packages are also upgraded to and compatible with Debian Bookworm.
= In-place Upgrade =
== Prerequisites ==
* Perform these actions via SSH, a physical console or a remote management console like iKVM or IPMI.
** If you use SSH, you should use a terminal multiplexer (for example, tmux or screen) to ensure the upgrade can continue even if the SSH connection gets interrupted.
** Do not carry out the upgrade via the web UI console directly, as this will get interrupted during the upgrade.
* Upgrade to the latest version of Proxmox Backup Server 2.4; see the [[Roadmap#Release History|roadmap]] for potentially important changes in the stable release.
*: Use <code>apt update</code> and <code>apt dist-upgrade</code> (still with the Debian Bullseye repositories set up) to upgrade to the latest 2.4 version.
** Verify version:
**:<code>proxmox-backup-manager versions</code>
**:<code>proxmox-backup-server 2.4.2-2 running version: 2.4.2</code> (or higher)
** If you do not get any updates, check that the [https://pbs.proxmox.com/docs/installation.html#debian-package-repositories package repository] configuration is correct.
* Make a backup of <code>/etc/proxmox-backup</code> to ensure that in the worst case, any relevant configuration can be recovered:
tar czf "pbs2-etc-backup-$(date -I).tar.gz" -C "/etc" "proxmox-backup"
* Ensure that you have at least 5 GB free disk space on the root mount point:
df -h /
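The free-space prerequisite can also be checked in a script instead of reading the <code>df</code> output manually. A minimal sketch using only coreutils; the 5 GB threshold simply mirrors the recommendation above:

```shell
# Pre-flight check: at least 5 GB (= 5 * 1024 * 1024 KiB) free on the root
# mount point, as recommended above.
required_kib=$((5 * 1024 * 1024))
avail_kib=$(df --output=avail -k / | tail -n 1 | tr -d ' ')
if [ "$avail_kib" -ge "$required_kib" ]; then
    echo "OK: ${avail_kib} KiB free on /"
else
    echo "WARNING: only ${avail_kib} KiB free on /, at least ${required_kib} KiB recommended"
fi
```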
In-place upgrades are carried out via APT. Basic familiarity with APT is required to proceed with this upgrade mechanism.
=== Installed alongside Proxmox VE ===
For systems with Proxmox VE and Proxmox Backup Server installed together, you should also read the [https://pve.proxmox.com/wiki/Upgrade_from_7_to_8 Proxmox VE upgrade from 7 to 8 how-to] carefully.
You can upgrade both in one go, by syncing the steps in which the APT repositories are changed.
== Actions Step-by-Step ==
Before starting the upgrade process, ensure that your Proxmox Backup Server 2.x host is up-to-date.
=== Optional: Enable Maintenance Mode ===
Enabling the read-only [https://pbs.proxmox.com/docs/maintenance.html#maintenance-mode maintenance mode] on all datastores ensures that no new backup can be started during the upgrade, while keeping existing ones available to read.
The read-only maintenance mode allows you to enforce a known and stable datastore state and reduces the I/O and general load of the Proxmox Backup Server during the upgrade, making it faster.
You can enable and disable the maintenance mode either via the web UI, in the Options tab of each datastore menu entry, or using the command line interface (CLI):
# enable read-only mode (replace DATASTORE-ID with actual value)
proxmox-backup-manager datastore update DATASTORE-ID --maintenance-mode read-only
# disable read-only mode
proxmox-backup-manager datastore update DATASTORE-ID --delete maintenance-mode
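If you have many datastores, you can generate the per-datastore command with a small loop. A minimal sketch; the IDs <code>store1</code> and <code>store2</code> are placeholders, on a real host you would substitute your own datastore names (for example, as listed by <code>proxmox-backup-manager datastore list</code>) and drop the <code>echo</code> to actually run the commands:

```shell
# Print the read-only maintenance-mode command for each datastore.
# The datastore IDs below are placeholders -- replace them with your own.
for store in store1 store2; do
    echo "proxmox-backup-manager datastore update $store --maintenance-mode read-only"
done
```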
=== Update the Configured APT Repositories ===
First, make sure that the system is using the latest Proxmox Backup Server 2.4 packages:
apt update
apt dist-upgrade
proxmox-backup-manager versions
The last command should report version <code>2.4-2</code> or newer.
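Such a version check can be done without eyeballing the output, using <code>sort -V</code> (version sort). A sketch with the version string hard-coded for illustration; on a real host you would parse it from the <code>proxmox-backup-manager versions</code> output:

```shell
# Compare a reported version against the required minimum using sort -V.
# 'current' is hard-coded here for illustration only.
current="2.4.2"
minimum="2.4.2"
lowest=$(printf '%s\n%s\n' "$minimum" "$current" | sort -V | head -n 1)
if [ "$lowest" = "$minimum" ]; then
    echo "version $current is recent enough"
else
    echo "version $current is too old, upgrade to the latest 2.4 first"
fi
```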
==== Update Debian Base Repositories to Bookworm ====
Update all repository entries to Bookworm:
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
Ensure that no Debian Bullseye specific repositories remain; if needed, you can use the <code>#</code> symbol at the start of the respective line to comment these repositories out.
Check all files in <code>/etc/apt/sources.list.d/</code> and <code>/etc/apt/sources.list</code>, and see [[Package_Repositories]] for the correct Proxmox Backup Server 3 / Debian Bookworm repositories.
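To preview what the substitution will do before editing files in place, you can pipe a sample line through the same <code>sed</code> expression; without <code>-i</code>, <code>sed</code> only prints to stdout and leaves all files untouched:

```shell
# Dry run of the bullseye -> bookworm substitution on a sample repository line.
printf 'deb http://deb.debian.org/debian bullseye main contrib\n' \
    | sed 's/bullseye/bookworm/g'
# -> deb http://deb.debian.org/debian bookworm main contrib
```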
==== Add the Proxmox Backup Server 3 Package Repository ====
Update the enterprise repository to Bookworm:
echo "deb https://enterprise.proxmox.com/debian/pbs bookworm pbs-enterprise" > /etc/apt/sources.list.d/pbs-enterprise.list
For the no-subscription repository, see [https://pbs.proxmox.com/docs/installation.html#debian-package-repositories Package Repositories].
Rather than commenting out/removing the Proxmox Backup Server 2 repositories, as was previously mentioned, you could also run the following command to update to the Proxmox Backup Server 3 repositories:
sed -i -e 's/bullseye/bookworm/g' /etc/apt/sources.list.d/*.list
Make sure to check that all the <code>.list</code> files you added in /etc/apt/sources.list.d/ got switched over to Bookworm correctly.
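A quick way to verify this is to search all APT source files for the old suite name; a minimal sketch (commented-out lines are ignored by the pattern):

```shell
# List any active APT source entries still referencing Bullseye; no output
# from grep means everything was switched over.
if grep -rn '^[^#]*bullseye' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null; then
    echo "The entries above still point to Bullseye -- fix them before 'apt update'."
else
    echo "No active Bullseye entries left."
fi
```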
Finally, update the repositories' package index:
apt update
Note that this command does not start the upgrade itself; it only refreshes the package index and must not return any errors.
=== Upgrade the System ===
Note that the time required for finishing this step heavily depends on the system's performance, especially the root filesystem's IOPS and bandwidth.
A system with slow spinning disks can take 60 minutes or more, while a high-performance server with SSD storage can finish the dist-upgrade in about 5 minutes.
{{Note|While the packages are being upgraded, certain operations and requests to the API might fail (for example, logging in as a system user in the <code>pam</code> realm).|reminder}}
To get the initial set of upgraded packages, run:
apt update
apt dist-upgrade
During the above step, you will be asked to approve changes to configuration files whose defaults were updated by their respective packages.
It's suggested to check the difference for each file in question and choose the answer most appropriate for your setup.
Common configuration files with changes, and the recommended choices are:
* <code>/etc/issue</code> -> Proxmox Backup Server will auto-generate this file on boot, and it has only cosmetic effects on the login console.
*: Using the default "No" (keep your currently-installed version) is safe here.
* <code>/etc/lvm/lvm.conf</code> -> Changes relevant for Proxmox Backup Server will be updated, and a newer config version might be useful.
*: If you did not make extra changes yourself and are unsure, it's suggested to choose "Yes" (install the package maintainer's version) here.
* <code>/etc/ssh/sshd_config</code> -> If you have not changed this file manually, the only differences should be a replacement of <code>ChallengeResponseAuthentication no</code> with <code>KbdInteractiveAuthentication no</code> and some irrelevant changes in comments (lines starting with <code>#</code>).
*: If this is the case, both options are safe, though we would recommend installing the package maintainer's version in order to move away from the deprecated <code>ChallengeResponseAuthentication</code> option. If there are other changes, we suggest inspecting them closely and deciding accordingly.
* <code>/etc/default/grub</code> -> Here you may want to take special care, as this is normally only asked for if you changed it manually, e.g., for adding some kernel command line option.
*: It's recommended to check the difference for any relevant change; note that changes in comments (lines starting with <code>#</code>) are not relevant.
*: If unsure, we suggest selecting "No" (keep your currently-installed version).
=== Check Result & Reboot Into Updated Kernel ===
If the command exits successfully, you can reboot the system in order to enable the new kernel.
systemctl reboot
Please note that you should reboot even if you already used the 6.2 kernel previously, through the opt-in package on Proxmox Backup Server 2.
== Following the Proxmox Backup Server upgrade ==
Check that the statuses of the main services are <code>active (running)</code>:
systemctl status proxmox-backup-proxy.service proxmox-backup.service
=== Optional: Disable Maintenance Mode Again ===
If you enabled the maintenance mode before the upgrade, don't forget to disable it again.
You can do it via the web UI, in the Options tab of each datastore menu entry, or using the command line interface (CLI):
# disable read-only mode (replace DATASTORE-ID with actual value)
proxmox-backup-manager datastore update DATASTORE-ID --delete maintenance-mode
= Potential Issues =
== General ==
As a Debian-based distribution, Proxmox Backup Server is affected by most issues and changes affecting Debian.
Thus, ensure that you read the [https://www.debian.org/releases/bookworm/amd64/release-notes/ch-information.en.html upgrade-specific issues for Debian Bookworm], for example the [https://www.debian.org/releases/bookworm/amd64/release-notes/ch-information.en.html#changes-to-packages-that-set-the-system-clock transition from classic NTP to NTPsec].
Please also check the known issue list from the Proxmox Backup Server 3.0 changelog: https://pbs.proxmox.com/wiki/Roadmap#3.0-known-issues
== Older Hardware and New 6.2 Kernel ==
Compatibility of old hardware (released 10 or more years ago) is not as thoroughly tested as that of more recent hardware.
For old hardware, we highly recommend testing the compatibility of Proxmox Backup Server 3 with identical (or at least similar) hardware before upgrading any production machines.
We will expand this section with potential pitfalls and workarounds once they arise.
== Network ==
=== Network Interface Name Change ===
Because the new kernel may recognize more features of some hardware (for example, virtual functions), and interface names are often derived from the PCI(e) address, some NICs may change their name after the upgrade, in which case the network configuration needs to be adapted.
In general, it's recommended to either have an independent remote connection to the Proxmox Backup Server's host console, for example, through IPMI or iKVM, or physical access for managing the server even when its own network doesn't come up after a major upgrade or network change.
=== Network Fails on Boot Due to NTPsec Hook ===
Some users reported that, with ntpsec installed, their network failed to come up cleanly on boot after the upgrade, but worked when triggered manually (e.g., using <code>ifreload -a</code>).
We're still investigating the definitive root cause, but it seems that the <code>/etc/network/if-up.d/ntpsec-ntpdate</code> hook script might hang on some hardware, albeit due to changes not directly related to ntpsec.
Since the chrony NTP daemon has been the default for new installations since Proxmox Backup Server 2.0, the simplest solution might be switching to it via <code>apt install chrony</code>.
== Systemd-Boot (for ZFS on Root and UEFI Systems Only) ==
Systems booting via UEFI from a ZFS on root setup should install the <code>systemd-boot</code> package after the upgrade. You will get a warning from the <code>pve7to8</code> script after the upgrade if your system is affected; in all other cases you can safely ignore this point.
The <code>systemd-boot</code> package was split out from the <code>systemd</code> package for Debian Bookworm based releases. It won't get installed automatically upon upgrade from Proxmox Backup Server 2.4, as it can cause trouble on systems not booting from UEFI with a ZFS on root setup created by the installer.
Systems which have ZFS on root and boot in UEFI mode will need to install it manually if they need to initialize a new ESP (see the output of <code>proxmox-boot-tool status</code> and the [https://pve.proxmox.com/pve-docs/chapter-sysadmin.html#sysboot_determine_bootloader_used relevant documentation]).
Note that the system remains bootable even without the package installed.
It is not recommended to install <code>systemd-boot</code> on systems which don't need it, as it would replace <code>grub</code> as the bootloader in its <code>postinst</code> script.
[[Category: Upgrade]]

Revision as of 09:07, 28 June 2023
