Originally queued for v6.4-rc7, and it has since also landed in some
stable trees, but not yet in a (released) Ubuntu tag – so backport it
already.
Link: https://forum.proxmox.com/threads/133104/post-590457
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Avoids regressions where some code, e.g., ZFS, falsely thinks it
cannot use certain CPU features like AVX1.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
A user of ours reported an issue with p2p thunderbolt-net w.r.t. IPv6
and a failure to re-establish the connection after a reboot of a peer
node in the forum [0], and then relayed it upstream, so let's
cherry-pick those two patches to our 6.2. Especially the IPv6 one
seems straightforward, and the other one makes the driver actually
spec-conformant and should only improve things.
[0]: https://forum.proxmox.com/threads/133104/
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The mailing list thread [0] (found by Friedrich, many thanks!) leading
up to this patch sounds very similar to issues users reported in the
community forum [1] and enterprise support channel, where a VM would
be stuck for no discernible reason with all vCPU threads spinning.
[0]: https://lore.kernel.org/all/f023d927-52aa-7e08-2ee5-59a2fbc65953@gameservers.com/T/#u
[1]: https://forum.proxmox.com/threads/127459/
Suggested-by: Friedrich Weber <f.weber@proxmox.com>
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
While there is no actual issue, users are still nervous about the
faulty logging [0]. It might take a while until the fix comes in via
upstream, so just pick it up manually.
[0]: https://forum.proxmox.com/threads/130628/post-583864
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
There were several reports about issues related to igc and tx
timeouts [0-2], and while the issue couldn't be reproduced locally,
the hope is that this fix Friedrich found will resolve it for the
affected users. The kernel versions in the reports match the point
when 9b275176270e ("igc: Add ndo_tx_timeout support"), i.e. the
commit fixed by this one, landed.
[0]: https://forum.proxmox.com/threads/130935/
[1]: https://forum.proxmox.com/threads/130415/#post-580064
[2]: https://forum.proxmox.com/threads/132138/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
by cherry-picking the relevant commits from launchpad/lunar [0]
(the relevant commits are based on k.o/stable commits for this).
Minimally tested by booting my (Ryzen) machine with this kernel and
skimming through dmesg after boot.
[0] git://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/lunar
Signed-off-by: Stoiko Ivanov <s.ivanov@proxmox.com>
the actual fix is the microcode update, but this is a stop-gap (with
a performance penalty) that sets a chicken bit on affected CPUs which
do not have new enough microcode loaded, disabling some features.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
we got quite a few reports for this (e.g., Bugzilla or [0]), albeit in
non-enterprise setups, as those cheap NVMes just don't bother holding
up basic principles...
[0]: https://forum.proxmox.com/threads/128738/#post-567249
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Fixes live-migration & snapshot-rollback of VMs with a restricted
CPU type (e.g., qemu64) from our 5.15-based kernel (default in Proxmox
VE 7.4) to the 6.2 (and future newer) kernel of Proxmox VE 8.0.
Prior to (upstream kernel) commit ad856280ddea ("x86/kvm/fpu: Limit
guest user_xfeatures to supported bits of XCR0"), the host's PKRU bit
could leak into the guest state, which caused trouble when migrating
between hosts with different CPUs, i.e., where the source supported it
but the target did not, resulting in a general protection fault when
the guest tried to use a PKRU-related instruction after the migration.
But the fix, while welcome, causes a temporarily out-of-sync state
when migrating such a VM from a kernel without the fix to one with it,
as it throws off KVM: the guest's CPUID and most of its state do not
report XSAVE, and thus no xfeatures at all, but PKRU and the related
state are set as enabled, causing the vCPU to spin at 100% forever
without making any progress.
The fix could live at either of two sites, QEMU or the kernel. I
chose the kernel, as we have all the info there for a targeted
heuristic, so we don't have to adapt QEMU and qemu-server, the latter
even on both sides (migration source and target).
Still, a short summary of the possible fixes and their drawbacks:
* on QEMU-side either
- clearing the PKRU state in the migration saved state: would be
rather complicated to implement, as the vCPU is initialised way
before we have the saved xfeature state available to check what we'd
need to do. Plus, user space only gets a memory blob from the
KVM_GET_XSAVE2 ioctl that it passes to the KVM_SET_XSAVE ioctl;
there are no ABI guarantees, and while the struct seems stable from
5.15 to 6.5-rc1, that doesn't have to hold for future kernels, so
this is off the table.
- enforcing that the CPUID reports PKU support even if it normally
wouldn't: while this works (tested by hard-coding it as a POC), it is
a) not really nice and b) needs some interaction from qemu-server to
enable this flag, as otherwise we have no good info to decide when
it's OK to do so, which means we'd need to adapt both PVE 7's and 8's
qemu-server and also pve-qemu; workable, but not optimal
* on Kernel/KVM-side we can hook into the set-XSAVE ioctl specific to
the KVM subsystem, which already reduces the chance of regressions in
all other places. There we have access to the union/struct
definitions of the saved state and thus can safely cast to them.
We also have access to the vCPU's CPUID capabilities, meaning we can
check whether the XCR0 (first XSAVE Control Register) reports support
for the PKRU feature, and if it does *NOT* but the saved xfeatures
register from XSAVE *DOES* report it, we can safely assume that this
combination is due to a migration from an older, leaky kernel – and
clear the bit in the xfeature register before restoring it to the
guest vCPU KVM state, avoiding the confusing situation that made the
vCPU spin at 100% (see the rough sketch below).
This should be safe to do, as the guest vCPU CPUID never reported
support for the PKRU feature, and it's also a relatively niche and
newish feature.
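For illustration, a minimal standalone sketch of this heuristic
follows; the function and variable names are made up for the example,
and the actual patch hooks into KVM's XSAVE-restore path instead of
being a separate program:

  #include <stdint.h>
  #include <stdio.h>

  /* PKRU is XSAVE state component 9, so bit 9 in XCR0/xfeatures */
  #define XFEATURE_MASK_PKRU  (1ULL << 9)

  /*
   * Conceptually run when user space restores XSAVE state via the
   * KVM_SET_XSAVE ioctl: if the guest's CPUID-derived XCR0 does not
   * include PKRU, but the saved xfeatures bitmap claims PKRU state,
   * the latter can only stem from a migration off an older kernel
   * that leaked the host's PKRU bit, so clear it before restoring.
   */
  static uint64_t sanitize_xfeatures(uint64_t guest_supported_xcr0,
                                     uint64_t saved_xfeatures)
  {
          if (!(guest_supported_xcr0 & XFEATURE_MASK_PKRU) &&
              (saved_xfeatures & XFEATURE_MASK_PKRU))
                  saved_xfeatures &= ~XFEATURE_MASK_PKRU;

          return saved_xfeatures;
  }

  int main(void)
  {
          /* qemu64-like guest: only x87 + SSE (bits 0 and 1) advertised... */
          uint64_t guest_xcr0 = 0x3;
          /* ...but the state from a leaky 5.15 host also has PKRU set */
          uint64_t xfeatures  = 0x3 | XFEATURE_MASK_PKRU;

          printf("restored xfeatures: %#llx\n",
                 (unsigned long long)sanitize_xfeatures(guest_xcr0, xfeatures));
          return 0;
  }

With the PKRU bit cleared, the restored state again matches what the
guest's CPUID advertises, so KVM no longer ends up in the inconsistent
state that caused the 100% spin.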
If it gains us something, we can drop this patch again in the future
Proxmox VE 9 major release, but then we'd have to ensure that VMs that
were started before PVE 8 cannot be directly live-migrated to the
release that includes that change; so we should rather only drop it if
the maintenance burden gets high.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Should fix compat with SRIOV based Nvidia vGPU until they switch over
to using the vfio-pci-core framework instead of MDEV.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
Several people reported IO-related issues since kernel 6.1.6 [0].
Things got better with 6.1.10, but apparently the issues are not fully
resolved (e.g. [1]).
I ran into an issue with PBS backup of a VM with passed-through disks
(error with 6.1.6, hang with 6.1.10+) and found that the issue did not
occur anymore with v6.3-rc1. Bisecting what fixed the issue led to the
commit in this patch. The hope is that it fixes some other issues too.
The commit has a CC-stable tag for 5.15+, but judging from the absence
of user reports, it was much less likely to trigger before 6.1.x (it's
not clear what x is, because of the other issue in 6.1.6). The commit
says it depends on 613b14884b85 ("block: handle bio_split_to_limits()
NULL return") which is already present as a3f1c82e0413 ("block:
handle bio_split_to_limits() NULL return") in the Ubuntu tree.
[0]: https://forum.proxmox.com/threads/119483/post-530365
[1]: https://forum.proxmox.com/threads/119483/post-537991
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
so that plain Debian crda + wireless-regdb can work. Alternatively, we
could disable CRDA and bake the regdb directly into the kernel using
the CFG80211_INTERNAL_REGDB Kconfig option.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
but allow discarding BTF information when loading modules, so that upgrades
which are otherwise ABI-compatible still work. This allows using BTF
information when it matches and is available, while degrading gracefully if
the currently running kernel is not identical to the one the module was built
for. In case of a mismatch, the kernel will log a warning when loading the
module, for example:
Jan 30 13:57:58 test kernel: BPF: type_id=184 bits_offset=4096
Jan 30 13:57:58 test kernel: BPF:
Jan 30 13:57:58 test kernel: BPF: Invalid name
Jan 30 13:57:58 test kernel: BPF:
Jan 30 13:57:58 test kernel: failed to validate module [bonding] BTF: -22
Signed-off-by: Fabian Grünbichler <f.gruenbichler@proxmox.com>
this was actually intended for the stable 5.15 branch; it is already
included in 5.19.
This reverts commit 198fde3a16.
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>
The following issue reported on the community forum [0] is likely
fixed by this.
In my case, loading a VM snapshot that originally was taken on an
Intel CPU on my AMD-based host often caused problems in other VMs. In
particular, it often led to CPU stalls, and sometimes clock jumps far
into the future. With this backport applied, everything seems to run
smoothly even after loading the "bad" snapshot 10 times.
The backport of upstream commit 11d39e8cc43e ("KVM: SVM: fix tsc
scaling cache logic") consisted of dropping the parts for nested TSC
scaling, which is not yet present in our kernel, renaming the constant
for the default ratio, and some context changes.
[0] https://forum.proxmox.com/threads/112756/
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
the hio driver already got removed by Ubuntu in jammy, but they forgot
to remove this instance too, failing the clean build target; my patch
got accepted, but was then forgotten when doing the same in kinetic, so
here we go again
Signed-off-by: Thomas Lamprecht <t.lamprecht@proxmox.com>