commit 21732fd224978ebddf4bded7cb99663d2be1e732
Author: Greg Kroah-Hartman
Date:   Tue Aug 8 19:56:37 2023 +0200

    Linux 5.4.252

    Signed-off-by: Greg Kroah-Hartman

commit 9399ea1ce48162091fb88b162e40a9e3fbb103e5
Author: Greg Kroah-Hartman
Date:   Tue Aug 8 19:20:48 2023 +0200

    x86: fix backwards merge of GDS/SRSO bit

    Stable-tree-only change.

    Due to the way the GDS and SRSO patches flowed into the stable tree,
    there was a 50% chance that the merge would pick the wrong bit value
    for GDS and SRSO. Of course, I lost that bet, and chose the opposite
    of what Linus chose in commit 64094e7e3118 ("Merge tag
    'gds-for-linus-2023-08-01' of
    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip")

    Fix this up by switching the values to match what is now in Linus's
    tree, as that is the correct value to mirror.

    Signed-off-by: Greg Kroah-Hartman

commit bc7b9a6c2ca42b116b0f24dbaa52b5a07d96d1d6
Author: Ross Lagerwall
Date:   Thu Aug 3 08:41:22 2023 +0200

    xen/netback: Fix buffer overrun triggered by unusual packet

    commit 534fc31d09b706a16d83533e16b5dc855caf7576 upstream.

    It is possible that a guest can send a packet that contains a head +
    18 slots and yet has a len <= XEN_NETBACK_TX_COPY_LEN. This causes
    nr_slots to underflow in xenvif_get_requests(), which then causes the
    subsequent loop's termination condition to be wrong, causing a buffer
    overrun of queue->tx_map_ops.

    Rework the code to account for the extra frag_overflow slots.

    This is CVE-2023-34319 / XSA-432.

    Fixes: ad7f402ae4f4 ("xen/netback: Ensure protocol headers don't fall in the non-linear area")
    Signed-off-by: Ross Lagerwall
    Reviewed-by: Paul Durrant
    Reviewed-by: Wei Liu
    Signed-off-by: Juergen Gross
    Signed-off-by: Greg Kroah-Hartman
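[ editor's note: a minimal, self-contained C sketch of the unsigned
  underflow described above -- illustrative only, not the actual
  xenvif_get_requests() code; the variable names are made up: ]

    #include <stdio.h>

    int main(void)
    {
        unsigned int nr_slots = 1;  /* slots left after the packet head */

        nr_slots -= 2;              /* wraps to UINT_MAX instead of -1 */
        printf("nr_slots wrapped to %u\n", nr_slots);

        /*
         * A loop of the form "while (nr_slots--) ops[i++] = ..." would
         * now iterate ~4 billion times and overrun any fixed-size
         * array, which is the shape of the tx_map_ops overrun; the fix
         * accounts for the frag_overflow slots before subtracting.
         */
        return 0;
    }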
commit 43ed6f79b3e7cb1587ee61686bf8882d254a481c
Author: Kim Phillips
Date:   Tue Jan 10 16:46:37 2023 -0600

    x86/cpu, kvm: Add support for CPUID_80000021_EAX

    commit 8415a74852d7c24795007ee9862d25feb519007c upstream.

    Add support for CPUID leaf 80000021, EAX. The majority of the
    features will be used in the kernel and thus a separate leaf is
    appropriate.

    Include KVM's reverse_cpuid entry because features are used by VM
    guests, too.

    [ bp: Massage commit message. ]

    Signed-off-by: Kim Phillips
    Signed-off-by: Borislav Petkov (AMD)
    Acked-by: Sean Christopherson
    Link: https://lore.kernel.org/r/20230124163319.2277355-2-kim.phillips@amd.com
    [bwh: Backported to 6.1: adjust context]
    Signed-off-by: Ben Hutchings
    Signed-off-by: Greg Kroah-Hartman

commit 1f0618bb24563aaa51473f23863b1e09bbf2ff2c
Author: Borislav Petkov (AMD)
Date:   Sat Jul 8 10:21:35 2023 +0200

    x86/bugs: Increase the x86 bugs vector size to two u32s

    Upstream commit: 0e52740ffd10c6c316837c6c128f460f1aaba1ea

    There was never a doubt in my mind that they would not fit into a
    single u32 eventually.

    Signed-off-by: Borislav Petkov (AMD)
    Signed-off-by: Greg Kroah-Hartman

commit 08ba48152a8a866c72a9b844d2274ce9daa846e1
Author: Arnaldo Carvalho de Melo
Date:   Tue Mar 2 17:16:17 2021 -0300

    tools headers cpufeatures: Sync with the kernel sources

    commit 1a9bcadd0058a3e81c1beca48e5e08dee9446a01 upstream.

    To pick the changes from:

      3b9c723ed7cfa4e1 ("KVM: SVM: Add support for SVM instruction address check change")
      b85a0425d8056f3b ("Enumerate AVX Vector Neural Network instructions")
      fb35d30fe5b06cc2 ("x86/cpufeatures: Assign dedicated feature word for CPUID_0x8000001F[EAX]")

    This only causes these perf files to be rebuilt:

      CC       /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
      CC       /tmp/build/perf/bench/mem-memset-x86-64-asm.o

    And addresses this perf build warning:

      Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
      diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h

    Cc: Borislav Petkov
    Cc: Kyung Min Park
    Cc: Paolo Bonzini
    Cc: Sean Christopherson
    Cc: Wei Huang
    Signed-off-by: Arnaldo Carvalho de Melo
    Signed-off-by: Greg Kroah-Hartman

commit 694b40dcfb41a691ce56ce164d8039b05071fc41
Author: Sean Christopherson
Date:   Fri Jan 22 12:40:46 2021 -0800

    x86/cpufeatures: Assign dedicated feature word for CPUID_0x8000001F[EAX]

    commit fb35d30fe5b06cc24444f0405da8fbe0be5330d1 upstream.

    Collect the scattered SME/SEV related feature flags into a dedicated
    word. There are now five recognized features in
    CPUID.0x8000001F.EAX, with at least one more on the horizon
    (SEV-SNP). Using a dedicated word allows KVM to use its automagic
    CPUID adjustment logic when reporting the set of supported features
    to userspace.

    No functional change intended.

    Signed-off-by: Sean Christopherson
    Signed-off-by: Borislav Petkov
    Reviewed-by: Brijesh Singh
    Link: https://lkml.kernel.org/r/20210122204047.2860075-2-seanjc@google.com
    Signed-off-by: Greg Kroah-Hartman

commit 4fa849d4af687540e1c4d8309bea15ec613279d1
Author: Tom Lendacky
Date:   Thu Dec 10 11:09:36 2020 -0600

    x86/cpu: Add VM page flush MSR availability as a CPUID feature

    commit 69372cf01290b9587d2cee8fbe161d75d55c3adc upstream.

    On systems that do not have hardware enforced cache coherency between
    encrypted and unencrypted mappings of the same physical page, the
    hypervisor can use the VM page flush MSR (0xc001011e) to flush the
    cache contents of an SEV guest page. When a small number of pages are
    being flushed, this can be used in place of issuing a WBINVD across
    all CPUs.

    CPUID 0x8000001f_eax[2] is used to determine if the VM page flush MSR
    is available. Add a CPUID feature to indicate it is supported and
    define the MSR.

    Signed-off-by: Tom Lendacky
    Message-Id:
    Signed-off-by: Paolo Bonzini
    Signed-off-by: Greg Kroah-Hartman
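[ editor's note: a hedged userspace sketch of querying CPUID leaf
  0x8000001F, EAX -- the leaf the dedicated feature word above maps,
  with bit 2 being the VM page flush MSR bit. Assumes GCC/Clang on
  x86, which provide <cpuid.h>: ]

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* __get_cpuid() returns 0 if the leaf is out of range */
        if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 0x8000001F not supported");
            return 1;
        }
        printf("CPUID 0x8000001F EAX = 0x%08x\n", eax);
        printf("VM page flush MSR (bit 2): %s\n",
               (eax & (1u << 2)) ? "yes" : "no");
        return 0;
    }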
commit 998eec0666074375eafc70920eeb5988ab2b2026
Author: Tom Lendacky
Date:   Mon Sep 7 15:15:06 2020 +0200

    x86/cpufeatures: Add SEV-ES CPU feature

    commit 360e7c5c4ca4fd8e627781ed42f95d58bc3bb732 upstream.

    Add CPU feature detection for Secure Encrypted Virtualization with
    Encrypted State. This feature enhances SEV by also encrypting the
    guest register state, making it inaccessible to the hypervisor.

    Signed-off-by: Tom Lendacky
    Signed-off-by: Joerg Roedel
    Signed-off-by: Borislav Petkov
    Link: https://lkml.kernel.org/r/20200907131613.12703-6-joro@8bytes.org
    Signed-off-by: Greg Kroah-Hartman

commit 3e21d8b0f3a9c0acc8d4f0d6829efbdad4157830
Author: Dave Hansen
Date:   Tue Aug 1 07:31:07 2023 -0700

    Documentation/x86: Fix backwards on/off logic about YMM support

    commit 1b0fc0345f2852ffe54fb9ae0e12e2ee69ad6a20 upstream

    These options clearly turn *off* XSAVE YMM support. Correct the typo.

    Reported-by: Ben Hutchings
    Fixes: 553a5c03e90a ("x86/speculation: Add force option to GDS mitigation")
    Signed-off-by: Dave Hansen
    Signed-off-by: Greg Kroah-Hartman

commit ad7670dd65cbd981ead04a4516621af348fdd70d
Author: Peter Zijlstra
Date:   Tue Oct 25 21:38:25 2022 +0200

    x86/mm: Initialize text poking earlier

    commit 5b93a83649c7cba3a15eb7e8959b250841acb1b1 upstream.

    Move poking_init() up a bunch; specifically move it right after
    mm_init() which is right before ftrace_init().

    This will allow simplifying ftrace text poking which currently has a
    bunch of exceptions for early boot.

    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20221025201057.881703081@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit 979366f5c2aa6dc7415f30002b771c671afcb59d
Author: Peter Zijlstra
Date:   Tue Oct 25 21:38:18 2022 +0200

    mm: Move mm_cachep initialization to mm_init()

    commit af80602799681c78f14fbe20b6185a56020dedee upstream.

    In order to allow using mm_alloc() much earlier, move initializing
    mm_cachep into mm_init().

    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20221025201057.751153381@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit 3d1b8cfdd0c9e2d941f96b09738472ecf91bc599
Author: Peter Zijlstra
Date:   Tue Oct 25 21:38:21 2022 +0200

    x86/mm: Use mm_alloc() in poking_init()

    commit 3f4c8211d982099be693be9aa7d6fc4607dff290 upstream.

    Instead of duplicating init_mm, allocate a fresh mm. The advantage is
    that mm_alloc() has much simpler dependencies. Additionally it makes
    more conceptual sense: init_mm has no (and must not have) user state
    to duplicate.

    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20221025201057.816175235@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit ddcf05fe8850110fe521845b995dd1867d7517b7
Author: Juergen Gross
Date:   Mon Jan 9 16:09:22 2023 +0100

    x86/mm: fix poking_init() for Xen PV guests

    commit 26ce6ec364f18d2915923bc05784084e54a5c4cc upstream.

    Commit 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()") broke
    the kernel for running as Xen PV guest. It seems as if the new
    address space is never activated before being used, resulting in Xen
    refusing to accept the new CR3 value (the PGD isn't pinned).

    Fix that by adding the now missing call of paravirt_arch_dup_mmap()
    to poking_init(). That call was previously done by
    dup_mm()->dup_mmap(), and it is a NOP for all cases but Xen PV, where
    it is just doing the pinning of the PGD.

    Fixes: 3f4c8211d982 ("x86/mm: Use mm_alloc() in poking_init()")
    Signed-off-by: Juergen Gross
    Signed-off-by: Peter Zijlstra (Intel)
    Link: https://lkml.kernel.org/r/20230109150922.10578-1-jgross@suse.com
    Signed-off-by: Greg Kroah-Hartman
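[ editor's note: a toy, runnable C illustration of the paravirt-hook
  shape the fix relies on -- a default no-op that only the Xen PV
  backend overrides. All names here are invented for the sketch; only
  the pattern matches the kernel's pv_ops: ]

    #include <stdio.h>

    struct mmu_hooks {
        void (*dup_mmap)(void);  /* run when a new mm is set up */
    };

    static void nop_dup_mmap(void)    { /* bare metal: nothing to do */ }
    static void xen_pv_dup_mmap(void) { puts("pinning the new PGD"); }

    static struct mmu_hooks hooks = { .dup_mmap = nop_dup_mmap };

    int main(void)
    {
        hooks.dup_mmap = xen_pv_dup_mmap; /* as if booted as a Xen PV guest */
        hooks.dup_mmap();                 /* the call poking_init() was missing */
        return 0;
    }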
commit 3f8968f1f0ad259591a0f050372c4abe3aa9b067
Author: Juergen Gross
Date:   Mon Jul 3 15:00:32 2023 +0200

    x86/xen: Fix secondary processors' FPU initialization

    commit fe3e0a13e597c1c8617814bf9b42ab732db5c26e upstream.

    Moving the call of fpu__init_cpu() from cpu_init() to
    start_secondary() broke Xen PV guests, as those don't call
    start_secondary() for APs.

    Call fpu__init_cpu() in Xen's cpu_bringup(), which is the Xen PV
    replacement of start_secondary().

    Fixes: b81fac906a8f ("x86/fpu: Move FPU initialization into arch_cpu_finalize_init()")
    Signed-off-by: Juergen Gross
    Signed-off-by: Borislav Petkov (AMD)
    Reviewed-by: Boris Ostrovsky
    Acked-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230703130032.22916-1-jgross@suse.com
    Signed-off-by: Greg Kroah-Hartman

commit e56c1e0f91343007dbeb5e05e0b09f5c9e8a5560
Author: Daniel Sneddon
Date:   Wed Jul 12 19:43:14 2023 -0700

    KVM: Add GDS_NO support to KVM

    commit 81ac7e5d741742d650b4ed6186c4826c1a0631a7 upstream

    Gather Data Sampling (GDS) is a transient execution attack using
    gather instructions from the AVX2 and AVX512 extensions. This attack
    allows malicious code to infer data that was previously stored in
    vector registers. Systems that are not vulnerable to GDS will set the
    GDS_NO bit of the IA32_ARCH_CAPABILITIES MSR. This is useful for VM
    guests that may think they are on vulnerable systems that are, in
    fact, not affected. Guests that are running on affected hosts where
    the mitigation is enabled are protected as if they were running on an
    unaffected system.

    On all hosts that are not affected or that are mitigated, set the
    GDS_NO bit.

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
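[ editor's note: a hedged sketch of checking GDS_NO from userspace via
  the msr driver. The MSR index 0x10a (IA32_ARCH_CAPABILITIES) is
  architectural; the GDS_NO bit position (26) is taken from
  arch/x86/include/asm/msr-index.h at the time of these patches and
  should be treated as an assumption to verify against your headers.
  Needs root and the msr module loaded: ]

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MSR_IA32_ARCH_CAPABILITIES 0x10a

    int main(void)
    {
        uint64_t val;
        int fd = open("/dev/cpu/0/msr", O_RDONLY);

        if (fd < 0 ||
            pread(fd, &val, sizeof(val), MSR_IA32_ARCH_CAPABILITIES)
                != sizeof(val)) {
            perror("reading IA32_ARCH_CAPABILITIES");
            return 1;
        }
        /* bit 26 = GDS_NO, per msr-index.h (assumption, see note) */
        printf("GDS_NO: %s\n", (val >> 26) & 1 ? "set" : "clear");
        close(fd);
        return 0;
    }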
commit ed56430ab2532bf5c91e91e9b4eb9d094d0f36fa
Author: Daniel Sneddon
Date:   Wed Jul 12 19:43:13 2023 -0700

    x86/speculation: Add Kconfig option for GDS

    commit 53cf5797f114ba2bd86d23a862302119848eff19 upstream

    Gather Data Sampling (GDS) is mitigated in microcode. However, on
    systems that haven't received the updated microcode, disabling AVX
    can act as a mitigation. Add a Kconfig option that uses the microcode
    mitigation if available and disables AVX otherwise. Setting this
    option has no effect on systems not affected by GDS. This is the
    equivalent of setting gather_data_sampling=force.

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit e35c6579436504435c04f6908c364a9a37e353e8
Author: Daniel Sneddon
Date:   Wed Jul 12 19:43:12 2023 -0700

    x86/speculation: Add force option to GDS mitigation

    commit 553a5c03e90a6087e88f8ff878335ef0621536fb upstream

    The Gather Data Sampling (GDS) vulnerability allows malicious
    software to infer stale data previously stored in vector registers.
    This may include sensitive data such as cryptographic keys. GDS is
    mitigated in microcode, and systems with up-to-date microcode are
    protected by default. However, any affected system that is running
    with older microcode will still be vulnerable to GDS attacks.

    Since the gather instructions used by the attacker are part of the
    AVX2 and AVX512 extensions, disabling these extensions prevents
    gather instructions from being executed, thereby mitigating GDS.
    Disabling AVX2 alone would be sufficient, but we don't have that
    granularity: clearing XCR0[2] disables all of AVX, with no option to
    disable just AVX2.

    Add a kernel parameter gather_data_sampling=force that will enable
    the microcode mitigation if available, otherwise it will disable AVX
    on affected systems. This option will be ignored if mitigations=off
    is given on the command line.

    This is a *big* hammer. It is known to break buggy userspace that
    uses incomplete, buggy AVX enumeration. Unfortunately, such userspace
    does exist in the wild:

      https://www.mail-archive.com/bug-coreutils@gnu.org/msg33046.html

    [ dhansen: add some more ominous warnings about disabling AVX ]

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit f68f9f2df68e246548bdc1a2279c55f98c4ca473
Author: Daniel Sneddon
Date:   Wed Jul 12 19:43:11 2023 -0700

    x86/speculation: Add Gather Data Sampling mitigation

    commit 8974eb588283b7d44a7c91fa09fcbaf380339f3a upstream

    Gather Data Sampling (GDS) is a hardware vulnerability which allows
    unprivileged speculative access to data which was previously stored
    in vector registers.

    Intel processors that support AVX2 and AVX512 have gather
    instructions that fetch non-contiguous data elements from memory. On
    vulnerable hardware, when a gather instruction is transiently
    executed and encounters a fault, stale data from architectural or
    internal vector registers may get transiently stored to the
    destination vector register, allowing an attacker to infer the stale
    data using typical side channel techniques like cache timing attacks.

    This mitigation is different from many earlier ones for two reasons.
    First, it is enabled by default and a bit must be set to *DISABLE*
    it. This is the opposite of normal mitigation polarity. This means
    GDS can be mitigated simply by updating microcode and leaving the new
    control bit alone.

    Second, GDS has a "lock" bit. This lock bit is there because the
    mitigation affects the hardware security features KeyLocker and SGX.
    It needs to be enabled and *STAY* enabled for these features to be
    mitigated against GDS.

    The mitigation is enabled in the microcode by default. Disable it by
    setting gather_data_sampling=off or by disabling all mitigations with
    mitigations=off. The mitigation status can be checked by reading:

      /sys/devices/system/cpu/vulnerabilities/gather_data_sampling

    Signed-off-by: Daniel Sneddon
    Signed-off-by: Dave Hansen
    Acked-by: Josh Poimboeuf
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
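[ editor's note: the sysfs file named above can be checked with a
  trivial, runnable C sketch: ]

    #include <stdio.h>

    int main(void)
    {
        char buf[128];
        FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/"
                        "gather_data_sampling", "r");

        if (!f || !fgets(buf, sizeof(buf), f)) {
            perror("gather_data_sampling");
            return 1;
        }
        /* prints e.g. "Mitigation: Microcode" or "Not affected" */
        printf("GDS status: %s", buf);
        fclose(f);
        return 0;
    }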
commit 6e60443668978131a442df485db3deccb31d5651
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:46 2023 +0200

    x86/fpu: Move FPU initialization into arch_cpu_finalize_init()

    commit b81fac906a8f9e682e513ddd95697ec7a20878d4 upstream

    Initializing the FPU during the early boot process is a pointless
    exercise. Early boot is convoluted and fragile enough. Nothing
    requires that the FPU is set up early.

    It has to be initialized before fork_init() because the task_struct
    size depends on the FPU register buffer size.

    Move the initialization to arch_cpu_finalize_init() which is the
    perfect place to do so.

    No functional change.

    This allows removing quite a bit of the custom early command line
    parsing, but that's the subject of the next installment.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.902376621@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 2ee37a46aa13a8445064a71ba8bdc284831ad0ef
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:45 2023 +0200

    x86/fpu: Mark init functions __init

    commit 1703db2b90c91b2eb2d699519fc505fe431dde0e upstream

    No point in keeping them around.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.841685728@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 77fe8150579c052c560a7076705f5bbd5d82d31e
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:43 2023 +0200

    x86/fpu: Remove cpuinfo argument from init functions

    commit 1f34bb2a24643e0087652d81078e4f616562738d upstream

    Nothing in the call chain requires it.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.783704297@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 95356fff6fee88ff4624a95ba4c1c0960780becb
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:41 2023 +0200

    init, x86: Move mem_encrypt_init() into arch_cpu_finalize_init()

    commit 439e17576eb47f26b78c5bbc72e344d4206d2327 upstream

    Invoke the X86ism mem_encrypt_init() from X86
    arch_cpu_finalize_init() and remove the weak fallback from the core
    code.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.670360645@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 7aa2cec22e28a1ddda18ae3cb475fb30f4430a05
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:39 2023 +0200

    init: Invoke arch_cpu_finalize_init() earlier

    commit 9df9d2f0471b4c4702670380b8d8a45b40b23a7d upstream

    X86 is reworking the boot process so that initializations which are
    not required during early boot can be moved into the late boot
    process and out of the fragile and restricted initial boot phase.

    arch_cpu_finalize_init() is the obvious place to do such
    initializations, but arch_cpu_finalize_init() is invoked too late in
    start_kernel(), e.g. for initializing the FPU completely. fork_init()
    requires that the FPU is initialized as the size of task_struct on
    X86 depends on the size of the required FPU register buffer.

    Fortunately none of the init calls between calibrate_delay() and
    arch_cpu_finalize_init() is relevant for the functionality of
    arch_cpu_finalize_init().

    Invoke it right after calibrate_delay() where everything which is
    relevant for arch_cpu_finalize_init() has been set up already.

    No functional change intended.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Rick Edgecombe
    Link: https://lore.kernel.org/r/20230613224545.612182854@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
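[ editor's note: a toy, runnable sketch of the ordering constraint
  described above -- the FPU buffer size must be known before the
  task_struct slab is sized, so the finalize hook has to run before
  fork_init(). The sizes and bodies are invented for the sketch: ]

    #include <stdio.h>

    static size_t fpu_buf_size;  /* established by FPU initialization */

    static void arch_cpu_finalize_init(void)
    {
        fpu_buf_size = 2048;  /* stand-in for the real FPU probing */
    }

    static void fork_init(void)
    {
        /* task_struct size depends on the FPU register buffer size */
        size_t task_struct_size = 1024 + fpu_buf_size;
        printf("task_struct slab object: %zu bytes\n", task_struct_size);
    }

    int main(void)
    {
        arch_cpu_finalize_init();  /* must run first ... */
        fork_init();               /* ... or this sizes the slab wrong */
        return 0;
    }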
commit 944d5c3ffa4b3411749fa32d3cbeb56bb5475cd3
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:38 2023 +0200

    init: Remove check_bugs() leftovers

    commit 61235b24b9cb37c13fcad5b9596d59a1afdcec30 upstream

    Everything is converted over to arch_cpu_finalize_init(). Remove the
    check_bugs() leftovers including the empty stubs in asm-generic,
    alpha, parisc, powerpc and xtensa.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Richard Henderson
    Link: https://lore.kernel.org/r/20230613224545.553215951@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit a03ef708788eba340a68f5991c7f2962a391a8a5
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:36 2023 +0200

    um/cpu: Switch to arch_cpu_finalize_init()

    commit 9349b5cd0908f8afe95529fc7a8cbb1417df9b0c upstream

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Richard Weinberger
    Link: https://lore.kernel.org/r/20230613224545.493148694@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 98c3955e145f3ef889d3a91629a82c9e099d03d6
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:35 2023 +0200

    sparc/cpu: Switch to arch_cpu_finalize_init()

    commit 44ade508e3bfac45ae97864587de29eb1a881ec0 upstream

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Sam Ravnborg
    Link: https://lore.kernel.org/r/20230613224545.431995857@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 568d68fc1dd48af90ed4528ae718167722e265df
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:33 2023 +0200

    sh/cpu: Switch to arch_cpu_finalize_init()

    commit 01eb454e9bfe593f320ecbc9aaec60bf87cd453d upstream

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.371697797@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 18cd611a3eaa697b1c03015c9b71997fb75533b3
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:32 2023 +0200

    mips/cpu: Switch to arch_cpu_finalize_init()

    commit 7f066a22fe353a827a402ee2835e81f045b1574d upstream

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.312438573@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 2febb4a730045daa8d817f99db1ff5176e50c422
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:30 2023 +0200

    m68k/cpu: Switch to arch_cpu_finalize_init()

    commit 9ceecc2589b9d7cef6b321339ed8de484eac4b20 upstream

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Acked-by: Geert Uytterhoeven
    Link: https://lore.kernel.org/r/20230613224545.254342916@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 1f4494ea77e870b6556dcfbb865a890fdf311eeb
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:27 2023 +0200

    ia64/cpu: Switch to arch_cpu_finalize_init()

    commit 6c38e3005621800263f117fb00d6787a76e16de7 upstream

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.137045745@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit 73719e89e32b52050d8c19bca32be59307ac584b
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:25 2023 +0200

    ARM: cpu: Switch to arch_cpu_finalize_init()

    commit ee31bb0524a2e7c99b03f50249a411cc1eaa411f upstream

    check_bugs() is about to be phased out. Switch over to the new
    arch_cpu_finalize_init() implementation.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224545.078124882@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
commit 1743bc756b6bd0f3f453f80bc0c3a8ee1bc47728
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:24 2023 +0200

    x86/cpu: Switch to arch_cpu_finalize_init()

    commit 7c7077a72674402654f3291354720cd73cdf649e upstream

    check_bugs() is a dumping ground for finalizing the CPU bringup. Only
    parts of it have to do with actual CPU bugs.

    Split it apart into arch_cpu_finalize_init() and
    cpu_select_mitigations(). Fix up the bogus 32bit comments while at
    it.

    No functional change.

    Signed-off-by: Thomas Gleixner
    Reviewed-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230613224545.019583869@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman

commit afe787cf253b4e0d179ee5c01593650a839cf2b4
Author: Thomas Gleixner
Date:   Wed Jun 14 01:39:22 2023 +0200

    init: Provide arch_cpu_finalize_init()

    commit 7725acaa4f0c04fbefb0e0d342635b967bb7d414 upstream

    check_bugs() has become a dumping ground for all sorts of activities
    to finalize the CPU initialization before running the rest of the
    init code. Most are empty, a few do actual bug checks, some do
    alternative patching and some cobble a CPU advertisement string
    together...

    Aside from that, the current implementation requires duplicated
    function declarations and mostly empty header files for them.

    Provide a new function arch_cpu_finalize_init(). Provide a generic
    declaration if CONFIG_ARCH_HAS_CPU_FINALIZE_INIT is selected and a
    stub inline otherwise.

    This requires a temporary #ifdef in start_kernel() which will be
    removed along with check_bugs() once the architectures are converted
    over.

    Signed-off-by: Thomas Gleixner
    Link: https://lore.kernel.org/r/20230613224544.957805717@linutronix.de
    Signed-off-by: Daniel Sneddon
    Signed-off-by: Greg Kroah-Hartman
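[ editor's note: a hedged, self-contained C sketch of the
  declaration-or-stub pattern described above (simplified; the real
  version lives in include/linux/cpu.h, keyed off
  CONFIG_ARCH_HAS_CPU_FINALIZE_INIT). Compile with
  -DARCH_HAS_CPU_FINALIZE_INIT to take the arch path: ]

    #include <stdio.h>

    #ifdef ARCH_HAS_CPU_FINALIZE_INIT
    void arch_cpu_finalize_init(void);  /* the arch supplies the body */

    void arch_cpu_finalize_init(void)
    {
        puts("arch-specific CPU finalization");
    }
    #else
    /* architectures without the hook get a free no-op */
    static inline void arch_cpu_finalize_init(void) { }
    #endif

    int main(void)
    {
        /* start_kernel() invokes this once, after calibrate_delay() */
        arch_cpu_finalize_init();
        return 0;
    }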