commit 3cfa7ce38ae6c2c8e57201e2978178c42051defb
Author: Greg Kroah-Hartman
Date: Fri Mar 11 12:42:10 2022 +0100

Linux 5.16.14

Link: https://lore.kernel.org/r/20220309155859.086952723@linuxfoundation.org
Tested-by: Shuah Khan
Tested-by: Fox Chen
Tested-by: Bagas Sanjaya
Link: https://lore.kernel.org/r/20220310140811.832630727@linuxfoundation.org
Tested-by: Jon Hunter
Tested-by: Shuah Khan
Tested-by: Justin M. Forbes
Tested-by: Linux Kernel Functional Testing
Tested-by: Ron Economos
Tested-by: Florian Fainelli
Tested-by: Fox Chen
Tested-by: Guenter Roeck
Tested-by: Zan Aziz
Tested-by: Rudi Heitbaum
Tested-by: Bagas Sanjaya
Signed-off-by: Greg Kroah-Hartman

commit cf6eae0facc6022d642b3588517f93c088a75bfe
Author: Greg Kroah-Hartman
Date: Thu Mar 10 14:47:12 2022 +0100

Revert "ACPI: PM: s2idle: Cancel wakeup before dispatching EC GPE"

This reverts commit 4287509b4d21e34dc49266c6239920811e189e40 which is commit dc0075ba7f387fe4c48a8c674b11ab6f374a6acc upstream.

It's been reported to cause problems with a number of Fedora and Arch Linux users, so drop it for now until that is resolved.

Link: https://lore.kernel.org/r/CAJZ5v0gE52NT=4kN4MkhV3Gx=M5CeMGVHOF0jgTXDb5WwAMs_Q@mail.gmail.com
Link: https://lore.kernel.org/r/31b9d1cd-6a67-218b-4ada-12f72e6f00dc@redhat.com
Reported-by: Hans de Goede
Reported-by: Rafael J. Wysocki
Cc: Sasha Levin
Cc: Justin Forbes
Cc: Mark Pearson
Signed-off-by: Greg Kroah-Hartman

commit 34630641e955f23ae06db178822d99d0a9d89b20
Author: Juergen Gross
Date: Fri Feb 25 16:05:43 2022 +0100

xen/netfront: react properly to failing gnttab_end_foreign_access_ref()

Commit 66e3531b33ee51dad17c463b4d9c9f52e341503d upstream.

When calling gnttab_end_foreign_access_ref() the returned value must be tested and the reaction to that value should be appropriate.

In case of failure in xennet_get_responses() the reaction should not be to crash the system, but to disable the network device.

The calls in setup_netfront() can be replaced by calls of gnttab_end_foreign_access(). While at it avoid double free of ring pages and grant references via xennet_disconnect_backend() in this case.

This is CVE-2022-23042 / part of XSA-396.

Reported-by: Demi Marie Obenour
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman
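A minimal sketch of the pattern the xen/netfront entry above describes: test the result of gnttab_end_foreign_access_ref() and have the caller disable the device instead of crashing. The helper name, the dev pointer and the error code are illustrative, not the actual driver code; the two-argument (ref, readonly) prototype is the one this kernel series uses.

    #include <linux/device.h>
    #include <xen/grant_table.h>

    /*
     * Sketch only: try to end our grant on a ring slot.  If the backend
     * still has it mapped, do not BUG() -- report the problem and let
     * the caller mark the device broken.
     */
    static int end_access_or_disable(struct device *dev, grant_ref_t ref)
    {
            if (gnttab_end_foreign_access_ref(ref, 0))
                    return 0;               /* grant released, all good */

            dev_alert(dev, "grant %u still in use by the backend\n", ref);
            return -EBUSY;                  /* caller disables the device */
    }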
commit 6d168929b4c497a8c1945dd71e467b2db12bbad2
Author: Juergen Gross
Date: Fri Feb 25 16:05:43 2022 +0100

xen/gnttab: fix gnttab_end_foreign_access() without page specified

Commit 42baefac638f06314298087394b982ead9ec444b upstream.

gnttab_end_foreign_access() is used to free a grant reference and optionally to free the associated page. In case the grant is still in use by the other side, processing is deferred. This leads to a problem in case no page to be freed is specified by the caller: the caller doesn't know that the page is still mapped by the other side and thus should not be used for other purposes.

The correct way to handle this situation is to take an additional reference to the granted page in case handling is being deferred and to drop that reference when the grant reference could be freed finally.

This requires that there are no users of gnttab_end_foreign_access() left directly repurposing the granted page after the call, as this might result in clobbered data or information leaks via the not yet freed grant reference.

This is part of CVE-2022-23041 / XSA-396.

Reported-by: Simon Gaiser
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman

commit 674d0ae05bc8210779781c6c0fee136d3abaf7d0
Author: Juergen Gross
Date: Fri Feb 25 16:05:43 2022 +0100

xen/pvcalls: use alloc/free_pages_exact()

Commit b0576cc9c6b843d99c6982888d59a56209341888 upstream.

Instead of __get_free_pages() and free_pages() use alloc_pages_exact() and free_pages_exact(). This is in preparation of a change of gnttab_end_foreign_access() which will prohibit use of high-order pages.

This is part of CVE-2022-23041 / XSA-396.

Reported-by: Simon Gaiser
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman

commit d83dd50f3c23bc887e4c67d547e5a21a23fb8bb8
Author: Juergen Gross
Date: Fri Feb 25 16:05:42 2022 +0100

xen/9p: use alloc/free_pages_exact()

Commit 5cadd4bb1d7fc9ab201ac14620d1a478357e4ebd upstream.

Instead of __get_free_pages() and free_pages() use alloc_pages_exact() and free_pages_exact(). This is in preparation of a change of gnttab_end_foreign_access() which will prohibit use of high-order pages.

By using the local variable "order" instead of ring->intf->ring_order in the error path of xen_9pfs_front_alloc_dataring() another bug is fixed, as the error path can be entered before ring->intf->ring_order is being set.

By using alloc_pages_exact() the size in bytes is specified for the allocation, which fixes another bug for the case of order < (PAGE_SHIFT - XEN_PAGE_SHIFT).

This is part of CVE-2022-23041 / XSA-396.

Reported-by: Simon Gaiser
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman

commit b3df6e19199db0b7bc15e28124b8e23f33205c8c
Author: Juergen Gross
Date: Fri Feb 25 16:05:42 2022 +0100

xen: remove gnttab_query_foreign_access()

Commit 1dbd11ca75fe664d3e54607547771d021f531f59 upstream.

Remove gnttab_query_foreign_access(), as it is unused and unsafe to use.

All previous use cases assumed a grant would not be in use after gnttab_query_foreign_access() returned 0. This information is useless in the best case, as it only refers to a situation in the past, which could have changed already.

Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman

commit 3b72403eb1850f79deef77497763a6eb65654863
Author: Juergen Gross
Date: Fri Feb 25 16:05:42 2022 +0100

xen/gntalloc: don't use gnttab_query_foreign_access()

Commit d3b6372c5881cb54925212abb62c521df8ba4809 upstream.

Using gnttab_query_foreign_access() is unsafe, as it is racy by design. The use case in the gntalloc driver is not needed at all.

While at it replace the call of gnttab_end_foreign_access_ref() with a call of gnttab_end_foreign_access(), which is what is really wanted there. In case the grant wasn't used due to an allocation failure, just free the grant via gnttab_free_grant_reference().

This is CVE-2022-23039 / part of XSA-396.

Reported-by: Demi Marie Obenour
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman
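A hedged sketch of the two cases the xen/gntalloc entry above distinguishes: a grant that was actually handed to the other side is torn down with gnttab_end_foreign_access(), while a grant reference that was never used (for example because a later allocation failed) is simply returned with gnttab_free_grant_reference(). The struct, field and function names below are illustrative, and the 5.16-era three-argument prototype gnttab_end_foreign_access(ref, readonly, page) is assumed.

    #include <linux/mm.h>
    #include <xen/grant_table.h>

    /* illustrative stand-in for the driver's per-grant bookkeeping */
    struct demo_gref {
            grant_ref_t  ref;
            struct page *page;     /* page backing the grant, if granted */
            bool         granted;  /* did gnttab_grant_foreign_access() succeed? */
    };

    static void demo_release_gref(struct demo_gref *g)
    {
            if (g->granted)
                    /* end the grant and free the page (deferred if still in use) */
                    gnttab_end_foreign_access(g->ref, 0,
                                    (unsigned long)page_address(g->page));
            else
                    /* grant reference was allocated but never populated */
                    gnttab_free_grant_reference(g->ref);
    }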
commit 5ee69f5e01888f9e0efceb0b43fade3015e06297
Author: Juergen Gross
Date: Fri Feb 25 16:05:42 2022 +0100

xen/scsifront: don't use gnttab_query_foreign_access() for mapped status

Commit 33172ab50a53578a95691310f49567c9266968b0 upstream.

It isn't enough to check whether a grant is still in use by calling gnttab_query_foreign_access(), as a mapping could be realized by the other side just after having called that function.

In case the call was done in preparation of revoking a grant it is better to do so via gnttab_try_end_foreign_access() and check the success of that operation instead.

This is CVE-2022-23038 / part of XSA-396.

Reported-by: Demi Marie Obenour
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman

commit 741052b05bcdc295dd715a71549b28c926266800
Author: Juergen Gross
Date: Fri Feb 25 16:05:41 2022 +0100

xen/netfront: don't use gnttab_query_foreign_access() for mapped status

Commit 31185df7e2b1d2fa1de4900247a12d7b9c7087eb upstream.

It isn't enough to check whether a grant is still in use by calling gnttab_query_foreign_access(), as a mapping could be realized by the other side just after having called that function.

In case the call was done in preparation of revoking a grant it is better to do so via gnttab_end_foreign_access_ref() and check the success of that operation instead.

This is CVE-2022-23037 / part of XSA-396.

Reported-by: Demi Marie Obenour
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman

commit 2261393c1a3e597f7490ec7acc09ef2298e6b85d
Author: Juergen Gross
Date: Fri Feb 25 16:05:41 2022 +0100

xen/blkfront: don't use gnttab_query_foreign_access() for mapped status

Commit abf1fd5919d6238ee3bc5eb4a9b6c3947caa6638 upstream.

It isn't enough to check whether a grant is still in use by calling gnttab_query_foreign_access(), as a mapping could be realized by the other side just after having called that function.

In case the call was done in preparation of revoking a grant it is better to do so via gnttab_end_foreign_access_ref() and check the success of that operation instead.

For the ring allocation use alloc_pages_exact() in order to avoid high order pages in case of a multi-page ring.

If a grant wasn't unmapped by the backend without persistent grants being used, set the device state to "error".

This is CVE-2022-23036 / part of XSA-396.

Reported-by: Demi Marie Obenour
Signed-off-by: Juergen Gross
Reviewed-by: Roger Pau Monné
Signed-off-by: Greg Kroah-Hartman

commit 0ba1ab25bb5057869621b340dbd411cca3242467
Author: Juergen Gross
Date: Fri Feb 25 16:05:41 2022 +0100

xen/grant-table: add gnttab_try_end_foreign_access()

Commit 6b1775f26a2da2b05a6dc8ec2b5d14e9a4701a1a upstream.

Add a new grant table function gnttab_try_end_foreign_access(), which will remove and free a grant if it is not in use.

Its main use case is to either free a grant if it is no longer in use, or to take some other action if it is still in use. This other action can be an error exit, or (e.g. in the case of blkfront persistent grant feature) some special handling.

This is CVE-2022-23036, CVE-2022-23038 / part of XSA-396.

Reported-by: Demi Marie Obenour
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman
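A sketch of how the new gnttab_try_end_foreign_access() helper from the xen/grant-table entry above is meant to be used when revoking a grant. The return convention (non-zero when the grant could be removed and freed) is assumed from the entry's description, and the wrapper function is illustrative.

    #include <xen/grant_table.h>

    /*
     * Sketch: try to revoke a grant before reusing the backing memory.
     * If the other side still has it mapped, signal an error so the
     * caller can take its "other action" (error exit, special handling)
     * instead of handing the page back while it is still accessible.
     */
    static int demo_revoke_grant(grant_ref_t ref)
    {
            if (gnttab_try_end_foreign_access(ref))
                    return 0;       /* grant removed and freed */

            return -EBUSY;          /* still mapped by the backend */
    }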
commit 69e581afd2eafd51df6d4a24ab488cb8863c2dcd
Author: Juergen Gross
Date: Fri Feb 25 16:05:40 2022 +0100

xen/xenbus: don't let xenbus_grant_ring() remove grants in error case

Commit 3777ea7bac3113005b7180e6b9dadf16d19a5827 upstream.

Letting xenbus_grant_ring() tear down grants in the error case is problematic, as the other side could already have used these grants. Calling gnttab_end_foreign_access_ref() without checking success results in an unclear situation for any caller of xenbus_grant_ring() as in the error case the memory pages of the ring page might be partially mapped. Freeing them would risk unwanted foreign access to them, while not freeing them would leak memory.

In order to remove the need to undo any gnttab_grant_foreign_access() calls, use gnttab_alloc_grant_references() to make sure no further error can occur in the loop granting access to the ring pages.

It should be noted that this way of handling removes leaking of grant entries in the error case, too.

This is CVE-2022-23040 / part of XSA-396.

Reported-by: Demi Marie Obenour
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Signed-off-by: Greg Kroah-Hartman

commit 624a3c7d15efc6eebaaf7c1aba4085f5ae79abb4
Author: Russell King (Oracle)
Date: Thu Mar 10 10:22:14 2022 +0000

ARM: fix build warning in proc-v7-bugs.c

commit b1a384d2cbccb1eb3f84765020d25e2c1929706e upstream.

The kernel test robot discovered that building without HARDEN_BRANCH_PREDICTOR issues a warning due to a missing argument to pr_info().

Add the missing argument.

Reported-by: kernel test robot
Fixes: 9dd78194a372 ("ARM: report Spectre v2 status through sysfs")
Signed-off-by: Russell King (Oracle)
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

commit f68edacfcba00cb9269ebaf72baaa1a2536bc516
Author: Nathan Chancellor
Date: Wed Mar 9 12:16:34 2022 -0700

arm64: Do not include __READ_ONCE() block in assembly files

commit 52c9f93a9c482251cb0d224faa602ba26d462be8 upstream.

When building arm64 defconfig + CONFIG_LTO_CLANG_{FULL,THIN}=y after commit 558c303c9734 ("arm64: Mitigate spectre style branch history side channels"), the following error occurs:

  <instantiation>:4:2: error: invalid fixup for movz/movk instruction
  mov     w0, #ARM_SMCCC_ARCH_WORKAROUND_3
  ^

Marc figured out that moving "#include <linux/init.h>" in include/linux/arm-smccc.h into a !__ASSEMBLY__ block resolves it. The full include chain with CONFIG_LTO=y from include/linux/arm-smccc.h:

  include/linux/init.h
  include/linux/compiler.h
  arch/arm64/include/asm/rwonce.h
  arch/arm64/include/asm/alternative-macros.h
  arch/arm64/include/asm/assembler.h

The asm/alternative-macros.h include in asm/rwonce.h only happens when CONFIG_LTO is set, which ultimately causes asm/assembler.h to be included before the definition of ARM_SMCCC_ARCH_WORKAROUND_3. As a result, the preprocessor does not expand ARM_SMCCC_ARCH_WORKAROUND_3 in __mitigate_spectre_bhb_fw, which results in the error above.

Avoid this problem by just avoiding the CONFIG_LTO=y __READ_ONCE() block in asm/rwonce.h with assembly files, as nothing in that block is useful to assembly files, which allows ARM_SMCCC_ARCH_WORKAROUND_3 to be properly expanded with CONFIG_LTO=y builds.

Fixes: e35123d83ee3 ("arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y")
Cc: stable@vger.kernel.org # 5.11.x
Link: https://lore.kernel.org/r/20220309155716.3988480-1-maz@kernel.org/
Reported-by: Marc Zyngier
Acked-by: James Morse
Signed-off-by: Nathan Chancellor
Link: https://lore.kernel.org/r/20220309191633.2307110-1-nathan@kernel.org
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman
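The arm64 __READ_ONCE() entry above boils down to keeping a C-only block out of assembly preprocessing. A hedged sketch of the shape such a guard takes in a header like asm/rwonce.h; the contents of the guarded block are elided and this is not the literal upstream diff.

    /* arch/arm64/include/asm/rwonce.h -- sketch of the guard, not the real diff */
    #if defined(CONFIG_LTO) && !defined(__ASSEMBLY__)
    #include <asm/alternative-macros.h>

    /* ... LTO-strengthened __READ_ONCE() definitions, C only ... */

    #endif  /* CONFIG_LTO && !__ASSEMBLY__ */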
commit 7d2413841bd094a0a431ccf8ba362c2f21f8f809
Author: Nathan Chancellor
Date: Wed Mar 9 15:07:27 2022 -0700

ARM: Do not use NOCROSSREFS directive with ld.lld

commit 36168e387fa7d0f1fe0cd5cf76c8cea7aee714fa upstream.

ld.lld does not support the NOCROSSREFS directive at the moment, which breaks the build after commit b9baf5c8c5c3 ("ARM: Spectre-BHB workaround"):

  ld.lld: error: ./arch/arm/kernel/vmlinux.lds:34: AT expected, but got NOCROSSREFS

Support for this directive will eventually be implemented, at which point a version check can be added. To avoid breaking the build in the meantime, just define NOCROSSREFS to nothing when using ld.lld, with a link to the issue for tracking.

Cc: stable@vger.kernel.org
Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround")
Link: https://github.com/ClangBuiltLinux/linux/issues/1609
Signed-off-by: Nathan Chancellor
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

commit 3c685791cedff04001160b9788e1809ad4d8695f
Author: Russell King (Oracle)
Date: Wed Mar 9 19:08:42 2022 +0000

ARM: fix co-processor register typo

commit 33970b031dc4653cc9dc80f2886976706c4c8ef1 upstream.

In the recent Spectre BHB patches, there was a typo that is only exposed in certain configurations:

  mcr p15,0,XX,c7,r5,4

should have been

  mcr p15,0,XX,c7,c5,4

Reported-by: kernel test robot
Fixes: b9baf5c8c5c3 ("ARM: Spectre-BHB workaround")
Signed-off-by: Russell King (Oracle)
Acked-by: Catalin Marinas
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

commit e65cdb9c4adf34dce5398e4d77a4b3da4ef1e0dc
Author: Emmanuel Gil Peyrot
Date: Tue Mar 8 20:18:20 2022 +0100

ARM: fix build error when BPF_SYSCALL is disabled

commit 330f4c53d3c2d8b11d86ec03a964b86dc81452f5 upstream.

It was missing a semicolon.

Signed-off-by: Emmanuel Gil Peyrot
Reviewed-by: Nathan Chancellor
Fixes: 25875aa71dfe ("ARM: include unprivileged BPF status in Spectre V2 reporting")
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

commit 80d1978b8062cbad01cbea2aec2a5aac8f61c366
Author: James Morse
Date: Thu Mar 3 16:53:56 2022 +0000

arm64: proton-pack: Include unprivileged eBPF status in Spectre v2 mitigation reporting

commit 58c9a5060cb7cd529d49c93954cdafe81c1d642a upstream.

The mitigations for Spectre-BHB are only applied when an exception is taken from user-space. The mitigation status is reported via the spectre_v2 sysfs vulnerabilities file.

When unprivileged eBPF is enabled the mitigation in the exception vectors can be avoided by an eBPF program.

When unprivileged eBPF is enabled, print a warning and report vulnerable via the sysfs vulnerabilities file.

Acked-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 2df4d0aba0e673d37be14901e853d1d540b19bbd
Author: James Morse
Date: Fri Dec 10 14:32:56 2021 +0000

arm64: Use the clearbhb instruction in mitigations

commit 228a26b912287934789023b4132ba76065d9491c upstream.

Future CPUs may implement a clearbhb instruction that is sufficient to mitigate SpectreBHB. CPUs that implement this instruction, but not CSV2.3, must be affected by Spectre-BHB.

Add support to use this instruction as the BHB mitigation on CPUs that support it. The instruction is in the hint space, so it will be treated as a NOP by older CPUs.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
commit e1e87704621efcf0310bd1543a8e6352156a43bd
Author: James Morse
Date: Fri Dec 10 11:16:18 2021 +0000

KVM: arm64: Allow SMCCC_ARCH_WORKAROUND_3 to be discovered and migrated

commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream.

KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC are implemented, and to preserve that state during migration through its firmware register interface.

Add the necessary boiler plate for SMCCC_ARCH_WORKAROUND_3.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 0b2bf1b37b5ebd90e69e30d8c2d6e1cd0c1f37b4
Author: James Morse
Date: Wed Nov 10 14:48:00 2021 +0000

arm64: Mitigate spectre style branch history side channels

commit 558c303c9734af5a813739cd284879227f7297d2 upstream.

Speculation attacks against some high-performance processors can make use of branch history to influence future speculation. When taking an exception from user-space, a sequence of branches or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear before the first indirect branch. For systems using KPTI the sequence is added to the kpti trampoline where it has a free register as the exit from the trampoline is via a 'ret'. For systems not using KPTI, the same register tricks are used to free up a register in the vectors.

For the firmware call, arch-workaround-3 clobbers 4 registers, so there is no choice but to save them to the EL1 stack. This only happens for entry from EL0, so if we take an exception due to the stack access, it will not become re-entrant.

For KVM, the existing branch-predictor-hardening vectors are used. When a spectre version of these vectors is in use, the firmware call is sufficient to mitigate against Spectre-BHB. For the non-spectre versions, the sequence of branches is added to the indirect vector.

Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit f0567fc3fb835499eda68f20e30ce16f9b83d774
Author: James Morse
Date: Tue Feb 8 16:08:13 2022 +0000

arm64: proton-pack: Report Spectre-BHB vulnerabilities as part of Spectre-v2

commit dee435be76f4117410bbd90573a881fd33488f37 upstream.

Speculation attacks against some high-performance processors can make use of branch history to influence future speculation as part of a spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that previously reported 'Not affected' are now moderately mitigated by CSV2.

Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to also show the state of the BHB mitigation.

Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit d4293ed32d390ce363d964a9216ce9ab0ff9d74c
Author: James Morse
Date: Tue Nov 23 18:29:25 2021 +0000

arm64: Add percpu vectors for EL1

commit bd09128d16fac3c34b80bd6a29088ac632e8ce09 upstream.

The Spectre-BHB workaround adds a firmware call to the vectors. This is needed on some CPUs, but not others. To avoid the unaffected CPU in a big/little pair from making the firmware call, create per cpu vectors.

The per-cpu vectors only apply when returning from EL0.

Systems using KPTI can use the canonical 'full-fat' vectors directly at EL1, the trampoline exit code will switch to this_cpu_vector on exit to EL0. Systems not using KPTI should always use this_cpu_vector.

this_cpu_vector will point at a vector in tramp_vecs or __bp_harden_el1_vectors, depending on whether KPTI is in use.

Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
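A hedged sketch of the idea behind the "Add percpu vectors for EL1" entry above: a per-CPU pointer selects which vector base a CPU installs, so only the affected half of a big/little system pays for the mitigation. The symbol names vectors and this_cpu_vector follow the commit text; the install helper and its placement are illustrative, not the upstream code.

    #include <linux/percpu.h>
    #include <asm/barrier.h>
    #include <asm/sysreg.h>

    extern char vectors[];          /* the regular 'full fat' EL1 vectors */

    /* each CPU points at the vector base it should use on return to EL0 */
    DEFINE_PER_CPU_READ_MOSTLY(const char *, this_cpu_vector) = vectors;

    static void demo_install_cpu_vectors(void)
    {
            const char *v = __this_cpu_read(this_cpu_vector);

            write_sysreg((unsigned long)v, vbar_el1);
            isb();
    }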
commit b9c29587c533faaa0aefeaaf7a4a4ff834975ba4
Author: James Morse
Date: Thu Nov 25 14:25:34 2021 +0000

arm64: entry: Add macro for reading symbol addresses from the trampoline

commit b28a8eebe81c186fdb1a0078263b30576c8e1f42 upstream.

The trampoline code needs to use the address of symbols in the wider kernel, e.g. vectors. PC-relative addressing wouldn't work as the trampoline code doesn't run at the address the linker expected.

tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is set, in which case it uses the data page as a literal pool because the data page can be unmapped when running in user-space, which is required for CPUs vulnerable to meltdown.

Pull this logic out as a macro, instead of adding a third copy of it.

Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit d535ca624f6d439424aeeb0a3cc4a426cfd9a993
Author: James Morse
Date: Thu Nov 18 13:59:46 2021 +0000

arm64: entry: Add vectors that have the bhb mitigation sequences

commit ba2689234be92024e5635d30fe744f4853ad97db upstream.

Some CPUs affected by Spectre-BHB need a sequence of branches, or a firmware call to be run before any indirect branch. This needs to go in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a single set of vectors. If only one part of a big/little combination is affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will allow affected CPUs to select this set of vectors. Later patches will modify the loop count to match what the CPU requires.

Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 448a95af1b7ae205eb762c2c1fb35b290cc3032e
Author: James Morse
Date: Wed Nov 24 15:03:15 2021 +0000

arm64: entry: Add non-kpti __bp_harden_el1_vectors for mitigations

commit aff65393fa1401e034656e349abd655cfe272de0 upstream.

kpti is an optional feature, for systems not using kpti a set of vectors for the spectre-bhb mitigations is needed.

Add another set of vectors, __bp_harden_el1_vectors, that will be used if a mitigation is needed and kpti is not in use.

The EL1 ventries are repeated verbatim as there is no additional work needed for entry from EL1.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 483fa5319f16b627e7873c1079e35ebbfb04cf45
Author: James Morse
Date: Thu Nov 18 15:04:32 2021 +0000

arm64: entry: Allow the trampoline text to occupy multiple pages

commit a9c406e6462ff14956d690de7bbe5131a5677dc9 upstream.

Adding a second set of vectors to .entry.tramp.text will make it larger than a single 4K page.

Allow the trampoline text to occupy up to three pages by adding two more fixmap slots. Previous changes to tramp_valias allowed it to reach beyond a single page.

Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
commit 984e7e3ebac334d7af0069a4d3636cf2338525df
Author: James Morse
Date: Thu Nov 18 13:16:23 2021 +0000

arm64: entry: Make the kpti trampoline's kpti sequence optional

commit c47e4d04ba0f1ea17353d85d45f611277507e07a upstream.

Spectre-BHB needs to add sequences to the vectors. Having one global set of vectors is a problem for big/little systems where the sequence is costly on cpus that are not vulnerable.

Making the vectors per-cpu in the style of KVM's bh_harden_hyp_vecs requires the vectors to be generated by macros.

Make the kpti re-mapping of the kernel optional, so the macros can be used without kpti.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit e0077b0a66f14998c0d18508bf945a40a0d3ebab
Author: James Morse
Date: Thu Nov 18 14:02:30 2021 +0000

arm64: entry: Move trampoline macros out of ifdef'd section

commit 13d7a08352a83ef2252aeb464a5e08dfc06b5dfd upstream.

The macros for building the kpti trampoline are all behind CONFIG_UNMAP_KERNEL_AT_EL0, and in a region that outputs to the .entry.tramp.text section.

Move the macros out so they can be used to generate other kinds of trampoline. Only the symbols need to be guarded by CONFIG_UNMAP_KERNEL_AT_EL0 and appear in the .entry.tramp.text section.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit dfb25997bdefca7a3cd69c1dea872ba52133d31f
Author: James Morse
Date: Wed Nov 24 13:40:09 2021 +0000

arm64: entry: Don't assume tramp_vectors is the start of the vectors

commit ed50da7764535f1e24432ded289974f2bf2b0c5a upstream.

The tramp_ventry macro uses tramp_vectors as the address of the vectors when calculating which ventry in the 'full fat' vectors to branch to.

While there is one set of tramp_vectors, this will be true. Adding multiple sets of vectors will break this assumption.

Move the generation of the vectors to a macro, and pass the start of the vectors as an argument to tramp_ventry.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 5da0c4bbae492434b534ffd39aac5d5610190491
Author: James Morse
Date: Wed Nov 24 11:40:18 2021 +0000

arm64: entry: Allow tramp_alias to access symbols after the 4K boundary

commit 6c5bf79b69f911560fbf82214c0971af6e58e682 upstream.

Systems using kpti enter and exit the kernel through a trampoline mapping that is always mapped, even when the kernel is not. tramp_valias is a macro to find the address of a symbol in the trampoline mapping.

Adding extra sets of vectors will expand the size of the entry.tramp.text section to beyond 4K. tramp_valias will be unable to generate addresses for symbols beyond 4K as it uses the 12 bit immediate of the add instruction.

As there are now two registers available when tramp_alias is called, use the extra register to avoid the 4K limit of the 12 bit immediate.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
commit 62cfcf8d06ca7786e781e1b60f57b67f43448868
Author: James Morse
Date: Tue Nov 23 15:43:31 2021 +0000

arm64: entry: Move the trampoline data page before the text page

commit c091fb6ae059cda563b2a4d93fdbc548ef34e1d6 upstream.

The trampoline code has a data page that holds the address of the vectors, which is unmapped when running in user-space. This ensures that with CONFIG_RANDOMIZE_BASE, the randomised address of the kernel can't be discovered until after the kernel has been mapped.

If the trampoline text page is extended to include multiple sets of vectors, it will be larger than a single page, making it tricky to find the data page without knowing the size of the trampoline text pages, which will vary with PAGE_SIZE.

Move the data page to appear before the text page. This allows the data page to be found without knowing the size of the trampoline text pages. 'tramp_vectors' is used to refer to the beginning of the .entry.tramp.text section, do that explicitly.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 57e9a5fbd1f8b8ac5b7f849715fcdf5a32dda040
Author: James Morse
Date: Tue Nov 23 18:41:43 2021 +0000

arm64: entry: Free up another register on kpti's tramp_exit path

commit 03aff3a77a58b5b52a77e00537a42090ad57b80b upstream.

Kpti stashes x30 in far_el1 while it uses x30 for all its work.

Making the vectors a per-cpu data structure will require a second register.

Allow tramp_exit two registers before it unmaps the kernel, by leaving x30 on the stack, and stashing x29 in far_el1.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 7b63df956358d183c25178e970f6ed304cd0f659
Author: James Morse
Date: Wed Nov 24 15:36:12 2021 +0000

arm64: entry: Make the trampoline cleanup optional

commit d739da1694a0eaef0358a42b76904b611539b77b upstream.

Subsequent patches will add additional sets of vectors that use the same tricks as the kpti vectors to reach the full-fat vectors. The full-fat vectors contain some cleanup for kpti that is patched in by alternatives when kpti is in use. Once there are additional vectors, the cleanup will be needed in more cases. But on big/little systems, the cleanup would be harmful if no trampoline vector were in use. Instead of forcing CPUs that don't need a trampoline vector to use one, make the trampoline cleanup optional.

Entry at the top of the vectors will skip the cleanup. The trampoline vectors can then skip the first instruction, triggering the cleanup to run.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit 46af6fc4f7a22ada597982ff01db34fb4bdba6a3
Author: James Morse
Date: Tue Nov 16 15:06:19 2021 +0000

KVM: arm64: Allow indirect vectors to be used without SPECTRE_V3A

commit 5bdf3437603d4af87f9c7f424b0c8aeed2420745 upstream.

CPUs vulnerable to Spectre-BHB either need to make an SMC-CC firmware call from the vectors, or run a sequence of branches. This gets added to the hyp vectors. If there is no support for arch-workaround-1 in firmware, the indirect vector will be used.

kvm_init_vector_slots() only initialises the two indirect slots if the platform is vulnerable to Spectre-v3a. pKVM's hyp_map_vectors() only initialises __hyp_bp_vect_base if the platform is vulnerable to Spectre-v3a.

As there are about to be more users of the indirect vectors, ensure their entries in hyp_spectre_vector_selector[] are always initialised, and __hyp_bp_vect_base defaults to the regular VA mapping.

The Spectre-v3a check is moved to a helper kvm_system_needs_idmapped_vectors(), and merged with the code that creates the hyp mappings.

Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman
commit 346793c01582f62f4a5536c325a3dfc627ca543a
Author: James Morse
Date: Tue Nov 16 15:00:51 2021 +0000

arm64: spectre: Rename spectre_v4_patch_fw_mitigation_conduit

commit 1b33d4860deaecf1d8eec3061b7e7ed7ab0bae8d upstream.

The spectre-v4 sequence includes an SMC from the assembly entry code. spectre_v4_patch_fw_mitigation_conduit is the patching callback that generates an HVC or SMC depending on the SMCCC conduit type.

As this isn't specific to spectre-v4, rename it smccc_patch_fw_mitigation_conduit so it can be re-used.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit d1e3d6d26d9eab22548c3b7373ec12bbfecc765f
Author: James Morse
Date: Wed Nov 17 15:15:26 2021 +0000

arm64: entry.S: Add ventry overflow sanity checks

commit 4330e2c5c04c27bebf89d34e0bc14e6943413067 upstream.

Subsequent patches add even more code to the ventry slots. Ensure kernels that overflow a ventry slot don't get built.

Reviewed-by: Russell King (Oracle)
Reviewed-by: Catalin Marinas
Signed-off-by: James Morse
Signed-off-by: Greg Kroah-Hartman

commit b6f9598cc07850dcfe3ca420cbe700e84ec74e2e
Author: Joey Gouly
Date: Fri Dec 10 16:54:32 2021 +0000

arm64: cpufeature: add HWCAP for FEAT_RPRES

commit 1175011a7d0030d49dc9c10bde36f08f26d0a8ee upstream.

Add a new HWCAP to detect the Increased precision of Reciprocal Estimate and Reciprocal Square Root Estimate feature (FEAT_RPRES), introduced in Armv8.7.

Also expose this to userspace in the ID_AA64ISAR2_EL1 feature register.

Signed-off-by: Joey Gouly
Cc: Will Deacon
Cc: Jonathan Corbet
Acked-by: Marc Zyngier
Link: https://lore.kernel.org/r/20211210165432.8106-4-joey.gouly@arm.com
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman

commit 549ee0965ba6c5188cd6e66dbf5d1bc15c770617
Author: Joey Gouly
Date: Fri Dec 10 16:54:30 2021 +0000

arm64: cpufeature: add HWCAP for FEAT_AFP

commit 5c13f042e73200b50573ace63e1a6b94e2917616 upstream.

Add a new HWCAP to detect the Alternate Floating-point Behaviour feature (FEAT_AFP), introduced in Armv8.7.

Also expose this to userspace in the ID_AA64MMFR1_EL1 feature register.

Signed-off-by: Joey Gouly
Cc: Will Deacon
Acked-by: Marc Zyngier
Link: https://lore.kernel.org/r/20211210165432.8106-2-joey.gouly@arm.com
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman

commit 6876d90c4abedc8b14427a5708bd2ce269df1741
Author: Joey Gouly
Date: Fri Dec 10 16:54:31 2021 +0000

arm64: add ID_AA64ISAR2_EL1 sys register

commit 9e45365f1469ef2b934f9d035975dbc9ad352116 upstream.

This is a new ID register, introduced in 8.7.

Signed-off-by: Joey Gouly
Cc: Will Deacon
Cc: Marc Zyngier
Cc: James Morse
Cc: Alexandru Elisei
Cc: Suzuki K Poulose
Cc: Reiji Watanabe
Acked-by: Marc Zyngier
Link: https://lore.kernel.org/r/20211210165432.8106-3-joey.gouly@arm.com
Signed-off-by: Catalin Marinas
Signed-off-by: Greg Kroah-Hartman

commit 680e356c1be19a7663d8077be12e0ab048430ebd
Author: Russell King (Oracle)
Date: Mon Mar 7 19:28:32 2022 +0000

ARM: include unprivileged BPF status in Spectre V2 reporting

commit 25875aa71dfefd1959f07e626c4d285b88b27ac2 upstream.

The mitigations for Spectre-BHB are only applied when an exception is taken, but when unprivileged BPF is enabled, userspace can load BPF programs that can be used to exploit the problem.

When unprivileged BPF is enabled, report the vulnerable status via the spectre_v2 sysfs file.

Signed-off-by: Russell King (Oracle)
Signed-off-by: Greg Kroah-Hartman
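The two cpufeature entries a few commits above (FEAT_AFP and FEAT_RPRES) expose new HWCAP bits to userspace. A small userspace check, assuming a toolchain whose arm64 uapi headers are new enough to define HWCAP2_AFP and HWCAP2_RPRES:

    /* build natively on arm64:  cc -o hwcap_check hwcap_check.c */
    #include <stdio.h>
    #include <sys/auxv.h>
    #include <asm/hwcap.h>   /* HWCAP2_AFP / HWCAP2_RPRES, if headers are new enough */

    int main(void)
    {
            unsigned long hw2 = getauxval(AT_HWCAP2);

            printf("FEAT_AFP   (alternate FP behaviour):          %s\n",
                   (hw2 & HWCAP2_AFP) ? "present" : "absent");
            printf("FEAT_RPRES (increased reciprocal precision):  %s\n",
                   (hw2 & HWCAP2_RPRES) ? "present" : "absent");
            return 0;
    }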
commit 9fd1d31810ccf6b4e4df8ccf2e68bbdcf528d186
Author: Russell King (Oracle)
Date: Thu Feb 10 16:05:45 2022 +0000

ARM: Spectre-BHB workaround

commit b9baf5c8c5c356757f4f9d8180b5e9d234065bc3 upstream.

Work around the Spectre BHB issues for Cortex-A15, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75. We also include Brahma B15 to be safe, which is affected by Spectre V2 in the same ways as Cortex-A15.

Reviewed-by: Catalin Marinas
Signed-off-by: Russell King (Oracle)
Signed-off-by: Greg Kroah-Hartman

commit 21ceffd3628edfc775d33851cf56ea1d85c528fc
Author: Russell King (Oracle)
Date: Fri Feb 11 19:49:50 2022 +0000

ARM: use LOADADDR() to get load address of sections

commit 8d9d651ff2270a632e9dc497b142db31e8911315 upstream.

Use the linker's LOADADDR() macro to get the load address of the sections, and provide a macro to set the start and end symbols.

Acked-by: Catalin Marinas
Signed-off-by: Russell King (Oracle)
Signed-off-by: Greg Kroah-Hartman

commit 0f0fd6ef76dee10aae861c69635c42b1c427e577
Author: Russell King (Oracle)
Date: Fri Feb 11 19:46:15 2022 +0000

ARM: early traps initialisation

commit 04e91b7324760a377a725e218b5ee783826d30f5 upstream.

Provide a couple of helpers to copy the vectors and stubs, and also to flush the copied vectors and stubs.

Acked-by: Catalin Marinas
Signed-off-by: Russell King (Oracle)
Signed-off-by: Greg Kroah-Hartman

commit f5eb0f1dcde4b7c2b5ee920ae53bcecaaba03947
Author: Russell King (Oracle)
Date: Fri Feb 11 16:45:54 2022 +0000

ARM: report Spectre v2 status through sysfs

commit 9dd78194a3722fa6712192cdd4f7032d45112a9a upstream.

As per other architectures, add support for reporting the Spectre vulnerability status via sysfs CPU.

Acked-by: Catalin Marinas
Signed-off-by: Russell King (Oracle)
Signed-off-by: Greg Kroah-Hartman

commit db3eaccf6e16c2d15d3fc8dc7c430d7df7c7bcc4
Author: Josh Poimboeuf
Date: Fri Feb 25 14:32:28 2022 -0800

x86/speculation: Warn about eIBRS + LFENCE + Unprivileged eBPF + SMT

commit 0de05d056afdb00eca8c7bbb0c79a3438daf700c upstream.

The commit 44a3918c8245 ("x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting") added a warning for the "eIBRS + unprivileged eBPF" combination, which has been shown to be vulnerable against Spectre v2 BHB-based attacks.

However, there's no warning about the "eIBRS + LFENCE retpoline + unprivileged eBPF" combo. The LFENCE adds more protection by shortening the speculation window after a mispredicted branch. That makes an attack significantly more difficult, even with unprivileged eBPF. So at least for now the logic doesn't warn about that combination.

But if you then add SMT into the mix, the SMT attack angle weakens the effectiveness of the LFENCE considerably.

So extend the "eIBRS + unprivileged eBPF" warning to also include the "eIBRS + LFENCE + unprivileged eBPF + SMT" case.

[ bp: Massage commit message. ]

Suggested-by: Alyssa Milburn
Signed-off-by: Josh Poimboeuf
Signed-off-by: Borislav Petkov
Signed-off-by: Greg Kroah-Hartman
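Several entries above and below report their status through /sys/devices/system/cpu/vulnerabilities/spectre_v2, and the eBPF-related warnings depend on whether unprivileged BPF is enabled (the kernel.unprivileged_bpf_disabled sysctl). A trivial userspace check of both files:

    #include <stdio.h>

    static void print_file(const char *label, const char *path)
    {
            char buf[256];
            FILE *f = fopen(path, "r");

            if (!f) {
                    printf("%s: <unreadable>\n", label);
                    return;
            }
            if (fgets(buf, sizeof(buf), f))
                    printf("%s: %s", label, buf);   /* buf keeps its newline */
            fclose(f);
    }

    int main(void)
    {
            print_file("spectre_v2",
                       "/sys/devices/system/cpu/vulnerabilities/spectre_v2");
            print_file("unprivileged_bpf_disabled",
                       "/proc/sys/kernel/unprivileged_bpf_disabled");
            return 0;
    }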
commit 19b78a9a32286216ec4aa64924d39ba5748b506d
Author: Josh Poimboeuf
Date: Fri Feb 25 14:31:49 2022 -0800

x86/speculation: Warn about Spectre v2 LFENCE mitigation

commit eafd987d4a82c7bb5aa12f0e3b4f8f3dea93e678 upstream.

With: f8a66d608a3e ("x86,bugs: Unconditionally allow spectre_v2=retpoline,amd") it became possible to enable the LFENCE "retpoline" on Intel. However, Intel doesn't recommend it, as it has some weaknesses compared to retpoline.

Now AMD doesn't recommend it either.

It can still be left available as a cmdline option. It's faster than retpoline but is weaker in certain scenarios -- particularly SMT, but even non-SMT may be vulnerable in some cases.

So just unconditionally warn if the user requests it on the cmdline.

[ bp: Massage commit message. ]

Signed-off-by: Josh Poimboeuf
Signed-off-by: Borislav Petkov
Signed-off-by: Greg Kroah-Hartman

commit 23cc87c04704f7ed978c85a4ec5901719fd26580
Author: Kim Phillips
Date: Mon Feb 28 11:23:16 2022 -0600

x86/speculation: Update link to AMD speculation whitepaper

commit e9b6013a7ce31535b04b02ba99babefe8a8599fa upstream.

Update the link to the "Software Techniques for Managing Speculation on AMD Processors" whitepaper.

Signed-off-by: Kim Phillips
Signed-off-by: Borislav Petkov
Signed-off-by: Greg Kroah-Hartman

commit 1984feb9872b905420af97d471d60051b6dd5851
Author: Kim Phillips
Date: Mon Feb 28 11:23:15 2022 -0600

x86/speculation: Use generic retpoline by default on AMD

commit 244d00b5dd4755f8df892c86cab35fb2cfd4f14b upstream.

AMD retpoline may be susceptible to speculation. The speculation execution window for an incorrect indirect branch prediction using LFENCE/JMP sequence may potentially be large enough to allow exploitation using Spectre V2.

By default, don't use retpoline,lfence on AMD. Instead, use the generic retpoline.

Signed-off-by: Kim Phillips
Signed-off-by: Borislav Petkov
Signed-off-by: Greg Kroah-Hartman

commit 6710bd548ec701bfedc175d145323cbf6930decb
Author: Josh Poimboeuf
Date: Fri Feb 18 11:49:08 2022 -0800

x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting

commit 44a3918c8245ab10c6c9719dd12e7a8d291980d8 upstream.

With unprivileged eBPF enabled, eIBRS (without retpoline) is vulnerable to Spectre v2 BHB-based attacks.

When both are enabled, print a warning message and report it in the 'spectre_v2' sysfs vulnerabilities file.

Signed-off-by: Josh Poimboeuf
Signed-off-by: Borislav Petkov
Reviewed-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman

commit 227649a0a673da2c87c02c26c476a8599989a4d8
Author: Peter Zijlstra
Date: Wed Feb 16 20:57:02 2022 +0100

Documentation/hw-vuln: Update spectre doc

commit 5ad3eb1132453b9795ce5fd4572b1c18b292cca9 upstream.

Update the doc with the new fun.

[ bp: Massage commit message. ]

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Reviewed-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman

commit 9b629eeb33222615a367eee8f038db907a797d3f
Author: Peter Zijlstra
Date: Wed Feb 16 20:57:01 2022 +0100

x86/speculation: Add eIBRS + Retpoline options

commit 1e19da8522c81bf46b335f84137165741e0d82b7 upstream.

Thanks to the chaps at VUsec it is now clear that eIBRS is not sufficient, therefore allow enabling of retpolines along with eIBRS.

Add spectre_v2=eibrs, spectre_v2=eibrs,lfence and spectre_v2=eibrs,retpoline options to explicitly pick your preferred means of mitigation.

Since there's new mitigations there's also user visible changes in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to reflect these new mitigations.

[ bp: Massage commit message, trim error messages, do more precise eIBRS mode checking. ]

Co-developed-by: Josh Poimboeuf
Signed-off-by: Josh Poimboeuf
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Reviewed-by: Patrick Colp
Reviewed-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman
commit 56829c19c8171303faca18d6ab3511ecdf3e7e23
Author: Peter Zijlstra (Intel)
Date: Wed Feb 16 20:57:00 2022 +0100

x86/speculation: Rename RETPOLINE_AMD to RETPOLINE_LFENCE

commit d45476d9832409371537013ebdd8dc1a7781f97a upstream.

The RETPOLINE_AMD name is unfortunate since it isn't necessarily AMD only, in fact Hygon also uses it. Furthermore it will likely be sufficient for some Intel processors. Therefore rename the thing to RETPOLINE_LFENCE to better describe what it is.

Add the spectre_v2=retpoline,lfence option as an alias to spectre_v2=retpoline,amd to preserve existing setups. However, the output of /sys/devices/system/cpu/vulnerabilities/spectre_v2 will be changed.

[ bp: Fix typos, massage. ]

Co-developed-by: Josh Poimboeuf
Signed-off-by: Josh Poimboeuf
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Borislav Petkov
Reviewed-by: Thomas Gleixner
Signed-off-by: Greg Kroah-Hartman