commit 0047d57e6c91177bb731bed5ada6c211868bc27c Author: Greg Kroah-Hartman Date: Mon May 30 09:24:09 2022 +0200 Linux 5.18.1 Link: https://lore.kernel.org/r/20220527084801.223648383@linuxfoundation.org Tested-by: Ronald Warsow Tested-by: Justin M. Forbes Tested-by: Ron Economos Tested-by: Bagas Sanjaya Tested-by: Linux Kernel Functional Testing Tested-by: Rudi Heitbaum Signed-off-by: Greg Kroah-Hartman commit 80213405895162c448b30d28623f1f95337f9b3e Author: Edward Matijevic Date: Fri May 20 23:45:15 2022 -0500 ALSA: ctxfi: Add SB046x PCI ID commit 1b073ebb174d0c7109b438e0a5eb4495137803ec upstream. Adds the PCI ID for X-Fi cards sold under the Platinum and XtremeMusic names Before: snd_ctxfi 0000:05:05.0: chip 20K1 model Unknown (1102:0021) is found After: snd_ctxfi 0000:05:05.0: chip 20K1 model SB046x (1102:0021) is found [ This is only about defining the model name string, and the rest is handled just like before, as a default unknown device. Edward confirmed that the stuff has been working fine -- tiwai ] Signed-off-by: Edward Matijevic Cc: Link: https://lore.kernel.org/r/cae7d1a4-8bd9-7dfe-7427-db7e766f7272@gmail.com Signed-off-by: Takashi Iwai Signed-off-by: Greg Kroah-Hartman commit 453f8156652106e335176db8bfff05333e248498 Author: Lorenzo Pieralisi Date: Thu Apr 7 11:51:20 2022 +0100 ACPI: sysfs: Fix BERT error region memory mapping commit 1bbc21785b7336619fb6a67f1fff5afdaf229acc upstream. Currently the sysfs interface maps the BERT error region as "memory" (through acpi_os_map_memory()) in order to copy the error records into memory buffers through memory operations (eg memory_read_from_buffer()). The OS cannot detect whether the BERT error region is part of system RAM or it is "device memory" (eg BMC memory) and therefore it cannot detect which memory attributes the bus to memory supports (and corresponding kernel mapping, unless firmware provides the required information). The acpi_os_map_memory() arch backend implementation determines the mapping attributes. On arm64, if the BERT error region is not present in the EFI memory map, the error region is mapped as device-nGnRnE; this triggers alignment faults since memcpy unaligned accesses are not allowed in device-nGnRnE regions. The ACPI sysfs code cannot therefore map by default the BERT error region with memory semantics but should use a safer default. Change the sysfs code to map the BERT error region as MMIO (through acpi_os_map_iomem()) and use the memcpy_fromio() interface to read the error region into the kernel buffer. Link: https://lore.kernel.org/linux-arm-kernel/31ffe8fc-f5ee-2858-26c5-0fd8bdd68702@arm.com Link: https://lore.kernel.org/linux-acpi/CAJZ5v0g+OVbhuUUDrLUCfX_mVqY_e8ubgLTU98=jfjTeb4t+Pw@mail.gmail.com Signed-off-by: Lorenzo Pieralisi Tested-by: Veronika Kabatova Tested-by: Aristeu Rozanski Acked-by: Ard Biesheuvel Signed-off-by: Rafael J. Wysocki Cc: dann frazier Signed-off-by: Greg Kroah-Hartman commit 068108f53811d99b5a470eafe5b0ddd523ed2c79 Author: Jason A. Donenfeld Date: Sun May 22 22:25:41 2022 +0200 random: check for signals after page of pool writes commit 1ce6c8d68f8ac587f54d0a271ac594d3d51f3efb upstream. get_random_bytes_user() checks for signals after producing a PAGE_SIZE worth of output, just like /dev/zero does. write_pool() is doing basically the same work (actually, slightly more expensive), and so should stop to check for signals in the same way. Let's also name it write_pool_user() to match get_random_bytes_user(), so this won't be misused in the future.
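The per-page signal check described above amounts to breaking one long copy into page-sized chunks and bailing out as soon as a signal is pending. A minimal userspace sketch of that pattern follows; the pool, the mixing step, and the helper names are illustrative stand-ins, not the kernel's actual write_pool_user() implementation.

  #include <signal.h>
  #include <stddef.h>

  #define PAGE_SIZE_SKETCH 4096

  static volatile sig_atomic_t got_signal;

  static void on_signal(int sig) { (void)sig; got_signal = 1; }

  /* Illustrative "pool": just fold bytes in with xor. */
  static unsigned char pool[256];

  static void mix_into_pool(const unsigned char *buf, size_t len)
  {
          size_t i;

          for (i = 0; i < len; i++)
                  pool[i % sizeof(pool)] ^= buf[i];
  }

  /*
   * Consume up to len bytes from buf, checking for a pending signal after
   * every page worth of input so that a huge write can be interrupted
   * promptly.  Returns how many bytes were actually consumed.
   */
  static size_t write_pool_sketch(const unsigned char *buf, size_t len)
  {
          size_t done = 0;

          while (done < len) {
                  size_t chunk = len - done;

                  if (chunk > PAGE_SIZE_SKETCH)
                          chunk = PAGE_SIZE_SKETCH;
                  mix_into_pool(buf + done, chunk);
                  done += chunk;
                  if (got_signal)
                          break;  /* stop early, report partial progress */
          }
          return done;
  }

  int main(void)
  {
          static unsigned char big[64u << 20];

          signal(SIGINT, on_signal);
          return write_pool_sketch(big, sizeof(big)) == sizeof(big) ? 0 : 1;
  }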
Before this patch, massive writes to /dev/urandom would tie up the process for an extremely long time and make it unterminatable. After, it can be successfully interrupted. The following test program can be used to see this works as intended:

  #include <unistd.h>
  #include <fcntl.h>
  #include <signal.h>
  #include <stdio.h>

  static unsigned char x[~0U];

  static void handle(int) { }

  int main(int argc, char *argv[])
  {
    pid_t pid = getpid(), child;
    int fd;

    signal(SIGUSR1, handle);
    if (!(child = fork())) {
      for (;;)
        kill(pid, SIGUSR1);
    }
    fd = open("/dev/urandom", O_WRONLY);
    pause();
    printf("interrupted after writing %zd bytes\n", write(fd, x, sizeof(x)));
    close(fd);
    kill(child, SIGTERM);
    return 0;
  }

Result before: "interrupted after writing 2147479552 bytes" Result after: "interrupted after writing 4096 bytes" Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 08467db994c0cf6c16e347a99e374a432e6c3509 Author: Jens Axboe Date: Thu May 19 17:31:37 2022 -0600 random: wire up fops->splice_{read,write}_iter() commit 79025e727a846be6fd215ae9cdb654368ac3f9a6 upstream. Now that random/urandom is using {read,write}_iter, we can wire it up to using the generic splice handlers. Fixes: 36e2c7421f02 ("fs: don't allow splice read/write without explicit ops") Signed-off-by: Jens Axboe [Jason: added the splice_write path. Note that sendfile() and such still does not work for read, though it does for write, because of a file type restriction in splice_direct_to_actor(), which I'll address separately.] Cc: Al Viro Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit fdb1b354e301dd2aa56df862b9d4894b79039c40 Author: Jens Axboe Date: Thu May 19 17:43:15 2022 -0600 random: convert to using fops->write_iter() commit 22b0a222af4df8ee9bb8e07013ab44da9511b047 upstream. Now that the read side has been converted to fix a regression with splice, convert the write side as well to have some symmetry in the interface used (and help deprecate ->write()). Signed-off-by: Jens Axboe [Jason: cleaned up random_ioctl a bit, require full writes in RNDADDENTROPY since it's crediting entropy, simplify control flow of write_pool(), and incorporate suggestions from Al.] Cc: Al Viro Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 09ea8bc7276aead310e5faa21ae29ceeecea87c9 Author: Jens Axboe Date: Thu May 19 17:31:36 2022 -0600 random: convert to using fops->read_iter() commit 1b388e7765f2eaa137cf5d92b47ef5925ad83ced upstream. This is a pre-requisite to wiring up splice() again for the random and urandom drivers. It also allows us to remove the INT_MAX check in getrandom(), because import_single_range() applies capping internally. Signed-off-by: Jens Axboe [Jason: rewrote get_random_bytes_user() to simplify and also incorporate additional suggestions from Al.] Cc: Al Viro Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 806afdc6d97fe17a9b96a8d6964f7d2da610ee58 Author: Jason A. Donenfeld Date: Sun May 15 00:22:05 2022 +0200 random: unify batched entropy implementations commit 3092adcef3ffd2ef59634998297ca8358461ebce upstream. There are currently two separate batched entropy implementations, for u32 and u64, with nearly identical code, with the goal of avoiding unaligned memory accesses and letting the buffers be used more efficiently. Having to maintain these two functions independently is a bit of a hassle though, considering that they always need to be kept in sync.
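As an illustration of the duplication being removed here, and of the macro-based factoring described in the next paragraph, two nearly identical batch functions can be generated from a single type-generic macro. The names, batch size, and refill source below are made up for the example and are not the kernel's actual batched-entropy code.

  #include <stdint.h>
  #include <stdlib.h>

  /* Stand-in source of fresh bytes; the real code refills from the crng. */
  static void get_bytes(void *buf, size_t len)
  {
          unsigned char *p = buf;

          while (len--)
                  *p++ = (unsigned char)rand();
  }

  /*
   * One definition expanded per type, instead of two hand-written copies
   * that must be kept in sync by hand.
   */
  #define DEFINE_BATCHED_ENTROPY(type)                                    \
  static struct {                                                         \
          type entropy[64 / sizeof(type)];                                \
          unsigned int position;                                          \
  } batch_##type;                                                         \
                                                                          \
  static type get_random_##type(void)                                     \
  {                                                                       \
          unsigned int slots = 64 / sizeof(type);                         \
                                                                          \
          if (batch_##type.position == 0 ||                               \
              batch_##type.position >= slots) {                           \
                  get_bytes(batch_##type.entropy,                         \
                            sizeof(batch_##type.entropy));                \
                  batch_##type.position = 0;                              \
          }                                                               \
          return batch_##type.entropy[batch_##type.position++];           \
  }

  DEFINE_BATCHED_ENTROPY(uint32_t)
  DEFINE_BATCHED_ENTROPY(uint64_t)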
This commit factors them out into a type-generic macro, so that the expansion produces the same code as before, such that diffing the assembly shows no differences. This will also make it easier in the future to add u16 and u8 batches. This was initially tested using an always_inline function and letting gcc constant fold the type size in, but the code gen was less efficient, and in general it was more verbose and harder to follow. So this patch goes with the boring macro solution, similar to what's already done for the _wait functions in random.h. Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 463ebd6f8707e57847680541a05c910947a2518a Author: Jason A. Donenfeld Date: Sat May 14 13:59:30 2022 +0200 random: move randomize_page() into mm where it belongs commit 5ad7dd882e45d7fe432c32e896e2aaa0b21746ea upstream. randomize_page is an mm function. It is documented like one. It contains the history of one. It has the naming convention of one. It looks just like another very similar function in mm, randomize_stack_top(). And it has always been maintained and updated by mm people. There is no need for it to be in random.c. In the "which shape does not look like the other ones" test, pointing to randomize_page() is correct. So move randomize_page() into mm/util.c, right next to the similar randomize_stack_top() function. This commit contains no actual code changes. Cc: Andrew Morton Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4f276d40e6032bcc19b341185e335dee65a431aa Author: Jason A. Donenfeld Date: Fri May 13 16:17:12 2022 +0200 random: move initialization functions out of hot pages commit 560181c27b582557d633ecb608110075433383af upstream. Much of random.c is devoted to initializing the rng and accounting for when a sufficient amount of entropy has been added. In a perfect world, this would all happen during init, and so we could mark these functions as __init. But in reality, this isn't the case: sometimes the rng only finishes initializing some seconds after system init is finished. For this reason, at the moment, a whole host of functions that are only used relatively close to system init and then never again are intermixed with functions that are used in hot code all the time. This creates more cache misses than necessary. In order to pack the hot code closer together, this commit moves the initialization functions that can't be marked as __init into .text.unlikely by way of the __cold attribute. Of particular note is moving credit_init_bits() into a macro wrapper that inlines the crng_ready() static branch check. This avoids a function call to a nop+ret, and most notably prevents extra entropy arithmetic from being computed in mix_interrupt_randomness(). Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 6fd3ff02dadcf6eacf6efd2e0840e75506d15905 Author: Jason A. Donenfeld Date: Fri May 13 13:18:46 2022 +0200 random: make consistent use of buf and len commit a19402634c435a4eae226df53c141cdbb9922e7b upstream. The current code was a mix of "nbytes", "count", "size", "buffer", "in", and so forth. Instead, let's clean this up by naming input parameters "buf" (or "ubuf") and "len", so that you always understand that you're reading this variety of function argument. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit da71481fefa21c7511244b66d000590704bd21ba Author: Jason A. 
Donenfeld Date: Fri May 13 12:32:23 2022 +0200 random: use proper return types on get_random_{int,long}_wait() commit 7c3a8a1db5e03d02cc0abb3357a84b8b326dfac3 upstream. Before, these were returning signed values, but the API is intended to be used with unsigned values. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4df97cbda270c1254fd866a9ec80bbd4bab868a8 Author: Jason A. Donenfeld Date: Fri May 13 12:29:38 2022 +0200 random: remove extern from functions in header commit 7782cfeca7d420e8bb707613d4cfb0f7ff29bb3a upstream. According to the kernel style guide, having `extern` on functions in headers is old school and deprecated, and doesn't add anything. So remove them from random.h, and tidy up the file a little bit too. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 05daffcd83dae217daedfa17577bbf8dc91f48fd Author: Jason A. Donenfeld Date: Tue May 3 15:30:45 2022 +0200 random: use static branch for crng_ready() commit f5bda35fba615ace70a656d4700423fa6c9bebee upstream. Since crng_ready() is only false briefly during initialization and then forever after becomes true, we don't need to evaluate it after, making it a prime candidate for a static branch. One complication, however, is that it changes state in a particular call to credit_init_bits(), which might be made from atomic context, which means we must kick off a workqueue to change the static key. Further complicating things, credit_init_bits() may be called sufficiently early on in system initialization such that system_wq is NULL. Fortunately, there exists the nice function execute_in_process_context(), which will immediately execute the function if !in_interrupt(), and otherwise defer it to a workqueue. During early init, before workqueues are available, in_interrupt() is always false, because interrupts haven't even been enabled yet, which means the function in that case executes immediately. Later on, after workqueues are available, in_interrupt() might be true, but in that case, the work is queued in system_wq and all goes well. Cc: Theodore Ts'o Cc: Sultan Alsawaf Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 91657c81fc3d0401189060acf54cddc201464645 Author: Jason A. Donenfeld Date: Thu May 12 15:32:26 2022 +0200 random: credit architectural init the exact amount commit 12e45a2a6308105469968951e6d563e8f4fea187 upstream. RDRAND and RDSEED can fail sometimes, which is fine. We currently initialize the RNG with 512 bits of RDRAND/RDSEED. We only need 256 bits of those to succeed in order to initialize the RNG. Instead of the current "all or nothing" approach, credit these contributions the amount that is actually contributed. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d9394ac04eeb2d60518566e78835e952ed395166 Author: Jason A. Donenfeld Date: Thu May 5 02:20:22 2022 +0200 random: handle latent entropy and command line from random_init() commit 2f14062bb14b0fcfcc21e6dc7d5b5c0d25966164 upstream. Currently, start_kernel() adds latent entropy and the command line to the entropy pool *after* the RNG has been initialized, deferring when it's actually used by things like stack canaries until the next time the pool is seeded. This surely is not intended. Rather than splitting up which entropy gets added where and when between start_kernel() and random_init(), just do everything in random_init(), which should eliminate these kinds of bugs in the future.
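The consolidation idea can be sketched as a single init routine that folds every boot-time input (latent entropy, the command line, a cycle counter) into the pool in one place, before anything draws from it. The helpers and the toy mixing step below are illustrative only; they are not the kernel's random_init() or its BLAKE2s-based pool.

  #include <stdint.h>
  #include <string.h>
  #include <time.h>

  static unsigned char pool[64];

  /* Toy mixer; the real pool uses BLAKE2s, this is only for illustration. */
  static void mix_pool_bytes_sketch(const void *buf, size_t len)
  {
          const unsigned char *p = buf;
          size_t i;

          for (i = 0; i < len; i++)
                  pool[i % sizeof(pool)] =
                          (unsigned char)(pool[i % sizeof(pool)] * 131 + p[i]);
  }

  static uint64_t cycle_counter_sketch(void)
  {
          struct timespec ts;

          clock_gettime(CLOCK_MONOTONIC, &ts);
          return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
  }

  /* Every boot-time source is mixed in here, in one place, before first use. */
  static void random_init_sketch(const char *command_line,
                                 const void *latent_entropy, size_t latent_len)
  {
          uint64_t t = cycle_counter_sketch();

          mix_pool_bytes_sketch(&t, sizeof(t));
          mix_pool_bytes_sketch(latent_entropy, latent_len);
          mix_pool_bytes_sketch(command_line, strlen(command_line));
  }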
While we're at it, rename the awkwardly titled "rand_initialize()" to the more standard "random_init()" nomenclature. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 68f784cbb1757f82061a74c3d22b6781348ba04d Author: Jason A. Donenfeld Date: Tue May 10 15:20:42 2022 +0200 random: use proper jiffies comparison macro commit 8a5b8a4a4ceb353b4dd5bafd09e2b15751bcdb51 upstream. This expands to exactly the same code that it replaces, but makes things consistent by using the same macro for jiffy comparisons throughout. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 443a7b15c858ec799ae7ba5146316c4d30c778d6 Author: Jason A. Donenfeld Date: Mon May 9 16:13:18 2022 +0200 random: remove ratelimiting for in-kernel unseeded randomness commit cc1e127bfa95b5fb2f9307e7168bf8b2b45b4c5e upstream. The CONFIG_WARN_ALL_UNSEEDED_RANDOM debug option controls whether the kernel warns about all unseeded randomness or just the first instance. There's some complicated rate limiting and comparison to the previous caller, such that even with CONFIG_WARN_ALL_UNSEEDED_RANDOM enabled, developers still don't see all the messages or even an accurate count of how many were missed. This is the result of basically parallel mechanisms aimed at accomplishing more or less the same thing, added at different points in random.c history, which sort of compete with the first-instance-only limiting we have now. It turns out, however, that nobody cares about the first unseeded randomness instance of in-kernel users. The same first user has been there for ages now, and nobody is doing anything about it. It isn't even clear that anybody _can_ do anything about it. Most places that can do something about it have switched over to using get_random_bytes_wait() or wait_for_random_bytes(), which is the right thing to do, but there is still much code that needs randomness sometimes during init, and as a general rule, if you're not using one of the _wait functions or the readiness notifier callback, you're bound to be doing it wrong just based on that fact alone. So warning about this same first user that can't easily change is simply not an effective mechanism for anything at all. Users can't do anything about it, as the Kconfig text points out -- the problem isn't in userspace code -- and kernel developers don't or more often can't react to it. Instead, show the warning for all instances when CONFIG_WARN_ALL_UNSEEDED_RANDOM is set, so that developers can debug things as need be, or if it isn't set, don't show a warning at all. At the same time, CONFIG_WARN_ALL_UNSEEDED_RANDOM now implies setting random.ratelimit_disable=1 by default, since if you care about one you probably care about the other too. And we can clean up usage around the related urandom_warning ratelimiter as well (whose behavior isn't changing), so that it properly counts missed messages after the 10 message threshold is reached. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 4d77c5dc80264c0b9d9d4799d2afa8024e23a64f Author: Jason A. Donenfeld Date: Mon May 9 13:53:24 2022 +0200 random: move initialization out of reseeding hot path commit 68c9c8b192c6dae9be6278e98ee44029d5da2d31 upstream. Initialization happens once -- by way of credit_init_bits() -- and then it never happens again. Therefore, it doesn't need to be in crng_reseed(), which is a hot path that is called multiple times.
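For reference, the wraparound-safe comparison that the "use proper jiffies comparison macro" entry above standardizes on works by doing the subtraction in unsigned arithmetic and reading the result as signed. A simplified userspace rendering, not the kernel's exact macro:

  #include <stdio.h>

  /* Simplified analogue of the kernel's time_after(): true if a is later
   * than b, correct even when the counter has wrapped around. */
  #define time_after_sketch(a, b) ((long)((b) - (a)) < 0)

  int main(void)
  {
          unsigned long b = (unsigned long)-100;  /* just before wraparound */
          unsigned long a = b + 200;              /* 200 ticks later, wrapped */

          /* A naive comparison gets this wrong; the macro gets it right. */
          printf("a > b:            %d\n", a > b);                   /* 0 */
          printf("time_after(a, b): %d\n", time_after_sketch(a, b)); /* 1 */
          return 0;
  }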
It also doesn't make sense to have it there, as initialization activity is better associated with initialization routines. After the prior commit, crng_reseed() now won't be called by multiple concurrent callers, which means that we can safely move the "finalize_init" logic into credit_init_bits() unconditionally. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit e84123f90491618c6d0095495502e566bc5b41c4 Author: Jason A. Donenfeld Date: Mon May 9 13:40:55 2022 +0200 random: avoid initializing twice in credit race commit fed7ef061686cc813b1f3d8d0edc6c35b4d3537b upstream. Since all changes of crng_init now go through credit_init_bits(), we can fix a long-standing race in which two concurrent callers of credit_init_bits() have the new bit count >= some threshold, but are doing so with crng_init as a lower threshold, checked outside of a lock, resulting in crng_reseed() or similar being called twice. In order to fix this, we can use the original cmpxchg value of the bit count, and only change crng_init when the bit count transitions from below a threshold to meeting the threshold. Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a199b1cf0d53b2da8994b084cd25a84f26cb462a Author: Jason A. Donenfeld Date: Sun May 8 13:20:30 2022 +0200 random: use symbolic constants for crng_init states commit e3d2c5e79a999aa4e7d6f0127e16d3da5a4ff70d upstream. crng_init represents a state machine, with three states, and various rules for transitions. For the longest time, we've been managing these with "0", "1", and "2", and expecting people to figure it out. To make the code more obvious, replace these with proper enum values representing the transition, and then redocument what each of these states means. Reviewed-by: Dominik Brodowski Cc: Joe Perches Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 65419736ad67ae4d42647eecdde4f71770dbf50c Author: Jason A. Donenfeld Date: Sat May 7 14:03:46 2022 +0200 siphash: use one source of truth for siphash permutations commit e73aaae2fa9024832e1f42e30c787c7baf61d014 upstream. The SipHash family of permutations is currently used in three places: - siphash.c itself, used in the ordinary way it was intended. - random32.c, in a construction from an anonymous contributor. - random.c, as part of its fast_mix function. Each one of these places reinvents the wheel with the same C code, same rotation constants, and same symmetry-breaking constants. This commit tidies things up a bit by placing macros for the permutations and constants into siphash.h, where each of the three .c users can access them. It also leaves a note dissuading more users of them from emerging. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3cac6963de9f3014c9b980caede71f2344d47daa Author: Jason A. Donenfeld Date: Fri May 6 23:19:43 2022 +0200 random: help compiler out with fast_mix() by using simpler arguments commit 791332b3cbb080510954a4c152ce02af8832eac9 upstream. Now that fast_mix() has more than one caller, gcc no longer inlines it. That's fine. But it also doesn't handle the compound literal argument we pass it very efficiently, nor does it handle the loop as well as it could. So just expand the code to spell out this function so that it generates the same code as it did before. Performance-wise, this now behaves as it did before the last commit. The difference in actual code size on x86 is 45 bytes, which is less than a cache line.
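The race fix in the "avoid initializing twice in credit race" entry above relies on a classic compare-and-swap pattern: only the caller whose successful update carried the count across the threshold performs the one-time work. A standalone C11 sketch of that pattern, with names and threshold invented for the example rather than taken from the kernel:

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  #define INIT_THRESHOLD 256

  static _Atomic unsigned int init_bits;

  /* Returns true only for the single caller that crosses the threshold. */
  static bool credit_bits_sketch(unsigned int nbits)
  {
          unsigned int orig, new;

          orig = atomic_load(&init_bits);
          do {
                  new = orig + nbits;
                  if (new > INIT_THRESHOLD)
                          new = INIT_THRESHOLD;
          } while (!atomic_compare_exchange_weak(&init_bits, &orig, new));

          /* The transition check uses the value we actually swapped from. */
          return orig < INIT_THRESHOLD && new >= INIT_THRESHOLD;
  }

  int main(void)
  {
          for (int i = 0; i < 10; i++)
                  if (credit_bits_sketch(64))
                          printf("initialized on credit #%d\n", i + 1);
          return 0;
  }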
Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 1e34e244a3727c13c7f8858c42c3d82bc3d1d818 Author: Jason A. Donenfeld Date: Fri May 6 18:30:51 2022 +0200 random: do not use input pool from hard IRQs commit e3e33fc2ea7fcefd0d761db9d6219f83b4248f5c upstream. Years ago, a separate fast pool was added for interrupts, so that the cost associated with taking the input pool spinlocks and mixing into it would be avoided in places where latency is critical. However, one oversight was that add_input_randomness() and add_disk_randomness() still sometimes are called directly from the interrupt handler, rather than being deferred to a thread. This means that some unlucky interrupts will be caught doing a blake2s_compress() call and potentially spinning on input_pool.lock, which can also be taken by unprivileged users by writing into /dev/urandom. In order to fix this, add_timer_randomness() now checks whether it is being called from a hard IRQ and if so, just mixes into the per-cpu IRQ fast pool using fast_mix(), which is much faster and can be done lock-free. A nice consequence of this, as well, is that it means hard IRQ context FPU support is likely no longer useful. The entropy estimation algorithm used by add_timer_randomness() is also somewhat different than the one used for add_interrupt_randomness(). The former looks at deltas of deltas of deltas, while the latter just waits for 64 interrupts for one bit or for one second since the last bit. In order to bridge these, and since add_interrupt_randomness() runs after an add_timer_randomness() that's called from hard IRQ, we add to the fast pool credit the related amount, and then subtract one to account for add_interrupt_randomness()'s contribution. A downside of this, however, is that the num argument is potentially attacker controlled, which puts a bit more pressure on the fast_mix() sponge to do more than it's really intended to do. As a mitigating factor, the first 96 bits of input aren't attacker controlled (a cycle counter followed by zeros), which means it's essentially two rounds of siphash rather than one, which is somewhat better. It's also not that much different from add_interrupt_randomness()'s use of the irq stack instruction pointer register. Cc: Thomas Gleixner Cc: Filipe Manana Cc: Peter Zijlstra Cc: Borislav Petkov Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 157edd57d51f5945e8aa207dc64cc10560ffc582 Author: Jason A. Donenfeld Date: Fri May 6 18:27:38 2022 +0200 random: order timer entropy functions below interrupt functions commit a4b5c26b79ffdfcfb816c198f2fc2b1e7b5b580f upstream. There are no code changes here; this is just a reordering of functions, so that in subsequent commits, the timer entropy functions can call into the interrupt ones. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 0e4944c9c11ebb207355d6edab0ac5b5a958e9a6 Author: Jason A. Donenfeld Date: Sat Apr 30 22:03:29 2022 +0200 random: do not pretend to handle premature next security model commit e85c0fc1d94c52483a603651748d4c76d6aa1c6b upstream. Per the thread linked below, "premature next" is not considered to be a realistic threat model, and leads to more serious security problems. "Premature next" is the scenario in which: - Attacker compromises the current state of a fully initialized RNG via some kind of infoleak. - New bits of entropy are added directly to the key used to generate the /dev/urandom stream, without any buffering or pooling. 
- Attacker then, somehow having read access to /dev/urandom, samples RNG output and brute forces the individual new bits that were added. - Result: the RNG never "recovers" from the initial compromise, a so-called violation of what academics term "post-compromise security". The usual solutions to this involve some form of delaying when entropy gets mixed into the crng. With Fortuna, this involves multiple input buckets. With what the Linux RNG was trying to do prior, this involves entropy estimation. However, by delaying when entropy gets mixed in, it also means that RNG compromises are extremely dangerous during the window of time before the RNG has gathered enough entropy, during which time nonces may become predictable (or repeated), ephemeral keys may not be secret, and so forth. Moreover, it's unclear how realistic "premature next" is from an attack perspective, if these attacks even make sense in practice. Put together -- and discussed in more detail in the thread below -- these constitute grounds for just doing away with the current code that pretends to handle premature next. I say "pretends" because it wasn't doing an especially great job at it either; should we change our mind about this direction, we would probably implement Fortuna to "fix" the "problem", in which case, removing the pretend solution still makes sense. This also reduces the crng reseed period from 5 minutes down to 1 minute. The rationale from the thread might lead us toward reducing that even further in the future (or even eliminating it), but that remains a topic of a future commit. At a high level, this patch changes semantics from: Before: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every five minutes, but only if 256 new "bits" have been accumulated since the last reseeding. After: Seed for the first time after 256 "bits" of estimated entropy have been accumulated since the system booted. Thereafter, reseed once every minute. Most of this patch is renaming and removing: POOL_MIN_BITS becomes POOL_INIT_BITS, credit_entropy_bits() becomes credit_init_bits(), crng_reseed() loses its "force" parameter since it's now always true, the drain_entropy() function no longer has any use so it's removed, entropy estimation is skipped if we've already init'd, the various notifiers for "low on entropy" are now only active prior to init, and finally, some documentation comments are cleaned up here and there. Link: https://lore.kernel.org/lkml/YmlMGx6+uigkGiZ0@zx2c4.com/ Cc: Theodore Ts'o Cc: Nadia Heninger Cc: Tom Ristenpart Reviewed-by: Eric Biggers Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7176e96d798807e536bf4e76b8767a5e94a301f6 Author: Jason A. Donenfeld Date: Sat Apr 30 15:08:20 2022 +0200 random: use first 128 bits of input as fast init commit 5c3b747ef54fa2a7318776777f6044540d99f721 upstream. Before, the first 64 bytes of input, regardless of how entropic it was, would be used to mutate the crng base key directly, and none of those bytes would be credited as having entropy. Then 256 bits of credited input would be accumulated, and only then would the rng transition from the earlier "fast init" phase into being actually initialized. The thinking was that by mixing and matching fast init and real init, an attacker who compromised the fast init state, considered easy to do given how little entropy might be in those first 64 bytes, would then be able to bruteforce bits from the actual initialization. 
By keeping these separate, bruteforcing became impossible. However, by not crediting potentially creditable bits from those first 64 bytes of input, we delay initialization, and actually make the problem worse, because it means the user is drawing worse random numbers for a longer period of time. Instead, we can take the first 128 bits as fast init, and allow them to be credited, and then hold off on the next 128 bits until they've accumulated. This is still a wide enough margin to prevent bruteforcing the rng state, while still initializing much faster. Then, rather than trying to piecemeal inject into the base crng key at various points, instead just extract from the pool when we need it, for the crng_init==0 phase. Performance may even be better for the various inputs here, since there are likely more calls to mix_pool_bytes() than there are to get_random_bytes() during this phase of system execution. Since the preinit injection code is gone, bootloader randomness can then do something significantly more straightforward, removing the weird system_wq hack in hwgenerator randomness. Cc: Theodore Ts'o Cc: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit f0cb03957175452e8dfc18f3f7d8dd2698553bd4 Author: Jason A. Donenfeld Date: Tue May 3 14:14:32 2022 +0200 random: do not use batches when !crng_ready() commit cbe89e5a375a51bbb952929b93fa973416fea74e upstream. It's too hard to keep the batches synchronized, and pointless anyway, since in !crng_ready(), we're updating the base_crng key really often, where batching only hurts. So instead, if the crng isn't ready, just call into get_random_bytes(). At this stage nothing is performance critical anyhow. Cc: Theodore Ts'o Reviewed-by: Dominik Brodowski Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 8f4d6e33b53405de6696a92caea2d71a5b7c15e0 Author: Jason A. Donenfeld Date: Tue Apr 12 19:59:57 2022 +0200 random: insist on random_get_entropy() existing in order to simplify commit 4b758eda851eb9336ca86a0041a4d3da55f66511 upstream. All platforms are now guaranteed to provide some value for random_get_entropy(). In case some bug leads to this not being so, we print a warning, because that indicates that something is really very wrong (and likely other things are impacted too). This should never be hit, but it's a good and cheap way of finding out if something ever is problematic. Since we now have viable fallback code for random_get_entropy() on all platforms, which is, in the worst case, not worse than jiffies, we can count on getting the best possible value out of it. That means there's no longer a use for using jiffies as entropy input. It also means we no longer have a reason for doing the round-robin register flow in the IRQ handler, which was always of fairly dubious value. Instead we can greatly simplify the IRQ handler inputs and also unify the construction between 64-bits and 32-bits. We now collect the cycle counter and the return address, since those are the two things that matter. Because the return address and the irq number are likely related, to the extent we mix in the irq number, we can just xor it into the top unchanging bytes of the return address, rather than the bottom changing bytes of the cycle counter as before. Then, we can do a fixed 2 rounds of SipHash/HSipHash. Finally, we use the same construction of hashing only half of the [H]SipHash state on 32-bit and 64-bit.
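The simplified construction described above can be pictured as: feed the cycle counter and the irq-tagged return address into a small SipHash-style state, run a fixed two rounds, and expose only half of the state as output. The sketch below uses the standard SipHash permutation but is only an illustration of the shape of the construction; the struct, the irq tagging shift, and the function names are not the kernel's actual fast_mix().

  #include <stdint.h>

  #define ROTL64(x, b) (((x) << (b)) | ((x) >> (64 - (b))))

  /* One round of the standard 64-bit SipHash permutation. */
  #define SIPROUND(v0, v1, v2, v3) do {                                   \
          v0 += v1; v1 = ROTL64(v1, 13); v1 ^= v0; v0 = ROTL64(v0, 32);   \
          v2 += v3; v3 = ROTL64(v3, 16); v3 ^= v2;                        \
          v0 += v3; v3 = ROTL64(v3, 21); v3 ^= v0;                        \
          v2 += v1; v1 = ROTL64(v1, 17); v1 ^= v2; v2 = ROTL64(v2, 32);   \
  } while (0)

  struct fast_pool_sketch {
          uint64_t s[4];
  };

  /*
   * Absorb two words (the cycle counter, and the return address with the
   * irq number folded into its high bits), permuting once per word.
   * Illustrative only.
   */
  static void fast_mix_sketch(struct fast_pool_sketch *p,
                              uint64_t cycles, uint64_t ret_ip, int irq)
  {
          uint64_t in0 = cycles;
          uint64_t in1 = ret_ip ^ ((uint64_t)(unsigned int)irq << 48);

          p->s[3] ^= in0;
          SIPROUND(p->s[0], p->s[1], p->s[2], p->s[3]);
          p->s[0] ^= in0;
          p->s[3] ^= in1;
          SIPROUND(p->s[0], p->s[1], p->s[2], p->s[3]);
          p->s[0] ^= in1;
  }

  /* "Hash only half of the state": expose two of the four words. */
  static void fast_pool_output_sketch(const struct fast_pool_sketch *p,
                                      uint64_t out[2])
  {
          out[0] = p->s[0];
          out[1] = p->s[1];
  }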
We're not actually discarding any entropy, since that entropy is carried through until the next time. And more importantly, it lets us do the same sponge-like construction everywhere. Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7ebcfba533435b50a80096064e374a3cc08aabbb Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 xtensa: use fallback for random_get_entropy() instead of zero commit e10e2f58030c5c211d49042a8c2a1b93d40b2ffb upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Max Filippov Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 32a177ccb3884717251e2394750f283ef3b6c916 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 sparc: use fallback for random_get_entropy() instead of zero commit ac9756c79797bb98972736b13cfb239fd2cffb79 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: David S. Miller Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d161ede79be763eabaa2a617eae0c9892f3869b6 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 um: use fallback for random_get_entropy() instead of zero commit 9f13fb0cd11ed2327abff69f6501a2c124c88b5a upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. This is accomplished by just including the asm-generic code like on other architectures, which means we can get rid of the empty stub function here. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Richard Weinberger Cc: Anton Ivanov Acked-by: Johannes Berg Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit c37981032472f827d920dd37d59b9c248fbe0b8c Author: Jason A. 
Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 x86/tsc: Use fallback for random_get_entropy() instead of zero commit 3bd4abc07a267e6a8b33d7f8717136e18f921c53 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is suboptimal. Instead, fallback to calling random_get_entropy_fallback(), which isn't extremely high precision or guaranteed to be entropic, but is certainly better than returning zero all the time. If CONFIG_X86_TSC=n, then it's possible for the kernel to run on systems without RDTSC, such as 486 and certain 586, so the fallback code is only required for that case. As well, fix up both the new function and the get_cycles() function from which it was derived to use cpu_feature_enabled() rather than boot_cpu_has(), and use !IS_ENABLED() instead of #ifndef. Signed-off-by: Jason A. Donenfeld Reviewed-by: Thomas Gleixner Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Borislav Petkov Cc: x86@kernel.org Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit cca2a7ad060e5abae5d12f4b58de64679d59fa81 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 nios2: use fallback for random_get_entropy() instead of zero commit c04e72700f2293013dab40208e809369378f224c upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Dinh Nguyen Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a9ebafbe00f5086a8149d2b9eebe6033dce509d5 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 arm: use fallback for random_get_entropy() instead of zero commit ff8a8f59c99f6a7c656387addc4d9f2247d75077 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Reviewed-by: Russell King (Oracle) Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9d3ee85be180b174fb3a28668e067d37c93c6ae5 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 mips: use fallback for random_get_entropy() instead of just c0 random commit 1c99c6a7c3c599a68321b01b9ec243215ede5a68 upstream. For situations in which we don't have a c0 counter register available, we've been falling back to reading the c0 "random" register, which is usually bounded by the amount of TLB entries and changes every other cycle or so. This means it wraps extremely often. We can do better by combining this fast-changing counter with a potentially slower-changing counter from random_get_entropy_fallback() in the more significant bits. 
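The combination described in the mips entry above places the slower, coarser fallback value in the more significant bits and the rapidly wrapping hardware counter in the low bits, so the composite keeps changing even after the small counter wraps. A generic illustration; the bit width and helper names are made up for the example and do not match the actual mips code:

  #include <stdint.h>

  #define FAST_BITS 6   /* pretend the fast counter is only this wide */

  /* Stand-in for a jiffies-like, slowly advancing fallback clock. */
  static uint64_t fallback_counter_sketch(void)
  {
          static uint64_t ticks;

          return ticks++;
  }

  /* Stand-in for the small, constantly wrapping hardware counter. */
  static uint64_t fast_wrapping_counter_sketch(void)
  {
          static uint64_t n;

          return n += 7;
  }

  static uint64_t random_get_entropy_sketch(void)
  {
          uint64_t slow = fallback_counter_sketch();
          uint64_t fast = fast_wrapping_counter_sketch() &
                          ((1ull << FAST_BITS) - 1);

          /* Slow value occupies the more significant bits, fast the rest. */
          return (slow << FAST_BITS) | fast;
  }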
This commit combines the two, taking into account that the changing bits are in a different bit position depending on the CPU model. In addition, we previously were falling back to 0 for ancient CPUs that Linux does not support anyway; remove that dead path entirely. Cc: Thomas Gleixner Cc: Arnd Bergmann Tested-by: Maciej W. Rozycki Acked-by: Thomas Bogendoerfer Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit f4e9fe58d4af645cf92a889d54a842ff79969a53 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 riscv: use fallback for random_get_entropy() instead of zero commit 6d01238623faa9425f820353d2066baf6c9dc872 upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Paul Walmsley Acked-by: Palmer Dabbelt Reviewed-by: Palmer Dabbelt Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 955a870ebdf691c84baa09ea27fd0e80135d0ad7 Author: Jason A. Donenfeld Date: Fri Apr 8 18:03:13 2022 +0200 m68k: use fallback for random_get_entropy() instead of zero commit 0f392c95391f2d708b12971a07edaa7973f9eece upstream. In the event that random_get_entropy() can't access a cycle counter or similar, falling back to returning 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Geert Uytterhoeven Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 7a5a2e2ccfd9496dcc9290c2e8799da612270607 Author: Jason A. Donenfeld Date: Sun Apr 10 16:49:50 2022 +0200 timekeeping: Add raw clock fallback for random_get_entropy() commit 1366992e16bddd5e2d9a561687f367f9f802e2e4 upstream. The addition of random_get_entropy_fallback() provides access to whichever time source has the highest frequency, which is useful for gathering entropy on platforms without available cycle counters. It's not necessarily as good as being able to quickly access a cycle counter that the CPU has, but it's still something, even when it falls back to being jiffies-based. In the event that a given arch does not define get_cycles(), falling back to the get_cycles() default implementation that returns 0 is really not the best we can do. Instead, at least calling random_get_entropy_fallback() would be preferable, because that always needs to return _something_, even falling back to jiffies eventually. It's not as though random_get_entropy_fallback() is super high precision or guaranteed to be entropic, but basically anything that's not zero all the time is better than returning zero all the time. Finally, since random_get_entropy_fallback() is used during extremely early boot when randomizing freelists in mm_init(), it can be called before timekeeping has been initialized. 
In that case there really is nothing we can do; jiffies hasn't even started ticking yet. So just give up and return 0. Suggested-by: Thomas Gleixner Signed-off-by: Jason A. Donenfeld Reviewed-by: Thomas Gleixner Cc: Arnd Bergmann Cc: Theodore Ts'o Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 62d1c104c7f6ee38d4f2717b1539a0dd54f71ff9 Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 powerpc: define get_cycles macro for arch-override commit 408835832158df0357e18e96da7f2d1ed6b80e7f upstream. PowerPC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Benjamin Herrenschmidt Cc: Paul Mackerras Acked-by: Michael Ellerman Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit d0d24c89d88505011b09eec0dec43bec1c617bef Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 alpha: define get_cycles macro for arch-override commit 1097710bc9660e1e588cf2186a35db3d95c4d258 upstream. Alpha defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Richard Henderson Cc: Ivan Kokshaysky Acked-by: Matt Turner Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9a51867dc50cdc4df16200d8897bced7f243aaab Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 parisc: define get_cycles macro for arch-override commit 8865bbe6ba1120e67f72201b7003a16202cd42be upstream. PA-RISC defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Acked-by: Helge Deller Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit f11c51290d0f854f83632d79a03af9605ca2c32a Author: Jason A. Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 s390: define get_cycles macro for arch-override commit 2e3df523256cb9836de8441e9c791a796759bb3c upstream. S390x defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Cc: Vasily Gorbik Cc: Alexander Gordeev Cc: Christian Borntraeger Cc: Sven Schnelle Acked-by: Heiko Carstens Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 9d923d15fc4b99d55a4cf0f8b1336ed7b8af218e Author: Jason A. 
Donenfeld Date: Sat Apr 23 21:11:41 2022 +0200 ia64: define get_cycles macro for arch-override commit 57c0900b91d8891ab43f0e6b464d059fda51d102 upstream. Itanium defines a get_cycles() function, but it does not do the usual `#define get_cycles get_cycles` dance, making it impossible for generic code to see if an arch-specific function was defined. While the get_cycles() ifdef is not currently used, the following timekeeping patch in this series will depend on the macro existing (or not existing) when defining random_get_entropy(). Cc: Thomas Gleixner Cc: Arnd Bergmann Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit a27dffcb938836affc211846a44e6dfa306582f7 Author: Jason A. Donenfeld Date: Thu May 5 02:20:22 2022 +0200 init: call time_init() before rand_initialize() commit fe222a6ca2d53c38433cba5d3be62a39099e708e upstream. Currently time_init() is called after rand_initialize(), but rand_initialize() makes use of the timer on various platforms, and sometimes this timer needs to be initialized by time_init() first. In order for random_get_entropy() to not return zero during early boot when it's potentially used as an entropy source, reverse the order of these two calls. The block doing random initialization was right before time_init() before, so changing the order shouldn't have any complicated effects. Cc: Andrew Morton Reviewed-by: Stafford Horne Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 3162bd8ac0530d5488a6e861f91871a2002ae416 Author: Jason A. Donenfeld Date: Tue May 3 21:43:58 2022 +0200 random: fix sysctl documentation nits commit 069c4ea6871c18bd368f27756e0f91ffb524a788 upstream. A semicolon was missing, and the almost-alphabetical-but-not ordering was confusing, so regroup these by category instead. Signed-off-by: Jason A. Donenfeld Signed-off-by: Greg Kroah-Hartman commit 8c37b3bc825d4ce567d01533782224cdec8337a5 Author: Basavaraj Natikar Date: Mon May 9 18:50:20 2022 +0530 HID: amd_sfh: Add support for sensor discovery commit b5d7f43e97dabfa04a4be5ff027ce7da119332be upstream. Sensor discovery fails in case of broken sensors or an unsupported platform. Hence disable the driver on failure of sensor discovery. Signed-off-by: Mario Limonciello Signed-off-by: Basavaraj Natikar Signed-off-by: Jiri Kosina Cc: Mario Limonciello Signed-off-by: Greg Kroah-Hartman commit eca56bf0066ef2f1e7be0e3fa7564b85a309872c Author: Daniel Thompson Date: Mon May 23 19:11:02 2022 +0100 lockdown: also lock down previous kgdb use commit eadb2f47a3ced5c64b23b90fd2a3463f63726066 upstream. KGDB and KDB allow read and write access to kernel memory, and thus should be restricted during lockdown. An attacker with access to a serial port (for example, via a hypervisor console, which some cloud vendors provide over the network) could trigger the debugger, so it is important that the debugger respect the lockdown mode when/if it is triggered. Fix this by integrating lockdown into kdb's existing permissions mechanism. Unfortunately kgdb does not have any permissions mechanism (although it certainly could be added later) so, for now, kgdb is simply and brutally disabled by immediately exiting the gdb stub without taking any action. For lockdowns established early in the boot (e.g. the normal case) this should be fine, but on systems where kgdb has set breakpoints before the lockdown is enacted, "bad things" will happen.
CVE: CVE-2022-21499 Co-developed-by: Stephen Brennan Signed-off-by: Stephen Brennan Reviewed-by: Douglas Anderson Signed-off-by: Daniel Thompson Signed-off-by: Linus Torvalds Signed-off-by: Greg Kroah-Hartman