ChangeSet@1.1547, 2004-12-17 23:11:00-02:00, solar@openwall.com
  [PATCH] Fix booting off USB CD-ROMs (do_mounts.c)

  Fix for booting off USB CD-ROMs (to do_mounts.c).

ChangeSet@1.1546, 2004-12-17 22:55:57-02:00, marcelo@logos.cnet
  [PATCH] Backport v2.6 tty/ldisc locking fixes

ChangeSet@1.1545, 2004-12-17 22:17:50-02:00, akpm@osdl.org
  [PATCH] If the machine is full of unmapped clean pagecache pages the kernel won't even try to reclaim inodes

  > vm_vfs_scan_ratio:
  > ------------------
  > is what proportion of the VFS queues we will scan in one go.
  > A value of 6 for vm_vfs_scan_ratio implies that 1/6th of the
  > unused-inode, dentry and dquot caches will be freed during a
  > normal aging round.
  > Big fileservers (NFS, SMB etc.) probably want to set this
  > value to 3 or 2.
  >
  > The default value is 6.
  > =============================================================

  Thanks for the info - but doesn't increasing the value of
  vm_vfs_scan_ratio mean that less of the caches will be freed?

  Doing a few tests (on another test file system with 2 million or so
  files and 1GB of memory) running 'find $disk -type f': with
  vm_vfs_scan_ratio set to 6 (or 10), the first two column values for
  xfs_inode, linvfs_icache and dentry_cache in /proc/slabinfo reach
  about 900000 and stay around that value.  With vm_vfs_scan_ratio set
  to 1, each value still reaches 900000, but then falls to a few
  thousand, climbs back up to 900000, drops away again, and repeats.

  This still happens when I cat many large files (100MB) to /dev/null
  at the same time as running the find, i.e. the inode caches can still
  reach 90% of memory before being reclaimed (with vm_vfs_scan_ratio
  set to 1).

  If I stop the find process when the inode caches reach about 90% of
  memory, and then start cat'ing the large files, it appears the inode
  caches are never reclaimed (or at least not within the time it takes
  to cat 100GB of data to /dev/null) - is this expected behaviour?  It
  seems the inode cache has priority over cached file data.

  What triggers the 'normal aging round'?  Is it possible to trigger it
  earlier (at a lower memory usage), or to give a higher priority to
  cached data?

  From: Andrew Morton

  It does.  If the machine is full of unmapped clean pagecache pages
  the kernel won't even try to reclaim inodes.  This should help a bit:

ChangeSet@1.1544, 2004-12-17 22:02:43-02:00, hjl@lucon.org
  [PATCH] backport v2.6: Fix pty race condition

  http://bugme.osdl.org/show_bug.cgi?id=3894

  Created an attachment (id=4277)
   --> (http://bugme.osdl.org/attachment.cgi?id=4277&action=view)
  A patch for kernel 2.4.29

  This patch should apply to kernel 2.4.29.

ChangeSet@1.1543, 2004-12-17 21:45:58-02:00, chrisw@osdl.org
  [PATCH] Backport of 2.6 fix to insert_vm_struct to make it return an error rather than BUG().

  Backport of 2.6 fix to insert_vm_struct to make it return an error
  rather than BUG().  This eliminates a user-triggerable BUG() when the
  user created a large vma that overlapped with the arg pages during
  exec (could be triggered with a.out on i386 and x86_64, and with elf
  on ia64).

  Signed-off-by: Chris Wright
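The shape of the insert_vm_struct change in ChangeSet 1.1543 can be shown
with a small standalone sketch.  This is not the kernel code: the structures
are reduced to stand-ins, the validity test stands in for the real check of
the new vma against existing mappings, and the caller is named
setup_arg_pages only for illustration.  It shows the pattern named above:
the insert routine returns an error and the exec path checks it instead of
relying on BUG().

/*
 * Standalone sketch of the error-return pattern from ChangeSet 1.1543.
 * Not kernel code: structs and the validity check are stand-ins.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

struct vm_area_struct { unsigned long vm_start, vm_end; };
struct mm_struct      { unsigned long map_count; };

/* Before the fix this was effectively a void function that BUG()ed on a
 * bad vma; after the fix it reports failure to the caller instead. */
static int insert_vm_struct(struct mm_struct *mm, struct vm_area_struct *vma)
{
    if (vma->vm_start >= vma->vm_end)   /* stand-in for the real overlap check */
        return -ENOMEM;
    mm->map_count++;
    return 0;
}

static int setup_arg_pages(struct mm_struct *mm, struct vm_area_struct *vma)
{
    if (insert_vm_struct(mm, vma)) {    /* check the result, don't rely on BUG() */
        free(vma);                      /* kmem_cache_free() in the kernel */
        return -ENOMEM;
    }
    return 0;
}

int main(void)
{
    struct mm_struct mm = { 0 };
    struct vm_area_struct *vma = malloc(sizeof(*vma));

    if (!vma)
        return 1;
    vma->vm_start = 0x2000;
    vma->vm_end   = 0x1000;             /* deliberately bogus: exercises the error path */
    printf("setup_arg_pages: %d\n", setup_arg_pages(&mm, vma));
    return 0;
}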
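For the vm_vfs_scan_ratio discussion under ChangeSet 1.1545 above, the
arithmetic being described is simply that one aging round scans roughly
1/vm_vfs_scan_ratio of each VFS queue, so a ratio of 6 frees about a sixth
of the unused entries while a ratio of 1 covers the whole queue.  A minimal
userspace illustration, not the 2.4 shrinker code; the function name and the
900000 figure are taken from the discussion only as an example:

#include <stdio.h>

/* One aging round scans about nr_unused / vm_vfs_scan_ratio entries. */
static unsigned long entries_to_scan(unsigned long nr_unused,
                                     unsigned int vm_vfs_scan_ratio)
{
    return nr_unused / vm_vfs_scan_ratio;
}

int main(void)
{
    unsigned long nr_unused = 900000;   /* roughly the cached inode count reported above */

    printf("ratio 6 -> scan %lu per round\n", entries_to_scan(nr_unused, 6));  /* 150000 */
    printf("ratio 1 -> scan %lu per round\n", entries_to_scan(nr_unused, 1));  /* 900000 */
    return 0;
}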
ChangeSet@1.1542, 2004-12-17 16:30:45-02:00, geert@linux-m68k.org
  [PATCH] m68k: fix incorrect config comment in check_bugs()

  M68k: Fix incorrect config comment in check_bugs()

  Signed-off-by: Geert Uytterhoeven

ChangeSet@1.1541, 2004-12-16 16:26:55-02:00, marcelo@logos.cnet
  Solar Designer: Fix do_follow_link() comment

ChangeSet@1.1540, 2004-12-16 16:06:31-02:00, chrisw@osdl.org
  [PATCH] a.out: error check on set_brk

  It's possible for do_brk() to fail during set_brk() when exec'ing an
  a.out.  This was noted with Florian's a.out binary and overcommit set
  to 0.  Capture this error and terminate properly.

  Signed-off-by: Chris Wright
  Signed-off-by: Linus Torvalds

ChangeSet@1.1539, 2004-12-16 06:20:26-02:00, marcelo@logos.cnet
  Changed EXTRAVERSION to -pre2

  TAG: v2.4.29-pre2
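The a.out fix in ChangeSet 1.1540 above follows the same "capture the error
and bail out" pattern.  The sketch below is a standalone illustration, not
the real fs/binfmt_aout.c: do_brk() is replaced by a stub that can be forced
to fail (much as overcommit set to 0 can make it fail in practice), and the
"terminate properly" step is reduced to returning the error code.

#include <errno.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stub for do_brk(): returns the start address on success or a negative
 * errno-style value on failure (a simplification of the kernel interface). */
static long do_brk(unsigned long start, unsigned long len, int force_failure)
{
    (void)len;
    if (force_failure)
        return -ENOMEM;
    return (long)start;
}

static long set_brk(unsigned long start, unsigned long end, int force_failure)
{
    start = (start + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);   /* page-align, as the loader does */
    end   = (end   + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    if (end <= start)
        return (long)start;
    return do_brk(start, end - start, force_failure);
}

/* Before the fix the result was ignored; now the loader checks it and
 * aborts the exec (the real code terminates the process as described above). */
static int load_aout_binary(int force_failure)
{
    long retval = set_brk(0x100000, 0x180000, force_failure);

    if (retval < 0)
        return (int)retval;
    return 0;
}

int main(void)
{
    printf("normal load:    %d\n", load_aout_binary(0));   /* 0 */
    printf("do_brk() fails: %d\n", load_aout_binary(1));   /* -ENOMEM (-12) */
    return 0;
}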