squashfs: add optional full compressed block caching

Commit 93e72b3c61 ("squashfs: migrate from ll_rw_block usage
to BIO") removed caching of compressed blocks in SquashFS, causing a
performance regression in workloads with repeated file reads.  Without
caching, every read triggers disk I/O, severely impacting performance in
benchmark tools such as fio.

This patch introduces a new CONFIG_SQUASHFS_COMP_CACHE_FULL Kconfig option
to enable caching of all compressed blocks, restoring performance to
pre-BIO-migration levels.  When enabled, all pages in a BIO are cached in
the page cache, reducing disk I/O for repeated reads.  fio tests
(iodepth=1, numjobs=1, ioengine=psync) confirm the performance
restoration:

With CONFIG_SQUASHFS_COMP_CACHE_FULL disabled:
  IOPS=815, BW=102MiB/s (107MB/s)(6113MiB/60001msec)
With CONFIG_SQUASHFS_COMP_CACHE_FULL enabled:
  IOPS=2223, BW=278MiB/s (291MB/s)(16.3GiB/59999msec)
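
For reference, a fio invocation consistent with these runs might look like
the following; ioengine, iodepth, and numjobs are taken from the results
above, the 60-second runtime is inferred from the reported run times, and
the target file and block size are assumptions:

  fio --name=sqfs-read --filename=/mnt/squashfs/testfile --rw=read \
      --bs=128k --ioengine=psync --iodepth=1 --numjobs=1 \
      --runtime=60 --time_based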

The tradeoff is increased memory usage due to caching all compressed
blocks.  The CONFIG_SQUASHFS_COMP_CACHE_FULL option allows users to enable
this feature selectively, balancing performance and memory usage for
workloads with frequent repeated reads.
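
To opt in, set the option in the kernel configuration; a minimal .config
fragment (CONFIG_SQUASHFS must itself be enabled, matching the "depends on
SQUASHFS" line in the Kconfig hunk below):

  CONFIG_SQUASHFS=y
  CONFIG_SQUASHFS_COMP_CACHE_FULL=y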

Link: https://lkml.kernel.org/r/20250521072559.2389-1-chanho.min@lge.com
Signed-off-by: Chanho Min <chanho.min@lge.com>
Reviewed-by: Phillip Lougher <phillip@squashfs.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 fs/squashfs/Kconfig | 21 +++++++++++++++++++++
 fs/squashfs/block.c | 28 ++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

fs/squashfs/Kconfig:

@@ -149,6 +149,27 @@ config SQUASHFS_XATTR

	  If unsure, say N.

config SQUASHFS_COMP_CACHE_FULL
	bool "Enable full caching of compressed blocks"
	depends on SQUASHFS
	default n
	help
	  This option enables caching of all compressed blocks.  Without
	  caching, repeated reads of the same files trigger excessive disk I/O,
	  significantly reducing performance in workloads like fio-based
	  benchmarks.

	  For example, fio tests (iodepth=1, numjobs=1, ioengine=psync) show:
	    With caching:    IOPS=2223, BW=278MiB/s (291MB/s)
	    Without caching: IOPS=815,  BW=102MiB/s (107MB/s)

	  Enabling this option restores performance to pre-regression levels
	  by caching all compressed blocks in the page cache, reducing disk
	  I/O for repeated reads.  However, this increases memory usage, which
	  may be a concern in memory-constrained environments.

	  Enable this option if your workload involves frequent repeated reads
	  and memory usage is not a limiting factor.  If unsure, say N.

config SQUASHFS_ZLIB
	bool "Include support for ZLIB compressed file systems"
	depends on SQUASHFS

fs/squashfs/block.c:

@@ -88,6 +88,10 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
	struct bio_vec *bv;
	int idx = 0;
	int err = 0;
#ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
	struct page **cache_pages = kmalloc_array(page_count,
			sizeof(void *), GFP_KERNEL | __GFP_ZERO);
#endif

	bio_for_each_segment_all(bv, fullbio, iter_all) {
		struct page *page = bv->bv_page;
@@ -110,6 +114,11 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
			head_to_cache = page;
		else if (idx == page_count - 1 && index + length != read_end)
			tail_to_cache = page;
#ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
		/* Cache all pages in the BIO for repeated reads */
		else if (cache_pages)
			cache_pages[idx] = page;
#endif

		if (!bio || idx != end_idx) {
			struct bio *new = bio_alloc_clone(bdev, fullbio,
@@ -163,6 +172,25 @@ static int squashfs_bio_read_cached(struct bio *fullbio,
		}
	}

#ifdef CONFIG_SQUASHFS_COMP_CACHE_FULL
	if (!cache_pages)
		goto out;

	for (idx = 0; idx < page_count; idx++) {
		if (!cache_pages[idx])
			continue;

		int ret = add_to_page_cache_lru(cache_pages[idx], cache_mapping,
						(read_start >> PAGE_SHIFT) + idx,
						GFP_NOIO);
		if (!ret) {
			SetPageUptodate(cache_pages[idx]);
			unlock_page(cache_pages[idx]);
		}
	}
	kfree(cache_pages);
out:
#endif

	return 0;
}
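
For context, the payoff of these insertions comes on later reads:
squashfs_bio_read_cached() already treats pages whose ->mapping matches
cache_mapping as cached and skips re-reading them, and cached pages are
located through an ordinary page-cache lookup.  A minimal sketch of that
lookup pattern, with a hypothetical helper name (not the literal kernel
code):

	/*
	 * Sketch only: finding a page that the loop above inserted with
	 * add_to_page_cache_lru().  Helper name and context are hypothetical.
	 */
	static struct page *lookup_cached_comp_page(struct address_space *cache_mapping,
						    u64 read_start, int idx)
	{
		/* Same index arithmetic as the insertion loop above */
		struct page *page = find_get_page(cache_mapping,
						  (read_start >> PAGE_SHIFT) + idx);

		/* NULL means the block must be read from the device */
		return page;
	}

find_get_page() takes a reference on the returned page, so a real caller
drops it with put_page() once the data has been consumed.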