Hello,
I would be grateful to anyone who has insight
into the ext4 filesystem and/or VFS.
It would be great to use the maximum IPC data transfer
size (64K). The maximum block size ext4 currently allows
is 4K. Would it be OK to just bump the parameters
to 64K? At first glance it works correctly;
the only issue I see might be higher fragmentation
for lots of small files. Are there any other implications
I am missing? I just scaled the parameters proportionally
to correspond to 64K.
The whole code snippet is below, followed by a diff containing my additions.
Is `blocks_group = 8192 * 64;` fine?
Is there any other way to use the whole transfer size? I've
been thinking about something like a "vectorization" of I/O,
perhaps also parallelized.
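To make the "vectorization" idea concrete, here is a rough sketch (the types and sizes are hypothetical, not HelenOS API): instead of raising the filesystem block size, gather several 4K block reads into one 64K IPC buffer, so each entry could then be read independently or in parallel.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical scatter/gather entry: one 4K block and its destination
 * slot inside the shared 64K IPC buffer. */
typedef struct {
	uint64_t block;  /* on-disk block number */
	void *buf;       /* destination inside the IPC buffer */
} io_vec_t;

#define IPC_BUF_SIZE  65536
#define FS_BLOCK_SIZE 4096
#define MAX_VEC       (IPC_BUF_SIZE / FS_BLOCK_SIZE)  /* 16 */

/* Map up to 16 (not necessarily consecutive) blocks onto 4K slots of
 * a 64K IPC buffer; returns the number of entries filled in. */
static size_t build_iovec(io_vec_t *vec, const uint64_t *blocks,
    size_t cnt, uint8_t *ipc_buf)
{
	size_t i;

	for (i = 0; i < cnt && i < MAX_VEC; i++) {
		vec[i].block = blocks[i];
		vec[i].buf = ipc_buf + i * FS_BLOCK_SIZE;
	}
	return i;
}
```

Each resulting entry is an independent 4K read, so the whole 64K transfer is filled in one IPC round trip even when the blocks are not contiguous on disk.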
Take care.
--
mc
uspace/lib/ext4/src/superblock.c: ext4_superblock_create():
switch (fs_bsize) {
case 1024:
	first_block = 1;
	fs_bsize_log = 0;
	blocks_group = 8192;
	break;
case 2048:
	fs_bsize_log = 1;
	blocks_group = 8192 * 2;
	break;
case 4096:
	fs_bsize_log = 2;
	blocks_group = 8192 * 4;
	break;
case 65536:
	fs_bsize_log = 6;
	blocks_group = 8192 * 64;
	break;
default:
	return ENOTSUP;
}
diff:
diff --git a/uspace/lib/ext4/src/superblock.c b/uspace/lib/ext4/src/superblock.c
index 8b84a6434..5b351d751 100644
--- a/uspace/lib/ext4/src/superblock.c
+++ b/uspace/lib/ext4/src/superblock.c
@@ -1553,6 +1553,10 @@ errno_t ext4_superblock_create(size_t dev_bsize, uint64_t dev_bcnt,
 		fs_bsize_log = 2;
 		blocks_group = 8192 * 4;
 		break;
+	case 65536:
+		fs_bsize_log = 6;
+		blocks_group = 8192 * 64;
+		break;
 	default:
 		return ENOTSUP;
 	}
_______________________________________________
HelenOS-devel mailing list
[email protected]
http://lists.modry.cz/listinfo/helenos-devel