Thanks everyone for the answers. I tested PAGE_COMPRESSION a bit and did some
reading. I'm not sure I fully understand how it works... is the following
correct?

1. It cannot really save space without hole punching: without it the server
just writes partially filled pages that still take up the full page size on
disk. Yet the docs seem to describe using it with hole punching disabled.
What is the reason to still have page compression without hole punching?

2. After enabling compression each page needs to be trimmed with a punch
hole, so with a 1 TB database and 64 KB blocks the data files end up with
over 15 million fragments (1,000,000,000,000 / 64,000). If we then get around
50% compression and manage to put another 1 TB of data into those holes, we
are looking at close to 31 million file fragments. I'm not sure the
filesystem can still allocate space efficiently, copy such files, etc. under
those conditions (see the sketch after this list). Or am I missing something?

3. I was testing compression of a ~1.5 GB table with nice results on the
default 16 KB page size. One thing I noticed is that I was always getting
exactly the same compressed size according to `du` (the sketch below also
shows the size checks). However, the table read/copy times differed, from
40 s uncompressed to 80-400 s depending on the algorithm. Could it be that
this is because the page size is too small?

4. MyRocks seems to be very slow compared to compressed InnoDB. I was also
reading the docs related to it. Is its use case more suited to storing large
rows rather than many small rows?

5. I remember reading that some InnoDB compression is going to be deprecated.
Is that row compression (ROW_FORMAT=COMPRESSED)? Or are there no plans to
deprecate any compression model?

Thanks.
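
In case it is useful, here is a rough sketch of how the fragmentation and the
on-disk size can be checked - the path and table name below are made up, and
it assumes ext4/xfs with GNU coreutils and e2fsprogs installed:

# Number of on-disk extents of a page-compressed tablespace
# (hole punching is expected to make this grow a lot):
filefrag /var/lib/mysql/mydb/mytable.ibd

# Allocated size (what `du` reports and what punch hole actually frees)
# vs. the apparent/logical size of the .ibd file:
du -h /var/lib/mysql/mydb/mytable.ibd
du -h --apparent-size /var/lib/mysql/mydb/mytable.ibd

# Back-of-the-envelope fragment count from question 2:
echo $((1000000000000 / 64000))    # ~15.6 million 64 KB pages in 1 TB
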
On 16 August 2023 at 16:08, Gordan Bobic via discuss < [email protected] > wrote:
Larger pages will yield higher compression ratios, as will zstd (slightly
slower than lz4).

Mileage varies, but on some of my databases, between the 64KB pages (zfs
recordsize) and zstd, my compression ratios approximately doubled, from about
1.8:1 to a little under 4:1. This will all heavily depend on the type of data
you are storing, though.

zstd has adjustable compression, so if you have tons of CPU to spare, you
could turn it up from the default of 3 (probably the optimal cost/size
compromise) to the high teens for a compression boost.
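
For example, on OpenZFS the level is part of the compression property (the
dataset name below is only a placeholder):

zfs set compression=zstd tank/mysql       # default level (3)
zfs set compression=zstd-15 tank/mysql    # higher level, more CPU
zfs get compression,compressratio tank/mysql
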
On Wed, 16 Aug 2023, 14:34 Reinis Rozitis, < [email protected] > wrote:

> From: Gordan Bobic via discuss < [email protected] >
>
> One of the best compromises is to configure InnoDB with 64KB pages, put it
> on ZFS with recordsize=64K, and enable zstd compression on ZFS.

Sorry for hijacking the thread a bit. After the discontinuation of TokuDB I
chose the InnoDB+ZFS route (bad previous/early experience with rocks, also a
million knobs to adjust for your workload, etc.) but at that time I didn't
think of increasing the InnoDB page size to 64KB - do you have any (even
rough) numbers for 16 vs 64KB (on ZFS)?

Also, what would be the benefit of zstd vs lz4? Higher compression ratio?

rr
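
For reference, that combination boils down to something like the following
sketch - the pool/dataset name is a placeholder, and note that
innodb_page_size can only be set when initializing a fresh data directory:

# ZFS dataset for the MariaDB/MySQL datadir:
zfs create -o recordsize=64K -o compression=zstd tank/mysql

# my.cnf, set before the datadir is initialized:
#   [mysqld]
#   innodb_page_size = 64K
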
_______________________________________________
discuss mailing list -- [email protected]
To unsubscribe send an email to [email protected]