I think compression of elasticsearch indices is enabled by default within 
ES, but to save further disk space, I've used a file system that supports 
transparent compression, like btrfs or zfs.
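
If you want Elasticsearch itself to squeeze harder, recent versions (2.x and later) also let you switch an index to the heavier DEFLATE codec instead of the default LZ4. A rough sketch, with a made-up index name and localhost as the host (note that index.codec can only be set when the index is created or while it is closed):

    # create a new index that uses DEFLATE ("best_compression") instead of the default LZ4
    curl -XPUT 'http://localhost:9200/ossec-2017.05.06' -H 'Content-Type: application/json' -d '
    {
      "settings": {
        "index.codec": "best_compression"
      }
    }'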

zfs has higher memory requirements than btrfs, and both will slow down your 
disk performance a little, as the compression adds overhead; but in my case 
I've found it quite usable. I've saved 15% to 20% of disk space this way; 
your mileage may vary. I'm using the zlib (gzip) compression option since it 
gives the better compression ratio; LZO compression is faster, but it will 
save you less disk space.
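
For btrfs the compression algorithm is just a mount option; something along 
these lines, with the device and mount point as placeholders for wherever 
your Elasticsearch data lives:

    # /etc/fstab entry with zlib compression for the Elasticsearch data partition
    /dev/sdb1  /var/lib/elasticsearch  btrfs  compress=zlib  0  0

    # or remount an already-mounted btrfs filesystem with the faster LZO option
    mount -o remount,compress=lzo /var/lib/elasticsearch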

I believe you can convert most ext-based filesystems (ext2/3/4) to btrfs in 
place... you then have to enable the transparent compression feature of 
btrfs.
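
The conversion is done with btrfs-convert; roughly like this, assuming the 
partition is unmounted and you have a backup first (device name and mount 
point are placeholders):

    # convert an unmounted ext4 partition to btrfs in place
    btrfs-convert /dev/sdb1

    # mount it with compression turned on; existing files can be compressed afterwards
    mount -o compress=zlib /dev/sdb1 /var/lib/elasticsearch
    btrfs filesystem defragment -r -czlib /var/lib/elasticsearch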

There is also a deduplication feature in both zfs and btrfs, but zfs's is 
more mature (it deduplicates inline, while btrfs relies on out-of-band 
tools). Again, ZFS will need a lot more RAM if you enable this feature.
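
On ZFS it's a one-line property per dataset; the pool and dataset names 
below are placeholders:

    # enable deduplication (and lightweight lz4 compression) on a ZFS dataset
    zfs set dedup=on tank/elasticsearch
    zfs set compression=lz4 tank/elasticsearch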

Here is a good resource on btrfs file system:
https://docs.oracle.com/cd/E37670_01/E37355/html/ol_about_btrfs.html



On Friday, May 5, 2017 at 3:07:17 PM UTC-4, RWagner wrote:
>
> Hi Guys!
>
> My elasticsearch indexes are filling the disk. I would like to compress 
> these indexes. Is it possible to compress these indexes in a way that I can 
> restore when needed?
>
> Would anyone help me?
>
