Hi,
I wasn't really aware that this could lead to higher usage of CPU and RAM, but to be fair, the CPU load has indeed increased by about 20-30% compared to not compressing the storage. RAM usage didn't increase by much.
IMHO the somewhat higher CPU load is definitely worth it, if you
You are aware that the kind of search performance you mean depends on the RAM and virtual memory organization of the cluster, not on storage, so "without any significant performance losses" could be expected?
Jörg
On 04.08.14 12:41, horst knete wrote:
We are indexing all sorts of events (Windows, Linux, Apache, Netflow and so on...), and the impact shows in the speed of the Kibana GUI, i.e. how long it takes to load 7 or 14 days of data. That's what is important for my colleagues.
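Since "how long it takes to load 7 or 14 days of data" is the metric here, that load can also be timed directly against ES, outside of Kibana. A minimal sketch, assuming a node at localhost:9200, logstash-* index names, and an @timestamp field (all of these are placeholders):

import json
import time
import urllib.request

# Placeholder URL and index pattern -- adjust to your cluster.
ES_URL = "http://localhost:9200/logstash-*/_search"

# Count everything from the last 7 days; no documents are returned.
query = {"size": 0, "query": {"range": {"@timestamp": {"gte": "now-7d"}}}}

req = urllib.request.Request(
    ES_URL,
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

start = time.monotonic()
with urllib.request.urlopen(req) as resp:
    hits = json.load(resp)["hits"]["total"]
print("hits: %s, wall time: %.2f s" % (hits, time.monotonic() - start))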
On Monday, 4 August 2014 10:52:25 UTC+2, Mark Walkom wrote:
What sort of data are you indexing? When you said the performance impact was minimal, how minimal, and at what points are you seeing it?
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 4 August 2014 16:43, horst knete wrote:
Hi again,
a quick report regarding compression:
we are now using a 3 TB btrfs volume with a 32k block size, which reduced the amount of data from 3.2 TB to 1.1 TB without any significant performance losses (we are using an 8-CPU, 20 GB memory machine with an iSCSI link to the volume).
So for us
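As a cross-check of numbers like 3.2 TB -> 1.1 TB, one can compare the logical file sizes with the blocks actually allocated on the compressed volume; on btrfs with transparent compression, du-style accounting via st_blocks should reflect the compressed size. A small sketch (the /data/elasticsearch path is a placeholder):

import os

# Placeholder path -- point this at the index directory on the compressed volume.
ROOT = "/data/elasticsearch"

logical = 0   # byte count as seen by applications (st_size)
on_disk = 0   # bytes actually allocated on disk (st_blocks * 512)

for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        st = os.stat(os.path.join(dirpath, name))
        logical += st.st_size
        on_disk += st.st_blocks * 512  # st_blocks is in 512-byte units

print("logical: %.1f GB" % (logical / 1e9))
print("on disk: %.1f GB" % (on_disk / 1e9))
print("ratio:   %.2f" % (logical / on_disk if on_disk else 0))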
Hi,
gzip/zlib compression is very bad for performance, so it can be interesting for
closed indices, but for live data I would not recommend it.
Also, you should know that:
compression using LZ4 is already enabled in the indices,
ES/Lucene/Java usually read and write 4k blocks,
-> hence, compression is
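To see what that built-in compression amounts to in practice, the store size that ES itself reports per index can be read from the indices stats API and compared with the raw size on disk. A minimal sketch, assuming a node at localhost:9200:

import json
import urllib.request

# Assumes a node listening on localhost:9200.
with urllib.request.urlopen("http://localhost:9200/_stats/store") as resp:
    stats = json.load(resp)

# Print the on-disk store size that Elasticsearch reports per index.
for index, data in sorted(stats["indices"].items()):
    size = data["total"]["store"]["size_in_bytes"]
    print("%-30s %8.2f GB" % (index, size / 1e9))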
Hey guys,
we have mounted a btrfs file system with the compression method "zlib" for testing purposes on our Elasticsearch server and copied one of the indices onto the btrfs volume. Unfortunately it had no success, and the index still has a size of 50 GB :/
I will try it further with other compression methods
Hi Horst,
I wouldn't bother with this for the reasons Joerg mentioned, but should you
try it anyway, I'd love to hear your findings/observations.
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Wednesday, July 16, 2014 6
Oops, that's not true: Elasticsearch uses Lucene codec compression, and that is also LZ4 (LZF is only kept for backwards compatibility).
Here are some numbers:
http://blog.jpountz.net/post/33247161884/efficient-compressed-stored-fields-with-lucene
Jörg
On Wed, Jul 16, 2014 at 2:28 PM, joergpra...@gmail.com <j
You will not gain much advantage, because ES already compresses data on disk with LZF. ZFS uses LZ4, whose compression output is quite similar. In the file system statistics you will notice the compression ratio, and it will not be a good value. So instead of having ZFS try to compress where n
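That point is easy to reproduce: data that has already been compressed barely shrinks when compressed again. A self-contained sketch using only the Python standard library (zlib stands in here for the LZ4/LZF pass that ES already does):

import zlib

# Some compressible sample data, standing in for stored fields.
data = b"level=INFO host=web-01 msg=request served in 12ms\n" * 20000

once = zlib.compress(data)    # stands in for the compression ES already applies
twice = zlib.compress(once)   # what a compressing file system adds on top

print("raw:              %8d bytes" % len(data))
print("compressed once:  %8d bytes (ratio %.2f)" % (len(once), len(data) / len(once)))
print("compressed twice: %8d bytes (ratio %.2f)" % (len(twice), len(once) / len(twice)))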
There are a few previous threads on this topic in the archives, though I don't immediately recall seeing any performance metrics, unfortunately.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 16 July 2014 20:56, horst
Hey Guys,
to save a lot of hard disk space, we are going to use a compressing file system, which gives us transparent compression for the ES indices. (It seems that ES indices are very compressible; we got up to a 65% compression rate in some tests.)
Currently the indices are sitting on an ext
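For a quick estimate of that compression rate before changing the file system, one can gzip a copy of an index directory and compare the sizes. A sketch with placeholder paths:

import os
import tarfile

# Placeholder paths -- an index directory and a scratch location.
SRC = "/var/lib/elasticsearch/indices/logstash-2014.07.16"
OUT = "/tmp/index-sample.tar.gz"

# Pack the directory into a gzip-compressed tarball.
with tarfile.open(OUT, "w:gz") as tar:
    tar.add(SRC, arcname=os.path.basename(SRC))

src_size = sum(
    os.path.getsize(os.path.join(d, f))
    for d, _dirs, files in os.walk(SRC)
    for f in files
)
out_size = os.path.getsize(OUT)
print("original: %.1f GB" % (src_size / 1e9))
print("gzipped:  %.1f GB (%.0f%% saved)" % (out_size / 1e9, 100 * (1 - out_size / src_size)))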