>> page compression at rebalancing is a good idea even if we have
>> problems with storing on disk.
BTW, do we have, or are we going to have, rebalancing based on page
streaming instead of entry streaming?
2018-03-27 3:03 GMT+03:00 Dmitriy Setrakyan:
AG,
I would also ask about the compression itself. How and where do we store
the compression meta information? We cannot compress every page
separately; that will not be effective. However, if we try to store the
compression metadata, how do we make other nodes aware of it? Has this been
discussed?
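To make the trade-off concrete, here is a hypothetical per-page metadata
layout (illustrative only, not Ignite's actual format). If every compressed
page carries a small self-describing header like this, no cluster-wide
metadata exchange is needed at all; the cost is exactly the concern above,
since per-page compression without a shared dictionary tends to compress
poorly.

final class CompressionMeta {
    // Codec identifiers; a page stored as-is uses NONE.
    static final byte NONE = 0;
    static final byte DEFLATE = 1; // java.util.zip DEFLATE

    final byte algorithm;    // codec used for this particular page
    final int compressedLen; // bytes actually occupied on disk

    CompressionMeta(byte algorithm, int compressedLen) {
        this.algorithm = algorithm;
        this.compressedLen = compressedLen;
    }
}

A shared dictionary, by contrast, compresses better but becomes cluster
state that every node must agree on, which is the distribution problem
raised above.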
Vova, thanks for the comments.
Anyway, page compression at rebalancing is a good idea even if we have
problems with storing on disk.
2018-03-26 19:51 GMT+03:00 Vyacheslav Daradur:
Since PDS strongly depends on the memory page size, I'd like to compress
the serialized data inside a page, excluding the page header.
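A minimal sketch of that idea, assuming a plain byte[] page and the JDK's
java.util.zip; PAGE_HDR_SIZE is illustrative, not Ignite's real constant:

import java.util.Arrays;
import java.util.zip.Deflater;

final class PagePayloadCompressor {
    static final int PAGE_HDR_SIZE = 40; // illustrative header size

    /** Compress everything after the header; keep the header verbatim. */
    static byte[] compressPayload(byte[] page) {
        Deflater def = new Deflater(Deflater.BEST_SPEED);
        def.setInput(page, PAGE_HDR_SIZE, page.length - PAGE_HDR_SIZE);
        def.finish();

        byte[] out = new byte[page.length];
        // The header is copied as-is so page id, type, etc. stay readable.
        System.arraycopy(page, 0, out, 0, PAGE_HDR_SIZE);

        int n = def.deflate(out, PAGE_HDR_SIZE, out.length - PAGE_HDR_SIZE);
        boolean done = def.finished();
        def.end();

        // Incompressible payload: fall back to storing the page as-is.
        if (!done || PAGE_HDR_SIZE + n >= page.length)
            return page;

        return Arrays.copyOf(out, PAGE_HDR_SIZE + n);
    }
}

Keeping the header uncompressed means a page can still be identified
without inflating it first.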
On Mon, Mar 26, 2018 at 7:49 PM, Vladimir Ozerov wrote:
Alex,
In fact there are many approaches to this. Some vendors decided to stick to
the page: a page is filled with data and then compressed when a certain
threshold is reached (e.g. the page is full or filled up to X%). Another
approach is to store data in memory in *larger blocks* than on the disk,
and when it
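A sketch of the first (threshold) variant described above; the constant and
names are illustrative:

final class FillFactorPolicy {
    /** Compress only once the page is at least this full, e.g. 90%. */
    static final double COMPRESS_THRESHOLD = 0.90;

    static boolean shouldCompress(int usedBytes, int pageSize) {
        return usedBytes >= pageSize * COMPRESS_THRESHOLD;
    }
}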
Hi Anton,
Do you have suggestions for this approach?
Sincerely,
Dmitriy Pavlov
Mon, Mar 26, 2018 at 19:46, Anton Vinogradov:
Can we use another approach to store compressed pages?
2018-03-26 19:06 GMT+03:00 Dmitry Pavlov:
+1 to Alexey's concern. There is no reason to compress if we keep using the
offset pageIdx*pageSize.
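The arithmetic behind that concern: with fixed-size page slots, a page's
position in the file is computed from its index alone, so shrinking the
page contents frees no addressable space.

final class PageOffsets {
    /** File position of a page in a fixed-slot page store. */
    static long pageOffset(long pageIdx, int pageSize) {
        return pageIdx * pageSize;
    }
}

For example, with 4 KiB pages, page #100 always starts at byte
100 * 4096 = 409600; even if its contents compress down to 1 KiB, the full
4 KiB slot stays reserved.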
Mon, Mar 26, 2018 at 18:59, Alexey Goncharuk:
Guys,
How does this fit the PageMemory concept? Currently it assumes that the
size of a page in memory and the size of a page on disk are the same, so
only per-entry compression within a page makes sense.
If you compress a whole page, how do you calculate the page offset in the
target data file?
Gents,
If I understood the idea correctly, the proposal is to compress pages on
eviction and decompress them on read from disk. Is that correct?
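If so, the read path is the mirror image of eviction. A sketch using
java.util.zip.Inflater, assuming the payload after the page header was
deflated on eviction as in the compress sketch above:

import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

final class PagePayloadDecompressor {
    static final int PAGE_HDR_SIZE = 40; // same illustrative constant

    /** Inflate a page read from disk back to its fixed in-memory size. */
    static byte[] decompressPayload(byte[] stored, int pageSize)
        throws DataFormatException {
        // Incompressible fallback: the page was stored uncompressed.
        if (stored.length == pageSize)
            return stored;

        byte[] page = new byte[pageSize];
        System.arraycopy(stored, 0, page, 0, PAGE_HDR_SIZE); // header as-is

        Inflater inf = new Inflater();
        inf.setInput(stored, PAGE_HDR_SIZE, stored.length - PAGE_HDR_SIZE);

        int n = inf.inflate(page, PAGE_HDR_SIZE, pageSize - PAGE_HDR_SIZE);
        inf.end();

        if (PAGE_HDR_SIZE + n != pageSize)
            throw new DataFormatException("unexpected inflated size: " + n);

        return page;
    }
}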
On Mon, Mar 26, 2018 at 5:13 PM, Anton Vinogradov wrote:
+1 to Taras's vision.
Compression on eviction is a good way to store more.
Pages in memory are always hot in a real system, so compression in memory
will definitely slow down the system, I think.
Anyway, we can split the issue into "on-eviction compression" and
"in-memory compression".
2018-03-06 12:14
Hi,
I guess page-level compression makes sense on page loading / eviction.
In this case we can decrease I/O operations and reach a performance boost.
What is the goal of in-memory compression? Holding about 2-5x more data in
memory at the cost of a performance drop?
Also, please clarify the case with compression