On (04/27/16 17:54), Sergey Senozhatsky wrote:
> #jobs4
> READ: 19948MB/s 20013MB/s
> READ: 17732MB/s 17479MB/s
> WRITE: 630690KB/s 495078KB/s
> WRITE: 1843.2MB/s 2226.9MB/s
> READ: 160
Hello,
More tests. This time I ran only 8 streams vs per-cpu. The changes
to the test are:
-- mem-hogger now faults pages in parallel with fio
-- mem-hogger alloc size increased from 3GB to 4GB.
The system couldn't survive a 4GB/4GB
zram (buffer_compress_percentage=11) / mem-hogger
split (OOM), s
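
The concurrent variant described above could be reproduced roughly as follows. This is a minimal sketch, not the scripts from the thread: the custom mem-hogger program is not posted there, so the stock `stress` tool stands in for it, the zram device is assumed to be set up as in the older messages further down (3GB, lzo), and the fio parameters other than buffer_compress_percentage=11 are assumptions.

#!/bin/sh
# Sketch of the concurrent test: fault anonymous memory in the background
# (stand-in for the custom mem-hogger) while fio writes to the zram device.

# The message above raises the hogger size from 3GB to 4GB, but also notes
# that a 4GB/4GB split OOMs; 3G is used here to keep the sketch survivable.
stress --vm 1 --vm-bytes 3G --vm-keep --timeout 600 &
HOG_PID=$!

fio --name=zram-test --filename=/dev/zram0 \
    --rw=randwrite --bs=4k --size=1G \
    --numjobs=4 --group_reporting \
    --buffer_compress_percentage=11

kill "$HOG_PID" 2>/dev/null
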
On (04/27/16 16:55), Minchan Kim wrote:
[..]
> > > Could you test a concurrent mem hogger with fio, rather than pre-faulting
> > > before the fio test,
> > > in the next submission?
> >
> > this test will not prove anything, unfortunately. I performed it,
> > and it's impossible to guarantee even remotely stable re
On Wed, Apr 27, 2016 at 04:43:35PM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (04/27/16 16:29), Minchan Kim wrote:
> [..]
> > > the test:
> > >
> > > -- 4 GB x86_64 box
> > > -- zram 3GB, lzo
> > > -- mem-hogger pre-faults 3GB of pages before the fio test
> > > -- fio test has been modified
Hello,
On (04/27/16 16:29), Minchan Kim wrote:
[..]
> > the test:
> >
> > -- 4 GB x86_64 box
> > -- zram 3GB, lzo
> > -- mem-hogger pre-faults 3GB of pages before the fio test
> > -- fio test has been modified to have 11% compression ratio (to increase the
> >
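
The test described in the quote above could look roughly like this. A minimal sketch, not the thread's actual scripts: the lzo algorithm, 3GB disksize, buffer_compress_percentage=11, loops and group_reporting are taken from the messages; the device name, I/O pattern, block size, job size and the use of `stress` as a stand-in for the mem-hogger are assumptions.

#!/bin/sh
# Sketch of the pre-fault variant: 3G lzo zram device, ~3GB of memory
# faulted in and held before fio starts.

modprobe zram num_devices=1
echo lzo > /sys/block/zram0/comp_algorithm
echo 3G  > /sys/block/zram0/disksize
# On kernels before the per-cpu streams change, the stream count was set via:
#   echo 8 > /sys/block/zram0/max_comp_streams

# Stand-in for the mem-hogger: fault ~3GB and keep it resident during the run.
stress --vm 1 --vm-bytes 3G --vm-keep --timeout 600 &
HOG_PID=$!
sleep 30    # give the hogger time to fault in its pages

fio --name=zram-test --filename=/dev/zram0 \
    --rw=randwrite --bs=4k --size=1G \
    --numjobs=4 --group_reporting --loops=10 \
    --buffer_compress_percentage=11

kill "$HOG_PID" 2>/dev/null
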
Hello Sergey,
On Tue, Apr 26, 2016 at 08:23:05PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (04/19/16 17:00), Minchan Kim wrote:
> [..]
> > I'm convinced now with your data. Super thanks!
> > However, as you know, we need data on how bad it is under heavy memory pressure.
> > Maybe, you c
Hello Minchan,
On (04/19/16 17:00), Minchan Kim wrote:
[..]
> I'm convinced now with your data. Super thanks!
> However, as you know, we need data on how bad it is under heavy memory pressure.
> Maybe, you can test it with fio and a background memory hogger,
it's really hard to produce stable test results
Hello Minchan,
On (04/19/16 17:00), Minchan Kim wrote:
> Great!
>
> So, based on your experiment, the reason I couldn't see such huge win
> in my mahcine is cache size difference(i.e., yours is twice than mine,
> IIRC.) and my perf stat didn't show such big difference.
> If I have a time, I will
On Mon, Apr 18, 2016 at 04:57:58PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> sorry it took me so long to get back to testing.
>
> I collected extended stats (perf), just like you requested.
> - 3G zram, lzo; 4 CPU x86_64 box.
> - fio with perf stat
>
> 4 streams
Hello Minchan,
sorry it took me so long to get back to testing.
I collected extended stats (perf), just like you requested.
- 3G zram, lzo; 4 CPU x86_64 box.
- fio with perf stat
4 streams          8 streams          per-cpu
=============================================
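
The message above mentions collecting extended stats with perf around the fio run; the thread does not show the exact invocation, but one way to gather such counters is:

# The exact perf invocation and event list are not shown in the thread, so
# this is an assumption; -d asks perf stat for its detailed default counters.
perf stat -d -- \
    fio --name=zram-test --filename=/dev/zram0 \
        --rw=randwrite --bs=4k --size=1G \
        --numjobs=4 --group_reporting --loops=10
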
Hello Minchan,
On (04/04/16 09:27), Minchan Kim wrote:
> Hello Sergey,
>
> On Sat, Apr 02, 2016 at 12:38:29AM +0900, Sergey Senozhatsky wrote:
> > Hello Minchan,
> >
> > On (03/31/16 15:34), Sergey Senozhatsky wrote:
> > > > I tested with your suggested parameters.
> > > > On my side, the win is better
Hello Sergey,
On Sat, Apr 02, 2016 at 12:38:29AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (03/31/16 15:34), Sergey Senozhatsky wrote:
> > > I tested with your suggested parameters.
> > > On my side, the win is better compared to my previous test, but it seems
> > > your test is too fast.
Hello Minchan,
On (03/31/16 15:34), Sergey Senozhatsky wrote:
> > I tested with your suggested parameters.
> > On my side, the win is better compared to my previous test, but it seems
> > your test is too fast. IOW, the filesize is small and loops is just 1.
> > Please test with filesize=500m and loops=10 or 20.
fio
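
Minchan's suggestion quoted above (filesize=500m, loops=10 or 20) maps onto the fio command line roughly as follows; everything other than the suggested working size and loop count is an assumption. Since the target here is the raw /dev/zram0 block device, size= is used to bound the per-job working area.

# Sketch only: parameters other than the suggested working size and loop
# count are assumptions.
fio --name=zram-test --filename=/dev/zram0 \
    --rw=randwrite --bs=4k \
    --size=500m --loops=10 \
    --numjobs=4 --group_reporting
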
Hello Minchan,
On (03/31/16 14:53), Minchan Kim wrote:
> Hello Sergey,
>
> > that's a good question. I quickly looked into the fio source code;
> > we need to use the "buffer_pattern=str" option, I think, so the buffers
> > will be filled with the same data.
> >
> > I don't mind having buffer_compre
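
The buffer_pattern option mentioned above can be used to make every run write identical data, for example (a sketch; the pattern value and the remaining parameters are assumptions):

# Sketch: fill every write buffer with a fixed pattern instead of
# buffer_compress_percentage, so the written data (and the compression
# ratio zram sees) is identical from run to run.
fio --name=zram-test --filename=/dev/zram0 \
    --rw=randwrite --bs=4k --size=1G \
    --numjobs=4 --group_reporting \
    --buffer_pattern=0xdeadbeef
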
Hello Sergey,
On Thu, Mar 31, 2016 at 10:26:26AM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (03/31/16 07:12), Minchan Kim wrote:
> [..]
> > > I used a slightly different script: no `buffer_compress_percentage' option,
> > > because it provides "a mix of random data and zeroes"
> >
> > Normally,
Hello,
On (03/31/16 07:12), Minchan Kim wrote:
[..]
> > I used a slightly different script: no `buffer_compress_percentage' option,
> > because it provides "a mix of random data and zeroes"
>
> Normally, zram's compression ratio is 3 or 2, so I used that.
> Hmm, isn't it a more realistic use case?
this
On Wed, Mar 30, 2016 at 05:34:19PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
> sorry for the late reply.
>
> On (03/28/16 12:21), Minchan Kim wrote:
> [..]
> > group_reporting
> > buffer_compress_percentage=50
> > filename=/dev/zram0
> > loops=10
>
> I used a slightly different script: no `buffer_
Hello Minchan,
sorry for the late reply.
On (03/28/16 12:21), Minchan Kim wrote:
[..]
> group_reporting
> buffer_compress_percentage=50
> filename=/dev/zram0
> loops=10
I used a slightly different script: no `buffer_compress_percentage' option,
because it provides "a mix of random data and zeroes"
buffer_
Hi Sergey,
On Fri, Mar 25, 2016 at 10:47:06AM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (03/25/16 08:41), Minchan Kim wrote:
> [..]
> > > Test #10 iozone -t 10 -R -r 80K -s 0M -I +Z
> > >   Initial write    3213973.56     2731512.62    4416466.25*
> > >   Rewrite
Hello Minchan,
On (03/25/16 08:41), Minchan Kim wrote:
[..]
> > Test #10 iozone -t 10 -R -r 80K -s 0M -I +Z
> >   Initial write    3213973.56     2731512.62    4416466.25*
> >   Rewrite          3066956.44*    2693819.50     332671.94
> >   Read             7769523.25*    26
Hi Sergey,
On Wed, Mar 23, 2016 at 05:18:27PM +0900, Sergey Senozhatsky wrote:
> ( was "[PATCH] zram: export the number of available comp streams"
>forked from http://marc.info/?l=linux-kernel&m=145860707516861 )
>
> d'oh sorry, now actually forked.
>
>
> Hello Minchan,
>
> forked i
( was "[PATCH] zram: export the number of available comp streams"
forked from http://marc.info/?l=linux-kernel&m=145860707516861 )
d'oh sorry, now actually forked.
Hello Minchan,
forked into a separate thread.
> On (03/22/16 09:39), Minchan Kim wrote:
> > zram_bvec_write()
> > {
>