Re: [OmniOS-discuss] write amplification zvol

2017-10-02 Thread Richard Elling

> On Oct 2, 2017, at 12:51 AM, anthony omnios <icoomn...@gmail.com> wrote:
> 
> Hi, 
> 
> I have tried a pool with ashift=9 and there is no write amplification; the
> problem is solved.

ashift=13 means that the minimum size (in bytes) written will be 8k (1<<13). So when you
write a single byte, there will be at least 2 writes for the data (both sides of the mirror)
and 4 writes for metadata (both sides of the mirror * 2 copies of metadata for redundancy).
Each metadata block contains information on 128 or more data blocks, so there is not a 1:1
correlation between data and metadata writes.

Reducing ashift doesn't change the number of blocks written for a single-byte write. It can
only reduce or increase the size in bytes of those writes.
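
As a back-of-the-envelope illustration (a sketch only; it ignores the ZIL, gang blocks,
compression, and the fact that metadata is amortized over many data blocks), the minimum
on-disk cost of one tiny write to a 2-way mirror works out roughly as:

# hypothetical arithmetic sketch, not a measurement
for ashift in 13 9; do
  blk=$((1 << ashift))     # smallest allocatable block, in bytes
  data=$((2 * blk))        # the data block goes to both sides of the mirror
  meta=$((2 * 2 * blk))    # worst case: 2 metadata copies x 2 mirror sides
  echo "ashift=$ashift: >= $data bytes of data, plus up to $meta bytes of metadata"
done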

HTH
 -- richard

> 
> But I can't use ashift=9 with these SSDs (850 EVO); I have read many articles
> indicating problems with ashift=9 on SSDs.
> 
> What can I do? Do I need to tweak a specific ZFS value?
> 
> Thanks,
> 
> Anthony
> 
> 
> 
> 2017-09-28 11:48 GMT+02:00 anthony omnios <icoomn...@gmail.com>:
> Thanks for your help Stephan.
> 
> I have tried different LUNs with the default of 512 B and with 4096 B:
> 
> LU Name: 600144F04D4F060059A588910001
> Operational Status: Online
> Provider Name : sbd
> Alias : /dev/zvol/rdsk/filervm2/hdd-110002b
> View Entry Count  : 1
> Data File : /dev/zvol/rdsk/filervm2/hdd-110002b
> Meta File : not set
> Size  : 26843545600
> Block Size: 4096
> Management URL: not set
> Vendor ID : SUN 
> Product ID: COMSTAR 
> Serial Num: not set
> Write Protect : Disabled
> Writeback Cache   : Disabled
> Access State  : Active
> 
> Problem is the same.
> 
> Cheers,
> 
> Anthony
> 
> 2017-09-28 10:33 GMT+02:00 Stephan Budach <stephan.bud...@jvm.de>:
> - Original Message -
> 
> > From: "anthony omnios" <icoomn...@gmail.com>
> > To: "Richard Elling" <richard.ell...@richardelling.com>
> > CC: omnios-discuss@lists.omniti.com
> > Sent: Thursday, 28 September 2017 09:56:42
> > Subject: Re: [OmniOS-discuss] write amplification zvol
> 
> > Thanks Richard for your help.
> 
> > My problem is that I have network iSCSI traffic of 2 MB/s; every 5
> > seconds I need to write about 10 MB of network traffic to disk, but on
> > pool filervm2 I am writing much more than that, approximately 60 MB
> > every 5 seconds. Each SSD of filervm2 is writing 15 MB every 5
> > seconds. When I check with smartmontools, every SSD is writing
> > approximately 250 GB of data each day.
> 
> > How can I reduce the amount of data written to each SSD? I have tried to
> > reduce the block size of the zvol, but it changes nothing.
> 
> > Anthony
> 
> > 2017-09-28 1:29 GMT+02:00 Richard Elling <
> > richard.ell...@richardelling.com <mailto:richard.ell...@richardelling.com> 
> > > :
> 
> > > Comment below...
> >
> 
> > > > On Sep 27, 2017, at 12:57 AM, anthony omnios <
> > > > icoomn...@gmail.com <mailto:icoomn...@gmail.com>
> > > > > wrote:
> >
> > > >
> >
> > > > Hi,
> >
> > > >
> >
> > > > i have a problem, i used many ISCSI zvol (for each vm), network
> > > > traffic is 2MB/s between kvm host and filer but i write on disks
> > > > many more than that. I used a pool with separated mirror zil
> > > > (intel s3710) and 8 ssd samsung 850 evo 1To
> >
> > > >
> >
> > > > zpool status
> >
> > > > pool: filervm2
> >
> > > > state: ONLINE
> >
> > > > scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20
> > > > 15:45:48
> > > > 2017
> >
> > > > config:
> >
> > > >
> >
> > > > NAME STATE READ WRITE CKSUM
> >
> > > > filervm2 ONLINE 0 0 0
> >
> > > > mirror-0 ONLINE 0 0 0
> >
> > > > c7t5002538D41657AAFd0 ONLINE 0 0 0
> >
> > > > c7t5002538D41F85C0Dd0 ONLINE 0 0 0
> >
> > > > mirror-2 ONLINE 0 0 0
> >
> > > > c7t5002538D41CC7105d0 ONLINE 0 0 0
> >
> > > > c7t5002538D41CC7127d0 ONLINE 0 0 0
> >
> > > > mirror-3 ONLINE 

Re: [OmniOS-discuss] write amplification zvol

2017-10-02 Thread anthony omnios
Hi,

I have tried a pool with ashift=9 and there is no write amplification; the
problem is solved.

But I can't use ashift=9 with these SSDs (850 EVO); I have read many articles
indicating problems with ashift=9 on SSDs.

What can I do? Do I need to tweak a specific ZFS value?
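
For what it's worth, volblocksize can only be set when a zvol is created, so testing a
different block size means creating a fresh zvol; a minimal sketch, assuming a hypothetical
test dataset name:

# volblocksize is fixed at creation time; "filervm2/hdd-test" is only an example
zfs create -V 25G -o volblocksize=8k -o compression=lz4 filervm2/hdd-test
zfs get volblocksize,compression filervm2/hdd-test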

Thanks,

Anthony



2017-09-28 11:48 GMT+02:00 anthony omnios <icoomn...@gmail.com>:

> Thanks for your help Stephan.
>
> I have tried different LUNs with the default of 512 B and with 4096 B:
>
> LU Name: 600144F04D4F060059A588910001
> Operational Status: Online
> Provider Name : sbd
> Alias : /dev/zvol/rdsk/filervm2/hdd-110002b
> View Entry Count  : 1
> Data File : /dev/zvol/rdsk/filervm2/hdd-110002b
> Meta File : not set
> Size  : 26843545600
> Block Size: 4096
> Management URL: not set
> Vendor ID : SUN
> Product ID: COMSTAR
> Serial Num: not set
> Write Protect : Disabled
> Writeback Cache   : Disabled
> Access State  : Active
>
> Problem is the same.
>
> Cheers,
>
> Anthony
>
> 2017-09-28 10:33 GMT+02:00 Stephan Budach <stephan.bud...@jvm.de>:
>
>> - Original Message -
>>
>> > From: "anthony omnios" <icoomn...@gmail.com>
>> > To: "Richard Elling" <richard.ell...@richardelling.com>
>> > CC: omnios-discuss@lists.omniti.com
>> > Sent: Thursday, 28 September 2017 09:56:42
>> > Subject: Re: [OmniOS-discuss] write amplification zvol
>>
>> > Thanks Richard for your help.
>>
>> > My problem is that I have network iSCSI traffic of 2 MB/s; every 5
>> > seconds I need to write about 10 MB of network traffic to disk, but on
>> > pool filervm2 I am writing much more than that, approximately 60 MB
>> > every 5 seconds. Each SSD of filervm2 is writing 15 MB every 5
>> > seconds. When I check with smartmontools, every SSD is writing
>> > approximately 250 GB of data each day.
>>
>> > How can I reduce the amount of data written to each SSD? I have tried to
>> > reduce the block size of the zvol, but it changes nothing.
>>
>> > Anthony
>>
>> > 2017-09-28 1:29 GMT+02:00 Richard Elling <richard.ell...@richardelling.com>:
>>
>> > > Comment below...
>> > >
>> > > > On Sep 27, 2017, at 12:57 AM, anthony omnios <icoomn...@gmail.com> wrote:
>> > > >
>> > > > Hi,
>> > > >
>> > > > I have a problem: I use many iSCSI zvols (one for each VM). Network
>> > > > traffic is 2 MB/s between the KVM host and the filer, but I write much
>> > > > more than that to disk. I use a pool with a separate mirrored ZIL
>> > > > (Intel S3710) and 8 Samsung 850 EVO 1 TB SSDs.
>> > > >
>> > > > zpool status
>> > > >   pool: filervm2
>> > > >  state: ONLINE
>> > > >   scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017
>> > > > config:
>> > > >
>> > > > NAME                       STATE   READ WRITE CKSUM
>> > > > filervm2                   ONLINE     0     0     0
>> > > >   mirror-0                 ONLINE     0     0     0
>> > > >     c7t5002538D41657AAFd0  ONLINE     0     0     0
>> > > >     c7t5002538D41F85C0Dd0  ONLINE     0     0     0
>> > > >   mirror-2                 ONLINE     0     0     0
>> > > >     c7t5002538D41CC7105d0  ONLINE     0     0     0
>> > > >     c7t5002538D41CC7127d0  ONLINE     0     0     0
>> > > >   mirror-3                 ONLINE     0     0     0
>> > > >     c7t5002538D41CD7F7Ed0  ONLINE     0     0     0
>> > > >     c7t5002538D41CD83FDd0  ONLINE     0     0     0
>> > > >   mirror-4                 ONLINE     0     0     0
>> > > >     c7t5002538D41CD7F7Ad0  ONLINE     0     0     0
>> > > >     c7t5002538D41CD7F7Dd0  ONLINE     0     0     0
>> > > > logs
>> > > >   mirror-1                 ONLINE     0     0     0
>> > > >     c4t2d0                 ONLINE     0     0     0
>> > > >     c4t4d0                 ONLINE     0     0     0
>> > > >
>> > > > I used the correct ashift of 13 for the Samsung 850 EVO.
>> > > > zdb|grep ashift :

Re: [OmniOS-discuss] write amplification zvol

2017-09-28 Thread anthony omnios
Thanks for your help Stephan.

I have tried different LUNs with the default of 512 B and with 4096 B:

LU Name: 600144F04D4F060059A588910001
Operational Status: Online
Provider Name : sbd
Alias : /dev/zvol/rdsk/filervm2/hdd-110002b
View Entry Count  : 1
Data File : /dev/zvol/rdsk/filervm2/hdd-110002b
Meta File : not set
Size  : 26843545600
Block Size: 4096
Management URL: not set
Vendor ID : SUN
Product ID: COMSTAR
Serial Num: not set
Write Protect : Disabled
Writeback Cache   : Disabled
Access State  : Active

Problem is the same.
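
For completeness, a LU block size can also be set explicitly when the LU is created; a
minimal sketch (hedged: the zvol path is only an example, and the "blk" property should be
checked against stmfadm(1M) on your release):

# create a LU with an explicit 4096-byte SCSI block size (path is an example)
stmfadm create-lu -p blk=4096 /dev/zvol/rdsk/filervm2/hdd-test
stmfadm list-lu -v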

Cheers,

Anthony

2017-09-28 10:33 GMT+02:00 Stephan Budach <stephan.bud...@jvm.de>:

> - Original Message -
>
> > From: "anthony omnios" <icoomn...@gmail.com>
> > To: "Richard Elling" <richard.ell...@richardelling.com>
> > CC: omnios-discuss@lists.omniti.com
> > Sent: Thursday, 28 September 2017 09:56:42
> > Subject: Re: [OmniOS-discuss] write amplification zvol
>
> > Thanks Richard for your help.
>
> > My problem is that I have network iSCSI traffic of 2 MB/s; every 5
> > seconds I need to write about 10 MB of network traffic to disk, but on
> > pool filervm2 I am writing much more than that, approximately 60 MB
> > every 5 seconds. Each SSD of filervm2 is writing 15 MB every 5
> > seconds. When I check with smartmontools, every SSD is writing
> > approximately 250 GB of data each day.
>
> > How can I reduce the amount of data written to each SSD? I have tried to
> > reduce the block size of the zvol, but it changes nothing.
>
> > Anthony
>
> > 2017-09-28 1:29 GMT+02:00 Richard Elling <richard.ell...@richardelling.com>:
> >
> > > Comment below...
> > >
> > > > On Sep 27, 2017, at 12:57 AM, anthony omnios <icoomn...@gmail.com> wrote:
> > > >
> > > > Hi,
> > > >
> > > > I have a problem: I use many iSCSI zvols (one for each VM). Network
> > > > traffic is 2 MB/s between the KVM host and the filer, but I write much
> > > > more than that to disk. I use a pool with a separate mirrored ZIL
> > > > (Intel S3710) and 8 Samsung 850 EVO 1 TB SSDs.
> > > >
> > > > zpool status
> > > >   pool: filervm2
> > > >  state: ONLINE
> > > >   scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017
> > > > config:
> > > >
> > > > NAME                       STATE   READ WRITE CKSUM
> > > > filervm2                   ONLINE     0     0     0
> > > >   mirror-0                 ONLINE     0     0     0
> > > >     c7t5002538D41657AAFd0  ONLINE     0     0     0
> > > >     c7t5002538D41F85C0Dd0  ONLINE     0     0     0
> > > >   mirror-2                 ONLINE     0     0     0
> > > >     c7t5002538D41CC7105d0  ONLINE     0     0     0
> > > >     c7t5002538D41CC7127d0  ONLINE     0     0     0
> > > >   mirror-3                 ONLINE     0     0     0
> > > >     c7t5002538D41CD7F7Ed0  ONLINE     0     0     0
> > > >     c7t5002538D41CD83FDd0  ONLINE     0     0     0
> > > >   mirror-4                 ONLINE     0     0     0
> > > >     c7t5002538D41CD7F7Ad0  ONLINE     0     0     0
> > > >     c7t5002538D41CD7F7Dd0  ONLINE     0     0     0
> > > > logs
> > > >   mirror-1                 ONLINE     0     0     0
> > > >     c4t2d0                 ONLINE     0     0     0
> > > >     c4t4d0                 ONLINE     0     0     0
> > > >
> > > > I used the correct ashift of 13 for the Samsung 850 EVO.
> > > > zdb|grep ashift :
> > > >
> > > > ashift: 13
> > > > ashift: 13
> > > > ashift: 13
> > > > ashift: 13
> > > > ashift: 13
> > > >
> > > > But I write a lot to the SSDs every 5 seconds (much more than the
> > > > network traffic of 2 MB/s).
> > > >
> > > > iostat -xn -d 1 :
> > > >
> > > >  r/s    w/s   kr/s     kw/s wait actv wsvc_t asvc_t  %w  %b device
> > > > 11.0 3067.5  288.3 153457.4  6.8  0.5    2.2    0.2   5  14 filervm2
> >
> > > filervm2 is seeing 3067 writes per second. This is the interface to
> > > the upper layers. These writes are small.
> >
> > > >  0.0    0.0    0.0      0.0  0.0  0.0    0.0    0.0   0   0 rpool

Re: [OmniOS-discuss] write amplification zvol

2017-09-28 Thread Stephan Budach
- Original Message -

> From: "anthony omnios" <icoomn...@gmail.com>
> To: "Richard Elling" <richard.ell...@richardelling.com>
> CC: omnios-discuss@lists.omniti.com
> Sent: Thursday, 28 September 2017 09:56:42
> Subject: Re: [OmniOS-discuss] write amplification zvol

> Thanks Richard for your help.

> My problem is that I have network iSCSI traffic of 2 MB/s; every 5
> seconds I need to write about 10 MB of network traffic to disk, but on
> pool filervm2 I am writing much more than that, approximately 60 MB
> every 5 seconds. Each SSD of filervm2 is writing 15 MB every 5
> seconds. When I check with smartmontools, every SSD is writing
> approximately 250 GB of data each day.

> How can I reduce the amount of data written to each SSD? I have tried to
> reduce the block size of the zvol, but it changes nothing.

> Anthony

> 2017-09-28 1:29 GMT+02:00 Richard Elling <richard.ell...@richardelling.com>:
>
> > Comment below...
> >
> > > On Sep 27, 2017, at 12:57 AM, anthony omnios <icoomn...@gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > I have a problem: I use many iSCSI zvols (one for each VM). Network
> > > traffic is 2 MB/s between the KVM host and the filer, but I write much
> > > more than that to disk. I use a pool with a separate mirrored ZIL
> > > (Intel S3710) and 8 Samsung 850 EVO 1 TB SSDs.
> > >
> > > zpool status
> > >   pool: filervm2
> > >  state: ONLINE
> > >   scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017
> > > config:
> > >
> > > NAME                       STATE   READ WRITE CKSUM
> > > filervm2                   ONLINE     0     0     0
> > >   mirror-0                 ONLINE     0     0     0
> > >     c7t5002538D41657AAFd0  ONLINE     0     0     0
> > >     c7t5002538D41F85C0Dd0  ONLINE     0     0     0
> > >   mirror-2                 ONLINE     0     0     0
> > >     c7t5002538D41CC7105d0  ONLINE     0     0     0
> > >     c7t5002538D41CC7127d0  ONLINE     0     0     0
> > >   mirror-3                 ONLINE     0     0     0
> > >     c7t5002538D41CD7F7Ed0  ONLINE     0     0     0
> > >     c7t5002538D41CD83FDd0  ONLINE     0     0     0
> > >   mirror-4                 ONLINE     0     0     0
> > >     c7t5002538D41CD7F7Ad0  ONLINE     0     0     0
> > >     c7t5002538D41CD7F7Dd0  ONLINE     0     0     0
> > > logs
> > >   mirror-1                 ONLINE     0     0     0
> > >     c4t2d0                 ONLINE     0     0     0
> > >     c4t4d0                 ONLINE     0     0     0
> > >
> > > I used the correct ashift of 13 for the Samsung 850 EVO.
> > > zdb|grep ashift :
> > >
> > > ashift: 13
> > > ashift: 13
> > > ashift: 13
> > > ashift: 13
> > > ashift: 13
> > >
> > > But I write a lot to the SSDs every 5 seconds (much more than the
> > > network traffic of 2 MB/s).
> > >
> > > iostat -xn -d 1 :
> > >
> > >  r/s    w/s   kr/s     kw/s wait actv wsvc_t asvc_t  %w  %b device
> > > 11.0 3067.5  288.3 153457.4  6.8  0.5    2.2    0.2   5  14 filervm2
>
> > filervm2 is seeing 3067 writes per second. This is the interface to
> > the upper layers. These writes are small.
>
> > >  0.0    0.0    0.0      0.0  0.0  0.0    0.0    0.0   0   0 rpool
> > >  0.0    0.0    0.0      0.0  0.0  0.0    0.0    0.0   0   0 c4t0d0
> > >  0.0    0.0    0.0      0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0
> > >  0.0  552.6    0.0  17284.0  0.0  0.1    0.0    0.2   0   8 c4t2d0
> > >  0.0  552.6    0.0  17284.0  0.0  0.1    0.0    0.2   0   8 c4t4d0
>
> > The log devices are seeing 552 writes per second and since sync=standard
> > that means that the upper layers are requesting syncs.
>
> > >  1.0  233.3   48.1  10051.6  0.0  0.0    0.0    0.1   0   3 c7t5002538D41657AAFd0
> > >  5.0  250.3  144.2  13207.3  0.0  0.0    0.0    0.1   0   3 c7t5002538D41CC7127d0
> > >  2.0  254.3   24.0  13207.3  0.0  0.0    0.0    0.1   0   4 c7t5002538D41CC7105d0
> > >  3.0  235.3   72.1  10051.6  0.0  0.0    0.0    0.1   0   3 c7t5002538D41F85C0Dd0
> > >  0.0  228.3    0.0  16178.7  0.0  0.0    0.0    0.2   0   4 c7t5002538D41CD83FDd0
> > >  0.0  225.3    0.0  16210.7  0.0  0.0    0.0    0.2   0   4 c7t5002538D41CD7F7Ed0
> > >  0.0  282.3    0.0  19991.1  0.0  0.0    0.0    0.2   0   5 c7t5002538D41CD7F7Dd0
> > >  0.0  280.3    0.0  19871.0  0.0  0.0    0.0    0.2   0   5 c7t5002538D41CD7F7Ad0
>
> > The pool disks see 1989 writes per second total or 994 writes per
> > second logically.
>
> > It seems to me that reducing 3067 requested writes to 994 logical
> > writes is the opposite of amplification. What do y

Re: [OmniOS-discuss] write amplification zvol

2017-09-27 Thread Richard Elling
Comment below...

> On Sep 27, 2017, at 12:57 AM, anthony omnios  wrote:
> 
> Hi,
> 
> I have a problem: I use many iSCSI zvols (one for each VM). Network traffic is
> 2 MB/s between the KVM host and the filer, but I write much more than that to
> disk. I use a pool with a separate mirrored ZIL (Intel S3710) and 8 Samsung
> 850 EVO 1 TB SSDs.
> 
>  zpool status
>   pool: filervm2
>  state: ONLINE
>   scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017
> config:
> 
> NAME   STATE READ WRITE CKSUM
> filervm2   ONLINE   0 0 0
>   mirror-0 ONLINE   0 0 0
> c7t5002538D41657AAFd0  ONLINE   0 0 0
> c7t5002538D41F85C0Dd0  ONLINE   0 0 0
>   mirror-2 ONLINE   0 0 0
> c7t5002538D41CC7105d0  ONLINE   0 0 0
> c7t5002538D41CC7127d0  ONLINE   0 0 0
>   mirror-3 ONLINE   0 0 0
> c7t5002538D41CD7F7Ed0  ONLINE   0 0 0
> c7t5002538D41CD83FDd0  ONLINE   0 0 0
>   mirror-4 ONLINE   0 0 0
> c7t5002538D41CD7F7Ad0  ONLINE   0 0 0
> c7t5002538D41CD7F7Dd0  ONLINE   0 0 0
> logs
>   mirror-1 ONLINE   0 0 0
> c4t2d0 ONLINE   0 0 0
> c4t4d0 ONLINE   0 0 0
> 
> I used the correct ashift of 13 for the Samsung 850 EVO.
> zdb|grep ashift :
> 
> ashift: 13
> ashift: 13
> ashift: 13
> ashift: 13
> ashift: 13
> 
> But I write a lot to the SSDs every 5 seconds (much more than the network
> traffic of 2 MB/s).
> 
> iostat -xn -d 1 : 
> 
>  r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>11.0 3067.5  288.3 153457.4  6.8  0.52.20.2   5  14 filervm2

filervm2 is seeing 3067 writes per second. This is the interface to the upper 
layers.
These writes are small.
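
One way to watch the same split at the pool and vdev level over time is zpool iostat
(a sketch; 5-second samples to line up with the 5-second pattern described above):

# pool-wide and per-vdev throughput/IOPS, sampled every 5 seconds
zpool iostat -v filervm2 5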

> 0.00.00.00.0  0.0  0.00.00.0   0   0 rpool
> 0.00.00.00.0  0.0  0.00.00.0   0   0 c4t0d0
> 0.00.00.00.0  0.0  0.00.00.0   0   0 c4t1d0
> 0.0  552.60.0 17284.0  0.0  0.10.00.2   0   8 c4t2d0
> 0.0  552.60.0 17284.0  0.0  0.10.00.2   0   8 c4t4d0

The log devices are seeing 552 writes per second and since sync=standard that 
means that the upper layers are requesting syncs.
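
A quick way to confirm how the zvol handles those syncs (a sketch; the dataset name is
taken from the "zfs get all" output further down in this thread):

# show the sync and logbias settings on the zvol backing the LU
zfs get sync,logbias filervm2/hdd-110022a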

> 1.0  233.3   48.1 10051.6  0.0  0.00.00.1   0   3 
> c7t5002538D41657AAFd0
> 5.0  250.3  144.2 13207.3  0.0  0.00.00.1   0   3 
> c7t5002538D41CC7127d0
> 2.0  254.3   24.0 13207.3  0.0  0.00.00.1   0   4 
> c7t5002538D41CC7105d0
> 3.0  235.3   72.1 10051.6  0.0  0.00.00.1   0   3 
> c7t5002538D41F85C0Dd0
> 0.0  228.30.0 16178.7  0.0  0.00.00.2   0   4 
> c7t5002538D41CD83FDd0
> 0.0  225.30.0 16210.7  0.0  0.00.00.2   0   4 
> c7t5002538D41CD7F7Ed0
> 0.0  282.30.0 19991.1  0.0  0.00.00.2   0   5 
> c7t5002538D41CD7F7Dd0
> 0.0  280.30.0 19871.0  0.0  0.00.00.2   0   5 
> c7t5002538D41CD7F7Ad0

The pool disks see 1989 writes per second total or 994 writes per second 
logically.
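
As a cross-check, the per-disk w/s figures quoted above sum up as follows (a sketch; the
numbers are copied from the iostat sample):

echo '233.3 250.3 254.3 235.3 228.3 225.3 282.3 280.3' |
  awk '{ for (i = 1; i <= NF; i++) t += $i;
         printf "total w/s = %.1f, logical (mirrored, /2) = %.1f\n", t, t / 2 }'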

It seems to me that reducing 3067 requested writes to 994 logical writes is the 
opposite
of amplification. What do you expect?
 -- richard

> 
> I used a zvol with a 64k volblocksize; I tried 8k and the problem is the same.
> 
> zfs get all filervm2/hdd-110022a :
> 
> NAME  PROPERTY  VALUE  SOURCE
> filervm2/hdd-110022a  type  volume -
> filervm2/hdd-110022a  creation  Tue May 16 10:24 2017  -
> filervm2/hdd-110022a  used  5.26G  -
> filervm2/hdd-110022a  available 2.90T  -
> filervm2/hdd-110022a  referenced5.24G  -
> filervm2/hdd-110022a  compressratio 3.99x  -
> filervm2/hdd-110022a  reservation   none   default
> filervm2/hdd-110022a  volsize   25Glocal
> filervm2/hdd-110022a  volblocksize  64K-
> filervm2/hdd-110022a  checksum  on default
> filervm2/hdd-110022a  compression   lz4local
> filervm2/hdd-110022a  readonly  offdefault
> filervm2/hdd-110022a  copies1  default
> filervm2/hdd-110022a  refreservationnone   default
> filervm2/hdd-110022a  primarycache  alldefault
> filervm2/hdd-110022a  secondarycachealldefault
> filervm2/hdd-110022a  usedbysnapshots   15.4M   

Re: [OmniOS-discuss] write amplification zvol

2017-09-27 Thread anthony omnios
Thanks, this is the result of the test:

./iscsisvrtop 1 30 >> /tmp/iscsisvrtop.txt
more /tmp/iscsisvrtop.txt :

Tracing... Please wait.
2017 Sep 27 17:01:48 load: 0.22 read_KB: 345 write_KB: 56
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247       4      0       0     0      0      0       0       0     0     0      0       0
1.1.193.250     105     91       1     0    345     56       3      56     4   756      0     100
all             109     91       1     0    345     56       3      56     4   756      0       0
2017 Sep 27 17:01:49 load: 0.22 read_KB: 163 write_KB: 41
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247      32     26       2     0    117     41       4      20     6   417      0     100
1.1.193.250      42     34       0     0     46      0       1       0     7     0      0       0
all              74     60       2     0    163     41       2      20     7   417      0       0
2017 Sep 27 17:01:50 load: 0.22 read_KB: 499 write_KB: 232
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247      45     40       3     0    210    196       5      65     5   763      0     100
1.1.193.250      77     65       2     0    288     36       4      18     4   439      0     100
all             122    105       5     0    499    232       4      46     4   634      0       0
2017 Sep 27 17:01:51 load: 0.22 read_KB: 314 write_KB: 84
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247       3      1       0     0      0      0       0       0     2     0      0       0
1.1.193.250     100     88       4     0    313     84       3      21     4   396      0     100
all             103     89       4     0    314     84       3      21     4   396      0       0
2017 Sep 27 17:01:52 load: 0.22 read_KB: 184 write_KB: 104
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247      23     17       1     0     59     88       3      88     5   871      0     100
1.1.193.250      50     44       1     0    125     16       2      16     8   445      0     100
all              73     61       2     0    184    104       3      52     7   658      0       0
2017 Sep 27 17:01:53 load: 0.22 read_KB: 250 write_KB: 1920
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247       7      6       0     0     12      0       2       0     6     0      0       0
1.1.193.250      71     44      16     0    263   1920       5     120     6  2531      0     100
all              78     50      16     0    276   1920       5     120     6  2531      0       0
2017 Sep 27 17:01:54 load: 0.22 read_KB: 93 write_KB: 0
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247       7      0       0     0      0      0       0       0     0     0      0       0
1.1.193.250      38     28       0     0     70      0       2       0     6     0      0       0
all              45     28       0     0     70      0       2       0     6     0      0       0
2017 Sep 27 17:01:55 load: 0.22 read_KB: 467 write_KB: 156
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247      23     21       0     0     23      0       1       0     6     0      0       0
1.1.193.250     115    106       4     0    441    156       4      39     5   538      0     100
all             138    127       4     0    464    156       3      39     5   538      0       0
2017 Sep 27 17:01:56 load: 0.22 read_KB: 485 write_KB: 152
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247      16     13       0     0     22      0       1       0     2     0      0       0
1.1.193.250     133    119       4     0    462    152       3      38     4   427      0     100
all             149    132       4     0    485    152       3      38     3   427      0       0
2017 Sep 27 17:01:57 load: 0.22 read_KB: 804 write_KB: 248
client          ops  reads  writes  nops  rd_bw  wr_bw  ard_sz  awr_sz  rd_t  wr_t  nop_t  align%
1.1.193.247      36     33       1     0    137    104       4     104     6  1064      0     100
1.1.193.250     133    131       2     0    667    144       5      72     5   885      0     100
all             169    164       3     0    804    248       4      82     5
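
A quick way to summarize a capture like this (a sketch; it assumes the output was appended
to /tmp/iscsisvrtop.txt with the command at the top of this mail):

# average per-interval write bandwidth reported by iscsisvrtop
grep 'write_KB:' /tmp/iscsisvrtop.txt |
  awk '{ sum += $NF; n++ } END { printf "samples=%d  avg write_KB per interval=%.1f\n", n, sum / n }'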

Re: [OmniOS-discuss] write amplification zvol

2017-09-27 Thread Artem Penner
Use https://github.com/richardelling/tools/blob/master/iscsisvrtop to
observe iscsi I/O
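
A minimal way to fetch and run it (a sketch; assumes wget is available and that the raw
file path simply mirrors the GitHub URL above):

# download the iscsisvrtop script and take 1-second samples, 30 iterations
wget https://raw.githubusercontent.com/richardelling/tools/master/iscsisvrtop
chmod +x iscsisvrtop
./iscsisvrtop 1 30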


Wed, 27 Sep 2017 at 11:06, anthony omnios:

> Hi,
>
> I have a problem: I use many iSCSI zvols (one for each VM). Network traffic is
> 2 MB/s between the KVM host and the filer, but I write much more than that to
> disk. I use a pool with a separate mirrored ZIL (Intel S3710) and 8 Samsung
> 850 EVO 1 TB SSDs.
>
>  zpool status
>   pool: filervm2
>  state: ONLINE
>   scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017
> config:
>
> NAME   STATE READ WRITE CKSUM
> filervm2   ONLINE   0 0 0
>   mirror-0 ONLINE   0 0 0
> c7t5002538D41657AAFd0  ONLINE   0 0 0
> c7t5002538D41F85C0Dd0  ONLINE   0 0 0
>   mirror-2 ONLINE   0 0 0
> c7t5002538D41CC7105d0  ONLINE   0 0 0
> c7t5002538D41CC7127d0  ONLINE   0 0 0
>   mirror-3 ONLINE   0 0 0
> c7t5002538D41CD7F7Ed0  ONLINE   0 0 0
> c7t5002538D41CD83FDd0  ONLINE   0 0 0
>   mirror-4 ONLINE   0 0 0
> c7t5002538D41CD7F7Ad0  ONLINE   0 0 0
> c7t5002538D41CD7F7Dd0  ONLINE   0 0 0
> logs
>   mirror-1 ONLINE   0 0 0
> c4t2d0 ONLINE   0 0 0
> c4t4d0 ONLINE   0 0 0
>
> I used the correct ashift of 13 for the Samsung 850 EVO.
> zdb|grep ashift :
>
> ashift: 13
> ashift: 13
> ashift: 13
> ashift: 13
> ashift: 13
>
> But I write a lot to the SSDs every 5 seconds (much more than the network
> traffic of 2 MB/s).
>
> iostat -xn -d 1 :
>
>  r/sw/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
>11.0 3067.5  288.3 153457.4  6.8  0.52.20.2   5  14 filervm2
> 0.00.00.00.0  0.0  0.00.00.0   0   0 rpool
> 0.00.00.00.0  0.0  0.00.00.0   0   0 c4t0d0
> 0.00.00.00.0  0.0  0.00.00.0   0   0 c4t1d0
> 0.0  552.60.0 17284.0  0.0  0.10.00.2   0   8 c4t2d0
> 0.0  552.60.0 17284.0  0.0  0.10.00.2   0   8 c4t4d0
> 1.0  233.3   48.1 10051.6  0.0  0.00.00.1   0   3
> c7t5002538D41657AAFd0
> 5.0  250.3  144.2 13207.3  0.0  0.00.00.1   0   3
> c7t5002538D41CC7127d0
> 2.0  254.3   24.0 13207.3  0.0  0.00.00.1   0   4
> c7t5002538D41CC7105d0
> 3.0  235.3   72.1 10051.6  0.0  0.00.00.1   0   3
> c7t5002538D41F85C0Dd0
> 0.0  228.30.0 16178.7  0.0  0.00.00.2   0   4
> c7t5002538D41CD83FDd0
> 0.0  225.30.0 16210.7  0.0  0.00.00.2   0   4
> c7t5002538D41CD7F7Ed0
> 0.0  282.30.0 19991.1  0.0  0.00.00.2   0   5
> c7t5002538D41CD7F7Dd0
> 0.0  280.30.0 19871.0  0.0  0.00.00.2   0   5
> c7t5002538D41CD7F7Ad0
>
> I used a zvol with a 64k volblocksize; I tried 8k and the problem is the same.
>
> zfs get all filervm2/hdd-110022a :
>
> NAME  PROPERTY  VALUE  SOURCE
> filervm2/hdd-110022a  type  volume -
> filervm2/hdd-110022a  creation  Tue May 16 10:24 2017  -
> filervm2/hdd-110022a  used  5.26G  -
> filervm2/hdd-110022a  available 2.90T  -
> filervm2/hdd-110022a  referenced5.24G  -
> filervm2/hdd-110022a  compressratio 3.99x  -
> filervm2/hdd-110022a  reservation   none   default
> filervm2/hdd-110022a  volsize   25Glocal
> filervm2/hdd-110022a  volblocksize  64K-
> filervm2/hdd-110022a  checksum  on default
> filervm2/hdd-110022a  compression   lz4local
> filervm2/hdd-110022a  readonly  offdefault
> filervm2/hdd-110022a  copies1  default
> filervm2/hdd-110022a  refreservationnone   default
> filervm2/hdd-110022a  primarycache  alldefault
> filervm2/hdd-110022a  secondarycachealldefault
> filervm2/hdd-110022a  usedbysnapshots   15.4M  -
> filervm2/hdd-110022a  usedbydataset 5.24G  -
> filervm2/hdd-110022a  usedbychildren0  -
> filervm2/hdd-110022a  usedbyrefreservation  0  -
> filervm2/hdd-110022a  logbias   latencydefault
> filervm2/hdd-110022a  dedup offdefault
> filervm2/hdd-110022a  mlslabel  none   default
>