Re: zpool - low speed write

2010-08-13 Thread Boris Samorodov
Hi!

I'm not sure if the problem was solved, so...

On Thu, 5 Aug 2010 18:49:14 +0800 Alex V. Petrov wrote:

> smartctl -a /dev/ad8

> Device Model: WDC WD10EADS-00M2B0
> Firmware Version: 01.00A01

> 193 Load_Cycle_Count        0x0032   184   184   000    Old_age   Always       -       49237

> smartctl -a /dev/ad10

> Device Model: WDC WD10EADS-00L5B1
> Firmware Version: 01.01A01

> 193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       24

> smartctl -a /dev/ad12

> Device Model: WDC WD10EADS-00M2B0
> Firmware Version: 01.00A01

> 193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       91

From the above info I'd say that you may try to play with the load
cycles (set a bigger delay, etc.) on /dev/ad8.
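
A rough way to watch how fast that counter grows (just a sketch; the
awk field index assumes smartctl's usual attribute table layout):

# sample Load_Cycle_Count twice, ten minutes apart
before=$(smartctl -a /dev/ad8 | awk '/Load_Cycle_Count/ {print $10}')
sleep 600
after=$(smartctl -a /dev/ad8 | awk '/Load_Cycle_Count/ {print $10}')
echo "load cycles added in 10 minutes: $((after - before))"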

-- 
WBR, Boris Samorodov (bsam)
Research Engineer, http://www.ipt.ru Telephone & Internet SP
FreeBSD Committer, http://www.FreeBSD.org The Power To Serve


Re: zpool - low speed write

2010-08-09 Thread Alex V. Petrov
> Can you please remove use of the zpool entirely (e.g. zpool destroy
> tank) and do a write test to each disk itself?  E.g.:
> 
> dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000
> 
> Thanks.

zpool destroy tank

Three dd runs at the same time:

dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 605.173955 secs (108292830 bytes/sec)

dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 759.946393 secs (86237662 bytes/sec)

dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 605.139062 secs (108299074 bytes/sec)



Disks   ad4   ad6   ad8  ad10  ad12   da0   da1
KB/t   0,00 16,00 64,00 64,00 64,00  0,00  0,00
tps       0     1  1667  1329  1646     0     0
MB/s   0,00  0,02   104 83,04   103  0,00  0,00
%busy     0     0    93    94    93     0     0

i.e., the problem is not the controller.

-
Alex V. Petrov


Re: zpool - low speed write

2010-08-08 Thread Alex V. Petrov
> 
> If only one of the dds shows bad throughput, then please:
> 
> - Install ports/sysutils/smartmontools and run smartctl -a /dev/XXX,
>   where XXX is the disk which has bad throughput
> - Try making a ZFS pool with all 3 disks, but then do "zpool offline
>   tank XXX" and then re-attempt the following dd:
>   dd if=/dev/zero of=/tank/test.zero bs=64k count=1000000
>   And see what throughput looks like.
> 

zpool destroy tank

zpool create tank ad8 ad10 ad12

dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 83.219741 secs (37800262 bytes/sec)

zpool offline tank ad10
cannot offline ad10: no valid replicas

zpool destroy tank

zpool create tank ad8 ad12

dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 126.873102 secs (24794286 bytes/sec)

dd if=/dev/zero of=/tank/test.zero bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 1735.680273 secs (37758106 bytes/sec)

zpool destroy tank

zpool create tank ad8

dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 39.550739 secs (79536517 bytes/sec)

dd if=/dev/zero of=/tank/test.zero bs=64k count=100000
100000+0 records in
100000+0 records out
6553600000 bytes transferred in 90.810344 secs (72167990 bytes/sec)

=-O 

-
Alex V. Petrov


Re: zpool - low speed write

2010-08-08 Thread Alex V. Petrov
In a message of 5 August 2010 14:19:59 you wrote:

> 
> Can you please remove use of the zpool entirely (e.g. zpool destroy
> tank) and do a write test to each disk itself?  E.g.:
> 
> dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000
> 
> I don't recommend using large block sizes (e.g. bs=1M, bs=3M).

dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 604.849406 secs (108350937 bytes/sec)

dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 757.755459 secs (86487005 bytes/sec)

dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 604.857282 secs (108349526 bytes/sec)
 
> If all of the above dds show good/decent throughput, then there's
> something strange going on with ZFS.  If this is the case, I would
> recommend filing a PR and posting to freebsd-fs about the problem,
> pointing folks to this thread.
> 
> If all of the dds show bad throughput, then could you please do the
> following:
> 
> - Provide vmstat -i output
> - Install ports/sysutils/smartmontools and run smartctl -a /dev/ad8,
>   smartctl -a /dev/ad10, and smartctl -a /dev/ad12
> 
> If only one of the dds shows bad throughput, then please:
> 
> - Install ports/sysutils/smartmontools and run smartctl -a /dev/XXX,
>   where XXX is the disk which has bad throughput
> - Try making a ZFS pool with all 3 disks, but then do "zpool offline
>   tank XXX" and then re-attempt the following dd:
>   dd if=/dev/zero of=/tank/test.zero bs=64k count=1000000
>   And see what throughput looks like.
> 
> Thanks.

-
Alex V. Petrov


Re: zpool - low speed write

2010-08-08 Thread Alex V. Petrov
In a message of 8 August 2010 22:00:57, Vladislav V. Prodan wrote:
> 05.08.2010 6:43, Alex V. Petrov wrote:
> > Intel® ICH10
> > motherboard Gigabyte GA-EP43-DS3  (rev. 1.0) P43 / Socket 775
> > CPU: Intel(R) Core(TM)2 Quad  CPU   Q8200  @ 2.33GHz (2335.41-MHz
> > K8-class CPU)
> 
> Please show the output of:
> atacontrol mode ada2
> atacontrol mode ada3
> atacontrol mode ada4
> 
> Install /usr/ports/sysutils/smartmontools and show the output of:
> smartctl -x /dev/ada2
> smartctl -x /dev/ada3
> smartctl -x /dev/ada4
> 
> And run the Western Digital Data Lifeguard Diagnostic and check all the
> HDDs (the "screws" :)).

The system is running without ahci now.


atacontrol mode ad8
current mode = UDMA100 SATA 3Gb/s
atacontrol mode ad10
current mode = UDMA100 SATA 3Gb/s
atacontrol mode ad12
current mode = UDMA100 SATA 3Gb/s



smartctl -x /dev/ad8
smartctl 5.39.1 2010-01-28 r3054 [FreeBSD 8.1-STABLE amd64] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green family
Device Model:     WDC WD10EADS-00M2B0
Serial Number:    WD-WCAV51709425
Firmware Version: 01.00A01
User Capacity:    1 000 204 886 016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Sun Aug  8 23:35:53 2010 KRAST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84) Offline data collection activity
was suspended by an interrupting 
command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status:  (  25) The self-test routine was aborted by
the host.
Total time to complete Offline 
data collection: (20400) seconds.
Offline data collection
capabilities:(0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off 
support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine 
recommended polling time:(   2) minutes.
Extended self-test routine
recommended polling time:( 235) minutes.
Conveyance self-test routine
recommended polling time:(   5) minutes.
SCT capabilities:  (0x303f) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   122   109   021    Pre-fail  Always       -       6875
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       72
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   093   093   000    Old_age   Always       -       5636
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       41
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       35
193 Load_Cycle_Count        0x0032   184   184   000    Old_age   Always       -       49237
194 Temperature_Celsius     0x0022   108   104   000    Old_age   Always       -       39
196 Reallocated_Event_Count 0x0032   200   20

Re: zpool - low speed write

2010-08-08 Thread Vladislav V. Prodan
05.08.2010 6:43, Alex V. Petrov wrote:

> Intel® ICH10 
> motherboard Gigabyte GA-EP43-DS3  (rev. 1.0) P43 / Socket 775
> CPU: Intel(R) Core(TM)2 Quad  CPU   Q8200  @ 2.33GHz (2335.41-MHz K8-class 
> CPU)
> 

Please show the output of:
atacontrol mode ada2
atacontrol mode ada3
atacontrol mode ada4

Install /usr/ports/sysutils/smartmontools and show the output of:
smartctl -x /dev/ada2
smartctl -x /dev/ada3
smartctl -x /dev/ada4

And run the Western Digital Data Lifeguard Diagnostic and check all the
HDDs (the "screws" :)).


Re: zpool - low speed write

2010-08-08 Thread Vladislav V. Prodan
08.08.2010 8:57, Alex V. Petrov wrote:
> I think these settings are relevant only for i386, but not for amd64

These settings are exactly for amd64!
They ensure maximum ZFS performance.

On my amd64 + ZFS system, ZFS read/write speed differs by 5-20% from the
physical read/write speed of the individual disks.


Re: zpool - low speed write

2010-08-08 Thread Jeremy Chadwick
On Sun, Aug 08, 2010 at 01:57:09PM +0800, Alex V. Petrov wrote:
> In a message of 8 August 2010 12:13:29, Vladislav V. Prodan wrote:
> > 08.08.2010 6:07, Alex V. Petrov wrote:
> > > sysctl -a | grep vm.kmem
> > > vm.kmem_size_max: 329853485875
> > > 
> > > sysctl -a | grep vfs.zfs.arc
> > > vfs.zfs.arc_meta_used: 13797096
> > > vfs.zfs.arc_max: 858721280
> > > 
> > > 
> > > sysctl -a | grep vm.kvm
> > > vm.kvm_free: 547673337856
> > > vm.kvm_size: 549755809792
> > 
> > Please insert into /boot/loader.conf only these options:
> > 
> > vm.kmem_size="999M"
> > vm.kmem_size_max="999M"
> > vfs.zfs.arc_max="160M"
> > 
> > And after reboot, please run dd...
> 
> I think these settings are relevant only for i386, but not for amd64

They're relevant to both, I can assure you.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: zpool - low speed write

2010-08-07 Thread Alex V. Petrov
In a message of 8 August 2010 12:13:29, Vladislav V. Prodan wrote:
> 08.08.2010 6:07, Alex V. Petrov wrote:
> > sysctl -a | grep vm.kmem
> > vm.kmem_size_max: 329853485875
> > 
> > sysctl -a | grep vfs.zfs.arc
> > vfs.zfs.arc_meta_used: 13797096
> > vfs.zfs.arc_max: 858721280
> > 
> > 
> > sysctl -a | grep vm.kvm
> > vm.kvm_free: 547673337856
> > vm.kvm_size: 549755809792
> 
> Please insert into /boot/loader.conf only these options:
> 
> vm.kmem_size="999M"
> vm.kmem_size_max="999M"
> vfs.zfs.arc_max="160M"
> 
> And after reboot, please run dd...

I think these settings are relevant only for i386, but not for amd64


dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 425.627495 secs (7390801 bytes/sec)

zpool iostat -v 10 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         759G  1,98T      0     43  31,7K  4,37M
  ad12       230G   701G      0     15  10,8K  1,55M
  ad8        244G   684G      0     15  11,6K  1,55M
  ad10       286G   642G      0     13  9,33K  1,27M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         759G  1,98T      0     80      0  3,26M
  ad12       230G   701G      0     27      0  1,24M
  ad8        244G   684G      0     27      0  1,08M
  ad10       286G   642G      0     25      0   962K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         759G  1,98T      0    100  6,39K  3,31M
  ad12       230G   701G      0     34      0  1,23M
  ad8        244G   684G      0     33      0  1,13M
  ad10       286G   642G      0     33  6,39K   970K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         760G  1,98T      0    132      0  7,21M
  ad12       230G   701G      0     41      0  2,23M
  ad8        244G   684G      0     47      0  2,79M
  ad10       286G   642G      0     43      0  2,20M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         760G  1,98T      0    110  6,39K  5,19M
  ad12       230G   701G      0     39      0  2,12M
  ad8        244G   684G      0     36  6,39K  1,67M
  ad10       286G   642G      0     33      0  1,40M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         760G  1,98T      0    100  6,39K  4,81M
  ad12       230G   701G      0     33      0  1,79M
  ad8        244G   684G      0     34      0  1,65M
  ad10       286G   642G      0     32  6,39K  1,37M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         760G  1,98T      0    117      0  6,21M
  ad12       230G   701G      0     41      0  2,23M
  ad8        244G   684G      0     39      0  2,14M
  ad10       286G   642G      0     37      0  1,84M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         760G  1,98T      0    113      0  5,87M
  ad12       230G   701G      0     36      0  1,77M
  ad8        244G   684G      0     40      0  2,19M
  ad10       286G   642G      0     37      0  1,90M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         760G  1,98T      0    102  12,8K  5,19M
  ad12       230G   701G      0     37  6,39K  2,02M
  ad8        244G   684G      0     34      0  1,84M
  ad10       286G   642G      0     30  6,39K  1,32M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         760G  1,98T      0    124      0  6,32M
  ad12       230G   701G      0     43      0  2,31M
  ad8        244G   684G      0     41      0  2,16M
  ad10       286G   642G      0     39      0  1,85M
-

Re: zpool - low speed write

2010-08-07 Thread Vladislav V. Prodan
08.08.2010 6:07, Alex V. Petrov wrote:

> sysctl -a | grep vm.kmem
> vm.kmem_size_max: 329853485875
> 
> sysctl -a | grep vfs.zfs.arc
> vfs.zfs.arc_meta_used: 13797096
> vfs.zfs.arc_max: 858721280
> 

> sysctl -a | grep vm.kvm
> vm.kvm_free: 547673337856
> vm.kvm_size: 549755809792
> 
> 
Please insert into /boot/loader.conf only these options:

vm.kmem_size="999M"
vm.kmem_size_max="999M"
vfs.zfs.arc_max="160M"

And after reboot, please run dd...


Re: zpool - low speed write

2010-08-07 Thread Alex V. Petrov
In a message of 8 August 2010 07:01:02, Vladislav V. Prodan wrote:
> On 04.08.2010 15:08, Alex V. Petrov wrote:
> > Hi All!
> > 
> > $ dd if=/dev/random of=/tank/test bs=3M count=1000
> > 1000+0 records in
> > 1000+0 records out
> > 3145728000 bytes transferred in 298.153293 secs (10550707 bytes/sec)
> > 
> > 
> > Any ideas?
> 
> Please show the output of:
> vmstat -z
> netstat -m
> sysctl -a | grep vm.kmem
> sysctl -a | grep vfs.zfs.arc
> sysctl -a | grep kern.maxvnodes
> sysctl -a | grep vm.kvm

vmstat -z
ITEM SIZE LIMIT  USED  FREE  REQUESTS  FAILURES

UMA Kegs: 208,0,  191,   13,  191,0
UMA Zones:320,0,  191,1,  191,0
UMA Slabs:568,0, 5146, 1301, 24288196,0
UMA RCntSlabs:568,0, 1631, 1015,   329163,0
UMA Hash: 256,0,   79,   11,   82,0
16 Bucket:152,0,   16,  109,  167,0
32 Bucket:280,0,   30,   82,  274,2
64 Bucket:536,0,   49,   63,  793,   89
128 Bucket:  1048,0,  514,   95,   187117,   123359
VM OBJECT:216,0,34974,   213300, 413596412,0
MAP:  232,0,7,   25,7,0
KMAP ENTRY:   120,   150474,  183, 7040, 66175818,0
MAP ENTRY:120,0,24333,10759, 1025909468,0
DP fakepg:120,0,  216,  187,  544,0
SG fakepg:120,0,  667, 1689, 6212,0
mt_zone: 2056,0,  295,   54,  295,0
16:16,0, 3916,13724, 325519968,0
32:32,0, 3598, 1654, 65243323,0
64:64,0,16671,29529, 905000121,0
128:  128,0,12097,30011, 473239749,0
256:  256,0, 3300, 4650, 793867992,0
512:  512,0, 5211, 7690, 54770551,0
1024:1024,0,  239,  697,  5912498,0
2048:2048,0, 1335,  721, 35819870,0
4096:4096,0,  715,  617,  7671562,0
Files: 80,0, 2258, 1162, 323524959,0
TURNSTILE:136,0, 1673,   67, 1677,0
umtx pi:   96,0,0,0,0,0
MAC labels:40,0,0,0,0,0
PROC:1120,0,  187,  824,  6336408,0
THREAD:   984,0, 1236,  436,20593,0
SLEEPQUEUE:80,0, 1673,   67, 1677,0
VMSPACE:  392,0,  159,  831,  6334753,0
cpuset:72,0,2,   98,2,0
audit_record: 952,0,0,0,0,0
mbuf_packet:  256,0,  371,  525, 348284916,0
mbuf: 256,0,  231, 1888, 2247652340,0
mbuf_cluster:2048,25600,  896,  804,   188084,0
mbuf_jumbo_page: 4096,12800,   27,  754, 593845769,0
mbuf_jumbo_9k:   9216, 6400,0,0,0,0
mbuf_jumbo_16k: 16384, 3200,0,0,0,0
mbuf_ext_refcnt:4,0,0,  672,   774185,0
g_bio:232,0,   20, 1644, 54750899,0
ttyinq:   160,0,  330,  126, 1275,0
ttyoutq:  256,0,  168,   87,  632,0
ata_request:  320,0,   10,  782, 21908538,0
ata_composite:336,0,0,0,0,0
nv_stack_t: 12288,0,   10,   10,   22,0
VNODE:472,0,32144,63544, 25299060,0
VNODEPOLL:112,0,   63,  135,   65,0
S VFS Cache:  108,0,23801,   111928, 38399522,0
L VFS Cache:  328,0, 4752, 1368,  2652656,0
NAMEI:   1024,0,0,  764, 1639822943,0
NFSMOUNT: 616,0,0,0,0,0
NFSNODE:  656,0,0,0,0,   

Re: zpool - low speed write

2010-08-07 Thread Vladislav V. Prodan
On 04.08.2010 15:08, Alex V. Petrov wrote:
> Hi All!
>
> $ dd if=/dev/random of=/tank/test bs=3M count=1000
> 1000+0 records in
> 1000+0 records out
> 3145728000 bytes transferred in 298.153293 secs (10550707 bytes/sec)
>
>
> Any ideas?


Please show the output of:
vmstat -z
netstat -m
sysctl -a | grep vm.kmem
sysctl -a | grep vfs.zfs.arc
sysctl -a | grep kern.maxvnodes
sysctl -a | grep vm.kvm


Re: zpool - low speed write

2010-08-07 Thread Marek 'Buki' Kozlovský
On Sat, Aug 07, 2010 at 01:51:21PM +0200, Ivan Voras wrote:
> On 5.8.2010 6:47, Alex V. Petrov wrote:
> 
> > camcontrol identify ada2
> > pass2: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
> 
> Aren't those 4k sector drives?

no, 4k drives have an 'R' in the model name (i.e. EARS and AARS models)
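
a quick cross-check is the camcontrol output quoted elsewhere in this
thread (only a hint though, since early 4k models emulate and report
512-byte sectors):

camcontrol identify ada2 | grep "sector size"
# the EADS drives here report: sector size  logical 512, physical 512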

> To verify this hypothesis though, you will have to destroy the zpool, use
> gnop to create a virtual 4k sector drive for each physical drive and try
> testing everything again, using these new virtual drives.
> 
> Unfortunately, if this is the case, it will be troublesome to find a
> production solution just yet. I have an idea but no time to try it.

Buki




Re: zpool - low speed write

2010-08-07 Thread Artem Belevich
On Sat, Aug 7, 2010 at 4:51 AM, Ivan Voras  wrote:
> On 5.8.2010 6:47, Alex V. Petrov wrote:
>
>> camcontrol identify ada2
>> pass2: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
>
> Aren't those 4k sector drives?

EADS drives use regular 512-byte sectors AFAIK. It's EA*R*S models
that use 4K sectors.

--Artem


Re: zpool - low speed write

2010-08-07 Thread Ivan Voras
On 5.8.2010 6:47, Alex V. Petrov wrote:

> camcontrol identify ada2
> pass2: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device

Aren't those 4k sector drives?

To verify this hypothesis though, you will have to destroy the zpool, use
gnop to create a virtual 4k sector drive for each physical drive and try
testing everything again, using these new virtual drives.

Unfortunately, if this is the case, it will be troublesome to find a
production solution just yet. I have an idea but no time to try it.
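
A sketch of how that experiment could look (it destroys the pool, so
the data must be backed up first; the dd size is only an example --
ZFS picks up the 4k sector size from the .nop providers at creation):

zpool destroy tank
gnop create -S 4096 /dev/ada2
gnop create -S 4096 /dev/ada3
gnop create -S 4096 /dev/ada4
zpool create tank ada2.nop ada3.nop ada4.nop
dd if=/dev/zero of=/tank/test.zero bs=64k count=100000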



Re: zpool - low speed write

2010-08-05 Thread Artem Belevich
On Wed, Aug 4, 2010 at 9:47 PM, Alex V. Petrov  wrote:
...
>> > vfs.zfs.cache_flush_disable=1
>> > vfs.zfs.zil_disable=1
>>
>> I question both of these settings, especially the latter.  Please remove
>> them both and re-test your write performance.
>
> I removed all the ZFS settings.
> Now it's at defaults.
>

ZFS will throttle writes if it thinks that not enough memory is
available. Did you by any chance tinker with VM parameters, too? Could
you post the output of the following commands?

sysctl vm |grep kmem
sysctl vfs.zfs
sysctl kstat.zfs  (before and after you do some of your write speed tests)
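
For the kstat part, something like this would capture the before/after
snapshots (file names and the dd size are just examples):

sysctl kstat.zfs > /tmp/kstat.before
dd if=/dev/zero of=/tank/test.zero bs=64k count=100000
sysctl kstat.zfs > /tmp/kstat.after
diff -u /tmp/kstat.before /tmp/kstat.after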

--Artem


Re: zpool - low speed write

2010-08-05 Thread Alex V. Petrov
> 
> Can you please remove use of the zpool entirely (e.g. zpool destroy
> tank) and do a write test to each disk itself?  E.g.:
> 
> dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000

I don't have enough free space to move my data off the zpool (1,91T).
 
> I don't recommend using large block sizes (e.g. bs=1M, bs=3M).

dd if=/dev/zero of=/tank/test.zero bs=64k count=10000
10000+0 records in
10000+0 records out
655360000 bytes transferred in 50.294832 secs (13030365 bytes/sec)
 
> If all of the above dds show good/decent throughput, then there's
> something strange going on with ZFS.  If this is the case, I would
> recommend filing a PR and posting to freebsd-fs about the problem,
> pointing folks to this thread.
> 
> If all of the dds show bad throughput, then could you please do the
> following:
> 
> - Provide vmstat -i output

vmstat -i output
interrupt                          total       rate
irq1: atkbd0                        2368          0
irq6: fdc0                            17          0
irq16: vgapci0 ath+              1728264        100
irq18: uhci2 ehci0*              2183829        127
irq19: uhci4+                     427434         24
irq21: uhci1                       42295          2
irq23: uhci3 ehci1                 18154          1
cpu0: timer                     34317326       1997
irq256: hdac0                    1561005         90
irq257: re0                      2458465        143
cpu1: timer                     34316042       1997
cpu3: timer                     34316081       1997
cpu2: timer                     34316130       1997
Total                          145687410       8482

> - Install ports/sysutils/smartmontools and run smartctl -a /dev/ad8,
>   smartctl -a /dev/ad10, and smartctl -a /dev/ad12

In the first message I wrote that smartmontools is installed.

smartd daily output:
Checking health of /dev/ada2: OK
Checking health of /dev/ada3: OK
Checking health of /dev/ada4: OK

There are no error messages in the logs about the controller or drives.

smartctl -a /dev/ad8
smartctl 5.39.1 2010-01-28 r3054 [FreeBSD 8.1-STABLE amd64] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Green family
Device Model:     WDC WD10EADS-00M2B0
Serial Number:    WD-WCAV51709425
Firmware Version: 01.00A01
User Capacity:    1 000 204 886 016 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   8
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Thu Aug  5 18:42:22 2010 KRAST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status:  (  25) The self-test routine was aborted by
the host.
Total time to complete Offline 
data collection: (20400) seconds.
Offline data collection
capabilities:(0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off 
support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities:(0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability:(0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine 
recommended polling time:(   2) minutes.
Extended self-test routine
recommended polling time:( 235) minutes.
Conveyance self-test routine
recommended polling time:(   5) minutes.
SCT capabilities:  (0x303f) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -

Re: zpool - low speed write

2010-08-04 Thread Jeremy Chadwick
On Thu, Aug 05, 2010 at 02:09:57PM +0800, Alex V. Petrov wrote:
> In a message of 5 August 2010 13:35:04 you wrote:
> > Write performance here is abysmal, agreed.  This is very odd.
> > 
> > I hate to say this, but can you remove ahci.ko (ahci_load="yes") from
> > your loader.conf and reboot?  You may need to change filesystem names
> > around in /etc/fstab for your OS disk (assuming it's on ada0), but for
> > ZFS it should just magically find the disks on adXX.
> > 
> > If you could also provide pciconf -lvc output that would be helpful.
> > Thanks.
> 
> dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
> 1000+0 records in
> 1000+0 records out
> 3145728000 bytes transferred in 485.431690 secs (6480269 bytes/sec)

Can you please remove use of the zpool entirely (e.g. zpool destroy
tank) and do a write test to each disk itself?  E.g.:

dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000

I don't recommend using large block sizes (e.g. bs=1M, bs=3M).

If all of the above dds show good/decent throughput, then there's
something strange going on with ZFS.  If this is the case, I would
recommend filing a PR and posting to freebsd-fs about the problem,
pointing folks to this thread.

If all of the dds show bad throughput, then could you please do the
following:

- Provide vmstat -i output
- Install ports/sysutils/smartmontools and run smartctl -a /dev/ad8,
  smartctl -a /dev/ad10, and smartctl -a /dev/ad12

If only one of the dds shows bad throughput, then please:

- Install ports/sysutils/smartmontools and run smartctl -a /dev/XXX,
  where XXX is the disk which has bad throughput
- Try making a ZFS pool with all 3 disks, but then do "zpool offline
  tank XXX" and then re-attempt the following dd:
  dd if=/dev/zero of=/tank/test.zero bs=64k count=1000000
  And see what throughput looks like.

Thanks.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: zpool - low speed write

2010-08-04 Thread Alex V. Petrov
In a message of 5 August 2010 13:35:04 you wrote:
> Write performance here is abysmal, agreed.  This is very odd.
> 
> I hate to say this, but can you remove ahci.ko (ahci_load="yes") from
> your loader.conf and reboot?  You may need to change filesystem names
> around in /etc/fstab for your OS disk (assuming it's on ada0), but for
> ZFS it should just magically find the disks on adXX.
> 
> If you could also provide pciconf -lvc output that would be helpful.
> Thanks.

dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 485.431690 secs (6480269 bytes/sec)

zpool iostat -v 10 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G      0     64  49,7K  4,73M
  ad12       598G   333G      0     22  19,9K  1,71M
  ad8        633G   295G      0     21  13,8K  1,65M
  ad10       727G   201G      0     19  15,9K  1,36M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G      0    116      0  5,91M
  ad12       598G   333G      0     39      0  2,20M
  ad8        633G   295G      0     39      0  2,02M
  ad10       727G   201G      0     37      0  1,68M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G      0    140      0  9,09M
  ad12       598G   333G      0     51      0  3,69M
  ad8        633G   295G      0     45      0  2,94M
  ad10       727G   201G      0     43      0  2,46M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G      0    133      0  7,66M
  ad12       598G   333G      0     46      0  2,84M
  ad8        633G   295G      0     44      0  2,59M
  ad10       727G   201G      0     43      0  2,23M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G      0    133  6,39K  5,84M
  ad12       598G   333G      0     47  6,39K  2,34M
  ad8        633G   295G      0     43      0  1,83M
  ad10       727G   201G      0     42      0  1,67M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G      0    113      0  5,54M
  ad12       598G   333G      0     39      0  1,97M
  ad8        633G   295G      0     37      0  1,98M
  ad10       727G   201G      0     35      0  1,59M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G      0    152      0  10,1M
  ad12       598G   333G      0     52      0  3,41M
  ad8        633G   295G      0     52      0  3,65M
  ad10       727G   201G      0     47      0  3,06M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   828G      0    116      0  5,61M
  ad12       598G   333G      0     41      0  2,16M
  ad8        633G   295G      0     40      0  1,95M
  ad10       727G   201G      0     34      0  1,50M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   828G      0    176      0  11,1M
  ad12       598G   333G      0     60      0  3,78M
  ad8        634G   294G      0     60      0  3,95M
  ad10       727G   201G      0     55      0  3,35M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   828G      0    112      0  7,55M
  ad12       598G   333G      0     39      0  2,73M
  ad8        634G   294G      0     39      0  2,66M
  ad10       727G   201G      0     33      0  2,15M
----------  -----  -----  -----  -----  -----  -----

pciconf -lvc
hos...@pci0:0:0:0:  class=0x06 card=0x50001458 chip=0x2e208086 
rev=0x02 hdr=0x00

Re: zpool - low speed write

2010-08-04 Thread Jeremy Chadwick
On Thu, Aug 05, 2010 at 12:47:42PM +0800, Alex V. Petrov wrote:
> > Your ada3 disk is different from the other two.  Can you please provide
> > the output from the following 3 commands?
> > 
> > camcontrol identify ada2
> > camcontrol identify ada3
> > camcontrol identify ada4
> > 
> > > vfs.zfs.cache_flush_disable=1
> > > vfs.zfs.zil_disable=1
> > 
> > I question both of these settings, especially the latter.  Please remove
> > them both and re-test your write performance.
> 
> I removed all the ZFS settings.
> Now it's at defaults.
> 
> camcontrol identify ada2
> pass2: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
> pass2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
> 
> protocol  ATA/ATAPI-8 SATA 2.x
> device model  WDC WD10EADS-00M2B0
> firmware revision 01.00A01
> serial number WD-WCAV51709425
> WWN   50014ee2adf88aae
> cylinders 16383
> heads 16
> sectors/track 63
> sector size   logical 512, physical 512, offset 0
> LBA supported 268435455 sectors
> LBA48 supported   1953525168 sectors
> PIO supported PIO4
> DMA supported WDMA2 UDMA6 
> 
> Feature  Support  EnableValue   Vendor
> read ahead yes  yes
> write cacheyes  yes
> flush cacheyes  yes
> overlapno
> Tagged Command Queuing (TCQ)   no   no
> Native Command Queuing (NCQ)   yes  32 tags
> SMART  yes  yes
> microcode download yes  yes
> security   yes  no
> power management   yes  yes
> advanced power management  no   no  0/0x00
> automatic acoustic management  yes  no  254/0xFE    128/0x80
> media status notification  no   no
> power-up in Standbyyes  no
> write-read-verify  no   no  0/0x0
> unload no   no
> free-fall  no   no
> data set management (TRIM) no
> 
> *
> 
> camcontrol identify ada3
> pass3: <WDC WD10EADS-00L5B1 01.01A01> ATA-8 SATA 2.x device
> pass3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
> 
> protocol  ATA/ATAPI-8 SATA 2.x
> device model  WDC WD10EADS-00L5B1
> firmware revision 01.01A01
> serial number WD-WCAU4D726772
> WWN   50014ee238ab988
> cylinders 16383
> heads 16
> sectors/track 63
> sector size   logical 512, physical 512, offset 0
> LBA supported 268435455 sectors
> LBA48 supported   1953525168 sectors
> PIO supported PIO4
> DMA supported WDMA2 UDMA6 
> 
> Feature  Support  EnableValue   Vendor
> read ahead yes  yes
> write cacheyes  yes
> flush cacheyes  yes
> overlapno
> Tagged Command Queuing (TCQ)   no   no
> Native Command Queuing (NCQ)   yes  32 tags
> SMART  yes  yes
> microcode download yes  yes
> security   yes  no
> power management   yes  yes
> advanced power management  no   no  0/0x00
> automatic acoustic management  yes  no  254/0xFE    128/0x80
> media status notification  no   no
> power-up in Standbyyes  no
> write-read-verify  no   no  0/0x0
> unload no   no
> free-fall  no   no
> data set management (TRIM) no
> 
> *
> 
> camcontrol identify ada4
> pass4: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
> pass4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
> 
> protocol  ATA/ATAPI-8 SATA 2.x
> device model  WDC WD10EADS-00M2B0
> firmware revision 01.00A01
> serial number WD-WMAV50095864
> WWN   50014ee014f3265
> cylinders 16383
> heads 16
> sectors/track 63
> sector size   logical 512, physical 512, offset 0
> LBA supported 268435455 sectors
> LBA48 supported   1953525168 sectors
> PIO supported PIO4
> DMA supported WDMA2 UDMA6 
> 
> Feature  Support  EnableValue   Vendor
> read ahead yes  yes
> write cacheyes  yes
> flush cacheyes  yes
> overlapno
> Tagged Command Queuing (TCQ)   no   no
> Native Command Queuing (NCQ)   yes  32 tags
> SMART  yes  yes
> microcode download yes  yes
> security   yes  no
> power management   yes  yes
> advanced power management  no   no  0/0x00
> automatic acoustic management  yes  no  254/0xFE    128/0x80
>

Re: zpool - low speed write

2010-08-04 Thread Alex V. Petrov
> Your ada3 disk is different from the other two.  Can you please provide
> the output from the following 3 commands?
> 
> camcontrol identify ada2
> camcontrol identify ada3
> camcontrol identify ada4
> 
> > vfs.zfs.cache_flush_disable=1
> > vfs.zfs.zil_disable=1
> 
> I question both of these settings, especially the latter.  Please remove
> them both and re-test your write performance.

I removed all the ZFS settings.
Now it's at defaults.

camcontrol identify ada2
pass2: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
pass2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

protocol  ATA/ATAPI-8 SATA 2.x
device model  WDC WD10EADS-00M2B0
firmware revision 01.00A01
serial number WD-WCAV51709425
WWN   50014ee2adf88aae
cylinders 16383
heads 16
sectors/track 63
sector size   logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported   1953525168 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6 

Feature  Support  EnableValue   Vendor
read ahead yes  yes
write cacheyes  yes
flush cacheyes  yes
overlapno
Tagged Command Queuing (TCQ)   no   no
Native Command Queuing (NCQ)   yes  32 tags
SMART  yes  yes
microcode download yes  yes
security   yes  no
power management   yes  yes
advanced power management  no   no  0/0x00
automatic acoustic management  yes  no  254/0xFE    128/0x80
media status notification  no   no
power-up in Standbyyes  no
write-read-verify  no   no  0/0x0
unload no   no
free-fall  no   no
data set management (TRIM) no

*

camcontrol identify ada3
pass3: <WDC WD10EADS-00L5B1 01.01A01> ATA-8 SATA 2.x device
pass3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

protocol  ATA/ATAPI-8 SATA 2.x
device model  WDC WD10EADS-00L5B1
firmware revision 01.01A01
serial number WD-WCAU4D726772
WWN   50014ee238ab988
cylinders 16383
heads 16
sectors/track 63
sector size   logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported   1953525168 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6 

Feature  Support  EnableValue   Vendor
read ahead yes  yes
write cacheyes  yes
flush cacheyes  yes
overlapno
Tagged Command Queuing (TCQ)   no   no
Native Command Queuing (NCQ)   yes  32 tags
SMART  yes  yes
microcode download yes  yes
security   yes  no
power management   yes  yes
advanced power management  no   no  0/0x00
automatic acoustic management  yes  no  254/0xFE    128/0x80
media status notification  no   no
power-up in Standbyyes  no
write-read-verify  no   no  0/0x0
unload no   no
free-fall  no   no
data set management (TRIM) no

*

camcontrol identify ada4
pass4: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
pass4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)

protocol  ATA/ATAPI-8 SATA 2.x
device model  WDC WD10EADS-00M2B0
firmware revision 01.00A01
serial number WD-WMAV50095864
WWN   50014ee014f3265
cylinders 16383
heads 16
sectors/track 63
sector size   logical 512, physical 512, offset 0
LBA supported 268435455 sectors
LBA48 supported   1953525168 sectors
PIO supported PIO4
DMA supported WDMA2 UDMA6 

Feature  Support  EnableValue   Vendor
read ahead yes  yes
write cacheyes  yes
flush cacheyes  yes
overlapno
Tagged Command Queuing (TCQ)   no   no
Native Command Queuing (NCQ)   yes  32 tags
SMART  yes  yes
microcode download yes  yes
security   yes  no
power management   yes  yes
advanced power management  no   no  0/0x00
automatic acoustic management  yes  no  254/0xFE    128/0x80
media status notification  no   no
power-up in Standbyyes  no
write-read-verify  no   no  0/0x0
unload no   no
free-fall  no   no
data set management (TRIM) no

*

dd if=/dev/zero of=/tank/test.zero bs=3M co

Re: zpool - low speed write

2010-08-04 Thread Jeremy Chadwick
On Wed, Aug 04, 2010 at 08:08:24PM +0800, Alex V. Petrov wrote:
> ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
> ada2: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
> ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
> ada2: Command Queueing enabled
> ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
> ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
> ada3: <WDC WD10EADS-00L5B1 01.01A01> ATA-8 SATA 2.x device
> ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
> ada3: Command Queueing enabled
> ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
> ada4 at ahcich4 bus 0 scbus4 target 0 lun 0
> ada4: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
> ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
> ada4: Command Queueing enabled

Your ada3 disk is different from the other two.  Can you please provide
the output from the following 3 commands?

camcontrol identify ada2
camcontrol identify ada3
camcontrol identify ada4

> vfs.zfs.cache_flush_disable=1
> vfs.zfs.zil_disable=1

I question both of these settings, especially the latter.  Please remove
them both and re-test your write performance.

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: zpool - low speed write

2010-08-04 Thread Jeremy Chadwick
On Wed, Aug 04, 2010 at 10:13:36PM -0400, Joshua Boyd wrote:
> On Wed, Aug 4, 2010 at 10:13 AM, Alex V. Petrov wrote:
> 
> > interesting results:
> >
> > From single-UDF-disk to pool:
> > $ dd if=petrovs-disk1.iso of=/tank/petrovs-disk1.iso bs=1M
> > 3545+1 records in
> > 3545+1 records out
> > 3718002688 bytes transferred in 438.770195 secs (8473690 bytes/sec)
> >
> > From single-UDF-disk to null:
> > $ dd if=petrovs-disk1.iso of=/dev/null bs=1M
> > 3545+1 records in
> > 3545+1 records out
> > 3718002688 bytes transferred in 83.304575 secs (44631435 bytes/sec)
> > --
> > Alex V. Petrov
> 
> What controllers are you using?
> 
> What are the results of dd if=/dev/ada4 of=/dev/null bs=1M count=100 ?

His problem is with writes, not reads.

I strongly doubt his problem is with the controller (Intel ICHxx and
ESBxx controllers are heavily tested on FreeBSD, both with and without
AHCI, including ahci.ko).

-- 
| Jeremy Chadwick   j...@parodius.com |
| Parodius Networking   http://www.parodius.com/ |
| UNIX Systems Administrator  Mountain View, CA, USA |
| Making life hard for others since 1977.  PGP: 4BD6C0CB |



Re: zpool - low speed write

2010-08-04 Thread Alex V. Petrov
> 
> What controllers are you using?
> 
> What are the results of dd if=/dev/ada4 of=/dev/null bs=1M count=100 ?
> 
> Have you tried switching to the ad driver? Maybe ada is buggy on your
> hardware.

$ dd if=/dev/ada4 of=/dev/null bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes transferred in 1.399283 secs (74936655 bytes/sec)

Intel® ICH10 
motherboard Gigabyte GA-EP43-DS3  (rev. 1.0) P43 / Socket 775
CPU: Intel(R) Core(TM)2 Quad  CPU   Q8200  @ 2.33GHz (2335.41-MHz K8-class 
CPU)

I'll try switching to ad later.

-
Alex V. Petrov


Re: zpool - low speed write

2010-08-04 Thread Joshua Boyd
On Wed, Aug 4, 2010 at 10:13 AM, Alex V. Petrov wrote:

> interesting results:
>
> From single-UDF-disk to pool:
> $ dd if=petrovs-disk1.iso of=/tank/petrovs-disk1.iso bs=1M
> 3545+1 records in
> 3545+1 records out
> 3718002688 bytes transferred in 438.770195 secs (8473690 bytes/sec)
>
> From single-UDF-disk to null:
> $ dd if=petrovs-disk1.iso of=/dev/null bs=1M
> 3545+1 records in
> 3545+1 records out
> 3718002688 bytes transferred in 83.304575 secs (44631435 bytes/sec)
> --
> Alex V. Petrov

What controllers are you using?

What are the results of dd if=/dev/ada4 of=/dev/null bs=1M count=100 ?

Have you tried switching to the ad driver? Maybe ada is buggy on your
hardware.

-- 
Joshua Boyd
JBipNet

E-mail: boy...@jbip.net

http://www.jbip.net


Re: zpool - low speed write

2010-08-04 Thread Alex V. Petrov
In a message of 4 August 2010 22:40:22 you wrote:
> Try booting with the following in /boot/loader.conf:
> vfs.zfs.vdev.max_pending="10"
> vfs.zfs.txg.write_limit_override=268435456
> 
> And remove setting:
> vfs.zfs.cache_flush_disable=1
> 
> Then try a dd from /dev/zero.

OK.

dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 394.974934 secs (7964374 bytes/sec)

During the dd run:

zpool iostat -v 10 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   819G      0     52  53,5K  4,09M
  ada4       601G   330G      0     18  20,2K  1,47M
  ada2       637G   291G      0     18  17,3K  1,45M
  ada3       730G   198G      0     16  15,9K  1,18M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   821G     13    125  1,08M  9,22M
  ada4       600G   331G      4     43   390K  3,25M
  ada2       636G   292G      5     43   371K  3,32M
  ada3       729G   199G      3     38   345K  2,64M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   821G      0    183      0  15,2M
  ada4       600G   331G      0     63      0  5,28M
  ada2       636G   292G      0     64      0  5,58M
  ada3       729G   199G      0     55      0  4,37M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   821G      0    177      0  14,6M
  ada4       600G   331G      0     62      0  5,29M
  ada2       636G   292G      0     60      0  5,14M
  ada3       729G   199G      0     53      0  4,17M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   821G      0    193      0  15,6M
  ada4       601G   330G      0     65      0  5,41M
  ada2       636G   292G      0     68      0  5,60M
  ada3       729G   199G      0     59      0  4,60M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   821G      0    178      0  12,8M
  ada4       601G   330G      0     61      0  4,45M
  ada2       636G   292G      0     63      0  4,73M
  ada3       729G   199G      0     53      0  3,65M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   821G      0    190      0  14,1M
  ada4       601G   330G      0     68      0  5,24M
  ada2       636G   292G      0     64      0  4,88M
  ada3       729G   199G      0     57      0  3,95M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   821G      4    178   269K  14,4M
  ada4       601G   330G      1     64   122K  5,47M
  ada2       636G   292G      1     60  77,2K  4,86M
  ada3       729G   199G      0     53  70,3K  4,03M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   820G      5    128   327K  7,26M
  ada4       601G   330G      1     39   128K  2,02M
  ada2       636G   292G      1     46   109K  2,95M
  ada3       730G   198G      1     41  90,5K  2,29M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   820G      9    181   541K  10,3M
  ada4       601G   330G      3     65   181K  4,07M
  ada2       636G   292G      2     59   192K  3,44M
  ada3       730G   198G      2     56   168K  2,78M
----------  -----  -----  -----  -----  -----  -----


-
Alex V. Petrov


Re: zpool - low speed write

2010-08-04 Thread Martin Matuska
Try booting with the following in /boot/loader.conf:
vfs.zfs.vdev.max_pending="10"
vfs.zfs.txg.write_limit_override=268435456

And remove setting:
vfs.zfs.cache_flush_disable=1

Then try a dd from /dev/zero.
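
In other words, the ZFS-related lines in /boot/loader.conf would end up
like this (a sketch; the rest of the file stays untouched):

# removed: vfs.zfs.cache_flush_disable=1
vfs.zfs.vdev.max_pending="10"
vfs.zfs.txg.write_limit_override=268435456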

Cheers,
mm

On 4. 8. 2010 16:13, Alex V. Petrov wrote:
> interesting results:
>
> From single-UDF-disk to pool:
> $ dd if=petrovs-disk1.iso of=/tank/petrovs-disk1.iso bs=1M
> 3545+1 records in
> 3545+1 records out
> 3718002688 bytes transferred in 438.770195 secs (8473690 bytes/sec)
>
> From single-UDF-disk to null:
> $ dd if=petrovs-disk1.iso of=/dev/null bs=1M
> 3545+1 records in
> 3545+1 records out
> 3718002688 bytes transferred in 83.304575 secs (44631435 bytes/sec)
> --
> Alex V. Petrov


Re: zpool - low speed write

2010-08-04 Thread Alex V. Petrov
In a message of 4 August 2010 20:45:26 you wrote:
> on 04/08/2010 15:08 Alex V. Petrov said the following:
> > Hi All!
> > 
> > $ dd if=/dev/random of=/tank/test bs=3M count=1000
> 
> /dev/random is slow.

On a single disk (UFS) the write speed is faster:

$ dd if=/dev/random of=/home/alex/temp/test bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 75.778427 secs (41512184 bytes/sec)

Result for /dev/zero:

$ dd if=/dev/zero of=/tank/test bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 113.110421 secs (27811124 bytes/sec)

In Krusader, the speed of copying files from a single disk to the pool is about 8 MB/s.

Fragment of `systat -v 1` while copying:

Disks  ada0  ada1  ada2  ada3  ada4
KB/t   0,00   122 44,35 32,89 48,50
tps       0    54    56    53    54
MB/s   0,00  6,42  2,42  1,70  2,55
%busy     0    11     7     4    82

-
Alex V. Petrov


Re: zpool - low speed write

2010-08-04 Thread Alex V. Petrov
In a message of 4 August 2010 21:09:23 you wrote:
> Hi Alex
> 

> The first output of zpool iostat shows you only statistics accumulated
> over some period (unknown to me right now).
> Have you tried something like:
> zpool iostat -v 10 10
> It shows a first summary of statistics, followed by nine further outputs
> at 10-second intervals, each averaged over the preceding interval.
> 
Result for one copy process (Krusader),
single HDD to pool:

zpool iostat -v 10 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   826G     98      5  12,0M   206K
  ada4       599G   332G     30      1  3,69M  74,4K
  ada2       634G   294G     31      1  3,90M  72,5K
  ada3       728G   200G     36      1  4,44M  59,6K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   826G      0    172      0  7,96M
  ada4       599G   332G      0     58      0  2,88M
  ada2       634G   294G      0     58      0  2,70M
  ada3       728G   200G      0     55      0  2,38M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   826G      0    166      0  8,06M
  ada4       599G   332G      0     56      0  2,99M
  ada2       634G   294G      0     56      0  2,81M
  ada3       728G   200G      0     53      0  2,26M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   826G      0    162      0  7,61M
  ada4       599G   332G      0     54      0  2,64M
  ada2       634G   294G      0     56      0  2,80M
  ada3       728G   200G      0     52      0  2,18M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   826G      0    145      0  6,27M
  ada4       599G   332G      0     47      0  2,14M
  ada2       634G   294G      0     51      0  2,29M
  ada3       728G   200G      0     46      0  1,84M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   826G      0    130      0  6,37M
  ada4       599G   332G      0     45      0  2,52M
  ada2       634G   294G      0     44      0  2,09M
  ada3       728G   200G      0     40      0  1,77M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   826G      0    143      0  6,68M
  ada4       599G   332G      0     48      0  2,43M
  ada2       634G   294G      0     49      0  2,39M
  ada3       728G   200G      0     45      0  1,86M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   826G      0    147      0  7,03M
  ada4       599G   332G      0     49      0  2,63M
  ada2       634G   294G      0     49      0  2,39M
  ada3       728G   200G      0     48      0  2,02M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   826G      0    176      0  7,61M
  ada4       599G   332G      0     58      0  2,89M
  ada2       634G   294G      0     60      0  2,61M
  ada3       728G   200G      0     56      0  2,10M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,92T   826G      0    142      0  7,27M
  ada4       599G   332G      0     47      0  2,54M
  ada2       634G   294G      0     50      0  2,67M
  ada3       728G   200G      0     45      0  2,06M
----------  -----  -----  -----  -----  -----  -----

At the same time:
dd if=/dev/zero of=/tank/test.zero bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 91.863862 secs (34243368 bytes/sec)

zpool iostat -v 10 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
--  -  ---

Re: zpool - low speed write

2010-08-04 Thread Alex V. Petrov
interesting results:

From single-UDF-disk to pool:
$ dd if=petrovs-disk1.iso of=/tank/petrovs-disk1.iso bs=1M
3545+1 records in
3545+1 records out
3718002688 bytes transferred in 438.770195 secs (8473690 bytes/sec)

From single-UDF-disk to null:
$ dd if=petrovs-disk1.iso of=/dev/null bs=1M
3545+1 records in
3545+1 records out
3718002688 bytes transferred in 83.304575 secs (44631435 bytes/sec)
--
Alex V. Petrov


Re: zpool - low speed write

2010-08-04 Thread Stefan Huerter
Hi Alex

> zpool iostat -v
>                capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> tank        1,91T   829G    101      4  12,4M   148K
>   ada4       597G   334G     31      1  3,81M  53,4K
>   ada2       633G   295G     32      1  4,03M  51,8K
>   ada3       727G   201G     37      1  4,59M  42,9K
> ----------  -----  -----  -----  -----  -----  -----

only this command?
The first output of zpool iostat shows you only statistics accumulated
over some period (unknown to me right now).
Have you tried something like:
zpool iostat -v 10 10
It shows a first summary of statistics, followed by nine further outputs
at 10-second intervals, each averaged over the preceding interval.

Bye
   Stefan


Re: zpool - low speed write

2010-08-04 Thread Marco van Tol
On Wed, Aug 04, 2010 at 03:45:26PM +0300, Andriy Gapon wrote:
> on 04/08/2010 15:08 Alex V. Petrov said the following:
> > Hi All!
> > 
> > $ dd if=/dev/random of=/tank/test bs=3M count=1000
> 
> /dev/random is slow.

For comparing, try to see what happens with /dev/zero. :)

-- 
If it can't go the way it should, then it has to go the way it can.


Re: zpool - low speed write

2010-08-04 Thread Andriy Gapon
on 04/08/2010 15:08 Alex V. Petrov said the following:
> Hi All!
> 
> $ dd if=/dev/random of=/tank/test bs=3M count=1000

/dev/random is slow.

-- 
Andriy Gapon


zpool - low speed write

2010-08-04 Thread Alex V. Petrov
Hi All!

$ dd if=/dev/random of=/tank/test bs=3M count=1000
1000+0 records in
1000+0 records out
3145728000 bytes transferred in 298.153293 secs (10550707 bytes/sec)

Which is, I think, very, very low :-(

FreeBSD alex.super 8.1-STABLE FreeBSD 8.1-STABLE #76: Mon Aug  2 20:19:09 
KRAST 2010 a...@alex.super:/usr/obj/usr/src/sys/ALEX  amd64

real memory  = 4294967296 (4096 MB)
avail memory = 4098732032 (3908 MB)

zpool iostat -v
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank        1,91T   829G    101      4  12,4M   148K
  ada4       597G   334G     31      1  3,81M  53,4K
  ada2       633G   295G     32      1  4,03M  51,8K
  ada3       727G   201G     37      1  4,59M  42,9K
----------  -----  -----  -----  -----  -----  -----

zpool status -v
  pool: tank
 state: ONLINE
 scrub: scrub completed after 7h12m with 0 errors on Tue Aug  3 04:54:14 2010
config:

NAMESTATE READ WRITE CKSUM
tankONLINE   0 0 0
  ada4  ONLINE   0 0 0
  ada2  ONLINE   0 0 0
  ada3  ONLINE   0 0 0

errors: No known data errors

zpool history
History for 'tank':
2009-07-16.19:46:24 zpool create tank ad12
2009-12-13.14:58:46 zpool add tank ad8
2010-04-24.01:59:41 zpool upgrade tank
2010-05-09.02:16:34 zpool add tank ada3
2010-05-25.17:57:12 zpool scrub tank
2010-06-27.16:02:45 zpool scrub tank
2010-08-02.21:41:53 zpool scrub tank

ada2 at ahcich2 bus 0 scbus2 target 0 lun 0
ada2: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
ada3: <WDC WD10EADS-00L5B1 01.01A01> ATA-8 SATA 2.x device
ada3: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada3: Command Queueing enabled
ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada4 at ahcich4 bus 0 scbus4 target 0 lun 0
ada4: <WDC WD10EADS-00M2B0 01.00A01> ATA-8 SATA 2.x device
ada4: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada4: Command Queueing enabled

smartd daily output:
Checking health of /dev/ada2: OK
Checking health of /dev/ada3: OK
Checking health of /dev/ada4: OK

/boot/loader.conf:
ahci_load="YES"
sem_load="YES"
snd_hda_load="YES"
nvidia_load="YES"
linux_load="YES"
wlan_xauth_load="YES"
vboxdrv_load="YES"
atapicam_load="YES"
coretemp_load="YES"
aio_load="YES"
vfs.zfs.prefetch_disable=0
vfs.zfs.cache_flush_disable=1
vfs.zfs.zil_disable=1

Any ideas?
-
Alex V. Petrov