Re: [zfs-discuss] VMGuest IOMeter numbers

2010-07-25 Thread Oliver Seidel
Hello Mark,

I assume you have a read-intensive workload with few synchronous writes, so 
leave out the ZIL device.  Please try the following (a consolidated sketch of 
the commands is below):

* configure the controller to show the individual disks, with no RAID
* create one large striped pool (zpool create tank c0t0d{1,2,3,4,5})
* if your SSD is c0t0d6, use it as an L2ARC (zpool create tank c0t0d{1,2,3,4,5} 
cache c0t0d6), not as a ZIL
* use the build 134 developer release from here: 
http://www.genunix.org/distributions/indiana/
* give 80% of the machine's memory to the OpenSolaris instance that is serving 
ZFS
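
To spell that out, here is roughly the sequence I have in mind -- assuming the 
data disks really do show up as c0t0d1 through c0t0d5 and the SSD as c0t0d6 
(adjust to whatever format(1M) reports on your controller):

  # format                  (confirm the device names of the disks and the SSD)
  # zpool create tank c0t0d1 c0t0d2 c0t0d3 c0t0d4 c0t0d5 cache c0t0d6
  # zpool status tank       (the SSD should appear under a separate "cache" section)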

I'm new here, so others may improve on these suggestions.

Best regards,

Oliver
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] NexentaStor Community edition 3.0.3 released

2010-07-01 Thread Oliver Seidel
Hello,

this may not apply to your machine, because my setup differs from yours in two 
ways:
* OpenSolaris instead of Nexenta
* a DL585 G1 instead of your DL380 G4

Here's my problem: a reproducible crash after a certain uptime (about 1.5 hours 
in my case).

Explanation: the HP machine has enterprise features (ECC RAM) and scrubs its 
RAM in the background, much as you would scrub ZFS disks.  With the four 
dual-core AMD CPUs, the memory is divided into four chunks, and when the 
scrubber hits a hole in that layout, the machine crashes without so much as a 
crash dump.

Solution: add the following to /etc/system (the comment lines are my reading of 
what each setting does):

* panic -- and therefore take a crash dump -- instead of hanging silently
set snooping=1
set pcplusmp:apic_panic_on_nmi=1
* as far as I understand, policy 1 tells the Opteron hardware scrubbers to use
* the fixed rates below (0 = off) instead of the BIOS defaults
set cpu_ms.AuthenticAMD.15:ao_scrub_policy = 1
set cpu_ms.AuthenticAMD.15:ao_scrub_rate_dcache = 0
set cpu_ms.AuthenticAMD.15:ao_scrub_rate_l2cache = 0
* keep the AMD memory-controller driver from attaching
set mc-amd:mc_no_attach=1
* disable the Solaris software memory scrubber
set disable_memscrub = 1
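
In case it helps, one way to check after the reboot that a setting really took 
effect is to read it back with mdb; from memory, the output looks roughly like 
this:

# echo 'disable_memscrub/D' | mdb -k
disable_memscrub:
disable_memscrub:               1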


Best regards,

Oliver
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] /bin/cp vs /usr/gnu/bin/cp

2010-06-26 Thread Oliver Seidel
Hello,

I came across this blog post:

http://kevinclosson.wordpress.com/2007/03/15/copying-files-on-solaris-slow-or-fast-its-your-choice/

and would like to hear from the performance gurus here how this 2007 article 
relates to the 2010 ZFS implementation.  Which cp should I use, and why?
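
In case anyone wants to reproduce the comparison, a simple test on one's own 
pool might look like this (the file names are just examples):

  $ mkfile 2g /tank/bigfile                         (or any existing large file)
  $ ptime /usr/bin/cp /tank/bigfile /tank/copy.sun
  $ ptime /usr/gnu/bin/cp /tank/bigfile /tank/copy.gnu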

Thanks,

Oliver
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirror resilver @500k/s

2010-05-17 Thread Oliver Seidel
Hello Everybody,

thank you for your support.  With the "iostat -x 10" command I have been able 
to see a sustained 50-70 MB/s resilver rate -- but only on one of the three 
disks.  The other two disks are now on their way back to the vendor, and I hope 
to report better results when I get them back.

Thanks again,

Oliver
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mirror resilver @500k/s

2010-05-14 Thread Oliver Seidel
Hello Will,

thank you for explaining what "zpool iostat -v data" reports when it is run 
without any further arguments!
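
If I understood that correctly, without an interval the numbers are averages 
since boot, so next time I will add one to see the current activity, e.g.:

  $ zpool iostat -v data 10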

I will run the two suggested commands when I get back from work.

Yes, the 20 GB have taken about 12 hours to resilver.  Now there are just 
204 GB left to do ...

Thanks to everyone for your replies,

Oliver

(now back to my temporarily disabled user ID)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] mirror resilver @500k/s

2010-05-13 Thread Oliver Seidel
Hello,

I'm a grown-up and willing to read, but I can't find where to read.  Please 
point me to the place that explains how I can diagnose this situation: 
attaching a mirror to a disk fills the new disk at an apparent rate of only 
about 500 KB per second.

1) what diagnostic information should I look at (and perhaps provide to you 
people here)?
2) how should I have gone about seeking help for a problem like this?
3) on a related note -- why is "zpool status -v data" slower to run as root 
than it is as a normal user?

Thanks for your time!

Oliver

os10...@giant:~$ (zpool status -v data; zpool iostat -v data; dmesg | tail -5) 
| egrep -v '^$'
  pool: data
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 12h13m, 45.74% done, 14h29m to go
config:
        NAME        STATE     READ WRITE CKSUM
        data        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c9t3d0  ONLINE       0     0     0
            c9t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c9t2d0  ONLINE       0     0     0
            c9t0d0  ONLINE       0     0     0  20.4G resilvered
errors: No known data errors
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
data         755G  1.76T     67      3   524K  14.1K
  mirror     530G  1.29T      6      1  40.8K  5.89K
    c9t3d0      -      -      3      1   183K  6.04K
    c9t1d0      -      -      3      0   183K  6.04K
  mirror     224G   472G     60      1   484K  8.24K
    c9t2d0      -      -     13      0   570K  4.05K
    c9t0d0      -      -      0     34     17   490K
----------  -----  -----  -----  -----  -----  -----
May 13 10:33:38 giant genunix: [ID 936769 kern.notice] sv0 is /pseudo/s...@0
May 13 10:33:38 giant pseudo: [ID 129642 kern.notice] pseudo-device: ii0
May 13 10:33:38 giant genunix: [ID 936769 kern.notice] ii0 is /pseudo/i...@0
May 13 10:34:34 giant su: [ID 810491 auth.crit] 'su root' failed for os1 on 
/dev/pts/4
May 13 20:44:09 giant pcplusmp: [ID 805372 kern.info] pcplusmp: ide (ata) 
instance 1 irq 0xf vector 0x45 ioapic 0x4 intin 0xf is bound to cpu 6
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss