Re: [zfs-discuss] zfs destroy snapshot takes hours.

2011-08-10 Thread Garrett D'Amore
Also, snapshot destroys are much slower with older releases such as build 134; I
recommend an upgrade. But an upgrade will not help much if you are using dedup.
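
As a hedged aside, the release in use is easy to confirm before planning that
upgrade:

uname -v          # prints the build, e.g. snv_134
cat /etc/release  # prints the full release banner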

  -- Garrett D'Amore

On Aug 10, 2011, at 8:32 PM, Edward Ned Harvey 
 wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Ian Collins
>> 
>>> I am facing an issue with zfs destroy; it takes almost 3 hours to
>>> delete a snapshot of size 150G.
>>>
>> Do you have dedup enabled?
> 
> I have always found that zfs destroy takes some time, while zpool destroy
> takes no time.
> 
> Although zfs destroy takes some time, it's not terrible unless you have
> dedup enabled.  If you have dedup enabled, then yes it's terrible, as Ian
> suggested.
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destroy snapshot takes hours.

2011-08-10 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ian Collins
> 
> > I am facing an issue with zfs destroy; it takes almost 3 hours to
> > delete a snapshot of size 150G.
> >
> Do you have dedup enabled?

I have always found that zfs destroy takes some time, while zpool destroy
takes no time.

Although zfs destroy takes some time, it's not terrible unless you have
dedup enabled.  If you have dedup enabled, then yes it's terrible, as Ian
suggested.
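
For the archive, a hedged way to check whether dedup is in the picture ("tank" is
the pool name from the original post):

zfs get dedup tank          # "on" means every freed block must also update the DDT
zpool get dedupratio tank   # a ratio above 1.00x means the dedup table is populated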

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read errors

2011-08-10 Thread steven
Hello,

Thanks for the reply. I used to use the onboard SATA ports (Intel DQ965DF), but
I have added an LSI-3041E-S controller and this is where I get the problems.
The controller is a Sun version (note the S, not R, in the model), but it is a PC
version, and I tried flashing it with the firmware from both LSI and Sun (I
still get the errors).
A pity, as I had hoped the controllers would solve my expansion problems.
Also, I am mixing SATA and SATA II drives, but I have tried setting the drive
jumpers to SATA I mode.

The LSI card won't let me Ctrl-C at the BIOS to get into the config utility,
but I believe this is normal on Intel motherboards. I am not after the RAID
capabilities.

Thanks for the help; let me know if there are any ideas for how to get this card
working (Sun part number SG-XPCIE4SAS3-Z).

Regards
Steven
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs destroy snapshot takes hours.

2011-08-10 Thread Ian Collins

 On 08/10/11 05:13 PM, Nix wrote:

Hi,

I am facing an issue with zfs destroy; it takes almost 3 hours to delete a
snapshot of size 150G.

Could you please help me resolve this issue: why does zfs destroy take this much
time?


Do you have dedup enabled?

--
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread Paul Kraus
On Wed, Aug 10, 2011 at 2:55 PM, Gregory Durham
 wrote:

> 3) In order to deal with caching, I am writing larger amounts of data
> to the disk than I have memory for.

The other trick is to limit the ARC to a much smaller value and then
you can test with sane amounts of data.

Add the following to /etc/system and reboot:

set zfs:zfs_arc_max = <size in bytes>

The value can be decimal or hex (but don't use a scale like 4g). Best to
keep it a power of 2.
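
A hedged illustration (the 4 GiB figure is arbitrary, chosen only for the example):

# In /etc/system, cap the ARC at 4 GiB (0x100000000 = 2^32 bytes):
#   set zfs:zfs_arc_max = 0x100000000
# After the reboot, read the effective cap back with:
kstat -p zfs:0:arcstats:c_max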

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read errors

2011-08-10 Thread Roy Sigurd Karlsbakk
What sort of controller/backplane/etc. are you using? I've seen similar iostat
output with Western Digital drives on a Supermicro SAS expander.

roy

- Original Message -
> Also, should I be getting Illegal Request errors? (No hard or soft
> errors.)
> 
> Some more info: (I am doing a Scrub hence the high blocking levels)
> 
> var/log$ iostat -Ex
> extended device statistics
> device r/s w/s kr/s kw/s wait actv svc_t %w %b
> sd0 1.1 16.9 30.6 463.0 0.2 0.0 10.6 1 1
> sd1 1.0 16.9 30.3 463.0 0.2 0.0 13.5 2 2
> sd2 208.7 4.8 14493.8 16.0 3.0 0.5 16.5 45 48
> sd3 212.4 4.8 14493.4 16.0 2.6 0.4 13.8 41 44
> sd4 221.9 4.8 14491.9 16.0 0.0 1.8 8.1 0 46
> sd5 212.3 4.8 14493.5 16.0 2.5 0.4 13.4 41 44
> sd6 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0
> sd8 231.7 4.8 14692.7 16.3 0.0 1.8 7.5 0 42
> sd9 239.9 4.7 14691.7 16.3 0.0 1.2 5.1 0 36
> sd0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: WDC WD2500SD-01K Revision: 2D08 Serial No:
> Size: 250.06GB <250059350016 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 3 Predictive Failure Analysis: 0
> sd1 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: WDC WD2500JD-75G Revision: 5D02 Serial No:
> Size: 250.00GB <2500 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 4 Predictive Failure Analysis: 0
> sd2 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: WDC WD3200SD-01K Revision: 5J08 Serial No:
> Size: 320.07GB <320072933376 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 2 Predictive Failure Analysis: 0
> sd3 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: WDC WD2500SD-01K Revision: 2D08 Serial No:
> Size: 250.06GB <250059350016 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 2 Predictive Failure Analysis: 0
> sd4 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: WDC WD5000YS-01M Revision: 2E07 Serial No:
> Size: 500.11GB <500107862016 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 2 Predictive Failure Analysis: 0
> sd5 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: WDC WD2500SD-01K Revision: 2D08 Serial No:
> Size: 250.06GB <250059350016 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 2 Predictive Failure Analysis: 0
> sd6 Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
> Vendor: CREATIVE Product: DVD-ROM DVD1243E Revision: IC01 Serial No:
> Size: 0.00GB <0 bytes>
> Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0
> Illegal Request: 0 Predictive Failure Analysis: 0
> sd8 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: WDC WD5000AACS-0 Revision: 1B01 Serial No:
> Size: 500.11GB <500106780160 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 7 Predictive Failure Analysis: 0
> sd9 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
> Vendor: ATA Product: ST31000340AS Revision: SD15 Serial No:
> Size: 1000.20GB <1000204886016 bytes>
> Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
> Illegal Request: 7 Predictive Failure Analysis: 0
> /var/log$
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

-- 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is
an elementary imperative for all pedagogues to avoid excessive use of idioms of
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read errors

2011-08-10 Thread Bob Friesenhahn

On Wed, 10 Aug 2011, steven wrote:


Also, should I be getting Illegal Request errors? (No hard or soft errors.)


Illegal Request sounds like the OS is making a request that the drive
firmware does not support. It is also possible that the request
became corrupted due to an interface issue.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread Roy Sigurd Karlsbakk
Then create a ZVOL, share it over iSCSI, and run some benchmarks from the
initiator host. You'll never get good results from local tests. For that sort of
load, I'd guess a stripe of mirrors should be good; RAID-Zn will probably be
rather bad.
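
A hedged sketch of that setup on Solaris 11 Express with COMSTAR (the volume
name, size and GUID below are placeholders):

svcadm enable stmf svc:/network/iscsi/target:default
zfs create -V 100G fooPool0/bench-vol
stmfadm create-lu /dev/zvol/rdsk/fooPool0/bench-vol   # prints the LU GUID
stmfadm add-view 600144f0...                          # GUID from the line above
itadm create-target                                   # then benchmark from an initiator host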

roy

- Original Message -
> This system is for serving VM images through iSCSI to roughly 30
> xenserver hosts. I would like to know what type of performance I can
> expect in the coming months as we grow this system out. We currently
> have 2 intel ssds mirrored for the zil and 2 intel ssds for the l2arc
> in a stripe. I am interested more in max throughput of the local
> storage at this point and time.
> 
> On Wed, Aug 10, 2011 at 12:01 PM, Roy Sigurd Karlsbakk
>  wrote:
> > What sort of load will this server be serving? sync or async writes?
> > what sort of reads? random i/o or sequential? if sequential, how
> > many streams/concurrent users? those are factors you need to
> > evaluate before running a test. A local test will usually be using
> > async i/o and a dd with only 4k blocksize is bound to be slow,
> > probably because of cpu overhead.
> >
> > roy
> >
> > - Original Message -
> >> Hello All,
> >> Sorry for the lack of information. Here is some answers to some
> >> questions:
> >> 1) createPool.sh:
> >> essentially can take 2 params, one is number of disks in pool, the
> >> second is either blank or mirrored, blank means number of disks in
> >> the
> >> pool i.e. raid 0, mirrored makes 2 disk mirrors.
> >>
> >> #!/bin/sh
> >> disks=( `cat diskList | grep Hitachi | awk '{print $2}' | tr '\n' '
> >> '`
> >> )
> >> #echo ${disks[1]}
> >> #$useDisks=" "
> >> for (( i = 0; i < $1; i++ ))
> >> do
> >> #echo "Thus far: "$useDisks
> >> if [ "$2" = "mirrored" ]
> >> then
> >> if [ $(($i % 2)) -eq 0 ]
> >> then
> >> useDisks="$useDisks mirror ${disks[i]}"
> >> else
> >> useDisks=$useDisks" "${disks[i]}
> >> fi
> >> else
> >> useDisks=$useDisks" "${disks[i]}
> >> fi
> >>
> >> if [ $(($i - $1)) -le 2 ]
> >> then
> >> echo "spares are: ${disks[i]}"
> >> fi
> >> done
> >>
> >> #echo $useDisks
> >> zpool create -f fooPool0 $useDisks
> >>
> >>
> >>
> >> 2) hardware:
> >> Each server attached to each storage array is a dell r710 with 32
> >> GB
> >> memory each. To test for issues with another platform the below
> >> info,
> >> is from a dell 1950 server with 8GB memory. However, I see similar
> >> results from the r710s as well.
> >>
> >>
> >> 3) In order to deal with caching, I am writing larger amounts of
> >> data
> >> to the disk then I have memory for.
> >>
> >> 4) I have tested with bonnie++ as well and here are the results, i
> >> have read that it is best to test with 4x the amount of memory:
> >> /usr/local/sbin/bonnie++ -s 32000 -d /fooPool0/test -u gdurham
> >> Using uid:101, gid:10.
> >> Writing with putc()...done
> >> Writing intelligently...done
> >> Rewriting...done
> >> Reading with getc()...done
> >> Reading intelligently...done
> >> start 'em...done...done...done...
> >> Create files in sequential order...done.
> >> Stat files in sequential order...done.
> >> Delete files in sequential order...done.
> >> Create files in random order...done.
> >> Stat files in random order...done.
> >> Delete files in random order...done.
> >> Version 1.03d --Sequential Output-- --Sequential Input-
> >> --Random-
> >> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> >> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec
> >> %CP
> >> cm-srfe03 32000M 230482 97 477644 76 223687 44 209868 91 541182
> >> 41 1900 5
> >> --Sequential Create-- Random Create
> >> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> >> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
> >> 16 29126 100 + +++ + +++ 24761 100 + +++ + +++
> >> cm-srfe03,32000M,230482,97,477644,76,223687,44,209868,91,541182,41,1899.7,5,16,29126,100,+,+++,+,+++,24761,100,+,+++,+,+++
> >>
> >>
> >> I will run these with the r710 server as well and will report the
> >> results.
> >>
> >> Thanks for the help!
> >>
> >> -Greg
> >>
> >>
> >>
> >> On Wed, Aug 10, 2011 at 9:16 AM, phil.har...@gmail.com
> >>  wrote:
> >> > I would generally agree that dd is not a great benchmarking tool,
> >> > but you
> >> > could use multiple instances to multiple files, and larger block
> >> > sizes are
> >> > more efficient. And it's always good to check iostat and mpstat
> >> > for
> >> > io and
> >> > cpu bottlenecks. Also note that an initial run that creates files
> >> > may be
> >> > quicker because it just allocates blocks, whereas subsequent
> >> > rewrites
> >> > require copy-on-write.
> >> >
> >> > - Reply message -
> >> > From: "Peter Tribble" 
> >> > To: "Gregory Durham" 
> >> > Cc: 
> >> > Subject: [zfs-discuss] Issues with supermicro
> >> > Date: Wed, Aug 10, 2011 10:56
> >> >
> >> >
> >> > On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
> >> >  wrote:
> >> >> Hello,
> >> >> We just pur

Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread Gregory Durham
This system is for serving VM images through iSCSI to roughly 30
xenserver hosts. I would like to know what type of performance I can
expect in the coming months as we grow this system out. We currently
have 2 Intel SSDs mirrored for the ZIL and 2 Intel SSDs for the L2ARC
in a stripe. I am interested more in the max throughput of the local
storage at this point in time.

On Wed, Aug 10, 2011 at 12:01 PM, Roy Sigurd Karlsbakk
 wrote:
> What sort of load will this server be serving? sync or async writes? what 
> sort of reads? random i/o or sequential? if sequential, how many 
> streams/concurrent users? those are factors you need to evaluate before 
> running a test. A local test will usually be using async i/o and a dd with 
> only 4k blocksize is bound to be slow, probably because of cpu overhead.
>
> roy
>
> - Original Message -
>> Hello All,
>> Sorry for the lack of information. Here is some answers to some
>> questions:
>> 1) createPool.sh:
>> essentially can take 2 params, one is number of disks in pool, the
>> second is either blank or mirrored, blank means number of disks in the
>> pool i.e. raid 0, mirrored makes 2 disk mirrors.
>>
>> #!/bin/sh
>> disks=( `cat diskList | grep Hitachi | awk '{print $2}' | tr '\n' ' '`
>> )
>> #echo ${disks[1]}
>> #$useDisks=" "
>> for (( i = 0; i < $1; i++ ))
>> do
>> #echo "Thus far: "$useDisks
>> if [ "$2" = "mirrored" ]
>> then
>> if [ $(($i % 2)) -eq 0 ]
>> then
>> useDisks="$useDisks mirror ${disks[i]}"
>> else
>> useDisks=$useDisks" "${disks[i]}
>> fi
>> else
>> useDisks=$useDisks" "${disks[i]}
>> fi
>>
>> if [ $(($i - $1)) -le 2 ]
>> then
>> echo "spares are: ${disks[i]}"
>> fi
>> done
>>
>> #echo $useDisks
>> zpool create -f fooPool0 $useDisks
>>
>>
>>
>> 2) hardware:
>> Each server attached to each storage array is a dell r710 with 32 GB
>> memory each. To test for issues with another platform the below info,
>> is from a dell 1950 server with 8GB memory. However, I see similar
>> results from the r710s as well.
>>
>>
>> 3) In order to deal with caching, I am writing larger amounts of data
>> to the disk then I have memory for.
>>
>> 4) I have tested with bonnie++ as well and here are the results, i
>> have read that it is best to test with 4x the amount of memory:
>> /usr/local/sbin/bonnie++ -s 32000 -d /fooPool0/test -u gdurham
>> Using uid:101, gid:10.
>> Writing with putc()...done
>> Writing intelligently...done
>> Rewriting...done
>> Reading with getc()...done
>> Reading intelligently...done
>> start 'em...done...done...done...
>> Create files in sequential order...done.
>> Stat files in sequential order...done.
>> Delete files in sequential order...done.
>> Create files in random order...done.
>> Stat files in random order...done.
>> Delete files in random order...done.
>> Version 1.03d --Sequential Output-- --Sequential Input-
>> --Random-
>> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec
>> %CP
>> cm-srfe03 32000M 230482 97 477644 76 223687 44 209868 91 541182
>> 41 1900 5
>> --Sequential Create-- Random Create
>> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
>> 16 29126 100 + +++ + +++ 24761 100 + +++ + +++
>> cm-srfe03,32000M,230482,97,477644,76,223687,44,209868,91,541182,41,1899.7,5,16,29126,100,+,+++,+,+++,24761,100,+,+++,+,+++
>>
>>
>> I will run these with the r710 server as well and will report the
>> results.
>>
>> Thanks for the help!
>>
>> -Greg
>>
>>
>>
>> On Wed, Aug 10, 2011 at 9:16 AM, phil.har...@gmail.com
>>  wrote:
>> > I would generally agree that dd is not a great benchmarking tool,
>> > but you
>> > could use multiple instances to multiple files, and larger block
>> > sizes are
>> > more efficient. And it's always good to check iostat and mpstat for
>> > io and
>> > cpu bottlenecks. Also note that an initial run that creates files
>> > may be
>> > quicker because it just allocates blocks, whereas subsequent
>> > rewrites
>> > require copy-on-write.
>> >
>> > - Reply message -
>> > From: "Peter Tribble" 
>> > To: "Gregory Durham" 
>> > Cc: 
>> > Subject: [zfs-discuss] Issues with supermicro
>> > Date: Wed, Aug 10, 2011 10:56
>> >
>> >
>> > On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
>> >  wrote:
>> >> Hello,
>> >> We just purchased two of the sc847e26-rjbod1 units to be used in a
>> >> storage environment running Solaris 11 express.
>> >>
>> >> We are using Hitachi HUA723020ALA640 6 gb/s drives with an LSI SAS
>> >> 9200-8e hba. We are not using failover/redundancy. Meaning that one
>> >> port of the hba goes to the primary front backplane interface, and
>> >> the
>> >> other goes to the primary rear backplane interface.
>> >>
>> >> For testing, we have done the following:
>> >> Installed 12 disks in the front, 0 in the back.
>> >> Created a stripe of different numbers of disks. After each tes

Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread Roy Sigurd Karlsbakk
What sort of load will this server be serving? Sync or async writes? What sort
of reads? Random I/O or sequential? If sequential, how many streams/concurrent
users? Those are factors you need to evaluate before running a test. A local
test will usually be using async I/O, and a dd with only a 4k block size is bound
to be slow, probably because of CPU overhead.

roy

- Original Message -
> Hello All,
> Sorry for the lack of information. Here is some answers to some
> questions:
> 1) createPool.sh:
> essentially can take 2 params, one is number of disks in pool, the
> second is either blank or mirrored, blank means number of disks in the
> pool i.e. raid 0, mirrored makes 2 disk mirrors.
> 
> #!/bin/sh
> disks=( `cat diskList | grep Hitachi | awk '{print $2}' | tr '\n' ' '`
> )
> #echo ${disks[1]}
> #$useDisks=" "
> for (( i = 0; i < $1; i++ ))
> do
> #echo "Thus far: "$useDisks
> if [ "$2" = "mirrored" ]
> then
> if [ $(($i % 2)) -eq 0 ]
> then
> useDisks="$useDisks mirror ${disks[i]}"
> else
> useDisks=$useDisks" "${disks[i]}
> fi
> else
> useDisks=$useDisks" "${disks[i]}
> fi
> 
> if [ $(($i - $1)) -le 2 ]
> then
> echo "spares are: ${disks[i]}"
> fi
> done
> 
> #echo $useDisks
> zpool create -f fooPool0 $useDisks
> 
> 
> 
> 2) hardware:
> Each server attached to each storage array is a dell r710 with 32 GB
> memory each. To test for issues with another platform the below info,
> is from a dell 1950 server with 8GB memory. However, I see similar
> results from the r710s as well.
> 
> 
> 3) In order to deal with caching, I am writing larger amounts of data
> to the disk then I have memory for.
> 
> 4) I have tested with bonnie++ as well and here are the results, i
> have read that it is best to test with 4x the amount of memory:
> /usr/local/sbin/bonnie++ -s 32000 -d /fooPool0/test -u gdurham
> Using uid:101, gid:10.
> Writing with putc()...done
> Writing intelligently...done
> Rewriting...done
> Reading with getc()...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...done.
> Delete files in sequential order...done.
> Create files in random order...done.
> Stat files in random order...done.
> Delete files in random order...done.
> Version 1.03d --Sequential Output-- --Sequential Input-
> --Random-
> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec
> %CP
> cm-srfe03 32000M 230482 97 477644 76 223687 44 209868 91 541182
> 41 1900 5
> --Sequential Create-- Random Create
> -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
> 16 29126 100 + +++ + +++ 24761 100 + +++ + +++
> cm-srfe03,32000M,230482,97,477644,76,223687,44,209868,91,541182,41,1899.7,5,16,29126,100,+,+++,+,+++,24761,100,+,+++,+,+++
> 
> 
> I will run these with the r710 server as well and will report the
> results.
> 
> Thanks for the help!
> 
> -Greg
> 
> 
> 
> On Wed, Aug 10, 2011 at 9:16 AM, phil.har...@gmail.com
>  wrote:
> > I would generally agree that dd is not a great benchmarking tool,
> > but you
> > could use multiple instances to multiple files, and larger block
> > sizes are
> > more efficient. And it's always good to check iostat and mpstat for
> > io and
> > cpu bottlenecks. Also note that an initial run that creates files
> > may be
> > quicker because it just allocates blocks, whereas subsequent
> > rewrites
> > require copy-on-write.
> >
> > - Reply message -
> > From: "Peter Tribble" 
> > To: "Gregory Durham" 
> > Cc: 
> > Subject: [zfs-discuss] Issues with supermicro
> > Date: Wed, Aug 10, 2011 10:56
> >
> >
> > On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
> >  wrote:
> >> Hello,
> >> We just purchased two of the sc847e26-rjbod1 units to be used in a
> >> storage environment running Solaris 11 express.
> >>
> >> We are using Hitachi HUA723020ALA640 6 gb/s drives with an LSI SAS
> >> 9200-8e hba. We are not using failover/redundancy. Meaning that one
> >> port of the hba goes to the primary front backplane interface, and
> >> the
> >> other goes to the primary rear backplane interface.
> >>
> >> For testing, we have done the following:
> >> Installed 12 disks in the front, 0 in the back.
> >> Created a stripe of different numbers of disks. After each test, I
> >> destroy the underlying storage volume and create a new one. As you
> >> can
> >> see by the results, adding more disks, makes no difference to the
> >> performance. This should make a large difference from 4 disks to 8
> >> disks, however no difference is shown.
> >>
> >> Any help would be greatly appreciated!
> >>
> >> This is the result:
> >>
> >> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> >> of=/fooPool0/86gb.tst bs=4096 count=20971520
> >> ^C3503681+0 records in
> >> 3503681+0 records out
> >> 14351077376 bytes (14 GB) copied, 3

Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread Gregory Durham
Hello All,
Sorry for the lack of information. Here are answers to some of the questions:
1) createPool.sh:
The script essentially takes 2 params: the first is the number of disks in the
pool, and the second is either blank or "mirrored". Blank means a plain stripe of
that many disks (i.e. RAID 0); "mirrored" builds 2-disk mirrors.

#!/bin/sh
disks=( `cat diskList | grep Hitachi | awk '{print $2}' | tr '\n' ' '` )
#echo ${disks[1]}
#$useDisks=" "
for (( i = 0; i < $1; i++ ))
do
    #echo "Thus far: "$useDisks
    if [ "$2" = "mirrored" ]
    then
        if [ $(($i % 2)) -eq 0 ]
        then
            useDisks="$useDisks mirror ${disks[i]}"
        else
            useDisks=$useDisks" "${disks[i]}
        fi
    else
        useDisks=$useDisks" "${disks[i]}
    fi

    if [ $(($i - $1)) -le 2 ]
    then
        echo "spares are: ${disks[i]}"
    fi
done

#echo $useDisks
zpool create -f fooPool0 $useDisks
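
For reference, hypothetical invocations of the script above (the script name is an
assumption; diskList is expected to be a saved disk listing whose second field is
the device name):

./createPool.sh 8            # stripe (RAID 0) across the first 8 Hitachi disks
./createPool.sh 8 mirrored   # four 2-way mirrors built from the same disks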



2) hardware:
Each server attached to each storage array is a Dell R710 with 32 GB of
memory. To test for issues with another platform, the info below is from a
Dell 1950 server with 8 GB of memory; however, I see similar results from
the R710s as well.


3) In order to deal with caching, I am writing larger amounts of data
to the disk than I have memory for.

4) I have tested with bonnie++ as well, and here are the results; I
have read that it is best to test with 4x the amount of memory:
/usr/local/sbin/bonnie++ -s 32000 -d /fooPool0/test -u gdurham
Using uid:101, gid:10.
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
cm-srfe03    32000M 230482  97 477644  76 223687  44 209868  91 541182  41  1900   5
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 29126 100     + +++     + +++ 24761 100     + +++     + +++
cm-srfe03,32000M,230482,97,477644,76,223687,44,209868,91,541182,41,1899.7,5,16,29126,100,+,+++,+,+++,24761,100,+,+++,+,+++


I will run these with the r710 server as well and will report the results.

Thanks for the help!

-Greg



On Wed, Aug 10, 2011 at 9:16 AM, phil.har...@gmail.com
 wrote:
> I would generally agree that dd is not a great benchmarking tool, but you
> could use multiple instances to multiple files, and larger block sizes are
> more efficient. And it's always good to check iostat and mpstat for io and
> cpu bottlenecks. Also note that an initial run that creates files may be
> quicker because it just allocates blocks, whereas subsequent rewrites
> require copy-on-write.
>
> - Reply message -
> From: "Peter Tribble" 
> To: "Gregory Durham" 
> Cc: 
> Subject: [zfs-discuss] Issues with supermicro
> Date: Wed, Aug 10, 2011 10:56
>
>
> On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
>  wrote:
>> Hello,
>> We just purchased two of the sc847e26-rjbod1 units to be used in a
>> storage environment running Solaris 11 express.
>>
>> We are using Hitachi HUA723020ALA640 6 gb/s drives with an LSI SAS
>> 9200-8e hba. We are not using failover/redundancy. Meaning that one
>> port of the hba goes to the primary front backplane interface, and the
>> other goes to the primary rear backplane interface.
>>
>> For testing, we have done the following:
>> Installed 12 disks in the front, 0 in the back.
>> Created a stripe of different numbers of disks. After each test, I
>> destroy the underlying storage volume and create a new one. As you can
>> see by the results, adding more disks, makes no difference to the
>> performance. This should make a large difference from 4 disks to 8
>> disks, however no difference is shown.
>>
>> Any help would be greatly appreciated!
>>
>> This is the result:
>>
>> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
>> of=/fooPool0/86gb.tst bs=4096 count=20971520
>> ^C3503681+0 records in
>> 3503681+0 records out
>> 14351077376 bytes (14 GB) copied, 39.3747 s, 364 MB/s
>
> So, the problem here is that you're not testing the storage at all.
> You're basically measuring dd.
>
> To get meaningful results, you need to do two things:
>
> First, run it for long enough so you eliminate any write cache
> effects. Writes go to memory and only get sent to disk in the
> background.
>
> Second, use a proper benchmark su

Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread phil.har...@gmail.com
I would generally agree that dd is not a great benchmarking tool, but you could 
use multiple instances to multiple files, and larger block sizes are more 
efficient. And it's always good to check iostat and mpstat for io and cpu 
bottlenecks. Also note that an initial run that creates files may be quicker 
because it just allocates blocks, whereas subsequent rewrites require 
copy-on-write. 
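
A hedged illustration of that kind of monitoring (5-second intervals, run
alongside the benchmark):

iostat -xnz 5    # per-device I/O: watch %b (busy) and asvc_t (service time)
mpstat 5         # per-CPU: watch for a single CPU pegged in sys time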

- Reply message -
From: "Peter Tribble" 
To: "Gregory Durham" 
Cc: 
Subject: [zfs-discuss] Issues with supermicro
Date: Wed, Aug 10, 2011 10:56


On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
 wrote:
> Hello,
> We just purchased two of the sc847e26-rjbod1 units to be used in a
> storage environment running Solaris 11 express.
>
> We are using Hitachi HUA723020ALA640 6 gb/s drives with an LSI SAS
> 9200-8e hba. We are not using failover/redundancy. Meaning that one
> port of the hba goes to the primary front backplane interface, and the
> other goes to the primary rear backplane interface.
>
> For testing, we have done the following:
> Installed 12 disks in the front, 0 in the back.
> Created a stripe of different numbers of disks. After each test, I
> destroy the underlying storage volume and create a new one. As you can
> see by the results, adding more disks, makes no difference to the
> performance. This should make a large difference from 4 disks to 8
> disks, however no difference is shown.
>
> Any help would be greatly appreciated!
>
> This is the result:
>
> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> of=/fooPool0/86gb.tst bs=4096 count=20971520
> ^C3503681+0 records in
> 3503681+0 records out
> 14351077376 bytes (14 GB) copied, 39.3747 s, 364 MB/s

So, the problem here is that you're not testing the storage at all.
You're basically measuring dd.

To get meaningful results, you need to do two things:

First, run it for long enough so you eliminate any write cache
effects. Writes go to memory and only get sent to disk in the
background.

Second, use a proper benchmark suite, and one that isn't itself
a bottleneck. Something like vdbench, although there are others.

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS read errors

2011-08-10 Thread steven
Also, should I be getting Illegal Request errors? (No hard or soft errors.)

Some more info (I am doing a scrub, hence the high blocking levels):

var/log$ iostat -Ex
 extended device statistics
devicer/sw/s   kr/s   kw/s wait actv  svc_t  %w  %b
sd0   1.1   16.9   30.6  463.0  0.2  0.0   10.6   1   1
sd1   1.0   16.9   30.3  463.0  0.2  0.0   13.5   2   2
sd2 208.74.8 14493.8   16.0  3.0  0.5   16.5  45  48
sd3 212.44.8 14493.4   16.0  2.6  0.4   13.8  41  44
sd4 221.94.8 14491.9   16.0  0.0  1.88.1   0  46
sd5 212.34.8 14493.5   16.0  2.5  0.4   13.4  41  44
sd6   0.00.00.00.0  0.0  0.00.0   0   0
sd8 231.74.8 14692.7   16.3  0.0  1.87.5   0  42
sd9 239.94.7 14691.7   16.3  0.0  1.25.1   0  36
sd0   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD2500SD-01K Revision: 2D08 Serial No:
Size: 250.06GB <250059350016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 3 Predictive Failure Analysis: 0
sd1   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD2500JD-75G Revision: 5D02 Serial No:
Size: 250.00GB <2500 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 4 Predictive Failure Analysis: 0
sd2   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD3200SD-01K Revision: 5J08 Serial No:
Size: 320.07GB <320072933376 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
sd3   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD2500SD-01K Revision: 2D08 Serial No:
Size: 250.06GB <250059350016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
sd4   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD5000YS-01M Revision: 2E07 Serial No:
Size: 500.11GB <500107862016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
sd5   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD2500SD-01K Revision: 2D08 Serial No:
Size: 250.06GB <250059350016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 2 Predictive Failure Analysis: 0
sd6   Soft Errors: 0 Hard Errors: 5 Transport Errors: 0
Vendor: CREATIVE Product: DVD-ROM DVD1243E Revision: IC01 Serial No:
Size: 0.00GB <0 bytes>
Media Error: 0 Device Not Ready: 5 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
sd8   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: WDC WD5000AACS-0 Revision: 1B01 Serial No:
Size: 500.11GB <500106780160 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 7 Predictive Failure Analysis: 0
sd9   Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: ATA  Product: ST31000340AS Revision: SD15 Serial No:
Size: 1000.20GB <1000204886016 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 7 Predictive Failure Analysis: 0
/var/log$
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Scripting

2011-08-10 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.


Hi,
Most modern servers have a separate ILOM that supports ipmitool, which can
talk to the HDDs.

What is your server? Does it have a separate remote management port?

On 8/10/2011 8:36 AM, Lanky Doodle wrote:

Hiya,

Now that I have figured out how to read disks using dd to make LEDs blink, I want
to write a little script that iterates through all drives, dd's each one for a few
thousand counts, stops, then dd's it again for another few thousand counts, so I
end up with maybe 5 blinks.

I don't want somebody to write something for me, I'd like to be pointed in the 
right direction so I can build one myself :)

Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Scripting

2011-08-10 Thread Lanky Doodle
Hiya,

Now that I have figured out how to read disks using dd to make LEDs blink, I want
to write a little script that iterates through all drives, dd's each one for a few
thousand counts, stops, then dd's it again for another few thousand counts, so I
end up with maybe 5 blinks.

I don't want somebody to write something for me, I'd like to be pointed in the 
right direction so I can build one myself :)

Thanks
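
For the archive, a rough sketch of the shape such a loop could take (device names,
counts and block size are placeholders, not recommendations):

for disk in c8t1d0 c8t2d0 c8t3d0; do
    for burst in 1 2 3 4 5; do
        # a few thousand reads, then a pause, so the LED blinks visibly
        dd if=/dev/rdsk/${disk}p0 of=/dev/null bs=128k count=4000
        sleep 1
    done
done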
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk IDs and DD

2011-08-10 Thread Lanky Doodle
Thanks Andrew, Fajar.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS read errors

2011-08-10 Thread steven
Hello,
I am having problems with my ZFS pool. I have put in an LSI 3041E-S controller
and have 2 disks on it, and a further 4 on the motherboard. I am getting read
errors on the pool but not on any disk. Any idea where I should look to find
the problem?
Thanks
Steven
> uname -a
SunOS X..com  5.11 snv_151a i86pc i386 i86pc

> zpool status -v
  pool: rz2pool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scan: scrub in progress since Wed Aug 10 09:13:12 2011
78.5G scanned out of 720G at 74.0M/s, 2h28m to go
0 repaired, 10.89% done
config:

NAME        STATE     READ WRITE CKSUM
rz2pool     ONLINE     353     0     0
  raidz2-0  ONLINE       0     0     0
    c8t2d0  ONLINE       0     0     0
    c8t3d0  ONLINE       0     0     0
    c8t4d0  ONLINE       0     0     0
    c8t5d0  ONLINE       0     0     0
    c9t1d0  ONLINE       0     0     0
    c9t2d0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

rz2pool/datastore:<0x1>
rz2pool/datastore:<0xfffe>
rz2pool/datastore:<0x>
<0x49>:<0x1>
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs destroy snapshot takes hours.

2011-08-10 Thread Nix
Hi,

I am facing an issue with zfs destroy; it takes almost 3 hours to delete a
snapshot of size 150G.

Could you please help me resolve this issue: why does zfs destroy take this
much time?

Taking a snapshot, by contrast, completes within a few seconds.

I have tried removing an older snapshot instead, but the problem is still the same.

===
I am using : 

Release : OpenSolaris Development snv_134 X86

# uname -a
SunOS dev-nas01 5.11 snv_134 i86pc i386 i86pc Solaris

# isainfo -kv
64-bit amd64 kernel modules

# zpool status
  pool: rpool
 state: ONLINE
 scrub: scrub completed after 0h3m with 0 errors on Tue Aug  9 08:46:52 2011
config:

NAME        STATE     READ WRITE CKSUM
rpool   ONLINE   0 0 0
  c8t0d0s0  ONLINE   0 0 0

errors: No known data errors

  pool: tank
 state: ONLINE
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
tankONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
c8t1d0  ONLINE   0 0 0
c8t2d0  ONLINE   0 0 0
c8t3d0  ONLINE   0 0 0
c8t4d0  ONLINE   0 0 0

errors: No known data errors

===

# zfs list tank
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   608G  2.13T  35.9K  /tank
===

Thanks,
Nix
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Issues with supermicro

2011-08-10 Thread Peter Tribble
On Wed, Aug 10, 2011 at 1:45 AM, Gregory Durham
 wrote:
> Hello,
> We just purchased two of the sc847e26-rjbod1 units to be used in a
> storage environment running Solaris 11 express.
>
> We are using Hitachi HUA723020ALA640 6 gb/s drives with an LSI SAS
> 9200-8e hba. We are not using failover/redundancy. Meaning that one
> port of the hba goes to the primary front backplane interface, and the
> other goes to the primary rear backplane interface.
>
> For testing, we have done the following:
> Installed 12 disks in the front, 0 in the back.
> Created a stripe of different numbers of disks. After each test, I
> destroy the underlying storage volume and create a new one. As you can
> see by the results, adding more disks, makes no difference to the
> performance. This should make a large difference from 4 disks to 8
> disks, however no difference is shown.
>
> Any help would be greatly appreciated!
>
> This is the result:
>
> root@cm-srfe03:/home/gdurham~# time dd if=/dev/zero
> of=/fooPool0/86gb.tst bs=4096 count=20971520
> ^C3503681+0 records in
> 3503681+0 records out
> 14351077376 bytes (14 GB) copied, 39.3747 s, 364 MB/s

So, the problem here is that you're not testing the storage at all.
You're basically measuring dd.

To get meaningful results, you need to do two things:

First, run it for long enough so you eliminate any write cache
effects. Writes go to memory and only get sent to disk in the
background.

Second, use a proper benchmark suite, and one that isn't itself
a bottleneck. Something like vdbench, although there are others.
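
For illustration, a hedged sketch of a heavier local test than a single 4k-block
dd: several concurrent streams with large blocks, writing far more data in total
than the machine has RAM so the ARC cannot absorb it (file names and sizes are
placeholders):

for n in 1 2 3 4; do
    # 1M blocks, 32 GB per stream, 128 GB in total
    dd if=/dev/zero of=/fooPool0/stream$n bs=1024k count=32768 &
done
wait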

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] trouble adding log and cache on SSD to a pool

2011-08-10 Thread Eugen Leitl
On Sat, Aug 06, 2011 at 07:19:56PM +0200, Eugen Leitl wrote:
> 
> Upgrading to hacked N36L BIOS seems to have done the trick:
> 
> eugen@nexenta:~$ zpool status tank
>   pool: tank
>  state: ONLINE
>  scan: none requested
> config:
> 
> NAME        STATE     READ WRITE CKSUM
> tankONLINE   0 0 0
>   raidz2-0  ONLINE   0 0 0
> c0t0d0  ONLINE   0 0 0
> c0t1d0  ONLINE   0 0 0
> c0t2d0  ONLINE   0 0 0
> c0t3d0  ONLINE   0 0 0
> logs
>   c0t5d0s0  ONLINE   0 0 0
> cache
>   c0t5d0s1  ONLINE   0 0 0
> 
> errors: No known data errors
> 
> Anecdotally, the drive noise and system load have gone
> down as well. It seems even with small SSDs hybrid pools
> are definitely worthwhile.

System is still stable. Here is zilstat on a lightly loaded box
(this is an N36L with 8 GByte RAM and 4x 1 and 1.5 TByte Seagate
drives in raidz2):

root@nexenta:/tank/tank0/eugen# ./zilstat.ksh -t 60
TIME                    N-Bytes  N-Bytes/s  N-Max-Rate     B-Bytes  B-Bytes/s  B-Max-Rate   ops  <=4kB  4-32kB  >=32kB
2011 Aug 11 10:38:31   17475360     291256     5464560    34078720     567978    10747904   260      0       0     260
2011 Aug 11 10:39:31   10417568     173626     6191832    20447232     340787    12189696   156      0       0     156
2011 Aug 11 10:40:31   19264288     321071     5975840    34603008     576716     9961472   264      0       0     264
2011 Aug 11 10:41:31   11176512     186275     6124832    22151168     369186    12189696   169      0       0     169
2011 Aug 11 10:42:31   14544432     242407    13321424    26738688     445644    24117248   204      0       0     204
2011 Aug 11 10:43:31   13470688     224511     5019744    25821184     430353     9961472   197      0       0     197
2011 Aug 11 10:44:31    9147112     152451     4225464    18350080     305834     8519680   140      0       0     140
2011 Aug 11 10:45:31   12167552     202792     7760864    23068672     384477    15204352   176      0       0     176
2011 Aug 11 10:46:31   13306192     221769     8467424    25034752     417245    15335424   191      0       0     191
2011 Aug 11 10:47:31    8634288     143904     8254112    15990784     266513    15204352   122      0       0     122
2011 Aug 11 10:48:31    4442896      74048     4078408     9175040     152917     8257536    70      0       0      70
2011 Aug 11 10:49:31    8256312     137605     5283744    15859712     264328     9961472   121      0       0     121

I've also run bonnie++ and scrub while under about the same load,
scrub was doing 80-90 MBytes/s.
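
As a hedged aside, per-vdev activity (including the log and cache slices shown
above) can be watched under such a load with:

zpool iostat -v tank 5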
 
> 
> On Fri, Aug 05, 2011 at 10:43:02AM +0200, Eugen Leitl wrote:
> > 
> > I think I've found the source of my problem: I need to reflash
> > the N36L BIOS to a hacked russian version (sic) which allows
> > AHCI in the 5th drive bay
> > 
> > http://terabyt.es/2011/07/02/nas-build-guide-hp-n36l-microserver-with-nexenta-napp-it/
> > 
> > ...
> > 
> > Update BIOS and install hacked Russian BIOS
> > 
> > The HP BIOS for N36L does not support anything but legacy IDE emulation on 
> > the internal ODD SATA port and the external eSATA port. This is a problem 
> > for Nexenta which can detect false disk errors when using the ODD drive on 
> > emulated IDE mode. Luckily an unknown Russian hacker somewhere has modified 
> > the BIOS to allow AHCI mode on both the internal and eSATA ports. I have 
> > always said, “Give the Russians two weeks and they will crack anything” and 
> > usually that has held true. Huge thank you to whomever has modified this 
> > BIOS given HPs complete failure to do so.
> > 
> > I have enabled this with good results. The main one being no emails from 
> > Nexenta informing you that the syspool has moved to a degraded state when 
> > it actually hasn’t :) 
> > 
> > ...
> > 
> > On Fri, Aug 05, 2011 at 09:05:07AM +0200, Eugen Leitl wrote:
> > > On Thu, Aug 04, 2011 at 11:58:47PM +0200, Eugen Leitl wrote:
> > > > On Thu, Aug 04, 2011 at 02:43:30PM -0700, Larry Liu wrote:
> > > > >
> > > > >> root@nexenta:/export/home/eugen# zpool add tank log /dev/dsk/c3d1p0
> > > > >
> > > > > You should use c3d1s0 here.
> > > > >
> > > > >> Th
> > > > >> root@nexenta:/export/home/eugen# zpool add tank cache /dev/dsk/c3d1p1
> > > > >
> > > > > Use c3d1s1.
> > > > 
> > > > Thanks, that did the trick!
> > > > 
> > > > root@nexenta:/export/home/eugen# zpool status tank
> > > >   pool: tank
> > > >  state: ONLINE
> > > >  scan: scrub repaired 0 in 0h0m with 0 errors on Fri Aug  5 03:04:57 
> > > > 2011
> > > > config:
> > > > 
> > > > NAME        STATE     READ WRITE CKSUM
> > > > tankONLINE   0 0 0
> > > >   raidz2-0  ONLINE   0 0 0
> > > > c0t0d0  ONLINE   0 0 0
> > > > c0t1d0  ONLINE   0 0 0
> > > > c0t2d0  ONLINE   0 0 0
> > > > c0

Re: [zfs-discuss] Disk IDs and DD

2011-08-10 Thread Fajar A. Nugraha
On Wed, Aug 10, 2011 at 2:56 PM, Lanky Doodle  wrote:
> Can you elaborate on the dd command LaoTsao? Is the 's' you refer to a 
> parameter of the command or the slice of a disk - none of my 'data' disks 
> have been 'configured' yet. I wanted to ID them before adding them to pools.

For starters, try looking at what files are inside /dev/dsk/. There
shouldn't be a c9t7d0 file/symlink.

Next, Googling "solaris disk notation" found this entry:
http://multiboot.solaris-x86.org/iv/3.html. In short, for whole disk
you'd need /dev/dsk/c9t7d0p0

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk IDs and DD

2011-08-10 Thread Andrew Gabriel

Lanky Doodle wrote:

Oh no I am not bothered at all about the target ID numbering. I just wondered 
if there was a problem in the way it was enumerating the disks.

Can you elaborate on the dd command LaoTsao? Is the 's' you refer to a 
parameter of the command or the slice of a disk - none of my 'data' disks have 
been 'configured' yet. I wanted to ID them before adding them to pools.
  


Use p0 on x86 (whole disk, without regard to any partitioning).
Any other s or p device node may or may not be there, depending on what 
partitions/slices are on the disk.
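
A hedged one-liner along those lines (c9t7d0 is the device mentioned earlier in
this thread; block size and count are arbitrary):

dd if=/dev/dsk/c9t7d0p0 of=/dev/null bs=1024k count=1000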


--
Andrew Gabriel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk IDs and DD

2011-08-10 Thread Lanky Doodle
Oh no I am not bothered at all about the target ID numbering. I just wondered 
if there was a problem in the way it was enumerating the disks.

Can you elaborate on the dd command LaoTsao? Is the 's' you refer to a 
parameter of the command or the slice of a disk - none of my 'data' disks have 
been 'configured' yet. I wanted to ID them before adding them to pools.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss