Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Haudy Kazemi

Brian wrote:

Sometimes when it hangs on boot, hitting the space bar (or any key) won't bring 
it back to the command line.  That is why I was wondering if there is a way to 
not show the splash screen at all, and instead show what it was trying to load 
when it hangs.

Look at these threads:

OpenSolaris b134 Genunix site iso failing boot
http://opensolaris.org/jive/thread.jspa?threadID=125445&tstart=0

Build 134 Won't boot
http://ko.opensolaris.org/jive/thread.jspa?threadID=125486
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6932552

How to bypass boot splash screen?
http://opensolaris.org/jive/thread.jspa?messageID=355648


They talk about changing some GRUB menu.lst options, either adding 
'console=text' or removing 'console=graphics'.  See if that works for 
you too.
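
As a rough sketch (the entry below uses placeholder names; copy a working 
entry from your own /rpool/boot/grub/menu.lst and just edit the kernel$ line):

title OpenSolaris (text console)
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=text
module$ /platform/i86pc/$ISADIR/boot_archive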



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Ian Collins

On 05/23/10 11:31 AM, Brian wrote:

Sometimes when it hangs on boot, hitting the space bar (or any key) won't bring 
it back to the command line.  That is why I was wondering if there is a way to 
not show the splash screen at all, and instead show what it was trying to load 
when it hangs.

From my /rpool/boot/grub/menu.lst:

title OpenSolaris Development snv_133 Debug
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -kd -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

The "-kd" drops into the kernel debugger.  If all you want to do is 
loose the splash screen, copy your existing entry and use:


kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
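
(If you do try booting with -kd, the system stops at the kmdb prompt early
in boot; if I remember right, you continue with

:c

at the prompt, and can then watch how far the boot gets before it hangs.)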

--
Ian.



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
GREAT, glad it worked for you!



On Sat, May 22, 2010 at 7:39 PM, Brian  wrote:

> Ok.  What worked for me was booting with the live CD and doing:
>
> pfexec zpool import -f rpool
> reboot
>
> After that I was able to boot with AHCI enabled.  The performance issues I
> was seeing are now also gone.  I am getting around 100 to 110 MB/s during a
> scrub.  Scrubs are completing in 20 minutes for 1TB of data rather than 1.2
> hours.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Ok.  What worked for me was booting with the live CD and doing:

pfexec zpool import -f rpool
reboot

After that I was able to boot with AHCI enabled.  The performance issues I was 
seeing are now also gone.  I am getting around 100 to 110 MB/s during a scrub.  
Scrubs are completing in 20 minutes for 1TB of data rather than 1.2 hours.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Not completely.  I noticed the performance problem in my "tank" pool rather 
than my rpool.  But the motherboard controller was shared by devices in both 
the rpool and the tank.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Sometimes when it hangs on boot, hitting the space bar (or any key) won't bring 
it back to the command line.  That is why I was wondering if there is a way to 
not show the splash screen at all, and instead show what it was trying to load 
when it hangs.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
This old thread has info on how to switch from IDE to SATA (AHCI) mode:


http://opensolaris.org/jive/thread.jspa?messageID=448758




On Sat, May 22, 2010 at 5:32 PM, Ian Collins  wrote:

> On 05/23/10 08:43 AM, Brian wrote:
>
>> Is there a way within OpenSolaris to detect if AHCI is being used by
>> various controllers?
>>
>> I suspect you may be right that AHCI is not turned on.  The BIOS for this
>> particular motherboard is fairly confusing on the AHCI settings.  The only
>> setting I have is actually in the RAID section, and it seems to let me
>> select between "IDE/AHCI/RAID" as an option.  However, I can't tell if it
>> applies only if one is using software RAID.
>>
>>
>>
> [answered in other post]
>
>
>> If I set it to AHCI, another screen appears prior to boot that is titled
>> AMD AHCI BIOS.  However, OpenSolaris hangs during booting with this enabled.
>> Is there a way from the GRUB menu to request that OpenSolaris boot without
>> the splash screen, and instead boot with debug information printed to the
>> console?
>>
>>
>
> Just hit a key once the bar is moving.
>
> --
> Ian.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
This didn't work for me.  I had the exact same issue a few days ago.

My motherboard had the following:

Native IDE
AHCI
RAID
Legacy IDE

so naturally I chose AHCI, but it ALSO had a mode called "IDE/SATA combined
mode".

I thought I needed this to use both the IDE and SATA ports; it turns out it
was basically an IDE emulation mode for SATA.  Long story short, I ended up
with OpenSolaris installed in IDE mode.

I had to reinstall.  I tried the livecd/import method and it still failed to
boot.


On Sat, May 22, 2010 at 5:30 PM, Ian Collins  wrote:

> On 05/23/10 08:52 AM, Thomas Burgess wrote:
>
>> If you install OpenSolaris with the AHCI settings off, then switch them
>> on, it will fail to boot.
>>
>>
>> I had to reinstall with the settings correct.
>>
> Well you probably didn't have to.  Booting from the live CD and importing
> the pool would have put things right.
>
> --
> Ian.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Ian Collins

On 05/23/10 08:43 AM, Brian wrote:

Is there a way within OpenSolaris to detect if AHCI is being used by various 
controllers?

I suspect you may be right that AHCI is not turned on.  The BIOS for this particular 
motherboard is fairly confusing on the AHCI settings.  The only setting I have is 
actually in the RAID section, and it seems to let me select between 
"IDE/AHCI/RAID" as an option.  However, I can't tell if it applies only if one 
is using software RAID.

[answered in other post]


If I set it to AHCI, another screen appears prior to boot that is titled AMD 
AHCI BIOS.  However, OpenSolaris hangs during booting with this enabled.
Is there a way from the GRUB menu to request that OpenSolaris boot without the 
splash screen, and instead boot with debug information printed to the console?


Just hit a key once the bar is moving.

--
Ian.



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Ian Collins

On 05/23/10 08:52 AM, Thomas Burgess wrote:
If you install OpenSolaris with the AHCI settings off, then switch 
them on, it will fail to boot.



I had to reinstall with the settings correct.

Well you probably didn't have to.  Booting from the live CD and 
importing the pool would have put things right.


--
Ian.



Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
Just to make sure I understand what is going on here:

you have an rpool which is having performance issues, and you discovered AHCI
was disabled?

You enabled it, and now it won't boot.  Correct?

This happened to me, and the solution was to export my storage pool and
reinstall my rpool with the AHCI settings on.

Then I imported my storage pool and all was golden.
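
Roughly, the sequence I used looked like this (a sketch; pool names assumed
to be rpool for boot and tank for data):

pfexec zpool export tank     # detach the data pool before reinstalling
# ...enable AHCI in the BIOS, reinstall OpenSolaris onto the boot disks...
pfexec zpool import tank     # reattach the data pool afterwards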


On Sat, May 22, 2010 at 5:25 PM, Brian  wrote:

> Thanks -
>   I can give reinstalling a shot.  Is there anything else I should do
> first?  Should I export my tank pool?


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Thanks - 
   I can give reinstalling a shot.  Is there anything else I should do first?  
Should I export my tank pool?


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
I am not sure I fully understand the question...  It is set up as raidz2 - is 
that what you wanted to know?


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Thomas Burgess
If you install OpenSolaris with the AHCI settings off, then switch them on,
it will fail to boot.

I had to reinstall with the settings correct.

The best way to tell if AHCI is working is to use cfgadm: if you see your
drives there, AHCI is on.

If not, then you may need to reinstall with it on (for the rpool at least).
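
For what it's worth, in AHCI mode the disks show up as SATA attachment
points, something like this (a sketch; the controller and target numbers
here are made up):

$ cfgadm
Ap_Id                          Type         Receptacle   Occupant     Condition
sata0/0::dsk/c7t0d0            disk         connected    configured   ok
sata0/1::dsk/c7t1d0            disk         connected    configured   ok
sata0/2                        sata-port    empty        unconfigured ok

In legacy IDE mode no sata0/N attachment points appear at all.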


On Sat, May 22, 2010 at 4:43 PM, Brian  wrote:

> Is there a way within OpenSolaris to detect if AHCI is being used by
> various controllers?
>
> I suspect you may be right that AHCI is not turned on.  The BIOS for this
> particular motherboard is fairly confusing on the AHCI settings.  The only
> setting I have is actually in the RAID section, and it seems to let me select
> between "IDE/AHCI/RAID" as an option.  However, I can't tell if it applies
> only if one is using software RAID.
>
> If I set it to AHCI, another screen appears prior to boot that is titled
> AMD AHCI BIOS.  However, OpenSolaris hangs during booting with this enabled.
> Is there a way from the GRUB menu to request that OpenSolaris boot without
> the splash screen, and instead boot with debug information printed to the
> console?


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Is there a way within OpenSolaris to detect if AHCI is being used by various 
controllers?

I suspect you may be right that AHCI is not turned on.  The BIOS for this 
particular motherboard is fairly confusing on the AHCI settings.  The only 
setting I have is actually in the RAID section, and it seems to let me select 
between "IDE/AHCI/RAID" as an option.  However, I can't tell if it applies only 
if one is using software RAID.

If I set it to AHCI, another screen appears prior to boot that is titled AMD 
AHCI BIOS.  However, OpenSolaris hangs during booting with this enabled.
Is there a way from the GRUB menu to request that OpenSolaris boot without the 
splash screen, and instead boot with debug information printed to the console?


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brandon High
On Sat, May 22, 2010 at 11:41 AM, Brian  wrote:
> If I look at c7d0, I get a message about no "Alt Slice" found and I don't 
> have access to the cache settings.  Not sure if this is part of my problem or 
> not:

That can happen if the controller is not using AHCI. It'll affect your
performance pretty drastically too.
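
To answer the earlier question about detecting this from within OpenSolaris:
one way is to check which driver each controller is bound to, along these
lines (a sketch; the device node names here are made up):

$ prtconf -D | egrep -i 'ahci|pci-ide'
    pci1002,4391, instance #0 (driver name: ahci)        <- AHCI mode
    pci-ide, instance #0 (driver name: pci-ide)          <- legacy IDE mode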

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Roy Sigurd Karlsbakk
>                             extended device statistics       ---- errors ----
>     r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
>   296.8    2.9 36640.2    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d0
>   296.7    2.5 36618.1    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d1
>   952.0    2.6 36689.6    7.1  0.5  0.8    0.6    0.8  26  46   0   0   0   0 c8d1
>     0.0    1.7     0.0    5.8  0.0  0.0   10.7   13.3   1   1   0   0   0   0 c9d0
>   963.7    2.8 36712.4    6.4  0.5  0.7    0.6    0.7  24  43   0  84   0  84 c9d1
>     0.0    1.7     0.0    5.8  0.0  0.0    0.0   22.9   0   1   0   0   0   0 c4t15d0
>  1000.9    3.7 36605.0    6.9  0.0  0.9    0.0    0.9   3  39   0   0   0   0 c4t16d0
>  1000.7    4.1 36579.6    7.5  0.0  0.9    0.0    0.9   4  37   0   0   0   0 c4t17d0

Seems to me c9d1 is having a hard time (note the 84 hard errors in the h/w 
column).  What is your zpool layout?
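
A quick way to show it (pool name assumed to be tank):

zpool status tank        # vdev layout: which disks sit in which raidz/mirror
zpool iostat -v tank 30  # per-vdev and per-disk bandwidth every 30 seconds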
 
Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Bob Friesenhahn

On Sat, 22 May 2010, Brian wrote:


The -xen output helped me determine that it was disks c7d0 and c7d1 that were 
slower.


You may be right, but it is not totally clear, since you really need to 
apply a workload which is assured to consistently load the disks.  I 
don't think that 'scrub' is necessarily best for this.  Perhaps the 
load issued to these two disks contains more random access requests.
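
For a more controlled comparison, even one sequential reader per disk will
do; a sketch (raw device names taken from your iostat output, adjust to
your system):

# read 4 GB sequentially from two disks in parallel, bypassing ZFS
dd if=/dev/rdsk/c7d0p0 of=/dev/null bs=1024k count=4096 &
dd if=/dev/rdsk/c8d1p0 of=/dev/null bs=1024k count=4096 &
wait

Then compare the per-disk kr/s in iostat -xen while they run.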


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/


Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread Brian
Following up with some more information here:

This is the output of "iostat -xen 30"

                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  296.8    2.9 36640.2    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d0
  296.7    2.5 36618.1    7.5  7.8  2.0   26.1    6.6  99  99   0   0   0   0 c7d1
  952.0    2.6 36689.6    7.1  0.5  0.8    0.6    0.8  26  46   0   0   0   0 c8d1
    0.0    1.7     0.0    5.8  0.0  0.0   10.7   13.3   1   1   0   0   0   0 c9d0
  963.7    2.8 36712.4    6.4  0.5  0.7    0.6    0.7  24  43   0  84   0  84 c9d1
    0.0    1.7     0.0    5.8  0.0  0.0    0.0   22.9   0   1   0   0   0   0 c4t15d0
 1000.9    3.7 36605.0    6.9  0.0  0.9    0.0    0.9   3  39   0   0   0   0 c4t16d0
 1000.7    4.1 36579.6    7.5  0.0  0.9    0.0    0.9   4  37   0   0   0   0 c4t17d0
                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  302.6    5.4 36937.5  156.3  7.7  2.0   24.9    6.4  98  99   0   0   0   0 c7d0
  303.1    5.4 36879.0  156.2  7.7  2.0   24.8    6.4  98  99   0   0   0   0 c7d1
  961.4    5.8 36974.3  155.6  0.5  0.8    0.5    0.8  26  47   0   0   0   0 c8d1
    0.0    3.5     0.0   22.7  0.0  0.0   11.1    9.5   1   2   0   0   0   0 c9d0
  961.1    5.5 37044.0  154.5  0.6  0.7    0.6    0.8  26  45   0  84   0  84 c9d1
    0.0    3.4     0.0   22.7  0.0  0.0    0.0   13.8   0   1   0   0   0   0 c4t15d0
  998.1    7.1 36995.6  155.4  0.0  1.0    0.0    1.0   3  40   0   0   0   0 c4t16d0
  996.8    7.2 36954.1  156.5  0.0  0.9    0.0    0.9   4  37   0   0   0   0 c4t17d0
                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  296.0    3.4 36583.8  155.5  7.8  2.0   26.0    6.6  99  99   0   0   0   0 c7d0
  296.3    3.3 36538.6  155.5  7.8  2.0   25.9    6.6  99  99   0   0   0   0 c7d1
  920.5    3.4 36649.7  155.0  0.7  0.8    0.7    0.9  29  49   0   0   0   0 c8d1
    0.0    1.8     0.0    5.9  0.0  0.0   10.6   11.4   1   1   0   0   0   0 c9d0
  948.9    3.3 36689.5  154.3  0.6  0.7    0.6    0.7  25  44   0  84   0  84 c9d1
    0.0    1.8     0.0    5.9  0.0  0.0    0.0   26.2   0   1   0   0   0   0 c4t15d0
  974.0    3.8 36643.4  154.7  0.0  1.0    0.0    1.0   4  41   0   0   0   0 c4t16d0
  976.8    3.8 36634.6  155.6  0.0  1.0    0.0    1.0   4  38   0   0   0   0 c4t17d0
                            extended device statistics       ---- errors ----
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
  301.3    3.6 37353.6  151.9  7.8  2.0   25.5    6.5  99  99   0   0   0   0 c7d0
  301.5    3.5 37340.0  151.8  7.8  2.0   25.4    6.5  99  99   0   0   0   0 c7d1
  979.5    3.4 37466.4  151.5  0.5  0.7    0.5    0.7  24  45   0   0   0   0 c8d1
    0.0    1.7     0.0    5.9  0.0  0.0   12.4   13.7   1   1   0   0   0   0 c9d0
  995.8    3.4 37456.7  151.0  0.5  0.7    0.5    0.7  23  43   0  84   0  84 c9d1
    0.0    1.8     0.0    5.9  0.0  0.0    0.0   23.5   0   1   0   0   0   0 c4t15d0
 1022.7    4.0 37410.8  151.4  0.0  0.9    0.0    0.9   3  38   0   0   0   0 c4t16d0
 1020.1    4.4 37473.8  152.0  0.0  0.9    0.0    0.9   4  38   0   0   0   0 c4t17d0

I looked at it for a while and the number of errors is not increasing.  I think 
they may have stemmed from when I was testing hot-plugging of the drive and 
disconnected it at one point.

The -xen output helped me determine that it was disks c7d0 and c7d1 that were 
slower.

I then tried to look at the cache settings in format:

pfexec format -e
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c4t15d0 
          /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@f,0
       1. c4t16d0 
          /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@10,0
       2. c4t17d0 
          /p...@0,0/pci1002,5...@3/pci1014,3...@0/s...@11,0
       3. c7d0 
          /p...@0,0/pci-...@11/i...@0/c...@0,0
       4. c7d1 
          /p...@0,0/pci-...@11/i...@0/c...@1,0
       5. c8d1 
          /p...@0,0/pci-...@11/i...@1/c...@1,0
       6. c9d0 
          /p...@0,0/pci-...@14,1/i...@1/c...@0,0
       7. c9d1 
          /p...@0,0/pci-...@14,1/i...@1/c...@1,0

If I look at c4t17d0, everything is how I expect it to be:

Specify disk (enter its number): 2
selecting c4t17d0
[disk formatted]
/dev/dsk/c4t17d0s0 is part of active ZFS pool tank. Please see zpool(1M).


FORMAT MENU:
        disk       - select a disk
        type       - select (define) a disk type
        partition  - select (define) a partition table
        current    - describe the current disk
        format     - format and analyze the disk
        fdisk      - run the fdisk program
        repair     - repair a defective sector
        label      - write label to the disk

Re: [zfs-discuss] Understanding ZFS performance.

2010-05-22 Thread taemun
iostat -xen 1 will provide the same device names as the rest of the system
(as well as show error columns).

zpool status will show you which drive is in which pool.

As for the controllers, cfgadm -al groups them nicely.
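
Putting those together, roughly (pool name assumed to be tank):

zpool status tank    # which c#t#d# devices sit in which vdev
cfgadm -al           # which controller/port each device hangs off
iostat -xen 30       # per-device throughput plus s/w, h/w, trn error counts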

t

On 23 May 2010 03:50, Brian  wrote:

> I am new to OSOL/ZFS but have just finished building my first system.
>
> I detailed the system setup here:
> http://opensolaris.org/jive/thread.jspa?threadID=128986&tstart=15
>
> I ended up having to add an additional controller card as two ports on the
> motherboard did not work as standard SATA ports.  Luckily I was able to
> salvage an LSI SAS card from an old system.
>
> Things seem to be working OK for the most part, but I am trying to dig a
> bit deeper into the performance.  I have done some searching and it seems
> that iostat -x can help you better understand your performance.
>
> I have 8 drives in the system.  2 are in a mirrored boot pool and the other
> 6 are in a single raidz2 pool.  All 6 are the same.  Samsung 1TB Spinpoints.
>
> Here is what my output looks like from "iostat -x 30" during a scrub of the
> raidz2 pool:
>
> device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> cmdk0      299.7    1.5 37080.9    1.6  7.7  2.0   32.3  98  99
> cmdk1      300.2    1.3 37083.0    1.5  7.7  2.0   32.2  98  99
> cmdk2     1018.6    1.6 37141.3    1.7  0.5  0.7    1.2  22  43
> cmdk3        0.0    1.8     0.0    5.2  0.0  0.0   33.7   1   2
> cmdk4     1045.6    2.1 37124.3    1.4  0.7  0.7    1.3  21  41
> sd6          0.0    1.8     0.0    5.2  0.0  0.0   25.1   0   1
> sd7       1033.4    2.5 37128.5    1.8  0.0  1.0    1.0   3  38
> sd8       1044.5    2.5 37129.4    1.8  0.0  0.9    0.9   3  36
>                     extended device statistics
> device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> cmdk0      301.9    1.3 37339.0    1.7  7.8  2.0   32.1  99  99
> cmdk1      302.1    1.4 37341.0    1.8  7.7  2.0   32.0  99  99
> cmdk2     1048.1    1.5 37400.4    1.6  0.5  0.7    1.1  20  42
> cmdk3        0.0    1.5     0.0    5.1  0.0  0.0   36.5   1   2
> cmdk4     1054.4    1.6 37363.1    1.5  0.7  0.6    1.2  20  40
> sd6          0.0    1.5     0.0    5.1  0.0  0.0   30.4   0   1
> sd7       1044.4    2.1 37404.2    1.7  0.0  0.9    0.9   3  38
> sd8       1050.5    2.1 37382.8    1.9  0.0  0.9    0.9   3  36
>                     extended device statistics
> device       r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
> cmdk0      296.3    1.5 36195.4    1.7  7.8  2.0   32.7  99  99
> cmdk1      295.2    1.5 36230.1    1.8  7.7  2.0   32.5  98  98
> cmdk2      987.5    2.0 36171.5    1.7  0.6  0.7    1.3  22  43
> cmdk3        0.0    1.5     0.0    5.1  0.0  0.0   37.7   1   2
> cmdk4     1018.3    2.0 36160.8    1.6  0.7  0.6    1.4  21  41
> sd6          0.0    1.5     0.0    5.1  0.0  0.1   40.3   0   2
> sd7       1005.3    2.6 36300.6    1.8  0.0  1.1    1.1   3  39
> sd8       1016.0    2.5 36260.1    2.0  0.0  1.0    1.0   3  36
>
>
> I think cmdk3 and sd6 are in my rpool.  I tried to split the pools across
> the controllers for better performance.
>
> It seems to me that cmdk0 and cmdk1 are much slower than the others.  But
> I am not sure why or what to check next...  In fact I am not even sure how I
> can trace back that device name to figure out which controller it is
> connected to.
>
> Any ideas or next steps would be appreciated.
>
> Thanks.