Re: [zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Steffen Weiberle

Hi Cindy, thanks for the reply...

On 08/05/09 18:55, cindy.swearin...@sun.com wrote:

Hi Steffen,

My advice is to go with a mirrored root pool, with all the disk space in s0
on each disk. Simple is best, and redundant simple is even better.


I will suggest that; I had already considered it. Since they may be forced 
to do a fresh load, they could turn off the HW RAID that is currently in 
place. The systems are T5xy0s.
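
For reference, a rough sketch of how the second internal disk could be 
attached to the root pool after a fresh load (device names are made up; 
the second disk needs a matching SMI/VTOC layout, and on SPARC the boot 
block typically has to be installed by hand):

  # prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2
  # zpool attach rpool c0t0d0s0 c0t1d0s0
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
        /dev/rdsk/c0t1d0s0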




I'm no write cache expert, but a few simple tests on Solaris 10 5/09
show me that the write cache is enabled on a disk that is labeled with
an SMI label and slice when the pool is created, if the whole disk's 
capacity is in slice 0, for example. 


Was that a performance test or a status test using something like format -e?

Turns out I am trying this on an ATA drive, and format -e doesn't do 
anything there.
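
For what it's worth, the status check I had in mind looks roughly like 
this on a SCSI/SAS disk (format expert mode; the cache menu just isn't 
offered for this ATA drive):

  # format -e
  ... select the disk ...
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> quit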


However, it's not enabled on my s10u7 root pool slice (all disk space is 
in slice 0), but it is enabled on my upcoming Solaris 10 root pool disk. 
Don't know what's up with that.


And I don't fully follow, since 5/09 is update 7 :)

Now maybe I do. The former case is a non-root pool and the latter is a 
root pool?



If performance is a goal, then go with two pools anyway, so that you have
more flexibility in configuring a mirrored or RAID-Z config for the data 
pool or adding log devices (if that helps their workload); it also gives
you more flexibility in managing ZFS BEs vs. ZFS data in zones, 
and so on.


The system has two drives, so I don't see how I/they could get more 
performance by using RAID, at least for the write side of things (and I 
don't know where the performance issue is).


My other concern with two pools on a single disk is that there is less 
likelihood of putting two unrelated writes close together if they are in 
different pools, rather than just different file systems/datasets in the 
same pool. So two pools might force considerably more head movement--across 
more of the platter.


With a root pool, you are currently constrained: no RAID-Z, you can't add 
additional mirrored vdevs, no log devices, and the pool can't be exported 
to another system, and so on.


These would be internal disks. Good point about the lack of log 
devices--not sure if there might be interest or opportunity in adding 
an SSD later.
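
If an SSD does turn up later and the data lives in its own pool, adding 
it should be a one-liner along these lines (hypothetical device name; as 
I understand it, a log device can be added to a data pool but not to the 
root pool, and on S10 today it can't be removed again):

  # zpool add datapool log c2t0d0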



The ZFS BP wiki provides more performance-related tips:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide


Ah, and I only looked at the Evil Tuning Guide. The Best Practices Guide 
mentions whole disks; however, it is not clear whether that applies to the 
root, non-EFI pool, so your information is of value to me.


Steffen



Cindy


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Cindy.Swearingen

Hi Steffen,

My advice is to go with a mirrored root pool, with all the disk space in s0
on each disk. Simple is best, and redundant simple is even better.

I'm no write cache expert, but a few simple tests on Solaris 10 5/09
show me that the write cache is enabled on a disk that is labeled with
an SMI label and slice when the pool is created, if the whole disk's 
capacity is in slice 0, for example. However, it's not enabled on my
s10u7 root pool slice (all disk space is in slice 0), but it is enabled
on my upcoming Solaris 10 root pool disk. Don't know what's up with
that.

If performance is a goal, then go with two pools anyway, so that you have
more flexibility in configuring a mirrored or RAID-Z config for the data 
pool or adding log devices (if that helps their workload); it also gives
you more flexibility in managing ZFS BEs vs. ZFS data in zones, 
and so on.
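
A rough sketch of the kind of layout I mean, assuming both disks are 
sliced into s0 for root and s1 for data (device names and slicing are 
just examples, and the installer creates the root pool--it's shown here 
only to illustrate the layout):

  # zpool create rpool mirror c0t0d0s0 c0t1d0s0
  # zpool create datapool mirror c0t0d0s1 c0t1d0s1
  # zfs create datapool/zones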


With a root pool, you are currently constrained: no RAID-Z, you can't add 
additional mirrored vdevs, no log devices, and the pool can't be exported 
to another system, and so on.

The ZFS BP wiki provides more performance-related tips:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide

Cindy

On 08/05/09 15:07, Steffen Weiberle wrote:

For Solaris 10 5/09...

There are supposed to be performance improvements if you create a zpool 
on a full disk, such as one with an EFI label. Does the same apply if 
the full disk is used with an SMI label, which is required to boot?


I am trying to determine the trade-off, if any, between a single rpool on 
cXtYd0s2 (if I can even do that) for improved performance, and two 
pools--a root pool and a separate data pool--for improved manageability 
and isolation. The data pool will have the zone root paths on it. The 
customer has stated they are experiencing some performance limits in 
their application due to the disk, and if creating a single pool will 
help by enabling the write cache, that may be of value.


If the *current* answer is no to having ZFS turn on the write cache at 
this time, is it something that is coming in OpenSolaris or an update to 
S10?


Thanks
Steffen

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ?: SMI vs. EFI label and a disk's write cache

2009-08-05 Thread Steffen Weiberle

For Solaris 10 5/09...

There are supposed to be performance improvements if you create a zpool 
on a full disk, such as one with an EFI label. Does the same apply if 
the full disk is used with an SMI label, which is required to boot?
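
In other words, I'm asking about the difference between something like 
these two (pool and device names are just examples):

  # zpool create tank c1t1d0      <- whole disk, ZFS puts an EFI label on it
  # zpool create tank c1t1d0s0    <- a slice of an SMI-labeled disk,
                                     which is what booting requires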


I am trying to determine the trade-off, if any, between a single rpool on 
cXtYd0s2 (if I can even do that) for improved performance, and two 
pools--a root pool and a separate data pool--for improved manageability 
and isolation. The data pool will have the zone root paths on it. The 
customer has stated they are experiencing some performance limits in 
their application due to the disk, and if creating a single pool will 
help by enabling the write cache, that may be of value.


If the *current* answer is no to having ZFS turn on the write cache at 
this time, is it something that is coming in OpenSolaris or an update to 
S10?
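
(I assume the write cache could also be turned on by hand via format -e's 
cache/write_cache/enable menu on a SCSI/SAS disk, but my understanding is 
that this is only safe when ZFS owns every slice on the disk, since ZFS 
flushes the cache and UFS does not--so I'd rather have ZFS manage it.)

  # format -e
  ... select the disk ...
  format> cache
  cache> write_cache
  write_cache> enable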


Thanks
Steffen
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss