Re: [zfs-discuss] ashift and vdevs

2010-11-30 Thread taemun
On 24 November 2010 01:40, David Magda  wrote:

> It's a per-pool property, and currently hard coded to a value of nine
> (i.e., 2^9 = 512).


On 27 November 2010 14:11, Brandon High  wrote:

> The ashift is set in the pool when it's created and will persist
> through the life of that pool. If you set it at pool creation, it will
> stay regardless of OS upgrades.
>

I beg to differ:

$ zdb -C | grep -B1 -A9 "type: 'raidz'"
            children[0]:
                type: 'raidz'
                id: 0
                guid: 2697156371937180589
                nparity: 1
                metaslab_array: 30
                metaslab_shift: 37
                ashift: 12
                asize: 18003469271040
                is_log: 0
                create_txg: 4
--
            children[1]:
                type: 'raidz'
                id: 1
                guid: 8374290131789411367
                nparity: 1
                metaslab_array: 28
                metaslab_shift: 37
                ashift: 12
                asize: 18003469271040
                is_log: 0
                create_txg: 4
--
            children[2]:
                type: 'raidz'
                id: 2
                guid: 7520329545218679233
                nparity: 1
                metaslab_array: 64
                metaslab_shift: 37
                ashift: 9
                asize: 17998477000704
                is_log: 0
                create_txg: 42736

For the pool:
$ zpool status tank | grep -v d0
  pool: tank
 state: ONLINE
 scan: resilvered 0 in 0h0m with 0 errors on Thu Nov 25 22:01:52 2010
config:

NAME        STATE     READ WRITE CKSUM
tank        ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
  raidz1-1  ONLINE       0     0     0
  raidz1-2  ONLINE       0     0     0

I created the pool with the modified zpool binary from
http://digitaldj.net/2010/11/03/zfs-zpool-v28-openindiana-b147-4k-drives-and-you/,
with just the top two vdevs (Samsung HD204UIs and Seagate LPs, which emulate
512 B sectors on 4 KiB physical sectors). The pool was later expanded with the
default zpool binary (as normally installed in SX11), adding the third vdev of
Hitachi 7200 rpm drives (native 512 B sectors).

I'm presently trying to confirm this with something like iosnoop, but I'm
struggling to isolate which vdev is which. I can confirm (via iotop) that
zpool-tank is issuing 512 B reads and writes; I'd presume those are limited to
the third (bottom) vdev.
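
One way to narrow it down is the DTrace io provider (a rough sketch, run as
root, untested against this pool):

  # aggregate physical I/O sizes per device
  dtrace -n 'io:::start { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'

Let it run under some load and interrupt it with Ctrl-C; the 512-byte bucket
should only show up for the sd instances behind the ashift=9 vdev, and
comparing iostat -x with iostat -xn output maps those instance names back to
c#t#d# devices.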

Newsflash for the internet: you can have pools with mixed ashifts. This
means that you don't need to rebuild your pool from scratch to integrate
4KiB physical sector drives. Please, Oracle, allow users to provide ashift
during zpool create and zpool add. Please!




Re: [zfs-discuss] mirrored drive

2010-11-30 Thread Richard Elling
On Nov 29, 2010, at 5:05 AM, Dick Hoogendijk wrote:

> OK, I've got a problem I can't solve by myself. I've installed Solaris 11 
> using just one drive.
> Now I want to create a mirror by attaching a second drive to the rpool.
> However, the first drive has NO partition 9 but the second one does, so the 
> sizes differ when I create a partition 0 (needed because it's a boot 
> disk).
> 
> How can I make the second disk look exactly the same as the first?
> Or can't that be done?

There is a whole section on managing boot disks in the ZFS Admin Guide,
worth a look.
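
In outline the procedure comes down to something like this (a rough sketch
only; c0t0d0/c0t1d0 are placeholders for your two disks, the new disk is
assumed to already carry a Solaris fdisk partition on x86, and on SPARC
installboot takes the place of installgrub):

  # copy the first disk's VTOC to the second so the slice layouts match
  prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

  # attach the matching slice to the root pool and let it resilver
  zpool attach rpool c0t0d0s0 c0t1d0s0
  zpool status rpool

  # make the new disk bootable (x86)
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0

Copying the VTOC with prtvtoc/fmthard should also take care of the partition 9
difference, since the second disk ends up with exactly the same slice table as
the first.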
 -- richard



Re: [zfs-discuss] SUNWzfsg missing in Solaris 11 express?

2010-11-30 Thread Linder, Doug
Craig Morgan wrote:

> The GUI was a plug-in to Sun WebConsole which is/was a Solaris10
> feature ... I would expect some integration of that going forward, but
> you'd have to check with Oracle on integration plans.

It was a POS anyway, in my opinion.  It was really tough to get working and 
didn't do all that much.  My guess is that Oracle will dump it and everything 
similar, because they want anyone who needs to manage more than half a dozen 
systems to spend $OBSCENE on the fancy Enterprise Manager suite.  It does look 
like really handy software.  But when I priced it with them once, it took me 
several minutes to stop laughing long enough to tell them that we wouldn't 
spend more on management software than the hardware itself cost, especially 
PER SYSTEM rather than as a site license.  




[zfs-discuss] Resizing ZFS block devices and sbdadm

2010-11-30 Thread Don
sbdadm can be used with a regular ZFS file or a ZFS block device.

Is there an advantage to using a ZFS block device and exporting it to COMSTAR 
via sbdadm, as opposed to using a file and exporting that (e.g. performance or 
manageability)?

Also- let's say you have a 5G block device called pool/test

You can resize it by doing:
zfs set volsize=10G pool/test

However, if the device was already imported into COMSTAR, stmfadm list-lu -v 
will still report only the original 5G size. You can then grow the LU with 
sbdadm modify-lu -s 10G, but I'm not sure whether there's a chance of a size 
mismatch between ZFS and sbd.

That is: if I specify 10G in ZFS and then do an sbdadm modify-lu -s 10G, is 
there any chance the two won't align and I'll end up writing past the end of 
the zvol?
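
One way to take the guesswork out of it (a rough sketch; <GUID> is a
placeholder for whatever sbdadm list-lu reports for this zvol):

  # grow the zvol, then note its exact size in bytes
  zfs set volsize=10G pool/test
  zfs get -Hp -o value volsize pool/test

  # grow the LU to the same nominal size, then see what COMSTAR reports
  sbdadm modify-lu -s 10G <GUID>
  stmfadm list-lu -v

As long as the size stmfadm reports for the LU stays less than or equal to the
zvol's volsize, the initiator can never be handed blocks past the end of the
zvol, so comparing the two numbers after the resize should settle it.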

Thanks in advance-
-- 
This message posted from opensolaris.org


Re: [zfs-discuss] zpool does not like iSCSI ?

2010-11-30 Thread David Magda
On Tue, November 30, 2010 14:09, Pasi Kärkkäinen wrote:
>> Bug ID: 6907687 zfs pool is not automatically fixed when disk are
>> brought back online or after boot
>>
>> An IDR patch already exists, but no official patch yet.
>
> Do you know if these bugs are fixed in Solaris 11 Express?

The bug report says it was fixed in snv_140, and S11E is based on snv_151a, so
the fix should be included:

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6907687




Re: [zfs-discuss] zpool does not like iSCSI ?

2010-11-30 Thread Pasi Kärkkäinen
On Tue, Nov 09, 2010 at 04:18:17AM -0800, Andreas Koppenhoefer wrote:
> From Oracle Support we got the following info:
> 
> Bug ID: 6992124 reboot of Sol10 u9 host makes zpool FAULTED when zpool uses 
> iscsi LUNs
> This is a duplicate of:
> Bug ID: 6907687 zfs pool is not automatically fixed when disk are brought 
> back online or after boot
> 
> An IDR patch already exists, but no official patch yet.
> 

Do you know if these bugs are fixed in Solaris 11 Express?

-- Pasi



Re: [zfs-discuss] Seagate ST32000542AS and ZFS perf

2010-11-30 Thread Krunal Desai
> Not sure where you got this figure from; the "Barracuda Green"
> (http://www.seagate.com/docs/pdf/datasheet/disc/ds1720_barracuda_green.pdf) is
> a different drive from the one we've been talking about in this thread
> (http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_lp.pdf).
> I would note that the Seagate 2TB LP has a 0.32% Annualised Failure Rate,
> i.e., in a given sample (which isn't overheating, etc.), 32 out of every
> 10,000 should fail per year. I *believe* the Power-On Hours figure on the
> Barracuda Green is simply saying that it is designed for 24/7 usage; it's a
> per-year number. I couldn't imagine them specifying the number of hours
> before failure like that, just below an AFR of 0.43%.

Whoops, yes, that's what I did: I assumed that LP == Green, but I guess that's
not the case. I got two from the Newegg sale; I'll post my impressions once I
receive them and add them to a pool... assuming they survive Newegg's rather
subpar hard-drive packaging process.