Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread Jan Hellevik
I think the 'corruption' is caused by the shuffling and mismatching of the disks. 
One 1.5TB disk is now believed to be part of a mirror with a 2TB, a 1TB part of a 
mirror with a 1.5TB, and so on. It would be better if ZFS tried to find the 
second disk of each mirror instead of relying on which controller/channel/port 
it was previously connected to.

So, my best course of action would be to delete the zpool.cache and then do a 
zpool import?

Should I try to match the disks with the cables as they were previously connected 
before I do the import? Will that make any difference?

BTW, ZFS version is 22.

Thanks, 

Jan


Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread David Magda

On Oct 28, 2010, at 04:44, Jan Hellevik wrote:

> So, my best course of action would be to delete the zpool.cache and then do a
> zpool import?
>
> Should I try to match the disks with the cables as they were previously
> connected before I do the import? Will that make any difference?
>
> BTW, ZFS version is 22.

I'd say export, rename zpool.cache, and then try importing it. ZFS  
should scan all the devices and figure out what's there. If that still  
doesn't work, try the -F option to go back a few transactions to a  
known-good state.
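
For the record, that sequence looks roughly like this (a sketch only, with 
'tank' standing in for the real pool name and zpool.cache in its stock location):

# zpool export tank                  # if the pool is still imported
# mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.old
# zpool import                       # scan devices and list importable pools
# zpool import tank
# zpool import -F tank               # only if the plain import refuses

Note that a bare 'zpool import' only searches /dev/dsk by default; use -d to 
point it at another device directory if needed.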


Most file systems don't take well to having disks pulled out from under them, 
and ZFS is no different there. It's just that with ZFS you can tell when there 
are (potentially) corrupted blocks, because of the checksumming.




[zfs-discuss] Good write, but slow read speeds over the network

2010-10-28 Thread Stephan Budach

Hi all,

I am running Netatalk on OpenSolaris snv_134 on a Dell R610 server with 32 GB of 
RAM. I am experiencing very different speeds when writing to and reading from 
the pool.
The pool itself consists of two FC LUNs that each form a top-level vdev (no 
comments on that please, we discussed that already! ;) ).


Now, I have a couple of AFP clients that access this pool via either Fast 
Ethernet or Gigabit Ethernet. The point is: writes to these AFP shares are 
pretty fast. Thanks to the fairly high amount of RAM I am getting transfer 
speeds of up to 90 MB/sec for a 5 GB file, which is not bad. But reading from 
the pool seems to top out at 40 to 50 MB/sec, no matter what I try.


When I run 'zpool iostat pool 1' while reading a big file from the share, I can 
see that the pool sustains continuous reads of about 50 to 60 MB/sec, while I am 
getting approx. 30 MB/sec on the client. Afterwards the file must reside in the 
ZFS cache, since reading the same file again shows no read activity on the pool 
at all, yet the transfer rate is still quite slow: a maximum of 45 to 50 MB/sec.


When I use dd on the host itself to copy the file from the pool to /dev/null I 
get approx. 120 MB/sec, which is still way more than the 30 to 50 MB/sec I am 
getting over the network. If I use nc on the server, I get approx. 200 MB/sec, 
so the pool seems to be fast enough.
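
Concretely, the tests look roughly like this (pool, share, and file names are 
placeholders, and nc syntax may differ between implementations):

# zpool iostat pool 1                               # watch pool throughput in a second terminal
# dd if=/pool/share/bigfile of=/dev/null bs=1024k   # local read, no network involved
# nc -l 3333 < /pool/share/bigfile                  # on the server: serve the raw file over TCP
$ nc server 3333 > /dev/null                        # on a client: raw network throughput, no AFP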


I was wondering if there is anything ZFS-related that could help here, or is 
this more likely a network-related issue?


Cheers,
budy

--
Stephan Budach
Jung von Matt/it-services GmbH
Glashüttenstraße 79
20357 Hamburg

Tel: +49 40-4321-1353
Fax: +49 40-4321-1114
E-Mail: stephan.bud...@jvm.de
Internet: http://www.jvm.com

Geschäftsführer: Ulrich Pallas, Frank Wilhelm
AG HH HRB 98380



Re: [zfs-discuss] Ooops - did it again... Moved disks without export first.

2010-10-28 Thread Jan Hellevik
Thanks! I will try later today and report back the result.


Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-10-28 Thread Mariusz
Hi.
I installed Solaris 10 x86 on a PowerEdge R510 with a PERC H700 without problems. 
The 8 HDDs are configured as RAID 6.
My only question is how to monitor this controller.

Do you have any tools that allow you to monitor this controller and 
get the HDD status?

Thank you for your help.

PS.
I know this is an OpenSolaris group, not a Solaris one, but maybe I can get help here.


Re: [zfs-discuss] PowerEdge R510 with PERC H200/H700 with ZFS

2010-10-28 Thread Kyle McDonald



On 8/7/2010 4:11 PM, Terry Hull wrote:

> It is just that lots of the PERC controllers do not do JBOD very well. I've
> done it several times, making a RAID 0 for each drive. Unfortunately, that
> means the server has lots of RAID hardware that is not utilized very well.

Doing that lets you use the cache, which is the only part of the RAID
HW that I'd worry about wasting.

> Also, ZFS loves to see lots of spindles, and Dell boxes tend not to have
> lots of drive bays in comparison to what you can build at a given price
> point.

I've found the R515 (the R510's cousin with AMD processors) to be
very interesting in this regard. It has many more drive bays than most
Dell boxes.

I've also priced out the IBM x3630 M3: even more drive bays in this one,
for about 20% more.

> Of course then you have warranty / service issues to consider.

I don't know what your needs are, but I found Dell's 5-year onsite 10x5
NBD support to be priced very attractively. But I can live with a
machine being down until the next day, or through a weekend.

 -Kyle

> --
> Terry Hull
> Network Resource Group, Inc.




[zfs-discuss] sharesmb should be ignored if filesystem is not mounted

2010-10-28 Thread Richard L. Hamilton
I have sharesmb=on set for a bunch of filesystems, including three that weren't 
mounted.  Nevertheless, all of them are advertised.  Needless to say, the ones 
that aren't mounted can't be accessed remotely, even though, since they are 
advertised, it looks like they could be.

# zfs list -o name,mountpoint,sharesmb,mounted | awk '$(NF-1)!="off" && $(NF-1)!="-" && $NF!="yes"'
NAME                MOUNTPOINT                SHARESMB  MOUNTED
rpool/ROOT          legacy                    on        no
rpool/ROOT/snv_129  /                         on        no
rpool/ROOT/snv_93   /tmp/.alt.luupdall.22709  on        no
# 


So I think that if a zfs filesystem is not mounted,
sharesmb should be ignored.

This is in snv_97 (SXCE; with a pending LU BE not yet activated,
and an old one no longer active); I don't know if it's still a problem in
current builds that unmounted filesystems are advertised, but if it is,
I can see how it could confuse clients.  So I thought I'd mention it.
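
In the meantime, a rough workaround is to switch sharing off explicitly on 
anything that is shared but not mounted.  A sketch only; check the list it 
produces before acting on it:

# zfs list -H -o name,sharesmb,mounted | \
    awk '$2 != "off" && $2 != "-" && $3 == "no" {print $1}' | \
    while read fs; do zfs set sharesmb=off "$fs"; done

Note that this sets sharesmb locally on each dataset, overriding any inherited 
value; 'zfs inherit sharesmb <dataset>' undoes it later.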


Re: [zfs-discuss] sharesmb should be ignored if filesystem is not mounted

2010-10-28 Thread Richard L. Hamilton
PS obviously these are home systems; in a real environment,
I'd only be sharing out filesystems with user or application
data, and not local system filesystems!  But since it's just
me, I somewhat trust myself not to shoot myself in the foot.


[zfs-discuss] Mirroring a zpool

2010-10-28 Thread SR
I have a raidz2 zpool which I would like to create a mirror of.

Is it possible to create a mirror of a zpool?

I know I can create multi-way mirrors of vdevs, do zfs send/receive, etc., to 
mirror data.  But can I create a mirror at the zpool level?

Thanks
SR


Re: [zfs-discuss] Mirroring a zpool

2010-10-28 Thread Cindy Swearingen

Hi SR,

You can create a mirrored storage pool, but you can't mirror
an existing raidz2 pool nor can you convert a raidz2 pool
to a mirrored pool.

You would need to copy the data from the existing pool,
destroy the raidz2 pool, and create a mirrored storage
pool.
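
A rough sketch of that, assuming the existing pool is 'tank', the new pool is 
'tank2', you have enough spare disks to build the new pool alongside the old 
one, and the device names are made up:

# zpool create tank2 mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0
# zfs snapshot -r tank@migrate                         # recursive snapshot of everything
# zfs send -R tank@migrate | zfs receive -Fd tank2     # replicate datasets and properties
# zpool destroy tank                                   # only after verifying the copy

The send/receive can also go through ssh to another machine if the new pool has 
to live elsewhere.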

Cindy

On 10/28/10 11:19, SR wrote:

> I have a raidz2 zpool which I would like to create a mirror of.
>
> Is it possible to create a mirror of a zpool?
>
> I know I can create multi-way mirrors of vdevs, do zfs send/receive, etc., to
> mirror data.  But can I create a mirror at the zpool level?
>
> Thanks
> SR



Re: [zfs-discuss] zil behavior

2010-10-28 Thread Edward Ned Harvey



[zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
I have a couple of drive enclosures:
15x 450 GB 15k RPM SAS
15x 600 GB 15k RPM SAS

I'd like to set them up like RAID10.  Previously, I was using two hardware 
RAID10 volumes, with the 15th drive as a hot spare, in each enclosure.

Using ZFS, it would be nice to make them a single pool, so that I could share 
L2ARC and ZIL devices, rather than buy two sets.

It appears possible to set up 7x 450 GB mirrored pairs and 7x 600 GB mirrored 
pairs in the same pool, without losing capacity.  Is that a bad idea?  Is there 
a problem with having different stripe sizes like this?
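
For concreteness, the layout I have in mind would be built along these lines (a 
sketch; the device names are made up, and the mirror pairs repeat for the 
remaining drives in each enclosure):

# zpool create tank \
    mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
    mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
    spare c1t14d0 c2t14d0 \
    log c3t0d0 cache c3t1d0

Each mirror pair becomes its own top-level vdev, so the 450 GB and 600 GB pairs 
simply end up as stripes of different sizes in the same pool.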

Thanks,
Rob
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Ian Collins

On 10/29/10 09:40 AM, Rob Cohen wrote:

> I have a couple of drive enclosures:
> 15x 450 GB 15k RPM SAS
> 15x 600 GB 15k RPM SAS
>
> I'd like to set them up like RAID10.  Previously, I was using two hardware
> RAID10 volumes, with the 15th drive as a hot spare, in each enclosure.
>
> Using ZFS, it would be nice to make them a single pool, so that I could share
> L2ARC and ZIL devices, rather than buy two sets.
>
> It appears possible to set up 7x 450 GB mirrored pairs and 7x 600 GB mirrored
> pairs in the same pool, without losing capacity.  Is that a bad idea?  Is there
> a problem with having different stripe sizes like this?
The problem would be one of performance once the pool becomes more than 
75% full.  At that point the smaller vdevs may be full and all new 
write activity will be restricted to the bigger vdevs.


--
Ian.



Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Rob Cohen
Thanks, Ian.

If I understand correctly, the performance would then drop to the same level as 
if I had set them up as separate pools in the first place.

So, I get double the performance for 75% of my data, and equal performance for 
25% of my data, and my L2ARC will adapt to my working set across both 
enclosures.

That sounds like all upside, and no downside, unless I'm missing something.

Are there any other problems?


Re: [zfs-discuss] stripes of different size mirror groups

2010-10-28 Thread Roy Sigurd Karlsbakk
> If I understand correctly, the performance would then drop to the same
> level as if I had set them up as separate pools in the first place.
>
> So, I get double the performance for 75% of my data, and equal
> performance for 25% of my data, and my L2ARC will adapt to my working
> set across both enclosures.
>
> That sounds like all upside, and no downside, unless I'm missing
> something.
>
> Are there any other problems?

Not really. You also have the option to replace the smaller drives with bigger 
ones, one by one, if you set autoexpand=on on that pool.
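
That replacement dance looks roughly like this (a sketch; the device names are 
made up, and each mirror only grows once both of its sides have been swapped 
for bigger disks):

# zpool set autoexpand=on tank
# zpool replace tank c1t0d0 c4t0d0      # swap one side of a mirror
# zpool status tank                     # wait for the resilver to finish
# zpool replace tank c1t1d0 c4t1d0      # then swap the other side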

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. 
It is an elementary imperative for all pedagogues to avoid excessive use of 
idioms of foreign origin. In most cases, adequate and relevant synonyms exist 
in Norwegian.