Re: [zfs-discuss] Zpool import not working - I broke my pool...

2008-10-18 Thread James
Any updates on this ?
I created a pool in 5/08, then added a slog device, which sadly failed.
I can no longer import the pool; it fails with "cannot import 'mypool': one or
more devices is currently unavailable".
I have tried the latest OpenSolaris pre-release (2008.11, based on
Nevada build 99) and still have no luck.
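
For reference, this is roughly the sequence that fails (a sketch; the exact
listing output may differ):

  # zpool import                (lists 'mypool' with the failed log device)
  # zpool import -f mypool
  cannot import 'mypool': one or more devices is currently unavailable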

Please advise,
James
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Pool corruption avoidance

2008-10-18 Thread Ahmed Kamal
Hi,
Unfortunately, every now and then someone ends up with a corrupt zpool and no
tools to fix it! This is due either to zfs bugs or to hardware lying about
whether the bits really hit the platters. I am evaluating what I should be
using for storing VMware ESX VM images (ext3 or zfs on NFS). I really, really
want zfs snapshots, but losing the pool would be a royal pain for small
businesses.

My questions are:
1- What are the best practices to avoid pool corruption, even if they incur a
performance hit? (See the sketch below for the kind of thing I mean.)
2- I remember a suggestion that zfs could iterate back in time when importing
a zpool until it finds a fully written state and use that, thus avoiding
corruption. Is there an RFE for that yet? I'd like to subscribe to it, and I
might even delay jumping on the zfs wagon until it has this recovery feature!
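
For question 1, this is the kind of precaution I already have in mind (just a
sketch, assuming a simple two-disk setup; device and dataset names are
placeholders):

  # zpool create tank mirror c1t0d0 c1t1d0     (redundancy so zfs can self-heal)
  # zfs create -o copies=2 tank/vmimages       (extra copies of data and metadata)
  # zpool scrub tank                           (run regularly to catch latent errors)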

Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] smb "veto oplock files" in solaris Cifs / ZFS

2008-10-18 Thread Jonny Wichtig
Does the OpenSolaris CIFS server or ZFS support Samba's "veto oplock files"
share option? How can I activate it?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-10-18 Thread Toby Thain

On 18-Oct-08, at 12:46 AM, Roch Bourbonnais wrote:

>
> Leave the default recordsize. With a 128K recordsize, files smaller than
> 128K are stored as a single record, tightly fitted to the smallest
> possible number of disk sectors. Reads and writes are then managed with
> fewer ops.
>
> Not tuning the recordsize is generally more space efficient and performs
> better.
> Large databases (fixed-size, aligned accesses to an uncacheable working
> set) are the exception here (tuning recordsize helps), along with a few
> other corner cases.
>
> -r
>
>
> Le 15 sept. 08 à 04:49, Peter Eriksson a écrit :
>
>> I wonder if there exists some tool that can be used to figure out an
>> optimal ZFS recordsize configuration? Specifically for a mail
>> server using Maildir (one ZFS filesystem per user), i.e. lots of
>> small files (one file per email).


Emails aren't as small as they used to be. I wouldn't be surprised if  
the median size is a good portion of 128K anyway.
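
(If anyone wants to check their own spool, a rough one-liner to get the
median message size -- the path is just an example:)

  find /export/mail -type f | xargs ls -ln | awk '{print $5}' | sort -n | \
    awk '{s[NR]=$1} END {print "median message size:", s[int((NR+1)/2)], "bytes"}'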

--Toby


>> --
>> This message posted from opensolaris.org
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-18 Thread Mike La Spina
Ciao,

Your GUIDs must not be the same: an NAA is already established on the targets,
and if you previously tried to initialize the LUN with VMware it would have
assigned the value in the VMFS header, which is now stored on your raw ZFS
backing store. This confuses VMware, and it will now remember it somewhere in
its definitions. You need to remove the second datastore from VMware and
delete the target definition and the ZFS backing store.

Once you recreate the backing store and target you should get a new GUID and
iqn, which should cure the issue.
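
Something along these lines (a sketch only -- the pool, volume and target
names here are made up, so adjust them to your setup):

  # iscsitadm delete target --lun 0 vmtarget2     (drop the old target definition)
  # zfs destroy mypool/vmlun2                     (drop the ZVOL backing store)
  # zfs create -V 100G mypool/vmlun2              (recreate the backing store)
  # iscsitadm create target -b /dev/zvol/rdsk/mypool/vmlun2 vmtarget2
  # iscsitadm list target -v                      (confirm the new GUID and iqn)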

Regards,

Mike
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-10-18 Thread Nigel Smith
According to the svccfg(1M) man page:
http://docs.sun.com/app/docs/doc/819-2240/svccfg-1m?a=view
...it should be just 'export' without a leading '-' or '--'.
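
i.e. something like this (a sketch; I am assuming the target service is
system/iscsitgt):

  # svccfg export system/iscsitgt > /var/tmp/iscsitgt.xml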

I've been googling NAA: it is the 'Network Address Authority'.
It seems to be yet another way of uniquely identifying a target & LUN,
and is apparently meant to be compatible with the way that Fibre Channel &
SAS do this. For further details, see:
http://tools.ietf.org/html/rfc3980
"T11 Network Address Authority (NAA) Naming Format for iSCSI Node Names"

I also found this blog post:
http://timjacobs.blogspot.com/2008/08/matching-luns-between-esx-hosts-and-vcb.html
...which talks about VMware ESX and NAA.

For anyone interested in the code fixes to the Solaris
iSCSI target to support VMware ESX server, take a look
at these links:
http://hg.genunix.org/onnv-gate.hg/rev/29862a7558ef
http://hg.genunix.org/onnv-gate.hg/rev/5b422642546a

Tano, based on the above, I would say you need
unique GUIDs for the two separate targets/LUNs.
Best Regards
Nigel Smith
http://nwsmith.blogspot.com/
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice recommendations for backing up to ZFS Fileserver

2008-10-18 Thread Ahmed Kamal
For *nix, use rsync.
For Windows, use rsyncshare:
http://www.nexenta.com/corp/index.php?option=com_remository&Itemid=77&func=startdown&id=18
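
For the Linux boxes, each backup run would then be something like this
(a sketch; hostnames, paths and the dataset name are placeholders):

  $ rsync -aH --delete /home/ backupuser@fileserver:/tank/backups/pc1/
  (then, on the fileserver)
  # zfs snapshot tank/backups/pc1@$(date +%Y-%m-%d)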


On Sat, Oct 18, 2008 at 1:56 PM, Ares Drake <[EMAIL PROTECTED]> wrote:

> Greetings.
>
> I am currently looking into setting up a better backup solution for our
> family.
>
> I own a ZFS fileserver with a 5x500GB raidz. I want to back up data (not
> the OS itself) from multiple PCs running Linux or Windows XP. The Linux
> boxes are connected via 1000Mbit, the Windows machines either via
> gigabit as well or 54Mbit WPA-encrypted WLAN. So far I've set up sharing
> via NFS on the Solaris box and it works well from both Linux and Windows
> (via SFU).
>
> I am looking for a solution to do incremental backups without wasting
> space on the fileserver and I want to be able to access a single file in
> the backup in different versions without much hassle. I think it can be
> done easily with ZFS and Snapshots?
>
> What would be good ways to get the files to the fileserver? For linux I
> thought of using rsync to sync the files over, then do a snapshot to
> preserve that backup state. Would you recommend using rsync with NFS or
> over ssh? (I assume the network is safe enough for our needs.) Are there
> better alternatives?
>
> How to best get the data from the Windows machines to the Solaris box?
> Just copying them over by hand would not delete files on the fileserver
> in case some files are deleted on the windows box in between different
> backups. Using rsync on Windows is only possible with Cygwin emulation.
> Maybe there are better methods?
>
>
> Anyone have a similar setup, recommendations, or maybe something I could
> use as an idea?
>
> Thanks in advance,
>
> A. Drake
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Best practice recommendations for backing up to ZFS Fileserver

2008-10-18 Thread Ares Drake
Greetings.

I am currently looking into setting up a better backup solution for our
family.

I own a ZFS fileserver with a 5x500GB raidz. I want to back up data (not
the OS itself) from multiple PCs running Linux or Windows XP. The Linux
boxes are connected via 1000Mbit, the Windows machines either via
gigabit as well or 54Mbit WPA-encrypted WLAN. So far I've set up sharing
via NFS on the Solaris box and it works well from both Linux and Windows
(via SFU).

I am looking for a solution that does incremental backups without wasting
space on the fileserver, and I want to be able to access a single file in
the backup in different versions without much hassle. I think this can be
done easily with ZFS and snapshots?
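
(From what I have read, the per-snapshot file versions would then be reachable
under the hidden .zfs directory, something like this -- names are only
examples:)

  # zfs snapshot tank/backup@2008-10-18
  # ls /tank/backup/.zfs/snapshot/2008-10-18/some/file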

What would be good ways to get the files to the fileserver? For Linux I
thought of using rsync to sync the files over, then do a snapshot to
preserve that backup state. Would you recommend using rsync with NFS or
over ssh? (I assume the network is safe enough for our needs.) Are there
better alternatives?

How to best get the data from the Windows machines to the Solaris box?
Just copying them over by hand would not delete files on the fileserver
in case some files are deleted on the Windows box between backups. Using
rsync on Windows is only possible with Cygwin emulation. Maybe there are
better methods?


Anyone have a similar setup, recommendations, or maybe something I could
use as an idea?

Thanks in advance,

A. Drake


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How can i make my zpool as faulted.

2008-10-18 Thread yuvraj
Hi Sanjeev,
Please let me know how to pull out any of the disks in my
pool. Is there a command available for this?

Thanks in advance.

Regards,
Yuvraj Balkrishna Jadhav.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How can i make my zpool as faulted.

2008-10-18 Thread yuvraj
Hi Sanjeev,
I am herewith giving all the details of my zpools, from running the
'# zpool status' command on the command line. Please go through them and
help me out.

Thanks in advance.

Regards,
Yuvraj Balkrishna Jadhav.

==

# zpool status
  pool: mypool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        mypool1     ONLINE       0     0     0
          /disk1    ONLINE       0     0     0
          /disk2    ONLINE       0     0     0

errors: No known data errors

  pool: zpool21
 state: ONLINE
 scrub: scrub completed with 0 errors on Sat Oct 18 13:01:52 2008
config:

        NAME        STATE     READ WRITE CKSUM
        zpool21     ONLINE       0     0     0
          /disk3    ONLINE       0     0     0
          /disk4    ONLINE       0     0     0

errors: No known data errors
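
(Since both pools here are built on plain files, I suppose one way to get a
faulted state for testing would be something like the following -- just a
sketch, I have not run it:)

  # zpool export mypool1
  # mv /disk1 /disk1.moved            (simulate pulling that "disk")
  # zpool import -d / mypool1         (the import should now complain about
                                       the missing device)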
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Improving zfs send performance

2008-10-18 Thread Carsten Aulbert
Hi

Miles Nordin wrote:
>> "r" == Ross  <[EMAIL PROTECTED]> writes:
> 
>  r> figures so close to 10MB/s.  All three servers are running
>  r> full duplex gigabit though
> 
> there is one tricky way 100Mbit/s could still bite you, but it's
> probably not happening to you.  It mostly affects home users with
> unmanaged switches:
> 
>   http://www.smallnetbuilder.com/content/view/30212/54/
>   http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html
> 
> because the big switch vendors all use pause frames safely:
> 
>  http://www.networkworld.com/netresources/0913flow2.html -- pause frames as 
> interpreted by netgear are harmful

That rings a bell. Ross, are you using NFS via UDP or TCP? Could it be
that your network has different performance levels for the different
transport types? For our network we have disabled pause frames completely
and rely only on TCP's internal mechanisms to prevent flooding/blocking.
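
(For reference, the transport actually in use can be checked on the NFS
client with something like this -- the proto= field in the Flags line shows
tcp or udp per mount:)

  $ nfsstat -m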

Carsten

PS: the job with the 25k files adding up to 800 GB is now done - the zfs send
took only 52 hrs at a speed of ~4.5 MB/s :(
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss