Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Joe Little
On Thu, Jun 5, 2008 at 9:26 PM, Tim <[EMAIL PROTECTED]> wrote:
>
>
> On Thu, Jun 5, 2008 at 11:12 PM, Joe Little <[EMAIL PROTECTED]> wrote:
>>
>> On Thu, Jun 5, 2008 at 8:16 PM, Tim <[EMAIL PROTECTED]> wrote:
>> >
>> >
>> > On Thu, Jun 5, 2008 at 9:17 PM, Peeyush Singh <[EMAIL PROTECTED]>
>> > wrote:
>> >>
>> >> Hey guys, please excuse me in advance if I say or ask anything stupid
>> >> :)
>> >>
>> >> Anyway, Solaris newbie here.  I've built for myself a new file server
>> >> to
>> >> use at home, in which I'm planning on configuring SXCE-89 & ZFS.  It's
>> >> a
>> >> Supermicro C2SBX motherboard with a Core2Duo & 4GB DDR3.  I have
>> >> 6x750GB
>> >> SATA drives in it connected to the onboard ICH9-R controller (with BIOS
>> >> RAID
>> >> disabled & AHCI enabled).  I also have a 160GB SATA drive connected to
>> >> a PCI
>> >> SIIG SC-SA0012-S1 controller, the drive which will be used as the
>> >> system
>> >> drive.  My plan is to configure a RAID-Z2 pool on the 6x750 drives.
>> >>  The
>> >> system drive is just there for Solaris.  I'm also out of ports to use
>> >> on the
>> >> motherboard, hence why I'm using an add-in PCI SATA controller.
>> >>
>> >> My problem is that Solaris is not recognizing the system drive during
>> >> the
>> >> DVD install procedure.  It sees the 6x750GB onboard drives fine.  I
>> >> originally used a RocketRAID 1720 SATA controller, which uses its own
>> >> HighPoint chipset I believe, and it was a no-go.  I went and exchanged
>> >> that
>> >> controller for a SIIG SC-SA0012-S1 controller, which I thought used a
>> >> Silicon Image (SiI) chipset.  The install DVD isn't recognizing it
>> >> unfortunately, & now I'm not so sure that it uses a SiI chipset.  I
>> >> checked
>> >> the HCL, and it only lists a few cards that are reported to work under
>> >> SXCE.
>> >>
>> >> If anyone has any suggestions on either...
>> >> A) Using a different driver during the install procedure, or...
>> >> B) A different, cheap SATA controller
>> >>
>> >> I'd appreciate it very much.  Sorry for the rambling post, but I wanted
>> >> to
>> >> be detailed from the get-go.  Thanks for any input! :)
>> >>
>> >> PS. On a side note, I'm interested in playing around with SXCE
>> >> development.  It looks interesting :)
>> >>
>> >>
>> >> This message posted from opensolaris.org
>> >> ___
>> >> zfs-discuss mailing list
>> >> zfs-discuss@opensolaris.org
>> >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>> >
>> >
>> > I'm still a fan of the marvell based supermicro card.  I run two of them
>> > in
>> > my fileserver.  AOC-SAT2-MV8
>> >
>> > http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm
>> >
>>
>> I gave this question some treatment a few days ago. Yes, if you want
>> PCI-X, go with the Marvell. If you want PCIe SATA, then it's either a
>> SIIG-produced SiI3124 card or a lot of guessing. I think the real
>> winner is going to be the newer SAS/SATA mixed HBAs from LSI based on
>> the 1068 chipset, which Sun has been supporting well in newer
>> hardware.
>>
>>
>> http://jmlittle.blogspot.com/2008/06/recommended-disk-controllers-for-zfs.html
>
> **pci or pci-x.  Yes, you might see *SOME* loss in speed from a pci
> interface, but let's be honest, there aren't a whole lot of users on this
> list that have the infrastructure to use greater than 100MB/sec who are
> asking this sort of question.  A PCI bus should have no issues pushing that.
>
>
>>
>>
>> Equally important, don't mix SATA-I and SATA-II on that system
>> motherboard, or on one of those add-on cards.
>>
>> http://jmlittle.blogspot.com/2008/05/mixing-sata-dos-and-donts.html
>>
>
> I mix SATA-I and SATA-II and haven't had any issues to date.  Unless you
> have an official bug logged/linked, that's as good as an old wives' tale.

No bug to report, but it was one of the issues involved in losing my log
device a bit ago. ZFS engineers appear to be aware of it. Among other
things, it's why there is a known workaround to disable command
queueing (NCQ) on the Marvell card when SATA-I drives are attached to
it.
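
For reference, that workaround amounts to capping the SATA framework's
command queue depth in /etc/system and rebooting. A rough sketch (the
tunable name and value below are the commonly cited ones and should be
verified against your build):

* effectively disable NCQ by limiting the queue depth to one command
set sata:sata_max_queue_depth=0x1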


>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mike Mackovitch
On Fri, Jun 06, 2008 at 03:43:29PM -0700, eric kustarz wrote:
> 
> On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:
> 
> > On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> >>
>  clients do not.  Without per-filesystem mounts, 'df' on the client
>  will not report correct data though.
> >>>
> >>> I expect that mirror mounts will be coming Linux's way too.
> >>
> >> They should already have them:
> >> http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts
> >
> > Where does that leave those of us who need to deal with OSX  
> > clients?  Does apple
> > have any plans to get in on this?
> 
> They need to implement NFSv4 in general first :)

Technically, Mac OS X 10.5 "Leopard" has some basic NFSv4.0 support in it.
But just enough to make it look like it passes all the Connectathon tests.
Not enough to warrant use by anyone but the terminally curious (or masochistic).
This is mentioned briefly in the mount_nfs(8) man page.

It would be reasonable to expect that future MacOSX releases will include
increasing levels of functionality and that NFSv4 will eventually be made
the default NFS version.

> But you'd have to  
> ask them on their lists what the status of that is... i know i would  
> like it...

Or get lucky and happen to have one of their engineers catch the question
on this list and reply...  ;-)

--macko
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] partitioning a disk with online zfs

2008-06-06 Thread Justin Vassallo
Hello,

 

I have two disks with a partition mounted as swap and some space left
unallocated. I would like to repartition the disks to create a partition from
that unallocated space.

 

This should be safe given I've done it several times on disks with UFS, but
I'm not too sure with ZFS. Is there any risk of breaking ZFS?

 

justin



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Richard Elling
Mattias Pantzare wrote:
> 2008/6/6 Richard Elling <[EMAIL PROTECTED]>:
>   
>> Richard L. Hamilton wrote:
>> 
 A single /var/mail doesn't work well for 10,000 users
 either.  When you
 start getting into that scale of service
 provisioning, you might look at
 how the big boys do it... Apple, Verizon, Google,
 Amazon, etc.  You
 should also look at e-mail systems designed to scale
 to large numbers of
 users
 which implement limits without resorting to file
 system quotas.  Such
 e-mail systems actually tell users that their mailbox
 is too full rather
 than
 just failing to deliver mail.  So please, when we
 start having this
 conversation
 again, lets leave /var/mail out.

 
>>> I'm not recommending such a configuration; I quite agree that it is neither
>>> scalable nor robust.
>>>
>>>   
>> I was going to post some history of scaling mail, but I blogged it instead.
>> http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
>>  -- richard
>>
>> 
>
> The problem with that argument is that 10.000 users on one vxfs or UFS
> filesystem is no problem at all, be it /var/mail or home directories.
> You don't even need a fast server for that. 10.000 zfs file systems is
> a problem.
>
> So, if it makes you happier, substitute mail with home directories.
>   

If you feel strongly, please pile onto CR 6557894
http://bugs.opensolaris.org/view_bug.do?bug_id=6557894
If we continue to talk about it on the alias, we will just end up
finding ways to solve the business problem using available
technologies.

A single file system serving 10,000 home directories doesn't scale
either, unless the vast majority are unused -- in which case it is a
practical problem for much less than 10,000 home directories.
I think you will find that the people who scale out have a better
long-term strategy.

The limitations of UFS do become apparent as you try to scale
to the size permitted with ZFS.  For example, the largest UFS
file system supported is 16 TBytes, or 1/4 of a thumper.  So if you
are telling me that you are serving 10,000 home directories in
a 16 TByte UFS file system with quotas (1.6 GBytes/user?  I've
got 16 GBytes in my phone :-), then I will definitely buy you a
beer.  And aspirin.  I'll bring a calendar so we can measure the
fsck time when the log can't be replayed.  Actually, you'd
probably run out of inodes long before you filled it up.  I wonder
how long it would take to run quotacheck?  But I digress.  Let's
just agree that UFS won't scale well and the people who do
serve UFS as home directories for large populations tend to use
multiple file systems.

For ZFS, there are some features which conflict with the
notion of user quotas: compression, copies, and snapshots come
immediately to mind.  UFS (and perhaps VxFS?) do not have
these features, so accounting space to users is much simpler.
Indeed, if it was easy to add to ZFS, then CR 6557894
would have been closed long ago.  Surely we can describe the
business problems previously solved by user-quotas and then
proceed to solve them?  Mail is already solved.

 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mike Mackovitch
On Fri, Jun 06, 2008 at 06:27:01PM -0400, Brian Hechinger wrote:
> On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> > 
> > >> clients do not.  Without per-filesystem mounts, 'df' on the client
> > >> will not report correct data though.
> > >
> > > I expect that mirror mounts will be coming Linux's way too.
> > 
> > They should already have them:
> > http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts
> 
> Where does that leave those of us who need to deal with OSX clients?  Does 
> apple
> have any plans to get in on this?

Apple plans on supporting NFSv4... including "mirror mounts" (barring any
unforeseen, insurmountable hurdles).

HTH
--macko
Not speaking "officially" for Apple, but just as an engineer who works
on this stuff.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread eric kustarz

On Jun 6, 2008, at 3:27 PM, Brian Hechinger wrote:

> On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
>>
 clients do not.  Without per-filesystem mounts, 'df' on the client
 will not report correct data though.
>>>
>>> I expect that mirror mounts will be coming Linux's way too.
>>
>> They should already have them:
>> http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts
>
> Where does that leave those of us who need to deal with OSX  
> clients?  Does apple
> have any plans to get in on this?

They need to implement NFSv4 in general first :)  But you'd have to  
ask them on their lists what the status of that is... i know i would  
like it...

eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Brian Hechinger
On Fri, Jun 06, 2008 at 04:52:45PM -0500, Nicolas Williams wrote:
> 
> Mirror mounts take care of the NFS problem (with NFSv4).
> 
> NFSv3 automounters could be made more responsive to server-side changes
> in share lists, but hey, NFSv4 is the future.

So basically it's just a waiting game at this point?  I guess I can live
with that.

-brian
-- 
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Brian Hechinger
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> 
> >> clients do not.  Without per-filesystem mounts, 'df' on the client
> >> will not report correct data though.
> >
> > I expect that mirror mounts will be coming Linux's way too.
> 
> They should already have them:
> http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts

Where does that leave those of us who need to deal with OSX clients?  Does apple
have any plans to get in on this?

-brian
-- 
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 02:58:09PM -0700, eric kustarz wrote:
> >I expect that mirror mounts will be coming Linux's way too.
> 
> They should already have them:
> http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts

Even better.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread eric kustarz

On Jun 6, 2008, at 2:50 PM, Nicolas Williams wrote:

> On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
>> On Fri, 6 Jun 2008, Brian Hechinger wrote:
>>
>>> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:

 - as separate filesystems, they have to be separately NFS mounted
>>>
>>> I think this is the one that gets under my skin.  If there would  
>>> be a
>>> way to "merge" a filesystem into a parent filesystem for the  
>>> purposes
>>> of NFS, that would be simply amazing.  I want to have the fine- 
>>> grained
>>> control over my NFS server that multiple ZFS filesystems gives me,  
>>> but
>>> I don't want the client systems to have to know anything about it.
>>
>> Solaris 10 clients already do that.  The problem is that non-Solaris
>> clients do not.  Without per-filesystem mounts, 'df' on the client
>> will not report correct data though.
>
> I expect that mirror mounts will be coming Linux's way too.

They should already have them:
http://blogs.sun.com/erickustarz/en_US/entry/linux_support_for_mirror_mounts

eric

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 08:51:13PM +0200, Mattias Pantzare wrote:
> 2008/6/6 Richard Elling <[EMAIL PROTECTED]>:
> > I was going to post some history of scaling mail, but I blogged it instead.
> > http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
> 
> The problem with that argument is that 10.000 users on one vxfs or UFS
> filesystem is no problem at all, be it /var/mail or home directories.
> You don't even need a fast server for that. 10.000 zfs file systems is
> a problem.

10k one-file filesystems, all with the same parent dataset, would be
silly: you'd be spreading the overhead of a dataset over a single file,
listing the parent would require listing the child datasets, and ...

10k home directories is entirely different.

> So, if it makes you happier, substitute mail with home directories.

I don't think comparing /var/mail and home directories is comparing
apples to apples.

Also, don't use /var/mail.  Use IMAP.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 07:37:18AM -0400, Brian Hechinger wrote:
> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
> > 
> > - as separate filesystems, they have to be separately NFS mounted
> 
> I think this is the one that gets under my skin.  If there would be a
> way to "merge" a filesystem into a parent filesystem for the purposes
> of NFS, that would be simply amazing.  I want to have the fine-grained
> control over my NFS server that multiple ZFS filesystems gives me, but
> I don't want the client systems to have to know anything about it.
> 
> Maybe a pipe dream?  ;)

Mirror mounts take care of the NFS problem (with NFSv4).

NFSv3 automounters could be made more responsive to server-side changes
in share lists, but hey, NFSv4 is the future.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Nicolas Williams
On Fri, Jun 06, 2008 at 10:42:45AM -0500, Bob Friesenhahn wrote:
> On Fri, 6 Jun 2008, Brian Hechinger wrote:
> 
> > On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
> >>
> >> - as separate filesystems, they have to be separately NFS mounted
> >
> > I think this is the one that gets under my skin.  If there would be a
> > way to "merge" a filesystem into a parent filesystem for the purposes
> > of NFS, that would be simply amazing.  I want to have the fine-grained
> > control over my NFS server that multiple ZFS filesystems gives me, but
> > I don't want the client systems to have to know anything about it.
> 
> Solaris 10 clients already do that.  The problem is that non-Solaris 
> clients do not.  Without per-filesystem mounts, 'df' on the client 
> will not report correct data though.

I expect that mirror mounts will be coming Linux's way too.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Will Murnane
On Fri, Jun 6, 2008 at 16:23, Tom Buskey <[EMAIL PROTECTED]> wrote:
> I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II drives 
> in a RAIDZ.
...
> I get 550 MB/s
I doubt this number a lot.  That's almost 200 MB/s per disk (550/(N-1) ≈ 183
MB/s), and drives I've seen are usually more in the neighborhood of 80
MB/s.  How did you come up with this number?  What benchmark did you
run?  While it's executing, what does "zpool iostat mypool 10" show?
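
A minimal way to produce the numbers being asked about (a sketch only; the
pool name "mypool" is hypothetical, and the test file should be much larger
than RAM so the ARC doesn't inflate the read result):

# dd if=/dev/zero of=/mypool/testfile bs=1024k count=8192   (write ~8 GB)
# dd if=/mypool/testfile of=/dev/null bs=1024k              (read it back)
# zpool iostat mypool 10     (run in another terminal while dd executes)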

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Peter Tribble
On Thu, Jun 5, 2008 at 2:11 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
>
> Quotas are great when, for administrative purposes, you want a large
> number of users on a single filesystem, but to restrict the amount of
> space for each.  The primary place I can think of this being useful is
> /var/mail

Not really. Controlling mail usage needs to be done by the mail app -
simply failing a write is a terrible way to implement policy.

> The ZFS filesystem approach is actually better than quotas for User and
> Shared directories, since the purpose is to limit the amount of space
> taken up *under that directory tree*.   Quotas do a miserable job with
> Shared directories, and get damned confusing if there is the ability of
> anyone else to write to your User directory (or vice versa).

Erm, that's backwards. You want quotas to control shared directories.
In particular, you can't use a ZFS filesystem to control usage in a
single directory (essentially by definition). What we use quotas for there
is to make sure a bad user (or rogue application) is controlled and can't
fill up a filesystem, thereby impacting other users.

> Remember, that quotas aren't free, and while we have seen some
> performance problems with the '10,000 ZFS filesystems' approach, there
> are performance issues to be had when trying to keep track of 10,000
> user quotas on a file system, as well. I can't say they are equal, but
> don't think that quotas are just there for the implementing. There's a
> penalty for them, too.

But 5-10,000 users with quotas worked just fine on a SuperSPARC-based
machine in the last millennium. Even on a decent modern machine that
number of filesystems could best be described as painful. The reality is
that there are something like 3 orders of magnitude difference in cost
between traditional UFS quotas and using ZFS filesystems to try to emulate
the same thing.

(Although I have to say that, in a previous job, scrapping user quotas entirely
not only resulted in happier users and much less work for the helpdesk, but
also - paradoxically - largely eliminated systems running out of space.)

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Tim
On Fri, Jun 6, 2008 at 3:23 PM, Tom Buskey <[EMAIL PROTECTED]> wrote:

> >**pci or pci-x.  Yes, you might see
> > *SOME* loss in speed from a pci interface, but
> > let's be honest, there aren't a whole lot of
> > users on this list that have the infrastructure to
> > use greater than 100MB/sec who are asking this sort
> > of question.  A PCI bus should have no issues
> > pushing that.
>
> I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II
> drives in a RAIDZ.
> I have the $20 Syba SATA I PCI card with 4 120GB drives in another RAIDZ.
>
> I get 550 MB/s on the 1st and 82 MB/s on the 2nd in the local system.
> From another, faster system, over Gigabit NFS I get 69MB/s and 35MB/s.
>
> I'm taking a big hit from the SATA I PCI card vs the motherboard SATA II it
> seems.
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>


That has FAR, FAR more to do with the drives and crappy card than the
interface.  I have no issues maxing out a gigE link with a marvell card on a
PCI bus.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS problems with USB Storage devices

2008-06-06 Thread Paulo Soeiro
Hi Ricardo,

I'll try that.

Thanks (Obrigado)
Paulo Soeiro



On 6/5/08, Ricardo M. Correia <[EMAIL PROTECTED]> wrote:
>
> On Ter, 2008-06-03 at 23:33 +0100, Paulo Soeiro wrote:
>
> 6)Remove and attached the usb sticks:
>
> zpool status
> pool: myPool
> state: UNAVAIL
> status: One or more devices could not be used because the label is missing
> or invalid. There are insufficient replicas for the pool to continue
> functioning.
> action: Destroy and re-create the pool from a backup source.
> see: http://www.sun.com/msg/ZFS-8000-5E
> scrub: none requested
> config:
> NAME           STATE     READ WRITE CKSUM
> myPool         UNAVAIL      0     0     0  insufficient replicas
>   mirror       UNAVAIL      0     0     0  insufficient replicas
>     c6t0d0p0   FAULTED      0     0     0  corrupted data
>     c7t0d0p0   FAULTED      0     0     0  corrupted data
>
>
> This could be a problem of USB devices getting renumbered (or something to
> that effect).
> Try doing "zpool export myPool" and "zpool import myPool" at this point, it
> should work fine and you should be able to get your data back.
>
> Cheers,
> Ricardo
>   --
>*Ricardo Manuel Correia*
> Lustre Engineering
>
> *Sun Microsystems, Inc.*
> Portugal
> Phone +351.214134023 / x58723
> Mobile +351.912590825
> Email [EMAIL PROTECTED]
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Tom Buskey
>**pci or pci-x.  Yes, you might see
> *SOME* loss in speed from a pci interface, but
> let's be honest, there aren't a whole lot of
> users on this list that have the infrastructure to
> use greater than 100MB/sec who are asking this sort
> of question.  A PCI bus should have no issues
> pushing that.

I have an AMD 939 MB w/ Nvidia on the motherboard and 4 500GB SATA II drives in 
a RAIDZ.
I have the $20 Syba SATA I PCI card with 4 120GB drives in another RAIDZ.

I get 550 MB/s on the 1st and 82 MB/s on the 2nd in the local system.
From another, faster system, over Gigabit NFS I get 69MB/s and 35MB/s.

I'm taking a big hit from the SATA I PCI card vs the motherboard SATA II, it
seems.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Mattias Pantzare
2008/6/6 Richard Elling <[EMAIL PROTECTED]>:
> Richard L. Hamilton wrote:
>>> A single /var/mail doesn't work well for 10,000 users
>>> either.  When you
>>> start getting into that scale of service
>>> provisioning, you might look at
>>> how the big boys do it... Apple, Verizon, Google,
>>> Amazon, etc.  You
>>> should also look at e-mail systems designed to scale
>>> to large numbers of
>>> users
>>> which implement limits without resorting to file
>>> system quotas.  Such
>>> e-mail systems actually tell users that their mailbox
>>> is too full rather
>>> than
>>> just failing to deliver mail.  So please, when we
>>> start having this
>>> conversation
>>> again, lets leave /var/mail out.
>>>
>>
>> I'm not recommending such a configuration; I quite agree that it is neither
>> scalable nor robust.
>>
>
> I was going to post some history of scaling mail, but I blogged it instead.
> http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
>  -- richard
>

The problem with that argument is that 10.000 users on one vxfs or UFS
filesystem is no problem at all, be it /var/mail or home directories.
You don't even need a fast server for that. 10.000 zfs file systems is
a problem.

So, if it makes you happier, substitute mail with home directories.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Richard Elling
Richard L. Hamilton wrote:
>> A single /var/mail doesn't work well for 10,000 users
>> either.  When you
>> start getting into that scale of service
>> provisioning, you might look at
>> how the big boys do it... Apple, Verizon, Google,
>> Amazon, etc.  You
>> should also look at e-mail systems designed to scale
>> to large numbers of 
>> users
>> which implement limits without resorting to file
>> system quotas.  Such
>> e-mail systems actually tell users that their mailbox
>> is too full rather 
>> than
>> just failing to deliver mail.  So please, when we
>> start having this 
>> conversation
>> again, lets leave /var/mail out.
>> 
>
> I'm not recommending such a configuration; I quite agree that it is neither
> scalable nor robust.
>   

I was going to post some history of scaling mail, but I blogged it instead.
http://blogs.sun.com/relling/entry/on_var_mail_and_quotas
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-06 Thread Brandon High
On Fri, Jun 6, 2008 at 9:29 AM, John Kunze <[EMAIL PROTECTED]> wrote:
> My organization is considering an RFP for MAID storage and we're
> wondering about potential conflicts between MAID and ZFS.

I had to look up MAID, first link Google gave me was
http://www.closetmaid.com/ which doesn't seem right ...

MAID seems to be a form of HSM, using powered down disk rather than
tape for the offline data. I've had poor experience with HSM solutions
in the past (only using 1/4 the capacity of tapes, not repacking onto
fewer tapes, losing indexes, etc.) but that was several years ago and
things may have improved. (I think it was Legato's product running
under Linux, but I'm not certain.)

I can't think of any reason that something like this wouldn't work
with ZFS, though the ACLs may not get saved.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Quotas Locking down a system

2008-06-06 Thread Walter Faleiro
Folks,
I am running into an issue with a quota enabled ZFS system. I tried to check
out the ZFS properties but could not figure out a workaround.

I have a file system /data/project/software which has a 250G quota set. There
are no snapshots enabled for this system. When the quota is reached,
users cannot delete any files; they get a "disk quota exceeded" error. At that
point I have to log in as root on the ZFS exporting server, increase the
quota, delete the files, and then revert the quota.
As a workaround, I have a script which checks the usage on the file system
every few minutes and deletes dummy files that I created if the usage is
100%, or creates dummy files if the usage is not 100%. But I assume there
must be a better way of handling this via ZFS.
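
For reference, the manual sequence described above boils down to something
like the following (a sketch; the 260G figure is arbitrary and the dataset
name is assumed to match the mountpoint):

# zfs get quota data/project/software          (currently 250G)
# zfs set quota=260G data/project/software     (temporarily raise it)
# rm /data/project/software/somefile           (the delete can now commit)
# zfs set quota=250G data/project/software     (restore the original quota)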

Thanks,
--Walter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS conflict with MAID?

2008-06-06 Thread Mark A. Carlson
I think most MAID is sold as a (misguided IMHO) replacement for
Tape, not as a Tier 1 kind of storage. YMMV.

-- mark

John Kunze wrote:
> My organization is considering an RFP for MAID storage and we're
> wondering about potential conflicts between MAID and ZFS.
>
> We want MAID's power management benefits but are concerned
> that what we understand to be ZFS's use of dynamic striping across
> devices with filesystem metadata replication and cache syncing will
> tend to keep disks spinning that the MAID is trying to spin down.
> Of course, we like ZFS's large namespace and dynamic memory
> pool resizing ability.
>
> Is it possible to configure ZFS to maximize the benefits of MAID?
>
> -John
>
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> John A. Kunze  [EMAIL PROTECTED]
> California Digital LibraryWork: +1-510-987-9231
> 415 20th St, #406 http://dot.ucop.edu/home/jak/
> Oakland, CA  94612 USA University of California
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>  
>  
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS conflict with MAID?

2008-06-06 Thread John Kunze
My organization is considering an RFP for MAID storage and we're
wondering about potential conflicts between MAID and ZFS.

We want MAID's power management benefits but are concerned
that what we understand to be ZFS's use of dynamic striping across
devices with filesystem metadata replication and cache syncing will
tend to keep disks spinning that the MAID is trying to spin down.
Of course, we like ZFS's large namespace and dynamic memory
pool resizing ability.

Is it possible to configure ZFS to maximize the benefits of MAID?

-John

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
John A. Kunze  [EMAIL PROTECTED]
California Digital LibraryWork: +1-510-987-9231
415 20th St, #406 http://dot.ucop.edu/home/jak/
Oakland, CA  94612 USA University of California
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Bob Friesenhahn
On Fri, 6 Jun 2008, Brian Hechinger wrote:

> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
>>
>> - as separate filesystems, they have to be separately NFS mounted
>
> I think this is the one that gets under my skin.  If there would be a
> way to "merge" a filesystem into a parent filesystem for the purposes
> of NFS, that would be simply amazing.  I want to have the fine-grained
> control over my NFS server that multiple ZFS filesystems gives me, but
> I don't want the client systems to have to know anything about it.

Solaris 10 clients already do that.  The problem is that non-Solaris 
clients do not.  Without per-filesystem mounts, 'df' on the client 
will not report correct data though.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Aubrey Li
On Fri, Jun 6, 2008 at 10:41 PM, Brandon High <[EMAIL PROTECTED]> wrote:
> On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li <[EMAIL PROTECTED]> wrote:
>> Here, "zfs send tank/root > /mnt/root" doesn't work, "zfs send" can't accept
>> a directory as an output. So I use zfs send and zfs receive:
>
> Really? zfs send just gives you a byte stream, and the shell redirects
> it to the file "root" in the fs at /mnt. Provided your shell has large
> file support, it should work just fine.
>
Yeah, it works of course, once I realized "root" is a file, not a directory.
But that's not what I'm struggling with now.

zfs receive will mount the filesystems after getting them from zfs send,
but those filesystems are already mounted over the existing ones,
so zfs receive aborts.

Any thoughts?
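
One hedged way around that clash (a sketch, assuming the receiving pool is
the USB pool "tank" and it can be exported first, and with a hypothetical
snapshot name) is to import the pool under an alternate root, so the
received filesystems mount beneath /a instead of on top of the live ones:

# zpool export tank
# zpool import -R /a tank
# zfs send -R rpool@backup | zfs receive -dF tank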

Thanks,
-Aubrey
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs/nfs issue editing existing files

2008-06-06 Thread Andy Lubel
That was it!

hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3

It is too bad our silly hardware only allows us to go to 11.23.
That's OK though; in a couple of months we will be replacing this server
with new X4600s.

Thanks for the help,

-Andy


On Jun 5, 2008, at 6:19 PM, Robert Thurlow wrote:

> Andy Lubel wrote:
>
>> I've got a real doozie..   We recently implemented a b89 as zfs/ 
>> nfs/ cifs server.  The NFS client is HP-UX (11.23).
>> What's happening is when our dba edits a file on the nfs mount  
>> with  vi, it will not save.
>> I removed vi from the mix by doing 'touch /nfs/file1' then 'echo  
>> abc   > /nfs/file1' and it just sat there while the nfs servers cpu  
>> went up  to 50% (one full core).
>
> Hi Andy,
>
> This sounds familiar: you may be hitting something I diagnosed
> last year.  Run snoop and see if it loops like this:
>
> 10920   0.00013 141.240.193.235 -> 141.240.193.27 NFS C GETATTR3  
> FH=6614
> 10921   0.7 141.240.193.27 -> 141.240.193.235 NFS R GETATTR3 OK
> 10922   0.00017 141.240.193.235 -> 141.240.193.27 NFS C SETATTR3  
> FH=6614
> 10923   0.7 141.240.193.27 -> 141.240.193.235 NFS R SETATTR3  
> Update synch mismatch
> 10924   0.00017 141.240.193.235 -> 141.240.193.27 NFS C GETATTR3  
> FH=6614
> 10925   0.00023 141.240.193.27 -> 141.240.193.235 NFS R GETATTR3 OK
> 10926   0.00026 141.240.193.235 -> 141.240.193.27 NFS C SETATTR3  
> FH=6614
> 10927   0.9 141.240.193.27 -> 141.240.193.235 NFS R SETATTR3  
> Update synch mismatch
>
> If you see this, you've hit what we filed as Sun bugid 6538387,
> "HP-UX automount NFS client hangs for ZFS filesystems".  It's an
> HP-UX bug, fixed in HP-UX 11.31.  The synopsis is that HP-UX gets
> bitten by the nanosecond resolution on ZFS.  Part of the CREATE
> handshake is for the server to send the create time as a 'guard'
> against almost-simultaneous creates - the client has to send it
> back in the SETATTR to complete the file creation.  HP-UX has only
> microsecond resolution in their VFS, and so the 'guard' value is
> not sent accurately and the server rejects it, lather rinse and
> repeat.  The spec, RFC 1813, talks about this in section 3.3.2.
> You can use NFSv2 in the short term until you get that update.
>
> If you see something different, by all means send us a snoop.
>
> Rob T

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Brandon High
On Thu, Jun 5, 2008 at 11:37 PM, Albert Lee
<[EMAIL PROTECTED]> wrote:
> Raw disk images are, uh, nice and all, but I don't think that was what
> Aubrey had in mind when asking zfs-discuss about a backup solution. This
> is 2008, not 1960.

But retro is in!

The point that I didn't really make is that Ghost and Drive Snapshot
can create images of known filesystems (NTFS, FAT, ext2/3, reiserfs)
that aren't raw images. zfs send is probably closest to that, except
both of the imaging tools allow you to mount images and browse them.

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Brandon High
On Fri, Jun 6, 2008 at 12:23 AM, Aubrey Li <[EMAIL PROTECTED]> wrote:
> Here, "zfs send tank/root > /mnt/root" doesn't work, "zfs send" can't accept
> a directory as an output. So I use zfs send and zfs receive:

Really? zfs send just gives you a byte stream, and the shell redirects
it to the file "root" in the fs at /mnt. Provided your shell has large
file support, it should work just fine.
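
Concretely, a sketch with hypothetical snapshot and file names (any target
filesystem with enough space and large-file support will do):

# zfs snapshot rpool/ROOT@backup
# zfs send rpool/ROOT@backup > /mnt/root.zfs     (an ordinary file)
# zfs receive tank/rootcopy < /mnt/root.zfs      (later, to restore a copy)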

-B

-- 
Brandon High [EMAIL PROTECTED]
"The good is the enemy of the best." - Nietzsche
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Darren J Moffat
Brian Hechinger wrote:
> On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
>> - as separate filesystems, they have to be separately NFS mounted
> 
> I think this is the one that gets under my skin.  If there would be a
> way to "merge" a filesystem into a parent filesystem for the purposes
> of NFS, that would be simply amazing.  I want to have the fine-grained
> control over my NFS server that multiple ZFS filesystems gives me, but
> I don't want the client systems to have to know anything about it.
> 
> Maybe a pipe dream?  ;)

As has been pointed out before when this topic came up, this is really an
NFS client or client automounter problem, not an NFS server or ZFS problem.

Current Solaris/OpenSolaris NFS clients don't have any issues with the 
number of NFS filesystems or even traversing the server side mountpoints.

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Richard L. Hamilton
[...]
> > That's not to say that there might not be other
> problems with scaling to
> > thousands of filesystems.  But you're certainly not
> the first one to test it.
> >
> > For cases where a single filesystem must contain
> files owned by
> > multiple users (/var/mail being one example), old
> fashioned
> > UFS quotas still solve the problem where the
> alternative approach
> > with ZFS doesn't.
> >   
> 
> A single /var/mail doesn't work well for 10,000 users
> either.  When you
> start getting into that scale of service
> provisioning, you might look at
> how the big boys do it... Apple, Verizon, Google,
> Amazon, etc.  You
> should also look at e-mail systems designed to scale
> to large numbers of 
> users
> which implement limits without resorting to file
> system quotas.  Such
> e-mail systems actually tell users that their mailbox
> is too full rather 
> than
> just failing to deliver mail.  So please, when we
> start having this 
> conversation
> again, lets leave /var/mail out.

I'm not recommending such a configuration; I quite agree that it is neither
scalable nor robust.

Its only merit is that it's an obvious example of where one would have
potentially large files owned by many users necessarily on one filesystem,
inasmuch as they are in one common directory.  But there must be
other examples where the UFS quota model is a better fit than the
ZFS quota model with potentially one filesystem per user.

In terms of the limitations they can provide, zfs filesystem quotas remind me
of DG/UX control point directories (presumably a relic of AOS/VS) - like regular
directories except they could have a quota bound to them restricting the sum of
the space of the subtree rooted there (the native filesystem on DG/UX didn't
have UID-based quotas).

Given restricted chown (non-root can't give files away), per-UID*filesystem
quotas IMO make just as much sense as per-filesystem quotas themselves
do on zfs, save only that per-UID*filesystem quotas make the filesystem less
lightweight.  For zfs, perhaps an answer might be if it were possible to
have per-zpool uid/gid/projid/zoneid/sid quotas too?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Per-user home filesystems and OS-X Leopard anomaly

2008-06-06 Thread Richard L. Hamilton
> I encountered an issue that people using OS-X systems
> as NFS clients 
> need to be aware of.  While not strictly a ZFS issue,
> it may be 
> encounted most often by ZFS users since ZFS makes it
> easy to support 
> and export per-user filesystems.  The problem I
> encountered was when 
> using ZFS to create exported per-user filesystems and
> the OS-X 
> automounter to perform the necessary mount magic.
> 
> OS-X creates hidden ".DS_Store" directories in every
> directory which 
> is accessed (http://en.wikipedia.org/wiki/.DS_Store).
> 
> OS-X decided that it wanted to create the path
> "/home/.DS_Store" and 
> it would not take `no' for an answer.  First it would
> try to create 
> "/home/.DS_Store" and then it would try an alternate
> name.  Since the 
> automounter was used, there would be an automount
> request for 
> "/home/.DS_Store", which does not exist on the server
> so the mount 
> request would fail.  Since OS-X does not take 'no'
> for an answer, 
> there would be subsequent thousands of back to back
> mount requests. 
> The end result was that 'mountd' was one of the top
> three resource 
> consumers on my system, there would be bursts of high
> network traffic 
> (1500 packets/second), and the affected OS-X system
> would operate 
> more strangely than normal.
> 
> The simple solution was to simply create a
> "/home/.DS_Store" directory 
> on the server so that the mount request would
> succeed.

Too bad it appears to be non-obvious how to do loopback mounts
(a mount of one local directory onto another, without having to be an
NFS server) on Darwin/MacOS X; then you could mount the
/home/.DS_Store locally from a directory elsewhere (e.g.
/export/home/.DS_Store) on each machine, rather than bothering
the server with it.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Filesystem for each home dir - 10,000 users?

2008-06-06 Thread Brian Hechinger
On Thu, Jun 05, 2008 at 12:02:42PM -0400, Chris Siebenmann wrote:
> 
> - as separate filesystems, they have to be separately NFS mounted

I think this is the one that gets under my skin.  If there would be a
way to "merge" a filesystem into a parent filesystem for the purposes
of NFS, that would be simply amazing.  I want to have the fine-grained
control over my NFS server that multiple ZFS filesystems gives me, but
I don't want the client systems to have to know anything about it.

Maybe a pipe dream?  ;)

-brian
-- 
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Get your SXCE on ZFS here!

2008-06-06 Thread Brian Hechinger
On Thu, Jun 05, 2008 at 10:45:09PM -0700, Vincent Fox wrote:
> Way to drag my post into the mud there.
> 
> Can we just move on?

Absolutely not!  Just be glad you never had to create a swap file on an
NFS mount just to be able to build software on your machine!  Yes, I really
did have to do that.  It was 12 years ago, so it was all 10Mbit too.

Painful?  Yes.

Could I at least get my job done?  Yes.

:-D

-brian
-- 
"Coding in C is like sending a 3 year old to do groceries. You gotta
tell them exactly what you want or you'll end up with a cupboard full of
pop tarts and pancake mix." -- IRC User (http://www.bash.org/?841435)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Marc Bevand
Richard L. Hamilton  smart.net> writes:
> But I suspect to some extent you get what you pay for; the throughput on the
> higher-end boards may well be a good bit higher.

Not really. Nowadays, even the cheapest controllers, processors & mobos are 
EASILY capable of handling the platter-speed throughput of up to 8-10 disks.

http://opensolaris.org/jive/thread.jspa?threadID=54481

-marc

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Richard L. Hamilton
If I read the man page right, you might only have to keep a minimum of two
on each side (maybe even just one on the receiving side), although I might be
tempted to keep an extra just in case; say near current, 24 hours old, and a
week old (space permitting for the larger interval of the last one).  Adjust
frequency, spacing, and number according to available space, keeping in
mind that the more COW-ing between snapshots (the longer interval if
activity is more or less constant), the more space required.  (assuming
my head is more or less on straight right now...)

Of course if you get messed up, you can always resync with a non-incremental
transfer, so if you could live with that occasionally, there may be no need for
more than two.

Your script would certainly have to be careful to check for successful send 
_and_
receive before removing old snapshots on either side.

ssh remotehost exit 1

seems to have a return code of 1 (cool).  rsh does _not_ have that desirable
property.  But that still leaves the problem of how to check the exit status
of the commands on both ends of a pipeline; maybe someone has solved
that?

Anyway, correctly verifying successful completion of the commands on both ends
might be a bit tricky, but is critical if you don't want failures or the need 
for
frequent non-incremental transfers.
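
One possible answer, sketched with hypothetical dataset and host names
(bash-specific, since it relies on PIPESTATUS; ssh's exit status stands in
for the remote receive, as noted above):

#!/bin/bash
zfs send -i tank/fs@prev tank/fs@curr | ssh remotehost zfs receive backup/fs
status=("${PIPESTATUS[@]}")
if [ "${status[0]}" -ne 0 ] || [ "${status[1]}" -ne 0 ]; then
    echo "send/receive failed (${status[0]}/${status[1]}); keeping old snapshots" >&2
    exit 1
fi
# only now is it safe to destroy the older snapshots on either side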
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Marc Bevand
Buy a 2-port SATA II PCI-E x1 SiI3132 controller ($20). The Solaris driver is 
very stable.

Or, a solution I would personally prefer, don't use a 7th disk.  Partition 
each of your 6 disks with a small ~7-GB slice at the beginning and the rest of 
the disk for ZFS. Install the OS in one of the small slices. This will only 
reduce your usable ZFS storage space by <1% (and you may have to manually 
enable write cache because ZFS won't be given entire disks, only slices) but: 
(1) you save a disk and a controller and money and related hassles (the reason 
why you post here :P), (2) you can mirror your OS on the other small slices 
using SVM or a ZFS mirror to improve reliability, and (3) this setup allows 
you to easily experiment with parallel installs of different opensolaris 
versions in the other slices.
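
On the write-cache point: with slices instead of whole disks, the disk write
cache can usually be enabled by hand with format(1M) in expert mode. An
interactive sketch (the device name is hypothetical and the menus may vary
by driver):

# format -e -d c1t0d0
format> cache
cache> write_cache
write_cache> enable
write_cache> quit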

-marc

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Peter Karlsson
Or you could use Tim Foster's ZFS snapshot service 
http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_now_with

/peter

On Jun 6, 2008, at 14:07, Tobias Exner wrote:

> Hi,
>
> I'm thinking about the following situation and I know there are some
> things I have to understand:
>
> I want to use two SUN-Servers with the same amount of storage capacity
> on both of them and I want to replicate the filesystem ( zfs )
> incrementally two times a day from the first to the second one.
>
> I know that the zfs send/receive commands will do the job, but I don't
> understand exactly how zfs will know what have to be transferred..  
> Is it
> the difference to the last snapshot?
>
> If yes, does that mean that I have to keep all snapshots to achieve an
> "incremental-forever" configuration?  --> That's my goal!
>
>
>
>
> regards,
>
> Tobias Exner
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs incremental-forever

2008-06-06 Thread Peter Karlsson
Hi Tobias,

I did this for a large lab we had last month; I have it set up
something like this.

zfs snapshot  [EMAIL PROTECTED]
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh server2 zfs recv  rep_pool
ssh zfs destroy [EMAIL PROTECTED]
ssh zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]
zfs destroy [EMAIL PROTECTED]
zfs rename [EMAIL PROTECTED] [EMAIL PROTECTED]

I was using this for a setup with one master system and 100 systems
that I replicated the zpool to, and I had scripts that were used when a
student logged out to do a zfs rollback [EMAIL PROTECTED] to reset it to a
known state, ready for a new student. I don't have the actual
script I used here right now, so I might have missed some flags, but
you can see the basic flow of it.
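
With hypothetical names filled in for the redacted tokens (local pool "tank",
remote pool "rep_pool", snapshots "prev" and "now", target host "server2"),
the rotation sketched above reads roughly like this:

zfs snapshot tank@now
zfs send -i tank@prev tank@now | ssh server2 zfs recv rep_pool
ssh server2 zfs destroy rep_pool@prev
ssh server2 zfs rename rep_pool@now rep_pool@prev
zfs destroy tank@prev
zfs rename tank@now tank@prev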

/peter

On Jun 6, 2008, at 14:07, Tobias Exner wrote:

> Hi,
>
> I'm thinking about the following situation and I know there are some
> things I have to understand:
>
> I want to use two SUN-Servers with the same amount of storage capacity
> on both of them and I want to replicate the filesystem ( zfs )
> incrementally two times a day from the first to the second one.
>
> I know that the zfs send/receive commands will do the job, but I don't
> understand exactly how zfs will know what have to be transferred..  
> Is it
> the difference to the last snapshot?
>
> If yes, does that mean that I have to keep all snapshots to achieve an
> "incremental-forever" configuration?  --> That's my goal!
>
>
>
>
> regards,
>
> Tobias Exner
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Can't rm file when "No space left on device"...

2008-06-06 Thread Richard L. Hamilton
> On Thu, Jun 05, 2008 at 09:13:24PM -0600, Keith
> Bierman wrote:
> > On Jun 5, 2008, at 8:58 PM   6/5/, Brad Diggs
> wrote:
> > > Hi Keith,
> > >
> > > Sure you can truncate some files but that
> effectively corrupts
> > > the files in our case and would cause more harm
> than good. The
> > > only files in our volume are data files.
> > 
> > So an rm is ok, but a truncation is not?
> > 
> > Seems odd to me, but if that's your constraint so
> be it.
> 
> Neither will help since before the space can be freed
> a transaction must
> be written, which in turn requires free space.
> 
> (So you say "let ZFS save some just-in-case-space for
> this," but, how
> much is enough?)

If you make it a parameter, that's the admin's problem.  Although
since each rm of a file also present in a snapshot just increases the
divergence, only an rm of a file _not_ present in a snapshot would
actually recover space, right?  So in some circumstances, even if it's
the admin's problem, there might be no amount that's enough to
do what one wants to do without removing a snapshot.  Specifically,
take a snapshot of a filesystem that's very nearly full, and then use
dd or whatever to create a single new file that fills up the filesystem.
At that point, only removing that single new file will help, and even that's
not possible without a just-in-case reserve large enough to handle the
worst-case metadata (including system attributes, if any) update + transaction
log + any other fudge I forgot, for at least one file's worth.

Maybe that's a simplistic view of the scenario, I dunno...
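
One commonly suggested way to keep such a just-in-case reserve today, without
any new parameter (a sketch with a hypothetical pool name): set aside a small
filesystem whose reservation can be released when the pool fills up.

# zfs create -o reservation=1G tank/slop
(later, when the pool is full and nothing can be removed)
# zfs set reservation=none tank/slop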
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] system backup and recovery

2008-06-06 Thread Aubrey Li
Hi Erik,

Thanks for your instruction, but let me dig into details.

On Thu, Jun 5, 2008 at 10:04 PM, Erik Trimble <[EMAIL PROTECTED]> wrote:
>
> Thus, you could do this:
>
> (1) Install system A

No problem, :-)

> (2) hook USB drive to A, and mount it at /mnt
I created a zfs pool, and mount it at /tank, now my system looks like
the following:
# zpool list
NAMESIZE   USED  AVAILCAP  HEALTH  ALTROOT
rpool   147G  14.0G   133G 9%  ONLINE  -
tank149G94K   149G 0%  ONLINE  -
# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
rpool   14.0G   131G  56.5K  /rpool
[EMAIL PROTECTED]   17.5K  -55K  -
rpool/ROOT  3.45G   131G18K  /rpool/ROOT
rpool/[EMAIL PROTECTED]  0  -18K  -
rpool/ROOT/opensolaris  3.45G   131G  2.54G  legacy
rpool/ROOT/[EMAIL PROTECTED]  63.2M  -  2.22G  -
rpool/ROOT/opensolaris/opt   862M   131G   862M  /opt
rpool/ROOT/opensolaris/[EMAIL PROTECTED]72K  -  3.60M  -
rpool/export10.5G   131G19K  /export
rpool/[EMAIL PROTECTED]  15K  -19K  -
rpool/export/home   10.5G   131G  10.5G  /export/home
rpool/export/[EMAIL PROTECTED] 19K  -21K  -
tank89.5K   147G 1K  /tank

> (3) use 'zfs send tank/root > /mnt/root'  to save off the root ZFS
> filesystem to the USB drive

"zfs send" always needs a snapshot, So I made a snapshot of rpool.

#zfs snapshot -r [EMAIL PROTECTED]

Here, "zfs send tank/root > /mnt/root" doesn't work, "zfs send" can't accept
a directory as an output. So I use zfs send and zfs receive:

# zfs send -R [EMAIL PROTECTED] | zfs receive -dF tank
cannot mount '/opt': directory is not empty

Is there anything I missed or I did wrong?

Thanks,
-Aubrey
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SATA controller suggestion

2008-06-06 Thread Richard L. Hamilton
I don't presently have any working x86 hardware, nor do I routinely work with
x86 hardware configurations.

But it's not hard to find previous discussion on the subject:
http://www.opensolaris.org/jive/thread.jspa?messageID=96790
for example...

Also, remember that SAS controllers can usually also talk to SATA drives;
they're usually more expensive of course, but sometimes you can find a deal.
I have a LSI SAS 3800x, and I paid a heck of a lot less than list for it (eBay),
I'm guessing because someone bought the bulk package and sold off whatever
they didn't need (new board, sealed, but no docs).  That was a while ago, and
being around US $100, it might still not have been what you'd call cheap.
If you want < $50, you might have better luck looking at the earlier discussion.
But I suspect to some extent you get what you pay for; the throughput on the
higher-end boards may well be a good bit higher, although for one disk
(or even two, to mirror the system disk), it might not matter so much.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs incremental-forever

2008-06-06 Thread Tobias Exner
Hi,

I'm thinking about the following situation and I know there are some 
things I have to understand:

I want to use two Sun servers with the same amount of storage capacity
on both of them, and I want to replicate the filesystem (ZFS)
incrementally twice a day from the first to the second one.

I know that the zfs send/receive commands will do the job, but I don't
understand exactly how zfs will know what has to be transferred. Is it
the difference from the last snapshot?

If yes, does that mean that I have to keep all snapshots to achieve an 
"incremental-forever" configuration?  --> That's my goal!




regards,

Tobias Exner


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss