Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-11-27 Thread Nils Goroll
Hi Eric and all,

> Can anyone point me in the right direction here?  Much appreciated!

I have worked on a similar issue this week.

Though I have not worked through all the information you have provided, could 
you please try the settings and source code changes I posted here:

http://www.mail-archive.com/[EMAIL PROTECTED]/msg97466.html

Cheers, Nils
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-27 Thread Casper . Dik

>On Wed, Nov 26, 2008 at 04:30:59PM +0100, "C. Bergström" wrote:
>> Ok. here's a trick question.. So to the best of my understanding zfs 
>> turns off write caching if it doesn't own the whole disk.. So what if s0 
>> *is* the whole disk?  Is write cache supposed to be turned on or off? 
>
>Actually, ZFS doesn't turn on write caching if it doesn't own the whole
>disk.  It leaves it alone.  You can turn it on yourself.
>
>It leaves it alone because it doesn't know if it would be safe to
>enable.  If you know that there is nothing on the disk other than ZFS,
>you can enable it and ZFS will get the benefit.
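
For reference, if you do decide to enable the write cache yourself, format's
expert mode exposes it. A rough sketch of the menu path follows; the exact
menus may vary with the drive type:

   # format -e
     ... select the disk ...
   format> cache
   cache> write_cache
   write_cache> display
   write_cache> enable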

Isn't it true that for IDE disks, the disks have "write caching" enabled
by default?

>Swapfile performance is usually pretty far down on the list of things I
>want to optimize.  I'd rather set up for good management and expect that
>I won't need high-performance swapfiles.


If swap performance is a bottleneck, add more memory.
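
For what it's worth, if more swap does turn out to be needed on a ZFS root,
adding a zvol is straightforward; a sketch with example names and sizes:

   # zfs create -V 2G rpool/swap2
   # swap -a /dev/zvol/dsk/rpool/swap2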

Casper

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread James C. McPherson
On Thu, 27 Nov 2008 04:33:54 -0800 (PST)
Ross <[EMAIL PROTECTED]> wrote:

> Hmm...  I logged this CR ages ago, but now I've come to find it in
> the bug tracker I can't see it anywhere.
> 
> I actually logged three CR's back to back, the first appears to have
> been created ok, but two have just disappeared.  The one I created ok
> is:  http://bugs.opensolaris.org/view_bug.do?bug_id=6766364
> 
> There should be two other CR's created within a few minutes of that,
> one for disabling caching on CIFS shares, and one regarding this ZFS
> availability discussion.  Could somebody at Sun let me know what's
> happened to these please.

Hi Ross,
I can't find the ZFS one you mention. The CIFS one is 
http://bugs.opensolaris.org/view_bug.do?bug_id=6766126.
It's been marked as 'incomplete' so you should contact
the R.E. - Alan M. Wright (at sun dot com, etc) to find
out what further info is required.


hth,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-27 Thread Chris Ridd

On 26 Nov 2008, at 17:08, Chris Ridd wrote:

> It feels a lot like "don't start from here" (ie from my 2008.05
> install) so I'm doing an install of 101b from CD onto one of the new
> disks right now. At least format's not showing a swap slice now,
> yippee. I'll try and get it mirroring afterwards, and then see how
> much I can extract from my broken disk.

I've successfully got 101b installed and booting and mirrored.

I did get a "Bad PBR sig" error after the install onto a single disk,  
and subsequently noticed the slices on that disk were slightly off:  
slice 8 (boot) was on cylinder 0, but slices 0 and 2 ended on  
different cylinders. I adjusted the end of slice 0, ran installgrub on  
it, and it now boots. I copied that vtoc over to the other disk and  
ran installgrub on it too.
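
For anyone following along, the usual idiom for cloning the label and making
the second disk bootable looks roughly like this (the device names are only
examples; substitute your own):

   # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0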

I'm not 100% convinced it'll boot if half the mirror's not there, but  
I was rearranging the drives on the controller at the time. Presumably  
that's a terrible idea?

Now to recover the bits from my dying disk...

Cheers,

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Ross
Thanks James, I've e-mailed Alan and submitted this one again.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Ross
Hmm...  I logged this CR ages ago, but now that I've come to look for it in the 
bug tracker I can't see it anywhere.

I actually logged three CRs back to back; the first appears to have been 
created OK, but the other two have just disappeared.  The one that was created 
OK is:  http://bugs.opensolaris.org/view_bug.do?bug_id=6766364

There should be two other CRs created within a few minutes of that, one for 
disabling caching on CIFS shares and one regarding this ZFS availability 
discussion.  Could somebody at Sun let me know what's happened to these, please?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Bernard Dugas
Hello,

Thank you for this very interesting thread !

I want to confirm that Synchronous Distributed Storage is the main goal for us 
when using ZFS!

The target architecture is one local drive plus two (or more) remote iSCSI 
targets, with ZFS acting as the iSCSI initiator.

The system is sized so that the local disk can handle all of the required 
performance with a good margin, as can each of the iSCSI targets over 
sufficiently large Ethernet fibre links.

I need network problems not to slow down reads from the local disk, and writes 
to stall only if no remote target is available after a time-out.

I also posted a comment on that subject at:
http://blogs.sun.com/roch/entry/using_zfs_as_a_network

To myxiplx: we call a "sleeping failure" the failure of one component that is 
hidden by redundancy but not detected by monitoring. These are the most 
dangerous...

Would anybody be interested in supporting an open-source "project seed" called 
MiSCSI? The idea is Multicast iSCSI: a single write from the initiator is 
propagated by the network to all subscribed targets, with dynamic subscription 
and "resilvering" delegated to the remote targets. I would prefer that this 
behaviour already existed in ZFS :-)

Please send me any comments if you are interested; I may send a draft for an 
RFP...

Best regards !
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-11-27 Thread Scott Williamson
I have Solaris 10 set to resolve user information from my directory (LDAP).
I only get primary group information, not secondary. We use eDirectory via
LDAP, and the attribute it uses for group membership is not the one that
Solaris looks for.

If you run the id command for the user on the box, does it show the user's
secondary groups?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Ross
Well, you're not alone in wanting to use ZFS and iSCSI like that, and in fact 
my change request suggested that this is exactly one of the things that could 
be addressed:

"The idea is really a two stage RFE, since just the first part would have 
benefits.  The key is to improve ZFS availability, without affecting it's 
flexibility, bringing it on par with traditional raid controllers.

A.  Track response times, allowing for lop sided mirrors, and better failure 
detection.  Many people have requested this since it would facilitate remote 
live mirrors.

B.  Use response times to timeout devices, dropping them to an interim failure 
mode while waiting for the official result from the driver.  This would prevent 
redundant pools hanging when waiting for a single device."

Unfortunately if your links tend to drop, you really need both parts.  However, 
if this does get added to ZFS, all you would then need is standard monitoring 
on the ZFS pool.  That would notify you when any device fails and the pool goes 
to a degraded state, making it easy to spot when either the remote mirrors or 
local storage are having problems.  I'd have thought it would make monitoring 
much simpler.

And if this were possible, I would hope that you could configure iSCSI devices 
to automatically reconnect and resilver too, so the system would be self 
repairing once faults are corrected, but I haven't gone so far as to test that 
yet.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS ACL/ACE issues with Samba - Access Denied

2008-11-27 Thread Nils Goroll

> If you run the id command for the user on the box, does it show the user's 
> secondary groups?

Plain id never shows secondary groups.

Use id -a instead.
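
For example (illustrative output only; the user and group names here are made
up):

   $ id -a someuser
   uid=1001(someuser) gid=10(staff) groups=10(staff),14(sysadmin)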

Nils
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZPool and Filesystem Sizing - Best Practices?

2008-11-27 Thread Paul Sobey
On Wed, 26 Nov 2008, Anton B. Rang wrote:

>> If there is a zfs implementation bug it could perhaps be more risky
>> to have five pools rather than one.
>
> Kind of goes both ways.  You're perhaps 5 times as likely to wind up with a 
> damaged pool, but if that ever happens, there's only 1/5 as much data to 
> restore.

One more question regarding this - has anybody on this list had (or heard 
of) a zpool corruption that prevented access to all data in the pool? 
We're probably going to have a second X4540 here and push snapshots to it 
daily, so as a last resort we'd have access to the data on another 
machine. I'd like to minimise the chance of problems on the live machine 
first though :)
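
For reference, a daily incremental push to the second machine could look
roughly like this (pool, dataset, snapshot and host names are placeholders):

   # zfs snapshot tank/data@2008-11-27
   # zfs send -i tank/data@2008-11-26 tank/data@2008-11-27 | \
       ssh backuphost zfs receive -F backup/data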

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Tool to figure out optimum ZFS recordsize for a Mail server Maildir tree?

2008-11-27 Thread Roch Bourbonnais

Le 22 oct. 08 à 21:02, Bill Sommerfeld a écrit :

> On Wed, 2008-10-22 at 09:46 -0700, Mika Borner wrote:
>> If I turn zfs compression on, does the recordsize influence the
>> compressratio in anyway?
>
> zfs conceptually chops the data into recordsize chunks, then  
> compresses
> each chunk independently, allocating on disk only the space needed to
> store each compressed block.
>
> On average, I'd expect to get a better compression ratio with a larger
> block size since typical compression algorithms will have more  
> chance to
> find redundancy in a larger block of text.
>

With gzip, yes, but with the default compression I believe lzjb looks for 
short repeating byte patterns, so it should not depend much on recordsize.

-r


> as always your mileage may vary.
>
>   - Bill
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZPool and Filesystem Sizing - Best Practices?

2008-11-27 Thread Ross
Yes, several people have had problems.  Many, many people without redundancy in 
their pools have had problems, and I've also seen one or two cases of quite 
large pools going corrupt (including at least one on a thumper I believe).  I 
think they were all a good while ago now (9 months or so), and ZFS appears much 
more mature these days.  

I suspect the risk now is relatively low, but ZFS is still a pretty young 
filesystem, and one that's undergoing rapid development.

I'm happily storing production data on ZFS, but I do have backups of everything 
stored on a non-ZFS filesystem.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Bernard Dugas
> Well, you're not alone in wanting to use ZFS and
> iSCSI like that, and in fact my change request
> suggested that this is exactly one of the things that
> could be addressed:

Thank you ! Yes, this was also to tell you that you are not alone :-)

I agree completely with you on your technical points !
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZPool and Filesystem Sizing - Best Practices?

2008-11-27 Thread Bob Friesenhahn
On Thu, 27 Nov 2008, Paul Sobey wrote:

> One more question regarding this - has anybody on this list had (or heard
> of) a zpool corruption that prevented access to all data in the pool?

If you check the list archives, you will find a number of such cases. 
Usually they occurred on non-Sun hardware.  In some cases people lost 
their pools after a BIOS upgrade which commandeered a few more disk 
bytes for itself.  In most cases the pool was recoverable with some 
assistance from Sun tech support.

It is important to avoid hardware which writes data in the wrong 
order, or does not obey cache control and cache flush commands.  As 
one would expect, Sun hardware is "well behaved".

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 2008.11 rc2 weird zpool import behaviour

2008-11-27 Thread Udo Grabowski
Hello,
after installing 2008.11 and accidentally locking up the administrative account 
by giving it userid=0, I cannot log in to our machine (only role-based access is 
allowed). So I tried it the usual way: boot from the 2008.11 live CD, then 
'zpool import -f -R /a rpool' or 'zpool import -f -R /a 23532453453 root_pool' 
(tried both variants; the id is fake).

The weird effect is that, although zfs list shows that rpool/ROOT/opensolaris 
is mounted on /a, I can only see a few directories (/a/export/home, /a/rpool), 
but no files at all, and not the usual root directories either! So there is no 
way to change /a/etc/shadow and /a/etc/passwd. Booting again from the disk, 
everything is fine (except that no login is possible...).

What's going on here? An rc2 bug? Any other way to get the pool into a usable 
state?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 2008.11 rc2 weird zpool import behaviour

2008-11-27 Thread petede
I found I had to set the mountpoint property for the rpool/ROOT/opensolaris
filesystem, zfs mount it, do the edits, and then unmount and reset the
mountpoint property back to / (I made the mistake of messing up the user_attr
file).
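
A sketch of that sequence from the live CD, assuming the default 2008.11
dataset names (adjust as needed):

   # zpool import -f rpool
   # zfs set mountpoint=/a rpool/ROOT/opensolaris
   # zfs mount rpool/ROOT/opensolaris
   ... edit /a/etc/passwd, /a/etc/shadow, /a/etc/user_attr ...
   # zfs umount rpool/ROOT/opensolaris
   # zfs set mountpoint=/ rpool/ROOT/opensolaris
   # zpool export rpool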


Udo Grabowski wrote:
> Hello,
> after installing 2008.11 and accidently locking up the administrative account 
> by 
> giving it userid=0, I cannot login to our machine (only role based access 
> allowed). 
> So I tried it the usual way: boot from 2008.11 live-cd, then 'zpool import -f 
> -R /a rpool' 
> or  'zpool import -f -R /a 23532453453 root_pool'  (tried both variants, id 
> is fake).
> 
> The weird effect is that, although zfs list shows that rpool/ROOT/opensolaris 
> is mounted
> on /a, I can only see a few directories (/a/export/home/,/a/rpool), but no 
> files at all, and 
> also not the usual root directories ! So no way to change /a/etc/shadow and 
> /a/etc/passwd.
> Booting again  from the disk, everything is fine (except no login 
> possible...).
> 
> What's going on here ? A rc2 bug ? Any other ways to get the pool into a 
> usable state ?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 2008.11 rc2 weird zpool import behaviour

2008-11-27 Thread Ross
Am I right in thinking that it hadn't mounted the pool, but a directory of the 
same name was there?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-27 Thread dick hoogendijk
On Thu, 27 Nov 2008 12:58:20 +
Chris Ridd <[EMAIL PROTECTED]> wrote:

> I'm not 100% convinced it'll boot if half the mirror's not there,

Believe me, it will (been there, done that). You -have- to make sure,
though, that both disks have had installgrub run on them... and that your
BIOS is able to boot from the other disk.
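
For reference, running installgrub on the second half of the mirror looks
like this (the device name is just an example):

   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0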

You can always try it out by pulling the plug on one of the disks ;-)

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS sxce snv101 ++
+ All that's really worth doing is what we do for others (Lewis Carrol)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Separate /var

2008-11-27 Thread Edward Irvine
Hi Folks,

I'm currently working with an organisation who want to use ZFS for their  
full zones. Storage is SAN attached, and they also want to create a  
separate /var for each zone, which causes issues when the zone is  
installed. They believe that a separate /var is still good practice.

What are others doing in this space?

Any pointers appreciated.

Thanks
Eddie

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-11-27 Thread Rich Teer
On Fri, 28 Nov 2008, Edward Irvine wrote:

> What are others doing in this space?

Educate them how the world has changed!  A separate /var is even
less necessary with ZFS than with UFS.

-- 
Rich Teer, SCSA, SCNA, SCSECA

CEO,
My Online Home Inventory

URLs: http://www.rite-group.com/rich
  http://www.linkedin.com/in/richteer
  http://www.myonlinehomeinventory.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-11-27 Thread Ian Collins



 On Fri 28/11/08 09:39 , Edward Irvine [EMAIL PROTECTED] sent:
> Hi Folks,
> 
> I'm currently working with an organisation who want to use ZFS for their  
> full zones. Storage is SAN attached, and they also want to create a  
> separate /var for each zone, which causes issues when the zone is  
> installed. They believe that a separate /var is still good practice.
> 
Is it even possible?  Well I guess anything is possible, but I doubt such a 
configuration would be supported or survive an upgrade.

> What are others doing in this space?
> 
I usually create a filesystem for each zone's root and either use loopback 
mounts or ZFS data sets for any additional space (/export/home for instance) 
required in the zone.  I can then apply quotas to these filesystems if required.
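
A rough sketch of what that looks like in zonecfg (zone, pool and path names
are only examples):

   # zonecfg -z myzone
   zonecfg:myzone> add fs
   zonecfg:myzone:fs> set dir=/export/home
   zonecfg:myzone:fs> set special=/tank/zones/myzone/home
   zonecfg:myzone:fs> set type=lofs
   zonecfg:myzone:fs> end
   zonecfg:myzone> add dataset
   zonecfg:myzone:dataset> set name=tank/zones/myzone/data
   zonecfg:myzone:dataset> end
   zonecfg:myzone> commit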

-- 
Ian

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Micron - 1GB/s PCIe SSD's

2008-11-27 Thread Ross
... here's hoping they release it with Solaris drivers
http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9121698&intsrc=hm_list
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-11-27 Thread Gary Mills
On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
> 
> I'm currently working with an organisation who want use ZFS for their  
> full zones. Storage is SAN attached, and they also want to create a  
> separate /var for each zone, which causes issues when the zone is  
> installed. They believe that a separate /var is still good practice.

If your mount options are different for /var and /, you will need
a separate filesystem.  In our case, we use `setuid=off' and
`devices=off' on /var for security reasons.  We do the same thing
for home directories and /tmp .
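
With ZFS that translates into something like the following (the dataset name
is just an example):

   # zfs create -o setuid=off -o devices=off rpool/zones/myzone/var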

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 2008.11 rc2 weird zpool import behaviour

2008-11-27 Thread Udo Grabowski
I already tried setting the mountpoint explicitly; it gives exactly the same 
behaviour. There wasn't a directory of the same name (I tried with /a existing 
and with /a not existing before the import/mountpoint setting), and zpool 
status and zfs list show exactly what is expected. But only the mountpoints 
under /a and a few of the subdirectories there appear, not a single file. 
zpool status before the import was empty, so there is no duplicate rpool. I 
also tried setting the mountpoint to legacy and mounting by hand somewhere 
else; exactly the same strange appearance.
We've had root on ZFS for over a year now with SXDE 01/08 (fiddled a lot with 
zfs to get it running before it was officially possible), and have never seen 
such behaviour.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-11-27 Thread Ian Collins
On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
> On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
> >
> > I'm currently working with an organisation who want to use ZFS for their
> > full zones. Storage is SAN attached, and they also want to create a
> > separate /var for each zone, which causes issues when the zone is
> > installed. They believe that a separate /var is still good practice.
> >
> If your mount options are different for /var and /, you will need
> a separate filesystem.  In our case, we use `setuid=off' and
> `devices=off' on /var for security reasons.  We do the same thing
> for home directories and /tmp .
> 
For zones?

-- 
Ian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 2008.11 rc2 weird zpool import behaviour

2008-11-27 Thread petede
Udo Grabowski wrote:
> Tried setting the mountpoint explicitly already, gives exactly the same 
> behaviour.  There wasn't a directory of the same name (tried with /a existing 
> and /a not existing before the import/mountpoint setting), and zpool status
> and zfs list show exactly what is expected. But only the mountpoints under /a
> and a few of the subdirectories there appear, not a single file. zpool status 
> before the
> import was empty, so no duplicate rpool. Tried setting mountpoint to legacy
> and mounted by hand somewhere, exactly the same strange appearance.
> We've root on zfs over a year now with SXDE 01/08 (fiddled at lot with zfs to 
> get it 
> running before it was officially possible), and never have seen such a 
> behaviour.

That is odd - I just went through it here on a VirtualBox system:

o boot from the iso image
o fire up a terminal window and do the following:
   zpool import -f rpool
o cd /rpool; this only shows two directories, boot and etc
o zfs set mountpoint=/foo rpool/ROOT/opensolaris
o zfs mount rpool/ROOT/opensolaris
o ls /foo and all the pieces are there

Is this what you are doing? Note that /foo did not exist.

ta
pete
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice for swap and root pool

2008-11-27 Thread Peter Brouwer, Principal Storage Architect






dick hoogendijk wrote:
> On Thu, 27 Nov 2008 12:58:20 +
> Chris Ridd <[EMAIL PROTECTED]> wrote:
>
>> I'm not 100% convinced it'll boot if half the mirror's not there,
>
> Believe me, it will (been there, done that). You -have- to make sure,
> though, that both disks have had installgrub run on them... and that your
> BIOS is able to boot from the other disk.
>
> You can always try it out by pulling the plug on one of the disks ;-)

Or, less drastically, use the BIOS to point at one of the disks in the
mirror to boot from.
If you have used installgrub to set up stage1 and stage2 on both boot
drives, you can boot from either one.


-- 
Regards Peter Brouwer,
Sun Microsystems Linlithgow
Principal Storage Architect, ABCP DRII Consultant
Office:+44 (0) 1506 672767
Mobile:+44 (0) 7720 598226
Skype :flyingdutchman_,flyingdutchman_l





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] cpio silently not copying files > 2GB -- seriously!?

2008-11-27 Thread Håkan Olsson
Hi,

I was just restoring a bunch of files from backup using find|cpio when I 
noticed that cpio does not copy files >2GB properly. The resulting files 
were "oddly" sized (the size modulo 2GB, perhaps?).

Even more alarming, cpio did not warn in any way about not copying the 
file correctly! The cpio command exited normally ($? was 0). Output from 
"-pmdu" did not indicate any errors!

Fortunately, I happened to run 'du' and noticed it giving different sizes 
for the restored and "backup" directories, I've since re-restored the 
affected files using plain cp.

I guess I was lucky not doing the backup using cpio! :(

Seriously bad behaviour here -- something that really should be fixed for, 
e.g., "Solaris 11".

FWIW, I know about "largefile(5)", but I don't think failing silently like 
this (copying "some" data) can be considered proper behaviour in any 
scenario.

(This probably does not strictly belong in zfs-discuss, although for me 
both the backup media and the destination pool to restore to are ZFS.)

/H
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Separate /var

2008-11-27 Thread Gary Mills
On Fri, Nov 28, 2008 at 11:19:14AM +1300, Ian Collins wrote:
> On Fri 28/11/08 10:53 , Gary Mills [EMAIL PROTECTED] sent:
> > On Fri, Nov 28, 2008 at 07:39:43AM +1100, Edward Irvine wrote:
> > >
> > > I'm currently working with an organisation who want to use ZFS for their
> > > full zones. Storage is SAN attached, and they also want to create a
> > > separate /var for each zone, which causes issues when the zone is
> > > installed. They believe that a separate /var is still good practice.
> > If your mount options are different for /var and /, you will need
> > a separate filesystem.  In our case, we use `setuid=off' and
> > `devices=off' on /var for security reasons.  We do the same thing
> > for home directories and /tmp .
> > 
> For zones?

Sure, if you require different mount options in the zones.

-- 
-Gary Mills--Unix Support--U of M Academic Computing and Networking-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] cpio silently not copying files > 2GB -- seriously!?

2008-11-27 Thread Ian Collins
On Fri 28/11/08 13:09 , Håkan Olsson [EMAIL PROTECTED] sent:
> Hi,
> 
> I was just restoring a bunch of files from backup using find|cpio when I
> noticed that cpio does not copy files >2GB properly. The resulting files
> were "oddly" sized ( % 2GB, perhaps?).
> 
> Even more alarming, cpio did not warn in any way about not copying the 
> file correctly! The cpio command exited normally ($? was 0). Output from
> "-pmdu" did not indicate any errors!
> 
> Seriously bad behaviour here -- something that really should be fixed for
> e.g "Solaris 11".
> 
See the rather confusing comment at the end of the cpio man page:

     The new pax(1) format, with a command that supports it (for
     example, pax, tar, or cpio), should be used for large files.
     The cpio command is no longer part of the current POSIX
     standard and is deprecated in favor of pax.
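
A hedged example of the equivalent copy with pax in copy mode (source and
destination paths are placeholders):

   $ cd /source && pax -rw -pe . /destination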

-- 
Ian.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Micron - 1GB/s PCIe SSD's

2008-11-27 Thread Tim
On Thu, Nov 27, 2008 at 3:04 PM, Ross <[EMAIL PROTECTED]> wrote:

> ... here's hoping they release it with Solaris drivers
>
> http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9121698&intsrc=hm_list
> --
>


So it's fusionIO?  Here's hoping they can come in with a realistic price for
home enthusiasts... I highly doubt that will ever happen though.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Richard Elling
Ross wrote:
> Well, you're not alone in wanting to use ZFS and iSCSI like that, and in fact 
> my change request suggested that this is exactly one of the things that could 
> be addressed:
>
> "The idea is really a two stage RFE, since just the first part would have 
> benefits.  The key is to improve ZFS availability, without affecting it's 
> flexibility, bringing it on par with traditional raid controllers.
>
> A.  Track response times, allowing for lop sided mirrors, and better failure 
> detection. 

I've never seen a study which shows, categorically, that disk or network
failures are preceded by significant latency changes.  How do we get
"better failure detection" from such measurements?

>  Many people have requested this since it would facilitate remote live 
> mirrors.
>   

At a minimum, something like VxVM's preferred plex should be reasonably
easy to implement.

> B.  Use response times to timeout devices, dropping them to an interim 
> failure mode while waiting for the official result from the driver.  This 
> would prevent redundant pools hanging when waiting for a single device."
>   

I don't see how this could work except for mirrored pools.  Would that
carry enough market to be worthwhile?
 -- richard

> Unfortunately if your links tend to drop, you really need both parts.  
> However, if this does get added to ZFS, all you would then need is standard 
> monitoring on the ZFS pool.  That would notify you when any device fails and 
> the pool goes to a degraded state, making it easy to spot when either the 
> remote mirrors or local storage are having problems.  I'd have thought it 
> would make monitoring much simpler.
>
> And if this were possible, I would hope that you could configure iSCSI 
> devices to automatically reconnect and resilver too, so the system would be 
> self repairing once faults are corrected, but I haven't gone so far as to 
> test that yet.
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Micron - 1GB/s PCIe SSD's

2008-11-27 Thread Ross
I would imagine that 1GB/s is probably overkill for most home enthusiasts, Tim.  
I would have thought something like the ACARD ram disk would be fast enough for 
that market:
http://www.mars-tech.com/ans-9010b.htm.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Availability: ZFS needs to handle disk removal / driver failure better

2008-11-27 Thread Ross Smith
On Fri, Nov 28, 2008 at 5:05 AM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Ross wrote:
>>
>> Well, you're not alone in wanting to use ZFS and iSCSI like that, and in
>> fact my change request suggested that this is exactly one of the things that
>> could be addressed:
>>
>> "The idea is really a two stage RFE, since just the first part would have
>> benefits.  The key is to improve ZFS availability, without affecting it's
>> flexibility, bringing it on par with traditional raid controllers.
>>
>> A.  Track response times, allowing for lop sided mirrors, and better
>> failure detection.
>
> I've never seen a study which shows, categorically, that disk or network
> failures are preceded by significant latency changes.  How do we get
> "better failure detection" from such measurements?

Not preceded as such, no, but a disk or network failure will certainly
cause significant latency changes.
going to be a sudden, and very large change in latency.  Sure, FMA
will catch most cases, but we've already shown that there are some
cases where it doesn't work too well (and I would argue that's always
going to be possible when you are relying on so many different types
of driver).  This is there to ensure that ZFS can handle *all* cases.


>>  Many people have requested this since it would facilitate remote live
>> mirrors.
>>
>
> At a minimum, something like VxVM's preferred plex should be reasonably
> easy to implement.
>
>> B.  Use response times to timeout devices, dropping them to an interim
>> failure mode while waiting for the official result from the driver.  This
>> would prevent redundant pools hanging when waiting for a single device."
>>
>
> I don't see how this could work except for mirrored pools.  Would that
> carry enough market to be worthwhile?
> -- richard

I have to admit, I've not tested this with a raided pool, but since
all ZFS commands hung when my iSCSI device went offline, I assumed
that you would get the same effect of the pool hanging if a raid-z2
pool is waiting for a response from a device.  Mirrored pools do work
particularly well with this since it gives you the potential to have
remote mirrors of your data, but if you had a raid-z2 pool, you still
wouldn't want that hanging if a single device failed.

I will go and test the raid scenario though on a current build, just to be sure.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs znode changes getting lost

2008-11-27 Thread shelly
Thanks a lot, Perrin, that really helped.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss