Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-06 Thread Siju George
On Wed, Sep 29, 2010 at 1:12 AM, Jeff Blaine  wrote:
> We're considering ditching our Sun boxes with vice
> partitions on ZFS :(
>

hope this helps

http://leaf.dragonflybsd.org/mailarchive/users/2010-09/msg00083.html

thanks

--Siju
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-03 Thread Simon Wilkinson

On 1 Oct 2010, at 14:17, a...@setfilepointer.com wrote:
> 
> Of course, there are lots of other wish-list things for OpenAFS, like my
> favorite, file encryption in the cache manager (so the files are already
> encrypted when they arrive at the fileserver.)

We had a Summer of Code project this year implementing exactly this.

S.



Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-03 Thread alec
On 2010-10-01 11:35, Harald Barth wrote:
[chop]
> 
> Another way to tackle the data corruption issue in the AFS case would
> be to add checksum functionality to the fileserver backend. In
> contrast to NFS, we have the advantage that no one reads the data
> directly from the file system but always through the client.

IMHO, AFS crossed over into these kinds of enterprise features the
instant a "volume" could be replicated or moved transparently... I'd be
happy to see more features like data checksumming, real-time
multi-server volume replication, and so on.  It seems like burying this
stuff in the filesystem just moves your single point of failure up into
the fileserver... if AFS were to replicate volumes to multiple
fileservers that point-of-failure moves up into the network (and if the
network is failing... nobody can get at their files anyway).  

Of course, there are lots of other wish-list things for OpenAFS, like my
favorite, file encryption in the cache manager (so the files are already
encrypted when they arrive at the fileserver.)





Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-03 Thread Dirk Heinrichs
On 28.09.2010 21:49, Russ Allbery wrote:
> Jeff Blaine  writes:
> 
>> What's the tried-and-true production-quality Linux equivalent?
>> Anything?  Last I read, nothing.
> 
> There's nothing really equivalent to ZFS.
> 
>> Barring an equivalent, what Linux setup...
> 
>>   a) seems most stable
>>   b) is fsck-less
> 
>> Even quick grunt responses are appreciated.
> 
> We use ext3.  It isn't the fastest or the most featureful, but it's the
> core file system that everyone uses on Linux and for us it's been rock
> solid.  You're the least likely to run into strange problems.
> 
> Lots of people also use XFS, and it should be reasonably stable.  I would
> avoid ReiserFS and JFS due to lack of developers and widespread use.
> 
> ext4 is getting to the point that it's mature enough to use, but I'm not
> sure I'd trust it yet.

I already run btrfs, which is (or will be) somewhat equivalent to ZFS.

Bye...

Dirk






Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-01 Thread Brandon S Allbery KF8NH

On 10/1/10 14:44 , Steve Simmons wrote:
> On Sep 30, 2010, at 4:09 PM, Robert Milkowski wrote:
>> Why does it matter to you if ZFS is being developed in the open or not?
> 
> Can't speak for anybody else, but w/r/t umich and AFS it's likely a matter of 
> cost. We can build and run oAFS on white boxes a helluva lot cheaper than we 
> can do it on purchased Sun/Oracle anything. I'm keeping nexenta in mind as a 
> possible solution, but our next significant hardware/software roll is a year 
> or two away. When we hit that point, we'll see what the world looks like.

Same for CMU and AFS.  (And I'm developing a rather strong hatred of Oracle
for pulling the rug out from under us.)

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-01 Thread Robert Milkowski


> On Thu, Sep 30, 2010 at 22:56, Robert Milkowski wrote:
>> On 30/09/2010 22:42, Gary Buhrmaster wrote:
>>> On Thu, Sep 30, 2010 at 20:09, Robert Milkowski wrote:
>>> ...
>>>> btw: according to the leaked memo Oracle will provide source code for
>>>> Solaris, including ZFS, every time they produce a new Solaris release.
>>>> This would mean that it will still be open source, but development
>>>> wouldn't happen in the open.
>>>
>>> I read the same leaked memo, and what I took from it is that
>>> it implies no interim feature updates (which for ZFS have
>>> been occurring during the current Solaris release), and no
>>> bug fixes (when needed).  Just a code drop every major
>>> release (24 months or so?).  As to whether that is what
>>> will actually happen is unclear (leaked memos are not
>>> policy).
>>
>> Well, they've just released S10 U9 with ZFS updates.
>> Then they are about to publish Solaris 11 Express with even more new ZFS
>> features.
>
> Have they published the source code?  That is what I am
> talking about: source code that others could use to
> update their implementations.  I have no doubt Oracle
> will continue to release updates for their closed source
> releases.

Well, yes. All ZFS features in Solaris 10, including the latest update, 
have been in Open Solaris for quite some time. The latest source 
available is for build 147 while all features in the S10 update well 
predate build 134.


--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-01 Thread Steve Simmons

On Sep 30, 2010, at 4:09 PM, Robert Milkowski wrote:

> Why does it matter to you if ZFS is being developed in the open or not?

Can't speak for anybody else, but w/r/t umich and AFS it's likely a matter of 
cost. We can build and run oAFS on white boxes a helluva lot cheaper than we 
can do it on purchased Sun/Oracle anything. I'm keeping nexenta in mind as a 
possible solution, but our next significant hardware/software roll is a year or 
two away. When we hit that point, we'll see what the world looks like.

Steve


Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-01 Thread Stephan Wiesand

On Oct 1, 2010, at 16:20, Douglas E. Engert wrote:

> On 10/1/2010 4:35 AM, Harald Barth wrote:
>> 
>> Another way to tackle the data corruption issue in the AFS case would
>> be to add checksum functionality to the fileserver backend. In
>> contrast to NFS, we have the advantage that no one reads the data
>> directly from the file system but always through the client.
> 
> ZFS stores the checksum of a block, not in the block, but in a higher level
> block. This can then detect if a block failed to be written when it's read 
> back.
> Other file systems could not detect this, as they would read old data,
> with an old checksum.

We've also seen disks randomly reading and returning a different sector than 
the one requested.

> Keep this in mind if checksums are added to AFS.

This would be an awesome feature to have. Presumably, for each file one would 
keep an auxiliary one with, say, 4 or 8 bytes of checksum for each 512-byte 
block of payload? Or 64 bytes of Hamming codes, allowing single-bit error 
correction and double-bit error detection?
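As a rough sketch of what such an auxiliary checksum file could look like (purely
illustrative Python, not OpenAFS code; it assumes a 4-byte CRC32 per 512-byte block
and a hypothetical ".cksum" sidecar file next to each data file):

import zlib

BLOCK = 512  # payload block size assumed for this sketch

def write_sidecar(path):
    # Store one 4-byte CRC32 per 512-byte block in a ".cksum" file next to the data.
    with open(path, "rb") as f, open(path + ".cksum", "wb") as out:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            out.write(zlib.crc32(block).to_bytes(4, "big"))

def verify(path):
    # Return the numbers of blocks whose current CRC no longer matches the stored one.
    bad, n = [], 0
    with open(path, "rb") as f, open(path + ".cksum", "rb") as ck:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            if zlib.crc32(block) != int.from_bytes(ck.read(4), "big"):
                bad.append(n)
            n += 1
    return bad

Whether the extra sidecar read on every fetch is acceptable for a busy fileserver
is of course a separate question.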

-- 
Stephan Wiesand
DESY -DV-
Platanenallee 6
15738 Zeuthen, Germany








Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-01 Thread Douglas E. Engert



On 10/1/2010 4:35 AM, Harald Barth wrote:
>> ZFS has some really nice features, but Oracle just priced themselves
>> out of the market for scientific computing.
>
> Looks like their strategy to me. Time will tell if it will be successful.
>
>> It's hard enough to sell buying enterprise disks and servers, when
>> consumer stuff is much cheaper[1], but add a doubling of the head
>> node price to have a safe filesystem and it just won't fly.
>
> As you say, I just can't "sell" it to the researchers. They'd rather
> have double the capacity and double the performance instead. Btw, the
> only really troublesome advanced HW failure with single bit rot that we
> have encountered where ZFS would have saved the day was with an
> advanced "enterprise" SAN system (RIO). The simple "consumer" stuff
> in my experience just fails in a simpler manner.

ZFS did save the day for us. An AFS fileserver was reporting I/O errors,
which ZFS was reporting as errors. A closer look at the SAN showed a few
bad disks, but no other users of the SAN were reporting errors. After
replacing the disks, AFS was working again.

> With daily backups, our HD failures are rare enough and not too much of
> a pain, so I bet that any researcher would rather have double the
> storage and performance than double the price (or more) for Oracle-FS.
>
> Another way to tackle the data corruption issue in the AFS case would
> be to add checksum functionality to the fileserver backend. In
> contrast to NFS, we have the advantage that no one reads the data
> directly from the file system but always through the client.

ZFS stores the checksum of a block, not in the block, but in a higher level
block. This can then detect if a block failed to be written when it's read back.
Other file systems could not detect this, as they would read old data,
with an old checksum. Keep this in mind if checksums are added to AFS.
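To make that distinction concrete, here is a toy illustration (Python, purely for
discussion - this is not how ZFS is actually implemented) of why a checksum kept
in a higher-level block catches a lost write that a checksum stored inside the
block itself cannot:

import hashlib

def sha(b):
    return hashlib.sha256(b).hexdigest()

old, new = b"old contents", b"new contents"

# The write of the new block is silently lost: the sector still holds the old data.
on_disk_data = old

# Scheme A: checksum stored inside the same block. It was written (and lost)
# together with the data, so the disk still holds the old, self-consistent pair.
on_disk_self_checksum = sha(old)
print("self-checksum exposes stale data:",
      sha(on_disk_data) != on_disk_self_checksum)   # False: old data + old checksum still match

# Scheme B: checksum stored in the parent (higher-level) block, which was
# written separately and did make it to disk.
parent_checksum = sha(new)
print("parent checksum exposes stale data:",
      sha(on_disk_data) != parent_checksum)         # True: the stale block is caught on read

In the self-checksummed case the stale block is still internally consistent, so
only a checksum written elsewhere, as part of a different I/O, can expose it.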




> Harald.




--

 Douglas E. Engert  
 Argonne National Laboratory
 9700 South Cass Avenue
 Argonne, Illinois  60439
 (630) 252-5444


Re: [OpenAFS] Overview? Linux filesystem choices

2010-10-01 Thread Harald Barth

> ZFS has some really nice features, but Oracle just priced themselves
> out of the market for scientific computing.

Looks like their strategy to me. Time will tell if it will be successful.

> It's hard enough to sell buying enterprise disks and servers, when
> consumer stuff is much cheaper[1], but add a doubling of the head
> node price to have a safe filesystem and it just won't fly.

As you say, I just can't "sell" it to the researchers. They'd rather
have double the capacity and double the performance instead. Btw, the
only really troublesome advanced HW failure with single bit rot that we
have encountered where ZFS would have saved the day was with an
advanced "enterprise" SAN system (RIO). The simple "consumer" stuff
in my experience just fails in a simpler manner.

With daily backups, our HD failures are rare enough and not too much of
a pain, so I bet that any researcher would rather have double the
storage and performance than double the price (or more) for Oracle-FS.

Another way to tackle the data corruption issue in the AFS case would
be to add checksum functionality to the fileserver backend. In
contrast to NFS, we have the advantage that no one reads the data
directly from the file system but always through the client.

Harald.


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Brandon S Allbery KF8NH

On 9/30/10 12:53 , Vincent Fox wrote:
> It makes me sad that Oracle bought Sun, where it will probably wither.
> If IBM had bought Sun I would have more hope of a good filesystem
> for MacOS, Linux, etc.  in the near term.  ZFS has been stable and
> in production for years now.  I like btrfs but it is years from
> being ready for terabytes of production data.

In my dream world, someone with resources would buy ZFS from Oracle and keep
it usable for everyone else.  Sadly, ZFS is probably 90% of *why* Oracle
bought Sun, which bodes ill for anyone else trying to use it.  :(

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Gary Buhrmaster
On Thu, Sep 30, 2010 at 22:56, Robert Milkowski  wrote:
> On 30/09/2010 22:42, Gary Buhrmaster wrote:
>>
>> On Thu, Sep 30, 2010 at 20:09, Robert Milkowski  wrote:
>>
>> ...
>>
>>>
>>> btw: according to the leaked memo Oracle will provide source code for
>>> Solaris, including ZFS, every time they produce a new Solaris release.
>>> This
>>> would mean that it will still be open source, but development wouldn't
>>> happen in the open.
>>>
>>
>> I read the same leaked memo, and what I took from it is that
>> it implies no interim feature updates (which for ZFS have
>> been occurring during the current Solaris release), and no
>> bug fixes (when needed).  Just a code drop every major
>> release (24 months or so?).  As to whether that is what
>> will actually happen is unclear (leaked memos are not
>> policy).
>>
>
> Well, they've just released S10 U9 with ZFS updates.
> Then they are about to publish Solaris 11 Express with even more new ZFS
> features.

Have they published the source code?  That is what I am
talking about: source code that others could use to
update their implementations.  I have no doubt Oracle
will continue to release updates for their closed source
releases.


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 30/09/2010 22:42, Gary Buhrmaster wrote:
> On Thu, Sep 30, 2010 at 20:09, Robert Milkowski wrote:
> ...
>> btw: according to the leaked memo Oracle will provide source code for
>> Solaris, including ZFS, every time they produce a new Solaris release. This
>> would mean that it will still be open source, but development wouldn't
>> happen in the open.
>
> I read the same leaked memo, and what I took from it is that
> it implies no interim feature updates (which for ZFS have
> been occurring during the current Solaris release), and no
> bug fixes (when needed).  Just a code drop every major
> release (24 months or so?).  As to whether that is what
> will actually happen is unclear (leaked memos are not
> policy).

Well, they've just released S10 U9 with ZFS updates.
Then they are about to publish Solaris 11 Express with even more new ZFS
features.





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Jeffrey Altman
On 9/30/2010 3:52 PM, Robert Milkowski wrote:
> On 30/09/2010 18:05, Jeffrey Altman wrote:
>> Bill Moore and Jeff Bonwick have both left Oracle.  It is certainly a
>> sad day.
>>
>>
> 
> Bill is working for Nexenta and he is working on ZFS :)
> He is supposed to give a talk on ZFS's future at Open Storage Summit -
> see http://nexenta-summit2010.eventbrite.com/

To say that Bill Moore is working for Nexenta is a bit of an
overstatement.  Bill joined their Advisory Board back in April 2010.  As
far as I am aware, that is as far as the relationship goes.

http://www.nexenta.com/corp/blog/2010/04/06/bill-moore-joins-nexenta-advisory-board/

One of the issues at this point is whether ZFS splinters into a number
of incompatible deployments.  Will it be possible to migrate a ZFS posix
layer file system from Oracle Solaris 11 to Nexenta to FreeBSD to Debian
BSD and still have the file system be usable?  Oracle's decision to stop
giving away their intellectual property to Nexenta to ship to end users
before Oracle is able to ship and support Solaris Next makes a great
deal of sense.  The number of patchsets contributed to ZFS
from outside Sun/Oracle was extremely small compared to the
internal feature work.  Oracle's decision to release source in
conjunction with each Solaris release gives them the ability to profit
from their investment while still permitting open source developers to
make use of the code.

---

On a separate note: a dozen e-mails in under 20 minutes to the same
thread really is overkill.

Jeffrey Altman





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Gary Buhrmaster
On Thu, Sep 30, 2010 at 20:09, Robert Milkowski  wrote:

...
> btw: according to the leaked memo Oracle will provide source code for
> Solaris, including ZFS, every time they produce a new Solaris release. This
> would mean that it will still be open source, but development wouldn't
> happen in the open.

I read the same leaked memo, and what I took from it is that
it implies no interim feature updates (which for ZFS have
been occurring during the current Solaris release), and no
bug fixes (when needed).  Just a code drop every major
release (24 months or so?).  As to whether that is what
will actually happen is unclear (leaked memos are not
policy).


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Russ Allbery
Robert Milkowski  writes:
> On 29/09/2010 01:47, Russ Allbery wrote:

>> Oracle so far appears to be pursuing a general policy of slowly ceasing
>> all open source development that Sun was doing.  I'm sure ZFS will
>> continue to exist in some form, but I expect the Oracle version of it
>> to become a commercial software product you have to license.

> ZFS is the main filesystem for Solaris 11 - you will no longer even be
> able to use UFS for root-fs.  ZFS is the filesystem behind Oracle 7000
> series disk arrays.

> They have just announced they are going to use Solaris 11/ZFS in their
> future versions of Exadata.

> IMHO ZFS is the key platform for Oracle and they already are investing
> in it.

None of that contradicts the statement I made.

>> Oracle is not an open source company and appears to have little to no
>> interest in becoming one.

> Why does it matter to you if ZFS is being developed in the open or not?

Why does it matter to me?  Because I'm more likely to run servers on Mac
OS X than I am to run them on Solaris, which means the only interest ZFS
holds for me is its availability on non-Solaris, non-Oracle platforms.
Other people will have different reasons for caring.  :)

> btw: according to the leaked memo Oracle will provide source code for
> Solaris, including ZFS, every time they produce a new Solaris
> release.

I'll be curious to see how long that lasts.

-- 
Russ Allbery (r...@stanford.edu) 


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Gary Buhrmaster
On Thu, Sep 30, 2010 at 20:51, Booker Bense  wrote:

> [1]- "But I can get a 2 TB disk at fry's for $150..."

Then one overpaid.  The current Fry's flyer shows 2TB for $99 :-)


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Jeff Blaine

On 9/30/2010 4:32 PM, Robert Milkowski wrote:
> Why not consider Solaris on x86 then? Be it HP, Dell, Oracle, ...?
> For non-Oracle HW the support is 1000$ per socket IIRC.

Because this is radically different from what we had established
as Sun customers over many years.

We're well off-topic now though ;)

Thanks for all of the great replies so far, everyone.


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Booker Bense

On Thu, 30 Sep 2010, Robert Milkowski wrote:

> I'm curious - did you pay for the support before or did you use a free
> version with no support?

Until thumpers priced themselves out of the market, we just
bought Solaris hardware and the support that goes with it.

Sun gave academic institutions a significant discount so
their hardware could compete. That's gone.

They have also significantly changed the rules about OS support
to make it extremely unattractive.

> IIRC if you want to run Solaris 10 on non-Oracle x86 hardware (Dell, HP, ...)
> the support cost, which covers license as well, is 1000$ per CPU socket per
> year list price.

There has been a great deal of confusion about whether Oracle is
really going to support that or not, and whether that will be true
for Solaris 11.

But if that's the price, it's so expensive nobody will bother.
If you care enough to have ZFS, you'll just buy Oracle hardware.

ZFS has some really nice features, but Oracle just priced
themselves out of the market for scientific computing.

It's hard enough to sell buying enterprise disks and servers,
when consumer stuff is much cheaper[1], but add a doubling of the
head node price to have a safe filesystem and it just won't fly.

- Booker C. Bense

[1]- "But I can get a 2 TB disk at fry's for $150..." Wish I had
a nickel for every time I've heard that.





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 30/09/2010 21:26, Jeff Blaine wrote:
> On 9/30/2010 4:04 PM, Robert Milkowski wrote:
>> On 29/09/2010 01:22, Jeff Blaine wrote:
>>> On 9/28/2010 6:05 PM, Jeffrey Altman wrote:
>>>> On 9/28/2010 5:53 PM, Patricia O'Reilly wrote:
>>>>> I'm curious, what types of problems have you encountered with
>>>>> ZFS? We are actually considering using ZFS on some of our fileservers.
>>>>
>>>> Many organizations are looking to move off of Oracle Solaris due to the
>>>> recent ownership, development and licensing changes. Jeff Blaine can
>>>> correct me but I suspect the reasons for the switch are not technical.
>>>
>>> Precisely.
>>
>> I'm curious - did you pay for the support before or did you use a free
>> version with no support?
>
> Paid support, SPARC.

Why not consider Solaris on x86 then? Be it HP, Dell, Oracle, ...?
For non-Oracle HW the support is 1000$ per socket IIRC.

--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 30/09/2010 15:00, Stephan Wiesand wrote:
> Alas, ZFS is only available with an OS that's *really* troublesome to maintain.
> Frankly, I wouldn't mind the cost. But those patch orgies make Solaris a no-go
> here. And two decades after the invention of RPMs and DEBs, they come up with
> ... IPS.

What's wrong with IPS?
Or is it that you tried an early, in-development version?

IMHO it is actually better than RPM or DEB.
And yes, with Solaris 11/IPS all the patching nightmare is gone...

Actually, if you are doing manual upgrades, what's really cool is IPS +
Boot Environments. Basically you can update the OS on a cloned root-fs
and reboot when it is done and you are ready - if something is wrong you
reboot back into the original environment. All of that with one command, and
very quickly thanks to ZFS clones.
Well, that's much better than crossing your fingers when updating an rpm
based distribution (although the Fedora guys are working on something
similar based on btrfs, it will take some years to be production
ready...).



--
Robert Milkowski
http://milek.blogspot.com




Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Jeff Blaine

On 9/30/2010 4:04 PM, Robert Milkowski wrote:
> On 29/09/2010 01:22, Jeff Blaine wrote:
>> On 9/28/2010 6:05 PM, Jeffrey Altman wrote:
>>> On 9/28/2010 5:53 PM, Patricia O'Reilly wrote:
>>>> I'm curious, what types of problems have you encountered with
>>>> ZFS? We are actually considering using ZFS on some of our fileservers.
>>>
>>> Many organizations are looking to move off of Oracle Solaris due to the
>>> recent ownership, development and licensing changes. Jeff Blaine can
>>> correct me but I suspect the reasons for the switch are not technical.
>>
>> Precisely.
>
> I'm curious - did you pay for the support before or did you use a free
> version with no support?

Paid support, SPARC.


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 29/09/2010 01:52, Booker Bense wrote:
> On Tue, 28 Sep 2010, Vincent Fox wrote:
>> On 09/28/2010 04:13 PM, Rich Sudlow wrote:
>>> that being said we're also looking for fileserver
>>> alternatives due to Oracle takeover.
>>
>> What's your reasoning here?
>
> Oracle is making it pretty clear that Solaris is only for
> Solaris hardware and they are changing the support/price
> structure.
>
> Ideally, we'd like to run Solaris on commodity x86 boxes rather than
> the increasingly expensive thumper boxes.
>
> For whatever reason Oracle is pricing themselves out of the market wrt
> generic hardware.

You can definitely buy support for Solaris 10 from Oracle on
non-Oracle x86 servers.

IIRC it is 1000$ per socket per year for 24x7 support.

http://www.oracle.com/us/corporate/press/161333
http://www.oracle.com/us/products/servers-storage/solaris/non-sun-x86-081976.html

--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Andy Cobaugh

On 2010-09-30 at 21:00, Robert Milkowski ( mi...@task.gda.pl ) said:
> On 30/09/2010 15:12, Andy Cobaugh wrote:
>> I don't think anybody has mentioned the block level compression in ZFS
>> yet. With simple lzjb compression (zfs set compression=on foo), our AFS
>> home directories see ~1.75x compression. That's an extra 1-2TB of disk
>> that we don't need to store. Of course that makes balancing vice
>> partitions interesting when you can only see the compression ratio at the
>> filesystem level and not the volume level.
>>
>> Checksums are nice too. There's no longer a question of whether your
>> storage hardware wrote what you wanted it to write. This can go a long way
>> to helping to predict failures if you run zpool scrub on a regular basis
>> (otherwise, zfs only detects checksum mismatches upon read; scrub checks
>> the whole pool).
>>
>> So, just to add us to the list, we're ext3 on linux for small stuff
>> (<10TB), and zfs on solaris for everything else. Will probably consider
>> XFS in the future, however.
>
> Why not ZFS on Solaris x86 for "smaller stuff" as well?

That's just the way things have worked out over the years. "Smaller stuff"
tends to be older machines that were here when I started, and a couple of
those have hardware raid controllers (like 3ware PATA, for example) that
will be decom'd soon. There are also cases where the machine with the
storage attached to it also needs to be used interactively by people
(like, a PI wants a new machine to run stuff on, but also wants 10TB,
which we set up as a vice partition so they can access it from any
machine).

Solaris is great for storage if that's all you use it for, but
anything else gets to be a pain when people start asking for really weird
and complicated stuff to be installed.

If I were doing everything over again, we would eliminate all of the
storage islands and run all the storage through solaris.


--andy


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 29/09/2010 01:47, Russ Allbery wrote:
> Vincent Fox writes:
>> On 09/28/2010 04:13 PM, Rich Sudlow wrote:
>>> that being said we're also looking for fileserver alternatives due to
>>> Oracle takeover.
>>
>> What's your reasoning here?
>>
>> If anything I'd expect them to put effort into optimizing it
>> which Sun was letting languish recently.
>
> Oracle so far appears to be pursuing a general policy of slowly ceasing
> all open source development that Sun was doing.  I'm sure ZFS will
> continue to exist in some form, but I expect the Oracle version of it to
> become a commercial software product you have to license.

ZFS is the main filesystem for Solaris 11 - you will no longer even be
able to use UFS for root-fs.

ZFS is the filesystem behind Oracle 7000 series disk arrays.

They have just announced they are going to use Solaris 11/ZFS in their
future versions of Exadata.

IMHO ZFS is the key platform for Oracle and they already are investing
in it.

> Oracle is not an open source company and appears to have little to no
> interest in becoming one.

Why does it matter to you if ZFS is being developed in the open or not?

btw: according to the leaked memo Oracle will provide source code for
Solaris, including ZFS, every time they produce a new Solaris release.
This would mean that it will still be open source, but development
wouldn't happen in the open.


--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 29/09/2010 01:22, Jeff Blaine wrote:
> On 9/28/2010 6:05 PM, Jeffrey Altman wrote:
>> On 9/28/2010 5:53 PM, Patricia O'Reilly wrote:
>>> I'm curious, what types of problems have you encountered with
>>> ZFS? We are actually considering using ZFS on some of our fileservers.
>>
>> Many organizations are looking to move off of Oracle Solaris due to the
>> recent ownership, development and licensing changes.  Jeff Blaine can
>> correct me but I suspect the reasons for the switch are not technical.
>
> Precisely.

I'm curious - did you pay for the support before or did you use a free
version with no support?

IIRC if you want to run Solaris 10 on non-Oracle x86 hardware (Dell, HP,
...) the support cost, which covers the license as well, is 1000$ per CPU
socket per year list price.

Free option - www.nexenta.org

--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 30/09/2010 15:12, Andy Cobaugh wrote:
> I don't think anybody has mentioned the block level compression in ZFS
> yet. With simple lzjb compression (zfs set compression=on foo), our
> AFS home directories see ~1.75x compression. That's an extra 1-2TB of
> disk that we don't need to store. Of course that makes balancing vice
> partitions interesting when you can only see the compression ratio at
> the filesystem level and not the volume level.
>
> Checksums are nice too. There's no longer a question of whether your
> storage hardware wrote what you wanted it to write. This can go a long
> way to helping to predict failures if you run zpool scrub on a regular
> basis (otherwise, zfs only detects checksum mismatches upon read;
> scrub checks the whole pool).
>
> So, just to add us to the list, we're ext3 on linux for small
> stuff (<10TB), and zfs on solaris for everything else. Will probably
> consider XFS in the future, however.

Why not ZFS on Solaris x86 for "smaller stuff" as well?

--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 30/09/2010 18:05, Jeffrey Altman wrote:
> On 9/30/2010 12:53 PM, Vincent Fox wrote:
>> On 9/30/2010 6:40 AM, Gary Buhrmaster wrote:
>>> Getting back the data you wrote is a hard
>>> problem.  ZFS presumes that everything
>>> downstream of it will (eventually) fail.  There
>>> is overhead there, but it does solve a set
>>> of problems that other solutions do not.
>>> (And the highly paranoid presume ZFS will
>>> fail, so take different precautions).
>>
>> I've seen 3 RAID-5 sets have double-disk failures in the last 5 years.
>>
>> I've seen one even have a triple-disk failure in a short timespan.
>> Too short for all that rebuild from hot spares business to work.
>>
>> Granted, older disks on older system.
>>
>> Everyone will say "yeah, but it's very unlikely and hasn't happened to ME".
>>
>> I like ZFS RAID-10, I like it a LOT.  I don't think people understand how
>> good it really is, most are afraid of anything other than the OS
>> they run now and antique filesystems that have accreted decades
>> of "fixes" for design defects.  Do you trust black box RAID controllers?
>> I don't.  I really like being able to run scrub whenever I need to ensure
>> the data on the disk is correct.
>>
>> It makes me sad that Oracle bought Sun, where it will probably wither.
>> If IBM had bought Sun I would have more hope of a good filesystem
>> for MacOS, Linux, etc. in the near term.  ZFS has been stable and
>> in production for years now.  I like btrfs but it is years from
>> being ready for terabytes of production data.
>
> Bill Moore and Jeff Bonwick have both left Oracle.  It is certainly a
> sad day.

Bill is working for Nexenta and he is working on ZFS :)
He is supposed to give a talk on ZFS's future at Open Storage Summit -
see http://nexenta-summit2010.eventbrite.com/



--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 30/09/2010 13:19, Stephan Wiesand wrote:
> Hi Jeff,
>
> On Sep 29, 2010, at 22:18 , Jeffrey Altman wrote:
>
>> RAID is not a replacement for ZFS.  ZRAID-3 protects against single bit
>> disk corruption errors that RAID cannot.  Only ZFS stores a checksum of
>> the data as part of each block and verifies it before delivering the
>> data to the application.  If the checksum fails and there are replicas,
>> ZFS will read the data from another copy and fixup the damaged version.
>> That is what makes ZFS so special and so valuable.  If you have data
>> that must be correct, you want ZFS.
>
> you're right, of course. This is a very desirable feature, and the main reason
> why I'd love to see ZFS become available on linux.
>
> I disagree on the "RAID cannot provide this" statement though. RAID-5 has the
> data to detect single bit corruption, and RAID-6 even has the data to correct it.
> Alas, verifying/correcting data upon read is not a common feature. I know of just
> one vendor (DDN) actually providing it. It's a mystery to me why the others don't.

Most raid controllers do not check any parity on reads if a raid
group is not degraded.
In the case of RAID-5 this would make them very slow (as you would need the
entire stripe of data). Not to mention that in ZFS it works with any
RAID configuration, including stripe for meta-data (by default).

ZFS always checks its checksums on reads and transparently fixes any
corruption if it can.
Additionally, ZFS uses *much* stronger checksums than what you have in
raid controllers - currently it can even use sha256 if you want it. That
means it can detect much more than just single bit errors.

Another good feature is that with zfs you get so-called end-to-end
checksumming - if data corruption happens anywhere from a disk to
memory (medium errors? driver bugs? SAN? ...) zfs should be able to
detect it and fix it. Not that long ago I was hit by data corruption
caused by one of the SAN switches... fortunately ZFS dealt with it (and the
other switch was fine). Another time there was something wrong with one of
the SCSI cards, which was corrupting data under load; fortunately we used a
zfs mirror between two different JBODs via separate SCSI cards, so from
an application point of view all was fine. Etc...

Not that it happens every day... but when it does, ZFS just rocks :)

And *no* fsck :)

+ compression, deduplication, ... and much more :)

--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Robert Milkowski

On 29/09/2010 16:49, Harald Barth wrote:
> Currently our AFS servers are on Linux and XFS. We were thinking of
> moving to OpenSolaris or Solaris and ZFS. But as things are now, these
> options either have a very uncertain future or a very hefty price tag.
> For us, the name change of the company did result in a doubling of
> prices, something that we did not pay but took another solution. I am
> still searching for a good software raid solution. Any experiences
> with ZFS on FreeBSD out there? Other suggestions?

Nexenta?

--
Robert Milkowski
http://milek.blogspot.com



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread chas williams - CONTRACTOR
On Thu, 30 Sep 2010 14:03:40 -0400
Tom Keiser  wrote:

> RAID-5 only provides a single parity bit.  Unfortunately, this means
> that it can merely detect a single bit parity error; it cannot correct
> the error since there is insufficient information to prove which of
> the stripes is in error.  RAID-6 is complicated because different
> implementations use different algorithms for the two orthogonal
> checksums.  IIRC, all of them are able to detect two-bit errors, and
> some of them can correct a single-bit error.

yes, in a traditional raid5.  i believe that most hardware raid5
vendors have a per disk block checksum also 'protecting' the individual
data blocks on the drives.  i have seen disks that have been formatted
to a blocksize of 520.  i gather these additional 8 bytes are to store a
per block checksum.
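As a rough guess at how those 8 extra bytes might be used (the real on-disk layouts
are vendor specific; this Python sketch simply pairs a CRC of the payload with the
expected block address, so both corrupted data and a sector returned from the wrong
place fail verification):

import struct
import zlib

SECTOR = 512

def protect(data, lba):
    # 8-byte trailer: 4-byte CRC32 of the payload + 4-byte logical block address.
    return data + struct.pack(">II", zlib.crc32(data), lba)

def verify(sector_520, expected_lba):
    data, trailer = sector_520[:SECTOR], sector_520[SECTOR:]
    crc, lba = struct.unpack(">II", trailer)
    if zlib.crc32(data) != crc:
        return "corrupted payload"
    if lba != expected_lba:
        return "misdirected read (got block %d, wanted %d)" % (lba, expected_lba)
    return "ok"

blocks = {n: protect(bytes([n]) * SECTOR, n) for n in range(4)}
print(verify(blocks[2], 2))   # ok
print(verify(blocks[3], 2))   # the drive returned the wrong sector

The block-address tag is what would catch the "drive returned a different sector
than the one requested" failure mode mentioned elsewhere in this thread.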


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Tom Keiser
On Thu, Sep 30, 2010 at 12:02 PM, chas williams - CONTRACTOR
 wrote:
> On Thu, 30 Sep 2010 14:19:51 +0200
> Stephan Wiesand  wrote:
>
>> Hi Jeff,
>>
>> On Sep 29, 2010, at 22:18 , Jeffrey Altman wrote:
>>
>> > RAID is not a replacement for ZFS.  ZRAID-3 protects against single
>> > bit disk corruption errors that RAID cannot.  Only ZFS stores a
>> > checksum of the data as part of each block and verifies it before
>> > delivering the data to the application.  If the checksum fails and
>> > there are replicas, ZFS will read the data from another copy and
>> > fixup the damaged version. That is what makes ZFS so special and so
>> > valuable.  If you have data that must be correct, you want ZFS.
>>
>>
>> you're right, of course. This is a very desirable feature, and the
>> main reason why I'd love to see ZFS become available on linux.
>>
>> I disagree on the "RAID cannot provide this" statement though. RAID-5
>> has the data to detect single bit corruption, and RAID-6 even has the
>> data to correct it. Alas, verifying/correcting data upon read is not
>> a common feature. I know of just one vendor (DDN) actually providing
>> it. It's a mystery to me why the others don't.
>>
>> Anyway, the next best option if ZFS is not available is to run parity
>> checks on all your arrays regularly. Things do get awkward when
>> errors show up, but at least you know. Both Linux MD RAID and the
>> better hardware solutions offer this.
>>
>> From my experience, disks don't do this at random and do not develop
>> such a fault during their life span, but some broken disks do it
>> frequently from the beginning. NB I only ever observed this problem
>> with SATA drives.
>
> raid5 really isnt quite the same as what jeff is describing about zfs.
> zfs apparently maintains multiple copies of the same block across
> different devices.  if you had a single bit error in one of those
> blocks (as determined by some checksum apparently stored with this
> block), zfs will pick another block that is supposed to contain the
> same data.
>
> raid5 only corrects single bit errors.  it can detect multiple bit
> errors.  raid5 (to my knowledge) always verifies, even on reads and can
> correct single bit errors.  raid6 can correct two single bit

RAID-5 only provides a single parity bit.  Unfortunately, this means
that it can merely detect a single bit parity error; it cannot correct
the error since there is insufficient information to prove which of
the stripes is in error.  RAID-6 is complicated because different
implementations use different algorithms for the two orthogonal
checksums.  IIRC, all of them are able to detect two-bit errors, and
some of them can correct a single-bit error.
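A toy illustration of that limitation (Python, using single-byte "stripes" for
brevity; real RAID-5 works on whole chunks, but the algebra is the same): the XOR
parity tells you that some stripe no longer matches, yet every stripe is an equally
plausible culprit, so there is nothing to rebuild from:

from functools import reduce

# Four data "stripes" plus one XOR parity value, one byte each for brevity.
data = [0x11, 0x22, 0x33, 0x44]
parity = reduce(lambda a, b: a ^ b, data)

# A single bit flips in stripe 2 after the parity was written.
corrupted = data[:]
corrupted[2] ^= 0x01

# Detection works: the recomputed parity no longer matches.
print("mismatch detected:", reduce(lambda a, b: a ^ b, corrupted) != parity)

# But location does not: assuming any one stripe is bad yields a
# self-consistent "repair", and parity alone cannot say which is right.
for i in range(len(corrupted)):
    candidate = corrupted[:]
    candidate[i] = parity ^ reduce(lambda a, b: a ^ b,
                                   (b for j, b in enumerate(corrupted) if j != i))
    print("assuming stripe %d is bad gives" % i, [hex(b) for b in candidate])

ZFS can sidestep this because each candidate reconstruction can be checked against
the block checksum kept in the parent block; plain parity has no such tie-breaker.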

-Tom


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Jeffrey Altman
On 9/30/2010 12:53 PM, Vincent Fox wrote:
>  On 9/30/2010 6:40 AM, Gary Buhrmaster wrote:
>> Getting back the data you wrote is a hard
>> problem.  ZFS presumes that everything
>> downstream of it will (eventually) fail.  There
>> is overhead there, but it does solve a set
>> of problems that other solutions do not.
>> (And the highly paranoid presume ZFS will
>> fail, so take different precautions).
>>
> I've seen 3 RAID-5 sets have double-disk failures in the last 5 years.
> 
> I've seen one even have a triple-disk failure in a short timespan.
> Too short for all that rebuild from hot spares business to work.
> 
> Granted, older disks on older system.
> 
> Everyone will say "yeah, but it's very unlikely and hasn't happened to ME".
> 
> I like ZFS RAID-10, I like it a LOT.  I don't think people understand how
> good it really is, most are afraid of anything other than the OS
> they run now and antique filesystems that have accreted decades
> of "fixes" for design defects.  Do you trust black box RAID controllers?
> I don't.  I really like being able to run scrub whenever I need to ensure
> the data on the disk is correct.
> 
> It makes me sad that Oracle bought Sun, where it will probably wither.
> If IBM had bought Sun I would have more hope of a good filesystem
> for MacOS, Linux, etc.  in the near term.  ZFS has been stable and
> in production for years now.  I like btrfs but it is years from
> being ready for terabytes of production data.

Bill Moore and Jeff Bonwick have both left Oracle.  It is certainly a
sad day.

ZFS is license incompatible with Linux.  Apple has already walked away
once.  That leaves BSD and Windows as potential operating systems.  If
you would be willing to run file servers on Windows atop a Windows ZFS
implementation, please let me know.

Jeffrey Altman





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Vincent Fox

On 9/30/2010 6:40 AM, Gary Buhrmaster wrote:
> Getting back the data you wrote is a hard
> problem.  ZFS presumes that everything
> downstream of it will (eventually) fail.  There
> is overhead there, but it does solve a set
> of problems that other solutions do not.
> (And the highly paranoid presume ZFS will
> fail, so take different precautions).


I've seen 3 RAID-5 sets have double-disk failures in the last 5 years.

I've seen one even have a triple-disk failure in a short timespan.
Too short for all that rebuild from hot spares business to work.

Granted, older disks on older system.

Everyone will say "yeah, but it's very unlikely and hasn't happened to ME".

I like ZFS RAID-10, I like it a LOT.  I don't think people understand how
good it really is, most are afraid of anything other than the OS
they run now and antique filesystems that have accreted decades
of "fixes" for design defects.  Do you trust black box RAID controllers?
I don't.  I really like being able to run scrub whenever I need to ensure
the data on the disk is correct.

It makes me sad that Oracle bought Sun, where it will probably wither.
If IBM had bought Sun I would have more hope of a good filesystem
for MacOS, Linux, etc.  in the near term.  ZFS has been stable and
in production for years now.  I like btrfs but it is years from
being ready for terabytes of production data.



Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread chas williams - CONTRACTOR
On Thu, 30 Sep 2010 14:19:51 +0200
Stephan Wiesand  wrote:

> Hi Jeff,
> 
> On Sep 29, 2010, at 22:18 , Jeffrey Altman wrote:
> 
> > RAID is not a replacement for ZFS.  ZRAID-3 protects against single
> > bit disk corruption errors that RAID cannot.  Only ZFS stores a
> > checksum of the data as part of each block and verifies it before
> > delivering the data to the application.  If the checksum fails and
> > there are replicas, ZFS will read the data from another copy and
> > fixup the damaged version. That is what makes ZFS so special and so
> > valuable.  If you have data that must be correct, you want ZFS.
> 
> 
> you're right, of course. This is a very desirable feature, and the
> main reason why I'd love to see ZFS become available on linux.
> 
> I disagree on the "RAID cannot provide this" statement though. RAID-5
> has the data to detect single bit corruption, and RAID-6 even has the
> data to correct it. Alas, verifying/correcting data upon read is not
> a common feature. I know of just one vendor (DDN) actually providing
> it. It's a mystery to me why the others don't.
> 
> Anyway, the next best option if ZFS is not available is to run parity
> checks on all your arrays regularly. Things do get awkward when
> errors show up, but at least you know. Both Linux MD RAID and the
> better hardware solutions offer this.
> 
> From my experience, disks don't do this at random and do not develop
> such a fault during their life span, but some broken disks do it
> frequently from the beginning. NB I only ever observed this problem
> with SATA drives.

raid5 really isnt quite the same as what jeff is describing about zfs.
zfs apparently maintains multiple copies of the same block across
different devices.  if you had a single bit error in one of those
blocks (as determined by some checksum apparently stored with this
block), zfs will pick another block that is supposed to contain the
same data.

raid5 only corrects single bit errors.  it can detect multiple bit
errors.  raid5 (to my knowledge) always verifies, even on reads and can
correct single bit errors.  raid6 can correct two single bit
failures (assuming they are on seperate devices).  the only way to 'fix'
a bad block on a raid is to replace the drive.  most raid hardware
doesnt assume that the disk block will get better if you rewrite it.
of course, background verifies of parity are essential to protecting
your data.  media is going to age whether or not you read it.


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread chas williams - CONTRACTOR
On Thu, 30 Sep 2010 09:01:49 +0200
Lars Schimmer  wrote:

> Just want to ask - XFS repair or XFS check? Last time I tried it on
> linux I got an out of memory error (with 24GB memory available).
> AFAIK it needs a really large amount of memory to check big partitions with
> lots of files in it.
> Not that we need it much, but in case of error it is important.
> So far I got the best recovery on broken partitions with ext3 fs, not with
> reiser nor XFS.

rarely, we need to run xfs_repair to fix the filesystem.  i believe i
can count on one hand the number of problems that were fixed the last
decade by xfs_repair.  i dont think any of the problems were related
to xfs bugs but hardware failures. currently the filesystems are 44T in
size. the machine running repair had either 4G or 8G.  never ran into
the out of memory problem but then again i dont run xfs_check just
xfs_repair (or xfs_repair -n if i am just checking consistency).


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Andy Cobaugh


I don't think anybody has mentioned the block level compression in ZFS yet. 
With simple lzjb compression (zfs set compression=on foo), our AFS home 
directories see ~1.75x compression. That's an extra 1-2TB of disk that we don't 
need to store. Of course that makes balancing vice partitions interesting when 
you can only see the compression ratio at the filesystem level and not the 
volume level.
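One rough workaround (an untested sketch; it assumes the filesystem reports
allocated space in st_blocks, which ZFS with compression does, and it says nothing
about how directory subtrees map onto volumes) is to estimate the ratio for a given
subtree by comparing apparent file sizes with the space actually allocated:

import os
import sys

def compression_ratio(root):
    # Apparent bytes (st_size) vs. bytes actually allocated on disk
    # (st_blocks is in 512-byte units); the ratio approximates the
    # compression factor for everything below `root`.
    apparent = allocated = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.lstat(os.path.join(dirpath, name))
            except OSError:
                continue
            apparent += st.st_size
            allocated += st.st_blocks * 512
    return apparent / allocated if allocated else float("nan")

if __name__ == "__main__":
    print("%.2fx" % compression_ratio(sys.argv[1]))

Run against different parts of a vice partition, this at least gives a rough idea
of where the compression wins are coming from.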


Checksums are nice too. There's no longer a question of whether your storage 
hardware wrote what you wanted it to write. This can go a long way to helping 
to predict failures if you run zpool scrub on a regular basis (otherwise, zfs 
only detects checksum mismatches upon read; scrub checks the whole pool).


So, just to add us to the list, we're ext3 on linux for small stuff 
(<10TB), and zfs on solaris for everything else. Will probably consider XFS in 
the future, however.


If you do use ext3, I find it helps sometimes to turn off atime. It might be 
interesting to see what other options, if any, other folks are using for ext3.


--andy


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Stephan Wiesand

On Sep 30, 2010, at 15:40 , Gary Buhrmaster wrote:

> On Thu, Sep 30, 2010 at 05:19, Stephan Wiesand  
> wrote:
> ...
> 
>> Anyway, the next best option if ZFS is not available is to run parity checks 
>> on all your arrays regularly.
> 
> Perhaps it is the best one can do, but be aware that
> a (rare, but real) failure mode of disks is that they
> return the contents of a different block than asked.

I know :-)

> No amount of background scrubbing fixes that
> unless the failures are solid (and they are usually
> not).

As I said: my experience is that a disk either does it or not. If it does, it 
does so when it's new and shiny. But yes, I'd still like to have ZFS's data 
protection.

> That does not even include the issue that
> most disk controller data paths (and cache
> memories) are not even parity checked, and
> bit flipping can happen there too.

right :-(

> NetApp recognized this and dealt with it with
> the WAFL file system years ago.  They actually
> wrote a checksum for the block and the block id
> onto disk and checked when they read a block
> back.
> 
> Getting back the data your wrote is a hard
> problem.  ZFS presumes that everything
> downstream of it will (eventually) fail.  There
> is overhead there, but it does solve a set
> of problems that other solutions do not.
> (And the highly paranoid presume ZFS will
> fail, so take different precautions).
> 
> As Jeff stated, if you really care about your
> data, you need ZFS.
> Gary

Alas, ZFS is only available with an OS that's *really* troublesome to maintain. 
Frankly, I wouldn't mind the cost. But those patch orgies make Solaris a no-go 
here. And two decades after the invention of RPMs and DEBs, they come up with 
... IPS. Sorry, no matter how good and exceptional ZFS may be, it's just not 
worth it...

Stephan


-- 
Stephan Wiesand
DESY -DV-
Platanenallee 6
15738 Zeuthen, Germany





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Gary Buhrmaster
On Thu, Sep 30, 2010 at 05:19, Stephan Wiesand  wrote:
...

> Anyway, the next best option if ZFS is not available is to run parity checks 
> on all your arrays regularly.

Perhaps it is the best one can do, but be aware that
a (rare, but real) failure mode of disks is that they
return the contents of a different block than asked.
No amount of background scrubbing fixes that
unless the failures are solid (and they are usually
not).  That does not even include the issue that
most disk controller data paths (and cache
memories) are not even parity checked, and
bit flipping can happen there too.

NetApp recognized this and dealt with it with
the WAFL file system years ago.  They actually
wrote a checksum for the block and the block id
onto disk and checked when they read a block
back.

> Getting back the data you wrote is a hard
problem.  ZFS presumes that everything
downstream of it will (eventually) fail.  There
is overhead there, but it does solve a set
of problems that other solutions do not.
(And the highly paranoid presume ZFS will
fail, so take different precautions).

As Jeff stated, if you really care about your
data, you need ZFS.

Gary


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Stephan Wiesand
Hi Jeff,

On Sep 29, 2010, at 22:18 , Jeffrey Altman wrote:

> RAID is not a replacement for ZFS.  ZRAID-3 protects against single bit
> disk corruption errors that RAID cannot.  Only ZFS stores a checksum of
> the data as part of each block and verifies it before delivering the
> data to the application.  If the checksum fails and there are replicas,
> ZFS will read the data from another copy and fixup the damaged version.
> That is what makes ZFS so special and so valuable.  If you have data
> that must be correct, you want ZFS.


you're right, of course. This is a very desirable feature, and the main reason 
why I'd love to see ZFS become available on linux.

I disagree on the "RAID cannot provide this" statement though. RAID-5 has the 
data to detect single bit corruption, and RAID-6 even has the data to correct 
it. Alas, verifying/correcting data upon read is not a common feature. I know 
of just one vendor (DDN) actually providing it. It's a mystery to me why the 
others don't.

Anyway, the next best option if ZFS is not available is to run parity checks on 
all your arrays regularly. Things do get awkward when errors show up, but at 
least you know. Both Linux MD RAID and the better hardware solutions offer this.

From my experience, disks don't do this at random and do not develop such a 
fault during their life span, but some broken disks do it frequently from the 
beginning. NB I only ever observed this problem with SATA drives.

Best regards,
Stephan 

-- 
Stephan Wiesand
DESY -DV-
Platanenallee 6
15738 Zeuthen, Germany





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-30 Thread Lars Schimmer

On 2010-09-29 21:12, chas williams - CONTRACTOR wrote:
> On Wed, 29 Sep 2010 12:35:23 -0400
> Steve Simmons  wrote:
> 
>> Yep, we're pretty much a 100% ext3 shop. We keep a close eye on other
>> things, and zfs has been looking more and more interesting. But given
>> the uncertain state of its future (see elsewhere in this thread) our
>> caution level has gone up quite a bit. For the really long term I
>> also keep an eye on btrfs, but some of it's features aren't as big a
>> win for AFS as they are for 'regular' users. Ext4 looks interesting
>> just for the fsck speed improvements (just freaking amazing). Extents
>> may also be useful, but I strongly suspect other issues bottleneck
>> AFS performance before the filesystem speed does. Then again, better
>> speed never hurts.
> 
> we use solaris ufs on our current fileservers, but xfs practically
> everywhere else.  this is mostly due to our irix heritage but xfs has
> some benefits that ext3 does not have (like extents).  xfs is certainly
> quite a bit more mature than ext3 in my opinion.  ext4 is an attempt to
> get some of these xfs features into ext3.
> 
> as far as bugs with xfs -- not too many.  we have done some really bad
> things to the xfs filesystems as well (like power failures several
> times in one week).

Just want to ask - xfs_repair or xfs_check? Last time I tried it on
linux I got an out-of-memory error (with 24GB of memory available).
AFAIK it needs a really large amount of memory to check big partitions with
lots of files in them.
Not that we need it much, but in case of error it is important.
So far I've had the best recovery on broken partitions with ext3, not with
reiser nor XFS.
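
(For what it's worth, newer xfsprogs releases are supposed to let you rein in
xfs_repair's memory use -- as far as I remember the relevant options are -m to
cap memory in megabytes and -P to disable prefetching, but check the man page
for your version; the device name below is just a placeholder:

  # dry run only (-n), capped at roughly 2GB, prefetching off
  xfs_repair -n -P -m 2048 /dev/sdb1

That at least tells you the scope of the damage before committing to a full
repair.)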

MfG,
Lars Schimmer
-- 
-
TU Graz, Institut für ComputerGraphik & WissensVisualisierung
Tel: +43 316 873-5405   E-Mail: l.schim...@cgv.tugraz.at
Fax: +43 316 873-5402   PGP-Key-ID: 0x4A9B1723
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread Jeffrey Altman
On 9/29/2010 12:28 PM, Stephan Wiesand wrote:
> 
> On Sep 29, 2010, at 17:49 , Harald Barth wrote:
> 
>> Currently our AFS servers are on Linux and XFS. We were thinking of
>> moving to OpenSolaris or Solaris and ZFS. But as things are now, these
>> options either have a very unsecure future or a very hefty pricetag.
>> For us, the name change of the company did result in a doubling of
>> prices, something that we did not pay but took another solution. I am
>> still searching for a good software raid solution. Any experiences
>> with ZFS on FreeBSD out there? Other suggestions?
> 
> Well, how about hardware RAID? The only flavour I have recent first hand 
> experience with are the LSI Megaraid Controllers (or, more precisely, OEM 
> controllers based on them from our current main hardware vendor): They do 
> RAID-6 at reasonable speed, scrub the disks and check the parity according to 
> your needs, keep a log of drive problems and error counters independent of 
> the OS, come with a BBU option, and are reasonably cheap. They isolate you 
> from having to deal with SATA drives or baseboard chipsets. And they have a 
> *stable* management interface with a lean CLI for accessing it independent of 
> vendors' management suites.
> 
> While I would have loved to see ZFS for Linux (and am still hoping for it, or for 
> btrfs becoming production grade), we have been quite happy with this solution 
> (and are now running ~1.5 PB of disk storage this way). We're using ext3 on 
> top for vicep partitions, but XFS is doing well on other (non-AFS) 
> fileservers here and we'd use it for AFS if it had serious advantages for our 
> use case. IMO, XFS is just slightly more adventurous.


RAID is not a replacement for ZFS.  ZRAID-3 protects against single bit
disk corruption errors that RAID cannot.  Only ZFS stores a checksum of
the data as part of each block and verifies it before delivering the
data to the application.  If the checksum fails and there are replicas,
ZFS will read the data from another copy and fixup the damaged version.
That is what makes ZFS so special and so valuable.  If you have data
that must be correct, you want ZFS.
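
(The verification happens transparently on every read; forcing a pass over
everything on disk is a single command -- assuming a pool named, say, vicepa:

  # read and verify every allocated block, repairing from replicas where possible
  zpool scrub vicepa
  # per-device read/write/checksum error counters, plus any unrecoverable files
  zpool status -v vicepa

Both commands run against a live pool.)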

Jeffrey Altman





RE: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread David Boyes
> Anyone tried btrfs?

Works, but not stable enough for prime time yet. Ditto ZFS on FreeBSD. 

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread chas williams - CONTRACTOR
On Wed, 29 Sep 2010 12:35:23 -0400
Steve Simmons  wrote:

> Yep, we're pretty much a 100% ext3 shop. We keep a close eye on other
> things, and zfs has been looking more and more interesting. But given
> the uncertain state of its future (see elsewhere in this thread) our
> caution level has gone up quite a bit. For the really long term I
> also keep an eye on btrfs, but some of its features aren't as big a
> win for AFS as they are for 'regular' users. Ext4 looks interesting
> just for the fsck speed improvements (just freaking amazing). Extents
> may also be useful, but I strongly suspect other issues bottleneck
> AFS performance before the filesystem speed does. Then again, better
> speed never hurts.

we use solaris ufs on our current fileservers, but xfs practically
everywhere else.  this is mostly due to our irix heritage but xfs has
some benefits that ext3 does not have (like extents).  xfs is certainly
quite a bit more mature than ext3 in my opinion.  ext4 is an attempt to
get some of these xfs features into ext3.

as far as bugs with xfs -- not too many.  we have done some really bad
things to the xfs filesystems as well (like power failures several
times in one week).

yes, i think extents probably arent a huge win for afs fileservers as
they currently exist.

it isnt clear to me that zfs is a big win for afs either.  you need
more storage?  add another vicep.  dont expand your existing volumes.
yes, you might need to do a little more volume management but errors in
a single filesystem wont potentially get all your data.
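
(adding a vicep really is cheap -- a rough sketch, with device and server
names being whatever applies at your site:

  mkfs -t ext3 /dev/sdc1
  mkdir /vicepb
  mount /dev/sdc1 /vicepb            # and add it to /etc/fstab
  bos restart <fileserver> -instance fs -localauth   # fileserver attaches the new partition on restart
  vos partinfo <fileserver>

then populate it with vos create / vos move as usual.)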

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread Vincent Fox

 On 9/29/2010 8:57 AM, Richard Brittain wrote:
We are about to bring up a new fileserver with ext4.  We've been 
running ext4 for several months in non-AFS production boxes and were 
very happy with the performance - measurably faster than ext3 for 
large file i/o.


I'm a bit surprised nobody else has admitted to using ext4 for their 
/vicep*, but at least nobody has reported problems with it either.


Performance became secondary to SAFETY for me long ago.

Nothing says safety like checksums for every block, right in
the filesystem code.  I've been burned enough by failures in
getting bits back and forth to disks.

Anyone tried btrfs?


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


RE: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread David Boyes
> > What's the tried-and-true production-quality Linux
> > equivalent?  Anything?  Last I read, nothing.
> 
> We are about to bring up a new fileserver with ext4.  We've been
> running
> ext4 for several months in non-AFS production boxes and were very happy
> with the performance - measurably faster than ext3 for large file i/o.

It (ext4) is faster, but still has some pretty significant integrity problems 
at very high I/O rates.
Its buffer management code is also pretty spotty. Doesn't hurt much in 
dedicated machines, but sucks rocks in virtual machines. 

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread Steve Simmons
On Sep 28, 2010, at 4:00 PM, Thomas Kula wrote:

> On Tue, Sep 28, 2010 at 12:49:59PM -0700, Russ Allbery wrote:
>> Jeff Blaine  writes:
>> 
>>> Barring an equivalent, what Linux setup...
>> 
>>>  a) seems most stable
>>>  b) is fsck-less
>> 
>>> Even quick grunt responses are appreciated.
>> 
>> We use ext3.  It isn't the fastest or the most featureful, but it's the
>> core file system that everyone uses on Linux and for us it's been rock
>> solid.  You're the least likely to run into strange problems.
> 
> We (umich.edu) also use ext3. We randomly run into issues where 
> the filesystem half-thinks that things that should be files are
> directories, which, when this happens on a vice paritition, leads
> to interesting problems. 
> 
> Other co-workers (some of whom I believe are on this list) follow
> this more, but I think our strategy has been to keep on top of any
> kernel issues and the corresponding userspace tools for dealing with
> ext filesystems and see what those do. I have no idea why we tend
> to run into this with not-frequent-but-too-often-for-me regularity.
> 
> That said, I'm not sure what else we'd even consider running on
> Linux systems.

*raises hand as 'other co-worker'*

Yep, we're pretty much a 100% ext3 shop. We keep a close eye on other things, 
and zfs has been looking more and more interesting. But given the uncertain 
state of its future (see elsewhere in this thread) our caution level has gone 
up quite a bit. For the really long term I also keep an eye on btrfs, but some 
of it's features aren't as big a win for AFS as they are for 'regular' users. 
Ext4 looks interesting just for the fsck speed improvements (just freaking 
amazing). Extents may also be useful, but I strongly suspect other issues 
bottleneck AFS performance before the filesystem speed does. Then again, better 
speed never hurts.

Tom refers to some ext3 problem with directories suddenly becoming files or 
vice-versa. There are two points worth mentioning here. First, it is extremely 
rare - maybe once every six months. That's in a cell with 26 file servers, 64 
vice partitions, 260,000 volumes, 180M files, 92TB of raw space for AFS with 
46TB currently used, compounded growth rate of about 45% per year. Big. Second, 
we are running our own linux-from-scratch systems. It's quite possible we have 
introduced a frailty somewhere.

We have several times been bitten in the ass by ext3 bugs, and the recovery 
process has not been pretty. Usually we've traced this down to actual ext3 bugs 
that others have found and fixed; I've not yet recently made that chase on our 
dir-vs-file problem. For all I know, it's oAFS hosing up an inode somehow. But 
without having equivalent data about what other sites do and their 
size/configurations, I can't honestly say if our problems are unique to us or 
just something that most folks manage to run below the radar on.

But either way we'll be sticking to ext3 for at least the next couple of years.

Steve___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread Stephan Wiesand

On Sep 29, 2010, at 17:49 , Harald Barth wrote:

> Currently our AFS servers are on Linux and XFS. We were thinking of
> moving to OpenSolaris or Solaris and ZFS. But as things are now, these
> options either have a very unsecure future or a very hefty pricetag.
> For us, the name change of the company did result in a doubling of
> prices, something that we did not pay but took another solution. I am
> still searching for a good software raid solution. Any experiences
> with ZFS on FreeBSD out there? Other suggestions?

Well, how about hardware RAID? The only flavour I have recent first hand 
experience with are the LSI Megaraid Controllers (or, more precisely, OEM 
controllers based on them from our current main hardware vendor): They do 
RAID-6 at reasonable speed, scrub the disks and check the parity according to 
your needs, keep a log of drive problems and error counters independent of the 
OS, come with a BBU option, and are reasonably cheap. They isolate you from 
having to deal with SATA drives or baseboard chipsets. And they have a *stable* 
management interface with a lean CLI for accessing it independent of vendors' 
management suites.
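
(To give a flavour of that CLI -- flag spelling from memory, and it does vary
between MegaCli releases, so treat this purely as a sketch and check the help
output of your version:

  # logical drive state and error counters
  MegaCli64 -LDInfo -Lall -aAll
  # kick off a background consistency check on all logical drives
  MegaCli64 -LDCC -Start -Lall -aAll

The same operations are exposed by the vendors' GUI suites, but the CLI is
what you can actually script and monitor.)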

While I would have loved to see ZFS for Linux (and am still hoping for it, or for 
btrfs becoming production grade), we have been quite happy with this solution 
(and are now running ~1.5 PB of disk storage this way). We're using ext3 on top 
for vicep partitions, but XFS is doing well on other (non-AFS) fileservers here 
and we'd use it for AFS if it had serious advantages for our use case. IMO, XFS 
is just slightly more adventurous.

Best regards,
Stephan

-- 
Stephan Wiesand
DESY -DV-
Platanenenallee 6
15738 Zeuthen, Germany





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread Ted Creedon
OpenSuSe has software raid as an option when formatting a disk

I prefer the Adaptec RAID controllers because they have a web interface for
admin and checking and since /usr/vice and /usr/afs are on the raid system I
just move the controller and drives to a new motherboard

tedc

On Wed, Sep 29, 2010 at 8:49 AM, Harald Barth  wrote:

>
>
> Currently our AFS servers are on Linux and XFS. We were thinking of
> moving to OpenSolaris or Solaris and ZFS. But as things are now, these
> options either have a very unsecure future or a very hefty pricetag.
> For us, the name change of the company did result in a doubling of
> prices, something that we did not pay but took another solution. I am
> still searching for a good software raid solution. Any experiences
> with ZFS on FreeBSD out there? Other suggestions?
>
> Harald.
> ___
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread Richard Brittain

On Tue, 28 Sep 2010, Jeff Blaine wrote:


We're considering ditching our Sun boxes with vice
partitions on ZFS :(

What's the tried-and-true production-quality Linux
equivalent?  Anything?  Last I read, nothing.


We are about to bring up a new fileserver with ext4.  We've been running 
ext4 for several months in non-AFS production boxes and were very happy 
with the performance - measurably faster than ext3 for large file i/o.


I'm a bit surprised nobody else has admitted to using ext4 for their 
/vicep*, but at least nobody has reported problems with it either.
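
(For anyone wanting to compare notes, the kind of setup being discussed is
nothing exotic -- a sketch with a made-up device name, not necessarily exactly
what we run; the tune2fs line just disables the periodic forced fsck:

  mkfs.ext4 /dev/sdd1
  tune2fs -c 0 -i 0 /dev/sdd1      # never force a fsck by mount count or age
  mount -o noatime /dev/sdd1 /vicepc

Journal replay still happens on mount after a crash, of course.)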


Richard
--
Richard Brittain,  Research Computing Group,
   Kiewit Computing Services, 6224 Baker/Berry Library
   Dartmouth College, Hanover NH 03755
richard.britt...@dartmouth.edu 6-2085
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-29 Thread Harald Barth


Currently our AFS servers are on Linux and XFS. We were thinking of
moving to OpenSolaris or Solaris and ZFS. But as things are now, these
options either have a very unsecure future or a very hefty pricetag.
For us, the name change of the company did result in a doubling of
prices, something that we did not pay but took another solution. I am
still searching for a good software raid solution. Any experiences
with ZFS on FreeBSD out there? Other suggestions?

Harald.
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Gary Buhrmaster
On Wed, Sep 29, 2010 at 00:04, Vincent Fox  wrote:
>  On 09/28/2010 04:13 PM, Rich Sudlow wrote:
>>
>>  that being said we're also looking for fileserver
>> alternatives due to Oracle takeover.
>
> What's your reasoning here?
>
> If anything I'd expect them to put effort into optimizing it
> which Sun was letting languish recently.

Oracle has suggested that they want to "move up the stack"
to selling solutions (entire boxes/racks to do [something];
I think someone called it a "Stack-in-a-box") and not
selling commodity hardware to run your own apps on.
There is more profit to be found there(*).  I believe ZFS
is part of those solutions, and I would expect Oracle
to continue to invest there.  But if/how that will end up
being a separable purchasable box to run as an
OpenAFS file server is simply not clear (and I
doubt Oracle has a product plan for selling an
OpenAFS file server solution today; maybe tomorrow
if enough people ask for it?)

Gary

(*) And Oracle has done the same before on the
software side.  Databases were being commoditized,
and Oracle moved up to application solutions.
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Rich Sudlow


On Sep 28, 2010, at 8:52 PM, Booker Bense wrote:


On Tue, 28 Sep 2010, Vincent Fox wrote:


On 09/28/2010 04:13 PM, Rich Sudlow wrote:

 that being said we're also looking for fileserver
alternatives due to Oracle takeover.


What's your reasoning here?



Oracle is making it pretty clear that Solaris is only for
Solaris hardware and they are changing the support/price
structure.

Ideally, we'd like to run Solaris on commodity x86 boxes rather
than the increasingly expensive thumper boxes.

For whatever reason Oracle is pricing themselves out of the
market with respect to generic hardware.

_ Booker C. Bense


Exactly.  In addition, Oracle discounts are nowhere near
what we used to get with Sun.  So if we wanted to buy Oracle
hardware (thumpers) it's significantly more than what we've
been paying.

Rich



___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Booker Bense

On Tue, 28 Sep 2010, Vincent Fox wrote:


On 09/28/2010 04:13 PM, Rich Sudlow wrote:

  that being said we're also looking for fileserver
alternatives due to Oracle takeover. 


What's your reasoning here?



Oracle is making it pretty clear that Solaris is only for
Solaris hardware and they are changing the support/price
structure.

Ideally, we'd like to run Solaris on commodity x86 boxes rather 
than the increasingly expensive thumper boxes.


For whatever reason Oracle is pricing themselves out of the 
market with respect to generic hardware.


_ Booker C. Bense
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Russ Allbery
Vincent Fox  writes:
>  On 09/28/2010 04:13 PM, Rich Sudlow wrote:

>> that being said we're also looking for fileserver alternatives due to
>> Oracle takeover.

> What's your reasoning here?

> If anything I'd expect them to put effort into optimizing it
> which Sun was letting languish recently.

Oracle so far appears to be pursuing a general policy of slowly ceasing
all open source development that Sun was doing.  I'm sure ZFS will
continue to exist in some form, but I expect the Oracle version of it to
become a commercial software product you have to license.

Oracle is not an open source company and appears to have little to no
interest in becoming one.

-- 
Russ Allbery (r...@stanford.edu) 
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Jeff Blaine

On 9/28/2010 6:05 PM, Jeffrey Altman wrote:

On 9/28/2010 5:53 PM, Patricia O'Reilly wrote:

I'm curious, what types of problems have you encountered with
ZFS? We are actually considering using ZFS on some of our fileservers.


Many organizations are looking to move off of Oracle Solaris due to the
recent ownership, development, and licensing changes.  Jeff Blaine can
correct me but I suspect the reasons for the switch are not technical.


Precisely.
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Vincent Fox

 On 09/28/2010 04:13 PM, Rich Sudlow wrote:

  that being said we're also looking for fileserver
alternatives due to Oracle takeover. 


What's your reasoning here?

If anything I'd expect them to put effort into optimizing it
which Sun was letting languish recently.

I'm using btrfs on my desktop, but it's still of course not
something I'd throw onto production servers.




___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Rich Sudlow


On Sep 28, 2010, at 6:37 PM, Brian Sebby wrote:

For what it's worth, we've been using ZFS for our AFS partitions for many
years now, and it's been rock solid.  It's even saved our butts in a few
situations where ZFS recovered far more gracefully than UFS would have.
If you're using Solaris, I believe that there is no reason to use anything
other than ZFS at this point.


Brian


I have to agree with Brian here - we also use ZFS on thumpers and
are very happy with it - that being said, we're also looking for fileserver
alternatives due to the Oracle takeover.  I was thinking RHES 6 would work
fine, but with the responses I'm seeing we might stay with Solaris.

Rich

Rich Sudlow
Center for Research Computing
University of Notre Dame



On Tue, Sep 28, 2010 at 02:53:52PM -0700, Patricia O'Reilly wrote:

I'm curious, what types of problems have you encountered with
ZFS? We are actually considering using ZFS on some of our fileservers.


--patty

Jeff Blaine wrote:

We're considering ditching our Sun boxes with vice
partitions on ZFS :(

What's the tried-and-true production-quality Linux
equivalent?  Anything?  Last I read, nothing.

Barring an equivalent, what Linux setup...

 a) seems most stable
 b) is fsck-less

Even quick grunt responses are appreciated.  Thanks.
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


--
Brian Sebby  (se...@anl.gov)  |  Infrastructure and Operation Services
Phone: +1 630.252.9935|  Computing and Information Systems
Fax:   +1 630.252.4601|  Argonne National Laboratory
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Brian Sebby
For what it's worth, we've been using ZFS for our AFS partitions for many
years now, and it's been rock solid.  It's even saved our butts in a few
situations where ZFS recovered far more gracefully than UFS would have.
If you're using Solaris, I believe that there is no reason to use anything
other than ZFS at this point.


Brian

On Tue, Sep 28, 2010 at 02:53:52PM -0700, Patricia O'Reilly wrote:
> I'm curious, what types of problems have you encountered with
> ZFS? We are actually considering using ZFS on some of our fileservers.
> 
> --patty
> 
> Jeff Blaine wrote:
> > We're considering ditching our Sun boxes with vice
> > partitions on ZFS :(
> > 
> > What's the tried-and-true production-quality Linux
> > equivalent?  Anything?  Last I read, nothing.
> > 
> > Barring an equivalent, what Linux setup...
> > 
> >   a) seems most stable
> >   b) is fsck-less
> > 
> > Even quick grunt responses are appreciated.  Thanks.
> > ___
> > OpenAFS-info mailing list
> > OpenAFS-info@openafs.org
> > https://lists.openafs.org/mailman/listinfo/openafs-info
> ___
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info

-- 
Brian Sebby  (se...@anl.gov)  |  Infrastructure and Operation Services
Phone: +1 630.252.9935|  Computing and Information Systems
Fax:   +1 630.252.4601|  Argonne National Laboratory
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Jeffrey Altman
On 9/28/2010 5:53 PM, Patricia O'Reilly wrote:
> I'm curious, what types of problems have you encountered with
> ZFS? We are actually considering using ZFS on some of our fileservers.
> 
> --patty

Many organizations are looking to move off of Oracle Solaris due to the
recent ownership, development, and licensing changes.  Jeff Blaine can
correct me but I suspect the reasons for the switch are not technical.

Jeffrey Altman





Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Patricia O'Reilly
I'm curious, what types of problems have you encountered with
ZFS? We are actually considering using ZFS on some of our fileservers.

--patty

Jeff Blaine wrote:
> We're considering ditching our Sun boxes with vice
> partitions on ZFS :(
> 
> What's the tried-and-true production-quality Linux
> equivalent?  Anything?  Last I read, nothing.
> 
> Barring an equivalent, what Linux setup...
> 
>   a) seems most stable
>   b) is fsck-less
> 
> Even quick grunt responses are appreciated.  Thanks.
> ___
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Thomas Kula
On Tue, Sep 28, 2010 at 12:49:59PM -0700, Russ Allbery wrote:
> Jeff Blaine  writes:
> 
> > Barring an equivalent, what Linux setup...
> 
> >   a) seems most stable
> >   b) is fsck-less
> 
> > Even quick grunt responses are appreciated.
> 
> We use ext3.  It isn't the fastest or the most featureful, but it's the
> core file system that everyone uses on Linux and for us it's been rock
> solid.  You're the least likely to run into strange problems.

We (umich.edu) also use ext3. We randomly run into issues where 
the filesystem half-thinks that things that should be files are
directories, which, when this happens on a vice partition, leads
to interesting problems. 

Other co-workers (some of whom I believe are on this list) follow
this more, but I think our strategy has been to keep on top of any
kernel issues and the corresponding userspace tools for dealing with
ext filesystems and see what those do. I have no idea why we tend
to run into this with not-frequent-but-too-often-for-me regularity.

That said, I'm not sure what else we'd even consider running on
Linux systems. 
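
(One way to at least size up that kind of damage without taking the fileserver
down -- a sketch assuming the vice partition sits on LVM; volume group and
names are invented:

  lvcreate -s -L 10G -n vicepa-check /dev/vg0/vicepa
  e2fsck -fn /dev/vg0/vicepa-check    # -n: report problems, change nothing
  lvremove -f /dev/vg0/vicepa-check

The snapshot is of a mounted filesystem, so expect some journal-related noise
in the output, but genuine inode damage should still stand out.)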

-- 
Thomas L. Kula | k...@tproa.net | http://kula.tproa.net/
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Russ Allbery
alec-keyword-openafs.org.f12...@setfilepointer.com writes:

> If your heart is set on XFS, the OpenAFS server runs fine on
> FreeBSD.  The client... not so much.

The server should also work on the Debian kfreebsd-{i386,amd64}
architectures, if you want a Linux userspace.  (Not formally released yet,
but they will be technology preview architectures in Debian squeeze.)

-- 
Russ Allbery (r...@stanford.edu) 
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread Russ Allbery
Jeff Blaine  writes:

> What's the tried-and-true production-quality Linux equivalent?
> Anything?  Last I read, nothing.

There's nothing really equivalent to ZFS.

> Barring an equivalent, what Linux setup...

>   a) seems most stable
>   b) is fsck-less

> Even quick grunt responses are appreciated.

We use ext3.  It isn't the fastest or the most featureful, but it's the
core file system that everyone uses on Linux and for us it's been rock
solid.  You're the least likely to run into strange problems.

Lots of people also use XFS, and it should be reasonably stable.  I would
avoid ReiserFS and JFS due to lack of developers and widespread use.

ext4 is getting to the point that it's mature enough to use, but I'm not
sure I'd trust it yet.

-- 
Russ Allbery (r...@stanford.edu) 
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info


Re: [OpenAFS] Overview? Linux filesystem choices

2010-09-28 Thread alec-keyword-openafs . org . f121e6
On 2010-09-28 15:42, Jeff Blaine wrote:
> We're considering ditching our Sun boxes with vice
> partitions on ZFS :(
> 
> What's the tried-and-true production-quality Linux
> equivalent?  Anything?  Last I read, nothing.
> 
> Barring an equivalent, what Linux setup...
> 
>a) seems most stable
>b) is fsck-less
> 
> Even quick grunt responses are appreciated.  Thanks.

hamlet% mount | grep vice
/dev/mapper/hamlet-vicepa on /vicepa type xfs (rw,noatime)
/dev/mapper/hamlet-vicepb on /vicepb type xfs (rw,noatime)
/dev/mapper/hamlet-vicepc on /vicepc type xfs (rw,noatime)
/dev/mapper/hamlet-vicepd on /vicepd type xfs (rw,noatime)
/dev/mapper/hamlet-vicepe on /vicepe type xfs (rw,noatime)
/dev/mapper/hamlet-vicepf on /vicepf type xfs (rw,noatime)
hamlet% 

I've noticed that XFS's extents do some funny things when the
partitions get very close to full and I've had to emergency grow
them to clean things up, but otherwise it's been nice and
fsck-less.
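
(The emergency grow itself is at least painless, since XFS grows online --
assuming the LVM layout shown in the mount output above, it's just:

  lvextend -L +50G /dev/mapper/hamlet-vicepa
  xfs_growfs /vicepa

and the fileserver never notices.)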

If your heart is set on XFS, the OpenAFS server runs fine on
FreeBSD.  The client... not so much.

