Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Freddie Cash
On Mon, Oct 17, 2011 at 10:50 AM, Harry Putnam  wrote:

> Freddie Cash  writes:
>
> > If you only want RAID0 or RAID1, then btrfs is okay.  There's no support
> for
> > RAID5+ as yet, and it's been "in development" for a couple of years now.
>
> [...] snipped excellent information
>
> Thanks much, I'm very appreciative of the good information.  Much
> better to hear from actual users than poring through web pages to get a
> picture.
>
> I'm googling on the citations you posted:
>
> FreeNAS and FreeBSD.
>
> Maybe you can give a little synopsis of those too.  I mean when it
> comes to utilizing ZFS: is it much the same as if running it on
> Solaris?
>
FreeBSD 8-STABLE (what will become 8.3) and 9.0-RELEASE (hopefully out later
this month) both include ZFSv28, the latest open-source version of ZFS.  This
includes raidz3 and dedupe support, the same as OpenSolaris, Illumos, and
other OSol-based distros.  I'm not sure what the latest version of ZFS in
Solaris 10 is.

The ZFS bits work the same as on Solaris, with only two small differences:
  - the sharenfs property just writes data to /etc/zfs/exports, which is read
by the standard NFS daemons (it's easier to just use /etc/exports to share
ZFS filesystems)
  - the sharesmb property doesn't do anything; you have to use Samba to share
ZFS filesystems
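To make that concrete, a sketch of the two roads on FreeBSD (the pool name
and subnet below are made up for illustration):

```shell
# Hypothetical pool "tank" and subnet; run as root on FreeBSD.

# Road 1: let ZFS manage the export; the options land in /etc/zfs/exports:
zfs set sharenfs="-network 192.168.1.0 -mask 255.255.255.0" tank/home

# Road 2 (often simpler): leave sharenfs=off, use the stock /etc/exports,
# then tell mountd to re-read its export lists:
echo '/tank/home -network 192.168.1.0 -mask 255.255.255.0' >> /etc/exports
kill -HUP "$(cat /var/run/mountd.pid)"
```

Either way the exports are served by the normal FreeBSD nfsd/mountd daemons.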

The only real differences are how the OSes themselves work.  If you are
fluent in Solaris, then FreeBSD will seem strange (and vice-versa).  If you
are fluent in Linux, then FreeBSD will be similar (but a lot more cohesive
and "put-together").


> I knew FreeBSD had a port, but assumed it would stack up kind of sorry
> compared to Solaris ZFS.
>
> Maybe something on the order of the Linux FUSE/ZFS adaptation in usability.
>
> Is that assumption wrong?
>
Absolutely, completely, and utterly false.  :)  The FreeBSD port of ZFS is
pretty much on par with ZFS on OpenSolaris.  The Linux port of ZFS is just
barely usable.  No comparison at all.  :)


> I actually have some experience with Freebsd, (long before there was a
> zfs port), and it is very linux like in many ways.
>
That's like saying that OpenIndiana is very Linux-like in many ways.  :)


-- 
Freddie Cash
fjwc...@gmail.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Harry Putnam
Freddie Cash  writes:

> If you only want RAID0 or RAID1, then btrfs is okay.  There's no support for
> RAID5+ as yet, and it's been "in development" for a couple of years now.

[...] snipped excellent information 

Thanks much, I'm very appreciative of the good information.  Much
better to hear from actual users than poring through web pages to get a
picture.

I'm googling on the citations you posted:

FreeNAS and FreeBSD.

Maybe you can give a little synopsis of those too.  I mean when it
comes to utilizing ZFS: is it much the same as if running it on
Solaris?

I knew FreeBSD had a port, but assumed it would stack up kind of sorry
compared to Solaris ZFS.

Maybe something on the order of the Linux FUSE/ZFS adaptation in usability.

Is that assumption wrong?

I actually have some experience with Freebsd, (long before there was a
zfs port), and it is very linux like in many ways.



Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Michael DeMan
Or, if you absolutely must run Linux for the operating system, see:
http://zfsonlinux.org/

On Oct 17, 2011, at 8:55 AM, Freddie Cash wrote:

> If you absolutely must run Linux on your storage server, for whatever reason, 
> then you probably won't be running ZFS.  For the next year or two, it would 
> probably be safer to run software RAID (md), with LVM on top, with XFS or 
> Ext4 on top.  It's not the easiest setup to manage, but it would be safer 
> than btrfs.



Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Paul Kraus
On Mon, Oct 17, 2011 at 11:29 AM, Harry Putnam  wrote:

> My main reasons for using zfs are pretty basic compared to some here

What are they? (the reasons for using ZFS)

> and I wondered how btrfs stacks up on the basic qualities.

I use ZFS @ work because it is the only FS we have been able to find
that scales to what we need (hundreds of millions of small files in
ONE filesystem).

I use ZFS @ home because I really can't afford to have my data
corrupted and I can't afford Enterprise grade hardware.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players


Re: [zfs-discuss] about btrfs and zfs

2011-10-17 Thread Freddie Cash
On Mon, Oct 17, 2011 at 8:29 AM, Harry Putnam  wrote:

> This subject may have been ridden to death... I missed it if so.
>
> Not wanting to start a flame fest or whatever but
>
> As a common slob who isn't very skilled, I'd like to see some commentary
> from some of the pros here as to any comparison of ZFS against btrfs.
>
> I realize btrfs is a lot less `finished', but I see it is starting to
> show up as an option in some Linux install routines... Debian and
> Ubuntu I noticed, and probably many others.
>
> My main reasons for using zfs are pretty basic compared to some here
> and I wondered how btrfs stacks up on the basic qualities.
>

If you only want RAID0 or RAID1, then btrfs is okay.  There's no support for
RAID5+ as yet, and it's been "in development" for a couple of years now.

There's no working fsck tool for btrfs.  It's been "in development" and
"released in two weeks" for over a year now.  Don't put any data you need
onto btrfs.  It's extremely brittle in the face of power loss.

My biggest gripe with btrfs is that they have come up with all-new
terminology that only applies to them.  "Filesystem" now means "a collection
of block devices grouped together", while "sub-volume" is what we'd normally
call a "filesystem".  And there are a few other weird terms thrown in as
well.

From all that I've read on the btrfs mailing list, and news sites around the
web, btrfs is not ready for production use on any system with data that you
can't afford to lose.

If you absolutely must run Linux on your storage server, for whatever
reason, then you probably won't be running ZFS.  For the next year or two,
it would probably be safer to run software RAID (md), with LVM on top, with
XFS or Ext4 on top.  It's not the easiest setup to manage, but it would be
safer than btrfs.
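For reference, that layered setup looks roughly like this (device names and
sizes below are hypothetical; adjust for your disks):

```shell
# Hypothetical disks sdb..sde; run as root on a Linux box.

# 1. md software RAID6 across four disks:
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# 2. LVM on top of the array:
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -L 500G -n data storage

# 3. XFS (or ext4) on the logical volume:
mkfs.xfs /dev/storage/data
mount /dev/storage/data /srv/data
```

Three separate layers to grow, monitor, and repair, which is exactly why
it's "not the easiest setup to manage".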

If you don't need to run Linux on your storage server, then definitely give
ZFS a try.  There are many options, depending on your level of expertise:
 FreeNAS for plug-n-play simplicity with a web GUI, FreeBSD for a simpler OS
that runs well on x86/amd64 systems, any of the OpenSolaris-based distros,
or even Solaris if you have the money.

With ZFS you get:
  - working single, dual, triple parity raidz (RAID5, RAID6, "RAID7"
equivalence)
  - n-way mirroring
  - end-to-end checksums for all data/metadata blocks
  - unlimited snapshots
  - pooled storage
  - unlimited filesystems
  - send/recv capabilities
  - built-in compression
  - built-in dedupe
  - built-in encryption (in ZFSv31, which is currently only in Solaris 11)
  - built-in CIFS/NFS sharing (on Solaris-based systems; FreeBSD uses normal
nfsd and Samba for this)
  - automatic hot-spares (on Solaris-based systems; FreeBSD only supports
manual spares)
  - and more
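To put a few of those features in command form (the pool name and
FreeBSD-style disk devices below are hypothetical):

```shell
# Hypothetical pool "tank" and disks da0..da4; run as root.

zpool create tank raidz2 da0 da1 da2 da3 da4   # dual-parity raidz (RAID6-like)
zfs create -o compression=on tank/home         # a filesystem with compression
zfs snapshot tank/home@2011-10-17              # snapshots are nearly free
zfs send tank/home@2011-10-17 | ssh backuphost zfs recv backup/home  # send/recv
```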

Maybe in another 5 years or so, Btrfs will be up to the point where ZFS is
today.  Just imagine where ZFS will be in 5 years' time.  :)

-- 
Freddie Cash
fjwc...@gmail.com


[zfs-discuss] about btrfs and zfs

2011-10-17 Thread Harry Putnam
This subject may have been ridden to death... I missed it if so.

Not wanting to start a flame fest or whatever but

As a common slob who isn't very skilled, I'd like to see some commentary
from some of the pros here as to any comparison of ZFS against btrfs.

I realize btrfs is a lot less `finished', but I see it is starting to
show up as an option in some Linux install routines... Debian and
Ubuntu I noticed, and probably many others.

My main reasons for using zfs are pretty basic compared to some here
and I wondered how btrfs stacks up on the basic qualities.





Re: [zfs-discuss] Scrub error and object numbers

2011-10-17 Thread Shain Miley
Here is the output from: zdb -vvv smbpool/glusterfs 0x621b67


Dataset smbpool/glusterfs [ZPL], ID 270, cr_txg 1034346, 20.1T, 4139680
objects, rootbp DVA[0]=<5:5e21000:600> DVA[1]=<0:5621000:600> [L0 DMU
objset] fletcher4 lzjb LE contiguous unique double size=400L/200P
birth=1887643L/1887643P fill=4139680
cksum=c3a5ac075:4be35f40b07:f3425110eaaa:217fb2e74152e6

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
   6429543    1    16K    512     2K    512  100.00  ZFS directory
                                        264   bonus  ZFS znode
        dnode flags: USED_BYTES
        dnode maxblkid: 0
        path    ???
        uid     1009
        gid     300
        atime   Fri Jul 22 11:02:33 2011
        mtime   Fri Jul 22 11:02:33 2011
        ctime   Fri Jul 22 11:02:33 2011
        crtime  Fri Jul 22 11:02:33 2011
        gen     1659401
        mode    41777
        size    5
        parent  6429542
        links   0
        xattr   0
        rdev    0x


Still hoping someone could point me in the right direction... right now I am
doing a recursive find command to locate files created on July 22nd (by that
user)... but somehow I think the files no longer exist, and that is why ZFS
is confused.
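One shortcut worth trying: for plain files and directories, the ZFS object
number doubles as the inode number, so the hex id from 'zpool status -v' can
be converted to decimal and fed to find -inum (the mountpoint below is a
guess based on the dataset name):

```shell
# Convert the hex object id from "zpool status -v" to decimal:
obj=$(printf '%d' 0x621b67)
echo "$obj"    # 6429543 -- matches the Object column in the zdb output above

# Search just that one dataset by inode number (guessed mountpoint):
if [ -d /smbpool/glusterfs ]; then
    find /smbpool/glusterfs -xdev -inum "$obj"
fi
```

If nothing turns up, the object is probably orphaned, and the usual advice is
to clear the pool errors and let a couple of clean scrubs age them out.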

Any ideas, please?

Thanks,

Shain



From: Shain Miley
Sent: Wednesday, October 12, 2011 3:06 PM
To: zfs-discuss@opensolaris.org
Subject: Scrub error and object numbers

Hello all,
I am using OpenSolaris version snv_101b, and after some recent issues with a
faulty RAID card I am unable to run a 'zpool scrub' to completion.

While running the scrub I receive the following:

errors: Permanent errors have been detected in the following files:

smbpool/glusterfs:<0x621b67>

I have found out that the number after the dataset name represents the object
number of the file/directory in question; however, I have not been able to
figure out what I need to do next to get this cleared up.

We currently have 25TB of large files stored on this file server...so I am 
REALLY looking to avoid having to do some sort of massive backup/restore in 
order to clear this up.

Can anyone help shed some light on what I can/should do next?

Thanks in advance,

Shain



Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-10-17 Thread Richard Elling
On Oct 15, 2011, at 12:31 PM, Toby Thain wrote:
> On 15/10/11 2:43 PM, Richard Elling wrote:
>> On Oct 15, 2011, at 6:14 AM, Edward Ned Harvey wrote:
>> 
>>>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>>>> boun...@opensolaris.org] On Behalf Of Tim Cook
>>>>
>>>> In my example - probably not a completely clustered FS.
>>>> A clustered ZFS pool with datasets individually owned by
>>>> specific nodes at any given time would suffice for such
>>>> VM farms. This would give users the benefits of ZFS
>>>> (resilience, snapshots and clones, shared free space)
>>>> merged with the speed of direct disk access instead of
>>>> lagging through a storage server accessing these disks.
>>> 
>>> I think I see a couple of points of disconnect.
>>> 
>>> #1 - You seem to be assuming storage is slower when it's on a remote storage
>>> server as opposed to a local disk.  While this is typically true over
>>> ethernet, it's not necessarily true over infiniband or fibre channel.
>> 
>> Ethernet has *always* been faster than a HDD. Even back when we had 3/180s
>> 10Mbps Ethernet it was faster than the 30ms average access time for the 
>> disks of
>> the day. I tested a simple server the other day and round-trip for 4KB of 
>> data on a
>> busy 1GbE switch was 0.2ms. Can you show a HDD as fast? Indeed many SSDs
>> have trouble reaching that rate under load.
> 
> Hmm, of course the *latency* of Ethernet has always been much less, but I did 
> not see it reaching the *throughput* of a single direct attached disk until 
> gigabit.

In practice, there are very, very, very few disk workloads that do not
involve a seek.  Just one seek kills your bandwidth.  But we do not define
"fast" as "bandwidth", do we?

> I'm pretty sure direct attached disk throughput in the Sun 3 era was much 
> better than 10Mbit Ethernet could manage. IIRC, NFS on a Sun 3 running NetBSD 
> over 10B2 was only *just* capable of streaming MP3, with tweaking, from my 
> own experiments (I ran 10B2 at home until 2004; hey, it was good enough!)

The max memory you could put into a Sun-3/280 was 32MB. There is no possible
way for such a system to handle 100 Mbps Ethernet; you could exhaust all of
main memory in about 3 seconds :-)
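A quick back-of-the-envelope check of both figures (assuming full wire rate
and ignoring protocol overhead):

```shell
# Serialization time for a 4 KB payload at 1 Gbit/s, in microseconds:
awk 'BEGIN { printf "%.1f us\n", 4096 * 8 / 1e9 * 1e6 }'    # ~32.8 us

# Time to fill 32 MB of RAM from a saturated 100 Mbit/s link, in seconds:
awk 'BEGIN { printf "%.2f s\n", 32 * 2^20 * 8 / 100e6 }'    # ~2.68 s
```

So the 0.2 ms round-trip quoted above is dominated by switch and stack
latency rather than serialization, and "about 3 seconds" to exhaust 32MB
checks out.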

 -- richard

-- 

ZFS and performance consulting
http://www.RichardElling.com
VMworld Copenhagen, October 17-20
OpenStorage Summit, San Jose, CA, October 24-27
LISA '11, Boston, MA, December 4-9 