Re: [zfs-discuss] Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?

2010-04-05 Thread Ahmed Kamal
Install Nexenta on a Dell PowerEdge?
Or one of these: http://www.pogolinux.com/products/storage_director

On Mon, Apr 5, 2010 at 9:48 PM, Kyle McDonald  wrote:

> I've seen the Nexenta and EON webpages, but I'm not looking to build my
> own.
>
> Is there anything out there I can just buy?
>
>  -Kyle
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] `zfs list` doesn't show my snapshot

2008-11-21 Thread Ahmed Kamal
zfs list -t snapshot ?
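
For illustration, with a pool like the one below, the snapshot only shows up
once you ask for the snapshot type explicitly (output shape is indicative
only; the snapshot name here is made up, since the one in your mail is
obscured):

  # zfs list -t snapshot
  NAME                              USED  AVAIL  REFER  MOUNTPOINT
  rpool/ROOT/opensolaris-1@mysnap      0      -  5,52G  -

If your build has the pool-wide 'listsnapshots' property, that also controls
whether a plain 'zfs list' includes snapshots:

  # zpool set listsnapshots=on rpool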

On Sat, Nov 22, 2008 at 1:14 AM, Pawel Tecza <[EMAIL PROTECTED]> wrote:

> Hello All,
>
> This is my zfs list:
>
> # zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> rpool                     10,5G  3,85G    61K  /rpool
> rpool/ROOT                9,04G  3,85G    18K  legacy
> rpool/ROOT/opensolaris    89,7M  3,85G  5,44G  legacy
> rpool/ROOT/opensolaris-1  8,95G  3,85G  5,52G  legacy
> rpool/dump                 256M  3,85G   256M  -
> rpool/export               747M  3,85G    19K  /export
> rpool/export/home          747M  3,85G   747M  /export/home
> rpool/swap                 524M  3,85G   524M  -
>
> Today I've created one snapshot as below:
>
> # zfs snapshot rpool/ROOT/[EMAIL PROTECTED]
>
> Unfortunately I can't see it, because the `zfs list` command doesn't show it:
>
> # zfs list
> NAME                       USED  AVAIL  REFER  MOUNTPOINT
> rpool                     10,5G  3,85G    61K  /rpool
> rpool/ROOT                9,04G  3,85G    18K  legacy
> rpool/ROOT/opensolaris    89,7M  3,85G  5,44G  legacy
> rpool/ROOT/opensolaris-1  8,95G  3,85G  5,52G  legacy
> rpool/dump                 256M  3,85G   256M  -
> rpool/export               747M  3,85G    19K  /export
> rpool/export/home          747M  3,85G   747M  /export/home
> rpool/swap                 524M  3,85G   524M  -
>
> I know the snapshot exists, because I can't create the same one again:
>
> # zfs snapshot rpool/ROOT/[EMAIL PROTECTED]
> cannot create snapshot 'rpool/ROOT/[EMAIL PROTECTED]': dataset
> already exists
>
> Isn't that strange? How can you explain that?
>
> I use OpenSolaris 2008.11 snv_101a:
>
> # uname -a
> SunOS oklahoma 5.11 snv_101a i86pc i386 i86pc Solaris
>
> My best regards,
>
> Pawel
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ESX integration

2008-11-24 Thread Ahmed Kamal
Hi,
Not sure if this is the best place to ask, but do Sun's new Amber Road
storage boxes have any kind of integration with ESX? Most importantly,
quiescing the VMs before snapshotting the zvols, and/or some level of
management integration through either the web UI or ESX's console? If there's
nothing official, did anyone hack any scripts for that?
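
In case nothing official exists, here is the rough shape of the script I am
imagining: entirely untested, assuming the ESX 3.x service-console vmware-cmd
tool, and with all paths, VM names and dataset names made up:

  # on the ESX host: take a quiesced VM-level snapshot first
  vmware-cmd /vmfs/volumes/datastore1/myvm/myvm.vmx createsnapshot pre-zfs "before zvol snap" 1 0
  # on the storage box: snapshot the backing zvol while the VM is quiesced
  zfs snapshot pool/esx-lun@$(date +%Y%m%d-%H%M)
  # back on the ESX host: drop the VM-level snapshot again
  vmware-cmd /vmfs/volumes/datastore1/myvm/myvm.vmx removesnapshots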

Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] UFS over zvol major performance hit

2008-12-14 Thread Ahmed Kamal
Hi,

I have been doing some basic performance tests, and I am getting a big hit
when I run UFS over a zvol, instead of directly using zfs. Any hints or
explanations are very welcome. Here's the scenario. The machine has 30G RAM
and two IDE disks attached. The disks have 2 fdisk partitions (c4d0p2,
c3d0p2) that are mirrored and form a zpool. When using filebench with 20G
files writing directly on the zfs filesystem, I get the following results:

RandomWrite-8k:  0.8M/s
SingleStreamWriteDirect1m: 50M/s
MultiStreamWrite1m:  51M/s
MultiStreamWriteDirect1m: 50M/s

Pretty consistent and lovely. The 50M/s rate sounds pretty reasonable, while
the random 0.8M/s is a bit too low ? All in all, things look OK to me here
though.

The second step is to create a 100G zvol, format it with UFS, then bench
that under the same conditions. Note that this zvol lives on the exact same
zpool used previously. I get the following:

RandomWrite-8k:  0.9M/s
SingleStreamWriteDirect1m: 5.8M/s   (??)
MultiStreamWrite1m:  33M/s
MultiStreamWriteDirect1m: 11M/s

Obviously, there's a major hit. Can someone please shed some light as to why
this is happening ? If more info is required, I'd be happy to test some more
... This is all running on osol 2008.11 release.

Note: I know ZFS auto-disables disk caches when running on partitions (is
that slices, or fdisk partitions?!). Could this be causing what I'm seeing ?
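
For reference, the zvol/UFS layer was created roughly along these lines (pool
and volume names here are placeholders; the forcedirectio variant is just an
extra experiment worth trying to rule out double caching):

  zfs create -V 100g tank/ufsvol
  newfs /dev/zvol/rdsk/tank/ufsvol
  mount /dev/zvol/dsk/tank/ufsvol /mnt/ufs
  # variant: bypass the UFS page cache so data isn't cached twice
  # mount -o forcedirectio /dev/zvol/dsk/tank/ufsvol /mnt/ufs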

Thanks for the help
Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] UFS over zvol major performance hit

2008-12-15 Thread Ahmed Kamal
Well, I checked and it is 8k:
  volblocksize  8K

Any other suggestions on how to begin debugging such an issue ?
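
For what it's worth, this is how I am checking it; note that volblocksize can
only be set at creation time, so testing another value means recreating the
volume (names below are placeholders):

  zfs get volblocksize tank/ufsvol
  zfs create -V 100g -o volblocksize=8k tank/ufsvol8k
  newfs /dev/zvol/rdsk/tank/ufsvol8k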



On Mon, Dec 15, 2008 at 2:44 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Mon, 15 Dec 2008, Ahmed Kamal wrote:
>
>>
>> RandomWrite-8k:  0.9M/s
>> SingleStreamWriteDirect1m: 5.8M/s   (??)
>> MultiStreamWrite1m:  33M/s
>> MultiStreamWriteDirect1m: 11M/s
>>
>> Obviously, there's a major hit. Can someone please shed some light as to
>> why
>> this is happening ? If more info is required, I'd be happy to test some
>> more
>> ... This is all running on osol 2008.11 release.
>>
>
> What blocksize did you specify when creating the zvol?  Perhaps UFS will
> perform best if the zvol blocksize is similar to the UFS blocksize.  For
> example, try testing with a zvol blocksize of 8k.
>
> Bob
> ==
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and AVS replication performance issues

2008-12-21 Thread Ahmed Kamal
Hi,

I have setup AVS replication between two zvols on two opensolaris-2008.11
nodes. I have been seeing BIG performance issues, so I tried to setup the
system to be as fast as possible using a couple of tricks. The detailed
setup and performance data are below:

* A 100G zvol has been set up on each node of an AVS replicating pair
* A "ramdisk" has been set up on each node using the following command.
This functions as a very fast logging disk:

  ramdiskadm -a ram1 10m

* The replication relationship has been setup using

  sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 sec
/dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async

* The AVS driver was configured to *not* log the disk bitmap to disk,
rather to keep it in kernel memory and write it to disk only upon machine
shutdown. This is configured as such

  # grep bitmap_mode /usr/kernel/drv/rdc.conf
  rdc_bitmap_mode=2;

* The replication was configured to be in logging mode (To avoid any
possible network bottlenecks)

  #sndradm -P
  /dev/zvol/rdsk/gold/myzvol  <-  pri:/dev/zvol/rdsk/gold/myzvol
  autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
2, mode: async, state: logging

=== Testing ===

All tests were performed using the following command line

# dd if=/dev/zero of=/dev/zvol/rdsk/gold/xxVolNamexx oflag=dsync
bs=256M count=10

I usually ran a couple of runs initially to avoid caching effects.

=== Results ===

The following results were reported after an initial couple of runs to
avoid cache effects:

Run#  dd count=N  Native Vol Throughput  Replicated Vol Throughput (logging mode)
1     4           42.2 MB/s              4.9 MB/s
2     4           52.8 MB/s              5.5 MB/s
3     10          50.9 MB/s              4.6 MB/s

As you can see, the performance is almost 10 times slower!! Although no
bitmap logging is done to disk (only to a ramdisk, if not kept in driver
memory), and there is no network traffic! It seems to me that the AVS kernel
driver slows the system A LOT simply for hooking every write operation and
flipping a bit in kernel memory per 32k of written disk space?!!

Any suggestions as to why this is happening are most appreciated
Best Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using ZFS for replication

2009-01-15 Thread Ahmed Kamal
You might want to look at AVS for realtime replication:
http://www.opensolaris.org/os/project/avs/
However, I have had huge performance hits after enabling it. The
replicated volume runs at almost 10% of the speed of a normal one.
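
For the send/receive route mentioned below, a minimal script would look
something like this (host, pool and snapshot names are made up, there is no
error handling, and it assumes the same dataset name exists on the receiving
side):

  #!/bin/ksh
  FS=tank/data                 # dataset to replicate (placeholder)
  REMOTE=backuphost            # receiving host (placeholder)
  NEW=$FS@repl-$(date +%Y%m%d%H%M)
  PREV=$(zfs list -H -t snapshot -o name -s creation | grep "^$FS@repl-" | tail -1)
  zfs snapshot $NEW
  if [ -n "$PREV" ]; then
        # incremental: send only the changes since the previous snapshot
        zfs send -i $PREV $NEW | ssh $REMOTE zfs receive -F $FS
  else
        # first run: send a full stream
        zfs send $NEW | ssh $REMOTE zfs receive -F $FS
  fi

Run it from cron at whatever interval matches how much data you can afford
to lose.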

On Thu, Jan 15, 2009 at 1:28 PM, Ian Mather  wrote:

> Fairly new to ZFS. I am looking to replicate data between two thumper
> boxes.
> Found quite a few articles about using zfs incremental snapshot
> send/receive. Just a cheeky question to see if anyone has anything working
> in a live environment and are happy to share the scripts,  save me
> reinventing the wheel. thanks in advance.
> --
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-24 Thread Ahmed Kamal
Hi Jim,
Thanks for your informative reply. I am involved with kristof
(original poster) in the setup, please allow me to reply below

> Was the follow 'test' run during resynchronization mode or replication
> mode?
>

Neither, testing was done while in logging mode. This was chosen to
simply avoid any network "issues" and to get the setup working as fast
as possible. The setup was created with:

sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 sec
/dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async

Note that the logging disks are ramdisks, again to avoid disk
contention and get the fastest performance (reliability is not a concern
in this test). Before running the tests, this was the state:

#sndradm -P
/dev/zvol/rdsk/gold/myzvol  <-  pri:/dev/zvol/rdsk/gold/myzvol
autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
2, mode: async, state: logging

While we should (hopefully) be seeing a minimal performance hit, we got
a big one: disk throughput was reduced to almost 10% of
the normal rate.
Please feel free to ask for any details, thanks for the help

Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] thoughts on parallel backups, rsync, and send/receive

2009-01-26 Thread Ahmed Kamal
Did anyone share a script to send/recv a tree of zfs filesystems in
parallel, especially one where a cap on concurrency can be specified ?
Richard, how fast were you taking those snapshots, and how fast were the
syncs over the network ? For example, assuming a snapshot every 10 mins,
is it reasonable to expect to sync every snapshot as it is created
every 10 mins ? What would be the limit when trying to lower those 10 mins
even more ?
Is it catastrophic if a second zfs send launches while an older one
is still running ?
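
In case nobody has one handy, this is the kind of thing I mean: a crude
sketch with a concurrency cap (names are placeholders, there is no error
handling, it sends full streams for simplicity, and the wait batches jobs
rather than keeping a rolling pool of workers):

  #!/bin/ksh
  MAXJOBS=4                        # concurrency cap
  REMOTE=backuphost                # placeholder
  SNAP=repl-$(date +%Y%m%d%H%M)
  count=0
  for fs in $(zfs list -H -o name -r tank); do
        zfs snapshot $fs@$SNAP
        ( zfs send $fs@$SNAP | ssh $REMOTE zfs receive -F $fs ) &
        count=$((count + 1))
        if [ $count -ge $MAXJOBS ]; then
              wait               # crude: wait for the whole batch
              count=0
        fi
  done
  wait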

Regards

On Mon, Jan 26, 2009 at 9:16 AM, Ian Collins  wrote:
> Richard Elling wrote:
>> Recently, I've been working on a project which had agressive backup
>> requirements. I believe we solved the problem with parallelism.  You
>> might consider doing the same.  If you get time to do your own experiments,
>> please share your observations with the community.
>> http://richardelling.blogspot.com/2009/01/parallel-zfs-sendreceive.html
>>
>
> You raise some interesting points about rsync getting bogged down over
> time.  I have been working with a client with a requirement for
> replication between a number of hosts and I have found doing several
> rend/receives made quite an impact.  What I haven't done is try this
> with the latest performance improvements in b105.  Have you?  My guess
> is the gain will be less.
>
> One thing I have yet to do is find the optimum number of parallel
> transfers when there are 100s of filesystems.  I'm looking into making
> this dynamic, based on throughput.
>
> Are you working with OpenSolaris?  I still haven't managed to nail the
> toxic streams problem in Solaris 10, which have curtailed my project.
>
> --
> Ian.
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [storage-discuss] AVS on opensolaris 2008.11

2009-01-26 Thread Ahmed Kamal
Hi Jim,

The setup is not there anymore; however, I will share as many details
as I have documented. Could you please post the commands you used,
and any differences you think might be important ? Did you ever test
with 2008.11 instead of SXCE ?

I will probably be testing again soon. Any tips or obvious errors are welcome :)

->8-
The Setup
* A 100G zvol has been setup on each node of an AVS replicating pair
* A "ramdisk" has been setup on each node using
  ramdiskadm -a ram1 10m
* The replication relationship has been setup using
  sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 sec
/dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async
* The AVS driver was configured to not log the disk bitmap to disk,
rather to keep it in kernel memory and write it to disk only upon
machine shutdown. This is configured as such
  grep bitmap_mode /usr/kernel/drv/rdc.conf
  rdc_bitmap_mode=2;
* The replication was configured to be in logging mode
  sndradm -P
  /dev/zvol/rdsk/gold/myzvol  <-  pri:/dev/zvol/rdsk/gold/myzvol
  autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
2, mode: async, state: logging

Testing was done with:

 dd if=/dev/zero of=/dev/zvol/rdsk/gold/xxVolNamexx oflag=dsync bs=256M count=10

* Option 'dsync' is chosen to try to avoid zfs's aggressive caching.
Moreover, a couple of runs were usually launched initially to fill the
zfs cache and force real writing to disk
* Option 'bs=256M' was used in order to avoid the overhead of copying
multiple small blocks to kernel memory before disk writes. A larger bs
ensures max throughput. Smaller values were used without much difference,
though

The results on multiple runs

Non Replicated Vol Throughputs: 42.2, 52.8, 50.9 MB/s
Replicated Vol Throughputs:  4.9, 5.5, 4.6 MB/s

-->8-

Regards

On Mon, Jan 26, 2009 at 1:22 AM, Jim Dunham  wrote:
> Ahmed,
>
>> Thanks for your informative reply. I am involved with kristof
>> (original poster) in the setup, please allow me to reply below
>>
>>> Was the follow 'test' run during resynchronization mode or replication
>>> mode?
>>>
>>
>> Neither, testing was done while in logging mode. This was chosen to
>> simply avoid any network "issues" and to get the setup working as fast
>> as possible. The setup was created with:
>>
>> sndradm -E pri /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 sec
>> /dev/zvol/rdsk/gold/myzvol /dev/rramdisk/ram1 ip async
>>
>> Note that the logging disks are ramdisks again trying to avoid disk
>> contention and get fastest performance (reliability is not a concern
>> in this test). Before running the tests, this was the state
>>
>> #sndradm -P
>> /dev/zvol/rdsk/gold/myzvol  <-  pri:/dev/zvol/rdsk/gold/myzvol
>> autosync: off, max q writes: 4096, max q fbas: 16384, async threads:
>> 2, mode: async, state: logging
>>
>> While we should be getting minimal performance hit (hopefully), we got
>> a big performance hit, disk throughput was reduced to almost 10% of
>> the normal rate.
>
> Is it possible to share information on your ZFS storage pool configuration,
> your testing tool, testing types and resulting data?
>
> I just downloaded Solaris Express CE (b105)
> http://opensolaris.org/os/downloads/sol_ex_dvd_1/,  configured ZFS in
> various storage pool types, SNDR with and without RAM disks, and I do not
> see that disk throughput was reduced to almost 10% of the normal rate. Yes
> there is some performance impact, but nowhere near the amount reported.
>
> There are various factors which could come into play here, but the most
> obvious reason that someone may see a serious performance degradation as
> reported, is that prior to SNDR being configured, the existing system under
> test was already maxed out on some system limitation, such as CPU and
> memory.  I/O impact should not be a factor, given that a RAM disk is used.
> The addition of both SNDR and a RAM disk in the data, regardless of how
> small their system cost is, will have a profound impact on disk throughput.
>
> Jim
>
>>
>> Please feel free to ask for any details, thanks for the help
>>
>> Regards
>> ___
>> storage-discuss mailing list
>> storage-disc...@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/storage-discuss
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-09 Thread Ahmed Kamal
>
> "Unmount" is not sufficient.
>

Well, umount is not the "right" way to do it, so he'd be simulating a
power loss / system crash. That still doesn't explain why massive data loss
would occur ? I would understand the last txg being lost, but 90% according
to the OP ?!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS: unreliable for professional usage?

2009-02-10 Thread Ahmed Kamal
>
> The good news is that ZFS is getting popular enough on consumer-grade
> hardware.  The bad news is that said hardware has a different set of
> failure modes, so it takes a bit of work to become resilient to them.
> This is pretty high on my short list.


So does this basically mean zfs rolls back to the latest on-disk consistent
state before any failure, even if it means (minor) data loss ? Is there any
bug report I can follow, so I would know when the fix for this is committed ?
Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] is zpool export/import | faster than rsync or cp

2009-03-27 Thread Ahmed Kamal
ZFS replication basics at http://cuddletech.com/blog/pivot/entry.php?id=984
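
To Harry's question below about how the increments appear on the other end:
each received increment simply becomes another snapshot of the target
filesystem (names here are made up):

  # first full copy
  zfs snapshot tank/data@mon
  zfs send tank/data@mon | ssh backuphost zfs receive backup/data
  # later, send only the blocks changed since @mon
  zfs snapshot tank/data@tue
  zfs send -i mon tank/data@tue | ssh backuphost zfs receive backup/data
  # on backuphost, both point-in-time versions are now visible
  zfs list -t snapshot -r backup/data

Unlike rsnapshot's hard-link trees, the older versions live in the snapshots
(reachable under .zfs/snapshot/) rather than in parallel directories.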
Regards

On Sat, Mar 28, 2009 at 1:57 AM, Harry Putnam  wrote:

>
> [...]
>
> Harry wrote:
> >> Now I'm wondering if the export/import sub commands might not be a
> >> good bit faster.
> >>
> Ian Collins  answered:
> > I think you are thinking of zfs send/receive.
> >
> > I've never done a direct comparison, but zfs send/receive would be my
> > preferred way to move data between pools.
>
> Why is that?  I'm too new to know what all it encompasses (and a bit
> dense to boot)
>
> "Fajar A. Nugraha"  writes:
>
> > On Sat, Mar 28, 2009 at 5:05 AM, Harry Putnam 
> wrote:
> >> Now I'm wondering if the export/import sub commands might not be a
> >> good bit faster.
> >
> > I believe the greatest advantage of zfs send/receive over rsync is not
> > about speed, but rather it's on "zfs send -R", which would (from man
> > page)
> >
> >  Generate a replication stream  package,  which  will
> >  replicate  the specified filesystem, and all descen-
> >  dant file systems, up to the  named  snapshot.  When
> >  received, all properties, snapshots, descendent file
> >  systems, and clones are preserved.
> >
> > pretty much allows you to clone a complete pool preserving its structure.
> > As usual, compressing the backup stream (whether rsync or zfs) might
> > help reduce transfer time a lot. My favorite is lzop (since it's very
> > fast), but gzip should work as well.
> >
>
> Nice... good reasons it appears.
>
>
> Robert Milkowski  writes:
>
> > Hello Harry,
>
> [...]
>
> > As Ian pointed you want zfs send|receive and not import/export.
> > For a first full copy zfs send not necessarily will be noticeably
> > faster than rsync but it depends on data. If for example you have
> > millions of small files zfs send could be much faster than rsync.
> > But it shouldn't be slower in any case.
> >
> > zfs send|receive really shines when it comes to sending incremental
> > changes.
>
> Now that would be something to make it stand out.  Can you tell me a
> bit more about how that would work... I mean, would you just keep receiving
> only changes at one end, and how do they appear on the filesystem?
>
> There is a backup tool called `rsnapshot' that uses rsync but creates
> hard links to all unchanged files and moves only changes to changed
> files.  This is all put in a serial directory system and ends up
> taking a tiny fraction of the space that full backups would take, yet
> retains a way to get to unchanged files right in the same directory
> (the hard link).
>
> Is what you're talking about similar in some way?
>
> = * = * = * =
>
> To all posters... many thanks for the input.
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs reliability under xen

2009-05-17 Thread Ahmed Kamal
Hi zfs gurus,

I am wondering whether the reliability of solaris/zfs is still guaranteed if
I will be running zfs not directly over real hardware, but over Xen
virtualization ? The plan is to give the Xen guest raw physical access to
the disks. I remember zfs having problems with hardware that lies about
disk write ordering; I wonder how that is handled under Xen, or whether that
issue has been completely resolved.
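
For concreteness, the raw-disk passthrough I have in mind is the usual phy:
mapping in the domU configuration, something like (device names are only
examples, and would differ between a Linux and a Solaris dom0):

  disk = [ 'phy:/dev/sdb,xvdb,w', 'phy:/dev/sdc,xvdc,w' ]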

Thanks and Best Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs reliability under xen

2009-05-19 Thread Ahmed Kamal
Is anyone even using ZFS under Xen in production in some form? If so, what's
your impression of its reliability ?
Regards

On Sun, May 17, 2009 at 2:16 PM, Ahmed Kamal <
email.ahmedka...@googlemail.com> wrote:

> Hi zfs gurus,
>
> I am wondering whether the reliability of solaris/zfs is still guaranteed
> if I will be running zfs not directly over real hardware, but over Xen
> virtualization ? The plan is to assign physical raw access to the disks to
> the xen guest. I remember zfs having problems with hardware that lies about
> disk write ordering, wonder how that is handled over Xen, or if that issue
> has been completely resolved
>
> Thanks and Best Regards
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs reliability under xen

2009-05-22 Thread Ahmed Kamal
>
> However, if you need to decide, whether to use Xen, test your setup
> before going into production and ask your boss, whether he can live with
> innovative ... solutions ;-)
>

Thanks a lot for the informative reply. It has definitely been helpful.
I am, however, interested in the reliability of running the ZFS stack as a
Xen domU (and not dom0). For instance, I am worried that the emulated disk
controller would not obey flushes or write ordering, thus stabbing zfs in
the back.

Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering fs's

2009-06-22 Thread Ahmed Kamal
>
> It's worth a try although like you i'll have to bow to the gurus on the
> list. It's not the end of the world if she can't get it back, but if anyone
> does know of a method like this, I'd love to know, for future reference as
> much as anything.
>

Perhaps you're looking for http://www.cgsecurity.org/wiki/PhotoRec ?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Quantifying ZFS reliability

2008-09-29 Thread Ahmed Kamal
Hi everyone,

We're a small Linux shop (20 users). I am currently using a Linux server to
host our 2TBs of data. I am considering better options for our data storage
needs. I mostly need instant snapshots and better data protection. I have
been considering EMC NS20 filers and Zfs based solutions. For the Zfs
solutions, I am considering NexentaStor product installed on a pogoLinux
StorageDirector box. The box will be mostly sharing 2TB over NFS, nothing
fancy.

Now, my question is I need to assess the zfs reliability today Q4-2008 in
comparison to an EMC solution. Something like EMC is pretty mature and used
at the most demanding sites. Zfs is fairly new, and from time to time I have
heard it had some pretty bad bugs. However, the EMC solution is like 4X more
expensive. I need to somehow "quantify" the relative quality level, in order
to judge whether or not I should be paying all that much to EMC. The only
really important reliability measure to me, is not having data loss!
Is there any real measure like "percentage of total corruption of a pool"
that can assess such a quality, so you could tell me zfs has a pool failure
rate of 1 in 10^6 while EMC has a rate of 1 in 10^7? If not, would you guys
rate such a zfs solution as ??% the reliability of an EMC solution ?

I know it's a pretty difficult question to answer, but it's the one I need
to answer and weigh against the cost.
Thanks a million, I really appreciate your help
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
Thanks for all the answers .. Please find more questions below :)

- Good to know EMC filers do not have end2end checksums! What about netapp ?

- Any other limitations of the big two NAS vendors as compared to zfs ?

- I still don't have my original question answered: I want to somehow assess
the reliability of the zfs storage stack. If there's no hard data on that,
then if any storage expert who works with lots of systems can give his
"impression" of its reliability compared to the big two, that would be great!

- Regarding building my own hardware, I don't really want to do that (I am
scared enough putting our small but very important data on zfs). If you know
of any Dell box (we usually deal with Dell) that can host say 10 drives
minimum (for expandability) and that is *known* to work very well with
NexentaStor, then please let me know about it. I am not confident about
the hardware quality of the pogoLinux solution, but forced to go with it for
Nexenta. The Sun Thumper solution is too expensive for me; I am looking for
a solution around 10k$. I don't need all those disks or RAM in a Thumper!

- Assuming I plan to host a maximum of 8TB usable data on the pogo box as
seen in: http://www.pogolinux.com/quotes/editsys?sys_id=8498
  * Would I need one or two of those quad-core Xeon CPUs ?
  * How much RAM is needed ?
  * I'm planning on using Seagate 1TB SATA 7200 disks. Is that crazy ? The
EMC guy insisted we use 10k Fibre/SAS drives at least. We're currently on 3
1TB SATA disks on my current Linux box, and it's fine for me! At least when
it's not rsnapshotting. The workload is NFS for 20 users' homes and some
software shares
  * Assuming the pogo SATA controller dies, do you suppose I could plug the
disks into any other machine and work with them ? I wonder why the pogo box
does not come with two controllers; doesn't solaris support that ?


Thanks a lot for your replies


On Tue, Sep 30, 2008 at 10:31 AM, MC <[EMAIL PROTECTED]> wrote:

> The good news is that even though the answer to your question is "no", it
> doesn't matter because it sounds like what you are doing is a piece of cake
> :)
>
> Given how cheap hardware is, and how modest your requirements sound, I
> expect you could build multiple custom systems for the cost of an EMC
> system.  Even that pogolinux stuff is overshooting the mark compared to what
> a custom system might be.  Price is typical too, considering they're trying
> to sell 1TB drives for $260 when similar drives are less than $150 for
> regular folks.
>
> The manageability of nexentastor software might be worth it to you over a
> solaris terminal, but for a small shop with one machine and one guy who
> knows it well, you might just do the hardware from scratch :)  Especially
> given what there is to know about ZFS and your use case, such as being able
> to use slower disks with more RAM and a SSD ZIL cache to produce deceptively
> fast results.
>
> If cost continues to be a concern over performance, also consider that
> these pre-made systems are not designed for power conservation at all.
>  They're still shipping old inefficient processors and other such parts in
> these things, hoping to take advantage of IT people who don't care or know
> any better.  A custom system could potentially cut the total power cost in
> half...
>
> > 
> > Hi everyone,We're a small
> > Linux shop (20 users). I am currently using a Linux
> > server to host our 2TBs of data. I am considering
> > better options for our data storage needs. I mostly
> > need instant snapshots and better data protection. I
> > have been considering EMC NS20 filers and Zfs based
> > solutions. For the Zfs solutions, I am considering
> > NexentaStor product installed on a pogoLinux
> > StorageDirector box. The box will be mostly sharing
> > 2TB over NFS, nothing fancy.
> > Now, my question is I need to assess the zfs
> > reliability today Q4-2008 in comparison to an EMC
> > solution. Something like EMC is pretty mature and
> > used at the most demanding sites. Zfs is fairly new,
> > and from time to time I have heard it had some pretty
> > bad bugs. However, the EMC solution is like 4X more
> > expensive. I need to somehow "quantify" the
> > relative quality level, in order to judge whether or
> > not I should be paying all that much to EMC. The only
> > really important reliability measure to me, is not
> > having data loss!
> > Is there any real measure like "percentage of
> > total corruption of a pool" that can assess such
> > a quality, so you'd tell me zfs has pool failure
> > rate of 1 in a 10^6, while EMC has a rate of 1 in a
> > 10^7. If not, would you guys rate such a zfs solution
> > as ??% the reliability of an EMC solution ?
> > I know it's a pretty difficult question to
> > answer, but it's the one I need to answer and
> > weigh against the cost. Thanks a million, I
> > really appreciate your help
> >
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolar

Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
I guess I am mostly interested in MTDL for a zfs system on whitebox hardware
(like the pogo), vs Data ONTAP on NetApp hardware. Any numbers ?

On Tue, Sep 30, 2008 at 4:36 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Tue, 30 Sep 2008, Ahmed Kamal wrote:
>
>>
>> - I still don't have my original question answered, I want to somehow
>> assess
>> the reliability of that zfs storage stack. If there's no hard data on
>> that,
>> then if any storage expert who works with lots of systems can give his
>> "impression" of the reliability compared to the big two, that would be
>> great
>>
>
> The reliability of that zfs storage stack primarily depends on the
> reliability of the hardware it runs on.  Note that there is a huge
> difference between 'reliability' and 'mean time to data loss' (MTDL). There
> is also the concern about 'availability' which is a function of how often
> the system fails, and the time to correct a failure.
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
Thanks guys, it seems the problem is even more difficult than I thought, and
it seems there is no real measure for the software quality of the zfs stack
vs others, neutralizing the hardware used under both. I will be using ECC
RAM, since you mentioned it, and I will shift to using "enterprise" disks (I
had initially thought zfs would always recover from cheapo sata disks,
making other disks only faster but not also safer), so now I am shifting to
10krpm SAS disks.

So, I am changing my question to "Do you see any obvious problems with the
following setup I am considering":

- CPU: 1 Xeon Quad Core E5410 2.33GHz 12MB Cache 1333MHz
- 16GB ECC FB-DIMM 667MHz (8 x 2GB)
- 10  Seagate 400GB 10K 16MB SAS HDD

The 10 disks will be: 2 spares + 2 parity for raidz2 + 6 data => 2.4TB
usable space
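
In zpool terms, that layout would be something like the following (controller
and target numbers are placeholders):

  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c1t6d0 c1t7d0 spare c1t8d0 c1t9d0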

* Do I need more CPU power ? How do I measure that ? What about RAM ?!
* Now that I'm using ECC RAM and enterprisey disks, does this put the
solution on par with a low-end NetApp 2020, for example ?

I will be replicating the important data daily to a Linux box, just in case
I hit a wonderful zpool bug. Any final advice before I take the blue pill ;)

Thanks a lot


On Tue, Sep 30, 2008 at 8:40 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> >>>>> "ak" == Ahmed Kamal <[EMAIL PROTECTED]> writes:
>
>ak> I need to answer and weigh against the cost.
>
> I suggest translating the reliability problems into a cost for
> mitigating them: price the ZFS alternative as two systems, and keep
> the second system offline except for nightly backup.  Since you care
> mostly about data loss, not availability, this should work okay.  You
> can lose 1 day of data, right?
>
> I think you need two zpools, or zpool + LVM2/XFS, some kind of
> two-filesystem setup, because of the ZFS corruption and
> panic/freeze-on-import problems.  Having two zpools helps with other
> things, too, like if you need to destroy and recreate the pool to
> remove a slog or a vdev, or change from mirroring to raidz2, or
> something like that.
>
> I don't think it's realistic to give a quantitative MTDL for loss
> caused by software bugs, from netapp or from ZFS.
>
>ak> The EMC guy insisted we use 10k Fibre/SAS drives at least.
>
> I'm still not experienced at dealing with these guys without wasting
> huge amounts of time.  I guess one strategy is to call a bunch of
> them, so they are all wasting your time in parallel.  Last time I
> tried, the EMC guy wanted to meet _in person_ in the financial
> district, and then he just stopped calling so I had to guesstimate his
> quote from some low-end iSCSI/FC box that Dell was reselling.  Have
> you called netapp, hitachi, storagetek?  The IBM NAS is netapp so you
> could call IBM if netapp ignores you, but you probably want the
> storevault which is sold differently.  The HP NAS looks weird because
> it runs your choice of Linux or Windows instead of
> WeirdNASplatform---maybe read some more about that one.
>
> Of course you don't get source, but it surprised me these guys are
> MUCH worse than ordinary proprietary software.  At least netapp stuff,
> you may as well consider it leased.  They leverage the ``appliance''
> aspect, and then have sneaky licenses, that attempt to obliterate any
> potential market for used filers.  When you're cut off from support
> you can't even download manuals.  If you're accustomed to the ``first
> sale doctrine'' then ZFS with source has a huge advantage over netapp,
> beyond even ZFS's advantage over proprietary software.  The idea of
> dumping all my data into some opaque DRM canister lorded over by
> asshole CEO's who threaten to sick their corporate lawyers on users on
> the mailing list offends me just a bit, but I guess we have to follow
> the ``market forces.''
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
>
>  Intel mainstream (and indeed many tech companies') stuff is purposely
> stratified from the enterprise stuff by cutting out features like ECC and
> higher memory capacity and using different interface form factors.


Well I guess I am getting a Xeon anyway


> There is nothing magical about SAS drives. Hard drives are for the most
> part all built with the same technology.  The MTBF on that is 1.4M hours vs
> 1.2M hours for the enterprise 1TB SATA disk, which isn't a big difference.
>  And for comparison, the WD3000BLFS is a consumer drive with 1.4M hours
> MTBF.
>
>
Hmm ... well, there is a considerable price difference, so unless someone
says I'm horribly mistaken, I now want to go back to Barracuda ES 1TB 7200
drives. By the way, how many of those would it take to saturate a single
(non-trunked) Gig Ethernet link ? The workload is NFS sharing of software
and homes. I think 4 disks should be about enough to saturate it ?

BTW, for everyone saying zfs is more reliable because it's closer to the
application than a netapp: well, at least in my case it isn't. The solaris
box will be sharing over NFS and the apps will be running on remote Linux
boxes, so I guess this makes them equal. How about a new "reliable NFS"
protocol that computes the hashes on the client side and sends them over
the wire to be written remotely on the zfs storage node ?!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
>
> Well, if you can probably afford more SATA drives for the purchase
> price, you can put them in a striped-mirror set up, and that may help
> things. If your disks are cheap you can afford to buy more of them
> (space, heat, and power not withstanding).
>

Hmm, that's actually cool !
If I configure the system with

10 x 400G 10k rpm disk == cost ==> 13k$
10 x 1TB SATA 7200 == cost ==> 9k$

Always assuming 2 spare disks, and using the SATA disks, I would configure
them in a raid1 mirror (raid6 for the 400G). Besides being cheaper, I would
get more usable space (4TB vs 2.4TB), the better performance of raid1
(right?), and better data reliability ?? (don't really know about that one) ?

Is this a recommended setup ? It looks too good to be true ?
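
For the mirrored SATA layout, what I have in mind is a stripe of mirror pairs
plus the two spares, i.e. something like (device names are placeholders):

  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 spare c1t8d0 c1t9d0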
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
>
> I observe that there are no disk vendors supplying SATA disks
> with speed > 7,200 rpm.  It is no wonder that a 10k rpm disk
> outperforms a 7,200 rpm disk for random workloads.  I'll attribute
> this to intentional market segmentation by the industry rather than
> a deficiency in the transfer protocol (SATA).
>

I don't really need more performance than what's needed to saturate a gig
link (4 sata disks?).
So, performance aside, does SAS have other benefits ? Data integrity ? How
would 8 sata disks in raid1 compare vs another 8 smaller SAS disks in
raidz(2) ?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
>
>
> So, performance aside, does SAS have other benefits ? Data integrity ? How
> would a 8 raid1 sata compare vs another 8 smaller SAS disks in raidz(2) ?
> Like apples and pomegranates.  Both should be able to saturate a GbE link.
>

You're the expert, but isn't the 100M/s figure for streaming, not random
read/write ? For random I/O, I suppose the disk drops to around 25M/s, which
is why I was mentioning 4 sata disks.

When I asked to compare the two raids, it was aside from performance:
basically sata is obviously cheaper and it will saturate the gig link, so
performance is covered too; the question then becomes which has better data
protection (8 sata raid1 or 8 sas raidz2).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-09-30 Thread Ahmed Kamal
Hmm, Richard's excellent graphs here http://blogs.sun.com/relling/tags/mttdl
as well as his words say he prefers mirroring over raidz/raidz2 almost
always. It's better for performance and MTTDL.

Since 8 sata disks in raid1 are cheaper and probably more reliable than 8
sas disks in raidz2 (and I don't need the extra sas performance), and offer
better performance and MTTDL than 8 sata disks in raidz2, I guess I will go
with 8-sata-raid1 then!
Hope I'm not horribly mistaken :)

On Wed, Oct 1, 2008 at 3:18 AM, Tim <[EMAIL PROTECTED]> wrote:

>
>
> On Tue, Sep 30, 2008 at 8:13 PM, Ahmed Kamal <
> [EMAIL PROTECTED]> wrote:
>
>>
>>> So, performance aside, does SAS have other benefits ? Data integrity ?
>>> How would a 8 raid1 sata compare vs another 8 smaller SAS disks in raidz(2)
>>> ?
>>> Like apples and pomegranates.  Both should be able to saturate a GbE
>>> link.
>>>
>>
>> You're the expert, but isn't the 100M/s for streaming not random
>> read/write. For that, I suppose the disk drops to around 25M/s which is why
>> I was mentioning 4 sata disks.
>>
>> When I was asking for comparing the 2 raids, It's was aside from
>> performance, basically sata is obviously cheaper, it will saturate the gig
>> link, so performance yes too, so the question becomes which has better data
>> protection ( 8 sata raid1 or 8 sas raidz2)
>>
>
> SAS's main benefits are seek time and max IOPS.
>
> --Tim
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-01 Thread Ahmed Kamal
Thanks for all the opinions everyone, my current impression is:
- I do need as much RAM as I can afford (16GB looks good enough for me)
- SAS disks offer better IOPS and better MTBF than SATA. But SATA offers
enough performance for me (to saturate a gig link), and its MTBF is around
100 years, which I guess is good enough for me too. If I wrap 5 or 6 SATA
disks in a raidz2, that should give me "enough" protection and performance.
It seems I will go with SATA for now, then. I hope that for all practical
purposes a raidz2 array of, say, 6 SATA drives is "very well protected" for,
say, the next 10 years! (If not, please tell me)
- This will mainly be used for NFS sharing. Everyone is saying it will have
"bad" performance. My question is, how "bad" is bad ? Is it worse than a
plain Linux server sharing NFS over 4 SATA disks, using a crappy 3ware raid
card with caching disabled ? Because that's what I currently have. Is it,
say, worse than a Linux box sharing over soft raid ?
- If I will be using 6 SATA disks in raidz2, I understand that to improve
performance I can add a 15k SAS drive as a ZIL device; is this correct ? Is
the ZIL device per pool ? Do I lose any flexibility by using it ? Does it
become a SPOF, say ? Typically, what percentage improvement should I
expect to get from such a ZIL device ? (A sketch of what I mean is below.)
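
A minimal sketch of the layout I have in mind, with device names that are
purely placeholders (and noting that, on current builds, a separate log
device cannot be removed from the pool again once added):

  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  zpool add tank log c2t0d0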

Thanks

On Wed, Oct 1, 2008 at 6:22 PM, Tim <[EMAIL PROTECTED]> wrote:

>
>
> On Wed, Oct 1, 2008 at 11:20 AM, <[EMAIL PROTECTED]> wrote:
>
>>
>>
>> >Ummm, no.  SATA and SAS seek times are not even in the same universe.
>> >They most definitely do not use the same mechanics inside.  Whoever told
>> >you that rubbish is an outright liar.
>>
>>
>> Which particular disks are you guys talking about?
>>
>> I;m thinking you guys are talking about the same 3.5" w/ the same RPM,
>> right?  We're not comparing 10K/2.5 SAS drives agains 7.2K/3.5 SATA
>> devices, are we?
>>
>> Casper
>>
>>
> I'm talking about 10k and 15k SAS drives, which is what the OP was talking
> about from the get-go.  Apparently this is yet another case of subsequent
> posters completely ignoring the topic and taking us off on tangents that
> have nothing to do with the OP's problem.
>
> --Tim
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Quantifying ZFS reliability

2008-10-02 Thread Ahmed Kamal
Thanks for the info. I am not really after big performance; I am already on
SATA and it's good enough for me. What I really really can't afford is data
loss. The CAD designs our engineers are working on can sometimes be worth a
lot. But still, we're a small company and would rather save money and buy
SATA drives if that is "safe".
I now understand MTBF is next to useless (at least directly), and the RAID
optimizer tables don't take into account how failure rates go up over the
years, so they are not really accurate. My question now is: suppose I use
high-quality Barracuda nearline 1TB SATA 7200 disks and configure them as 8
disks in a raidz2 configuration.

What is the "real/practical" possibility that I will face data loss during
the next 5 years, for example ? As storage experts, please help me
interpret whatever numbers you throw at me: is it a "really really
small chance", or would you be worried about it ?

Thanks

On Thu, Oct 2, 2008 at 12:24 PM, Marc Bevand <[EMAIL PROTECTED]> wrote:

> Marc Bevand  gmail.com> writes:
> >
> > Well let's look at a concrete example:
> > - cheapest 15k SAS drive (73GB): $180 [1]
> > - cheapest 7.2k SATA drive (160GB): $40 [2] (not counting a 80GB at $37)
> > The SAS drive most likely offers 2x-3x the IOPS/$. Certainly not
> 180/40=4.5x
>
> Doh! I said the opposite of what I meant. Let me rephrase: "The SAS drive
> offers at most 2x-3x the IOPS (optimistic), but at 180/40=4.5x the price.
> Therefore the SATA drive has better IOPS/$."
>
> (Joerg: I am on your side of the debate !)
>
> -marc
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Solved - a big THANKS to Victor Latushkin @ Sun / Moscow

2008-10-09 Thread Ahmed Kamal
   >
   >In the past year I've lost more ZFS file systems than I have any other
   >type of file system in the past 5 years.  With other file systems I
   >can almost always get some data back.  With ZFS I can't get any back.

That's scary to hear!
>
>
I am really scared now! I was the one trying to quantify ZFS reliability,
and that is surely bad to hear!
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best practice recommendations for backing up to ZFS Fileserver

2008-10-18 Thread Ahmed Kamal
For *nix: rsync
For Windows: rsyncshare
http://www.nexenta.com/corp/index.php?option=com_remository&Itemid=77&func=startdown&id=18
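
A minimal sketch of the rsync-then-snapshot approach for the Linux boxes
(host, dataset and path names are all placeholders):

  # on each Linux client, push the data over ssh
  rsync -aH --delete /home/ backupuser@filer:/tank/backup/linuxbox/home/
  # then, on the Solaris box, freeze that state
  zfs snapshot tank/backup/linuxbox@$(date +%Y-%m-%d)

Older versions of any file then stay reachable under .zfs/snapshot/<date>/
inside that filesystem, and unchanged data costs no extra space.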


On Sat, Oct 18, 2008 at 1:56 PM, Ares Drake <[EMAIL PROTECTED]>wrote:

> Greetings.
>
> I am currently looking into setting up a better backup solution for our
> family.
>
> I own a ZFS Fileserver with a 5x500GB raidz. I want to back up data (not
> the OS itself) from multiple PCs running linux oder windowsXP. The linux
> boxes are connected via 1000Mbit, the windows machines either via
> gigabit as well or 54Mbit WPA encrypted WLAN. So far i've set up sharing
> via NFS on the Solaris box and it works well from both Linux and Windows
> (via SFU).
>
> I am looking for a solution to do incremental backups without wasting
> space on the fileserver and I want to be able to access a single file in
> the backup in different versions without much hassle. I think it can be
> done easily with ZFS and Snapshots?
>
> What would be good ways to get the files to the fileserver? For linux I
> thought of using rsync to sync the files over, than do a snapshot to
> preserve that backup state. Would you recommend using rsync with NFS or
> over ssh? (I assume the network is safe enough for our needs.) Are there
> better alternatives?
>
> How to best get the data from the Windows machines to the Solaris box?
> Just copying them over by hand would not delete files on the fileserver
> in case some files are deleted on the windows box in between different
> backups. Using rsync on windows is only possible with cygwin emulation.
> Maybe there are better methods?
>
>
> Anyone have a similar setup, recommendations, or maybe something I could
> use as an idea?
>
> Thanks in advance,
>
> A. Drake
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Pool corruption avoidance

2008-10-18 Thread Ahmed Kamal
Hi,
Unfortunately, every now and then someone ends up with a corrupt zpool, with
no tools to fix it! This is due either to zfs bugs or to hardware lying about
whether the bits really hit the platters. I am evaluating what I should be
using for storing VMware ESX VM images (ext3 or zfs over NFS). I really,
really want zfs snapshots, but losing the pool is going to be a royal pain
for small businesses.

My questions are:
1- What are the best practices to avoid pool corruption, even if they incur
a performance hit ? (See the sketch below for the kind of thing I mean.)
2- I remember a suggested idea that zfs would iterate back in time when
mounting a zpool until it finds a fully written pool and uses that, thus
avoiding corruption. Is there an RFE for that yet ? I'd like to subscribe to
it, and I might even delay jumping on the zfs wagon until it has this
recovery feature!
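
For question 1, the kind of precautions I mean are along these lines (pool
name and devices are placeholders; this is just what I'm considering, not a
recommendation):

  # pool-level redundancy, so checksum errors can actually be repaired
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  # periodic scrubs to surface latent errors early, e.g. weekly from cron
  0 3 * * 0 /usr/sbin/zpool scrub tank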

Regards
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Distro Advice

2013-02-27 Thread Ahmed Kamal
How is the quality of the ZFS Linux port today? Is it comparable to Illumos
or at least FreeBSD ? Can I trust production data to it ?


On Wed, Feb 27, 2013 at 5:22 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Tue, 26 Feb 2013, Gary Driggs wrote:
>
>  On Feb 26, 2013, at 12:44 AM, "Sašo Kiselkov" wrote:
>>
>>   I'd also recommend that you go and subscribe to
>> z...@lists.illumos.org, since this list is going to get shut
>>   down by Oracle next month.
>>
>>
>> Whose description still reads, "everything ZFS running on illumos-based
>> distributions."
>>
>
> Even FreeBSD's zfs is now based on zfs from Illumos.  FreeBSD and Linux
> zfs developers contribute fixes back to zfs in Illumos.
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss