Re: Anyone using freebsd ZFS for large storage servers?

2012-06-02 Thread Wojciech Puchar

When I say fast, I mean I have already done some benchmarks with iozone, and
made some graphs to see what the performance is.

What I can say is that it goes a lot faster than the H700 with 12 x 600 GB 15k rpm disks.


I asked if it is faster than a properly made UFS/gmirror/gstripe mix on the
same hardware.


And I ran those tests on FreeBSD with 12 disks, 24 disks, 36 disks and finally
48 disks.


That would be nice to see.


All I can say is that ZFS goes faster than 12 disks with the H700 (and ext3)
almost every time.


If you compare to ext3 then maybe it is faster. Compare it to UFS.


[...] can be controlled by settings in loader.conf.


Yes, but I think it's not a good idea to buy a server with 4 GB of RAM and
have it manage 100 TB through ZFS.


As for a file server, I don't see a reason to buy more.


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-02 Thread Wojciech Puchar

On the other hand, even on a single-disk pool, ZFS stores two copies of all
metadata, so the chances of actually losing a directory block are extremely
remote.  On mirrored or RAIDZ pools, you have at least four copies of all
metadata.
I can only wish you luck. Sometimes a lack of understanding makes
people happy.



Re: Anyone using freebsd ZFS for large storage servers?

2012-06-02 Thread Wojciech Puchar
I have another storage server named bd3 that has a RAIDz2 array of 2.5T 
drives (11 of them, IIRC) but it is presently powered down for maintenance.


It seems you don't need performance at all if you use RAIDZ1/2 and ZFS,
unless performance for you means how fast a 1 GB file is read linearly.



Re: Anyone using freebsd ZFS for large storage servers?

2012-06-02 Thread Simon

This thread confused me. Is the conclusion of this thread that ZFS is slow and
breaks beyond recovery? I keep seeing two sides to this coin. I can't decide
whether to use ZFS or hardware RAID. Why does EMC use hardware RAID?

-Simon






Re: Anyone using freebsd ZFS for large storage servers?

2012-06-02 Thread Modulok
 This thread confused me. Is the conclusion of this thread that ZFS is slow
 and breaks beyond recovery?

I've personally experienced no problems with ZFS. The performance has been on
par with UFS as far as I can tell. Sometimes it's a little faster, sometimes a
little slower depending on the situation, but nothing dramatic on either end.

-Modulok-


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-02 Thread Daniel Staal

--As of June 2, 2012 6:32:39 PM -0400, Simon is alleged to have said:


This thread confused me. Is the conclusion of this thread that ZFS is
slow and breaks beyond recovery? I keep seeing two sides to this coin. I
can't decide whether to use ZFS or hardware RAID. Why does EMC use
hardware RAID?


--As for the rest, it is mine.

It appears to be the conclusion of Wojciech Puchar that ZFS is slow, and 
breaks beyond recovery.  The rest of us don't appear to have issues.


I will agree that ZFS could use a good worst-case scenario 'fsck'-like
tool.  However, between home and work (where it's used on Solaris),
the only time I've ever been in a situation where it would be needed was
when I was playing with the disks in several low-level tools; the situation
was entirely self-inflicted, and would have caused major trouble for any
file system.  (If I'd been storing data on it, I would have needed to go to
backups.  Again, this would have been the case for any file system.)


ZFS can be a complicated beast: it's not the best choice for a single
small disk.  It may take tuning to work to its full potential, and it's
fairly resource-intensive.  However, for large storage sets there is no
other file system out there at the moment that's as flexible, or as useful,
in my opinion.


Daniel T. Staal

---
This email copyright the author.  Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes.  This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-02 Thread Michael Sierchio
On Sat, Jun 2, 2012 at 7:44 PM, Daniel Staal dst...@usa.net wrote:

 I will agree that ZFS could use a good worst-case scenario 'fsck'-like tool.

Worst-case scenario?  That's when fsck doesn't work.  Quickly followed
by a sinking feeling.

 ZFS can be a complicated beast: it's not the best choice for a single
 small disk.  It may take tuning to work to its full potential, and it's
 fairly resource-intensive.  However, for large storage sets there is no
 other file system out there at the moment that's as flexible, or as useful,
 in my opinion.

I don't even see the point of using it as a root drive.  But this
thread is about large file servers,  and I wouldn't seriously consider
using anything but ZFS.

NO filesystem has a mean time to data loss of infinity.  If your disk
traffic is primarily uncacheable random reads, you might be better off
with mirrored disks.  I guess that's what the traffic is like at the
internet cafe where Wojciech serves coffee. ;-) I tend to use RAIDZ-2
or RAIDZ-3 for most large installations.


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Albert Shih
 On 31/05/2012 at 11:32:33 -0400, Oscar Hodgson wrote:

 The subject is pretty much the question.  Perhaps there's a better
 place to be asking this question ...
 
 We have (very briefly) discussed the possibility of using FreeBSD
 pizza boxes as a storage heads direct attached to external JBOD arrays
 with ZFS.  In perusing the list, I haven't stumbled across indications
 of people actually doing this.  External JBODs would be running 24 to
 48TB each, roughly.  There would be a couple of units.  The pizza
 boxes would be used for computational tasks, and nominally would have
 8 cores and 96G+ RAM.
 
I have a Dell R610 + 48 GB RAM, 2 x 6-core CPUs + 4 x MD1200 (36 x 3TB + 12 x 2TB)

[root@filer ~]# zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
filer    119T  35,4T  83,9T    29%  1.00x  ONLINE  -
[root@filer ~]# 

It works very well (I can't say I have long experience, because the server has
only been up for 4 months).

ZFS is very good: easy to manage, very fast.

There are two drawbacks IMHO:

Eats a lot of RAM

Cannot synchronize two zpools automatically the way HammerFS can
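
(One-way synchronization can at least be scripted with snapshots and zfs
send/receive; a rough sketch, where the standby host and the snapshot
names are hypothetical:)

zfs snapshot -r filer@sync1
zfs send -R filer@sync1 | ssh standby zfs receive -dF backup
# next run: send only what changed since the previous snapshot
zfs snapshot -r filer@sync2
zfs send -R -i filer@sync1 filer@sync2 | ssh standby zfs receive -dF backup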

Regards.

JAS
-- 
Albert SHIH
DIO bâtiment 15
Observatoire de Paris
5 Place Jules Janssen
92195 Meudon Cedex
Téléphone : 01 45 07 76 26/06 86 69 95 71
xmpp: j...@jabber.obspm.fr
Heure local/Local time:
ven 1 jui 2012 07:17:47 CEST


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Wojciech Puchar

48TB each, roughly.  There would be a couple of units.  The pizza
boxes would be used for computational tasks, and nominally would have
8 cores and 96G+ RAM.

Obvious questions are hardware compatibility and stability.  I've set
up small FreeBSD 9 machines with ZFS roots and simple mirrors for
other tasks here, and those have been successful so far.

Observations would be appreciated.

Your idea of using the disks JBOD-style (no hardware RAID) is good, but the
idea of using ZFS is bad.



I would recommend you do some real performance testing of ZFS on any
config under real load (a workload that doesn't fit in cache, with many
different things being done by many users/programs) and compare it to a
PROPERLY done UFS config on the same hardware (with the help of gmirror/gstripe).


If you get a better result, you certainly didn't configure the latter
case (UFS, gmirror, gstripe) properly :)


In spite of the large-scale hype and promotion of this free software (which by
itself should be a red alert for you), I strongly recommend staying away from it.


And definitely do not use it if you will not have regular backups of all
data, as in case of failures (yes, they do happen) you will just have no
chance to repair it.


There is NO fsck_zfs! And ZFS is promoted as not needing one.

Assuming that a filesystem doesn't need an offline check utility
because it never crashes is funny.


On the other hand, I have never heard of a UFS filesystem failure that was
not the result of a physical disk failure and that resulted in bad damage.
In the worst case some files or one/a few subdirectories landed in
lost+found, and some recently (minutes at most) written data was missing.



If you still would like to use it, do not forget that it uses many times
more CPU power than UFS to handle the filesystem, leaving less for the
computation you want to do.


As for memory, you may limit its memory (ab)use by adding the proper
statements to loader.conf, but it still uses an enormous amount of it.
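
For example, a minimal loader.conf sketch (the 4G cap is only an
illustration; pick a value that suits your workload):

# /boot/loader.conf -- cap the ZFS ARC so RAM is left for your computation
vfs.zfs.arc_max="4G"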


With 96GB it may not be a problem for you, or it may; it depends how much
memory you need for the computation.




If you need help properly configuring large storage with UFS and the
gmirror/gstripe tools, feel free to ask.
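
(For a concrete idea, here is a minimal RAID10-style sketch; the da0-da3
device names are hypothetical:)

# load the GEOM classes at boot: geom_mirror_load="YES", geom_stripe_load="YES"
gmirror label -v m0 /dev/da0 /dev/da1               # first mirror pair
gmirror label -v m1 /dev/da2 /dev/da3               # second mirror pair
gstripe label -v st0 /dev/mirror/m0 /dev/mirror/m1  # stripe across the mirrors
newfs -U /dev/stripe/st0                            # UFS2 with soft updates
mount /dev/stripe/st0 /storage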



Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Wojciech Puchar


I am also in charge of redesigning one of our virtual SANs into a
FreeBSD ZFS storage system which will run... well, how many JBODs can
you fit on the system? Probably around 100 TB or so.


Quite a bit more, without buying overpriced things.


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Wojciech Puchar
I'm not using as huge a dataset, but I was seeing this behavior as well when 
I first set my box up.  What was happening was that ZFS was caching *lots* of 
writes, and then would dump them all to disk at once, during which time the 
computer was completely occupied with the disk I/O.


The solution (suggested from http://wiki.freebsd.org/ZFSTuningGuide) for me 
was:

vfs.zfs.txg.timeout=5


Both the problem and the solution are very close to Linux-style ext2/3/4 and
its behaviour, and one of the main reasons for moving away from that s..t to
FreeBSD (the other was networking).


UFS writes out complete MAXBSIZE-sized chunks quickly.


All of that behaviour of Linux (and probably ZFS) exists because it often
gives better results in benchmarks, and people love synthetic benchmarks.



Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Kaya Saman

 And definitely do not use it if you will not have regular backups of all
 data, as in case of failures (yes, they do happen) you will just have no
 chance to repair it.

 There is NO fsck_zfs! And ZFS is promoted as not needing one.

 Assuming that a filesystem doesn't need an offline check utility
 because it never crashes is funny.


zfs scrub...???

Additionally, ZFS works directly at the block level of the HD, meaning
it is slightly different from 'normal' file systems in how it stores
information, and it is also self-healing.
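
(For reference, scrubbing is driven through zpool(8); a minimal example,
assuming a pool named tank:)

zpool scrub tank       # walk every block and verify its checksum
zpool status -v tank   # shows scrub progress and any errors repaired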


Though I'm sure that you knew all this and have found otherwise.


I mean, I haven't found any problem with it even after power failures
and such, and my machine has been up for nearly 3 years.


Regards,


Kaya


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Wojciech Puchar

Assuming that a filesystem doesn't need an offline check utility
because it never crashes is funny.



zfs scrub...???


Which, when started, means it crashes quickly?
Well... no.

Certainly, with computers that never have hardware faults, and assuming ZFS
doesn't have any software bugs, you may be right.


But in the real world you will be hardly punished some day ;)


 Additionally, ZFS works directly at the block level of the HD, meaning
 it is slightly different from 'normal' file systems in how it stores
 information, and it is also self-healing.


Don't other filesystems work at the block level too? If not, then at what
level?





Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Kaya Saman

 Additionally, ZFS works directly at the block level of the HD, meaning
 it is slightly different from 'normal' file systems in how it stores
 information, and it is also self-healing.


 Don't other filesystems work at the block level too? If not, then at what
 level?



It was my impression that ZFS doesn't actually format the disk: it
stores data as raw information on the hard disk directly rather than
using an actual file system structure as such.

That's what I was trying to get at with that statement. This is really
what made ZFS stand out over other types of file systems.


According to everything I have read, doing that actually means
faster I/O and ease of portability in case the disks need to be removed
from their current location and added elsewhere, without losing
information.


Unlike clunky hardware RAID systems, ZFS adds much more versatility too,
which of course, being at this depth of knowledge, you are aware of and
may even have a means to compare; however, I personally prefer it over
RAID, as RAID is rubbish: dealing with it every day, I am fed up of
creating non-dynamic arrays.


I cannot compare directly to the more advanced UFS2 techniques, but my
money would be with ZFS over RAID and LVM any day, and don't even give
me M$ systems; they would be out the window before being booted for the
first time.


Regards,

Kaya


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Daniel Feenberg




On Fri, 1 Jun 2012, Wojciech Puchar wrote:

 [...]



If the OP really intended to stripe disks with no parity or mirror for
ZFS, then that is probably a mistake. If the disks are /tmp, it might make
sense to stripe disks without parity, but there is no need for ZFS. The OP
did say JBOD, which to me means that each disk is a separate disk partition
with no striping or parity. Again, in that case I don't see any need for ZFS.

As for ZFS being dangerous, we have a score of drive-years with no loss of
data. The lack of fsck is considered in this intelligently written piece:


  http://www.osnews.com/story/22423/Should_ZFS_Have_a_fsck_Tool_

The link to the emotional posting by Jeff Bonwick is broken, but the
original is available at:


  http://mail.opensolaris.org/pipermail/zfs-discuss/2008-October/022324.html

daniel feenberg
nber


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Polytropon
On Fri, 1 Jun 2012 14:05:57 +0100, Kaya Saman wrote:
 It was my impression that ZFS doesn't actually format the disk as
 stores data as raw information on the hard disk directly rather then
 using an actual file system structure as such.

In worst... in ultra-worst, abysmal, unexpected, exceptional
and unbelievably narrow cases, when you don't have or can't
access a backup (which you should have even when using ZFS),
and you _need_ to do some forensic analysis on disks, ZFS
seems to be a worse solution than UFS. On ZFS, you can never
predict where the data will go. Add several disks to the
problem, plus a combination of striping and mirroring
mechanisms, and you will see that things start to become
complicated.

I do _not_ want to claim ZFS inferiority due to missing
backups, but there may be occasions where (performance
aside) the low-level file system aspects of UFS might be
superior to ZFS.




-- 
Polytropon
Magdeburg, Germany
Happy FreeBSD user since 4.0
Andra moi ennepe, Mousa, ...


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Wojciech Puchar

 It was my impression that ZFS doesn't actually format the disk [...]


Does any filesystem format a disk?
Disks are nowadays factory formatted.

A filesystem only writes its data and metadata onto it.

I really recommend you get some basic knowledge of how (any) filesystem
works.


THEN please discuss things.


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Wojciech Puchar

and unbelievably narrow cases, when you don't have or can't
access a backup (which you should have even when using ZFS),
and you _need_ to do some forensic analysis on disks, ZFS
seems to be a worse solution than UFS. On ZFS, you can never
predict where the data will go. Add several disks to [...]


True. In UFS, for example, inodes are at a known place, and a flat
structure is used instead of a tree.




Even if some sectors are overwritten with garbage, fsck can scan over the
inodes and recover everything that can be recovered.
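
(Concretely, a forced full check looks something like this; the device
name is only an example:)

fsck_ffs -f -y /dev/mirror/m0   # -f: check even if marked clean, -y: answer yes
# orphaned files and directories get reattached under /lost+found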



ZFS in that regard is somehow similar to the Amiga Fast File System: when
you overwrite a directory block (through a hardware fault, for example),
everything below that directory will disappear. You may not even be aware
of it until you need that data.


Only separate software (which, contrary to ZFS, does exist) can recover
things by linearly scanning the whole disk. Terribly slow, but at least
possible.




EVEN FAT16/FAT32 IS SAFER.


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Michael Sierchio
On Fri, Jun 1, 2012 at 7:35 AM, Polytropon free...@edvax.de wrote:

 I do _not_ want to claim ZFS inferiority due to missing
 backups, but there may be occasions where (performance
 aside) the low-level file system aspects of UFS might be
 superior to ZFS.

If you have an operational need for offsite backups, that doesn't
change no matter how much redundancy you have in a single location.
Backups are still necessary.

But when RAIDed, ZFS has features that make it superior to hardware
RAID - copy-on-write, block deduplication, etc.  Like UFS2, it
supports snapshots - but a lot more of them.
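
(A quick sketch of how cheap they are; tank/home is a hypothetical dataset:)

zfs snapshot tank/home@before-upgrade   # instant, copy-on-write
zfs list -t snapshot                    # space is consumed only as blocks diverge
zfs rollback tank/home@before-upgrade   # discard everything written since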

Another performance criterion that is important to me is mirror (or
raidz) recovery - how long does mirror catch-up take when you replace
a disk, and how badly does it degrade performance for other data
operations?  Software RAID, esp. gmirror, tends to do poorly here.  My
experience is that ZFS resilver recovery had less of an impact.
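
(The ZFS side of that operation is two commands; the device names are
hypothetical:)

zpool replace tank da3 da8   # swap failed da3 for new da8; resilver starts automatically
zpool status tank            # reports resilver progress and an estimated time to go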

YMMV.


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Wojciech Puchar
As for ZFS being dangerous, we have a score of drive-years with no loss of 
data. The lack of fsck is considered in this intelligently written piece


You are just lucky.

Before I start using anything new in as important a part as the
filesystem, I do extreme tests: simulating hardware faults, random
overwrites, etc.


I did this for ZFS more than once, and it fails miserably, ending with an
unrecoverable filesystem that, at best, is missing the data in some
subdirectory, and at worst crashes at mount and is inaccessible
forever.


Under FFS the worst thing I can get is the loss of the overwritten data
only. Overwritten inode - lost file. Overwritten data blocks - overwritten
files. Nothing more!



What I haven't even talked about is ZFS performance, which is just
terribly bad, except in a few special cases when it is slightly faster
than UFS+softupdates.


It is even worse with a RAID-5 style layout, which ZFS claims to do better
with RAID-Z.


'Better' = the random read performance of a single drive.


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Michael Sierchio
On Fri, Jun 1, 2012 at 8:16 AM, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:

 'Better' = the random read performance of a single drive.

What an entirely useless performance measure!  Maybe you should
restrict yourself to using SSDs, which have rather unbeatable random
read performance - the spindle speed is really high. ;-)


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Michael Sierchio
On Fri, Jun 1, 2012 at 8:08 AM, Wojciech Puchar
woj...@wojtek.tensor.gdynia.pl wrote:

 ZFS in that regard is somehow similar to the Amiga Fast File System: when
 you overwrite a directory block (through a hardware fault, for example),
 everything below that directory will disappear. You may not even be aware
 of it until you need that data.

 Only separate software (which, contrary to ZFS, does exist) can recover
 things by linearly scanning the whole disk. Terribly slow, but at least
 possible.



 EVEN FAT16/FAT32 IS SAFER.

First of all, in any environment you expect disk failures.  Which
operationally means replacing the entire disk.  Then you rely on the
raid recovery mechanism (in whichever flavor of disk discipline you
choose).  ZFS semantics (copy on write, for example) are much safer
than UFS semantics.  This is not to say that UFS is not a more mature
and possibly robust filesystem.  But relying on gmirror, graid, etc.
means you are no longer relying solely on the robustness of the
underlying filesystem - you cannot offer a reduction proof that shows
that if gmirror is bad, it means UFS is bad.

I use UFS for most purposes, but would never build a large fileserver
using gmirror on UFS.

Your assertions about the dangers of ZFS are just that - assertions.
They are not borne out in reality.

- M


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Oscar Hodgson
Albert,

What are you using for an HBA in the Dell?

On Fri, Jun 1, 2012 at 1:23 AM, Albert Shih albert.s...@obspm.fr wrote:
 I have a Dell R610 + 48 GB RAM, 2 x 6-core CPUs + 4 x MD1200 (36 x 3TB + 12 x 2TB)

 [...]


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Dan Nelson
In the last episode (Jun 01), Wojciech Puchar said:
  [...]

 ZFS in that regard is somehow similar to the Amiga Fast File System: when
 you overwrite a directory block (through a hardware fault, for example),
 everything below that directory will disappear.  You may not even be aware
 of it until you need that data.

On the other hand, even on a single-disk pool, ZFS stores two copies of all
metadata, so the chances of actually losing a directory block are extremely
remote.  On mirrored or RAIDZ pools, you have at least four copies of all
metadata.
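
(The same ditto-block idea can be extended to file data per dataset, for
what it's worth; the dataset name is hypothetical:)

zfs set copies=2 tank/important   # two copies of every data block, even on one disk
zfs get copies tank/important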

-- 
Dan Nelson
dnel...@allantgroup.com


Re: Anyone using freebsd ZFS for large storage servers?

2012-06-01 Thread Anonymous
 Certainly, with computers that never have hardware faults, and assuming ZFS
 doesn't have any software bugs, you may be right.

That was part of their assumption. It's based on server-grade hardware and
ECC RAM, and lots of redundancy.

They missed the part about their code not being perfect.

 But in the real world you will be hardly punished some day ;)

Yep, big time. Hardly as in hard, not as in barely.



Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Oscar Hodgson
The subject is pretty much the question.  Perhaps there's a better
place to be asking this question ...

We have (very briefly) discussed the possibility of using FreeBSD
pizza boxes as storage heads direct-attached to external JBOD arrays
with ZFS.  In perusing the list, I haven't stumbled across indications
of people actually doing this.  External JBODs would be running 24 to
48TB each, roughly.  There would be a couple of units.  The pizza
boxes would be used for computational tasks, and nominally would have
8 cores and 96G+ RAM.

Obvious questions are hardware compatibility and stability.  I've set
up small FreeBSD 9 machines with ZFS roots and simple mirrors for
other tasks here, and those have been successful so far.

Observations would be appreciated.

Oscar.


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Kaya Saman
If this is any consolation, I run a 36TB cluster using a self-built
server with a Promise DAS (VessJBOD 1840) using ZFS at home(!) to
support my open-source projects and personal files.

As for the OS, take your pick: NexentaStor, FreeBSD, Solaris 11.


All capable; of course Solaris has the latest version of ZFS, but still.


At work we're looking into getting a StorEdge appliance which will
handle up to 140+ TB.


I am also in charge of redesigning one of our virtual SANs into a
FreeBSD ZFS storage system which will run... well, how many JBODs can
you fit on the system? Probably around 100 TB or so.


Regards,


Kaya


On Thu, May 31, 2012 at 4:32 PM, Oscar Hodgson oscar.hodg...@gmail.com wrote:
 The subject is pretty much the question.  Perhaps there's a better
 place to be asking this question ...

 We have (very briefly) discussed the possibility of using FreeBSD
 pizza boxes as a storage heads direct attached to external JBOD arrays
 with ZFS.  In perusing the list, I haven't stumbled across indications
 of people actually doing this.  External JBODs would be running 24 to
 48TB each, roughly.  There would be a couple of units.  The pizza
 boxes would be used for computational tasks, and nominally would have
 8 cores and 96G+ RAM.

 Obvious questions are hardware compatibility and stability.  I've set
 up small FreeBSD 9 machines with ZFS roots and simple mirrors for
 other tasks here, and those have been successful so far.

 Observations would be appreciated.

 Oscar.
 ___
 freebsd-questions@freebsd.org mailing list
 http://lists.freebsd.org/mailman/listinfo/freebsd-questions
 To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Oscar Hodgson
That helps.  Thank you.

This is an academic departmental instructional / research environment.
 We had a great relationship with Sun, they provided great
opportunities to put Solaris in front of students.  Oracle, not so
much, and the Oracle single-tier support model simply isn't affordable
for this business (there's no ROI at the departmental level <g>).
Solaris is not a viable option.

FreeBSD looks like the next best available option at the moment,
particularly considering the use of the storage heads as compute
machines.  OpenIndiana shows promise.  Nexenta has a great product,
but the user community expects more flexibility in software options.

Is there anything like a list of supported (known good) SAS HBA's?

Oscar

On Thu, May 31, 2012 at 11:38 AM, Kaya Saman kayasa...@gmail.com wrote:
 If this is any consolation, I run a 36TB cluster using a self-built
 server with a Promise DAS (VessJBOD 1840) using ZFS at home(!) to
 support my open-source projects and personal files.

 [...]


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Kaya Saman
On Thu, May 31, 2012 at 5:05 PM, Oscar Hodgson oscar.hodg...@gmail.com wrote:
 That helps.  Thank you.

 This is an academic departmental instructional / research environment.
  We had a great relationship with Sun, they provided great
 opportunities to put Solaris in front of students.  Oracle, not so
 much, and the Oracle single-tier support model simply isn't affordable
 for this business (there's no ROI at the departmental level <g>).
 Solaris is not a viable option.

We found Oracle to be the cheapest out of all the solutions we looked
at: Netapp, MSI, et al.


 FreeBSD looks like the next best available option at the moment,
 particularly considering the use of the storage heads as compute
 machines.  OpenIndiana shows promise.  Nexenta has a great product,
 but the user community expects more flexibility in software options.

FreeBSD is better than Linux in my opinion, though it lacks some
software and multimedia functionality that Linux has, and it's not for
the desktop as it's not as bleeding-edge as, say, Fedora 16; however, if
FreeBSD offered Gnome3 and supported my wireless NIC I'd be all over
it like a bad rash :-)


 Is there anything like a list of supported (known good) SAS HBA's?

LSI HBA's are really good!

For my DIY solution at home I used a SuperMicro system board with
non-RAID LSI HBA...

It is a similar solution to the one we will use for our test NAS at work,
though we already have a Dell R700 series server. For this setup,
however, I will need to use an LSI HBA with both internal and external
Mini-SAS ports.

Instead of Promise we will use NetStor JBOD solutions as they work
with 6Gbps drives and overall give better performance.


 Oscar

Regards,


Kaya


 On Thu, May 31, 2012 at 11:38 AM, Kaya Saman kayasa...@gmail.com wrote:
  [...]


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Damien Fleuriot
As a side note and in case you were considering, I strongly advise against 
Linux + fuse ZFS.


On 31 May 2012, at 18:05, Oscar Hodgson oscar.hodg...@gmail.com wrote:

 That helps.  Thank you.

 [...]


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Kaya Saman
On Thu, May 31, 2012 at 6:28 PM, Damien Fleuriot m...@my.gd wrote:
 As a side note and in case you were considering, I strongly advise against 
 Linux + fuse ZFS.


Yes, I agree; as far as I understand, ZFS on Linux is still in testing,
and in any case not part of the Linux kernel, which means dramatic
performance degradation, like trying to use FireWire (IEEE 1394) on
anything other than a Mac.


Regards,

Kaya


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Mark Felder
I'm doing this with HP heads, LSI SAS adapters, and  
http://www.dataonstorage.com/ JBODs.


Note: the DataOn JBODs are very, very hard to get right now because these  
are really rebadged LSI devices and LSI sold this division to NetApp, who  
promptly shut it down to prevent people like us from making these types of  
storage backends. I don't know of anyone else who has stepped up to build  
similar devices using LSI parts.


http://www.netapp.com/us/company/news/news-rel-20110509-263500.html


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Dennis Glatting



On Thu, 31 May 2012, Oscar Hodgson wrote:


The subject is pretty much the question.  Perhaps there's a better
place to be asking this question ...

[...]




mc:

real memory  = 120259084288 (114688 MB)
FreeBSD/SMP: Multiprocessor System Detected: 64 CPUs
FreeBSD/SMP: 4 package(s) x 16 core(s)

mc# zpool list
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
disk-1   14.5T  4.95T  9.55T    34%  1.00x  ONLINE  -
disk-2    270G   297M   270G     0%  1.00x  ONLINE  -

disk-1, RAIDz1, uses Hitachi 4TB drives.



iirc:

real memory  = 68719476736 (65536 MB)
FreeBSD/SMP: Multiprocessor System Detected: 32 CPUs
FreeBSD/SMP: 2 package(s) x 16 core(s)

iirc# zpool list
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
disk-1   18.1T  6.70T  11.4T    36%  1.00x  ONLINE  -
disk-2   5.44T  3.05G  5.43T     0%  1.00x  ONLINE  -

disk-1, RAIDz1, uses a bunch of 2TB drives


I have another storage server named bd3 that has a RAIDz2 array of 2.5T 
drives (11 of them, IIRC) but it is presently powered down for 
maintenance.



btw:

real memory  = 25769803776 (24576 MB)
FreeBSD/SMP: Multiprocessor System Detected: 12 CPUs
FreeBSD/SMP: 1 package(s) x 6 core(s) x 2 SMT threads

btw# zpool list
NAME      SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
disk-1   9.06T  97.3G  8.97T     1%  1.00x  ONLINE  -
disk-2   9.06T  5.13T  3.93T    56%  1.00x  ONLINE  -

Those are smaller RAIDz1 arrays of 1TB and 2TB drives, IIRC.


I also have three other systems, overclocked to 4GHz with 16GB of RAM and
presently powered off, each with 3 or 4 2TB disks in RAIDz1.



None of these systems have external arrays. The storage systems use common 
technologies, such as NFS, to export their space but their primary mission 
is manipulating (sort-of) big data and crypto attacks, though one is being 
converted to a Hadoop node for experimentation.


I have only had four issues over the past year and a half:

1) It is important to keep your ZFS patches up to date and the firmware in
your controllers up to date. Failure to do this results in a :(


2) Under heavy I/O my systems freeze for a few seconds. I haven't looked
into why, but they are completely unresponsive. Note I am also using
compressed volumes (gzip), which puts a substantial load on the kernel
(see the sketch after this list).


3) I have had a number of disk failures -- not too many and not too few. 
These are merely an annoyance with no loss of data.


4) In two systems I use OCZ Revo drives. After several months of operation
they go Tango Uniform, requiring a reboot, after which they return from the
dead. None of my other SSD technologies exhibit the same problem.
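
(Re item 2: compression is a per-dataset property, so the gzip cost can be
confined to the datasets that need it; tank/data is a hypothetical name:)

zfs set compression=gzip tank/data   # gzip trades CPU for space; lzjb is much lighter
zfs get compressratio tank/data      # shows the ratio actually achieved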









Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Dennis Glatting



On Thu, 31 May 2012, Oscar Hodgson wrote:


That helps.  Thank you.

[...]

Is there anything like a list of supported (known good) SAS HBA's?



Most of my HBAs are LSI controllers flashed to IT mode. I'm fond of the 9211.






Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Dennis Glatting



On Thu, 31 May 2012, Kaya Saman wrote:


On Thu, May 31, 2012 at 5:05 PM, Oscar Hodgson oscar.hodg...@gmail.com wrote:

[...]

Is there anything like a list of supported (known good) SAS HBA's?


LSI HBA's are really good!

For my DIY solution at home I used a SuperMicro system board with
non-RAID LSI HBA...



Similarly:

mc = Tyan S8812WGM3NR
iirc = Supermicro H8DGi
bd3 = Soon another Supermicro H8DGi

Others are consumer boards from Gigabyte (preferred).

I also have a small collection of Supermicro AOC-USAS2-L8i boards. 
Generally, I have had no trouble but ESXi 5.0 hated them.


For work I looked at two Supermicro 848A chassis with a H8QGL board and 20 
3TB disks for two different projects, but they lie in limbo.




[...]


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Oscar Hodgson
The thought never crossed my mind.

On Thu, May 31, 2012 at 1:28 PM, Damien Fleuriot m...@my.gd wrote:
 As a side note and in case you were considering, I strongly advise against 
 Linux + fuse ZFS.


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Daniel Staal
--As of May 31, 2012 11:24:41 AM -0700, Dennis Glatting is alleged to have 
said:



 2) Under heavy I/O my systems freeze for a few seconds. I haven't looked
 into why, but they are completely unresponsive. Note I am also using
 compressed volumes (gzip), which puts a substantial load on the kernel.


--As for the rest, it is mine.

I'm not using as huge a dataset, but I was seeing this behavior as well 
when I first set my box up.  What was happening was that ZFS was caching 
*lots* of writes, and then would dump them all to disk at once, during 
which time the computer was completely occupied with the disk I/O.


The solution (suggested from http://wiki.freebsd.org/ZFSTuningGuide) for 
me was:

vfs.zfs.txg.timeout=5

in loader.conf.  That only allows it to cache writes for 5 seconds, instead 
of the default 30.  This appears to be the default in the latest versions 
of FreeBSD, so if you are running an upgraded 9, ignore me.  ;)  (But check 
the page linked above: There are other suggestions to try.)


Daniel T. Staal

---
This email copyright the author.  Unless otherwise noted, you
are expressly allowed to retransmit, quote, or otherwise use
the contents for non-commercial purposes.  This copyright will
expire 5 years after the author's death, or in 30 years,
whichever is longer, unless such a period is in excess of
local copyright law.
---


Re: Anyone using freebsd ZFS for large storage servers?

2012-05-31 Thread Dennis Glatting
On Thu, 2012-05-31 at 19:27 -0400, Daniel Staal wrote:
 --As of May 31, 2012 11:24:41 AM -0700, Dennis Glatting is alleged to have
 said:

  [...]

 The solution (suggested from http://wiki.freebsd.org/ZFSTuningGuide) for
 me was:
 vfs.zfs.txg.timeout=5
 

Was already set:

mc# sysctl vfs.zfs.txg.timeout
vfs.zfs.txg.timeout: 5



 [...]

