Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-22 Thread Steffen Weiberle

On 09/18/09 14:34, Jeremy Kister wrote:

On 9/18/2009 1:51 PM, Steffen Weiberle wrote:

I am trying to compile some deployment scenarios of ZFS.

# of systems


Do ZFS root pools count, or only big pools?


Non-root is more interesting to me. However, if you are sharing the root
pool with your data, what you are running application-wise is still of
interest.





amount of storage


Raw, or after parity?


Either, and it's great if you indicate which.






Thanks for all the private responses. I am still compiling and cleansing 
them, and will summarize when I get their OKs!


Steffen


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-22 Thread Jeremy Kister

On 9/18/2009 1:51 PM, Steffen Weiberle wrote:

# of systems


6, not including dozens of ZFS roots.


amount of storage


(a) 2 of them have 96TB raw:
46 WD SATA 2TB disks in two raidz2 pools + 2 hot spares;
each raidz2 pool is on its own shelf, on its own PCI-X controller

(b) 2 of them have 268GB raw:
26 HP 300GB SCA disks with mirroring + 2 hot spares,
soon to be 3-way mirrored;
each shelf of 14 disks is connected to its own U320 PCI-X card

(c) 2 of them have 14TB raw:
14 Dell SATA 1TB disks in two raidz2 pools + 1 hot spare
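
For anyone mapping those descriptions to commands, here is a minimal
sketch of layouts like (a) and (b). The device names (c1t0d0 and so on)
and disk counts are hypothetical placeholders trimmed for brevity, not
the poster's actual configuration:

  # Sketch only -- hypothetical devices, fewer disks than the real pools.
  # (a) one pool built from two raidz2 vdevs plus two hot spares:
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      spare  c3t0d0 c3t1d0

  # (b) growing a 2-way mirror to 3-way: attach a third disk alongside
  # an existing member of each mirror vdev.
  zpool attach dbpool c4t0d0 c5t0d0

  zpool status tank    # verify vdev layout and spare assignment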


application profile(s)


(a) and (c) are file servers via NFS
(b) are Postgres database servers

type of workload (low, high; random, sequential; read-only, read-write, 
write-only)


(a) are 70/30 read/write @ average of 40MB/s
30 clients
(b) are 50/50 read/write @ average of 180MB/s
local read/write only
(c) are 70/30 read/write @ average of 28MB/s
10 clients


storage type(s)


(a) and (c) are SATA
(b) are U320 SCSI


industry


call analytics


whether it is private or I can share in a summary


not private.


anything else that might be of interest


"Because" makes any explanation seem rational. In a line for a Kinko's
copy machine, a researcher asked to jump the queue by presenting a
reason: "Can I jump the line, because I am in a rush?" 94% of people
complied. Good reason, right? Okay, let's change the reason: "Can I
jump the line, because I need to make copies?" Excuse me? That's why
everybody is in the line to begin with. Yet 93% of people complied. A
request without "because" in it ("Can I jump the line, please?")
generated only 24% compliance.



--

Jeremy Kister
http://jeremy.kister.net./


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-22 Thread Jeremy Kister

On 9/22/2009 1:55 PM, Jeremy Kister wrote:

(b) 2 of them have 268GB raw
 26 HP 300GB SCA disks with mirroring + 2 hot spares


28 * 300G = 8.2T.  Not 268G.

Math class is tough!
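
(For the arithmetic: 26 data disks + 2 spares = 28 drives, and
28 × 300 GB = 8,400 GB, which is about 8.2 TiB after dividing by 1,024.)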


--

Jeremy Kister
http://jeremy.kister.net./


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-21 Thread Gary Mills
On Fri, Sep 18, 2009 at 01:51:52PM -0400, Steffen Weiberle wrote:
 I am trying to compile some deployment scenarios of ZFS.
 
 # of systems

One, our e-mail server for the entire campus.

 amount of storage

2 TB, of which 58% is used.

 application profile(s)

This is our Cyrus IMAP spool.  In addition to users' e-mail folders
(directories) and messages (files), it contains global, per-folder,
and per-user databases.  The latter two types are quite small.

 type of workload (low, high; random, sequential; read-only, read-write, 
 write-only)

It's quite active.  Message files arrive randomly and are deleted
randomly.  As a result, files in a directory are not located in
proximity on the storage.  Individual users often read all of their
folders and messages in one IMAP session.  Databases are quite active.
Each incoming message adds a file to a directory and reads or updates
several databases.  Most IMAP I/O is done with mmap() rather than with
read()/write().  So far, IMAP performance is adequate.  The backup,
done by EMC Networker, is very slow because it must read thousands of
small files in directory order.

 storage type(s)

We are using an iSCSI SAN with storage on a NetApp filer.  It exports
four 500 GB LUNs that are striped into one ZFS pool.  All disk
management is done on the NetApp.  We have had several disk failures
and replacements on the NetApp, with no effect on the e-mail server.
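
Purely as an illustration (the device names below are invented, not
taken from this message), striping four LUNs into one pool is just a
matter of listing them with no raidz or mirror keyword:

  # Hypothetical LUN device names; with no vdev keyword, ZFS
  # dynamically stripes across all four.
  zpool create mailspool c4t0d0 c4t1d0 c4t2d0 c4t3d0
  zpool list mailspool   # should show roughly 2 TB of raw capacity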

 industry

A university with 35,000 enabled e-mail accounts.

 whether it is private or I can share in a summary
 anything else that might be of interest

You are welcome to share this information.

-- 
-Gary Mills--Unix Group--Computer and Network Services-


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-19 Thread Blake
On Fri, Sep 18, 2009 at 1:51 PM, Steffen Weiberle
steffen.weibe...@sun.com wrote:
 I am trying to compile some deployment scenarios of ZFS.

 # of systems
3

 amount of storage
10 TB on storage server (can scale to 30)

 application profile(s)
NFS and CIFS

 type of workload (low, high; random, sequential; read-only, read-write,
 write-only)
Boot drives, Nearline backup, Postgres DB (OpenNMS)

 storage type(s)
SATA

 industry
Software

 whether it is private or I can share in a summary
 anything else that might be of interest
You can share my info :)


 Thanks in advance!!

 Steffen


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-19 Thread Erik Trimble

In Eat-Your-Own-Dogfood mode:

Here in CSG at Sun (which covers mainly Java-related things):


Steffen Weiberle wrote:

I am trying to compile some deployment scenarios of ZFS.

If you are running ZFS in production, would you be willing to provide 
(publicly or privately)?


# of systems
All our central file servers plus the various Source Code repositories
(of particular note: http://hg.openjdk.java.net, which holds all of the
OpenJDK and related source code).  Approximately 20 major machines, plus
several dozen smaller ones.  And that's only what I know about (maybe
about 50% of the total organization).



amount of storage

45TB+ just on the fileservers and Source Code repos

application profile(s)
NFS servers, Mercurial Source Code Repositories, Teamware Source Code
Repositories, lightweight web databases (db, PostgreSQL, MySQL), web
TWikis, flat-file data profile storage, and centralized storage for
virtualized hosts.  Starting with roll-your-own VTLs.




type of workload (low, high; random, sequential; read-only, 
read-write, write-only)
NFS servers:  high load (100s of clients per server), random read &
write, mostly small files.
Hg & TW source code repos:  low load (only on putbacks), huge amounts of
small-file reads/writes (i.e. mostly random)

Testing apps:  mostly mid-size sequential writes
VTL (disk backups):  high load, streaming writes almost exclusively.
xVM systems:  moderate to high load, heavy random read, modest random write.


storage type(s)
Almost exclusively FC-attached SAN.  A few dedicated FC arrays
(STK2540 / STK6140), and the odd iSCSI thing here and there.  NFS
servers are pretty much all T2000s.  Source code repos are X4000-series
Opteron systems (usually X4200, X4140, or X4240).  Thumpers (X4500) are
scattered around, and the rest is a total mishmash of both Sun and
other hardware.



industry

Software development

whether it is private or I can share in a summary

I can't see any reason not to summarize.

anything else that might be of interest

Right now we're hardly using SSDs at all, and we unfortunately haven't
done much with the Amber Road storage devices (7000-series).  Our new
interest is using the Thumper/Thor (X4500 / X4540) machines as disk
backup devices: we're moving our backups to disk (i.e. client backups
go to disk first, then to tape as needed).  This is possible thanks to
ZFS.  We're replacing virtually all our VxFS systems with ZFS.
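
As an illustration only of that disk-first backup layout (the pool and
filesystem names are invented here, not Erik's), one ZFS filesystem per
backup client keeps quotas, compression, and retention separable:

  # Sketch with invented names; assumes a pool called "backups" exists.
  # One filesystem per backup client:
  zfs create backups/clients
  zfs create -o compression=on -o quota=500g backups/clients/hostA
  zfs create -o compression=on -o quota=500g backups/clients/hostB
  # A snapshot gives a stable image to stream to tape later.
  zfs snapshot backups/clients/hostA@nightly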


Also, the primary development build/test system depends heavily on ZFS
for storage, and will lean even more on it as we convert to xVM-based
virtualization.  I plan on using snapshots to radically reduce the disk
space required by multiple identical clients, and to make adding and
retiring clients simpler.  In the case of our Windows clients, I expect
ZFS snapshotting to let me automatically wipe the virtual client after
every test run, which is really nice considering the flakiness that
testing on Windows causes.
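
A minimal sketch of that snapshot workflow, with invented dataset names
(this is not Erik's actual setup): clone a golden image per client,
then roll back to a pristine snapshot after each test run:

  # Invented names throughout -- illustrating the clone/rollback idea.
  zfs snapshot vmpool/images/win-gold@pristine
  # Clones share unchanged blocks with the golden image, so N
  # identical clients cost far less than N full copies.
  zfs clone vmpool/images/win-gold@pristine vmpool/clients/test01
  zfs snapshot vmpool/clients/test01@clean
  # ... run the test, then discard everything the run changed:
  zfs rollback vmpool/clients/test01@clean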




Thanks in advance!!

Steffen


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



[zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-18 Thread Steffen Weiberle

I am trying to compile some deployment scenarios of ZFS.

If you are running ZFS in production, would you be willing to provide 
(publicly or privately)?


# of systems
amount of storage
application profile(s)
type of workload (low, high; random, sequential; read-only, read-write, 
write-only)

storage type(s)
industry
whether it is private or I can share in a summary
anything else that might be of interest

Thanks in advance!!

Steffen


Re: [zfs-discuss] If you have ZFS in production, willing to share some details (with me)?

2009-09-18 Thread Jeremy Kister

On 9/18/2009 1:51 PM, Steffen Weiberle wrote:

I am trying to compile some deployment scenarios of ZFS.

# of systems


Do ZFS root pools count, or only big pools?


amount of storage


Raw, or after parity?


--

Jeremy Kister
http://jeremy.kister.net./