Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-03-01 Thread Paul B. Henson
On Sat, 27 Feb 2010, Jens Elkner wrote:

> At least on S10u8 it's not that bad. The last time I patched and rebooted
> an X4500 with ~350 ZFS filesystems it took about 10 minutes to come up; an X4600
> with a 3510 and ~2350 ZFS filesystems took about 20 minutes (almost all are shared via NFS).

Our X4500s, with about 8000 filesystems each, take about 50-60 minutes to shut
down and about the same to boot up, resulting in roughly a 2-hour boot
cycle, which is kind of annoying. The lack of scalability is in the NFS
sharing; it only takes ~5 minutes to mount all 8000. I hope someday they'll
optimize that a bit...
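
For anyone who wants to see where the time actually goes, here is a minimal
sketch that times the mount phase and the NFS share phase separately. It is
untested here; it assumes root on the ZFS host and that sharenfs is already set
on the datasets, so try it on a test box first.

#!/usr/bin/env python
# Time the mount phase vs. the NFS share phase separately, to see which
# one dominates a long boot.
import subprocess
import time

def timed(cmd):
    # Run a command and return how long it took, in seconds.
    start = time.time()
    subprocess.check_call(cmd)
    return time.time() - start

if __name__ == "__main__":
    mount_secs = timed(["zfs", "mount", "-a"])   # mount every dataset
    share_secs = timed(["zfs", "share", "-a"])   # publish every sharenfs dataset
    print("mount -a: %.1f s, share -a: %.1f s" % (mount_secs, share_secs))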


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  hen...@csupomona.edu
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-28 Thread Orvar Korvar
Speaking of long boot times, I've heard that IBM Power servers can take 90
minutes or more to boot.


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-27 Thread Jens Elkner
On Fri, Feb 26, 2010 at 09:25:57PM -0700, Eric D. Mudama wrote:
...
> I agree with the above, but the best practices guide:
> 
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_file_service_for_SMB_.28CIFS.29_or_SAMBA
> 
> states in the SAMBA section that "Beware that mounting 1000s of file
> systems, will impact your boot time".  I'd say going from a 2-3 minute
> boot time to a 4+ hour boot time is more than just "impact".  That's
> getting hit by a train.

At least on S10u8 it's not that bad. The last time I patched and rebooted
an X4500 with ~350 ZFS filesystems it took about 10 minutes to come up; an X4600
with a 3510 and ~2350 ZFS filesystems took about 20 minutes (almost all are shared via NFS).
Shutting them down/unsharing them takes roughly the same time ...
On the X4600, creating or destroying a single ZFS filesystem (no matter which
pool, or how many filesystems belong to that pool!) takes about 20 seconds, and
renaming takes about 40 seconds ... that's really a pain ...
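
If you want to reproduce those numbers on your own hardware, a quick-and-dirty
timing sketch (the dataset name below is a placeholder; point it at a scratch
area on a test pool):

#!/usr/bin/env python
# Time zfs create / rename / destroy for a single throwaway dataset.
import subprocess
import time

DATASET = "tank/benchtest"          # placeholder, adjust to your pool
RENAMED = DATASET + "-renamed"

def timed(label, *cmd):
    start = time.time()
    subprocess.check_call(list(cmd))
    print("%-8s %.1f s" % (label, time.time() - start))

timed("create",  "zfs", "create", DATASET)
timed("rename",  "zfs", "rename", DATASET, RENAMED)
timed("destroy", "zfs", "destroy", RENAMED)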
  
Regards,
jel.
-- 
Otto-von-Guericke University http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany Tel: +49 391 67 12768


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-26 Thread Richard Elling
On Feb 26, 2010, at 8:59 PM, Richard Elling wrote:

> On Feb 26, 2010, at 8:25 PM, Eric D. Mudama wrote:
>> On Thu, Feb 25 at 20:21, Bob Friesenhahn wrote:
>>> On Thu, 25 Feb 2010, Alastair Neil wrote:
>>> 
>>>> I do not know and I don't think anyone would deploy a system in that way with UFS.
>>>> This is the model that is imposed in order to take full advantage of ZFS advanced
>>>> features such as snapshots, encryption and compression, and I know many universities
>>>> in particular are eager to adopt it for just that reason, but are stymied by this
>>>> problem.
>>> 
>>> It was not really a serious question but it was posed to make a point. 
>>> However, it would be interesting to know if there is another type of 
>>> filesystem (even on Linux or some other OS) which is able to reasonably and 
>>> efficiently support 16K mounted and exported file systems.
>>> 
>>> Eventually Solaris is likely to work much better for this than it does 
>>> today, but most likely there are higher priorities at the moment.
>> 
>> I agree with the above, but the best practices guide:
>> 
>> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_file_service_for_SMB_.28CIFS.29_or_SAMBA
>> 
>> states in the SAMBA section that "Beware that mounting 1000s of file
>> systems, will impact your boot time".  I'd say going from a 2-3 minute
>> boot time to a 4+ hour boot time is more than just "impact".  That's
>> getting hit by a train.

Perhaps someone who has a large enough SAMBA config could run a test
similar to the NFS setup described in
http://developers.sun.com/solaris/articles/nfs_zfs.html
(note the date: 2007).
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)






Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-26 Thread Alastair Neil
Ironically, it's NFS exporting that is the real hog; CIFS shares seem to come
up pretty fast. The fact that CIFS shares can be fast makes it hard for me
to understand why Sun/Oracle seem to be making such a meal of this bug.
Possibly because it only critically affects poor universities, and not
clients with the budget to throw hardware at the problem.

On Fri, Feb 26, 2010 at 11:59 PM, Richard Elling wrote:

> On Feb 26, 2010, at 8:25 PM, Eric D. Mudama wrote:
> > On Thu, Feb 25 at 20:21, Bob Friesenhahn wrote:
> >> On Thu, 25 Feb 2010, Alastair Neil wrote:
> >>
> >>> I do not know and I don't think anyone would deploy a system in that way with UFS.
> >>> This is the model that is imposed in order to take full advantage of ZFS advanced
> >>> features such as snapshots, encryption and compression, and I know many universities
> >>> in particular are eager to adopt it for just that reason, but are stymied by this
> >>> problem.
> >>
> >> It was not really a serious question but it was posed to make a point.
> However, it would be interesting to know if there is another type of
> filesystem (even on Linux or some other OS) which is able to reasonably and
> efficiently support 16K mounted and exported file systems.
> >>
> >> Eventually Solaris is likely to work much better for this than it does
> today, but most likely there are higher priorities at the moment.
> >
> > I agree with the above, but the best practices guide:
> >
> >
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_file_service_for_SMB_.28CIFS.29_or_SAMBA
> >
> > states in the SAMBA section that "Beware that mounting 1000s of file
> > systems, will impact your boot time".  I'd say going from a 2-3 minute
> > boot time to a 4+ hour boot time is more than just "impact".  That's
> > getting hit by a train.
>
> The shares are more troublesome than the mounts.
>
> >
> > Might be useful for folks, if the above document listed a few concrete
> > datapoints of boot time scaling with the number of filesystems or
> > something similar.
>
> Gory details and timings are available in the many references to CR 6850837
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6850837
>  -- richard
>
> ZFS storage and performance consulting at http://www.RichardElling.com
> ZFS training on deduplication, NexentaStor, and NAS performance
> http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)
>
>
>
>


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-26 Thread Richard Elling
On Feb 26, 2010, at 8:25 PM, Eric D. Mudama wrote:
> On Thu, Feb 25 at 20:21, Bob Friesenhahn wrote:
>> On Thu, 25 Feb 2010, Alastair Neil wrote:
>> 
>>> I do not know and I don't think anyone would deploy a system in that way with UFS.
>>> This is the model that is imposed in order to take full advantage of ZFS advanced
>>> features such as snapshots, encryption and compression, and I know many universities
>>> in particular are eager to adopt it for just that reason, but are stymied by this
>>> problem.
>> 
>> It was not really a serious question but it was posed to make a point. 
>> However, it would be interesting to know if there is another type of 
>> filesystem (even on Linux or some other OS) which is able to reasonably and 
>> efficiently support 16K mounted and exported file systems.
>> 
>> Eventually Solaris is likely to work much better for this than it does 
>> today, but most likely there are higher priorities at the moment.
> 
> I agree with the above, but the best practices guide:
> 
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_file_service_for_SMB_.28CIFS.29_or_SAMBA
> 
> states in the SAMBA section that "Beware that mounting 1000s of file
> systems, will impact your boot time".  I'd say going from a 2-3 minute
> boot time to a 4+ hour boot time is more than just "impact".  That's
> getting hit by a train.

The shares are more troublesome than the mounts.  

> 
> Might be useful for folks, if the above document listed a few concrete
> datapoints of boot time scaling with the number of filesystems or
> something similar.

Gory details and timings are available in the many references to CR 6850837
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6850837
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
http://nexenta-atlanta.eventbrite.com (March 16-18, 2010)






Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-26 Thread Eric D. Mudama

On Thu, Feb 25 at 20:21, Bob Friesenhahn wrote:

On Thu, 25 Feb 2010, Alastair Neil wrote:


I do not know and I don't think anyone would deploy a system in that way with UFS.
This is the model that is imposed in order to take full advantage of ZFS advanced
features such as snapshots, encryption and compression, and I know many universities
in particular are eager to adopt it for just that reason, but are stymied by this
problem.


It was not really a serious question but it was posed to make a 
point. However, it would be interesting to know if there is another 
type of filesystem (even on Linux or some other OS) which is able to 
reasonably and efficiently support 16K mounted and exported file 
systems.


Eventually Solaris is likely to work much better for this than it 
does today, but most likely there are higher priorities at the 
moment.


I agree with the above, but the best practices guide:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_file_service_for_SMB_.28CIFS.29_or_SAMBA

states in the SAMBA section that "Beware that mounting 1000s of file
systems, will impact your boot time".  I'd say going from a 2-3 minute
boot time to a 4+ hour boot time is more than just "impact".  That's
getting hit by a train.

Might be useful for folks if the above document listed a few concrete
datapoints of boot time scaling with the number of filesystems or
something similar.

--eric


--
Eric D. Mudama
edmud...@mail.bounceswoosh.org



Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-25 Thread Bob Friesenhahn

On Thu, 25 Feb 2010, Alastair Neil wrote:


I do not know and I don't think anyone would deploy a system in that way with UFS.
This is the model that is imposed in order to take full advantage of ZFS advanced
features such as snapshots, encryption and compression, and I know many universities
in particular are eager to adopt it for just that reason, but are stymied by this
problem.


It was not really a serious question but it was posed to make a point. 
However, it would be interesting to know if there is another type of 
filesystem (even on Linux or some other OS) which is able to 
reasonably and efficiently support 16K mounted and exported file 
systems.


Eventually Solaris is likely to work much better for this than it does 
today, but most likely there are higher priorities at the moment.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-25 Thread Alastair Neil
I do not know and I don't think anyone would deploy a system in that way
with UFS.  This is the model that is imposed in order to take full advantage
of ZFS advanced features such as snapshots, encryption and compression, and I
know many universities in particular are eager to adopt it for just that
reason, but are stymied by this problem.

Alastair


On Thu, Feb 25, 2010 at 8:39 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

> On Thu, 25 Feb 2010, Alastair Neil wrote:
>
>> I don't think I have seen this addressed in the follow-ups to your message.
>> One issue we have is with deploying large numbers of file systems per pool -
>> not necessarily large numbers of disks.  There are major scaling issues with
>> the sharing of large numbers of file systems; in my configuration I have
>> about 16K file systems to share and boot times can be several hours.  There
>> is an open bug
>>
>
> Is boot performance with 16K mounted and exported file systems a whole lot
> better if you use UFS instead?
>
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-25 Thread Bob Friesenhahn

On Thu, 25 Feb 2010, Alastair Neil wrote:


I don't think I have seen this addressed in the follow-ups to your message.  One
issue we have is with deploying large numbers of file systems per pool - not
necessarily large numbers of disks.  There are major scaling issues with the
sharing of large numbers of file systems; in my configuration I have about 16K
file systems to share and boot times can be several hours.  There is an open bug


Is boot performance with 16K mounted and exported file systems a whole 
lot better if you use UFS instead?


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-25 Thread Alastair Neil
I don't think I have seen this addressed in the follow-ups to your message.
One issue we have is with deploying large numbers of file systems per pool -
not necessarily large numbers of disks.  There are major scaling issues with
the sharing of large numbers of file systems; in my configuration I have about
16K file systems to share and boot times can be several hours.  There is an
open bug

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6850837

There has been no indication of a horizon for a fix yet as far as I know.


On Thu, Jan 28, 2010 at 5:13 PM, Lutz Schumann wrote:

> While thinking about ZFS as the next generation filesystem without limits I
> am wondering if the real world is ready for this kind of incredible
> technology ...
>
> I'm actually speaking of hardware :)
>
> ZFS can handle a lot of devices. Once the import bug
> (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is
> fixed, it should be able to handle a lot of disks.
>
> I want to ask the ZFS community and users what large scale deployments are
> out there.  How many disks? How much capacity? Single pool or many pools on
> a server? How does resilver work in those environments? How do you back up?
> What is the experience so far? Major headaches?
>
> It would be great if large scale users would share their setups and
> experiences with ZFS.
>
> Will you ? :)
> Thanks,
> Robert


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-02-04 Thread Mertol Ozyoney
We've got 50+ X4500/X4540s running happily with ZFS in the same DC.
Approximately 2500 drives, and growing every day...

Br
Mertol 



Mertol Ozyoney 
Storage Practice - Sales Manager

Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +90212335
Email mertol.ozyo...@sun.com



-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Henrik Johansen
Sent: Friday, January 29, 2010 10:45 AM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Large scale ZFS deployments out there (>200
disks)

On 01/28/10 11:13 PM, Lutz Schumann wrote:
> While thinking about ZFS as the next generation filesystem without
> limits I am wondering if the real world is ready for this kind of
> incredible technology ...
>
> I'm actually speaking of hardware :)
>
> ZFS can handle a lot of devices. Once the import bug
> (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786)
> is fixed, it should be able to handle a lot of disks.

That was fixed in build 125.

> I want to ask the ZFS community and users what large scale deployments
> are out there.  How many disks? How much capacity? Single pool or
> many pools on a server? How does resilver work in those
> environments? How do you back up? What is the experience so far?
> Major headaches?
>
> It would be great if large scale users would share their setups and
> experiences with ZFS.

The largest ZFS deployment that we have is currently comprised of 22 
Dell MD1000 enclosures (330 750 GB Nearline SAS disks). We have 3 head 
nodes and use one zpool per node, comprised of rather narrow (5+2) 
RAIDZ2 vdevs. This setup is exclusively used for storing backup data.

Resilver times could be better - I am sure that this will improve once 
we upgrade from S10u9 to 2010.03.

One of the things that I am missing in ZFS is the ability to prioritize 
background operations like scrub and resilver. All our disks are idle 
during daytime and I would love to be able to take advantage of this, 
especially during resilver operations.

This setup has been running for about a year with no major issues so
far. The only hiccups we've had were all HW related (no fun in firmware
upgrading 200+ disks).

> Will you ? :) Thanks, Robert


-- 
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-29 Thread Henrik Johansen

On 01/29/10 07:36 PM, Richard Elling wrote:

On Jan 29, 2010, at 12:45 AM, Henrik Johansen wrote:

On 01/28/10 11:13 PM, Lutz Schumann wrote:

While thinking about ZFS as the next generation filesystem
without limits I am wondering if the real world is ready for this
kind of incredible technology ...

I'm actually speaking of hardware :)

ZFS can handle a lot of devices. Once the import bug
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786)
is fixed, it should be able to handle a lot of disks.


That was fixed in build 125.


I want to ask the ZFS community and users what large scale
deployments are out there.  How many disks? How much capacity?
Single pool or many pools on a server? How does resilver work in
those environments? How do you back up? What is the experience
so far? Major headaches?

It would be great if large scale users would share their setups
and experiences with ZFS.


The largest ZFS deployment that we have is currently comprised of
22 Dell MD1000 enclosures (330 750 GB Nearline SAS disks). We have
3 head nodes and use one zpool per node, comprised of rather narrow
(5+2) RAIDZ2 vdevs. This setup is exclusively used for storing
backup data.


This is an interesting design.  It looks like a good use of hardware
and redundancy for backup storage. Would you be able to share more of
the details? :-)


Each head node (Dell PE 2900's) has 3 PERC 6/E controllers (LSI 1078 
based) with 512 MB cache each.


The PERC 6/E supports both load-balancing and path failover so each 
controller has 2 SAS connections to a daisy chained group of 3 MD1000 
enclosures.


The RAIDZ2 vdev layout was chosen because it gives a reasonable 
performance vs space ratio and it maps nicely onto the 15 disk MD1000's 
( 2 x (5+2) +1 ).


There is room for improvement in the design (fewer disks per controller, 
faster PCI Express slots, etc) but performance is good enough for our 
current needs.
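
To make the vdev mapping concrete, here is a small sketch that prints the
zpool create command a 2 x (5+2) + 1 layout per shelf would correspond to for
one head node with three MD1000s. The pool name and device names are made up;
substitute the real ones before using it.

#!/usr/bin/env python
# Print the zpool create line for three 15-disk shelves laid out as
# 2 x (5+2) RAIDZ2 vdevs plus one hot spare per shelf.
shelves = [["c%dt%dd0" % (shelf + 1, t) for t in range(15)] for shelf in range(3)]

args = ["zpool", "create", "backup1"]   # "backup1" is a placeholder pool name
spares = []
for disks in shelves:
    args += ["raidz2"] + disks[0:7]     # first 5+2 vdev
    args += ["raidz2"] + disks[7:14]    # second 5+2 vdev
    spares.append(disks[14])            # 15th disk becomes the hot spare
args += ["spare"] + spares

print(" ".join(args))                   # review before running it for real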




Resilver times could be better - I am sure that this will improve
once we upgrade from S10u9 to 2010.03.


Nit: Solaris 10 u9 is 10/03 or 10/04 or 10/05, depending on what you
read. Solaris 10 u8 is 11/09.


One of the things that I am missing in ZFS is the ability to
prioritize background operations like scrub and resilver. All our
disks are idle during daytime and I would love to be able to take
advantage of this, especially during resilver operations.


Scrub I/O is given the lowest priority and is throttled. However, I
am not sure that the throttle is in Solaris 10, because that source
is not publicly available. In general, you will not notice a resource
cap until the system utilization is high enough that the cap is
effective.  In other words, if the system is mostly idle, the scrub
consumes the bulk of the resources.


That's not what I am seeing - resilver operations crawl even when the 
pool is idle.



This setup has been running for about a year with no major issues
so far. The only hiccups we've had were all HW related (no fun in
firmware upgrading 200+ disks).


ugh. -- richard




--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-29 Thread Bryan Allen
+--
| On 2010-01-29 10:36:29, Richard Elling wrote:
| 
| Nit: Solaris 10 u9 is 10/03 or 10/04 or 10/05, depending on what you read.
| Solaris 10 u8 is 11/09.

Nit: S10u8 is 10/09.
 
| Scrub I/O is given the lowest priority and is throttled. However, I am not
| sure that the throttle is in Solaris 10, because that source is not publicly
| available. In general, you will not notice a resource cap until the system
| utilization is high enough that the cap is effective.  In other words, if the
| system is mostly idle, the scrub consumes the bulk of the resources.

Solaris 10 has the scrub throttle; it affects resilver too. I have often
wished I could turn it off. I'm happy to take a major application performance
hit during a resilver (especially of a mirror or raidz1) in almost every case.
-- 
bda
cyberpunk is dead. long live cyberpunk.


Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-29 Thread Richard Elling
On Jan 29, 2010, at 12:45 AM, Henrik Johansen wrote:
> On 01/28/10 11:13 PM, Lutz Schumann wrote:
>> While thinking about ZFS as the next generation filesystem without
>> limits I am wondering if the real world is ready for this kind of
>> incredible technology ...
>> 
>> I'm actually speaking of hardware :)
>> 
>> ZFS can handle a lot of devices. Once the import bug
>> (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786)
>> is fixed, it should be able to handle a lot of disks.
> 
> That was fixed in build 125.
> 
>> I want to ask the ZFS community and users what large scale deployments
>> are out there.  How many disks? How much capacity? Single pool or
>> many pools on a server? How does resilver work in those
>> environments? How do you back up? What is the experience so far?
>> Major headaches?
>> 
>> It would be great if large scale users would share their setups and
>> experiences with ZFS.
> 
> The largest ZFS deployment that we have is currently comprised of 22 Dell 
> MD1000 enclosures (330 750 GB Nearline SAS disks). We have 3 head nodes and 
> use one zpool per node, comprised of rather narrow (5+2) RAIDZ2 vdevs. This 
> setup is exclusively used for storing backup data.

This is an interesting design.  It looks like a good use of hardware and 
redundancy
for backup storage. Would you be able to share more of the details? :-)

> Resilver times could be better - I am sure that this will improve once we 
> upgrade from S10u9 to 2010.03.

Nit: Solaris 10 u9 is 10/03 or 10/04 or 10/05, depending on what you read.
Solaris 10 u8 is 11/09.

> One of the things that I am missing in ZFS is the ability to prioritize 
> background operations like scrub and resilver. All our disks are idle during 
> daytime and I would love to be able to take advantage of this, especially 
> during resilver operations.

Scrub I/O is given the lowest priority and is throttled. However, I am not
sure that the throttle is in Solaris 10, because that source is not publicly
available. In general, you will not notice a resource cap until the system
utilization is high enough that the cap is effective.  In other words, if the
system is mostly idle, the scrub consumes the bulk of the resources.
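
One low-tech way to check whether the throttle is what's limiting you is to
sample zpool status and watch how quickly the percent-done figure moves while
the pool is otherwise idle. A rough sketch follows; the pool name is a
placeholder and the "% done" wording varies between releases, so the regexp
may need adjusting for your build.

#!/usr/bin/env python
# Sample "zpool status" periodically and report how fast the scrub/resilver
# "% done" figure is moving.
import re
import subprocess
import time

POOL = "tank"          # placeholder pool name
INTERVAL = 300         # seconds between samples

def percent_done():
    out = subprocess.check_output(["zpool", "status", POOL]).decode()
    m = re.search(r"([0-9.]+)% done", out)
    return float(m.group(1)) if m else None

prev = percent_done()
while prev is not None:
    time.sleep(INTERVAL)
    cur = percent_done()
    if cur is None:
        print("scan finished")
        break
    print("%.2f%% -> %.2f%% (%.3f%% per minute)" % (prev, cur, (cur - prev) / (INTERVAL / 60.0)))
    prev = cur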

> This setup has been running for about a year with no major issues so far. The
> only hiccups we've had were all HW related (no fun in firmware upgrading 200+
> disks).

ugh.
 -- richard



Re: [zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-29 Thread Henrik Johansen

On 01/28/10 11:13 PM, Lutz Schumann wrote:

While thinking about ZFS as the next generation filesystem without
limits I am wondering if the real world is ready for this kind of
incredible technology ...

I'm actually speaking of hardware :)

ZFS can handle a lot of devices. Once the import bug
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786)
is fixed, it should be able to handle a lot of disks.


That was fixed in build 125.


I want to ask the ZFS community and users what large scale deployments
are out there.  How many disks? How much capacity? Single pool or
many pools on a server? How does resilver work in those
environments? How do you back up? What is the experience so far?
Major headaches?

It would be great if large scale users would share their setups and
experiences with ZFS.


The largest ZFS deployment that we have is currently comprised of 22 
Dell MD1000 enclosures (330 750 GB Nearline SAS disks). We have 3 head 
nodes and use one zpool per node, comprised of rather narrow (5+2) 
RAIDZ2 vdevs. This setup is exclusively used for storing backup data.


Resilver times could be better - I am sure that this will improve once 
we upgrade from S10u9 to 2010.03.


One of the things that I am missing in ZFS is the ability to prioritize 
background operations like scrub and resilver. All our disks are idle 
during daytime and I would love to be able to take advantage of this, 
especially during resilver operations.
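
For scrubs we can at least approximate this today by scripting the window
ourselves: start the scrub in the evening and cancel it in the morning with
"zpool scrub -s". That does nothing for resilvers, which cannot be stopped and
restarted this way. A minimal sketch, meant to be driven from cron; the pool
name and the cron times are placeholders:

#!/usr/bin/env python
# Confine scrubs to an idle window: "start" from an evening cron entry,
# "stop" from a morning one.
import subprocess
import sys

POOL = "backup1"                       # placeholder pool name

def main(action):
    if action == "start":              # e.g. cron entry at 20:00
        subprocess.call(["zpool", "scrub", POOL])
    elif action == "stop":             # e.g. cron entry at 06:00
        subprocess.call(["zpool", "scrub", "-s", POOL])
    else:
        sys.exit("usage: scrub-window.py start|stop")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "")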


This setup has been running for about a year with no major issues so
far. The only hiccups we've had were all HW related (no fun in firmware
upgrading 200+ disks).



Will you ? :) Thanks, Robert



--
Med venlig hilsen / Best Regards

Henrik Johansen
hen...@scannet.dk
Tlf. 75 53 35 00

ScanNet Group
A/S ScanNet


[zfs-discuss] Large scale ZFS deployments out there (>200 disks)

2010-01-28 Thread Lutz Schumann
While thinking about ZFS as the next generation filesystem without limits, I am
wondering whether the real world is ready for this kind of incredible technology ...

I'm actually speaking of hardware :)

ZFS can handle a lot of devices. Once the import bug
(http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed,
it should be able to handle a lot of disks.

I want to ask the ZFS community and users what large scale deployments are out
there.  How many disks? How much capacity? Single pool or many pools on a
server? How does resilver work in those environments? How do you back up?
What is the experience so far? Major headaches?

It would be great if large scale users would share their setups and experiences 
with ZFS. 

Will you? :)
Thanks, 
Robert