Re: [zfs-discuss] zpool-poolname has 99 threads

2011-01-31 Thread George Wilson
The threads associated with the zpool process have special purposes and are
used by the different I/O types of the ZIO pipeline. The number of threads
doesn't change between workstations and servers; they are fixed values per ZIO
type. The new process you're seeing is just exposing work that has always been
there. Now you can monitor how much CPU is being used by the underlying ZFS
I/O subsystem. If you're seeing a specific performance problem, feel free to
provide more details about the issue.
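
If you want to see where that CPU time goes, something like this should work (a
rough sketch; substitute your real pool name for "poolname"):

  # prstat -mL -p `pgrep -f zpool-poolname`

prstat's -L flag reports one line per thread, so the individual ZIO taskq
threads show up separately.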

- George

On Mon, Jan 31, 2011 at 4:54 PM, Gary Mills  wrote:

> After an upgrade of a busy server to Oracle Solaris 10 9/10, I notice
> a process called zpool-poolname that has 99 threads.  This seems to be
> a limit, as it never goes above that.  It is lower on workstations.
> The `zpool' man page says only:
>
>  Processes
> Each imported pool has an associated process,  named  zpool-
> poolname.  The  threads  in  this process are the pool's I/O
> processing threads, which handle the compression,  checksum-
> ming,  and other tasks for all I/O associated with the pool.
> This process exists to  provides  visibility  into  the  CPU
> utilization  of the system's storage pools. The existence of
> this process is an unstable interface.
>
> There are several thousand processes doing ZFS I/O on the busy server.
> Could this new process be a limitation in any way?  I'd just like to
> rule it out before looking further at I/O performance.
>
> --
> -Gary Mills--Unix Group--Computer and Network Services-
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>



-- 
George Wilson


M: +1.770.853.8523
F: +1.650.494.1676
275 Middlefield Road, Suite 50
Menlo Park, CA 94025
http://www.delphix.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Partitioning ARC

2011-01-31 Thread Stuart Anderson

On Jan 30, 2011, at 6:03 PM, Richard Elling wrote:

> On Jan 30, 2011, at 5:01 PM, Stuart Anderson wrote:
>> On Jan 30, 2011, at 2:29 PM, Richard Elling wrote:
>> 
>>> On Jan 30, 2011, at 12:21 PM, stuart anderson wrote:
>>> 
 Is it possible to partition the global setting for the maximum ARC size
 with finer grained controls? Ideally, I would like to do this on a per
 zvol basis but a setting per zpool would be interesting as well?
>>> 
>>> While perhaps not perfect, see the primarycache and secondarycache
>>> properties of the zvol or file system.
>> 
>> With primarycache I can turn off utilization of the ARC for some zvol's,
>> but instead I was hoping to use the ARC but limit the maximum amount
>> on a per zvol basis.
> 
> Just a practical question, do you think the average storage admin will have
> any clue as to how to use this tunable?

Yes. I think the basic idea of partitioning a memory cache over different
storage objects is a straightforward concept.

> How could we be more effective in
> communicating the features and pitfalls of resource management at this 
> level?

Document that this is normally handled dynamically based on the default
policy that all storage objects should be assigned ARC space on a fair
share basis. However, if different quality of service is required for different
storage objects this may be adjusted as follows...

> 
>>> 
 The use case is to prioritize which zvol devices should be fully cached
 in DRAM on a server that cannot fit them all in memory.
>>> 
>>> It is not clear to me that this will make sense in a world of snapshots and 
>>> dedup.
>>> Could you explain your requirements in more detail?
>> 
>> I am using zvol's to hold the metadata for another filesystem (SAM-QFS).
>> In some circumstances I can fit enough of this into the ARC that virtually
>> all metadata reads IOPS happen at DRAM performance rather than SSD
>> or slower.
>> 
>> However, with a single server hosting multiple filesystem (hence multiple
>> zvols) I would like to be able to prioritize the use of the ARC.
> 
> I think there is merit to this idea. It can be especially useful in the zone
> context. Please gather your thoughts and file an RFE at www.illumos.org

Not sure how to file an illumos RFE, but one simple model to think about
would be a 2-tiered system where, by default, ZFS datasets use the ARC as is
currently the case, with no (to the best of my knowledge) relative priority,
but some objects could optionally specify a request for a minimum size,
e.g., add a companion attribute to primarycache named primarycachesize.
This would represent the minimum amount of ARC space that is available
for that object.
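
To make the idea concrete, usage might look something like this (purely
hypothetical syntax; primarycachesize does not exist in any current release,
and the dataset name and size are made up):

  # zfs set primarycachesize=8G tank/qfs-meta
  # zfs get primarycache,primarycachesize tank/qfs-meta

Anything not explicitly reserved would keep being shared on the current
fair-share basis.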

Some thought would have to be given to how to indicate when the sum
of all primarycachesize settings is greater than zfs_arc_max, and to
documenting what happens in that case, e.g., are all values ignored?

Presumably something similar could also be considered for secondarycache.


Thanks.


--
Stuart Anderson  ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and L2ARC memory requirements?

2011-01-31 Thread Garrett D'Amore

On 01/31/11 06:40 PM, Roy Sigurd Karlsbakk wrote:

> - Original Message -
>> Even *with* an L2ARC, your memory requirements are *substantial*,
>> because the L2ARC itself needs RAM. 8 GB is simply inadequate for
>> your test.
>
> With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC
> would you recommend?


First off... a *big* caveat.  I am *not* a tuning expert.  We have 
people in our company who can help you out from operational experience 
if you want to configure a system like this; one of them, Richard 
Elling, is a frequently seen face around here.  That said, I'm going 
to respond from my *very* rough understanding of how these structures 
play together.


So, I'd say: a lot.  1TB of L2ARC sounds like a rather large amount.
I don't know offhand the typical ratio of ARC -> L2ARC, but note that 
every entry in the L2ARC requires at least some book-keeping in the ARC 
(which is in RAM).  I've seen people say that you can have anywhere from 
10x RAM to 20x RAM for L2ARC.  It sounds like this means between 50GB 
and 100GB *just* for an L2ARC of this size.
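
Back-of-the-envelope, that book-keeping is roughly (L2ARC size / record size)
times the per-entry header size. A rough sketch, assuming on the order of 180
bytes of ARC per L2ARC entry (the exact figure varies by release):

  # echo $(( 1099511627776 / 8192 * 180 / 1073741824 ))GB
  22GB

i.e. roughly 22GB of ARC for 1TB of L2ARC at an 8K record size; with 128K
records the same arithmetic gives only a GB or two. The 50-100GB range above
leaves room for that plus ordinary cached data.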


That's without dedup.


> And then, _with_ dedup, what would you recommend?
>
> make that 100TB of storage


With 100TB of storage, fully consumed, your DDT is going to need to be 
about 500GB (assuming 64K block size, which may or may not be a good 
average).
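
That figure is just block count times DDT entry size. A quick sanity check,
using the same 64K blocks and ~300 bytes per entry assumed above:

  # echo $(( 109951162777600 / 65536 * 300 / 1073741824 ))GiB
  468GiB

which is in the same ballpark as the 500GB quoted.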


That whole DDT will fit into the L2ARC above, so you probably can get by 
with just the 50-100GB of RAM.  But I recommend allocating *more* than 
that, because you really don't want *every* write to the DDT to go to 
L2ARC, and you really *do* want to have some memory available for things 
besides just the ARC.


Generally, this feels like a 256GB memory configuration to me.

Fundamentally, the best way to reduce the memory impact is to use dedup 
much more sparingly, and configure a much smaller L2ARC.


You also need to analyze your workload to see if you'll benefit from 
having L2ARC apart from the DDT itself.


- Garrett


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Validating a zfs send object

2011-01-31 Thread stuart anderson
How do you verify that a zfs send binary object is valid?

I tried running a truncated file through zstreamdump and it completed
with no error messages and an exit() status of 0. However, I noticed it
was missing a final print statement with a checksum value,
END checksum = ...

Is there any normal circumstance under which this END checksum statement
will be missing?

More usefully, is there an option to zstreamdump, or a similar program, to
validate an internal checksum value stored in a zfs send binary object?

Or is the only way to do this with zfs receive?
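
The best I've come up with so far is a crude presence check, based on the
missing print statement above (backup.zsend is just a placeholder name):

  zstreamdump < backup.zsend | grep -q 'END checksum' \
      && echo "stream appears complete" \
      || echo "stream may be truncated"

but that only detects truncation, not corruption in the middle of the stream.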

Thanks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and L2ARC memory requirements?

2011-01-31 Thread Richard Elling
On Jan 31, 2011, at 6:16 PM, Roy Sigurd Karlsbakk wrote:

>> Even *with* an L2ARC, your memory requirements are *substantial*,
>> because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
>> test.
> 
> With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC would
> you recommend?

Just the L2ARC directory can consume 24GB of ARC for 8KB records.

> And then, _with_ dedup, what would you recommend?

Depends on the specific OS release, but I think you should plan on using dedup
where the cost/GB is expensive relative to the cost/IOPS.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and L2ARC memory requirements?

2011-01-31 Thread Roy Sigurd Karlsbakk
- Original Message -
> > Even *with* an L2ARC, your memory requirements are *substantial*,
> > because the L2ARC itself needs RAM. 8 GB is simply inadequate for
> > your
> > test.
> 
> With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC
> would you recommend?
> 
> And then, _with_ dedup, what would you recommend?

make that 100TB of storage

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases adequate and relevant synonyms exist in
Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS and L2ARC memory requirements?

2011-01-31 Thread Roy Sigurd Karlsbakk
> Even *with* an L2ARC, your memory requirements are *substantial*,
> because the L2ARC itself needs RAM. 8 GB is simply inadequate for your
> test.

With 50TB storage, and 1TB of L2ARC, with no dedup, what amount of ARC would
you recommend?

And then, _with_ dedup, what would you recommend?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases adequate and relevant synonyms exist in
Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool-poolname has 99 threads

2011-01-31 Thread Gary Mills
After an upgrade of a busy server to Oracle Solaris 10 9/10, I notice
a process called zpool-poolname that has 99 threads.  This seems to be
a limit, as it never goes above that.  It is lower on workstations.
The `zpool' man page says only:

  Processes
 Each imported pool has an associated process,  named  zpool-
 poolname.  The  threads  in  this process are the pool's I/O
 processing threads, which handle the compression,  checksum-
 ming,  and other tasks for all I/O associated with the pool.
 This process exists to  provides  visibility  into  the  CPU
 utilization  of the system's storage pools. The existence of
 this process is an unstable interface.

There are several thousand processes doing ZFS I/O on the busy server.
Could this new process be a limitation in any way?  I'd just like to
rule it out before looking further at I/O performance.
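
For reference, one way to watch the thread count (a sketch; substitute the real
pool name) is:

  # ps -o pid,nlwp,args -p `pgrep -f zpool-poolname`

and nlwp is the value that sits at 99 here.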

-- 
-Gary Mills--Unix Group--Computer and Network Services-
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS dedup success stories (take two)

2011-01-31 Thread Roy Sigurd Karlsbakk
> As I've said here on the list a few times earlier, the last on the
> thread 'ZFS not usable (was ZFS Dedup question)', I've been doing some
> rather thorough testing on zfs dedup, and as you can see from the
> posts, it wasn't very satisfactory. The docs claim 1-2GB memory usage
> per terabyte stored, ARC or L2ARC, but as you can read from the post,
> I don't find this very likely.

Sorry about the initial post - it was wrong. The hardware configuration was
right, but for the initial tests I used NFS, meaning sync writes. This obviously
stresses the ARC/L2ARC more than async writes do, but the result remains the same.

With 140GB of L2ARC on two X25-Ms, plus 4GB partitions on the same devices,
mirrored, the write speed was reduced to something like 20% of the original
speed. This was with about 2TB used on the zpool and a single data stream, no
parallelism whatsoever. Even with 8GB of ARC and 140GB of L2ARC on two SSDs,
this speed is fairly low. I could not see substantially high CPU or I/O load
during this test.
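
For anyone repeating this kind of test, a quick way to see how large the DDT has
grown (and therefore how much ARC/L2ARC it wants) is zdb's dedup statistics; a
minimal sketch, assuming the pool is named tank:

  # zdb -DD tank

The DDT-sha256-zap-* lines in the output report entry counts and their on-disk
and in-core sizes.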

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases adequate and relevant synonyms exist in
Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS dedup success stories?

2011-01-31 Thread Roy Sigurd Karlsbakk
> I'm not sure about *docs*, but my rough estimations:
> 
> Assume 1TB of actual used storage. Assume 64K block/slab size. (Not
> sure how realistic that is -- it depends totally on your data set.)
> Assume 300 bytes per DDT entry.
> 
> So we have (1024^4 / 65536) * 300 = 5033164800 or about 5GB RAM for
> one
> TB of used disk space.
> 
> Dedup is *hungry* for RAM. 8GB is not enough for your configuration,
> most likely! First guess: double the RAM and then you might have
> better
> luck.

I know... that's why I use L2ARC
 
> The other takeaway here: dedup is the wrong technology for typical
> small home server (e.g. systems that max out at 4 or even 8 GB).

This isn't a home server test

> Look into compression and snapshot clones as better alternatives to
> reduce your disk space needs without incurring the huge RAM penalties
> associated with dedup.
> 
> Dedup is *great* for a certain type of data set with configurations
> that
> are extremely RAM heavy. For everyone else, its almost universally the
> wrong solution. Ultimately, disk is usually cheaper than RAM -- think
> hard before you enable dedup -- are you making the right trade off?

Just what sort of configurations are you thinking of? I've been testing dedup in
rather large ones, and the upshot is that ZFS dedup doesn't scale well as of now.


Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms
of foreign origin. In most cases adequate and relevant synonyms exist in
Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Richard Elling
On Jan 31, 2011, at 1:19 PM, Mike Tancsa wrote:
> On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
>> Hi Mike,
>> 
>> Yes, this is looking much better.
>> 
>> Some combination of removing corrupted files indicated in the zpool
>> status -v output, running zpool scrub and then zpool clear should
>> resolve the corruption, but it depends on how bad the corruption is.
>> 
>> First, I would try the least destructive method: try to remove the
>> files listed below using the rm command.
>> 
>> This entry probably means that the metadata is corrupted or some
>> other file (like a temp file) no longer exists:
>> 
>> tank1/argus-data:<0xc6>
> 
> 
> Hi Cindy,
>   I removed the files that were listed, and now I am left with
> 
> errors: Permanent errors have been detected in the following files:
> 
>tank1/argus-data:<0xc5>
>tank1/argus-data:<0xc6>
>tank1/argus-data:<0xc7>
> 
> I have started a scrub
> scrub: scrub in progress for 0h48m, 10.90% done, 6h35m to go
> 
> I will report back once the scrub is done!

The "permanent" errors report shows the current and previous results.
When you have multiple failures that are recovered, consider running scrub twice
before attempting to correct or delete files.
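
Something like this (a sketch, using the pool name from your output) runs that
cycle and waits for each scrub to finish:

  # zpool scrub tank1
  # while zpool status tank1 | grep -q 'scrub in progress'; do sleep 60; done
  # zpool clear tank1
  # zpool scrub tank1

then check zpool status -v again once the second scrub completes.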
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-31 Thread Rocky Shek
Fred,

 

You can easily get them from our resellers. Our resellers are all around the
world.

 

Rocky 

 

From: Fred Liu [mailto:fred_...@issi.com] 
Sent: Monday, January 31, 2011 1:43 AM
To: Khushil Dep
Cc: Rocky Shek; Pasi Kärkkäinen; Philip Brown; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] reliable, enterprise worthy JBODs?

 

Khushil,

 

Thanks. 

 

Fred

 

From: Khushil Dep [mailto:khushil@gmail.com] 
Sent: 星期一, 一月 31, 2011 17:37
To: Fred Liu
Cc: Rocky Shek; Pasi Kärkkäinen; Philip Brown; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?

 

You should also check out VA Technologies
(http://www.va-technologies.com/servicesStorage.php) in the UK, which supply a
range of JBODs. I've used these in very large deployments with no JBOD-related
failures to date. Interestingly, they also list Coraid boxes.


---

W. A. Khushil Dep - khushil@gmail.com -  07905374843

Windows - Linux - Solaris - ZFS - XenServer - FreeBSD - C/C++ - PHP/Perl - LAMP 
- Nexenta - Development - Consulting & Contracting

http://www.khushil.com/ - http://www.facebook.com/GlobalOverlord





On 31 January 2011 09:15, Fred Liu  wrote:

Rocky,

Can individuals buy your products in the retail market?

Thanks.

Fred


> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-

> boun...@opensolaris.org] On Behalf Of Rocky Shek
> Sent: 星期五, 一月 28, 2011 7:02
> To: 'Pasi Kärkkäinen'
> Cc: 'Philip Brown'; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
>
> Pasi,
>
> I have not tried the Opensolaris FMA yet.
>
> But we have developed a tool called DSM that allows users to locate disk
> drives, identify failed drives, and check FRU part status.
>
> http://dataonstorage.com/dataon-products/dsm-30-for-nexentastor.html
>
> We also spent time in the past making sure the SES chip works with major RAID
> controller cards.
>
> Rocky
>
>
> -Original Message-
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
> Sent: Tuesday, January 25, 2011 1:30 PM
> To: Rocky Shek
> Cc: 'Philip Brown'; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
>
> On Tue, Jan 25, 2011 at 11:53:49AM -0800, Rocky Shek wrote:
> > Philip,
> >
> > You can consider DataON DNS-1600 4U 24Bay 6Gb/s SAS JBOD Storage.
> >
> http://dataonstorage.com/dataon-products/dns-1600-4u-6g-sas-to-sas-
> sata-jbod
> > -storage.html
> >
> > It is the best fit for ZFS Storage application. It can be a good
> replacement
> > of Sun/Oracle J4400 and J4200
> >
> > There are also Ultra density DNS-1660 4U 60 Bay 6Gb/s SAS JBOD
> Storage and
> > other form factor JBOD.
> >
> >
> http://dataonstorage.com/dataon-products/6g-sas-jbod/dns-1660-4u-60-
> bay-6g-3
> > 5inch-sassata-jbod.html
> >
>
> Does (Open)Solaris FMA work with these DataON JBODs?
> .. meaning do the failure LEDs work automatically in the case of disk
> failure?
>
> I guess that requires the SES chip on the JBOD to include proper drive
> identification for all slots.
>
> -- Pasi
>
> >
> > Rocky
> >
> > -Original Message-
> > From: zfs-discuss-boun...@opensolaris.org
> > [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Philip
> Brown
> > Sent: Tuesday, January 25, 2011 10:05 AM
> > To: zfs-discuss@opensolaris.org
> > Subject: [zfs-discuss] reliable, enterprise worthy JBODs?
> >
> > So, another hardware question :)
> >
> > ZFS has been touted as taking maximal advantage of disk hardware, to
> the
> > point where it can be used efficiently and cost-effectively on JBODs,
> rather
> > than having to throw more expensive RAID arrays at it.
> >
> > Only trouble is.. JBODs seem to have disappeared :(
> > Sun/Oracle has discontinued its j4000 line, with no replacement that
> I can
> > see.
> >
> > IBM seems to have some nice looking hardware in the form of its
> EXP3500
> > "expansion trays"... but they only support it connected to an IBM
> (SAS)
> > controller... which is only supported when plugged into IBM server
> hardware
> > :(
> >
> > Any other suggestions for (large-)enterprise-grade, supported JBOD
> hardware
> > for ZFS these days?
> > Either fibre or SAS would be okay.
> > --
> > This message posted from opensolaris.org
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

 


[zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-01-31 Thread James
G'day All. 

I’m trying to select the appropriate disk spindle speed for a proposal and 
would welcome any experience and opinions  (e.g. has anyone actively chosen 
10k/15k drives for a new ZFS build and, if so, why?).

This is for ZFS over NFS for VMware storage, i.e. primarily random 4kB reads and
sync writes (SLOG), plus some general CIFS file serving. About a 40/60 read/write
ratio.

The primary drive options I'm trying to compare are 48x HP SAS 500GB 7.2k (avg
8ms seek, approx 80 random IOPS/drive) or 24x HP SAS 450GB or 600GB 10k drives
(avg 4ms seek, approx 138 random IOPS/drive), which work out pretty close in
price.

OK, first theory.  Assuming sequential writes, each 7200rpm drive should deliver
up to 75% (at worst 80/138, i.e. about 58%) of the IOPS of a 10k drive; with
twice the number of spindles and a low-latency ZIL SLOG, that should give much
better write performance**.  Correct?  What IOPS are people seeing from 7200rpm
(approx 8ms avg seek) drives under mainly write loads?
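
(Rough spindle math with the per-drive figures above: 48 x 80 = 3,840 random
IOPS for the 7.2k option vs 24 x 138 = 3,312 for the 10k option, before any
ARC/L2ARC/SLOG effects.)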

Random read IOPS are about the same for both options in terms of £/random IO, so
the only problem is higher latency for reads that miss the ARC/L2ARC and are
serviced by the 7200rpm drives (avg 12.3ms, max 25.8ms), which is slower than the
10k would be (avg 7ms, max 14ms).  I'm currently planning 2x 240GB of L2ARC, so
hopefully we'll be able to get a lot of the active read data into cache and keep
latencies low.  Any suggestions on how to identify the size of the "working
dataset" on Windows/NetApp etc.?

I note ZFSBuild said they'd do their next build with 15k SAS, but I couldn't
follow their logic.  Anything else I'm missing?

** My understanding is that ZFS will adjust the amount of data accepted into
each "transaction group" (TXG) to ensure it can be written to disk in 5s.  Async
data will stay in the ARC; sync data will also go to the ZIL or, if over the
threshold, will go straight to disk with a pointer in the ZIL (on the low-latency
SLOG), i.e. from a client perspective all writes apart from sync writes over the
threshold are unaffected by disk write latency.  Therefore, if for the same
budget 7200rpm gives you higher aggregate IOPS from higher-latency disks, whereas
10k gives you lower latency but lower aggregate IOPS, the 7200rpm system would
end up providing the highest write IOPS at the lowest latency (due to the SLOG).
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Mike Tancsa
On 1/31/2011 3:14 PM, Cindy Swearingen wrote:
> Hi Mike,
> 
> Yes, this is looking much better.
> 
> Some combination of removing corrupted files indicated in the zpool
> status -v output, running zpool scrub and then zpool clear should
> resolve the corruption, but it depends on how bad the corruption is.
> 
> First, I would try the least destructive method: try to remove the
> files listed below using the rm command.
> 
> This entry probably means that the metadata is corrupted or some
> other file (like a temp file) no longer exists:
> 
> tank1/argus-data:<0xc6>


Hi Cindy,
I removed the files that were listed, and now I am left with

errors: Permanent errors have been detected in the following files:

tank1/argus-data:<0xc5>
tank1/argus-data:<0xc6>
tank1/argus-data:<0xc7>

I have started a scrub
 scrub: scrub in progress for 0h48m, 10.90% done, 6h35m to go

I will report back once the scrub is done!

---Mike
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and TRIM

2011-01-31 Thread Joerg Schilling
Pasi Kärkkäinen  wrote:

> On Mon, Jan 31, 2011 at 03:41:52PM +0100, Joerg Schilling wrote:
> > Brandon High  wrote:
> > 
> > > On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
> > >  wrote:
> > > > What is the status of ZFS support for TRIM?
> > >
> > > I believe it's been supported for a while now.
> > > http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
> > 
> > The command is implemented in the sata driver but there does not seem to be
> > any user of the code.
> > 
>
> Btw is the SCSI equivalent also implemented? iirc it was called SCSI UNMAP 
> (for SAS).

The high-level interface is called unmap; it seems that the planned
interface for ZFS is to send a raw SPC3_CMD_UNMAP SCSI command to the driver
and that this command is translated into TRIM in the SATA case.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and TRIM

2011-01-31 Thread Pasi Kärkkäinen
On Mon, Jan 31, 2011 at 03:41:52PM +0100, Joerg Schilling wrote:
> Brandon High  wrote:
> 
> > On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
> >  wrote:
> > > What is the status of ZFS support for TRIM?
> >
> > I believe it's been supported for a while now.
> > http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html
> 
> The command is implemented in the sata driver but there does not seem to be
> any user of the code.
> 

Btw is the SCSI equivalent also implemented? iirc it was called SCSI UNMAP (for 
SAS).

-- Pasi

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Cindy Swearingen

Hi Mike,

Yes, this is looking much better.

Some combination of removing corrupted files indicated in the zpool
status -v output, running zpool scrub and then zpool clear should
resolve the corruption, but it depends on how bad the corruption is.

First, I would try the least destructive method: try to remove the
files listed below using the rm command.

This entry probably means that the metadata is corrupted or some
other file (like a temp file) no longer exists:

tank1/argus-data:<0xc6>

If you are able to remove the individual file with rm, run another
zpool scrub and then a zpool clear to clear the pool errors. You
might need to repeat the zpool scrub/zpool clear combo.

If you can't remove the individual files, then you might have to
destroy the tank1/argus-data file system.

Let us know what actually works.

Thanks,

Cindy

On 01/31/11 12:20, Mike Tancsa wrote:

On 1/29/2011 6:18 PM, Richard Elling wrote:

On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:


On 1/29/2011 12:57 PM, Richard Elling wrote:

0(offsite)# zpool status
pool: tank1
state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
  replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
 see: http://www.sun.com/msg/ZFS-8000-3C
scrub: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   tank1       UNAVAIL      0     0     0  insufficient replicas
     raidz1    ONLINE       0     0     0
       ad0     ONLINE       0     0     0
       ad1     ONLINE       0     0     0
       ad4     ONLINE       0     0     0
       ad6     ONLINE       0     0     0
     raidz1    ONLINE       0     0     0
       ada4    ONLINE       0     0     0
       ada5    ONLINE       0     0     0
       ada6    ONLINE       0     0     0
       ada7    ONLINE       0     0     0
     raidz1    UNAVAIL      0     0     0  insufficient replicas
       ada0    UNAVAIL      0     0     0  cannot open
       ada1    UNAVAIL      0     0     0  cannot open
       ada2    UNAVAIL      0     0     0  cannot open
       ada3    UNAVAIL      0     0     0  cannot open
 0(offsite)#

This is usually easily solved without data loss by making the
disks available again.  Can you read anything from the disks using
any program?

That's the strange thing, the disks are readable.  The drive cage just
reset a couple of times prior to the crash. But they seem OK now.  Same
order as well.

# camcontrol devlist
  at scbus0 target 0 lun 0
(pass0,ada0)
  at scbus0 target 1 lun 0
(pass1,ada1)
  at scbus0 target 2 lun 0
(pass2,ada2)
  at scbus0 target 3 lun 0
(pass3,ada3)


# dd if=/dev/ada2 of=/dev/null count=20 bs=1024
20+0 records in
20+0 records out
20480 bytes transferred in 0.001634 secs (12534561 bytes/sec)
0(offsite)#

The next step is to run "zdb -l" and look for all 4 labels. Something like:
zdb -l /dev/ada2

If all 4 labels exist for each drive and appear intact, then look more closely
at how the OS locates the vdevs. If you can't solve the "UNAVAIL" problem,
you won't be able to import the pool.
 -- richard


On 1/29/2011 10:13 PM, James R. Van Artsdalen wrote:

On 1/28/2011 4:46 PM, Mike Tancsa wrote:

I had just added another set of disks to my zfs array. It looks like the
drive cage with the new drives is faulty.  I had added a couple of files
to the main pool, but not much.  Is there any way to restore the pool
below ? I have a lot of files on ad0,1,4,6 and ada4,5,6,7 and perhaps
one file on the new drives in the bad cage.

Get another enclosure and verify it works OK.  Then move the disks from
the suspect enclosure to the tested enclosure and try to import the pool.

The problem may be cabling or the controller instead - you didn't
specify how the disks were attached or which version of FreeBSD you're
using.



First off thanks to all who responded on and offlist!

Good news (for me) it seems. New cage and all seems to be recognized
correctly.  The history is

...
2010-04-22.14:27:38 zpool add tank1 raidz /dev/ada4 /dev/ada5 /dev/ada6
/dev/ada7
2010-06-11.13:49:33 zfs create tank1/argus-data
2010-06-11.13:49:41 zfs create tank1/argus-data/previous
2010-06-11.13:50:38 zfs set compression=off tank1/argus-data
2010-08-06.12:20:59 zpool replace tank1 ad1 ad1
2010-09-16.10:17:51 zpool upgrade -a
2011-01-28.11:45:43 zpool add tank1 raidz /dev/ada0 /dev/ada1 /dev/ada2
/dev/ada3

FreeBSD RELENG_8 from last week, 8G of RAM, amd64.

 zpool status -v
  pool: tank1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
tank1       ONLINE       0     0     0
  raidz1    ONLINE       0     0 

Re: [zfs-discuss] multiple disk failure (solved?)

2011-01-31 Thread Mike Tancsa
On 1/29/2011 6:18 PM, Richard Elling wrote:
> 
> On Jan 29, 2011, at 12:58 PM, Mike Tancsa wrote:
> 
>> On 1/29/2011 12:57 PM, Richard Elling wrote:
 0(offsite)# zpool status
 pool: tank1
 state: UNAVAIL
 status: One or more devices could not be opened.  There are insufficient
   replicas for the pool to continue functioning.
 action: Attach the missing device and online it using 'zpool online'.
  see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
 config:

   NAME        STATE     READ WRITE CKSUM
   tank1       UNAVAIL      0     0     0  insufficient replicas
     raidz1    ONLINE       0     0     0
       ad0     ONLINE       0     0     0
       ad1     ONLINE       0     0     0
       ad4     ONLINE       0     0     0
       ad6     ONLINE       0     0     0
     raidz1    ONLINE       0     0     0
       ada4    ONLINE       0     0     0
       ada5    ONLINE       0     0     0
       ada6    ONLINE       0     0     0
       ada7    ONLINE       0     0     0
     raidz1    UNAVAIL      0     0     0  insufficient replicas
       ada0    UNAVAIL      0     0     0  cannot open
       ada1    UNAVAIL      0     0     0  cannot open
       ada2    UNAVAIL      0     0     0  cannot open
       ada3    UNAVAIL      0     0     0  cannot open
 0(offsite)#
>>>
>>> This is usually easily solved without data loss by making the
>>> disks available again.  Can you read anything from the disks using
>>> any program?
>>
>> That's the strange thing, the disks are readable.  The drive cage just
>> reset a couple of times prior to the crash. But they seem OK now.  Same
>> order as well.
>>
>> # camcontrol devlist
>>   at scbus0 target 0 lun 0
>> (pass0,ada0)
>>   at scbus0 target 1 lun 0
>> (pass1,ada1)
>>   at scbus0 target 2 lun 0
>> (pass2,ada2)
>>   at scbus0 target 3 lun 0
>> (pass3,ada3)
>>
>>
>> # dd if=/dev/ada2 of=/dev/null count=20 bs=1024
>> 20+0 records in
>> 20+0 records out
>> 20480 bytes transferred in 0.001634 secs (12534561 bytes/sec)
>> 0(offsite)#
> 
> The next step is to run "zdb -l" and look for all 4 labels. Something like:
>   zdb -l /dev/ada2
> 
> If all 4 labels exist for each drive and appear intact, then look more closely
> at how the OS locates the vdevs. If you can't solve the "UNAVAIL" problem,
> you won't be able to import the pool.
>  -- richard

On 1/29/2011 10:13 PM, James R. Van Artsdalen wrote:
> On 1/28/2011 4:46 PM, Mike Tancsa wrote:
>>
>> I had just added another set of disks to my zfs array. It looks like the
>> drive cage with the new drives is faulty.  I had added a couple of files
>> to the main pool, but not much.  Is there any way to restore the pool
>> below ? I have a lot of files on ad0,1,4,6 and ada4,5,6,7 and perhaps
>> one file on the new drives in the bad cage.
>
> Get another enclosure and verify it works OK.  Then move the disks from
> the suspect enclosure to the tested enclosure and try to import the pool.
>
> The problem may be cabling or the controller instead - you didn't
> specify how the disks were attached or which version of FreeBSD you're
> using.
>

First off thanks to all who responded on and offlist!

Good news (for me) it seems. New cage and all seems to be recognized
correctly.  The history is

...
2010-04-22.14:27:38 zpool add tank1 raidz /dev/ada4 /dev/ada5 /dev/ada6
/dev/ada7
2010-06-11.13:49:33 zfs create tank1/argus-data
2010-06-11.13:49:41 zfs create tank1/argus-data/previous
2010-06-11.13:50:38 zfs set compression=off tank1/argus-data
2010-08-06.12:20:59 zpool replace tank1 ad1 ad1
2010-09-16.10:17:51 zpool upgrade -a
2011-01-28.11:45:43 zpool add tank1 raidz /dev/ada0 /dev/ada1 /dev/ada2
/dev/ada3

FreeBSD RELENG_8 from last week, 8G of RAM, amd64.

 zpool status -v
  pool: tank1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAME        STATE     READ WRITE CKSUM
tank1       ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ad0     ONLINE       0     0     0
    ad1     ONLINE       0     0     0
    ad4     ONLINE       0     0     0
    ad6     ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ada0    ONLINE       0     0     0
    ada1    ONLINE       0     0     0
    ada2    ONLINE       0     0     0
    ada3    ONLINE       0     0     0
  raidz1    ONLINE       0     0     0
    ada5    ONLINE       0     0     0
    ada8    ONLINE       0     0     0
    ada7    ONLINE       0     0     0

Re: [zfs-discuss] multiple disk failure

2011-01-31 Thread James Van Artsdalen
He says he's using FreeBSD.  ZFS recorded names like "ada0" which always means 
a whole disk.

In any case FreeBSD will search all block storage for the ZFS dev components if 
the cached name is wrong: if the attached disks are connected to the system at 
all FreeBSD will find them wherever they may be.

Try FreeBSD 8-STABLE rather than just 8.2-RELEASE as many improvements and 
fixes have been backported.  Perhaps try 9-CURRENT as I'm confident the code 
there has all of the dev search fixes.

Add the line "vfs.zfs.debug=1" to /boot/loader.conf to get detailed debug 
output as FreeBSD tries to import the pool.
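
For completeness, something like this (pool name taken from earlier in the
thread; "#" is the root prompt):

  # echo 'vfs.zfs.debug=1' >> /boot/loader.conf
  # reboot
  # zpool import tank1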
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Mark Sandrock
IIRC, we would notify the user community that the FS'es were going to hang
briefly.

Locking the FS'es is the best way to quiesce them, when users are worldwide, IMO.

Mark

On Jan 31, 2011, at 9:45 AM, Torrey McMahon wrote:

> A matter of seconds is a long time for a running Oracle database. The point 
> is that if you have to keep writing to a UFS filesystem - "when the file 
> system also needs to accommodate writes" - you're still out of luck. If you 
> can quiesce the apps, great, but if you can't then you're still stuck.  In 
> other words, fssnap_ufs doesn't solve the quiesce problem.
> 
> On 1/31/2011 10:24 AM, Mark Sandrock wrote:
>> Why do you say fssnap has the same problem?
>> 
>> If it write locks the file system, it is only for a matter of seconds, as I 
>> recall.
>> 
>> Years ago, I used it on a daily basis to do ufsdumps of large fs'es.
>> 
>> Mark
>> 
>> On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote:
>> 
>>> On 1/30/2011 5:26 PM, Joerg Schilling wrote:
 Richard Elling   wrote:
 
> ufsdump is the problem, not ufsrestore. If you ufsdump an active
> file system, there is no guarantee you can ufsrestore it. The only way
> to guarantee this is to keep the file system quiesced during the entire
> ufsdump.  Needless to say, this renders ufsdump useless for backup
> when the file system also needs to accommodate writes.
 This is why there is a ufs snapshot utility.
>>> You'll have the same problem. fssnap_ufs(1M) write locks the file system 
>>> when you run the lock command. See the notes section of the man page.
>>> 
>>> http://download.oracle.com/docs/cd/E19253-01/816-5166/6mbb1kq1p/index.html#Notes

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Torrey McMahon
A matter of seconds is a long time for a running Oracle database. The 
point is that if you have to keep writing to a UFS filesystem - "when 
the file system also needs to accommodate writes" - you're still out of 
luck. If you can quiesce the apps, great, but if you can't then you're 
still stuck.  In other words, fssnap_ufs doesn't solve the quiesce problem.


On 1/31/2011 10:24 AM, Mark Sandrock wrote:

Why do you say fssnap has the same problem?

If it write locks the file system, it is only for a matter of seconds, as I 
recall.

Years ago, I used it on a daily basis to do ufsdumps of large fs'es.

Mark

On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote:


On 1/30/2011 5:26 PM, Joerg Schilling wrote:

Richard Elling   wrote:


ufsdump is the problem, not ufsrestore. If you ufsdump an active
file system, there is no guarantee you can ufsrestore it. The only way
to guarantee this is to keep the file system quiesced during the entire
ufsdump.  Needless to say, this renders ufsdump useless for backup
when the file system also needs to accommodate writes.

This is why there is a ufs snapshot utility.

You'll have the same problem. fssnap_ufs(1M) write locks the file system when 
you run the lock command. See the notes section of the man page.

http://download.oracle.com/docs/cd/E19253-01/816-5166/6mbb1kq1p/index.html#Notes

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Mark Sandrock
Why do you say fssnap has the same problem?

If it write locks the file system, it is only for a matter of seconds, as I 
recall.

Years ago, I used it on a daily basis to do ufsdumps of large fs'es.

Mark

On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote:

> On 1/30/2011 5:26 PM, Joerg Schilling wrote:
>> Richard Elling  wrote:
>> 
>>> ufsdump is the problem, not ufsrestore. If you ufsdump an active
>>> file system, there is no guarantee you can ufsrestore it. The only way
>>> to guarantee this is to keep the file system quiesced during the entire
>>> ufsdump.  Needless to say, this renders ufsdump useless for backup
>>> when the file system also needs to accommodate writes.
>> This is why there is a ufs snapshot utility.
> 
> You'll have the same problem. fssnap_ufs(1M) write locks the file system when 
> you run the lock command. See the notes section of the man page.
> 
> http://download.oracle.com/docs/cd/E19253-01/816-5166/6mbb1kq1p/index.html#Notes
> 
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Joerg Schilling
Torrey McMahon  wrote:

> On 1/30/2011 5:26 PM, Joerg Schilling wrote:
> > Richard Elling  wrote:
> >
> >> ufsdump is the problem, not ufsrestore. If you ufsdump an active
> >> file system, there is no guarantee you can ufsrestore it. The only way
> >> to guarantee this is to keep the file system quiesced during the entire
> >> ufsdump.  Needless to say, this renders ufsdump useless for backup
> >> when the file system also needs to accommodate writes.
> > This is why there is a ufs snapshot utility.
>
> You'll have the same problem. fssnap_ufs(1M) write locks the file system 
> when you run the lock command. See the notes section of the man page.
>
> http://download.oracle.com/docs/cd/E19253-01/816-5166/6mbb1kq1p/index.html#Notes

The time the write lock is active is from a few seconds to a few minutes.
If you would like to back up the system root filesystem, you may need to stop
logging/auditing for that time or split the mirror.

Once the snapshot is established, you may take as much time as your storage for 
the snapshot will last.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and TRIM

2011-01-31 Thread Joerg Schilling
Brandon High  wrote:

> On Sat, Jan 29, 2011 at 8:31 AM, Edward Ned Harvey
>  wrote:
> > What is the status of ZFS support for TRIM?
>
> I believe it's been supported for a while now.
> http://www.c0t0d0s0.org/archives/6792-SATA-TRIM-support-in-Opensolaris.html

The command is implemented in the sata driver but there does not seem to be any
user of the code.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-31 Thread Fred Liu
Khushil,

Thanks.

Fred

From: Khushil Dep [mailto:khushil@gmail.com]
Sent: 星期一, 一月 31, 2011 17:37
To: Fred Liu
Cc: Rocky Shek; Pasi Kärkkäinen; Philip Brown; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?

You should also check out VA Technologies
(http://www.va-technologies.com/servicesStorage.php) in the UK, which supply a
range of JBODs. I've used these in very large deployments with no JBOD-related
failures to date. Interestingly, they also list Coraid boxes.

---
W. A. Khushil Dep - khushil@gmail.com -  
07905374843
Windows - Linux - Solaris - ZFS - XenServer - FreeBSD - C/C++ - PHP/Perl - LAMP 
- Nexenta - Development - Consulting & Contracting
http://www.khushil.com/ - http://www.facebook.com/GlobalOverlord



On 31 January 2011 09:15, Fred Liu <fred_...@issi.com> wrote:
Rocky,

Can individuals buy your products in the retail market?

Thanks.

Fred

> -Original Message-
> From: 
> zfs-discuss-boun...@opensolaris.org
>  [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Rocky 
> Shek
> Sent: 星期五, 一月 28, 2011 7:02
> To: 'Pasi Kärkkäinen'
> Cc: 'Philip Brown'; 
> zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
>
> Pasi,
>
> I have not tried the Opensolaris FMA yet.
>
> But we have developed a tool called DSM that allows users to locate disk
> drives, identify failed drives, and check FRU part status.
>
> http://dataonstorage.com/dataon-products/dsm-30-for-nexentastor.html
>
> We also spent time in the past making sure the SES chip works with major RAID
> controller cards.
>
> Rocky
>
>
> -Original Message-
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
> Sent: Tuesday, January 25, 2011 1:30 PM
> To: Rocky Shek
> Cc: 'Philip Brown'; 
> zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
>
> On Tue, Jan 25, 2011 at 11:53:49AM -0800, Rocky Shek wrote:
> > Philip,
> >
> > You can consider DataON DNS-1600 4U 24Bay 6Gb/s SAS JBOD Storage.
> >
> http://dataonstorage.com/dataon-products/dns-1600-4u-6g-sas-to-sas-
> sata-jbod
> > -storage.html
> >
> > It is the best fit for ZFS Storage application. It can be a good
> replacement
> > of Sun/Oracle J4400 and J4200
> >
> > There are also Ultra density DNS-1660 4U 60 Bay 6Gb/s SAS JBOD
> Storage and
> > other form factor JBOD.
> >
> >
> http://dataonstorage.com/dataon-products/6g-sas-jbod/dns-1660-4u-60-
> bay-6g-3
> > 5inch-sassata-jbod.html
> >
>
> Does (Open)Solaris FMA work with these DataON JBODs?
> .. meaning do the failure LEDs work automatically in the case of disk
> failure?
>
> I guess that requires the SES chip on the JBOD to include proper drive
> identification for all slots.
>
> -- Pasi
>
> >
> > Rocky
> >
> > -Original Message-
> > From: 
> > zfs-discuss-boun...@opensolaris.org
> > [mailto:zfs-discuss-boun...@opensolaris.org]
> >  On Behalf Of Philip
> Brown
> > Sent: Tuesday, January 25, 2011 10:05 AM
> > To: zfs-discuss@opensolaris.org
> > Subject: [zfs-discuss] reliable, enterprise worthy JBODs?
> >
> > So, another hardware question :)
> >
> > ZFS has been touted as taking maximal advantage of disk hardware, to
> the
> > point where it can be used efficiently and cost-effectively on JBODs,
> rather
> > than having to throw more expensive RAID arrays at it.
> >
> > Only trouble is.. JBODs seem to have disappeared :(
> > Sun/Oracle has discontinued its j4000 line, with no replacement that
> I can
> > see.
> >
> > IBM seems to have some nice looking hardware in the form of its
> EXP3500
> > "expansion trays"... but they only support it connected to an IBM
> (SAS)
> > controller... which is only supported when plugged into IBM server
> hardware
> > :(
> >
> > Any other suggestions for (large-)enterprise-grade, supported JBOD
> hardware
> > for ZFS these days?
> > Either fibre or SAS would be okay.
> > --
> > This message posted from opensolaris.org
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-31 Thread Khushil Dep
You should also check out VA Technologies
(http://www.va-technologies.com/servicesStorage.php) in the UK, which supply a
range of JBODs. I've used these in very large deployments with no JBOD-related
failures to date. Interestingly, they also list Coraid boxes.

---
W. A. Khushil Dep - khushil@gmail.com -  07905374843
Windows - Linux - Solaris - ZFS - XenServer - FreeBSD - C/C++ - PHP/Perl -
LAMP - Nexenta - Development - Consulting & Contracting
http://www.khushil.com/ - http://www.facebook.com/GlobalOverlord




On 31 January 2011 09:15, Fred Liu  wrote:

> Rocky,
>
> Can individuals buy your products in the retail market?
>
> Thanks.
>
> Fred
>
> > -Original Message-
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Rocky Shek
> > Sent: 星期五, 一月 28, 2011 7:02
> > To: 'Pasi Kärkkäinen'
> > Cc: 'Philip Brown'; zfs-discuss@opensolaris.org
> > Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
> >
> > Pasi,
> >
> > I have not tried the Opensolaris FMA yet.
> >
> > But we have developed a tool called DSM that allows users to locate disk
> > drives, identify failed drives, and check FRU part status.
> >
> > http://dataonstorage.com/dataon-products/dsm-30-for-nexentastor.html
> >
> > We also spent time in the past making sure the SES chip works with major RAID
> > controller cards.
> >
> > Rocky
> >
> >
> > -Original Message-
> > From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
> > Sent: Tuesday, January 25, 2011 1:30 PM
> > To: Rocky Shek
> > Cc: 'Philip Brown'; zfs-discuss@opensolaris.org
> > Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
> >
> > On Tue, Jan 25, 2011 at 11:53:49AM -0800, Rocky Shek wrote:
> > > Philip,
> > >
> > > You can consider DataON DNS-1600 4U 24Bay 6Gb/s SAS JBOD Storage.
> > >
> > http://dataonstorage.com/dataon-products/dns-1600-4u-6g-sas-to-sas-
> > sata-jbod
> > > -storage.html
> > >
> > > It is the best fit for ZFS Storage application. It can be a good
> > replacement
> > > of Sun/Oracle J4400 and J4200
> > >
> > > There are also Ultra density DNS-1660 4U 60 Bay 6Gb/s SAS JBOD
> > Storage and
> > > other form factor JBOD.
> > >
> > >
> > http://dataonstorage.com/dataon-products/6g-sas-jbod/dns-1660-4u-60-
> > bay-6g-3
> > > 5inch-sassata-jbod.html
> > >
> >
> > Does (Open)Solaris FMA work with these DataON JBODs?
> > .. meaning do the failure LEDs work automatically in the case of disk
> > failure?
> >
> > I guess that requires the SES chip on the JBOD to include proper drive
> > identification for all slots.
> >
> > -- Pasi
> >
> > >
> > > Rocky
> > >
> > > -Original Message-
> > > From: zfs-discuss-boun...@opensolaris.org
> > > [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Philip
> > Brown
> > > Sent: Tuesday, January 25, 2011 10:05 AM
> > > To: zfs-discuss@opensolaris.org
> > > Subject: [zfs-discuss] reliable, enterprise worthy JBODs?
> > >
> > > So, another hardware question :)
> > >
> > > ZFS has been touted as taking maximal advantage of disk hardware, to
> > the
> > > point where it can be used efficiently and cost-effectively on JBODs,
> > rather
> > > than having to throw more expensive RAID arrays at it.
> > >
> > > Only trouble is.. JBODs seem to have disappeared :(
> > > Sun/Oracle has discontinued its j4000 line, with no replacement that
> > I can
> > > see.
> > >
> > > IBM seems to have some nice looking hardware in the form of its
> > EXP3500
> > > "expansion trays"... but they only support it connected to an IBM
> > (SAS)
> > > controller... which is only supported when plugged into IBM server
> > hardware
> > > :(
> > >
> > > Any other suggestions for (large-)enterprise-grade, supported JBOD
> > hardware
> > > for ZFS these days?
> > > Either fibre or SAS would be okay.
> > > --
> > > This message posted from opensolaris.org
> > > ___
> > > zfs-discuss mailing list
> > > zfs-discuss@opensolaris.org
> > > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> > >
> > > ___
> > > zfs-discuss mailing list
> > > zfs-discuss@opensolaris.org
> > > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] reliable, enterprise worthy JBODs?

2011-01-31 Thread Fred Liu
Rocky,

Can individuals buy your products in the retail market?

Thanks.

Fred

> -Original Message-
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Rocky Shek
> Sent: 星期五, 一月 28, 2011 7:02
> To: 'Pasi Kärkkäinen'
> Cc: 'Philip Brown'; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
> 
> Pasi,
> 
> I have not tried the Opensolaris FMA yet.
> 
> But we have developed a tool called DSM that allows users to locate disk
> drives, identify failed drives, and check FRU part status.
> 
> http://dataonstorage.com/dataon-products/dsm-30-for-nexentastor.html
> 
> We also spent time in the past making sure the SES chip works with major RAID
> controller cards.
> 
> Rocky
> 
> 
> -Original Message-
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
> Sent: Tuesday, January 25, 2011 1:30 PM
> To: Rocky Shek
> Cc: 'Philip Brown'; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] reliable, enterprise worthy JBODs?
> 
> On Tue, Jan 25, 2011 at 11:53:49AM -0800, Rocky Shek wrote:
> > Philip,
> >
> > You can consider DataON DNS-1600 4U 24Bay 6Gb/s SAS JBOD Storage.
> >
> http://dataonstorage.com/dataon-products/dns-1600-4u-6g-sas-to-sas-
> sata-jbod
> > -storage.html
> >
> > It is the best fit for ZFS Storage application. It can be a good
> replacement
> > of Sun/Oracle J4400 and J4200
> >
> > There are also Ultra density DNS-1660 4U 60 Bay 6Gb/s SAS JBOD
> Storage and
> > other form factor JBOD.
> >
> >
> http://dataonstorage.com/dataon-products/6g-sas-jbod/dns-1660-4u-60-
> bay-6g-3
> > 5inch-sassata-jbod.html
> >
> 
> Does (Open)Solaris FMA work with these DataON JBODs?
> .. meaning do the failure LEDs work automatically in the case of disk
> failure?
> 
> I guess that requires the SES chip on the JBOD to include proper drive
> identification for all slots.
> 
> -- Pasi
> 
> >
> > Rocky
> >
> > -Original Message-
> > From: zfs-discuss-boun...@opensolaris.org
> > [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Philip
> Brown
> > Sent: Tuesday, January 25, 2011 10:05 AM
> > To: zfs-discuss@opensolaris.org
> > Subject: [zfs-discuss] reliable, enterprise worthy JBODs?
> >
> > So, another hardware question :)
> >
> > ZFS has been touted as taking maximal advantage of disk hardware, to
> the
> > point where it can be used efficiently and cost-effectively on JBODs,
> rather
> > than having to throw more expensive RAID arrays at it.
> >
> > Only trouble is.. JBODs seem to have disappeared :(
> > Sun/Oracle has discontinued its j4000 line, with no replacement that
> I can
> > see.
> >
> > IBM seems to have some nice looking hardware in the form of its
> EXP3500
> > "expansion trays"... but they only support it connected to an IBM
> (SAS)
> > controller... which is only supported when plugged into IBM server
> hardware
> > :(
> >
> > Any other suggestions for (large-)enterprise-grade, supported JBOD
> hardware
> > for ZFS these days?
> > Either fibre or SAS would be okay.
> > --
> > This message posted from opensolaris.org
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss