Re: [zfs-discuss] Storage 7000

2008-11-17 Thread Daryl Doami
Just to clarify that last answer: we are planning to release SSDs for
many of our existing systems and storage.  They may be a little
different from what's used in the 7000, but they're intended for the
same purpose.

Your sales rep should be able to give you a better idea of when, but 
they're not that far off.

Here's a list of the existing products we're currently targeting in the
near term:
http://www.sun.com/storage/flash/products.jsp
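
For context, the SSDs in the 7000 series serve as ZFS intent-log and
read-cache devices, and that's how they'd be used on a plain ZFS box as
well.  On a build that supports separate log and cache devices, attaching
them looks roughly like this (pool and device names are hypothetical):

  # SSD as a separate intent log (slog) to speed up synchronous writes
  zpool add tank log c3t0d0

  # SSD as an L2ARC read cache
  zpool add tank cache c3t1d0

  # Verify the layout
  zpool status tank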

 Original Message 
Subject: Re: [zfs-discuss] Storage 7000
From: Adam Leventhal [EMAIL PROTECTED]
To: Mika Borner [EMAIL PROTECTED]
CC: ZFS discuss zfs-discuss@opensolaris.org
Date: Mon Nov 17 13:49:24 2008
 Would be interesting to hear more about how Fishworks differs from 
 OpenSolaris, what build it is based on, what package mechanism you are 
 using (IPS already?), and other differences...
 

 I'm sure these details will be examined in the coming weeks on the blogs
 of members of the Fishworks team. Keep an eye on blogs.sun.com/fishworks.

   
 A little off topic: do you know when the SSDs used in the Storage 7000 will
 be available for the rest of us?
 

 I don't think they will be, but it will be possible to purchase them as
 replacement parts.

 Adam

   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] OpenStorage GUI

2008-11-12 Thread Daryl Doami
There is no in-box/field upgrade available from the X4500 to the X4540.  The 
upgrades mentioned are in the form of discounted box swaps.

Sorry about that.  It would be nice though.

 Original Message 
Subject: Re: [zfs-discuss] OpenStorage GUI
From: Andy Lubel [EMAIL PROTECTED]
To: Chris Greer [EMAIL PROTECTED], zfs-discuss@opensolaris.org
Date: Wed Nov 12 13:18:15 2008
 The word Module makes it sound really easy :)  Has anyone ever swapped
 this module out, and if so, was it painful?

 Since our X4500s went from the pallet straight to the offsite datacenter, I
 never really got a chance to look at one closely.  I found a picture of one,
 and it looks like you could take out the whole guts in one tray (from
 the bottom rear?).

 -Andy

 -Original Message-
 From: Chris Greer [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, November 12, 2008 3:57 PM
 To: Andy Lubel; zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] OpenStorage GUI

 I was hoping for a swap out of the system board module.  

 Chris G.


 - Original Message -
 From: Andy Lubel [EMAIL PROTECTED]
 To: Chris Greer; zfs-discuss@opensolaris.org
 Sent: Wed Nov 12 14:38:03 2008
 Subject: RE: [zfs-discuss] OpenStorage GUI

  

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Chris Greer
 Sent: Wednesday, November 12, 2008 3:20 PM
 To: zfs-discuss@opensolaris.org
 Subject: Re: [zfs-discuss] OpenStorage GUI

 Do you have any info on this upgrade path?
 I can't seem to find anything about this...

 I'd also like to throw in my $0.02: I'd like to see the software
 offered to existing Sun X4540 (or upgraded X4500) customers.

 Chris G.
 --
 This message posted from opensolaris.org


   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] continuous replication

2008-11-12 Thread Daryl Doami
As an aside, replication has been implemented as part of the new Storage 
7000 family.  Here's a link to a blog post on running the 7000 
Simulator in two separate VMs and replicating between them:

http://blogs.sun.com/pgdh/entry/fun_with_replicating_the_sun

I'm not sure of the specifics, but it might give you some ideas on how 
it can be accomplished.
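
For a pair of plain ZFS hosts, the usual do-it-yourself equivalent is
periodic incremental zfs send/receive over ssh.  A minimal sketch,
assuming a hypothetical dataset tank/data and a receiving host named
standby:

  # One-time full copy (tank/data should not yet exist on standby)
  zfs snapshot tank/data@rep1
  zfs send tank/data@rep1 | ssh standby zfs receive tank/data

  # Thereafter, ship only the changes since the previous snapshot
  zfs snapshot tank/data@rep2
  zfs send -i tank/data@rep1 tank/data@rep2 | ssh standby zfs receive -F tank/data

Run from cron (or a small loop), this gives near-continuous, read-only
copies on the standby; how close to "continuous" it gets depends on how
fast zfs receive can keep up, as discussed below.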

Regards.

 Original Message 
Subject: Re: [zfs-discuss] continuous replication
From: Brent Jones [EMAIL PROTECTED]
To: Ian Collins [EMAIL PROTECTED], zfs-discuss@opensolaris.org
Date: Wed Nov 12 16:46:37 2008
 On Wed, Nov 12, 2008 at 3:40 PM, River Tarnell
 [EMAIL PROTECTED] wrote:
   

 Ian Collins:
 
 I doubt zfs receive would be able to keep pace with any non-trivial update 
 rate.
   
 one could consider this a bug in zfs receive :)

 
 Mirroring iSCSI or a dedicated HA tool would be a better solution.
   
 I'm not sure how to apply iSCSI here; the pool needs to be mounted at least
 read-only on both hosts at the same time.  (Also suggested was AVS, which
 doesn't allow keeping the pool mounted on the slave.)  Solaris Cluster,
 from what I've seen, doesn't allow this either; failover is handled by
 importing the pool on the surviving node.

- river.

 

 It sounds like you need either a true clustering file system, or to scale
 back the requirement to see changes read-only instantly on the
 secondary node.
 What kind of link do you plan between these nodes?  Would the link keep
 up with non-trivial updates?
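
To make the failover River describes concrete: with storage both hosts
can see (SAN or shared iSCSI LUNs), the standby takes over by importing
the pool.  A rough sketch, using a hypothetical pool named tank:

  # On the retiring node, if it is still reachable
  zpool export tank

  # On the surviving node; -f is needed if the old host died without exporting
  zpool import -f tank

The pool must never be imported on two hosts at once; doing so will
corrupt it, which is why the slave cannot keep it mounted.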



   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs boot / root in Nevada build 101

2008-10-29 Thread Daryl Doami

Hi Peter,

It's there, you just can't use the GUI installer.  You have to choose 
the interactive text installer, which will give you the choice of a ZFS root.
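
Once that install completes, the result is easy to confirm; a quick
check, assuming the default root pool name rpool:

  # The root pool and its datasets
  zpool status rpool
  zfs list -r rpool

  # / should be mounted from a dataset under rpool/ROOT
  df -h /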


Regards.

 Original Message 
Subject: [zfs-discuss] zfs boot / root in Nevada build 101
From: Peter Baer Galvin [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Date: Wed Oct 29 09:28:46 2008

This seems like a n00b question but I'm stuck.

Nevada build 101.  Doing a fresh install (in VMware Fusion).  I don't see any way 
to select ZFS as the root file system.  It looks to me like UFS is the default, but 
I don't see any option to change that to ZFS.  What am I 
missing?!  Thanks.
  



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs boot / root in Nevada build 101

2008-10-29 Thread Daryl Doami
Hi Peter,

It's mentioned here under Announcements:
http://opensolaris.org/os/community/zfs/boot/

It's just not very obvious.

 Original Message 
Subject: Re: [zfs-discuss] zfs boot / root in Nevada build 101
From: Peter Baer Galvin [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Date: Wed Oct 29 11:25:20 2008
 Hi Cindy, I googled quite a lot before posting my question.  This issue isn't 
 mentioned in the ZFS boot FAQ, for example, or anywhere (that I saw) on the 
 OpenSolaris ZFS pages.  Of course, I could have read the ZFS Admin book at 
 docs.sun.com...
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

2008-10-09 Thread Daryl Doami
Hi,

Maybe this would be an option too:

http://blogs.sun.com/storage/entry/mike_shapiro_and_steve_o
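
For the within-the-box part of the question quoted below, the usual
X4500 approach is raidz2 vdevs plus hot spares, so that no single disk
(or even two per vdev) takes data down.  A minimal, hypothetical sketch;
a real Thumper layout would spread each vdev across the six controllers:

  zpool create tank \
    raidz2 c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    spare  c0t7d0 c1t7d0
  zpool status tank

Whole-chassis failure still needs a second machine plus replication or
clustering on top of this.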

 Original Message 
Subject: [zfs-discuss] Strategies to avoid single point of failure w/ 
X45x0 Servers?
From: Solaris [EMAIL PROTECTED]
To: zfs-discuss@opensolaris.org
Date: Thu Oct  9 13:09:28 2008
 I have been leading the charge in my IT department to evaluate the Sun
 Fire X45x0 as a commodity storage platform, in order to leverage
 capacity and cost against our current NAS solution, which is backed by
 an EMC Fibre Channel SAN.  For our corporate environments, it would seem
 that a single machine would supply more than triple our current usable
 NAS capacity, and the cost is significantly less per GB.  I am
 also working to prove that the multi-protocol shared storage capabilities
 of the Thumper significantly outperform those of our current solution
 (which is notoriously bad from the end-user perspective).

 The EMC solution is completely redundant, with no single point of
 failure.  What are some good strategies for providing a Thumper
 solution with no single point of failure?

 The storage folks are pooh-poohing this concept because of the chance
 of an operating system failure... I'd like to come up with some
 reasonable methods to put them in their place :)
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?

2008-05-19 Thread Daryl Doami
Hi,

It's my understanding that CAM doesn't bundle the new ST6x40 firmware 
(7.1) at this point.  However, the new firmware is available today by 
request, and it does remove the 2TB limitation for the 6140 and 6540.  As 
Andy suggested, it does require a newer version of CAM, 6.1.

The ST25x0 firmware that fixes the 2TB limitation is still coming though.

Regards.

 Original Message 
Subject: Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?
From: Torrey McMahon [EMAIL PROTECTED]
To: Andy Lubel [EMAIL PROTECTED]
CC: zfs-discuss@opensolaris.org, Kenny [EMAIL PROTECTED]
Date: Mon May 19 15:18:51 2008
 The release should be out any day now.  I think it's being pushed to the 
 external download site whilst we type/read.

 Andy Lubel wrote:
   
 The limitation existed in every Sun-branded Engenio array we tested - 
 2510, 2530, 2540, 6130, 6540.  This limitation is on volumes: you will not be 
 able to present a LUN larger than that magical 1.998TB.  I think it is a 
 combination of CAM and the firmware.  Can't do it with sscs either...
  
 Warm and fuzzy:  Sun engineers told me they would have a new release of CAM 
 (and firmware bundle) in late June which would resolve this limitation.

 Or just do a ZFS (or even SVM) setup like Bob and I did.  It's actually pretty 
 nice, because the traffic will be split across both controllers, theoretically 
 giving you more throughput so long as MPxIO is functioning properly.  
 The only (minor) downside is that parity is transmitted from the host to the 
 disks rather than living entirely on the controller.
  
 -Andy
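
A rough illustration of the setup Andy describes: present several
sub-2TB LUNs (some owned by each controller), let MPxIO collapse the
paths, and let ZFS do the striping and parity.  Device names here are
hypothetical; with MPxIO enabled they are normally long WWN-based
cXt<WWN>d0 names:

  # Confirm MPxIO is presenting one multipathed device per LUN
  stmsboot -L

  # Build the pool on the multipathed LUNs; ZFS computes the parity host-side
  zpool create tank \
    raidz c4t0d0 c4t1d0 c4t2d0 \
    raidz c4t3d0 c4t4d0 c4t5d0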
  
 

 From: [EMAIL PROTECTED] on behalf of Torrey McMahon
 Sent: Mon 5/19/2008 1:59 PM
 To: Bob Friesenhahn
 Cc: zfs-discuss@opensolaris.org; Kenny
 Subject: Re: [zfs-discuss] ZFS and Sun Disk arrays - Opinions?



 Bob Friesenhahn wrote:
   
 
 On Mon, 19 May 2008, Kenny wrote:

  
 
   
 Bob M. - Thanks for the heads-up on the 2 (1.998) TB LUN limit.
 This has me a little concerned, esp. since I have 1 TB drives being
 delivered!  Also thanks for the SCSI cache-flushing heads-up, yet
 another item to look up!  grin
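
On the SCSI cache-flushing point: the usual concern is ZFS issuing
cache-flush requests that an NVRAM-backed array honours by flushing its
battery-protected cache, which hurts performance.  One commonly mentioned
host-side knob on recent Solaris/Nevada builds is the zfs_nocacheflush
tunable; treat this as a sketch, and only consider it when every pool
device sits behind battery-backed cache:

  # /etc/system entry (takes effect after a reboot); stops ZFS from issuing
  # cache flushes, so it is only safe with NVRAM-protected array storage
  set zfs:zfs_nocacheflush = 1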

   
 
 I am not sure if this LUN size limit really exists, or if it exists,
 in which cases it actually applies.  On my drive array, I created a
 3.6TB RAID-0 pool with all 12 drives included during the testing
 process.  Unfortunately, I don't recall if I created a LUN using all
 the space.

 I don't recall ever seeing mention of a 2TB limit in the CAM user
 interface or in the documentation.
 
   
 The Solaris LUN limit is gone if you're using Solaris 10 and recent patches.
 The array limit(s) are tied to the type of array you're using. (Which
 type is this again?)
 CAM shouldn't be enforcing any limits of its own but only reporting back
 when the array complains.



   
 

   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-17 Thread Daryl Doami
Hi Paul,

I believe the goal is to come out w/ new Solaris updates every 4-6 
months; they're sometimes known as quarterly updates.

Regards.

 Original Message 
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
From: Paul B. Henson [EMAIL PROTECTED]
To: Robin Guo [EMAIL PROTECTED]
CC: zfs-discuss@opensolaris.org
Date: Fri May 16 15:06:02 2008
 So, from a feature perspective it looks like S10U6 is going to be in pretty
 good shape ZFS-wise. If only someone could speak to (perhaps under the
 cloak of anonymity ;) ) the timing side :). Given U5 barely came out, I
 wouldn't expect U6 anytime soon :(.

 Thanks..
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08

2008-05-17 Thread Daryl Doami
Hi again,

I sort of take that back; here's the history so far:

Solaris 10 3/05  = Solaris 10 RR 1/05
Solaris 10 1/06  = Update 1
Solaris 10 6/06  = Update 2
Solaris 10 11/06 = Update 3
Solaris 10 8/07  = Update 4
Solaris 10 5/08  = Update 5

I did say it was a goal, though.

 Original Message 
Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
From: Daryl Doami [EMAIL PROTECTED]
To: Paul B. Henson [EMAIL PROTECTED]
CC: zfs-discuss@opensolaris.org
Date: Fri May 16 22:59:13 2008
 Hi Paul,

 I believe the goal is to come out w/ new Solaris updates every 4-6 
 months; they're sometimes known as quarterly updates.

 Regards.

  Original Message 
 Subject: Re: [zfs-discuss] ZFS in S10U6 vs openSolaris 05/08
 From: Paul B. Henson [EMAIL PROTECTED]
 To: Robin Guo [EMAIL PROTECTED]
 CC: zfs-discuss@opensolaris.org
 Date: Fri May 16 15:06:02 2008
   
 So, from a feature perspective it looks like S10U6 is going to be in pretty
 good shape ZFS-wise. If only someone could speak to (perhaps under the
 cloak of anonymity ;) ) the timing side :). Given U5 barely came out, I
 wouldn't expect U6 anytime soon :(.

 Thanks..
 
   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss