Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Ralf Ramge
Jorgen Lundman wrote:

> We did ask our vendor, but we were just told that AVS does not support 
> x4500.


AVS has been officially supported on the X4500 since the X4500 came 
out. But, although Jim Dunham and others will tell you otherwise, I 
absolutely can *not* recommend using it on this hardware with ZFS, 
especially with the larger disk sizes. At least not for important or 
business-critical data - in such a case, using X41x0 servers with
J4500 JBODs and a HAStoragePlus cluster instead of AVS may be a much 
better and more reliable option, for basically the same price.




-- 

Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA

Tel. +49-721-91374-3963
[EMAIL PROTECTED] - http://web.de/

1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe

Amtsgericht Montabaur HRB 6484

Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Thomas 
Gottschlich, Matthias Greve, Robert Hoffmann, Markus Huhn, Oliver Mauss, 
Achim Weiss
Aufsichtsratsvorsitzender: Michael Scheeren


Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Brent Jones
On Thu, Sep 4, 2008 at 12:19 AM, Ralf Ramge <[EMAIL PROTECTED]> wrote:
>
> Jorgen Lundman wrote:
>
> > We did ask our vendor, but we were just told that AVS does not support
> > x4500.
>
>
> The officially supported AVS works on the X4500 since the X4500 came
> out. But, although Jim Dunham and others will tell you otherwise, I
> absolutely can *not* recommend using it on this hardware with ZFS,
> especially with the larger disk sizes. At least not for important, or
> even business critical data - in such a case, using X41x0 servers with
> J4500 JBODs and a HAStoragePlus Cluster instead of AVS may be a much
> better and more reliable option, for basically the same price.
>
>
>
>
> --
>
> Ralf Ramge
> Senior Solaris Administrator, SCNA, SCSA

I did some Googling and saw some limitations on sharing a ZFS pool
via NFS while using the HAStoragePlus cluster product as well.
Do similar limitations exist for sharing via the built-in CIFS server in
OpenSolaris?

Here:
http://docs.sun.com/app/docs/doc/820-2565/z4000275997776?a=view

"
Zettabyte File System (ZFS) Restrictions

If you are using the zettabyte file system (ZFS) as the exported file
system, you must set the sharenfs property to off.

To set the sharenfs property to off, run the following command.

$ zfs set sharenfs=off file_system/volume

To verify if the sharenfs property is set to off, run the following command.

$ zfs get sharenfs file_system/volume
"



--
Brent Jones
[EMAIL PROTECTED]


Re: [zfs-discuss] SAS or SATA HBA with write cache

2008-09-04 Thread Tomas Ögren
On 03 September, 2008 - Aaron Blew sent me these 2,5K bytes:

> On Wed, Sep 3, 2008 at 1:48 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> 
> > I've never heard of a battery that's used for anything but RAID
> > features.  It's an interesting question, if you use the controller in
> > ``JBOD mode'' will it use the write cache or not?  I would guess not,
> > but it might.  And if it doesn't, can you force it, even by doing
> > sneaky things like making 2-disk mirrors where 1 disk happens to be
> > missing thus wasting half the ports you bought, but turning on the
> > damned write cache?  I don't know.
> >
> 
> The X4150 SAS RAID controllers will use the on-board battery backed cache
> even when disks are presented as individual LUNs.  You can also globally
> enable/disable the disk write caches.

We're using an Infortrend SATA/SCSI disk array with individual LUNs, but
it still uses the disk cache.

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Ralf Ramge
Brent Jones wrote:

> I did some Googling, but I saw some limitations sharing your ZFS pool
> via NFS while using HAStorage Cluster product as well.
[...]
> If you are using the zettabyte file system (ZFS) as the exported file
> system, you must set the sharenfs property to off.

That's not a limitation, it just looks like one. The cluster's resource 
type "SUNW.nfs" decides whether a file system is shared or not, and it 
does this with the usual "share" and "unshare" commands driven by a 
separate dfstab file. The ZFS sharenfs flag is set to "off" to avoid conflicts.
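For illustration, a minimal sketch of how the pieces typically fit together
-- the dataset, mount point, and dfstab location are hypothetical examples,
not taken from a real cluster configuration:

  # keep ZFS's own sharing disabled; SUNW.nfs does the sharing
  zfs set sharenfs=off tank/export

  # the SUNW.nfs resource shares the file system through its own dfstab
  # file, e.g. <Pathprefix>/SUNW.nfs/dfstab.<resource>, containing:
  share -F nfs -o rw /global/tank/export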

-- 

Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA

Tel. +49-721-91374-3963
[EMAIL PROTECTED] - http://web.de/

1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe

Amtsgericht Montabaur HRB 6484

Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Thomas 
Gottschlich, Matthias Greve, Robert Hoffmann, Markus Huhn, Oliver Mauss, 
Achim Weiss
Aufsichtsratsvorsitzender: Michael Scheeren


[zfs-discuss] What is the correct procedure to replace a non failed disk for another?

2008-09-04 Thread F. Wessels
Thanks for the replies.

I guess I misunderstood the manual:
zpool replace [-f] pool old_device [new_device]

Replaces old_device with new_device. This is equivalent to attaching 
new_device, waiting for it to resilver, and then detaching old_device.

The size of new_device must be greater than or equal to the minimum size of 
all the devices in a mirror or raidz configuration.

new_device is required if the pool is not redundant. If new_device is not 
specified, it defaults to old_device. This form of replacement is useful after 
an existing disk has failed and has been physically replaced. In this case, the 
new disk may have the same /dev/dsk path as the old device, even though it is 
actually a different disk. ZFS recognizes this.

The last paragraph mentions a disk that has failed and been physically 
replaced, and the first paragraph mentions resilvering.

To summarize: "zpool replace" can replace a disk in any vdev type without 
compromising redundancy. If the new disk fails during resilvering, the pool 
remains in its original state. Only after the new disk has resilvered does the 
old one get detached, and all that time the pool keeps its original redundancy 
(assuming, of course, that no other failure kicks in).
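For example, proactively replacing a healthy disk would look something like 
this (pool and device names are placeholders):

  zpool replace tank c2d0 c4d0
  zpool status tank    # c2d0 stays attached until the resilver onto c4d0 completes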

PS. Why can't I see the comments from Bob and Jerry and others made after the 
last comment from Ross? I can see the comments in the text based site at 
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-September but not at 
http://www.opensolaris.org/ at which I'm currently posting this message.


Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 09/04/2008 02:19:23 AM:

> Jorgen Lundman wrote:
>
> > We did ask our vendor, but we were just told that AVS does not support
> > x4500.
>
>
> The officially supported AVS works on the X4500 since the X4500 came
> out. But, although Jim Dunham and others will tell you otherwise, I
> absolutely can *not* recommend using it on this hardware with ZFS,
> especially with the larger disk sizes. At least not for important, or
> even business critical data - in such a case, using X41x0 servers with
> J4500 JBODs and a HAStoragePlus Cluster instead of AVS may be a much
> better and more reliable option, for basically the same price.
>

Ralf,

  War wounds?  Could you please expand on the why a bit more?

-Wade



[zfs-discuss] Explaining ZFS message in FMA

2008-09-04 Thread Alain Chéreau

Hi all,

ZFS sends a message to FMA in case of disk failure.
The message detail references a vdev by a hexadecimal number, as in:

# fmdump -V -u 50ea07a0-2cd9-6bfb-ff9e-e219740052d5
TIME UUID SUNW-MSG-ID
Feb 18 11:07:29.5195 50ea07a0-2cd9-6bfb-ff9e-e219740052d5 ZFS-8000-D3

 TIME CLASS ENA
 Feb 18 11:07:27.8476 ereport.fs.zfs.vdev.open_failed   0xb22406c635500401

   nvlist version: 0
   version = 0x0
...
   nvlist version: 0
   version = 0x0
   scheme = zfs
   pool = 0x3a2ca6bebd96cfe3
   vdev = 0xedef914b5d9eae8d


I have searched for a way to tie the vdev number
0xedef914b5d9eae8d back to the failed device, but I found
no way to make the link.


Can someone help?

Thank you.

Alain Chéreau.





Re: [zfs-discuss] Explaining ZFS message in FMA

2008-09-04 Thread Eric Schrock
You should be able to do 'zpool status -x' to find out what vdev is
broken.  A useful extension to the DE would be to add a label to the
suspect corresponding to /.
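If you need to map the GUID by hand in the meantime, one rough approach
(the device name is just an example) is to compare it against the labels:

  zpool status -x                         # shows the faulted vdev, if any
  zdb -l /dev/rdsk/c1t1d0s0 | grep guid   # compare against the vdev GUID from fmdump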

- Eric

On Thu, Sep 04, 2008 at 06:34:33PM +0200, Alain Chéreau wrote:
> Hi all,
> 
> ZFS send a message to FMA in case of disk failure.
> The detail of the message reference a vdev by an hexadecimal number as:
> 
> # *fmdump -V -u 50ea07a0-2cd9-6bfb-ff9e-e219740052d5*
> TIME UUID SUNW-MSG-ID
> Feb 18 11:07:29.5195 50ea07a0-2cd9-6bfb-ff9e-e219740052d5 ZFS-8000-D3
> 
>  TIME CLASS ENA
>  Feb 18 11:07:27.8476 ereport.fs.zfs.vdev.open_failed   
>  0xb22406c635500401
> 
>nvlist version: 0
>version = 0x0
> ...
>nvlist version: 0
>version = 0x0
>scheme = zfs
>pool = 0x3a2ca6bebd96cfe3
>_*vdev = 0xedef914b5d9eae8d*_
> 
> 
> I have search how to join the vdev number
> 0xedef914b5d9eae8d to a failed device, but I found 
> no way to make the link.
> 
> Can someone help ?
> 
> Thank you.
> 
> Alain Chéreau.
> 
> 
> 



--
Eric Schrock, Fishworks    http://blogs.sun.com/eschrock


Re: [zfs-discuss] Explaining ZFS message in FMA

2008-09-04 Thread Cindy . Swearingen
Alain,

I think you want to use fmdump -eV to display the extended device
information. See the output below.

Cindy

class = ereport.fs.zfs.checksum
 ena = 0x3242b9cdeac00401
 detector = (embedded nvlist)
 nvlist version: 0
 version = 0x0
 scheme = zfs
 pool = 0x5fd04b3a3f98df8e
 vdev = 0x6ae5803769caa878
 (end detector)

 pool = sunray
 pool_guid = 0x5fd04b3a3f98df8e
 pool_context = 0
 vdev_guid = 0x6ae5803769caa878
 vdev_type = disk
 vdev_path = /dev/dsk/c1t1d0s7
 vdev_devid = id1,[EMAIL PROTECTED]/h

Alain Chéreau wrote:
> Hi all,
> 
> ZFS send a message to FMA in case of disk failure.
> The detail of the message reference a vdev by an hexadecimal number as:
> 
> # *fmdump -V -u 50ea07a0-2cd9-6bfb-ff9e-e219740052d5*
> TIME UUID SUNW-MSG-ID
> Feb 18 11:07:29.5195 50ea07a0-2cd9-6bfb-ff9e-e219740052d5 ZFS-8000-D3
> 
>   TIME CLASS ENA
>   Feb 18 11:07:27.8476 ereport.fs.zfs.vdev.open_failed   
> 0xb22406c635500401
> 
> nvlist version: 0
> version = 0x0
> ...
> nvlist version: 0
> version = 0x0
> scheme = zfs
> pool = 0x3a2ca6bebd96cfe3
> _*vdev = 0xedef914b5d9eae8d*_
> 
> 
> I have search how to join the vdev number
> 0xedef914b5d9eae8d to a failed device, but I found 
> no way to make the link.
> 
> Can someone help ?
> 
> Thank you.
> 
> Alain Chéreau.
> 
> 
> 
> 
> 
> 


[zfs-discuss] Max vol size and number of files in production

2008-09-04 Thread Jean Luc Berrier
Hi,

My problem is that one of my customers wants to migrate his Exanet systems to 
ZFS, but Sun told him that there are real limitations with ZFS.
Customer environment:
Incoming data
FS size: 50 TB, with at least 100 thousand files written per day, around 20 
million files.
Consulting data
FS size: 200 TB, with at least 300 million files.

Sun told him that above 10 million files ZFS would not be the right solution.
Do you have a POC, or an explanation of these limitations with regard to the 
ZFS specs?
Best regards


Re: [zfs-discuss] Max vol size and number of files in production

2008-09-04 Thread Tim
The issue is that Exanet isn't really holding "300 million files" in one dataset
the way you're talking about doing with ZFS.  It's a clustered approach with a
single namespace.

The reality is that you can do what the customer wants to do, but you'd be
leveraging something like pNFS, which I don't think is quite production-ready yet.

I'm sure there are others on this list much better versed in pNFS than
I am who can speak to that solution.

--Tim



On Thu, Sep 4, 2008 at 12:59 PM, Jean Luc Berrier <[EMAIL PROTECTED]> wrote:

> Hi,
>
> My problem is one of my customer wants to change his Exanet Systems to ZFS,
> but SUN told him that there is real limitation with ZFS :
> Customer env :
> Incoming Data
> FS SIZE : 50 TB, with at least 100 Thousand files write  per day, around 20
> Millions files.
> Consulting Data
> FS Size : 200 TB with at least 300 Millions Files
>
> Sun told him over 10 millions files ZFS should not be the right solution.
> Do you have POC or explanation to this limitations regarding the spec of
> ZFS.
> Best regards


[zfs-discuss] send/receive statistics

2008-09-04 Thread Marcelo Leal
Hello all,
 Are there any plans for (or is there already) a way to get transfer statistics 
from send/receive? I mean how much was transferred, the elapsed time, and/or 
bytes/sec.
 And the last question... I have seen the question about the consistency of 
send/receive through ssh raised in many threads, but no definitive answers.
So, my last question is: if the transfer completes (send/receive), can I trust 
that the backup was good? Is "receive" returning 0 definitive?

 Thanks.


[zfs-discuss] Terabyte scrub

2008-09-04 Thread Marcelo Leal
Hello all,
 I am used to mirrors on Solaris 10, where the scrub process for 500 GB took 
about two hours... yet in tests with Solaris Express (snv_79a), terabytes scrub 
in minutes. I searched the release notes for changes to the scrub process and 
could not find anything about an enhancement of this magnitude. So I ask 
you... :)
 What??


Re: [zfs-discuss] Terabyte scrub

2008-09-04 Thread Will Murnane
On Thu, Sep 4, 2008 at 14:18, Marcelo Leal
<[EMAIL PROTECTED]> wrote:
> Hello all,
>  I was used to use mirrors and solaris 10, in which the scrub process for 
> 500gb took about two hours... and with solaris express (snv_79a) tests, 
> terabytes in minutes. I did search for release changes in the scrub process, 
> and could not find anything about enhancements in this magnitude. So i ask 
> you... :)
How full were these filesystems?  Scrubbing only verifies data which
is written, not the entire surface of the disk.

A terabyte which is read at 100 megabytes per second will take about
10,000 seconds to read, or about three hours.  So I'm guessing you had
a mostly empty zpool to test.
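A quick way to check (the pool name is just a placeholder):

  zpool list tank      # the USED column shows how much data is actually allocated
  zpool status tank    # shows scrub progress and, once finished, how long it took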

Will


Re: [zfs-discuss] send/receive statistics

2008-09-04 Thread Richard Elling
Marcelo Leal wrote:
> Hello all,
>  Any plans (or already have), a send/receive way to get the transfer backup 
> statistics? I mean, the "how much" was transfered, time and/or bytes/sec?
>   

I'm not aware of any plans, you should file an RFE.

>  And the last question... i did see in many threads the question about "the 
> consistency between the send/receive through ssh"... but no definitive 
> answers.
>   

There is no additional data protection on the stream. The data is verified
by the receive.  But if you redirect the stream to a file, for instance, and
then later attempt the receive and encounter checksum errors, then you
might be sad.  In other words, the end-to-end verification of send streams
exists, but there is no inherent data protection in the stream.  The use of
ssh tends to work well, because secure protocols also check the integrity
of the data flowing between machines and will retry.

> So, my last question is: If the transfer is completed (send/receive), i can 
> trust the backup was good. The "receive" returning "0" is definitive?
>   

If the receive completes without error, then everything passed the
checksum verification.
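A minimal sketch of the usual pattern (dataset, snapshot, and host names are 
hypothetical); the pipeline's exit status reflects whether the receive 
completed without error:

  zfs send tank/data@today | ssh backuphost zfs receive -F backup/data
  echo $?    # 0 means the receive finished and every block passed its checksum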
 -- richard



Re: [zfs-discuss] send/receive statistics

2008-09-04 Thread Toby Thain

On 4-Sep-08, at 4:52 PM, Richard Elling wrote:

> Marcelo Leal wrote:
>> Hello all,
>>  Any plans (or already have), a send/receive way to get the  
>> transfer backup statistics? I mean, the "how much" was transfered,  
>> time and/or bytes/sec?
>>
>
> I'm not aware of any plans, you should file an RFE.
>
>>  And the last question... i did see in many threads the question  
>> about "the consistency between the send/receive through ssh"...  
>> but no definitive answers.
>>
>
> There is no additional data protection on the stream. The data is  
> verified
> by the receive.  But if you redirect the stream to a file, for  
> instance, and
> then later attempt the receive and encounter checksum errors, then you
> might be sad.  In other words, the end-to-end verification of send  
> streams
> exists, but there is no inherent data protection in the stream.   
> The use of
> ssh tends to work well, because secure protocols also check the  
> integrity
> of the data flowing between machines and will retry.

What about the idea mooted recently here of 'zfs send' presenting a
final checksum which could be checked by the receiver? (Where the
destination is not 'zfs receive'.) Worth doing?

--Toby

>
>> So, my last question is: If the transfer is completed (send/ 
>> receive), i can trust the backup was good. The "receive" returning  
>> "0" is definitive?
>>
>
> If the receive completes without error, then everything passed the
> checksum verification.
>  -- richard
>


Re: [zfs-discuss] send/receive statistics

2008-09-04 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 09/04/2008 03:40:46 PM:

>
> On 4-Sep-08, at 4:52 PM, Richard Elling wrote:
>
> > Marcelo Leal wrote:
> >> Hello all,
> >>  Any plans (or already have), a send/receive way to get the
> >> transfer backup statistics? I mean, the "how much" was transfered,
> >> time and/or bytes/sec?
> >>
> >
> > I'm not aware of any plans, you should file an RFE.
> >
> >>  And the last question... i did see in many threads the question
> >> about "the consistency between the send/receive through ssh"...
> >> but no definitive answers.
> >>
> >
> > There is no additional data protection on the stream. The data is
> > verified
> > by the receive.  But if you redirect the stream to a file, for
> > instance, and
> > then later attempt the receive and encounter checksum errors, then you
> > might be sad.  In other words, the end-to-end verification of send
> > streams
> > exists, but there is no inherent data protection in the stream.
> > The use of
> > ssh tends to work well, because secure protocols also check the
> > integrity
> > of the data flowing between machines and will retry.
>
> What about the idea mooted recently here of 'zfs send' presenting a
> final checksum which could be checked by receiver? (Where destination
> is not 'zfs receive'). Worth doing?
>
> --Toby

I don't have time to dig into the zfs send/receive code right now, but I don't
understand why, if the checksums for the delta blocks exist on the sending
side, the receiving side doesn't verify that they match as it writes the
blocks -- throwing an error, simply exiting, or reverting to the base snapshot
of the receive (depending on options) when a block checksum doesn't match.
I understand send and receive were designed to be pipeable, but why should
that exclude error handling?
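In the meantime, one workaround when the destination is a file rather than
'zfs receive' is to record a digest of the stream alongside it -- a rough
sketch, with hypothetical dataset and path names:

  zfs send tank/data@today | tee /backup/data.zfs | digest -a sha1 > /backup/data.zfs.sha1
  digest -a sha1 /backup/data.zfs           # later, compare against the stored value
  zfs receive -F tank/restored < /backup/data.zfs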

-Wade

>
> >
> >> So, my last question is: If the transfer is completed (send/
> >> receive), i can trust the backup was good. The "receive" returning
> >> "0" is definitive?
> >>
> >
> > If the receive completes without error, then everything passed the
> > checksum verification.
> >  -- richard
> >


Re: [zfs-discuss] Sun samba <-> ZFS ACLs

2008-09-04 Thread Paul B. Henson
On Wed, 3 Sep 2008, Richard Elling wrote:

> Source packages are usually in a Solaris distribution (overloaded term,
> but look at something like Solaris 10 5/08) and typically end in "S" So
> look in the Product directory for something like SUNWsambaS. Of course,

SUNWsmbaS as it turns out... You will need to reapply your latest Samba
patch after installing the source code if you have already installed and
patched the server packages, otherwise the source code will be out of date.

Not that it compiles anyway :(.


-- 
Paul B. Henson  |  (909) 979-6361  |  http://www.csupomona.edu/~henson/
Operating Systems and Network Analyst  |  [EMAIL PROTECTED]
California State Polytechnic University  |  Pomona CA 91768


Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-09-04 Thread Steve Goldberg
Hi Lori,

is ZFS boot still planned for S10 update 6?

Thanks,

Steve


Re: [zfs-discuss] Status of ZFS boot for sparc?

2008-09-04 Thread Enda O'Connor
Steve Goldberg wrote:
> Hi Lori,
> 
> is ZFS boot still planned for S10 update 6?
> 
> Thanks,
> 
> Steve
Hi
Yes, it's in u6; I have migrated u5 UFS on SVM to ZFS boot.
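For reference, a rough sketch of the Live Upgrade steps for that kind of 
migration -- the pool, slice, and boot environment names here are examples only:

  zpool create rpool c0t0d0s0    # ZFS root pool on an SMI-labeled slice
  lucreate -n zfs-be -p rpool    # copy the current UFS boot environment into the pool
  luactivate zfs-be              # make the new ZFS BE active on the next boot
  init 6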
Enda


Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Al Hopper
On Thu, Sep 4, 2008 at 10:09 AM,  <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote on 09/04/2008 02:19:23 AM:
>
>> Jorgen Lundman wrote:
>>
>> > We did ask our vendor, but we were just told that AVS does not support
>> > x4500.
>>
>>
>> The officially supported AVS works on the X4500 since the X4500 came
>> out. But, although Jim Dunham and others will tell you otherwise, I
>> absolutely can *not* recommend using it on this hardware with ZFS,
>> especially with the larger disk sizes. At least not for important, or
>> even business critical data - in such a case, using X41x0 servers with
>> J4500 JBODs and a HAStoragePlus Cluster instead of AVS may be a much
>> better and more reliable option, for basically the same price.
>>
>
> Ralf,
>
>  War wounds?  Could you please expand on the why a bit more?

+1   I'd also be interested in more details.

Thanks,

-- 
Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
 Voice: 972.379.2133 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/


Re: [zfs-discuss] x4500 vs AVS ?

2008-09-04 Thread Brent Jones
On Thu, Sep 4, 2008 at 7:38 PM, Al Hopper <[EMAIL PROTECTED]> wrote:
> On Thu, Sep 4, 2008 at 10:09 AM,  <[EMAIL PROTECTED]> wrote:
>> [EMAIL PROTECTED] wrote on 09/04/2008 02:19:23 AM:
>>
>>> Jorgen Lundman wrote:
>>>
>>> > We did ask our vendor, but we were just told that AVS does not support
>>> > x4500.
>>>
>>>
>>> The officially supported AVS works on the X4500 since the X4500 came
>>> out. But, although Jim Dunham and others will tell you otherwise, I
>>> absolutely can *not* recommend using it on this hardware with ZFS,
>>> especially with the larger disk sizes. At least not for important, or
>>> even business critical data - in such a case, using X41x0 servers with
>>> J4500 JBODs and a HAStoragePlus Cluster instead of AVS may be a much
>>> better and more reliable option, for basically the same price.
>>>
>>
>> Ralf,
>>
>>  War wounds?  Could you please expand on the why a bit more?
>
> +1   I'd also be interested in more details.
>
> Thanks,
>
> --
> Al Hopper Logical Approach Inc,Plano,TX [EMAIL PROTECTED]
>  Voice: 972.379.2133 Timezone: US CDT
> OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
> http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/

Story time!

-- 
Brent Jones
[EMAIL PROTECTED]


Re: [zfs-discuss] Terabyte scrub

2008-09-04 Thread Anton B. Rang
If you're using a mirror, and each disk manages 50 MB/second (unlikely if it's 
a single disk doing a lot of seeks, but you might do better using a hardware 
array for each half of the mirror), simple math says that scanning 1 TB would 
take roughly 20,000 seconds, or 5 hours. So your speed under Solaris 10 sounds 
plausible.  Are you sure you're scrubbing what you think you are, with Solaris 
Express?


[zfs-discuss] zfs metada corrupted

2008-09-04 Thread LyeBeng Ong
I made a bad judgment call and now my raidz pool is corrupted. I have a raidz 
pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7 and tried to 
add my pool to FreeNAS.

After adding the ZFS disk, vdev, and pool, I decided to back out and went back 
to OpenSolaris. Now my raidz pool will not mount, and I get the errors below. 
I hope some expert can help me recover from this.

[EMAIL PROTECTED]:/dev/rdsk# zpool status
  pool: syspool
 state: ONLINE
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
syspool ONLINE   0 0 0
  c1d0s0ONLINE   0 0 0

errors: No known data errors

  pool: tank
 state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from a backup source.
   see: http://www.sun.com/msg/ZFS-8000-72
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
tankFAULTED  0 0 4  corrupted data
  raidz1ONLINE   0 0 4
c2d0ONLINE   0 0 0
c2d1ONLINE   0 0 0
c3d0ONLINE   0 0 0
c3d1ONLINE   0 0 0
[EMAIL PROTECTED]:/dev/rdsk# 

[EMAIL PROTECTED]:/dev/rdsk# zdb -vvv
syspool
version=10
name='syspool'
state=0
txg=13
pool_guid=7417064082496892875
hostname='elatte_installcd'
vdev_tree
type='root'
id=0
guid=7417064082496892875
children[0]
type='disk'
id=0
guid=16996723219710622372
path='/dev/dsk/c1d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/[EMAIL PROTECTED]/[EMAIL 
PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=0
metaslab_array=14
metaslab_shift=30
ashift=9
asize=158882856960
is_log=0
tank
version=10
name='tank'
state=0
txg=9305484
pool_guid=6165551123815947851
hostname='cempedak'
vdev_tree
type='root'
id=0
guid=6165551123815947851
children[0]
type='raidz'
id=0
guid=18029757455913565148
nparity=1
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1280228458496
is_log=0
children[0]
type='disk'
id=0
guid=14740261559114907785
path='/dev/dsk/c2d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci10de,[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
children[1]
type='disk'
id=1
guid=7618479640615121644
path='/dev/dsk/c2d1s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci10de,[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
children[2]
type='disk'
id=2
guid=1801493855297946488
path='/dev/dsk/c3d0s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci10de,[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
children[3]
type='disk'
id=3
guid=15710901655082836445
path='/dev/dsk/c3d1s0'
devid='id1,[EMAIL PROTECTED]/a'
phys_path='/[EMAIL PROTECTED],0/pci10de,[EMAIL 
PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED]/[EMAIL PROTECTED],0:a'
whole_disk=1
[EMAIL PROTECTED]:/dev/rdsk# 

[EMAIL PROTECTED]:/dev/rdsk# zdb -l /dev/rdsk/c2d0

LABEL 0

version=6
name='tank'
state=2
txg=14
pool_guid=11155694179612409655
hostid=0
hostname='cempedak.local'
top_guid=17207490567963887885
guid=15107016125503765553
vdev_tree
type='raidz'
id=0
guid=17207490567963887885
nparity=1
metaslab_array=14
metaslab_shift=33
ashift=9
asize=1280272498688
children[0]
type='disk'
id=0
guid=550142777835149292
path='/dev/ad10'
devid='ad:3QF0PGBS'
whole_disk=0