Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-05-08 Thread Jesus Cea

Darren J Moffat wrote:
| Great tool, any chance we can have it integrated into zpool(1M) so that
| it can find and "fixup" on import detached vdevs as new pools ?
|
| I'd think it would be reasonable to extend the meaning of
| 'zpool import -D' to list detached vdevs as well as destroyed pools.

+inf :-)

-- 
Jesus Cea Avion _/_/  _/_/_/_/_/_/
[EMAIL PROTECTED] - http://www.jcea.es/ _/_/_/_/  _/_/_/_/  _/_/
jabber / xmpp:[EMAIL PROTECTED] _/_/_/_/  _/_/_/_/_/
~   _/_/  _/_/_/_/  _/_/  _/_/
"Things are not so easy"  _/_/  _/_/_/_/  _/_/_/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/_/_/_/  _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] News on Single Drive RaidZ Expansion?

2008-05-08 Thread Jason King
On Thu, May 8, 2008 at 8:59 PM, EchoB <[EMAIL PROTECTED]> wrote:
> I cannot recall if it was this (-discuss) or (-code) but a post a few
>  months ago caught my attention.
>  In it someone detailed having worked out the math and algorithms for a
>  flexible expansion scheme for ZFS. Clearly this is very exciting to me,
>  and most people who use ZFS on purpose.
>  I am wondering if there is currently any work in progress to implement
>  that - or any other method of accomplishing that task. It seems to be
>  one of the most asked about features. I haven't heard anything in a
>  while - so I figured I'd ask.

I suspect this might be what you're looking for:
http://blogs.sun.com/ahl/entry/expand_o_matic_raid_z

However, it depends on the block pointer rewrite functionality
(which I believe is being worked on, but I cannot say for sure).
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] News on Single Drive RaidZ Expansion?

2008-05-08 Thread EchoB
I cannot recall if it was this list (-discuss) or (-code), but a post a few
months ago caught my attention.
In it, someone detailed having worked out the math and algorithms for a
flexible expansion scheme for ZFS. Clearly this is very exciting to me,
and to most people who use ZFS on purpose.
I am wondering if there is currently any work in progress to implement
that - or any other method of accomplishing that task. It seems to be
one of the most asked-about features. I haven't heard anything in a
while, so I figured I'd ask.

Cheers!

:EchoBinary
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Dave
On 05/08/2008 11:29 AM, Luke Scharf wrote:
> Dave wrote:
>> On 05/08/2008 08:11 AM, Ross wrote:
>>  
>>> It may be an obvious point, but are you aware that snapshots need to 
>>> be stopped any time a disk fails?  It's something to consider if 
>>> you're planning frequent snapshots.
>>> 
>>
>> I've never heard this before. Why would snapshots need to be stopped 
>> for a disk failure?
>>   
> 
> Because taking a snapshot makes the scrub start over.  I hadn't thought 
> about this extending to a resilver, but I guess it would!
> 

Ah, yes, for scrubs/resilvers. My brain didn't seem to understand the 
actual intent of Ross' statement, which was to say that repairing a 
mirror/raidz after replacing a bad disk requires halting new snapshots. 
On the other hand, a disk can fail and you can take snapshots all day 
long on a degraded pool.

Glad to hear there's code under review to fix this.

--
Dave
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread eric kustarz

On May 8, 2008, at 12:31 PM, Carson Gaspar wrote:

> Luke Scharf wrote:
>> Dave wrote:
>>> On 05/08/2008 08:11 AM, Ross wrote:
>>>
>>>> It may be an obvious point, but are you aware that snapshots need
>>>> to be stopped any time a disk fails?  It's something to consider
>>>> if you're planning frequent snapshots.
>>>>
>>> I've never heard this before. Why would snapshots need to be  
>>> stopped for
>>> a disk failure?
>>>
>>
>> Because taking a snapshot makes the scrub start over.  I hadn't  
>> thought
>> about this extending to a resilver, but I guess it would!
>
> I thought this was fixed in OpenSolaris and Solaris 10 U5? Can one of
> the ZFS folks please comment?

Matt just sent out a code review for this today:
6343667 scrub/resilver has to start over when a snapshot is taken
http://bugs.opensolaris.org/view_bug.do?bug_id=6343667

eric
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 05/08/2008 02:31:43 PM:

> Luke Scharf wrote:
> > Dave wrote:
> >> On 05/08/2008 08:11 AM, Ross wrote:
> >>
> >>> It may be an obvious point, but are you aware that snapshots
> >>> need to be stopped any time a disk fails?  It's something to
> >>> consider if you're planning frequent snapshots.
> >>>
> >> I've never heard this before. Why would snapshots need to be stopped
> >> for a disk failure?
> >>
> >
> > Because taking a snapshot makes the scrub start over.  I hadn't thought
> > about this extending to a resilver, but I guess it would!
>
> I thought this was fixed in OpenSolaris and Solaris 10 U5? Can one of
> the ZFS folks please comment?
>
> I'll probably get to test this the hard way next week, as I start to
> attempt to engineer a zfs send/recv DR solution. If this bug _isn't_
> fixed, I will be a very unhappy geek :-(
>

Sorry to hear you are unhappy. =(

It is not fixed yet -- I am actively looking for the fix too.  On a 4500
with a lot of used data in a large pool, you can expect to lose snapshots
for 5+ days while a resilver or scrub runs.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Carson Gaspar
Luke Scharf wrote:
> Dave wrote:
>> On 05/08/2008 08:11 AM, Ross wrote:
>>   
>>> It may be an obvious point, but are you aware that snapshots need to be 
>>> stopped any time a disk fails?  It's something to consider if you're 
>>> planning frequent snapshots.
>>> 
>> I've never heard this before. Why would snapshots need to be stopped for 
>> a disk failure?
>>   
> 
> Because taking a snapshot makes the scrub start over.  I hadn't thought 
> about this extending to a resilver, but I guess it would!

I thought this was fixed in OpenSolaris and Solaris 10 U5? Can one of 
the ZFS folks please comment?

I'll probably get to test this the hard way next week, as I start to 
attempt to engineer a zfs send/recv DR solution. If this bug _isn't_ 
fixed, I will be a very unhappy geek :-(

-- 
Carson
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS cli for REMOTE Administration

2008-05-08 Thread Mark Shellenbaum
Marcelo Leal wrote:
> No answer... well, do you not have this problem or there is another option to 
> delegate such administration? I was thinking if we can delegate a "single" 
> filesystem administration to some user through ZFS administration web console 
> (67889). 
>  Can i create a user and give him administration rights to a single 
> filesystem (and its snapshots, of course)?
> 
>  Thanks.


We already have the ability to allow users to create/destroy snapshots
over NFS.  Look at the ZFS delegated administration model.  If all you
want is snapshot creation/destruction, then you will need to grant the
"snapshot,mount,destroy" permissions.

Then, on the NFS client, go into .zfs/snapshot in the mounted filesystem
and do mkdir <snapshot-name>.  Provided the user has the appropriate
permissions, the snapshot will be created.

rmdir can be used to remove the snapshot.
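
For example, a minimal sketch of that flow, assuming a filesystem
tank/home/fred delegated to user fred and NFS-mounted on the client
(names and paths are made up):

  # server side: grant the permissions mentioned above
  zfs allow fred snapshot,mount,destroy tank/home/fred

  # NFS client side: create and remove a snapshot from inside the mount
  cd /mnt/fred/.zfs/snapshot
  mkdir nightly      # creates tank/home/fred@nightly
  rmdir nightly      # destroys it again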


   -Mark
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS cli for REMOTE Administration

2008-05-08 Thread Darren J Moffat
Marcelo Leal wrote:
> No answer... well, do you not have this problem or there is another option to 
> delegate such administration? I was thinking if we can delegate a "single" 
> filesystem administration to some user through ZFS administration web console 
> (67889). 
>  Can i create a user and give him administration rights to a single 
> filesystem (and its snapshots, of course)?

User delegation of operations and properties already exists.

See the section on the "allow" sub-command in the zfs(1) man page.
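
A short illustrative sketch (user and dataset names are made up):

  # delegate both operations and a couple of properties on one filesystem
  zfs allow leal create,destroy,mount,snapshot tank/projects/leal
  zfs allow leal quota,compression tank/projects/leal

  # show what has been delegated
  zfs allow tank/projects/leal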

-- 
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS cli for REMOTE Administration

2008-05-08 Thread Marcelo Leal
No answer... well, do you not have this problem, or is there another option to
delegate such administration? I was wondering whether we can delegate
administration of a "single" filesystem to some user through the ZFS
administration web console (67889).
 Can I create a user and give him administration rights to a single filesystem
(and its snapshots, of course)?

 Thanks.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs! mirror and break

2008-05-08 Thread Richard Elling
Mike DeMarco wrote:
> I currently have a zpool with two 8Gbyte disks in it. I need to replace them 
> with a single 56Gbyte disk.
>
> with veritas I would just add the disk in as a mirror and break off the other 
> plex then destroy it.
>
> I see no way of being able to do this with zfs.
>
> Being able to migrate data without having to unmount and remount filesystems 
> is very 
> important to me.
>
> Can anyone say when such functionality will be implemented?
>   

If the original pool is a mirror, then it is trivial and has been a
feature since day one: zpool attach the new disk, then
zpool detach the old disks.
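
A minimal sketch of that sequence, assuming the pool is a two-way mirror
named tank (device names are made up):

  zpool attach tank c0t0d0 c0t2d0   # add the 56GB disk as a third side of the mirror
  zpool status tank                 # wait for the resilver to complete
  zpool detach tank c0t0d0          # then drop the first 8GB disk
  zpool detach tank c0t1d0          # and the second; the pool now lives on the new disk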

If the original pool is not a mirror, then it can get more
complicated, but it depends on what you want it to look
like in the long term...
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs! mirror and break

2008-05-08 Thread Mike DeMarco
I currently have a zpool with two 8Gbyte disks in it. I need to replace them 
with a single 56Gbyte disk.

with veritas I would just add the disk in as a mirror and break off the other 
plex then destroy it.

I see no way of being able to do this with zfs.

Being able to migrate data without having to unmount and remount filesystems
is very important to me.

Can anyone say when such functionality will be implemented?
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Richard Elling
Bob Friesenhahn wrote:
> On Thu, 8 May 2008, Ross wrote:
>
>   
>> protected even if a disk fails. I found this post quite an interesting read:
>> http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl
>> 
>
> Richard's blog entry does not tell the whole story.  ZFS does not 
> protect against memory corruption errors and CPU execution errors 
> except for in the validated data path.  It also does not protect you 
> against kernel bugs, corrosion, meteorite strikes, or civil unrest. 
> As a result, the MTTDL plots (which only consider media reliability 
> and redundancy) become quite incorrect as they reach stratospheric 
> levels.
>   

These are statistical models, or as they say, "every child in Lake Wobegon
is above average." :-)  The important take-away is that no protection sucks,
single-parity protection is better, and double-parity protection is even
better.  See also the discussion on "mean time" measurements and when we
don't like them at
http://blogs.sun.com/relling/entry/using_mtbf_and_time_dependent
 -- richard
> Note that Richard does include a critical disclaimer: "The MTTDL 
> calculation is one attribute of Reliability, Availability, and 
> Serviceability (RAS) which we can also calculate relatively easily." 
> Notice the operative word "one".
>
> The law of diminishing returns still applies.
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>   

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-08 Thread Hans
hello
thank you for your postings. i try to understand, but my english is not so
good. :-)
for exporting a zfs pool i must use a special command like
zpool export
this makes the filesystem ready to export.
but i think so:
when i boot from the live cd without mounting/activating the file system, the
filesystem doesn't know about the backup / restore with dd, because dd copies
each sector and is transparent to the file system.
is this correct?
when i create 2 file systems during install,
like
/dev/sda1 and /dev/sda2, and say to the opensolaris installer... use as solaris,
i think they get formatted with zfs.
now, when i boot from a live cd, can i copy /dev/sda1 with dd into a file on
/dev/sda2, or is the data of the 2 partitions mixed, so that i cannot copy
only one partition?
sorry again for my bad english and the problems understanding zfs. it is
difficult for a linux user that never used such a file system or a raid.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Luke Scharf
Dave wrote:
> On 05/08/2008 08:11 AM, Ross wrote:
>   
>> It may be an obvious point, but are you aware that snapshots need to be 
>> stopped any time a disk fails?  It's something to consider if you're 
>> planning frequent snapshots.
>> 
>
> I've never heard this before. Why would snapshots need to be stopped for 
> a disk failure?
>   

Because taking a snapshot makes the scrub start over.  I hadn't thought 
about this extending to a resilver, but I guess it would!

Anyway, I take frequent snapshots on my home ZFS server, and I got tired 
of a 90 minute scrub that started over every 60 minutes.  So, I put the 
following code snippet into my snapshot-management script:

# Is a scrub in progress?  If so, abort.
if [ "$(zpool status | grep -c 'scrub in progress')" -ne 0 ]
then
    exit 1
fi
  

Now I skip a couple of snapshots a day, but I can run a daily scrub to 
make sure that my photos and the code from my undergraduate CS projects 
are being coherently stored.

To solve the resilver problem, change the "grep" statement to something 
like "egrep -c 'scrub in progress|resilver in progress' ".

-Luke

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Dave
On 05/08/2008 08:11 AM, Ross wrote:
> It may be an obvious point, but are you aware that snapshots need to be 
> stopped any time a disk fails?  It's something to consider if you're planning 
> frequent snapshots.

I've never heard this before. Why would snapshots need to be stopped for 
a disk failure?

--
Dave
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Bob Friesenhahn
On Thu, 8 May 2008, Ross Smith wrote:

> True, but I'm seeing more and more articles pointing out that the 
> risk of a secondary failure is increasing as disks grow in size, and

Quite true.

> While I'm not sure of the actual error rates (Western digital list 
> their unrecoverable rates as < 1 in 10^15), I'm very concious that 
> if you have any one disk fail completely, you are then reliant on 
> being able to read without error every single bit of data from every 
> other disk in that raid set.  I'd much rather have dual parity and 
> know that single bit errors are still easily recoverable during the 
> rebuild process.

I understand the concern.  However, the published unrecoverable rates 
are for the completely random write/read case.  ZFS validates the data 
read for each read and performs a repair if a read is faulty.  Doing a 
"zfs scrub" forces all of the data to be read and repaired if 
necessary.  Assuming that the data is read (and repaired if necessary) 
on a periodic basis, the chance that an unrecoverable read will occur 
will surely be dramatically lower.  This of course assumes that the 
system administrator pays attention and proactively replaces disks 
which are reporting unusually high and increasing read failure rates.
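
A rough back-of-the-envelope illustration, using the <1 in 10^15 bits figure
quoted above and a 500GB drive read end to end once:

  # expected unrecoverable read errors in one full pass over a 500GB drive
  echo 'scale=4; (500 * 10^9 * 8) / 10^15' | bc
  # => .0040, i.e. roughly a 0.4% chance of hitting a URE on any single pass

Regular scrubs mean such latent errors tend to be found and repaired while
redundancy still exists, rather than first surfacing during a rebuild.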

It is a simple matter of statistics.  If you have read a disk block 
successfully 1000 times, what is the probability that the next read 
from that block will spontaneously fail?  How about if you have read 
from it successfully a million times?

Assuming a reasonably designed storage system, the most likely cause 
of data loss is human error due to carelessness or confusion.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Bob Friesenhahn
On Thu, 8 May 2008, Ross wrote:

> protected even if a disk fails. I found this post quite an interesting read:
> http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl

Richard's blog entry does not tell the whole story.  ZFS does not 
protect against memory corruption errors and CPU execution errors 
except for in the validated data path.  It also does not protect you 
against kernel bugs, corrosion, meteorite strikes, or civil unrest. 
As a result, the MTTDL plots (which only consider media reliability 
and redundancy) become quite incorrect as they reach stratospheric 
levels.

Note that Richard does include a critical disclaimer: "The MTTDL 
calculation is one attribute of Reliability, Availability, and 
Serviceability (RAS) which we can also calculate relatively easily." 
Notice the operative word "one".

The law of diminishing returns still applies.

Bob
==
Bob Friesenhahn
[EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Ross
Mirrored drives should be fine.  My understanding is that write performance
suffers slightly in a mirrored configuration, but random reads are much faster.
In your scenario I would expect mirroring to give far superior performance
to raid-z2.

We're looking to do something similar, but we're strongly considering dual 
parity mirrors for when we buy some Thumpers.  You're getting tons of storage 
for your money with these servers, but the rebuild time and risk of data loss 
is considerable when you're dealing with busy 500GB drives.  We want to ensure 
that we never have to try to restore a 24TB Thumper from tape backup, and that 
our data is protected even if a disk fails.

I found this post quite an interesting read:
http://blogs.sun.com/relling/entry/raid_recommendations_space_vs_mttdl

It may be an obvious point, but are you aware that snapshots need to be stopped 
any time a disk fails?  It's something to consider if you're planning frequent 
snapshots.

Regarding the OS, I wouldn't even attempt to use those disks for data.  When we 
buy x4500's I'll be buying a couple of spare 500GB disks so I can mirror the 
boot volume onto them and stick them on the shelf, just in case.  A 500GB disk 
costs £60 or so, is it really worth risking your server over it?

And finally, I've no idea what performance would be like with that many 
snapshots, but Sun do a 60 day free trial of that server, so if you haven't 
done so already, take advantage of that and test it for yourself.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Moving zfs pool to new machine?

2008-05-08 Thread Ivan Wang
> Steve,
> 
> > Can someone tell me or point me to links that describe how to
> > do the following.
> >
> > I had a machine that crashed and I want to move to a newer machine
> > anyway.  The boot disk on the old machine is fried.  The two disks I
> > was using for a zfs pool on that machine need to be moved to a newer
> > machine now running 2008.05 OpenSolaris.
> >
> > What is the procedure for getting back the pool on the new machine and
> > not losing any of the files I had in that pool?  I searched the docs,
> > but did not find a clear answer to this and experimenting with various
> > zfs and zpool commands did not see the two disks or their contents.
>
> To see all available pools to import:
>
>   zpool import
>
> From this list, it should include your prior storage pool name:
>
>   zpool import <poolname>
> 
> - Jim

How about migrating a root zpool? Aside from rebuilding /devices, is there
anything to watch for when migrating a root zpool between two similarly
configured systems?

Ivan

> 
> >
> >
> > The new disks are c6t0d0s0 and c6t1d0s0.  They are identical disks
> > that were set up in a mirrored pool on the old machine.
> >
> > Thanks,
> >
> > Steve Christensen
> > ___
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-08 Thread Peter Karlsson
Hi Hans,

I think what you are looking for would be a combination of a snapshot
and zfs send/receive; that would give you an archive that you can use
to recreate your zfs filesystems on your zpool at will at a later time.
So you can do something like this:

Create archive:
zfs snapshot -r mypool@archive
zfs send -R mypool@archive > mypool_archive.zfs

Restore from archive:
zpool create mynewpool disk1 disk2
zfs receive -d -F mynewpool < mypool_archive.zfs


Doing this will create an archive that contains all descendent file
systems of mypool and that can be restored at a later time, without
depending on how the zpool is organized.

/peter

On May 7, 2008, at 23:31, Hans wrote:
> thank you for your posting.
> well i still have problems understanding how a pool works.
> when i have one partition with zfs like this:
> /dev/sda1 -> ZFS
> /dev/sda2 -> ext2
> the only pool is on the sda1 device. in this way i can back it up
> with the dd command.
> now i try to understand:
> when i have 2 zfs partitions like this:
> /dev/sda1 -> ZFS
> /dev/sda2 -> ZFS
> /dev/sda3 -> ext2
> i cannot copy only sda1 with dd and leave sda2, because i would destroy
> the pool then.
> is it possible to separate two partitions in this way, so that i can
> back up one separately?
> the normal linux way is that every partition is mounted into the
> file-system tree, but zfs stores data in its own way. so on
> linux you can mount an ext3 and a reiserfs together into one file-system
> tree.
> zfs is different. it spreads data over the partitions in whatever way is
> best for zfs. maybe i can compare it a little with a raid 0,
> where data is spread over several hard disks. on a raid 0 it is
> impossible to back up one hard disk and restore it; in this way i
> cannot back up one zfs partition and leave the other zfs partitions.
> well i think a snapshot is not what i want.
> i want an image that i can use if there are any problems. so i can install a
> new version of solaris, install software, and then say... not
> good. restore image. or whatever i want.
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Sanity check -- x4500 storage server for enterprise file service

2008-05-08 Thread Peter Tribble
On Wed, May 7, 2008 at 11:34 PM, Paul B. Henson <[EMAIL PROTECTED]> wrote:
>
> We have been evaluating ZFS as a potential solution for delivering
> enterprise file services for our campus.
...
> I was thinking about allocating 2 drives for the OS (SVM mirroring, pending
> ZFS boot support), two hot spares, and allocating the other 44 drives as
> mirror pairs into a single pool. While this will result in lower available
> space than raidz, my understanding is that it should provide much better
> performance.

As a regular fileserver, yes - random reads of small files on raidz aren't
too hot...

> Has there been a final resolution on the x4500 I/O hanging issue? I think I
> saw a thread the other day about an IDR that seems promising to fix it, if
> we go this route hopefully that will be resolved before we go production.

I just disable NCQ and have done with it.

> It seems like kind of a waste to allocate 1TB to the operating system,
> would there be any issue in taking a slice of those boot disks and creating
> a zfs mirror with them to add to the pool?

Personally, I wouldn't - I do like pool-level separation of data and OS.

What I normally do in these cases is to create a separate pool
and use it for something else useful.
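
For example (purely illustrative pool and slice names):

  # a small mirrored scratch pool on the otherwise unused boot-disk slices,
  # kept separate from the main data pool
  zpool create scratch mirror c5t0d0s7 c5t4d0s7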

> I'm planning on using snapshots for online backups, maintaining perhaps 10
> days worth. At 6000 filesystems, that would be 60,000 snapshots floating
> around, any potential scalability or performance issues with that?

My only concern here would be how hard it would be to delete the
snapshots. With that cycle, you're deleting 6000 snapshots a day,
and while snapshot creation is "free", my experience is that snapshot
deletion is not.
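
For a sense of scale, the rolling cleanup looks something like this
(snapshot naming and pool name are made up):

  # drop the day's snapshots that just fell outside the 10-day window
  EXPIRED=daily-20080428
  for fs in $(zfs list -H -o name -t filesystem -r tank); do
      zfs destroy "$fs@$EXPIRED"
  done

That zfs destroy runs once per filesystem per day, which is where the
6000 deletions a day come from.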

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Image with DD from ZFS partition

2008-05-08 Thread Jim Dunham
Hans,

> hello,
> can i create an image from ZFS with the dd command?

Yes, with restrictions.

First, a ZFS storage pool must be in the "zpool export" state to be  
copied, so that a write-order consistent set of data exists in the  
copy. ZFS does an excellent job of detecting inconsistencies in those  
volumes making up a single ZFS storage pool, so a copy of a imported  
storage pool is sure to be inconsistent, and thus unusable by ZFS.

Although there are various means to copy ZFS (actually copy the  
individual vdevs in a single ZFS storage pool), one can not "zpool  
import" this copy of ZFS on the same node as the original ZFS storage  
pool. Unlike other Solaris filesystems, ZFS maintains metadata on each  
vdev that is used to reconstruct a ZFS storage pool at "zpool import"  
time. The logic within "zpool import" processing will correctly find  
all constituent volumes (vdevs) of a single ZFS storage pool, but  
ultimately hides / excludes other volumes (the copies) from being  
considered as part of the current or any other "zpool import"  
operation.  Only the original, not its copy, can be seen or utilized  
by "zpool import".

If possible, the ZFS copy can be moved or accessed (using dual-ported  
disks, FC SAN, iSCSI SAN, Availability Suite, etc.) from another host,  
and then only there can the ZFS copy undergo a successful "zpool  
import".

As a slight segue, Availability Suite (AVS), can create an instantly  
accessible copy of the constituent volumes (vdevs) of a ZFS storage  
pool (in lieu of using DD which can take minutes, or hours). This is  
the Point-in-Time Copy, or II (Instant Image) part of AVS. This copy  
can also be replicated to a remote Solaris host where it can be  
imported. This is the Remote Copy, of SNDR (Network Data Replicator)  
part of AVS.  AVS also supports the ability to synchronously, or  
asynchronously replicate the actual ZFS storage pool to a another  
host, (no local copy needed), and then "zpool imported" the replica  
remotely.

See: opensolaris.org/os/project/avs/, plus the demos.


>
> when i work with linux i use partimage to create an image from one
> partition and store it on another, so i can restore it if there is an error.
> partimage does not work with zfs, so i must use the dd command.
> i think so:
> dd if=/dev/sda1 of=/backup/image
> can i create an image this way, and restore it the other way:
> dd if=/backup/image of=/dev/sda1
> when i have two partitions with zfs, can i boot from the live cd and
> mount one partition to use it as a backup target?
> or is it possible to create an ext2 partition and use a linux rescue
> cd to back up the zfs partition with dd?
>
>
> This message posted from opensolaris.org
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

--
Jim Dunham
Engineering Manager
Storage Platform Software Group
Sun Microsystems, Inc.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] recovering data from a detached mirrored vdev

2008-05-08 Thread Robert Milkowski
Hello Darren,

Tuesday, May 6, 2008, 11:16:25 AM, you wrote:

DJM> Great tool, any chance we can have it integrated into zpool(1M) so that
DJM> it can find and "fixup" on import detached vdevs as new pools ?

I remember some posts from a long time ago about 'zpool split', so one could
split a pool in two (assuming the pool is mirrored).


-- 
Best regards,
 Robert Milkowski    mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss