Sorry - fat-fingered this message 2x already. Need an iPhone 5!

Sent from my iPhone

On Oct 10, 2012, at 2:07 AM, DavidHalko <davidha...@gmail.com> wrote:

>> 2012-10-09 13:01, Andrej Javoršek wrote:
>>> Martin, I hope you succeeded in getting your data back.
>> +1
+1000

>> 
>>> I have been beaten by ZFS a couple of times before.
>> +2 ;)
Been lucky here on my SPARC systems.

>> 
>>> Now the main question!
>>> If I offline disk (zpool offline MYPOOL <disk>), will that disk be usable
>>> as a single disk for import?!
>> 
>> I do understand how people want to make things simpler, but
>> how does it not suffice to create a stand-alone separate pool
>> on this removable disk or the disk migrating onto another host,
>> complete with "installgrub" and the appropriate zpool attributes
>> like "bootfs" (perhaps starting with "zpool split" - didn't use
>> that yet)?
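
If I read that right, the recipe would be something like the following (an untested sketch on my part; the pool, disk, and dataset names are all invented, and installgrub only applies on x86):

    # create a stand-alone pool on the spare/removable disk
    zpool create rpool2 c1t2d0s0
    # ... populate it (e.g. via zfs send/receive), then make it bootable:
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0
    zpool set bootfs=rpool2/ROOT/solaris rpool2
    # or, as Jim says, start by splitting one half off an existing mirror:
    zpool split rpool rpool2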

I tried (unsuccessfully) to use zfs send & receive between a 1.5 TB ZFS mirror on 
an Ultra 60 and a 2 TB ZFS mirror on a V240 over gigabit Ethernet. Never could 
get it to work. The man pages were insufficient under Solaris 10u8 for me to 
figure it out.

I built a 4-way mirror by adding 2 drives to a 2-way mirror, exported the set, 
yanked 2 drives, imported, and called it a day. Less filling, worked great!
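
In zpool commands, that dance looks roughly like this (a sketch only; the pool and disk names are placeholders):

    zpool attach tank c0t0d0 c0t2d0   # grow the 2-way mirror to 3-way
    zpool attach tank c0t0d0 c0t3d0   # ... and to 4-way
    zpool status tank                 # wait until the resilver completes
    zpool export tank
    # physically pull two of the four drives, then re-import:
    zpool import tank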

>> In terms of keeping a backup, sending some incremental
>> snapshots should be safer than letting a mirror resync its
>> possible new errors (like those Martin had) completely.
>> With snapshots on a separate pool you can roll back more easily
>> than on an identical clone of the same corrupted pool.

Is there a good reference on how to do this somewhere?

Need to do it on some SPARCs... Intel boxes are scarce where I am - they don't 
run the software I need. :-( Sol10u8 is it, until OI-SPARC is out.
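
What I've pieced together so far is roughly this basic pattern (untested beyond my failures above; pool and host names invented):

    # snapshot everything, then replicate the whole tree to the other box
    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | ssh v240 zfs receive -Fd backup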

>> You are even "guaranteed" to have enough space to do this, since
>> you intended to use a sufficiently-sized disk as a mirror half
>> anyway.
>> 
>> //Jim

Is there a good way to zfs send & receive the pool, carrying over snapshot 
deltas, every hour via cron, without having to ship gigabytes over the network 
every time?
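
My naive sketch would be something like the script below, run hourly from cron. Everything in it (pool names, host, state file) is invented for illustration, and I haven't verified it on 10u8:

    #!/bin/sh
    # keep track of the last snapshot sent, so each run ships only the delta
    POOL=tank; DEST=backup; HOST=v240
    STATE=/var/run/zfs-last-sent
    NOW=hourly-`date '+%Y%m%d%H'`
    LAST=`cat $STATE`
    zfs snapshot -r $POOL@$NOW
    # -i sends only the blocks changed between the two snapshots, so the
    # hourly transfer is sized to the churn, not the pool; the very first
    # run needs a full send (no -i) to seed the remote side
    zfs send -R -i $POOL@$LAST $POOL@$NOW | \
        ssh $HOST zfs receive -Fdu $DEST && echo $NOW > $STATE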

Thanks, Dave