Oh well, thanks for this answer.
It makes me feel much better!
What are the potential risks?
Gabriele Bulfon - Sonicle S.r.l.
Tel +39 028246016 Int. 30 - Fax +39 028243880
Via Felice Cavallotti 16 - 20089, Rozzano - Milano - ITALY
http://www.sonicle.com
Yes... they're still running... but knowing that a power failure causing an
unexpected poweroff may leave the pool unreadable is a pain
Yes. Patches should be available.
Otherwise adoption may drop a lot...
--
This message posted from opensolaris.org
Mmm... I double-checked some of the running systems.
Most of them have the first patch (sparc-122640-05 and x86-122641-06), but not
the second one (sparc-142900-09 and x86-142901-09)...
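The patch levels quoted above can be checked directly on each box; a minimal sketch for Solaris 10 (the installed revision suffix may be higher than the one shown, so grep for the base patch IDs):

```shell
# List all applied patches and filter for the ZFS-related base patch IDs
# mentioned in this thread (sparc/x86 pairs).
showrev -p | egrep '122640|122641|142900|142901'
```

If the second pair (142900/142901) does not show up, the system is still exposed to the issue being discussed here.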
...I feel I'm right in the middle of the problem...
How much am I risking?! These systems are all mirrored via
Yes, I did read it.
And what worries me is patch availability...
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
I found this today:
http://blog.lastinfirstout.net/2010/06/sunoracle-finally-announces-zfs-data.html
How can I be sure my Solaris 10 systems are fine?
Is latest OpenSola
Hello,
I have a situation where a zfs file server holding lots of graphic files cannot
be backed up daily with a full backup.
My idea was initially to run a full backup on Sunday through the LTO library
onto dedicated tapes, then have an incremental backup run on daily tapes.
Brainstorming on
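The weekly-full / daily-incremental idea above could also be driven by ZFS snapshots feeding the tape jobs; a sketch, where the dataset name tank/graphics and the /backup staging paths are assumptions:

```shell
# Sunday: take the weekly baseline snapshot and dump the full stream
# (this stream is what would be written to the weekly LTO tapes).
zfs snapshot tank/graphics@sunday
zfs send tank/graphics@sunday > /backup/graphics-full.zfs

# Weekdays: snapshot again and send only the delta since the baseline
# (this much smaller stream goes onto the daily tapes).
zfs snapshot tank/graphics@monday
zfs send -i tank/graphics@sunday tank/graphics@monday > /backup/graphics-mon.zfs
```

Restoring would replay the full stream first and then the incrementals in order with zfs receive.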
Hi, I would love some suggestions for an implementation I'm going to deploy.
I will have a machine with 4x1T disks, going to be a file server for both
windows and osx clients through smb/cifs.
I have read in "zfs best practices" articles that slicing is not recommended
(unless you want to just creat
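For what it's worth, the whole-disk layouts usually suggested for 4x1T disks could be sketched like this (device names c0t0d0..c0t3d0 are assumptions):

```shell
# Single-parity raidz: roughly 3T usable, survives one disk failure.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0

# Alternatively, striped mirrors: roughly 2T usable, better random I/O
# and resilver behaviour, survives one failure per mirror pair.
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
```

Either way the best-practice articles favour giving ZFS whole disks rather than slices.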
Thanks for your suggestions :)
Another thing comes to my mind (especially after a past bad experience with a
buggy non-zfs storage backend).
Usually (correct me if I'm wrong) the storage will have redundancy on its
zfs volumes (be it mirror or raidz).
Once the redundant volume is exposed as
I'm trying to guess what is the best practice in this scenario:
- let's say I have a zfs based storage (let's say nexenta) that has its zfs
pools and volumes shared as iSCSI raw devices
- let's say I have another server running xvm or virtualbox connected to the
storage
- let's say one of the virt
Well, I actually don't know what implementation is inside this legacy machine.
This machine is an AMI StoreTrends ITX, but maybe it has been built around IET,
don't know.
Well, maybe I should disable write-back caching on every zfs host connecting
over iSCSI? How do I check this?
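One thing worth checking on a Solaris iSCSI client is whether ZFS cache flushing has been disabled; a sketch (needs root, and assumes the standard kernel tunable):

```shell
# Inspect the zfs_nocacheflush kernel tunable with the modular debugger.
# 0 means ZFS still issues SCSI cache-flush commands after transaction
# groups, which a well-behaved iSCSI target must honour before acking;
# 1 would make an unexpected poweroff of the target far more dangerous.
echo zfs_nocacheflush/D | mdb -k
```

Whether the target's own write-back cache honours those flushes is a question for the storage (AMI/IET) side, not something the client can verify alone.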
Thx
Gabriele.
Hello,
I'd like to ask for guidance about using zfs on iSCSI storage appliances.
Recently I had an unlucky situation where a storage machine froze.
Once the storage was up again (rebooted) all the other iSCSI clients were happy,
while one of the iscsi clients (a sun solaris sparc, run
Hello, I'm having the same exact situation on one VM, and not on another VM on
the same infrastructure.
The only difference is that on the failing VM I initially created the pool with
a name and then changed the mountpoint to another name.
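The setup described on the failing VM (pool created under one name, mountpoint changed afterwards) can be reproduced like this; the pool name, device, and path are assumptions:

```shell
# Create the pool under its original name; ZFS mounts it at /data by default.
zpool create data c0t1d0

# Later, repoint the dataset to a different mountpoint, as described above.
zfs set mountpoint=/export/storage data

# Verify: the mountpoint property should now read /export/storage with
# source "local" rather than "default".
zfs get mountpoint data
```

Comparing this property (and its source) between the working and failing VMs may show where the two diverge.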
Did you find a solution to the issue?
Should I consider
ew data in between.
Am I wrong?
Thanx for any help, really.
Gabriele Bulfon
elp):
- The SAN includes 2 Sun-Solaris-10 machines and 3 windows machines. Is
there any similar solution on the win machines?
Thanx for any help
Gabriele Bulfon.