On Sat, Mar 10, 2007 at 12:08:22AM +0100, Robert Milkowski wrote:
> Hello Carisdad,
>
> Friday, March 9, 2007, 7:05:02 PM, you wrote:
>
> C> I have a setup with a T2000 SAN-attached to 90 500GB SATA drives
> C> presented as individual LUNs to the host. We will be sending mostly
> C> large streaming
Ayaz,
What does the panic stack look like?
Did you have DPM (Disk Path Monitoring) enabled in both cases
(UFS/ZFS)?
Also, from what I have seen, pulling the FC cable (or a similar fault)
to simulate a disk fault has caused ZFS to hang or panic.
I don't think such a test is the right way to
Ayaz Anjum and others,
I think once you move into NFS over TCP in a client/server
environment, the chance of lost data is significantly higher than
just disconnecting a cable.
Scenario: before a client generates a delayed write from its
volatile DRAM client cache
I observed more predictable throughput if I use an I/O generator that
can do throttling (xdd or vdbench).
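For instance, a minimal vdbench parameter file along these lines (the
LUN path is hypothetical, a sketch rather than a tuned config) caps the
I/O rate so the throughput stays steady instead of arriving in bursts:

    # storage definition on one hypothetical LUN
    sd=sd1,lun=/dev/rdsk/c2t0d0s0
    # workload: sequential 1MB writes, no reads
    wd=wd1,sd=sd1,xfersize=1m,rdpct=0,seekpct=0
    # run definition: throttle to 2000 IOPS for two minutes
    rd=run1,wd=wd1,iorate=2000,elapsed=120,interval=5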
On 3/11/07, Jesse DeFer <[EMAIL PROTECTED]> wrote:
OK, I tried it with txg_time set to 1 and am seeing less predictable results.
The first time I ran the test it completed in 27 seconds
On 11-Mar-07, at 11:22 PM, Stuart Low wrote:
Heya,
I believe Robert and Darren have offered sufficient explanations: You
cannot be assured of committed data unless you've sync'd it. You are
only risking data loss if your users and/or applications assume data
is committed without seeing a completed sync, which would be a design
error.
Heya,
> I believe Robert and Darren have offered sufficient explanations: You
> cannot be assured of committed data unless you've sync'd it. You are
> only risking data loss if your users and/or applications assume data
> is committed without seeing a completed sync, which would be a design
> error.
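As a trivial illustration (hypothetical paths, not from the thread):
the write call returning says nothing about persistence; only a
completed sync does.

    cp important.dat /tank/fs/important.dat  # returns once the data is cached
    sync                                     # push dirty data toward disk
    # an application that needs the guarantee should fsync(3C) the file
    # and check the return value before reporting the data as committed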
On 11-Mar-07, at 11:12 PM, Ayaz Anjum wrote:
Hi!
Well, as per my actual post, I created a zfs filesystem as part of Sun
Cluster HAStoragePlus and then disconnected the FC cable. Since there
was no active I/O, the failure of the disk was not detected; then I
touched a file in the zfs filesystem,
Hi!
Well, as per my actual post, I created a zfs filesystem as part of Sun
Cluster HAStoragePlus and then disconnected the FC cable. Since there
was no active I/O, the failure of the disk was not detected; then I
touched a file in the zfs filesystem and it went fine. Only after that,
when I did a sync, the
> On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
> > On March 11, 2007 6:05:13 PM +0000 Tim Foster <[EMAIL PROTECTED]> wrote:
> > >* ability to add disks to mirror the root filesystem at any time,
> > > should they become available
> >
> > Can't this be done with UFS+SVM as well?
> I have some concerns here: from my experience in the past, touching a
> file (doing some I/O) would cause the ufs filesystem to fail over,
> unlike zfs, where it did not! Why is the behaviour of zfs different
> from ufs?
UFS always does synchronous metadata updates. So a 'touch' that creates
a file
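In other words (hypothetical mountpoint; this just restates the test
reported above as commands, it is not from the original mail):

    touch /ha-zfs/fs/probe  # succeeds: zfs buffers the create in memory
    sync                    # forces the transaction group out; only now
                            # does the dead path surface an I/O error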
While the snapshot isn't RW, the clone is and would certainly be helpful
in this case (a sketch follows the steps below).
Isn't the whole idea to:
0) boot into single-user/boot-archive if you're paranoid (or just quiesce
and clone if you feel lucky)
1) "clone" the primary OS instance+relevant-slices & boot into the
primary OS
2)
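In zfs terms, steps 0)-1) might look like this (dataset names are
hypothetical, a sketch rather than a tested recipe):

    zfs snapshot rpool/ROOT/sol@now              # read-only point-in-time copy
    zfs clone rpool/ROOT/sol@now rpool/ROOT/alt  # writable clone to boot into
    zfs promote rpool/ROOT/alt                   # if you keep it, detach it
                                                 # from its origin snapshot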
Matty wrote:
How will /boot/grub/menu.lst be updated? Will the admin have to run
bootadm after the root clone is created, or will the zfs utility be
enhanced to populate / remove entries from the menu.lst?
The detail of how menu.lst will be updated is still being worked out.
We don't plan on u
OK, I tried it with txg_time set to 1 and am seeing less predictable results.
The first time I ran the test it completed in 27 seconds (vs 24s for ufs or 42s
with txg_time=5). Further tests ran from 27s to 43s, about half the time
greater than 40s.
zpool iostat doesn't show the large no-write
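For anyone reproducing this, the way I changed the tunable (assuming it
is the zfs module's global txg_time, in seconds, with 5 the default):

    echo "txg_time/W 0t1" | mdb -kw   # live change to decimal 1
    # or persistently, in /etc/system:
    # set zfs:txg_time = 1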
On 3/11/07, Lin Ling <[EMAIL PROTECTED]> wrote:
Matty wrote:
> I am curious how snapshots and clones will be integrated with grub.
> Will it be possible to boot from a snapshot? I think this would be
> useful when applying patches, since you could snapshot / ,/var and
> /opt, patch the system, and
Matty wrote:
I am curious how snapshots and clones will be integrated with grub.
Will it be possible to boot from a snapshot? I think this would be
useful when applying patches, since you could snapshot / ,/var and
/opt, patch the system, and revert back (by choosing a snapshot from
the grub menu)
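Even without grub integration, the snapshot-before-patching part can be
done today; a sketch with hypothetical dataset and patch names:

    zfs snapshot rpool/ROOT/sol@prepatch  # capture / before patching
    patchadd 118855-36                    # apply the patch (example ID)
    zfs rollback rpool/ROOT/sol@prepatch  # revert if it misbehaves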
> Robert Milkowski wrote:
>> Hello Ivan,
>> Sunday, March 11, 2007, 12:01:28 PM, you wrote:
>>
>> IW> Got it, thanks. And a more general question: in a single disk
>> IW> root pool scenario, what advantage will zfs provide over ufs w/
>> IW> logging? And when zfs boot is integrated in Nevada, will live
>> IW> upgrade work with zfs root?
On Sun, Mar 11, 2007 at 11:21:13AM -0700, Frank Cusack wrote:
> On March 11, 2007 6:05:13 PM +0000 Tim Foster <[EMAIL PROTECTED]> wrote:
> >* ability to add disks to mirror the root filesystem at any time,
> > should they become available
>
> Can't this be done with UFS+SVM as well? A reboot would be required
On March 11, 2007 6:05:13 PM +0000 Tim Foster <[EMAIL PROTECTED]> wrote:
* ability to add disks to mirror the root filesystem at any time,
should they become available
Can't this be done with UFS+SVM as well? A reboot would be required
but you have to do regular reboots anyway just for patching.
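For comparison, the UFS+SVM procedure being alluded to is roughly the
following (device names hypothetical):

    metainit -f d10 1 1 c0t0d0s0  # submirror from the live root slice
    metainit d20 1 1 c0t1d0s0     # submirror on the newly available disk
    metainit d0 -m d10            # one-way mirror of root
    metaroot d0                   # rewrite /etc/vfstab and /etc/system
    # reboot, then attach and resync the second half:
    metattach d0 d20
    # the zfs equivalent is a single command and needs no reboot:
    # zpool attach rpool c0t0d0s0 c0t1d0s0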
Robert Milkowski wrote:
Hello Ivan,
Sunday, March 11, 2007, 12:01:28 PM, you wrote:
IW> Got it, thanks. And a more general question: in a single disk
IW> root pool scenario, what advantage will zfs provide over ufs w/
IW> logging? And when zfs boot is integrated in Nevada, will live
IW> upgrade work with zfs root?
On 3/11/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
IW> Got it, thanks. And a more general question: in a single disk
IW> root pool scenario, what advantage will zfs provide over ufs w/
IW> logging? And when zfs boot is integrated in Nevada, will live
IW> upgrade work with zfs root?
Snapshots/clones
Hello Ivan,
Sunday, March 11, 2007, 12:01:28 PM, you wrote:
IW> Got it, thanks. And a more general question: in a single disk
IW> root pool scenario, what advantage will zfs provide over ufs w/
IW> logging? And when zfs boot is integrated in Nevada, will live
IW> upgrade work with zfs root?
Snapshots
Hi!
I have some concerns here: from my experience in the past, touching a
file (doing some I/O) would cause the ufs filesystem to fail over,
unlike zfs, where it did not! Why is the behaviour of zfs different
from ufs? Is this not compromising data integrity?
thanks
Ayaz
>
> Ivan Wang wrote:
> >
> > Hi,
> >
> > However, this raises another concern: during recent discussions
> > regarding the disk layout of a zfs system
> > (http://www.opensolaris.org/jive/thread.jspa?threadID=25679&tstart=0)
> > it was said that currently we'd better give zfs the whole device
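The usual illustration of that advice (hypothetical device names): given
a whole disk, zfs puts an EFI label on it and can safely enable the
drive's write cache; given only a slice, it leaves the cache alone.

    zpool create tank c1t0d0      # whole device: zfs manages the write cache
    zpool create tank2 c1t0d0s0   # one slice: the write cache is not touched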