Running RAID5 like that is highly inadvisable (to the point of "don't
bother"), so doing it with RAIDZ would be a similarly bad idea. You could try
another cheapo/junk controller card to verify whether or not it is a shared
resource problem ;)
Hi Guys,
Rather than starting a new thread I thought I'd continue this thread.
I've been running Build 54 on a Thumper since Mid January and wanted
to ask a question about the zfs_arc_max setting. We set it to
"0x100000000 #4GB", however it's creeping over that until our kernel
memory usage is nea
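For anyone else tuning this, a minimal sketch of capping the ARC via
/etc/system, assuming the standard zfs_arc_max tunable (0x100000000 is
simply 4 GB in hex; adjust to taste):

    * /etc/system -- cap the ZFS ARC target at 4 GB (0x100000000 bytes)
    set zfs:zfs_arc_max = 0x100000000

Note that this sets a target rather than a hard ceiling, and overall kernel
memory (e.g. as reported by 'echo ::memstat | mdb -k') includes more than
just the ARC, so some creep above the value is expected.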
Bill Sommerfeld wrote:
On Fri, 2007-03-30 at 06:12 -0600, Robert Thurlow wrote:
Last night, after moving the
drives, I started a scrub. It's still running. At 20 hours, I
was up to 57.75%, and had 14.5 hours left.
Do you have any cron jobs which are creating periodic snapshots?
No, not in
On Fri, 2007-03-30 at 06:12 -0600, Robert Thurlow wrote:
> Last night, after moving the
> drives, I started a scrub. It's still running. At 20 hours, I
> was up to 57.75%, and had 14.5 hours left.
Do you have any cron jobs which are creating periodic snapshots? If so,
you may be hitting:
6343
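A quick way to check for that, as a sketch (the pool name is just an example):

    # any cron entries that take periodic snapshots?
    crontab -l | grep -i 'zfs snapshot'
    # watch the scrub; if its percent-done keeps resetting,
    # something (e.g. a periodic snapshot) is restarting it
    zpool status tank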
Chris Beal wrote:
That's great to hear. I think ZFS boot and Live Upgrade-like technology
would be incredibly powerful for managing risk in a production
environment. (and patching would become easier too)
Chris
This afternoon I did a little experiment with a cloned zfs root and bfu.
Poor
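The general shape of that kind of experiment, as a rough sketch with
made-up dataset names:

    # snapshot the current root and clone it for the experiment
    zfs snapshot rootpool/rootfs@pre-bfu
    zfs clone rootpool/rootfs@pre-bfu rootpool/rootfs-bfu
    # BFU onto the mounted clone, then point the boot menu at the
    # clone and reboot; the original root stays untouched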
That's great to hear. I think ZFS boot and Live Upgrade-like technology
would be incredibly powerful for managing risk in a production
environment. (and patching would become easier too)
Chris
Lori Alt wrote:
Regrettably, LiveUpgrade is basically clueless when it
comes to zfs right now. It d
On 30 Mar 07 at 20:32, Anton Rang wrote:
Perhaps you should read the QFS documentation and/or source. :-)
I probably should...
QFS, like other write-forward and/or delayed-allocation file systems, does
not incur a seek per I/O. For sequential writes in a typical data capture
applic
hey folks,
Lori Alt wrote:
See Tim Foster's blog for some procedures for doing
some LU-like management of bootable datasets:
http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
Yep - I'll be working this weekend on updating the previous
"mountrootadm.sh" script that I wrote
Roch,
it's true that a fixed-block-size filesystem introduces the seek
operations you talked about.
Now this has to be counter-balanced against the "latencies" introduced in a
log-structured filesystem
(which, personally, I am unable to list).
the 10% you talked about is roughly the difference in performa
Hello Brian,
Friday, March 30, 2007, 6:06:26 PM, you wrote:
BH> I've been going over this in my mind as I don't currently use
BH> LU yet, and this sort of stuff is getting me to hold off.
BH> While LU might not work yet, you can always do a snapshot before
BH> the BFU/upgrade and then roll back
Hello Torrey,
Friday, March 30, 2007, 3:14:27 AM, you wrote:
TM> Robert Milkowski wrote:
>>
>> 2. MPxIO - it tries to fail the disk over to the second SP, but it looks
>>like it tries forever (or for a very, very long time). After some time it
>>should have generated a disk IO failure...
>>
TM> Are there any
One more response to this:
See Tim Foster's blog for some procedures for doing
some LU-like management of bootable datasets:
http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
- Lori
Ivan Wang wrote:
Hi all,
ZFS boot was recently delivered in the scheduled b62, so is there an
I've been going over this in my mind as I don't currently use
LU yet, and this sort of stuff is getting me to hold off.
While LU might not work yet, you can always do a snapshot before
the BFU/upgrade and then roll back to that if things break, which
isn't completely LU, but it does give you the p
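Concretely, something along these lines (the dataset name is hypothetical):

    # safety snapshot before the BFU/upgrade
    zfs snapshot pool/rootfs@pre-bfu
    # ... BFU / upgrade and test ...
    # if the new bits misbehave, roll back to the snapshot
    zfs rollback pool/rootfs@pre-bfu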
Regrettably, LiveUpgrade is basically clueless when it
comes to zfs right now. It doesn't understand the
concepts of pools and datasets (and how they differ
from file systems on slices) at all. That code hasn't
been implemented yet. It's something we're working on.
So for now, LU won't be of mu
> Let's say I reorganized my zpools. Now there are 2
> pools:
> Pool1:
> Production data, a combination of binary and text
> files. Only a few files
> change at a time. Average file sizes are around 1MB.
> Does it make
> sense to take zfs snapshots of the pool? Will the
> snapshot consume as
> much spac
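One way to answer that empirically, sketched with example names:

    zfs snapshot pool1/prod@before
    # ... let the usual handful of ~1 MB files change ...
    # the snapshot's USED column grows only with blocks changed
    # since it was taken, not with the size of the whole dataset
    zfs list -t snapshot -o name,used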
On 29-Mar-07, at 5:43 PM, Richard Elling wrote:
Atul Vidwansa wrote:
Hi Richard,
I am not talking about source (ASCII) files. How about versioning
production data? I talked about file-level snapshots because
snapshotting the entire filesystem does not make sense when the
application is
changing j
Richard Elling wrote:
Atul Vidwansa wrote:
Hi Richard,
I am not talking about source (ASCII) files. How about versioning
production data? I talked about file-level snapshots because
snapshotting the entire filesystem does not make sense when the application is
changing just a few files at a time.
CVS
Hi all,
ZFS boot was recently delivered in the scheduled b62, so is there any word on when and
how we may use Live Upgrade with a ZFS root? Since I only use SXCR now, and
sometimes you just need to boot to an older BE in case of a not-so-good build, Live
Upgrade would be very handy for me there.
Cheers,
I
Hi,
If all four drives really hang off of one ATA controller, they
may all be stepping on each other's feet, since ATA is a blocking bus protocol.
There's a speed discussion in the ATA article on Wikipedia:
http://en.wikipedia.org/wiki/AT_Attachment
My current machine (W1100z) has 4 internal
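One quick way to see whether the drives are serializing on a shared
controller is to watch per-device utilization and service times while the
pool is busy, e.g.:

    # 5-second samples; if the disks on one controller take turns
    # being busy (%b) while the others sit idle, the controller or
    # its driver is the shared bottleneck
    iostat -xn 5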
Hi folks,
In some prior posts, I've talked about trying to get four IDE drives in
a Firewire case working. Yesterday, I bailed out due to hangs, filed
6539587, and moved the drives inside my Opteron box, hanging off one
of these:
http://www.newegg.com/Product/Product.asp?Item=N82E16816124001
N
> On Wed, Mar 28, 2007 at 06:55:17PM -0700, Anton B.
> Rang wrote:
> > It's not defined by POSIX (or Solaris). You can
> rely on being able to
> > atomically write a single disk block (512 bytes);
> anything larger than
> > that is risky. Oh, and it has to be 512-byte
> aligned.
> >
> > File syste
On 30 Mar 07 at 08:36, Anton B. Rang wrote:
However, even with sequential writes, a large I/O size makes a huge
difference in throughput. Ask the QFS folks about data capture
applications. ;-)
I quantified this 'huge' as follows:
60 MB/s and 5 ms per seek means that for a FS that requ
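Spelling that out (the per-write sizes below are my own assumptions, just
to show the shape of the trade-off):

    60 MB/s * 5 ms            ~= 300 KB of transfer forgone per seek
    one seek per 3 MB write   -> 5 ms seek vs 50 ms transfer ~= 10% overhead
    one seek per 300 KB write -> 5 ms seek vs 5 ms transfer  ~= half the throughput lost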