On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote:
Client A
- import pool make couple-o-changes
Client B
- import pool -f (heh)
Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
Oct 4 15:03:12 fozzie genunix: [ID 603766 kern.notice] assertion
failed: dmu_read(os,
Would it be easier to ...
1) Change ZFS code to enable a sort of directIO emulation and then run
various tests... or
2) Use Sun's performance team, who have all the experience in the
world when it comes to performing benchmarks on Solaris and Oracle ...
+ a DTrace master to drill down and see
Dick Davies wrote:
On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote:
Client A
- import pool make couple-o-changes
Client B
- import pool -f (heh)
Oct 4 15:03:12 fozzie ^Mpanic[cpu0]/thread=ff0002b51c80:
Oct 4 15:03:12 fozzie genunix: [ID 603766 kern.notice]
Wouldn't this be the known feature where a write error to zfs forces a panic?
Vic
On 10/4/07, Ben Rockwood [EMAIL PROTECTED] wrote:
Dick Davies wrote:
On 04/10/2007, Nathan Kroenert [EMAIL PROTECTED] wrote:
Client A
- import pool make couple-o-changes
Client B
- import
I think it's a little more sinister than that...
I'm only just trying to import the pool. Not even yet doing any I/O to it...
Perhaps it's the same cause, I don't know...
But I'm certainly not convinced that I'd be happy with a 25K, for
example, panicking just because I tried to import a dud pool...
Hi
I have a Netra T1 with 2 int disks. I want to install Sol 10 8/07 and build 2
zones (one as an ftp server and one as an scp server) and would like the system
mirrored.
My thoughts are to use SVM to mirror the / partitions, then build a mirrored
zfs pool using slice 5 on both disks (I know
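For what it's worth, a minimal sketch of that layout follows; the disk names
(c0t0d0/c0t1d0), metadevice names, replica slice and the pool name are my
assumptions, only the use of slice 5 comes from the post:

  # SVM: state database replicas, then mirror the root slice (s0)
  metadb -a -f -c 3 c0t0d0s7 c0t1d0s7
  metainit -f d10 1 1 c0t0d0s0
  metainit d20 1 1 c0t1d0s0
  metainit d0 -m d10
  metaroot d0
  # reboot, then attach the second submirror
  metattach d0 d20

  # ZFS: mirrored pool on slice 5 of both disks, with a dataset for the zones
  zpool create tank mirror c0t0d0s5 c0t1d0s5
  zfs create tank/zones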
Perhaps it's the same cause, I don't know...
But I'm certainly not convinced that I'd be happy with a 25K, for
example, panicking just because I tried to import a dud pool...
I'm ok(ish) with the panic on a failed write to non-redundant storage.
I expect it by now...
I agree, forcing a
Where does the win come from with directI/O? Is it 1), 2), or some
combination? If it's a combination, what's the percentage of each
towards the win?
That will vary based on workload (I know, you already knew that ... :^).
Decomposing the performance win between what is gained as a
This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We
now store physical device paths with the vnodes, so even though the SATA
framework doesn't correctly support open by devid in early boot, we
But if I read it right, there is still a problem in the SATA framework (failing
Jim Mauro writes:
Where does the win come from with directI/O? Is it 1), 2), or some
combination? If it's a combination, what's the percentage of each
towards the win?
That will vary based on workload (I know, you already knew that ... :^).
Decomposing the performance
Client A
- import pool make couple-o-changes
Client B
- import pool -f (heh)
Client A + B - With both mounting the same pool, touched a couple of
files, and removed a couple of files from each client
Client A + B - zpool export
Client A - Attempted import and dropped the panic.
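For clarity, a rough command-level sketch of that sequence (the pool name is
an assumption; the important part is the forced import on the second host):

  # Client A
  zpool import tank            # normal import, make a couple of changes
  # Client B, while A still has the pool imported
  zpool import -f tank         # forced import of the already-active pool
  # both clients touch and remove a few files, then each runs:
  zpool export tank
  # Client A
  zpool import tank            # this is the import that dropped the panic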
I'm pleased to announce that the ZFS Crypto project now has Alpha
release binaries that you can download and try. Currently we only have
x86/x64 binaries available; SPARC will be available shortly.
Information on the Alpha release of ZFS Crypto and links for downloading
the binaries are here:
On Thu, Oct 04, 2007 at 08:36:10AM -0600, eric kustarz wrote:
Client A
- import pool make couple-o-changes
Client B
- import pool -f (heh)
Client A + B - With both mounting the same pool, touched a couple of
files, and removed a couple of files from each client
Client A +
Lori Alt told me that mountroot was a temporary hack until grub
could boot zfs natively.
Since build 62, mountroot support was dropped and I am not convinced
that this is a mistake.
Let's compare the two:
Mountroot:
Pros:
* can have root partition on raid-z: YES
* can have root
eric kustarz writes:
Anyhow, in the case of DBs, ARC indeed becomes a vestigial organ. I'm
surprised that this is being met with skepticism considering that
Oracle highly recommends direct IO be used, and, IIRC, Oracle
performance was the main motivation for adding DIO to UFS back
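For context, on UFS direct I/O is usually enabled per mount (or per file via
directio(3C)); a minimal sketch, where the device and mount point are
assumptions:

  # mount a UFS filesystem with direct I/O forced for everything on it
  mount -F ufs -o forcedirectio /dev/dsk/c0t0d0s6 /u01
  # the same option can be made persistent in the mount options field of /etc/vfstab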
On Wed, Oct 03, 2007 at 04:31:01PM +0200, Roch - PAE wrote:
It does, which leads to the core problem. Why do we have to store the
exact same data twice in memory (i.e., once in the ARC, and once in
the shared memory segment that Oracle uses)?
We do not retain 2 copies of the same
On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote:
...memory utilisation... OK so we should implement the 'lost cause' rfe.
In all cases, ZFS must not steal pages from other memory consumers :
6488341 ZFS should avoiding growing the ARC into trouble
So the DB memory pages
On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote:
This bug was rendered moot via 6528732 in build snv_68 (and s10_u5). We
now store physical device paths with the vnodes, so even though the SATA
framework doesn't correctly support open by devid in early boot, we
But if I
Nicolas Williams writes:
On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote:
...memory utilisation... OK so we should implement the 'lost cause' rfe.
In all cases, ZFS must not steal pages from other memory consumers :
6488341 ZFS should avoiding growing the ARC
On Thu, Oct 04, 2007 at 06:59:56PM +0200, Roch - PAE wrote:
Nicolas Williams writes:
On Thu, Oct 04, 2007 at 03:49:12PM +0200, Roch - PAE wrote:
So the DB memory pages should not be _contended_ for.
What if your executable text, and pretty much everything lives on ZFS?
You don't
I'd like to second a couple of comments made recently:
* If they don't regularly do so, I too encourage the ZFS, Solaris
performance, and Sun Oracle support teams to sit down and talk about the
utility of Direct I/O for databases.
* I too suspect that absent Direct I/O (or some ringing
Remember that you have to maintain an entirely separate slice with yet
another boot environment. This causes huge amounts of complexity in
terms of live upgrade, multiple BE management, etc. The old mountroot
solution was useful for mounting ZFS root, but completely unmaintainable
from an
Nicolas Williams writes:
On Wed, Oct 03, 2007 at 04:31:01PM +0200, Roch - PAE wrote:
It does, which leads to the core problem. Why do we have to store the
exact same data twice in memory (i.e., once in the ARC, and once in
the shared memory segment that Oracle uses)?
We
Manually installing the obsolete patch 122660-10 has worked fine for me.
Until Sun fixes the patch dependencies, I think that is the easiest way.
-Brian
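If it helps anyone, the manual install is just the usual patchadd run; the
download location and unpack path below are assumptions:

  cd /var/tmp
  unzip 122660-10.zip            # patch archive fetched from SunSolve
  patchadd /var/tmp/122660-10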
Bruce Shaw wrote:
It fails on my machine because it requires a patch that's deprecated.
Update to this. Before destroying the original pool the first time, offline the
disk you plan on re-using in the new pool. Otherwise when you destroy the
original pool for the second time it causes issues with the new pool. In fact,
if you attempt to destroy the new pool immediately after
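A minimal sketch of just that offline-then-destroy step; the pool and device
names are assumptions, and the original pool is assumed redundant enough for
the offline to succeed:

  # take the disk that will be re-used in the new pool offline first
  zpool offline oldpool c1t2d0
  # then destroy the original pool
  zpool destroy oldpool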
Yeah, the only thing wrong with that patch is that it eats
/etc/sma/snmp/snmpd.conf
All is not lost, your original is copied to
/etc/sma/snmp/snmpd.conf.save in the process.
Rob++
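So recovery is just copying the saved file back and restarting the agent; the
SMA service FMRI below is my assumption about which service needs the restart:

  cp /etc/sma/snmp/snmpd.conf.save /etc/sma/snmp/snmpd.conf
  svcadm restart svc:/application/management/sma:default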
Brian H. Nelson wrote:
Manually installing the obsolete patch 122660-10 has worked fine for me.
Until Sun
It was 120272-12 that caused the snmpd.conf problem and was withdrawn.
120272-13 has replaced it and has that bug fixed.
122660-10 does not have any issues that I am aware of. It is only
obsolete, not withdrawn. Additionally, it appears that the circular
patch dependency is by design if you
On Mon, Jul 16, 2007 at 09:36:06PM -0700, Stuart Anderson wrote:
Running Solaris 10 Update 3 on an X4500 I have found that it is possible
to reproducibly block all writes to a ZFS pool by running chgrp -R
on any large filesystem in that pool. As can be seen below in the zpool
iostat output
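A rough sketch of the reproduction being described; the pool, filesystem and
group names are assumptions:

  # kick off a recursive group change on a large filesystem in the pool
  chgrp -R staff /tank/bigfs &
  # watch pool I/O while it runs; this is where writes were reported to stall
  zpool iostat -v tank 5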
Erik -
Thanks for that, but I know the pool is corrupted - that was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
the dud pool.
But, maybe I'm missing your point?
Nathan.
eric kustarz wrote:
Client A
- import pool make
On Fri, Oct 05, 2007 at 08:20:13AM +1000, Nathan Kroenert wrote:
Erik -
Thanks for that, but I know the pool is corrupted - that was kind of the
point of the exercise.
The bug (at least to me) is ZFS panicking Solaris just trying to import
the dud pool.
But, maybe I'm missing your
Hi,
Using bootroot I can have a separate /usr filesystem since b64. I can also
do snapshots, clones and compression.
Rgds,
Andre W.
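For example, the snapshot/clone/compression bits are just the usual dataset
operations against the root pool; the pool and dataset names here are
assumptions:

  zfs snapshot rootpool/rootfs@before-change
  zfs clone rootpool/rootfs@before-change rootpool/rootfs-test
  zfs set compression=on rootpool/usr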
Kugutsumen wrote:
Lori Alt told me that mountroot was a temporary hack until grub
could boot zfs natively.
Since build 62, mountroot support was dropped and I am
Awesome.
Thanks, Eric. :)
This type of feature / fix is quite important to a number of the guys in
our local OSUG. In particular, they are adamant that they cannot use
ZFS in production until it stops panicking the whole box for isolated
filesystem / zpool failures.
This will be a big
On 30/09/2007, William Papolis [EMAIL PROTECTED] wrote:
Henk,
By upgrading do you mean, rebooting and installing Open Solaris from DVD or
Network?
Like, not just running Patch Manager to install some quick patches and
updates and a quick reboot, right?
You can live upgrade and then do a quick reboot:
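The usual sequence is roughly the following; the boot environment name and
install image path are my assumptions, not from the original message:

  lucreate -n newBE                      # create the alternate boot environment
  luupgrade -u -n newBE -s /mnt/osol     # upgrade it from an install image
  luactivate newBE                       # make it the environment booted next
  init 6                                 # the quick reboot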
5) DMA straight from user buffer to disk, avoiding a copy.
This is what the 'direct' in direct I/O has historically meant. :-)
The line has been that 5) won't help latency much, and
latency is where I think the game is currently played. Now the
disconnect might be because people might feel that the
...and eventually in a read-write capacity:
http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-developer-preview-1-1-for-leopard/
Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac
OS X to Developers this week. The preview updates a previous build
released on
Dale Ghent wrote:
...and eventually in a read-write capacity:
http://www.macrumors.com/2007/10/04/apple-seeds-zfs-read-write-developer-preview-1-1-for-leopard/
Apple has seeded version 1.1 of ZFS (Zettabyte File System) for Mac
OS X to Developers this week. The preview updates a
I've been thinking about this for a while, but Anton's analysis makes me
think about it even more:
We all love ZFS, right? It's futuristic in a bold new way, with many
virtues; I won't preach to the choir. But to make it all glue together
has some necessary CPU/Memory intensive