I'm afraid I asked a very stupid question...
Nicolas Williams wrote:
> On Fri, Oct 05, 2007 at 10:44:21AM -0700, John Plocher wrote:
>
>> Lori Alt wrote:
>>
>>> I'm not surprised that having /usr in a separate pool failed.
>>> The design of zfs boot largely assumes that root, /usr, and
>>> /var are all on the same pool, and it is unlikely that we would
>>> do the work to support any other configuration any time soon.
> But note that, for ZFS, the win with direct I/O will be somewhat
> less. That's because you still need to read the page to compute
> its checksum. So for direct I/O with ZFS (with checksums enabled),
> the cost is W:LPS, R:2*LPS. Is saving one page of writes enough to
> make a difference? Pos
> Is there a specific reason why you need to do the caching at the DB
> level instead of the file system? I'm really curious as I've got
> conflicting data on why people do this. If I get more data on real
> reasons on why we shouldn't cache at the file system, then this could
> get bumpe
> 122660-10 does not have any issues that I am aware of. It is only
obsolete, not withdrawn. Additionally, it appears that the circular
patch dependency is by design if you read this BugID:
So how do you get it to install?
I get...
#patchadd 122660-10
Validating patches...
Loading patche
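For what it's worth, when two patches declare each other as prerequisites, patchadd can be pointed at a directory containing both and given both IDs in one invocation; a hedged sketch, with the second patch ID left as a placeholder since it isn't named here:

  # both patches unpacked under /var/spool/patch first
  patchadd -M /var/spool/patch 122660-10 <other-patch-id>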
So I went ahead and loaded 10u4 on a pair of V210 units.
I am going to set this nocacheflush option and cross my fingers and see how it
goes.
I have my ZPool mirroring LUNs off 2 different arrays. I have
single-controllers in each 3310. My belief is it's OK for me to do this even
without dua
On Fri, Oct 05, 2007 at 02:01:29PM -0500, Nicolas Williams wrote:
> On Fri, Oct 05, 2007 at 06:54:21PM +, A Darren Dunham wrote:
> > I wonder how much this would change if a functional "pivot-root"
> > mechanism were available. It would be handy to boot from flash, import a
> > pool, then make that the running root.
Nicolas Williams wrote:
> I'm curious as to why you think this
The characteristics of /, /usr and /var are quite different,
from a usage and backup requirements perspective:
/ is read-mostly, but contains critical config data.
/usr is read-only, and
/var (/var/mail, /var/mysql, ...) can be high v
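Whether or not they end up in separate pools, those differences map naturally onto per-dataset properties and snapshots; a rough sketch, with made-up dataset names:

  zfs set compression=on rpool/var      # high-churn data, worth compressing
  zfs set readonly=on rpool/usr         # /usr is read-only in normal operation
  zfs snapshot rpool/ROOT@pre-patch     # back up / on its own schedule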
John Plocher wrote:
> Lori Alt wrote:
>> I'm not surprised that having /usr in a separate pool failed.
>> The design of zfs boot largely assumes that root, /usr, and
>> /var are all on the same pool, and it is unlikely that we would
>> do the work to support any other configuration any time soon.
>
On Fri, Oct 05, 2007 at 06:54:21PM +, A Darren Dunham wrote:
> I wonder how much this would change if a functional "pivot-root"
> > mechanism were available. It would be handy to boot from flash, import a
> pool, then make that the running root.
>
> Does anyone know if that's a target of any Ope
On Fri, Oct 05, 2007 at 01:41:32PM -0500, Nicolas Williams wrote:
> > Certainly, many of us will be satisfied with all-in-one pool,
> > just as we are today with an all-in-one filesystem, so this
> > makes sense as a first step. But, there needs to be the
> > presumption that the next steps towar
On Fri, Oct 05, 2007 at 10:44:21AM -0700, John Plocher wrote:
> Lori Alt wrote:
> > I'm not surprised that having /usr in a separate pool failed.
> > The design of zfs boot largely assumes that root, /usr, and
> > /var are all on the same pool, and it is unlikely that we would
> > do the work to support any other configuration any time soon.
I have 2 of these with Tyan mobos (both EATX); very nice to work on. Sadly I'm
not running Solaris on it at the moment, not that that really matters.
http://rackmountmart.stores.yahoo.net/rm2uracchase.html
Corey
On Fri, 28 Sep 2007 12:57:01 -0400, Blake wrote:
> I'm looking for a rackmount chas
Lori Alt wrote:
> I'm not surprised that having /usr in a separate pool failed.
> The design of zfs boot largely assumes that root, /usr, and
> /var are all on the same pool, and it is unlikely that we would
> do the work to support any other configuration any time soon.
This seems, uhm, undesira
Robert Milkowski wrote:
> Hello Richard,
>
> Friday, September 28, 2007, 7:45:47 PM, you wrote:
>
> RE> Kris Kasner wrote:
>
2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too,
because I
don't like it with 2 SATA disks either. There aren't enough drives to put
>>
Nicolas Williams wrote:
On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
I can envision a highly optimized, pipelined system, where writes and
reads pass through checksum, compression, encryption ASICs, that also
locate data properly on disk. ...
I've argued before t
Same here; if zfs boot supports raidz then my problems will be solved.
On 05/10/2007, at 11:27 PM, Rob Logan wrote:
>> I'm not surprised that having /usr in a separate pool failed.
>
> while this is discouraging, (I have several b62 machines with
> root mirrored and /usr on raidz) if booting from
> I'm not surprised that having /usr in a separate pool failed.
while this is discouraging, (I have several b62 machines with
root mirrored and /usr on raidz) if booting from raidz
is a priority, and comes soon, at least I'd be happy :-)
Rob
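A minimal sketch of the layout Rob describes, with hypothetical device names:

  zpool create rootpool mirror c0t0d0s0 c0t1d0s0    # mirrored root, bootable
  zpool create usrpool raidz c1t0d0 c1t1d0 c1t2d0   # /usr lives here
  zfs create -o mountpoint=/usr usrpool/usr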
/var has no problem being on a separate pool.
Any reason why it assumes that root and /usr are on the same pool?
You're forcing me to sacrifice one or two disks and SATA/IDE ports to
support "zfs boot" when a 1 GB flash disk costs less than $10.
/ would fit nicely on it, /usr doesn't.
I guess I
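A sketch of the split being asked for, with hypothetical devices (per Lori's note above, zfs boot doesn't support this today):

  zpool create rootpool c1t0d0                   # ~1 GB flash device holding /
  zpool create datapool mirror c2t0d0 c2t1d0     # /usr, /var, ... on real disks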
> Regarding compression, if I am not mistaken, grub
> cannot access files that are compressed.
There was a bug where grub was unable to access files
on zfs that contained holes:
Bug ID 6541114
Synopsis: GRUB/ZFS fails to load files from a default compressed (lzjb) root
http://bu
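Until that is fixed, one obvious workaround is to keep compression off on the datasets grub has to read; a hedged sketch, assuming a root dataset named rootpool/rootfs:

  zfs set compression=off rootpool/rootfs
  # existing compressed blocks stay compressed until the files are rewritten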
On Fri, Oct 05, 2007 at 08:56:26AM -0700, Tim Spriggs wrote:
> Time for on board FPGAs!
Heh!
Nicolas Williams wrote:
> On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
>
>> I can envision a highly optimized, pipelined system, where writes and
>> reads pass through checksum, compression, encryption ASICs, that also
>> locate data properly on disk. ...
>>
>
> I've a
Kugutsumen wrote:
> Thanks, this is really strange.
> In your particular case you have /usr on the same pool as your rootfs
> and I guess that's why it is working for you.
>
> All my attempts with b64, b70 and b73 failed if /usr is on a
> separate pool.
>
I'm not surprised that having /usr
On Thu, Oct 04, 2007 at 10:26:24PM -0700, Jonathan Loran wrote:
> I can envision a highly optimized, pipelined system, where writes and
> reads pass through checksum, compression, encryption ASICs, that also
> locate data properly on disk. ...
I've argued before that RAID-Z could be implemented
On Fri, 2007-10-05 at 09:40 -0300, Toby Thain wrote:
> How far would that compromise ZFS' #1 virtue (IMHO), end-to-end
> integrity?
Speed sells, and speed kills.
If the offload were done on the HBA, it would extend the size of the
"assumed correct" part of the hardware from just the CPU+memory
ZFS boot is one of the best uses of ZFS for me. I can create more than
10 boot environments, and roll back or destroy them if necessary. I'm not afraid of bfu
anymore, or of patching or any other software installation. If bfu breaks
the OS, I just roll back, as simple as that.
Rgds,
Andre W.
Kugutsumen wrote:
> Tha
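The workflow Andre describes amounts to snapshotting the root dataset before anything risky; a rough sketch, with a made-up dataset name:

  zfs snapshot rootpool/rootfs@before-bfu
  # ... bfu / patch / install ...
  zfs rollback rootpool/rootfs@before-bfu    # if it breaks, roll back and reboot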
Thanks, this is really strange.
In your particular case you have /usr on the same pool as your rootfs
and I guess that's why it is working for you.
All my attempts with b64, b70 and b73 failed if /usr is on a
separate pool.
On 05/10/2007, at 4:10 PM, Andre Wenas wrote:
> Hi Kugutsumen,
>
>
Toby Thain wrote:
> On 5-Oct-07, at 2:26 AM, Jonathan Loran wrote:
>
>> I've been thinking about this for awhile, but Anton's analysis
>> makes me think about it even more:
>>
>> We all love ZFS, right. It's futuristic in a bold new way, with
>> many virtues; I won't preach to the choir. B
On 10/5/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
> RH> 2) Also, direct I/O is faster because it avoids double buffering.
>
> I doubt it's buying you much...
We don't know how much the performance gain is until we get a
prototype and benchmark it - the behavior is different with different
DBM
On 5-Oct-07, at 2:26 AM, Jonathan Loran wrote:
>
> I've been thinking about this for awhile, but Anton's analysis
> makes me think about it even more:
>
> We all love ZFS, right. It's futuristic in a bold new way, with
> many virtues; I won't preach to the choir. But to make it all
> gl
From: "Anton B. Rang" <[EMAIL PROTECTED]>
> For many databases, most of the I/O is writes (reads wind up
> cached in memory).
2 words: table scan
-=dave
Hello Richard,
Friday, September 28, 2007, 7:45:47 PM, you wrote:
RE> Kris Kasner wrote:
>>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad too, because
>>> I
>>> don't like it with 2 SATA disks either. There aren't enough drives to put
>>> the
>>> State Database Replicas so th
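With only two disks you can still spread the replicas across both, but losing one disk leaves exactly half of them, which is the complaint above; a sketch with hypothetical slices:

  metadb -a -f -c 3 c0t0d0s7 c0t1d0s7    # three replicas on each of the two disks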
Hello Rayson,
Tuesday, October 2, 2007, 8:56:09 PM, you wrote:
RH> 1) Modern DBMSs cache database pages in their own buffer pool because
RH> it is less expensive than accessing the data through the OS. (IIRC, MySQL's
RH> MyISAM is the only one that relies on the FS cache, but a lot of MySQL
RH> sites us
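One practical consequence of caching at the DB level is that the buffer pool and the ARC compete for the same memory; a common mitigation is to cap the ARC, e.g. (assuming the zfs_arc_max tunable is available on the build in question):

  # /etc/system -- limit the ARC to ~1 GB, takes effect at reboot
  set zfs:zfs_arc_max = 0x40000000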
Hi Kugutsumen,
Not sure about the bugs; I followed the instructions at
http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual
and created separate /usr, /opt and /var filesystems.
Here is the vfstab:
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
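The entries themselves are legacy-mounted ZFS datasets; a hedged example with made-up dataset names, following the zfsboot-manual layout:

  rootpool/rootfs/usr   -    /usr    zfs    -    yes    -
  rootpool/rootfs/var   -    /var    zfs    -    yes    -
  rootpool/rootfs/opt   -    /opt    zfs    -    yes    -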
Hello Eric,
Thursday, October 4, 2007, 5:54:06 PM, you wrote:
ES> On Thu, Oct 04, 2007 at 05:22:58AM -0700, Ivan Wang wrote:
>> > This bug was rendered moot via 6528732 in build
>> > snv_68 (and s10_u5). We
>> > now store physical devices paths with the vnodes, so
>> > even though the
>> > SATA