Rainer Heilke wrote:
> If you plan on RAC, then ASM makes good sense. It is unclear (to me anyway)
> if ASM over a zvol is better than ASM over a raw LUN.

Hmm. I thought ASM was really the _only_ effective way to do RAC, but then, I'm
not a DBA (and don't want to be ;-) We'll be just using raw LUNs. While the z
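For anyone who does want to compare ASM over a zvol against a raw LUN, a minimal sketch of carving out a zvol to present to ASM (the pool name `tank`, the volume name, and the size are assumptions, not from this thread):

```shell
# Create a 100 GB ZFS volume (zvol) to offer to Oracle ASM as a candidate disk.
zfs create -V 100g tank/asmvol01

# The zvol shows up as a character/block device pair that ASM can be pointed at:
ls -l /dev/zvol/rdsk/tank/asmvol01
```

The raw-LUN alternative simply skips this step and hands ASM the `/dev/rdsk/...` path of the LUN directly.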
If some aspect of the load is writing large amounts of data
into the pool (through the memory cache, as opposed to the
ZIL) and that leads to a frozen system, I think that a
possible contributor should be:

6429205 each zpool needs to monitor its throughput and throttle heavy writers
Rainer Heilke wrote On 01/17/07 15:44,:
> It turns out we're probably going to go the UFS/ZFS route, with 4 filesystems
> (the DB files on UFS with Directio).
> It seems that the pain of moving from a single-node ASM to a RAC'd ASM is
> great, and not worth it. The DBA group decided doing the

I did some straight-up Oracle/ZFS testing, but not on zvols. I'll give it a
shot and report back; next week is the earliest.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
It turns out we're probably going to go the UFS/ZFS route, with 4 filesystems
(the DB files on UFS with Directio).
It seems that the pain of moving from a single-node ASM to a RAC'd ASM is
great, and not worth it. The DBA group decided to do the migration to UFS for
the DB files now, and then t
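For reference, Directio on UFS is typically enabled with the `forcedirectio` mount option, so Oracle's writes bypass the UFS page cache. A sketch (the device path and mount point are hypothetical):

```shell
# Mount the database filesystem with direct I/O enabled:
mount -F ufs -o forcedirectio /dev/dsk/c0t1d0s6 /u02/oradata

# Or make it persistent in /etc/vfstab (mount options are the last field):
# /dev/dsk/c0t1d0s6  /dev/rdsk/c0t1d0s6  /u02/oradata  ufs  2  yes  forcedirectio
```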
> We had a 2TB filesystem. No matter what options I set explicitly, the
> UFS filesystem kept getting written with a 1 million file limit.
> Believe me, I tried a lot of options, and they kept getting set back
> on me.

The limit is documented as "1 million inodes per TB". So something
must not have
We had a 2TB filesystem. No matter what options I set explicitly, the UFS
filesystem kept getting written with a 1 million file limit. Believe me, I
tried a lot of options, and they kept getting set back on me.
After a fair bit of poking around (Google, Sun's site, etc.) I found several
other n
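A possible explanation for the options "getting set back": UFS inode density is fixed at `newfs` time via the bytes-per-inode ratio (`-i`/nbpi), but on multiterabyte UFS filesystems `newfs` enforces a minimum nbpi of 1048576 (one inode per MB), which works out to roughly 1 million inodes per TB regardless of what is requested. A sketch (device path and mount point are hypothetical, and the 1 MB floor is my reading of the newfs documentation, not something stated in this thread):

```shell
# On a sub-terabyte UFS filesystem, -i controls inode density as expected:
newfs -i 8192 /dev/rdsk/c0t1d0s6

# On a multiterabyte UFS filesystem, newfs silently raises nbpi to 1048576,
# so a 2 TB filesystem ends up with about 2 million inodes no matter what.
# Check the actual inode counts after mounting:
df -o i /u02/oradata
```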
Anantha N. Srirama wrote On 01/17/07 08:32,:
> Bug 6413510 is the root cause. ZFS maestros please correct me if I'm quoting
> an incorrect bug.

Yes, Anantha is correct; that is the bug id, which could be responsible
for more disk writes than expected.
Let me try to explain that bug.
The ZIL as d
> Also, as a workaround, you could disable the zil if that's
> acceptable to you (in case of system panic or hard reset you can
> end up with an unrecoverable database).

Again, not an option, but thanks for the pointer. I read a bit about this last
week, and it sounds way too scary.
Rainer
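For the record, the workaround being declined here was the old Solaris `zil_disable` tunable. A sketch of how it was set in that era (and, as the thread notes, unsafe for a database, since a panic or hard reset can lose synchronous writes):

```shell
# Disable the ZIL at runtime; affects filesystems mounted after the change:
echo zil_disable/W0t1 | mdb -kw

# Or persist across reboots via /etc/system:
# set zfs:zil_disable = 1
```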
Bug 6413510 is the root cause. ZFS maestros please correct me if I'm quoting an
incorrect bug.
The DBA team doesn't want to do another test; they have "made up their minds".
We have a meeting with them tomorrow, though, and will try to convince them of
one more test so that we can try the mdb and fsstat tools. (The admin doing the
tests was using iostat, not fsstat.) I, at least, am inte
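The distinction matters here: iostat reports device-level I/O, while fsstat (added in Solaris 10) reports operations per filesystem type, which is what you want when deciding whether ZFS itself is amplifying writes. A quick sketch:

```shell
# Per-filesystem-type view: ops and bandwidth for all ZFS mounts, every second:
fsstat zfs 1

# Device-level view for comparison (what the admin had been using):
iostat -xn 1
```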
> Rainer Heilke,
>
> You have 1/4 of the amount of memory that the 2900
> system is capable of (192GBs: I think).

Yep. The server does not hold the application (three-tier architecture), so
this is the standard build we bought. The memory has not indicated any
problems. All errors point to wr