> Bag-o-tricks-r-us, I suggest the following in such a case:
>
> - Two ZFS pools
> - One for production
> - One for Education
The DBAs are very resistant to splitting our whole environments. There are
nine of them on the test/devel server! So, we're going to put the DB files and
redo logs on separate
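If "separate" here ends up meaning separate ZFS datasets rather than separate pools, a rough sketch might look like the following; the pool name orapool and the 8 KB record size (matching a typical Oracle db_block_size) are assumptions, not details from this thread:
# datafiles: align recordsize with the Oracle block size (set it before loading data)
zfs create orapool/oradata
zfs set recordsize=8k orapool/oradata
# redo logs: keep the default 128K recordsize for the mostly sequential writes
zfs create orapool/oralog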
Jason J. W. Williams writes:
> Hi Anantha,
>
> I was curious why segregating at the FS level would provide adequate
> I/O isolation? Since all FS are on the same pool, I assumed flogging a
> FS would flog the pool and negatively affect all the other FS on that
> pool?
>
> Best Regards,
Hi Robert,
I see. So it really doesn't get around the idea of putting DB files
and logs on separate spindles?
Best Regards,
Jason
On 1/17/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Jason,
Wednesday, January 17, 2007, 11:24:50 PM, you wrote:
JJWW> Hi Anantha,
JJWW> I was curious why segregating at the FS level would provide adequate
JJWW> I/O isolation? Since all FS are on the same pool, I assumed flogging a
JJWW> FS would flog the pool and negatively affect all the other FS on that
JJWW> pool?
Hello Jason,
Wednesday, January 17, 2007, 11:24:50 PM, you wrote:
JJWW> Hi Anantha,
JJWW> I was curious why segregating at the FS level would provide adequate
JJWW> I/O isolation? Since all FS are on the same pool, I assumed flogging a
JJWW> FS would flog the pool and negatively affect all the other FS on that
JJWW> pool?
Bag-o-tricks-r-us, I suggest the following in such a case:
- Two ZFS pools
- One for production
- One for Education
- Isolate the LUNs feeding the pools if possible; don't share spindles.
Remember, on EMC/Hitachi you've got logical LUNs created by striping/concatenating
carved-up physical disks, so
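A minimal sketch of that two-pool layout, assuming hypothetical LUN device names standing in for LUNs carved from separate spindle groups on the array:
# production pool on its own LUNs
zpool create prodpool c2t0d0 c2t1d0
# education pool on LUNs backed by different physical disks
zpool create edupool c3t0d0 c3t1d0
Whether to stripe or mirror inside the pool depends on what protection the array already provides underneath those LUNs.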
Hi Anantha,
I was curious why segregating at the FS level would provide adequate
I/O isolation? Since all FS are on the same pool, I assumed flogging a
FS would flog the pool and negatively affect all the other FS on that
pool?
Best Regards,
Jason
On 1/17/07, Anantha N. Srirama <[EMAIL PROTECTED]> wrote:
Dennis Clarke wrote:
What do you mean by UFS wasn't an option due to
number of files?
Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
Financials environment well exceeds this limitation.
what ?
$ uname -a
SunOS core 5.10 Generic_118833-17 sun4u sparc SUNW,UltraSPARC-IIi-cEngine
>> What do you mean by UFS wasn't an option due to
>> number of files?
>
> Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
> Financials environment well exceeds this limitation.
>
what ?
$ uname -a
SunOS core 5.10 Generic_118833-17 sun4u sparc SUNW,UltraSPARC-IIi-cEngine
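For what it's worth, one way to see how close an existing UFS filesystem is to its inode ceiling is the UFS-specific inode report from df; the mount point below is just a placeholder:
$ df -F ufs -o i /u01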
Rainer Heilke wrote:
>> What do you mean by UFS wasn't an option due to
>> number of files?
> Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
> Financials environment well exceeds this limitation.
Really?!? I thought Oracle would use a database for storage...
Also do you have any tunables in system?
Rainer Heilke wrote:
> I'll know for sure later today or tomorrow, but it sounds like they are
> seriously considering the ASM route. Since we will be going to RAC later
> this year, this move makes the most sense. We'll just have to hope that
> the DBA group gets a better understanding of LUNs and ou
Thanks for the feedback!
This does sound like what we're hitting. From our testing, you are absolutely
correct--separating out the parts is a major help. The big problem we still
see, though, is doing the clones/recoveries. The DBA group clones the
production environment for Education. Since b
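If both environments live in the same pool, ZFS snapshots and clones can make that Education refresh nearly instantaneous; a rough sketch with made-up dataset names (a clone has to stay in the same pool as its source snapshot, and the database should be quiesced or in hot-backup mode first):
zfs snapshot prodpool/oradata@edu_refresh
zfs clone prodpool/oradata@edu_refresh prodpool/edu_oradata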
> What do you mean by UFS wasn't an option due to
> number of files?
Exactly that. UFS has a 1 million file limit under Solaris. Each Oracle
Financials environment well exceeds this limitation.
> Also do you have any tunables in system?
> Can you send 'zpool status' output? (raidz, mirror,
> ...
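For reference, something like the following should gather what's being asked for here (the dataset name is a placeholder):
$ zpool status -v
$ zfs get recordsize,atime,compression orapool/oradata
$ grep -v '^\*' /etc/system        # lines not starting with '*' are active tunables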
Anantha N. Srirama wrote:
> You're probably hitting the same wall/bug that I came across; ZFS in all
> versions up to and including Sol10U3 generates excessive I/O when it encounters
> 'fsync' or if any of the files were opened with the 'O_DSYNC' option.
Why does the excessive I/O get generated? Is it fsync or O_DSYNC?
Hello Anantha,
Wednesday, January 17, 2007, 2:35:01 PM, you wrote:
ANS> You're probably hitting the same wall/bug that I came across;
ANS> ZFS in all versions up to and including Sol10U3 generates
ANS> excessive I/O when it encounters 'fsync' or if any of the files
ANS> were opened with the 'O_DSYNC' option.
You're probably hitting the same wall/bug that I came across; ZFS in all
versions up to and including Sol10U3 generates excessive I/O when it encounters
'fsync' or if any of the files were opened with the 'O_DSYNC' option.
I do believe Oracle (or any DB for that matter) opens the file with O_DSYNC
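One way to confirm how much synchronous write activity the database is really generating is to count sync-flavoured system calls while the workload runs; a rough DTrace one-liner (run as root; the wildcard keeps it independent of the exact syscall name):
$ dtrace -n 'syscall::*sync*:entry { @[probefunc, execname] = count(); }'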
Rainer Heilke,
You have 1/4 of the memory that the E2900 system is capable
of holding (192 GB, I think).
Secondly, output from fsstat(1M) could be helpful.
Run this command periodically and check whether the values
change over time.
Mitchell Erb
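For reference, a typical invocation would sample all ZFS filesystems every five seconds:
$ fsstat zfs 5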
> What hardware is used? Sparc? x86 32-bit? x86
> 64-bit?
> How much RAM is installed?
> Which version of the OS?
Sorry, this is happening on two systems (test and production). They're both
Solaris 10, Update 2. Test is a V880 with 8 CPUs and 32GB, production is an
E2900 with 12 dual-core CPU
> We are having issues with some Oracle databases on
> ZFS. We would appreciate any useful feedback you can
> provide.
> [...]
> The issue seems to be
> serious write contention/performance. Some read
> issues also exhibit themselves, but they seem to be
> secondary to the write issues.
What hardware is used? Sparc? x86 32-bit? x86 64-bit?
How much RAM is installed?
Which version of the OS?
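Beyond the hardware details, putting numbers on the write contention usually means watching the pool and the underlying LUNs side by side; the pool name below is a placeholder:
$ zpool iostat -v orapool 5
$ iostat -xnz 5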