On 30-01-12 02:52, Jose Ildefonso Camargo Tolosa wrote:
> On Sun, Jan 29, 2012 at 6:18 PM, Ron Arts wrote:
>> Hi list,
>>
>> I am running PostgreSQL 8.1 (CentOS 5.7) on a VM on a single XCP (Xenserver)
>> host.
>> This is a HP server with 8GB, Dual Quad Core
already ruined my weekend.)
Now that I've come this far, can anybody give me some pointers? Why doesn't
pgbench saturate either the CPU or the I/O? And why does switching to an SSD
change the performance only this much?
Thanks,
Ron
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
Kevin Grittner wrote:
>
> ...Sybase named caches...segment off portions of the memory for
> specific caches... bind specific database
> objects (tables and indexes) to specific caches. ...
>
> When I posted to the list about it, the response was that LRU
> eviction was superior to any tuning an
Jon Schewe wrote:
> OK, so if I want the 15 minute speed, I need to give up safety (OK in
> this case as this is just research testing), or see if I can tune
> postgres better.
Depending on your app, one more possibility would be to see if you
can re-factor the application so it can do multiple w
Greg Smith wrote:
> Bruce Momjian wrote:
>> I always assumed SCSI disks had a write-through cache and therefore
>> didn't need a drive cache flush comment.
Some do. Some SCSI disks have write-back caches.
Some have both(!) - a write-back cache but the user can explicitly
send write-through requests.
Bruce Momjian wrote:
> Greg Smith wrote:
>> Bruce Momjian wrote:
>>> I have added documentation about the ATAPI drive flush command, and the
>>
>> If one of us goes back into that section one day to edit again it might
>> be worth mentioning that FLUSH CACHE EXT is the actual ATAPI-6 command
>
Bruce Momjian wrote:
> Greg Smith wrote:
>> If you have a regular SATA drive, it almost certainly
>> supports proper cache flushing
>
> OK, but I have a few questions. Is a write to the drive and a cache
> flush command the same?
I believe they're different as of ATAPI-6 from 2001.
>
Bruce Momjian wrote:
> Agreed, thought I thought the problem was that SSDs lie about their
> cache flush like SATA drives do, or is there something I am missing?
There's exactly one case I can find[1] where this century's IDE
drives lied more than any other drive with a cache:
Under 120GB Maxto
Jesper Krogh wrote:
> I have a table that consists of somewhere on the order of 100,000,000
> rows, and all rows are tuples of the form
>
> (id1,id2,evalue);
>
> Then I'd like to speed up a query like this:
>
> explain analyze select id from table where id1 = 2067 or id2 = 2067 order
> by evalue
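A common rewrite for this OR-across-columns shape (a sketch only; the index names are hypothetical, and `t` stands in for the quoted table name) is two two-column indexes plus a UNION, so each arm can do an ordered index scan:

```sql
-- Assumed indexes (names hypothetical):
CREATE INDEX t_id1_evalue ON t (id1, evalue);
CREATE INDEX t_id2_evalue ON t (id2, evalue);

-- Each UNION arm can use its own index; the outer ORDER BY is then
-- comparatively cheap.
SELECT id
FROM (
    SELECT id, evalue FROM t WHERE id1 = 2067
    UNION
    SELECT id, evalue FROM t WHERE id2 = 2067
) AS u
ORDER BY evalue;
```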
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(int argc, char *argv[])
{
    if (argc < 2) {
        printf("usage: fs <filename>\n");
        exit(1);
    }
    /* open a scratch file and repeatedly rewrite + fsync one byte */
    int fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0666);
    int i;
    for (i = 0; i < 100; i++) {
        char byte = 0;
        pwrite(fd, &byte, 1, 0);
        /* uncomment to dirty the inode on each pass:
           fchmod(fd, 0644); fchmod(fd, 0664); */
        fsync(fd);
    }
    close(fd);
    return 0;
}
Bruce Momjian wrote:
>> For example, ext3 fsync() will issue write barrier commands
>> if the inode was modified; but not if the inode wasn't.
>>
>> See test program here:
>> http://www.mail-archive.com/linux-ker...@vger.kernel.org/msg272253.html
>> and read two paragraphs further to see how touchi
Bruce Momjian wrote:
> Greg Smith wrote:
>> A good test program that is a bit better at introducing and detecting
>> the write cache issue is described at
>> http://brad.livejournal.com/2116715.html
>
> Wow, I had not seen that tool before. I have added a link to it from
> our documentation, an
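In the same spirit as that tool, a crude local probe (a sketch, not a replacement for pulling the plug) is to time a burst of fsyncs: a 7200rpm disk that honestly flushes its cache cannot complete much more than ~120 rotations per second, so thousands of fsyncs/sec suggest a lying write cache.

```python
import os
import tempfile
import time

def fsync_rate(n=200):
    """Time n one-byte write+fsync cycles and return fsyncs per second."""
    fd, path = tempfile.mkstemp()
    try:
        start = time.time()
        for _ in range(n):
            os.write(fd, b"x")
            os.fsync(fd)
        elapsed = time.time() - start
        return n / elapsed
    finally:
        os.close(fd)
        os.unlink(path)

rate = fsync_rate()
print(f"{rate:.0f} fsyncs/sec")
# On a drive that really flushes, expect roughly the rotational rate
# (~120/sec for 7200rpm); orders of magnitude more means caching somewhere.
```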
Steve Crawford wrote:
> Greg Smith wrote:
>> Jochen Erwied wrote:
>>> - Promise Technology Supertrak ES4650 + additional BBU
>>>
>> I've never seen a Promise controller that had a Linux driver you would
>> want to rely on under any circumstances...
> +1
>
> I haven't tried Promise recently, but
If the table can be clustered on that column, I suspect
it'd be a nice case for the grouped index tuples patch
http://community.enterprisedb.com/git/
Actually, simply clustering on that column might give
major speedups anyway.
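For the archives, the clustering itself is a one-liner (names hypothetical; note that CLUSTER takes an exclusive lock, rewrites the whole table, and the physical ordering decays as rows are updated):

```sql
CREATE INDEX t_id1_idx ON t (id1);
CLUSTER t USING t_id1_idx;   -- 8.3+ syntax; on older releases: CLUSTER t_id1_idx ON t
ANALYZE t;
```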
Vikul Khosla wrote:
> Folks,
>
> We have just migrated from Oracle to
Alan Hodgson wrote:
> On Monday 21 September 2009, Scott Marlowe wrote:
>> I'm looking at running session servers in ram.
>
> Use memcached for session data.
IMHO postgres is more appropriate for some types of session data.
One of the apps I work on involves session data that consists of
geospa
astro77 wrote:
> Thanks Kevin. I thought about using tsearch2 but I need to be able to select
> exact values on other numerical queries and cannot use "contains" queries.
You might be able to make use of a custom parser for tsearch2 that creates
something like a single "word" for xml fragments lik
Grzegorz Jaśkiewicz wrote:
>
> I thought that's where the difference is between postgresql and oracle
> mostly, ability to handle more transactions and better scalability .
>
Which were you suggesting had this "better scalability"?
I recall someone summarizing to a CFO where I used to work:
"Or
Greg Smith wrote:
> I keep falling into situations where it would be nice to host a server
> somewhere else. Virtual host solutions and the mysterious cloud are no
> good for the ones I run into though, as disk performance is important
> for all the applications I have to deal with.
It's worth no
Greg Smith wrote:
> On Wed, 1 Apr 2009, Scott Carey wrote:
>
>> Write caching on SATA is totally fine. There were some old ATA drives
>> that when paired with some file systems or OS's would not be safe. There are
>> some combinations that have unsafe write barriers. But there is a
>> standard
ch both the AMD and Intel CPU
options. It is far from clear which is the better purchase.
I'd be very interested to see the results of your research and
benchmarks posted here on pgsql-performance.
Ron Peacetree
Tom Lane wrote:
> Ron Mayer writes:
>> vm=# create index "gist7" on tmp_intarray_test using GIST (my_int_array
>> gist__int_ops);
>> CREATE INDEX
>> Time: 2069836.856 ms
>
>> Is that expected, or does it sound like a bug to take over
>>
Ron Mayer wrote:
> This table summarizes some of the times, shown more completely
> in a script below.
> =
> create gist index on 1 = 5 seconds
> create gist index on 2 = 32 seconds
> create gist ind
Any thoughts on what I'm doing wrong?
Ron
psql output showing the timing follows.
===
vm=# create table tmp_intarray_test as select tag_id_array as my_int_array from
taggings;
SELECT
vm=# create table tmp_intarray_te
nt on that resource until the deadlock is
resolved.
To implement this, we need
a= to be able to count the number of locks for any given DB entity
b= some way of detecting HW saturation
Hope this is useful,
Ron Peacetree
M. Edward (Ed) Borasky wrote:
> At the CMG meeting I asked the disk drive engineers, "well, if the
> drives are doing the scheduling, why does Linux go to all the trouble?"
> Their answer was something like, "smart disk drives are a relatively
> recent invention." But
One more reason?
I imagine th
At 10:36 AM 1/10/2009, Gregory Stark wrote:
"Scott Marlowe" writes:
> On Sat, Jan 10, 2009 at 5:40 AM, Ron wrote:
>> At 03:28 PM 1/8/2009, Merlin Moncure wrote:
>>> just be aware of the danger . hard reset (power off) class of failure
>>> when fsync = o
rotect against data loss because of a power event..
(At least for most DB applications.)
...and of course, those lucky few with bigger budgets can use SSD's
and not care what fsync is set to.
Ron
Tom Lane wrote:
Scott Carey <[EMAIL PROTECTED]> writes:
Which brings this back around to the point I care the most about:
I/O per second will diminish as the most common database performance limiting
factor in Postgres 8.4's lifetime, and become almost irrelevant in 8.5's.
Becoming more CPU eff
Scott Carey wrote:
For reads, if your shared_buffers is large enough, your heavily used
indexes won't likely go to disk much at all.
ISTM this would happen regardless of your shared_buffers setting.
If you have enough memory the OS should cache the frequently used
pages regardless of shared_buf
Greg Smith wrote:
On Wed, 13 Aug 2008, Ron Mayer wrote:
Second of all - ext3 fsync() appears to me to be *extremely* stupid.
It only seems to do the correct flushing (and waiting) for a
drive's cache when a file's inode has changed.
This is bad, b
Scott Marlowe wrote:
IDE came up corrupted every single time.
Greg Smith wrote:
you've drank the kool-aid ... completely
ridiculous ...unsafe fsync ... md0 RAID-1
array (aren't there issues with md and the barriers?)
Alright - I'll eat my words. Or mostly.
I still haven't found IDE drives
Greg Smith wrote:
The below disk writes impossibly fast when I issue a sequence of fsync
'k. I've got some homework. I'll be trying to reproduce similar
with md raid, old IDE drives, etc to see if I can reproduce them.
I assume test_fsync in the postgres source distribution is
a decent way to
Scott Marlowe wrote:
On Tue, Aug 12, 2008 at 10:28 PM, Ron Mayer ...wrote:
Scott Marlowe wrote:
I can attest to the 2.4 kernel ...
...SCSI...AFAICT the write barrier support...
Tested both by pulling the power plug. The SCSI was pulled 10 times
while running 600 or so concurrent pgbench
Scott Marlowe wrote:
I can attest to the 2.4 kernel not being able to guarantee fsync on
IDE drives.
Sure. But note that it won't for SCSI either; since AFAICT the write
barrier support was implemented at the same time for both.
Scott Carey wrote:
Some SATA drives were known to not flush their cache when told to.
Can you name one? The ATA commands seem pretty clear on the matter,
and ISTM most of the reports of these issues came from before
Linux had write-barrier support.
I've yet to hear of a drive with the problem
Greg Smith wrote:
some write cache in the SATA disks...Since all non-battery backed caches
need to get turned off for reliable database use, you might want to
double-check that on the controller that's driving the SATA disks.
Is this really true?
Doesn't the ATA "FLUSH CACHE" command (say,
Matthew Wakeling wrote:
On Thu, 15 May 2008, Luke Lonergan wrote:
...HINT bit optimization, but avoids this whole "write the data,
write it to the log also, then write it again just for good measure"
...
The hint data will be four bits per tuple plus overheads, so it could be
made very compact
Tom Lane wrote:
Ron Mayer <[EMAIL PROTECTED]> writes:
Would another possible condition for considering
Cartesian joins be:
* Consider Cartesian joins when a unique constraint can prove
that at most one row will be pulled from one of the tables
that would be part of thi
blowing up.
Not sure if that's redundant with the condition you
mentioned, or if it's yet a separate condition where
we might also want to consider cartesian joins.
Ron M
Greg Smith wrote:
On Sat, 1 Mar 2008, Steve Poe wrote:
SATA over SCSI?
I've collected up many of the past list comments on this subject and put
a summary at
http://www.postgresqldocs.org/index.php/SCSI_vs._IDE/SATA_Disks
Should this section:
ATA Disks... Always default to the write cache
Joshua D. Drake wrote:
> Actually this is not true. Although I have yet to test 8.3. It is
> pretty much common knowledge that after 8 cores the acceleration of
> performance drops with PostgreSQL...
>
> This has gotten better every release. 8.1 for example handles 8 cores
> very well, 8.0 didn't
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> There's something fishy about this --- given that that plan has a lower
>>> cost estimate, it should've picked it without any artificial
>>> constraints.
One final t
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Tom Lane wrote:
>>> ...given that that plan has a lower cost estimate, it
>>> should've picked it without any artificial constraints.
>
>>I think the reason it's not picking it was discu
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>
>> Also shown below it seems that if I use "OFFSET 0" as a "hint"
>> I can force a much (10x) better plan. I wonder if there's room for
>> a pgfoundry project for a patch set tha
like there's a step where it expects
511 rows and gets 2390779 which seems to be off by a factor of 4600x.
Also shown below it seems that if I use "OFFSET 0" as a "hint"
I can force a much (10x) better plan. I wonder if there's room for
a pgfoundry project for a
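For context, the "OFFSET 0" trick works because the planner will not flatten a subquery containing OFFSET into the outer query, so it acts as an optimization fence (illustrative names only):

```sql
SELECT o.*
FROM (
    SELECT id, payload
    FROM big_table
    WHERE expensive_condition
    OFFSET 0   -- optimization fence: the subquery is planned on its own
) AS sub
JOIN other_table AS o USING (id);
```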
Bill Moran wrote:
> On Fri, 9 Nov 2007 11:11:18 -0500 (EST)
> Greg Smith <[EMAIL PROTECTED]> wrote:
>> On Fri, 9 Nov 2007, Sebastian Hennebrueder wrote:
>>> If the queries are complex, this is understandable.
>> The queries used for this comparison are trivial. There's only one table
>> involved and
milar piece of equipment from Dell (the PowerEdge), and when
we had a problem with it we received excellent service from them. When
our raid controller went down (machine < 1 year old), Dell helped to
diagnose the problem and installed a new one at our hosting facility,
all within 24 hours.
Alvaro Herrera wrote:
Ron St-Pierre wrote:
Okay, here's our system:
postgres 8.1.4
Upgrade to 8.1.10
Any particular fixes in 8.1.10 that would help with this?
Here's the table information:
The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes.
Gregory Stark wrote:
"Ron St-Pierre" <[EMAIL PROTECTED]> writes:
We vacuum only a few of our tables nightly, this one is the last one because it
takes longer to run. I'll probably re-index it soon, but I would appreciate any
advice on how to speed up the vacuum p
Tom Lane wrote:
Here is your problem:
vacuum_cost_delay = 200
If you are only vacuuming when nothing else is happening, you shouldn't
be using vacuum_cost_delay at all: set it to 0. In any case this value
is probably much too high. I would imagine that if you watch the
machine while
Bill Moran wrote:
In response to Ron St-Pierre <[EMAIL PROTECTED]>:
We vacuum only a few of our tables nightly, this one is the last one
because it takes longer to run. I'll probably re-index it soon, but I
would appreciate any advice on how to speed up the vacuum process (and
ned, any insights into changing the configuration to optimize
performance are most welcome.
Thanks
Ron
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Heikki Linnakangas wrote:
> Peter Schuller wrote:
>> to have a slow background process (similar to normal non-full vacuums
> ...
> I think it's doable, if you take a copy of the tuple, and set the ctid
> pointer on the old one like an UPDATE, and wait until the old tuple is
> no longer visible to
Csaba Nagy wrote:
>
> Well, my problem was actually solved by raising the statistics target,
Would it do more benefit than harm if postgres increased the
default_statistics_target?
I see a fair number of people (myself included) asking questions whose
resolution was to ALTER TABLE SET STATISTICS;
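The per-column version of that fix, for reference (hypothetical names; the new target only takes effect at the next ANALYZE):

```sql
ALTER TABLE mytable ALTER COLUMN skewed_col SET STATISTICS 500;
ANALYZE mytable;
```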
Trevor Talbot wrote:
>
> Lack of reliability compared to _UFS_? Can you elaborate on this?
What elaboration's needed? UFS seems to have one of the longest
histories of support from major vendors of any file system supported
on any OS (Solaris, HP-UX, SVR4, Tru64 Unix all use it).
Can you elab
I notice that I get different plans when I run the
following two queries that I thought would be
identical.
select distinct test_col from mytable;
select test_col from mytable group by test_col;
Any reason why it favors one in one case but not the other?
d=# explain analyze select distinct
Jay Kang wrote:
> Hello,
>
> I'm currently trying to decide on a database design for tags in my web
> 2.0 application. The problem I'm facing is that I have 3 separate tables
> i.e. cars, planes, and schools. All three tables need to interact with
> the tags, so there will only be one universal se
Seems Linux has IO scheduling through a program called ionice.
Has anyone here experimented with using it rather than
vacuum sleep settings?
http://linux.die.net/man/1/ionice
This program sets the io scheduling class and priority
for a program. As of this writing, Linux supports 3 schedulin
Greg Smith wrote:
>
> Let's break this down into individual parts:
Great summary.
> 4) Is vacuuming a challenging I/O demand? Quite.
>
> Add all this up, and that fact that you're satisfied with how nice has
> worked successfully for you doesn't have to conflict with an opinion
> that it's not
Tom Lane wrote:
> Ron Mayer <[EMAIL PROTECTED]> writes:
>> Greg Smith wrote:
>>> Count me on the side that agrees adjusting the vacuuming parameters is
>>> the more straightforward way to cope with this problem.
>
>> Agreed for vacuum; but it still seems
Greg Smith wrote:
>
> Count me on the side that agrees adjusting the vacuuming parameters is
> the more straightforward way to cope with this problem.
Agreed for vacuum; but it still seems interesting to me that
across databases and workloads high priority transactions
tended to get through fast
Andrew Sullivan wrote:
> On Thu, May 10, 2007 at 05:10:56PM -0700, Ron Mayer wrote:
>> One way is to write a stored procedure that sets its own priority.
>> An example is here:
>> http://weblog.bignerdranch.com/?p=11
>
> Do you have evidence to show this will
Dan Harris wrote:
> Daniel Haensse wrote:
>> Has anybody a nice
>> solution to change process priority? A shell script, maybe even for java?
One way is to write a stored procedure that sets its own priority.
An example is here:
http://weblog.bignerdranch.com/?p=11
> While this may technically wo
.
jfs seems to be best for that.
Caveat: I have not yet experimented with any version of reiserfs in
production.
Cheers,
Ron Peacetree
At 08:01 AM 5/8/2007, Luke Lonergan wrote:
WRT ZFS on Linux, if someone were to port it, the license issue
would get worked out IMO (with some discussio
Linux is a =good= thing security-wise. If it's good enough
for the NSA...)
Downside is that initial install and config can be a bit complicated.
We're happy with it.
Cheers,
Ron Peacetree
At 05:55 PM 5/7/2007, David Levy wrote:
Hi,
I am about to order a new server for my Postgres
on of the above looks like it will really be
successful in addressing the issues brought up in this thread.
Cheers,
Ron Peacetree
At 01:59 PM 4/27/2007, Josh Berkus wrote:
Dan,
> Exactly.. What I think would be much more productive is to use the
> great amount of information tha
over to be RAM resident during the query. If you can
manage that, said query should be =fast=.
RAM is cheap enough that if you can make this query RAM resident by a
reasonable combination of configuration + schema + RAM purchasing,
you should do it.
Cheers,
Ron Peacetree
At 03:07 PM 5/2/2007, P
al evidence that you are
being bitten by a slow memory leak; most likely in the JVM.
Cheers,
Ron Peacetree
At 11:24 AM 5/2/2007, Parks, Aaron B. wrote:
My pg 8.1 install on an AMD-64 box (4 processors) with 9 gigs of ram
running RHEL4 is acting kind of odd and I thought I would see if
anybod
that everyone should make superhuman efforts
to always be running the latest stable release.
Even the differences between 8.1.x and 8.2.x are worth it.
(and the fewer and more modern the releases "out in the wild", the
easier community support is)
Cheers,
Ron Peacetree
Craig A. James wrote:
> Merlin Moncure wrote:
>> Using surrogate keys is dangerous and can lead to very bad design
>> habits that are unfortunately so prevalent in the software industry
>> they are virtually taught in schools. ... While there is
>> nothing wrong with them in principle (you are ex
ted values for pg
memory use is that pg 7.x and pg 8.x are =very= different beasts.
If you break the advice into pg 7.x and pg 8.x categories, you find
that there is far less variation in the suggestions.
Bottom line: pg 7.x could not take advantage of larger s
At 10:08 AM 4/12/2007, Guido Neitzer wrote:
On 12.04.2007, at 07:26, Ron wrote:
You need to buy RAM and HD.
Before he does that, wouldn't it be more useful, to find out WHY he
has so much IO?
1= Unless I missed something, the OP described pg being used as a
backend DB for a webserver
I'm local. Drop me some
private email at the address I'm posting from if you want and I'll
send you further contact info so we can talk in more detail.
Cheers,
Ron Peacetree
At 06:02 PM 4/11/2007, Jason Lustig wrote:
Hello all,
My website has been having issues with our new
At 11:13 PM 4/7/2007, [EMAIL PROTECTED] wrote:
On Sat, 7 Apr 2007, Ron wrote:
Ron, I think that many people aren't saying cheap==good, what we are
doing is arguing against the idea that expensive==good (and its
corollary cheap==bad)
Since the buying decision is binary, you eithe
At 05:42 PM 4/7/2007, [EMAIL PROTECTED] wrote:
On Sat, 7 Apr 2007, Ron wrote:
The reality is that all modern HDs are so good that it's actually
quite rare for someone to suffer a data loss event. The
consequences of such are so severe that the event stands out more
than just the stati
nvironment and use
factors, sector remap detecting, rotating HDs into and out of roles
based on age, etc) are necessary.
Anyone who does some close variation of "b" directly above =will= see
the benefits of using better HDs.
At least in my supposedly unqualified anecdotal 25 year
At 02:19 PM 4/6/2007, Michael Stone wrote:
On Fri, Apr 06, 2007 at 12:41:25PM -0400, Ron wrote:
3.based on personal observation, case study reports, or random
investigations rather than systematic scientific evaluation:
anecdotal evidence.
Here you even quote the appropriate definition
At 09:23 AM 4/6/2007, Michael Stone wrote:
On Fri, Apr 06, 2007 at 08:49:08AM -0400, Ron wrote:
Not quite. Each of our professional
experiences is +also+ statistical
evidence. Even if it is a personally skewed sample.
I'm not sure that word means what you think it
means. I think th
At 07:38 AM 4/6/2007, Michael Stone wrote:
On Thu, Apr 05, 2007 at 11:19:04PM -0400, Ron wrote:
Both statements are the literal truth.
Repeating something over and over again doesn't make it truth. The
OP asked for statistical evidence (presumably real-world field
evidence) to support
tions that we need a new error rate model for end use. I agree.
Regardless of what Seagate et al can do in their QA labs, we need
reliability numbers that are actually valid ITRW of HD usage.
The other take-away is that organizational policy and procedure with
regards to HD maintenance and use in m
At 11:40 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Ron wrote:
At 10:07 PM 4/5/2007, [EMAIL PROTECTED] wrote:
On Thu, 5 Apr 2007, Scott Marlowe wrote:
> Server class drives are designed with a longer lifespan in mind.
> > Server class hard drives are rated
will suffer greatly shortened life and die a
horrible death in such a environment and under such use.
Ron
very much !no! fun.
Until the drives have been burnt in and proven reliable, just
assume that they could all fail at any time and act accordingly.
Yep. Folks should google "bathtub curve of statistical failure" or
similar. Basically, always burn in your drives f
14 HD's in one case are going to have a serious transient load on
system start up and (especially with those SAS HDs) can generate a
great deal of heat.
What 16bay 3U server are you using?
Cheers,
Ron Peacetree
PS to all: Tom's point about the difference between enterprise
and non-en
th 5+ year warranties and then make sure to use them only in
appropriate conditions and under appropriate loads.
Respect those constraints and the numbers say the difference in
reliability between SCSI, SATA, and SAS HDs is negligible.
Cheers,
Ron Peacetree
[EMAIL PROTECTED] wrote:
> 8*73GB SCSI 15k ...(dell poweredge 2900)...
> 24*320GB SATA II 7.2k ...(generic vendor)...
>
> raid10. Our main requirement is highest TPS (focused on a lot of INSERTS).
> Question: will 8*15k SCSI drives outperform 24*7K SATA II drives?
It's worth asking the vendor
At 07:07 PM 4/3/2007, Ron wrote:
For random IO, the 3ware cards are better than PERC
> Question: will 8*15k 73GB SCSI drives outperform 24*7K 320GB SATA
II drives?
Nope. Not even if the 15K 73GB HDs were the brand new Savvio 15K screamers.
Example assuming 3.5" HDs and RAID 10
rouble), The SATA set-up rates
to be ~2x - ~3x faster ITRW than the SCSI set-up.
Cheers,
Ron Peacetree
At 06:13 PM 4/3/2007, [EMAIL PROTECTED] wrote:
We need to upgrade a postgres server. I'm not tied to these specific
alternatives, but I'm curious to get feedback on their general
qual
Xiaoning Ding wrote:
> Postgresql is 7.3.18. [...]
> 1 process takes 0.65 second to finish.
> I update PG to 8.2.3. The results are [...] now.
> 1 process :0.94 second
You sure about your test environment? Anything else
running at the same time, perhaps?
I'm a bit surprised that 8.2.3 would
. with BB IO caches.
You've got a lot more work ahead of you.
Ron
At 05:08 AM 3/22/2007, Michael Ben-Nes wrote:
Hello
I plan to buy a new development server and I wonder what will be the
best HD combination.
I'm aware that "best combination" also relay on DB structure
lure.
I've had the whole card die (massive cooling failure in NOC led to
...), but never any component on the card. OTOH, I'm conservative
about how much heat per unit area I'm willing to allow to occur in or
near my DB servers.
Cheers,
Ron
t be spent in any other place; and
there's a finite pile of them.
Spending 10x as much in labor and opportunity costs (you can only do
one thing at a time...) as you would on CapEx to address a problem is
simply not smart money management nor good business. Even spending
2x
At 09:11 AM 3/8/2007, Merlin Moncure wrote:
On 3/8/07, Magnus Hagander <[EMAIL PROTECTED]> wrote:
On Thu, Mar 08, 2007 at 06:24:35AM -, James Mansion wrote:
>
> In the long run, we are going to have to seriously rethink pg's use
> of WAL as the way we implement MVCC as it becomes more and mo
pricey any more.
Heck =16= GB Flash only costs ~$300 US and 128GB SSDs based on flash
RAM are due out this year.
Cheers,
Ron
Profile, benchmark, and only then start allocating dedicated resources.
For instance, I've seen situations where putting pg_xlog on its own
spindles was !not! the right thing to do.
Best Wishes,
Ron Peacetree
At 02:43 PM 3/2/2007, Alex Deucher wrote:
On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
...and I still think looking closely at the actual physical layout of
the tables in the SAN is likely to be worth it.
How would I go about doing that?
Alex
Hard for me to give specific advice when I
At 11:03 AM 3/2/2007, Alex Deucher wrote:
On 3/2/07, Ron <[EMAIL PROTECTED]> wrote:
May I suggest that it is possible that your schema, queries, etc were
all optimized for pg 7.x running on the old HW?
(explain analyze shows the old system taking ~1/10 the time per row
as well as esti
ably into RAM, do it and buy
yourself the time to figure out the rest of the story w/o impacting
on production performance.
Cheers,
Ron Peacetree
in the
annotated pg conf file: shared_buffers, work_mem, maintenance_work_mem, etc.
http://www.powerpostgresql.com/Downloads/annotated_conf_80.html
Cheers,
Ron Peacetree