ock             |  219936 |  215 |  2.30 | 0.04
WALWriteLock    | 3172725 |  457 | 24.67 | 0.08
CLogControlLock | 1012458 | 6423 | 10.59 | 1.09
The same test done with a read-only workload shows virtually no SpinDelay
at all.
Any thoughts or comments on these results are welcome.
Thanks for the replies Jeff, Tom and Merlin.
>> Pages that SELECT multiple rows with OFFSET and LIMIT conditions struggle to
>> top 1.3k req/s
>
> Is that tested at the OFFSET and LIMIT of 0 and 15, as shown in the
> explain plan?
Yes — 0 (OFFSET) and 16 (LIMIT), or 15 and 31 (i.e. “second page”).
Hi all. This might be tricky in so much as there are a few moving parts (when
aren’t there?), but I’ve tried to test the postgres side as much as possible.
Trying to work out a potential database bottleneck with an HTTP application
(written in Go):
Pages that render HTML templates but don’t perform
On Tue, 2013-05-07 at 18:32 +1200, Mark Kirkwood wrote:
> On 07/05/13 18:10, Simon Riggs wrote:
> > On 7 May 2013 01:23, wrote:
> >
> >> I'm thinking that a variant of (2) might be simpler to implement:
> >>
> >> (I think Matt C essenti
pment, 24x7 Support, Training & Services
Another option would be for the bulk insert/update/delete to track the
distribution stats as the operation progresses and if it detects that it
is changing the distribution of data beyond a certain threshold it would
update the pg stats accordingly.
/9.0/static/row-estimation-examples.html
was a big help.
Matt
On Wed, Feb 27, 2013 at 9:08 AM, Matt Daw wrote:
> Quick follow up... I've found that the row estimate in:
>
> explain select count(id) from versions where project_id IN (80,115)
> A
ing the
same as (project_id=115) on its own.
Matt
On Tue, Feb 26, 2013 at 11:35 AM, Matt Daw wrote:
> Howdy, the query generator in my app sometimes creates redundant
> filters of the form:
>
> project_id IN ( ) AND
> project_id =
>
> ... and this is leading to a bad estimate
d the statistics target has been updated to
1000 on the project_id column. I've also loaded the one table into a
9.2.2 instance and replicated the behaviour.
I can change how the query is being generated, but I'm curious why I'm
getting a bad estimate. Is this an expected re
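For reference, a hedged sketch of the statistics-target change described above, using the table, column and values that appear earlier in the thread (the exact statements the poster ran are not shown):

-- Raise the per-column statistics target, re-gather stats, and re-check
-- the estimate for the redundant-filter form of the query.
ALTER TABLE versions ALTER COLUMN project_id SET STATISTICS 1000;
ANALYZE versions;
EXPLAIN SELECT count(id) FROM versions
 WHERE project_id IN (80, 115) AND project_id = 115;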
((code)::text
~~* '%comp%'::text))
Rows Removed by Filter: 1
Total runtime: 147.411 ms
(14 rows)
On Fri, Sep 28, 2012 at 2:47 PM, Matt Daw wrote:
> Hi Tom, thank you very much. I'll load these tables onto a 9.2 instance
> and report back.
>
>
Hi Tom, thank you very much. I'll load these tables onto a 9.2 instance and
report back.
Matt
On Fri, Sep 28, 2012 at 2:44 PM, Tom Lane wrote:
> Matt Daw writes:
> > Howdy, I've been debugging a client's slow query today and I'm curious
> > about the q
=155834 loops=1)
Filter: ((retirement_date IS NULL)
AND ((sg_status_list)::text = 'ip'::text))
Total runtime: 5051.414 ms
(17 rows)
Does anything come to mind that would help me debug why this plan is being
chosen? Thanks!
Matt
ost=0.00..94459.07 rows=1926207 width=0) (actual
time=0.005..1475.218 rows=1926207 loops=1)
Total runtime: 2889.360 ms
(3 rows)
Time: 2889.842 ms
On Tuesday, 21 August, 2012 at 3:57 PM, Matt Daw wrote:
> Howdy. I'm curious what besides raw hardware speed determines the performance
and the results seem reasonable (8644.1985
MB/s with 1 core -> 25017 MB/s with 12 cores). The box is running 2.6.26.6-49
and postgresql 9.0.6.
I'm stumped as to why it's so much slower. Any ideas on what might explain it…
or other benchmarks I could run to try to narrow down the cause?
Thanks!
Matt
Robert Haas wrote:
> Old row versions have to be kept around until they're no longer of
> interest to any still-running transaction.
Thanks for the explanation.
Regarding the snippet above, why would the intermediate history of
multiply-modified uncommitted rows be of interest to anything, or is
Hi. I've only been using PostgreSQL properly for a week or so, so I
apologise if this has been covered numerous times, however Google is
producing nothing of use.
I'm trying to import a large amount of legacy data (billions of
denormalised rows) into a pg database with a completely different schema.
I have a relatively straightforward query that by itself isn't that
slow, but we have to run it up to 40 times on one webpage load, so it
needs to run much faster than it does. Here it is:
SELECT COUNT(*) FROM users, user_groups
WHERE users.user_group_id = user_groups.id AND NOT users.deleted AND
On Feb 2, 6:06 am, Edgardo Portal wrote:
> On 2010-02-02, Matt White wrote:
>
> > I have a relatively straightforward query that by itself isn't that
> > slow, but we have to run it up to 40 times on one webpage load, so it
> > needs to run much faster than it d
Glyn Astill wrote:
>> Stupid question, but why do people bother with the Perc line of
>> cards if the LSI brand is better? It seems the headache of trying
>> to get the Perc cards to perform is not worth any money saved.
>
> I think in most cases the dell cards actually cost more, people end
> u
Scott Carey wrote:
> You probably don’t want a single array with more than 32 drives anyway,
> it's almost always better to start carving out chunks and using software
> raid 0 or 1 on top of that for various reasons. I wouldn’t put more than
> 16 drives in one array on any of these RAID cards, they
Rajesh Kumar Mallah wrote:
>> I've checked out the latest Areca controllers, but the manual
>> available on their website states there's a limitation of 32 disks
>> in an array...
>
> Where exactly is the limitation of 32 drives? The datasheet of the
> 1680 states support for up to 128 drives using en
Glyn Astill wrote:
> Did you try flashing the PERC with the LSI firmware?
>
> I tried flashing a PERC3/dc with LSI firmware, it worked fine but I
> saw no difference in performance so I assumed it must be something
> else on the board that cripples it.
No, for a few reasons:
1. I read somewhere
Arjen van der Meijden wrote:
> Afaik the Perc 5/i and /e are more or less rebranded LSI-cards (they're
> not identical in layout etc), so it would be a bit weird if they
> performed much less than the similar LSI's wouldn't you think?
I've recently had to replace a PERC4/DC with the exact same ca
Alvaro Herrera wrote:
> Move the old clog files back where they were, and run VACUUM FREEZE in
> all your databases. That should clean up all the old pg_clog files, if
> you're really that desperate.
Has anyone actually seen a CLOG file get removed under 8.2 or 8.3? How about
8.1?
I'm probably
type:
ROLLBACK PREPARED 'my_prepared_transaction1';
Hope this helps!
Matt
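For anyone following along, a minimal sketch of the clean-up being described, assuming the goal is to find lingering prepared transactions before rolling them back (the GID is the example name from the message):

-- pg_prepared_xacts lists transactions left behind by PREPARE TRANSACTION.
SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

-- Roll back (or COMMIT PREPARED) each one by its global identifier.
ROLLBACK PREPARED 'my_prepared_transaction1';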
column, reanalyze the table, and
recheck the selectivity estimate:
alter table paliasorigin alter column origin_id set statistics 20;
analyze paliasorigin;
explain analyze select * from paliasorigin where origin_id=20;
Good luck!
Matt
"Tom Lane" <[EMAIL PROTECTED]> writes:
> I'm not sure offhand whether the existing correlation stats would be of use
> for
> it, or whether we'd have to get ANALYZE to gather additional data.
Please forgive the tangent, but would it be practical to add support for
gathering statistics on an arbi
"Tom Lane" <[EMAIL PROTECTED]> writes:
> "Matt Smiley" <[EMAIL PROTECTED]> writes:
> > So an Index Scan is always going to have a higher cost estimate than
> > an equivalent Seq Scan returning the same result rows (unless
> > random_
Hi David,
Early in this thread, Pavel suggested:
> you should partial index
>
> create index foo(b) on mytable where a is null;
Rather, you might try the opposite partial index (where a is NOT null) as a
replacement for the original unqualified index on column A. This new index
will be ignor
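A hedged sketch of that suggestion, reusing the table and column names from Pavel's example (the index name is made up):

-- Replacement for the unqualified index on column a: index only the
-- non-NULL rows, so queries filtering on "a IS NULL" ignore it entirely.
CREATE INDEX mytable_a_not_null_idx ON mytable (a) WHERE a IS NOT NULL;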
wrote:
> "Matt Klinker" <[EMAIL PROTECTED]> writes:
> > --Joined View:
> > CREATE OR REPLACE VIEW directory_listing AS
> > SELECT school.id, school.name, school.description, 119075291 AS
> > listing_type_fid
> >FROM school
> > UNION
ex Scan using pk_company_id on company c (cost=0.00..9.67 rows=1
width=424)
Index Cond: (c.id = xref.listing_fid)
On Thu, Apr 3, 2008 at 11:49 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Matt Klinker" <[EMAIL PROTECTED]> writes:
> > Sorry for not including this extra
mpany_id on company c (cost=0.00..3.06
rows=1 width=517)
Index Cond: (c.id = outer.listing_fid)
On Thu, Apr 3, 2008 at 7:19 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Matt Klinker" <[EMAIL PROTECTED]> writes:
> > I knew I'd forget something! I've t
I'm trying to fine tune this query to return in a reasonable amount of time
and am having difficulties getting the query to run the way I'd like. I
have a couple of semi-related entities that are stored in individual tables,
say, A and B. There is then a view created that pulls together the commo
ta will be created.
Considering a single table would grow to 10mil+ rows at max, and this
machine will sustain about 25mbps of insert/update/delete traffic 24/7 -
365, will I be saving much by partitioning data like that?
--
-Matt
<http://twiki.spimageworks.com/twiki/bin/view/Software/CueDevelopment>
Hi David,
Thanks for your feedback! I'm rather a newbie at this, and I do appreciate the
critique.
First, let me correct myself: The formulas for the risk of losing data when
you lose 2 and 3 disks shouldn't have included the first term (g/n). I'll
give the corrected formulas and tables at
Hi Dimitri,
First of all, thanks again for the great feedback!
Yes, my I/O load is mostly read operations. There are some bulk writes done in
the background periodically throughout the day, but these are not as
time-sensitive. I'll have to do some testing to find the best balance of read
vs.
gle Postgres backend will see better
than a single disk's seek rate, because the executor currently dispatches only
1 logical I/O request at a time.
>>> Dimitri <[EMAIL PROTECTED]> 03/23/07 2:28 AM >>>
On Friday 23 March 2007 03:20, Matt Smiley wrote:
> My company is pur
My company is purchasing a Sunfire x4500 to run our most I/O-bound databases,
and I'd like to get some advice on configuration and tuning. We're currently
looking at:
- Solaris 10 + zfs + RAID Z
- CentOS 4 + xfs + RAID 10
- CentOS 4 + ext3 + RAID 10
but we're open to other suggestions.
From
topped postgres from
writing to data/base
but the strange load pattern remains. (system is ~30% of the overall
load, vs 3% before)
So, my question is, what happened, and how can I get it back to the same
load pattern
7.4.6 had, and the same pattern I had for 4 hours before it wen
If memory serves me correctly I have seen several posts about this in
the past.
I'll try to recall highlights.
1. Create an md in Linux sufficiently large to handle the data set
you are wanting to store.
2. Create a HD based copy somewhere as your permanent storage mechanism.
3. Start u
Rodrigo Madera wrote:
I am concerned with performance issues involving the storage of DV on
a database.
I thought of some options; which would be the most advisable for speed?
1) Pack N frames inside a "container" and store the container to the db.
2) Store each frame in a separate record in the
r,
in this set up, a data warehouse, the observed performance is not what I
would hope for.
Regards,
Matt Olson
Ocean Consulting
http://www.oceanconsulting.com/
o the list what I find and maybe do some _rough_
benchmarking. This is a production app, so I can't get too much in the way
of the daily batches.
--
Matt Olson
Ocean Consulting
http://www.oceanconsulting.com/
On Tuesday 10 May 2005 11:13 am, Greg Stark wrote:
> Matt Olson writes:
> &
warehouse, the observed performance is not what I
would hope for.
Regards,
Matt Olson
Ocean Consulting
http://www.oceanconsulting.com/
not to say that pgpool couldn't make a good guess in the majority
of cases!
M
Joshua D. Drake wrote:
Matt Clark wrote:
Presumably it can't _ever_ know without being explicitly told,
because even for a plain SELECT there might be triggers involved that
update tables, or it might be
a view. But there is one thing
> missing. You can't just insert into super_foo and acquire the "correct
> partition". You will still have to insert into the correct underlying
> table. "Real" partitioning will take care of correct parti
Presumably it can't _ever_ know without being explicitly told, because
even for a plain SELECT there might be triggers involved that update
tables, or it might be a select of a stored proc, etc. So in the
general case, you can't assume that a select doesn't cause an update,
and you can't be su
in the
table partitions. (and in doing so is causing less disk IO)
BTW, internally, Oracle sees partitions as tables too. Even the "Union all"
system that MS SQL Server uses works fine, as long as the optimiser is able
to prune it correctly.
Cheers,
Matt
--
Matt Casters <[EMAIL
u may have, I'll do my best to keep
pgsql-performance up to date on the results.
Best
regards,
Matt
___
Matt Casters
i-Bridge bvba, http://www.kettle.be
Fonteinstraat 70, 9400 OKEGEM, Belgium
Tel. 054/25.01.37
GSM 0486/97.29.37
Joshua,
Actually that's a great idea!
I'll have to check if Solaris wants to play ball though.
We'll have to see as we don't have the new disks yet, ETA is next week.
Cheers,
Matt
-Original Message-
From: Joshua D. Drake [mailto:[EMAIL PROTECTED]
Sent:
laris level to get maximum performance out of the system.
Mmmm. This is going to be a tough one to crack. Perhaps it will be
possible to get some extra juice out of placing the indexes on the smaller
disks (150G) and the data on the bigger ones?
Thanks!
Matt
-Original Message
't find any documents immediately, are there any fine manuals to
read on data
warehouse performance tuning on PostgreSQL?
Thanks in advance for any help you may have, I'll do my best to keep
pgsql-performance up to date
on the results.
Best regards,
Matt
--
Matt Casters <[EMAIL PROTECTE
This page may be of use:
http://www.serverworldmagazine.com/monthly/2003/02/solaris.shtml
From personal experience, for god's sake don't think Solaris' VM/swap
implementation is easy - it's damn good, but it ain't easy!
Matt
Kevin Schroeder wrote:
I think it's pr
this command like "vmstat 10". (ignore the first line)
Keep an eye on the "pi" and "po" parameters. (kilobytes paged in and out)
HTH,
Matt
--
Matt Casters <[EMAIL PROTECTED]>
i-Bridge bvba, http://www.kettle.be
Fonteinstraat 70,
Another man working to the bitter end this Christmas!
There could be many reasons, but maybe first you should look at the amount
of RAM available? If the tables fit in RAM on the production server but not
on the dev server, then that will easily defeat the improvement due to using
the native DB v
> Am I right to assume that "writeback" is both fastest and at
> the same time as safe to use as ordered? Maybe any of you
> did some benchmarks?
It should be fastest because it has the least overhead, and safe because
postgres does its own write-order guaranteeing through fsync(). You should
So there are really only 9000 rows out of 76 million starting with 'I'? How
about combining some techniques - you could create an index on the first two
chars of the field (should be selective enough to give an index scan),
select from that, and select the actual data with the like clause.
I
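One way to read that suggestion, as a hedged sketch with hypothetical table and column names (the 'I%' pattern is the one from the thread):

-- B-tree expression index on the first two characters of the field.
CREATE INDEX tbl_field_first2_idx ON tbl (substr(field, 1, 2));

-- Narrow on the indexed expression first, then re-check with the original LIKE.
SELECT *
  FROM tbl
 WHERE substr(field, 1, 2) >= 'I'
   AND substr(field, 1, 2) < 'J'
   AND field LIKE 'I%';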
> With that many rows, and a normal index on the field,
> postgres figures the best option for say "I%" is not an index
> scan, but a sequential scan on the table, with a filter --
> quite obviously this is slow as heck, and yes, I've run
> analyze several times and in fact have the vacuum anal
To me, these three queries seem identical... why doesn't the first one
(simplest to understand and write) go the same speed as the third one?
All I'm trying to do is get statistics for one day (in this case,
today) summarized. Table has ~25M rows. I'm using postgres 7.3.? on
rh linux 7.3 (note
> For some reason it's a requirement that partial wildcard
> searches are done on this field, such as "SELECT ... WHERE
> field LIKE 'A%'"
>
> I thought an interesting way to do this would be to simply
> create partial indexes for each letter on that field, and it
> works when the query matche
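A hedged illustration of the per-letter partial index approach described above (hypothetical table and index names):

-- One partial index per leading letter; the planner can use the matching
-- index when the query's LIKE pattern implies the index predicate.
CREATE INDEX tbl_field_a_idx ON tbl (field) WHERE field LIKE 'A%';
CREATE INDEX tbl_field_b_idx ON tbl (field) WHERE field LIKE 'B%';
-- ... and so on for the remaining letters.

SELECT * FROM tbl WHERE field LIKE 'A%';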
Pierre-Frédéric Caillaud wrote:
check this marvelus piece of 5 minutes of work :
http://boutiquenumerique.com/test/iframe_feed.html
cela m'a fait le sourire :-) ("that made me smile")
(apologies for bad french)
M
A note though: you'll have to turn off HTTP persistent
connections in your server (not in your proxy) or you're back to
square one.
I hadn't considered that. On the client side it would seem to be up to
the client whether to use a persistent connection or not. If it does,
then yeah, a
Javascript is too powerful to turn on for any random web page. It is only
essential for web pages because people write their web pages to only
work with javascript.
Hmm... I respectfully disagree. It is so powerful that it is impossible
to ignore when implementing a sophisticated app. And it is
All 3 plans have crappy estimates.
Run ANALYZE in production, then send another explain analyze (as an
attachment please, to avoid linewrap).
Er, no other possible answer except Rod's :-)
[EMAIL PROTECTED] wrote:
Quoting Rod Taylor <[EMAIL PROTECTED]>:
Please send an explain analyze from both.
I'm sending three explains. In the first, the Dell machine didn't use existing
indexes, so I turned enable_seqscan off (this is the second explain). The total
cost dec
- ITEM table will, grow, grow, grow (sold items are not deleted)
WHERE PRODUCT.SECTION_USED_FK IS NOT NULL AND ITEM.STATUS=1 and
(ITEM.KIND=2 or ITEM.KIND=3)
Partial index on item.status ?
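A hedged sketch of that suggestion, keyed to the WHERE clause quoted above; the index name and the choice of kind as the indexed column are assumptions:

-- Partial index restricted to status = 1, also covering the kind filter.
CREATE INDEX item_status1_kind_idx ON item (kind) WHERE status = 1;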
In your webpage include an iframe with a Javascript to refresh it
every five seconds. The iframe fetches a page from the server which
brings in the new data in form of generated JavaScript which writes
in the parent window. Thus, you get a very short request every 5
seconds to fetch new
These are CGI scripts at the lowest level, nothing more and nothing
less. While I could probably embed a small webserver directly into
the perl scripts and run that as a daemon, it would take away the
portability that the scripts currently offer.
If they're CGI *scripts* then they just use the
1- You have a query that runs for half an hour and you spoon feed
the results to the client ?
(argh)
2- Your script looks for new data every few seconds, sends a
packet, then sleeps, and loops ?
If it's 2 I have a readymade solution for you, just ask.
I'm guessing (2) - PG do
> Correct. 75% of all hits are on a script that can take
> anywhere from a few seconds to half an hour to complete. The script
> essentially auto-flushes to the browser so they get new information as it
> arrives, creating the illusion of on-demand generation.
This is more like a s
> Case in point: A first time visitor hits your home page. A
> dynamic page is generated (in about 1 second) and served
> (taking 2 more seconds) which contains links to 20 additional
The gain from an accelerator is actually even more than that, as it takes
essentially zero seconds for Apache
> Apache::DBI overall works better for what I require, even if
> it is not a
> pool per se. Now if pgpool supported variable-rate pooling like
> Apache does with its children, it might help to even things
> out. That
> and you'd still get the spike if you have to start the webserver and
>
> I have a dual processor system that can support over 150 concurrent
> connections handling normal traffic and load. Now suppose I set up
> Apache to spawn all of its children instantly, what will
...
> This will spawn 150 children in short order and as
> this takes
"Doctor, it h
The best way to get all the stuff needed by a query into
RAM is to run the query. Is it more that you want to 'pin' the data in RAM
so it doesn't get overwritten by other
queries?
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of [E
and certainly anyone who's been around a computer more than a week or
two knows which direction "in" and "out" are customarily seen from.
regards, tom lane
Apparently not whoever wrote the man page that everyone copied ;-)
Interesting. I checked this on several machines. They actually
I don't have iostat on that machine, but vmstat shows a lot of writes to
the drives, and the runnable processes are more than 1:
 6  1  0 3617652 292936 2791928   0   0     0 52430 1347 4681 25 19 20 37
Assuming that's the output of 'vmstat 1' and not some other delay,
50MB/second of sus
> Looking at that list, I got the feeling that you'd want to
> push that PG-awareness down into the block-io layer as well,
> then, so as to be able to optimise for (perhaps) conflicting
> goals depending on what the app does; for the IO system to be
> able to read the apps mind it needs to hav
, I'm not an FS guru...):
* great append performance for the WAL?
* optimised scattered writes for checkpointing?
* Knowledge that FSYNC is being used for preserving ordering a lot of the
time, rather than requiring actual writes to disk (so long as the writes
eventually happen in order...)?
Mat
Hyperthreading is actually an excellent architectural feature that
can give significant performance gains when implemented well and used
for an appropriate workload under a decently HT aware OS.
IMO, typical RDBMS streams are not an obviously appropriate workload,
Intel didn't implement it partic
The real
performance differences between unices are so small as to be ignorable
in this context.
Well, at least the difference between Linux and BSD. There are substantial
tradeoffs should you choose to use Solaris or UnixWare.
Yes, quite right, I should have said 'po
How would I turn that off? In the kernel config? Not too familiar with
that. I have a 2 proc xeon with 4 gigs of mem on the way for postgres,
so I hope HT isn't a problem. If HT is turned off, does it just not
use the other "half" of the processor? Or does the processor just work
as one unit?
Y
You are asking the wrong question. The best OS is the OS you (and/or
the customer) knows and can administer competently. The real
performance differences between unices are so small as to be ignorable
in this context. The context switching bug is not OS-dependent, but
varies in severity acro
On Tue, 19 Oct 2004 15:49:45 -0400, Jeremy Dunn <[EMAIL PROTECTED]> wrote:
> > -Original Message-
> > From: [EMAIL PROTECTED]
> > [mailto:[EMAIL PROTECTED] On Behalf Of
> > Matt Nuzum
> > Sent: Tuesday, October 19, 2004 3:35 PM
> > To: pgsql-perf
Hello, I've thought it would be nice to index certain aspects of my
apache log files for analysis. I've used several different techniques
and have something usable now, but I'd like to tweak it one step
further.
My first performance optimization was to change the logformat into a
CSV format. I pr
> SELECT cmp.WELL_INDEX, cmp.COMPOUND, con.CONCENTRATION
> FROM SCR_WELL_COMPOUND cmp, SCR_WELL_CONCENTRATION con
> WHERE cmp.BARCODE=con.BARCODE
> AND cmp.WELL_INDEX=con.WELL_INDEX
> AND cmp.MAT_ID=con.MAT_ID
> AND cmp.MAT_ID = 3
>
> >>trainwreck... If you're going through IBM, then they won't want to
> >>respond to any issues if you're not running a
> "bog-standard" RHAS/RHES
> >>release from Red Hat.
...> To be fair, we keep on actually running into things that
> _can't_ be backported, like fibrechannel drivers that
In the MySQL manual it says that MySQL performs best with Linux 2.4 with
ReiserFS on x86. Can anyone official, or in the know, give similar
information regarding PostgreSQL?
I'm neither official, nor in the know, but I do have a spare moment! I
can tell you that any *NIX variant on any modern
As for "vendor support" for Opteron, that sure looks like a
trainwreck... If you're going through IBM, then they won't want to
respond to any issues if you're not running a "bog-standard" RHAS/RHES
release from Red Hat. And that, on Opteron, is preposterous, because
there's plenty of the bits of
> I don't know what you are exactly referring to in above URL
> when you are talking about "potential pitfalls of pooling".
> Please explain more.
Sorry, I wasn't implying that pgpool doesn't deal with the issues, just that
some people aren't necessarily aware of them up front. For instance, p
> OK, that'd work too... the point is if you're re-connecting
> all the time it doesn't really matter what else you do for
> performance.
Yeah, although there is the chap who was asking questions on the list
recently who had some very long-running code on his app servers, so was best
off closing
If you're not using a connection pool of some kind then you might as
well forget query plan caching, because your connect overhead will swamp
the planning cost. This does not mean you have to use something like
pgpool (which makes some rather questionable claims IMO); any decent web
application la
Any competently written application where caching results would be a
suitable performance boost can already implement application or
middleware caching fairly easily, and increase performance much more
than putting result caching into the database would.
I guess the performance increase is that
More to the point though, I think this is a feature that really really
should be in the DB, because then it's trivial for people to use.
How does putting it into PGPool make it any less trivial for people to
use?
The answers are at
http://www2b.biglobe.ne.jp/~caco/pgpoo
It might be easiest to shove the caching logic into pgpool instead.
...
When pg_pool is told to cache a query, it can get a table list and
monitor for changes. When it gets changes, it simply dumps the cache.
It's certainly the case that the typical web app (which, along with
warehouses, seems to
Basically you set a default in seconds for the HTML results to be
cached, and then have triggers set that force the cache to regenerate
(whenever CRUD happens to the content, for example).
Can't speak for Perl/Python/Ruby/.Net/Java, but Cache_Lite sure made a
believer out of me!
Nice to have it
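One possible shape for the trigger-driven invalidation mentioned above, as a hedged sketch; the table name, the channel name and the use of NOTIFY are assumptions, not what the poster actually ran:

-- Signal listeners (e.g. the cache layer) whenever the content changes.
CREATE OR REPLACE FUNCTION notify_content_change() RETURNS trigger AS $$
BEGIN
    NOTIFY content_changed;   -- hypothetical channel name
    RETURN NULL;              -- statement-level trigger; return value ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER content_cache_invalidate
    AFTER INSERT OR UPDATE OR DELETE ON content
    FOR EACH STATEMENT EXECUTE PROCEDURE notify_content_change();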
> I've looked at PREPARE, but apparently it only lasts
> per-session - that's worthless in our case (web based
> service, one connection per data-requiring connection).
That's a non-sequitur. Most 'normal' high volume web apps have persistent
DB connections, one per http server process. Are yo
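For reference, a small sketch of the per-session behaviour being discussed; the statement name and query are hypothetical:

-- A prepared statement exists only for the lifetime of the session that
-- created it; a new connection has to PREPARE it again.
PREPARE get_user (integer) AS SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);
DEALLOCATE get_user;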
> >That looks like poor database normalization, really. Are you
> sure you
> >don't want to split this into multiple tables instead of having 62
> >columns?
> >
> No, it is properly normalized. The data in this table is stock
> fundamentals, stuff like 52 week high, ex-dividend date, etc, etc.
> Immediately after performing a vacuum, updates take up to 50
> milliseconds to occur; however, the update performance
> degrades over time, such that after a few hours of continuous
> updates, each update takes about half a second. Regular
> vacuuming improves the performance temporarily, but d
> > It is likely that you are missing an index on one of those foreign
> > key'd items.
>
> I don't think that is too likely as a foreign key reference
> must be a unique key which would have an index.
I think you must be thinking of primary keys, not foreign keys. All
one-to-many relationshi
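To make the distinction concrete: PostgreSQL indexes the referenced primary/unique key automatically, but not the referencing column of the foreign key, which has to be indexed by hand. A hedged sketch with hypothetical table names:

-- customer.id (the referenced primary key) is indexed automatically;
-- the referencing column orders.customer_id is not, so add one for
-- fast joins and cascaded updates/deletes.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);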
> Squid also takes away the work of doing SSL (presuming you're running it
> on a different machine). Unfortunately it doesn't support HTTP/1.1 which
> means that most generated pages (those that don't set Content-length) end
> up forcing squid to close and then reopen the connection to the web
>
> And this is exactly where the pgpool advantage lies.
> Especially with the
> TPC-W, the Apache is serving a mix of PHP (or whatever CGI
> technique is
> used) and static content like images. Since the 200+ Apache
> kids serve
> any of that content by random and the emulated browsers very mu