It sounds like what you're doing is comparing the planner's cost
estimate from running EXPLAIN on a few different queries. The planner's
cost estimate was never intended to do what you're trying to do; it's
not an absolute scale of cost, it's just a tool that the planner uses to
get relative comparisons between alternative plans for the same query.
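For illustration (table and column names here are made up), the cost
numbers in

EXPLAIN SELECT * FROM orders WHERE order_id = 42;
EXPLAIN SELECT * FROM order_lines WHERE shipped_date > '2008-01-01';

are each only meaningful relative to other candidate plans for that same
query; comparing the two top-level figures against each other tells you
nothing.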
On the other hand, if you were to query "WHERE c_1=val" then if c_1 is
highly selective the index would still help.
See here:
http://www.postgresql.org/docs/8.3/interactive/indexes-multicolumn.html
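As a quick sketch (hypothetical names): given

CREATE INDEX t_c1_c2_idx ON t (c_1, c_2);

a query with WHERE c_1 = val can use the index because c_1 is the
leading column, whereas a query constraining only c_2 generally cannot.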
-- Mark Lewis
On Tue, 2008-08-26 at 18:44 +0200, henk de wit wrote:
> Hi,
>
> We're currently having a problem with queries on a medium sized table. This
> table is 22GB in size (via select
> pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring
> the total size of the table to 35 GB (me
On Thu, 2008-08-21 at 12:33 +0530, Kranti K K Parisa™ wrote:
> On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling
> <[EMAIL PROTECTED]> wrote:
> On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:
> creating multiple indexes on same column will affect
> performance?
>
> On Wed, Aug 20, 2008 at 8:05 PM, Mark Lewis <[EMAIL PROTECTED]>
> wrote:
>
> On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:
> > Hi,
> >
> > Can anyone suggest the performance tips for PostgreSQL
> columns in the
> where conditions, and the possibilities are like the above.
>
> if we create such indexes, will it affect performance?
> and what is the best go in this case?
>
>
> On Wed, Aug 20, 2008 at 8:10 PM, Mark Lewis <[EMAIL PROTECTED]>
> wrote:
>
On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:
> Hi,
>
> Can anyone suggest the performance tips for PostgreSQL using
> Hibernate.
>
> One of the queries:
>
> - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.
> Which is better among them? or creating either of them
whether or not you're getting a battery-backed write
cache for that ServeRAID-8K.
-- Mark Lewis
On Wed, 2008-03-12 at 19:58 +0100, Pascal Cohen wrote:
> Hello, we plan to buy a dedicated server to host our database.
> Here is the proposal I was given (with a second identical server from
On Thu, 2008-03-06 at 07:28 -0800, Craig James wrote:
...
> My question is: What do the other databases do that Postgres can't do, and
> why not?
>
> Count() on Oracle and MySQL is almost instantaneous, even for very large
> tables. So why can't Postgres do what they do?
...
I can vouch that Oracle
statistically insignificant, so your markers
wouldn't need to be updated very frequently and you wouldn't need to
store a marker for each page, maybe only 100 markers spread evenly
across the result set would be sufficient.
-- Mark Lewis
On Thu, 2008-02-14 at 19:49, Michael Lorenz wrote:
of a
record then just keep going until the end of the record. As long as the
algorithm for reading past the end marker is the same as the algorithm
for skipping past the beginning marker then all is well.
-- Mark Lewis
Depending on the maximum number of connections you expect to have open at any time, you
may want to consider increasing the max user processes and open files
settings as well.
-- Mark Lewis
has to do with the way that NULL values are stored in the index.
This page has details and instructions for how to get it to work:
http://developer.postgresql.org/pgdocs/postgres/indexes-ordering.html
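For reference, a minimal sketch of the kind of index that page describes
(hypothetical names; per-column NULLS FIRST/LAST ordering options only
exist in newer releases):

CREATE INDEX t_col_desc_idx ON t (col DESC NULLS LAST);

With that, an ORDER BY col DESC NULLS LAST query can read the index in
order instead of sorting.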
-- Mark Lewis
where the other is not.
If possible, try running your test with fsync=off on both servers. If
there's a marked improvement on the new server but no significant change
on the old server then you've found your culprit.
-- Mark Lewis
On Sat, 2007-09-15 at 01:51 +0530, Harsh Azad wrote:
> Great, creating new tablespace for indexes worked! Now the question is
> whether existing tables/index can be moved to the new tablespace using
> an alter command or the only way possible is to drop and recreate
> them?
You can alter an existing table or index and move it to the new
tablespace with ALTER TABLE ... SET TABLESPACE (or ALTER INDEX ... SET
TABLESPACE); there's no need to drop and recreate them.
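For example (tablespace and table names are hypothetical):

ALTER TABLE positions SET TABLESPACE fast_disks;
ALTER INDEX positions_pkey SET TABLESPACE fast_disks;

Be aware that this physically rewrites the relation in the new
tablespace and holds an exclusive lock while it runs.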
disks.
Maybe I'm jaded by past experiences, but the only real use case I can
see to justify a SAN for a database would be something like Oracle RAC,
but I'm not aware of any PG equivalent to that.
-- Mark Lewis
If you buy a SAN, make sure that you're actually going to
be able to (and want to) use all the nice management features you're
paying for. We have some SAN's that are basically acting just as
expensive external RAID arrays because we do the database
replication/backup in software anyway.
My expectation is that A is going to be faster than B. But
even if it weren't, remember that premature optimization is the root of
all evil. If you try A and it doesn't perform fast enough, then you can
always try B later to see if it works any better.
-- Mark Lewis
On Fri, 2007-08-31 at 19:39 -0400, Tom Lane wrote:
> I wrote:
> > Mark Lewis <[EMAIL PROTECTED]> writes:
> >> We've been holding back from upgrading to 8.2 because this one is a
> >> show-stopper for us.
>
> > Well, you could always make your
On Fri, 2007-08-31 at 12:25 -0400, Pallav Kalva wrote:
> Can you please correct me if I am wrong, I want to understand how this
> works.
> Based on what you said, it will run autovacuum again when it passes 200M
> transactions, as SELECTS are transactions too and are going on these
> tables.
> B
If you aren't running regular VACUUMs at all but are instead exclusively
running VACUUM FULL, then I don't think you would see warnings about
running out of fsm entries, which would explain why you did not notice
the bloat. I haven't confirmed that though, so I might be wrong.
-- Mark Lewis
On Thu, 2007
This causes PG to interpret an underscore as 'any
character', which means that it can only scan the index for all records
that start with 'reporting', and then it needs to apply a filter to each
match. This is going to be slower than just going directly to the
matching index entries.
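A sketch of the workaround (table and column names are made up): escape
the underscore so it matches literally, e.g.

SELECT * FROM t WHERE name LIKE 'reporting|_%' ESCAPE '|';

With the longer literal prefix, the planner can descend straight to the
matching index entries instead of filtering every 'reporting%' row.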
We've been holding back from upgrading to 8.2 because this one is a
show-stopper for us.
-- Mark Lewis
On Tue, 2007-08-28 at 11:24 -0500, Evan Carroll wrote:
> -- Forwarded message --
> From: Evan Carroll <[EMAIL PROTECTED]>
> Date: Aug 28, 2007 11:23 AM
>
On Wed, 2007-08-15 at 16:36 -0300, Sebastián Baioni wrote:
> Hello,
> We are running PostgreSQL 8.2.0 on amd64-portbld-freebsd6.2, compiled
> by GCC cc (GCC) 3.4.6 [FreeBSD] 20060305.
> The query only uses the index if we have a "limit n":
>
> Without "Limit n"
> explain
> select esapcuit, esapcu
are small tables. All
> statistics are up to date.
If I recall correctly, PG 8.2 was the first version where the planner
supported reordering outer joins. Prior releases would get poor
performance unless the joins were listed in the right order.
So it is quite possible that upgrading to 8.2 would help.
Could you post an EXPLAIN ANALYZE for one of the problematic queries?
Also, what kind of vacuuming regimen are you using? Just a daily cron
maybe? Are you regularly analyzing the tables?
-- Mark Lewis
On Thu, 2007-07-26 at 09:18 -0700, Brandon Shalton wrote:
> Hello all,
>
> My hard disk is filling up in the /base directory to where it has consumed
> all 200gig of that drive.
>
> All the posts that i see keep saying move to a bigger drive, but at some
> point a bigger drive would just get full again.
because it would give you approximately the same
flexibility as #1 in terms of your data model. The only reason I can
think of why you might want this over #1 would be for a performance
improvement, but if there's a reasonably small number of distinct tags
and/or distinct tags per picture I can't imagine it being much faster
than #1.
-- Mark Lewis
help. You could also look at the pgsql-jdbc archives for the JDBC
driver patches which allow you to use COPY-style bulk loading, which
should get you to the performance level of COPY, which should be
reasonably close to the performance of select into.
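For reference, the bulk-load form those patches expose is server-side
COPY; a hedged sketch with made-up names:

COPY mytable (col1, col2) FROM STDIN;

The driver then streams rows as tab-separated text over the wire,
avoiding the per-statement round trips that make INSERT loops slow.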
-- Mark Lewis
On Tue, 2007-07-17 at 22:50 +02
database,
it's not as painful as you seem to think it is. An 80-relation database
is by no means "too big to analyze" :)
-- Mark Lewis
On Wed, 2007-06-20 at 11:21 -0400, Greg Smith wrote:
...
> One of the things that was surprising to me when I started looking at the
> organization of the PostgreSQL buffer cache is how little gross
> information about its contents is available. I kept expecting to find a
> summary section where
On Tue, 2007-06-19 at 09:37 -0400, Karl Wright wrote:
> Alvaro Herrera wrote:
> > Karl Wright wrote:
> >
> >> This particular run lasted four days before a VACUUM became essential.
> >> The symptom that indicates that VACUUM is needed seems to be that the
> >> CPU usage of any given postgresql q
> of data being stored in lots of different tables, in part for long-term
> maintenance reasons. We don't really need cross-client reporting, however.
What version of PG is this? What is your vacuuming strategy? Have you
tried a REINDEX to see if that helps?
-- Mark Lewis
On Thu, 2007-05-24 at 21:54 +0100, James Mansion wrote:
> > If Sybase is still like SQL Server (or the other way around), it *may*
> > end up scanning the index *IFF* the index is a clustered index. If it's
> > a normal index, it will do a sequential scan on the table.
> >
> >
> Are you sure it's
concurrent connections are opened.
-- Mark Lewis
On Fri, 2007-05-18 at 10:45 +1200, Ralph Mason wrote:
> We have a database running on a 4 processor machine. As time goes by
> the IO gets worse and worse, peaking at about 200% as the machine loads
> up.
>
>
>
> The weird thing is t
1. If you go the route of using nice, you might want to run the 3D
front-end at a higher priority instead of running PG at a lower
priority. That way apache, php and the other parts all run at the same
priority as PG and just the one task that you want to run smoothly is
elevated.
2. You may not
Maybe he's looking for a switch for initdb that would make it
interactive and quiz you about your expected usage-- sort of a magic
auto-configurator wizard doohickey? I could see that sort of thing being
nice for the casual user or newbie who otherwise would have a horribly
mis-tuned database. The
of the factor of 10 write volume
mismatch, so I pretty much wrote it off as a data point for
forcedirectio because of the unknowns. Did you ever figure out the
cause of that?
-- Mark Lewis
Not to hijack this thread, but has anybody here tested the behavior of
PG on a file system with OS-level caching disabled via forcedirectio or
by using an inherently non-caching file system such as ocfs2?
I've been thinking about trying this setup to avoid double-caching now
that the 8.x series sc
On Tue, 2007-02-06 at 14:06 -0500, Merlin Moncure wrote:
> On 2/6/07, Mark Lewis <[EMAIL PROTECTED]> wrote:
> > > actually, I get the stupid award also because RI check to unindexed
> > > column is not possible :) (this haunts deletes, not inserts).
> >
>
> RI check to unindexed
> column is not possible :) (this haunts deletes, not inserts).
Sure it's possible:
CREATE TABLE parent (col1 int4);
-- insert many millions of rows into parent
CREATE TABLE child (col1 int4 REFERENCES parent(col1));
-- insert many millions of rows into child, very very slowly.
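(The usual fix, with an illustrative index name: index the referencing
column so each RI check becomes an index probe rather than a seqscan.

CREATE INDEX child_col1_idx ON child (col1);

After that, deletes on parent no longer crawl the whole child table.)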
- Mark Lewis
I'm wondering if it's possible to shed some light on the
remaining dark shadows of PG performance troubleshooting.
-- Mark Lewis
On Thu, 2006-12-14 at 11:19 -0500, Tom Lane wrote:
> Arnaud Lesauvage <[EMAIL PROTECTED]> writes:
> > Tom Lane wrote:
> >> It seems the ti
But he's using 8.1.4-- in that version, an explain analyze would list
the time taken to go through triggers, so the fact that we don't see any
of those lines means that it can't be constraint checking, so wouldn't
it have to be the index update overhead?
-- Mark
On Wed, 2006-12-13 at 11:46 -0500,
So a RAID controller with a battery-backed write cache can enable its
own write cache, but can't safely enable the write-caches on the disk
drives it manages.
-- Mark Lewis
One option would be to boost the priority of any task holding a lock that a high priority task
waits on. I guess that would just make it so that instead of HIGH tasks
being effectively reduced to LOW, then LOW tasks could be promoted to
HIGH.
-- Mark Lewis
Since I'm reasoning about the problem anyway, are
there any other types of cases you're aware of where some form of cross-
column statistics would be useful? In the unlikely event that I
actually come up with a brilliant and simple solution, I'd at least like
to make sure that I'm solving the right problem.
les in terms of utility vs. annoyance. One of the tips that shows up
in the footers today is just a link to the archives anyway.
-- Mark Lewis
On Wed, 2006-10-04 at 11:28 -0500, Bruno Wolff III wrote:
> On Wed, Oct 04, 2006 at 08:30:03 -0700,
> "Joshua D. Drake" <[EMAIL PROT
The open question is whether the planner can use such an index to
satisfy MAX queries; I'm not sure if it does.
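If it can't, the classic rewrite works on any version; a sketch (the
table name is my guess, the columns come from this thread):

SELECT ts FROM positions WHERE assetid = 42 ORDER BY ts DESC LIMIT 1;

With an index on (assetid, ts), that's a single index descent instead of
a scan.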
-- Mark Lewis
On Tue, 2006-10-03 at 14:35 -0700, Graham Davis wrote:
> Not many. It fluctuates, but there are usually only ever a few hundred
> at most. Each assetid has multi-millions of positions though.
>
> Mark Lewis wrote:
Hmmm. How many distinct assetids are there?
-- Mark Lewis
On Tue, 2006-10-03 at 14:23 -0700, Graham Davis wrote:
> The "summary table" approach maintained by triggers is something we are
> considering, but it becomes a bit more complicated to implement.
> Currently we
were far fewer distinct
assetid values than rows in the main table, and would get slow if you
commonly delete rows from the main table or decrease the value for ts in
the row with the highest ts for a given assetid.
-- Mark Lewis
On Tue, 2006-10-03 at 13:52 -0700, Graham Davis wrote:
> Thanks
The performance should
be at least as good as you could possibly achieve by reducing BLKSIZE.
PG 8.1 is smart enough to use a bitmap index scan to combine the two
indexes at query time; if that gives you adequate performance then it
would be simpler than reducing BLKSIZE.
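A minimal sketch of that (hypothetical names):

CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);

SELECT * FROM t WHERE a = 1 AND b = 2;

On 8.1+, EXPLAIN should show a Bitmap Heap Scan fed by a BitmapAnd of
two Bitmap Index Scans.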
-- Mark Lewis
On Mon, 2006-09-25 at 17:5
> So this might be a dumb question, but the above statements apply to the
> cluster (e.g. postmaster) as a whole, not per postgres
> process/transaction correct? So each transaction is blocked waiting for
> the main postmaster to retrieve the data in the order it was requested
> (i.e. not multiple
2. Are you running VACUUM and ANALYZE
during the test? Have you confirmed that you aren't suffering from
table bloat?
3. What are the actual results you got from the PG run in question?
4. What is the size of the data set referenced in the test run?
-- Mark Lewis
On Thu, 2006-09-21 at 07:52 -0700, yoav x wrote:
An EXPLAIN ANALYZE of the delete on that table will tell you how long each of the referential
integrity checks takes, so you can figure out which indexes are missing.
-- Mark Lewis
On Tue, 2006-09-19 at 15:22 +0200, Ivan Voras wrote:
> I've just fired off a "DELETE FROM table" command (i.e. unfiltered
> DELETE)
The last I checked (years ago), sql-bench was very synthetic (i.e.
reflecting no realistic use case). It's the sort of test suite that's
useful for database developers when testing the effects of a particular
code change or optimization, but not so applicable to real-world uses.
Historically th
> it's worse than that. if you need to read something that is not in
> the o/s cache, all the disks except for one need to be sent to a
> physical location in order to get the data. That's the basic rule with
> striping: it optimizes for sequential i/o at the expense of random i/o.
> There are some op
> Monitoring the processes using top reveals that the total amount of
> memory used slowly increases during the test. When reaching insert
> number 4, or somewhere around that, memory is exhausted, and the
> system begins to swap. Each of the postmaster processes seems to use a
> constant amount of memory.
Well, at least on my test machines running gnome-terminal, my pgbench
runs tend to get throttled by gnome-terminal's lousy performance to no
more than 300 tps or so. Running with 2>/dev/null to throw away all the
detailed logging gives me 2-3x improvement in scores. Caveat: in my
case the db is o
For example, a 2-disk array of 10k
RPM disks is going to max out somewhere around 333 tps (2 × 10000/60).
-- Mark Lewis
On Mon, 2006-08-21 at 16:45 -0400, Marty Jia wrote:
> I've exhausted all the performance tuning ideas, like the following
> parameters
>
> shared_buffers
>
tool, hit .
To get the text-based one, execute "SET SHOWPLAN_ALL ON" which toggles
diagnostic mode on, and each query that you run will return the explain
plan instead of actually running until you execute "SET SHOWPLAN_ALL
OFF".
-- Mark Lewis
On Thu, 2006-08-17 at 09:11 -040
Can you provide an EXPLAIN ANALYZE of the query in PG? Have you
analyzed the PG database? How many rows is this query expected to
return? Which version of PG are you running? What indexes have you
defined?
-- Mark
On Tue, 2006-08-15 at 14:38 +, Sebastián Baioni wrote:
> Hello, I'm migrati
see if it's feasible, but it seems like a
reasonable approach.
-- Mark Lewis
On Mon, 2006-07-31 at 09:30 -0400, Michael Stone wrote:
> On Mon, Jul 31, 2006 at 01:54:24PM +0200, Axel Rau wrote:
> >Am 31.07.2006 um 13:15 schrieb Michael Stone:
> >>On Mon, Jul 31, 2006 at 12:48:11P
A volatile function may return a different result for each row;
think of the random() or nextval() functions for example. You wouldn't
want them to return the same value for each row returned.
-- Mark Lewis
On Fri, 2006-07-28 at 13:59 -0700, Ben wrote:
> It's volatile, but i
RAID-5 and RAID-10 random reads would be
about equal or else maybe give a slight edge to RAID-10.
-- Mark Lewis
On Fri, 2006-07-28 at 13:31 -0400, Jeff Trout wrote:
> I too have a DL385 with a single DC Opteron 270.
> It claims to have a smart array 6i controller and over the last
> cou
It seems that it would make
life easier for other folks too.
Thanks in advance for any feedback :)
-- Mark Lewis
The IDE drive is almost certainly lying about flushing data to the disk.
Lower-end consumer drives often do.
What this means is that commits will be a whole lot faster, but the
database loses its ACID guarantees, because a power failure at the wrong
moment could corrupt the whole database.
If you
It may be that because you've got a
write cache on your SAN, you're getting the same speed as
fsync=off, at least for some cheap controllers.
-- Mark Lewis
On Thu, 2006-06-15 at 14:46 -0400, John Vincent wrote:
> One question that we came up with is how does this affect other
> aggregate functions like MAX,MIN,SUM and whatnot? Being that this is
> our data warehouse, we use these all the time. As I've said
> previously, I didn't know a human could ge
PG has to visit the main table regardless, so the
sequential scan is much faster because it avoids traversing the index
and performing random read operations.
-- Mark Lewis
Here's one more thing to investigate.
With DBCP and non-ancient versions of the JDBC driver that use v3
protocol and real prepared statements, it is possible to (mis)configure
the system to create an unbounded number of cached prepared statements
on any particular connection. Older versions of DBCP were also known to
No. You need fsync on in order to force the data to get TO the NetApp
at the right time. With fsync off, the data gets cached in the
operating system.
-- Mark Lewis
On Wed, 2006-06-14 at 14:48 -0700, Dan Gorman wrote:
> All,
>So I thought I'd pose this question:
>
On Fri, 2006-06-02 at 17:37 -0300, [EMAIL PROTECTED] wrote:
> Many thanks Mark,
>
> I will consider fsync=off only to do an initial load, not for a database
> normal operation.
>
This approach works well. You just need to remember to shut down the
database and start it back up again with fsync turned back on.
On Fri, 2006-06-02 at 16:54 -0300, [EMAIL PROTECTED] wrote:
> > <[EMAIL PROTECTED]> writes:
> > > I would like to know if my supposition is right.
> >
> > > Considering an environment with only one hard disk attached to
> > > a server, an
> > > initial loading of the database probably is much faster
On Fri, 2006-06-02 at 15:25 -0300, [EMAIL PROTECTED] wrote:
> Hi,
>
> I would like to know if my supposition is right.
>
> Considering an environment with only one hard disk attached to a server, an
> initial loading of the database probably is much faster using an IDE/ATA
> interface with write-
> select one record out of every 150, which is
> going to cost near as much as seqscanning all of them.
Well, if the MySQL server has enough RAM that the index is cached (or
index + relevant chunks of data file if using InnoDB?) then that would
explain how MySQL can use an index to get fast results.
In that case, performing N concurrent imports (where N is the
number of processor cores available) might be a win over a single-
threaded import.
-- Mark Lewis
IN clause), and discovered that for most databases,
the optimal batch size was 1. For PostgreSQL I think it was 2.
The moral of the story is that you're probably better off running a
bunch of small selects than in trying to optimize things with one
gargantuan select.
-- Mark Lewis
On Mon, 2
They are not equivalent. As I understand it, RAID 0+1 performs about
the same as RAID 10 when everything is working, but degrades much less
nicely in the presence of a single failed drive, and is more likely to
suffer catastrophic data loss if multiple drives fail.
-- Mark
On Tue, 2006-05-02 at
A UPS still leaves you vulnerable to OS crashes, failures in
non-RAID hardware, UPS failures, or anything else that would necessitate
a hard reboot.
So a UPS is a decent replacement for a BBU only if you trust your app
server/OS more than you value your data.
-- Mark Lewis
On Mon, 2006-05-01 at 10:58 -0700, Erik Myllymaki wrote:
It's also possible that the single SATA drive you were testing (or the
controller it was attached to) is lying about fsync and performing write
caching behind your back, whereas your new controller and drives are
not.
You'll find a lot more info on the archives of this list about it, but
basically
to the deep guru who knew the proper undocumented
incantations.
-- Mark Lewis
On Thu, 2006-04-20 at 20:00 +0200, Mikael Carneholm wrote:
> We're going to get one for evaluation next week (equipped with dual
> 2Gbit HBA:s and 2x14 disks, iirc). Anyone with experience from them,
Can you post an explain analyze for the delete query? That will at
least tell you if it is the delete itself which is slow, or a trigger /
referential integrity constraint check. Which version of PG is this?
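Something along these lines (table and condition are made up for
illustration):

EXPLAIN ANALYZE DELETE FROM mytable WHERE created < '2006-01-01';

On 8.1 and later the output includes a time line per trigger, so a slow
unindexed foreign-key check will stand out immediately.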
-- Mark Lewis
On Wed, 2006-03-29 at 12:58 -0500, Eric Lauzon wrote:
> Greetings,
ers yet. It's not for load balancing, just
active/passive fault tolerance.
-- Mark Lewis
On Thu, 2006-02-16 at 21:33 -0800, David Lang wrote:
> > In SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit
> > sortKey as elsewhere suggested). The sorting key doesn't need to be a
> > one-to-one mapping.
>
> that would violate your second constraint ( f(a)==f(b) iff (a==b) )
I could see doing it for char(n)/varchar(n) where n<=4 in SQL_ASCII though.
In SQL_ASCII, just take the first 4 characters (or 8, if using a 64-bit
sortKey as elsewhere suggested). The sorting key doesn't need to be a
one-to-one mapping.
-- Mark Lewis
We could always use f(x)=0 as the
default sortKey function which would degenerate to the exact same sort
behavior in use today.
-- Mark Lewis
Machine 1: $2000
Machine 2: $2000
Machine 3: $2000
Knowing how to rig them together and maintain them in a fully fault-
tolerant way: priceless.
(Sorry for the off-topic post, I couldn't resist).
-- Mark Lewis
On Wed, 2006-02-15 at 09:19 -0800, Craig A. James wrote:
> Jeremy Hai
You'll want to upgrade as soon as possible, and refer to the
on-line docs about what to do with your FSM settings.
-- Mark Lewis
On Mon, 2006-01-30 at 23:57 +0100, Emmanuel Lacour wrote:
> Hi everybody,
>
> I have the following problem, on a test server, if I do a fresh import
> of pro
nough". In PostgreSQL, historical
rows are kept in the tables themselves and periodically vacuumed, so
there is no such guarantee, which means that you would need to either
implement a lot of complex locking for little material gain, or just
hold the cursors in moderately long-running transact
other users may need to wait for the connection, and another connection
won't do.
3. If this is a busy web site, you might end up with potentially many
thousands of open cursors. I don't know if this introduces an
unacceptable performance penalty or other bottleneck in the server.
If this is a query that will be executed more than once, you can also
avoid incurring the planning overhead multiple times by using PREPARE.
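A quick sketch (hypothetical names):

PREPARE get_user (int) AS SELECT * FROM users WHERE id = $1;
EXECUTE get_user(42);
EXECUTE get_user(43);
DEALLOCATE get_user;

The statement is planned once at PREPARE time and the plan is reused by
each EXECUTE.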
-- Mark Lewis
On Wed, 2006-01-11 at 18:50 -0500, Jean-Philippe Côté wrote:
> Thanks a lot for this info, I was indeed exceeding the genetic
> optim
Check and make sure RH backported whatever the
fix was to their current RHEL4 kernel.
Thanks,
Mark Lewis
Do you have an index on the date column? Can you post an EXPLAIN
ANALYZE for the slow query?
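That is, something like the following, with names guessed for
illustration:

CREATE INDEX mytable_date_idx ON mytable (date);
EXPLAIN ANALYZE SELECT * FROM mytable WHERE date >= '2005-10-01';

The EXPLAIN ANALYZE output will show whether the index is actually being
used.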
-- Mark Lewis
On Wed, 2005-10-26 at 13:41 -0700, aurora wrote:
> I am running Postgres 7.4 on FreeBSD. The main table has 2 million
> records (we would like to do at least 10 mil or more). It is ma
Which version of PG are you using? One of the new features for 8.0 was
an improved caching algorithm that was smart enough to avoid letting a
single big query sweep everything else out of cache.
-- Mark Lewis
On Tue, 2005-10-04 at 10:45 -0400, Lane Van Ingen wrote:
> Yes, Stefan, the kind
operations != passes. If you were clever, you could probably write a
modified bubble-sort algorithm that only made 2 passes. A pass is a
disk scan, operations are then performed (hopefully in memory) on what
you read from the disk. So there's no theoretical log N lower-bound on
the number of disk scans.
I had a similar problem, so I downloaded 8.1 from CVS, ran it on a
relatively gnarly dev workstation, imported a dump of my 8.0 database,
and ran my troublesome queries with the new EXPLAIN ANALYZE.
This process took about an hour and worked great, provided that you've
actually named your foreign key constraints.
performance with
this particular hardware and workload?
Ah well. Thought myself in circles and have no real conclusions to show
for it. Posting anyway, maybe this will give somebody some ideas to
work with.
-- Mark Lewis
On Fri, 2005-08-12 at 08:47 +, Steve Poe wrote:
> Paul,
>
> Bef
Easier and faster than doing the custom trigger is to simply define a
unique index and let the DB enforce the constraint with an index lookup,
something like:
create unique index happy_index ON happy_table(col1, col2, col3);
That should run faster than the custom trigger, but not as fast as the
t
On Fri, 2005-07-08 at 16:43 +0200, Enrico Weigelt wrote:
> * PFC <[EMAIL PROTECTED]> wrote:
>
>
> > For Python it's the reverse : the MySQL driver is slow and dumb,
> > and the postgres driver (psycopg 2) is super fast, handles all
> > quoting,
> > and knows about type conversions, i
as a
'supported' package for RH7.2 anyway.
-- Mark Lewis
On Thu, 2005-06-16 at 07:46 -0700, Todd Landfried wrote:
> Yes, it is 7.2. Why? because an older version of our software runs on
> RH7.3 and that was the latest supported release of Postgresql for
> RH7.3 (that we can find)