One thing: in SUM() you don't have to coalesce the inputs, since SUM() simply ignores NULLs. Consider the following example:
foo=# create table bar(id serial primary key, a float);
NOTICE: CREATE TABLE will create implicit sequence "bar_id_seq" for serial column "bar.id"
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "bar_pkey" for table "bar"
http://wiki.postgresql.org/wiki/Guide_to_reporting_problems
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
ORDER BY ... upper(xyz) - do you have a functional index on these expressions?
2011/8/29 Mark Kirkwood mark.kirkw...@catalyst.net.nz:
I note from the commit message that the fix test case was from Grzegorz
Jaskiewicz (antijoin against a small subset of a relation). I was not able
to find this in the archives - Grzegorz do you recall the actual test case?
I thought it
9.0rc1?
You know that the stable 9.0 has been out for quite a while now.
It's not going to affect the delete speed in any way, but I would
generally advise you to upgrade to the latest 9.0.x.
As for the delete itself, check whether you have indices on the tables
that refer to the main table on the
The card is configured in RAID 1+0, with a 128k stripe AFAIK (I'm a
developer, we don't have hardware guys here).
Are you sure about the lack of cache by default on the card? I
thought the difference is that the 5104 has 256MB and the 5105 has 512MB of RAM
already on it.
Does anyone here have any bad experiences with the RAID card in the subject?
This is in an IBM server, with 2.5" 10k drives.
But we seem to observe its poor performance in other configurations as
well (with different drives, different settings) in comparison with,
say, what Dell provides.
Any
do you have any indexes on that table ?
Try going through the archives first, because your question has probably
been answered many times already (although there is no definitive
answer as to what server postgresql would need to run to fit your
purpose).
Also, this is an English-language list. If you prefer to ask questions in
Brazilian/Portuguese
you're joining on more than one key. That always hurts performance.
Implementation-wise, count(*) is faster. Very easy to test:
SELECT COUNT(*) FROM generate_series(1,100) a, generate_series(1,1000) b;
SELECT COUNT(a) FROM generate_series(1,100) a, generate_series(1,1000) b;
;]
On Mon, Aug 23, 2010 at 2:47 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Grzegorz Jaśkiewicz gryz...@gmail.com wrote:
joining on varchars is always going to be very expensive. The longer
the value is, the more expensive it will be. Consider going for
surrogate keys.
Surrogate keys come
I am not a fan of 'do this - this is best' responses to queries like that.
Rather: this is what you should try, and choose whichever one suits you better.
So, rather than 'natural keys ftw', I am giving him another option to
choose from.
You see, in my world, I was able to improve some large dbs
Oh, and I second using same types in joins especially, very much so :)
Hi folks,
is there a general problem with postgresql performance on RAID 10?
We see very low performance on writes (2-3x slower than on less
performant servers). I wonder if it is solely a problem of the RAID 10
configuration, or if it is a postgresql thing.
Would moving the WAL dir to a separate disk help
WAL matters for performance, hence it is advisable to have it on a
separate drive :)
Temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up for each connection, so
that it can access its own temporary tables.
Temporary tables are also not autovacuumed.
And that's pretty much most of the differences.
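A quick way to see that per-connection schema for yourself (table name invented):

create temp table scratch (id int);
select schemaname, tablename from pg_tables where tablename = 'scratch';
-- schemaname comes back as pg_temp_N; another session won't see the table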
WAL does for the DB what journaling does for the FS.
Plus it allows point-in-time recovery (PITR).
As for the RAM, it will stay in RAM as long as the OS decides to keep it in
the RAM cache, and/or it's in shared buffers.
Unless you have a lot of doubt about the two, I don't think it makes
too much
On Mon, May 17, 2010 at 12:54 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
On Mon, May 17, 2010 at 5:10 AM, Pierre C li...@peufeu.com wrote:
- or use a JOIN delete with a virtual VALUES table
- or fill a temp table with ids and use a JOIN DELETE
What is a virtual VALUES table? Can you give
again: VALUES (1,2), (2,3), ... is a 'virtual table', as he calls it.
It really is not a table to postgresql; I guess he is just using that
naming convention.
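For the record, a join delete against such a 'virtual table' looks roughly like this (table and column names invented):

delete from items
using (values (1), (2), (3)) as dead(id)
where items.id = dead.id;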
On Sat, Apr 24, 2010 at 2:23 PM, Merlin Moncure mmonc...@gmail.com wrote:
Well, you missed the most important part: not using cursors at all.
Instead of declaring a cursor and looping it to build the array, build
it with array(). That's what I've been saying: arrays can completely
displace
That really sounds like a hardware issue: the I/O basically causes the system
to freeze.
Happens sometimes on cheaper hardware.
Starting with 8.3, there's this new feature called HOT, which helps a lot
when you do loads of updates.
Plus the writer is much quicker (30-40% sometimes), and autovacuum behaves much
nicer.
Bottom line: upgrade to 8.3. 8.1 had autovacuum disabled by default for a
reason.
On Wed, Apr 7, 2010 at 1:20 PM, sherry.ctr@faa.gov wrote:
Guys,
Thanks for trying and opening your mind.
If you want to know how Oracle addressed this issue, here it is: an index
on two columns. I remember they told me in the training that postgres has
no such index, can
2010/4/7 sherry.ctr@faa.gov
Do you mean one index on two columns?
something like this: create index idx1 on tb1(col1, col2);
yup :) It would be quite useless without that feature.
Don't listen to the Oracle folks, they obviously don't know much about products
other than Oracle db(s).
The time that psql or pgAdmin shows is purely the postgresql time.
The question here was about the actual application's time. Sometimes the data
transmission, fetching and processing on the app's side can take longer than
the 'postgresql' time.
try JOINs...
On Tue, Mar 2, 2010 at 4:23 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Thu, Feb 25, 2010 at 7:03 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Partially. There are stats now but autovacuum is not bright about
when to update them.
Is that something
storing all fields as varchar surely doesn't make:
- indices small,
- the thing fly,
- tables small.
...
isn't that possible with window functions and a CTE?
rank() and LIMIT?
just create an index on both columns:
CREATE INDEX foo_i ON foo(bar1, bar2);
HTH
maybe that 'one big table' needs something called 'normalisation'
first. See how much that will shed off. You might be surprised.
The partitioning needs to be done by some constant interval - of time,
in your case. Whatever suits you; I would suggest using a rate
that will give you both ease
you can also try :
select val FROM table ORDER BY val DESC LIMIT 1;
which usually is much quicker.
On Thu, Jan 7, 2010 at 3:05 PM, Lefteris lsi...@gmail.com wrote:
On Thu, Jan 7, 2010 at 3:51 PM, Ivan Voras ivo...@freebsd.org wrote:
On 7.1.2010 15:23, Lefteris wrote:
I think what you all said was very helpful and clear! The only part
that I still disagree with / don't understand is the
On Fri, Dec 18, 2009 at 2:18 PM, Robert Haas robertmh...@gmail.com wrote:
NOT IN is the only one that really kills you as far as optimization is
concerned. IN can be transformed to a join. NOT IN forces a NOT
(subplan)-type plan, which bites - hard.
in a well designed database (read: not
2009/12/18 Robert Haas robertmh...@gmail.com:
2009/12/18 Grzegorz Jaśkiewicz gryz...@gmail.com:
On Fri, Dec 18, 2009 at 2:18 PM, Robert Haas robertmh...@gmail.com wrote:
NOT IN is the only one that really kills you as far as optimization is
concerned. IN can be transformed to a join
On Thu, Dec 17, 2009 at 6:05 PM, Robert Haas robertmh...@gmail.com wrote:
On Thu, Dec 17, 2009 at 10:23 AM, Thomas Hamilton
thomashamilto...@yahoo.com wrote:
Apparently the latest version of MySQL has solved this problem:
On Thu, Nov 26, 2009 at 4:20 PM, Richard Neill rn...@cam.ac.uk wrote:
Dear All,
I'm wondering whether Vacuum/analyse (notably by the autovacuum daemon) is
responsible for some deadlocks/dropouts I'm seeing.
One particular table gets hit about 5 times a second (for single row
updates and
On Wed, Nov 25, 2009 at 4:13 PM, Luca Tettamanti kronos...@gmail.comwrote:
DELETE FROM t1 WHERE EXISTS (SELECT 1 FROM t2 WHERE t1.annotation_id =
t2.annotation_id)
performs even better:
Seq Scan on t1 (cost=0.00..170388415.89 rows=22937406 width=6) (actual
time=272.625..561241.294
On Wed, Nov 25, 2009 at 4:26 PM, Kevin Grittner kevin.gritt...@wicourts.gov
wrote:
Richard Neill rn...@cam.ac.uk wrote:
In terms of just index bloat, does a regular vacuum help?
You might want to use the REINDEX command to correct serious index
bloat. A regular vacuum will make dead
On Tue, Nov 24, 2009 at 3:19 PM, Thom Brown thombr...@gmail.com wrote:
2009/11/24 Luca Tettamanti kronos...@gmail.com
On Tue, Nov 24, 2009 at 3:59 PM, Jerry Champlin
jchamp...@absolute-performance.com wrote:
You may want to consider using partitioning. That way you can drop the
the usual answer: use the LEFT JOIN, Luke.
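i.e. the anti-join pattern, sketched with invented names:

select a.id
from a
left join b on b.a_id = a.id
where b.a_id is null;  -- rows in a with no match in b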
On Mon, Nov 9, 2009 at 3:58 AM, Robert Haas robertmh...@gmail.com wrote:
And maybe REINDEX, too.
yup, never mind the mess in the table; the indices are getting fscked much quicker
than the table itself, because of their structure.
--
GJ
On Mon, Nov 2, 2009 at 2:16 PM, Grant Masan grant.mas...@gmail.com wrote:
Hi all,
I have now read many forums and tried many different solutions and I am
not getting good performance from the database. My server is Debian Linux, with
4GB RAM; there is also a Java application and I am
for explains, use http://explain.depesz.com/
besides, why are you using LEFT JOIN?
The equivalent of IN () is just JOIN, not LEFT JOIN.
And please format your query so it is readable without twisting eyeballs
before sending.
On Wed, Oct 28, 2009 at 6:13 PM, Anj Adu fotogra...@gmail.com wrote:
Postgres consistently does a sequential scan on the child partitions
for this query
select * from partitioned_table
where partitioned_column > current_timestamp - interval '8 days'
where x in (select yy from z where colname
On Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER dbuche...@hsolutions.chwrote:
Dear all,
I need to optimize a database used by approx 10 people. I don't need to
have the perfect config, simply to avoid stupid bottlenecks and follow
the best practices...
The database is used from a web
2009/10/28 Denis BUCHER dbuche...@hsolutions.ch
Grzegorz Jaśkiewicz wrote:
On Wed, Oct 28, 2009 at 12:11 PM, Denis BUCHER dbuche...@hsolutions.ch
mailto:dbuche...@hsolutions.ch wrote:
Dear all,
I need to optimize a database used by approx 10 people, I don't need
On Fri, Oct 23, 2009 at 4:49 PM, Scott Mead scott.li...@enterprisedb.comwrote:
Do you not have an index on last_snapshot.domain_id?
that, and also try rewriting the query as a JOIN. There might be a difference in
performance/plan.
--
GJ
On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski mich20...@gmail.comwrote:
We have a similar problem and are now trying to find a solution. When you
execute a query on a partition there is no sorting - the DB uses an index to
retrieve the data, and if you need, let's say, 50 rows it reads 50 rows using the
index. But when
On Mon, Oct 19, 2009 at 2:43 PM, Vikul Khosla vkho...@gridsolv.com wrote:
Jeff, Robert, I am still working on the low cardinality info you
requested. Please bear with me.
In the meantime, have the following question:
Are there known scenarios where certain types of SQL queries perform
2009/10/19 Robert Haas robertmh...@gmail.com
2009/10/19 Grzegorz Jaśkiewicz gryz...@gmail.com:
On Sun, Oct 11, 2009 at 3:30 PM, Michal Szymanski mich20...@gmail.com
wrote:
We have similar problem and now we are try to find solution. When you
execute query on partion
On Tue, Oct 13, 2009 at 9:59 AM, Michael Schwipps msc.lis...@online.dewrote:
Hi,
I want to select the last contact of a person via mail.
My sample database is built with the following shell commands:
| createdb -U postgres test2
| psql -U postgres test2 < mail_db.sql
| mailtest.sh | psql -U
On Tue, Oct 13, 2009 at 4:17 PM, Shaul Dar shaul...@gmail.com wrote:
Hi,
I am running performance simulation against a DB. I want to randomly pull
different records from a large table. However the table has no columns that
hold sequential integer values (1..MAX), i.e. the columns all have
2009/10/13 Shaul Dar shaul...@gmail.com
Sorry, I guess I wasn't clear.
I have an existing table in my DB, and it doesn't have a column with serial
values (actually it did originally, but due to later deletions of about 2/3
of the rows the column now has holes). I realize I could add a new
On Mon, Oct 12, 2009 at 12:21 PM, S Arvind arvindw...@gmail.com wrote:
In the below query both tables have less than 1 million rows. Can you tell me
the reason for this plan?
Why is it taking such an extensive cost, a seq scan and sorting? What is Materialize?
select 1 from service_detail
left join
btw, what's the version of the db?
what's the work_mem setting?
Try setting work_mem to a higher value. PostgreSQL will fall back to disk
sorting if the content doesn't fit in work_mem, which it probably doesn't
(8.4+ shows the memory usage for sorting, which your explain doesn't have).
2009/10/12 Matthew Wakeling matt...@flymine.org
This is an EXPLAIN, not an EXPLAIN ANALYSE. If it was an EXPLAIN ANALYSE,
it would show how much memory was used, and whether it was a disc sort or an
in-memory sort. As it is only an EXPLAIN, the query hasn't actually been
run, and we have no
2009/10/12 S Arvind arvindw...@gmail.com
Thanks Grzegorz,
But work memory is for each process (connection), right? So if I keep
it at more than 10MB will it not affect the overall performance?
It will. But the memory is only allocated when needed.
You can always set it before running that
On Mon, Oct 5, 2009 at 8:35 PM, Guy Rouillier guyr-...@burntmail.comwrote:
Grzegorz Jaśkiewicz wrote:
well, as a rule of thumb - unless you can't think of a default value for a
column - don't use nulls. So using null as a default 'idunno' is a bad
practice, but everybody has their own opinion
On Mon, Oct 5, 2009 at 1:24 PM, Omar Kilani omar.kil...@gmail.com wrote:
I'm not really sure what the alternatives are -- it never really makes
sense to get the selectivity for thousands of items in the IN clause.
I've never seen a different plan for the same query against a DB with
that
On Mon, Oct 5, 2009 at 2:52 PM, Matthew Wakeling matt...@flymine.orgwrote:
mnw21-modmine-r13features-copy=# select count(*) from project;
 count
-------
    10
(1 row)

mnw21-modmine-r13features-copy=# select count(*) from intermineobject;
  count
----------
 26344616
(1 row)
2009/10/5 Matthew Wakeling matt...@flymine.org
Yes, that does work, but only because id is NOT NULL. I thought Postgres
8.4 had had a load of these join types unified to make it less important how
the query is written?
well, as a rule of thumb - unless you can't think of a default value of
if you reuse that set a lot, how about storing it in a table and doing the
join on the db side? If it is large, it sometimes makes sense to create a temp
table just for a single query (I use that sort of stuff for comparing with a few
M records).
But temp tables in that case have to be short-lived, as
On Fri, Sep 25, 2009 at 9:06 AM, Shiva Raman raman.shi...@gmail.com wrote:
Hi Gerhard
I also found that pg_log has 73G of data.
clusternode2:/var/lib/pgsql/data # du -sh pg_log/
73G pg_log/
Is it necessary to keep these log files? Can I back up the logs and delete them
from the original
2009/9/25 Shiva Raman raman.shi...@gmail.com
As suggested, I changed log_statement='ddl' and now it is logging only
the ddl statements. Thanks for the tip.
Can I delete the old log files in pg_log after backing them up as a zip archive?
Is it necessary to keep those log files?
they're
On Thu, Sep 24, 2009 at 9:27 AM, jes...@krogh.cc wrote:
Hi.
I have a transaction running at the database for around 20 hours .. still
isn't done. But during the last hours it has come to the point where it
really hurts performance of other queries.
Given pg_stat_activity output there seems
On Tue, Sep 22, 2009 at 1:36 PM, Alan McKay alan.mc...@gmail.com wrote:
Too high? How high is too high?
in a very simple scenario, you have 100 connections open, and all of
them run the query that was the reason you bumped work_mem to 256MB.
All of a sudden postgresql starts to complain
On Tue, Sep 22, 2009 at 1:46 PM, Alan McKay alan.mc...@gmail.com wrote:
Best practice to avoid that is to bump work_mem temporarily
before the query, and then lower it again; that lowers the chance of memory
exhaustion.
Interesting - I can do that dynamically?
you can do SET work_mem = '128MB';
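For example, scoped to a single transaction so it reverts automatically (table and column names invented):

begin;
set local work_mem = '256MB';
select * from big_table order by payload;  -- the query that needed the bump
commit;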
not only is that slow, but it's limited, as you can see. Use something like:
http://gjsql.wordpress.com/2009/04/19/how-to-speed-up-index-on-bytea-text-etc/
instead.
On Tue, Sep 15, 2009 at 9:10 PM, Andrzej Zawadzki zawa...@wp.pl wrote:
So, I was close - bad index... DESCending is much better.
Thanks to Grzegorz Jaśkiewicz, who strengthened me in this conjecture.
I'm posting this - maybe someone will find something useful in this case.
ps. query was
Teach it not to generate WHERE ... IN (subquery), as this can be quite
slow on postgresql. Use joins instead.
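A sketch of the rewrite (names invented; assumes customers.id is unique, otherwise the join can duplicate rows):

-- instead of:
select * from orders
where customer_id in (select id from customers where region = 'EU');
-- let it generate:
select o.*
from orders o
join customers c on c.id = o.customer_id
where c.region = 'EU';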
looks like the planner was wrong about the rowcount in one place: Hash IN Join
(cost=2204.80..4809.31 rows=292 width=202) (actual
time=12.856..283.916 rows=15702 loops=1)
I have no idea why,
postgresql was faster than the files ;)
(sorry, I just couldn't resist).
that seems to be the killer:
and time >= extract('epoch' from timestamp '2009-08-12')
and time < extract('epoch' from timestamp '2009-08-13')
You probably need an index on that expression:
CREATE INDEX foo ON table ((extract('epoch' from time)));
or something like that; vacuum analyze and
how about normalizing the schema for a start?
By the looks of it, you have a huge table with plenty of varchars; that
smells like bad db design.
On Tue, Jul 21, 2009 at 1:42 PM, Doug Hunleyd...@hunley.homeip.net wrote:
Just wondering is the issue referenced in
http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php
is still present in 8.4 or if some tunable (or other) made the use of
hyperthreading a non-issue. We're
On Tue, Jul 21, 2009 at 3:16 PM, Scott Marlowescott.marl...@gmail.com wrote:
On Tue, Jul 21, 2009 at 6:42 AM, Doug Hunleyd...@hunley.homeip.net wrote:
Just wondering is the issue referenced in
http://archives.postgresql.org/pgsql-performance/2005-11/msg00415.php
is still present in 8.4 or if
On Thu, Jul 9, 2009 at 5:26 PM, Craig Jamescraig_ja...@emolecules.com wrote:
Suppose I have a large table with a small-cardinality CATEGORY column (say,
categories 1..5). I need to sort by an arbitrary (i.e. user-specified)
mapping of CATEGORY, something like this:
1 = 'z'
2 = 'a'
3 =
2009/7/9 Tom Lane t...@sss.pgh.pa.us:
Grzegorz Jaśkiewicz gryz...@gmail.com writes:
On Thu, Jul 9, 2009 at 5:26 PM, Craig Jamescraig_ja...@emolecules.com
wrote:
Suppose I have a large table with a small-cardinality CATEGORY column (say,
categories 1..5). I need to sort by
wouldn't it be better just to store the last time the user visited the topic, or the
forum in general, and compare that?
On Sun, Jun 21, 2009 at 9:01 PM, Justin Grafjus...@emproshunts.com wrote:
work_mem = 51024 # min 64, size in KB
That's a lot of memory dedicated to work_mem; if you have 30 connections open
this could eat up 1.5 gigs, pushing the data out of cache.
I thought work memory is the max
On Thu, Jun 18, 2009 at 6:06 PM, Brian Coxbrian@ca.com wrote:
these queries are still running now 27.5 hours later... These queries are
generated by some java code and in putting it into a test program so I could
capture the queries, I failed to get the id range correct -- sorry for
On Thu, Jun 18, 2009 at 6:16 PM, Brian Cox brian@ca.com wrote:
Grzegorz Jakiewicz [gryz...@gmail.com] wrote:
this might be quite a bogus question, just a hint - but what is your
work_mem set to?
Guys, isn't postgresql giving a huge cost when it can't sort in memory?
work_mem = 64MB
try
On Wed, Jun 17, 2009 at 8:33 AM, Albe Laurenzlaurenz.a...@wien.gv.at wrote:
I don't understand your data model well enough to understand
the query, so I can only give you general hints (which you probably
already know):
He is effectively joining the same table 4 times in a for loop, to get
Postgresql isn't very efficient with subselects like that,
try:
explain select c.id from content c LEFT JOIN (select min(id) AS id
from content group by hash) cg ON cg.id=c.id WHERE cg.id is null;
On Fri, May 29, 2009 at 2:54 AM, Greg Smith gsm...@gregsmith.com wrote:
The PostgreSQL connection handler is known to be bad at handling high
connection loads compared to the popular pooling projects, so you really
shouldn't throw this problem at it. While kernel problems stack on top of
2009/5/29 Scott Marlowe scott.marl...@gmail.com:
if it is implemented better somewhere else, shouldn't that make it
obvious that postgresql should solve it internally? It is really
annoying to hear all the time that you should add an additional path of
execution to an already complex stack, and
2009/5/29 Scott Marlowe scott.marl...@gmail.com:
Both Oracle and PostgreSQL have fairly heavy backend processes, and
running hundreds of them on either database is a mistake. Sure,
Oracle can handle more transactions and scales a bit better, but no
one wants to have to buy a 128 way E15K
damn, I agree with you Scott. I wish I had enough cash here to employ
Tom and the other pg magicians to improve performance for all of us ;)
Thing is, though, postgresql is mostly used by companies that either
don't have that sort of cash but still like to get the performance,
or companies that have
depends on how soon you need to access the data after it's
created. The way I do it in my systems: I get data from 8 points, a bit
less than you - but I dump it to csv, and import it on the database host
(separate server).
Now, you could go to BDB or whatever, but that's not the solution.
So,
try creating an index on all three columns.
Btw, 38ms is pretty fast. If you run that query very often, do prepare
it, cos I reckon it takes a few ms to actually create the plan for it.
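Something like this (table and column names invented):

prepare q (int, int, int) as
  select * from t where a = $1 and b = $2 and c = $3;
execute q (1, 2, 3);  -- the planning cost is paid roughly once per session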
you have to vacuum analyze after you've created the index, afaik.
No, count(*) is still counting rows.
2009/5/25 Scott Marlowe scott.marl...@gmail.com:
So, in 2000 databases, there's only an average of 2 relations per db
and 102 dead rows? Cause that's all you got room for with those
settings.
Whats the last 20 or so lines of vacuum verbose as run by a superuser say?
according to
2009/5/25 Łukasz Jagiełło lukasz.jagie...@gforces.pl:
On 25 May 2009 at 17:32, Scott Marlowe
scott.marl...@gmail.com wrote:
Recently changed the postgresql server from an Amazon EC2 small into a large one.
That gives me x86_64 arch, a two-core cpu and 7.5GB ram. Atm got almost
~2000 small
EXISTS won't help much either; postgresql is not too fast when it
comes to that sort of approach.
A join is always going to be fast. It is about time you learn joins and
use them ;)
use a join instead of WHERE IN ();
BETWEEN X AND Y
On Mon, Apr 20, 2009 at 2:55 PM, Rafael Domiciano
rafael.domici...@gmail.com wrote:
Hello People,
I have initiated a review of the sqls of our internal software.
For a lot of them the problem is the sql logic, or joins with unnecessary tables,
and so on.
But the software has a lot
crawler=# select * from assigments;
 jobid | timeout | workerid
-------+---------+----------
(0 rows)

Time: 0.705 ms
crawler=# \d+ assigments
                Table "public.assigments"
 Column | Type | Modifiers
2009/4/18 Tom Lane t...@sss.pgh.pa.us:
Grzegorz Jaśkiewicz gryz...@gmail.com writes:
That expected 1510 rows in 'assigments' seems to be pretty off,
The planner does not trust an empty table to stay empty. Every
Postgres version in living memory has acted like that; it's not
create index foobar on table(row desc);