Sorry for the late reply - but I still haven't found a solution,
for example I have a PHP script with 5 consecutive SELECT
statements (source code + problem described again under:
http://stackoverflow.com/questions/6458246/php-and-pgbouncer-in-transaction-mode-current-transaction-is-aborted
I have two tables, town and townalias, the latter containing alternative
town names.
I would like to ensure that a town name is unique per
country-region across the two tables.
Can I do this with a constraint, or do I need to implement the logic via a
trigger?
You can't have a constraint
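A cross-table UNIQUE constraint is not possible; a trigger is the usual workaround. A minimal sketch, assuming hypothetical columns name and country_region on both tables (not taken from the thread):
CREATE OR REPLACE FUNCTION check_town_name_unique() RETURNS trigger AS $$
BEGIN
    -- reject the new alias if the name is already taken in this
    -- country-region, either as a town name or as another alias
    IF EXISTS (SELECT 1 FROM town
                WHERE name = NEW.name AND country_region = NEW.country_region)
       OR EXISTS (SELECT 1 FROM townalias
                WHERE name = NEW.name AND country_region = NEW.country_region) THEN
        RAISE EXCEPTION 'town name "%" already used in region %',
                        NEW.name, NEW.country_region;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER townalias_name_unique BEFORE INSERT ON townalias
    FOR EACH ROW EXECUTE PROCEDURE check_town_name_unique();
A symmetric trigger is needed on town, and concurrent inserts can still race past the check unless you serialize them first (e.g. with a table lock or an advisory lock).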
Hi, I have performance issues on a very large database (100 GB). Reading from
the database is no problem, but writing (or heavy writing) is a nightmare.
I have tried tuning PostgreSQL, but that does not seem to improve the
write performance.
To improve the write performance, what are my
Greets,
I'm trying to figure out why the following SELECT has become slow
(hardware,
code changes, etc) and would appreciate any comments on interpreting the
EXPLAIN ANALYZE output. It *used* to take a few seconds at most, but not
anymore... In figuring out which part is taking so long,
On Fri, June 3, 2011 13:57, t...@fuzzy.cz wrote:
There's something very wrong with snames - the planner expects 22 rows
but gets 164147851. Which probably causes a bad plan choice or something
like that.
Try to analyze the snames table (and maybe increase the statistics
target on the
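For the archive, that advice translates to something like this (the column name is a guess, not from the thread):
ANALYZE snames;
-- collect a larger histogram for the misestimated column, then re-analyze
ALTER TABLE snames ALTER COLUMN sname SET STATISTICS 1000;
ANALYZE snames;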
Hello fellow PostgreSQL-users,
I run a Drupal 7 (+Facebook app) website
with a multiplayer flash game and use
postgresql-server-8.4.8-1PGDG.rhel5 +
CentOS 5.6 64 bit on a Quad-Core/4GB machine.
I generally like using PostgreSQL even though
I'm not an experienced DB user, but in the recent
Yes, loading a large dictionary is known to be a fairly expensive
operation. There have been discussions about how to make it cheaper, but
nothing's been done yet.
regards, tom lane
Hi Tom,
thanks for the quick response. Bad news for me ;(
We develop ajax-driven web apps, which
Yes, I use the same approach, but I'm not aware of any such guideline
related to fillfactor with indexes. Anyway those guidelines need to be
written by someone, so you have a great opportunity ;-)
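For reference, fillfactor is set per index at creation time; a sketch with made-up names:
-- leave 30% free space in leaf pages to absorb future inserts/updates
CREATE INDEX my_table_col_idx ON my_table (some_col) WITH (fillfactor = 70);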
I did a quick test using your example. As in your test, increasing
values don't get any
What about the index sizes? How much space do they occupy? Analyze the
table and do this
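The command itself is cut off here; one way to check index sizes (my sketch, not necessarily what was meant) is:
SELECT c.relname, pg_size_pretty(pg_relation_size(c.oid)) AS size
FROM pg_class c
WHERE c.relkind = 'i'        -- indexes only
ORDER BY pg_relation_size(c.oid) DESC;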
Of course space is different. That's not the point. The point is: I'm
willing
to pay the price for another HD, if that helps with performance. But it
doesn't.
The minimal performance difference is
On 05/09/2011 04:39 PM, F T wrote:
Hi list
I use PostgreSQL 8.4.4 (with PostGIS 1.4).
I have a simple update query that takes hours to run.
The table is rather big (2 million records), but it takes more than 5
hours to run!!
The query is just :
*UPDATE grille SET inter = 0*
So any
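One thing worth trying here (my observation, not advice from the thread): if many rows already hold the target value, skipping them avoids creating pointless row versions and WAL:
-- only touch rows that actually change; IS DISTINCT FROM also handles NULLs
UPDATE grille SET inter = 0 WHERE inter IS DISTINCT FROM 0;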
On Tue, May 3, 2011 at 6:01 PM, Raghavendra
raghavendra@enterprisedb.com wrote:
On Tue, May 3, 2011 at 5:37 PM, Simon Riggs si...@2ndquadrant.com
wrote:
On Tue, May 3, 2011 at 11:54 AM, raghu ram raghuchenn...@gmail.com
wrote:
It may be a silly question, still out of curiosity I
On Fri, Apr 22, 2011 at 12:06 PM, Phoenix Kiula phoenix.ki...@gmail.com
wrote:
On Fri, Apr 22, 2011 at 12:51 AM, Tomas Vondra t...@fuzzy.cz wrote:
On 21.4.2011 07:16, Phoenix Kiula wrote:
Tomas,
I captured a crash log with strace on the PID of the index command, as you
suggested.
Here's
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data, and at some point it finds out it
needs to allocate
On Fri, Apr 22, 2011 at 8:20 PM, t...@fuzzy.cz wrote:
On Fri, Apr 22, 2011 at 7:07 PM, t...@fuzzy.cz wrote:
In the pg_dumpall backup process, I get this error. Does this help?
Well, not really - it's just another incarnation of the problem we've
already seen. PostgreSQL reads the data,
Hello,
I'm executing this query:
SELECT x, y, another_field FROM generate_series(1, 10) x,
generate_series(1, 10) y, my_table
Well, do you realize this is a cartesian product that gives
10 x 10 x 360,000 = 36,000,000
rows in the end. Not sure how wide the third table is (how many columns
Thanks Filip.
I know which table it is. It's my largest table with over 125 million
rows.
All the others are less than 100,000 rows. Most are in fact less than
25,000.
Now, which specific part of the table is corrupted -- if it is row
data, then can I dump specific parts of that table?
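If the corruption is in row data, one salvage sketch (my suggestion, with invented names; assumes an indexed id column) is to copy the table out in key ranges and binary-search for the range that errors:
SET enable_seqscan = off;  -- so only the heap pages for each range get read
COPY (SELECT * FROM bigtable WHERE id BETWEEN 1 AND 50000000)
    TO '/tmp/bigtable_part1.copy';
COPY (SELECT * FROM bigtable WHERE id BETWEEN 50000001 AND 100000000)
    TO '/tmp/bigtable_part2.copy';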
Query1
-- the first select returns 10 rows
SELECT a, b
FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
WHERE table1_id NOT IN (SELECT DISTINCT table1_id FROM table3)
EXCEPT
-- this select returns 5 rows
SELECT a, b
FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
WHERE
t...@fuzzy.cz writes:
Query1
-- the first select returns 10 rows
SELECT a, b
FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
WHERE table1_id NOT IN (SELECT DISTINCT table1_id FROM table3)
EXCEPT
-- this select returns 5 rows
SELECT a, b
FROM table1 LEFT JOIN table2 ON (table1_id =
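One classic gotcha to rule out here (my note, not part of the reply): if table3.table1_id contains NULLs, NOT IN returns no rows at all, and the two selects diverge in surprising ways. NOT EXISTS sidesteps that:
SELECT a, b
FROM table1 LEFT JOIN table2 ON (table1_id = table2_id)
WHERE NOT EXISTS (SELECT 1 FROM table3
                  WHERE table3.table1_id = table1.table1_id);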
Hello,
I have a huge PostgreSQL database. When I search it, a query retrieving up
to 669 records is allowed, but if my query fetches more than 669 records
then this error comes up:
ERROR: more than one row returned by a subquery used as an expression
** Error
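That error is raised when a subquery used as a scalar expression returns more than one row; the 669 threshold is just where the data happens to turn multi-row. A hedged illustration with made-up names:
-- fails as soon as the subquery yields a second row:
SELECT * FROM orders
WHERE customer_id = (SELECT id FROM customers WHERE city = 'Pune');
-- use IN when multiple matches are legitimate:
SELECT * FROM orders
WHERE customer_id IN (SELECT id FROM customers WHERE city = 'Pune');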
On 03/22/11 11:18 PM, Toby Corkindale wrote:
Hi,
I wondered if there were any software packages floating around to
manage servers using streaming replication with Pg?
i.e. to handle failing over and adjusting the config to promote a
slave; and performing the steps of syncing and restarting
Hi,
I am looking for some advice on where to troubleshoot after 1 drive in
a RAID 1 failed.
Thank you.
I am running v7.4.1, I am currently importing the data to another
physical server running 8.4 and will test with that once I can. In the
meantime here is relevant info:
Backups used
On 03/21/11 5:04 PM, Tomas Vondra wrote:
So just put there a tripod, a reasonable HD cam for $300 and you'll get
a decent video of the session.
you definitely want a sound patch from a lapel mic or the room PA, and
not be using on-camera sound.
Yes, that's definitely true. We're using a
Incredible! Setting enable_nestloop off temporarily for the run of this
script made it run in less than a minute (had been running in 10 or 11
minutes). I think you have found a solution for many of my slow running
scripts that use this same type of join. Thanks again.
Julie
Nice. Can
On Mon, Mar 21, 2011 at 11:32 AM, t...@fuzzy.cz wrote:
Incredible! Setting enable_nestloop off temporarily for the run of
this
script made it run in less than a minute (had been running in 10 or 11
minutes). I think you have found a solution for many of my slow
running
scripts that use
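For anyone finding this later, the switch is session-local and easy to scope (a sketch):
SET enable_nestloop = off;   -- discourage nested-loop joins, this session only
-- ... run the slow script here ...
RESET enable_nestloop;
It's a blunt instrument, though; when it helps this much, the row estimates are usually badly off, so ANALYZE or a higher statistics target is the cleaner fix.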
When I run the following query in Postgres 8.0, it runs in 61,509.372 ms
When I run it in Postgres 8.4, it runs in 397,857.472 ms
As Andrew already pointed out, we need to see EXPLAIN ANALYZE output from
both machines to see why this happens. Are you running both queries on the
same data, or is
Please help! I am using Windows 7 64-bit. I've also tried with/without
firewalls and opened all local ports... I'm using NOD32 at the moment, plus
Spybot running in the background; even with both of them uninstalled,
Postgres doesn't work either.
Hi.
I really don't know what a Holdem Manager is, but have you
Hello
2011/3/14 Jo jl.n...@uni-bonn.de:
I set the work_mem to 100MB and the shared buffers are 2 GB
The query plans are long and complex. I send the beginning of the
two plans. Hope this helps to understand the differences.
I assume the join strategy in 8.3 differs from the one in 8.4.
Hi Aleksey,
I've read your previous post, and although I'm not quite sure what the
root cause is, I have some questions and (maybe wild) guesses ...
1) Are those two machines (primary and DR) exactly the same? I mean CPU,
filesystem (including blocksize etc.)?
2) What about database encoding? I
As a result of my recent encounter with table bloat and other tuning
issues I've been running into, I'm looking for a good resource for
improving my tuning skills.
My sysadmin ran into the following book:
PostgreSQL 9.0 High Performance, by Gregory Smith, ISBN 184951030X
I've got a big problem:
Warning: database 'postgres' must be vacuumed within 100 transactions
HINT: To avoid database shutdown, execute a full database vacuum in
'postgres'
Warning: database 'postgres' must be vacuumed within 99 transactions
HINT: To avoid database shutdown, execute
I will admit that the Preventing Transaction ID Wraparound Failures
chapter in the manual (and the subsequent autovacuum daemon chapter) make
my head spin.
The basic idea is pretty simple, really. You have a limited number of
transaction IDs, so you arrange them into a circle, and you are
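You can watch how close each database is to the wraparound limit (a sketch; works on reasonably recent versions):
-- the larger the age, the closer datfrozenxid is to the ~2 billion limit
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;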
8.1.23
I've been typing "vacuum" in the backend since yesterday, stopping the
vacuuming every 30 minutes and then typing vacuum again, but I'm at number
890271/100 and it's too slow! I don't know what else to do to make it
faster.
Why are you stopping it every 30 minutes? And the fact that
On 2010-12-21 10:42, Massa, Harald Armin wrote:
b) creating an index requires reading the data-to-be-indexed. So, to have an
index pointing at the interesting rows for your query, the table has to be
read ... which would be the perfect time to already select the interesting
rows. And
I don't think the planner should do things like creating an index. But it
might hint at doing it in the logs.
There was a discussion around that sort of feature on -hackers not so
long ago. I don't remember what the conclusion was, but probably
that it just isn't worth wasting the planner's
Tomas Vondra t...@fuzzy.cz writes:
I've been thinking about this and I think it might be improved. If I
understand the logic correctly, it says 'use half of the histogram bin
size'. But the value
#define DEFAULT_RANGE_INEQ_SEL 0.005
says it's always 0.5%, which is not true if STATISTICS
Hi (sorry for the double posting; I thought I'd used the wrong email
address, but both have been posted anyway). As far as the db is concerned,
I'm just reading data and then writing it to a Lucene search index (which
is outside of the database), but my laptop is just a test machine. I want
to run
Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
I'm using the database with just one db connection to build a Lucene
search index from some of the data, and I'm trying to improve
performance. The key thing is that I'm only a single user but
manipulating large amounts of data, i.e. processing
I'm creating a data queue on top of postgres and I'm wondering if I've
made
an incorrect assumption about isolation or synchronization or some similar
issue.
Is there a particular reason why you are not using any of the proven
queuing packages (pgq for example)? Because all the issues seem
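A minimal hand-rolled sketch (hypothetical table, my example) that shows why the packaged solutions exist - plain FOR UPDATE is correct but serializes consumers on the same front rows:
-- hypothetical queue table
CREATE TABLE job_queue (
    id        serial PRIMARY KEY,
    payload   text,
    processed boolean NOT NULL DEFAULT false
);

BEGIN;
-- claim the oldest unprocessed job; concurrent consumers block here
SELECT id, payload FROM job_queue
WHERE NOT processed ORDER BY id LIMIT 1 FOR UPDATE;
-- after doing the work, mark the claimed id (here: 1) done
UPDATE job_queue SET processed = true WHERE id = 1;
COMMIT;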
Hello
you have to parse a sqlerrm variable
That's one way to do it. Another - more complex but more correct in many
cases - is using two separate blocks.
BEGIN
... do stuff involving constraint A
EXCEPTION
WHEN unique_violation THEN ...
END;
BEGIN
... do stuff involving constraint B
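Fleshed out into something runnable (table and constraint names invented for the example):
CREATE OR REPLACE FUNCTION insert_both() RETURNS void AS $$
BEGIN
    BEGIN
        INSERT INTO t_a (id) VALUES (1);   -- may violate t_a's unique constraint
    EXCEPTION
        WHEN unique_violation THEN
            RAISE NOTICE 'constraint on t_a violated';
    END;
    BEGIN
        INSERT INTO t_b (id) VALUES (1);   -- may violate t_b's unique constraint
    EXCEPTION
        WHEN unique_violation THEN
            RAISE NOTICE 'constraint on t_b violated';
    END;
END;
$$ LANGUAGE plpgsql;
With separate blocks you know which constraint fired without string-matching SQLERRM, and the first statement's work survives a failure in the second.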
You can find out exactly what you're waiting for by correlating this to
the
pg_locks table.
Grab the 'procpid' of your waiting query and run:
select * from pg_locks where pid=thepid and granted='f';
Notice, in the pg_locks table, the logic works slightly differently... if
you're
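A simplified way to line the waiter up with the holder (it matches same-relation locks only, so treat it as a sketch):
SELECT w.pid AS waiting_pid, h.pid AS holding_pid, h.mode AS held_mode
FROM pg_locks w
JOIN pg_locks h ON h.relation = w.relation AND h.granted
WHERE NOT w.granted
  AND w.pid <> h.pid;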
On Thu, Nov 11, 2010 at 3:30 AM, tuanhoanganh hatua...@gmail.com wrote:
My Computer is running POS with Postgres 8.9.11 database
Not sure which version is that. There's nothing like 8.9.11 ...
Ram : 16GB
OS : Windows 2008 R2
CPU XEON 2G
User : 50-60 user (connect ~ 200 connects, I
Hi all -
I am seeing a lot of these records in the log file and am not able to
find out why I get them. Is there a way to find out more info about
this? Thanks for your help
LOG: unexpected EOF on client connection
LOG: unexpected EOF on client connection
This means the
Hi everyone,
I've run into a strange problem with system catalogs - we're collecting
database stats periodically (every 10 minutes), and from time to time we
get the following error:
--
ERROR: could not open relation with OID 154873708
Well, you can see usage statistics for tables, indexes, functions and
sequences ... but AFAIK it's not possible to do that for columns.
See this
http://www.postgresql.org/docs/8.4/static/runtime-config-statistics.html
and then use pg_stat_ and pg_statio_ catalogs
I'd like to look at it from the object level and see how much I/O is being
done on specific table or index and then check which sessions are
responsible for that.
also, which catalog table would you recommend if I want to
see I/O activity on an object regardless of the
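Per-object I/O (cumulative, not per-session) is exactly what the pg_statio_* views give you; for example:
-- tables doing the most actual block reads, i.e. missing the buffer cache
SELECT relname, heap_blks_read, heap_blks_hit, idx_blks_read, idx_blks_hit
FROM pg_statio_user_tables
ORDER BY heap_blks_read + idx_blks_read DESC
LIMIT 10;
Attributing that I/O to individual sessions isn't possible from these views, though.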
I've run:
delete from catalog_items where ItemID in (select id from
import.Articoli_delete);
id and ItemID have an index.
catalog_items is ~1M rows
Articoli_delete is less than 2K rows.
This query has been running for roughly 50min.
Right now it is the only query running.
PostgreSQL 8.3.4
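Two things worth checking (general advice, not from the thread): the plan for the IN form, and whether every table referencing catalog_items has an index on its foreign-key column - an unindexed FK makes each deleted row trigger a scan of the referencing table. A join-style rewrite is also easy to test:
EXPLAIN
DELETE FROM catalog_items
USING import.Articoli_delete d
WHERE catalog_items.ItemID = d.id;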
Hi,
Is it possible to configure postgres from SQL?
I am interested in turning off fsync for a set of queries (that take
ages to run) and then turn fsync back on again afterwards.
There are things that can be changed at runtime using SQL - in that case
you may just type SET enable_seqscan =
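For example (note that fsync itself is a server-wide setting - it can only be changed in postgresql.conf and reloaded, never per-session with SET):
SET work_mem = '64MB';     -- session-local, reverts on disconnect
SHOW work_mem;
RESET work_mem;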
On Thu, Jun 24, 2010 at 17:14, Alban Hertroys
dal...@solfertje.student.utwente.nl wrote:
You didn't provide explain analyse results for those queries, so I'm
just guessing here, but I think you should add indices to email_msg_id
and email_sender_text to speed up those queries.
Alban
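For completeness, that suggestion looks like this (the table name is a guess):
CREATE INDEX emails_msg_id_idx ON emails (email_msg_id);
CREATE INDEX emails_sender_idx ON emails (email_sender_text);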
Well, the reason why deadlocks happen is usually uncoordinated access to the
same resource - in this case the resource is a database row. This has
nothing to do with the complexity of the queries, but with the order of the
updates.
According to the log, process 8253 waits for 8230, and 8230 waits
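The standard cure is to make every transaction touch rows in the same order, e.g. by locking them up front (a sketch with an invented table):
BEGIN;
-- lock both rows in a fixed (id) order before updating either
SELECT 1 FROM accounts WHERE id IN (1, 2) ORDER BY id FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;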
raghavendra t wrote:
Hi All,
Could you please guide me to the dynamic catalog views that report
buffer and cache information via the system catalogs?
you mean, stuff like
http://www.postgresql.org/docs/current/static/catalog-pg-statistic.html ?
afaik, data about what's
INFO: repcopy: scanned 3000 of 4652 pages, containing 128964 live rows
and 0 dead rows; 3000 rows in sample, 199980 estimated total rows
VACUUM
controlsmartdb=# select distinct report_status from repcopy ;
According to the vacuum output, there are about 200,000 rows in the
repcopy table,
On Tue, 13 Apr 2010 10:26:04 -0400, Brent Friedman
br...@brentfriedman.net
wrote:
I am starting a project next week that looks like it will involve some
massive sql rewrites to gain needed performance, and I am looking at
implementing as many functions as possible. I haven't worried that
Hi!
I have written this function in PL/pgSQL,
CREATE OR REPLACE FUNCTION MakeSimpleReservation(integer, integer,
integer, text, text) RETURNS boolean ...
In my PHP script, I have this code:
$start_ts = '2010-04-12 11:00:00';
$end_ts = '2010-04-12 14:00:00';
$update =
Following a great deal of discussion, I'm pleased to announce that the
PostgreSQL Core team has decided that the major theme for the 9.1
release, due in 2011, will be 'NoSQL'.
Please, provide me your address so I can forward you the health care
bills I had to pay due to the heart attack I
Hi All,
When we start the Postgres server, the writer process, WAL process,
postmaster, autovacuum (if autovacuum is on) and stats collector come
into the picture as mandatory processes. My question is: are there any
processes apart from these? What are the mandatory processes that come along
Hi Tomas,
Thank you for the reply.
Well, there is a bunch of processes started at the beginning, and then
there is one backend process for each connection (see postgresql.conf
for how many connections are allowed in your case).
I do agree with you that there would be a bunch of processes.
On 16/03/2010 15:25, Richard Huxton wrote:
OK - we have a merge join in the first case where it joins the
pre-sorted output of both tables.
In the second case it queries the index once for each row in cellules.
Now look at the costs. The first one is around 704,000 and the second
one
Hi Pavel,
can you provide some link or other directions to the proposal? I guess it
was posted to this list or somewhere else?
Tomas
Hello
one year ago there was a proposal for index support for LIKE '%some%'. The
problem was the extreme size of the index.
I think you can write your own C
I guess Talend (Open Studio) might be the right choice. But I do not have
direct experience with the training.
see www.talend.com
Hi all,
We are in the process of getting an ETL program. We need it to perform
some
basic extract, transform and load jobs.
But we want to get an open source
In response to Philippe Lang philippe.l...@attiksystem.ch:
I'm using PostgreSQL 8.3.6 under FreeBSD 7.1.
After a fresh restore of a customer dump (running version 8.2.7 at the
moment), a rather big query executes in about 30 seconds. As soon as I
run ANALYZE, it is instantly 4-5 times
Try running EXPLAIN ANALYZE - that gives much more information. For
example it may show differences in number of rows between the two
machines, that the statistics are not up to date, etc.
regards
Tomas
Hi,
I have system here with Debian/Testing and the latest 8.2 and 8.3
database installed.
A lot of important information is missing in your post, for example:
a) Was the table analyzed recently? Is the table vacuumed regularly?
b) How large are the tables? (Number of tuples and pages. SELECT
reltuples, relpages FROM pg_class WHERE relname LIKE 'event%')
c) What values are used for the
8.4 seconds is a very long time to spend looking up a single record.
Is this table bloated? What does
vacuum verbose books;
say about it? Look for a line like this:
There were 243 unused item pointers
Thanks but this table books has autovac on, and it's manually
vacuumed every hour!
Thanks Tomas.
The table may still be bloated - the default autovacuum parameters may not
be aggressive enough for heavily modified tables.
My autovacuum settings:
autovacuum = on
autovacuum_vacuum_cost_delay = 20
vacuum_cost_delay= 20
autovacuum_naptime
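If the global settings can't be made more aggressive, 8.4 and later let you tighten them per table (a sketch against the books table from earlier in the thread):
-- vacuum after ~5% of rows change instead of the default 20%
ALTER TABLE books SET (autovacuum_vacuum_scale_factor = 0.05);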
On Fri, Nov 14, 2008 at 12:57:32PM +0800, Phoenix Kiula wrote:
1. If I have a unique index on (user_id, url_encrypted), then will
queries asking only for user_id also use this index? Or should I
simply have separate indexes on user_id and url_encrypted? I vaguely
recall reading somewhere that
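The answer (general btree behaviour, since the reply is cut off here): yes, a query on user_id alone can use the leading column of the composite index, so a separate user_id index is usually redundant. Easy to verify:
-- should show a scan on the (user_id, url_encrypted) index (table name invented)
EXPLAIN SELECT * FROM links WHERE user_id = 12345;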
I've worked with pgsql for a while, but never needed to post from a
database trigger to another db. Is this possible? And if so, can someone
offer a pointer to the docs on how to refer to other db's in my script,
etc?
What do you mean by 'other db'? Does that mean other PostgreSQL database,
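If 'other db' means another PostgreSQL database, contrib's dblink can be called inside a trigger function; a sketch with invented table, column and connection details:
CREATE OR REPLACE FUNCTION mirror_row() RETURNS trigger AS $$
BEGIN
    -- push the new row to the remote database (connection string is an example)
    PERFORM dblink_exec('host=otherhost dbname=otherdb user=app',
                        'INSERT INTO audit_log (note) VALUES ('
                        || quote_literal(NEW.note) || ')');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mirror_after_insert AFTER INSERT ON local_table
    FOR EACH ROW EXECUTE PROCEDURE mirror_row();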
Andrus [EMAIL PROTECTED] writes:
explain analyze SELECT sum(xxx)
FROM dok JOIN rid USING (dokumnr)
WHERE dok.kuupaev BETWEEN '2008-04-01' AND '2008-04-30'
Hash Join (cost=29584.84..308259.32 rows=142686 width=0) (actual
time=68510.748..96932.174 rows=117883 loops=1)
Hash Cond:
I have a table with more than 1000 records and no index on it; while
executing a query against it, it occupies the processor..
1000 rows is not much - I guess the index is not necessary at all, as the
traditional sequential scan is faster than an index scan (due to random
access vs.
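Easy to confirm from psql - on a table this small the planner will usually pick a seq scan even when an index exists (names invented):
EXPLAIN ANALYZE SELECT * FROM small_table WHERE some_col = 42;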
Hi, you have forgotten to note some very important information - what load do
you expect and what is the size of the database? Is this an upgrade (is the
database already running somewhere - that would give you some performance
requirements) or is it a completely new database? How many users /
Suddenly stops working, or continues doing exactly what it did before?
I'm wondering if you are relying on a cached plan that doesn't include
the new rule.
regards, tom lane
If there's only the insert_850 RULE then everything works as expected - the
insert prints