Hi,
the explanation of the --inserts option of pg_dump states that:
The --column-inserts option is safe against column order changes, though even
slower.
The way I read this is that
INSERT INTO table (column, ...) VALUES ...
is slower than
INSERT INTO table VALUES ...
Is that really
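For reference, pg_dump's three row-output styles side by side (flags as documented for pg_dump; the database name is hypothetical):

```shell
pg_dump mydb > dump.sql                   # default: COPY, fastest to restore
pg_dump --inserts mydb > dump.sql         # INSERT INTO table VALUES (...)
pg_dump --column-inserts mydb > dump.sql  # INSERT INTO table (col, ...) VALUES (...)
```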
On 07/16/10 02:23, Tom Lane wrote:
Howard Rogers h...@diznix.com writes:
I have 10 million rows in a table, with full text index created on one
of the columns. I submit this query:
ims=# select count(*) from search_rm
ims-# where to_tsvector('english', textsearch)
ims-# @@
If I'd like to learn how to manage resources in postgres and grant
different users different time slot/memory/CPU?
eg. I'd like to grant to user A to execute queries that last less
than 1min or that occupy no more than X Mb... etc...
--
Ivan Sergio Borgonovo
http://www.webthatworks.it
--
In response to Ivan Sergio Borgonovo :
If I'd like to learn how to manage resources in postgres and grant
different users different time slot/memory/CPU?
eg. I'd like to grant to user A to execute queries that last less
than 1min or that occupy no more than X Mb... etc...
Isn't (real)
On 16/07/10 19:21, Ivan Sergio Borgonovo wrote:
If I'd like to learn how to manage resources in postgres and grant
different users different time slot/memory/CPU?
eg. I'd like to grant to user A to execute queries that last less
than 1min or that occupy no more than X Mb... etc...
On Fri, 16 Jul 2010 19:43:01 +0800
Craig Ringer cr...@postnewspapers.com.au wrote:
On 16/07/10 19:21, Ivan Sergio Borgonovo wrote:
If I'd like to learn how to manage resources in postgres and
grant different users different time slot/memory/CPU?
eg. I'd like to grant to user A to
In response to Ivan Sergio Borgonovo :
If you need strong user resource limits, user storage limits, etc
PostgreSQL might not be your best option. There are some things
you can do, but there's not much.
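One knob that does exist is a per-role statement_timeout (and a per-role work_mem, though that is a per-operation hint, not a hard per-user quota). A minimal sketch, assuming a role named user_a:

```sql
-- Cancel any single statement by this role that runs longer than one minute.
ALTER ROLE user_a SET statement_timeout = '1min';

-- Limit memory per sort/hash operation; note this is not a total per-user cap.
ALTER ROLE user_a SET work_mem = '16MB';
```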
What about an external process that monitors backends and kills them
gracefully if they
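Such a watchdog could be as simple as a cron job running a query like the one below (role name hypothetical; on servers before 9.2 the pid column is named procpid and the query text column current_query):

```sql
-- Ask the server to cancel the current query of any backend owned by
-- user_a that has been running for more than one minute.
SELECT pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE usename = 'user_a'
  AND current_query <> '<IDLE>'
  AND now() - query_start > interval '1 minute';
```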
Hi all.
I'd like to add an item to the PG wishlist (provided that one exists).
In PL/PgSQL function bodies I'm using very often a pattern like this:
EXECUTE SPRINTF( '...',... );
Where SPRINTF comes from here:
http://wiki.postgresql.org/wiki/Sprintf
It's far more powerful, easy, and effective
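For comparison, the same kind of dynamic SQL can be written with only built-ins, using quote_ident() to protect dynamic identifiers (the function and its argument here are made up for illustration):

```sql
CREATE OR REPLACE FUNCTION count_rows(tbl text) RETURNS bigint AS $$
DECLARE
  n bigint;
BEGIN
  -- quote_ident() safely quotes the table name for use as an identifier.
  EXECUTE 'SELECT count(*) FROM ' || quote_ident(tbl) INTO n;
  RETURN n;
END;
$$ LANGUAGE plpgsql;
```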
Hi,
Attached is a query and its corresponding plan, where sorting of the CTE
acts seems to be the bottleneck. It is a real execution plan captured
with the auto_explain module.
The query is recursive. In each iteration the CTE acts is sorted again,
which is obviously quite expensive for about 24000
On Fri, Jul 16, 2010 at 10:23 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Howard Rogers h...@diznix.com writes:
I have 10 million rows in a table, with full text index created on one
of the columns. I submit this query:
ims=# select count(*) from search_rm
ims-# where to_tsvector('english',
Does it run any differently if you split out the tag?
select count(*) from search_rm where to_tsvector('english', textsearch) @@
to_tsquery('english', 'woan batt') and to_tsvector('english',
textsearch) @@ 'ftx1'::tsquery
Steve
On 16 July 2010 05:22, Howard Rogers h...@diznix.com wrote:
On 16/07/10 05:22, Howard Rogers wrote:
OK, Tom: I did actually account for the number of rows difference
before I posted, though I accept I didn't show you that. So here goes:
Tom's good, but his mind-reading powers aren't what they used to be :-)
ims=# select count(*)
ims-# from search_rm
Thomas Kellerer spam_ea...@gmx.net writes:
the explanation of the --inserts option of pg_dump states that
The --column-inserts option is safe against column order changes, though
even slower.
The way I read this is that
INSERT INTO table (column, ...) VALUES ...
is slower than
Wappler, Robert rwapp...@ophardt.com writes:
Attached is a query and its corresponding plan, where sorting of the CTE
acts seems to be the bottleneck. It is a real execution plan captured
with the auto_explain module.
There isn't a lot of intelligence about CTEs at the moment; in
particular I
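Tom's reply is cut off here, but the general point at the time was that a WITH query is always materialized as written: conditions outside the CTE are not pushed into it. A hypothetical illustration (table and column names made up):

```sql
-- The CTE is computed in full (including the sort) before the outer
-- WHERE clause is applied:
WITH acts AS (SELECT * FROM some_table ORDER BY some_col)
SELECT * FROM acts WHERE id = 42;

-- Written inline, the planner can apply the filter first:
SELECT * FROM some_table WHERE id = 42 ORDER BY some_col;
```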
Howard Rogers h...@diznix.com writes:
OK, Tom: I did actually account for the number of rows difference
before I posted, though I accept I didn't show you that. So here goes:
...
Both queries return zero rows. One takes an awful lot longer than the
other. The only difference between them is
2010/7/16 Vincenzo Romano vincenzo.rom...@notorand.it:
Hi all.
I'd like to add an item to the PG wishlist (provided that one exists).
In PL/PgSQL function bodies I'm using very often a pattern like this:
EXECUTE SPRINTF( '...',... );
Where SPRINFT comes from here:
Hi Julian,
Sorry for the slow response. I think I will need to chop up the query somehow,
but have not yet found an efficient way to do that. row_id is the
primary key in both tables, so that might work.
Here is the explain:
urls_jrubin_merged=# EXPLAIN UPDATE table1 SET row_id = table2.row_id
Hi Julian,
Using this approach to break up the queries, I am able to update about 1500
rows per minute, which would take over 100 days to complete, so I need to
figure out why this is slow, and whether there is any faster way.
UPDATE table1 SET new_column = table1.new_column FROM table2 WHERE
table1.row_id
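One common way to make such an update tractable is to batch it by primary-key range, so each transaction stays small (range bounds hypothetical; this assumes the intent is to copy table2's value):

```sql
-- Run repeatedly with successive row_id ranges; vacuuming between batches
-- keeps bloat from the dead row versions under control.
UPDATE table1
SET new_column = table2.new_column
FROM table2
WHERE table1.row_id = table2.row_id
  AND table1.row_id BETWEEN 1 AND 100000;
```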
Tom Lane wrote on 16.07.2010 18:40:
Thomas Kellerer spam_ea...@gmx.net writes:
the explanation of the --inserts option of pg_dump states that
The --column-inserts option is safe against column order changes, though even
slower.
The way I read this is that
INSERT INTO table (column,
On 17/07/10 04:26, Thomas Kellerer wrote:
Hmm.
For years I have been advocating always using fully qualified column
lists in INSERTs (for clarity and stability).
And now I learn it's slower when I do so :(
If you're not doing hundreds of thousands of identical ones at a time,
it's still very
On Sat, Jul 17, 2010 at 3:14 AM, Tom Lane t...@sss.pgh.pa.us wrote:
Howard Rogers h...@diznix.com writes:
OK, Tom: I did actually account for the number of rows difference
before I posted, though I accept I didn't show you that. So here goes:
...
Both queries return zero rows. One takes an
-----Original Message-----
From: Magnus Hagander [mailto:mag...@hagander.net]
Sent: Tuesday, July 13, 2010 3:26 PM
To: Thom Brown
Cc: Duncavage, Daniel P. (JSC-OD211); pgsql-general@postgresql.org
Subject: Re: [GENERAL] NASA needs Postgres - Nagios help
On Tue, Jul 13, 2010 at 20:10, Thom Brown