On Wed, Mar 22, 2006 at 02:37:28PM -0500, Kris Jurka wrote:
On Wed, 22 Mar 2006, Jim C. Nasby wrote:
Ok, I saw disk activity on the base directory and assumed it was pg_xlog
stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore
default_tablespace and create the new tables in
On Wed, Mar 22, 2006 at 02:20:39PM +0800, Qingqing Zhou wrote:
Simon Riggs [EMAIL PROTECTED] wrote
On Tue, 2006-03-21 at 06:22 -0600, Jim C. Nasby wrote:
Currently, it appears that SELECT * INTO new_table FROM old_table logs
each page as it's written to WAL. Is this actually needed?
On Thu, Mar 23, 2006 at 09:22:34PM -0500, Christopher Browne wrote:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Scott Marlowe)
wrote:
On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:
Has someone been working on the problem of splitting a query into pieces
and
On Thu, Mar 23, 2006 at 10:14:24AM +0100, Edoardo Serra wrote:
Now, for the interesting test. Run the import on both machines, with
the begin; commit; pairs around it. Halfway through the import, pull
the power cord, and see which one comes back up. Don't do this to
servers with data you
On Thu, Mar 23, 2006 at 01:09:49PM +0200, Theo Kramer wrote:
ii If no to i, is it feasible to extend PostgreSQL to allow traversing
an index in column descending and column ascending order - assuming
an order by on more than one column with column order not
in the same direction and
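(Editor's note for later readers: PostgreSQL 8.3 and later answer point ii directly, with per-column sort order in CREATE INDEX. Table and column names below are illustrative, not from the thread.)

```sql
-- Per-column ASC/DESC on an index (PostgreSQL 8.3+); names illustrative.
-- A query ordering the columns in opposite directions can then traverse
-- this index instead of performing an explicit sort:
CREATE INDEX t_mixed_idx ON t (col1 DESC, col2 ASC);

SELECT * FROM t ORDER BY col1 DESC, col2 ASC;
```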
On Thu, Mar 23, 2006 at 01:12:08PM +0100, Bendik Rognlien Johansen wrote:
Hello, I have a big problem with one of my databases. When I run my
query, after a few minutes, the postmaster shows 99% mem in top, and
the server becomes totally unresponsive.
You've got a bunch of sorts going on;
Jim C. Nasby wrote:
Why would the content of the old_table be unreliable? If we've replayed
logs up to the point of the CTAS then any data that would be visible to
the CTAS should be fine, no?
Though, the way Tom put it in one of his replies it sounds like WAL
doesn't do any kind of
Hi,
I have a select like
SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total
FROM table
WHERE
(array[20]+array[21]+ ... +array[50]+array[51]) > 5000
AND array[20] > 0
AND array[21] > 0
...
AND array[50] > 0
AND array[51] > 0
Any ideas to make this query faster?
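(Editor's note: one approach worth trying is an expression index on the repeated sum, so the predicate can be resolved without recomputing the sum for every row. A sketch only — the column is renamed arr here since "array" is a keyword, the index name is made up, and the middle terms of the sum are elided in the original, so a shortened sum stands in for it.)

```sql
-- Expression index on the sum (names illustrative). The query must
-- repeat the exact same expression for the planner to match the index.
CREATE INDEX t_arr_sum_idx ON t (((arr[20] + arr[21] + arr[50] + arr[51])));

SELECT (arr[20] + arr[21] + arr[50] + arr[51]) AS total
  FROM t
 WHERE (arr[20] + arr[21] + arr[50] + arr[51]) > 5000;
```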
On Fri, Mar 24, 2006 at 01:41:50PM +0100, Ruben Rubio Rey wrote:
Hi,
I have a select like
SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total
FROM table
WHERE
(array[20]+array[21]+ ... +array[50]+array[51]) > 5000
http://www.varlena.com/GeneralBits/109.php might provide some
On Fri, Mar 24, 2006 at 01:49:17PM +0100, Svenne Krap wrote:
explain select dataset_id, entity, sum(amount) from entrydata_current
where flow_direction in (select * from outflow_direction(dataset_id))
and dataset_id in (select * from get_dataset_ids(122)) group by
dataset_id, entity;
snip
Jim C. Nasby wrote:
On Fri, Mar 24, 2006 at 01:41:50PM +0100, Ruben Rubio Rey wrote:
Hi,
I have a select like
SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total
FROM table
WHERE
(array[20]+array[21]+ ... +array[50]+array[51]) > 5000
On Fri, Mar 24, 2006 at 02:01:29PM +0100, Ruben Rubio Rey wrote:
http://www.varlena.com/GeneralBits/109.php might provide some useful
insights. I also recall seeing something about sum operators for arrays,
but I can't recall where.
I'll check it out, seems to be very useful.
Is faster
On Fri, Mar 24, 2006 at 07:06:19AM -0600, Jim C. Nasby wrote:
On Fri, Mar 24, 2006 at 02:01:29PM +0100, Ruben Rubio Rey wrote:
Mmm ... I have read in the Postgres documentation that null values in
arrays are not supported ...
Damn, you're right. Another reason I tend to stay away from them...
Jim C. Nasby wrote:
On Fri, Mar 24, 2006 at 08:39:02AM -0400, Alvaro Herrera wrote:
Jim C. Nasby wrote:
Why would the content of the old_table be unreliable? If we've replayed
logs up to the point of the CTAS then any data that would be visible to
the CTAS should be fine, no?
On Fri, Mar 24, 2006 at 09:47:20AM -0400, Alvaro Herrera wrote:
Jim C. Nasby wrote:
On Fri, Mar 24, 2006 at 08:39:02AM -0400, Alvaro Herrera wrote:
Jim C. Nasby wrote:
Why would the content of the old_table be unreliable? If we've replayed
logs up to the point of the CTAS then
From: [EMAIL PROTECTED]
[mailto:pgsql-performance-
[EMAIL PROTECTED] On Behalf Of Jim C. Nasby
Subject: Re: [PERFORM] Problem with query, server totally unresponsive
On Thu, Mar 23, 2006 at 01:12:08PM +0100, Bendik Rognlien Johansen
wrote:
Hello, I have a big problem with one of my
On Fri, Mar 24, 2006 at 08:46:54AM -0600, Dave Dutcher wrote:
The machine has 2x Intel dual core processors (3GHz) and 2 Gigs of
ram.
Unless I missed some big news recently, no such CPU exists.
Hyperthreading is absolutely not the same as dual core, and many people
have found that
Christopher,
On 3/23/06 6:22 PM, Christopher Browne [EMAIL PROTECTED] wrote:
Question: Does the Bizgress/MPP use threading for this concurrency?
Or forking?
If it does so via forking, that's more portable, and less dependent on
specific complexities of threading implementations (which
With 8.1.3, I get an error when trying to do this on a Text[] column:
.. WHERE ANY(array) LIKE 'xx%'
Indeed, I get rejected even with:
.. WHERE ANY(array) = 'xx'
In both cases, the error is: ERROR: syntax error at or near "any" ...
It would only work as documented in the manual (8.10.5):
Ruben Rubio Rey [EMAIL PROTECTED] writes:
SELECT (array[20]+array[21]+ ... +array[50]+array[51]) as total
FROM table
WHERE
(array[20]+array[21]+ ... +array[50]+array[51]) > 5000
AND array[20] > 0
AND array[21] > 0
...
AND array[50] > 0
AND array[51] > 0
Any ideas to make this query faster?
What's
Jim C. Nasby [EMAIL PROTECTED] writes:
On Fri, Mar 24, 2006 at 01:49:17PM +0100, Svenne Krap wrote:
explain select dataset_id, entity, sum(amount) from entrydata_current
where flow_direction in (select * from outflow_direction(dataset_id))
and dataset_id in (select * from
K C Lau [EMAIL PROTECTED] writes:
Indeed, I get rejected even with:
.. WHERE ANY(array) = 'xx'
It would only work as documented in the manual (8.10.5):
SELECT * FROM sal_emp WHERE 1 = ANY (pay_by_quarter);
That's not changing any time soon; the SQL spec defines only the second
syntax for
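(Editor's note: for reference, the documented form puts the scalar on the left of = ANY; there is no ANY(...) LIKE variant. One workaround for a pattern match over the elements — a sketch only, with an illustrative table t and text[] column arr — is to walk the subscripts:)

```sql
-- Documented form: scalar on the left of = ANY.
SELECT * FROM sal_emp WHERE 10000 = ANY (pay_by_quarter);

-- Pattern match over array elements, without ANY(...) LIKE
-- (illustrative names; works on 8.0+ via generate_series):
SELECT *
  FROM t
 WHERE EXISTS (
         SELECT 1
           FROM generate_series(array_lower(arr, 1),
                                array_upper(arr, 1)) AS i
          WHERE arr[i] LIKE 'xx%');
```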
On Fri, 2006-03-24 at 04:16, Jim C. Nasby wrote:
On Thu, Mar 23, 2006 at 10:14:24AM +0100, Edoardo Serra wrote:
Now, for the interesting test. Run the import on both machines, with
the begin; commit; pairs around it. Halfway through the import, pull
the power cord, and see which one comes
On Fri, 24 Mar 2006, Jim C. Nasby wrote:
On Wed, Mar 22, 2006 at 02:37:28PM -0500, Kris Jurka wrote:
On Wed, 22 Mar 2006, Jim C. Nasby wrote:
Ok, I saw disk activity on the base directory and assumed it was pg_xlog
stuff. Turns out that both SELECT INTO and CREATE TABLE AS ignore
[EMAIL PROTECTED] (Jim C. Nasby) writes:
On Thu, Mar 23, 2006 at 09:22:34PM -0500, Christopher Browne wrote:
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Scott Marlowe)
wrote:
On Thu, 2006-03-23 at 10:43, Joshua D. Drake wrote:
Has someone been working on the problem of
[EMAIL PROTECTED] (Luke Lonergan) writes:
Christopher,
On 3/23/06 6:22 PM, Christopher Browne [EMAIL PROTECTED] wrote:
Question: Does the Bizgress/MPP use threading for this concurrency?
Or forking?
If it does so via forking, that's more portable, and less dependent on
specific
Tom Lane wrote:
where (flow_direction, dataset_id) in (select * from new_func(122))
Is this form of multi-column IN mentioned anywhere in the docs? I can't
find it.
Svenne
On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
A naive read on this is that you might start with one backend process,
which then spawns 16 more. Each of those backends is scanning through
one of those 16 files; they then throw relevant tuples into shared
memory to be
Svenne Krap [EMAIL PROTECTED] writes:
Tom Lane wrote:
where (flow_direction, dataset_id) in (select * from new_func(122))
Is this form of multi-column IN mentioned anywhere in the docs? I can't
find it.
Sure, look under Subquery Expressions. 8.0 and later refer to it as a
row_constructor,
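(Editor's note: spelled out with the names from this thread, Tom's row-constructor suggestion looks like the following; new_func(122) is assumed to return (flow_direction, dataset_id) pairs.)

```sql
-- Multi-column IN via a row constructor, replacing the two separate
-- IN (select * from ...) predicates with a single paired lookup:
SELECT dataset_id, entity, sum(amount)
  FROM entrydata_current
 WHERE (flow_direction, dataset_id) IN (SELECT * FROM new_func(122))
 GROUP BY dataset_id, entity;
```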
On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
Correct me if I'm wrong, but there's no way to (reasonably) accomplish
that without having some dedicated extra processes laying around that
you can use to execute the queries, no? In other words, the cost of a
fork() during
[EMAIL PROTECTED] (Michael Stone) writes:
On Fri, Mar 24, 2006 at 01:21:23PM -0500, Chris Browne wrote:
A naive read on this is that you might start with one backend process,
which then spawns 16 more. Each of those backends is scanning through
one of those 16 files; they then throw relevant
Whoa!
bookmark_delta contains very few rows but is inserted/deleted very
often... the effect is spectacular!
I guess I'll have to vacuum analyze this table every minute...
annonces=# EXPLAIN ANALYZE SELECT id, priority FROM annonces WHERE id IN
(SELECT annonce_id FROM
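(Editor's note: for a small table with heavy insert/delete churn like bookmark_delta, the fix is to vacuum often enough that dead tuples never accumulate; a minimal sketch, with the scheduling left to cron or autovacuum tuning:)

```sql
-- Reclaims dead tuples and refreshes planner statistics; cheap on a
-- table that stays small at any given moment, however fast it churns.
VACUUM ANALYZE bookmark_delta;
```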