Markus Schiltknecht [EMAIL PROTECTED] wrote:
Hi,
Gregory Stark wrote:
Only if your application is single-threaded. By single-threaded I don't refer
to operating system threads but to the architecture. If you're processing a
large batch file handling records one by one and waiting for
On 8/26/07, Kevin Kempter [EMAIL PROTECTED] wrote:
On Saturday 25 August 2007 21:10:19 Ron Johnson wrote:
On 08/25/07 21:51, Kevin Kempter wrote:
Hi List;
I have a very large table (52 million rows) - I'm creating a copy of it to
rid it of 35G worth of dead space, then I'll do a
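The copy-and-swap approach described above can be sketched as follows. This is a
minimal sketch, assuming a placeholder table name "big_table"; note that CREATE
TABLE ... AS carries over no indexes or constraints, so those must be recreated
on the new table before the swap is complete:

```sql
-- Sketch only: copy the live rows into a fresh table, then swap names.
-- "big_table" is a placeholder; recreate indexes/constraints by hand.
BEGIN;
CREATE TABLE big_table_new AS SELECT * FROM big_table;
ALTER TABLE big_table RENAME TO big_table_old;
ALTER TABLE big_table_new RENAME TO big_table;
COMMIT;
-- After verifying the new table, reclaim the 35G of dead space:
DROP TABLE big_table_old;
```

VACUUM FULL or CLUSTER would compact the table in place instead, at the cost of
holding an exclusive lock for the duration.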
On 8/24/07, Tom Lane [EMAIL PROTECTED] wrote:
Trevor Talbot [EMAIL PROTECTED] writes:
On 8/23/07, Magnus Hagander [EMAIL PROTECTED] wrote:
Not that wild a guess, really :-) I'd say it's a very good possibility -
but I have no idea why it'd do that, since all backends load the same
DLLs at
On Saturday 25 August 2007 23:49:39 Ron Johnson wrote:
On 08/25/07 22:21, Kevin Kempter wrote:
On Saturday 25 August 2007 21:10:19 Ron Johnson wrote:
On 08/25/07 21:51, Kevin Kempter wrote:
Hi List;
I have a very large table (52 million rows) - I'm creating a copy of it
to rid it of
Does the table have a unique index/primary key?
The view shows fields from two tables. One of the primary keys of one of
the tables is shown by the view.
Thanks,
JLoz
---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
I have a W2K server and a relatively small database containing all required
indexes, and I need to sum only a few records.
My query takes 26 seconds to run.
How do I fix this?
Andrus.
explain analyze select sum(taitmata) as ukogus
from rid join dok using (dokumnr)
where toode='NE TR'
and
Use
www.fyireporting.com
Open source; it uses the excellent PostgreSQL npgsql drivers and the
standard RDL format.
Andrus.
Geoffrey [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
We are looking for a reporting tool that will enable users to generate
their own reports. Something like
Trevor Talbot wrote:
The environment is consistent then. Whatever is going on, when
postgres first starts things are normal, something just changes later
and the change is temporary. As vague guides, I would look at some
kind of global resource usage/tracking, and scheduled tasks. Do you
Hi,
Bill Moran wrote:
I'm curious as to how Postgres-R would handle a situation where the
constant throughput exceeded the processing speed of one of the nodes.
Well, what do you expect to happen? This case is easily detectable, but
I can only see two possible solutions: either stop the node
Andrus [EMAIL PROTECTED] writes:
My query takes 26 seconds to run.
The time seems entirely spent in fetching rows from table rid.
Perhaps that table is bloated by lack of vacuuming --- can you
show the output from vacuum verbose rid?
regards, tom lane
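The check Tom asks for can be run directly in psql, as the table owner or a
superuser. A sketch of what to look at (the interpretation notes are general
guidance, not part of the original message):

```sql
-- Run inside psql; requires ownership of the table or superuser rights.
VACUUM VERBOSE rid;
-- In the output, compare the reported page counts against what the
-- table's live row count should need, and watch the "removed N row
-- versions" lines: a table occupying far more pages than its live
-- rows require is bloated and explains slow sequential fetches.
```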
On 8/23/07, JLoz [EMAIL PROTECTED] wrote:
Does the table have a unique index/primary key?
The view shows fields from two tables. One of the primary keys of one of
the tables is shown by the view.
you will probably have better luck with the TQuery component. Also,
you should try out another
Hi,
So, I built my tables, which contain a TSearch2 field, by
1. Create table without indexes
2. COPY data into table
3. ALTER TABLE tblMessages ADD COLUMN idxFTI tsvector;
4. UPDATE tblMessages SET idxFTI=to_tsvector('default', strMessage);
5. Index all the fields including the TSearch2 field
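The five steps above, written out as a single psql sketch. The table and column
names come from the post; the column types, the COPY source path, and the GiST
index in step 5 are assumptions, since the post does not show those definitions:

```sql
-- Steps 1-2: create and bulk-load the table without any indexes.
CREATE TABLE tblMessages (
    msg_id     integer,          -- assumed schema
    strMessage text
);
COPY tblMessages FROM '/path/to/messages.dat';  -- placeholder path

-- Steps 3-4: add and populate the TSearch2 tsvector column.
ALTER TABLE tblMessages ADD COLUMN idxFTI tsvector;
UPDATE tblMessages SET idxFTI = to_tsvector('default', strMessage);

-- Step 5: index all the fields, including the TSearch2 field
-- (GiST here is an assumption; the post does not name the index type).
CREATE INDEX idx_tblmessages_fti ON tblMessages USING gist (idxFTI);
ANALYZE tblMessages;
```

As the follow-ups in this thread note, a vacuum after the bulk UPDATE matters,
since the UPDATE leaves a dead version of every row behind.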
On Sat, Aug 25, 2007 at 10:18:25AM -0400, Bill Moran wrote:
Phoenix Kiula [EMAIL PROTECTED] wrote:
We're moving from MySQL to PG, a move I am rather enjoying, but
we're currently running both databases. As we web-enable our
financial services in fifteen countries, I would like to
On Sun, 26 Aug 2007, Benjamin Arai wrote:
Hi,
So, I built my tables, which contain a TSearch2 field, by
1. Create table without indexes
2. COPY data into table
3. ALTER TABLE tblMessages ADD COLUMN idxFTI tsvector;
4. UPDATE tblMessages SET idxFTI=to_tsvector('default', strMessage);
vacuum
Oleg Bartunov [EMAIL PROTECTED] writes:
On Sun, 26 Aug 2007, Benjamin Arai wrote:
Hi,
So, I built my tables, which contain a TSearch2 field, by
1. Create table without indexes
2. COPY data into table
3. ALTER TABLE tblMessages ADD COLUMN idxFTI tsvector;
4. UPDATE tblMessages SET
On Aug 26, 2007, at 9:02 AM, Dawid Kuroczko wrote:
On 8/26/07, Kevin Kempter [EMAIL PROTECTED] wrote:
On Saturday 25 August 2007 21:10:19 Ron Johnson wrote:
On 08/25/07 21:51, Kevin Kempter wrote:
Hi List;
I have a very large table (52 million rows) - I'm creating a copy
of it to
rid it
Gregory Stark [EMAIL PROTECTED] writes:
On Sun, 26 Aug 2007, Benjamin Arai wrote:
So, I built my tables, which contain a TSearch2 field, by
1. Create table without indexes
2. COPY data into table
3. ALTER TABLE tblMessages ADD COLUMN idxFTI tsvector;
4. UPDATE tblMessages SET
On Tue, 2007-08-14 at 10:16 -0500, Scott Marlowe wrote:
On 8/14/07, Ow Mun Heng [EMAIL PROTECTED] wrote:
I'm seeing an obstacle in my aim to migrate from MySQL to PG, mainly in
the manner in which PG handles duplicate entries, whether from primary
keys or unique constraints.
Data is taken
Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
On Sun, 26 Aug 2007, Benjamin Arai wrote:
So, I built my tables, which contain a TSearch2 field, by
1. Create table without indexes
2. COPY data into table
3. ALTER TABLE tblMessages ADD COLUMN idxFTI tsvector;
4. UPDATE
On Mon, 2007-08-27 at 11:55 +0800, Ow Mun Heng wrote:
I just ran into trouble with this. The rule seems to work when I do
simple inserts, but since what I will be doing is \copy bulk loads,
it balks and fails.
Now would be a good idea to teach me how to skin the cat differently.
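One way to skin that cat without rules, sketched below: \copy into a temporary
staging table first, then insert only the rows whose keys are not already
present. The names "target_table", "id", and "data.csv" are hypothetical
placeholders for this post's actual schema:

```sql
-- Load the bulk data into a temporary staging table with the same shape.
CREATE TEMP TABLE staging (LIKE target_table INCLUDING DEFAULTS);
\copy staging FROM 'data.csv' WITH CSV

-- Insert only rows whose key is not already in the target table,
-- so duplicates in the load file are silently skipped rather than
-- aborting the whole \copy.
INSERT INTO target_table
SELECT s.*
FROM staging s
WHERE NOT EXISTS (
    SELECT 1 FROM target_table t WHERE t.id = s.id
);
```

This keeps the fast \copy path for the bulk transfer and confines the
duplicate handling to a single set-based INSERT.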