We offer a web-based application to companies. By keeping a company_id
in the schema we differentiate the data between companies, e.g. the user
table has a company_id field to distinguish users between companies.
However, most companies feel "insecure" about their data not
being stored
Anna,
At 12:57 -0500 12/19/2001, Zhang, Anna wrote:
>I just installed Postgres 7.1.3 on my Red Hat 7.2 Linux box. We are doing
>research to see how Postgres performs. I used the COPY utility to import data
>from a text file containing 32 million rows; 26 hours have passed, but it is
>still running. My
This is an interesting effect that I have run into more than once, and I have a
question concerning it:
We see that the cost for this query went from roughly 12,000,000 to about
12,000. Of course, we cannot assume that the time of execution will be
directly proportional to this, and also, the weight fa
Thanks, that sped things up a bit, from 7.6 sec. to about 5.5 sec. However,
the plan still includes a sequential scan on ssa_candidate:
Aggregate  (cost=12161.11..12161.11 rows=1 width=35)
  ->  Merge Join  (cost=11611.57..12111.12 rows=19996 width=35)
        ->  Sort  (cost=11488.27..11488.27 r
Not sure about tuning, but it seems to me that this query would be much more
efficient if it were rewritten like this (especially if the style_id columns on
both tables are indexed):
SELECT count(DISTINCT song_id) AS X
FROM ssa_candidate SC JOIN station_subgenre SS ON SC.style_id = SS.style_id
WHERE SS.
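For reference, the rewrite pattern being suggested looks roughly like the sketch
below. The filter on SS.station_id is only a placeholder, since the actual WHERE
clause is cut off above, and the index names are illustrative:

    -- Original shape: DISTINCT inside a subquery, filtered with IN
    SELECT count(*) FROM (
        SELECT DISTINCT song_id FROM ssa_candidate
        WHERE style_id IN (SELECT style_id FROM station_subgenre
                           WHERE station_id = 42)
    ) AS t;

    -- Suggested shape: join the two tables directly
    SELECT count(DISTINCT SC.song_id) AS x
    FROM ssa_candidate SC
    JOIN station_subgenre SS ON SC.style_id = SS.style_id
    WHERE SS.station_id = 42;

    -- Indexes that give the planner an alternative to the seq scan
    CREATE INDEX ssa_candidate_style_id_idx ON ssa_candidate (style_id);
    CREATE INDEX station_subgenre_style_id_idx ON station_subgenre (style_id);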
"Michael T. Halligan" <[EMAIL PROTECTED]> writes:
> The query sorts through about 80k rows.. here's the query
> --
> SELECT count(*) FROM (
> SELECT DISTINCT song_id FROM ssa_candidate WHERE
> style_id IN (
>
To create an index, sorting is the basic operation, and it is a time-consuming
job. Generally there are only relatively better algorithms for sorting. In some
cases an algorithm may perform quite well, but certainly not in every case. If
you look at the typical sorting algorithms, different initial con
Hi... I seem to be running into a bottleneck on a query, and I'm not
sure what the bottleneck is.
The machine is a dual-processor P3 750 with 2 GB of (PC100) memory,
and three 72 GB disks set up
in RAID 5. Right now I'm just testing our DB for speed (we're porting
from Oracle)... later on
We'r
Are the rows huge? What kind of machine, hardware-wise, are we talking
about? Did you start the postmaster with fsync disabled? I generally turn
fsync off for importing; the improvement is amazing :-)
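For what it's worth, a rough sketch of the two usual ways to run without fsync
during a bulk load (turn it back on afterwards, and double-check the exact
option names against the 7.1 docs); the data directory path is just an example:

    # In postgresql.conf, then restart the postmaster:
    fsync = false

    # Or pass the backend -F switch when starting the postmaster:
    postmaster -o "-F" -D /usr/local/pgsql/data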
Good luck!
-Mitch
- Original Message -
From: "Zhang, Anna" <[EMAIL PROTECTED]>
To: <
I just installed Postgres 7.1.3 on my Red Hat 7.2 Linux box. We are doing
research to see how Postgres performs. I used the COPY utility to import data
from a text file containing 32 million rows; 26 hours have passed, but it is
still running. My question is: how does Postgres handle such data loading? It
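For reference, a load like this would have been issued with something along the
lines of the sketch below; the table name and file path are hypothetical, and
the file must live on the server and be readable by the backend's OS user:

    -- Bulk load from a server-side text file (table and path are examples)
    COPY big_table FROM '/tmp/data.txt';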
Randall Perry <[EMAIL PROTECTED]> writes:
> Anyone know what this means:
> getTables(): SELECT (funcname) for trigger cust_modification_date returned 0
> tuples. Expected 1.
It would seem you have dropped the function which that trigger uses.
(If you drop and recreate a function, you have to drop
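A sketch of the recreate sequence being described, for PL/pgSQL on 7.1; the
table name (customer), the column it touches, and the assumption that the
function shares the trigger's name are all placeholders, since none of them
appear in the thread:

    -- Recreate the trigger function (body and signature assumed)
    CREATE FUNCTION cust_modification_date() RETURNS opaque AS '
        BEGIN
            NEW.modification_date := now();
            RETURN NEW;
        END;
    ' LANGUAGE 'plpgsql';

    -- Then drop and recreate the trigger so it points at the new function
    DROP TRIGGER cust_modification_date ON customer;
    CREATE TRIGGER cust_modification_date
        BEFORE INSERT OR UPDATE ON customer
        FOR EACH ROW EXECUTE PROCEDURE cust_modification_date();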
Anyone know what this means:
getTables(): SELECT (funcname) for trigger cust_modification_date returned 0
tuples. Expected 1.
pg_dump failed on tasbill, exiting
Tried deleting and recreating the trigger. After restarting the postmaster,
pg_dump works OK once. The next time it runs, it fails again w