Yes, agreed, it would be nice to have WHERE as part of COPY. In fact, I
like Informix's solution, which merges data in/out with INSERT/SELECT:
UNLOAD TO '/tmp/x'
SELECT *
FROM tab;
and
LOAD FROM '/tmp/x'
INSERT INTO tab;
It is tough to beat this flexibility.
Vincent Janelle <[EMAIL PROTECTED]> writes:
> when entries are deleted from a table they're not deleted from the
> index, and vacuum doesn't clean them up.
This is entirely false.
There is a problem with recovering space in an index if the range of
indexed values changes over time. But claiming that VACUUM never
removes dead index entries at all is simply wrong.
It is possible. A delimiter can be specified when dumping or restoring
the data file, and a null string can also be specified. See
test_db=> \h COPY
for more details.
The problem with COPY, I think, is that I cannot use a WHERE
clause. I'd like to unload just certain rows from a table. If I want
only a subset, I have to work around it, e.g. as sketched below.
also check for application logic that loops when you could retrieve a
single result with a GROUP BY statement (see the sketch after this
snippet), perform nightly maintenance/recreation of indexes, and vacuum.
A Beowulf cluster will not help with Postgres, due to its use of
shared memory.
checking the logs to see that you're not rec
when entries are deleted from a table they're not deleted from the
index, and vacuum doesn't clean them up. REINDEX recreates the index.
It is suggested that you run a script at whatever intervals are
necessary to recreate the indexes on your tables if they have large
amounts of data deleted from them.
Have you read the SCO FAQ?
http://www.us.postgresql.org/users-lounge/docs/faq.html
Also, we are about to release 7.3. Would you please try 7.3beta3 and
let us know how that works?
---
Shibashish wrote:
> Dear Sir,
On Fri, 2002-11-01 at 06:15, [EMAIL PROTECTED] wrote:
> tradein_clients=# VACUUM VERBOSE ANALYZE email_bank_mailing_lists;
> NOTICE: --Relation email_bank_mailing_lists--
> NOTICE: Pages 3583: Changed 0, Empty 0; Tup 256419: Vac 0, Keep 0, UnUsed 44822.
> Total CPU 0.24s/0.04u sec elapsed
> I need to find a way to increase performance on my server.
> We are running apache, mod-perl, sendmail, and postgres on our server. The
> machine is a dual 900MHz processor with 2 gigs of ram, and fast 10k raid
> drives.
> What else can I do to help performance?
> Will a beowulf cluster help to
On Wed, Oct 30, 2002 at 04:27:33PM -0500, John Sequeira wrote:
> over has a very high # of INSERTS (it does a lot of logging), and it
> seems like there might be too much overhead using something like usogres
> or one of the replication options. We expect it to grow very quickly to
How many
To: [EMAIL PROTECTED]
From: Gary DeSorbo <[EMAIL PROTECTED]>
Subject: DB Performance
Cc:
Bcc:
X-Attachments:
I need to find a way to increase performance on my server.
We are currently using postgres as a back-end to our web-based corporate
wide application. The application is used for everything
I need to find a way to increase performance on my server.
We are currently using postgres as a back-end to our web-based corporate
wide application. The application is used for everything from collecting
large amounts of data, updating current data, and creating large
reports based on this data. At
Hello,
I have a Red Hat 7.3 system with PostgreSQL 7.2.3, and when I run some
jobs that add data to the databases, the system slows, the file system
where PGDATA resides fills up, and all of memory and disk swap gets
used up... then panic.
There is nothing in any of the log files; we're
On Fri, 2002-11-01 at 08:03, [EMAIL PROTECTED] wrote:
>
> Hi Rod ,
>
> Does it mean that an index scan is used for less frequently occurring data?
> yes my table was not clustered.
>
> can you tell me what 0.00..6788.24, rows, and width mean?
>
> in the EXPLAIN output: cost=0.00..6788.24 rows=30001 width=4
See Paragraph 2 of the description section:
http://www.postgresql.org/idocs/index.php?sql-explain.html
The above gives a good explanation of 'cost'. Rows is the number of
rows estimated to be returned, and width is the estimated average
width, in bytes, of the rows it deals with at that point.
On Fri, 2002-
On Fri, 2002-11-01 at 06:15, [EMAIL PROTECTED] wrote:
Looks like a borderline case. See the costs of the index scan and
sequential scan are very similar. Since 499 covers nearly 1 in 10
tuples, it's likely found on nearly every page. This should make a
sequential scan much cheaper.
However, i
Hi all,
I'm trying to get 'readline' and 'history' support for the PostgreSQL
shell.
Working in psql without all the bash features (like auto completion
etc.) is very nasty.
I receive the following error when configuring PostgreSQL:
checking for netinet/tcp.h... yes
checking whether string
hi
While creating a user in PostgreSQL, I'm giving this command on
Cygwin, and it's giving the error below.
$ createuser administrator with password ' ';
Shall the new user be allowed to create databases?
(y/n) y
Shall the new user be allowed to create more new
users? (y/n) n
psql: could not conne
"scott.marlowe" <[EMAIL PROTECTED]> writes:
> sort_mem (bigger isn't always better here. This is the amount of memory
> EACH sort operation will grab when it runs, and it seems to be that it
> will grab all of it whether it needs it or not, so be careful not to crank
> this up.
That's not correct: a sort allocates memory only up to sort_mem before
switching to an on-disk sort; it does not grab the full amount whether
it needs it or not.
On 31 Oct 2002, Kevin Old wrote:
> Hello everyone,
>
> I have a question about setting the variables in the postgresql.conf
> file. I have the following settings in my my.cnf file for MySQL and
> need to know which variables in the postgresql.conf file would
> correspond to the ones below, so
I just made a mistake that could be quite costly: I did this:
update writer_survey set partid='W41308' where survid in (select survid
from participants where partid='W41291' limit 1);
when I should have done this:
update writer_survey set partid='W41308' where survid in (select survid
from write
I'm looking for some suggestions for setting up a remote failover
Postgres instance (warm spare). In my situation, the db to be failed
over has a very high # of INSERTS (it does a lot of logging), and it
seems like there might be too much overhead using something like usogres
or one of the replication options.
Hello,
I used TPCC to test the PostgreSQL 7.3 beta3 version.
My test steps are as follows:
1st, the number of TPCC simulation processes is 100,
run time is 7200 seconds. The consistency test
passed successfully.
2nd, the number of TPCC simulation processes is 100,
The first step definitely would be to split off the DB server.
Secondly, you should check the application part: is it some specific
queries that take too much time? In that case, consider optimising
the queries.
To identify the slow queries you could use the pg_stat_activity view
(a sketch follows) to log the time of various queries.
Gary DeSorbo <[EMAIL PROTECTED]> writes:
> Postgres is configured as follows:
> sort_mem = 128672
> shared_buffers = 60800
> fsync = false
Yipes. Back off that sort_mem setting --- that's 128M *per sort*,
which will undoubtedly run you out of memory (or at least into serious
swapping) as soon as several sorts are running at once.
I need to find a way to increase performance on my server.
We are currently using postgres as a back-end to our web-based
corporate wide application. The application is used for everything
from collecting large amounts of data, updating current data, and
creating large reports based on this data.
>
>
> Does eRserver work with pgsql 7.2.3 or 7.3b3 ?
It does work with 7.2.3, if you use the commercial version. It is
definitely broken right now in 7.3, but it will certainly be upgraded
to work with 7.3 some time before we upgrade to 7.3.x. (Of course,
when that will happen is unclear to me.)
Thanks for the insight, Rod.
Is there any other place I can learn more about these principles?
But if the table is clustered, then the pages are stored contiguously
with respect to that column, right?
> On Fri, 2002-11-01 at 08:03, [EMAIL PROTECTED] wrote:
>>
>> Hi Rod ,
>>
>> Does it mean that an index
I repeat my simple experiment to show what is going on:
push=# explain analyze select * from jobs where status = 'r';
NOTICE: QUERY PLAN:
Index Scan using idx_jobs_status_r on jobs (cost=0.00..8.57 rows=3770
width=49)
(actual time=19.26..1295.73 rows=5 loops=1)
Total runtime: 1295.85 ms
Rod ,
Clustering did work for my other case ;-)
tradein_clients=> explain analyze SELECT count(*) from email_source where
source_id=173;
NOTICE: QUERY PLAN:
Aggregate (cost=13042.91..13042.91 rows=1 width=0) (actual time=1415.32..1415.32
rows=1 loops=1)
-> Seq Scan on email_source (c
Hi Rod ,
Does it mean that an index scan is used for less frequently occurring data?
yes my table was not clustered.
Can you tell me what 0.00..6788.24, rows, and width mean
in the EXPLAIN output: cost=0.00..6788.24 rows=30001 width=4?
I have one more table where I face a similar problem. I
Hi ,
For a particular table, it was only a dump and reload of the table
that helped in enabling index usage.
I tried VACUUM ANALYZE and even recreating the indexes, but it
did not work.
Why does the planner use the index like a miser?
Below are the details.
Was there anything better I could have done?
Does eRserver work with pgsql 7.2.3 or 7.3b3 ?
regds
mallah.
> On Thu, Oct 31, 2002 at 03:52:03PM -0500, John Sequeira wrote:
>> Does anyone have successful experience using rsync/usogres or one
>> of the pg replication projects to accomplish this?
>
> Yes. We use the rserv (eRServer) software
> So, I ran psql, did a \o to capture the output,
> did a select * from table where x=y
> whacked the first row in the output file, and added the COPY statement.
test_db=> \pset tuples_only -- does whacking for you
> This seems way too klunky, so I must be going about it all wrong. In
> Informix
Hi Folks,
I routinely VACUUM ANALYZE my database four times a day.
It is a non-locking vacuum (not FULL).
The vacuum operation takes 15-20 mins;
database size is 4 GB.
The problem is the database server becomes a bit slower, as
its load average remains at 2-4 during vacuum.
Can anyone tell me if this situation is normal?