[ADMIN] Fast SELECT sometimes blocks

2007-04-13 Thread ogjunk-pgjedan
Hi, A perfectly normal and healthy PG 8.2.3 instance of mine has started acting funny - a query that normally takes only a few milliseconds sometimes runs for minutes. I thought maybe something's funny with the indices, so I reindexed them all. Didn't help. Thought maybe something's funky with the who
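
A minimal sketch (not from the thread) for checking whether the slow runs are waiting on locks, assuming 8.2-era catalog column names (procpid, current_query):

    SELECT l.pid, l.relation::regclass, l.mode, l.granted, a.current_query
    FROM pg_locks l
    JOIN pg_stat_activity a ON a.procpid = l.pid
    WHERE NOT l.granted;   -- any rows here are backends waiting on a lock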

[ADMIN] REINDEX using only 1 CPU (of 2)

2007-04-13 Thread ogjunk-pgjedan
Hi, I've got PG 8.2.3 running on a dual-core P4, and I noticed that, at least during reindexing, the PostgreSQL REINDEX process is using only 1 of those 2 cores. You can see this clearly in this snippet from top: Cpu0 : 93.3% us, 6.7% sy, 0.0% ni, 0.0% id, 0.0% wa, 0.0% hi, 0.0% si Cpu1
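
A single REINDEX runs in one backend, so it can only use one core. A hedged workaround sketch (table names hypothetical): reindex different tables from separate sessions so each backend gets its own core:

    psql -d mydb -c "REINDEX TABLE big_table_1" &
    psql -d mydb -c "REINDEX TABLE big_table_2" &
    wait   # the two backends run concurrently, one per core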

Re: [ADMIN] Fastest DB restore options

2007-02-22 Thread ogjunk-pgjedan
Hi, Yes, I remember discussions about the (f)sync config. Can anyone comment on whether turning fsync off for a restore into 8.2.3: 1) is advisable 2) will make the restore faster? If the OS and FS matter, this is on Fedora Core 3 Linux with kernel 2.6.9 and the ext3 journaling FS. Thanks, Otis
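
A sketch of what the procedure being asked about would look like (an assumption about the intended steps, not advice from the thread); fsync is reload-able, and should be turned back on once the restore finishes:

    # postgresql.conf -- only while the restore runs
    fsync = off
    # then have the running server pick it up
    pg_ctl -D /var/lib/pgsql/data reload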

Re: [ADMIN] Fastest DB restore options

2007-02-22 Thread ogjunk-pgjedan
- Original Message From: Tom Lane <[EMAIL PROTECTED]> [EMAIL PROTECTED] writes: > I normally dump with these options: > -d MyDB --clean --inserts --column-inserts --format=P > But the last time I tried that, the restore took forever. --inserts is pretty expensive. OG: rig

[ADMIN] Fastest DB restore options

2007-02-22 Thread ogjunk-pgjedan
Hello, I have a fairly large DB to dump and restore as fast as possible. I'm moving from 8.0.3 to 8.2.3! :) I normally dump with these options: -d MyDB --clean --inserts --column-inserts --format=P But the last time I tried that, the restore took forever. So I'm looking for t
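
Since --inserts/--column-inserts force one INSERT per row, a commonly faster route (a sketch, not the thread's final answer) is a custom-format dump restored with pg_restore, which loads the data with COPY:

    pg_dump -Fc MyDB > mydb.dump            # custom format; ideally run the 8.2 pg_dump against the old 8.0.3 server
    pg_restore --clean -d MyDB mydb.dump    # restore into the 8.2.3 server; data goes in via COPY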

Re: [ADMIN] How are ppl monitoring PostgreSQL ... ? What is being monitored ... ?

2006-07-06 Thread ogjunk-pgjedan
Interesting, Hubert. Where/how do you get the data for: - number of transactions per second - duration of longest query ? Thanks, Otis - Original Message From: hubert depesz lubaczewski To: Marc G. Fournier Cc: pgsql-admin@postgresql.org Sent: Sunday, June 11, 2006 11:10:2
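
Rough sketches (my own, not from Hubert's reply) of where those two numbers can come from, assuming 8.x statistics views:

    -- transactions: sample this twice and divide the delta by the sampling interval
    SELECT sum(xact_commit + xact_rollback) FROM pg_stat_database;
    -- duration of the longest currently running query
    SELECT max(now() - query_start) FROM pg_stat_activity
    WHERE current_query NOT LIKE '<IDLE>%';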

Re: [ADMIN] Logging long queries: not all get logged

2006-05-10 Thread ogjunk-pgjedan
Hi Chris, I'm pretty sure (I do no timing on the Hibernate/java/app side). I changed the config to: # log queries that take more than 1 ms log_min_duration_statement = 1 # in ms Still nothing in the log :) (I did ctl_reload the postmaster) Simon Riggs confirmed this is a known bug in a

[ADMIN] Logging long queries: not all get logged

2006-05-10 Thread ogjunk-pgjedan
Hi, I'm running PG 8.0.3. I'm trying to catch slow queries, so I have this in postgresql.conf: # log queries that take more than 500 ms log_min_duration_statement = 500 # in ms This does log _some_ queries that take > 500 ms to run. However, it looks like not all queries get logged!
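
For reference, the setting the poster describes, as it would appear in postgresql.conf (a sketch of the documented behaviour, not a fix for the bug mentioned in the follow-up):

    log_min_duration_statement = 500   # in ms; -1 disables, 0 logs every statement with its duration
    # picked up on reload, e.g.: pg_ctl -D /var/lib/pgsql/data reload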

Re: [ADMIN] Renaming a sequence?

2006-03-25 Thread ogjunk-pgjedan
Thanks Andy, this was it: ALTER TABLE watchlist ALTER COLUMN id SET DEFAULT nextval('public.watchlist_id_seq'::text); Otis - Original Message From: Andy Shellam <[EMAIL PROTECTED]> To: [EMAIL PROTECTED]; pgsql-admin@postgresql.org Cc: pgsql-admin@postgresql.org Sent: Saturday, March 25
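
Putting the thread's pieces together, a sketch of the full cleanup (the old sequence name here is hypothetical; only the SET DEFAULT line comes from the post):

    -- ALTER TABLE also works on sequences in these versions
    ALTER TABLE old_table_id_seq RENAME TO watchlist_id_seq;
    -- then point the column default at the renamed sequence
    ALTER TABLE watchlist ALTER COLUMN id
      SET DEFAULT nextval('public.watchlist_id_seq'::text);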

[ADMIN] Renaming a sequence?

2006-03-24 Thread ogjunk-pgjedan
Hi, I just renamed some of my tables, but I now have sequences with the old names. I followed info from http://www.postgresql.org/docs/current/static/sql-altersequence.html : "Some variants of ALTER TABLE can be used with sequences as well; for example, to rename a sequence use ALTERT

Re: [ADMIN] Character encoding problems and dump import

2006-03-21 Thread ogjunk-pgjedan
Thanks John and Ivo for the help. It turned out that I had to manually SET CLIENT_ENCODING TO 'LATIN1' before processing the dump (which didn't have this specified). This fixed the problem. I thought a DB set to UNICODE char encoding (server_encoding) would process the Extended ASCII characters, bu
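
A sketch of the fix described above, for a plain SQL dump loaded through psql (the dump file name is hypothetical):

    \encoding LATIN1    -- or: SET client_encoding TO 'LATIN1';
    \i dbdump.sql       -- the server converts the LATIN1 input to the UNICODE (UTF8) database encoding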

[ADMIN] Character encoding problems and dump import

2006-03-20 Thread ogjunk-pgjedan
Hello, I have a dump (non-binary, if it matters) of a DB that has some characters in it that my DB doesn't want to take. I'm using PG 8.0.3 and it was created with Unicode support: => \encoding UNICODE Characters that cause problems during the import are things like: é and other characters from

Re: [ADMIN] postgres optimization (effective_cache_size)

2005-08-12 Thread ogjunk-pgjedan
Hello, I followed this advice for picking a good effective_cache_size value (below) from Scott Marlowe, and ran into a bit of trouble: I looked at the `top' output and saw "721380k cached". So I calculated the effective cache size using Scott's formula: 721380/8 = 90172 Then I changed my effe
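
The calculation being described, written out (numbers taken from the post; effective_cache_size is measured in 8 kB pages on a default build):

    # top reports 721380k cached; 721380 kB / 8 kB per page = about 90172 pages
    effective_cache_size = 90172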

Re: [ADMIN] DB replicators comparison; (vs. DB upgrade via pg_dump)

2005-05-15 Thread ogjunk-pgjedan
Hello, Thanks for the explanation :) I thought there was some technical limitation when dump/reload is used, and I just wasn't seeing it after my dump/reload successfully got me from 7.3.4 to 8.0.3. Still, DB replication sounds very useful, so I'd still be interested in Slony-I, Daffodil, and Ma

[ADMIN] DB replicators comparison; (vs. DB upgrade via pg_dump)

2005-05-14 Thread ogjunk-pgjedan
Hello, I am about to upgrade from 7.3.4 to 8.0.3, and I read that using a DB replication tool is a good way to go about it. I see there are multiple replicator choices: Slony-I Daffodil Replicator Mammoth Replicator I was wondering if anyone has tried any 2 or 3 of them, and could share th

Re: [ADMIN] VACUUMing for 30 minutes

2004-12-23 Thread ogjunk-pgjedan
Hello, So I ran VACUUM VERBOSE (just like that, without specifying a table) and got some output. The 2 big tables take 99% of the vacuuming time. Now, I run VACUUM religiously every night, across all tables, but maybe that's overkill for the number of updates and inserts in this DB. Maybe s
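
One possible adjustment along the lines being considered (a sketch, table names hypothetical): vacuum the two big tables on their own schedule instead of the whole database every night:

    VACUUM ANALYZE big_table_1;   -- the two large tables, as often as their update volume warrants
    VACUUM ANALYZE big_table_2;
    -- the remaining small tables can keep a less frequent database-wide VACUUM ANALYZE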

Re: [ADMIN] VACUUMing for 30 minutes

2004-12-22 Thread ogjunk-pgjedan
Hello, --- Tom Lane <[EMAIL PROTECTED]> wrote: > <[EMAIL PROTECTED]> writes: > > VACUUMing this DB takes about 30 minutes, and during that time the DB > > is pretty unresponsive, although the PG process is not using a lot of > > CPU (load ~ 1) nor memory (~20MB for the VACUUM process). > > H

Re: [ADMIN] VACUUMing for 30 minutes

2004-12-21 Thread ogjunk-pgjedan
Hello, I still have this issue of a long vacuum process on a DB that is not really all that big in schema (~ 30 tables) nor size (biggest table is ~150K rows, another ~120K, all others only a few thousand rows each). VACUUMing this DB takes about 30 minutes, and during that time the DB is pre

Re: [ADMIN] easy one: location of the database cluster

2004-12-16 Thread ogjunk-pgjedan
--- Tom Lane <[EMAIL PROTECTED]> wrote: > Peter Eisentraut <[EMAIL PROTECTED]> writes: > > Iain wrote: > >> Is there a "standard" directory that people tend to use for this, > >> such as /var/local/pgsql/ ? > > > According to the Filesystem Hierarchy Standard, program data should be > > under

[ADMIN] VACUUMing for 30 minutes

2004-12-15 Thread ogjunk-pgjedan
Hello, I have a DB with about 30 tables, where 2 tables are significantly larger than the rest, and contain a bit over 100,000 rows. Every night I do these 3 things: VACUUM; ANALYZE; pg_dump. I am noticing that the VACUUM part takes nearly 30 minutes, during which the DB is not very accessible (a

Re: [ADMIN] 7.3.4 -> 7.3.8 - a little stuck

2004-11-17 Thread ogjunk-pgjedan
Aha! :) No problem Devrim - I did this last night and everything worked. The init script must be from the 7.4.* versions, because there are also changelog-type messages mentioning 7.4.* in there. Otis --- Devrim GUNDUZ <[EMAIL PROTECTED]> wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1

Re: [ADMIN] 7.3.4 -> 7.3.8 - a little stuck

2004-11-17 Thread ogjunk-pgjedan
Hello, --- Tom Lane <[EMAIL PROTECTED]> wrote: > <[EMAIL PROTECTED]> writes: > > Although I upgraded to 7.3.8, I saw several incorrect(?) references to > > 7.4 (e.g. PGVERSION=7.4 in that init script). Mistakes? > > Yes ... yours. You may have thought you updated to 7.3.8, but you > evidentl

[ADMIN] 7.3.4 -> 7.3.8 - a little stuck

2004-11-16 Thread ogjunk-pgjedan
Hello, I was using 7.3.4 and decided to upgrade to 7.3.8. I did this via RPMs for RH9. RPM installed OK, and the migration of data looks like it worked. I have one remaining problem: I can start the DB with `pg_ctl -D /var/lib/pgsql/data start', but I cannot start it with `/etc/rc.d/init.d/pos

Re: [ADMIN] cannot open segment 1 of relation .... No such file or directory

2004-11-16 Thread ogjunk-pgjedan
Hello, --- Tom Lane <[EMAIL PROTECTED]> wrote: > <[EMAIL PROTECTED]> writes: > > I'm using PG 7.3.4 under RedHat 9.0 and I've started noticing the > > following type of errors: > > cannot open segment 1 of relation url (target block 537329664): No > > such file or directory > > If this is co

[ADMIN] cannot open segment 1 of relation .... No such file or directory

2004-11-15 Thread ogjunk-pgjedan
Hello, I'm using PG 7.3.4 under RedHat 9.0 and I've started noticing the following type of errors: cannot open segment 1 of relation url (target block 537329664): No such file or directory How can I determine what exactly is corrupt or missing and which row is triggering this? I have found pg
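
A sketch (not from the thread) for mapping the named relation to its on-disk file, assuming 7.3-era catalogs:

    SELECT oid, relfilenode FROM pg_class WHERE relname = 'url';
    -- "segment 1" would then be the file named <relfilenode>.1 in that database's directory under $PGDATA/base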

[ADMIN] pg_restore: [archiver] input file does not appear to be a valid archive

2004-09-15 Thread ogjunk-pgjedan
Hello, I'm having trouble with pg_dump and pg_restore (PG 7.3.4). This is how I'm dumping my DB: # pg_dump -d mydb --clean --inserts --column-inserts --format=P -v -h localhost -p 5432 -U otis > dbdump # gzip -9 dbdump + scp it to a remote machine where I want to restore this dump in a differen
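
--format=P produces a plain SQL script, which pg_restore cannot read (it only handles custom and tar archives). A sketch of the two consistent combinations, file names hypothetical:

    gunzip -c dbdump.gz | psql -d mydb   # plain-format dumps are restored with psql
    pg_dump -Fc mydb > mydb.dump         # or dump in custom format ...
    pg_restore -d mydb mydb.dump         # ... which is what pg_restore expects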

Re: [ADMIN] Listing all open sessions/connections/XAs ?

2004-06-17 Thread ogjunk-pgjedan
Hello, I think the pg_stat_activity view may show me what I need. However, even though I have the 'stats_command_string = true' property in postgresql.conf (and I restarted the postmaster), I do not see 'current_query' in the pg_stat_activity table: simpydb=> select * from pg_stat_activity ; datid | datna
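
A sketch for listing the "idle in transaction" sessions once current_query is populated (the exact '<IDLE> in transaction' marker text is an assumption for this 7.x/8.0-era version; stats_start_collector must also be on, which it is by default):

    SELECT procpid, usename, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction';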

[ADMIN] Listing all open sessions/connections/XAs ?

2004-06-16 Thread ogjunk-pgjedan
Hello, Occasionally I see that my (web) app leaves some DB connections open, so they look like this to `ps': postgres: username dbname 127.0.0.1 idle in transaction This results in my DB connection pool getting exhausted every so often. I need to track the source of this problem. Is there a w