Re: [ADMIN] Performance problem...

2005-03-15 Thread Marcin Giedz
On Monday, 14 March 2005 19:32, Scott Marlowe wrote:
> On Mon, 2005-03-14 at 12:03, Marcin Giedz wrote:
> > Hello...
> >
> >
> > Our company is going to change SQL engine from MySQL to PSQL. Of course
> > some performance problems occurred. Our server is Dual Xeon 3.0GHz + 8GB
> > RAM + RAID1(software - two 146GB SCSI 15k) for sql data + RAID1(software
> > - two 146GB SCSI 15k) for pg_xlog. Postgres.conf parameters are as
> > follows:
> >
> > max_connections = 150
> > shared_buffers = 50000  # min 16, at least max_connections*2, 8KB each
> > work_mem = 2048         # min 64, size in KB
>
> 50,000 shared buffers may or may not be too much.  Try it at different
> sizes from 5,000 or so up to 50,000 and find the "knee".  It's usually
> closer to 10,000 than 50,000, but ymmv...

Playing with shared_buffers from 10000 to 50000 doesn't change anything in 
the total time for this query :( But when I raise work_mem a little, to 10000, 
total runtime decreases by about 10%. However, when I change 
random_page_cost to 0.2 (I know that almost all papers say it should be 
higher than 1.0), total runtime decreases almost 3 times, to about 900ms 
- earlier, with random_page_cost=1.2, it took 2.7s. Is it reasonable to run 
with random_page_cost at this value? 
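
For reference, all of these settings can be tried per session before touching
postgresql.conf; a minimal sketch of that workflow (the table and column names
below are only placeholders for the real query in this thread):

-- Try planner/memory settings for this session only, then compare the plans.
SET work_mem = 16384;            -- in KB on 8.0, i.e. 16 MB
SET random_page_cost = 1.2;
EXPLAIN ANALYZE
SELECT * FROM big_table b JOIN other_table o ON o.id = b.other_id;

SET random_page_cost = 0.2;      -- the value being questioned above
EXPLAIN ANALYZE
SELECT * FROM big_table b JOIN other_table o ON o.id = b.other_id;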

>
> On the other hand, for a machine with 8 gigs of ram, 2 meg of work_mem
> is pretty small.  Try bumping it up to 8 or 16 megs.  You can change
> this one "on the fly" for testing, so just do:
>
> set work_mem=16384;
> and then run the query again and see if that helps.  The hash aggregate
> method uses sort/work mem to do its work, and if it doesn't think it
> can hold the result set in that space the planner will pick another
> method, like the merge left join.
>
> In your explain analyze output, look for gross mismatches between
> estimated and actual rows.  Most of yours here look pretty good in the
> areas where the data is being collected, but during the merges, the
> numbers are WAY off, but I'm not sure what to do to change that.

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


[ADMIN] Pgadmin II works with postgre 7.4.7?

2005-03-15 Thread Andrei Bintintan



Hi, 
 
I upgraded from 7.4.2 to 7.4.7 and pgAdmin II 
no longer connects. The error message is: "The database does not exist on 
the server or user authentication failed." The settings are correct; I can connect 
with pgAdmin III without any problem.
 
Do you also know of any other Postgres admin/access 
programs like pgAdmin? For me, pgAdmin II has some things I like better than pgAdmin 
III. 
 
I run the DB on a SuSE Linux 9.1 
system. 
 
Best regards, 
Andy.


Re: [ADMIN] Pgadmin II works with postgre 7.4.7?

2005-03-15 Thread Wes Williams



Not stand-alone (it is web-based and requires PHP), but phpPgAdmin works pretty 
well.
 
http://phppgadmin.sourceforge.net/

  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Andrei Bintintan
  Sent: Tuesday, March 15, 2005 5:50 AM
  To: [email protected]
  Subject: [ADMIN] Pgadmin II works with postgre 7.4.7?
  Hi, 
   
  I upgraded from 7.4.2 to 7.4.7 and pgAdmin 
  II no longer connects. The error message is: "The database does not 
  exist on the server or user authentication failed." The settings are correct, I 
  can connect with pgAdmin III without any problem.

  Do you also know of any other Postgres admin/access 
  programs like pgAdmin? For me, pgAdmin II has some things I like better than pgAdmin 
  III. 

  I run the DB on a SuSE Linux 9.1 
  system. 
   
  Best regards, 
  Andy.


[ADMIN] grant with pl/pgsql script

2005-03-15 Thread Fred Blaise
Hello all

I am trying to grant privs to a user on all tables. I think I understood
there was no command to do that :// so I wrote the following:

create or replace function granting() RETURNS integer AS '
declare
v_schema varchar;
v_user varchar;
begin
v_user := "user"
v_schema := "public"
FOR t in select tablename from pg_tables where schemaname =
v_schema
LOOP
grant select on t to v_user;
END LOOP;
return 1;
end;
' LANGUAGE plpgsql;


I then log in to psql and do a \i myscript.sql. It returns CREATE
FUNCTION, but I cannot see anything. The tables are not granted, etc...
Also I am trying to find out how to debug this. How can I print out to
STDOUT the value of t for example?
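
A corrected sketch of what this function would need (the role name "myuser" is
only a placeholder): inside the single-quoted body every literal quote has to
be doubled, the loop variable has to be declared as a RECORD, the GRANT has to
go through EXECUTE because GRANT will not accept a variable as a table name,
and RAISE NOTICE is the usual way to print values while debugging:

create or replace function granting() RETURNS integer AS '
declare
    t record;
begin
    FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = ''public''
    LOOP
        -- show which table is being processed (useful for debugging)
        RAISE NOTICE ''granting select on %'', t.tablename;
        EXECUTE ''GRANT SELECT ON '' || quote_ident(t.tablename) || '' TO myuser'';
    END LOOP;
    return 1;
end;
' LANGUAGE plpgsql;

Note that \i only defines the function; it still has to be run explicitly,
e.g. with SELECT granting();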

Thanks for any help

Best,

fred




[ADMIN] How to format a date with a serial number for DEFAULT?

2005-03-15 Thread Michiel Lange
Hello list,
I am trying to create a table that should contain a number formatted 
this way: YYYYMMDD##

Where the hashes should be padded with '0'.
I have tried the following
template_test=# CREATE TEMP TABLE test (
template_test(# counter SERIAL,
template_test(# foobar CHAR(18)
template_test(# DEFAULT CAST(date_part('year',current_date) AS TEXT)
template_test(# || 
CAST(CAST(to_char(date_part('month',current_date),'00') AS INT) AS TEXT)
template_test(# || 
CAST(CAST(to_char(date_part('day',current_date),'00') AS INT) AS TEXT)
template_test(# || 
CAST(CAST(to_char(nextval('test_counter_seq'),'00') AS INT) AS 
TEXT),
template_test(# tekst TEXT);

This resulted in something almost good, but I lost the padding zeroes.
I got "20053151"
Without the many CAST's like this:
template_test=# CREATE TEMP TABLE test (
template_test(# counter SERIAL,
template_test(# foobar CHAR(18)
template_test(# DEFAULT CAST(date_part('year',current_date) AS TEXT)
template_test(# || to_char(date_part('month',current_date),'00')
template_test(# || to_char(date_part('day',current_date),'00')
template_test(# || 
to_char(nextval('test_counter_seq'),'00'),
template_test(# tekst TEXT);

Resulted in something almost right as well, but now to_char adds a space 
before each to_char result.
I would get a result like "2005 03 05 01"

What options do I have to get this straight?
Mind that I created TEMP tables to test how I should set my default 
value
TIA
Michiel

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [ADMIN] Performance Question

2005-03-15 Thread Brad Nicholson
Werner vd Merwe wrote:
Output of VACUUM ANALYSE VERBOSE pg_listener:
Query OK, 0 rows affected (0.06 sec)
INFO:  vacuuming "pg_catalog.pg_listener"
INFO:  "pg_listener": found 0 removable, 0 nonremovable row versions in 0
pages
INFO:  analyzing "pg_catalog.pg_listener"
INFO:  "pg_listener": 0 pages, 0 rows sampled, 0 estimated total rows
 

No problems there.
At any given time we have between 10 and 20 IDLE connections.
 

How long have the transactions been IDLE for?  Do the IDLE transactions 
have any locks on anything (check pg_locks)?

I suspect that (gently) killing the IDLE transactions  that are sitting 
around doing nothing will prevent you from having to restart the postmaster.
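
A sketch of the kind of check being suggested here, written against the
7.3/7.4-era system views (current_query is only populated when
stats_command_string is on, and the exact columns vary a little by release):

-- Which backends hold locks, and what they are (not) currently doing.
SELECT l.pid, l.relation::regclass AS locked_relation, l.mode, l.granted,
       a.usename, a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.pid;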

--
Brad Nicholson  
Database Administrator, Afilias Canada Corp. 


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Nicholson
Sent: 14 March 2005 07:10 PM
To: PgSQL Admin
Subject: Re: [ADMIN] Performance Question
I'm wondering if long running transactions might be the cause (you'll 
likely want to do this while performance is suffering).

Have a look at pg_stat_activity and see if there are any long running 
transactions (or any idle transactions).

I'd also be curious to see the output  of the following:
VACUUM ANALYZE VERBOSE pg_listener;
 


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] How to format a date with a serial number for DEFAULT?

2005-03-15 Thread Andrei Bintintan
CREATE TABLE test(
counter SERIAL,
foobar CHAR(100)
DEFAULT to_char(CURRENT_DATE, 'DDMM') || 
trim(to_char(nextval('test_counter_seq'),'00')),
tekst TEXT);

I don't know exactly why the whitespace is there, but the trim function takes 
it out.

Best regards,
Andy.
- Original Message - 
From: "Michiel Lange" <[EMAIL PROTECTED]>
To: 
Sent: Tuesday, March 15, 2005 5:27 PM
Subject: [ADMIN] How to format a date with a serial number for DEFAULT?


Hello list,
I am trying to create a table that should contain a number formatted this 
way: YYYYMMDD##

Where the hashes should be padded with '0'.
I have tried the following
template_test=# CREATE TEMP TABLE test (
template_test(# counter SERIAL,
template_test(# foobar CHAR(18)
template_test(# DEFAULT CAST(date_part('year',current_date) AS TEXT)
template_test(# || 
CAST(CAST(to_char(date_part('month',current_date),'00') AS INT) AS TEXT)
template_test(# || 
CAST(CAST(to_char(date_part('day',current_date),'00') AS INT) AS TEXT)
template_test(# || 
CAST(CAST(to_char(nextval('test_counter_seq'),'00') AS INT) AS 
TEXT),
template_test(# tekst TEXT);

This resulted in something almost good, but I lost the padding zeroes.
I got "20053151"
Without the many CAST's like this:
template_test=# CREATE TEMP TABLE test (
template_test(# counter SERIAL,
template_test(# foobar CHAR(18)
template_test(# DEFAULT CAST(date_part('year',current_date) AS TEXT)
template_test(# || to_char(date_part('month',current_date),'00')
template_test(# || to_char(date_part('day',current_date),'00')
template_test(# || 
to_char(nextval('test_counter_seq'),'00'),
template_test(# tekst TEXT);

Resulted in something almost right as well, but now to_char adds a space 
before each to_char
I would get a result like "2005 03 05 01"

What options do I have to get this straight?
Mind that I created TEMP tables to test how I should set my default 
value
TIA
Michiel

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [ADMIN] Performance problem...

2005-03-15 Thread Scott Marlowe
On Tue, 2005-03-15 at 02:59, Marcin Giedz wrote:
> On Monday, 14 March 2005 19:32, Scott Marlowe wrote:
> > On Mon, 2005-03-14 at 12:03, Marcin Giedz wrote:
> > > Hello...
> > >
> > >
> > > Our company is going to change SQL engine from MySQL to PSQL. Of course
> > > some performance problems occurred. Our server is Dual Xeon 3.0GHz + 8GB
> > > RAM + RAID1(software - two 146GB SCSI 15k) for sql data + RAID1(software
> > > - two 146GB SCSI 15k) for pg_xlog. Postgres.conf parameters are as
> > > follows:
> > >
> > > max_connections = 150
> > > shared_buffers = 50000  # min 16, at least max_connections*2, 8KB each
> > > work_mem = 2048         # min 64, size in KB
> >
> > 50,000 shared buffers may or may not be too much.  Try it at different
> > sizes from 5,000 or so up to 50,000 and find the "knee".  It's usually
> > closer to 10,000 than 50,000, but ymmv...
> 
> Playing with shared_buffers from 10000 to 50000 doesn't change anything in 
> the total time for this query :( But when I raise work_mem a little, to 10000, 
> total runtime decreases by about 10%. However, when I change 
> random_page_cost to 0.2 (I know that almost all papers say it should be 
> higher than 1.0), total runtime decreases almost 3 times, to about 900ms 
> - earlier, with random_page_cost=1.2, it took 2.7s. Is it reasonable to run 
> with random_page_cost at this value? 

If random_page_cost needs to be that low, then it's likely that the
query planner is either getting bad statistics and making a poor
decision, or that you've got a corner case that it just can't figure
out.  What does explain analyze say with
random_page_cost set to 1.2 and at 0.2?  Have you run analyze and vacuumed
full lately?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] Performance problem...

2005-03-15 Thread Marcin Giedz
On Tuesday, 15 March 2005 17:08, Scott Marlowe wrote:
> On Tue, 2005-03-15 at 02:59, Marcin Giedz wrote:
> > On Monday, 14 March 2005 19:32, Scott Marlowe wrote:
> > > On Mon, 2005-03-14 at 12:03, Marcin Giedz wrote:
> > > > Hello...
> > > >
> > > >
> > > > Our company is going to change SQL engine from MySQL to PSQL. Of
> > > > course some performance problems occurred. Our server is Dual Xeon
> > > > 3.0GHz + 8GB RAM + RAID1(software - two 146GB SCSI 15k) for sql data
> > > > + RAID1(software - two 146GB SCSI 15k) for pg_xlog. Postgres.conf
> > > > parameters are as follows:
> > > >
> > > > max_connections = 150
> > > > shared_buffers = 50000  # min 16, at least max_connections*2, 8KB each
> > > > work_mem = 2048         # min 64, size in KB
> > >
> > > 50,000 shared buffers may or may not be too much.  Try it at different
> > > sizes from 5,000 or so up to 50,000 and find the "knee".  It's usually
> > > closer to 10,000 than 50,000, but ymmv...
> >
> > Playing with shared_buffers from 10000 to 50000 doesn't change anything
> > in the total time for this query :( But when I raise work_mem a little,
> > to 10000, total runtime decreases by about 10%. However, when I
> > change random_page_cost to 0.2 (I know that almost all papers say it
> > should be higher than 1.0), total runtime decreases almost 3 times and
> > lasts about 900ms - earlier, with random_page_cost=1.2, it took 2.7s. Is it
> > reasonable to run with random_page_cost at this value?
>
> If random_page_cost needs to be that low, then it's likely that the
> query planner is either getting bad statistics and making a poor
> decision, or that you've got a corner case that it just can't figure
> out.  What does explain analyze say with
> random_page_cost set to 1.2 and at 0.2?  Have you run analyze and vacuumed
> full lately?
That can't be it - I've run vacuum full analyze and it didn't change 
anything ;)

Marcin


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [ADMIN] Performance problem...

2005-03-15 Thread Scott Marlowe
On Tue, 2005-03-15 at 10:17, Marcin Giedz wrote:
> On Tuesday, 15 March 2005 17:08, Scott Marlowe wrote:
> > On Tue, 2005-03-15 at 02:59, Marcin Giedz wrote:
> > > On Monday, 14 March 2005 19:32, Scott Marlowe wrote:
> > > > On Mon, 2005-03-14 at 12:03, Marcin Giedz wrote:
> > > > > Hello...
> > > > >
> > > > >
> > > > > Our company is going to change SQL engine from MySQL to PSQL. Of
> > > > > course some performance problems occurred. Our server is Dual Xeon
> > > > > 3.0GHz + 8GB RAM + RAID1(software - two 146GB SCSI 15k) for sql data
> > > > > + RAID1(software - two 146GB SCSI 15k) for pg_xlog. Postgres.conf
> > > > > parameters are as follows:
> > > > >
> > > > > max_connections = 150
> > > > > shared_buffers = 50000  # min 16, at least max_connections*2, 8KB each
> > > > > work_mem = 2048         # min 64, size in KB
> > > >
> > > > 50,000 shared buffers may or may not be too much.  Try it at different
> > > > sizes from 5,000 or so up to 50,000 and find the "knee".  It's usually
> > > > closer to 10,000 than 50,000, but ymmv...
> > >
> > > Playing with shared_buffers from 10000 to 50000 doesn't change anything
> > > in the total time for this query :( But when I raise work_mem a little,
> > > to 10000, total runtime decreases by about 10%. However, when I
> > > change random_page_cost to 0.2 (I know that almost all papers say it
> > > should be higher than 1.0), total runtime decreases almost 3 times and
> > > lasts about 900ms - earlier, with random_page_cost=1.2, it took 2.7s. Is it
> > > reasonable to run with random_page_cost at this value?
> >
> > If random_page_cost needs to be that low, then it's likely that the
> > query planner is either getting bad statistics and making a poor
> > decision, or that you've got a corner case that it just can't figure
> > out.  What does explain analyze say with
> > random_page_cost set to 1.2 and at 0.2?  Have you run analyze and vacuumed
> > full lately?
> That can't be it - I've run vacuum full analyze and it didn't change 
> anything ;)
> 

You might want to try adjusting these values to see if you can get the
query planner to choose the faster plan without dropping
random_page_cost to 0.2.  I.e. give the query planner candy and flowers,
don't just bonk it on the head with a big stick and drag it back home...

#cpu_tuple_cost = 0.01  # (same)
#cpu_index_tuple_cost = 0.001   # (same)
#cpu_operator_cost = 0.0025 # (same)

Does explain analyze show a big difference in expected and actual rows
returned for any of the parts of the query plan?
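
A sketch of trying those knobs per session rather than in postgresql.conf (the
values and the query below are only examples, to see whether the plan flips):

-- Keep random_page_cost reasonable and nudge the CPU cost estimates instead.
SET random_page_cost = 1.2;
SET cpu_tuple_cost = 0.05;          -- default 0.01; higher penalizes plans that touch many rows
SET cpu_index_tuple_cost = 0.001;   -- default
SET cpu_operator_cost = 0.0025;     -- default
EXPLAIN ANALYZE
SELECT * FROM big_table b JOIN other_table o ON o.id = b.other_id;   -- placeholder query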

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [ADMIN] How to format a date with a serial number for DEFAULT?

2005-03-15 Thread Robert Perry
I am far too lazy to bother actually trying it, but I believe prefixing 
your format string for the bigint returned by nextval with 'FM' will 
eliminate the need for the trim.
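
For illustration, a variant of Andrei's table using the FM (fill mode)
modifier instead of trim(); the 'YYYYMMDD' date picture and the six-digit
counter width are only assumptions about what the original poster wants:

CREATE TABLE test (
    counter SERIAL,
    -- FM suppresses the leading blank that to_char reserves for a sign,
    -- and '000000' zero-pads the counter to six digits.
    foobar CHAR(18) DEFAULT to_char(CURRENT_DATE, 'YYYYMMDD')
                    || to_char(nextval('test_counter_seq'), 'FM000000'),
    tekst TEXT
);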

On Mar 15, 2005, at 11:15 AM, Andrei Bintintan wrote:
CREATE TABLE test(
counter SERIAL,
foobar CHAR(18)
DEFAULT to_char(CURRENT_DATE, 'DDMM') ||
trim(to_char(nextval('test_counter_seq'),'00')),
tekst TEXT);
I don't know exactly why the white space is in, but the trim function 
takes
it out.

Best regards,
Andy.
- Original Message - From: "Michiel Lange" 
<[EMAIL PROTECTED]>
To: 
Sent: Tuesday, March 15, 2005 5:27 PM
Subject: [ADMIN] How to format a date with a serial number for DEFAULT?


Hello list,
I am trying to create a table that should contain a number formatted this
way: YYYYMMDD##

Where the hashes should be padded with '0'.
I have tried the following
template_test=# CREATE TEMP TABLE test (
template_test(# counter SERIAL,
template_test(# foobar CHAR(18)
template_test(# DEFAULT CAST(date_part('year',current_date) AS TEXT)
template_test(# ||
CAST(CAST(to_char(date_part('month',current_date),'00') AS INT) AS 
TEXT)
template_test(# ||
CAST(CAST(to_char(date_part('day',current_date),'00') AS INT) AS TEXT)
template_test(# ||
CAST(CAST(to_char(nextval('test_counter_seq'),'00') AS INT) AS
TEXT),
template_test(# tekst TEXT);

This resulted in something almost good, but I lost the padding zeroes.
I got "20053151"
Without the many CAST's like this:
template_test=# CREATE TEMP TABLE test (
template_test(# counter SERIAL,
template_test(# foobar CHAR(18)
template_test(# DEFAULT CAST(date_part('year',current_date) AS TEXT)
template_test(# || 
to_char(date_part('month',current_date),'00')
template_test(# || to_char(date_part('day',current_date),'00')
template_test(# ||
to_char(nextval('test_counter_seq'),'00'),
template_test(# tekst TEXT);

Resulted in something almost right as well, but now to_char adds a 
space
before each to_char
I would get a result like "2005 03 05 01"

What options do I have to get this straight?
Mind that I created TEMP tables to test how I should set my default
value
TIA
Michiel
---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


[ADMIN] Vacuum questions

2005-03-15 Thread Chris Hoover
Question on vacuuming.
When you do a vacuum, are the "freed" tuples available only to the 
table, or to the entire db, or to the entire cluster?

The reason I'm asking is that we are getting ready to perform a major 
upgrade to our application that involves adding some new fields to 
almost every table and populating them.  This is causing our test 
conversion db's to double in size (we are assuming that every update is 
causing a delete and re-insert behind the scenes due to the populating 
of the new columns).  Anyway, we are working on trying to find the 
fastest way to recover the space.
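
A sketch of the kind of before/after check that helps when chasing reclaimed
space (the table name is a placeholder; relpages counts disk blocks, typically
8 kB, and is only refreshed by VACUUM or ANALYZE):

-- Find the largest tables and indexes, then reclaim and re-check.
SELECT relname, relkind, relpages, reltuples
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY relpages DESC
LIMIT 10;

VACUUM FULL VERBOSE some_big_table;   -- placeholder table name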

We have also had one test occurrence where after the upgrade, a vacuum 
full would not recover the space.  However, when we bounced the 
postmaster, and then performed a vacuum full, the space was recovered.  
Any ideas on what might cause this?  This happened on a newly restored 
db that was converted and then vacuumed full.  There would have been no 
connections to the db after the conversion.

PG 7.3.4
RH 2.1
Thanks,
Chris
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


[ADMIN] FYI: Interview with Josh Berkus at Mad Penguin

2005-03-15 Thread Wes Williams
http://madpenguin.org/cms/html/62/3677.html

I for one think that Postgres 8.0 is great and that the largest hurdle for the 
project is the lack of [easy] developer tools that MySQL has.  This, in turn, 
seems to make learning Postgres and SQL more difficult for less experienced 
users who turn to MySQL for a quick solution.

Keep up the excellent work all!


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [ADMIN] FYI: Interview with Josh Berkus at Mad Penguin

2005-03-15 Thread Robert Perry
I am not certain that this is the correct forum for kudos, but after 
taking a long break from using PostgreSQL I am also delighted with how 
well it is doing.  I always hated the fact that it was missing 
tablespaces, and I am also excited about point-in-time backups.

Maybe it is just because I learned PostgreSQL first, but I have always 
felt much more comfortable with it than with MySQL.  But then, I 
also do not like to use GUI tools much.
On Mar 15, 2005, at 4:12 PM, Wes Williams wrote:

http://madpenguin.org/cms/html/62/3677.html
I for one think that Postgres 8.0 is great and that the largest hurdle 
for the project is the lack of [easy] developer tools that MySQL has.  
This in turn, seems to make learning Postgres and SQL more difficult 
for less experienced users that turn to MySQL for a quick solution.

Keep up the excellent work all!
---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
   (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])


Re: [ADMIN] Performance Question

2005-03-15 Thread Werner vd Merwe
Hi Brad,

Just before I carry on - I am not sure if top-posting is 'allowed' on this
list, if not, please let me know and I'll stop  :)

Will 'gently' be signal 15 (SIGTERM)?

There are a couple of locks, both exclusive and shared... 


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Brad Nicholson
Sent: 15 March 2005 05:33 PM
To: [email protected]
Subject: Re: [ADMIN] Performance Question

Werner vd Merwe wrote:

>Output of VACUUM ANALYSE VERBOSE pg_listener:
>
>Query OK, 0 rows affected (0.06 sec)
>INFO:  vacuuming "pg_catalog.pg_listener"
>INFO:  "pg_listener": found 0 removable, 0 nonremovable row versions in 0
>pages
>INFO:  analyzing "pg_catalog.pg_listener"
>INFO:  "pg_listener": 0 pages, 0 rows sampled, 0 estimated total rows
>
>  
>
No problems there.

>At any given time we have between 10 and 20 IDLE connections.
>
>  
>
How long have the transactions been IDLE for?  Do the IDLE transactions 
have any locks on anything (check pg_locks)?

I suspect that (gently) killing the IDLE transactions  that are sitting 
around doing nothing will prevent you from having to restart the postmaster.

-- 
Brad Nicholson  
Database Administrator, Afilias Canada Corp. 



>-Original Message-
>From: [EMAIL PROTECTED]
>[mailto:[EMAIL PROTECTED] On Behalf Of Brad Nicholson
>Sent: 14 March 2005 07:10 PM
>To: PgSQL Admin
>Subject: Re: [ADMIN] Performance Question
>
>I'm wondering if long running transactions might be the cause (you'll 
>likely want to do this while performance is suffering).
>
>Have a look at pg_stat_activity and see if there are any long running 
>transactions (or any idle transactions).
>
>I'd also be curious to see the output  of the following:
>
>VACUUM ANALYZE VERBOSE pg_listener;
>
>  
>



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [ADMIN] Performance Question

2005-03-15 Thread Werner vd Merwe


-Original Message-
From: weiping [mailto:[EMAIL PROTECTED] 
Sent: 15 March 2005 05:55 PM
To: Werner vd Merwe
Cc: [email protected]
Subject: Re: [ADMIN] Performance Question

What's your JDBC version?
If it's pretty old, then upgrading to the newest one is a good bet.
I don't know if it will solve the problem, but old
versions of the JDBC driver did have some problems with transaction
handling; we experienced such a problem not so long
ago.

regards laser

-- 
Hi,

We are using JDK1.4.2_06

Regards
Werner 



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [ADMIN] Vacuum questions

2005-03-15 Thread Tom Lane
"Chris Hoover" <[EMAIL PROTECTED]> writes:
> We have also had one test occurrence where after the upgrade, a vacuum 
> full would not recover the space.  However, when we bounced the 
> postmaster, and then performed a vacuum full, the space was recovered.  
> Any ideas on what might cause this?

Most likely, you had an open transaction lurking that was old enough
that it could still "see" the deleted data, and so VACUUM couldn't
safely reclaim the data.

VACUUM's decisions about this are cluster-wide; so even though you'd
recently created the database in question, a long-running transaction
in another database under the same postmaster could still cause the
problem.

regards, tom lane
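
A quick way to look for such lurking transactions, sketched against the
7.3/7.4-era system views (current_query is only populated when
stats_command_string is on, and the exact columns vary a little by release):

-- Backends sitting inside an open transaction doing nothing.
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query LIKE '%IDLE%in transaction%';

-- The same state is also visible from the shell in the process titles:
--   ps auxww | grep "idle in transaction"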

---(end of broadcast)---
TIP 8: explain analyze is your friend