[GENERAL] Error accessing db with psql

2007-12-06 Thread Paul Lambert

I wrote some records to a database to do some testing, which worked:

AutoDRS=# insert into job_classification (dealer_id,date_changed,time_changed,job_id) VALUES ('F65','1-Jul-2007','00:00',generate_series(1,100));
INSERT 0 100
AutoDRS=# insert into job_classification (dealer_id,date_changed,time_changed,job_id) VALUES ('F65','1-Jul-2007','00:00',generate_series(200,900));
INSERT 0 701

But then I immediately went to select from the table:

AutoDRS=# select count(*) from job_classification;
WARNING:  could not write block 51773 of 16441/16443/16907
DETAIL:  Multiple failures --- write error might be permanent.
ERROR:  could not open relation 16441/16443/16907: No such file or directory
CONTEXT:  writing block 51773 of relation 16441/16443/16907
AutoDRS=# select * from job_classification;
ERROR:  could not open relation 16441/16443/2662: No such file or directory
AutoDRS=# vacuum full analyze;
ERROR:  could not open relation 16441/16443/16907: No such file or directory
CONTEXT:  writing block 51776 of relation 16441/16443/16907

Now I can't even change to the main postgres database...
AutoDRS=# \c postgres
FATAL:  could not open relation 16441/16443/16700: Permission denied
CONTEXT:  writing block 56798 of relation 16441/16443/16700
Previous connection kept
AutoDRS=# \q

And now I can't even reconnect at all...

C:\Program Files\PostgreSQL\8.3\bin>psql -U postgres
Password for user postgres:
psql: FATAL:  could not open relation 16441/16443/16907: No such file or directory
CONTEXT:  writing block 51779 of relation 16441/16443/16907

My pg_log is full of similar messages:

T:2007-12-07 15:45:52.781 WST D: U: >>>ERROR:  could not open relation 16441/16443/16700: No such file or directory
T:2007-12-07 15:45:52.781 WST D: U: >>>CONTEXT:  writing block 56799 of relation 16441/16443/16700
T:2007-12-07 15:45:52.781 WST D: U: >>>WARNING:  could not write block 56799 of 16441/16443/16700
T:2007-12-07 15:45:52.781 WST D: U: >>>DETAIL:  Multiple failures --- write error might be permanent.
T:2007-12-07 15:45:53.781 WST D: U: >>>ERROR:  could not open relation 16441/16443/16700: No such file or directory
T:2007-12-07 15:45:53.781 WST D: U: >>>CONTEXT:  writing block 56799 of relation 16441/16443/16700
T:2007-12-07 15:45:53.781 WST D: U: >>>WARNING:  could not write block 56799 of 16441/16443/16700
T:2007-12-07 15:45:53.781 WST D: U: >>>DETAIL:  Multiple failures --- write error might be permanent.
T:2007-12-07 15:45:54.781 WST D: U: >>>ERROR:  could not open relation 16441/16443/16700: No such file or directory
T:2007-12-07 15:45:54.781 WST D: U: >>>CONTEXT:  writing block 56799 of relation 16441/16443/16700
T:2007-12-07 15:45:54.781 WST D: U: >>>WARNING:  could not write block 56799 of 16441/16443/16700
T:2007-12-07 15:45:54.781 WST D: U: >>>DETAIL:  Multiple failures --- write error might be permanent.


I assume I somehow fubar'd something.

I've checked permissions on all the files and the postgres user that the 
service runs under has full control of all directories.


One thing I did do was create a directory called pg_xlog on a different 
disk, shut down pg, copy all the files from the existing pg_xlog under 
the data directory, then delete that directory and recreate it as a 
junction point. At that stage I had neglected to create the 
archive_status directory, which pg did start complaining about:


T:2007-12-07 15:00:43.718 WST D: U: >>>FATAL:  could not open archive status directory "pg_xlog/archive_status": No such file or directory
T:2007-12-07 15:00:43.718 WST D: U: >>>LOG:  archiver process (PID 7432) exited with exit code 1
T:2007-12-07 15:01:23.484 WST D: U: >>>FATAL:  could not open archive status directory "pg_xlog/archive_status": No such file or directory
T:2007-12-07 15:01:23.484 WST D: U: >>>LOG:  archiver process (PID 5444) exited with exit code 1


It was creating files in the pg_xlog directory, and after creating the 
archive_status directory it is now archiving them off to my WAL_Archive 
directory.


So... what have I done? Why did my insert statement claim it inserted all 
the records when the logfile tells me it couldn't write the blocks?


This is a fresh install of pg 8.3-beta4 on weendoze server 2003 (new 
server too, no pre-existing database system) so I can just blast it and 
restart, but I'm curious to know what I accidentally deleted since I'm 
99.41421356237309504% sure I didn't delete anything other than some 
files out of my WAL_Archive directory.


--
Paul Lambert
Database Administrator
AutoLedgers - A Reynolds & Reynolds Company


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Q: using generate_series to fill in the blanks

2007-12-06 Thread Ow Mun Heng

On Thu, 2007-12-06 at 23:06 -0500, Rodrigo De León wrote:
> On Dec 6, 2007 10:44 PM, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> > I've got a desired output which looks something like this..
> >
> >  vdt | count
> > -----+-------
> >    1 |   514
> >    2 |    27
> >    3 |    15
> >    4 |      
> >    5 |    12
> >    6 |    15
> 
> SELECT i.i AS vdt,
>CASE
>  WHEN COUNT(vdt)=0 THEN NULL
>  ELSE COUNT(vdt)
>END AS COUNT
> FROM generate_series (1, 7) i
>  LEFT JOIN footable f ON i.i = f.vdt AND c_id = '71'
> GROUP BY i.i
> ORDER BY i.i;

This is _way_ cool. Thanks. However I still have some additional
questions.

as individual c_ids:
 vdt | c_id | count
-----+------+-------
   1 |   71 |   533
   2 |   71 |    30
   3 |   71 |    15
   4 |   71 |    10
   5 |   71 |    12
   6 |   71 |    15
   7 |      |      

 vdt | c_id | count
-----+------+-------
   1 |   48 |   217
   2 |   48 |    86
   3 |   48 |    46
   4 |   48 |    50
   5 |   48 |     4
   6 |      |      
   7 |      |      

select i.i as vdt,dcm_evaluation_code as c_id
, case when count(vdt_format) = 0 then NULL else count(vdt_format) end
as count
from generate_series(1,7) i
left join footable f
on i.i = f.vdt_format
and c_id in ('71','48')
group by c_id, i.i
order by c_id,i.i;


When Joined into 1 query
 vdt | c_id  | count
-----+-------+-------
   1 | HMK71 |   533
   2 | HMK71 |    30
   3 | HMK71 |    15
   4 | HMK71 |    10
   5 | HMK71 |    12
   6 | HMK71 |    15   << What happened to 7?
   1 | HML48 |   217
   2 | HML48 |    86
   3 | HML48 |    46
   4 | HML48 |    50
   5 | HML48 |     4
   7 |       |      

Additionally, if you don't mind: when I substitute

-->and c_id = '71'

with

--> where c_id = '71'

the NULLs also disappear.

In any case, it seems to be working for _single_ c_id clauses..
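
For what it's worth, one way to keep a row for every vdt under every c_id
is to cross join the series with the list of codes before the left join.
A sketch only (untested, reusing the footable / vdt_format / c_id names
from the query above):

select i.i as vdt,
       c.c_id,
       nullif(count(f.vdt_format), 0) as count
from generate_series(1,7) as i(i)
     cross join (values ('71'), ('48')) as c(c_id)
     left join footable f
            on i.i = f.vdt_format
           and f.c_id = c.c_id
group by c.c_id, i.i
order by c.c_id, i.i;

That way the missing vdt values get their own NULL row for each c_id
instead of a single unattached row at the end.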









---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Q: using generate_series to fill in the blanks

2007-12-06 Thread Rodrigo De León
On Dec 6, 2007 10:44 PM, Ow Mun Heng <[EMAIL PROTECTED]> wrote:
> I've got a desired output which looks something like this..
>
>  vdt | count
> -----+-------
>    1 |   514
>    2 |    27
>    3 |    15
>    4 |      
>    5 |    12
>    6 |    15

SELECT i.i AS vdt,
   CASE
 WHEN COUNT(vdt)=0 THEN NULL
 ELSE COUNT(vdt)
   END AS COUNT
FROM generate_series (1, 7) i
 LEFT JOIN footable f ON i.i = f.vdt AND c_id = '71'
GROUP BY i.i
ORDER BY i.i;

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


[GENERAL] Q: using generate_series to fill in the blanks

2007-12-06 Thread Ow Mun Heng
I've got a desired output which looks something like this..

 vdt | count
-----+-------
   1 |   514
   2 |    27
   3 |    15
   4 |      
   5 |    12
   6 |    15


the query in psql is something like this..

select vdt, count(*) from footable where c_id = '71' group by vdt order
by vdt

problem is... since there's no data whatsoever for vdt=4, I get this:

 vdt | count
-----+-------
   1 |   514
   2 |    27
   3 |    15
   5 |    12
   6 |    15

I tried to use generate_series


select generate_series(1,7,1), count(*) from footable where c_id = '71'
group by generate_series(1,7,1),vdt order by generate_series(1,7,1);

(note: the vdt values run in sequence from 1 to 7)

 generate_series | count
-----------------+-------
               1 |   514
               1 |    27
               1 |    15
               1 |    12
               1 |    15
               2 |   514
               2 |    27
               2 |    15
               2 |    12
               2 |    15
               3 |   514
               3 |    27
               3 |    15
               3 |    12
               3 |    15
               4 |   514
               4 |    27
               4 |    15
               4 |    12
               4 |    15
.
[snip]
.

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Continual Postgres headaches...

2007-12-06 Thread Tom Lane
"Weber, Geoffrey M." <[EMAIL PROTECTED]> writes:
> My problems really are with performance consistency.  I have tweaked the
> execution so that everything should run with sub-second execution times, but
> even after everything is running well, I can get at most a week or two of
> steady running before things start to degrade.

Aside from the other responses: gradual degradation over a period of
time sounds like a bloat problem to me.  You should monitor table and
index sizes to confirm or deny that and find out just what's bloating.
Switching to autovacuum, or a better-designed manual vacuum schedule,
might be the ultimate answer, but first you need some concrete data
about what's going wrong.  Otherwise you're just shooting in the dark.
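
As a sketch (the 'public' schema and the LIMIT are just examples), a
periodic query like this makes growth easy to spot:

  SELECT c.relname, c.relkind,
         pg_size_pretty(pg_relation_size(c.oid)) AS size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE n.nspname = 'public'
    AND c.relkind IN ('r', 'i')
  ORDER BY pg_relation_size(c.oid) DESC
  LIMIT 20;

Log the output somewhere and compare it from one day to the next.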

regards, tom lane

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Understanding how partial indexes work?

2007-12-06 Thread Tom Lane
"Chris Velevitch" <[EMAIL PROTECTED]> writes:
> On Dec 7, 2007 2:39 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
>> Is it worth the trouble?  You didn't mention the statistics involved,
>> but ordinarily I'd think a non-equal condition is too nonselective
>> to justify the cost of maintaining an extra index.

> Yes, I did neglect to mention the stats. Y is the status of the record,
> which represents the life cycle of the record, starting at zero and
> ending up at k3. So basically the query would be retrieving, and the
> partial index would be indexing, records which aren't at their end of
> life. So the number of records where Y <> k3 will be low and the number
> of records where Y = k3 will be growing over time.

OK, in that case the partial index might be worth trying.

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Understanding how partial indexes work?

2007-12-06 Thread Chris Velevitch
On Dec 7, 2007 2:39 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> "Chris Velevitch" <[EMAIL PROTECTED]> writes:
> > I have a query on a table:-
> >  X between k1 and k2 or X < k1 and Y <> k3
>
> > where k1, k2, k3 are constants.
>
> > How would this query work, if I created an index on X and a partial
> > index on X where Y <> k3?
>
> Is it worth the trouble?  You didn't mention the statistics involved,
> but ordinarily I'd think a non-equal condition is too nonselective
> to justify the cost of maintaining an extra index.

Yes, I did neglect to mention the stats. Y is the status of the record,
which represents the life cycle of the record, starting at zero and
ending up at k3. So basically the query would be retrieving, and the
partial index would be indexing, records which aren't at their end of
life. So the number of records where Y <> k3 will be low and the number
of records where Y = k3 will be growing over time. So the ratio of Y <>
k3 to Y = k3 will be getting smaller over time.

> The real problem here is that "X < k1" is probably too nonselective
> as well, which will likely lead the planner to decide that a plain
> seqscan is the cheapest solution.  In principle the above could be
> done with a BitmapOr of two indexscans on X, but unless the constants
> are such that only a small fraction of rows are going to be selected,
> it's likely that a seqscan is the way to go.

How would that compare to doing a union of the results of two queries:-

 X between k1 and k2 UNION X where X < k1 and Y <> k3

where there is an index on X and a partial index on X where Y <> k3
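
Spelled out as real SQL (a sketch only; the table name t and the k1/k2/k3
constants are placeholders):

CREATE INDEX t_x_idx ON t (x);
CREATE INDEX t_x_live_idx ON t (x) WHERE y <> k3;  -- the partial index

SELECT * FROM t WHERE x BETWEEN k1 AND k2
UNION
SELECT * FROM t WHERE x < k1 AND y <> k3;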


Chris
-- 
Chris Velevitch
Manager - Sydney Flash Platform Developers Group
m: 0415 469 095
www.flashdev.org.au

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] storage size of "bit" data type..

2007-12-06 Thread Decibel!

On Dec 6, 2007, at 5:10 PM, Gregory Stark wrote:
> It needs to store the number of bits present as well. Otherwise it
> wouldn't be able to tell apart B'1' and B'01' ... B'0001'
> ...
> Only in the sense that numeric also has to store some meta data as
> well like the weight and display precision.



Hrm... perhaps that's another worthwhile target for the varvarlena  
technique...

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] storage size of "bit" data type..

2007-12-06 Thread Decibel!

On Dec 6, 2007, at 5:19 PM, Vyacheslav Kalinin wrote:
>> It needs to store the number of bits present as well
>
> Couldn't that be reduced to 1 byte that'd say how many bits count
> in the last byte?
>
>> Only in the sense that numeric also has to store some meta data
>> as well like the weight and display precision.
>
> Is it really necessary to store display precision when it can be
> taken from the table column definition?



Two problems...

1) CREATE TABLE n(n numeric);

2) The knowledge of extra type information (i.e. the numbers in char()
or numeric()) doesn't extend deeply enough into the code. This is part
of why char() uses the exact same storage mechanism as varchar().

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Understanding how partial indexes work?

2007-12-06 Thread Chris Velevitch
On Dec 7, 2007 2:39 AM, Scott Marlowe <[EMAIL PROTECTED]> wrote:
> On Dec 6, 2007 1:44 AM, Chris Velevitch <[EMAIL PROTECTED]> wrote:
> > I have a query on a table:-
> >
> >  X between k1 and k2 or X < k1 and Y <> k3
> >
> > where k1, k2, k3 are constants.
> >
> > How would this query work, if I created an index on X and a partial
> > index on X where Y <> k3?
>
> Ummm.  Using AND and OR in the same where clause without parenthesis
> is a bad idea, as the logic you might think you're expressing isn't
> the exact logic you're actually expressing.  Do you mean:

Are you saying that operator precedence doesn't apply here?

When else does operator precedence not apply and where is this documented?

According to the documentation
(http://www.postgresql.org/docs/7.4/interactive/sql-syntax.html#SQL-PRECEDENCE)

 X between k1 and k2 or X < k1 and Y <> k3

is the same as:-

 (X between k1 and k2) or ((X < k1) and (Y <> k3))

which is what I want.


Chris
-- 
Chris Velevitch
Manager - Sydney Flash Platform Developers Group
m: 0415 469 095
www.flashdev.org.au

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] storage size of "bit" data type..

2007-12-06 Thread Gregory Stark
"Decibel!" <[EMAIL PROTECTED]> writes:

> On Dec 5, 2007, at 7:23 PM, Michael Glaesemann wrote:
>> On Dec 5, 2007, at 14:19 , Alex Mayrhofer wrote:
>>> i'm trying to find out the storage size for bit(n) data. My  initial
>>> assumption would be that for any 8 bits, one byte of  storage is required.
>>
>> select pg_column_size(B'1') as "1bit",
>>        pg_column_size(B'1111') as "4bits",
>>        pg_column_size(B'11111111') as "1byte",
>>        pg_column_size(B'111111111111') as "12bits",
>>        pg_column_size(B'1111111111111111') as "2bytes",
>>        pg_column_size(B'11111111111111111') as "17bits",
>>        pg_column_size(B'111111111111111111111111') as "3bytes";
>>  1bit | 4bits | 1byte | 12bits | 2bytes | 17bits | 3bytes
>> ------+-------+-------+--------+--------+--------+--------
>>     9 |     9 |     9 |     10 |     10 |     11 |     11
>> (1 row)
>>
>> Looks like there's 8 bytes of overhead as well, probably because a  bit
>> string is a varlena type.
>
> Wow, that's screwed up... that's a lot more than varlena overhead:

It needs to store the number of bits present as well. Otherwise it wouldn't be
able to tell apart B'1' and B'01' ... B'0001'

> select pg_column_size('a'::text), pg_column_size(1::numeric),
> pg_column_size(3111234::numeric);
>  pg_column_size | pg_column_size | pg_column_size
> ----------------+----------------+----------------
>               5 |             10 |             12
>
> Apparently it's something related to numeric.

Only in the sense that numeric also has to store some meta data as well like
the weight and display precision.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Replication Monitoring

2007-12-06 Thread Alvaro Herrera
Glyn Astill wrote:
> How did that happen? The subject is totally different, so is the
> body.

It has an "In-Reply-To:" and possibly "References:" header which relates
it to the other thread.

The solution is simple.  Don't reply to an existing message when you
want to post a new thread.  Compose a new one instead.

-- 
Alvaro Herrera  Developer, http://www.PostgreSQL.org/
"Aprender sin pensar es inútil; pensar sin aprender, peligroso" (Confucio)

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ldap authentication allows logon with blank password

2007-12-06 Thread lighthouse . software
After some investigation into OpenLDAP I discovered a post that
states:

"A bind with a DN but with an empty password is equivalent to an
anonymous
bind, while a bind with a DN and with a wrong password is not;"

So could this cause a blank password to allow access to the database
as the LDAP server
is successfully connecting anonymously?

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Replication Monitoring

2007-12-06 Thread Glyn Astill
[posted again as it found its way into another thread]

Hi people,

I intend to set up two slave servers, one using WAL shipping and one
using Slony I.

Are there any good tools, or scripts that'll help us check that both
replication methods are working?

I know there's Nagios - but what does this actually allow us to
monitor?

Also, if I want to make backups from the slaves to save master
downtime / load, what are my options?

Thanks


Glyn Astill






---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Replication Monitoring

2007-12-06 Thread Glyn Astill
How did that happen? The subject is totally different, so is the
body.

This is shit.

--- Gregory Stark <[EMAIL PROTECTED]> wrote:

> "Glyn Astill" <[EMAIL PROTECTED]> writes:
> 
> > Hi people,
> >
> > I intend to set up two slave servers, one using WAL shipping and
> one
> > using Slony I.
> 
> This has nothing to do with "aggregate and ordering" the subject of
> the
> message to which you're replying. You're more likely to see
> responses if you
> post in a new thread.
> 
> -- 
>   Gregory Stark
>   EnterpriseDB  http://www.enterprisedb.com
>   Get trained by Bruce Momjian - ask me about EnterpriseDB's
> PostgreSQL training!
> 
> ---(end of
> broadcast)---
> TIP 4: Have you searched our list archives?
> 
>http://archives.postgresql.org/
> 



Glyn Astill






---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] storage size of "bit" data type..

2007-12-06 Thread Decibel!

On Dec 5, 2007, at 7:23 PM, Michael Glaesemann wrote:

On Dec 5, 2007, at 14:19 , Alex Mayrhofer wrote:
i'm trying to find out the storage size for bit(n) data. My  
initial assumption would be that for any 8 bits, one byte of  
storage is required.


select pg_column_size(B'1') as "1bit",
       pg_column_size(B'1111') as "4bits",
       pg_column_size(B'11111111') as "1byte",
       pg_column_size(B'111111111111') as "12bits",
       pg_column_size(B'1111111111111111') as "2bytes",
       pg_column_size(B'11111111111111111') as "17bits",
       pg_column_size(B'111111111111111111111111') as "3bytes";
 1bit | 4bits | 1byte | 12bits | 2bytes | 17bits | 3bytes
------+-------+-------+--------+--------+--------+--------
    9 |     9 |     9 |     10 |     10 |     11 |     11
(1 row)

Looks like there's 8 bytes of overhead as well, probably because a  
bit string is a varlena type.


Wow, that's screwed up... that's a lot more than varlena overhead:

select pg_column_size('a'::text), pg_column_size(1::numeric),
pg_column_size(3111234::numeric);

 pg_column_size | pg_column_size | pg_column_size
----------------+----------------+----------------
              5 |             10 |             12

Apparently it's something related to numeric.
--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Determining current block size?

2007-12-06 Thread John Wells
On 12/6/07, Gregory Stark <[EMAIL PROTECTED]> wrote:
> "John Wells" <[EMAIL PROTECTED]> writes:
>
> > I see that BLOCK_SIZE can be set at compile time, but is there a way
> > to determine what block size is in use in a running system? I've been
> > searching but have been unsuccessful so far.
>
> postgres=# show block_size;
>  block_size
> ------------
>        8192
> (1 row)

Well *that* was unexpectedly easy.

Thanks!
John

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Determining current block size?

2007-12-06 Thread Gregory Stark
"John Wells" <[EMAIL PROTECTED]> writes:

> I see that BLOCK_SIZE can be set at compile time, but is there a way
> to determine what block size is in use in a running system? I've been
> searching but have been unsuccessful so far.

postgres=# show block_size;
 block_size 
------------
       8192
(1 row)

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Replication Monitoring

2007-12-06 Thread Gregory Stark
"Glyn Astill" <[EMAIL PROTECTED]> writes:

> Hi people,
>
> I intend to set up two slave servers, one using WAL shipping and one
> using Slony I.

This has nothing to do with "aggregate and ordering" the subject of the
message to which you're replying. You're more likely to see responses if you
post in a new thread.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL 
training!

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Rules slower than Dynamic SQL ?

2007-12-06 Thread Decibel!

On Nov 26, 2007, at 6:09 PM, Simon Riggs wrote:
> On Mon, 2007-11-26 at 16:01 -0800, Alex Vinogradovs wrote:
>> I've got a data warehouse with pretty high rate of insert into
>> partitioned tables. What I've noticed is that rule-based partitioning
>> seems to be somewhat slower than insertions made directly into
>> partitions through execution of dynamic SQL.
>>
>> Is it really true?
>
> Depends how complex your rules are, but I think yes is the short answer.


My understanding is that each rule has to re-evaluate the query  
that's being run. That's going to add some overhead to a simple  
INSERT INTO table VALUES ... but if you think that's bad, try it with  
an expensive INSERT INTO table SELECT statement!

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] Calculation for Max_FSM_pages : Any rules of thumb?

2007-12-06 Thread Decibel!

On Nov 20, 2007, at 6:14 PM, Ow Mun Heng wrote:

On Mon, 2007-11-19 at 08:24 -0500, Bill Moran wrote:

In response to Ow Mun Heng <[EMAIL PROTECTED]>:


Even with the regular vacuuming and even a vacuum full (on my test DB)
I still see that perhaps something is wrong (from the below).

(I got this gem from the mailing list archives)
hmxmms=> SELECT
c.relname,
c.reltuples::bigint as rowcnt,
pg_stat_get_tuples_inserted(c.oid) AS inserted,
pg_stat_get_tuples_updated(c.oid) AS updated,
pg_stat_get_tuples_deleted(c.oid) AS deleted
FROM pg_class c
WHERE c.relkind = 'r'::"char"
GROUP BY c.oid, c.relname, c.reltuples
HAVING pg_stat_get_tuples_updated(c.oid) +
pg_stat_get_tuples_deleted(c.oid) > 1000
ORDER BY pg_stat_get_tuples_updated(c.oid) +
pg_stat_get_tuples_deleted(c.oid) DESC;
    relname    |  rowcnt  | inserted | updated | deleted
---------------+----------+----------+---------+----------
 tst_r         | 11971691 |        0 |       0 | 22390528  <--
 pg_statistic  |     1465 |      280 |    7716 |      153
 dr_ns         |  2305571 |     1959 |       0 |     1922
 pg_attribute  |     3787 |     1403 |     184 |     1292

No matter how many times I vacuum/full, the deleted number still doesn't
go down.


Are you sure you're interpreting that number correctly?  I took it to
mean a counter of the number of delete operations since server start.


Actually, it's not on server start; it's on stats reset. Which can  
happen at server start depending on your config.


You are right. This is definitely a snafu in my interpretation. After I
restarted PG on the laptop, the numbers went away. So then I'm confused
as to why the above "gem" was provided as a means to see which tables
need more vacuuming.



By itself it doesn't help; you need to track how many rows have been  
updated or deleted since the last time you vacuumed. That, along with  
the rowcount, will give you an idea of how much of the table is dead  
space.
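
For example (a rough sketch using only the per-table stats views),
snapshot this right after a vacuum and watch how much the churn column
grows relative to the row count:

SELECT s.relname,
       c.reltuples::bigint AS rowcnt,
       s.n_tup_upd + s.n_tup_del AS churn_since_stats_reset
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
ORDER BY churn_since_stats_reset DESC;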

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [GENERAL] SQL design pattern for a delta trigger?

2007-12-06 Thread Erik Jones


On Dec 6, 2007, at 2:36 PM, Ted Byers wrote:


IS there such a thing?  I can be the first to consider
this.



What I am aiming for is a solution with a couple
coupled tables, one of which represents state through
time and the other represents transactions or deltas
on the state.  With one field (a floating point
number) in the state table (or should I say a number
for each id field), it obviously has a time interval
for which it is valid: a start time and an end time.

What I am after is a situation where the moment a
record is inserted in the deltas table, a trigger
function first looks to see if the id provided
presently has a state in the state table.  If not,
then it creates one.  Then, the  end date for that
state record gets set to the current time and a new
record is inserted with the new state (computed by
applying the delta to the value in the previous record
for the state), the current date as the start date and
null for the end date.

This seems like an obvious thing to try, but I am
floundering a little and am therefore wondering if
anyone has seen an SQL design pattern that talks about
this, and an url where I can see such a discussion
and, better, an example.  The first concern is to
ensure that every record inserted into the deltas
table is immediately reflected in the state table, and
the second is that the history of state can be
reconstructed from a suitable query on the state
table.

I can do this easily in client code, but isn't this
the sort of thing best suited to living within the
database itself?


What you want to do here for handling the update v. insert is called  
an "UPSERT".  Basically, what you do is run the update as if the row  
exists and catch the exception that is thrown if it doesn't at which  
point you insert the record with the end date = now().  After that  
you can proceed normally with creating the new record with start date  
= now() and end date = NULL.
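
A rough PL/pgSQL sketch of that flow (all of the names -- state, deltas,
id, value, delta, start_ts, end_ts -- are invented for illustration, and
this variant checks FOUND rather than trapping an exception):

CREATE OR REPLACE FUNCTION apply_delta() RETURNS trigger AS $$
BEGIN
    -- Close out the current state row, if there is one.
    UPDATE state SET end_ts = now()
     WHERE id = NEW.id AND end_ts IS NULL;

    IF NOT FOUND THEN
        -- No prior state: seed one so there is a value to apply the delta to.
        INSERT INTO state (id, value, start_ts, end_ts)
        VALUES (NEW.id, 0, now(), now());
    END IF;

    -- Open the new state row by applying the delta to the latest value.
    INSERT INTO state (id, value, start_ts, end_ts)
    SELECT id, value + NEW.delta, now(), NULL
      FROM state
     WHERE id = NEW.id
     ORDER BY start_ts DESC
     LIMIT 1;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER deltas_apply_delta
    AFTER INSERT ON deltas
    FOR EACH ROW EXECUTE PROCEDURE apply_delta();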


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com



---(end of broadcast)---
TIP 6: explain analyze is your friend


[GENERAL] Determining current block size?

2007-12-06 Thread John Wells
I see that BLOCK_SIZE can be set at compile time, but is there a way
to determine what block size is in use in a running system? I've been
searching but have been unsuccessful so far.

Thanks!
John

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Nested loop in simple query taking long time

2007-12-06 Thread Alvaro Herrera
Gauthier, Dave wrote:
> Future Enhancement?
> If the column's new value can fit in the space already being used by the
> existing value, just change the column value in place and leave the
> record alone.  Would reduce the need for vacuum in many cases.

That's in 8.3.  Not exactly like that (because it doesn't work in
Postgres due to MVCC), but something close.  Look for HOT in release
notes.

-- 
Alvaro Herrera   Valdivia, Chile   ICBM: S 39º 49' 18.1", W 73º 13' 56.4"
Syntax error: function hell() needs an argument.
Please choose what hell you want to involve.

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] SQL design pattern for a delta trigger?

2007-12-06 Thread Ted Byers

--- Ted Byers <[EMAIL PROTECTED]> wrote:

> IS there such a thing?  I can be the first to
> consider
> this.
> 
OOPS.  The mind is faster than the fingers.  That
should have been "I can NOT be the first to consider
this."

Ted

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Replication Monitoring

2007-12-06 Thread Glyn Astill
Hi people,

I intend to set up two slave servers, one using WAL shipping and one
using Slony I.

Are there any good tools, or scripts that'll help us check that both
replication methods are working?

I know there's Nagios - but what does this actually allow us to
monitor?

Also, if I want to make backups from the slaves to save master
downtime / load, what are my options?

Thanks






---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] aggregate and order by

2007-12-06 Thread Gregory Stark
"Matthew Dennis" <[EMAIL PROTECTED]> writes:

> So, my question is if I can have PostgreSQL honor order by clauses such as:
>
> select trip_id, avg_vel(position, pos_time) 
>   from (select position, pos_time, trip_id from data order by pos_time) 
> sorted_data
>
> Would this in fact guarantee that the rows are passed into the aggregate in
> the order specified?

Yes. AFAIK this isn't covered by the spec but it works in Postgres and we know
there are people depending on it so we wouldn't break it without a big notice
and presumably some replacement.

> Other suggestions/discussions/questions/etc are welcome.

Good luck, this looks pretty painful to get right. Keep in mind you have to
keep all your state in the state data. If you keep a temporary variable
outside that data then your function won't work if it's called twice in the
same query like "select avg_vel(position, pos_time), avg_vel(position2,
pos_time2) from ..."
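
For illustration only (avg_vel isn't defined anywhere in this thread), a
sketch that keeps all of its state in a float8[] of {prev_pos, prev_epoch,
total_dist, total_secs}:

CREATE FUNCTION avg_vel_step(float8[], integer, timestamp)
RETURNS float8[] AS $$
  SELECT CASE
           WHEN $1 IS NULL THEN
             ARRAY[$2::float8, extract(epoch FROM $3), 0, 0]
           ELSE
             ARRAY[$2::float8,
                   extract(epoch FROM $3),
                   $1[3] + abs($2 - $1[1]),                  -- total distance
                   $1[4] + (extract(epoch FROM $3) - $1[2])] -- total seconds
         END;
$$ LANGUAGE sql IMMUTABLE;

CREATE FUNCTION avg_vel_final(float8[]) RETURNS float8 AS $$
  SELECT CASE WHEN $1 IS NULL OR $1[4] = 0 THEN NULL
              ELSE $1[3] / $1[4] END;
$$ LANGUAGE sql IMMUTABLE;

CREATE AGGREGATE avg_vel(integer, timestamp) (
    sfunc = avg_vel_step,
    stype = float8[],
    finalfunc = avg_vel_final
);

(abs() on each segment keeps a 1 -> 10 -> 1 trip from summing to zero.)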

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] SQL design pattern for a delta trigger?

2007-12-06 Thread Ted Byers
IS there such a thing?  I can be the first to consider
this.

What I am aiming for is a solution with a couple
coupled tables, one of which represents state through
time and the other represents transactions or deltas
on the state.  With one field (a floating point
number) in the state table (or should I say a number
for each id field), it obviously has a time interval
for which it is valid: a start time and an end time.

What I am after is a situation where the moment a
record is inserted in the deltas table, a trigger
function first looks to see if the id provided
presently has a state in the state table.  If not,
then it creates one.  Then, the  end date for that
state record gets set to the current time and a new
record is inserted with the new state (computed by
applying the delta to the value in the previous record
for the state), the current date as the start date and
null for the end date.

This seems like an obvious thing to try, but I am
floundering a little and am therefore wondering if
anyone has seen an SQL design pattern that talks about
this, and an url where I can see such a discussion
and, better, an example.  The first concern is to
ensure that every record inserted into the deltas
table is immediately reflected in the state table, and
the second is that the history of state can be
reconstructed from a suitable query on the state
table.

I can do this easily in client code, but isn't this
the sort of thing best suited to living within the
database itself?

Thanks

Ted

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Continual Postgres headaches...

2007-12-06 Thread Gauthier, Dave
Sometimes breaking the query down using nested cursors can help,
especially if the query has many joins.  It usually makes behavior more
predictable anyway.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Merlin Moncure
Sent: Thursday, December 06, 2007 2:43 PM
To: Weber, Geoffrey M.
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Continual Postgres headaches...

On Dec 6, 2007 2:22 PM, Weber, Geoffrey M.
<[EMAIL PROTECTED]> wrote:
> In my efforts to correct these consistency in execution problems, I
have
> gone from vacuuming (with analyze) twice a day to every 30 minutes
(how long
> it takes a vacuum analyze to run - another seeming problem because it
> shouldn't take so long?).  I've done a VACUUM FULL and that sometimes
helps,
> but obviously is not a long-term solution for a true OLTP system.  Why
I
> said "sometimes helps": earlier this week I did a VACUUM FULL, VACUUM
> ANALYZE on the entire schema. A query that was supposed to use the
25MB
> index above (that matched the parameters of the partial index exactly)
was
> still not doing so.  I had to DROP and re-CREATE the index (and do
another
> ANALYZE) to get the database to (correctly) use the index.

you may want to consider using autovacuum.  vacuum full is usually not
recommended. maybe you could post some explain analyze of some queries
that are giving you problems?

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Continual Postgres headaches...

2007-12-06 Thread A.M.


On Dec 6, 2007, at 2:22 PM, Weber, Geoffrey M. wrote:

I've been trying for quite a while to get Postgresql tuned for use  
as an OLTP system.  I have several PL/pgSQL functions that handle  
inserts and updates to the main table and several near-real-time  
daemons written that access the data and can take automated actions  
on it (email/page concerned people, get complimentary information  
from a different system, etc.).  I started with Postgres 8.1 and am  
now using 8.2.4 (and have been since its release).  I'll try to  
provide enough information for a decent response, but as I can't  
obviously put my entire schema and database out there, I'm hoping  
that I can get some decent guidelines beyond that what I've found  
though Google, etc. to get this thing tuned better.


Most of the data centers in on a central table and has 23 columns,  
1 constraint, and 9 indexes.  4 of the indexes are partial.  The  
table usually contains about 3-4 million rows, but I've cut it down  
to 1.2 million (cut out 2/3 of the data) in an effort to migrate  
the database to a 2nd sever for more testing.  The two partial  
indexes used the most: 242MB accessed nearly constantly, and 15MB  
accessed every 5 seconds - but also updated constantly via inserts  
using the 242MB index.  Other than one other 25MB index, the others  
seem to average around 300MB each, but these aren't used quite as  
often (usually about every minute or so).


My problems really are with performance consistency.  I have  
tweaked the execution so that everything should run with sub-second  
execution times, but even after everything is running well, I can  
get at most a week or two of steady running before things start to  
degrade.




Without some examples of reproducible problematic behavior, you are  
likely to get only hazy responses. With your rate of database  
changes, you may need to be vacuuming more often (or certain tables  
more and other tables less).


From your description above, it sounds like you are persistently  
polling the database for changes. Have you considered using  
asynchronous notifications?


http://www.postgresql.org/docs/8.2/interactive/sql-listen.html
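
A minimal sketch (the channel name is made up; in 8.2 NOTIFY carries no
payload, so the daemon still has to query to find out what changed):

-- in each daemon's session:
LISTEN central_table_changed;

-- at the end of the insert/update functions (or in a trigger on the table):
NOTIFY central_table_changed;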

Cheers,
M

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] Nested loop in simple query taking long time

2007-12-06 Thread Gauthier, Dave
Future Enhancement?
If the column's new value can fit in the space already being used by the
existing value, just change the column value in place and leave the
record alone.  Would reduce the need for vacuum in many cases.

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Alvaro Herrera
Sent: Thursday, December 06, 2007 2:26 PM
To: Henrik
Cc: Tom Lane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Nested loop in simple query taking long time

Henrik wrote:

> I think I have a clue why it's so off. We update a value in that table
> about 2 - 3 million times per night and, as an update creates a new row,
> it becomes bloated pretty fast. The table had a size of 765 MB including
> indexes and after vacuum full and reindex it went down to 80kB... I guess
> I need routine reindexing on this table. Thank god it's not big. :)

I suggest you put a lone VACUUM on that table in cron, say once every 5
minutes, and you should be fine.  You shouldn't need a reindex at all.

-- 
Alvaro Herrera
http://www.PlanetPostgreSQL.org/
"Right now the sectors on the hard disk run clockwise, but I heard a
rumor that
you can squeeze 0.2% more throughput by running them counterclockwise.
It's worth the effort. Recommended."  (Gerry Pourwelle)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Continual Postgres headaches...

2007-12-06 Thread Erik Jones


On Dec 6, 2007, at 1:22 PM, Weber, Geoffrey M. wrote:

I've been trying for quite a while to get Postgresql tuned for use  
as an OLTP system.  I have several PL/pgSQL functions that handle  
inserts and updates to the main table and several near-real-time  
daemons written that access the data and can take automated actions  
on it (email/page concerned people, get complimentary information  
from a different system, etc.).  I started with Postgres 8.1 and am  
now using 8.2.4 (and have been since its release).  I'll try to  
provide enough information for a decent response, but as I can't  
obviously put my entire schema and database out there, I'm hoping  
that I can get some decent guidelines beyond that what I've found  
though Google, etc. to get this thing tuned better.




Most of the data centers in on a central table and has 23 columns,  
1 constraint, and 9 indexes.  4 of the indexes are partial.  The  
table usually contains about 3-4 million rows, but I've cut it down  
to 1.2 million (cut out 2/3 of the data) in an effort to migrate  
the database to a 2nd sever for more testing.  The two partial  
indexes used the most: 242MB accessed nearly constantly, and 15MB  
accessed every 5 seconds - but also updated constantly via inserts  
using the 242MB index.  Other than one other 25MB index, the others  
seem to average around 300MB each, but these aren't used quite as  
often (usually about every minute or so).
Do you really have that many query patterns that you need 9 different  
indexes on the table?  Without seeing the  table layout I'm  
suspicious that with 23 column table with 9 indexes it could either  
do with some normalization, vertical partitioning or less indexes.   
There is definitely such a thing as too many indexes, especially if  
any of them share indexed columns.
 My problems really are with performance consistency.  I have  
tweaked the execution so that everything should run with sub-second  
execution times, but even after everything is running well, I can  
get at most a week or two of steady running before things start to  
degrade.




In my efforts to correct these consistency in execution problems, I  
have gone from vacuuming (with analyze) twice a day to every 30  
minutes (how long it takes a vacuum analyze to run - another  
seeming problem because it shouldn't take so long?).

That really depends on the table size and the kind of traffic it sees.
I've done a VACUUM FULL and that sometimes helps, but obviously is  
not a long-term solution for a true OLTP system.  Why I said  
"sometimes helps": earlier this week I did a VACUUM FULL, VACUUM  
ANALYZE on the entire schema. A query that was supposed to use the  
25MB index above (that matched the parameters of the partial index  
exactly) was still not doing so.  I had to DROP and re-CREATE the  
index (and do another ANALYZE) to get the database to (correctly)  
use the index.
VACUUM FULL rewrites large portions of your table in order to compact  
space.  Since you're physically moving the tuples around it requires  
updating the indexes on the table and thus can be a fast track to  
bloated indexes. The VACUUM FULL docs even point this out and  
suggests running REINDEX after a VACUUM FULL.  You do realize that we  
have a REINDEX command, right?  No need to drop and recreate  
separately when they are consecutive actions, just use REINDEX.
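
i.e. something along these lines, with the table name purely as an example:

VACUUM FULL some_big_table;
REINDEX TABLE some_big_table;
ANALYZE some_big_table;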



Another problem: sometimes I get these unexplainable delays in  
using a 'stored procedure' even though all its SQL calls run just  
fine individually.  For example, calling a particular function will  
take 1 minute to execute even though entering the SQL commands  
individually through psql will total up to about 1 second.  When I  
log 'long-duration queries', I only see the top function call in  
the log.  When I put 'RAISE NOTICE' statements in the various PL/ 
pgSQL functions, I only see them displayed for the function I call  
directly; any underlying functions called from that function does  
not show any of their NOTICE statements.  Because of this I can't  
track down where the delays are occurring for me to try and correct  
the problem (not that there should be one in the first place)!  I  
should also note that there is not any lock contention (double- 
checked with pgadmin3) showing that would "artificially" delay the  
queries...




I've played with quite a few of the parameters like  
vacuum_cost_delay, work_mem, shared_buffers, turned fsync off, etc.  
in the postgresql.conf file.  The server I've been given to use is  
quite old (read: not the fastest anymore), but normally shows a  
load average of 0.25.to 0.50 or so.




I guess my questions boil down to what I can do other than use the  
EXPLAIN, EXPLAIN ANALYZE, VACUUM ANALYZE, etc. to get my schema  
performing acceptably across the board in a consistent manner.  If  
there is more information that I can give out that would improve  
responses, please let me know.
As Merlin said, it would help to s

Re: [GENERAL] Recovering data via raw table and field separators

2007-12-06 Thread John Wells
On 12/6/07, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 06, 2007 at 02:35:42PM -0500, John Wells wrote:
> > A bit beyond me I'm afraid, at least at my current level with
> > postgresql. Does anyone offer a commercial tool to do this? Or, would
> > anyone be interested in doing it for a fee?
>
> There was a tool pgfsck which could dump table data, but it's not been
> updated in quite a while so I don't know if it'll work for your
> version...

I'm on 8.1. I'll check and see if I can make it work.

Thanks!
John

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] aggregate and order by

2007-12-06 Thread Matthew Dennis
I want to create an aggregate that will give the average velocity (sum of
distance traveled / sum of elapsed time) from position and timestamps.

example:

create table data(position integer, pos_time timestamp, trip_id integer);

insert into data values(1, "time x", 1);
insert into data values(2, "time x plus 1 second", 1);
insert into data values(4, "time x plus 2 second", 1);
insert into data values(1, "time y", 2);
insert into data values(4, "time y plus 1 second", 2);
insert into data values(16, "time y plus 3 second", 2);
etc, etc, etc...

select trip_id, avg_vel(position, pos_time) from data group by trip_id;

Row one to row two has an elapsed time of ("time x plus 1 second" - "time
x") and a difference in position of (2 - 1) and from row two to row three
there is a difference in position of (4 -2) and a elapsed time of ("time x
plus 1 second" - "time x plus 2 seconds") so for trip_id we get ((2-1) +
(4-2)) / (1 + 1).

Row 4 to row 5 has a difference in position of (4-1) and a elapsed time of
("time y plus 1 second" - "time y") and from row 5 to row 6 there is a
position difference of (16-4) and time difference of ("time y plus 3
seconds" - "time y plus 1 second") so for trip_id 2 we get ((4-1) + (16-4))
/ (1 + 2).

Keep in mind that I can't just take the difference between the start and end
of the trip because I might move from 1 to 10 to 1.  If I just took the end
points (1-1) the velocity would be zero because it looks like I didn't move.

So I could write an aggregate that remembers the last row and on each new
row, does the diff and keeps the running sums and then when it's done, the
final function does the division and returns the average velocity.  However,
this only works if the rows come into the aggregate function in the correct
order (otherwise I might count the total distance and/or elapsed time wrong
because both are calculated from the difference of the previous row).  So,
my question is if I can have PostgreSQL honor order by clauses such as:

select trip_id, avg_vel(position, pos_time) from (select position, pos_time,
trip_id from data order by pos_time) sorted_data

Would this in fact guarantee that the rows are passed into the aggregate in
the order specified?

Other suggestions/discussions/questions/etc are welcome.


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-06 Thread Martijn van Oosterhout
On Thu, Dec 06, 2007 at 02:35:42PM -0500, John Wells wrote:
> A bit beyond me I'm afraid, at least at my current level with
> postgresql. Does anyone offer a commercial tool to do this? Or, would
> anyone be interested in doing it for a fee?

There was a tool pgfsck which could dump table data, but it's not been
updated in quite a while so I don't know if it'll work for your
version...

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy




Re: [GENERAL] Continual Postgres headaches...

2007-12-06 Thread Merlin Moncure
On Dec 6, 2007 2:22 PM, Weber, Geoffrey M. <[EMAIL PROTECTED]> wrote:
> In my efforts to correct these consistency in execution problems, I have
> gone from vacuuming (with analyze) twice a day to every 30 minutes (how long
> it takes a vacuum analyze to run - another seeming problem because it
> shouldn't take so long?).  I've done a VACUUM FULL and that sometimes helps,
> but obviously is not a long-term solution for a true OLTP system.  Why I
> said "sometimes helps": earlier this week I did a VACUUM FULL, VACUUM
> ANALYZE on the entire schema. A query that was supposed to use the 25MB
> index above (that matched the parameters of the partial index exactly) was
> still not doing so.  I had to DROP and re-CREATE the index (and do another
> ANALYZE) to get the database to (correctly) use the index.

you may want to consider using autovacuum.  vacuum full is usually not
recommended. maybe you could post some explain analyze of some queries
that are giving you problems?

merlin

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Recovering data via raw table and field separators

2007-12-06 Thread John Wells
On 12/5/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Not that I know of.
>
> I think the simplest thing to get your tuples back is:
>
> 1. mark the transaction that deleted them as aborted in pg_clog
> 2. reset the hint bits in the deleted tuples, or hack your postgres copy
> to ignore hint bits
>
> You can figure out the transaction that deleted the tuples by seeing
> that their Xmax value is with pg_filedump.
>
> The hint bits part makes it rather messy :-(

A bit beyond me I'm afraid, at least at my current level with
postgresql. Does anyone offer a commercial tool to do this? Or, would
anyone be interested in doing it for a fee?

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Nested loop in simple query taking long time

2007-12-06 Thread Alvaro Herrera
Henrik wrote:

> I think I have a clue why it's so off. We update a value in that table about
> 2 - 3 million times per night and, as an update creates a new row, it becomes
> bloated pretty fast. The table had a size of 765 MB including indexes and
> after vacuum full and reindex it went down to 80kB... I guess I need
> routine reindexing on this table. Thank god it's not big. :)

I suggest you put a lone VACUUM on that table in cron, say once every 5
minutes, and you should be fine.  You shouldn't need a reindex at all.

-- 
Alvaro Herrera   http://www.PlanetPostgreSQL.org/
"Right now the sectors on the hard disk run clockwise, but I heard a rumor that
you can squeeze 0.2% more throughput by running them counterclockwise.
It's worth the effort. Recommended."  (Gerry Pourwelle)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


[GENERAL] Continual Postgres headaches...

2007-12-06 Thread Weber, Geoffrey M.
I've been trying for quite a while to get Postgresql tuned for use as an
OLTP system.  I have several PL/pgSQL functions that handle inserts and
updates to the main table and several near-real-time daemons written that
access the data and can take automated actions on it (email/page concerned
people, get complimentary information from a different system, etc.).  I
started with Postgres 8.1 and am now using 8.2.4 (and have been since its
release).  I'll try to provide enough information for a decent response, but
as I can't obviously put my entire schema and database out there, I'm hoping
that I can get some decent guidelines beyond what I've found through
Google, etc. to get this thing tuned better.

 

Most of the data is centered on a central table, which has 23 columns, 1
constraint, and 9 indexes.  4 of the indexes are partial.  The table usually
contains about 3-4 million rows, but I've cut it down to 1.2 million (cut
out 2/3 of the data) in an effort to migrate the database to a 2nd server for
more testing.  The two partial indexes are used the most: 242MB accessed nearly
constantly, and 15MB accessed every 5 seconds - but also updated constantly
via inserts using the 242MB index.  Other than one other 25MB index, the
others seem to average around 300MB each, but these aren't used quite as
often (usually about every minute or so).

 

My problems really are with performance consistency.  I have tweaked the
execution so that everything should run with sub-second execution times, but
even after everything is running well, I can get at most a week or two of
steady running before things start to degrade.

 

In my efforts to correct these consistency in execution problems, I have
gone from vacuuming (with analyze) twice a day to every 30 minutes (how long
it takes a vacuum analyze to run - another seeming problem because it
shouldn't take so long?).  I've done a VACUUM FULL and that sometimes helps,
but obviously is not a long-term solution for a true OLTP system.  Why I
said "sometimes helps": earlier this week I did a VACUUM FULL, VACUUM
ANALYZE on the entire schema. A query that was supposed to use the 25MB
index above (that matched the parameters of the partial index exactly) was
still not doing so.  I had to DROP and re-CREATE the index (and do another
ANALYZE) to get the database to (correctly) use the index.

 

Another problem: sometimes I get these unexplainable delays in using a
'stored procedure' even though all its SQL calls run just fine individually.
For example, calling a particular function will take 1 minute to execute
even though entering the SQL commands individually through psql will total
up to about 1 second.  When I log 'long-duration queries', I only see the
top function call in the log.  When I put 'RAISE NOTICE' statements in the
various PL/pgSQL functions, I only see them displayed for the function I
call directly; any underlying functions called from that function do not
show any of their NOTICE statements.  Because of this I can't track down
where the delays are occurring for me to try and correct the problem (not
that there should be one in the first place)!  I should also note that there
is not any lock contention (double-checked with pgadmin3) showing that would
"artificially" delay the queries...

 

I've played with quite a few of the parameters like vacuum_cost_delay,
work_mem, shared_buffers, turned fsync off, etc. in the postgresql.conf
file.  The server I've been given to use is quite old (read: not the fastest
anymore), but normally shows a load average of 0.25 to 0.50 or so.

 

I guess my questions boil down to what I can do other than use the EXPLAIN,
EXPLAIN ANALYZE, VACUUM ANALYZE, etc. to get my schema performing acceptably
across the board in a consistent manner.  If there is more information that
I can give out that would improve responses, please let me know.

 

Thanks in advance for any help you send my way!

 

 

 




NOTICE: This electronic mail transmission may contain confidential
information and is intended only for the person(s) named.  Any use, copying
or disclosure by any other person is strictly prohibited. If you have
received this transmission in error, please notify the sender via e-mail.





Re: [GENERAL] Nested loop in simple query taking long time

2007-12-06 Thread Henrik


On 6 Dec 2007, at 18.12, Tom Lane wrote:


Henrik Zagerholm <[EMAIL PROTECTED]> writes:

On 5 Dec 2007, at 16.25, Tom Lane wrote:

Henrik Zagerholm <[EMAIL PROTECTED]> writes:

->  Bitmap Index Scan on tbl_archive_idx1
(cost=0.00..1150.47 rows=8 width=0) (actual time=1505.456..1505.456
rows=86053 loops=16)
Index Cond: (tbl_share.pk_share_id =
tbl_archive.fk_share_id)



Why is this scan finding so many more rows than the planner expects?



 This is really weird. That tables primary key sequence is at 1220
and the number of rows right now is 139. There have never been that
many rows in tbl_archive. Could the index or stat be really really
corrupt?


I wonder how long it's been since you vacuumed that table?  The rowcount
from the bitmap indexscan would include tuple IDs that are in the index
but prove to be dead upon arrival at the heap.


I actually have autovacuum on the whole database but I may have been a  
little too restrictive on the autovacuum settings.

I did a vacuum full and a reindex and now its fine again.

I think I have a clue why it's so off. We update a value in that table
about 2 - 3 million times per night and, as an update creates a new row,
it becomes bloated pretty fast. The table had a size of 765 MB including
indexes and after vacuum full and reindex it went down to 80kB... I guess
I need routine reindexing on this table. Thank god it's not big. :)


Thanks Tom!

Cheers,
Henke



regards, tom lane



Re: [GENERAL] reformatting floats ?

2007-12-06 Thread Gauthier, Dave
Both work (to_char and casting to numeric)
Thanks !

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Andreas 
Kretschmer
Sent: Thursday, December 06, 2007 1:46 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] reformatting floats ?

Gauthier, Dave <[EMAIL PROTECTED]> schrieb:

> 
> 
> Hi:
>  
> if...  
> create table coords (id int, x float, y float);
> then... 
> insert into coords (id,x,y) values (1,1.000,2.001)
> and then...
> select * from coords
> i get...
> 1,1,2.001
> i want...
> 1.1.000,2.001
> while retaining the numeric nature of the x,y data (for math ops in other
> operations).
>  
> How can I reformat the float output (sort of like using %5.3f in good-ole C)

CAST it to numeric ;-)

test=# create table coords (id int, x float, y float);
CREATE TABLE
Time: 4.437 ms
test=*# insert into coords (id,x,y) values (1,1.000,2.001);
INSERT 0 1
Time: 4.831 ms
test=*# select id, x::numeric(10,3), y::numeric(10,3) from coords;
 id |   x   |   y
----+-------+-------
  1 | 1.000 | 2.001


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] reformatting floats ?

2007-12-06 Thread Andreas Kretschmer
Gauthier, Dave <[EMAIL PROTECTED]> schrieb:

> 
> 
> Hi:
>  
> if...  
> create table coords (id int, x float, y float);
> then... 
> insert into coords (id,x,y) values (1,1.000,2.001)
> and then...
> select * from coords
> i get...
> 1,1,2.001
> i want...
> 1.1.000,2.001
> while retaining the numeric nature of the x,y data (for math ops in other
> operations).
>  
> How can I reformat the float output (sort of like using %5.3f in good-ole C)

CAST it to numeric ;-)

test=# create table coords (id int, x float, y float);
CREATE TABLE
Time: 4.437 ms
test=*# insert into coords (id,x,y) values (1,1.000,2.001);
INSERT 0 1
Time: 4.831 ms
test=*# select id, x::numeric(10,3), y::numeric(10,3) from coords;
 id |   x   |   y
----+-------+-------
  1 | 1.000 | 2.001


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."(unknow)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] record-based log shipping

2007-12-06 Thread SHARMILA JOTHIRAJAH
Thanks
> Have anyone implemented or tried record-based log shipping? 
> If so is there any other materials in the web other than the
> documentation (it has very few details about this)
>

>>I don't know exactly what you mean by "record-based log shipping", but
>>perhaps you're looking for something like Slony-I, which can be found
>>here:
The section at
http://www.postgresql.org/docs/8.3/static/warm-standby.html#WARM-STANDBY-RECORD
in the documentation talks about record-based log shipping.






  


Re: [GENERAL] reformatting floats ?

2007-12-06 Thread Martijn van Oosterhout
On Thu, Dec 06, 2007 at 01:22:55PM -0500, Gauthier, Dave wrote:
> i get...
> 
> 1,1,2.001
> 
> i want...
> 
> 1.1.000,2.001
> 
> while retaining the numeric nature of the x,y data (for math ops in
> other operations).

I imagine you want to_char().
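(For concreteness, a sketch of the to_char() route against the coords table from
this thread; the format string is an illustrative choice for three decimals, and
note that to_char() returns text, so cast back to a numeric type before doing
math on the result.)

select id,
       to_char(x, 'FM999999990.000') as x_fmt,
       to_char(y, 'FM999999990.000') as y_fmt
from coords;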

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy




[GENERAL] reformatting floats ?

2007-12-06 Thread Gauthier, Dave
Hi:

 

if...   

create table coords (id int, x float, y float);

then...  

insert into coords (id,x,y) values (1,1.000,2.001)

and then...

select * from coords

i get...

1,1,2.001

i want...

1,1.000,2.001

while retaining the numeric nature of the x,y data (for math ops in
other operations).

 

How can I reformat the float output (sort of like using "%5.3f" in
good-ole C)

 

Thanks !



Re: [GENERAL] record-based log shipping

2007-12-06 Thread Jeff Davis
On Thu, 2007-12-06 at 09:30 -0800, SHARMILA JOTHIRAJAH wrote:
> Hi,
> Have anyone implemented or tried record-based log shipping? 
> If so is there any other materials in the web other than the
> documentation (it has very few details about this)
> Thanks
> sharmila
> 

I don't know exactly what you mean by "record-based log shipping", but
perhaps you're looking for something like Slony-I, which can be found
here:

http://www.slony.info

Regards,
Jeff Davis






Re: [GENERAL] Import LDAP data to a Postgres database

2007-12-06 Thread Chris Browne
[EMAIL PROTECTED] (Marco Curtolo) writes:
> Hi, my question is this: it is possible to query, using SQL, an LDAP
> server and put these data to a table of a Postgres database (under
> linux) ???

Unfortunately, the LDAP model is more or less a "network model," which
doesn't fit terribly elegantly onto SQL.

You can in fact use PostgreSQL as the backend for OpenLDAP; there's a
HOWTO on this:

  

There are also ways of exporting SQL data via LDAP; see here:

  
-- 
select 'cbbrowne' || '@' || 'linuxdatabases.info';
http://www3.sympatico.ca/cbbrowne/sap.html
Microsoft Outlook: Deploying Viruses Has Never Been This Easy!



[GENERAL] record-based log shipping

2007-12-06 Thread SHARMILA JOTHIRAJAH
Hi,
Has anyone implemented or tried record-based log shipping?
If so, are there any other materials on the web besides the documentation (it
has very few details about this)?
Thanks
sharmila




  



Re: [GENERAL] Avoid huge perfomance loss on string concatenation

2007-12-06 Thread Martijn van Oosterhout
On Wed, Dec 05, 2007 at 11:32:59AM +0200, Andrus wrote:
> I do'nt have this index.
> dok.kuupaev||dok.kellaaeg conditon should applied after index search is 
> performed.
> It filters out only a small number of rows additionally to the plain kuupaev 
> filter.
> So adding index on dok.kuupaev||dok.kellaaeg  is not reasonable IMHO.

Your problem is that it is doing exactly that, except that postgres assumes the
extra condition filters out more rows than it does. In actual fact the extra
condition is making no difference whatsoever.

So postgres assumes the result will only give 10 rows, and therefore that a
nested loop is better.

As for the index question, I don't know, but it will probably work better.

Have a nice day,
-- 
Martijn van Oosterhout   <[EMAIL PROTECTED]>   http://svana.org/kleptog/
> Those who make peaceful revolution impossible will make violent revolution 
> inevitable.
>  -- John F Kennedy




Re: [GENERAL] Nested loop in simple query taking long time

2007-12-06 Thread Tom Lane
Henrik Zagerholm <[EMAIL PROTECTED]> writes:
> 5 dec 2007 kl. 16.25 skrev Tom Lane:
>> Henrik Zagerholm <[EMAIL PROTECTED]> writes:
>>> ->  Bitmap Index Scan on tbl_archive_idx1
>>> (cost=0.00..1150.47 rows=8 width=0) (actual time=1505.456..1505.456
>>> rows=86053 loops=16)
>>> Index Cond: (tbl_share.pk_share_id =
>>> tbl_archive.fk_share_id)

>> Why is this scan finding so many more rows than the planner expects?

>   This is really weird. That tables primary key sequence is at 1220  
> and the number of rows right now is 139. There have never been that  
> many rows in tbl_archive. Could the index or stat be really really  
> corrupt?

I wonder how long it's been since you vacuumed that table?  The rowcount
from the bitmap indexscan would include tuple IDs that are in the index
but prove to be dead upon arrival at the heap.

regards, tom lane



Re: [GENERAL] Unreasonable size of table pg 8.2.5

2007-12-06 Thread Henrik


On 6 Dec 2007, at 15.25, Bill Moran wrote:


Henrik <[EMAIL PROTECTED]> wrote:


Hello list,

I have a table with 135 rows and it still takes up about 360MB with
only small columns. Details below.

db=# vacuum full tbl_archive;
VACUUM
db=# select * from
pg_size_pretty(pg_total_relation_size('tbl_archive'));
 pg_size_pretty

 360 MB
(1 row)

db=# select * from pg_size_pretty(pg_relation_size('tbl_archive'));
 pg_size_pretty

 16 kB
(1 row)


Looks like we have a very bloated index.
After reindex
db=# select * from
pg_size_pretty(pg_total_relation_size('tbl_archive'));
 pg_size_pretty

80 kB
(1 row)

I thought that reindex should not be necessary in 8.2.5?


VACUUM FULL tends to bloat indexes, which is one of the reasons that it's
not recommended for regular maintenance.

Use plain VACUUM instead.  If you feel the need to run a VACUUM FULL, always
do a REINDEX afterward.

I usually only do a normal vacuum, but it's good to know that a reindex
should be run after vacuum full.



Even still, there are apparently some corner cases around that cause index
bloat.  If it turns out that you've found one, you may want to document it
so the developers can look into possible solutions.

I may have an idea why it gets this big, but I'll do some more testing first!

Thanks,
Henke

--
Bill Moran
http://www.potentialtech.com



Re: [GENERAL] Disconnects hanging server

2007-12-06 Thread Tom Lane
Brian Wipf <[EMAIL PROTECTED]> writes:
> Nearly 100% of the CPU is going into pmap_remove_range. The stack  
> trace for pmap_remove_range, viewable within Shark, is:
> -> pmap_remove_range
> --> pmap_remove
> ---> vm_map_simplify
> ----> vm_map_remove
> -----> task_terminate_internal
> ------> exit1
> -------> exit
> --------> unix_syscall64
> ---------> lo64_unix_scall

In case it's not obvious, this is a kernel performance bug, which
you should report to Apple.

In the meantime you might want to think about backing off your
shared_buffers setting.  I would suppose that the performance bug
is being triggered by a very large shared memory segment (you
said 3Gb right?).

regards, tom lane



Re: [GENERAL] pg_dump and server responsiveness

2007-12-06 Thread Erik Jones


On Dec 6, 2007, at 9:58 AM, Bryan Murphy wrote:


On Dec 5, 2007 9:49 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
Only access-share locks, but that could still be an issue if anything in
your system likes to take exclusive locks.  Have you looked into
pg_locks to see if anything's getting blocked?

pg_dump is entirely capable of causing an unpleasant amount of I/O
load, but that shouldn't result in "complete unresponsiveness",
and anyway your iostat output doesn't look like you're saturated...


It does appear to be lock contention.  I took a closer look this
morning, and I noticed our web site was consistently locking up on a
particular table, and there were a number of exclusive locks.  I
started eliminating various jobs, and found the one that essentially
rewrites that particular table every 5 minutes to be the culprit
(create new table, drop old table, rename new table).

Is there a better way we can do this so that we won't cause lock
contention during a dump?  I can disable the process, but if the
backup takes an hour that's an hour where all the data in this table
is stale.  I believe we chose to do it this way, because we wanted to
minimize the amount of time the table wasn't available, which is why
we didn't use a truncate based strategy.


If this table has such transient data in it, does it even need to be  
included in the dump?  If not, either move it into another database,  
another schema, or just use the -T flag in your dump command.


Erik Jones

Software Developer | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Re: [GENERAL] pg_dump and server responsiveness

2007-12-06 Thread Bryan Murphy
On Dec 6, 2007 10:09 AM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Why dump such a table at all?  It evidently doesn't contain any
> data you need to preserve ...
>
> I forget which version you are running, but 8.2 pg_dump has an
> --exclude-table switch which'd work peachy for this.

I did not know about that option but it sounds like it will get the
job done.  This is our last database running 8.1.9, so even if it
doesn't support that, I plan on migrating it to 8.2 soon anyway.

Thanks,
Bryan



Re: [GENERAL] pg_dump and server responsiveness

2007-12-06 Thread Tom Lane
"Bryan Murphy" <[EMAIL PROTECTED]> writes:
> It does appear to be lock contention.  I took a closer look this
> morning, and I noticed our web site was consistently locking up on a
> particular table, and there were a number of exclusive locks.  I
> started eliminating various jobs, and found the one that essentially
> rewrites that particular table every 5 minutes to be the culprit
> (create new table, drop old table, rename new table).

> Is there a better way we can do this so that we won't cause lock
> contention during a dump?

Why dump such a table at all?  It evidently doesn't contain any
data you need to preserve ...

I forget which version you are running, but 8.2 pg_dump has an
--exclude-table switch which'd work peachy for this.
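(A sketch of that invocation; the database and table names below are placeholders,
not from this thread.)

pg_dump --exclude-table=volatile_table mydb > mydb.dump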

regards, tom lane



Re: [GENERAL] pg_dump and server responsiveness

2007-12-06 Thread Bryan Murphy
On Dec 5, 2007 9:49 PM, Tom Lane <[EMAIL PROTECTED]> wrote:
> Only access-share locks, but that could still be an issue if anything in
> your system likes to take exclusive locks.  Have you looked into
> pg_locks to see if anything's getting blocked?
>
> pg_dump is entirely capable of causing an unpleasant amount of I/O
> load, but that shouldn't result in "complete unresponsiveness",
> and anyway your iostat output doesn't look like you're saturated...

It does appear to be lock contention.  I took a closer look this
morning, and I noticed our web site was consistently locking up on a
particular table, and there were a number of exclusive locks.  I
started eliminating various jobs, and found the one that essentially
rewrites that particular table every 5 minutes to be the culprit
(create new table, drop old table, rename new table).

Is there a better way we can do this so that we won't cause lock
contention during a dump?  I can disable the process, but if the
backup takes an hour that's an hour where all the data in this table
is stale.  I believe we chose to do it this way, because we wanted to
minimize the amount of time the table wasn't available, which is why
we didn't use a truncate based strategy.

Thanks,
Bryan



Re: [GENERAL] Understanding how partial indexes work?

2007-12-06 Thread Scott Marlowe
On Dec 6, 2007 1:44 AM, Chris Velevitch <[EMAIL PROTECTED]> wrote:
> I have a query on a table:-
>
>  X between k1 and k2 or X < k1 and Y <> k3
>
> where k1, k2, k3 are constants.
>
> How would this query work, if I created an index on X and a partial
> index on X where Y <> k3?

Ummm.  Using AND and OR in the same where clause without parentheses
is a bad idea, as the logic you might think you're expressing isn't
the exact logic you're actually expressing.  Do you mean:

X between k1 and k2 or (X < k1 and Y <> k3)

OR

(X between k1 and k2 or X < k1) and Y <> k3

Those are two different questions.

The plan chosen by the query planner depends on what the data
distribution looks like, and what decisions the planner makes based on
the stats it has about that distribution.

Why not make a table, fill it with test data, analyze it, and see what
you get with your indexes with explain analyze select ...
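(A sketch of that experiment; the table name, row count and constants below are
made up, not from this thread.)

create table t (x integer, y integer);
insert into t
    select (random() * 1000)::int, (random() * 1000)::int
    from generate_series(1, 100000);
create index t_x_idx on t (x);
create index t_x_partial_idx on t (x) where y <> 42;
analyze t;
explain analyze
    select * from t where x between 100 and 200 or (x < 100 and y <> 42);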



[GENERAL] Question in dblink

2007-12-06 Thread Abraham, Danny
I would like the code below to accept the returned value from t2.
How do I do it?
Thanks
Danny

===

  err := dblink_connect('C',cname);
  begin
    execute dblink('C','SELECT t2()');
  exception
    when others then null;
  end;
  err := dblink_disconnect('C');

==
CREATE OR REPLACE FUNCTION t2()  RETURNS integer
AS
$$
DECLARE
  i integer;
BEGIN
  execute 'insert into x values(12)';
  return 67;
END;  
$$ LANGUAGE 'plpgsql' VOLATILE;
==
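(Not an answer from the thread, but a sketch of one way the value could be
captured: dblink() returns a set of records, so give it a column alias list and
SELECT the result INTO a variable.  The wrapper function name below is
hypothetical.)

CREATE OR REPLACE FUNCTION call_t2_remote(cname text) RETURNS integer AS $$
DECLARE
  ret integer;
  err text;
BEGIN
  err := dblink_connect('C', cname);
  SELECT result INTO ret
    FROM dblink('C', 'SELECT t2()') AS t(result integer);
  err := dblink_disconnect('C');
  RETURN ret;
END;
$$ LANGUAGE plpgsql;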



Danny Abraham
BMC Software
CTM&D Business Unit
972-52-4286-513
[EMAIL PROTECTED]




Re: [GENERAL] Understanding how partial indexes work?

2007-12-06 Thread Tom Lane
"Chris Velevitch" <[EMAIL PROTECTED]> writes:
> I have a query on a table:-
>  X between k1 and k2 or X < k1 and Y <> k3

> where k1, k2, k3 are constants.

> How would this query work, if I created an index on X and a partial
> index on X where Y <> k3?

Is it worth the trouble?  You didn't mention the statistics involved,
but ordinarily I'd think a non-equal condition is too nonselective
to justify the cost of maintaining an extra index.

The real problem here is that "X < k1" is probably too nonselective
as well, which will likely lead the planner to decide that a plain
seqscan is the cheapest solution.  In principle the above could be
done with a BitmapOr of two indexscans on X, but unless the constants
are such that only a small fraction of rows are going to be selected,
it's likely that a seqscan is the way to go.

regards, tom lane



Re: [GENERAL] simple update on boolean

2007-12-06 Thread Obe, Regina
 
You could use COALESCE instead of a case statement for a simple case
like this.  The expression below treats a NULL as false, and the NOT then
turns it into true, so NULLs will be set to true:

UPDATE table SET boolean_column = NOT COALESCE(boolean_column, false)

hope that helps,
Regina


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ivan Sergio
Borgonovo
Sent: Thursday, December 06, 2007 10:19 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] simple update on boolean

On Thu, 06 Dec 2007 14:53:13 +0100
Cedric Boudin <[EMAIL PROTECTED]> wrote:

> > What if boolean_column is NULL?
> >
> > btw set bolean_column= not bolean_column works "as expected".
> >
> > template1=# select (not 't'::boolean),(not 'f'::boolean),(not
> > NULL::boolean);
> >
> > ?column? | ?column? | ?column?
> > --+--+--
> >  f| t|
> > (1 riga)

> If it was null before it has to be null afterwards (IMHO).
> If you don't want to have null,
> take care of it somewhere else but not here.

That is the "as expected" part.

The "case" case, other than being more verbose, do more than what I
would expect since all NULL are converted to t.

template1=# select case when NULL then 'f'::boolean else 't'::boolean
end;

case
--
 t
(1 riga)


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] simple update on boolean

2007-12-06 Thread Ivan Sergio Borgonovo
On Thu, 06 Dec 2007 14:53:13 +0100
Cedric Boudin <[EMAIL PROTECTED]> wrote:

> > What if boolean_column is NULL?
> >
> > btw set bolean_column= not bolean_column works "as expected".
> >
> > template1=# select (not 't'::boolean),(not 'f'::boolean),(not
> > NULL::boolean);
> >
> > ?column? | ?column? | ?column?
> > --+--+--
> >  f| t|
> > (1 riga)

> If it was null before it has to be null afterwards (IMHO).
> If you don't want to have null,
> take care of it somewhere else but not here.

That is the "as expected" part.

The "case" case, other than being more verbose, do more than what I
would expect since all NULL are converted to t.

template1=# select case when NULL then 'f'::boolean else 't'::boolean
end;

case
--
 t
(1 riga)


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] Unreasonable size of table pg 8.2.5

2007-12-06 Thread Bill Moran
Henrik <[EMAIL PROTECTED]> wrote:
>
> Hello list,
> 
> I have a table with 135 rows and it still takes up about 360MB with  
> only small columns. Details below.
> 
> db=# vacuum full tbl_archive;
> VACUUM
> db=# select * from  
> pg_size_pretty(pg_total_relation_size('tbl_archive'));
>   pg_size_pretty
> 
>   360 MB
> (1 row)
> 
> db=# select * from pg_size_pretty(pg_relation_size('tbl_archive'));
>   pg_size_pretty
> 
>   16 kB
> (1 row)
> 
> 
> Looks like we have a very bloated index.
> After reindex
> db=# select * from  
> pg_size_pretty(pg_total_relation_size('tbl_archive'));
>   pg_size_pretty
> 
> 80 kB
> (1 row)
> 
> I thought that reindex should not be necessary in 8.2.5?

VACUUM FULL tends to bloat indexes, which is one of the reasons that it's
not recommended for regular maintenance.

Use plain VACUUM instead.  If you feel the need to run a VACUUM FULL, always
do a REINDEX afterward.

Even still, there are apparently some corner cases around that cause index
bloat.  If it turns out that you've found one, you may want to document it
so the developers can look into possible solutions.

-- 
Bill Moran
http://www.potentialtech.com



[GENERAL] Unreasonable size of table pg 8.2.5

2007-12-06 Thread Henrik

Hello list,

I have a table with 135 rows and it still takes up about 360MB with  
only small columns. Details below.


db=# vacuum full tbl_archive;
VACUUM
db=# select * from  
pg_size_pretty(pg_total_relation_size('tbl_archive'));

 pg_size_pretty

 360 MB
(1 row)

db=# select * from pg_size_pretty(pg_relation_size('tbl_archive'));
 pg_size_pretty

 16 kB
(1 row)


Looks like we have a very bloated index.
After reindex
db=# select * from  
pg_size_pretty(pg_total_relation_size('tbl_archive'));

 pg_size_pretty

80 kB
(1 row)

I thought that reindex should not be necessary in 8.2.5? This is not a
big table, but from what I can see we have many small updates.
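(One quick way to see which relation is carrying the weight, using the index
names from the \d output below; this only assumes the standard statistics views.)

select indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) as index_size
from pg_stat_user_indexes
where relname = 'tbl_archive';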

Cheers,
Henke

db=# \d tbl_Archive;
                              Table "public.tbl_archive"
           Column            |            Type             |            Modifiers
-----------------------------+-----------------------------+---------------------------------------------------------------
 pk_archive_id               | bigint                      | not null default nextval(('archive_seq_id'::text)::regclass)
 archive_name                | character varying(255)     |
 archive_backup_type         | character(1)                |
 archive_size                | bigint                      | not null default 0
 fk_share_id                 | bigint                      |
 archive_complete            | boolean                     | not null default false
 fk_job_id                   | bigint                      |
 archive_date                | timestamp without time zone | not null default now()
 archive_nmb_files           | integer                     | not null default 0
 archive_nmb_folders         | integer                     | not null default 0
 archive_nmb_file_exceptions | integer                     | not null default 0
Indexes:
    "tbl_archive_pkey" PRIMARY KEY, btree (pk_archive_id)
    "tbl_archive_idx" btree (archive_complete)
    "tbl_archive_idx1" btree (fk_share_id)
    "tbl_archive_idx2" btree (fk_job_id)
Check constraints:
    "tbl_archive_chk" CHECK (archive_backup_type = 'F'::bpchar OR archive_backup_type = 'I'::bpchar)
Foreign-key constraints:
    "tbl_archive_fk" FOREIGN KEY (fk_share_id) REFERENCES tbl_share(pk_share_id) ON UPDATE CASCADE ON DELETE SET NULL
    "tbl_archive_fk1" FOREIGN KEY (fk_job_id) REFERENCES tbl_job(pk_job_id) ON DELETE SET NULL





Re: [GENERAL] autovacuum is stopping automatically....

2007-12-06 Thread Alvaro Herrera
Charles.Hou wrote:
> this is the pg_log...
> after 2007-12-04 10:40:37 CST 15533 , it always autovacuum "template0"
> not mydatabase...

Is there an ERROR in the log?  My guess is that template0 is in danger
of Xid wraparound and autovacuum wants to process it, but it can't for
some reason.

My second guess is that you are still running a version below 8.1.6, or
were for a nontrivial amount of time.  Said versions contain a bug that
can make a future vacuum on template0 error out every time.


-- 
Alvaro Herrera  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"Es filósofo el que disfruta con los enigmas" (G. Coli)



Re: [GENERAL] simple update on boolean

2007-12-06 Thread Cedric Boudin
Ivan Sergio Borgonovo wrote:
> On Thu, 06 Dec 2007 13:38:31 +0100
> Cedric Boudin <[EMAIL PROTECTED]> wrote:
>
>   
>> It was indeed a forest problem.
>> both:
>>
>> set bolean_column= not bolean_column
>> and
>> set bolean_column= case when bolean_column then 'f'::bool else
>> 't'::bool end;
>>
>> do work perfectly.
>> 
>
> What if boolean_column is NULL?
>
> btw set bolean_column= not bolean_column works "as expected".
>
> template1=# select (not 't'::boolean),(not 'f'::boolean),(not
> NULL::boolean);
>
> ?column? | ?column? | ?column?
> --+--+--
>  f| t|
> (1 riga)
>   
If it was null before it has to be null afterwards (IMHO).
If you don't want to have null,
take care of it somewhere else but not here.

cedric



Re: [GENERAL] Error in creating function

2007-12-06 Thread A. Kretschmer
On Thu, 06 Dec 2007 at 04:16:14 -0800, Yancho wrote:
> I am trying to create a function, which takes the nearest 3 hospitals
> to a point making use of a PostGIS function), and then check each
> hospital for the exact distance on roads (by making use of a pgRouting
> function).
> 
> Below please find a copy of my function, and u can also find it
> highlighted here : http://yancho.pastebin.com/f13cc045e
> 
> CREATE OR REPLACE FUNCTION near_hospital(text, integer, integer)
>   RETURNS integer AS
> $BODY$
> ...
> END;
> 
> ' language 'plpgsql';
> 
> 
> The error being given by pgAdminIII is : unterminated dollar-quoted
> string at or near "$BODY$" [then some garbled text] for $1;
> 
> Any help will be extremely appreciated!

change the last line into:

$BODY$ language 'plpgsql';




Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] how to redirect output to a file

2007-12-06 Thread A. Kretschmer
On Tue, 04 Dec 2007 at 20:19:29 -0800, pc wrote:
> Hi,
> 
> How to redirect the output of an sql command to a file?
> Thanks in advance

within psql you can use \o , from the shell you can use this:

[EMAIL PROTECTED]:~$ echo "select now()" | psql test > now.txt
[EMAIL PROTECTED]:~$ cat now.txt
  now
---
 2007-12-06 14:21:58.963405+01
(1 row)



Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net



Re: [GENERAL] how to redirect output to a file

2007-12-06 Thread Usama Dar
On Dec 5, 2007 9:19 AM, pc <[EMAIL PROTECTED]> wrote:

> Hi,
>
> How to redirect the output of an sql command to a file?
> Thanks in advance


if you are using psql

postgres=# \o ~/sql.out
postgres=# select * from foo;

The output will be directed to the file; when you want to stop redirecting, run:

postgres=# \o




>
>



-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] Avoid huge perfomance loss on string concatenation

2007-12-06 Thread Andrus
Thank you very much for quick reply.

> can you please give us the types of  dok.kuupaev and dok.kellaaeg?  I
> think a simple fix is possible here.

dok.kuupaev's type is DATE.

dok.kellaaeg's type is character(5) NOT NULL DEFAULT ''
and is used to represent the document time in the format 'hh mm'.

The database encoding is UTF-8, the cluster locale is Estonian, and the OS is
Windows 2003 Server.

>> You provide zero information on the table layout

dok table full definition is below.
What other information do you need ?

>>, and the explain output
>> has been horribly mangled by your MUA.

I used copy and paste from pgAdmin.
I checked my message and it seems that the explain output is OK; I haven't found
any truncation.
So I don't understand this.

>> I would suspect the problem is that there's no index that can be used
>> for that final comparison.

Postgres must use index on kuupaev in both queries.
This index filters out most rows.

>> Do you have an index along the lines of
>> CREATE INDEX dokindex ON dok (kuupaeve||kellaaeg) ?

I don't have this index.
The dok.kuupaev||dok.kellaaeg condition should be applied after the index search
is performed.
It filters out only a small number of rows in addition to the plain kuupaev
filter.
So adding an index on dok.kuupaev||dok.kellaaeg is not reasonable IMHO.

Please confirm that the most reasonable way to fix this is to add this index; if
so, I will add it.

>> Overall, the fact that you're concatenating two text fields to generate a
>> date field tends to suggest that your database schema has some fairly
>> major design problems, but I can only speculate at this point.

This schema was migrated from a DBMS which had no datetime support.
The char(5) field is used to express the time in the form 'hh mm'.
This schema is deployed on a large number of servers.
Changing it would be very expensive: the change requires a huge amount of work
time to re-write applications, create database conversion scripts, re-write
pl/sql triggers, and test and fix new bugs caused by the change.
Andrus.


CREATE TABLE firma1.dok
(
  doktyyp character(1) NOT NULL,
  dokumnr integer NOT NULL DEFAULT nextval('dok_dokumnr_seq'::regclass),
  kuupaev date NOT NULL,
  oper character(3),
  klient character(12),
  laonr numeric(2),
  raha character(3),
  tasudok character(25),
  knr character(10),
  tasukuup date,
  yksus character(10),
  sihtyksus character(10),
  pais2obj character(10),
  saaja character(12),
  krdokumnr integer,
  eimuuda ebool,
  kasutaja character(10),
  username character(10),
  kellaaeg character(5) NOT NULL DEFAULT ''::bpchar,
  arvekonto character(10),
  maksetin character(5),
  exchrate numeric(11,6),
  ratefound date,
  kurss numeric(10,5),
  tekst1 text,
  viitenr character(20),
  objrealt ebool,
  arvenumber character(25),
  pais3obj character(10),
  pais4obj character(10),
  pais5obj character(10),
  pais6obj character(10),
  pais7obj character(10),
  pais8obj character(10),
  pais9obj character(10),
  masin character(5),
  tegmasin character(5),
  guid character(36) NOT NULL,
  doksumma numeric(12,2),
  kinnitatud ebool,
  tasumata numeric(12,2),
  sularaha numeric(12,2),
  kaardimaks numeric(12,2),
  kalkliik character(1),
  kalktoode character(20),
  inventuur ebool,
  algus date,
  lopp date,
  taidetud ebool,
  kaal numeric(7,3),
  "timestamp" character(14) NOT NULL DEFAULT to_char(now(), 
'MMDDHH24MISS'::text),
  vmnr integer,
  tellimus character(25),
  volitaisik character(36),
  liikmesrii character(2),
  tehingulii character(2),
  tarneklaus character(10),
  statprots character(2),
  CONSTRAINT dok_pkey PRIMARY KEY (dokumnr),
  CONSTRAINT dok_arvekonto_fkey FOREIGN KEY (arvekonto)
  REFERENCES firma1.konto (kontonr) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT dok_kalktoode_fkey FOREIGN KEY (kalktoode)
  REFERENCES firma1.toode (toode) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT dok_kasutaja_fkey FOREIGN KEY (kasutaja)
  REFERENCES kasutaja (kasutaja) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT dok_klient_fkey FOREIGN KEY (klient)
  REFERENCES firma1.klient (kood) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT dok_knr_fkey FOREIGN KEY (knr)
  REFERENCES firma1.konto (kontonr) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT dok_krdokumnr_fkey FOREIGN KEY (krdokumnr)
  REFERENCES firma1.dok (dokumnr) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE SET NULL DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT dok_liikmesrii_fkey FOREIGN KEY (liikmesrii)
  REFERENCES riik (kood) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE,
  CONSTRAINT dok_maksetin_fkey FOREIGN KEY (maksetin)
  REFERENCES maksetin (maksetin) MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIA

[GENERAL] Error in creating function

2007-12-06 Thread Yancho
I am trying to create a function, which takes the nearest 3 hospitals
to a point making use of a PostGIS function), and then check each
hospital for the exact distance on roads (by making use of a pgRouting
function).

Below please find a copy of my function, and u can also find it
highlighted here : http://yancho.pastebin.com/f13cc045e

CREATE OR REPLACE FUNCTION near_hospital(text, integer, integer)
  RETURNS integer AS
$BODY$

DECLARE
pojnt ALIAS FOR $1;
box ALIAS FOR $2;
dist ALIAS FOR $3;

distances RECORD;

nearest RECORD;


BEGIN

   nearest.dist := 10;

   FOR distances IN
      select astext(h.the_geom) as hospital_location
      from hospitals h
      where (
         h.the_geom && expand(pointfromtext(pojnt), 10) and
         distance(h.the_geom, pointfromtext(pojnt)) < 15
      )
      order by distance(h.the_geom, pointfromtext(pojnt)) ASC
      limit 3;
   LOOP

      select INTO hospital gid, the_geom, length(the_geom) AS dist
      from shootingstar_sp
         ( 'streets',
           (
              select s.gid from streets s, hospitals h
              where source = (
                 select give_source(distances.hospital_location,10,15))
              limit 1
           ),
           (
              select gid from streets
              where target = (select give_target(pojnt,10,15))
              limit 1
           ),
           5000,
           'length',
           true,
           true
         );

      IF hospital.dist < nearest.dist THEN
         nearest.dist := hospital.dist;
         nearest.gid := hospital.gid;

         select INTO nearest name from hospital h
         where h.gid = hospital.gid;

      END IF;

   END LOOP;

   RETURN nearest.gid;

END;

' language 'plpgsql';


The error being given by pgAdminIII is : unterminated dollar-quoted
string at or near "$BODY$" [then some garbled text] for $1;

Any help will be extremely appreciated!

Thanks and regards

Matthew



Re: [GENERAL] Deadlock when updating table partitions (and presumed solution)

2007-12-06 Thread Paul Boddie
On 5 Des, 05:00, [EMAIL PROTECTED] (Tom Lane) wrote:
>
> Yeah, this is a problem.  The SELECT will acquire AccessShareLock
> on R and P, and subsequently try to acquire AccessShareLock on all
> the inheritance children of P (and I don't think the order in which
> these locks are acquired is very clear).  Meanwhile the ALTER acquires
> AccessExclusiveLock on Pm and R --- probably in that order, though
> I'd not really want to promise that ordering either.  So the potential
> for deadlock is obvious.

Indeed.

> You seem to be hoping that the SELECT would avoid acquiring lock
> on child tables Pn that it didn't need to access, but this cannot be:
> it has to get at least AccessShareLock on those tables before it can
> even examine their constraints to find out that they don't need to be
> scanned.  And even if it could magically not take those locks, the
> deadlock condition still exists with regard to the child table that
> it *does* need to access.

Understood. I was really wondering whether the SELECT would be able to
acquire locks on child tables at the same time as it acquired the lock
on the parent table, but I suppose this isn't an atomic operation: it
first has to acquire a lock to be able to see the constraints; then it
finds referenced tables and attempts to acquire locks on them.

> I guess I'm wondering why you need to be adding foreign key constraints
> during live operations.

This was just some impatience on my part while updating my database: I
was merely inspecting some data which I knew resided in some
partitions whilst some other partitions were being altered. Obviously,
the database system cannot know that some data of interest isn't going
to be found in some partition without checking the properties of that
partition. Consequently, it made sense for me to exclude such
partitions from consideration by the SELECT in order to help it reach
the requested data whilst keeping it out of the way of the alteration
activities.

I suppose the lingering question is this: what constraints should I
drop in order to avoid such problems? Dropping the insert rule from
the parent table for each child table being altered *seems* to
diminish the possibility of deadlock, in that my tests produced no
deadlock situations when I adopted this approach (whereas such
situations were unavoidable before adopting this approach), but
shouldn't I actually be removing the check constraints from the child
tables instead? The manual for 8.1 says that "constraint exclusion is
driven only by CHECK constraints", but my intuition tells me that the
SELECT should initially be driven by the mere existence of tables
inheriting from the parent table and that the insert rules should have
little or nothing to do with it.
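(For reference, a sketch of the kind of child-table CHECK constraint that drives
constraint exclusion; the table and column names below are made up, not from this
thread.)

CREATE TABLE p (logdate date, payload text);

CREATE TABLE p_2007_11 (
    CHECK (logdate >= DATE '2007-11-01' AND logdate < DATE '2007-12-01')
) INHERITS (p);

SET constraint_exclusion = on;  -- must be enabled on 8.1/8.2 before the planner uses the CHECKs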

Paul



Re: [GENERAL] Avoid huge perfomance loss on string concatenation

2007-12-06 Thread Andrus
> I'm not sure what that comment is supposed to mean.
>
> PG is using the index for the condition
> dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'
>
> but there is no index that matches the expression
> dok.kuupaev||dok.kellaaeg BETWEEN '2007-11-01' AND '2007-12-0423 59'
>
> If you look at your explain output, you'll see that step is taking a
> lot of time, and it's inside a nested loop, which means it's run
> repeatedly.

Postgres must use the index to filter out the rows matching the condition

dok.kuupaev BETWEEN '2007-11-01' AND '2007-12-04'

This returns 121 rows.

The additional condition

 dok.kuupaev||dok.kellaaeg BETWEEN '2007-11-01' AND '2007-12-0423 59'

should be evaluated only for these 121 rows.

> Have you run a VACUUM ANALYZE on the tables involved with this query
> recently?  It's possible that PG has outdated statistics and is
> running a poor plan as a result.

I have the following commands at the end of my postgresql.conf file:

stats_start_collector = on
stats_row_level = on
autovacuum = on

So autovacuum should be running.

> Just add the index and rerun to see if it helps.  If it doesn't, then
> drop the index.  I have absolutely no way to investigate this for you.

I tried

CREATE INDEX dok_kuupaev_kellaaeg_idx ON dok ((kuupaev||kellaaeg));

but got error

ERROR: functions in index expression must be marked IMMUTABLE
SQL state: 42P17

How to create such index ?
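(The error happens because the date-to-text cast hidden inside kuupaev||kellaaeg
depends on the DateStyle setting, so the expression is not immutable.  One sketch
of a workaround, assuming 8.2 or later where row-wise comparisons have standard
semantics and can use a multicolumn index:)

CREATE INDEX dok_kuupaev_kellaaeg_idx ON dok (kuupaev, kellaaeg);

SELECT count(*)
FROM dok
WHERE (kuupaev, kellaaeg) >= (DATE '2007-11-01', '00 00')
  AND (kuupaev, kellaaeg) <= (DATE '2007-12-04', '23 59');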

> For example, in your query, you have a text string meant to represent
> a date: '2007-12-0423 59'
>
> This is not a valid date/time, but PostgreSQL has no way to know that
> because it's just a text string.  As a result, you're query is liable
> to give you outright incorrect results.

My fields are kuupaev date and kellaaeg char(5).
kellaaeg is in the format 'hh mm'.

I always compare against kuupaev||kellaaeg.

'2007-12-0423 59' is a valid value for this comparison as I intend it.

I can probably convert kuupaev||kellaaeg to a datetime and use a datetime
comparison instead of this. Will this increase performance?

Andrus. 





[GENERAL] how to redirect output to a file

2007-12-06 Thread pc
Hi,

How to redirect the output of an sql command to a file?
Thanks in advance




Re: [GENERAL] WAL shipping question

2007-12-06 Thread Alvaro Herrera
SHARMILA JOTHIRAJAH wrote:

> How do you install pg_standby. I get the following error when i try the 
> Makefile
> 
> [EMAIL PROTECTED]:~/postgres8.3/pgsql> 
> /export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile

The Makefile is not a shell script.  Run just "make" and then "make
install".

-- 
Alvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4
"You knock on that door or the sun will be shining on places inside you
that the sun doesn't usually shine" (en Death: "The High Cost of Living")



Re: [GENERAL] Vacuum output redirect

2007-12-06 Thread Wim Chalmet
Yes, the write access is fine. I think I have a problem with syntax. I don't
know if there is a command switch on psql to redirect standard output to a
logfile. There is one for query results, but that does not happen to
contain the output of vacuum.
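(One hedged approach: the VERBOSE lines arrive as INFO messages, which psql prints
on stderr rather than standard output, so redirect stderr into the file as well.)

psql -d sirius_lm -U postgres -f run_vacuum.sql > vacuum.log 2>&1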

On 12/4/07, Martin Gainty <[EMAIL PROTECTED]> wrote:
>
>  does psql have write access to the folder?
>
> M-
>
> - Original Message -
> *From:* Wim Chalmet <[EMAIL PROTECTED]>
> *To:* pgsql-general@postgresql.org
> *Sent:* Tuesday, December 04, 2007 10:27 AM
> *Subject:* [GENERAL] Vacuum output redirect
>
>
> Hi,
>
> Sorry for my basic question here, but I can't figure things out. I am
> running postgres in a windows environment. I would like to redirect the
> output from a "vacuum full analyze verbose;" to a text file. How do I do
> this?
>
> I have tried this (the file "run_vacuum.sql" just contains this one line -
> without quotes of course: "vacuum full analyze verbose;" )
> psql -d sirius_lm -U postgres -f run_vacuum.sql -o vacuum.log
>
> And also this:
> psql -d sirius_lm -U postgres -f run_vacuum.sql >vacuum.log
>
> But none of that works. Can anybody help a beginner out? Thanks!
>
> --
> Do you want to make a difference? Issue a microloan at www.kiva.org.
>
>


-- 
Do you want to make a difference? Issue a microloan at www.kiva.org.


[GENERAL] autovacuum is stopping automatically....

2007-12-06 Thread Charles.Hou
this is the pg_log...
after 2007-12-04 10:40:37 CST 15533, it always autovacuums "template0",
not mydatabase...
why?
I didn't change any configuration...

2007-12-04 10:14:55 CST 23858 LOG:  autovacuum: processing database
"mydatabase"
2007-12-04 10:23:15 CST 31601 LOG:  autovacuum: processing database
"postgres"
2007-12-04 10:23:35 CST 31932 LOG:  autovacuum: processing database
"test"
2007-12-04 10:23:55 CST 32234 LOG:  autovacuum: processing database
"template1"
2007-12-04 10:24:15 CST 32477 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:24:35 CST 365 LOG:  autovacuum: processing database
"roller"
2007-12-04 10:24:55 CST 668 LOG:  autovacuum: processing database
"mydatabase"
2007-12-04 10:31:29 CST 6858 LOG:  autovacuum: processing database
"roller"
2007-12-04 10:32:42 CST 8074 LOG:  autovacuum: processing database
"test"
2007-12-04 10:40:37 CST 15533 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:41:02 CST 15957 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:41:27 CST 16277 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:41:52 CST 16710 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:42:16 CST 17095 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:42:41 CST 17455 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:43:06 CST 17848 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:43:31 CST 18257 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:43:55 CST 18648 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:44:20 CST 18966 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:44:45 CST 19356 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:45:10 CST 19767 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:45:34 CST 20140 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:45:58 CST 20527 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:46:22 CST 20905 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:46:46 CST 21316 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:47:10 CST 21634 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:47:35 CST 21996 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:48:00 CST 22399 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:48:24 CST 22748 LOG:  autovacuum: processing database
"template0"
2007-12-04 10:48:49 CST 23136 LOG:  autovacuum: processing database
"template0"



Re: [GENERAL] Avoid huge perfomance loss on string concatenation

2007-12-06 Thread Andrus
> just small correction here...expressions like that in the create index
> need an extra set of parens (but I agree with your sentiment):
> CREATE INDEX dokindex ON dok ((kuupaeve||kellaaeg))

I tried

CREATE INDEX dok_kuupaev_kellaaeg_idx ON dok ((kuupaev||kellaaeg));

but got error

ERROR: functions in index expression must be marked IMMUTABLE
SQL state: 42P17

How to create such index ?

Andrus. 





Re: [GENERAL] Older version of PGSQL help

2007-12-06 Thread Ed Burgstaler
Thank you very much for your help Greg ...
I'll do as you say and install version 7.3 from rpm on a new Centos4.5
system and try to import the dump file.

If I'm successful in getting it working on version 7.3, can you suggest the
next version I should upgrade to after that?

Very grateful

Ed

-Original Message-
From: Greg Smith [mailto:[EMAIL PROTECTED] 
Sent: December 4, 2007 11:54 PM
To: Ed Burgstaler
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Older version of PGSQL help

On Mon, 3 Dec 2007, Ed Burgstaler wrote:

> Would anyone be able to direct me as to whether or not there is a 
> pgadmin for windows utility available that will work for a PostgreSQL 
> v7.0.3 database?

From reading the rest of your message, I think you're under the impression
that you need pgadmin in order to migrate the database to a newer platform.
You do not; the main utility you need is pg_dump, and you should be able to
run it just fine directly from the command prompt (when logged in as the
database user) on your RH6.2 system, or from a newer installation connecting
to the old one.  The output from that is a text file which you can then copy
anywhere, or reload into a newer version (with some restrictions as
discussed below).

Before you do anything else, it would be wise to login to the old server,
run pg_dump to dump everything, and copy that file elsewhere.  That will at
least give you some sort of backup if the whole system dies on you before
you can migrate to a newer version.

On Tue, 4 Dec 2007, Andrew Sullivan wrote:

> My suggestion is to download and compile a more recent release -- 7.3 
> is about to become unsupported, but you may need to go through that 
> version anyway in order to get off 7.0.  Compiling from source isn't 
> hard, but it may be frustrating on such an old installed system

Ed should be able to get 7.3 running on his target CentOS boxes using the
RPMs at ftp://ftp.postgresql.org/pub/binary/v7.3.20/linux/rpms/redhat/

The issue Andrew is bringing up here is that really new PostgreSQL versions
probably aren't necessarily backward compatible talking to or reading dumps
from your 7.0 system, so your odds are better trying to upgrade to 7.3
instead of a really recent one.  But be warned that 7.3 is going away
relatively soon as well.

I think the path of least resistance here is:
1) Pick a target CentOS system.  Make sure there is no current PostgreSQL
installed.
2) Grab the 7.3 RPMs from ftp.postgresql.org, install them
3) Create a new database cluster ('service postgresql start' may be all you
need)
4) Run pg_dump against the old system.  See
http://www.postgresql.org/docs/7.3/static/app-pgdump.html for details. 
You want to do something like

pg_dump <databasename> -h <hostname> > dump.sql

You may need to specify the port and username if they're not at the default
as well.  Hopefully the old server is setup to be accessed over the network
like this.

5) If that goes OK, import into the new version with 'psql -f dump.sql'

That should get you started in the right direction.

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD




Re: [GENERAL] Transaction isolation and constraints

2007-12-06 Thread cliff
Hi, Tom:

>Whichever one manages to get to the index page first will go through.
>The second one will block waiting to see if the first one commits,
>and will error out if so --- or proceed, if it aborts.

I see, this makes sense.  What if the two transactions insert rows
that don't violate the constraint: will they be able to proceed in
parallel?  Or will one wait for the other, because they both need to
update the shared index?  I.e., does the mechanism work by waiting for
one index update to commit before permitting the next?

   --Cliff



Re: [GENERAL] simple update on boolean

2007-12-06 Thread Ivan Sergio Borgonovo
On Thu, 06 Dec 2007 13:38:31 +0100
Cedric Boudin <[EMAIL PROTECTED]> wrote:

> It was indeed a forest problem.
> both:
> 
> set bolean_column= not bolean_column
> and
> set bolean_column= case when bolean_column then 'f'::bool else
> 't'::bool end;
> 
> do work perfectly.

What if boolean_column is NULL?

btw set bolean_column= not bolean_column works "as expected".

template1=# select (not 't'::boolean),(not 'f'::boolean),(not
NULL::boolean);

?column? | ?column? | ?column?
----------+----------+----------
 f        | t        |
(1 riga)


-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it




Re: [GENERAL] WAL shipping question

2007-12-06 Thread SHARMILA JOTHIRAJAH


> The main thing that's improved in 8.3 is the integration of pg_standby as
> a more rugged restore_command than most people were coding on their own:
>
> http://www.postgresql.org/docs/8.3/static/pgstandby.html
>
> You should use it instead of the example restore.sh included in the
> message I referenced above.


How do you install pg_standby. I get the following error when i try the Makefile

[EMAIL PROTECTED]:~/postgres8.3/pgsql> 
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 3: 
PROGRAM: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 4: OBJS: 
command not found
/export/home/sjpostgresql-8.3beta3/contrib/pg_standby/Makefile: line 6: 
libpq_srcdir: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 6: 
PG_CPPFLAGS: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 7: 
libpq_pgport: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 7: 
PG_LIBS: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 9: ifdef: 
command not found
/export/home/sjpostgresql-8.3beta3/contrib/pg_standby/Makefile: line 10: 
PG_CONFIG: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 11: 
PG_CONFIG: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 11: 
shell: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 11: PGXS: 
command not found
/export/home/sjpostgresql-8.3beta3/contrib/pg_standby/Makefile: line 12: PGXS: 
command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 12: 
include: command not found
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 13: 
syntax error near unexpected token `else'
/export/home/sj/postgresql-8.3beta3/contrib/pg_standby/Makefile: line 13: `else'

Thanks
sharmila




  


Re: [GENERAL] simple update on boolean

2007-12-06 Thread Cedric Boudin
Cedric Boudin wrote:
> Most honourable members of the list,
>
> this is a simple one, but I can't find the solution ( probably a
> forest/tree problem).
>
> update table set bolean_column = set_it_to_its_inverse where fk =
> some_value;
>
> or join me in the dark forest
>
> cedric
>
>
>   
It was indeed a forest problem.
both:

set bolean_column= not bolean_column
and
set bolean_column= case when bolean_column then 'f'::bool else 't'::bool
end;

do work perfectly.

Thanks for bringing me the light in the forest.
Working late isn't that effective!

cedric




Re: [GENERAL] libpq messages language

2007-12-06 Thread Usama Dar
On Dec 6, 2007 8:03 AM, Efraín López <[EMAIL PROTECTED]> wrote:

> Thank you for your reply
>
> but I got the error 'LC_MESSAGES' : undeclared identifier
>
> locale.h only defines LC_COLLATE, LC_CTYPE, LC_MONETARY, LC_NUMERIC,
> LC_TIME
>
> I tried to set a system variable LC_MESSAGES, but didn't work
>
> Then, I tried to find more information
>
>
> In libpq, when ENABLE_NLS is not defined,
> #define libpq_gettext(x) (x)
>
> If I set ENABLE_NLS to 1, then I think I need the gettext library, because
> it needs 
>
> so, there is no simple way in windows to get messages in spanish within
> libpq before connecting to server, is correct?


Well, libpq, like the rest of Postgres, uses gettext for i18n, so I think you
need your Windows locale set to Spanish, the gettext library installed, and
Spanish message catalogs available on your system (which they will be if you
compiled the source with the --enable-nls configure option). Then it doesn't
matter whether the connection to the server was made or not, since libpq has
its own translated message catalog.




-- 
Usama Munir Dar http://linkedin.com/in/usamadar
Consultant Architect
Cell:+92 321 5020666
Skype: usamadar


Re: [GENERAL] Import LDAP data to a Postgres database

2007-12-06 Thread Albe Laurenz
Marco Curtolo wrote:
> Hi, my question is this: it is possible to query, using SQL, 
> an LDAP server and put these data to a table of a Postgres 
> database (under linux) ???

Yes, by writing a function in C that uses the LDAP API.

Yours,
Laurenz Albe



Re: [GENERAL] simple update on boolean

2007-12-06 Thread Frank Millman
Cedric Boudin wrote:
> 
> Most honourable members of the list,
> 
> this is a simple one, but I can't find the solution ( 
> probably a forest/tree problem).
> 
> update table set bolean_column = set_it_to_its_inverse where 
> fk = some_value;
> 

I am usually a newbie around here, but this is one that I can answer :-)

update table set boolean_column = not boolean_column where fk = some_value

HTH

Frank Millman


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Orthodox use of PQtransactionStatus

2007-12-06 Thread luca . ciciriello
Hi All.
Does somebody know where (some internet sites or sample code) I can get a good
understanding of the use of the statuses PQTRANS_IDLE, PQTRANS_ACTIVE,
PQTRANS_INTRANS, PQTRANS_INERROR, and PQTRANS_UNKNOWN returned by the libpq
function PQtransactionStatus?
I need to use these flags in order to discipline some BEGIN-COMMIT blocks in a
multithreaded Windows/Linux environment.

Each thread uses a separate db connection and shouldn't have any interaction
with the other threads, but I still get messages such as "WARNING: There is
already a transaction in progress" and "WARNING: There is no transaction in
progress". A further piece of information: the DB operation involved in each
thread is a store into a table, and this table is the same for each thread.

Thanks in advance.

Luca. 
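
A hypothetical sketch of such a guard (an illustration, not code from this thread): PQtransactionStatus() reports the state of one connection, so with one PGconn per thread a helper can skip BEGIN when a transaction is already open and roll back one that is in the error state. The helper name and the sample table are invented.

#include <stdio.h>
#include <libpq-fe.h>

/* Send BEGIN only when this connection really is idle; returns 1 when a
 * transaction is open afterwards, 0 on failure. */
static int begin_if_idle(PGconn *conn)
{
    PGresult *res;

    switch (PQtransactionStatus(conn))
    {
        case PQTRANS_IDLE:          /* nothing open: safe to BEGIN */
            res = PQexec(conn, "BEGIN");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
            {
                fprintf(stderr, "BEGIN failed: %s", PQerrorMessage(conn));
                PQclear(res);
                return 0;
            }
            PQclear(res);
            return 1;

        case PQTRANS_INTRANS:       /* already in a transaction: don't BEGIN twice */
            return 1;

        case PQTRANS_INERROR:       /* failed transaction: clean it up first */
            res = PQexec(conn, "ROLLBACK");
            PQclear(res);
            return begin_if_idle(conn);

        default:                    /* PQTRANS_ACTIVE or PQTRANS_UNKNOWN */
            return 0;
    }
}

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");      /* one connection per thread */

    if (PQstatus(conn) == CONNECTION_OK && begin_if_idle(conn))
    {
        PQclear(PQexec(conn, "insert into jobs default values"));  /* invented table */
        PQclear(PQexec(conn, "COMMIT"));
    }
    PQfinish(conn);
    return 0;
}

Those WARNINGs are what the server emits when BEGIN arrives inside an already-open transaction or COMMIT arrives outside one, so seeing them despite one connection per thread usually means two threads really are sharing a PGconn or a BEGIN/COMMIT pair got out of step.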



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] simple update on boolean

2007-12-06 Thread Daniel Drotos

On Thu, 6 Dec 2007, A. Kretschmer wrote:


> update table set bolean_column = set_it_to_its_inverse where fk =
> some_value;


I've used:

update table set bolean_column = not bolean_column where fk =
some_value;

which has worked well for me.

Daniel

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] simple update on boolean

2007-12-06 Thread A. Kretschmer
On Thu, 06 Dec 2007 at 10:25:26 +0100, Cedric Boudin wrote the following:
> Most honourable members of the list,
> 
> this is a simple one, but I can't find the solution ( probably a
> forest/tree problem).
> 
> update table set bolean_column = set_it_to_its_inverse where fk =
> some_value;

test=# create table forrest (id int, b bool);
CREATE TABLE
test=*# insert into forrest values (1,'f'::bool);
INSERT 0 1
test=*# insert into forrest values (2,'t'::bool);
INSERT 0 1
test=*# update forrest set b = case when b then 'f'::bool else 't'::bool end;
UPDATE 2
test=*# select * from forrest ;
 id | b
----+---
  1 | t
  2 | f
(2 rows)


Okay?



Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] simple update on boolean

2007-12-06 Thread Cedric Boudin
Most honourable members of the list,

this is a simple one, but I can't find the solution ( probably a
forest/tree problem).

update table set bolean_column = set_it_to_its_inverse where fk =
some_value;

or join me in the dark forest

cedric


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Import LDAP data to a Postgres database

2007-12-06 Thread Magnus Hagander
On Thu, Dec 06, 2007 at 09:42:19AM +0100, Marco Curtolo wrote:
> 
> Hi, my question is this: is it possible to query, using SQL, an LDAP server
> and put that data into a table of a Postgres database (under Linux)?

Depending on exactly what you need to do, check out dblink-ldap
(http://pgfoundry.org/projects/dblink-ldap/)

//Magnus


---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Import LDAP data to a Postgres database

2007-12-06 Thread Andrej Ricnik-Bay
On 12/6/07, Marco Curtolo <[EMAIL PROTECTED]> wrote:
> Hi, my question is this: is it possible to query, using SQL, an LDAP server
> and put that data into a table of a Postgres database (under Linux)?
I don't know of any LDAP server implementations that can
be queried via SQL. Getting LDAP tree data into an
RDBMS can be done, but it won't be trivial once you get
to tree objects that have more than one value per attribute,
i.e. you'll have to map those attributes into separate tables
(for example, a person table plus a person_mail table holding
one row per mail value).




> Thanks to all.
Cheers,
Andrej

-- 
Please don't top post, and don't use HTML e-Mail :}  Make your quotes concise.

http://www.american.edu/econ/notes/htmlmail.htm

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Import LDAP data to a Postgres database

2007-12-06 Thread Marco Curtolo

Hi, my question is this: is it possible to query, using SQL, an LDAP server and
put that data into a table of a Postgres database (under Linux)?
Thanks to all.
