Re: [GENERAL] Data types for IP address.

2011-02-24 Thread Gaini Rajeshwar
On Thu, Feb 24, 2011 at 3:03 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 John R Pierce pie...@hogranch.com writes:
  On 02/23/11 4:44 AM, Stephane Bortzmeyer wrote:
  *3. Start-End IP format :*  1.2.3.0-1.2.3.255
  You don't even need to program the conversion, it is already done:
 
  % netmask 1.2.3.0:1.2.3.255
  1.2.3.0/24

 yes, but what about 10.1.2.57-10.1.2.123?  Presumably valid in his
  range system, and certainly NOT a valid CIDR range.




 The question is does he actually have a use-case for address ranges that
 don't correspond to legal CIDR ranges, but do nonetheless have an
 identifiable lower boundary, upper boundary, and no holes?  And if so,
 what is it?  The whole thing looked to me like somebody inventing
 requirements with little or no study of what they really needed.


   I have customers who wanted to access the application from different
locations without using login credentials every time. So they wanted to
register their IP addresses and have automated authentication for them. As
I don't know exactly what their IP addresses are going to be, I am assuming
they might have an IP address range that is not a valid CIDR block.
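A minimal sketch of one way to store such arbitrary ranges, using two plain
inet columns (table and column names here are invented):

CREATE TABLE registered_ranges (
    customer_id integer NOT NULL,
    ip_start    inet NOT NULL,
    ip_end      inet NOT NULL
);

-- inet values compare in address order, so BETWEEN also covers
-- ranges that are not valid CIDR blocks
SELECT customer_id
FROM registered_ranges
WHERE '10.1.2.60'::inet BETWEEN ip_start AND ip_end;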


regards, tom lane

 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] Data types for IP address.

2011-02-24 Thread Gaini Rajeshwar
On Thu, Feb 24, 2011 at 1:10 AM, Michael Glaesemann g...@seespotcode.netwrote:


 On Feb 23, 2011, at 13:49, John R Pierce wrote:

  On 02/23/11 4:44 AM, Stephane Bortzmeyer wrote:
  *3. Start-End IP format :*  1.2.3.0-1.2.3.255
  You don't even need to program the conversion, it is already done:
 
  % netmask 1.2.3.0:1.2.3.255
  1.2.3.0/24
 
  yes, but what about 10.1.2.57-10.1.2.123?  Presumably valid in his
 range system, and certainly NOT a valid CIDR range.

 If it hasn't been mentioned already, take a look at ip4r. 
 http://pgfoundry.org/projects/ip4r/


I will look into it. May I know how to use this data type? It is not
available with PostgreSQL 8.4, right?
I guess I have to download it and integrate it into PostgreSQL; any
suggestions on how to do that?
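For reference, once the ip4r module's SQL script has been loaded into the
database (for 8.4 it is a separate download, installed like other contrib
modules), usage looks roughly like this sketch (table and column names
invented):

CREATE TABLE allowed (customer_id integer, iprange ip4r);
INSERT INTO allowed VALUES (1, '10.1.2.57-10.1.2.123');

-- >>= tests whether the stored range contains the given address
SELECT customer_id FROM allowed WHERE iprange >>= '10.1.2.60'::ip4;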


 Michael Glaesemann
 grzm seespotcode net




 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general



Re: [GENERAL] schema Cleaning

2011-02-24 Thread salah jubeh
Hello, 

Exactly, I want to drop unused tables, views, etc. I am now writing a shell 
script to handle this by analyzing the log files. If you have suggestions 
or comments I will be glad to hear them.

Regards  

 





From: Michael Black michaelblack75...@hotmail.com
To: s_ju...@yahoo.com
Sent: Wed, February 23, 2011 11:57:01 PM
Subject: RE: [GENERAL] schema Cleaning

 I do not know what your criteria are for dropping things.  But I would be very 
sure that something is no longer used.  For example, some tables are used only 
once a quarter or once a year.  And the same is true of views and 
procedures.  But that would also depend on the type of data, like accounting or 
inventory versus customer.  Just some thoughts.

Michael


Date: Wed, 23 Feb 2011 00:10:03 -0800
From: s_ju...@yahoo.com
Subject: Re: [GENERAL] schema Cleaning
To: s_ju...@yahoo.com; pgsql-general@postgresql.org


Hello Guys,

This is a simple way to check if there is no activity on the table, 

SELECT relname,seq_tup_read,idx_tup_fetch  FROM pg_stat_user_tables 
WHERE (idx_tup_fetch + seq_tup_read)= 0;


Is there any way to make similar checks on views?
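There are no per-view usage statistics, but the system catalogs can at least
list the views that depend on a given table; a rough sketch ('some_table' is
a placeholder):

SELECT DISTINCT v.relname AS view_name
FROM pg_depend d
JOIN pg_rewrite r ON r.oid = d.objid
JOIN pg_class v ON v.oid = r.ev_class
WHERE d.classid = 'pg_rewrite'::regclass
  AND d.refobjid = 'some_table'::regclass
  AND v.relkind = 'v';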

Regards 


 





From: salah jubeh s_ju...@yahoo.com
To: pgsql-general@postgresql.org
Sent: Mon, February  21, 2011 4:53:41 PM
Subject: [GENERAL] schema Cleaning


Hello,

I am trying to drop deprecated tables, views, and procedures from an existing 
database. I am checking the pg_stat_user_tables view to see whether tables have 
been accessed recently, and also to check live tuples. For deprecated views and 
procedures, I am wondering how I can do that; for example, suppose a table is 
used and there is a view defined on it, but no one is using the view. Is there 
a way to check view statistics? 


Regards  


  

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread tv
Hi Aleksey,

I've read your previous post, and although I'm not quite sure what the
root cause is, I have some questions and (maybe wild) guesses ...

1) Are those two machines (primary and DR) exactly the same? I mean CPU,
filesystem (including blocksize etc.)?

2) What about database encoding? I think ASCII on the primary and UTF-8
on the DR might result in such a difference in size (but maybe I'm
wrong).

3) How exactly have you measured the size of the database? Filesystem
(using 'du'), or from pg_* views?

4) The fact that a dump file is significantly smaller than the database is
not a big surprise - the reason is that some values are 'shorter' in ASCII
than in their original form (e.g. the 32-bit integer 0 is encoded as the
single char '0', which means 1B instead of 4B).

5) Have you tried to use pageinspect contrib module? It allows you to find
out exactly how the data items are organized on a page, so I'd recommend
this:

   a) compare table sizes using system catalogs, select the one with
greatest difference

   b) use pageinspect to display details about one block of the table in
both databases and compare the result
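For example, with the contrib module installed, a sketch of both steps
('big_table' is a placeholder):

-- a) table sizes from the catalogs, biggest first
SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
FROM pg_class WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC LIMIT 10;

-- b) raw item layout of block 0 of the suspect table
SELECT * FROM heap_page_items(get_raw_page('big_table', 0));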

Maybe this will help you to find out the real cause.

regards
Tomas

 I've dumped the big table from the original database (where it is 61 GB in
 size) and am restoring it into a test database to see what the size is
 after the restore.

 As it is now, our DR is offline because we did not expect the database to
 nearly double in size upon COPY of the data into a new database.  We would
 like to understand what is going on, and would like to not have such a
 swell of data upon transfer.  Is there anything I can do, please?

 Best,
 Aleksey

 On Tue, Feb 22, 2011 at 12:44 PM, Aleksey Tsalolikhin
 atsaloli.t...@gmail.com wrote:
 Hi.  Last week our 60 GB database (per psql \l+) was (re-)replicated to
 the DR site using Slony-I, and arrived 109 GB in size, which caused a
 problem as it filled up the filesystem on the DR server - we expected
 the
 DR database to be the same size.  Mystery.

 Now just past weekend we upgraded our production server by pg_dump
 and pg_restore, and again the database is 109 GB in size!

 Most of our data is in a single table, which on the old server is 50 GB
 in
 size and on the new server is 100 GB in size.

 Could you please help us understand why a COPY of the data into a new
 database (whether DR or the new server) results in different disk usage?

 Somebody mentioned on the Slony users list that there is a kind of
 padding
 that goes in that actually helps performance.

 Is there a way to track disk usage MINUS the padding?

 Thanks,
 Aleksey


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] could not accept SSPI security context

2011-02-24 Thread Ahmed Shinwari
On Sat, Feb 19, 2011 at 11:31 PM, Brar Piening b...@gmx.de wrote:

 On Thu, 17 Feb 2011 07:58:46 -0800 (PST), Ahmed ahmed.shinw...@gmail.com
 wrote:

 I tried changing that one line to use UTF-8 encoder, but the password
 packet
 didn't get fixed. It works smoothly if kept in byte array instead of
 string.

 Yes, as SSPI uses binary bytes we have to avoid converting them to
 characters and back to bytes during message generation.

 While char *buffer in C can serve as both a string and a buffer of binary
 bytes, we have byte[] as binary buffer and string as string in c#.
 This is the reason why we need to use byte[] in all places where libpq uses
 char* without really caring whether there is a string inside or some opaque
 bytes.


  I think changing the AuthenticationSSPI case to use context.Host may break
 the cases Brar mentioned.

 If this isn't necessary to fix the problem we should probably get some more
 insight into what really happens here before fixing one end and breaking the
 other.

I agree.


For now, Francisco could check in the part that fixes the encoding bug, and
address the latter issue after further investigation.




 Regards,

 Brar



Re: [GENERAL] ldap authentication multiple ou objects

2011-02-24 Thread Sim Zacks

On 02/23/2011 10:27 PM, Magnus Hagander wrote:


On Wed, Feb 23, 2011 at 11:43, Sim Zackss...@compulab.co.il  wrote:

Is there a way to do ldap authentication in pg_hba on a structure that has
multiple ou objects?

Lets say I have an ou=Users and then an ou per dept.

I want the ldap to do authentication no matter which ou the user is in.


My current ldap string is:

ldap://ldap.server.local/Users;uid=;,ou=Users,dc=server,dc=local

and it works if the user is only in the ou=Users but if he is in any sub ou
it gives authentication failed.

Yes, if you use the search+bind mode you can search across a
hierarchy. See 
http://www.postgresql.org/docs/9.0/static/auth-methods.html#AUTH-LDAP,
look under search+bind.
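For 9.0, a search+bind entry in pg_hba.conf looks roughly like this (a
sketch adapted to the names above; the address range is invented):

host  all  all  192.168.0.0/16  ldap  ldapserver=ldap.server.local ldapbasedn="dc=server,dc=local" ldapsearchattribute=uid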

I guess that's new in 9.0. We're still stuck in 8.2-land for the time being.
Thanks

Sim

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mysql to Postgresql

2011-02-24 Thread Linas Virbalas
2011/2/22 Adarsh Sharma adarsh(dot)sharma(at)orkash(dot)com:
 Dear all,

 Today I need to back up a MySQL database and restore it into a PostgreSQL
 database, but I don't know how to achieve this accurately.

In addition to the other suggestions, you could also use the open source
Tungsten Replicator, which has real-time MySQL to PostgreSQL replication
capabilities. If you only need to do this once, it may be overkill in your
case. On the other hand, if you need to migrate with minimum downtime and
QA, you might want to check it out.

We did a webinar a week ago; you can watch the recording at
http://www.continuent.com/news/webinars - search for "Move Your Data In
Real-time From MySQL To PostgreSQL And Greenplum".

In a nutshell, it looks like this:
(1) Manual work to prepare the empty schema on PG
(2) A new MySQL instance with row replication enabled
(3) Dump your MySQL backup into this MySQL instance with row replication
(4) Tungsten Replicator will extract all incoming row change events of each
transaction and apply them to PG one by one, taking care of DBMS-specific
conversions under the hood (there are quite a few).

Hope this helps,

Sincerely,
Linas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL database design for a large company

2011-02-24 Thread Vick Khera
On Wed, Feb 23, 2011 at 9:59 PM, Kalai R softlinne...@gmail.com wrote:
 Is it possible to maintain in a single database for all years of data?

Yes it is possible.  But nobody can answer that for your specific
situation without knowing the amount of data and how you plan to use
it.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ldap authentication multiple ou objects

2011-02-24 Thread Sim Zacks

On 02/24/2011 12:51 AM, Michael Black wrote:

Look at the Search Filters and LDAP URL sections of 
http://quark.humbug.org.au/publications/ldap/ldap_tut.html . There are 
some samples of wildcard filters there.
I tried a number of possibilities for the ldap url based on the LDAP URL 
section and I'm guessing that pg 8.2 doesn't support the ?scope 
variable. According to the documentation it is supported in 9.0 but not 
in the same format as the RFC documentation in the link you sent me.


Thanks
Sim


 Date: Wed, 23 Feb 2011 12:43:45 +0200
 From: s...@compulab.co.il
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] ldap authentication multiple ou objects

 Is there a way to do ldap authentication in pg_hba on a structure that
 has multiple ou objects?

 Lets say I have an ou=Users and then an ou per dept.

 I want the ldap to do authentication no matter which ou the user is in.


 My current ldap string is:

 ldap://ldap.server.local/Users;uid=;,ou=Users,dc=server,dc=local

 and it works if the user is only in the ou=Users but if he is in any 
sub

 ou it gives authentication failed.


 Thanks

 SIm


 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general




Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread marcin mank
On Tue, Feb 22, 2011 at 9:44 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:

 Most of our data is in a single table, which on the old server is 50 GB in
 size and on the new server is 100 GB in size.


Maybe the table on the new server has a fillfactor of less than 100?

Greetings
Marcin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Recreate database but old tables still present

2011-02-24 Thread gvim

PostgreSQL 9.0.3/Mac OS X 10.6.6

I need to recreate a database at regular intervals and what normally works is:

user$: dropdb -U myuser -h localhost mydb;

user $: psql -U myuser -h localhost mydb;
psql: FATAL:  database "mydb" does not exist

user$: createdb -U myuser -h localhost mydb;

Now, for some reason, when I recreate the database the old tables are still 
present. I've tried the same procedure using a psql login but the result is the 
same.

gvim



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recreate database but old tables still present

2011-02-24 Thread Vibhor Kumar

On Feb 24, 2011, at 9:19 PM, gvim wrote:

 Now, for some reason, when I recreate the database the old tables are still 
 present. I've tried the same procedure using a psql login but the result is 
 the same.



This can happen if the template1 database has those tables. Check whether those 
tables exist in template1.
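If they do, one workaround is to clone from template0 instead; a sketch
(the first command lists the tables in template1, the second recreates
mydb from the pristine template0):

psql -U myuser -h localhost template1 -c '\dt'
createdb -U myuser -h localhost -T template0 mydb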

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Cell: +91-932-568-2279
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Recreate database but old tables still present

2011-02-24 Thread Guillaume Lelarge
On 24/02/2011 16:49, gvim wrote:
 PostgreSQL 9.0.3/Mac OS X 10.6.6
 
 I need to recreate a database at regular intervals and what normally
 works is:
 
 user$: dropdb -U myuser -h localhost mydb;
 
 user $: psql -U myuser -h localhost mydb;
 psql: FATAL:  database "mydb" does not exist
 
 user$: createdb -U myuser -h localhost mydb;
 
 Now, for some reason, when I recreate the database the old tables are
 still present. I've tried the same procedure using a psql login but the
 result is the same.
 

You probably created your tables in the template1 database. Check this
database if they appear here too.


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Richard Huxton

On 24/02/11 15:24, marcin mank wrote:

On Tue, Feb 22, 2011 at 9:44 PM, Aleksey Tsalolikhin
atsaloli.t...@gmail.com  wrote:


Most of our data is in a single table, which on the old server is 50 GB in
size and on the new server is 100 GB in size.



Maybe the table the on new server has fillfactor less than 100 ?


That would be my immediate guess. Someone changed the fillfactor on the 
table - that won't affect the existing data but will affect a restore.
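If a non-default fillfactor does turn out to be the cause, a sketch of
undoing it (table and index names invented; the ALTER only affects pages
written afterwards, so the table must then be rewritten):

ALTER TABLE big_table SET (fillfactor = 100);
CLUSTER big_table USING big_table_pkey;  -- rewrites the table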


--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] regexp problem

2011-02-24 Thread Gauthier, Dave
I want to include '[', ']', and '.' in the list of permitted chars in a regexp.  
This doesn't seem to work...


select 'abc.def[0]' ~ E'^[a-zA-Z0-9_\.\[\]]+$';
 ?column?
----------
 f
(1 row)

Help!


[GENERAL] Adding a column with constraint

2011-02-24 Thread Alexander Farber
Hello,

I have a paranoic question.

In PostgreSQL 8.4.7 I had a table to store started, completed and
interrupted games :

# \d pref_match
                          Table "public.pref_match"
  Column   |         Type          |                Modifiers
-----------+-----------------------+------------------------------------------
 id        | character varying(32) |
 started   | integer               | default 0
 completed | integer               | default 0
 quit      | integer               | default 0
 yw        | character(7)          | default to_char(now(), 'IYYY-IW'::text)
Check constraints:
    "pref_match_completed_check" CHECK (completed >= 0)
    "pref_match_quit_check" CHECK (quit >= 0)
    "pref_match_started_check" CHECK (started >= 0)
Foreign-key constraints:
    "pref_match_id_fkey" FOREIGN KEY (id) REFERENCES pref_users(id)

And have tried to add a column win with a check (to document the won games):

# alter table pref_match add column win integer default 0 check
(completed >= win and win >= 0);

Now I have:

# \d pref_match
                          Table "public.pref_match"
  Column   |         Type          |                Modifiers
-----------+-----------------------+------------------------------------------
 id        | character varying(32) |
 started   | integer               | default 0
 completed | integer               | default 0
 quit      | integer               | default 0
 yw        | character(7)          | default to_char(now(), 'IYYY-IW'::text)
 win       | integer               | default 0
Check constraints:
    "pref_match_check" CHECK (completed >= win AND win >= 0)
    "pref_match_completed_check" CHECK (completed >= 0)
    "pref_match_quit_check" CHECK (quit >= 0)
    "pref_match_started_check" CHECK (started >= 0)
Foreign-key constraints:
    "pref_match_id_fkey" FOREIGN KEY (id) REFERENCES pref_users(id)

Shouldn't the line

    "pref_match_check" CHECK (completed >= win AND win >= 0)

above actually be:

    "pref_match_win_check" CHECK (completed >= win AND win >= 0)

? Does it indicate something went wrong or is it just a cosmetic issue?

Thank you
Alex

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] regexp problem

2011-02-24 Thread Steve Crawford

On 02/24/2011 10:25 AM, Gauthier, Dave wrote:

select 'abc.def[0]' ~ E'^[a-zA-Z0-9_\.\[\]]+$';


Try:
E'^[a-zA-Z0-9._\\[\\]]+$'

The outer level of parsing turns that into '^[a-zA-Z0-9._\[\]]+$' 
which is the regex you want. Also, I'm *pretty sure* you don't need to 
escape the '.' within a character class.


Cheers,
Steve



Re: [GENERAL] Adding a column with constraint

2011-02-24 Thread Andrew Sullivan
On Thu, Feb 24, 2011 at 07:30:32PM +0100, Alexander Farber wrote:
 Shouldn't the line
 
     "pref_match_check" CHECK (completed >= win AND win >= 0)
 
 above actually be:
 
     "pref_match_win_check" CHECK (completed >= win AND win >= 0)
 
 ? Does it indicate something went wrong or is it just a cosmetic issue?

That's its name.  It could be fredsredwagon.  If you want it to have
a specific name, then name it when you create the constraint.  This is
a little hard to understand from the manual, because of all the square
brackets and such, but I encourage you to experiment with the manual
open to see exactly what everything does.

I haven't checked just this instant, but I think you can rename the
constraint if you don't like its name.
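Concretely, a sketch of giving it the name you want (8.4 has no RENAME
CONSTRAINT, so drop and re-create):

ALTER TABLE pref_match DROP CONSTRAINT pref_match_check;
ALTER TABLE pref_match
    ADD CONSTRAINT pref_match_win_check CHECK (completed >= win AND win >= 0);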

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding a column with constraint

2011-02-24 Thread David Johnston
A column constraint can only reference its own column.  Since you are
referencing completed in the CHECK it implicitly converts the Column
constraint into a Table constraint - and table constraints do not reference
the name of a column like a column constraint does during name
auto-generation.

David J.

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Alexander Farber
Sent: Thursday, February 24, 2011 1:31 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Adding a column with constraint

Hello,

I have a paranoic question.

In PostgreSQL 8.4.7 I had a table to store started, completed and
interrupted games :

# \d pref_match
                          Table "public.pref_match"
  Column   |         Type          |                Modifiers
-----------+-----------------------+------------------------------------------
 id        | character varying(32) |
 started   | integer               | default 0
 completed | integer               | default 0
 quit      | integer               | default 0
 yw        | character(7)          | default to_char(now(), 'IYYY-IW'::text)
Check constraints:
    "pref_match_completed_check" CHECK (completed >= 0)
    "pref_match_quit_check" CHECK (quit >= 0)
    "pref_match_started_check" CHECK (started >= 0)
Foreign-key constraints:
    "pref_match_id_fkey" FOREIGN KEY (id) REFERENCES pref_users(id)

And have tried to add a column win with a check (to document the won
games):

# alter table pref_match add column win integer default 0 check (completed
>= win and win >= 0);

Now I have:

# \d pref_match
                          Table "public.pref_match"
  Column   |         Type          |                Modifiers
-----------+-----------------------+------------------------------------------
 id        | character varying(32) |
 started   | integer               | default 0
 completed | integer               | default 0
 quit      | integer               | default 0
 yw        | character(7)          | default to_char(now(), 'IYYY-IW'::text)
 win       | integer               | default 0
Check constraints:
    "pref_match_check" CHECK (completed >= win AND win >= 0)
    "pref_match_completed_check" CHECK (completed >= 0)
    "pref_match_quit_check" CHECK (quit >= 0)
    "pref_match_started_check" CHECK (started >= 0)
Foreign-key constraints:
    "pref_match_id_fkey" FOREIGN KEY (id) REFERENCES pref_users(id)

Shouldn't the line

    "pref_match_check" CHECK (completed >= win AND win >= 0)

above actually be:

    "pref_match_win_check" CHECK (completed >= win AND win >= 0)

? Does it indicate something went wrong or is it just a cosmetic issue?

Thank you
Alex

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Aleksey Tsalolikhin
Hi.  Thanks for your replies.

How do I check the fillfactor on the table, please?

(http://www.postgresql.org/docs/8.4/static/sql-createtable.html tells me how
to set it, but I haven't yet found how to read it.)

Same CPU, same filesystem, same blocksize - identical systems.  Same model
of server.  We made them identical on purpose.

The way I check table size including TOAST and Indexes is:

SELECT relname as Table,
pg_size_pretty(pg_total_relation_size(relid)) As Size from
pg_catalog.pg_statio_user_tables ORDER BY
pg_total_relation_size(relid) DESC;

My largest table is 50 GB in size; when I pg_dump it, and then
pg_restore it, it becomes 100 GB in size.

How do I pg_restore it so that it is 50 GB in size?  Is it a setting
to pg_dump or to pg_restore?

Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Adding a column with constraint

2011-02-24 Thread Alexander Farber
On Thu, Feb 24, 2011 at 8:02 PM, David Johnston pol...@yahoo.com wrote:
 A column constraint can only reference its own column.  Since you are
 referencing completed in the CHECK it implicitly converts the Column
 constraint into a Table constraint - and table constraints do not reference
 the name of a column like a column constraint does during name
 auto-generation.

Oh ok, that's a good explanation, thank you



 # alter table pref_match add column win integer default 0 check (completed
 >= win and win >= 0);

 Now I have:

 # \d pref_match
                           Table "public.pref_match"
   Column   |         Type          |                Modifiers
 -----------+-----------------------+------------------------------------------
  id        | character varying(32) |
  started   | integer               | default 0
  completed | integer               | default 0
  quit      | integer               | default 0
  yw        | character(7)          | default to_char(now(), 'IYYY-IW'::text)
  win       | integer               | default 0
 Check constraints:
     "pref_match_check" CHECK (completed >= win AND win >= 0)
     "pref_match_completed_check" CHECK (completed >= 0)
     "pref_match_quit_check" CHECK (quit >= 0)
     "pref_match_started_check" CHECK (started >= 0)
 Foreign-key constraints:
     "pref_match_id_fkey" FOREIGN KEY (id) REFERENCES pref_users(id)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] array size

2011-02-24 Thread akp geek
Hi all -

  I am trying to find the number of elements in the array. Right
now I am using array_upper and array_lower. Is there any other way of
getting the number of elements?

thanks for the help

Regards


Re: [GENERAL] array size

2011-02-24 Thread David Johnston
It may help to specify why you feel that array_upper and array_lower are
insufficient for your use.  I mean, you could count the rows returned by
unnest( array ), but whether that is better or worse than array_upper really
depends on your needs.
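In runnable form that idea is a sketch like:

SELECT count(*) FROM unnest(ARRAY[10,20,30]) AS t;  -- returns 3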

 

David J.

 

From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of akp geek
Sent: Thursday, February 24, 2011 2:17 PM
To: pgsql-general
Subject: [GENERAL] array size

 

Hi all -

 

  I am trying to find the number of elements in the array. Right
now I am using array_upper and array_lower. Is there any other way of
getting the number of elements?

 

thanks for the help

 

Regards



Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread John R Pierce

On 02/24/11 11:02 AM, Aleksey Tsalolikhin wrote:


How do I check the fillfactor on the table, please?


It's in the reloptions field of pg_class, so...

select reloptions from pg_class where relname='tablename';

if tablename is non-unique, you'll need to qualify that with the OID of 
the namespace (aka schema).
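For instance, a sketch with a placeholder schema name:

SELECT c.reloptions
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'tablename'
  AND n.nspname = 'public';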



My largest table is 50 GB in size; when I pg_dump it, and then
pg_restore it, it becomes 100 GB in size.

How do I pg_restore it so that it is 50 GB in size?  Is it a setting
to pg_dump or to pg_restore?



are you truncating the table before restoring, or is this a restore into 
a new database, or what?





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] regexp problem

2011-02-24 Thread Gauthier, Dave
Yup, that did it.  And you're right, you don't need to escape the '.'.
So the extra \ is needed because of the single-quoted string.  Ah.
:-)

Thanks Steve !



From: Steve Crawford [mailto:scrawf...@pinpointresearch.com]
Sent: Thursday, February 24, 2011 1:40 PM
To: Gauthier, Dave
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] regexp problem

On 02/24/2011 10:25 AM, Gauthier, Dave wrote:
select 'abc.def[0]' ~ E'^[a-zA-Z0-9_\.\[\]]+$';

Try:
E'^[a-zA-Z0-9._\\[\\]]+$'

The outer level of parsing turns that into '^[a-zA-Z0-9._\[\]]+$' which is 
the regex you want. Also, I'm *pretty sure* you don't need to escape the '.' 
within a character class.

Cheers,
Steve


Re: [GENERAL] array size

2011-02-24 Thread Dmitriy Igrishin
Hey,

2011/2/24 akp geek akpg...@gmail.com

 Hi all -

   I am trying to find the number of elements in the array.
 Right now I am using array_upper and array_lower. Is there any other way of
 getting the number of elements?

You may use the array_length() function, e.g.

dmitigr=> SELECT array_length(ARRAY[ARRAY[1,2,3],ARRAY[4,5,6]], 1);
 array_length
--------------
            2
(1 row)

dmitigr=> SELECT array_length(ARRAY[ARRAY[1,2,3],ARRAY[4,5,6]], 2);
 array_length
--------------
            3
(1 row)

See http://www.postgresql.org/docs/9.0/static/functions-array.html


 thanks for the help

 Regards




-- 
// Dmitriy.


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Aleksey Tsalolikhin
On Thu, Feb 24, 2011 at 11:46 AM, John R Pierce pie...@hogranch.com wrote:
 On 02/24/11 11:02 AM, Aleksey Tsalolikhin wrote:

 How do I check the fillfactor on the table, please?

 its in the field reloptions in pg_class.   so...

    select reloptions from pg_class where relname='tablename';

Thanks, John!

autovacuum_enabled=true is the only option set on this table
on both the source database and the target.

 How do I pg_restore it so that it is 50 GB in size?  Is it a setting
 to pg_dump or to pg_restore?


 are you truncating the table before restoring, or is this a restore into a
 new database, or what?

I've tried both.  Slony truncates the table before copying it over, and I've
tried pg_restore'ing it into a new database.  In both cases, the 50 GB
table arrives as a 100 GB table.

Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Index Ignored Due To Use Of View

2011-02-24 Thread David Johnston
Hi,

I have a query using a view such as:

SELECT * 
FROM taskretrievalwithfiles
WHERE ti_id='ti_0r0w2';

The view taskretrievalwithfiles is defined as:

SELECT taskinstance.ti_id, lotsofotherstuff
FROM taskinstance
JOIN store ON taskinstance.s_id=store.s_id
JOIN storegroup ON store.sg_id=storegroup.sg_id
JOIN tasktemplate ON taskinstance.tt_id=tasktemplate.tt_id
JOIN taskclass ON tasktemplate.tc_id=taskclass.tc_id  
LEFT OUTER JOIN process ON (taskinstance.p_id=process.p_id)
LEFT OUTER JOIN genericprocess ON (process.gp_id=genericprocess.gp_id)
LEFT OUTER JOIN filetaskinstancelookup ON
taskinstance.ti_id=filetaskinstancelookup.ti_id
;

A partial EXPLAIN shows a sequential scan on both filetaskinstance and
filereference

  ->  Hash Join  (cost=42219.70..140099.85 rows=734905 width=853)
        Hash Cond: ((filetaskinstance.fr_name)::text =
(filereference.fr_name)::text)
        ->  Seq Scan on filetaskinstance  (cost=0.00..35377.05 rows=734905
width=136)
        ->  Hash  (cost=20772.31..20772.31 rows=188031 width=808)
              ->  Seq Scan on filereference  (cost=0.00..20772.31
rows=188031 width=808)

Now, if I simply replace the original FROM clause with the view definition
(i.e., SELECT * FROM (SELECT ... ) alias WHERE ) I get:

   ->  Hash  (cost=25.32..25.32 rows=2 width=853)
         ->  Nested Loop  (cost=0.00..25.32 rows=2 width=853)
               ->  Index Scan using idx_fti_ti_id on filetaskinstance
(cost=0.00..8.53 rows=2 width=136)
                     Index Cond: ((ti_id)::text =
'ti_0r0w2'::text)
               ->  Index Scan using filereference_pkey on filereference
(cost=0.00..8.38 rows=1 width=808)
                     Index Cond: ((filereference.fr_name)::text =
(filetaskinstance.fr_name)::text)

I now have index scans on both filetaskinstance and filereference - but
all I appeared to do is the same as what rule re-writing should have done.

I can provide additional information but figured I'd start here.

Running 9.0.3; seeing the behavior on both 64-bit versions (Ubuntu and
Windows 7).  Using the same data on 8.2, the query using the view does use
the indexes as expected.

All query related settings have been left as defaults:
from_collapse_limit = 8 
geqo_threshold = 12
join_collapse_limit = 8

The query plans up to the sections listed are identical.  Basically, all the
INNER JOINs and the process LEFT OUTER JOIN are using indexes/seqscan as
expected but the last two LEFT OUTER JOINS are switching between seqscan and
index depending on whether the SELECT query is referenced via a view or if
it is embedded directly into the outer WHERE clause query.  Note that
filetaskinstancelookup itself is a view (it is where the filereference
table gets joined).

One additional note:  I noticed as I was doing this that the first explain
was checking a condition: ((filetaskinstance.fr_name)::text =
(filereference.fr_name)::text); but the filetaskinstance.fr_name column (a
foreign key) had not been indexed.  I added the index thinking the first
query may choose to use that index (on fr_name instead of ti_id) but it did
not - it still wanted to do the sequential scan.

I appreciate any help or direction that can be provided.

David J.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Alex Hunsaker
On Thu, Feb 24, 2011 at 14:11, Aleksey Tsalolikhin
atsaloli.t...@gmail.com wrote:

 are you truncating the table before restoring, or is this a restore into a
 new database, or what?

 I've tried both.  Slony truncates the table before copying it over, and I've
 tryind pg_restore'ing it into a new database.  In both cases, the 50 GB
 table arrives as a 100 GB table.

Are they both the same version of pg? I'm wondering if the 50GB
version is running 8.3 and the slave something >= 8.4; IIRC I had a
table that grew quite a bit because of some TOAST changes in 8.4. If
so, I'd first try making the slave and master versions of pg match,
which is good practice anyhow.

If none of the above rings any bells, maybe some bloat is creeping in.
Check_postgres (http://bucardo.org/wiki/Check_postgres) has some nice
metrics for that. (I suppose you could also try CLUSTER and see if
that makes a size difference).
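For the CLUSTER test, a sketch (the index name is invented):

CLUSTER really_big_table USING really_big_table_pkey;
SELECT pg_size_pretty(pg_total_relation_size('really_big_table'));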

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Adrian Klaver
On Thursday, February 24, 2011 1:11:44 pm Aleksey Tsalolikhin wrote:
 On Thu, Feb 24, 2011 at 11:46 AM, John R Pierce pie...@hogranch.com wrote:
  On 02/24/11 11:02 AM, Aleksey Tsalolikhin wrote:
  How do I check the fillfactor on the table, please?
  
  its in the field reloptions in pg_class.   so...
  
 select reloptions from pg_class where relname='tablename';
 
 Thanks, John!
 
 autovacuum_enabled=true is the only option set on this table
 on both the source database and the target.
 
  How do I pg_restore it so that it is 50 GB in size?  Is it a setting
  to pg_dump or to pg_restore?
  
  are you truncating the table before restoring, or is this a restore into
  a new database, or what?
 
 I've tried both.  Slony truncates the table before copying it over, and
 I've tryind pg_restore'ing it into a new database.  In both cases, the 50
 GB table arrives as a 100 GB table.
 
 Aleksey

What is the schema for this table and its associated indexes?
Or, if that is not available, what is in the table?
Is the data the same in both copies of the problem table? I am somewhat 
suspicious that the second db's table is twice the size of the first, almost as 
if the data is being duplicated.
Also, what is the size of the pg_dump for that table only when you dump 
from the original table versus from the table in the second db?


-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Aleksey Tsalolikhin
Hi.  We're running Postgres 8.4.4 everywhere.

I already have a pg_dump -Fc of the big table from the source, now
I am running a pg_dump -Fc on the recipient, to see if the size is different.

Then I will run a pg_dump as text, so I can diff the two files if they are
different in size.

Thanks!!
Aleksey

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_dump makes our system unusable - any way to pg_dump in the middle of the day? (postgres 8.4.4)

2011-02-24 Thread Aleksey Tsalolikhin
I read in the pg_dump man page that pg_dump does not block other users
accessing the database (readers or writers).

In practice, if I pg_dump our 100 GB database, our application, which
is half Web front end and half OLTP, at a certain point, slows to a
crawl and the Web interface becomes unresponsive.  I start getting
check_postgres complaints about number of locks and query lengths.  I
see locks around for over 5 minutes.

I've had to abort pg_dump as it made the system unusable.

Can I pg_dump without breaking the system?

Best,
-at

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Adrian Klaver
On Thursday, February 24, 2011 3:34:02 pm Aleksey Tsalolikhin wrote:
 Hi.  We're running Postgres 8.4.4 everywhere.
 
 I already have a pg_dump -Fc of the big table from the source, now
 I am running a pg_dump -Fc on the recipient, to see if the size is
 different.

I thought you already had a pg_dump file that you were restoring to the second 
db?

 
 Then I will run a pg_dump as text, so I can diff the two files if they are
 different in size.

You don't need to do that if the pg_dump was done using -Fc. You can use 
pg_restore to dump a table to a file instead of a database. When it does that 
the file will contain a plain text copy. Something like:

pg_restore -a -t really_big_table -f really_big_table_data.sql

Where -a is data only

 
 Thanks!!
 Aleksey

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Adrian Klaver
On Thursday, February 24, 2011 3:48:35 pm Adrian Klaver wrote:
 On Thursday, February 24, 2011 3:34:02 pm Aleksey Tsalolikhin wrote:
  Hi.  We're running Postgres 8.4.4 everywhere.
  
  I already have a pg_dump -Fc of the big table from the source, now
  I am running a pg_dump -Fc on the recipient, to see if the size is
  different.
 
 I thought you already had a pg_dump file that you where restoring to the
 second db?
 
  Then I will run a pg_dump as text, so I can diff the two files if they
  are different in size.
 
 You don't need to do that if the pg_dump was done using -Fc. You can use
 pg_restore to dump a table to a file instead of a database. When it does
Oops, typo: that should be "restore".

 that the file will contain a plain text copy. Something like:
 
 pg_restore -a -t really_big_table -f really_big_table_data.sql
 
 Where -a is data only
 
  Thanks!!
  Aleksey

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Octal to hex transition - WTF

2011-02-24 Thread Craig Ringer

Hi all

I just found out that PHP 5.3's PostgreSQL PDO driver (as used by 
Drupal) is broken by the PostgreSQL 9.0 transition from octal to hex 
encoding for bytea. It passes the raw hex through to the app, including 
the leading \x, causing PHP's unserialize() function to choke.


I'm using Drupal with Apache and PostgreSQL on Windows (sigh) because 
I'm writing up a step-by-step hand-holding guide for someone who needs 
to do some testing against our Drupal database.


Drupal doesn't check that bytea_output is set to 'escape' as a 
workaround or do a sanity test to detect this fault, so the results are 
... interesting:


Notice: unserialize(): Error at offset 0 of 27 bytes in 
variable_initialize() (line 749 of C:\Program Files\Apache Software 
Foundation\Apache2.2\htdocs\drupal-7.0\includes\bootstrap.inc).


If anybody else lands up whacking their head against this for a while: just

  ALTER DATABASE drupal SET bytea_output = 'escape'

to have your sanity restored.

The amount of breakage being seen in drivers really makes me wonder if 
the octal-to-hex transition should've been done with an opt-in from 
drivers during negotiation or an explicit SET, rather than just making 
it the global default. But, hey, hindsight.


--
Craig Ringer

Tech-related writing at http://soapyfrogs.blogspot.com/

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump makes our system unusable - any way to pg_dump in the middle of the day? (postgres 8.4.4)

2011-02-24 Thread John R Pierce

On 02/24/11 7:09 PM, Adam Bruss wrote:
I'm using cruisecontrol and ant to back up our database at certain 
times on certain days of the week. Cruisecontrol sends out completion 
emails when it's done. I don't think pgsql has a good built in way to 
schedule backups. Cruisecontrol offers supreme flexibility with 
relatively little work.




wow, that's a complicated lotta stuff to do what crontab would do with a 
one-liner


0 6 * * 3,6 pg_dump <args to dump>

(06:00 on wed,sat)

or the windows task scheduler equivalent.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Index Ignored Due To Use Of View

2011-02-24 Thread Tom Lane
David Johnston pol...@yahoo.com writes:
 Now, if I simply replace the original FROM clause with the view definition
 (i.e., SELECT * FROM (SELECT ... ) alias WHERE ) I get:
 [ a different plan ]
 I now have index scans on both filetaskinstance and filereference - but
 all I appeared to do is the same as what rule re-writing should have done.

If you really just manually plugged in the view definition, then yeah
you should have gotten the same results.  Could we see a complete test
case?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_dump makes our system unusable - any way to pg_dump in the middle of the day? (postgres 8.4.4)

2011-02-24 Thread Adam Bruss
I like being emailed when my job runs. It tells me how big the backup is and 
whether it passed or failed. I use cruisecontrol and ant on a regular basis so 
it was a natural choice. The most time-consuming part was learning how to use 
pg_dump.

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of John R Pierce
Sent: Thursday, February 24, 2011 10:22 PM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_dump makes our system unusable - any way to pg_dump 
in the middle of the day? (postgres 8.4.4)

On 02/24/11 7:09 PM, Adam Bruss wrote:
 I'm using cruisecontrol and ant to back up our database at certain 
 times on certain days of the week. Cruisecontrol sends out completion 
 emails when it's done. I don't think pgsql has a good built in way to 
 schedule backups. Cruisecontrol offers supreme flexibility with 
 relatively little work.


wow, that's a complicated lotta stuff to do what crontab would do with a one-liner

0 6 * * 3,6 pg_dump <args to dump>

(06:00 on wed,sat)

or the windows task scheduler equivalent.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make 
changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL group sort

2011-02-24 Thread zab08

Use these SQL statements:

CREATE TABLE big(id serial, name text);
CREATE TABLE sm(id serial, big_id integer, name text);

INSERT into big (id, name) VALUES (1, 'big1');
INSERT into big (id, name) VALUES (2, 'big2');

INSERT into sm(id, big_id, name)VALUES (2, 1, 'sm1');
INSERT into sm(id, big_id, name)VALUES (1, 1, 'sm2');
INSERT into sm(id, big_id, name)VALUES (3, 1, 'sm3');
INSERT into sm(id, big_id, name)VALUES (6, 2, 'sm4');
INSERT into sm(id, big_id, name)VALUES (4, 2, 'sm5');
INSERT into sm(id, big_id, name)VALUES (4, 2, 'sm6');




run :
SELECT b.id, array_accum(s.id), array_accum(s.name) from big b, sm s where b.id 
= s.big_id group by b.id;
(ps: array_accum is the aggregate defined in
http://www.postgresql.org/docs/9.0/static/xaggr.html)

 id | array_accum |  array_accum
----+-------------+---------------
  1 | {2,1,3}     | {sm1,sm2,sm3}
  2 | {6,4,4}     | {sm4,sm5,sm6}
(2 rows)

the expected result:
 id | array_accum |  array_accum
----+-------------+---------------
  1 | {1,2,3}     | {sm1,sm2,sm3}
  2 | {4,5,6}     | {sm4,sm5,sm6}
(2 rows)


Is this a group sort?
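If per-group ordering is the goal, 9.0 allows ORDER BY inside an aggregate
call; a sketch using the built-in array_agg instead of the custom
array_accum (note that with the sample data above the second group sorts to
{4,4,6}, since sm5 and sm6 were both inserted with id 4):

SELECT b.id,
       array_agg(s.id ORDER BY s.id),
       array_agg(s.name ORDER BY s.id)
FROM big b JOIN sm s ON b.id = s.big_id
GROUP BY b.id
ORDER BY b.id;

On 8.4, the same effect needs pre-sorting in a subquery before aggregating.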






[GENERAL] Compression hacks?

2011-02-24 Thread Yang Zhang
For various workloads, compression could be a win on both disk space
and speed (see, e.g.,
http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html).
 I realize PostgreSQL doesn't have general table compression a la
InnoDB's row_format=compressed (there's TOAST for large values and
there's some old discussion on
http://wiki.postgresql.org/wiki/CompressedTables), but I thought I'd
ask: anybody tried to compress their PG data somehow?  E.g., any
positive experiences running PG on a compressed filesystem (and any
caveats)?  Anecdotal stories of the effects of app-level large-field
compression in analytical workloads (though I'd be curious about
transactional workloads as well)?  Thanks in advance.

-- 
Yang Zhang
http://yz.mit.edu/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Compression hacks?

2011-02-24 Thread John R Pierce

On 02/24/11 10:55 PM, Yang Zhang wrote:

For various workloads, compression could be a win on both disk space
and speed (see, e.g.,
http://blog.oskarsson.nu/2009/03/hadoop-feat-lzo-save-disk-space-and.html).
  I realize Postgresql doesn't have general table compression a la
InnoDB's row_format=compressed (there's TOAST for large values and
there's some old discussion on
http://wiki.postgresql.org/wiki/CompressedTables), but I thought I'd
ask: anybody tried to compress their PG data somehow?  E.g., any
positive experiences running PG on a compressed filesystem (and any
caveats)?  Anecdotal stories of the effects of app-level large-field
compression in analytical workloads (though I'd be curious about
transactional workloads as well)?  Thanks in advance.



compressed file systems tend to perform poorly on random 8K block 
writes, which transactional databases do a lot of.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PostgreSQL group sort

2011-02-24 Thread Vibhor Kumar

On Feb 25, 2011, at 11:45 AM, zab08 wrote:

 run :
 SELECT b.id, array_accum(s.id), array_accum(s.name) from big b, sm s where 
 b.id = s.big_id group by b.id;
 (ps: array_accum is the aggregate defined in 
 http://www.postgresql.org/docs/9.0/static/xaggr.html)
 
 id | array_accum |  array_accum
 ----+-------------+---------------
   1 | {2,1,3}     | {sm1,sm2,sm3}
   2 | {6,4,4}     | {sm4,sm5,sm6}
 (2 rows)
 
 the expected result:
  id | array_accum |  array_accum
 ----+-------------+---------------
   1 | {1,2,3}     | {sm1,sm2,sm3}
   2 | {4,5,6}     | {sm4,sm5,sm6}


Try the intarray module:
http://www.postgresql.org/docs/8.4/static/intarray.html

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
Cell: +91-932-568-2279
vibhor.ku...@enterprisedb.com
Blog:http://vibhork.blogspot.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general