Re: [GENERAL] PgUS 501c3 Public Charity

2009-05-07 Thread Dave Page
On Thu, May 7, 2009 at 3:04 AM, Alvaro Herrera
 wrote:
> Joshua D. Drake wrote:
>> Hello,
>>
>> Yeah its not general technical discussion but this little bit of news
>> warrants more widely read attention. PgUS (http://www.postgresql.us/)
>> received its 501c3 public charity status today. You can view the
>> determination letter here:
>>
>> https://www.postgresql.us/determination_letter
>
> Just curious: is PostgreSQL as a project withdrawing from SPI?

No. Joshua is reporting the status of PgUS which is analogous to pgEU
in function - that is, it is primarily supporting & serving the
regional users and user groups. The core project will remain in SPI to
support global activities and the work of the developers/contributors
etc.

-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Glyn Astill

Hi chaps,

We had a power outage today when a couple of computer controlled power strips 
crashed (my secondary psu's will stay firmly in the wall sockets now though).

I'd had a lot of fun pulling plugs out under load before we went into 
production so I wasn't particularly worried, and the databases came back up and 
applied the redo logs as expected.

What did make me scratch my head was a short stream of @ symbols (well they 
show up as @ symbols in vi) in the log file of the main server (others are 
slony subscribers).

My only reasoning so far is that it's just garbage from postgres as the power 
died? The controllers have BBU cache and drive caches are off. The only other 
thing I can think is it's something to do with me using data=writeback on the 
data partition, and relying on the wal for journaling of the data. The logs are 
on that same partition...

Just wondered what you chaps thought about this?

Glyn





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Upgrading from postgres 8.1 to 8.3

2009-05-07 Thread S Arvind
Our 600GB of data is currently loaded in postgres 8.1, and we want to upgrade
from 8.1 to 8.3. Can we point the new version at the data directly, or do we
have to do any porting work to transfer the data from 8.1 to 8.3?

Arvind S


*
"Many of lifes failure are people who did not realize how close they were to
success when they gave up."
-Thomas Edison*


[GENERAL] "No transaction in progress" warning

2009-05-07 Thread Conrad Lender
Hello.

We've recently discovered a bug in our code that resulted in COMMITs
without BEGINs, and our client saw "no transaction in progress" warnings
in his log file. What worries me is that we didn't discover this problem
during development - the warning doesn't show up in our logs. We're both
using 8.3, and the settings in both postgresql.conf files are similar
(except for memory settings and file locations).

SHOW log_min_messages;
 log_min_messages
--
 notice
(1 row)

I thought this would cause all warnings to be logged automatically. When
I enter "COMMIT" in psql, I do get the warning, but I don't see it in
the log file. What could be the problem?


Thanks,
  - Conrad

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrading from postgres 8.1 to 8.3

2009-05-07 Thread Glyn Astill

> From: S Arvind 
> Subject: [GENERAL] Upgrading from postgres 8.1 to 8.3
> To: pgsql-general@postgresql.org
> Date: Thursday, 7 May, 2009, 11:42 AM
> Our 600GB data was currently loaded in postgres 8.1 , we
> want to upgrade
> from postgres 8.1 to 8.3 . Can we able to point the data
> directly or should
> we have to do any porting work for transfering data from
> 8.1 to 8.3.
> 

You need to do a dump and reload. There was a project that did an in-place 
upgrade of datafiles (pg_migrator???) but I think it only supports -> 8.2




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question about function returning record

2009-05-07 Thread Markus Wollny
Hi!

I've got a generalized function

getshadowrecord(schema (varchar),table (varchar), id (int4),
version_id (int)) 

which returns RECORD. As this function is supposed to work on any table
structure, I need to declare a column alias list when I do a SELECT on
it, like

SELECT *
FROM getshadowrecord('foo','article',683503,0) AS shadowrecord (

id integer,
type_id integer ,
headline text,
strapline text,
[...]
);

Now I'd like to make things easier for my developers by supplying sort
of alias functions for each table like

CREATE or REPLACE FUNCTION "foo"."getshadow_article"(
  IN "_id" int4,
  IN "_versid" int4)  
RETURNS foo.article_shadow AS
$BODY$  
SELECT *
FROM getshadowrecord('foo','article',$1,$2) AS
shadowrecord ( 
id integer,
type_id integer ,
headline text,
strapline text,
[...]
);
$BODY$
LANGUAGE SQL VOLATILE;

Using these alias functions, they can simply do a SELECT * FROM
foo.getshadow_article(id,version_id) without having to write the column
list.

As each of those alias functions would correspond exactly to one table,
I wonder if there is a more elegant alternative to explicitly declaring
the column list, something like this:

CREATE or REPLACE FUNCTION "foo"."getshadow_article"(
  IN "_id" int4,
  IN "_versid" int4)  
RETURNS foo.article_shadow AS
$BODY$  
SELECT *
FROM getshadowrecord('foo','article',$1,$2) AS
shadowrecord (foo.article_shadow%rowtype);
$BODY$
LANGUAGE SQL VOLATILE;

Unfortunately my example doesn't work, but I think you'll know what I'd
like to do. The only way I see to solve this so far would be to use
pl/pgsql or pl/perl, query the information_schema.columns table, then
assemble the query string with the column list and execute that (roughly
as sketched below). I'd like to know if there's some better way to
implement this, something that would somehow use the %rowtype construct.
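
Here is a rough pl/pgsql sketch of that fallback (it only handles simple type
names, and it assumes foo.article_shadow matches the table layout):

CREATE OR REPLACE FUNCTION foo.getshadow_article(_id int4, _versid int4)
RETURNS foo.article_shadow AS
$BODY$
DECLARE
    _cols text;
    _row  foo.article_shadow%ROWTYPE;
BEGIN
    -- build the "name type, name type, ..." alias list from the catalog
    SELECT array_to_string(
               array(SELECT quote_ident(column_name) || ' ' || data_type
                     FROM information_schema.columns
                     WHERE table_schema = 'foo'
                       AND table_name   = 'article_shadow'
                     ORDER BY ordinal_position), ', ')
      INTO _cols;

    EXECUTE 'SELECT * FROM getshadowrecord(''foo'',''article'','
            || _id::text || ',' || _versid::text
            || ') AS shadowrecord (' || _cols || ')'
       INTO _row;

    RETURN _row;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;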

Kind regards

   Markus




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date ranges in where

2009-05-07 Thread Jasen Betts
On 2009-05-06, Miguel Miranda  wrote:
>
> Hi, what is the recommended way to select a range of dates?

if you want to compare dates, use dates!

> Lets say a have a table with a lastlogin (timestamp) column and i want
> toknow what users logged in for last time between 2009-05-01 and 2009-05-02?
>
> I know that a simple
>
> where lastlogin between '2009-05-01' and '2009-05-02' doesnt work beacuse it
> doesnt include who logged in 2009-05-02 15:30:00, etc...

If you are comparing dates it does.

where lastlogin::date between '2009-05-01'::date and '2009-05-02'::date 

If you leave it uncast postgres will probably convert the lastlogin to
a string and produce results other than those desired, and probably
take longer to do it too.
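
For completeness, the equivalent half-open comparison can also be written
directly on the timestamp column, which avoids casting lastlogin and so can
use a plain index on it (table name is made up):

SELECT *
FROM users
WHERE lastlogin >= '2009-05-01'
  AND lastlogin <  '2009-05-03';   -- up to, but not including, May 3rd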



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] date ranges in where

2009-05-07 Thread Raymond O'Donnell
On 07/05/2009 12:59, Jasen Betts wrote:
> where lastlogin::date between '2009-05-01'::date and '2009-05-02'::date 
> 
> If you leave it uncast postgres will probably convert the lastlogin to
> a string and produce results other than that desired and proabaly 
> take longer to do it too.

The OP was basing the query on a timestamp column, so I'd say Postgres
was casting everything to timestamp, with the results he reported.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrading from postgres 8.1 to 8.3

2009-05-07 Thread Raymond O'Donnell
On 07/05/2009 11:42, S Arvind wrote:
> Our 600GB data was currently loaded in postgres 8.1 , we want to upgrade
> from postgres 8.1 to 8.3 . Can we able to point the data directly or should
> we have to do any porting work for transfering data from 8.1 to 8.3.

You'll want to test your applications against the new version - one big
gotcha is the removal of certain automatic casts. It's well documented
in the release notes and in various threads on this list.
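
A typical example of the kind of thing that breaks (a sketch; the table and
column names are invented):

-- worked on 8.1 via an implicit cast of the integer column to text;
-- on 8.3 it fails with an "operator does not exist" error
SELECT * FROM orders WHERE order_id LIKE '42%';

-- on 8.3 the cast has to be spelled out
SELECT * FROM orders WHERE order_id::text LIKE '42%';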

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there a way?

2009-05-07 Thread Jasen Betts
On 2009-05-06, Anderson dos Santos Donda  wrote:
>
> Hi o/
>
> I have a table with 2000 names and emails.. and I did UPDATE list SET email
>= '' without the WHERE and now all rows is null in column email..
>
>
> I have a backup made with pg_dump.. is there a way to restore only the email
> data?

what format of pg_dump? 

If a binary dump use pg_restore to restore the table you want to a
scratch database then use dblink or copy to get that into a temp table
in the database you want.

If a text dump use sed (or an interactive editor) to extract the table
data you want, load it into a temp table 

finally do an 
UPDATE ... FROM  
to fix the losses.
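
That last step looks roughly like this (a sketch; temp_list and the id key
column are assumptions):

-- copy just the recovered email values back into the damaged table
UPDATE list
   SET email = t.email
  FROM temp_list t
 WHERE list.id = t.id;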

I had to do this a few months ago.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] prepared statements and DBD::Pg

2009-05-07 Thread Tim Bunce
On Thu, May 07, 2009 at 04:54:06AM +1200, Andrej wrote:
> 2009/5/7 JP Fletcher :
> > Hi,
> >
> > I see different behavior with DBI/DBD::Pg (1.607/2.11.8, pg 8.1) when the
> > first command in a prepared statement is 'CREATE TEMP TABLE'.
> >
> > For instance, this works:
> >
> >   my $prepare_sql = <<SQL;
> >   CREATE TEMP TABLE foo (id int, user_id int);
> >   INSERT INTO foo VALUES (1, 1);
> >   INSERT INTO foo VALUES (2, 2);
> >   SQL
> >   my $sth = $dbh->prepare($prepare_sql);
> >
> > This produces the error
> > ERROR:  cannot insert multiple commands into a prepared statement
> >
> Blessed be CPAN and the manuals for DBD
> http://search.cpan.org/~turnstep/DBD-Pg-2.13.1/Pg.pm#prepare
> 
> WARNING: DBD::Pg now (as of version 1.40) uses true prepared
> statements by sending them to the backend to be prepared by the
> Postgres server. Statements that were legal before may no longer work.

Sure seems like a bug, or at best a misfeature, that DBD::Pg doesn't
simply fallback to client-side prepare when a server-side prepare can't
be performed. I believe DBD::mysql does that.

Tim.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem with pg_restore in windows VISTA

2009-05-07 Thread Anamika Saini
Hello,
I have a problem using pg_restore on Windows Vista. I created the 
pgpass.conf file and copied it into the APPDATA directory of the user currently 
logged in. The pgpass.conf file has all the rights of the postgres service 
account.
It works perfectly on Windows XP, but when I try it on Vista it gives me the 
error 'no password supplied' during the pg_restore command.
Can someone please tell me how to make it work on Vista?

I got to know from some post that the file should be under the APPDATA of the 
postgres service account. But if the file already has full rights for the 
service account, is there still any need to put it there? And how do I access 
the APPDATA of the service account?
Any suggestions?

Regards,
Anamika



  

Re: [GENERAL] prepared statements and DBD::Pg

2009-05-07 Thread David Fetter
On Thu, May 07, 2009 at 02:31:08PM +0100, Tim Bunce wrote:
> On Thu, May 07, 2009 at 04:54:06AM +1200, Andrej wrote:
> > 2009/5/7 JP Fletcher :
> > > Hi,
> > >
> > > I see different behavior with DBI/DBD::Pg (1.607/2.11.8, pg 8.1) when the
> > > first command in a prepared statement is 'CREATE TEMP TABLE'.
> > >
> > > For instance, this works:
> > >
> > >   my $prepare_sql = <<SQL;
> > >   CREATE TEMP TABLE foo (id int, user_id int);
> > >   INSERT INTO foo VALUES (1, 1);
> > >   INSERT INTO foo VALUES (2, 2);
> > >   SQL
> > >   my $sth = $dbh->prepare($prepare_sql);
> > >
> > > This produces the error
> > > ERROR:  cannot insert multiple commands into a prepared statement
> > >
> > Blessed be CPAN and the manuals for DBD
> > http://search.cpan.org/~turnstep/DBD-Pg-2.13.1/Pg.pm#prepare
> > 
> > WARNING: DBD::Pg now (as of version 1.40) uses true prepared
> > statements by sending them to the backend to be prepared by the
> > Postgres server.  Statements that were legal before may no longer
> > work.
> 
> Sure seems like a bug, or at best a misfeature, that DBD::Pg doesn't
> simply fallback to client-side prepare when a server-side prepare
> can't be performed.  I believe DBD::mysql does that.

It's a safety feature. :)

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "No transaction in progress" warning

2009-05-07 Thread Adrian Klaver
On Thursday 07 May 2009 3:33:09 am Conrad Lender wrote:
> Hello.
>
> We've recently discovered a bug in our code that resulted in COMMITs
> without BEGINs, and our client saw "no transaction in progress" warnings
> in his log file. What worries me is that we didn't discover this problem
> during development - the warning doesn't show up in our logs. We're both
> using 8.3, and the settings in both postgresql.conf files are similar
> (except for memory settings and file locations).
>
> SHOW log_min_messages;
>  log_min_messages
> --
>  notice
> (1 row)
>
> I thought this would cause all warnings to be logged automatically. When
> I enter "COMMIT" in psql, I do get the warning, but I don't see it in
> the log file. What could be the problem?

Are you certain you are looking at the correct log file? One way I test is to 
run 'select 1/0;', which yields an ERROR.

>
>
> Thanks,
>   - Conrad



-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Tom Lane
Glyn Astill  writes:
> What did make me scratch my head was a short stream of @ symbols (well they 
> show up as @ symbols in vi) in the log file of the main server (others are 
> slony subscribers).

There isn't anything making any effort to fsync the postmaster log, so
some data corruption in the log is hardly surprising.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "No transaction in progress" warning

2009-05-07 Thread Tom Lane
Conrad Lender  writes:
> SHOW log_min_messages;
>  log_min_messages
> --
>  notice
> (1 row)

> I thought this would cause all warnings to be logged automatically.

Yup.

> When
> I enter "COMMIT" in psql, I do get the warning, but I don't see it in
> the log file. What could be the problem?

What seems most likely is that you're looking at the wrong log file.
Recheck the "where to log" settings.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Glyn Astill

> From: Albe Laurenz 
> Subject: RE: [GENERAL] Power outage and funny chars in the logs
> To: glynast...@yahoo.co.uk, pgsql-general@postgresql.org
> Date: Thursday, 7 May, 2009, 2:44 PM
> Glyn Astill wrote:
> > We had a power outage today when a couple of computer controlled
> > power strips crashed (my secondary psu's will stay firmly in the
> > wall sockets now though).
> >
> > I'd had a lot of fun pulling plugs out under load before we went
> > into production so I wasn't particularly worried, and the databases
> > came back up and applied the redo logs as expected.
> >
> > What did make me scratch my head was a short stream of @ symbols
> > (well they show up as @ symbols in vi) in the log file of the main
> > server (others are slony subscribers).
> >
> > My only reasoning so far is that it's just garbage from postgres as
> > the power died? The controllers have BBU cache and drive caches are
> > off. The only other thing I can think is it's something to do with
> > me using data=writeback on the data partition, and relying on the
> > wal for journaling of the data. The logs are on that same
> > partition...
> >
> > Just wondered what you chaps thought about this?
> 
> You mean the error log and not the transaction log, right?
> 

Yes, just the text-based server logs.

> I would say that the file system suffered data loss in the
> system crash, and what you see is something that happened
> during file system recovery.
> 
> The strange characters are towards the end of the file,
> right?

Yeah right at the end

> Can you find anything about file system recovery in the
> operating system log files?

As Tom said in his post, I think this is just down to OS caching of the server 
log etc. - it's not actually flushed to disk with fsync like the WAL.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Albe Laurenz
Glyn Astill wrote:
> We had a power outage today when a couple of computer 
> controlled power strips crashed (my secondary psu's will stay 
> firmly in the wall sockets now though).
> 
> I'd had a lot of fun pulling plugs out under load before we 
> went into production so I wasn't particularly worried, and 
> the databases came back up and appled the redo logs as expected.
> 
> What did make me scratch my head was a short stream of @ 
> symbols (well they show up as @ symbols in vi) in the log 
> file of the main server (others are slony subscribers).
> 
> My only reasoning so far is that it's just garbage from 
> postgres as the power died? The contorllers have BBU cache 
> and drive caches are off. The only other thing I can think is 
> it's something to do with me using data=writeback on the data 
> partition, and relying on the wal for journaling of the data. 
> The logs are on that same partition...
> 
> Just wondered what you chaps thought about this?

You mean the error log and not the transaction log, right?

I would say that the file system suffered data loss in the
system crash, and what you see is something that happened
during file system recovery.

The strange characters are towards the end of the file, right?
Can you find anything about file system recovery in the
operating system log files?

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "No transaction in progress" warning

2009-05-07 Thread Conrad Lender
On 07/05/09 15:58, Tom Lane wrote:
> What seems most likely is that you're looking at the wrong log file.
> Recheck the "where to log" settings.

Thanks, Adrian and Tom,

you were right, I wasn't looking at the correct file.
My postgresql.conf has:

log_directory = '/var/log/postgresql'
log_filename = 'postgresql-8.3-main.log'

The server startup messages were in this file (which is why I assumed it
was the correct one), but for some reason after a restart the logs were
created as

/var/log/postgresql/postgresql-8.3-main.log.1241706461

I've moved away all the old files, but every time I restart (using the
/etc/init.d/postgres script), new log files with a timestamp are
created. AFAICS, there are no open filehandles or locks left pointing to
'/var/log/postgresql/postgresql-8.3-main.log'.

Well, at least now I know where to look for the warnings.


Thanks,
  - Conrad

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] A question about RAISE NOTICE

2009-05-07 Thread Luigi N. Puleio
Hello everyone,

I have a question concerning RAISE NOTICE. I have a plpgsql function with 
code like this:

IF (condition) THEN
    RAISE NOTICE 'This condition is verified';
END IF;

and this function is fired when I modify a view's field; the view has a RULE 
INSTEAD DO that calls this function. The result I'd like would be an on-display 
message window with an OK button (like when I use EXCEPTION) telling me the 
condition is verified.

I set client_min_messages to notice, which should return me a message, but 
it doesn't show me any message... 

Have I missed something, maybe?... 

Thanks 
Ciao
Luigi



  

Re: [GENERAL] "No transaction in progress" warning

2009-05-07 Thread Adrian Klaver
On Thursday 07 May 2009 7:42:40 am Conrad Lender wrote:
> On 07/05/09 15:58, Tom Lane wrote:
> > What seems most likely is that you're looking at the wrong log file.
> > Recheck the "where to log" settings.
>
> Thanks, Adrian and Tom,
>
> you were right, I wasn't looking at the correct file.
> My postgresql.conf has:
>
> log_directory = '/var/log/postgresql'
> log_filename = 'postgresql-8.3-main.log'
>
> The server startup messages were in this file (which is why I assumed it
> was the correct one), but for some reason after a restart the logs were
> created as
>
> /var/log/postgresql/postgresql-8.3-main.log.1241706461
>
> I've moved away all the old files, but every time I restart (using the
> /etc/init.d/postgres script), new log files with a timestamp are
> created. AFAICS, there are no open filehandles or locks left pointing to
> '/var/log/postgresql/postgresql-8.3-main.log'.

From the manual
http://www.postgresql.org/docs/8.3/interactive/runtime-config-logging.html
Emphasis added.

log_filename (string)

When logging_collector is enabled, this parameter sets the file names of the 
created log files. The value is treated as a strftime pattern, so %-escapes can 
be used to specify time-varying file names. (Note that if there are any 
time-zone-dependent %-escapes, the computation is done in the zone specified by 
log_timezone.) <<If no %-escapes are present, PostgreSQL will append the epoch 
of the new log file's creation time. For example, if log_filename were 
server_log, then the chosen file name would be server_log.1093827753 for a log 
starting at Sun Aug 29 19:02:33 2004 MST.>> This parameter can only be set in 
the postgresql.conf file or on the server command line.

If CSV-format output is enabled in log_destination, .csv will be appended to 
the timestamped log file name to create the file name for CSV-format output. 
(If log_filename ends in .log, the suffix is replaced instead.) In the case of 
the example above, the CSV file name will be server_log.1093827753.csv.


>
> Well, at least now I know where to look for the warnings.
>
>
> Thanks,
>   - Conrad



-- 
Adrian Klaver
akla...@comcast.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A question about RAISE NOTICE

2009-05-07 Thread Craig Ringer
Luigi N. Puleio wrote:

> and this function is fired when I modify a view's field which as a RULE 
> INSTEAD DO where it calls this function. The result I'd like would be to 
> notice me with a on-display message with a window having an OK button (like 
> when I use EXCEPTION). which would tell me the condition is verified

Nothing in PostgreSQL displays windows. Presumably you're using some
application development environment or database interface tool, and when
it gets an error while executing a statement it displays an error dialog.

Most programs will not be aware of notice messages. Unlike error
messages there's no provision for them in standard JDBC/ODBC/etc APIs,
and you have to specifically ask the client driver for them. Unless your
app is aware of PostgreSQL specifically, it won't be doing that.

If you're developing yourself using JDBC or similar directly, there are
some options. If you're using something like MS Access, you're probably
stuffed; consider doing things like appending records to message tables
and polling the contents instead. It's ugly, but so are most database
interface RAD tools.
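
If you do go the message-table route, a minimal sketch (names invented) could
look like:

CREATE TABLE app_messages (
    id      serial PRIMARY KEY,
    created timestamptz NOT NULL DEFAULT now(),
    message text NOT NULL
);

-- in the trigger/rule function, instead of RAISE NOTICE:
--   INSERT INTO app_messages (message) VALUES ('This condition is verified');

-- and the client application polls:
SELECT id, created, message
FROM app_messages
ORDER BY created DESC
LIMIT 10;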

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PGSQL x iptables

2009-05-07 Thread Slansky Lukas
Very odd, indeed.
We have analysed the traffic on both ends and we have found that the
problem is probably in the PIX. As I have read on LKML
(http://lkml.org/lkml/2007/7/29/174), Cisco handles SACK packets really
badly; these are then recognized as INVALID and dropped. We still don't
know what is wrong and why the SACKs are produced, but I suspect the PIX
too. The workaround and some explanation are given in the LKML thread. As
we have found, the problem is not only in PG but in Oracle too (it seems
the DB server or its drivers deal with this problem better). So this is
not only PG-related.

Now, after turning SACKs off, the connection seems stable. But we'll try to
figure out why such packets are produced. That will probably take
time...

Thanks,
Lukas

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Craig Ringer
Sent: Thursday, May 07, 2009 3:02 AM
To: Slansky Lukas
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] PGSQL x iptables

Slansky Lukas wrote:
>> Craig Ringer wrote:
>>
>> After a long period of inactivity, perhaps?
> 
> Is 15 seconds long period? I don't think so.

No. If you see a connection that was working and active 15 seconds ago 
suddenly die, it's not due to time-based state table expiry.

Do you see anything in `dmesg'?

Have you used wireshark to trace activity on the interface and analyzed 
the dumps? You can often learn a lot about what's actually happening 
that way.

> To John: I know it's related to iptables but this state seems to be
only
> on PG connections :-)

Very odd.

--
Craig Ringer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Glyn Astill

--- On Thu, 7/5/09, Massa, Harald Armin  wrote:

> >
> > mentioning those @ symbols ...
> 
> 1,5 weeks ago there was reported on this list the problem
> "postgres service
> not starting on windows"; after consulting event log
> the user reported as
> message "bogus data in postmaster.pid". After
> deleting postmaster.pid the
> service started up fine.
> 
> Soon after a customer of mine reported the same error, also
> on windows; and
> before deleting postmaster.pid I got a copy of that
> "bogus one". AND: there
> where also a lot of  symobols in postmaster.pid (hex 0)
> 
> After reading the answers to the funny chars in the logs
> and no fsync on the
> logs: is there a fsync on postmaster.pid? Or is that file
> not considered
> important enough?
> 
> (just digging for the reason for corrupted data in
> postmaster.pid)...
> 

Aha, nice one Harald,

So the @ symbols are hex 0. Perhaps the @ symbols follow the pattern of the 
text that was written to the log - but since ext3 is in data=writeback mode it 
knows that there should be some data there *but* it doesn't know what that data 
is, so it just ends up as 0s.

With regards to your question, if the .pid is not fsynced I agree that doing so 
would perhaps be a good idea; is there any reason not to?


 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Alvaro Herrera

> --- On Thu, 7/5/09, Massa, Harald Armin  wrote:

> > After reading the answers to the funny chars in the logs and no fsync on
> > the logs: is there a fsync on postmaster.pid? Or is that file not
> > considered important enough?

I think this strongly suggests that postmaster.pid should be fsync'ed.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PgUS 501c3 Public Charity

2009-05-07 Thread Alvaro Herrera
Dave Page wrote:
> On Thu, May 7, 2009 at 3:04 AM, Alvaro Herrera
>  wrote:
> > Joshua D. Drake wrote:
> >> Hello,
> >>
> >> Yeah its not general technical discussion but this little bit of news
> >> warrants more widely read attention. PgUS (http://www.postgresql.us/)
> >> received its 501c3 public charity status today. You can view the
> >> determination letter here:
> >>
> >> https://www.postgresql.us/determination_letter
> >
> > Just curious: is PostgreSQL as a project withdrawing from SPI?
> 
> No. Joshua is reporting the status of PgUS which is analogous to pgEU
> in function - that is, it is primarily supporting & serving the
> regional users and user groups. The core project will remain in SPI to
> support global activities and the work of the developers/contributors
> etc.

Isn't the majority of donations going to go to PgUS and pgEU anyway?
What good will it do for SPI to attempt to support global activities
if it doesn't have any money?

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PgUS 501c3 Public Charity

2009-05-07 Thread Dave Page
On Thu, May 7, 2009 at 4:22 PM, Alvaro Herrera
 wrote:

> Isn't the majority of donations going to go to PgUS and pgEU anyway?

I don't believe so,

> What good will it to for SPI to attempt to support global activities
> if it doesn't have any money?

It wouldn't be any good, but it does have money, so it isn't a problem (at present).


-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Optimizing tuning and table design for large analytics DB

2009-05-07 Thread Rob W

Can anyone point me towards good articles or books that would help a PostgreSQL 
novice (i.e. me) learn the optimal approaches to setting up a DB for analytics?

In this particular case, I need to efficiently analyze approximately 300 
million system log events (i.e. time series data). It's log data, so it's only 
appended to the table, not inserted and is never modified. Only 90 days worth 
of data will be retained, so old records need to be deleted periodically. Query 
performance will only be important for small subsets of the data (e.g. when 
analyzing a week or day's worth of data), the rest of the reports will be run 
in batch mode. There will likely only be one user at a time doing ad-hoc 
queries.

This is a follow-up to the earlier suggestions that PostgreSQL will handle 
the volumes of data I plan to work with, so I figured I'd give it a shot. 

Rob

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is there a way?

2009-05-07 Thread Anderson dos Santos Donda
My backup is PLAIN(text) format... i'll try to do with sed...

Thanks for all o/

On Thu, May 7, 2009 at 9:14 AM, Jasen Betts  wrote:

> On 2009-05-06, Anderson dos Santos Donda  wrote:
> >
> > Hi o/
> >
> > I have a table with 2000 names and emails.. and I did UPDATE list SET
> email
> >= '' without the WHERE and now all rows is null in column email..
> >
> >
> > I have a backup made with pg_dump.. is there a way to restore only the
> email
> > data?
>
> what format of pg_dump?
>
> If a binary dump use pg_restore to restore the table you want to a
> scratch database then use dblink or copy to get that into a temp table
> in the database you want.
>
> If a text dump use sed (or an interactive editor) to extract the table
> data you want, load it into a temp table
>
> finally do an
> UPDATE ... FROM 
> to fix the losses.
>
> I had to do this a few months ago.
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] "No transaction in progress" warning

2009-05-07 Thread Conrad Lender
On 07/05/09 16:56, Adrian Klaver wrote:
>> The server startup messages were in this file (which is why I
>> assumed it was the correct one), but for some reason after a
>> restart the logs were created as
>> 
>> /var/log/postgresql/postgresql-8.3-main.log.1241706461
[...]
> is done in the zone specified by log_timezone.) <<If no %-escapes are
> present, PostgreSQL will append the epoch of the new log file's
> creation time. For example, if log_filename were server_log, then the
> chosen file name would be server_log.1093827753 for a log starting at
> Sun Aug 29 19:02:33 2004 MST.>> This parameter can only be set in the
> postgresql.conf file or on the server command line.

Thanks, I had missed that in the manual.

I'm beginning to feel incredibly dense now, but this actually brought my
original problem back. When I do specify log_filename and log_directory,
and restart Postgres, the test you suggested ('select 1/0;') shows up as
an error in the log file

/var/log/postgresql/postgresql-8.3-main.log.{timestamp}

When I leave both log_directory and log_filename commented out (my
original settings), then restart postgres, it creates the file

/var/log/postgresql/postgresql-8.3-main.log

This contains three lines about SSL certificates, but the warning from
'select 1/0' will instead be written to this file:

/var/lib/postgresql/8.3/main/pg_log/postgresql-2009-05-07_170932.log

So there are two log files :-|

Is there any way to configure Postgres to always append to the same file
(/var/log/postgresql/postgresql-8.3-main.log), even if it doesn't
contain strftime escapes? I guess it must be possible, because that's
the way it used to work (before I screwed up my kernel memory settings
in an unrelated SNAFU, preventing Postgres from starting on boot).


Thanks for your patience.
  - Conrad

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "No transaction in progress" warning

2009-05-07 Thread Scott Mead
On Thu, May 7, 2009 at 11:29 AM, Conrad Lender  wrote:

>
> I'm beginning to feel incredibly dense now, but this actually brought my
> original problem back. When I do specify log_filename and log_directory,
> and restart Postgres, the test you suggested ('select 1/0;') shows up as
> an error in the log file
>
> /var/log/postgresql/postgresql-8.3-main.log.{timestamp}
>
> When I leave both log_directory and log_filename commented out (my
> original settings), then restart postgres, it creates the file
>
> /var/log/postgresql/postgresql-8.3-main.log
>
> This contains three lines about SSL certificates, but the warning from
> 'select 1/0' will instead be written to this file:
>
> /var/lib/postgresql/8.3/main/pg_log/postgresql-2009-05-07_170932.log
>
> So there are two log files :-|


How did you restart postgres?  I'm guessing that you're using a distribution
provided package.  If you're using the /etc/init.d scripts from that
package, it's likely that the startup script is redirecting stderr and that
the system is configured to use syslog for the rest of the logging.



--Scott


Re: [GENERAL] PgUS 501c3 Public Charity

2009-05-07 Thread Andrew Gould
On Thu, May 7, 2009 at 10:22 AM, Alvaro Herrera
wrote:

> Dave Page wrote:
> > On Thu, May 7, 2009 at 3:04 AM, Alvaro Herrera
> >  wrote:
> > > Joshua D. Drake wrote:
> > >> Hello,
> > >>
> > >> Yeah its not general technical discussion but this little bit of news
> > >> warrants more widely read attention. PgUS (http://www.postgresql.us/)
> > >> received its 501c3 public charity status today. You can view the
> > >> determination letter here:
> > >>
> > >> https://www.postgresql.us/determination_letter
> > >
> > > Just curious: is PostgreSQL as a project withdrawing from SPI?
> >
> > No. Joshua is reporting the status of PgUS which is analogous to pgEU
> > in function - that is, it is primarily supporting & serving the
> > regional users and user groups. The core project will remain in SPI to
> > support global activities and the work of the developers/contributors
> > etc.
>
> Isn't the majority of donations going to go to PgUS and pgEU anyway?
> What good will it to for SPI to attempt to support global activities
> if it doesn't have any money?
>
> --
> Alvaro Herrera
> http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
> http://www.postgresql.org/mailpref/pgsql-general
>

The public charity status lets those of us who pay income taxes to the US
government claim donations to PostgreSQL as deductions on our income tax
statements.  It encourages donations.  It makes donations more affordable.
It does not limit where the money is used.  That's it.  That's all.

Andrew


Re: [GENERAL] PgUS 501c3 Public Charity

2009-05-07 Thread Joshua D. Drake
On Thu, 2009-05-07 at 16:25 +0100, Dave Page wrote:
> On Thu, May 7, 2009 at 4:22 PM, Alvaro Herrera
>  wrote:
> 
> > Isn't the majority of donations going to go to PgUS and pgEU anyway?
> 
> I don't believe so,

Actually except for a very recent influx of a bulk sum into SPI, PgUS is
probably leading the pack for donations. I would expect that pgEU would
be in a similar situation after PgDay.eu.

> 
> > What good will it to for SPI to attempt to support global activities
> > if it doesn't have any money?
> 
> It wouldn't be any good, but it does have money so isn't a problem (at 
> present).

There are a lot of pros and cons to staying with SPI. The number one pro
is that it gives International community members a way to help support
the project and in return gives .Org a way to return the support. Of
course there are issues in that as well which are probably best served
not being discussed on this thread (IRS rules etc..).

It is true that over the next year most money will likely funnel into
PgEU and PgUS. This makes sense as those are the most active communities
and those communities are actively providing services to their
respective communities.

I hope that a couple of things come out of this as a whole:

 1. I would like to see more country or regional non profits. They are
always going to be better able to serve their needs than .Org.

 2. For those places that won't or can't organize in such a way they
still have a place that they can affiliate with that will continue to
help support them.

In all I believe we need to be pushing hard to help any community that
is reasonably organized to get official (/me hints at Brazil).

Sincerely,

Joshua D. Drake



-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Power outage and funny chars in the logs

2009-05-07 Thread Massa, Harald Armin
> What did make me scratch my head was a short stream of @ symbols (well they
> show up as @ symbols in vi) in the log file of the main server (others are
> slony subscribers).
>
> mentioning those @ symbols ...

1.5 weeks ago the problem "postgres service not starting on windows" was
reported on this list; after consulting the event log the user reported the
message "bogus data in postmaster.pid". After deleting postmaster.pid the
service started up fine.

Soon after, a customer of mine reported the same error, also on Windows; and
before deleting postmaster.pid I got a copy of that "bogus one". AND: there
were also a lot of NUL symbols in postmaster.pid (hex 0).

After reading the answers about the funny chars in the logs and no fsync on the
logs: is there an fsync on postmaster.pid? Or is that file not considered
important enough?

(just digging for the reason for corrupted data in postmaster.pid)...

Harald

-- 
GHUM Harald Massa
persuadere et programmare
Harald Armin Massa
Spielberger Straße 49
70435 Stuttgart
0173/9409607
no fx, no carrier pigeon
-
LASIK good, steroids bad?


Re: [GENERAL] PgUS 501c3 Public Charity

2009-05-07 Thread Alvaro Herrera
Andrew Gould escribió:

> The public charity status lets those of us who pay income taxes to the US
> government claim donations to PostgreSQL as deductions on our income tax
> statements.  It encourages donations.  It makes donations more affordable.
> It does not limit where the money is used.  That's it.  That's all.

I know that.  But both PgUS and SPI have public charity status now.  Which one
would a prospective US donor be more willing to donate to?  Josh just
confirmed money is currently flowing to PgUS.  But PgUS's charter is to help
activities within the US; so since SPI is going to have little money shortly,
the "global" communities (meaning everything outside the US and Europe) are going
to find themselves without any means to fund getting people from there to here
(yes -- "here" to me means outside the US/EU).

For it was SPI who used to fund US speakers to travel to places like Brazil.
Do you think Brazil is in a position to get nearly as many funds as the US
community?  I know my country is likely to raise very little money (we hardly
get enough money to handle a single yearly Linux conference; and that's only
because we bunch all F/OSS stuff together.  A single project like Pg is
unlikely to fly very far.)

I'll ask PgUS later to fund my possible flight to Cuba for a Pg summer school.  
Oh wait a minute ...

Hey, but I forgot -- congratulations on the 501(c)3 status!

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PgUS 501c3 Public Charity

2009-05-07 Thread Joshua D. Drake
On Thu, 2009-05-07 at 12:15 -0400, Alvaro Herrera wrote:
> Andrew Gould escribió:

> Hey, but I forgot -- congratulations on the 501(c)3 status!
> 


First, thanks!

I think we are getting a bit off topic here. If we want to continue this
let's move it to -advocacy.

Sincerely,

Joshua D. Drake


-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] "No transaction in progress" warning

2009-05-07 Thread Tom Lane
Scott Mead  writes:
> On Thu, May 7, 2009 at 11:29 AM, Conrad Lender  wrote:
>> When I leave both log_directory and log_filename commented out (my
>> original settings), then restart postgres, it creates the file
>> 
>> /var/log/postgresql/postgresql-8.3-main.log
>> 
>> This contains three lines about SSL certificates, but the warning from
>> 'select 1/0' will instead be written to this file:
>> 
>> /var/lib/postgresql/8.3/main/pg_log/postgresql-2009-05-07_170932.log
>> 
>> So there are two log files :-|

> How did you restart postgres?  I'm guessing that you're using a distribution
> provided package.  If you're using the /etc/init.d scripts from that
> package, it's likely that the startup script is redirecting stderr and that
> the system is configured to use syslog for the rest of the logging.

Right.  There are a number of messages that can appear at startup before
the postmaster reads and adopts the "where to log" settings in
postgresql.conf.  Those early messages are going to go to postmaster's
stderr.  What it sounds like is you're using a start script that points
postmaster stderr to /var/log/postgresql/postgresql-8.3-main.log.
I don't find that to be a tremendously good idea --- in my RPMs the
early-startup messages go to a fixed file (/var/lib/pgsql/pgstartup.log)
that's not dependent on what the "where to log" configuration settings
are.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres partially hang after inactivity

2009-05-07 Thread Marco Maccaferri

On 06/05/2009 21:39 Merlin Moncure wrote:


probably, you had a transaction sitting around that is not closed (to
see: select * from pg_stat_activity).

the alter table is waiting for that transaction to finish, but itself
blocks all _new_ transactions (alter table acquires a strong lock on
the table).   Cancel the alter table query or the query that is
blocking your alter table.  Again, check pg_stat_activity and
ungranted locks (select * from pg_locks where granted = false).
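
Spelled out, those checks look something like this (a sketch; procpid and
current_query are the 8.x pg_stat_activity column names):

SELECT l.pid, l.locktype, l.mode,
       l.relation::regclass AS locked_table,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON a.procpid = l.pid
WHERE NOT l.granted;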


By querying these tables I found that the connection pooling was not 
set up properly and that there were a few unclosed sessions hanging around. 
Thank you for your help.


Regards,
Marco.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrading from postgres 8.1 to 8.3

2009-05-07 Thread Scott Marlowe
On Thu, May 7, 2009 at 4:54 AM, Glyn Astill  wrote:
>
>> From: S Arvind 
>> Subject: [GENERAL] Upgrading from postgres 8.1 to 8.3
>> To: pgsql-general@postgresql.org
>> Date: Thursday, 7 May, 2009, 11:42 AM
>> Our 600GB data was currently loaded in postgres 8.1 , we
>> want to upgrade
>> from postgres 8.1 to 8.3 . Can we able to point the data
>> directly or should
>> we have to do any porting work for transfering data from
>> 8.1 to 8.3.
>>
>
> You need to do a dump and reload. There was a project that did an in-place 
> upgrade of datafiles (pg_migrator???) but I think it only supports -> 8.2

Bruce is working on that project for 8.4 I believe.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Optimizing tuning and table design for large analytics DB

2009-05-07 Thread John R Pierce

Rob W wrote:

Can anyone point me towards good articles or books that would help a PostgreSQL 
novice (i.e. me) learn the optimal approaches to setting up a DB for analytics?

In this particular case, I need to efficiently analyze approximately 300 
million system log events (i.e. time series data). It's log data, so it's only 
appended to the table, not inserted and is never modified. Only 90 days worth 
of data will be retained, so old records need to be deleted periodically. Query 
performance will only be important for small subsets of the data (e.g. when 
analyzing a week or day's worth of data), the rest of the reports will be run 
in batch mode. There will likely only be one user at a time doing ad-hoc 
queries.

This is a follow-up to the earlier suggestions that PostgreSQL will handle the volumes of data I plan to work with, so I figured I'd give it a shot. 
  



One approach to speeding up the handling of time-expired data like this 
is to partition it, maybe by week. E.g., you create a separate table for 
each of 14 weeks, and have a view that joins them all for doing 
queries. You insert your new records into the latest week's table, then 
each week truncate the oldest week's table and switch to using that one 
for the new inserts. This is more efficient than having one large 
table and deleting individual rows.


You can speed up the inserts some by doing them in batches, for 
instance, collecting a few minutes' worth of new records and inserting 
them all as one transaction. Depending on how many fields of these 
tables are indexed, this can greatly reduce the overhead of maintaining 
those indices.


See http://www.postgresql.org/docs/current/static/ddl-partitioning.html 
for more on this sort of partitioning. Above, I mentioned using a view 
to read the whole table as a join; that page discusses using inheritance 
instead, which has advantages.
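
A bare-bones sketch of the inheritance flavour (table, column and week names
are only illustrative):

-- parent table holds no rows itself
CREATE TABLE log_events (
    event_time  timestamptz NOT NULL,
    event_class text,
    payload     text
);

-- one child per week, constrained to its date range
CREATE TABLE log_events_2009_w19 (
    CHECK (event_time >= '2009-05-04' AND event_time < '2009-05-11')
) INHERITS (log_events);

-- new records go straight into the current week's child; expiring a week
-- is then just DROP TABLE (or TRUNCATE and reuse). With
-- constraint_exclusion = on, queries on log_events skip weeks that can't match.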


If your reporting requirements include the sorts of statistics that can 
be precomputed, it can be advantageous to keep a set of running tallies 
in separate tables, like per-hour and per-day counts for each event 
class, which can be used to reduce the amount of bulk querying required 
to generate statistical count reports. Of course, these tally tables 
also need aging, but there's much MUCH less data in them, so conventional 
row deletes are probably fine.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Upgrading from postgres 8.1 to 8.3

2009-05-07 Thread Scott Mead
On Thu, May 7, 2009 at 2:56 PM, Scott Marlowe wrote:

> On Thu, May 7, 2009 at 4:54 AM, Glyn Astill 
> wrote:
> >
> >> From: S Arvind 
> >> Subject: [GENERAL] Upgrading from postgres 8.1 to 8.3
> >> To: pgsql-general@postgresql.org
> >> Date: Thursday, 7 May, 2009, 11:42 AM
> >> Our 600GB data was currently loaded in postgres 8.1 , we
> >> want to upgrade
> >> from postgres 8.1 to 8.3 . Can we able to point the data
> >> directly or should
> >> we have to do any porting work for transfering data from
> >> 8.1 to 8.3.
> >>
> >
> > You need to do a dump and reload. There was a project that did an
> in-place upgrade of datafiles (pg_migrator???) but I think it only supports
> -> 8.2
>
> Bruce is working on that project for 8.4 I believe.


  I think that one only supports 8.3 +

http://momjian.us/main/blogs/pgblog.html#May_5_2009

--Scott

>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Upgrading from postgres 8.1 to 8.3

2009-05-07 Thread Scott Marlowe
On Thu, May 7, 2009 at 1:05 PM, Scott Mead  wrote:
>
> On Thu, May 7, 2009 at 2:56 PM, Scott Marlowe 
> wrote:
>>
>> On Thu, May 7, 2009 at 4:54 AM, Glyn Astill 
>> wrote:
>> >
>> >> From: S Arvind 
>> >> Subject: [GENERAL] Upgrading from postgres 8.1 to 8.3
>> >> To: pgsql-general@postgresql.org
>> >> Date: Thursday, 7 May, 2009, 11:42 AM
>> >> Our 600GB data was currently loaded in postgres 8.1 , we
>> >> want to upgrade
>> >> from postgres 8.1 to 8.3 . Can we able to point the data
>> >> directly or should
>> >> we have to do any porting work for transfering data from
>> >> 8.1 to 8.3.
>> >>
>> >
>> > You need to do a dump and reload. There was a project that did an
>> > in-place upgrade of datafiles (pg_migrator???) but I think it only supports
>> > -> 8.2
>>
>> Bruce is working on that project for 8.4 I believe.
>
>   I think that one only supports 8.3 +

Sorry if that came across as a reply to the OP about his issue, it was
more a comment to the replied to post that the upgrade project was
back alive.  Didn't mean to cause confusion.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Schema per user?

2009-05-07 Thread Anders Steinlein

Hi,

I'm pondering a design question for a subscription-based web-app we  
are developing. Would it be feasible to create a new schema per user  
account, setting the search_path to their own schema during login?  
There is no shared data (except where we keep a record of users), as  
each account is entirely self-contained. I would usually just put  
username into relevant tables and querying based on this, but I  
figured separate schemas *might* have a few advantages: Ease scaling  
by placing users across different tablespaces and/or databases, easier  
backup/restore of specific user's data and no vacuum when we delete  
accounts (just drop schema vs delete).
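
Concretely, I'm thinking of something along these lines (just a sketch,  
names made up):

-- shared data stays in public
CREATE TABLE public.users (
    username      text PRIMARY KEY,
    password_hash text NOT NULL
);

-- each account gets its own schema with identical tables
CREATE SCHEMA acct_alice;
CREATE TABLE acct_alice.messages (
    id   serial PRIMARY KEY,
    body text
);

-- at login the application scopes the session to that account
SET search_path = acct_alice, public;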


Has anyone done something like this, or is it simply A Bad Idea? How  
many schemas can a database contain, until one hits a (hard or soft)  
limit? Keep in mind that this is not a "regular" web-app with  
thousands of users, but more in the range of 500-1000 for the  
foreseeable future.


Regards,
Anders

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Scott Marlowe
On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein  wrote:
> Hi,
>
> I'm pondering a design question for a subscription-based web-app we are
> developing. Would it be feasible to create a new schema per user account,
> setting the search_path to their own schema during login? There is no shared
> data (except where we keep a record of users), as each account is entirely
> self-contained. I would usually just put username into relevant tables and
> querying based on this, but I figured separate schemas *might* have a few
> advantages: Ease scaling by placing users across different tablespaces
> and/or databases, easier backup/restore of specific user's data and no
> vacuum when we delete accounts (just drop schema vs delete).
>
> Has anyone done something like this, or is it simply A Bad Idea? How many
> schemas can a database contain, until one hits a (hard or soft) limit? Keep
> in mind that this is not a "regular" web-app with thousands of users, but
> more in the range of 500-1000 for the foreseeable future.

We're looking at something similar here at work, but in the 10k to 10M
range of schemas.  I'll let you know how our testing goes.

1,000 is nothing in terms of schemas.  You should be fine.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Anders Steinlein


On May 7, 2009, at 10:05 PM, Scott Marlowe wrote:

On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein  
 wrote:

Hi,

I'm pondering a design question for a subscription-based web-app we  
are
developing. Would it be feasible to create a new schema per user  
account,

setting the search_path to their own schema during login?

[snip]


We're looking at something similar here at work, but in the 10k to 10M
range of schemas.  I'll let you know how our testing goes.

1,000 is nothing in terms of schemas.  You should be fine.


I'd be *very* interested to hear your experiences once you get some  
results.


Generally though, what made you consider such a solution? The same  
advantages as I mentioned? One thing I'm a bit unsure how best to  
solve is where to place the "users" or some such table for  
authentication and other "shared" info -- simply in the "public"  
schema, perhaps?


\a.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Adam Ruth
I've actually done this before. I had a web app with about 400 users  
each with their own schema. It actually worked very well, except for  
one thing. There got to be so many tables that a pg_dump would fail  
because it would run out of file locks. We got around it by creating a  
primary table and then using views in each of the schemas to access  
that user's data. It also made it easy to do a query against all users  
at once in the primary table.
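
A rough sketch of that layout, with made-up names (and assuming each
account's schema already exists), is:

    -- shared table in public, keyed by the owning account
    CREATE TABLE public.posts (username text NOT NULL, title text, body text);

    -- each account's schema gets a view restricted to its own rows
    CREATE VIEW acct_alice.posts AS
        SELECT title, body FROM public.posts WHERE username = 'alice';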



On 08/05/2009, at 5:45 AM, Anders Steinlein wrote:


Hi,

I'm pondering a design question for a subscription-based web-app we  
are developing. Would it be feasible to create a new schema per user  
account, setting the search_path to their own schema during login?  
There is no shared data (except where we keep a record of users), as  
each account is entirely self-contained. I would usually just put  
username into relevant tables and querying based on this, but I  
figured separate schemas *might* have a few advantages: Ease scaling  
by placing users across different tablespaces and/or databases,  
easier backup/restore of specific user's data and no vacuum when we  
delete accounts (just drop schema vs delete).


Has anyone done something like this, or is it simply A Bad Idea? How  
many schemas can a database contain, until one hits a (hard or soft)  
limit? Keep in mind that this is not a "regular" web-app with  
thousands of users, but more in the range of 500-1000 for the  
foreseeable future.


Regads,
Anders




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Erik Jones


On May 7, 2009, at 1:05 PM, Scott Marlowe wrote:

> On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein  wrote:
>>
>> Hi,
>>
>> I'm pondering a design question for a subscription-based web-app we are
>> developing. Would it be feasible to create a new schema per user account,
>> setting the search_path to their own schema during login? There is no shared
>> data (except where we keep a record of users), as each account is entirely
>> self-contained. I would usually just put username into relevant tables and
>> querying based on this, but I figured separate schemas *might* have a few
>> advantages: Ease scaling by placing users across different tablespaces
>> and/or databases, easier backup/restore of specific user's data and no
>> vacuum when we delete accounts (just drop schema vs delete).
>>
>> Has anyone done something like this, or is it simply A Bad Idea? How many
>> schemas can a database contain, until one hits a (hard or soft) limit? Keep
>> in mind that this is not a "regular" web-app with thousands of users, but
>> more in the range of 500-1000 for the foreseeable future.
>
> We're looking at something similar here at work, but in the 10k to 10M
> range of schemas.  I'll let you know how our testing goes.
>
> 1,000 is nothing in terms of schemas.  You should be fine.


One thing you'll notice a big change in is dump times from pg_dump  
when compared to what you'd see from a db with the same size data set  
but a smaller schema.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Tom Lane
Adam Ruth  writes:
> I've actually done this before. I had a web app with about 400 users  
> each with their own schema. It actually worked very well, except for  
> one thing. There got to be so many tables that a pg_dump would fail  
> because it would run out of file locks. We got around it by creating a  
> primary table and then using views in each of the schemas to access  
> that user's data. It also made it easy to do a query against all users  
> at once in the primary table.

Note that this is about how many tables you have, and has got nothing to
do with how many schemas they are in, but: the solution to that is to
increase max_locks_per_transaction.  The default value is kinda
conservative to avoid eating up too much shared memory.
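
For reference, something along these lines (256 is only an example value;
the setting lives in postgresql.conf and takes effect after a restart):

    SHOW max_locks_per_transaction;   -- the default is 64
    -- in postgresql.conf:  max_locks_per_transaction = 256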

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Scott Marlowe
On Thu, May 7, 2009 at 2:56 PM, Erik Jones  wrote:
>
> On May 7, 2009, at 1:05 PM, Scott Marlowe wrote:
>
>> On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein 
>> wrote:
>>>
>>> Hi,
>>>
>>> I'm pondering a design question for a subscription-based web-app we are
>>> developing. Would it be feasible to create a new schema per user account,
>>> setting the search_path to their own schema during login? There is no
>>> shared
>>> data (except where we keep a record of users), as each account is
>>> entirely
>>> self-contained. I would usually just put username into relevant tables
>>> and
>>> querying based on this, but I figured separate schemas *might* have a few
>>> advantages: Ease scaling by placing users across different tablespaces
>>> and/or databases, easier backup/restore of specific user's data and no
>>> vacuum when we delete accounts (just drop schema vs delete).
>>>
>>> Has anyone done something like this, or is it simply A Bad Idea? How many
>>> schemas can a database contain, until one hits a (hard or soft) limit?
>>> Keep
>>> in mind that this is not a "regular" web-app with thousands of users, but
>>> more in the range of 500-1000 for the foreseeable future.
>>
>> We're looking at something similar here at work, but in the 10k to 10M
>> range of schemas.  I'll let you know how our testing goes.
>>
>> 1,000 is nothing in terms of schemas.  You should be fine.
>
> One thing you'll notice a big change in is dump times from pg_dump when
> compared to what you'd see from a db with the same size data set but a
> smaller schema.

Yeah, we're already looking at segregating out pg_dumps by schemas /
ranges of schemas to keep things manageable.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Scott Marlowe
On Thu, May 7, 2009 at 2:12 PM, Anders Steinlein  wrote:
>
> On May 7, 2009, at 10:05 PM, Scott Marlowe wrote:
>
>> On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein 
>> wrote:
>>>
>>> Hi,
>>>
>>> I'm pondering a design question for a subscription-based web-app we are
>>> developing. Would it be feasible to create a new schema per user account,
>>> setting the search_path to their own schema during login?
>>>
>>> [snip]
>>
>> We're looking at something similar here at work, but in the 10k to 10M
>> range of schemas.  I'll let you know how our testing goes.
>>
>> 1,000 is nothing in terms of schemas.  You should be fine.
>
> I'd be *very* interested to hear your experiences once you get some results.
>
> Generally though, what made you consider such a solution? Same advantages as
> I mentioned? One thing I'm a bit usure of how best to solve is where to
> place the "users" or some such table for authentication and other "shared"
> info -- simply in the "public" schema, perhaps?

We're looking at a "schema per group" fit for a certain application
and we have a lot of groups (in the 100,000 to 1,000,000 range.)  We're
also looking at partitioning to multiple db servers if needs be.  It's
a compelling app, and schemas allow us to have one copy of the master
user data etc and the app just has to have a different search path and
voila, we're integrated.
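
One way to wire that up is to hang the default search_path off the role,
so the app doesn't even have to set it per connection (the names below
are only illustrative):

    CREATE SCHEMA group_12345;
    ALTER ROLE group_12345_app SET search_path = group_12345, public;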

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Erik Jones

On May 7, 2009, at 2:06 PM, Scott Marlowe wrote:

> On Thu, May 7, 2009 at 2:12 PM, Anders Steinlein  wrote:
>>
>> On May 7, 2009, at 10:05 PM, Scott Marlowe wrote:
>>
>>> On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein  wrote:
>>>>
>>>> Hi,
>>>>
>>>> I'm pondering a design question for a subscription-based web-app we are
>>>> developing. Would it be feasible to create a new schema per user account,
>>>> setting the search_path to their own schema during login?
>>>>
>>>> [snip]
>>>
>>> We're looking at something similar here at work, but in the 10k to 10M
>>> range of schemas.  I'll let you know how our testing goes.
>>>
>>> 1,000 is nothing in terms of schemas.  You should be fine.
>>
>> I'd be *very* interested to hear your experiences once you get some results.
>>
>> Generally though, what made you consider such a solution? Same advantages as
>> I mentioned? One thing I'm a bit usure of how best to solve is where to
>> place the "users" or some such table for authentication and other "shared"
>> info -- simply in the "public" schema, perhaps?
>
> We're looking at a "schema per group" fit for a certain application
> and we have lot of groups (in the 100,000 to 1,000,000 range.)  We're
> also looking at partitioning to multiple db servers if needs be.  It's
> a compelling app, and schemas allow us to have one copy of the master
> user data etc and the app just has to have a different search path and
> viola, we're integrated.


Actually, that does sound really interesting.  I could see using
pl/proxy to handle transparently accessing schemas regardless of what
actual db they're on -- and I do think that once you get up to those #s
you're talking about, you're going to need to partition across multiple boxes.


Erik Jones, Database Administrator
Engine Yard
Support, Scalability, Reliability
866.518.9273 x 260
Location: US/Pacific
IRC: mage2k






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread David Kerr
On Thu, May 07, 2009 at 03:06:11PM -0600, Scott Marlowe wrote:
- On Thu, May 7, 2009 at 2:12 PM, Anders Steinlein  wrote:
- >
- > On May 7, 2009, at 10:05 PM, Scott Marlowe wrote:
- >
- >> On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein 
- >> wrote:
- > Generally though, what made you consider such a solution? Same advantages as
- > I mentioned? One thing I'm a bit usure of how best to solve is where to
- > place the "users" or some such table for authentication and other "shared"
- > info -- simply in the "public" schema, perhaps?
- 
- We're looking at a "schema per group" fit for a certain application
- and we have lot of groups (in the 100,000 to 1,000,000 range.)  We're
- also looking at partitioning to multiple db servers if needs be.  It's
- a compelling app, and schemas allow us to have one copy of the master
- user data etc and the app just has to have a different search path and
- viola, we're integrated.
- 

Interesting, we were looking at something similar but dismissed it because it
seemed like a maintenance nightmare; instead we're planning on going with
partitioning.

From a programming aspect, we're using JPA -- anyone know if you can set
search_path with JPA/JDBC?

Also, how do you plan to handle schema updates in that model, inheritance?

You don't have a concern with dealing with 100,000 * n tables?

My background is with oracle, and in general it would have cleanup issues with
tracking that many tables/segments. Does postgres just handle an insane amount 
of tables better?

Thanks

David Kerr

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] A question about RAISE NOTICE

2009-05-07 Thread Kris Jurka



On Thu, 7 May 2009, Craig Ringer wrote:


Most programs will not be aware of notice messages. Unlike error
messages there's no provision for them in standard JDBC/ODBC/etc APIs,
and you have to specifically ask the client driver for them. Unless your
app is aware of PostgreSQL specifically, it won't be doing that.


The JDBC API has SQLWarning, which we map notice messages to and which
is understood by client tools.


http://java.sun.com/javase/6/docs/api/java/sql/SQLWarning.html

Kris Jurka

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] prepared statements and DBD::Pg

2009-05-07 Thread Tim Bunce
On Thu, May 07, 2009 at 06:50:11AM -0700, David Fetter wrote:
> On Thu, May 07, 2009 at 02:31:08PM +0100, Tim Bunce wrote:
> > On Thu, May 07, 2009 at 04:54:06AM +1200, Andrej wrote:
> > > 
> > > WARNING: DBD::Pg now (as of version 1.40) uses true prepared
> > > statements by sending them to the backend to be prepared by the
> > > Postgres server.  Statements that were legal before may no longer
> > > work.
> > 
> > Sure seems like a bug, or at best a misfeature, that DBD::Pg doesn't
> > simply fallback to client-side prepare when a server-side prepare
> > can't be performed.  I believe DBD::mysql does that.
> 
> It's a safety feature. :)

Er. I see the smiley but I'm not sure if that's a joke. Can you expand?

Tim.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] prepared statements and DBD::Pg

2009-05-07 Thread David Fetter
On Fri, May 08, 2009 at 01:02:04AM +0100, Tim Bunce wrote:
> On Thu, May 07, 2009 at 06:50:11AM -0700, David Fetter wrote:
> > On Thu, May 07, 2009 at 02:31:08PM +0100, Tim Bunce wrote:
> > > On Thu, May 07, 2009 at 04:54:06AM +1200, Andrej wrote:
> > > > 
> > > > WARNING: DBD::Pg now (as of version 1.40) uses true prepared
> > > > statements by sending them to the backend to be prepared by
> > > > the Postgres server.  Statements that were legal before may no
> > > > longer work.
> > > 
> > > Sure seems like a bug, or at best a misfeature, that DBD::Pg
> > > doesn't simply fallback to client-side prepare when a
> > > server-side prepare can't be performed.  I believe DBD::mysql
> > > does that.
> > 
> > It's a safety feature. :)
> 
> Er.  I see the smiley but I'm not sure if that's a joke.  Can you
> expand?

It's not a joke.  Client-side prepare is essentially creating a
duplicate code path and hoping that it does exactly the same thing
that the server-side one does, and this in a context of controlling
access.

If PostgreSQL's parser, etc., were in the form of exportable
libraries, that would be very nice, but until then, making server-side
prepare the only kind is just jungle caution.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Extract Week

2009-05-07 Thread Willem Buitendyk
Any ideas how to extract a non-ISO week from a timestamp?  In other words,
weeks that start on Sunday and end on Saturday?  We have the dow
function, which returns the non-ISO day of the week, so why not a non-ISO
week function?
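
One workaround would be to derive it from doy and dow, in the spirit of
strftime's %U (days before the year's first Sunday land in week 0) -- the
function name below is made up:

    CREATE FUNCTION week_starting_sunday(ts timestamp) RETURNS int AS $$
        SELECT (extract(doy FROM $1)::int + 6 - extract(dow FROM $1)::int) / 7;
    $$ LANGUAGE sql IMMUTABLE;

    -- e.g. SELECT week_starting_sunday(timestamp '2009-05-07');  -- 18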


Cheers,

Willem

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] getting a list of users

2009-05-07 Thread Eric Smith

All,

How do I get a list of database usernames using the postgres C API?

Thanks,
Eric


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting a list of users

2009-05-07 Thread Eric Smith

Never mind, everyone.  I figured it out.

On May 7, 2009, at 8:05 PM, Eric Smith wrote:


All,

How do I get a list of database usernames using the postgres C API?

Thanks,
Eric





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread Scott Marlowe
On Thu, May 7, 2009 at 4:59 PM, David Kerr  wrote:
> On Thu, May 07, 2009 at 03:06:11PM -0600, Scott Marlowe wrote:
> - On Thu, May 7, 2009 at 2:12 PM, Anders Steinlein  
> wrote:
> - >
> - > On May 7, 2009, at 10:05 PM, Scott Marlowe wrote:
> - >
> - >> On Thu, May 7, 2009 at 1:45 PM, Anders Steinlein 
> - >> wrote:
> - > Generally though, what made you consider such a solution? Same advantages 
> as
> - > I mentioned? One thing I'm a bit usure of how best to solve is where to
> - > place the "users" or some such table for authentication and other "shared"
> - > info -- simply in the "public" schema, perhaps?
> -
> - We're looking at a "schema per group" fit for a certain application
> - and we have lot of groups (in the 100,000 to 1,000,000 range.)  We're
> - also looking at partitioning to multiple db servers if needs be.  It's
> - a compelling app, and schemas allow us to have one copy of the master
> - user data etc and the app just has to have a different search path and
> - viola, we're integrated.
> -
>
> Interesting, we were looking at something similar but dismissed it because it
> seemed like a maintenance nightmare, instead we're planning on going with
> partitioning.

We're looking at both, eventually.  They're kind of two parts of the
same solution.  schemas make having many isolated users easy,
partitioning (across dbs with something like pl/proxy or in the app
layer) allows us to scale to millions or more schemas before things
start getting too slow.

> From a programming aspect, we're using JPA, anyone know if you can set 
> search_path
> with JPA/JDBC?

Can you issue
set search_path=public,myschema;
as a command, just straight up sql?

We don't use java for much here, so I don't know.

> Also, how do you plan to handle schema updates in that model, inheritence?

Yeah, we're looking at using inheritance to allow us to update them
all at once.  But if we need to do more than that, we can always
automate it with scripts.
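
As a rough illustration of the inheritance angle (table names invented,
and assuming the per-group schema already exists): children created with
INHERITS pick up a column added to the parent later, so a single ALTER on
the template reaches every schema:

    CREATE TABLE public.widgets_template (id int, label text);
    CREATE TABLE group_12345.widgets () INHERITS (public.widgets_template);

    -- later, one change on the template shows up in all the children
    ALTER TABLE public.widgets_template ADD COLUMN created_at timestamp;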

> You don't have a concern with dealing with 100,000 * n tables?

Of course I do.  But I kinda want to know when we start having issues.
Our machines are pretty beefy, and we can double the memory to 64G
and will probably double (or more) the cores on them from 8 to 16 to
24 etc... over time.  So, as long as the catalog tables aren't truly
monstrous we should be ok.  But only benchmarking will really give us
an idea.

> My background is with oracle, and in general it would have cleanup issues with
> tracking that many tables/segments. Does postgres just handle an insane amount
> of tables better?

I don't think pgsql can handle many more tables than oracle in terms
of performance; it's just way easier to maintain them, due to useful
features like inheritance and such.  And a multi-element search_path
lets you mix > 1 schema for a user to see seamlessly.  You have to be
WAY more schema aware in writing an app in oracle, at least in the
older versions I've used.  Haven't touched 11g, and don't really plan
to unless I have to. :)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schema per user?

2009-05-07 Thread CaT
On Thu, May 07, 2009 at 09:45:30PM +0200, Anders Steinlein wrote:
> Has anyone done something like this, or is it simply A Bad Idea? How  
> many schemas can a database contain, until one hits a (hard or soft)  
> limit? Keep in mind that this is not a "regular" web-app with thousands 
> of users, but more in the range of 500-1000 for the foreseeable future.

I'm doing something similarish for our blog server. We have a blog db and
a schema for each user. This was mainly for backup, restore and maintenance
reasons. Being able to take one dump of all the blogs and then selectively
rip them out by schema is nice.

Currently we're at 632 schemas and roles and climbing. No problems as yet.

-- 
  "A search of his car uncovered pornography, a homemade sex aid, women's 
  stockings and a Jack Russell terrier."
- http://www.news.com.au/story/0%2C27574%2C24675808-421%2C00.html

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] getting a list of users

2009-05-07 Thread Albe Laurenz
Eric Smith wrote:
> How do I get a list of database usernames using the postgres C API?

Execute this query:

   SELECT usename FROM pg_catalog.pg_user

and read the results.
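
If roles that cannot log in should be listed as well, a variant against
pg_roles also works:

   SELECT rolname FROM pg_catalog.pg_roles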

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general