Re: [GENERAL] Problem with Crosstab (Concatenate Problem)

2010-11-01 Thread Stefan Schwarzer
>> I need to convert the integer values for the years into column names, i.e. 
>> "1965" into "y_1965". How do I achieve this then?
> 
> Try something like:
> 
> create table foo (
> name text,
> year_start int,
> value float8);
> 
> insert into foo values('a',2010,1.23),('b',2011,2.34);
> 
> SELECT * FROM
>   crosstab(
>     'SELECT name, year_start, value FROM foo ORDER BY 1',
>     'SELECT DISTINCT year_start FROM foo'
>   )
> AS ct(name varchar, y_2010 float8, y_2011 float8);
> 
>  name | y_2010 | y_2011
> ------+--------+--------
>  a    |   1.23 |
>  b    |        |   2.34
> (2 rows)

Hi Joe. Thanks a lot for the suggestions. I tried it out, but I get the same error message:

ERROR:  invalid input syntax for integer: "SELECT DISTINCT year_start FROM foo"
LINE 4:'SELECT DISTINCT year_start FROM foo'
   ^

Did this work for you? If so, this failure would indeed be strange.

I wonder whether the category query (the second query passed to crosstab) must
produce the same column names as the final output ("2010" vs. "y_2010").
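One hedged guess at the cause: the error message is exactly what appears when the call resolves to the older crosstab(text, int) variant, so the category query string gets coerced to an integer. Checking which variants the installed tablefunc actually provides would confirm this (a diagnostic sketch, not from the thread):

```sql
-- List every installed crosstab signature. If crosstab(text, text) is
-- missing, the two-argument call falls back to crosstab(text, int) and
-- produces "invalid input syntax for integer". Reinstalling a current
-- contrib/tablefunc should add the text, text variant.
SELECT oid::regprocedure AS signature
FROM pg_proc
WHERE proname = 'crosstab';
```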

Can anyone help me out? Thanks a lot for any tips!

Stef
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] select problem

2010-11-01 Thread zab08

(SELECT * from mydata,city WHERE mydata.sample = 1 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))
INTERSECT
(SELECT * from mydata,city WHERE mydata.sample = 2 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))

EXCEPT
 (
(SELECT * from mydata,city WHERE mydata.sample = 2 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))
INTERSECT
(SELECT * from mydata,city WHERE mydata.sample = 3 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))
)

UNION

(SELECT * from mydata,city WHERE mydata.sample = 1 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))
INTERSECT
(SELECT * from mydata,city WHERE mydata.sample = 3 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))

EXCEPT
 (
(SELECT * from mydata,city WHERE mydata.sample = 2 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))
INTERSECT
(SELECT * from mydata,city WHERE mydata.sample = 3 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to AND
basen not in ('A', 'T', 'G','C'))
)

P.S.
Here, set 1, set 2, and set 3 are the rows for sample = 1, 2, and 3.
Our target: (set 1 INTERSECT set 2) EXCEPT (set 2 INTERSECT set 3) UNION (set 1
INTERSECT set 3) EXCEPT (set 2 INTERSECT set 3).
Your data is not correct.
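The repeated subquery above can be factored into a CTE and the precedence made explicit (in PostgreSQL, INTERSECT binds tighter than UNION/EXCEPT, which associate left to right). A sketch, assuming 8.4+ for WITH; note two deliberate changes from the query above: it compares the (name, age_from, age_to) triple instead of SELECT * (which would include sample and make every INTERSECT empty), and it adds a city_block join condition, which is an assumption about the schema:

```sql
WITH m AS (
    SELECT mydata.sample,
           city.name,
           mydata.age_from,
           mydata.age_to
    FROM mydata
    JOIN city ON mydata.city_block = city.city_block   -- assumed join key
             AND mydata.age_from >= city.age_from
             AND mydata.age_to   <= city.age_to
    WHERE mydata.basen NOT IN ('A', 'T', 'G', 'C')
)
(SELECT name, age_from, age_to FROM m WHERE sample = 1
 INTERSECT
 SELECT name, age_from, age_to FROM m WHERE sample = 2
 EXCEPT
 (SELECT name, age_from, age_to FROM m WHERE sample = 2
  INTERSECT
  SELECT name, age_from, age_to FROM m WHERE sample = 3))
UNION
(SELECT name, age_from, age_to FROM m WHERE sample = 1
 INTERSECT
 SELECT name, age_from, age_to FROM m WHERE sample = 3
 EXCEPT
 (SELECT name, age_from, age_to FROM m WHERE sample = 2
  INTERSECT
  SELECT name, age_from, age_to FROM m WHERE sample = 3));
```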


Re: [GENERAL] Replication

2010-11-01 Thread Scott Marlowe
On Mon, Nov 1, 2010 at 4:39 PM, Jonathan Tripathy  wrote:
>
> On 01/11/10 21:10, Vick Khera wrote:
>>
>> On Mon, Nov 1, 2010 at 3:53 PM, Jonathan Tripathy
>>  wrote:
>>>
>>> The standby must support INSERTS and UPDATES as well (once the master has
>>> failed)
>>>
>>> Are there any solutions like this? Looking on the Postgresql site, all
>>> the
>>> standby solutions seem to be read only..
>>
>> If they are RO it is only while they are replicas, not masters.  Once
>> the server is upgraded to the master role, it becomes RW.
>>
> So in the "Hot Standby" setup as described in
> http://www.postgresql.org/docs/current/static/hot-standby.html , how would I
> automatically make the slave a master?

I think you're looking for this:
http://www.postgresql.org/docs/current/static/warm-standby-failover.html

-- 
To understand recursion, one must first understand recursion.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can select contents of view but not view itself, despite indirect membership

2010-11-01 Thread Tom Lane
Kevin Field  writes:
> My guess is that it has something to do with the join to the table
> 'mandate'.

I was wondering about that too, but the error message is pretty clear
about which table it's complaining about.

Please see if you can put together a self-contained example.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Tom Lane
Andy Colson  writes:
> now we have a membership record (100), but no customer record.

I haven't really been following this thread, but: isn't the answer
to that to establish a foreign-key constraint?  If there's an FK
then the database will provide sufficient row locking to prevent
you from deleting a row that someone else is in the midst of creating
a reference to.
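Tom's suggestion, sketched with hypothetical table and column names matching the thread (the real schema isn't shown in the discussion):

```sql
-- With the REFERENCES constraint in place, a transaction deleting a
-- customer will either block behind, or fail against, a concurrent
-- transaction inserting a membership that points at that customer --
-- rather than silently leaving an orphaned membership row.
CREATE TABLE customers (
    id   integer PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE memberships (
    id          integer PRIMARY KEY,
    customer_id integer NOT NULL REFERENCES customers (id)
);

-- This now raises a foreign-key violation if any membership row still
-- references customer 7:
-- DELETE FROM customers WHERE id = 7;
```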

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 21:10, Vick Khera wrote:

On Mon, Nov 1, 2010 at 3:53 PM, Jonathan Tripathy  wrote:

The standby must support INSERTS and UPDATES as well (once the master has
failed)

Are there any solutions like this? Looking on the Postgresql site, all the
standby solutions seem to be read only..

If they are RO it is only while they are replicas, not masters.  Once
the server is upgraded to the master role, it becomes RW.

So in the "Hot Standby" setup as described in 
http://www.postgresql.org/docs/current/static/hot-standby.html , how 
would I automatically make the slave a master?


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Temporary schemas

2010-11-01 Thread Thomas Kellerer

Merlin Moncure wrote on 01.11.2010 23:13:

On Mon, Nov 1, 2010 at 4:27 PM, Thomas Kellerer  wrote:

The problem is, that the JDBC driver only returns information about the temp
tables, if I specify that schema directly.


Have you filed a bug report to jdbc yet? :-D.


I thought about it initially, but then realized that it works as documented by 
the JDBC API.

When requesting the table information without specifying a schema, the temp
tables are returned.
But my application uses the current schema to request information about
non-qualified tables, which fails because the current schema is usually
public or another user schema, never pg_temp_xxx. So even though a SELECT
from a temp table (without a schema qualifier) works fine from within JDBC,
retrieving metadata only works when specifying either no schema or the
correct one - which is a bit confusing, but entirely in line with the JDBC specs.

Regards
Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Temporary schemas

2010-11-01 Thread Rob Sargent


On 11/01/2010 04:13 PM, Merlin Moncure wrote:
> On Mon, Nov 1, 2010 at 4:27 PM, Thomas Kellerer  wrote:
>> The problem is, that the JDBC driver only returns information about the temp
>> tables, if I specify that schema directly.
> 
> Have you filed a bug report to jdbc yet? :-D.
> 
> merlin
> 

But can you consider temp tables part of the schema when they last
only for the duration of the session?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] select problem

2010-11-01 Thread Adrian Johnson
Dear group:

I have a table structure like following:


city:

city_block   age_from   age_to   name

SF           10         20       grade1
SF           21         30       grade1
SF           35         40       grade1
SF           53         19       grade2
SF           100        153      grade2
NY           20         21       grade5


mydata:

sample   city_block   age_from   age_to   baseo   basen
1        SF           13         14       T       Y
1        SF           33         34       A       M
2        SF           24         25       G       A
2        SF           18         19       G       K
2        SF           33         34       A       M
3        SF           13         14       T       Y
3        SF           105        106      C       T

I am interested in following result:

1. sample 1 and sample 3 share the same mydata.age_from and mydata.age_to (but
sample 2 and sample 3 should not have the same age_from and age_to for the
same city.name)
2. sample 1 and sample 2 share the same mydata.age_from and mydata.age_to
3. in the results, basen should not be any of 'A', 'T', 'G', or 'C'.

That means, for a given city.name, sample 1 should share both age_from and
age_to with sample 2 and with sample 3, but sample 2 and sample 3 should have
different age_from and age_to for the same city.name.

myquery:

SELECT DISTINCT city.name from mydata,city WHERE mydata.sample = 1 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to
INTERSECT
SELECT DISTINCT city.name from mydata,city WHERE mydata.sample = 2 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to
INTERSECT
SELECT DISTINCT city.name from mydata,city WHERE mydata.sample = 3 AND
mydata.age_from >= city.age_from AND mydata.age_to <= city.age_to
INTERSECT AND
basen not in ('A', 'T', 'G','C');


I am not convinced that this is correct. Can anyone help me here, please?

thanks
adrian

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Temporary schemas

2010-11-01 Thread Merlin Moncure
On Mon, Nov 1, 2010 at 4:27 PM, Thomas Kellerer  wrote:
> The problem is, that the JDBC driver only returns information about the temp
> tables, if I specify that schema directly.

Have you filed a bug report to jdbc yet? :-D.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can select contents of view but not view itself, despite indirect membership

2010-11-01 Thread Kevin Field
On Nov 1, 4:39 pm, t...@sss.pgh.pa.us (Tom Lane) wrote:
> Kevin Field  writes:
> > Strange, no?  Anybody have any ideas why this might be?
>
> Worksforme:
>
> regression=# create group "user";
> CREATE ROLE
> regression=# create group extranet_user in group "user";
> CREATE ROLE
> regression=# create user x in group extranet_user;
> CREATE ROLE
> regression=# create view page_startup as select ...
> CREATE VIEW
> regression=# GRANT SELECT ON TABLE page_startup TO "user";
> GRANT
> regression=# set session authorization x;
> SET
> regression=> select * from page_startup;
> [ works ]
>
> I'm a bit suspicious of naming a group "user".  I wonder whether you
> outsmarted yourself somewhere along the line by failing to double-quote
> that name, so that the command ended up doing something else than you
> thought.
>
>                         regards, tom lane

Good point about the naming (I was a bit wary of it myself but hadn't
thought of the right thing yet); however, as you can see, the view
grant is quoted, and also pgAdminIII shows this:

GRANT "user" TO extranet_user;

My guess is that it has something to do with the join to the table
'mandate'.  If your view definition includes a CASE WHEN... that would
potentially (but never actually, since it tests for permission first)
select from a table that you don't have permission to select
from...does it still work for you?  (I'll try to build a generic
example tomorrow to limit it to this specific test.)

Thanks,
Kev

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication

2010-11-01 Thread Vick Khera
On Mon, Nov 1, 2010 at 3:53 PM, Jonathan Tripathy  wrote:
> The standby must support INSERTS and UPDATES as well (once the master has
> failed)
>
> Are there any solutions like this? Looking on the Postgresql site, all the
> standby solutions seem to be read only..

If they are RO it is only while they are replicas, not masters.  Once
the server is upgraded to the master role, it becomes RW.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Is this a known feature of 8.1 SSL connection?

2010-11-01 Thread Ray Stell
On Mon, Nov 01, 2010 at 11:54:25AM -0400, zhong ming wu wrote:
> I don't recall this being an issue with 8.4 I am also using
> 
> Say your 8.1 server has SSL on.  Even though pg_hba.conf have
> 
> host or hostnossl  md5
> 
> either server or 8.1 psql  insists that you have .postgresql/postgresql.*
> 
> Does that make sense to you?
> 
> Note: no "cert" in pg_hba.conf


No, that does not make sense to me; however, I don't have an 8.x to play with.

In 9.0.1, 
 with hostnossl+md5 
 ssl=on 
 no ~/.postgresql on the client 

$ psql -p 5498 template1 postgres
Password for user postgres:
psql (9.0.1)
Type "help" for help.

template1=# \q

what is the postmaster msg exactly?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Andy Colson

On 11/1/2010 3:02 PM, Jonathan Tripathy wrote:


On 01/11/10 19:56, Andy Colson wrote:

On 11/1/2010 2:29 PM, Jonathan Tripathy wrote:


On 01/11/10 19:12, Andy Colson wrote:

On 11/1/2010 2:01 PM, Jonathan Tripathy wrote:



I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMembership(int membershipID) method to remove the membership,
then
check to see if there are no more memberships left for the
corresponding
customer, and if there are none, delete the corresponding
customer as
well.



Hum.. yeah, I can see a race condition there. but even with table
locking I can see it. Not sure how your stuff works, but I'm thinking
website:

user1 goes to customer page, clicks on "add membership" and starts
filling out info.

user2 goes to customer page, clicks on "delete membership" of the
last
member ship, which blows away the membership, then the customer.

user1 clicks save.

Wouldnt matter for user2 if you locked the table or not, right?

-Andy


In the case described above, our code would throw an exception saying
"Customer no longer exists", prompting the user to create a fresh
customer - So I'm not worried about this (Although it may be
inconvenient for the user, I don't think much can be done in this
case).
Please let me know if I've missed something here.

I'm more worried about the following situation (Where a bad
interleaving
sequence happens):

user1 goes to customer page, clicks on "delete membership" of the last
member ship, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and starts
filling out info.
user1 then blows away the customer.

However I guess that if the relations are set up properly in the
database, an exception could be thrown to say that there are
corresponding memberships still exist...



yep, that sequence could be a problem too. It'll be a problem whenever
more than one person gets to the customer page. Another user could
cause that customer to go away at any time. with or without table
locks:

user1 and 2 go to customer page.
user1 deletes last membership, and customer
user2 does anything... cuz customer has gone away.

Do you really need to delete the customer? Is leaving it around a
problem?

-Andy


Yeah, unfortunately leaving the customer round is a problem due to Data
Protection Policies in the EU.

However, I'm not worried about the above situation, as if the user tries
to do anything with a customer that doesn't exist, an exception is
thrown which is, I believe, handled properly (i.e. the program doesn't
crash, but will simply tell the user to start again and create a new
customer).

Do you think table relations are enough to solve the situation that I
gave above? I.e:

user1 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and starts
filling out info.
user1 then blows away the customer.

Would my above problem be solved if the database refused to remove a
customer if it had remaining memberships?

Another potential solution could be to leave the customer behind, but
run a script on a Saturday night or something to delete all customers
with no memberships...

What do you think would be best?

Thanks



I think we might be splitting hairs... What are the chances two people
are editing the same customer at the exact same time? Plus the chances
there is only one membership (which one user is deleting), plus the
chances they are clicking the save button at the exact same time.

In the PG world, I think it might go like:

user1 clicks delete last membership:
start transaction
delete from memberships where id = 42;

user2 has filled out new membership and clicks save
start transaction
insert into memberships (id) values (100);

user1
pg's default transaction level is read commited (which I learned in
"[GENERAL] Can Postgres Not Do This Safely ?!?" thread)
At this point both have a transaction open, neither commited. If user1
checked right now to see if customer had any more memberships, it
would not see any and delete the customer which would be bad... but
lets wait

user2
commit

user1
now user1 would see the new membership, and not delete the customer,
which would be ok.


So yes, there is a problem. I'm not 100% sure how to solve.

-Andy



Sorry, Andy, where is the problem?



At this point I'm hoping someone will jump in... hint hint.  I have no 
idea if I'm even close to correct.



user1 clicks delete last membership:
>> start transaction
>> delete from memberships where id = 42;

user2 has filled out new membership and clicks save
>> start transaction
>> insert into memberships (id) values (100);

user1
check to see if any memberships, nope, so blow away the customer
commit

user2
commit

now we have a membership record (100), but no customer record.

-Andy
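One conventional fix for the interleaving above (not proposed in the thread, so treat it as a sketch with hypothetical table and column names): have both code paths lock the parent customer row with SELECT ... FOR UPDATE before touching memberships. That serializes the "is this the last membership?" check against the concurrent insert, even at the default READ COMMITTED level:

```sql
-- user1's "delete membership, then maybe delete customer" path:
BEGIN;
SELECT 1 FROM customers WHERE id = 7 FOR UPDATE;   -- take the row lock
DELETE FROM memberships WHERE id = 42;
-- The count below is now stable: user2's insert is blocked on the lock.
SELECT count(*) FROM memberships WHERE customer_id = 7;
-- Only if the count is 0:
DELETE FROM customers WHERE id = 7;
COMMIT;

-- user2's "add membership" path takes the same lock first:
BEGIN;
SELECT 1 FROM customers WHERE id = 7 FOR UPDATE;   -- blocks until user1 commits
-- If that returned no row, the customer is gone: report it and start over.
INSERT INTO memberships (id, customer_id) VALUES (100, 7);
COMMIT;
```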


[GENERAL] FTS phrase searches

2010-11-01 Thread Glenn Maynard
How are adjacent-word searches handled with FTS?  tsquery doesn't do
this, so I assume it has to be done as a separate filter step, e.g.:

  # "large house" sales
  SELECT * FROM data WHERE fts @@ to_tsquery('large & house & sales')
AND tsvector_contains_phrase(fts, to_tsvector('large house'));

to do an indexed search for "large & house & sales" and then to narrow
the results to where "large house" actually appears as a phrase (eg.
adjacent positions at the same weight).  I can't find any function to
do that, though.  (Presumably, it would return true if all of the
words in the second tsvector exist in the first, with the same
positions relative to each other.)

"tsvector <@ tsvector" seems logical, but isn't supported.

This isn't as simple as using LIKE, since that'll ignore stemming,
tokenization rules, etc.  If the language rules allow this to match
"larger house" or "large-house", then a phrase restriction should,
too.  It's also painful when the FTS column is an aggregate of several
other columns (eg. title and body), since a LIKE match needs to know
that and check all of them separately.

Any hints?  This is pretty important to even simpler search systems.
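For what it's worth, PostgreSQL 9.6 and later added native phrase search, which folds the adjacency check into the indexed search itself (not an option in the release this thread targets):

```sql
-- 9.6+: "large house" as a phrase, plus "sales" anywhere. The <->
-- operator requires the lexemes to be adjacent and honors the same
-- stemming and tokenization rules as the rest of FTS.
SELECT *
FROM data
WHERE fts @@ to_tsquery('english', 'large <-> house & sales');

-- Equivalent, building the phrase part from plain text:
SELECT *
FROM data
WHERE fts @@ (phraseto_tsquery('english', 'large house')
              && to_tsquery('english', 'sales'));
```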

-- 
Glenn Maynard

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] can select contents of view but not view itself, despite indirect membership

2010-11-01 Thread Tom Lane
Kevin Field  writes:
> Strange, no?  Anybody have any ideas why this might be?

Worksforme:

regression=# create group "user";
CREATE ROLE
regression=# create group extranet_user in group "user";
CREATE ROLE
regression=# create user x in group extranet_user;
CREATE ROLE
regression=# create view page_startup as select ...
CREATE VIEW
regression=# GRANT SELECT ON TABLE page_startup TO "user";
GRANT
regression=# set session authorization x;
SET
regression=> select * from page_startup;
[ works ]

I'm a bit suspicious of naming a group "user".  I wonder whether you
outsmarted yourself somewhere along the line by failing to double-quote
that name, so that the command ended up doing something else than you
thought.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Temporary schemas

2010-11-01 Thread Thomas Kellerer

Merlin Moncure wrote on 01.11.2010 21:13:

On Mon, Nov 1, 2010 at 6:46 AM, Thomas Kellerer  wrote:

Hello,

I have created a temporary table using

create temporary table foo
(
  id integer
);

and noticed this was created in a schema called "pg_temp_2"

My question is:

is this always "pg_temp_2"?
Or will the name of the "temp schema" change?

If it isn't always the same, is there a way I can retrieve the schema name
for temporary tables?


Curious why you'd want to do this -- temporary magic schemas are an
implementation artifact, and there shouldn't ever be a reason to
directly reference them.


Yes and no ;)

The problem is, that the JDBC driver only returns information about the temp 
tables, if I specify that schema directly.
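For completeness: since PostgreSQL 8.3 the session's temp schema can be resolved server-side, which avoids hard-coding pg_temp_2. A small sketch:

```sql
-- Returns the name of this session's temporary schema (e.g. pg_temp_2).
-- pg_my_temp_schema() yields 0 (no match) until the session has created
-- its first temporary object.
SELECT nspname
FROM pg_namespace
WHERE oid = pg_my_temp_schema();
```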

Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 20:26, Thomas Kellerer wrote:

Jonathan Tripathy wrote on 01.11.2010 21:12:


9.0 has streaming replication and "Hot Standby"

http://www.postgresql.org/docs/current/static/hot-standby.html
http://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION 





But does that not only allow "read-only" things to work on the standby?



But you didn't ask for read/write on the standby, only for a standby 
that can take over from the master once the master fails:

"must support INSERTS and UPDATES as well (once the master has failed)"

That's exactly what the hot standby does: As long as it is in standby 
mode it's read-only.
Once the failover has happened the standby is the new master and will 
allow read/write access.


Thomas


Ahh!! So in both those links above, once the master has failed, the 
standby will support writes (as it now acts as the master)?


Thanks

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 20:21, Scott Marlowe wrote:

On Mon, Nov 1, 2010 at 2:12 PM, Jonathan Tripathy  wrote:

On 01/11/10 20:01, Thomas Kellerer wrote:

Jonathan Tripathy wrote on 01.11.2010 20:53:

Hi Everyone,

I'm looking for the best solution for "Hot Standbys" where once the
primary server fails, the standby will take over and act just like
the master did. The standby must support INSERTS and UPDATES as well
(once the master has failed)

Are there any solutions like this? Looking on the Postgresql site,
all the standby solutions seem to be read only..

9.0 has streaming replication and "Hot Standby"

http://www.postgresql.org/docs/current/static/hot-standby.html

http://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION

Regards
Thomas




But does that not only allow "read-only" things to work on the standby?

Yep.  Generally when to fail over is considered a business decision.
I think only pgpool supports automatic failover, but it has a lot of
limitations to deal with otherwise.


So PostgreSQL really doesn't have any "Hot Standbys" where, once fail-over 
has occurred, the system can act as normal? For this, would I have to 
look into Xen or VMware HA?


I'm guessing the standbys in the "warm-failover" setup allow write 
operations?


Thanks

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication

2010-11-01 Thread Thomas Kellerer

Jonathan Tripathy wrote on 01.11.2010 21:12:


9.0 has streaming replication and "Hot Standby"

http://www.postgresql.org/docs/current/static/hot-standby.html
http://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION



But does that not only allow "read-only" things to work on the standby?



But you didn't ask for read/write on the standby, only for a standby that can 
take over from the master once the master fails:
"must support INSERTS and UPDATES as well (once the master has failed)"

That's exactly what the hot standby does: As long as it is in standby mode it's 
read-only.
Once the failover has happened the standby is the new master and will allow 
read/write access.

Thomas


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] async queries in Perl and poll()/select() loop - how to make them work together?

2010-11-01 Thread Merlin Moncure
On Mon, Nov 1, 2010 at 1:31 PM, Andy Colson  wrote:
> I dont think you can start a second query until you have called
> $dbh->pg_result.  These constants just give you neat ways of waiting... its
> still just one at a time.

Correct.  The C API also supports testing whether getting the result
would 'block', i.e. wait for the server-generated result because the
client doesn't have it yet.

Asynchronous queries give you a neat way to wait on the server or do a
bit of work while a previous query is executing without dealing with
the headache of threads.  You can't overlap queries on a single
connection because the server doesn't support it.  You could however
create (a very small number of, like 2) multiple connections, keep
them open, and round robin them.  Forking is overkill.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication

2010-11-01 Thread Scott Marlowe
On Mon, Nov 1, 2010 at 2:12 PM, Jonathan Tripathy  wrote:
>
> On 01/11/10 20:01, Thomas Kellerer wrote:
>>
>> Jonathan Tripathy wrote on 01.11.2010 20:53:
>>>
>>> Hi Everyone,
>>>
>>> I'm looking for the best solution for "Hot Standbys" where once the
>>> primary server fails, the standby will take over and act just like
>>> the master did. The standby must support INSERTS and UPDATES as well
>>> (once the master has failed)
>>>
>>> Are there any solutions like this? Looking on the Postgresql site,
>>> all the standby solutions seem to be read only..
>>
>> 9.0 has streaming replication and "Hot Standby"
>>
>> http://www.postgresql.org/docs/current/static/hot-standby.html
>>
>> http://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION
>>
>> Regards
>> Thomas
>>
>>
>>
> But does that not only allow "read-only" things to work on the standby?

Yep.  Generally when to fail over is considered a business decision.
I think only pgpool supports automatic failover, but it has a lot of
limitations to deal with otherwise.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] can select contents of view but not view itself, despite indirect membership

2010-11-01 Thread Kevin Field
Hi everyone,

I'm having a strange issue on PostgreSQL 9.0.1 on Windows Server 2003
SP2.

I connect as a superuser and then SET SESSION AUTHORIZATION to user
"X" who is a member of group role "extranet_user" which inherits
membership from group role "user".  "X", "extranet_user", and even
"user" are all INHERIT.

I have the following view:

CREATE OR REPLACE VIEW page_startup AS
 SELECT contact.name, contact.nickname, COALESCE(
CASE
WHEN has_table_privilege('mandate'::text, 'select'::text)
THEN ( SELECT false AS bool
   FROM mandate
   NATURAL JOIN task
 WHERE task.waiting_for = "session_user"()::text::integer AND
task.deadline < now()
LIMIT 1)
ELSE NULL::boolean
END, true) AS no_mandates
   FROM contact
  WHERE contact.id = "session_user"()::text::integer;

GRANT SELECT ON TABLE page_startup TO "user";


If I run this:

set session authorization "X";
select pg_has_role('user','member')

I get 't' as a result.  Also, if I run this (just copying the
definition of the view):


set session authorization "X";
 SELECT contact.name, contact.nickname, COALESCE(
CASE
WHEN has_table_privilege('mandate'::text, 'select'::text)
THEN ( SELECT false AS bool
   FROM mandate
   NATURAL JOIN task
 WHERE task.waiting_for = "session_user"()::text::integer AND
task.deadline < now()
LIMIT 1)
ELSE NULL::boolean
END, true) AS no_mandates
   FROM contact
  WHERE contact.id = "session_user"()::text::integer;


I get the single row of data I'm looking for.

However, if I try to use the view instead of copying its definition:


set session authorization "X";
select * from page_startup


I get the following:

ERROR:  permission denied for relation page_startup

** Error **

ERROR: permission denied for relation page_startup
SQL state: 42501



Strange, no?  Anybody have any ideas why this might be?

Thanks,
Kev

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Temporary schemas

2010-11-01 Thread Merlin Moncure
On Mon, Nov 1, 2010 at 6:46 AM, Thomas Kellerer  wrote:
> Hello,
>
> I have created a temporary table using
>
> create temporary table foo
> (
>  id integer
> );
>
> and noticed this was created in a schema called "pg_temp_2"
>
> My question is:
>
> is this always "pg_temp_2"?
> Or will the name of the "temp schema" change?
>
> If it isn't always the same, is there a way I can retrieve the schema name
> for temporary tables?

Curious why you'd want to do this -- temporary magic schemas are an
implementation artifact, and there shouldn't ever be a reason to
directly reference them.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Replication

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 20:01, Thomas Kellerer wrote:

Jonathan Tripathy wrote on 01.11.2010 20:53:

Hi Everyone,

I'm looking for the best solution for "Hot Standbys" where once the
primary server fails, the standby will take over and act just like
the master did. The standby must support INSERTS and UPDATES as well
(once the master has failed)

Are there any solutions like this? Looking on the Postgresql site,
all the standby solutions seem to be read only..


9.0 has streaming replication and "Hot Standby"

http://www.postgresql.org/docs/current/static/hot-standby.html
http://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION 



Regards
Thomas




But does that not only allow "read-only" things to work on the standby?

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread Jehan-Guillaume (ioguix) de Rorthais

On 01/11/2010 20:54, hubert depesz lubaczewski wrote:
> On Mon, Nov 01, 2010 at 08:31:10PM +0100, Cédric Villemain wrote:
>> It should stick at a maximum of 3 * checkpoint_segments + 1, if it
>> exceed it will remove the extra files after.
> 
> if you'd look at the graph you'd notice that it never goes down to 2n+1.
> And really - so far I have not yet heard/seen/read any solid reasoning
> for 3n instead of 2n.

I understand the 3n this way:

n "active" WAL files
n "recycled-ready-to-use" WAL files
checkpoint_completion_target * n WAL files being written to disk
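
As a quick sanity check of that breakdown (a sketch; checkpoint_segments = 100 is my assumption, picked to match the 291 figure quoted elsewhere in this thread):

```sql
-- Ceiling on pg_xlog segment count per the formula discussed here:
--   (2 + checkpoint_completion_target) * checkpoint_segments + 1
SELECT (2 + 0.9) * 100 + 1 AS max_segments_at_target_0_9,  -- 291
       (2 + 0.5) * 100 + 1 AS max_segments_at_target_0_5;  -- 251
```

So lowering the target from 0.9 to 0.5 should shave only ~40 segments off the ceiling, not change the order of magnitude.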

> 
>>> also - can you explain why "fraction of total time" (time!) would
>>> directly relate to number of xlog files existing in pg_xlog? I mean -
>>> you're not the first person to suggest it, but I don't see any way that
>>> these two could be related.
>> My guess is that while your checkpoint is longer by this factor (X%),
>> the number of WAL files needed might be multiplied by the same ratio
>> (1+X%), to handle extra files created while the checkpoint is still
>> running.
> 
> I'm not sure I understand. Will need to run some tests. Yet - even
> assuming (2 + checkpoint_completion_target ) * n - it doesn't explain
> why there was no difference in number of segments after decreasing from
> 0.9 to 0.5.

Does your cluster have enough write activity? You might have to wait a
bit longer to see the remaining files being recycled or deleted...

> 
> Best regards,
> 
> depesz
> 
> 




Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 19:56, Andy Colson wrote:

On 11/1/2010 2:29 PM, Jonathan Tripathy wrote:


On 01/11/10 19:12, Andy Colson wrote:

On 11/1/2010 2:01 PM, Jonathan Tripathy wrote:



I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMembership(int membershipID) method to remove the membership,
then
check to see if there are no more memberships left for the
corresponding
customer, and if there are none, delete the corresponding 
customer as

well.



Hum.. yeah, I can see a race condition there. but even with table
locking I can see it. Not sure how your stuff works, but I'm thinking
website:

user1 goes to customer page, clicks on "add membership" and starts
filling out info.

user2 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership, then the customer.

user1 clicks save.

Wouldn't matter for user2 if you locked the table or not, right?

-Andy


In the case described above, our code would throw an exception saying
"Customer no longer exists", prompting the user to create a fresh
customer - So I'm not worried about this (Although it may be
inconvenient for the user, I don't think much can be done in this 
case).

Please let me know if I've missed something here.

I'm more worried about the following situation (Where a bad 
interleaving

sequence happens):

user1 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and starts
filling out info.
user1 then blows away the customer.

However I guess that if the relations are set up properly in the
database, an exception could be thrown to say that corresponding
memberships still exist...



yep, that sequence could be a problem too. It'll be a problem whenever
more than one person gets to the customer page. Another user could
cause that customer to go away at any time. with or without table 
locks:


user1 and 2 go to customer page.
user1 deletes last membership, and customer
user2 does anything... cuz customer has gone away.

Do you really need to delete the customer? Is leaving it around a
problem?

-Andy


Yeah, unfortunately leaving the customer around is a problem due to Data
Protection Policies in the EU.

However, I'm not worried about the above situation, as if the user tries
to do anything with a customer that doesn't exist, an exception is
thrown which is, I believe, handled properly (i.e. the program doesn't
crash, but will simply tell the user to start again and create a new
customer).

Do you think table relations are enough to solve the situation that I
gave above? I.e:

user1 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and starts
filling out info.
user1 then blows away the customer.

Would my above problem be solved if the database refused to remove a
customer if it had remaining memberships?

Another potential solution could be to leave the customer behind, but
run a script on a Saturday night or something to delete all customers
with no memberships...

What do you think would be best?

Thanks



I think we might be splitting hairs... What are the chances two people 
are editing the same customer at the exact same time?  Plus the 
chances there is only one membership (which one user is deleting), 
plus the chances they are clicking the save button at the exact same 
time.


In the PG world, I think it might go like:

user1 clicks delete last membership:
start transaction
delete from memberships where id = 42;

user2 has filled out new membership and clicks save
start transaction
insert into memberships (id) values (100);

user1
pg's default transaction level is read committed (which I learned 
in "[GENERAL] Can Postgres Not Do This Safely ?!?" thread)
At this point both have a transaction open, neither committed.  If 
user1 checked right now to see if customer had any more memberships, 
it would not see any and delete the customer which would be bad... but 
lets wait


user2
commit

user1
now user1 would see the new membership, and not delete the 
customer, which would be ok.



So yes, there is a problem.  I'm not 100% sure how to solve.

-Andy



Sorry, Andy, where is the problem?
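
One common way to close the race sketched above (not from the thread; table and column names are assumed) is to take a row lock on the parent customer before touching memberships, so the delete-then-check path and the add-membership path serialize on that row:

```sql
BEGIN;
-- Both code paths (add membership, delete last membership) must take this
-- row lock first, so they cannot interleave on the same customer.
SELECT id FROM customers WHERE id = 7 FOR UPDATE;

DELETE FROM memberships WHERE id = 42;

-- Safe under READ COMMITTED: a concurrent "add membership" transaction is
-- blocked on the row lock above until this transaction commits.
DELETE FROM customers c
 WHERE c.id = 7
   AND NOT EXISTS (SELECT 1 FROM memberships m WHERE m.customer_id = c.id);
COMMIT;
```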



Re: [GENERAL] Replication

2010-11-01 Thread Thomas Kellerer

Jonathan Tripathy wrote on 01.11.2010 20:53:

Hi Everyone,

I'm looking for the best solution for "Hot Standbys" where once the
primary server fails, the standby will take over and act just like
the master did. The standby must support INSERTS and UPDATES as well
(once the master has failed)

Are there any solutions like this? Looking on the Postgresql site,
all the standby solutions seem to be read only..


9.0 has streaming replication and "Hot Standby"

http://www.postgresql.org/docs/current/static/hot-standby.html
http://www.postgresql.org/docs/current/static/warm-standby.html#STREAMING-REPLICATION
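
Roughly, the 9.0 setup those pages describe looks like this (parameter names are from 9.0; the host and user values are placeholders):

```
# primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# standby: postgresql.conf
hot_standby = on            # allow read-only queries during recovery

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
```

On failover you promote the standby (e.g. via trigger_file), after which it accepts writes.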

Regards
Thomas





Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread hubert depesz lubaczewski
On Mon, Nov 01, 2010 at 08:31:10PM +0100, Cédric Villemain wrote:
> It should stick to a maximum of 3 * checkpoint_segments + 1; if it
> exceeds that, the extra files will be removed afterwards.

if you'd look at the graph you'd notice that it never goes down to 2n+1.
And really - so far I have not yet heard/seen/read any solid reasoning
for 3n instead of 2n.

> > also - can you explain why "fraction of total time" (time!) would
> > directly relate to number of xlog files existing in pg_xlog? I mean -
> > you're not the first person to suggest it, but I don't see any way that
> > these two could be related.
> My guess is that while your checkpoint is longer by this factor (X%),
> the number of WAL files needed might be multiplied by the same ratio
> (1+X%), to handle extra files created while the checkpoint is still
> running.

I'm not sure I understand. Will need to run some tests. Yet - even
assuming (2 + checkpoint_completion_target ) * n - it doesn't explain
why there was no difference in number of segments after decreasing from
0.9 to 0.5.

Best regards,

depesz


-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



[GENERAL] Replication

2010-11-01 Thread Jonathan Tripathy

Hi Everyone,

I'm looking for the best solution for "Hot Standbys" where once the 
primary server fails, the standby will take over and act just like the 
master did. The standby must support INSERTS and UPDATES as well (once 
the master has failed)


Are there any solutions like this? Looking on the Postgresql site, all 
the standby solutions seem to be read only..


Thanks



[GENERAL] A few [Python] tools for postgres

2010-11-01 Thread Matt Harrison
Hey Folks-

I've got 2 projects out that I'm finding useful, so I thought I'd share with
the wider postgres community.

The first is PgPartition [0].  This (python) tool eliminates the monotony of
dealing with partitions.  It generates SQL to create/index/remove/alter
partitions.

The second is PgTweak [1].  This is a somewhat nascent (also python)
project, but I hope for it to be more useful.  The main idea is to try out
different setting combinations and to see what effect they have on query
performance.  It does this right now.  I'd like for it to analyze the
EXPLAIN ANALYZE output and make somewhat intelligent suggestions.

Any feedback is appreciated.  Hopefully these tools are useful to others.
I'll be at PgWest this week, if anyone wants to discuss these (or pgtune, or
python,  etc)

cheers,

matt

http://panela.blog-city.com/

0 - http://github.com/mattharrison/PgPartition
1 - http://github.com/mattharrison/PgTweak


Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread Cédric Villemain
2010/11/1 hubert depesz lubaczewski :
> On Mon, Nov 01, 2010 at 08:13:49PM +0100, Cédric Villemain wrote:
>> 2010/11/1 hubert depesz lubaczewski :
>> > On Mon, Nov 01, 2010 at 07:22:50PM +0100, Cédric Villemain wrote:
>> >> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
>> >> > checkpoint_segments ).
>> >> (2 + checkpoint_completion_target) * checkpoint_segments + 1
>> >> => 291
>> >
>> > this is the formula given to me by rhodiumtoad on irc, but we tested with a lower
>> > checkpoint_completion_target and it didn't change *anything*.
>>
>> You'll have between 211 and 291 files for checkpoint_completion_target
>> from 0.1 to 0.9.
>> Did you see more files than the number given by the formula during
>> your tests?
>
> yes. we decreased checkpoint_completion_target to 0.5, and the numbers
> of xlog segments *did not change*.

It should stick to a maximum of 3 * checkpoint_segments + 1; if it
exceeds that, the extra files will be removed afterwards.

> also - can you explain why "fraction of total time" (time!) would
> directly relate to number of xlog files existing in pg_xlog? I mean -
> you're not the first person to suggest it, but I don't see any way that
> these two could be related.

My guess is that while your checkpoint is longer by this factor (X%),
the number of WAL files needed might be multiplied by the same ratio
(1+X%), to handle extra files created while the checkpoint is still
running.

>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007
>



-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 19:12, Andy Colson wrote:

On 11/1/2010 2:01 PM, Jonathan Tripathy wrote:



I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMembership(int membershipID) method to remove the membership, 
then
check to see if there are no more memberships left for the 
corresponding

customer, and if there are none, delete the corresponding customer as
well.



Hum.. yeah, I can see a race condition there. but even with table
locking I can see it. Not sure how your stuff works, but I'm thinking
website:

user1 goes to customer page, clicks on "add membership" and starts
filling out info.

user2 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership, then the customer.

user1 clicks save.

Wouldn't matter for user2 if you locked the table or not, right?

-Andy


In the case described above, our code would throw an exception saying
"Customer no longer exists", prompting the user to create a fresh
customer - So I'm not worried about this (Although it may be
inconvenient for the user, I don't think much can be done in this case).
Please let me know if I've missed something here.

I'm more worried about the following situation (Where a bad interleaving
sequence happens):

user1 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and starts
filling out info.
user1 then blows away the customer.

However I guess that if the relations are set up properly in the
database, an exception could be thrown to say that corresponding
memberships still exist...



yep, that sequence could be a problem too.  It'll be a problem 
whenever more than one person gets to the customer page.  Another user 
could cause that customer to go away at any time.  with or without 
table locks:


user1 and 2 go to customer page.
user1 deletes last membership, and customer
user2 does anything... cuz customer has gone away.

Do you really need to delete the customer?  Is leaving it around a 
problem?


-Andy

Yeah, unfortunately leaving the customer around is a problem due to Data 
Protection Policies in the EU.


However, I'm not worried about the above situation, as if the user tries 
to do anything with a customer that doesn't exist, an exception is 
thrown which is, I believe, handled properly (i.e. the program doesn't 
crash, but will simply tell the user to start again and create a new 
customer).


Do you think table relations are enough to solve the situation that I 
gave above? I.e:


user1 goes to customer page, clicks on "delete membership" of the last 
membership, which blows away the membership,

user2 goes to customer page, clicks on "add membership" and starts
filling out info.
user1 then blows away the customer.

Would my above problem be solved if the database refused to remove a 
customer if it had remaining memberships?


Another potential solution could be to leave the customer behind, but 
run a script on a Saturday night or something to delete all customers 
with no memberships...


What do you think would be best?

Thanks
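
On the question of the database refusing to remove a customer that still has memberships: a plain foreign key gives exactly that behaviour (a sketch with assumed table and column names):

```sql
CREATE TABLE customers (
    id   serial PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE memberships (
    id          serial PRIMARY KEY,
    -- ON DELETE RESTRICT (NO ACTION, the default, is similar) makes
    -- DELETE FROM customers fail while rows here still reference it.
    customer_id integer NOT NULL REFERENCES customers(id) ON DELETE RESTRICT
);
```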



[GENERAL] Installing postgis-pg90-setup-1.5.2-3 on windows after installing postgresql-9.0.1-1-windows_x64 errors off

2010-11-01 Thread John Mitchell
Hi,

Installing postgis-pg90-setup-1.5.2-3 on windows after installing
postgresql-9.0.1-1-windows_x64 fails with the error below. I believe this
PostgreSQL build is a 64-bit application and PostGIS is a 32-bit application,
so that may be why it fails.

psql:C:/Program Files/PostgreSQL/9.0/share/contrib/postgis-1.5/postgis.sql:58:
ERROR:  could not load library
"C:/Program Files/PostgreSQL/9.0/lib/postgis-1.5.dll": %1 is not a valid
Win32 application.
John



-- 
John J. Mitchell


Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread hubert depesz lubaczewski
On Mon, Nov 01, 2010 at 08:18:24PM +0100, Cédric Villemain wrote:
> 2010/11/1 hubert depesz lubaczewski :
> > On Mon, Nov 01, 2010 at 07:18:22PM +0100, Filip Rembiałkowski wrote:
> >> 2010/11/1 hubert depesz lubaczewski :
> >>
> >> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
> >> > checkpoint_segments ).
> >>
> >> why?
> >>
> >> for a server overloaded with R/W transactions, it's possible to go beyond 
> >> this.
> >> checkpoints just do not keep up.
> >> right now I have an 8.3 with checkpoint_segments=3, constantly running
> >> pgbench and I see 8 WAL segments.
> >
> > you will notice in the logs that the system doesn't look like very
> > loaded.
> > i mean - there is fair amount of work, but nothing even resembling
> > "overloaded".
> 
> Some checkpoints occur more frequently than perhaps
> expected (less than 15 minutes apart)...
> 
> The log line about the checkpoint might be useful. Still, I wonder what
> your question is, exactly?

why the number of wal segments is larger than 2n + 1

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread hubert depesz lubaczewski
On Mon, Nov 01, 2010 at 08:13:49PM +0100, Cédric Villemain wrote:
> 2010/11/1 hubert depesz lubaczewski :
> > On Mon, Nov 01, 2010 at 07:22:50PM +0100, Cédric Villemain wrote:
> >> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
> >> > checkpoint_segments ).
> >> (2 + checkpoint_completion_target) * checkpoint_segments + 1
> >> => 291
> >
> > this is the formula given to me by rhodiumtoad on irc, but we tested with a lower
> > checkpoint_completion_target and it didn't change *anything*.
> 
> You'll have between 211 and 291 files for checkpoint_completion_target
> from 0.1 to 0.9.
> Did you see more files than the number given by the formula during
> your tests?

yes. we decreased checkpoint_completion_target to 0.5, and the number
of xlog segments *did not change*.

also - can you explain why "fraction of total time" (time!) would
directly relate to number of xlog files existing in pg_xlog? I mean -
you're not the first person to suggest it, but I don't see any way that
these two could be related.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Adrian Klaver

On 11/01/2010 11:49 AM, Carlos Mennens wrote:

On Mon, Nov 1, 2010 at 2:27 PM, Steve Crawford wrote:


I'm guessing you are missing an initdb. Move your old data directory
somewhere else for now and do a new initdb so you can start up version 9.


When you say 'old data' can you be more specific as to the path and
possible files I need to move?

I go to the '/var/lib/postgres/data/' directory; however, I am not sure
which files in that folder structure I need to move away
without breaking basic server functionality & connection data.



I installed from source, so my data directory is in /usr/local/pgsql,
but here is what the layout looks like. Basically everything under 
~/data can be moved and initdb run against ~/data.


postg...@ford:/usr/local/pgsql/data$ l
total 112
drwx------ 13 postgres postgres  4096 2010-11-01 11:58 .
drwxr-xr-x  7 root     root      4096 2010-06-28 12:18 ..
drwx------ 13 postgres postgres  4096 2010-07-14 15:16 base
drwx------  2 postgres postgres  4096 2010-11-01 11:59 global
drwx------  2 postgres postgres  4096 2010-06-28 14:43 pg_clog
-rw-------  1 postgres postgres  3939 2010-06-28 14:43 pg_hba.conf
-rw-------  1 postgres postgres  1636 2010-06-28 14:43 pg_ident.conf
drwx------  3 postgres postgres 12288 2010-11-01 11:58 pg_log
drwx------  4 postgres postgres  4096 2010-06-28 14:43 pg_multixact
drwx------  2 postgres postgres  4096 2010-11-01 11:58 pg_notify
drwx------  2 postgres postgres  4096 2010-11-01 12:15 pg_stat_tmp
drwx------  2 postgres postgres  4096 2010-06-28 14:43 pg_subtrans
drwx------  2 postgres postgres  4096 2010-06-28 14:43 pg_tblspc
drwx------  2 postgres postgres  4096 2010-06-28 14:43 pg_twophase
-rw-------  1 postgres postgres     4 2010-06-28 14:43 PG_VERSION
drwx------  3 postgres postgres  4096 2010-06-28 16:17 pg_xlog
-rw-------  1 postgres postgres 17578 2010-10-21 11:28 postgresql.conf
-rw-------  1 postgres postgres    59 2010-11-01 11:58 postmaster.opts
-rw-------  1 postgres postgres    47 2010-11-01 11:58 postmaster.pid
-rw-r--r--  1 postgres postgres  3302 2010-07-01 14:52 server.crt
-rw-------  1 postgres postgres   887 2010-07-01 14:52 server.key
-rw-r--r--  1 postgres postgres  2094 2010-07-01 14:51 server.req



--
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread Cédric Villemain
2010/11/1 hubert depesz lubaczewski :
> On Mon, Nov 01, 2010 at 07:18:22PM +0100, Filip Rembiałkowski wrote:
>> 2010/11/1 hubert depesz lubaczewski :
>>
>> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
>> > checkpoint_segments ).
>>
>> why?
>>
>> for a server overloaded with R/W transactions, it's possible to go beyond 
>> this.
>> checkpoints just do not keep up.
>> right now I have an 8.3 with checkpoint_segments=3, constantly running
>> pgbench and I see 8 WAL segments.
>
> you will notice in the logs that the system doesn't look like very
> loaded.
> i mean - there is fair amount of work, but nothing even resembling
> "overloaded".

Some checkpoints occur more frequently than perhaps
expected (less than 15 minutes apart)...

The log line about the checkpoint might be useful. Still, I wonder what
your question is, exactly?

>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007
>
>



-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread Cédric Villemain
2010/11/1 hubert depesz lubaczewski :
> On Mon, Nov 01, 2010 at 07:22:50PM +0100, Cédric Villemain wrote:
>> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
>> > checkpoint_segments ).
>> (2 + checkpoint_completion_target) * checkpoint_segments + 1
>> => 291
>
> this is the formula given to me by rhodiumtoad on irc, but we tested with a lower
> checkpoint_completion_target and it didn't change *anything*.

You'll have between 211 and 291 files for checkpoint_completion_target
from 0.1 to 0.9.
Did you see more files than the number given by the formula during
your tests?

>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007
>



-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support



Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Andy Colson

On 11/1/2010 2:01 PM, Jonathan Tripathy wrote:



I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMembership(int membershipID) method to remove the membership, then
check to see if there are no more memberships left for the corresponding
customer, and if there are none, delete the corresponding customer as
well.



Hum.. yeah, I can see a race condition there. but even with table
locking I can see it. Not sure how your stuff works, but I'm thinking
website:

user1 goes to customer page, clicks on "add membership" and starts
filling out info.

user2 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership, then the customer.

user1 clicks save.

Wouldn't matter for user2 if you locked the table or not, right?

-Andy


In the case described above, our code would throw an exception saying
"Customer no longer exists", prompting the user to create a fresh
customer - So I'm not worried about this (Although it may be
inconvenient for the user, I don't think much can be done in this case).
Please let me know if I've missed something here.

I'm more worried about the following situation (Where a bad interleaving
sequence happens):

user1 goes to customer page, clicks on "delete membership" of the last
membership, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and starts
filling out info.
user1 then blows away the customer.

However I guess that if the relations are set up properly in the
database, an exception could be thrown to say that corresponding
memberships still exist...



yep, that sequence could be a problem too.  It'll be a problem whenever 
more than one person gets to the customer page.  Another user could 
cause that customer to go away at any time.  with or without table locks:


user1 and 2 go to customer page.
user1 deletes last membership, and customer
user2 does anything... cuz customer has gone away.

Do you really need to delete the customer?  Is leaving it around a problem?

-Andy




Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Jonathan Tripathy



I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMembership(int membershipID) method to remove the membership, then
check to see if there are no more memberships left for the corresponding
customer, and if there are none, delete the corresponding customer as 
well.




Hum.. yeah, I can see a race condition there.  but even with table 
locking I can see it.  Not sure how your stuff works, but I'm thinking 
website:


user1 goes to customer page, clicks on "add membership" and starts 
filling out info.


user2 goes to customer page, clicks on "delete membership" of the last 
membership, which blows away the membership, then the customer.


user1 clicks save.

Wouldn't matter for user2 if you locked the table or not, right?

-Andy


In the case described above, our code would throw an exception saying 
"Customer no longer exists", prompting the user to create a fresh 
customer - So I'm not worried about this (Although it may be 
inconvenient for the user, I don't think much can be done in this case). 
Please let me know if I've missed something here.


I'm more worried about the following situation (Where a bad interleaving 
sequence happens):


user1 goes to customer page, clicks on "delete membership" of the last 
membership, which blows away the membership,
user2 goes to customer page, clicks on "add membership" and starts 
filling out info.

user1 then blows away the customer.

However I guess that if the relations are set up properly in the 
database, an exception could be thrown to say that corresponding
memberships still exist...




Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Andy Colson

On 11/1/2010 1:38 PM, Jonathan Tripathy wrote:



Hi Andy,

Thanks for your reply. Would the above code be classed as a single
transaction then?

Yes, assuming there's no explicit transaction control
(COMMIT/ROLLBACK/END) in your queries.

Actually, we do have maybe one or 2 queries that use ROLLBACK, however
ROLLBACK happens at the end of a "code block" so the question is
probably moot.

And if so, could I just leave out the line which
says "//Insert SQL here to lock table"?

In PostgreSQL, locking is done automatically depending on actual
isolation level and SQL queries.
You can use explicit locking but most of the time it's not needed.


I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple
memberships (which are stored in the memberships table). We want our
deleteMembership(int membershipID) method to remove the membership, then
check to see if there are no more memberships left for the corresponding
customer, and if there are none, delete the corresponding customer as well.



Hum.. yeah, I can see a race condition there.  but even with table 
locking I can see it.  Not sure how your stuff works, but I'm thinking 
website:


user1 goes to customer page, clicks on "add membership" and starts 
filling out info.


user2 goes to customer page, clicks on "delete membership" of the last 
membership, which blows away the membership, then the customer.


user1 clicks save.

Wouldn't matter for user2 if you locked the table or not, right?

-Andy



Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Carlos Mennens
On Mon, Nov 1, 2010 at 2:27 PM, Steve Crawford wrote:

> I'm guessing you are missing an initdb. Move your old data directory
> somewhere else for now and do a new initdb so you can start up version 9.

When you say 'old data' can you be more specific as to the path and
possible files I need to move?

I go to the '/var/lib/postgres/data/' directory; however, I am not sure
which files in that folder structure I need to move away
without breaking basic server functionality & connection data.



Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 18:38, Jonathan Tripathy wrote:



Hi Andy,

Thanks for your reply. Would the above code be classed as a single
transaction then?

Yes, assuming there's no explicit transaction control
(COMMIT/ROLLBACK/END) in your queries.
Actually, we do have maybe one or 2 queries that use ROLLBACK, however 
ROLLBACK happens at the end of a "code block" so the question is 
probably moot.
Please ignore this above comment from me. We are using JDBC's rollback() 
method, instead of commit() (in a catch block), so all seems fine.

And if so, could I just leave out the line which
says "//Insert SQL here to lock table"?

In PostgreSQL, locking is done automatically depending on actual
isolation level and SQL queries.
You can use explicit locking but most of the time it's not needed.


I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple 
memberships (which are stored in the memberships table). We want our 
deleteMembership(int membershipID) method to remove the membership, 
then check to see if there are no more memberships left for the 
corresponding customer, and if there are none, delete the 
corresponding customer as well.






Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Jonathan Tripathy



Hi Andy,

Thanks for your reply. Would the above code be classed as a single
transaction then?

Yes, assuming there's no explicit transaction control
(COMMIT/ROLLBACK/END) in your queries.
Actually, we do have maybe one or two queries that use ROLLBACK; however, 
ROLLBACK happens at the end of a "code block", so the question is 
probably moot.

And if so, could I just simply leave out the line which
says "//Insert SQL here to lock table"?

In PostgreSQL, locking is done automatically, depending on the actual
isolation level and the SQL queries.
You can use explicit locking but most of the time it's not needed.


I'll give you the exact case where I'm worried:

We have a table of customers, and each customer can have multiple 
memberships (which are stored in the memberships table). We want our 
deleteMembership(int membershipID) method to remove the membership, then 
check to see if there are no more memberships left for the corresponding 
customer, and if there are none, delete the corresponding customer as well.
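
That delete-then-check-then-maybe-delete sequence is a textbook single 
transaction. A minimal runnable sketch of the same logic, using SQLite from 
the Python standard library in place of PostgreSQL/JDBC (the table and column 
names here are hypothetical, not the poster's actual schema):

```python
import sqlite3

# Sketch of deleteMembership(): remove a membership and, if it was the
# customer's last one, remove the customer too - all in one transaction.
# SQLite stands in for PostgreSQL; sqlite3 opens a transaction implicitly
# on the first data-modifying statement.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE memberships (id INTEGER PRIMARY KEY, customer_id INTEGER)")
con.execute("INSERT INTO customers VALUES (1)")
con.execute("INSERT INTO memberships VALUES (10, 1)")
con.commit()

def delete_membership(con, membership_id):
    try:
        (customer_id,) = con.execute(
            "SELECT customer_id FROM memberships WHERE id = ?",
            (membership_id,)).fetchone()
        con.execute("DELETE FROM memberships WHERE id = ?", (membership_id,))
        (remaining,) = con.execute(
            "SELECT count(*) FROM memberships WHERE customer_id = ?",
            (customer_id,)).fetchone()
        if remaining == 0:
            con.execute("DELETE FROM customers WHERE id = ?", (customer_id,))
        con.commit()       # like con.commit() in the JDBC version
    except Exception:
        con.rollback()     # like rollback() in the JDBC catch block
        raise

delete_membership(con, 10)
print(con.execute("SELECT count(*) FROM customers").fetchone()[0])  # → 0
```

The JDBC shape is the same: setAutoCommit(false), run the DELETE, the count, 
and the conditional DELETE on one connection, then commit() at the end and 
rollback() in the catch block.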




Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread hubert depesz lubaczewski
On Mon, Nov 01, 2010 at 07:18:22PM +0100, Filip Rembiałkowski wrote:
> 2010/11/1 hubert depesz lubaczewski :
> 
> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
> > checkpoint_segments ).
> 
> why?
> 
> for a server overloaded with R/W transactions, it's possible to go beyond 
> this.
> checkpoints just do not keep up.
> right now I have an 8.3 with checkpoint_segments=3, constantly running
> pgbench and I see 8 WAL segments.

you will notice in the logs that the system doesn't look very
loaded.
I mean - there is a fair amount of work, but nothing even resembling
"overloaded".

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread hubert depesz lubaczewski
On Mon, Nov 01, 2010 at 07:22:50PM +0100, Cédric Villemain wrote:
> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
> > checkpoint_segments ).
> (2 + checkpoint_completion_target) * checkpoint_segments + 1
> => 291

this is the formula given to me by rhodiumtoad on IRC, but we tested with a
lower checkpoint_completion_target and it didn't change *anything*.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Filip Rembiałkowski
2010/11/1 Jonathan Tripathy :
>
> On 01/11/10 18:08, Andy Colson wrote:
>>
>> On 11/1/2010 12:37 PM, Jonathan Tripathy wrote:
>>>
>>> Hi Everyone,
>>>
>>> I'm trying to create a server for a database system which will be used
>>> by multiple clients. Of course, table locking is very important. Reading
>>> the PostgreSQL docs, locking occurs on a transaction-by-transaction
>>> basis.
>>>
>>> In our java code, we are doing this:
>>>
>>> //Start Code Block
>>>
>>> Connection con = "..."
>>> con.setAutoCommit(false);
>>>
>>> //Insert SQL here to lock table
>>>
>>> String qry1 = "..."
>>> pst1 = con.prepareStatement(qry1)
>>> //Insert code here to add values to prepared statement pst1
>>> pst1.executeQuery();
>>>
>>> String qry2 = "..."
>>> pst2 = con.prepareStatement(qry2)
>>> //Insert code here to add values to prepared statement pst2
>>> pst2.executeQuery();
>>>
>>> con.commit();
>>>
>>> //End Code Block
>>>
>>> My question is, would the above block of code be classed as a single
>>> transaction, and would the locking work correctly?
>>>
>>> Thanks
>>>
>>> Jonny
>>>
>>>
>>
>> Table locking is very bad for concurrent access.  When a table is locked,
>> it's one user at a time.
>>
>> PG usually does not need any locks at all.  As long as you use
>> transactions as they were meant to be used (as an atomic operation), things
>> usually work really well, with no locking at all.  You could read up on MVCC
>> if you were interested.
>>
>> Without knowing what SQL you are running, I can _totally guarantee_ it'll
>> work perfectly with NO table locking.  :-)
>>
>> -Andy
>
> Hi Andy,
>
> Thanks for your reply. Would the above code be classed as a single
> transaction then?

Yes, assuming there's no explicit transaction control
(COMMIT/ROLLBACK/END) in your queries.

> And if so, could I just simply leave out the line which
> says "//Insert SQL here to lock table"?

In PostgreSQL, locking is done automatically, depending on the actual
isolation level and the SQL queries.
You can use explicit locking but most of the time it's not needed.


-- 
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/



Re: [GENERAL] avoiding nested loops when joining on partitioned tables

2010-11-01 Thread Vick Khera
On Sun, Oct 31, 2010 at 6:35 PM, Peter Neal  wrote:
> Is there any way I can explain this to postgres? When I query the parent
> table of the partitions,  "SELECT * from A, B where a.id=b.id;", the planner
> does a sequential scan on A, A1, A2, ... an index scan on B, B1, B2, ...
> then a nested loop, which generally takes a while.
>

The index scan on the B tables should be very quick to discount the
tables which have no matching data.  It will take, I expect, exactly one
page of the index to determine that.  Assuming you have plenty of RAM,
those pages should remain in your memory and not cause any disk I/O
after the first such iteration.

> As I say, I presume this is because the planner does not know that there is
> no overlap in 'id' values between the different partitions - is there any
> way to express this?

I don't believe there is.  If the inside loop is using an index scan
on each partition, that's about as good as you can do.



Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread Cédric Villemain
2010/11/1 hubert depesz lubaczewski :
> Hi
> have strange situation - too many xlog files.
>
> PostgreSQL 8.3.11 on i386-pc-solaris2.10, compiled by cc -Xa
>
> config:
> # select name, setting from pg_settings  where name  ~ 'checkpoint|wal' order 
> by 1;
>             name             |    setting
> ------------------------------+---------------
>  checkpoint_completion_target | 0.9
>  checkpoint_segments          | 100
>  checkpoint_timeout           | 900
>  checkpoint_warning           | 30
>  log_checkpoints              | on
>  wal_buffers                  | 2048
>  wal_sync_method              | open_datasync
>  wal_writer_delay             | 200
> (8 rows)
>
> as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
> checkpoint_segments ).

(2 + checkpoint_completion_target) * checkpoint_segments + 1

=> 291
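
Plugging the poster's settings into that formula (a quick arithmetic check):

```python
# checkpoint_completion_target and checkpoint_segments from the
# configuration quoted in the original post.
checkpoint_completion_target = 0.9
checkpoint_segments = 100

max_segments = (2 + checkpoint_completion_target) * checkpoint_segments + 1
print(int(max_segments))  # → 291
```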

> And in our case - it's more.
>
> Added cronjob to log data about number of segments, current segment
> name, number of segments in pg_xlog that are before current, and after
> current. script is:
>
> 
> #!/usr/bin/bash
> LOGFILE=/home/postgres/logs/check_pg_xlog.out
>
> LS_OUTPUT=$( ls -l /pgdata/main/pg_xlog | egrep -v 
> "xlogtemp|backup|status|total" | sort -k9 )
> FIRST_SEGMENT_LINE=$( echo "$LS_OUTPUT" | head -1 )
> LAST_SEGMENT_LINE=$( echo "$LS_OUTPUT" | tail -1 )
>
> FIRST_SEGMENT=$( echo "$FIRST_SEGMENT_LINE" | awk '{print $NF}' )
> LAST_SEGMENT=$( echo "$LAST_SEGMENT_LINE" | awk '{print $NF}' )
> FIRST_SEGMENT_NUM=$( echo "$FIRST_SEGMENT" | awk '{print $NF}' | cut -b 
> 9-16,23-24 )
> LAST_SEGMENT_NUM=$( echo "$LAST_SEGMENT" | awk '{print $NF}' | cut -b 
> 9-16,23-24 )
>
> SEGMENT_COUNT=$( printf $'ibase=16\n1 + %s - %s\n' $LAST_SEGMENT_NUM 
> $FIRST_SEGMENT_NUM | bc )
> CURRENT_WAL_FILE=$( /opt/pgsql8311/bin/psql -U postgres -qAtX -c 'select 
> file_name from pg_xlogfile_name_offset( pg_current_xlog_location())' )
> CURRENT_WAL_FILE_NUM=$( echo "$CURRENT_WAL_FILE" | cut -b 9-16,23-24 )
>
> SEGMENTS_BEFORE_CURRENT=$( printf $'ibase=16\n%s - %s\n' 
> $CURRENT_WAL_FILE_NUM $FIRST_SEGMENT_NUM | bc )
> SEGMENTS_AFTER_CURRENT=$( printf $'ibase=16\n%s - %s\n' $LAST_SEGMENT_NUM 
> $CURRENT_WAL_FILE_NUM | bc )
>
> CURRENT_SEGMENT_LINE=$( echo "$LS_OUTPUT" | grep "$CURRENT_WAL_FILE" )
> (
>        date
>        printf $'First segment           : %s\n' "$FIRST_SEGMENT_LINE"
>        printf $'Current segment         : %s\n' "$CURRENT_SEGMENT_LINE"
>        printf $'Last segment            : %s\n' "$LAST_SEGMENT_LINE"
>        printf $'Segment count           : %s\n' "$SEGMENT_COUNT"
>        printf $'Current wal segment     : %s\n' "$CURRENT_WAL_FILE"
>        printf $'Segments before current : %s\n' "$SEGMENTS_BEFORE_CURRENT"
>        printf $'Segments after current  : %s\n' "$SEGMENTS_AFTER_CURRENT"
>        printf $'Last checkpoint time    : %s\n' "$( 
> /opt/pgsql8311/bin/pg_controldata /pgdata/main | egrep '^Time of latest 
> checkpoint:' | sed 's/^[^:]*: *//' )"
>        /opt/pgsql8311/bin/psql -U postgres -c "select name, setting from 
> pg_settings where name = 
> any('{checkpoint_timeout,checkpoint_segments,archive_mode,archive_command}')"
> ) >> $LOGFILE
> 
>
>
> sample output looks like this:
>
> | Mon Nov  1 13:46:00 EDT 2010
> | First segment           : -rw---   1 postgres postgres 16777216 Nov  1 
> 13:16 000137670053
> | Current segment         : -rw---   1 postgres postgres 16777216 Nov  1 
> 13:45 000137670064
> | Last segment            : -rw---   1 postgres postgres 16777216 Nov  1 
> 13:01 000137680029
> | Segment count           : 215
> | Current wal segment     : 000137670064
> | Segments before current : 17
> | Segments after current  : 197
> | Last checkpoint time    : Mon Nov 01 13:31:29 2010
> |         name         |    setting
> | ---------------------+---------------
> |  archive_command     | /usr/bin/true
> |  archive_mode        | on
> |  checkpoint_segments | 100
> |  checkpoint_timeout  | 900
> | (4 rows)
>
> As you can see, now we have 215 segments, with 17 that represent wal before 
> current location and 197 that are after current segment!
>
> Here - you can see graph which plots number of wal segments in the last week 
> http://depesz.com/various/bad-wal.jpg
>
> it virtually never goes below 215, and it spikes to 270-300.
>
> In here: http://www.depesz.com/various/bad-wal.log.gz is log from my test 
> script since 20th of october.
>
> Any ideas why number of segments is higher than expected?
>
> Just so that I am clear: I do not want to lower it by changing
> checkpoint_segments. I'm looking for information/enlightenment about why
> it works the way it works, and what could be possibly wrong.
>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:

Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Carlos Mennens
On Mon, Nov 1, 2010 at 2:00 PM, Bill Moran  wrote:
> To clarify my earlier comments, if you're going to use pg_upgrade, you
> probably won't need to downgrade to 8.4.  My comments about putting
> 8.4 back on would have been necessary if you were going to go the old
> dump/restore route.

I've already downgraded / dumped the databases and upgraded to the
latest version. I was then going to create the new databases; however, I
can't connect because of the invalid data error:

I guess I'm just missing something here...I didn't choose to go with
the 'pg_upgrade' script simply because I wasn't aware of its location
and was worried it would dork up my data, so I reverted back to 8.4.4-6
and got a clean backup of my databases. So is there nothing I can do
from my position now?



Re: [GENERAL] Why so many xlogs?

2010-11-01 Thread Filip Rembiałkowski
2010/11/1 hubert depesz lubaczewski :

> as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
> checkpoint_segments ).

why?

for a server overloaded with R/W transactions, it's possible to go beyond this.
checkpoints just do not keep up.
right now I have an 8.3 with checkpoint_segments=3, constantly running
pgbench and I see 8 WAL segments.

cheers
Filip



Re: [GENERAL] max_fsm_pages increase

2010-11-01 Thread Vick Khera
On Sun, Oct 31, 2010 at 4:43 AM, AI Rumman  wrote:
> I using Postgresql 8.1 and during vacuum at night time, I am getting the
> following log:
> number of page slots needed (2520048) exceeds max_fsm_pages (356656)
> Do I need to increase max_fsm_pages to 2520048? Does it have any bad affect?

You don't *have* to do it.  The consequences of not doing it are: 1)
your server will not know about all of the pages in your database files
that have empty slots available for use.  2) because of that lack of
knowledge, it may then allocate new pages to hold your data, causing
potentially more bloat and the need for even more FSM pages.  3)
allocating new pages usually costs more than just filling in space on an
existing page, so your system slows down.

If I were you, I'd set the FSM pages to double what your current need
is, run vacuum again, and you should be good for a while. It will,
unfortunately, require a restart of your postgres server.
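
With the numbers from this thread, the suggested sizing is simple arithmetic 
(2520048 is the "page slots needed" figure from the vacuum log message above):

```python
# Double the reported need, per the advice above, to leave headroom
# before the next resize-and-restart is required.
needed = 2520048
suggested = 2 * needed
print(suggested)  # → 5040096, a candidate max_fsm_pages value
```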



Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Andy Colson

On 11/1/2010 12:37 PM, Jonathan Tripathy wrote:

Hi Everyone,

I'm trying to create a server for a database system which will be used
by multiple clients. Of course, table locking is very important. Reading
the PostgreSQL docs, locking occurs on a transaction-by-transaction basis.

In our java code, we are doing this:

//Start Code Block

Connection con = "..."
con.setAutoCommit(false);

//Insert SQL here to lock table

String qry1 = "..."
pst1 = con.prepareStatement(qry1)
//Insert code here to add values to prepared statement pst1
pst1.executeQuery();

String qry2 = "..."
pst2 = con.prepareStatement(qry2)
//Insert code here to add values to prepared statement pst2
pst2.executeQuery();

con.commit();

//End Code Block

My question is, would the above block of code be classed as a single
transaction, and would the locking work correctly?

Thanks

Jonny




Table locking is very bad for concurrent access.  When a table is 
locked, it's one user at a time.


PG usually does not need any locks at all.  As long as you use 
transactions as they were meant to be used (as an atomic operation), 
things usually work really well, with no locking at all.  You could read 
up on MVCC if you were interested.


Without knowing what SQL you are running, I can _totally guarantee_ 
it'll work perfectly with NO table locking.  :-)


-Andy



Re: [GENERAL] JDBC Transactions

2010-11-01 Thread Jonathan Tripathy


On 01/11/10 18:08, Andy Colson wrote:

On 11/1/2010 12:37 PM, Jonathan Tripathy wrote:

Hi Everyone,

I'm trying to create a server for a database system which will be used
by multiple clients. Of course, table locking is very important. Reading
the PostgreSQL docs, locking occurs on a transaction-by-transaction 
basis.


In our java code, we are doing this:

//Start Code Block

Connection con = "..."
con.setAutoCommit(false);

//Insert SQL here to lock table

String qry1 = "..."
pst1 = con.prepareStatement(qry1)
//Insert code here to add values to prepared statement pst1
pst1.executeQuery();

String qry2 = "..."
pst2 = con.prepareStatement(qry2)
//Insert code here to add values to prepared statement pst2
pst2.executeQuery();

con.commit();

//End Code Block

My question is, would the above block of code be classed as a single
transaction, and would the locking work correctly?

Thanks

Jonny




Table locking is very bad for concurrent access.  When a table is 
locked, it's one user at a time.


PG usually does not need any locks at all.  As long as you use 
transactions as they were meant to be used (as an atomic operation), 
things usually work really well, with no locking at all.  You could 
read up on MVCC if you were interested.


Without knowing what SQL you are running, I can _totally guarantee_ 
it'll work perfectly with NO table locking.  :-)


-Andy


Hi Andy,

Thanks for your reply. Would the above code be classed as a single 
transaction then? And if so, could I just simply leave out the line 
which says "//Insert SQL here to lock table"?


Thanks



Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Tom Lane
Bill Moran  writes:
> To clarify my earlier comments, if you're going to use pg_upgrade, you
> probably won't need to downgrade to 8.4.  My comments about putting
> 8.4 back on would have been necessary if you were going to go the old
> dump/restore route.

Note that pg_upgrade depends on having a copy of the old postmaster
executable available (so that it can read the old catalogs).  If you're
using a pre-packaged version then hopefully the packager took care of
that for you, but if you're trying to build from source then it's
something you'll have to deal with.

regards, tom lane



Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Bill Moran
In response to Carlos Mennens :

> On Mon, Nov 1, 2010 at 1:32 PM, Scott Marlowe  wrote:
> > you would do it with 9.0.x installed, and there should be a program in
> > one of the 9.0 packages that has pg_upgrade in it.
> 
> So I have my 8.4.4-6 databases backed up. I don't know if I needed the
> default 'postgres' database dumped but I did that one too just in
> case. I then upgraded the server to 9.0.1-2 and my question is how do
> you create a new database in PostgreSQL 9.0 coming from 8.4.4-6 when
> the server refuses to start. I can't connect to PostgreSQL simply
> because the logs tell me the data is not compatible. Am I missing
> something?

To clarify my earlier comments, if you're going to use pg_upgrade, you
probably won't need to downgrade to 8.4.  My comments about putting
8.4 back on would have been necessary if you were going to go the old
dump/restore route.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/



Re: [GENERAL] Linux x Windows LOCALE/ENCODING compatibility

2010-11-01 Thread Filip Rembiałkowski
2010/11/1 Carlos Henrique Reimer 
>
> Hi,
>
> I currently have my PostgreSQL server running in a windows box and now we're 
> migrating it to a Linux operational system.
>
> Current windows configuration:
> pg_controldata shows the LC_COLLATE and LC_CTYPE are Portuguese_Brasil.1252
> psql \l command shows we have databases with encoding WIN1252 and UTF8
>
> New Linux box:
> Which --locale and --encoding options should I use for the INITDB command to 
> have a compatible locale and encoding cluster?
> Which --encoding option should I use in the CREATEDB command to have a 
> compatible encoding database?

from your description I assume you have 8.3 on Windows,
but you did not tell us which PostgreSQL version you are planning to
deploy on Linux. I'd recommend 8.4.

the main difference is that on Windows, UTF-8 encoding can be used with
any locale; on Unix, it cannot.

generally, it is safe to do ``initdb --locale=pt_BR.UTF-8'' and create
fresh databases with default settings.

then, use Linux pg_dump and psql to clone Windows databases. like this:
pg_dump -h windowsbox -p 5432 -U postgres DBNAME | psql DBNAME

UTF-8 is compatible with all client encodings.
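
That compatibility is the ordinary character-set round trip. A small Python 
sketch (illustrative only, not PostgreSQL code) of the conversion the server 
performs between a WIN1252 client and a UTF-8 database:

```python
# Sample Brazilian Portuguese text, chosen only for its accented
# characters; both encodings can represent it, with different bytes.
text = "situação"
as_win1252 = text.encode("cp1252")  # bytes as a WIN1252 client sends them
as_utf8 = text.encode("utf-8")      # bytes as a UTF-8 database stores them

# Different byte sequences, same characters after decoding.
print(as_win1252 == as_utf8)                                   # → False
print(as_win1252.decode("cp1252") == as_utf8.decode("utf-8"))  # → True
```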

you will get compatible behaviour for your existing apps with
SET client_encoding TO 'whatever'; /* this can be entered as pre-SQL
on every client connection */

see http://www.postgresql.org/docs/8.4/static/multibyte.html,
22.2.3. Automatic Character Set Conversion Between Server and Client


HTH

--
Filip Rembiałkowski
JID,mailto:filip.rembialkow...@gmail.com
http://filip.rembialkowski.net/



[GENERAL] Why so many xlogs?

2010-11-01 Thread hubert depesz lubaczewski
Hi
have a strange situation - too many xlog files.

PostgreSQL 8.3.11 on i386-pc-solaris2.10, compiled by cc -Xa

config:
# select name, setting from pg_settings  where name  ~ 'checkpoint|wal' order 
by 1;
             name             |    setting
------------------------------+---------------
 checkpoint_completion_target | 0.9
 checkpoint_segments          | 100
 checkpoint_timeout           | 900
 checkpoint_warning           | 30
 log_checkpoints              | on
 wal_buffers                  | 2048
 wal_sync_method              | open_datasync
 wal_writer_delay             | 200
(8 rows)

as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
checkpoint_segments ).
And in our case - it's more.

Added a cronjob to log data about number of segments, current segment
name, number of segments in pg_xlog that are before current, and after
current. The script is:


#!/usr/bin/bash
LOGFILE=/home/postgres/logs/check_pg_xlog.out

LS_OUTPUT=$( ls -l /pgdata/main/pg_xlog | egrep -v 
"xlogtemp|backup|status|total" | sort -k9 )
FIRST_SEGMENT_LINE=$( echo "$LS_OUTPUT" | head -1 )
LAST_SEGMENT_LINE=$( echo "$LS_OUTPUT" | tail -1 )

FIRST_SEGMENT=$( echo "$FIRST_SEGMENT_LINE" | awk '{print $NF}' )
LAST_SEGMENT=$( echo "$LAST_SEGMENT_LINE" | awk '{print $NF}' )
FIRST_SEGMENT_NUM=$( echo "$FIRST_SEGMENT" | awk '{print $NF}' | cut -b 
9-16,23-24 )
LAST_SEGMENT_NUM=$( echo "$LAST_SEGMENT" | awk '{print $NF}' | cut -b 
9-16,23-24 )

SEGMENT_COUNT=$( printf $'ibase=16\n1 + %s - %s\n' $LAST_SEGMENT_NUM 
$FIRST_SEGMENT_NUM | bc )
CURRENT_WAL_FILE=$( /opt/pgsql8311/bin/psql -U postgres -qAtX -c 'select 
file_name from pg_xlogfile_name_offset( pg_current_xlog_location())' )
CURRENT_WAL_FILE_NUM=$( echo "$CURRENT_WAL_FILE" | cut -b 9-16,23-24 )

SEGMENTS_BEFORE_CURRENT=$( printf $'ibase=16\n%s - %s\n' $CURRENT_WAL_FILE_NUM 
$FIRST_SEGMENT_NUM | bc )
SEGMENTS_AFTER_CURRENT=$( printf $'ibase=16\n%s - %s\n' $LAST_SEGMENT_NUM 
$CURRENT_WAL_FILE_NUM | bc )

CURRENT_SEGMENT_LINE=$( echo "$LS_OUTPUT" | grep "$CURRENT_WAL_FILE" )
(
        date
        printf $'First segment           : %s\n' "$FIRST_SEGMENT_LINE"
        printf $'Current segment         : %s\n' "$CURRENT_SEGMENT_LINE"
        printf $'Last segment            : %s\n' "$LAST_SEGMENT_LINE"
        printf $'Segment count           : %s\n' "$SEGMENT_COUNT"
        printf $'Current wal segment     : %s\n' "$CURRENT_WAL_FILE"
        printf $'Segments before current : %s\n' "$SEGMENTS_BEFORE_CURRENT"
        printf $'Segments after current  : %s\n' "$SEGMENTS_AFTER_CURRENT"
        printf $'Last checkpoint time    : %s\n' "$( 
/opt/pgsql8311/bin/pg_controldata /pgdata/main | egrep '^Time of latest 
checkpoint:' | sed 's/^[^:]*: *//' )"
        /opt/pgsql8311/bin/psql -U postgres -c "select name, setting from 
pg_settings where name = 
any('{checkpoint_timeout,checkpoint_segments,archive_mode,archive_command}')"
) >> $LOGFILE



sample output looks like this:

| Mon Nov  1 13:46:00 EDT 2010
| First segment           : -rw---   1 postgres postgres 16777216 Nov  1 
13:16 000137670053
| Current segment         : -rw---   1 postgres postgres 16777216 Nov  1 
13:45 000137670064
| Last segment            : -rw---   1 postgres postgres 16777216 Nov  1 
13:01 000137680029
| Segment count           : 215
| Current wal segment     : 000137670064
| Segments before current : 17
| Segments after current  : 197
| Last checkpoint time    : Mon Nov 01 13:31:29 2010
|         name         |    setting
| ---------------------+---------------
|  archive_command     | /usr/bin/true
|  archive_mode        | on
|  checkpoint_segments | 100
|  checkpoint_timeout  | 900
| (4 rows)

As you can see, now we have 215 segments, with 17 that represent WAL before 
the current location and 197 that are after the current segment!
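
The counts in that sample output can be reproduced from the hex fragments the 
script extracts (`cut -b 9-16,23-24` glues the 8-digit log part of a WAL file 
name to the last two hex digits of its segment part), e.g.:

```python
# The glued hex fragments for the three files in the sample output
# above; treating each as one hex number lets segments be counted by
# plain subtraction, which is exactly what the bc calls in the script do.
first   = int("0000376753", 16)  # oldest file in pg_xlog
current = int("0000376764", 16)  # current WAL segment
last    = int("0000376829", 16)  # newest file in pg_xlog

print(last - first + 1)  # segment count → 215
print(current - first)   # segments before current → 17
print(last - current)    # segments after current → 197
```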

Here you can see a graph which plots the number of WAL segments over the last week: 
http://depesz.com/various/bad-wal.jpg

it virtually never goes below 215, and it spikes to 270-300.

In here: http://www.depesz.com/various/bad-wal.log.gz is the log from my test 
script since the 20th of October.

Any ideas why the number of segments is higher than expected?

Just so that I am clear: I do not want to lower it by changing
checkpoint_segments. I'm looking for information/enlightenment about why
it works the way it works, and what could possibly be wrong.

Best regards,

depesz

-- 
Linkedin: http://www.linkedin.com/in/depesz  /  blog: http://www.depesz.com/
jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007



[GENERAL] JDBC Transactions

2010-11-01 Thread Jonathan Tripathy

Hi Everyone,

I'm trying to create a server for a database system which will be used 
by multiple clients. Of course, table locking is very important. Reading 
the PostgreSQL docs, locking occurs on a transaction-by-transaction basis.


In our java code, we are doing this:

//Start Code Block

Connection con = "..."
con.setAutoCommit(false);

//Insert SQL here to lock table

String qry1 = "..."
pst1 = con.prepareStatement(qry1)
//Insert code here to add values to prepared statement pst1
pst1.executeQuery();

String qry2 = "..."
pst2 = con.prepareStatement(qry2)
//Insert code here to add values to prepared statement pst2
pst2.executeQuery();

con.commit();

//End Code Block

My question is, would the above block of code be classed as a single 
transaction, and would the locking work correctly?


Thanks

Jonny




[GENERAL] Connection Pool

2010-11-01 Thread Jonathan Tripathy

Hi Everyone,

I'm trying to work with connection pools. The example I'm looking at is 
located here:


http://www.developer.com/img/2009/11/Listing1_ConnectionPoolClass.html

You will notice that the getConnectionFromPool method does not implement 
any blocking, and only returns null.


I would like my application to try and get a connection from the pool, 
and if there are none free, wait until either one is free or time is up.


What is the best way to do this? Would creating a wrapper method work? 
This wrapper method would check for null, and if so, execute a while 
loop until a connection is returned. Also, would this wrapper method 
need to be synchronised (remembering that multiple threads may be trying 
to get a connection)?
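
A wrapper with a busy while loop would work, but a blocking queue gives the 
wait-until-free-or-timeout behaviour and the thread synchronisation for free. 
A sketch in Python, with queue.Queue standing in for Java's BlockingQueue (the 
class and method names are made up for illustration; the pooled "connections" 
are plain strings):

```python
import queue

# A pool whose borrow() blocks until a connection is free or the
# timeout expires, instead of immediately returning null/None.
# queue.Queue is thread-safe, so the wrapper needs no extra locking
# even with many threads borrowing concurrently.
class BlockingPool:
    def __init__(self, connections):
        self._q = queue.Queue()
        for conn in connections:
            self._q.put(conn)

    def borrow(self, timeout=5.0):
        try:
            return self._q.get(timeout=timeout)  # block until free or timeout
        except queue.Empty:
            return None  # pool exhausted; caller decides what to do

    def give_back(self, conn):
        self._q.put(conn)

pool = BlockingPool(["conn-1", "conn-2"])
a = pool.borrow()
b = pool.borrow()
exhausted = pool.borrow(timeout=0.1)  # nothing free within the timeout
pool.give_back(a)
reused = pool.borrow(timeout=0.1)     # the returned connection comes back
print(exhausted, reused)  # → None conn-1
```

In Java, the analogous approach would be to back the pool with a 
java.util.concurrent.BlockingQueue and call poll(timeout, unit); the queue 
itself then provides the synchronisation the wrapper would otherwise need.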


Thanks

Jonny



Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Carlos Mennens
On Mon, Nov 1, 2010 at 1:32 PM, Scott Marlowe  wrote:
> you would do it with 9.0.x installed, and there should be a program in
> one of the 9.0 packages that has pg_upgrade in it.

So I have my 8.4.4-6 databases backed up. I don't know if I needed the
default 'postgres' database dumped but I did that one too just in
case. I then upgraded the server to 9.0.1-2 and my question is how do
you create a new database in PostgreSQL 9.0 coming from 8.4.4-6 when
the server refuses to start. I can't connect to PostgreSQL simply
because the logs tell me the data is not compatible. Am I missing
something?



Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Scott Marlowe
On Mon, Nov 1, 2010 at 11:06 AM, Carlos Mennens
 wrote:
> On Mon, Nov 1, 2010 at 12:52 PM, Richard Broersma
>  wrote:
>
>> oops: http://www.postgresql.org/docs/9.0/interactive/pgupgrade.html
>
> Thanks for the URL. I will try this but I am confused how to proceed?
> Can I attempt this with PostgreSQL 9.0.1-2 server installed and the
> data is still 8.4 or do I need to find a way to uninstall 9.0.1-2 and
> reinstall the 8.4 server?

you would do it with 9.0.x installed, and there should be a program in
one of the 9.0 packages that has pg_upgrade in it.



Re: [GENERAL] async queries in Perl and poll()/select() loop - how to make them work together?

2010-11-01 Thread Andy Colson

On 11/1/2010 11:58 AM, Alexander Farber wrote:

Hello Andy and others,

On Mon, Nov 1, 2010 at 3:33 PM, Andy Colson  wrote:

On 11/1/2010 4:29 AM, Alexander Farber wrote:

I have a small multiplayer game, a non-forking daemon
reading/writing to sockets and running in a IO::Poll loop.

I.e. I would like to "fire and forget" queries.

But unfortunately I get the error:
DBD::Pg::st execute failed: Cannot execute
until previous async query has finished
even though I'm not using PG_OLDQUERY_WAIT



I believe one database connection can have one async query going at a time.


why are there 3 constants
http://search.cpan.org/dist/DBD-Pg/Pg.pm#Asynchronous_Constants
then? They suggest you can fire a query and forget


I don't see anywhere in the docs that connect (or connect_cached) supports
PG_ASYNC.


True, I've removed it (the problem still persists).


Each iteration of your loop is blowing away the previous values, which
should cause problems.  I assume this is just test code?  Is your real code
really going to connect 10 times per person?  You won't be able to support
very many concurrent users that way.  The code above might work if you
switched it to arrays (@dbh and @sth).


No, I just need one connection, because I have
1 process (without any forked processes or threads),
which loops in a poll() loop.


Async queries give you the ability to fire one query, let the db work on it
while you do something else, and then come back to it.  You need to think
about your layout (cuz I'm betting your example code does not reflect what
you really want to do).

Even with async queries, you eventually have to call $dbh->pg_result, so it's
not going to be fire and forget.  To really do fire and forget, and totally
take the stats processing away from game play processing, I'd suggest an
event queue (or rpc), like zeromq, PGQ or gearman.


Thanks, I'll look at it, or maybe I'll fork 1 more process
and open a pipe to it (then I can poll() it too).

Regards
Alex



Consider the Pg architecture:  On the server a postmaster runs, 
listening for connections.  On the client, you connect to the server. 
The postmaster will spin up a child process to handle the new 
connection.  One postmaster child processes one client connection, and 
it can only do one query at a time.


So:
Postmaster
 |
 |--> child 1
 |--> child 2

Each child runs one query at a time.  Your client program has two options:
1) fire off a query and wait for the response and collect it.
2) fire off a query, do something else for a bit, collect the response.


> why are there 3 constants
> http://search.cpan.org/dist/DBD-Pg/Pg.pm#Asynchronous_Constants
> then? They suggest you can fire a query and forget

I'm not sure what you mean by fire and forget.  To me, I'd say no because 
you have to collect the results at some point via $dbh->pg_result. 
(Even if you fire an update or insert, I think you still have to "finish 
off the process" via $dbh->pg_result.)


I don't think you can start a second query until you have called 
$dbh->pg_result.  These constants just give you neat ways of waiting... 
it's still just one at a time.


Our definitions of fire and forget might be different, and that's ok, but 
in your example code, it looked to me like you wanted to run 10 
simultaneous queries asynchronously, and that cannot be done without 10 
separate database connections.  One connection can only run one query at 
a time.


You still have the option, however, of using async queries in your game, 
for example:


code to calc stats...
start query to update db stats
code to process game play, etc
finish off the db stats query
final bit of game code and respond to player... etc
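
In DBD::Pg terms, that interleaving might look roughly like the sketch below. This is only an illustration: the table and column names are made up, and $dbh is assumed to be an already-open connection.

```perl
use DBD::Pg qw(:async);

# fire the stats query asynchronously
my $sth = $dbh->prepare(
    'UPDATE player_stats SET games = games + 1 WHERE id = ?',
    { pg_async => PG_ASYNC });
$sth->execute($player_id);

# ... process game play here while the server works on the update ...

# finish off the async query before issuing the next one
$dbh->pg_result;
```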

-Andy



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is it safe...( Upgrade questions)

2010-11-01 Thread Jerry LeVan
I am trying to screw up the courage to update my systems (2 Macs, 2 Fedora)
from the last 8.4 PostgreSQL updates to the latest 9.0.1 versions.

I have a couple of concerns...

1) I have developed my own gui apps using tcl/tk and wxpython for accessing and
   maintaining my db's. 

   I use psycopg and pgtcl interfaces to libpq. 

   Do these packages work ok with version 9?

2) The fedora postgresql rpms at pgrpms.org seem to have introduced a new
   naming convention and a new directory layout. It is not clear to me what
   the impact is going to be with respect to psycopg and pgtcl.

   I am guessing that I will have to rebuild these packages and will not
   be able to use the standard Fedora releases.


I guess a third option is to wait for Fedora 14...

Can anyone share their experiences with installing on Fedora 13 (in particular
the problem with the interfaces to libpq)?

Thanks

Jerry
-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Carlos Mennens
On Mon, Nov 1, 2010 at 12:52 PM, Richard Broersma
 wrote:

> oops: http://www.postgresql.org/docs/9.0/interactive/pgupgrade.html

Thanks for the URL. I will try this, but I am confused about how to proceed.
Can I attempt this with the PostgreSQL 9.0.1-2 server installed while the
data is still 8.4, or do I need to find a way to uninstall 9.0.1-2 and
reinstall the 8.4 server?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] async queries in Perl and poll()/select() loop - how to make them work together?

2010-11-01 Thread Alexander Farber
Hello Andy and others,

On Mon, Nov 1, 2010 at 3:33 PM, Andy Colson  wrote:
> On 11/1/2010 4:29 AM, Alexander Farber wrote:
>> I have a small multiplayer game, a non-forking daemon
>> reading/writing to sockets and running in a IO::Poll loop.
>>
>> I.e. I would like to "fire and forget" queries.
>>
>> But unfortunately I get the error:
>> DBD::Pg::st execute failed: Cannot execute
>> until previous async query has finished
>> even though I'm not using PG_OLDQUERY_WAIT

> I believe one database connection can have one async query going at a time.

why are there 3 constants
http://search.cpan.org/dist/DBD-Pg/Pg.pm#Asynchronous_Constants
then? They suggest you can fire a query and forget

> I don't see anywhere in the docs that connect (or connect_cached) supports
> PG_ASYNC.

True, I've removed it (the problem still persists).

> Each iteration of your loop is blowing away the previous values, which
> should cause problems.  I assume this is just test code?  Is your real code
> really going to connect 10 times per person?  You won't be able to support
> very many concurrent users that way.  The code above might work if you
> switched to arrays (@dbh and @sth).

No, I just need one connection, because I have
1 process (without any forked processes or threads),
which loops in a poll() loop.

> Async queries give you the ability to fire one query, let the db work on it
> while you do something else, and then come back to it.  You need to think
> about your layout (cuz I'm betting your example code does not reflect what
> you really want to do).
>
> Even with async queries, you eventually have to call $dbh->pg_result, so it's
> not going to be fire and forget.  To really do fire and forget, and totally
> take the stats processing away from game play processing, I'd suggest an
> event queue (or rpc), like zeromq, PGQ or gearman.

Thanks, I'll look at it, or maybe I'll fork 1 more process
and open a pipe to it (then I can poll() it too).

Regards
Alex

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Richard Broersma
On Mon, Nov 1, 2010 at 9:52 AM, Richard Broersma
 wrote:
> On Mon, Nov 1, 2010 at 9:47 AM, Carlos Mennens  
> wrote:
>
>> Sadly I blindly upgraded my database w/o doing a dump / restore so can
>> anyone tell me if I am dead in the water or is there a way I can
>> recover from this error on my part?
>
> No, but you'll want to read this document to proceed.

oops: http://www.postgresql.org/docs/9.0/interactive/pgupgrade.html


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Richard Broersma
On Mon, Nov 1, 2010 at 9:47 AM, Carlos Mennens  wrote:

> Sadly I blindly upgraded my database w/o doing a dump / restore so can
> anyone tell me if I am dead in the water or is there a way I can
> recover from this error on my part?

No, but you'll want to read this document to proceed.


-- 
Regards,
Richard Broersma Jr.

Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Bill Moran
In response to Carlos Mennens :

> On Mon, Nov 1, 2010 at 12:45 PM, Bill Moran  wrote:
> > I can't speak for Arch Linux' upgrade setup, but going from 8.4 -> 9.0
> > requires that the data directory either be dumped/recreated, or ran
> > through the new upgrade process (which (as yet) I have no experience
> > with).
> >
> > If the Arch Linux stuff doesn't do that automatically, then you'll have
> > to do it manually.
> 
> I just read:
> 
> http://www.postgresql.org/docs/9.0/static/release-9-0
> 
> Sadly I blindly upgraded my database w/o doing a dump / restore so can
> anyone tell me if I am dead in the water or is there a way I can
> recover from this error on my part?

It's unlikely that anything is wrong with the data.  If you downgrade
that machine back to 8.4, you should be OK.  Or you could copy the
data directory to another machine that has 8.4 on it (as long as the
hardware architecture is the same).

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Carlos Mennens
On Mon, Nov 1, 2010 at 12:45 PM, Bill Moran  wrote:
> I can't speak for Arch Linux' upgrade setup, but going from 8.4 -> 9.0
> requires that the data directory either be dumped/recreated, or ran
> through the new upgrade process (which (as yet) I have no experience
> with).
>
> If the Arch Linux stuff doesn't do that automatically, then you'll have
> to do it manually.

I just read:

http://www.postgresql.org/docs/9.0/static/release-9-0

Sadly, I blindly upgraded my database w/o doing a dump/restore, so can
anyone tell me if I am dead in the water, or is there a way I can
recover from this error on my part?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Scott Marlowe
On Mon, Nov 1, 2010 at 10:36 AM, Carlos Mennens
 wrote:
> I did an upgrade on my database server this past weekend and the
> database fails to start. I checked /var/log/postgresql and found the
> reason:
>
> [r...@slave ~]# ps aux | grep postgres
> root      5189  0.0  0.0   8128   956 pts/0    S+   12:28   0:00 grep postgres
>
> [r...@slave ~]# /etc/rc.d/postgresql start
> :: Starting PostgreSQL
>
>                  [BUSY] server starting
>
>
>                  [DONE]
> [r...@slave ~]# ps aux | grep postgres
> root      5205  0.0  0.0   8128   960 pts/0    R+   12:28   0:00 grep postgres
>
> [r...@slave ~]# tail -n 50 /var/log/postgresql.log
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
> which is not compatible with this version 9.0.1.
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
> which is not compatible with this version 9.0.1.
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
> which is not compatible with this version 9.0.1.
>
> Does anyone know if this is an issue with PostgreSQL or with the way
> Arch Linux packages the upgrade?

It's always been like that.  There should be a pg_migrator script or
something like that to convert the 8.4 db to 9.0.
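
For reference, the 9.0 tool ships in contrib as pg_upgrade (pg_migrator was its 8.4-era name). A rough sketch of an invocation follows; every path here is a placeholder for your own layout, both servers must be stopped, and the new cluster must already be initdb'ed:

```sh
# run as the postgres user, with both old and new binaries installed
pg_upgrade \
  --old-datadir=/path/to/data-8.4 \
  --new-datadir=/path/to/data-9.0 \
  --old-bindir=/path/to/8.4/bin \
  --new-bindir=/path/to/9.0/bin
```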

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Bill Moran
In response to Carlos Mennens :

> I did an upgrade on my database server this past weekend and the
> database fails to start. I checked /var/log/postgresql and found the
> reason:
> 
> [r...@slave ~]# ps aux | grep postgres
> root  5189  0.0  0.0   8128   956 pts/0S+   12:28   0:00 grep postgres
> 
> [r...@slave ~]# /etc/rc.d/postgresql start
> :: Starting PostgreSQL
> 
>   [BUSY] server starting
> 
> 
>   [DONE]
> [r...@slave ~]# ps aux | grep postgres
> root  5205  0.0  0.0   8128   960 pts/0R+   12:28   0:00 grep postgres
> 
> [r...@slave ~]# tail -n 50 /var/log/postgresql.log
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
> which is not compatible with this version 9.0.1.
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
> which is not compatible with this version 9.0.1.
> FATAL:  database files are incompatible with server
> DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
> which is not compatible with this version 9.0.1.
> 
> Does anyone know if this is an issue with PostgreSQL or with the way
> Arch Linux packages the upgrade?

I can't speak for Arch Linux' upgrade setup, but going from 8.4 -> 9.0
requires that the data directory either be dumped/recreated, or ran
through the new upgrade process (which (as yet) I have no experience
with).

If the Arch Linux stuff doesn't do that automatically, then you'll have
to do it manually.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-01 Thread Carlos Mennens
I did an upgrade on my database server this past weekend and the
database fails to start. I checked /var/log/postgresql and found the
reason:

[r...@slave ~]# ps aux | grep postgres
root  5189  0.0  0.0   8128   956 pts/0S+   12:28   0:00 grep postgres

[r...@slave ~]# /etc/rc.d/postgresql start
:: Starting PostgreSQL

  [BUSY] server starting


  [DONE]
[r...@slave ~]# ps aux | grep postgres
root  5205  0.0  0.0   8128   960 pts/0R+   12:28   0:00 grep postgres

[r...@slave ~]# tail -n 50 /var/log/postgresql.log
FATAL:  database files are incompatible with server
DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
which is not compatible with this version 9.0.1.
FATAL:  database files are incompatible with server
DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
which is not compatible with this version 9.0.1.
FATAL:  database files are incompatible with server
DETAIL:  The data directory was initialized by PostgreSQL version 8.4,
which is not compatible with this version 9.0.1.

Does anyone know if this is an issue with PostgreSQL or with the way
Arch Linux packages the upgrade?

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Is this a known feature of 8.1 SSL connection?

2010-11-01 Thread zhong ming wu
I don't recall this being an issue with 8.4, which I am also using.

Say your 8.1 server has SSL on.  Even though pg_hba.conf has

host or hostnossl  md5

either the server or 8.1 psql insists that you have .postgresql/postgresql.*

Does that make sense to you?

Note: no "cert" in pg_hba.conf


Re: [GENERAL] Problem with Crosstab (Concatenate Problem)

2010-11-01 Thread Joe Conway
On 11/01/2010 06:24 AM, Stefan Schwarzer wrote:
> I need to convert the integer values for the years into column names, i.e. 
> "1965" into "y_1965". How do I achieve this then?

Try something like:

create table foo (
name text,
year_start int,
value float8);

insert into foo values('a',2010,1.23),('b',2011,2.34);

SELECT * FROM
crosstab(
'SELECT name, year_start, value FROM foo ORDER BY 1',
'SELECT DISTINCT year_start FROM foo'
)
AS ct(name varchar, y_2010 float8, y_2011 float8);

 name | y_2010 | y_2011
--++
 a||   1.23
 b|   2.34 |
(2 rows)


HTH,

Joe

-- 
Joe Conway
credativ LLC: http://www.credativ.us
Linux, PostgreSQL, and general Open Source
Training, Service, Consulting, & 24x7 Support



signature.asc
Description: OpenPGP digital signature


Re: [GENERAL] async queries in Perl and poll()/select() loop - how to make them work together?

2010-11-01 Thread Andy Colson

On 11/1/2010 4:29 AM, Alexander Farber wrote:

Hello,

I'm using CentOS 5.5 Linux with stock perl v5.8.8
and have installed DBD-Pg-2.17.1 via CPAN shell
and I'm using postgresql-server-8.4.5-1PGDG.rhel5.

I've prepared a simple test case demonstrating
my problem - it is listed at the bottom. It works ok,
when I remove {pg_async =>  PG_ASYNC}.

I have a small multiplayer game, a non-forking daemon
reading/writing to sockets and running in a IO::Poll loop.

I'd like to add player statistics to it, so I was hoping
that I could call the simple INSERT/UPDATE
statements asynchronously from the loop and
I don't need to fetch any results of those queries,
because displaying statistics will be done by
web scripts, not by my game daemon.

I.e. I would like to "fire and forget" queries.

But unfortunately I get the error:
DBD::Pg::st execute failed: Cannot execute
until previous async query has finished
even though I'm not using PG_OLDQUERY_WAIT

Does anybody know what is wrong and
how would you use async queries with
poll()/select() loops anyway?

I can't even call "$dbh->pg_result if $sth->pg_ready"
on every loop iteration, because I can have
several queries running at that moment, and
I don't want to iterate through a list of my $sth's...
This defeats my goal of quick poll()-looping.

Regards
Alex

P.S. Here is my test case and it is also listed at
http://stackoverflow.com/questions/4065758/async-queries-with-dbdpg-fail-with-cannot-execute-until-previous-async-query-h

#!/usr/bin/perl -w

use strict;
use DBI;
use DBD::Pg qw(:async);

use constant DBNAME =>  'snake';
use constant DBUSER =>  'snake';
use constant DBPASS =>  'snake';

use constant SQL_CREATE_TABLES =>  q{
 /*
 create table pref_users (
 id varchar(32) primary key,
 first_name varchar(32),
 last_name varchar(32),
 female boolean,
 avatar varchar(128),
 city varchar(32),
 lat real check (-90 <= lat and lat <= 90),
 lng real check (-90 <= lng and lng <= 90),
 last_login timestamp default current_timestamp,
 last_ip inet,
 medals smallint check (medals > 0)
 );

 create table pref_rate (
 obj varchar(32) references pref_users(id),
 subj varchar(32) references pref_users(id),
 good boolean,
 fair boolean,
 nice boolean,
 about varchar(256),
 last_rated timestamp default current_timestamp
 );

 create table pref_money (
 id varchar(32) references pref_users,
 yw char(7) default to_char(current_timestamp, '-WW'),
 money real
 );
 create index pref_money_yw_index on pref_money(yw);

 create table pref_pass (
 id varchar(32) references pref_users
 );

 create table pref_misere (
 id varchar(32) references pref_users
 );
 */

 create or replace function pref_update_users(_id varchar,
 _first_name varchar, _last_name varchar, _female boolean,
 _avatar varchar, _city varchar, _last_ip inet) returns
void as $BODY$
 begin

 update pref_users set
 first_name = _first_name,
 last_name = _last_name,
 female = _female,
 avatar = _avatar,
 city = _city,
 last_ip = _last_ip
 where id = _id;

 if not found then
 insert into pref_users(id, first_name,
 last_name, female, avatar, city, last_ip)
 values (_id, _first_name, _last_name,
 _female, _avatar, _city, _last_ip);
 end if;
 end;
 $BODY$ language plpgsql;
};

eval {
 my $dbh = DBI->connect_cached('dbi:Pg:dbname=' .
 DBNAME, DBUSER, DBPASS, {
 AutoCommit =>  1,
 PrintWarn =>  1,
 PrintError =>  1,
 RaiseError =>  1,
 FetchHashKeyName =>  'NAME_lc',
 pg_enable_utf8 =>  1
 }, {pg_async =>  PG_ASYNC});

 $dbh->do(SQL_CREATE_TABLES, {pg_async =>  PG_ASYNC});
};
warn $@ if $@;

for my $i (1..10) {
 eval {
 my $dbh = DBI->connect_cached('dbi:Pg:dbname=' .
 DBNAME, DBUSER, DBPASS, {
 AutoCommit =>  1,
 PrintWarn =>  1,
 PrintError =>  1,
 RaiseError =>  1,
 FetchHashKeyName =>  'NAME_lc',
 pg_enable_utf8 =>  1
 }, {pg_async =>  PG_ASYNC});

 #$dbh->pg_result;

 my $sth = $dbh->prepare_cached(

Re: [GENERAL] PHP PDO->bindValue() vs row execute speed difference

2010-11-01 Thread Cédric Villemain
2010/11/1 Georgi Ivanov :
> Hi,
> I have following situation:
> $q = 'select * from tbl1 where id = :id';
> $stmt = $dbh->prepare($q);
> $stmt->bindValue(':id', $id , PDO::PARAM_INT);
> $stmt->execute();
> //1000 ms
> and
>
> $q1 = ' select * from tbl1 where id = 100 ';
> $stmt = $dbh->prepare($q);
> //NO binding here !
> $stmt->execute();
> //2 ms
>
> The queries are a bit more complex, but this is enough to get the idea.
>
> So the first query runs for about 1000 ms
>
> The second query( w/o binding) runs for about 2 ms.
> If I'm correct, the first query is interpreted as : select * from tbl1 where
> id = (INT )
> and I don't get good execution plan.
>
> The second one is fast, because the DB sees the literal 100 as the value for ID
> and makes a better execution plan.
> Am I correct in my thoughts ?

Yes. But usually for a PK there is no trouble and the planner should use the index.

You can give it a try with psql: 'prepare foo ... ;  explain execute
foo(100);' vs 'explain select ... where id = 100'

> Is there anything I can do in tuning the DB or just to skip/rewrite PHP PDO
> ?

Once you have the explain output for the named prepared statement, you'll know.
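
A concrete version of that comparison in psql, assuming the tbl1/id example from the original post, might look like:

```sql
-- plan for the parameterised (prepared) form
PREPARE foo (int) AS SELECT * FROM tbl1 WHERE id = $1;
EXPLAIN EXECUTE foo(100);

-- plan for the literal form, for comparison
EXPLAIN SELECT * FROM tbl1 WHERE id = 100;
```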

-- 
Cédric Villemain               2ndQuadrant
http://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pgAdmin 3 index display problem

2010-11-01 Thread Eduardas Tcpa
Hello, I'm running into a strange problem with pgAdmin 3.
As we all know, PostgreSQL enforces index creation on primary keys.
The problem is that when I create a table in pgAdmin's SQL editor,
the result window reports success and a note that an index will be
created on the table's primary key. But when I browse the table
structure in my public schema, the index section of the newly created
table shows no indexes until I create one explicitly. So does it
create an index? And is it a pgAdmin bug that the auto-created index
is not displayed, or does it work that way by design?

PostgreSQL version: 9.0.1


Re: [GENERAL] Problem with Crosstab (Concatenate Problem)

2010-11-01 Thread Stefan Schwarzer
>> For one of the countries, I have a value for 2007, but not for 1960. 
>> When using only the inner query, than I see one line: Andorra - 2007
>> - 539 But when running the whole SQL, the value for year 2007 gets
>> allocated to the year 1960. The table looks as follows:
>> 
>> name |y_1960|   y_2007
>> Andorra  |   539|   NULL
> 
> 
> That is documented behavior. See:
>  http://www.postgresql.org/docs/8.4/interactive/tablefunc.html
> 
> You probably want the other form of crosstab
> 
>> 
> F.33.1.4. crosstab(text, text)

Thanks a lot for the help. Indeed, that should be the one to do it.
Unfortunately, I ran into an error message:

I have this query:

SELECT * FROM 
crosstab( 
'SELECT 
c.name AS name, 
d.year_start AS year, 
d.value AS value 
FROM 
co2_total_cdiac AS d 
RIGHT JOIN 
countries_view AS c ON c.id = d.id_country 
WHERE 
(d.year_start = 1960 OR d.year_start = 1965 OR 
d.year_start = 2003 OR d.year_start = 2007 ) 
ORDER BY 1', 

'SELECT 
DISTINCT ''y_'' || year_start AS year 
FROM 
co2_total_cdiac AS d 
WHERE 
(d.year_start = 1960 OR d.year_start = 1965 OR 
d.year_start = 2003 OR d.year_start = 2007 ) 
ORDER BY 1'
) 
AS ct(name varchar, y_1960 numeric, y_1965 numeric, y_2003 numeric, 
y_2007 numeric)


Now, I get an error message:

ERROR:  invalid input syntax for integer: "SELECT 
DISTINCT 'y_' || year_start AS year 
FROM 
co2_total_cdiac AS d 
WHERE 
(d.year_start = 1960 OR d.year_start = 1965 OR 
d.year_start = 2003 OR d.year_start = 2007 ) 
ORDER BY 1"
LINE 15:   'SELECT 
   ^


I need to convert the integer values for the years into column names, i.e. 
"1965" into "y_1965". How do I achieve this then?

Thanks for any help!

Stef


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PHP PDO->bindValue() vs row execute speed difference

2010-11-01 Thread Georgi Ivanov
Hi,
I have following situation:
$q = 'select * from tbl1 where id = :id';
$stmt = $dbh->prepare($q);
$stmt->bindValue(':id', $id , PDO::PARAM_INT);
$stmt->execute();
//1000 ms
and

$q1 = ' select * from tbl1 where id = 100 ';
$stmt = $dbh->prepare($q);
//NO binding here !
$stmt->execute();
//2 ms

The queries are a bit more complex, but this is enough to get the idea.

So the first query runs for about 1000 ms

The second query (w/o binding) runs for about 2 ms.
If I'm correct, the first query is interpreted as: select * from tbl1 where
id = (INT )
and I don't get a good execution plan.

The second one is fast, because the DB sees the literal 100 as the value for ID
and makes a better execution plan.
Am I correct in my thoughts?
Is there anything I can do by tuning the DB, or should I just skip/rewrite PHP
PDO?


Re: [GENERAL] pgAdmin 3 index display problem

2010-11-01 Thread Guillaume Lelarge
Le 01/11/2010 06:41, Eduardas Tcpa a écrit :
>>
>> Hello I'm getting into strange problem with pgAdmin 3.
>> As we all know, PostgreSQL enforces index creation on primary key's.
>> The problem with this is the fact, that when i create a table in pgAdmin's
>> sql editor, in result window i'm getting success and a note that an index
>> will be created on tables pripamary key - problem, when I am browsing
>> through table structure in my public schema, under newly created  table in
>> index section there are no indexes until I create one explicitly.So does it
>> create an index?and it's pgAdmin's bug not displaying the auto created
>> index, or it works in such way - not displaying the auto index?
>>
>> PostgreSQL version: 9.0.1
>>
> 

pgAdmin displays the primary key in the Constraints node. There is no
reason to display the index in the Indexes node as you won't be able to
change/remove the index.

The same applies to unique keys.

BTW, I see more and more questions here about pgAdmin. You should
probably ask them on pgadmin-support
(http://archives.postgresql.org/pgadmin-support/).


-- 
Guillaume
 http://www.postgresql.fr
 http://dalibo.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pgAdmin 3 index display problem

2010-11-01 Thread Thom Brown
On 1 November 2010 13:41, Eduardas Tcpa  wrote:

> Hello I'm getting into strange problem with pgAdmin 3.
>> As we all know, PostgreSQL enforces index creation on primary key's.
>> The problem with this is the fact, that when i create a table in pgAdmin's
>> sql editor, in result window i'm getting success and a note that an index
>> will be created on tables pripamary key - problem, when I am browsing
>> through table structure in my public schema, under newly created  table in
>> index section there are no indexes until I create one explicitly.So does it
>> create an index?and it's pgAdmin's bug not displaying the auto created
>> index, or it works in such way - not displaying the auto index?
>>
>> PostgreSQL version: 9.0.1
>>
>
>
Yes, it will always create a unique index for primary keys, but it's just not
visible in pgAdmin as an actual index.  The fact that a primary key is there
(listed in the constraints node) indicates that it automatically has an index
anyway.  And the name shown in constraints is the name of the index.
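
If you do want to see the implicit index, you can query the catalog directly; a sketch, with 'mytable' standing in for your table name:

```sql
-- lists all indexes on the table, including the one backing the primary key
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'mytable';
```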

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935




Re: [GENERAL] pgAdmin 3 index display problem

2010-11-01 Thread Eduardas Tcpa
>
> Hello I'm getting into strange problem with pgAdmin 3.
> As we all know, PostgreSQL enforces index creation on primary key's.
> The problem with this is the fact, that when i create a table in pgAdmin's
> sql editor, in result window i'm getting success and a note that an index
> will be created on tables pripamary key - problem, when I am browsing
> through table structure in my public schema, under newly created  table in
> index section there are no indexes until I create one explicitly.So does it
> create an index?and it's pgAdmin's bug not displaying the auto created
> index, or it works in such way - not displaying the auto index?
>
> PostgreSQL version: 9.0.1
>




Re: [GENERAL] Failover on Windows

2010-11-01 Thread Norberto Delle

Em 1/11/2010 09:00, Fujii Masao escreveu:

On Fri, Oct 29, 2010 at 9:58 PM, Norberto Delle  wrote:

I'm testing a warm standby setup using PostgreSQL 9 x64 on Windows 2008 R2.

What command (pg_standby? cp?) is supplied in restore_command for warm-standby?
Or you are testing streaming replication + hot standby?


The problem is that when I put the trigger file on the location specified in
the parameter
'trigger_file' of the recovery.conf, nothing happens. No log entries, the
recovery just continues
as if nothing has happened.
Any clues of what may be wrong?

At least if you use pg_standby, you have to create the trigger file at
the location
specified by the -t option of pg_standby.

Regards,


Hi Masao

Yes, I'm using pg_standby in the restore_command. I thought that specifying
a trigger_file in recovery.conf would be enough to be able to stop the
recovery process, so I ignored the -t option of pg_standby. By specifying
it, I'm now able to stop the recovery process.
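For anyone finding this in the archives, a sketch of the kind of restore_command involved (paths are placeholders, untested as written):

```
# recovery.conf sketch (Windows paths are placeholders).
# pg_standby reads its trigger file from its own -t option; the
# recovery.conf trigger_file parameter applies to the built-in
# (streaming replication) standby mode, not to pg_standby.
restore_command = 'pg_standby -d -s 2 -t C:\pgsql.trigger "C:\archive" %f %p %r'
```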


Thanks for your help.

Norberto



Re: [GENERAL] Temporary schemas

2010-11-01 Thread Thomas Kellerer

Thom Brown wrote on 01.11.2010 12:33:

You can use:

SELECT nspname
FROM pg_namespace
WHERE oid = pg_my_temp_schema();

to get the name of the current temporary schema for your session.


Thanks that's what I was looking for.

Regards
Thomas






Re: [GENERAL] Temporary schemas

2010-11-01 Thread Thom Brown
On 1 November 2010 10:46, Thomas Kellerer  wrote:

> Hello,
>
> I have created a temporary table using
>
> create temporary table foo
> (
>  id integer
> );
>
> and noticed this was created in a schema called "pg_temp_2"
>
> My question is:
>
> is this always "pg_temp_2"?
> Or will the name of the "temp schema" change?
>
> If it isn't always the same, is there a way I can retrieve the schema name
> for temporary tables?
>
> Regards
> Thomas
>
>
You can use:

SELECT nspname
FROM pg_namespace
WHERE oid = pg_my_temp_schema();

to get the name of the current temporary schema for your session.

And it's always pg_temp_[nnn] as far as I'm aware, with a corresponding
pg_toast_temp_[nnn] schema.
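A related shortcut, for completeness: you rarely need the numbered name itself, since the pg_temp alias always resolves to the current session's temporary schema:

```sql
CREATE TEMPORARY TABLE foo (id integer);

-- pg_temp resolves to this session's pg_temp_[nnn],
-- whatever number it happens to be
SELECT * FROM pg_temp.foo;
```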

-- 
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935


Re: [GENERAL] Failover on Windows

2010-11-01 Thread Fujii Masao
On Fri, Oct 29, 2010 at 9:58 PM, Norberto Delle  wrote:
> I'm testing a warm standby setup using PostgreSQL 9 x64 on Windows 2008 R2.

What command (pg_standby? cp?) do you supply in restore_command for the warm standby?
Or are you testing streaming replication + hot standby?

> The problem is that when I put the trigger file on the location specified in
> the parameter
> 'trigger_file' of the recovery.conf, nothing happens. No log entries, the
> recovery just continues
> as if nothing has happened.
> Any clues as to what might be wrong?

If you use pg_standby, you have to create the trigger file at the
location specified in the -t option of pg_standby.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center



[GENERAL] Temporary schemas

2010-11-01 Thread Thomas Kellerer

Hello,

I have created a temporary table using

create temporary table foo
(
  id integer
);

and noticed this was created in a schema called "pg_temp_2"

My question is:

is this always "pg_temp_2"?
Or will the name of the "temp schema" change?

If it isn't always the same, is there a way I can retrieve the schema name for 
temporary tables?

Regards
Thomas




[GENERAL] async queries in Perl and poll()/select() loop - how to make them work together?

2010-11-01 Thread Alexander Farber
Hello,

I'm using CentOS 5.5 Linux with stock perl v5.8.8
and have installed DBD-Pg-2.17.1 via CPAN shell
and I'm using postgresql-server-8.4.5-1PGDG.rhel5.

I've prepared a simple test case demonstrating my problem; it is listed
at the bottom. It works OK when I remove {pg_async => PG_ASYNC}.

I have a small multiplayer game: a non-forking daemon
reading/writing to sockets and running in an IO::Poll loop.

I'd like to add player statistics to it, so I was hoping
that I could call the simple INSERT/UPDATE
statements asynchronously from the loop. I don't need
to fetch any results of those queries, because
displaying statistics will be done by web scripts,
not by my game daemon.

I.e. I would like to "fire and forget" queries.

But unfortunately I get the error:
DBD::Pg::st execute failed: Cannot execute
until previous async query has finished
even though I'm not using PG_OLDQUERY_WAIT.

Does anybody know what is wrong, and how
would you use async queries with a
poll()/select() loop anyway?

I can't even call "$dbh->pg_result if $sth->pg_ready"
on every loop iteration, because I can have
several queries running at that moment and
I don't want to iterate through a list of my $sth's...
That would defeat my goal of fast poll() looping.
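A possible workaround, sketched here untested (the helper names fire_and_forget and reap_finished are made up for illustration): DBD::Pg allows only one outstanding async query per connection, and connect_cached hands back the same cached $dbh every time, so a fire-and-forget scheme needs a separate handle per in-flight query:

```perl
# Untested sketch: one plain connection per in-flight async query.
use strict;
use DBI;
use DBD::Pg qw(:async);

my @busy;   # [$dbh, $sth] pairs with a query still in flight

sub fire_and_forget {
    my (@args) = @_;
    # Deliberately NOT connect_cached: each async query gets its own handle.
    my $dbh = DBI->connect('dbi:Pg:dbname=snake', 'snake', 'snake',
        { AutoCommit => 1, RaiseError => 1 });
    my $sth = $dbh->prepare(
        'select pref_update_users(?, ?, ?, ?, ?, ?, NULL)',
        { pg_async => PG_ASYNC });
    $sth->execute(@args);
    push @busy, [$dbh, $sth];
}

# Call once per poll() iteration; cheap when nothing is ready.
sub reap_finished {
    @busy = grep {
        my ($dbh, $sth) = @$_;
        # collect the result and drop the pair once the query is done
        $sth->pg_ready ? ($dbh->pg_result, 0) : 1;
    } @busy;
}
```

Opening a connection per query is expensive, though; a small pool of pre-opened handles, or one dedicated writer connection draining a queue, may suit the game loop better.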

Regards
Alex

P.S. Here is my test case and it is also listed at
http://stackoverflow.com/questions/4065758/async-queries-with-dbdpg-fail-with-cannot-execute-until-previous-async-query-h

#!/usr/bin/perl -w

use strict;
use DBI;
use DBD::Pg qw(:async);

use constant DBNAME => 'snake';
use constant DBUSER => 'snake';
use constant DBPASS => 'snake';

use constant SQL_CREATE_TABLES => q{
/*
create table pref_users (
id varchar(32) primary key,
first_name varchar(32),
last_name varchar(32),
female boolean,
avatar varchar(128),
city varchar(32),
lat real check (-90 <= lat and lat <= 90),
lng real check (-90 <= lng and lng <= 90),
last_login timestamp default current_timestamp,
last_ip inet,
medals smallint check (medals > 0)
);

create table pref_rate (
obj varchar(32) references pref_users(id),
subj varchar(32) references pref_users(id),
good boolean,
fair boolean,
nice boolean,
about varchar(256),
last_rated timestamp default current_timestamp
);

create table pref_money (
id varchar(32) references pref_users,
yw char(7) default to_char(current_timestamp, '-WW'),
money real
);
create index pref_money_yw_index on pref_money(yw);

create table pref_pass (
id varchar(32) references pref_users
);

create table pref_misere (
id varchar(32) references pref_users
);
*/

create or replace function pref_update_users(_id varchar,
_first_name varchar, _last_name varchar, _female boolean,
_avatar varchar, _city varchar, _last_ip inet) returns
void as $BODY$
begin

update pref_users set
first_name = _first_name,
last_name = _last_name,
female = _female,
avatar = _avatar,
city = _city,
last_ip = _last_ip
where id = _id;

if not found then
insert into pref_users(id, first_name,
last_name, female, avatar, city, last_ip)
values (_id, _first_name, _last_name,
_female, _avatar, _city, _last_ip);
end if;
end;
$BODY$ language plpgsql;
};

eval {
my $dbh = DBI->connect_cached('dbi:Pg:dbname=' .
DBNAME, DBUSER, DBPASS, {
AutoCommit => 1,
PrintWarn => 1,
PrintError => 1,
RaiseError => 1,
FetchHashKeyName => 'NAME_lc',
pg_enable_utf8 => 1
}, {pg_async => PG_ASYNC});

$dbh->do(SQL_CREATE_TABLES, {pg_async => PG_ASYNC});
};
warn $@ if $@;

for my $i (1..10) {
eval {
my $dbh = DBI->connect_cached('dbi:Pg:dbname=' .
DBNAME, DBUSER, DBPASS, {
AutoCommit => 1,
PrintWarn => 1,
PrintError => 1,
RaiseError => 1,
FetchHashKeyName => 'NAME_lc',
pg_enable_utf8 => 1
}, {pg_async => PG_ASYNC});

#$dbh->pg_result;

my $sth = $dbh->prepare_cached(
q{select pref_update_users(?, ?, ?, ?, ?, ?,
NULL)}, {pg_async => PG_ASYNC});

$sth->execute('ID123', 'Alexand

[GENERAL] Linux x Windows LOCALE/ENCODING compatibility

2010-11-01 Thread Carlos Henrique Reimer
Hi,

I currently have my PostgreSQL server running on a Windows box, and now we're
migrating it to a Linux operating system.

Current Windows configuration:
pg_controldata shows that LC_COLLATE and LC_CTYPE are Portuguese_Brasil.1252.
The psql \l command shows we have databases with encodings WIN1252 and UTF8.

New Linux box:
Which --locale and --encoding options should I use with the initdb command to
get a compatible locale and encoding for the cluster?
Which --encoding option should I use with the createdb command to get a
database with a compatible encoding?
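Not an authoritative answer, but a sketch of the usual mapping (verify the exact locale names with `locale -a` on the new box; they vary by distro). Windows Portuguese_Brasil.1252 roughly corresponds to pt_BR.ISO-8859-1 (server encoding LATIN1) or, on most modern Linux systems, pt_BR.UTF-8 (encoding UTF8); WIN1252 itself is generally not available as a Linux locale charset:

```
# Sketch only -- check `locale -a` first; data is converted on dump/restore.
# UTF-8 route (covers both the old WIN1252 and UTF8 databases):
initdb --locale=pt_BR.UTF-8 --encoding=UTF8 -D /var/lib/pgsql/data
createdb --encoding=UTF8 mydb

# LATIN1 route, closer to the old WIN1252 byte layout:
# initdb --locale=pt_BR.ISO-8859-1 --encoding=LATIN1 -D /var/lib/pgsql/data
```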

Thank you!
Reimer