Re: [GENERAL] Disconnecting and cancelling a statement

2011-09-07 Thread Jeff Davis
On Wed, 2011-09-07 at 14:46 +0800, Craig Ringer wrote:
> > Right now, PostgreSQL doesn't seem to make an effort to detect a client
> > cancellation. For instance, if you do a "select pg_sleep(1000)" and then
> > kill -9 the client, the SELECT will remain running.
> 
> pg_sleep isn't a good test. In fact, Pg _does_ make an effort to detect 
> when a client dies, and will try to terminate the query. It does this 
> via explicit checks at various points, none of which are reached while 
> Pg is idling in a sleep() syscall. During more typical query processing 
> you'll usually find that a query gets terminated when the client dies.

pg_sleep is not merely a wrapper around the sleep system call; it does
call CHECK_FOR_INTERRUPTS() periodically. Also, you can see that
pg_sleep can be easily canceled if the signal arrives while the query is
actually running (try in psql, or try removing the SIGSTOP/SIGCONT
signals from the C code I attached to the first message).

Try with a large Cartesian product and you should get the same problem.

> Pg must find out when the client dies, though. If the client just goes 
> away - such as with a laptop on wifi that wanders out of range - it 
> won't know about it until it next attempts to send data to the client.

How does it know, even on a good network connection, when the client
disconnects? I attached a reproducible case, so you should see what I'm
talking about.

> To address this, if you want reliable client dropout detection, you need 
> to enable tcp keepalives and set them to quite aggressive so the OS will 
> periodically test the connection for aliveness.

I'd be happy if it just detected a disconnect that the OS already knows
about, e.g. explicitly closing the socket.

> I'd love to see Pg accept OOB cancel requests done via lightweight 
> connections that don't go through the whole setup process.

It does that for cancel (see PQcancel), but there is no equivalent for
termination.
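
(For completeness: from a second, fully authenticated connection both
operations can be done in-band with the built-in admin functions. This is
just a hedged sketch assuming a 9.0-era pg_stat_activity layout with
procpid and current_query, not the lightweight out-of-band path discussed
above.)

-- cancel the running statement of the sleeping backend
SELECT pg_cancel_backend(procpid)
FROM pg_stat_activity
WHERE current_query LIKE 'select pg_sleep%';

-- terminate the whole session (8.4+, superuser only)
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query LIKE 'select pg_sleep%';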

>  If the server 
> sent a statement "cookie" when executing a statement, the client could 
> hang onto that and use it to issue a cancel for that statement and only 
> that statement by establishing a new connection to the server and 
> sending that cookie rather than the usual negotiation and auth process. 
> There'd be no need to go through full auth or even bother with SSL, 
> because it's a one-time random (or hash-based) code. Pooling systems 
> could send this to _all_ servers, or it could be prefixed with a server 
> identifier that helped poolers route it to the right server.

That's not too far from what's already done -- again, see the source for
PQcancel() and processCancelRequest().

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Complex query question

2011-09-07 Thread Jayadevan M
Hello,
> I have a complex query question whose answer I think would help me to
> understand subselects and aggregates better. I have a table with four
> columns of interest:
> 
> id (int primary key), loc_title (varchar null), loc_value (float
> null), loc_unit (varchar null)
> 
> I want the output columns to be:
> (1) each distinct value of loc_title, sorted
> (2) an id of a record containing that loc_title
> (3) the loc_value for the record in column 2
> (4) the loc_unit for the record in column 2
> 
> I don't care as much how the records for columns 2-4 are chosen. It
> could be max(loc_value), min(id), or something else. I just need some
> sample records to test my program against.
> 
> Is this something I should be able to do with a single query with a
> subselect, or is it too much for one query? I tried a few ways and
> none of them were syntactically valid.

Will this do?
test=# select * from myt;
 id | loc_title | loc_value | loc_unit
----+-----------+-----------+----------
  1 | AA        |        80 | 10
  2 | AA        |        80 | 10
  3 | BB        |        80 | 10
  4 | AA        |        80 | 10
  5 | BB        |        80 | 10
(5 rows)

test=# select a.* from myt a where id in (select min(id) from myt group by 
loc_title) order by loc_title;
 id | loc_title | loc_value | loc_unit
----+-----------+-----------+----------
  1 | AA        |        80 | 10
  3 | BB        |        80 | 10
(2 rows)

Regards,
Jayadevan











Re: [GENERAL] Complex query question

2011-09-07 Thread Albe Laurenz
Mike Orr wrote:
> I have a complex query question whose answer I think would help me to
> understand subselects and aggregates better. I have a table with four
> columns of interest:
> 
> id (int primary key), loc_title (varchar null), loc_value (float
> null), loc_unit (varchar null)
> 
> I want the output columns to be:
> (1) each distinct value of loc_title, sorted
> (2) an id of a record containing that loc_title
> (3) the loc_value for the record in column 2
> (4) the loc_unit for the record in column 2
> 
> I don't care as much how the records for columns 2-4 are chosen. It
> could be max(loc_value), min(id), or something else. I just need some
> sample records to test my program against.
> 
> Is this something I should be able to do with a single query with a
> subselect, or is it too much for one query? I tried a few ways and
> none of them were syntactically valid.

Sorry to disappoint you, but you won't learn a lot about subselects
and aggregates with that:

SELECT DISTINCT ON (loc_title) loc_title, id, loc_value, loc_unit
FROM mytable
ORDER BY loc_title;

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Asia
> Asia  writes:
> > I would expect to have only one top-level CA cert in server's and client's 
> > root.crt and it was not possible to configure with 2-level intermediate CA. 
> 
> This seems a little confused, since in your previous message you stated
> that libpq worked correctly and JDBC did not, and now you seem to be
> saying the opposite.
> 
> As far as libpq goes, I would expect it to function correctly in 9.0 and
> up (and it did function correctly, last I tested it).  Previous releases
> will not do this nicely, for lack of this patch:
> http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=4ed4b6c54
> 
>   regards, tom lane
> 


I apologise then, it seems I was not clear enough when explaining my issue. 

I am using PostgreSQL, version 9.0.

I have all of it (libpq and JDBC) working; however, I have some doubts about
the correctness of my configuration.

The situation is more or less as follows:

Client intermediate CA (root.crt): C1 -> C2, Client cert: C1 -> C2 ->C3

Server intermediate CA (root.crt): C1 -> S1, Server Cert: C1 -> S1 -> S2

I always use clientcert=1 in pg_hba to force mutual SSL.

Now with the above configuration libpq connects fine. But when I try to use
JDBC, it requires me to append the client's intermediate CA ("C1 -> C2")
to the server's root.crt, so the server's root.crt content looks as follows:

C1 -> S1  ->  C1 -> C2

Then the JDBC connection works fine, and the change does not affect libpq -
it works fine as before.

So my general point was: why is the behavior of libpq and the JDBC driver not
the same (probably we would need some custom implementation of a Java SSL
factory for PostgreSQL)? The two types of connection require different cert
configurations, and I believe it would be better if they were the same.

And the second issue: you wrote that it should be enough to put the top-level
CA certs there. So I left only C1 in the server's root.crt, restarted the
server, and received the following error during connection:

SSL error: certificate verify failed

The question is how to do it correctly?

Please advise.

Kind regards,
Joanna

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Radosław Smogura

On Wed, 07 Sep 2011 12:03:45 +0200, Asia wrote:

> > Asia  writes:
> > > I would expect to have only one top-level CA cert in server's and
> > > client's root.crt and it was not possible to configure with 2-level
> > > intermediate CA.
> >
> > This seems a little confused, since in your previous message you stated
> > that libpq worked correctly and JDBC did not, and now you seem to be
> > saying the opposite.
> >
> > As far as libpq goes, I would expect it to function correctly in 9.0 and
> > up (and it did function correctly, last I tested it).  Previous releases
> > will not do this nicely, for lack of this patch:
> > http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=4ed4b6c54
> >
> > regards, tom lane
>
> I apologise then, it seems I was not clear enough when explaining my issue.
>
> I am using PostgreSQL, version 9.0.
>
> I have all of it (libpq and JDBC) working; however, I have some doubts about
> the correctness of my configuration.
>
> The situation is more or less as follows:
>
> Client intermediate CA (root.crt): C1 -> C2, Client cert: C1 -> C2 -> C3
>
> Server intermediate CA (root.crt): C1 -> S1, Server cert: C1 -> S1 -> S2
>
> I always use clientcert=1 in pg_hba to force mutual SSL.
>
> Now with the above configuration libpq connects fine. But when I try to use
> JDBC, it requires me to append the client's intermediate CA ("C1 -> C2")
> to the server's root.crt, so the server's root.crt content looks as follows:
>
> C1 -> S1  ->  C1 -> C2
>
> Then the JDBC connection works fine, and the change does not affect libpq -
> it works fine as before.
>
> So my general point was: why is the behavior of libpq and the JDBC driver
> not the same (probably we would need some custom implementation of a Java
> SSL factory for PostgreSQL)? The two types of connection require different
> cert configurations, and I believe it would be better if they were the same.
>
> And the second issue: you wrote that it should be enough to put the
> top-level CA certs there. So I left only C1 in the server's root.crt,
> restarted the server, and received the following error during connection:
>
> SSL error: certificate verify failed
>
> The question is how to do it correctly?
>
> Please advise.
>
> Kind regards,
> Joanna


I think the problem is as follows: the server sends the client the
certificates it can accept (as acceptable parents). Without the intermediate
CA, Java sees only the top-level cert and tries to find a client cert issued
directly by the top-level CA. I can only assume that without the intermediate
CA you would be able to authenticate with any cert signed by the top-level CA
(which may be a small security hole as well).

I think this is not needed, but I suggest you also check the cert "policies"
in the v3 extensions.

Java is really pedantic about security.

Regards,
Radek

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Asia
> 
> I think the problem is as follows: the server sends the client the
> certificates it can accept (as acceptable parents). Without the intermediate
> CA, Java sees only the top-level cert and tries to find a client cert issued
> directly by the top-level CA. I can only assume that without the
> intermediate CA you would be able to authenticate with any cert signed by
> the top-level CA (which may be a small security hole as well).
> 
> I think this is not needed, but I suggest you also check the cert "policies"
> in the v3 extensions.
> 
> Java is really pedantic about security.
> 
> Regards,
> Radek
> 


The problem is that I believe this configuration could be better, but I cannot
put only part of the CA chain in root.crt as was advised.
For Java it all depends on the current SSL factory implementation; I was using
the default one.
If I wrote my own implementation I would probably be able to make it behave
the same as libpq, requiring the least info/configuration (but actually I
would prefer to avoid that).

Kind regards,
Joanna


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Demoting master to slave without an rsync...is it safe?

2011-09-07 Thread Alex Lai

Chris Redekop wrote:



I have two questions:
(1) Did you set recovery_target_timeline='latest' in both master
and slave?


Yes...but it's in recovery.conf so it only really applies to
whichever server is currently the slave...
 


(2) Did you make any changes after promote the slave to be master?


Yes, some...however I'm not sure I've done enough changes to have the
slave rotate+archive an xlog before bringing the old master back up as
a slave...I would assume that wouldn't make a difference but it's
something to test I guess
Did you set trigger_file in recovery.conf?  That is the required step to
switch a slave to master.  I set my log rotate+archive to 15 minutes.
When I touch the failover file, it most likely forces a rotate+archive.


--
Best regards,


Alex Lai
OMI SIPS DBA ADNET Systems , Inc. 
7515 Mission Drive, 
Suite A100 Lanham, MD 20706 
301-352-4657 (phone) 
301-352-0437 (fax) 
a...@sesda2.com



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Radosław Smogura

On Wed, 07 Sep 2011 13:49:30 +0200, Asia wrote:


> > I think the problem is as follows: the server sends the client the
> > certificates it can accept (as acceptable parents). Without the
> > intermediate CA, Java sees only the top-level cert and tries to find a
> > client cert issued directly by the top-level CA. I can only assume that
> > without the intermediate CA you would be able to authenticate with any
> > cert signed by the top-level CA (which may be a small security hole as
> > well).
> >
> > I think this is not needed, but I suggest you also check the cert
> > "policies" in the v3 extensions.
> >
> > Java is really pedantic about security.
>
> The problem is that I believe this configuration could be better, but I
> cannot put only part of the CA chain in root.crt as was advised.
> For Java it all depends on the current SSL factory implementation; I was
> using the default one.
> If I wrote my own implementation I would probably be able to make it behave
> the same as libpq, requiring the least info/configuration (but actually I
> would prefer to avoid that).
>
> Kind regards,
> Joanna


I personally haven't tried SSL with PostgreSQL, but I think you should put in
root.crt only the intermediate certificate (C1 - from the prev post), so that
all, and only, "sub-certs" of that intermediate CA will be able to establish a
connection (paranoid security).

Putting intermediate CAs as trusted in the Java keystore may be a solution,
but I'm not sure whether, in the situation of cert invalidation, such a cert
will be rejected.

If you want to write an SSL factory, you should re-implement only the
KeyManager, to give it the ability of an extended search.


Regards,
Radek

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Adrian Klaver
On Wednesday, September 07, 2011 4:49:30 am Asia wrote:

> 
> The problem is that I believe that this configuration could be better but I
> cannot put part of CA chain in root.crt as it was advised.
> For Java it all depends on current SSL Factory implementation, I was using
> the default one. If I wrote my own implementation I would probably be able
> to have common with libpq, requiring the least info, configuration (but
> actually I would prefer to avoid it).

You might want to take a look at the below and see if it helps:

http://jdbc.postgresql.org/documentation/head/ssl-client.html

> 
> Kind regards,
> Joanna

-- 
Adrian Klaver
adrian.kla...@gmail.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Asia
> 
> I personally haven't tried SSL with PostgreSQL, but I think you should put
> in root.crt only the intermediate certificate (C1 - from the prev post), so
> that all, and only, "sub-certs" of that intermediate CA will be able to
> establish a connection (paranoid security).
> 
> Putting intermediate CAs as trusted in the Java keystore may be a solution,
> but I'm not sure whether, in the situation of cert invalidation, such a cert
> will be rejected.
> 
> If you want to write an SSL factory, you should re-implement only the
> KeyManager, to give it the ability of an extended search.
> 
> Regards,
> Radek
> 

I have already tried with only C1 in root.crt, but unfortunately it does not
work; I get an error message that the cert is invalid. It seems that chained
CAs are not supported in the way we would like. I would prefer to keep the
number of trusted certs in root.crt as small as possible, but as I said it
does not work.

About Java, I would need to analyze the libpq code and implement a KeyManager
in a similar way - this is surely possible but not necessarily the preferred
solution ;-)

Kind regards,
Joanna


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Tom Lane
Asia  writes:
> The problem is that I believe that this configuration could be better but I 
> cannot put part 
> of CA chain in root.crt as it was advised.
> For Java it all depends on current SSL Factory implementation, I was using 
> the default one.
> If I wrote my own implementation I would probably be able to have common with 
> libpq, 
> requiring the least info, configuration (but actually I would prefer to avoid 
> it).

You would be better off to ask about this on the pgsql-jdbc list; the
people who actually work with JDBC tend to hang out there.  I'm not sure
how carefully any of them follow pgsql-general.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Asia
I have a feeling that the jdbc list is not the right list to ask why libpq
does not work when I put the top-level CA cert from a CA having two certs in
root.crt, while you stated that this would be the proper configuration.

There are two related threads here: one about consistency between libpq and
the jdbc driver, and the other about how libpq works with chained CA's.

Kind regards,
Joanna

W dniu 2011-09-07 16:06:10 użytkownik Tom Lane  napisał:
> Asia  writes:
> > The problem is that I believe that this configuration could be better but I 
> > cannot put part 
> > of CA chain in root.crt as it was advised.
> > For Java it all depends on current SSL Factory implementation, I was using 
> > the default one.
> > If I wrote my own implementation I would probably be able to have common 
> > with libpq, 
> > requiring the least info, configuration (but actually I would prefer to 
> > avoid it).
> 
> You would be better off to ask about this on the pgsql-jdbc list; the
> people who actually work with JDBC tend to hang out there.  I'm not sure
> how carefully any of them follow pgsql-general.
> 
>   regards, tom lane
> 




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Tom Lane
Asia  writes:
> I have a feeling that jdbc list is not the right list to ask why libpq does 
> not work when I 
> put top-level CA cert from CA having two certs in root.crt while you stated 
> it would be
> proper configuration.

What is a "CA having two certs"?  AFAIK, there is no such animal.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conditional insert

2011-09-07 Thread Lincoln Yeoh

At 05:23 AM 9/7/2011, Merlin Moncure wrote:

On Tue, Sep 6, 2011 at 3:45 PM, Merlin Moncure  wrote:

> b) doesn't block reads if you lock in EXCLUSIVE mode.  a) is the best
> way to go if you prefer to handle errors on the client and/or
> concurrency is important...c) otherwise.

whoops!  meant to say b) otherwise! As far as c) goes, that is
essentially an advisory lock for the purpose -- using advisory locks
in place of mvcc locks is pretty weak sauce -- they should be used
when what you are locking doesn't follow mvcc rules.

merlin


Don't you have to block SELECTs so that the SELECTs get serialized? 
Otherwise concurrent SELECTs can occur at the same time, find no 
existing rows, then "all" the inserts proceed and you get errors (or dupes).


That's how Postgresql still works right? I haven't really been keeping up.

From what I see this (UPSERT/MERGE) has been a common problem/query 
over the years but it's not in a Postgresql FAQ and many people seem 
to be using methods that don't actually work. Google shows that many 
are even recommending those methods to others. Postgresql might still 
get blamed for the resulting problems.


Regards,
Link.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conditional insert

2011-09-07 Thread Andrew Sullivan
On Wed, Sep 07, 2011 at 11:45:11PM +0800, Lincoln Yeoh wrote:
> Don't you have to block SELECTs so that the SELECTs get serialized?

If you want to do that, why wouldn't you just use serializable mode?

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL trigger and sequence increment

2011-09-07 Thread jonesd

Seems like you would be a lot better off enforcing this with a unique
index on (submitter_id, date_trunc('month',entry_timestamp)).  The above
not only doesn't provide any feedback, it's got serious race-condition
problems.


I'll take a look at using an index to do this.  The trigger is an ugly  
solution.


Each row in the table also has a SERIAL identifier with a sequence
providing values.  I'd like to provide information to the user
regarding why the INSERT or UPDATE failed, as the examples in the
documentation do via using a RAISE EXCEPTION instead of RETURN NULL
(see http://www.postgresql.org/docs/8.3/interactive/plpgsql-trigger.html,
which appears to be unchanged in the documentation for 9.0).
However, if I do so, the sequence increments after the attempted
INSERT or UPDATE, which is not desired (and does not happen if
RETURN NULL is the result of the trigger function).



Really?  Frankly, I don't believe it.  Any default value will get filled
in long before triggers run.  In any case, you'd still have issues from
errors occurring later in the transaction.  In general, you *can not*
expect to not have "holes" in the serial number assignment when using a
sequence object.  You'll save yourself a lot of grief if you just accept
that fact, rather than imagining (falsely) that you've found a
workaround to avoid it.


I double-checked it and got the same behavior each time I did it.   
Poking around in the documentation makes me think that the key is when  
the trigger fires.  The trigger in question is a BEFORE trigger, so  
according to the docs if it returns NULL the INSERT never happens.   
Thus, the sequence wouldn't increment - makes sense to me.  It appears  
that, if you get an exception instead, the sequence does increment,  
which is the part that doesn't make sense.
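
(For context, a minimal sketch of the two trigger variants being compared --
the original trigger code wasn't posted, so the body below is illustrative,
reusing the table/column names mentioned elsewhere in this thread:)

CREATE OR REPLACE FUNCTION reject_duplicate_entry() RETURNS trigger AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM table_entry t
               WHERE t.submitter_id = NEW.submitter_id
                 AND date_trunc('month', t.entry_timestamp)
                     = date_trunc('month', NEW.entry_timestamp)) THEN
        RETURN NULL;   -- variant 1: BEFORE trigger silently skips the INSERT
        -- RAISE EXCEPTION 'only one entry per submitter per month';
        --             -- variant 2: abort with feedback instead
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER one_entry_per_month
    BEFORE INSERT ON table_entry
    FOR EACH ROW EXECUTE PROCEDURE reject_duplicate_entry();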



If you really must have gap-free serial numbers, it's possible, but it's
slow, expensive, and doesn't rely on sequence objects.  You can find the
details in the list archives, but basically each insert has to lock the
table against other inserts and then examine it to find the max current
id.


Been there, done that, implemented a solution (which doesn't use  
sequences).  I'm not using that solution here - just don't see why a  
BEFORE trigger should be incrementing a sequence.



Dominic Jones, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Radosław Smogura
Asia  Wednesday 07 of September 2011 16:00:39
> > I personally haven't tired SSL for PostgreSQL but, I think, You should
> > put in root.crt only intermediate certificate (C1 - from prev post), so
> > all and only all "sub-certs" of intermediate CA will be able to
> > establish connection (paranoic security).
> > 
> > Putting intermediate CAs as trusted in Java keystore may be solution,
> > but I'm not sure if in situation of cert invalidation, such cert will be
> > rejected.
> > 
> > If you want to write SSL Factory, you should re-implement KeyManager
> > only, to give ability of extended search.
> > 
> > Regards,
> > Radek
> 
> I  have already tried with only C1 in root.crt but unfortunately it does
> not work. I get error message that cert is invalid. It seems that chained
> CA's are not supported in a way we would like to have it done. I would
> prefer to have number of trusted certs in root.crt limited as much as
> possible, but as I said it does not work.
> 
> About Java, I would need to analyze the libpq code and implement KeyManager
> in a similar way - this is surely possible but not necessarily preferred
> solution ;-)
> 
> Kind regards,
> Joanna
I barely looked at the Java SSL implementation, and it should support a
certificate chain, even if the intermediate cert isn't presented by the server
(not in root.crt), as long as you have a valid chain in the key/trust store. I
found the "javax.net.debug=all" property - you may try turning it on to see
debug info about cert matching.

Only one thing comes to me as to why it doesn't work: your intermediate cert
may have no issuer DN.

Regards,
Radek

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL trigger and sequence increment

2011-09-07 Thread jonesd

Seems like you would be a lot better off enforcing this with a unique
index on (submitter_id, date_trunc('month',entry_timestamp)).  The above
not only doesn't provide any feedback, it's got serious race-condition
problems.


Unfortunately, it didn't work.

CREATE UNIQUE INDEX one_entry_per_submitter_per_month ON table_entry  
(submitter_id , date_trunc('month',entry_timestamp));


runs into

ERROR:  functions in index expression must be marked IMMUTABLE.

If I'm reading this correctly, date_trunc is not IMMUTABLE and thus  
not usable in an index.



Dominic Jones, Ph.D.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSL certificates issue

2011-09-07 Thread Andrew Sullivan
On Wed, Sep 07, 2011 at 04:37:24PM +0200, Asia wrote:

> put top-level CA cert from CA having two certs in root.crt

[. . .]

> how libpq works with chained CA's.

"Two certs" and "chained CAs" are completely different problems.  What
are you trying to do, exactly?

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PL/pgSQL trigger and sequence increment

2011-09-07 Thread Tom Lane
jon...@xmission.com writes:
> CREATE UNIQUE INDEX one_entry_per_submitter_per_month ON table_entry  
> (submitter_id , date_trunc('month',entry_timestamp));
> runs into
> ERROR:  functions in index expression must be marked IMMUTABLE.

> If I'm reading this correctly, date_trunc is not IMMUTABLE and thus  
> not usable in an index.

It is not immutable because it depends on the timezone setting: the same
timestamptz might be truncated to different absolute time instants
depending on which zone you are in.  IOW, when is midnight of the first
of the month, exactly?

You could work around this with something like

date_trunc('month',entry_timestamp AT TIME ZONE 'UTC')

(feel free to substitute a different zone name reflecting what you want
to have happen) but I wonder whether this doesn't reflect a gap in your
database specification.
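
(A minimal sketch of how that workaround would look with the index definition
from earlier in the thread:)

CREATE UNIQUE INDEX one_entry_per_submitter_per_month
    ON table_entry (submitter_id,
                    date_trunc('month', entry_timestamp AT TIME ZONE 'UTC'));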

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Complex query question

2011-09-07 Thread Mike Orr
This works beautifully. Thanks to you and Osvaldo; I learned something
more about querying today.  I wasn't so much wanting to learn about
subqueries as to how to do these kinds of queries.

In this case, I'm testing a search routine, and I needed to extract
some possible results to expect. (I actually needed the 'name' column
too because that's what I'd input for the search, but I didn't realize
that until I got a working query and began testing. So I added the
name column and it worked.)

In other cases I've sometimes wanted to do a min or max but also get
additional information from the chosen rows. That's not quite this
case but it's an example of the kinds of queries I sometimes want to
do and then get stuck on: "Is this a case for a subquery or a window,
or do I just need to use 'group by' more smartly?" That's when I ask on
the list, to see what's the simplest way to do it all in one query.



On Wed, Sep 7, 2011 at 1:39 AM, Albe Laurenz  wrote:
> Mike Orr wrote:
>> I have a complex query question whose answer I think would help me to
>> understand subselects and aggregates better. I have a table with four
>> columns of interest:
>>
>> id (int primary key), loc_title (varchar null), loc_value (float
>> null), loc_unit (varchar null)
>>
>> I want the output columns to be:
>> (1) each distinct value of loc_title, sorted
>> (2) an id of a record containing that loc_title
>> (3) the loc_value for the record in column 2
>> (4) the loc_unit for the record in column 2
>>
>> I don't care as much how the records for columns 2-4 are chosen. It
>> could be max(loc_value), min(id), or something else. I just need some
>> sample records to test my program against.
>>
>> Is this something I should be able to do with a single query with a
>> subselect, or is it too much for one query? I tried a few ways and
>> none of them were syntactically valid.
>
> Sorry to disappoint you, but you won't learn a lot about subselects
> and aggregates with that:
>
> SELECT DISTINCT ON (loc_title) loc_title, id, loc_value, loc_unit
> FROM mytable
> ORDER BY loc_title;
>
> Yours,
> Laurenz Albe
>



-- 
Mike Orr 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Advice on HA option

2011-09-07 Thread hyelluas
Thank you, I will look at skype's walmgr. 

Also, could you explain what makes 9.0 "hands-free administration"?
Is the shipping of the WAL file from the master to the HA instance automated?
Any error checking / self recovery?

thank you much for the suggestion.

Helen

--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Advice-on-HA-option-tp4775605p4779672.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] checkpoint logs

2011-09-07 Thread Martín Marqués
I'm logging checkpoints to see how the background writer is working,
and I bumped into log information that I don't fully understand:

LOG:  checkpoint complete: wrote 5015 buffers (15.1%); 0 transaction
log file(s) added, 0 removed, 15 recycled; write=1004.333 s,
sync=0.106 s, total=1004.571 s

5015 are the WAL buffers written to the transaction log, but what is the 15.1%?

Also, what do the times at the end mean? Time to write the buffers to
the transaction log, or to flush modifications stored in the
transaction logs to their final destination in the database files?

-- 
Martín Marqués
select 'martin.marques' || '@' || 'gmail.com'
DBA, Programador, Administrador

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conditional insert

2011-09-07 Thread Merlin Moncure
On Wed, Sep 7, 2011 at 10:45 AM, Lincoln Yeoh  wrote:
> At 05:23 AM 9/7/2011, Merlin Moncure wrote:
>>
>> On Tue, Sep 6, 2011 at 3:45 PM, Merlin Moncure  wrote:
>>
>> > b) doesn't block reads if you lock in EXCLUSIVE mode.  a) is the best
>> > way to go if you prefer to handle errors on the client and/or
>> > concurrency is important...c) otherwise.
>>
>> whoops!  meant to say b) otherwise! As far as c) goes, that is
>> essentially an advisory lock for the purpose -- using advisory locks
>> in place of mvcc locks is pretty weak sauce -- they should be used
>> when what you are locking doesn't follow mvcc rules.
>>
>> merlin
>
> Don't you have to block SELECTs so that the SELECTs get serialized?
> Otherwise concurrent SELECTs can occur at the same time, find no existing
> rows, then "all" the inserts proceed and you get errors (or dupes).
>
> That's how Postgresql still works right? I haven't really been keeping up.

yeah -- but you only need to block selects if you are selecting in the
inserting transaction (this is not a full upsert).  if both writers
are doing:
begin;
lock table foo in exclusive mode;
insert into foo select ... where ...;
commit;

is good enough.  btw even if you are doing upsert pattern
(lock...select for update...insert/update), you'd be fine with
straight exclusive locks because the 'for update' lock takes a higher
lock that is blocked by exclusive.  A basic rule of thumb is to try
and not fully block readers unless absolutely necessary...basically
maintenance operations.

> From what I see this (UPSERT/MERGE) has been a common problem/query over the
> years but it's not in a Postgresql FAQ and many people seem to be using
> methods that don't actually work. Google shows that many are even
> recommending those methods to others. Postgresql might still get blamed for
> the resulting problems.

yeah -- there are two basic ways to do upsert -- a) table lock b) row
lock with loop/retry (either in app or server side via procedure).   I
greatly prefer a) for simplicity's sake unless you are shooting for
maximum possible concurrency.
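
(A minimal sketch of option a), assuming an illustrative table
foo(id int primary key, val text):

begin;
lock table foo in exclusive mode;
update foo set val = 'x' where id = 1;
insert into foo (id, val)
select 1, 'x'
where not exists (select 1 from foo where id = 1);
commit;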

@andrew s: going SERIALIZABLE doesn't help if you trying to eliminate
cases that would push you into retrying the transaction.

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conditional insert

2011-09-07 Thread Andrew Sullivan
On Wed, Sep 07, 2011 at 02:51:32PM -0500, Merlin Moncure wrote:
> 
> @andrew s: going SERIALIZABLE doesn't help if you trying to eliminate
> cases that would push you into retrying the transaction.

Well, no, of course.  But why not catch the failure and retry?  I
guess I just don't get the problem, since I hear people say this all
the time.  (I mean, I've also seen places where 'upsert' would be
cool, but it doesn't seem trivial to do in a general way and you can
do this with catch-serialization-error-and-retry, I think?)

A

-- 
Andrew Sullivan
a...@crankycanuck.ca

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] conditional insert

2011-09-07 Thread Merlin Moncure
On Wed, Sep 7, 2011 at 3:04 PM, Andrew Sullivan  wrote:
> On Wed, Sep 07, 2011 at 02:51:32PM -0500, Merlin Moncure wrote:
>>
>> @andrew s: going SERIALIZABLE doesn't help if you trying to eliminate
>> cases that would push you into retrying the transaction.
>
> Well, no, of course.  But why not catch the failure and retry?  I
> guess I just don't get the problem, since I hear people say this all
> the time.  (I mean, I've also seen places where 'upsert' would be
> cool, but it doesn't seem trivial to do in a general way and you can
> do this with catch-serialization-error-and-retry, I think?)

good points, but consider that savepoints have a certain amount of
performance overhead, and there may be some dependent client side
processing that is non-trivial to roll back.  Also, if you have a lot
of contention, things can get nasty very quickly -- a full lock is
reliable, simple, and fast, and can be done in one round trip.

Any solution that doesn't have loops is inherently more robust than
one that does.  I'll rest my case on that point -- consider very
carefully that the upsert loop example presented in the docs for years
was vulnerable to an infinite loop condition that was caught by one of
our users in production.   That completely turned me off towards that
general method of dealing with these types of problems unless there is
really no other reasonable way to do it.
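
(For reference, the loop/retry pattern being referred to looks roughly like
the merge_db() example in the docs -- sketched here from memory, so treat it
as illustrative:)

CREATE OR REPLACE FUNCTION merge_db(k INT, data TEXT) RETURNS VOID AS $$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = k;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert; if someone else inserts the same key
        -- concurrently, we get a unique-key failure and loop to retry
        BEGIN
            INSERT INTO db(a, b) VALUES (k, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;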

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] checkpoint logs

2011-09-07 Thread Tomas Vondra
On 7 Září 2011, 21:26, Martín Marqués wrote:
> I'm logging checkpoints to see how the background writer is working,
> and I bumped into log information that I don't fully understand:
>
> LOG:  checkpoint complete: wrote 5015 buffers (15.1%); 0 transaction
> log file(s) added, 0 removed, 15 recycled; write=1004.333 s,
> sync=0.106 s, total=1004.571 s
>
> 5015 are the WAL buffers written to the transaction log, but what is the
> 15.1%?

That's the portion of shared buffers that was written. If 5015 is 15.1% of
shared buffers, I guess you have about 256MB shared buffers. Am I right?

> Also, what do the times at the end mean? Time to write the buffers to
> the transaction log, or to flush modifications stored in the
> transaction logs to their final destination in the database files?

No, this had nothing to do with transaction logs. Checkpoint means writing
dirty shared buffers to the data files. The numbers give you an idea how
long it took to write the data and sync them to the drive (to make sure
it's actually written to the device).

For example you know you had to write 5015 buffers (that's about 40MB),
and it took about 1000 seconds to write them - that means the average
write speed was about 40kB/s.
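
(A quick way to check that arithmetic, assuming the default 8 kB block size:)

SELECT round(5015 / 0.151 * 8 / 1024) AS approx_shared_buffers_mb,  -- ~260
       round(5015 * 8 / 1004.571)     AS approx_write_kb_per_s;     -- ~40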

I guess this was a timed checkpoint - not sure what the actual timeout and
completion target was, but whenever a timeout expires a checkpoint is
issued (so that the recovery does not take too long). Timed checkpoints
generally are not very intrusive - if the timeout is 10 minutes and
completion target is 0.5, the checkpoint should finish in 5 minutes.

With forced checkpoint (e.g. when there are not enough checkpoint segments
or when you execute CHECKPOINT manually), the behaviour is much more
intrusive as the checkpoint needs to happen ASAP.

Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Problem using PostgreSQL 9.0.4 with Java

2011-09-07 Thread Arun Nadar
Hi


I am currently using the 'PostgreSQL 9.0.4' database with the JDBC driver
'postgresql-9.0-801.jdbc4'. In my Java program a normal SELECT query didn't
work, i.e.,

try {
    Class.forName("org.postgresql.Driver");
    connection = DriverManager.getConnection(
        "jdbc:postgresql://localhost/Student", "postgres", "postgres");
    statement = connection.createStatement();
    String sql = "SELECT Id, Name FROM Student ORDER BY Id"; // problem
    resultSet = statement.executeQuery(sql);
    if (resultSet.next()) {
        String id = resultSet.getString(1);
        String name = resultSet.getString(2);
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    try {
        connection.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}


If I directly write the above SELECT query via pgAdmin III, it doesn't work
either:
{ SELECT Id, Name FROM Student ORDER BY Id; }

but it works by putting " " around the identifiers, like this:
SELECT "Id", "Name" FROM "Student" ORDER BY "Id";

In a Java String, however, it is not possible to nest double quotes directly
inside double quotes.

How can this be implemented in a Java program? Please kindly send me the
solution for this.
 
Thanks & Regards
Arun.R.T

Re: [GENERAL] Problem using PostgreSQL 9.0.4 with Java

2011-09-07 Thread Tomas Vondra
On 7 Září 2011, 22:45, Arun Nadar wrote:
> Hi
>
>
> I am currently using 'PostgreSQL
> 9.0.4' database with JDBC driver 'postgresql-9.0-801.jdbc4'. In my Java
> program normal SELECT query didn't work.
> ie, 
>
> try {
>             Class.forName("org.postgresql.Driver");
>             connection =
> DriverManager.getConnection("jdbc:postgresql://localhost/Student","postgres","postgres");
>             statement = connection.createStatement();
>             String sql="SELECT Id, Name FROM Student ORDER BY Id"; //
> problem
>
>             resultSet = statement.executeQuery(sql);
>             if(resultSet.next()) {
>                 String id=resultSet.getString(1);
>                 String name=resultSet.getString(2);
>             }   
>         } catch (Exception e) {
>             e.printStackTrace();
>         } finally {
>             try {
>    
>              connection.close();
>             } catch (Exception e) {
>                 e.printStackTrace();
>             }
>         }
>
>
> I directly write a query via pgAdmin III the above SELECT query didn't
> work
> { SELECT Id, Name FROM Student ORDER BY Id; }
>
>
> but it work by putting " ",  like this  SELECT "Id", "Name" FROM "Student"
> ORDER BY "Id";
>
> in java String,  inside of double inverted commas another is does not
> possible.
>
>
> how this code is implement through java program. please kindly send me the
> solution for this.

First of all, this has nothing to do with Java - this is caused by quoting
the identifiers when creating the table. I.e. you've created the table
like this:

CREATE TABLE "Student" (
  "Id" ...
  "Name" ...
);

in that case you have to quote the identifiers every time you use them. I
generally don't recommend it, in my experience it makes the db difficult
to use and error prone.

CREATE TABLE student (
  Id ...
  Name ...
);

and then you don't need the quotes at all.

But if you really need to use the quotes (e.g. if you can't change the
schema), then you can escape them in the query string, e.g. like this:

String sql="SELECT \"Id\", \"Name\" FROM \"Student\" ORDER BY \"Id\"";

regards
Tomas


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem using PostgreSQL 9.0.4 with Java

2011-09-07 Thread John R Pierce

On 09/07/11 1:45 PM, Arun Nadar wrote:


but it work by putting " ",  like this SELECT "Id", "Name" FROM 
"Student" ORDER BY "Id";


in java String,  inside of double inverted commas another is does not 
possible.



String sql="SELECT \"Id\", \"Name\" FROM \"Student\" ORDER BY \"Id\"";

Alternately, create your table with all lower case names, and you won't 
have this problem.


alter table "Student" rename to student;
alter table student rename column "Id" to id;
alter table student rename column "Name" to name;



--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Demoting master to slave without an rsync...is it safe?

2011-09-07 Thread Chris Redekop
Yes, but I don't understand at all where this conversation is going, or how
it's relevant.  I have fail-over working perfectly fine...my original
question was: is it safe to bring a former master back up as a slave without
doing a base-backup first? (using recovery_target_timeline='latest')


On Wed, Sep 7, 2011 at 6:28 AM, Alex Lai  wrote:

> Chris Redekop wrote:
>
>>
>>
>>I have two questions:
>>(1) Did you set recovery_target_timeline='**latest' in both master
>>and slave?
>>
>>
>> Yes...but it's in recovery.conf so it only really applies to whichever
>> server is currently the slave...
>>
>>(2) Did you make any changes after promote the slave to be master?
>>
>>
>> Yes, some...however I'm not sure I've done enough changes to have the
>> slave rotate+archive an xlog before bringing the old master back up as a
>> slave...I would assume that wouldn't make a difference but it's something
>> to test I guess
>>
> Did you set trigger_file in recovery.conf?  That is the required step to
> switch a slave to master.  I set my log rotate+archive to 15 minutes.  When
> I touch the failover file, it most likely forces a rotate+archive.
> --
> Best regards,
>
>
> Alex Lai
> OMI SIPS DBA ADNET Systems , Inc. 7515 Mission Drive, Suite A100 Lanham, MD
> 20706 301-352-4657 (phone) 301-352-0437 (fax) a...@sesda2.com
>
>


Re: [GENERAL] Demoting master to slave without an rsync...is it safe?

2011-09-07 Thread John R Pierce

On 09/07/11 2:43 PM, Chris Redekop wrote:
my original question was: is it safe to bring a former master back up 
as a slave without doing a base-backup first? (using 
recovery_target_timeline='latest')




no.  you must first sync the new slave's files from the current master.
if you can do this with rsync, odds are most of the old files on the
master won't need copying.   of course, this sync must be bracketed with
the same pg_start_backup() and pg_stop_backup() as when the original
slave was created.
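
(A minimal sketch of that bracketing -- the label is illustrative; run these
on the current master around the rsync:)

SELECT pg_start_backup('resync-old-master');
-- ... rsync the current master's data directory to the old master here ...
SELECT pg_stop_backup();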




--
john r pierceN 37, W 122
santa cruz ca mid-left coast


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Problem using PostgreSQL 9.0.4 with Java

2011-09-07 Thread David Johnston
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Arun Nadar
Sent: Wednesday, September 07, 2011 4:45 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] Problem using PostgreSQL 9.0.4 with Java

 

Hi

> but it work by putting " ",  like this SELECT "Id", "Name" FROM "Student"
> ORDER BY "Id";
>
> in java String,  inside of double inverted commas another is does not
> possible.
>
> how this code is implement through java program. please kindly send me the
> solution for this.

To include the "quote" symbol in a Java string you need to escape it with
the "back-slash" symbol (i.e., "\") as so:

String var = "SELECT \"Id\" FROM \"Table\"";

The reason you are having to do this is that you defined your table and
column names in a case-sensitive manner BUT PostgreSQL automatically converts
to lowercase any identifier that is not enclosed in quotes.

I would suggest you do some more reading on how Strings in Java work since,
if you missed how to include the common quote in a String literal, you likely
missed some other important rules/behavior as well.

On the PostgreSQL side I do not know how you created your table(s), but
unless you have some overriding reason to avoid doing so you should stick to
lower-case; it will make using those tables from within Java much easier
(during your hard-coded queries phase of development).  You can still use
mixed-case in Java (without the quotes) and then let PostgreSQL convert it
into lower-case for you.

David J.


[GENERAL] pivoting data?

2011-09-07 Thread Joy Smith
I finally understand why the query looks like it does, even though it is not
what I wanted.  Here is the setup:


Version

"PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 32-bit"


Table Structure

-- Table: modvalues

-- DROP TABLE modvalues;

CREATE TABLE modvalues
(
  parties character varying,
  baloons character varying,
  color character varying,
  dayofpurchase date,
  someint serial NOT NULL,
  amountpur integer,
  CONSTRAINT wfe PRIMARY KEY (someint)
)
WITH (
  OIDS=FALSE
);
ALTER TABLE modvalues OWNER TO postgres;



Test Data

insert into modvalues (parties,baloons,color,dayofpurchase,amountpur) values
('1','big','red','1/1/2011',7);
insert into modvalues (parties,baloons,color,dayofpurchase,amountpur) values
('1','big','green','1/2/2011',14);
insert into modvalues (parties,baloons,color,dayofpurchase,amountpur) values
('1','big','blue','1/2/2011',3)



Query

with a as (select distinct baloons, color,amountpur from modvalues),
b as (select baloons,color,amountpur from modvalues where dayofpurchase =
'2011-01-01'),
c as (select baloons,color,amountpur from modvalues where dayofpurchase =
'2011-01-02'),
d as (select baloons,color,amountpur from modvalues where dayofpurchase =
'2011-01-03')


select
a.baloons,
a.color,
b.amountpur as "Jan First",
c.amountpur as "Jan Second",
d.amountpur as "Jan Third"

from
a left join b on a.baloons=b.baloons
left join c on a.baloons=c.baloons
left join d on a.baloons=d.baloons


Output

"baloons";"color";"Jan First";"Jan Second";"Jan Third"
"big";"red";7;3;
"big";"red";7;14;
"big";"blue";7;3;
"big";"blue";7;14;
"big";"green";7;3;
"big";"green";7;14;


Issue

I now see that it is putting 7's in for "Jan First" because I told it to put
b.amountpur in there - but why aren't rows 3-6 of the output blank for "Jan
First", since there were no purchases made on that date for the blue and green
colors?  Is there a way to pivot the data so that it lays out the data
like this:

 baloons | color | jan first | jan second | jan third
---------+-------+-----------+------------+-----------
 big     | red   |         7 | null       | null
 big     | green | null      |         14 | null
 big     | blue  | null      | null       |         3


?


[GENERAL] master-side counterpart of pg_last_xact_replay_timestamp?

2011-09-07 Thread Chris Redekop
Is there anything available to get the last time a transaction
occurred?...like, say, "pg_last_xact_timestamp"?  In order to accurately
calculate how far behind my slave is I need to do something like
master::pg_last_xact_timestamp() -
slave::pg_last_xact_replay_timestamp()...currently I'm using now() instead
of the pg_last_xact_timestamp() call, but then when the master is not busy
the slave appears to lag behind.  I'm considering writing a C module to get
the last modified file time of the xlog, but I'm hoping there is a better
alternative that I haven't found yet.


Re: [GENERAL] pivoting data?

2011-09-07 Thread Chris Travers
On Wed, Sep 7, 2011 at 3:25 PM, Joy Smith  wrote:
> I finally understand why the query looks like it does, even though it is not
> what I wanted.  Here is the setup:
>
> Version
> 
> "PostgreSQL 9.0.4, compiled by Visual C++ build 1500, 32-bit"
>
> Table Structure
> 
> -- Table: modvalues
> -- DROP TABLE modvalues;
> CREATE TABLE modvalues
> (
>   parties character varying,
>   baloons character varying,
>   color character varying,
>   dayofpurchase date,
>   someint serial NOT NULL,
>   amountpur integer,
>   CONSTRAINT wfe PRIMARY KEY (someint)
> )
> WITH (
>   OIDS=FALSE
> );
> ALTER TABLE modvalues OWNER TO postgres;
>
>
> Test Data
> 
> insert into modvalues (parties,baloons,color,dayofpurchase,amountpur) values
> ('1','big','red','1/1/2011',7);
> insert into modvalues (parties,baloons,color,dayofpurchase,amountpur) values
> ('1','big','green','1/2/2011',14);
> insert into modvalues (parties,baloons,color,dayofpurchase,amountpur) values
> ('1','big','blue','1/2/2011',3)
>
>
> Query
> 
> with a as (select distinct baloons, color,amountpur from modvalues),
> b as (select baloons,color,amountpur from modvalues where dayofpurchase =
> '2011-01-01'),
> c as (select baloons,color,amountpur from modvalues where dayofpurchase =
> '2011-01-02'),
> d as (select baloons,color,amountpur from modvalues where dayofpurchase =
> '2011-01-03')
>
> select
> a.baloons,
> a.color,
> b.amountpur as "Jan First",
> c.amountpur as "Jan Second",
> d.amountpur as "Jan Third"
> from
> a left join b on a.baloons=b.baloons
> left join c on a.baloons=c.baloons
> left join d on a.baloons=d.baloons

Wondering if a CASE statement would be more efficient here:

SELECT baloons, color,
  CASE WHEN dayofpurchase = '2011-01-01' THEN amountpur ELSE NULL END AS "Jan First",
  CASE WHEN dayofpurchase = '2011-01-02' THEN amountpur ELSE NULL END AS "Jan Second",
  CASE WHEN dayofpurchase = '2011-01-03' THEN amountpur ELSE NULL END AS "Jan Third"
FROM modvalues;
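
(A possible follow-up, sketched here as an aside: wrapping the CASE
expressions in max() and grouping collapses the output to one row per
(baloons, color), which matches the desired layout above.)

SELECT baloons, color,
       max(CASE WHEN dayofpurchase = '2011-01-01' THEN amountpur END) AS "Jan First",
       max(CASE WHEN dayofpurchase = '2011-01-02' THEN amountpur END) AS "Jan Second",
       max(CASE WHEN dayofpurchase = '2011-01-03' THEN amountpur END) AS "Jan Third"
FROM modvalues
GROUP BY baloons, color;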

Best Wishes,
Chris Travers

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 8.4 -> 9.0 upgrade difficulties

2011-09-07 Thread Steve Crawford
I am attempting to upgrade from 8.4 to 9.0 via parallel install of 9.0 
along with 8.4. It isn't going well. Fortunately it's a dev/test machine 
I'm using as a test and practice space.


The place I'm stuck is that "postgresql-server conflicts with
postgresql90" (yum output at bottom).


History:

I've mostly been following:
http://people.planetpostgresql.org/devrim/index.php?/archives/48-What-is-new-in-PostgreSQL-9.0-RPMs.html
and
http://people.planetpostgresql.org/devrim/index.php?/archives/50-Upgrading-from-8.4-to-9.0-on-Fedora-Red-Hat-CentOS-using-RPMs.html

The machine was originally built with CentOS 5.3. I installed the PGDG 
8.4 RPMs from yum.pgrpms.org. The machine has been in-service for a 
couple years during which it has been updated and is now up to CentOS 5.6.


Upon reading that the RPMS are now being built so that multiple versions 
can peacefully coexist, I attempted to install 
pgdg-centos-9.0-2.noarch.rpm but got an error saying that there was a 
conflict with /etc/pki/rpm-gpg/RPM-GPG-KEY-PGDG from 8.4. Reading that 
there were still issues that required forcing, I forced it.


Recalling the key-conflict issue from a year or so ago, I erased both 
the pgdg-centos-8 and 9 rpms (just the basic ones that update the repo, 
not the server/client/contribs), reinstalled the latest 
pgdg-centos-8..., ran "yum clean" then "yum upgrade" then confirmed that 
8.4 was up to 8.4.8 and working.


I then installed the pgdg-centos-9... repo rpm without trouble. No more 
key conflict messages and "yum list" shows both the 8.4 and 9.0 
packages. But 9.0 still refuses to install:


Resolving Dependencies
--> Running transaction check
---> Package postgresql90-contrib.i386 0:9.0.4-1PGDG.rhel5 set to be updated
--> Processing Dependency: postgresql90 = 9.0.4 for package: 
postgresql90-contrib
--> Processing Dependency: libossp-uuid.so.15 for package: 
postgresql90-contrib

---> Package postgresql90-server.i386 0:9.0.4-1PGDG.rhel5 set to be updated
--> Running transaction check
---> Package postgresql90.i386 0:9.0.4-1PGDG.rhel5 set to be updated
--> Processing Dependency: postgresql90-libs = 9.0.4-1PGDG.rhel5 for 
package: postgresql90

---> Package uuid.i386 0:1.5.1-4.rhel5 set to be updated
--> Running transaction check
---> Package postgresql90-libs.i386 0:9.0.4-1PGDG.rhel5 set to be updated
--> Processing Conflict: postgresql-server conflicts postgresql < 7.4
--> Finished Dependency Resolution
postgresql-server-8.4.8-1PGDG.rhel5.i386 from installed has depsolving 
problems

  --> postgresql-server conflicts with postgresql90
Error: postgresql-server conflicts with postgresql90
 You could try using --skip-broken to work around the problem
 You could try running: package-cleanup --problems
package-cleanup --dupes
rpm -Va --nofiles --nodigest

Thoughts/suggestions?

Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Lock problem

2011-09-07 Thread Eduardo Piombino
Hello list, I'm having a locking problem and I'm not sure what is causing
it.
I have two pgsql concurrent transactions, running each in a separate
connection to postgres (I can reproduce it from pgadmin).

T1) operates only on table A

begin transaction;
select id from A where id = 100 for update nowait;
update A set x = x + 15 where id = 100;
update A set x = x + 15 where id = 100;
commit;

T2) operates only on table B

begin transaction;
select x from B where id = 116 for update nowait;
update B set x = x + 1;
commit;

If I run transaction T1 up to the beginning of the second update, and then I
stall there, transaction T2 is allowed to do the select for update with no
problem at all.

However, if transaction T1 goes a step further, and does the second update,
from that point on, transaction T2 is not able to get the lock on B.

I don't see how a new update to the same record in A, makes the difference
to allow or deny the lock on a row on table B;

This behaviour is backed up with a consistent increase in the locks from the
server status views.

I don't see how:

select * from A for update nowait;
update A set x = x + 1;

has a different effect than (locks-wise)

select * from A for update nowait;
update A set x = x + 1;
update A set x = x + 1;

PS: The only relation between A and B is that A has a two FKs to B, but none
of them are even included in the updates.

I don't see how a second update (identical to the previous one if you wish)
to A on T1 will prevent T2 from getting a row level lock on B.

Does anyone have an explanation on why this happens?
Thank you,
Eduardo.


Re: [GENERAL] Lock problem

2011-09-07 Thread Tom Lane
Eduardo Piombino  writes:
> I don't see how a new update to the same record in A, makes the difference
> to allow or deny the lock on a row on table B;

I think it's probably explained by this:

> PS: The only relation between A and B is that A has a two FKs to B, but none
> of them are even included in the updates.

IIRC there are some optimizations in the FK stuff that don't apply once
a single transaction has updated a relevant row more than once.  You
haven't given enough details (not even a PG version) to be sure about
it, but that's what I'd bet on.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] master-side counterpart of pg_last_xact_replay_timestamp?

2011-09-07 Thread Fujii Masao
On Thu, Sep 8, 2011 at 7:06 AM, Chris Redekop  wrote:
> Is there anything available to get the last time a transaction
> occurred?like say "pg_last_xact_timestamp"?

No.

> In order to accurately
> calculate how far behind my slave is I need to do something like
> master::pg_last_xact_timestamp() -
> slave::pg_last_xact_replay_timestamp()currently I'm using now() instead
> of the pg_last_xact_timestamp() call, but then when the master is not busy
> the slave appears to lag behind.  I'm considering writing a C module to get
> the last modified file time of the xlog, but I'm hoping there is a better
> alternative that I haven't found yet

Your complaint makes sense. I'll implement something like
pg_last_xact_timestamp() for 9.2. But unfortunately there is
no way to know such a timestamp on the master in 9.1.

Regards,

-- 
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] master-side counterpart of pg_last_xact_replay_timestamp?

2011-09-07 Thread Simon Riggs
On Thu, Sep 8, 2011 at 6:43 AM, Fujii Masao  wrote:
> On Thu, Sep 8, 2011 at 7:06 AM, Chris Redekop  wrote:
>> Is there anything available to get the last time a transaction
>> occurred?like say "pg_last_xact_timestamp"?
>
> No.
>
>> In order to accurately
>> calculate how far behind my slave is I need to do something like
>> master::pg_last_xact_timestamp() -
>> slave::pg_last_xact_replay_timestamp()currently I'm using now() instead
>> of the pg_last_xact_timestamp() call, but then when the master is not busy
>> the slave appears to lag behind.  I'm considering writing a C module to get
>> the last modified file time of the xlog, but I'm hoping there is a better
>> alternative that I haven't found yet
>
> Your complaint makes sense. I'll implement something like
> pg_last_xact_timestamp() for 9.2. But unfortunately there is
> no way to know such a timestamp on the master, in 9.1..


I see the reason, but would be against that change.

We don't currently generate a timestamp for each WAL record. Doing so
would be a performance drain and a contention hotspot.

I think Chris should change his function to a CASE statement so that
his function returns zero when master and slave have matching WAL
positions, and we only calculate the delay when there is outstanding
WAL.
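
(A minimal sketch of that suggestion, run on the slave and assuming the
9.1 functions; treat it as illustrative:)

SELECT CASE
         WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
           THEN 0
         ELSE extract(epoch FROM now() - pg_last_xact_replay_timestamp())
       END AS replication_lag_seconds;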

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Select Output in XML format

2011-09-07 Thread Adarsh Sharma

Dear all,

Today I need to write the output of a Postgres table in XML format.
I think there is an easy way to do this, but I am not able to find it.

In MySQL there is a simple command for that:

mysql -X -e "select * from db_name.master"  > /hdd2-1/test.xml

In Postgres, I found some XML data types, but how can we put our select
output into an XML file?
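
(Not from the thread, but a hedged sketch of one way to do this with the
built-in XML functions available since 8.3 -- the table name "master" is
taken from the mysql example above:)

-- inside psql, or via: psql -d db_name -At -c "..." -o /hdd2-1/test.xml
SELECT table_to_xml('master', true, false, '');
-- or, for an arbitrary query:
SELECT query_to_xml('select * from master', true, false, '');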



Thanks

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general