Re: [GENERAL] Behavior of subselects in target lists and order by

2012-02-25 Thread Jasen Betts
On 2012-02-24, amit sehas cu...@yahoo.com wrote:
 If we have a query of the form:

 Select *, (Select * FROM T2 WHERE p2 = T1.p1) FROM T1 WHERE p3 = 75 ORDER BY 1

 In the above query there is a subselect in the target list and the
 ORDER BY has an ordinal number which indicates ordering by column 1. Does
 this mean that the above query will return all results from T1 that
 match p3 = 75, and all results from T2 that match p2 = T1.p1 for every
 match on T1, and order them all by the first column of T1 and T2?

 basically i am trying to determine if the order by clause has an effect
 only on the tuples of the outer select or on both the outer and inner
 select. Or are the results returned by the inner select treated as if
 they are part of a single tuple which includes the tuple from table T1?

 Is this implementation-specific behaviour, or does it conform to the SQL
 standard...?


with the exception of queries using distinct on (...), order by will
only affect the order in which the results are presented.  Where
distinct on is used it will also affect which distinct subset of
rows is presented.

Distinct on is outside the standard; otherwise postgres follows the
standard.
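
For illustration, a quick sketch with made-up table and column names
(t1(p1, p3), t2(p2, c)):

  -- plain case: ORDER BY 1 only changes the order the outer rows come back in;
  -- the subselect in the target list just supplies one extra column per t1 row
  SELECT t1.*,
         (SELECT t2.c FROM t2 WHERE t2.p2 = t1.p1 LIMIT 1) AS t2_c
  FROM t1
  WHERE t1.p3 = 75
  ORDER BY 1;

  -- distinct on case: ORDER BY also decides WHICH row survives in each group
  SELECT DISTINCT ON (p1) *
  FROM t1
  ORDER BY p1, p3 DESC;   -- keeps the row with the largest p3 for each p1

(A subselect in the target list must return a single column and at most one
row per outer row, so the inner results are never merged into the outer tuple
as extra rows; hence the single column and LIMIT 1 above.)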

-- 
⚂⚃ 100% natural


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to typecast an integer into a timestamp?

2012-02-25 Thread Jasen Betts
On 2012-01-27, bbo...@free.fr wrote:
 Hello!

 again quite a stupid problem i regularly run into
 and that i still haven't solved yet...

 again i used the type timestamp to keep track of modification time,
 and again it gets stupid and confusing.

oops!  (when recording events, timestamp with time zone is usually best)

 first of all the errors are labeled as timestamp without timezone, i
 only specified timestamp

it's the same thing since 8.1

 the data was created as a timestamp with php's mktime(), but when
 sending it to the database postgres complains that it's an int, and when i
 try to typecast it (with the ::timestamp suffix on the value), it says that
 it's not possible to convert an int to a timestamp (without time zone).

 so as usual i would discard the timezone datatype and alter the
 table to use integer instead, but this time i am wondering, since this
 datatype is present, there's surely a way to use it properly? but how?

just use a string in this format YYYY-MM-DD HH:MM:SS.sss +NN:NN
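
For example (hypothetical table and column names, value chosen arbitrarily):

  UPDATE stuff SET modified = '2012-02-25 16:13:05.000 +01:00' WHERE id = 1;

and if what you already have is the unix epoch integer that mktime() gave you,
to_timestamp() converts it to a timestamp with time zone:

  SELECT to_timestamp(1330186385);   -- 2012-02-25 ... (timestamptz)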

 

-- 
⚂⚃ 100% natural


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Rails pg setup question

2012-02-25 Thread Scott Ribe
What I'm trying to determine is: should I plan on using pgbouncer?

With Rails & Passenger, do the app server processes take and hold connections 
to the db? Or take & release with queries/connections?

This is not a scalability question; given the traffic the site will have, I only 
need a small handful of connections. It's a latency question: I don't want to 
be starting up new pg processes excessively.

(Yes, I thought about asking on the RoR list, since it's really a question 
about RoR behaviors, but on the other hand it involves pg optimization & best 
practice, which is, ahem, different from MySQL's.)

-- 
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rails pg setup question

2012-02-25 Thread Joshua D. Drake


On 02/25/2012 08:13 AM, Scott Ribe wrote:


What I'm trying to determine is: should I plan on using pgbouncer?

With Rails & Passenger, do the app server processes take and hold connections to 
the db? Or take & release with queries/connections?

This is not a scalability question; given the traffic the site will have, I only 
need a small handful of connections. It's a latency question: I don't want to 
be starting up new pg processes excessively.

(Yes, I thought about asking on the RoR list, since it's really a question about 
RoR behaviors, but on the other hand it involves pg optimization & best 
practice, which is, ahem, different from MySQL's.)



Alright, I did a quick Google and Passenger appears to be essentially 
mod_rails, which means you are going to generate a connection for every 
httpd process you have (unless something has changed that I don't know 
about).


So on the one hand it is good because your connections will be 
persistent; on the other hand it is bad because you will open connections 
even if you aren't using the database. That said, it is a rails app, so 
it is likely you are using the database no matter what.
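
If you want to check what is actually being held open, a quick look at
pg_stat_activity from psql shows the per-database connection counts
(nothing assumed here beyond permission to see the view):

  SELECT datname, count(*) FROM pg_stat_activity GROUP BY datname;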


Where pgbouncer might come in handy here is that it can hold open more 
connections than you have httpd processes, and thus you reduce the fork 
cost of a new backend process just because someone hits the website.


JD

--
Command Prompt, Inc. - http://www.commandprompt.com/
PostgreSQL Support, Training, Professional Services and Development
The PostgreSQL Conference - http://www.postgresqlconference.org/
@cmdpromptinc - @postgresconf - 509-416-6579

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rails pg setup question

2012-02-25 Thread Daniel Podlejski
Joshua D. Drake j...@commandprompt.com:

  Where pgbouncer might come in handy here is that it can hold open more
  connections than you have httpd processes, and thus you reduce the fork
  cost of a new backend process just because someone hits the website.

Unfortunately, ActiveRecord (Rails ORM) = 3.1 is broken by design:
https://github.com/rails/rails/issues/1627

-- 
Daniel Podlejski

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Stability in Windows?

2012-02-25 Thread Gavin Flower

On 25/02/12 04:39, Durumdara wrote:

Hi!

We planned to port some very old DBASE db into PGSQL.

But somebody said in a developer list that he tried with PGSQL (8.x) 
and it was very unstable on Windows (and it had problems when many 
users used it).


Other people also said that they used PGSQL only on Linux - and 
there were no problems with it, only some patches needed for speeding up 
writes...


What is your experience in this area?

Do you have the same experience on Windows?

The number of users is from 20 up to 100 (concurrent).

Thanks for any ideas, help, links, or information about this.

Regards:
   dd

Hmm...

I have heard that PostgreSQL is stable on a Microsoft O/S, but Linux 
tends to be a lot faster and more reliable than any Microsoft O/S, which 
is probably why you hear people saying they use Linux for PostgreSQL - 
it is more a comment about Microsoft software quality.



Cheers,
Gavin

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] What effect does destroying database related threads have on system resources?

2012-02-25 Thread Frank Church
An application I have written connects to a remote Postgresql 8.3/8.4
database using Lazarus/FreePascal components (Zeoslib), but it has been
timing out too often as the network gets busy in a bursty way. When the
network times out, restarting the app retrieves the data quickly; it is
always available in good time. The problem with some of the busier networks
is that timeout dialogs happen too often and it annoys the users, who have
to restart the app.

My approach then is to run the database for displaying locally (which is
actually the original design) and fetch the data from the remote database
in a background thread or in a background helper utility and insert it into
the local database, where the timeouts do not occur.

This creates 2 problems:

1. When the connection times out

2. When the query itself times out.

Since handling connection timeouts and query timeouts can be fraught, what I
want to do is run the whole process of retrieving the data in a
thread within the main program or in a helper utility, and destroy the
thread if it times out, but I can't be sure of what happens to the
resources created by the thread when it is destroyed, both in the program
code itself, in the libpq libraries, and on the server.

What are the consequences of working this way with libpq and the server?

Any ideas on what is likely to happen?

/voipfcGuy


[GENERAL] Four issues why old elephants lack performance: Explanation sought

2012-02-25 Thread Stefan Keller
Hi,

Recently Mike Stonebraker identified four areas where old elephants
lack performance [1]:

1. Buffering/paging
2. Locking/Multithreading
3. WAL logging
4. Latches (aka memory locks for concurrent access of btree structures
in buffer pool?).

He claims to have solved these issues while retaining SQL and ACID.
But the only issue I understood is #1, solved by loading all tuples in memory.
=> Are there any ideas on how to tell Postgres to aggressively load
all data into memory (issue #1)?

All the remaining issues make me wonder.
I actually doubt that there are alternatives even theoretically.
=> Can anyone help explain issues 2, 3 and 4 to me, their solutions,
and why Postgres would be unable to resolve them?

Yours, Stefan

[1] NewSQL vs NoSQL for New OLTP, NoSQLNow! Conference, August 2011.
http://voltdb.com/resources/videos

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Four issues why old elephants lack performance: Explanation sought

2012-02-25 Thread Scott Marlowe
On Sat, Feb 25, 2012 at 5:54 PM, Stefan Keller sfkel...@gmail.com wrote:
 Hi,

 Recently Mike Stonebraker identified four areas where old elephants
 lack performance [1]:

 1. Buffering/paging
 2. Locking/Multithreading
 3. WAL logging
 4. Latches (aka memory locks for concurrent access of btree structures
 in buffer pool?).

 He claims to have solved these issues while retaining SQL and ACID.
 But the only issue I understood is #1, solved by loading all tuples in memory.
 => Are there any ideas on how to tell Postgres to aggressively load
 all data into memory (issue #1)?

 All the remaining issues make me wonder.
 I actually doubt that there are alternatives even theoretically.
 => Can anyone help explain issues 2, 3 and 4 to me, their solutions,
 and why Postgres would be unable to resolve them?

 Yours, Stefan

 [1] NewSQL vs NoSQL for New OLTP, NoSQLNow! Conference, August 2011.
 http://voltdb.com/resources/videos

Here's a great speech he gave at the USENIX conference:

http://www.youtube.com/watch?v=uhDM4fcI2aI

Basically he makes the point that IF your dataset fits in memory and
you need fast performance, then using multiple machines like a RAID
array with everything in memory beats everything out there, and that's
the methodology he's shooting for.

For super fast transactional systems that fit in memory, I can see the
advantage of moving everything into memory and using redundant
servers, possibly geographically distant from each other, to ensure
durability.

But he does make the point as well that for LARGE systems that need
transactional integrity, there's still nothing that beats an elephant-like
system.

BTW, there are some other great presentations at that conference as
well.  The one or two about btrfs from an Oracle guy are quite
fascinating.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general