On 2011-11-16, Dhimant Patel wrote:
> I have Postgres (PostgreSQL) 9.0.3 running.
> I also created several procedures/functions and now I don't remember the
> last procedure I worked on! - I thought I could always get this from
> metadata.
>
> Now I'm stuck - couldn't find these details anywhere.
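For what it's worth, the system catalogs don't record when a function was created or last modified, so there is no authoritative answer in the metadata. As a rough heuristic only (OIDs are assigned from a counter that can wrap, so this is not a guarantee), more recently created functions tend to have higher OIDs:

```sql
-- Heuristic sketch: list user-defined functions, newest OIDs first.
-- OID order roughly tracks creation order but is not guaranteed.
SELECT p.oid, n.nspname AS schema, p.proname AS function_name
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY p.oid DESC;
```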
On Thu, 08 Dec 2011 23:12:36, Raymond O'Donnell wrote:
> Just wondering, and without intending to cast any aspersions on the
> poster - is this spam or legit? I didn't take the risk of actually
> clicking it...
>
> There have been a few posts like this recently - links without any
> comment
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
Hi,
> Andreas Brandl writes:
> > we're currently investigating a statistics issue on postgres. We
> > have some tables which frequently show up with strange values for
> > n_live_tup. If you compare those values with a count on that
> > particular table, there is a mismatch of factor 10-30. This
Andreas Brandl writes:
>> The planner doesn't use n_live_tup;
> I'm just curious: where does the planner take the (approximate) row-count
> from?
It uses the tuple density estimated by the last vacuum or analyze (viz,
reltuples/relpages) and multiplies that by the current relation size.
There a
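That calculation can be inspected directly from the catalogs. A sketch of reproducing the planner's estimate by hand (the table name 'mytable' is a placeholder):

```sql
-- density = reltuples / relpages (as of the last VACUUM/ANALYZE),
-- multiplied by the table's *current* size in pages.
SELECT c.reltuples,
       c.relpages,
       (c.reltuples / GREATEST(c.relpages, 1)) *
       (pg_relation_size(c.oid) / current_setting('block_size')::int)
         AS estimated_rows
FROM pg_class c
WHERE c.relname = 'mytable';
```

Because the density is frozen between vacuums while the page count is live, this estimate can legitimately diverge from both `count(*)` and `n_live_tup` on a table whose row width has changed.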
On 12/9/2011 4:57 PM, David Johnston wrote:
Functions are evaluated once for each row generated by the surrounding
query. This is particularly useful when the function in question
takes an aggregate as an input:
SELECT col1, array_processing_function( ARRAY_AGG( col2 ) )
FROM table
GROUP BY col1;
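The `array_processing_function` referenced here isn't defined in the thread; a hypothetical example of such a function (all names illustrative) that deduplicates and sorts the aggregated array:

```sql
-- Hypothetical function taking an aggregated array as input:
-- returns the distinct elements of a text array, sorted.
CREATE FUNCTION array_processing_function(vals text[]) RETURNS text[]
LANGUAGE sql IMMUTABLE AS $$
    SELECT ARRAY(SELECT DISTINCT v FROM unnest(vals) AS v ORDER BY v)
$$;
```

Called as above, it runs once per group produced by GROUP BY, not once per input row.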
On 12/10/2011 09:54 AM, Greg Smith wrote:
I'm planning to put that instrumentation into the database directly,
which is what people with Oracle background are asking for.
FWIW, even for folks like me who've come from a general OSS DB
background with a smattering of old Sybase and other primiti
On Sat, Dec 10, 2011 at 7:28 PM, Craig Ringer wrote:
> The main issue would be exempting queries that're expected to take longer
> than the slow query threshold, like reporting queries, where you wouldn't
> want to pay that overhead.
One trick you can use for this is to assign the reporting appli
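A common way to implement that trick (an assumption about where the truncated advice was going) is to give the reporting application its own role and override the slow-query threshold for that role alone:

```sql
-- Per-role override: statements run by this role are never logged as slow.
-- The role name is a placeholder.
CREATE ROLE reporting_app LOGIN;
ALTER ROLE reporting_app SET log_min_duration_statement = -1;  -- -1 disables
```

Per-role settings take effect at session start, so the reporting application just needs to connect as that role.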
On 12/10/2011 09:28 PM, Craig Ringer wrote:
One thing I think would be interesting for this would be to identify
slow queries (without doing detailed plan timing) and flag them for
more detailed timing if they're run again within time. I suspect
this would only be practical with parameterised
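No such two-pass facility exists built in, but the `auto_explain` contrib module covers part of this today: it logs the plan of any query exceeding a duration threshold, optionally with per-node timing (the threshold value below is just an example):

```sql
-- auto_explain is a standard contrib module; these settings exist,
-- the values are illustrative.
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '500ms';  -- log plans of slower queries
SET auto_explain.log_analyze = off;           -- skip detailed per-node timing
```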
Hello,
First off, thanks for a great product.
I've been looking at setting up replication on Windows between two servers
using pgsql 9.1
I'm going to give up for now though because I'm finding it difficult to get
it working correctly; after copying the \data directory as per the guide at
http://
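For reference, after copying the data directory, a 9.1 standby also needs a `recovery.conf` file in that directory before it will stream from the primary. A minimal sketch, with placeholder connection details:

```
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5432 user=replicator password=secret'
```

The primary must additionally have `wal_level = hot_standby` and a nonzero `max_wal_senders` set, with a replication entry for the standby in `pg_hba.conf`.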