On Feb 3, 2012, at 6:23 AM, Ivan Voras wrote:
> On 31/01/2012 09:07, CSS wrote:
>> Hello all,
>>
>> Just wanted to share some results from some very basic benchmarking
>> runs comparing three disk configurations on the same hardware:
>>
>> http://morefoo.com/bench.html
>
> That's great!
Thanks!
On Fri, Feb 10, 2012 at 5:40 PM, Josh Berkus wrote:
> Peter,
>
>> We've funded some work by Peter Geoghegan to make pg_stat_statements
>> more useful, but the patch is currently sitting in the commitfest in
>> need of a champion. I'd very much like to see it landed.
>
> Ok, let me review it then ...
Peter,
> We've funded some work by Peter Geoghegan to make pg_stat_statements
> more useful, but the patch is currently sitting in the commitfest in
> need of a champion. I'd very much like to see it landed.
Ok, let me review it then ...
> Between that work, 9.2, and Dimitri's extension whitelist
On Fri, Feb 10, 2012 at 11:32 AM, Josh Berkus wrote:
> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> So one thought would be to add in pg_stat_statements to your platform
> ... something I'd like to see Heroku do anyway. Then you can sample
> this across dozens (or hundreds) of user databases,
Hm. Tried running ANALYZE. Took almost 10 minutes to run. (Don't know if it
would have been run automatically since I last tried this yesterday, but
figured it couldn't hurt.) Still, no difference:
http://explain.depesz.com/s/xHq
Actually, it's 10x worse (maybe because this is my first time running
On Friday, February 10, 2012 at 20:32:50, Josh Berkus wrote:
> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> > Hmm, perhaps we could usefully aggregate auto_explain output.
>
> The other option is to take a statistical approach. After all, what you
> want to do is optimize average response times across all your users' databases.
On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> Hmm, perhaps we could usefully aggregate auto_explain output.
The other option is to take a statistical approach. After all, what you
want to do is optimize average response times across all your users'
databases, not optimize for a few specific queries.
On Fri, Feb 10, 2012 at 2:00 PM, David Yeu wrote:
>> From your OP:
>>
>>> * … WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;
>
> Yup, sorry.
Ah, ok, so that should do it.
If you need further improvement, remember to take a look at the deleted stuff.
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
On Feb 10, 2012, at 11:58 AM, Claudio Freire wrote:
> From your OP:
>
>> * … WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;
Yup, sorry.
Dave
On Fri, Feb 10, 2012 at 1:45 PM, David Yeu wrote:
> On Feb 10, 2012, at 11:26 AM, Claudio Freire wrote:
>> That, and an index on "(group_id, created_at) where (deleted_at IS
>> NULL)" to catch the sorted by date kind of query, and you'll be done I
>> think.
>
> Yeah, I didn't quite get that right
On Feb 10, 2012, at 11:26 AM, Claudio Freire wrote:
> That, and an index on "(group_id, created_at) where (deleted_at IS
> NULL)" to catch the sorted by date kind of query, and you'll be done I
> think.
Yeah, I didn't quite get that right -- we're actually sorting all these queries
by "id DESC",
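Putting that correction together with Claudio's suggestion, the partial index would go on (group_id, id) rather than (group_id, created_at). A rough sketch of the idea, using SQLite from Python purely for illustration (SQLite also supports partial indexes; the table and column names just mirror the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lines (id INTEGER PRIMARY KEY, "
             "group_id INTEGER, deleted_at TEXT)")
# Partial index over live rows only: its WHERE clause matches the
# "deleted_at IS NULL" filter, and (group_id, id) matches the equality
# plus the ORDER BY id DESC used by the queries.
conn.execute("CREATE INDEX lines_live_idx ON lines (group_id, id) "
             "WHERE deleted_at IS NULL")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM lines "
    "WHERE deleted_at IS NULL AND group_id = ? "
    "ORDER BY id DESC LIMIT 20", (1,)).fetchall()
# The detail column of the plan shows which index the planner picked.
uses_partial = any("lines_live_idx" in row[3] for row in plan)
```

Because the query's filter implies the index predicate, the planner can use the small live-rows index and skip deleted rows entirely.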
On Fri, Feb 10, 2012 at 1:19 PM, David Yeu wrote:
>> => EXPLAIN ANALYZE SELECT "lines".* FROM "lines" WHERE (lines.deleted_at IS
>> NULL) AND ("lines".group_id = ?) AND (id < ?) ORDER BY id DESC LIMIT 20
>> OFFSET 0;
Interesting...
Do you have many "deleted" rows?
Do you always filter them out?
Yeah, Reply-All...
Begin forwarded message:
> From: David Yeu
> Subject: Re: [PERFORM] Performance on large, append-only tables
> Date: February 10, 2012 10:59:04 AM EST
> To: Merlin Moncure
>
> On Feb 10, 2012, at 10:19 AM, Merlin Moncure wrote:
>
>> You can probably significantly optimize t
David Yeu wrote:
> We have indices against the primary key and the group_id.
> Our queries essentially fall into the following cases:
>
> * … WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;
> * … WHERE group_id = ? AND id > ? ORDER BY created_at DESC;
> * … WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;
David Yeu writes:
> Our queries essentially fall into the following cases:
> * WHERE group_id = ? ORDER BY created_at DESC LIMIT 20;
> * WHERE group_id = ? AND id > ? ORDER BY created_at DESC;
> * WHERE group_id = ? AND id < ? ORDER BY created_at DESC LIMIT 20;
> * WHERE group_id = ?
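Most of the patterns quoted above can be served by a single composite index on (group_id, created_at), so the ORDER BY ... LIMIT 20 becomes a backward index scan instead of a sort. A minimal sketch, using SQLite from Python purely for illustration (names follow the thread; the schema is assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lines (id INTEGER PRIMARY KEY, "
             "group_id INTEGER, created_at TEXT)")
# One composite index covers both the equality on group_id and the sort key.
conn.execute("CREATE INDEX lines_group_created_idx "
             "ON lines (group_id, created_at)")

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM lines WHERE group_id = ? "
    "ORDER BY created_at DESC LIMIT 20", (1,)).fetchall()
details = [row[3] for row in plan]
# The index satisfies the ORDER BY, so no separate sort step appears.
uses_index = any("lines_group_created_idx" in d for d in details)
needs_sort = any("TEMP B-TREE" in d for d in details)
```

Separate single-column indexes on the primary key and on group_id alone cannot give this: the planner would still have to fetch every row in the group and sort it before applying the LIMIT.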
On Wed, Feb 8, 2012 at 20:03, David Yeu wrote:
> * … WHERE group_id = ? ORDER BY created_at DESC LIMIT 20 OFFSET ?;
> * Pages of twenty rows.
A good improvement for this sort of query is the "scalable paging"
trick. Instead of increasing the OFFSET argument -- which means that
Postgres has to fetch and discard all the rows before the requested
offset -- remember the last id seen on the previous page and filter on it.
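That trick can be sketched end to end; this uses SQLite from Python purely for illustration (the schema and names are made up to mirror the thread):

```python
import sqlite3

# In-memory stand-in for the large append-only table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lines (id INTEGER PRIMARY KEY, group_id INTEGER)")
conn.executemany("INSERT INTO lines (id, group_id) VALUES (?, ?)",
                 [(i, i % 3) for i in range(1, 101)])

def first_page(group_id, page_size=20):
    return conn.execute(
        "SELECT id FROM lines WHERE group_id = ? "
        "ORDER BY id DESC LIMIT ?", (group_id, page_size)).fetchall()

def next_page(group_id, last_id, page_size=20):
    # Keyset pagination: filter on the last id from the previous page
    # instead of OFFSET, so the scan starts right where it left off and
    # never has to discard already-paged rows.
    return conn.execute(
        "SELECT id FROM lines WHERE group_id = ? AND id < ? "
        "ORDER BY id DESC LIMIT ?", (group_id, last_id, page_size)).fetchall()

page1 = first_page(0)
page2 = next_page(0, page1[-1][0])
```

This also matches the `AND id < ? ORDER BY id DESC LIMIT 20` pattern already in use, so an index on (group_id, id) serves every page at the same cost as the first one.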
On Wed, Feb 8, 2012 at 3:03 PM, David Yeu wrote:
> Thankfully, the types of queries that we perform against this table are
> pretty constrained. We never update rows and we never join against other
> tables. The table essentially looks like this:
>
> | id | group_id | created_at | everything else…
On Wed, Feb 8, 2012 at 12:03 PM, David Yeu wrote:
> Hi there,
>
> We've got a pretty large table that sees millions of new rows a day, and
> we're trying our best to optimize queries against it. We're hoping to find
> some guidance on this list.
>
> Thankfully, the types of queries that we perform
Hi there,
We've got a pretty large table that sees millions of new rows a day, and
we're trying our best to optimize queries against it. We're hoping to find
some guidance on this list.
Thankfully, the types of queries that we perform against this table are
pretty constrained. We never update rows