On Thu, Nov 21, 2019 at 11:57:22PM +0800, Andy Fan wrote:
On Thu, Nov 21, 2019 at 6:12 PM Tomas Vondra wrote:
> On Thu, Nov 21, 2019 at 08:30:51AM +0800, Andy Fan wrote:
> >> Hm. That actually raises the stakes a great deal, because if that's
> >> what you're expecting, it would require planning out both the
> >> transformed and untransformed versions of the query before you could
> >> make a cost comparison.
…
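The concern quoted here (planning out both query shapes, and only then comparing costs) can be sketched as a toy cost model. Everything below is invented for illustration: the `Plan` class, the two cost functions, and the numbers are hypothetical, not PostgreSQL planner internals.

```python
from dataclasses import dataclass

# Toy illustration of "plan both, then compare": the planner must pay for
# two full planning passes before it knows which shape is cheaper.

@dataclass
class Plan:
    shape: str
    cost: float

def plan_correlated(stats):
    # Hypothetical cost model: the subplan is re-executed once per outer row.
    return Plan("correlated subplan", stats["outer_rows"] * stats["subplan_cost"])

def plan_groupby_join(stats):
    # Hypothetical cost model: one aggregate pass over the table plus a join.
    return Plan("groupby + join", stats["table_rows"] + stats["outer_rows"])

def choose(stats):
    # Both candidates are fully planned before the cost comparison happens.
    candidates = [plan_correlated(stats), plan_groupby_join(stats)]
    return min(candidates, key=lambda p: p.cost)

best = choose({"outer_rows": 1000, "subplan_cost": 50, "table_rows": 10000})
```

With these made-up statistics the rewritten shape wins (11000 vs 50000), but with few outer rows the correlated form would win, which is exactly why a cost comparison, and hence double planning, is needed.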
On Thu, Nov 21, 2019 at 08:30:51AM +0800, Andy Fan wrote:
> Hm. That actually raises the stakes a great deal, because if that's
> what you're expecting, it would require planning out both the transformed
> and untransformed versions of the query before you could make a cost
> comparison.

I don't know an official name, let's call it as "bloom filter…
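Andy's message is cut off here, so it is unclear exactly which mechanism he means by "bloom filter". For readers unfamiliar with the term in its usual sense, the following is only a generic sketch of a Bloom filter (a compact bit array probed at k hash positions, giving possible false positives but never false negatives), not a claim about what Greenplum or PostgreSQL do:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: probabilistic membership testing."""
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits, self.nhashes = nbits, nhashes
        self.bits = bytearray(nbits // 8)

    def _positions(self, item):
        # Derive nhashes independent bit positions from SHA-256 digests.
        for i in range(self.nhashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.nbits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # True for every added item; may (rarely) be True for absent ones.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
for brand in ("b1", "b2", "b3"):
    bf.add(brand)
```

In join processing such a filter is typically built from the smaller side's keys and used to discard non-matching rows early; whether that matches Andy's intent cannot be confirmed from the truncated message.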
On Wed, Nov 20, 2019 at 12:34:25PM -0800, Xun Cheng wrote:
On Wed, Nov 20, 2019 at 11:18 AM Tomas Vondra wrote:
> On Wed, Nov 20, 2019 at 12:36:50PM -0500, Tom Lane wrote:
> > Tomas Vondra writes:
> >> On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:
> >>> I'm content to say that the application should have written the query
> >>> with a GROUP BY to begin with.
…
On Wed, Nov 20, 2019 at 12:36:50PM -0500, Tom Lane wrote:
Tomas Vondra writes:
> On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:
>> I'm content to say that the application should have written the query
>> with a GROUP BY to begin with.
> I'm not sure I agree with that. The problem is this really depends on
> the number of rows that will need t…
On Wed, Nov 20, 2019 at 11:12:56AM -0500, Tom Lane wrote:
Daniel Gustafsson writes:
>> On 20 Nov 2019, at 13:15, Andy Fan wrote:
>> 2. why pg can't do it, while greenplum can?
> It's worth noting that Greenplum, the example you're referring to, is using a
> completely different query planner, and different planners have different
> characteristics and capabilities.

I'm content to say that the application should have written the query
with a GROUP BY to begin with.
On Wed, Nov 20, 2019 at 08:15:19PM +0800, Andy Fan wrote:
Hi Hackers:

First I found the following queries running bad on pg.

select count(*) from part2 p1 where p_size > 40 and p_retailprice >
(select avg(p_retailprice) from part2 p2 where p2.p_brand=p1.p_brand);

the plan is …
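For readers who want to reproduce the shape of the problem, the correlated query and its hand-written GROUP BY-plus-join rewrite (the transformation debated later in the thread) can be compared on toy data. The table contents below are invented, and SQLite stands in for PostgreSQL purely for illustration:

```python
import sqlite3

# Toy stand-in for the part2 table from the thread; column names match the
# query under discussion, rows are made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE part2 (p_brand TEXT, p_size INT, p_retailprice REAL)")
con.executemany("INSERT INTO part2 VALUES (?,?,?)", [
    ("b1", 50, 10.0), ("b1", 45, 20.0), ("b1", 30, 30.0),
    ("b2", 60, 5.0),  ("b2", 41, 15.0),
])

# Original form: correlated subquery, conceptually re-evaluated per outer row.
q_corr = """
SELECT count(*) FROM part2 p1
WHERE p_size > 40 AND p_retailprice >
  (SELECT avg(p_retailprice) FROM part2 p2 WHERE p2.p_brand = p1.p_brand)
"""

# Hand-rewritten form: aggregate once per brand, then join back.
q_join = """
SELECT count(*)
FROM part2 p1
JOIN (SELECT p_brand, avg(p_retailprice) AS avg_price
      FROM part2 GROUP BY p_brand) a ON a.p_brand = p1.p_brand
WHERE p1.p_size > 40 AND p1.p_retailprice > a.avg_price
"""

c_corr = con.execute(q_corr).fetchone()[0]
c_join = con.execute(q_join).fetchone()[0]
print(c_corr, c_join)  # the two forms count the same rows
```

The rewrite is equivalent here because every p1 row's brand necessarily appears in the self-aggregate, so the inner join drops nothing; in the general case (NULL join keys, outer rows with no match) such rewrites need more care, which is part of why the thread treats the transformation as a planner decision rather than a free win.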
> On 20 Nov 2019, at 13:15, Andy Fan wrote:
> 2. why pg can't do it, while greenplum can?
It's worth noting that Greenplum, the example you're referring to, is using a
completely different query planner, and different planners have different
characteristics and capabilities.
cheers ./daniel