On Sat, Aug 27, 2016 at 18:33 GMT+03:00, Jeff Janes
wrote:
> Partitioning the Feature and Point tables on measurement_time (or
> measurement_start_time,
> you are not consistent on what it is called) might be helpful. However,
> measurement_time does not exist
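A minimal sketch of what that partitioning could look like (table and column names are assumed from the thread; on 9.x only inheritance-based partitioning is available, since declarative PARTITION BY RANGE arrived in PostgreSQL 10):

```sql
-- One child table per month, constrained by a CHECK on the time column.
CREATE TABLE "Point_2016_08" (
    CHECK (measurement_start_time >= DATE '2016-08-01'
       AND measurement_start_time <  DATE '2016-09-01')
) INHERITS ("Point");

-- Let the planner skip partitions whose CHECK constraint excludes
-- the queried time range.
SET constraint_exclusion = partition;
```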
On Fri, Aug 26, 2016 at 6:17 AM, Tommi K wrote:
> Hello,
> thanks for the response. I did not get the response to my email even
> though I am subscribed to the pgsql-performance mailing list. Let's hope that
> I get the next one :)
>
> Increasing work_mem did not have great impact
On Sat, Aug 27, 2016 at 7:13 AM, Craig James wrote:
> On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby
> wrote:
>
>> On 8/26/16 3:26 PM, Mike Sofen wrote:
>>
>>> Is there a way to keep query time constant as the database size grows?
>>>
>>
>> No. More
Craig James writes:
> Straight hash-table indexes (which Postgres doesn't use) have O(1) access
> time. The amount of data has no effect on the access time.
This is wishful thinking --- once you have enough data, O(1) goes out the
window. For example, a hash index is
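One way to probe this claim empirically (a sketch, not from the thread; note that hash indexes were not WAL-logged before PostgreSQL 10 and were generally discouraged on the 9.x versions discussed here):

```sql
-- Build synthetic data, index it both ways, and compare lookup times
-- with EXPLAIN ANALYZE as the row count grows.
CREATE TABLE t AS
  SELECT g AS id, md5(g::text) AS v
  FROM generate_series(1, 1000000) g;

CREATE INDEX t_hash  ON t USING hash  (id);
CREATE INDEX t_btree ON t USING btree (id);

EXPLAIN ANALYZE SELECT v FROM t WHERE id = 123456;
```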
On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby wrote:
> On 8/26/16 3:26 PM, Mike Sofen wrote:
>
>> Is there a way to keep query time constant as the database size grows?
>>
>
> No. More data == more time. Unless you find a way to break the laws of
> physics.
>
Straight
> *Cc:* andreas kretschmer <akretsch...@spamfence.net>;
> pgsql-performance@postgresql.org
> *Subject:* Re: [PERFORM] Slow query with big tables
>
>
>
> Ok, sorry that I did not add the original message. I thought that it would
> be automatically add
On 8/26/16 3:26 PM, Mike Sofen wrote:
Is there a way to keep query time constant as the database size grows?
No. More data == more time. Unless you find a way to break the laws of
physics.
Should I use partitioning or partial indexes?
Neither technique is a magic bullet. I doubt either
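For illustration, a partial index on the thread's schema might look like this (table and column names assumed). It shrinks the index, but a query must repeat the WHERE predicate to be able to use it, and it does not change the overall growth behaviour:

```sql
-- Covers only recent rows; lookups on older data fall back to other indexes.
CREATE INDEX measurement_recent_idx
    ON "Measurement" (measurement_start_time)
 WHERE measurement_start_time >= DATE '2016-01-01';
```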
Ok, sorry that I did not add the original message. I thought that it would
be automatically added to the message thread.
Here is the question again:
Is there a way to keep query time constant as the database size grows? Should
I use partitioning or partial indexes?
Thanks,
Tommi Kaksonen
>
On Fri, Aug 26, 2016 at 6:17 AM, Tommi K wrote:
> Hello,
> thanks for the response. I did not get the response to my email even
> though I am subscribed to the pgsql-performance mailing list. Let's hope that
> I get the next one :)
>
Please include the email you are replying to
Hello,
thanks for the response. I did not get the response to my email even though
I am subscribed to the pgsql-performance mailing list. Let's hope that I get
the next one :)
Increasing work_mem did not have great impact on the performance. But I
will try to update the PostgreSQL version to see if
Tommi Kaksonen wrote:
> ---Version---
> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit
The current point release for 9.2 is 9.2.18; you are several years behind.
The plan seems okay for me, apart from the on-disk sort: increase
work_mem to avoid that.
If I were you, I
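A minimal way to test that work_mem advice (the value is illustrative; size it to the sort actually reported): raise it for the session and re-run EXPLAIN ANALYZE, and the Sort node should switch from "external merge Disk: ..." to "quicksort Memory: ...".

```sql
SET work_mem = '256MB';          -- session-local; does not affect other backends
EXPLAIN (ANALYZE, BUFFERS) ...   -- re-run the slow query from the thread here
```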
Hello,
I have the following tables and query. I would like to get some help to
find out why it is slow and how its performance could be improved.
Thanks,
Tommi K.
*--Table definitions---*
CREATE TABLE "Measurement"
(
id bigserial NOT NULL,
product_id bigserial NOT NULL,
nominal_data_id
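An aside not raised in the thread: product_id is declared bigserial, which attaches an unnecessary sequence to the column. For what looks like a reference column, a plain bigint is usually intended (hypothetical fragment, since the full schema is truncated here):

```sql
product_id bigint NOT NULL REFERENCES "Product" (id)
```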