Re: [PERFORM] pgsql-performance issue

2016-08-29 Thread Andres Freund
Hi,

On 2016-08-20 08:38:43 +, debasis.mohar...@ipathsolutions.co.in wrote:
> I have a PostgreSQL 9.5 instance running on a Windows 8 machine with 4GB of
> RAM. This server is mainly used for inserting/updating large amounts of data
> via copy/insert/update commands, and seldom for running select queries.
> 
> Here are the relevant configuration parameters I changed:
> 
> max_connections = 100
> shared_buffers = 512MB
> effective_cache_size = 3GB
> work_mem = 12233kB
> maintenance_work_mem = 256MB
> min_wal_size = 1GB
> max_wal_size = 2GB
> checkpoint_completion_target = 0.7
> wal_buffers = 16MB
> default_statistics_target = 100
> 
> After applying these settings in postgresql.conf, I ran a select query that
> fetches about 29000 records; in PostgreSQL it takes 10.3 seconds, but the
> same query takes 2 seconds in MSSQL.
>
> So my question is how to improve the performance in PostgreSQL.
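
As a side note, it is worth double-checking that the changed settings are
actually in effect in the running server, for example:

  SHOW shared_buffers;
  SHOW work_mem;
  -- after editing postgresql.conf, reload the configuration;
  -- shared_buffers additionally requires a server restart
  SELECT pg_reload_conf();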

Please provide the output of EXPLAIN (ANALYZE, BUFFERS) for your query, along
with the query itself. Then we'll possibly be able to help you; at the moment
we don't have enough information.
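
For example, something along these lines, where the table name and filter are
only placeholders for your real statement:

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM your_table WHERE some_column = 42;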

Regards,

Andres


Re: [PERFORM] Slow query with big tables

2016-08-29 Thread Tommi Kaksonen
On Sat, Aug 27, 2016 at 18:33 GMT+03:00, Jeff Janes wrote:

> Partitioning the Feature and Point tables on measurement_time (or
> measurement_start_time, you are not consistent on what it is called) might
> be helpful.  However, measurement_time does not exist in those tables, so
> you would first have to de-normalize by introducing it into them.
>
> More likely to be helpful would be precomputing the aggregates and storing
> them in a materialized view (not available in 9.2).   Also, more RAM and
> better hard-drives can't hurt.

Thanks a lot for the help and all the suggestions. Before this I tried
partitioning by measurement_id (Feature table) and by result_feature_id (Point
table), but the performance was worse than without partitioning. Using
measurement_time in partitioning might be a better idea
(measurement_start_time was meant to be measurement_time).
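
For reference, a rough, untested sketch of inheritance-based partitioning of
the Feature table by measurement_time (the column names and types are only
guesses, and it assumes measurement_time has first been de-normalized into
the table as Jeff suggested):

  -- parent table; schema is assumed here for illustration only
  CREATE TABLE feature (
      id               bigserial,
      measurement_id   bigint      NOT NULL,
      measurement_time timestamptz NOT NULL
  );

  -- one child table per month; the CHECK constraint lets
  -- constraint_exclusion skip partitions outside the queried time range
  CREATE TABLE feature_2016_08 (
      CHECK (measurement_time >= '2016-08-01'
         AND measurement_time <  '2016-09-01')
  ) INHERITS (feature);

  CREATE INDEX ON feature_2016_08 (measurement_time);

  -- inserts still have to be routed to the correct child table,
  -- e.g. with a trigger on the parent or from the application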

I think I will update to a newer version, use better hardware and try
materialized views for better performance.
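
A precomputed-aggregate materialized view (available from 9.3 onwards) could
then look something like this; the value column and the aggregates are only
placeholders, since the actual query is not part of this thread, and it again
assumes measurement_time has been added to the Feature table:

  CREATE MATERIALIZED VIEW feature_daily_stats AS
  SELECT f.measurement_id,
         date_trunc('day', f.measurement_time) AS day,
         count(*)     AS point_count,
         avg(p.value) AS avg_value             -- placeholder aggregate
  FROM   feature f
  JOIN   point p ON p.result_feature_id = f.id
  GROUP  BY f.measurement_id, date_trunc('day', f.measurement_time);

  -- re-run after loading new data
  REFRESH MATERIALIZED VIEW feature_daily_stats;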

Best Regards,
Tommi Kaksonen


2016-08-27 21:33 GMT+03:00 Jeff Janes:

> On Fri, Aug 26, 2016 at 6:17 AM, Tommi K wrote:
>
>> Hello,
>> thanks for the response. I did not get the response to my email even
>> though I am subscribed to the pgsql-performance mail list. Let's hope that
>> I get the next one :)
>>
>> Increasing work_mem did not have a great impact on the performance. But I
>> will try to update the PostgreSQL version to see if it speeds things up.
>>
>> However, is there a way to keep the query time constant as the database
>> size grows?
>>
>
> Not likely.  If the number of rows you are aggregating grows, it will take
> more work to do those aggregations.
>
> If the number of rows being aggregated doesn't grow, because all the
> growth occurs outside of the measurement_time range, even then the new
> data will still make it harder to keep the stuff you want cached in
> memory.  If you really want more-constant query time, you could approach
> that by giving the machine as little RAM as possible.  This works not by
> making the large database case faster, but by making the small database
> case slower.  That usually is not what people want.
>
>
>
>> Should I use partitioning or partial indexes?
>>
>
> Partitioning the Feature and Point tables on measurement_time (or 
> measurement_start_time,
> you are not consistent on what it is called) might be helpful.  However,
> measurement_time does not exist in those tables, so you would first have
> to de-normalize by introducing it into them.
>
> More likely to be helpful would be precomputing the aggregates and storing
> them in a materialized view (not available in 9.2).   Also, more RAM and
> better hard-drives can't hurt.
>
> Cheers,
>
> Jeff
>