Re: [PERFORM] PostgreSQL Parallel Processing !

2012-01-27 Thread Marti Raudsepp
On Fri, Jan 27, 2012 at 06:31, sridhar bamandlapally
sridhar@gmail.com wrote:
 --------------------------------------------------------------------------
 | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
 --------------------------------------------------------------------------
 |   0 | SELECT STATEMENT  |      |  7444K|   944M| 16077   (4)| 00:03:13 |
 |   1 |  TABLE ACCESS FULL| EMP  |  7444K|   944M| 16077   (4)| 00:03:13 |
 --------------------------------------------------------------------------

Sorry to take this off topic, but... Seriously, over 3 minutes to read
944 MB of data? That's less than 5 MB/s, what's wrong with your
database? :)

Regards,
Marti

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] PostgreSQL Parallel Processing !

2012-01-27 Thread Vitalii Tymchyshyn

On 27.01.12 11:06, Marti Raudsepp wrote:

On Fri, Jan 27, 2012 at 06:31, sridhar bamandlapally
sridhar@gmail.com  wrote:

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  7444K|   944M| 16077   (4)| 00:03:13 |
|   1 |  TABLE ACCESS FULL| EMP  |  7444K|   944M| 16077   (4)| 00:03:13 |
--------------------------------------------------------------------------

Sorry to take this off topic, but... Seriously, over 3 minutes to read
944 MB of data? That's less than 5 MB/s, what's wrong with your
database? :)
Actually, I'd ask how parallel CPU use could help a table sequential scan. 
Usually a sequential scan does not take a large amount of CPU time, so I see 
no point in parallelism here.




Re: [PERFORM] PostgreSQL Parallel Processing !

2012-01-27 Thread Tomas Vondra
On 27 January 2012, 10:06, Marti Raudsepp wrote:
 On Fri, Jan 27, 2012 at 06:31, sridhar bamandlapally
 sridhar@gmail.com wrote:
 --------------------------------------------------------------------------
 | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
 --------------------------------------------------------------------------
 |   0 | SELECT STATEMENT  |      |  7444K|   944M| 16077   (4)| 00:03:13 |
 |   1 |  TABLE ACCESS FULL| EMP  |  7444K|   944M| 16077   (4)| 00:03:13 |
 --------------------------------------------------------------------------

 Sorry to take this off topic, but... Seriously, over 3 minutes to read
 944 MB of data? That's less than 5 MB/s, what's wrong with your
 database? :)

Yes, those results are quite suspicious. There's probably something
interfering with the queries (other queries, different processes, block
cleanout, ...) or maybe this is purely due to caching.

sridhar, run the queries repeatedly and my guess is the difference will
disappear (and the first query will be a bit faster).

Tomas




Re: [PERFORM] PostgreSQL Parallel Processing !

2012-01-27 Thread Thomas Kellerer

sridhar bamandlapally, 27.01.2012 05:31:

SQL> explain plan for select * from hr.emp ;
Explained.
PLAN
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  7444K|   944M| 16077   (4)| 00:03:13 |
|   1 |  TABLE ACCESS FULL| EMP  |  7444K|   944M| 16077   (4)| 00:03:13 |
--------------------------------------------------------------------------
*WITH PARALLEL Option*
SQL> explain plan for select /*+parallel(emp,4)*/ * from hr.emp ;
Explained.
PLAN
------------------------------------------------------------------------------
| Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |      |  7444K|   944M|  4442   (3)| 00:00:54 |
|   1 |  PX COORDINATOR       |      |       |       |            |          |
|   2 |   PX SEND QC (RANDOM) | :TQ1 |  7444K|   944M|  4442   (3)| 00:00:54 |
|   3 |    PX BLOCK ITERATOR  |      |  7444K|   944M|  4442   (3)| 00:00:54 |
|   4 |     TABLE ACCESS FULL | EMP  |  7444K|   944M|  4442   (3)| 00:00:54 |
------------------------------------------------------------------------------

In the above plan ( WITH PARALLEL Option )
1. Cost has been nearly reduced to 1/4th
2. CPU has been reduced
3. Time has been nearly reduced to 1/3rd


I have *never* seen the time column in the explain plan output come anywhere 
near the actual execution time in Oracle.






Re: [PERFORM] PostgreSQL Parallel Processing !

2012-01-27 Thread sridhar bamandlapally
For security reasons, I cannot put the real-time scenario into the loop;

the one which I gave is an example. If I have a solution for this, then
the same will be applied to the real-time scenario.

We have implemented PARALLEL technology in our database and expect the
same after migration to PostgreSQL.

The real-time SQL query is hit 18000 times per day, and the PARALLEL
option gave us great performance and a big window for all other processes.

Concept is, we need a window for every process on the database and all
together they should fit in our window and time-line.

We think PostgreSQL should also upgrade PARALLEL technology at the SQL-query level.





On 1/27/12, Thomas Kellerer spam_ea...@gmx.net wrote:
 sridhar bamandlapally, 27.01.2012 05:31:
 SQL explain plan for select * from hr.emp ;
 Explained.
 PLAN
 --------------------------------------------------------------------------
 | Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
 --------------------------------------------------------------------------
 |   0 | SELECT STATEMENT  |      |  7444K|   944M| 16077   (4)| 00:03:13 |
 |   1 |  TABLE ACCESS FULL| EMP  |  7444K|   944M| 16077   (4)| 00:03:13 |
 --------------------------------------------------------------------------
 *WITH PARALLEL Option*
 SQL> explain plan for select /*+parallel(emp,4)*/ * from hr.emp ;
 Explained.
 PLAN
 ------------------------------------------------------------------------------
 | Id  | Operation             | Name | Rows  | Bytes | Cost (%CPU)| Time     |
 ------------------------------------------------------------------------------
 |   0 | SELECT STATEMENT      |      |  7444K|   944M|  4442   (3)| 00:00:54 |
 |   1 |  PX COORDINATOR       |      |       |       |            |          |
 |   2 |   PX SEND QC (RANDOM) | :TQ1 |  7444K|   944M|  4442   (3)| 00:00:54 |
 |   3 |    PX BLOCK ITERATOR  |      |  7444K|   944M|  4442   (3)| 00:00:54 |
 |   4 |     TABLE ACCESS FULL | EMP  |  7444K|   944M|  4442   (3)| 00:00:54 |
 ------------------------------------------------------------------------------

 In the above plan ( WITH PARALLEL Option )
 1. Cost has been nearly reduced to 1/4th
 2. CPU has been reduced
 3. Time has been nearly reduced to 1/3rd

 I have *never* seen the time column in the explain plan output come
 anywhere near the actual execution time in Oracle.








Re: [PERFORM] PostgreSQL Parallel Processing !

2012-01-27 Thread Tomas Vondra
On 27 January 2012, 13:16, sridhar bamandlapally wrote:
 For security reasons, I cannot put the real-time scenario into the loop,

The point of Marti's comment was that the estimates (as presented by
EXPLAIN PLAN FOR in Oracle) are inherently imprecise. Don't trust what
Oracle is telling you about the expected runtime of the queries. Run the
queries repeatedly to see.
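For comparison, PostgreSQL can report measured rather than estimated figures: EXPLAIN ANALYZE actually executes the statement and prints real row counts and wall-clock times next to the estimates (the table name below is just the example from this thread):

```sql
-- Plain EXPLAIN shows only the planner's estimates:
EXPLAIN SELECT * FROM emp;

-- EXPLAIN ANALYZE runs the query and adds "actual time" and "actual rows"
-- to every plan node, so estimation errors become visible immediately:
EXPLAIN ANALYZE SELECT * FROM emp;
```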

 the one which I gave is an example; if I have a solution for this, then
 the same will be applied to the real-time scenario

There's no way to execute a single query in a parallel manner and it won't
be available anytime soon.

This is not an issue unless you have a CPU bound query and you have unused
CPUs. That's not the case of your example, because the sequential scan is
likely to be I/O bound, thus executing it in parallel won't fix the issue.

 We have implemented PARALLEL technology in our database and expect the
 same after migration to PostgreSQL,

Why? Have you tried to run the query on PostgreSQL?

 The real-time SQL query is hit 18000 times per day, and the PARALLEL
 option gave us great performance and a big window for all other processes

Are we still discussing the example you've posted? Because this 18k hits
per day means running the query every 5 seconds. And if the query takes
more than a few seconds, there will be multiple queries running
concurrently, thus eating CPUs.

 Concept is, we need window for every process on database and all
 together should fit in our window and time-line.

Not sure what you mean by window or time-line here.

 We think PostgreSQL should also upgrade PARALLEL technology at SQL-Query
 level

That is currently discussed in other threads, but it won't happen any time
soon (a few years in the future, maybe).

Tomas




[PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-27 Thread Jayashankar K B
Hi,

We have an embedded system with a Freescale m68k architecture based 
micro-controller and 256MB RAM, running a customized version of Slackware 12 Linux.
It's relatively modest hardware.
We have installed postgres 9.1 as our database engine. While testing, we found 
that the Postgres operations take more than 70% of CPU and the average also 
stays above 40%.
This is suffocating the various other processes running on the system. A couple 
of them are very critical ones.
The testing involves inserting a bulk number of records (approx. 1 records 
having between 10 and 20 columns).
Please let us know how we can reduce CPU usage for postgres.

Thanks and Regards
Jayashankar



Larsen & Toubro Limited

www.larsentoubro.com

This Email may contain confidential or privileged information for the intended 
recipient(s). If you are not the intended recipient, please do not use or 
disseminate the information; notify the sender and delete it from your system.


Re: [PERFORM] PostgreSQL Parallel Processing !

2012-01-27 Thread Cédric Villemain
 the one which I gave is an example; if I have a solution for this, then
 the same will be applied to the real-time scenario

 There's no way to execute a single query in a parallel manner and it won't
 be available anytime soon.

 This is not an issue unless you have a CPU bound query and you have unused
 CPUs. That's not the case of your example, because the sequential scan is
 likely to be I/O bound, thus executing it in parallel won't fix the issue.

It is possible to emulate this with plproxy, for example.
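In the same spirit (though not plproxy itself), a rough sketch of the idea using contrib's dblink asynchronous API, with purely hypothetical table, column, and connection names — two halves of the scan run concurrently and the partial results are combined:

```sql
-- Assumed schema: hr.emp(id, salary); connection strings are examples only.
CREATE EXTENSION IF NOT EXISTS dblink;  -- 9.1+; contrib module

SELECT dblink_connect('c1', 'dbname=mydb');
SELECT dblink_connect('c2', 'dbname=mydb');

-- Dispatch both halves without waiting for results
SELECT dblink_send_query('c1',
  'SELECT sum(salary) FROM hr.emp WHERE id <  5000000');
SELECT dblink_send_query('c2',
  'SELECT sum(salary) FROM hr.emp WHERE id >= 5000000');

-- Collect and combine the partial sums
SELECT sum(s) FROM (
  SELECT * FROM dblink_get_result('c1') AS t1(s numeric)
  UNION ALL
  SELECT * FROM dblink_get_result('c2') AS t2(s numeric)
) sub;

SELECT dblink_disconnect('c1');
SELECT dblink_disconnect('c2');
```

Note this only pays off for CPU-bound work; two backends scanning the same I/O-bound table may just fight over the disk.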


 We have implemented PARALLEL technology in our database and expect the
 same after migration to PostgreSQL,

 Why? Have you tried to run the query on PostgreSQL?

premature optimization ...


 The real-time SQL query is hit 18000 times per day, and the PARALLEL
 option gave us great performance and a big window for all other processes

 Are we still discussing the example you've posted? Because this 18k hits
 per day means running the query every 5 seconds. And if the query takes
 more than a few seconds, there will be multiple queries running
 concurrently, thus eating CPUs.

agreed.


 Concept is, we need window for every process on database and all
 together should fit in our window and time-line.

 Not sure what you mean by window or time-line?

 We think PostgreSQL should also upgrade PARALLEL technology at SQL-Query
 level

 That is currently discussed in other threads, but it won't happen any time
 soon (a few years in the future, maybe).

At the SQL level, I don't see the immediate benefit, given that the
feature is not implemented: SQL-level stuff (planner hints) is there
to work around what the server cannot handle on its own. And
PostgreSQL's policy is not to allow planner hints, but to fix/improve
the server.

-- 
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/
PostgreSQL: Support 24x7 - Développement, Expertise et Formation



Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread Merlin Moncure
On Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
 Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of
 stored functions in straight SQL. Each stored proc was calling the next,
 so to get the full effect I had to track down all the pl/pgsql stored
 functions and convert them to sql. However, I was surprised to find after
 all of the rewrites, the LANGUAGE sql procs caused the queries to run slower
 than the LANGUAGE plpgsql ones.

One reason that plpgsql can outperform sql functions is that plpgsql
caches plans.  That said, I don't think that's what's happening here.
Did you confirm the performance difference outside of EXPLAIN ANALYZE?
 In particular cases EXPLAIN ANALYZE can skew times, either by
injecting time calls or in how it discards results.
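One way to double-check, sketched here with a hypothetical function and table: compare psql's client-side \timing against the instrumented run.

```sql
-- Client-side timing: measures the statement as the client sees it,
-- without EXPLAIN ANALYZE's per-node instrumentation overhead
\timing on
SELECT my_func(x) FROM t;

-- Instrumented run: the timer calls injected around each plan node can
-- inflate the reported runtime, especially for cheap, row-heavy plans
EXPLAIN ANALYZE SELECT my_func(x) FROM t;
```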

merlin



Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-27 Thread Heikki Linnakangas

On 27.01.2012 15:34, Jayashankar K B wrote:

Hi,

We are having an embedded system with a freescale m68k architecture based 
micro-controller, 256MB RAM running a customized version of Slackware 12 linux.
It's a relatively modest Hardware.


Fascinating!


We have installed postgres 9.1 as our database engine. While testing, we found 
that the Postgres operations take more than 70% of CPU and the average also 
stays above 40%.
This is suffocating the various other processes running on the system. Couple 
of them are very critical ones.
The testing involves inserting bulk number of records (approx. 1 records 
having between 10 and 20 columns).
Please let us know how we can reduce CPU usage for the postgres.


The first step would be to figure out where all the time is spent. Are 
there unnecessary indexes you could remove? Are you using INSERT 
statements or COPY? Sending the data in binary format instead of text 
might shave some cycles.


If you can run something like oprofile on the system, that would be 
helpful to pinpoint the expensive part.
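If the insert path can at least be batched, a sketch of the usual low-hanging fruit (table, column, and file names here are made up):

```sql
-- One COMMIT per batch instead of one per row avoids an fsync per record:
BEGIN;
INSERT INTO readings (sensor_id, val) VALUES (1, 10.5);
INSERT INTO readings (sensor_id, val) VALUES (2, 11.2);
-- ... rest of the batch ...
COMMIT;

-- Or COPY, which has far less per-row parse/plan overhead than INSERT:
COPY readings (sensor_id, val) FROM '/tmp/readings.csv' WITH CSV;
```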


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-27 Thread Andy Colson

On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:

On 27.01.2012 15:34, Jayashankar K B wrote:

Hi,

We are having an embedded system with a freescale m68k architecture
based micro-controller, 256MB RAM running a customized version of
Slackware 12 linux.
It's a relatively modest Hardware.


Fascinating!


We have installed postgres 9.1 as our database engine. While testing,
we found that the Postgres operations take more than 70% of CPU and
the average also stays above 40%.
This is suffocating the various other processes running on the system.
Couple of them are very critical ones.
The testing involves inserting bulk number of records (approx. 1
records having between 10 and 20 columns).
Please let us know how we can reduce CPU usage for the postgres.


The first step would be to figure out where all the time is spent. Are
there unnecessary indexes you could remove? Are you using INSERT
statements or COPY? Sending the data in binary format instead of text
might shave some cycles.



Do you have triggers on the table?





[PERFORM] regarding CLUSTER and HUGE work_mem / maintenance_work_mem

2012-01-27 Thread Jon Nelson
Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.
Furthermore, let's say I have a machine with sufficient memory for me
to set the work_mem  and maintenance_work_mem to 20GB (just for this
session).
When I issue a CLUSTER using one of the indices, I see PostgreSQL (by
way of strace) performing an index scan which amounts to large
quantities of random I/O.
In my case, that means it takes a very, very long time. PostgreSQL is
largely at defaults, except for a 2GB shared_buffers and a few
unrelated changes. The system itself has 32GB of physical RAM and has
plenty free.
Why didn't PostgreSQL just read the table into memory (and the
interesting index) as a sequential scan, sort, and then write it out?
It seems like there would be more than enough memory for that. The
sequential I/O rate on this machine is 50-100x the random I/O rate.

I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux 6.1.

-- 
Jon



Re: [PERFORM] regarding CLUSTER and HUGE work_mem / maintenance_work_mem

2012-01-27 Thread Heikki Linnakangas

On 27.01.2012 19:43, Jon Nelson wrote:

Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.
Furthermore, let's say I have a machine with sufficient memory for me
to set the work_mem  and maintenance_work_mem to 20GB (just for this
session).
When I issue a CLUSTER using one of the indices, I see PostgreSQL (by
way of strace) performing an index scan which amounts to large
quantities of random I/O.
In my case, that means it takes a very, very long time. PostgreSQL is
largely at defaults, except for a 2GB shared_buffers and a few
unrelated changes. The system itself has 32GB of physical RAM and has
plenty free.
Why didn't PostgreSQL just read the table into memory (and the
interesting index) as a sequential scan, sort, and then write it out?
It seems like there would be more than enough memory for that. The
sequential I/O rate on this machine is 50-100x the random I/O rate.

I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux 6.1.


The support for doing a seqscan+sort in CLUSTER was introduced in 
version 9.1. Before that, CLUSTER always did an index scan. See release 
notes: http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107416
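On 8.4 a common workaround is to rewrite the table by hand with a sequential scan plus sort; a sketch with hypothetical names (note this drops constraints, triggers, and permissions unless they are recreated, and needs exclusive use of the table):

```sql
BEGIN;
-- Rewrite in the desired physical order via seqscan + sort
CREATE TABLE mytable_sorted AS
    SELECT * FROM mytable ORDER BY cluster_col;
DROP TABLE mytable;
ALTER TABLE mytable_sorted RENAME TO mytable;
COMMIT;

-- Recreate indexes afterwards; with a large maintenance_work_mem these
-- index builds can sort in memory instead of spilling to disk:
CREATE INDEX mytable_cluster_col_idx ON mytable (cluster_col);
```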


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-27 Thread Jayashankar K B
Hi Heikki Linnakangas: We are using a series of INSERT statements to insert the 
records into the database.
Sending data in binary is not an option, as the module that writes into the DB 
has been finalized.
We do not have control over that.

Hi Andy: As of now, there are no triggers on the table.

Please let me know how we can proceed. On the net I couldn't get hold of any 
good example where Postgres has been used on a limited-hardware system.
We are starting to wonder whether Postgres was a good choice for us!

Thanks and Regards
Jay

-Original Message-
From: Andy Colson [mailto:a...@squeakycode.net]
Sent: Friday, January 27, 2012 10:45 PM
To: Heikki Linnakangas
Cc: Jayashankar K B; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

On 1/27/2012 10:47 AM, Heikki Linnakangas wrote:
 On 27.01.2012 15:34, Jayashankar K B wrote:
 Hi,

 We are having an embedded system with a freescale m68k architecture
 based micro-controller, 256MB RAM running a customized version of
 Slackware 12 linux.
 It's a relatively modest Hardware.

 Fascinating!

 We have installed postgres 9.1 as our database engine. While testing,
 we found that the Postgres operations take more than 70% of CPU and
 the average also stays above 40%.
 This is suffocating the various other processes running on the system.
 Couple of them are very critical ones.
 The testing involves inserting bulk number of records (approx. 1
 records having between 10 and 20 columns).
 Please let us know how we can reduce CPU usage for the postgres.

 The first step would be to figure out where all the time is spent. Are
 there unnecessary indexes you could remove? Are you using INSERT
 statements or COPY? Sending the data in binary format instead of text
 might shave some cycles.


Do you have triggers on the table?







Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread Carlo Stonebanks
Yes, I did test it - i.e. I ran the functions on their own, as I had always
noticed a minor difference between EXPLAIN ANALYZE results and direct query
calls.

Interesting, so SQL functions DON'T cache plans? Will plan-caching be of any
benefit to SQL that makes no reference to any tables? The SQL is emulating
the straight non-set-oriented procedural logic of the original plpgsql.

-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com] 
Sent: January 27, 2012 10:47 AM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?

On Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
 Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of
 stored functions in straight SQL. Each stored proc was calling the next,
 so to get the full effect I had to track down all the pl/pgsql stored
 functions and convert them to sql. However, I was surprised to find after
 all of the rewrites, the LANGUAGE sql procs caused the queries to run slower
 than the LANGUAGE plpgsql ones.

One reason that plpgsql can outperform sql functions is that plpgsql
caches plans.  That said, I don't think that's what's happening here.
Did you confirm the performance difference outside of EXPLAIN ANALYZE?
 In particular cases EXPLAIN ANALYZE can skew times, either by
injecting time calls or in how it discards results.

merlin




Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread Deron
You can use PREPARE ... EXECUTE to cache the plan (as well as the
parsing). However, I find it unlikely that this would explain the
loss in performance you experienced.
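For example (statement and parameter names are made up), the plan is built once at PREPARE time and reused on every EXECUTE within the session:

```sql
-- Planned once, at PREPARE time
PREPARE emp_by_dept (int) AS
    SELECT * FROM emp WHERE dept_id = $1;

EXECUTE emp_by_dept(10);  -- reuses the prepared plan
EXECUTE emp_by_dept(20);

DEALLOCATE emp_by_dept;   -- optional cleanup
```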

Deron


On Fri, Jan 27, 2012 at 11:36 AM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
 Yes, I did test it  - i.e. I ran the functions on their own as I had always
 noticed a minor difference between EXPLAIN ANALYZE results and direct query
 calls.

 Interesting, so sql functions DON'T cache plans? Will plan-caching be of any
 benefit to SQL that makes no reference to any tables? The SQL is emulating
 the straight non-set-oriented procedural logic of the original plpgsql.

 -Original Message-
 From: Merlin Moncure [mailto:mmonc...@gmail.com]
 Sent: January 27, 2012 10:47 AM
 To: Carlo Stonebanks
 Cc: pgsql-performance@postgresql.org
 Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?

 On Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks
 stonec.regis...@sympatico.ca wrote:
 Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of
 stored functions in straight SQL. Each stored proc was calling the next,
 so to get the full effect I had to track down all the pl/pgsql stored
 functions and convert them to sql. However, I was surprised to find after
 all of the rewrites, the LANGUAGE sql procs caused the queries to run slower
 than the LANGUAGE plpgsql ones.

 One reason that plpgsql can outperform sql functions is that plpgsql
 caches plans.  That said, I don't think that's what's happening here.
 Did you confirm the performance difference outside of EXPLAIN ANALYZE?
  In particular cases EXPLAIN ANALYZE can skew times, either by
 injecting time calls or in how it discards results.

 merlin





Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-27 Thread Heikki Linnakangas

On 27.01.2012 20:30, Jayashankar K B wrote:

Hi Heikki Linnakangas: We are using series of Insert statements to insert the 
records into database.
Sending data in binary is not an option as the module that writes into DB has 
been finalized.
We do not have control over that.


That certainly limits your options.


Please let me know how we can proceed. On the net I couldn't get hold of any 
good example where Postgres has been used on limited Hardware system.


I don't think there's anything particular in postgres that would make it 
a poor choice on a small system, as far as CPU usage is concerned 
anyway. But inserting rows in a database is certainly slower than, say, 
writing them into a flat file.


At what rate are you doing the INSERTs? And how fast would they need to 
be? Remember that it's normal that while the INSERTs are running, 
postgres will use all the CPU it can to process them as fast as 
possible. So the question is, at what rate do they need to be processed 
to meet your target. Lowering the process priority with 'nice' might 
help too, to give the other important processes priority over postgres.


The easiest way to track down where the time is spent would be to run a 
profiler, if that's possible on your platform.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread Carlo Stonebanks
Was I even right in thinking I would gain any performance by converting to
SQL?

-Original Message-
From: Deron [mailto:fecas...@gmail.com] 
Sent: January 27, 2012 2:29 PM
To: Carlo Stonebanks
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?

You can use PREPARE ... EXECUTE to cache the plan (as well as the
parsing). However, I find it unlikely that this would explain the
loss in performance you experienced.

Deron


On Fri, Jan 27, 2012 at 11:36 AM, Carlo Stonebanks
stonec.regis...@sympatico.ca wrote:
 Yes, I did test it - i.e. I ran the functions on their own, as I had always
 noticed a minor difference between EXPLAIN ANALYZE results and direct query
 calls.

 Interesting, so SQL functions DON'T cache plans? Will plan-caching be of any
 benefit to SQL that makes no reference to any tables? The SQL is emulating
 the straight non-set-oriented procedural logic of the original plpgsql.

 -Original Message-
 From: Merlin Moncure [mailto:mmonc...@gmail.com]
 Sent: January 27, 2012 10:47 AM
 To: Carlo Stonebanks
 Cc: pgsql-performance@postgresql.org
 Subject: Re: [PERFORM] pl/pgsql functions outperforming sql ones?

 On Thu, Jan 26, 2012 at 6:09 PM, Carlo Stonebanks
 stonec.regis...@sympatico.ca wrote:
 Assuming there was some sort of cost to pl/pgsql, I rewrote a bunch of
 stored functions in straight SQL. Each stored proc was calling the next,
 so to get the full effect I had to track down all the pl/pgsql stored
 functions and convert them to sql. However, I was surprised to find after
 all of the rewrites, the LANGUAGE sql procs caused the queries to run slower
 than the LANGUAGE plpgsql ones.

 One reason that plpgsql can outperform sql functions is that plpgsql
 caches plans.  That said, I don't think that's what's happening here.
 Did you confirm the performance difference outside of EXPLAIN ANALYZE?
  In particular cases EXPLAIN ANALYZE can skew times, either by
 injecting time calls or in how it discards results.

 merlin






Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread A.M.

On Jan 27, 2012, at 2:59 PM, Carlo Stonebanks wrote:

 Was I even right in thinking I would gain any performance by converting to
 SQL?

As always, it depends. I converted an immutable pl/pgsql function to an SQL 
function and the body of the function barely changed. However, I experienced an 
order-of-magnitude speed-up, because the SQL function could be folded into the 
plan (like a view), while a pl/pgsql function will never be folded (and the 
planner punts, assuming a set-returning function will return 100 rows). 
However, not all SQL functions can be folded into the plan.

On the other hand, a pl/pgsql function can make use of memoization for 
number-crunching routines and make business-logical short-circuiting decisions.

Cheers,
M


Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-27 Thread Claudio Freire
On Fri, Jan 27, 2012 at 4:56 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 I don't think there's anything particular in postgres that would make it a
 poor choice on a small system, as far as CPU usage is concerned anyway. But
 inserting rows in a database is certainly slower than, say, writing them
 into a flat file.

How did you install postgres?
Did you build it?
Which configure flags did you use?
Exactly which m68k cpu is it? (it does matter)

For instance...

wiki: However, a significant difference is that the 68060 FPU is not
pipelined and is therefore up to three times slower than the Pentium
in floating point applications

This means, if you don't configure the build correctly, you will get
really sub-optimal code. Modern versions are optimized for modern
cpus.
Of utmost importance, I would imagine, is the binary format chosen for
pg data types (floating types especially, if you use them).



Re: [PERFORM] regarding CLUSTER and HUGE work_mem / maintenance_work_mem

2012-01-27 Thread Jon Nelson
On Fri, Jan 27, 2012 at 12:05 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 27.01.2012 19:43, Jon Nelson wrote:

 Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.
 Furthermore, let's say I have a machine with sufficient memory for me
 to set the work_mem  and maintenance_work_mem to 20GB (just for this
 session).
 When I issue a CLUSTER using one of the indices, I see PostgreSQL (by
 way of strace) performing an index scan which amounts to large
 quantities of random I/O.
 In my case, that means it takes a very, very long time. PostgreSQL is
 largely at defaults, except for a 2GB shared_buffers and a few
 unrelated changes. The system itself has 32GB of physical RAM and has
 plenty free.
 Why didn't PostgreSQL just read the table into memory (and the
 interesting index) as a sequential scan, sort, and then write it out?
 It seems like there would be more than enough memory for that. The
 sequential I/O rate on this machine is 50-100x the random I/O rate.

 I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux
 6.1.


 The support for doing a seqscan+sort in CLUSTER was introduced in version
 9.1. Before that, CLUSTER always did an index scan. See release notes:
 http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107416

That's what I get for digging through the source (git) but working
with 8.4.10, on a Friday, at the end of a long week.
Thanks for pointing that out to somebody that should have known better.


-- 
Jon



Re: [PERFORM] regarding CLUSTER and HUGE work_mem / maintenance_work_mem

2012-01-27 Thread Scott Marlowe
On Fri, Jan 27, 2012 at 7:34 PM, Jon Nelson jnelson+pg...@jamponi.net wrote:
 On Fri, Jan 27, 2012 at 12:05 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com wrote:
 On 27.01.2012 19:43, Jon Nelson wrote:

 Let's say I have a 7GB table with 3-4 indices for a total of 10-12GB.
 Furthermore, let's say I have a machine with sufficient memory for me
 to set the work_mem  and maintenance_work_mem to 20GB (just for this
 session).
 When I issue a CLUSTER using one of the indices, I see PostgreSQL (by
 way of strace) performing an index scan which amounts to large
 quantities of random I/O.
 In my case, that means it takes a very, very long time. PostgreSQL is
 largely at defaults, except for a 2GB shared_buffers and a few
 unrelated changes. The system itself has 32GB of physical RAM and has
 plenty free.
 Why didn't PostgreSQL just read the table into memory (and the
 interesting index) as a sequential scan, sort, and then write it out?
 It seems like there would be more than enough memory for that. The
 sequential I/O rate on this machine is 50-100x the random I/O rate.

 I'm using 8.4.10 (with the 'inet' de-toasting patch) on Scientific Linux
 6.1.


 The support for doing a seqscan+sort in CLUSTER was introduced in version
 9.1. Before that, CLUSTER always did an indexscan. See release notes:
 http://www.postgresql.org/docs/9.1/static/release-9-1.html#AEN107416

 That's what I get for digging through the source (git) but working
 with 8.4.10, on a Friday, at the end of a long week.
 Thanks for pointing that out to somebody that should have known better.

But if you're stuck on < 9.1 for a while, the workaround is to cluster
the table yourself by using a select * ... order by pkey.  For
randomly distributed tables this is far faster for a first-time
cluster.  After that, subsequent clusters won't have as much work to
do and the older method for clustering should work ok.
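
The manual re-clustering described above can be sketched roughly as
follows. This is an illustrative sketch only: the table name `emp` and
column `id` are invented, and on a real table the indexes, constraints,
triggers, and permissions would also need to be recreated on the new
copy before the swap.

```sql
-- Sketch: rewrite a table in primary-key order instead of running
-- pre-9.1 CLUSTER, which would do a full random-I/O index scan.
BEGIN;

-- One sequential scan plus a sort:
CREATE TABLE emp_sorted AS
    SELECT * FROM emp ORDER BY id;

-- Swap the tables; rebuild indexes and constraints on emp_sorted
-- before or after the rename, depending on downtime tolerance.
ALTER TABLE emp RENAME TO emp_old;
ALTER TABLE emp_sorted RENAME TO emp;

COMMIT;
```

After such a one-off rewrite the table is already mostly in index
order, so a later plain CLUSTER via the index scan incurs far less
random I/O.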

It's kinda funny to have a complaint against pgsql for NOT using a
sequential scan.  Most DBAs that come from other DBMSs are upset when
it doesn't use an index.



Re: [PERFORM] Postgress is taking lot of CPU on our embedded hardware.

2012-01-27 Thread Scott Marlowe
On Fri, Jan 27, 2012 at 6:34 AM, Jayashankar K B
jayashankar...@lnties.com wrote:
 Hi,

 We are having an embedded system with a freescale m68k architecture based
 micro-controller, 256MB RAM running a customized version of Slackware 12
 linux.

 It's relatively modest hardware.

 We have installed postgres 9.1 as our database engine. While testing, we
 found that the Postgres operations take more than 70% of CPU and the average
 also stays above 40%.

Not to dissuade you from using pgsql, but have you tried other
databases like the much simpler SQLite?



Re: [PERFORM] pl/pgsql functions outperforming sql ones?

2012-01-27 Thread Pavel Stehule
2012/1/27 Carlo Stonebanks stonec.regis...@sympatico.ca:
 Yes, I did test it  - i.e. I ran the functions on their own as I had always
 noticed a minor difference between EXPLAIN ANALYZE results and direct query
 calls.

 Interesting, so sql functions DON'T cache plans? Will plan-caching be of any
 benefit to SQL that makes no reference to any tables? The SQL is emulating
 the straight non-set-oriented procedural logic of the original plpgsql.


It is usually not necessary - simple SQL functions are inlined into
the outer query. There are a few cases where this optimization cannot
be applied, and then there is a performance loss.

For example, this optimization is (sometimes) not possible when some
parameter is volatile.
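
A minimal sketch of the inlining behaviour described above (the
function and table names are made up for illustration, not taken from
the thread):

```sql
-- A simple SQL function the planner can inline into the calling query:
CREATE FUNCTION square(x numeric) RETURNS numeric AS
    $$ SELECT x * x $$ LANGUAGE sql;

-- Inlined: effectively rewritten to SELECT price * price FROM orders,
-- with no per-row function-call overhead.
SELECT square(price) FROM orders;

-- Not inlined: x is referenced twice in the body, so inlining a
-- volatile argument like random() would change how many times it is
-- evaluated; the planner falls back to calling the function per row.
SELECT square(random()::numeric) FROM orders;
```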

Regards

Pavel Stehule
