Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-21 Thread Abbas
Sure thing. Base 60, or sexagesimal, is the numeral system used for 
measuring time (1 hour equals 60 minutes, and so on). But this case 
is even simpler, so without going into much detail about bases: you're 
mapping between two sets of numbers:


0    -> 0
0.15 -> 0.25
0.30 -> 0.50
0.45 -> 0.75

From working with clocks, we know that 15 minutes is 0.25 hours, 30 
minutes is 0.5 hours, and so on. So you only need to divide the 
fractional part ( effort_hours - floor(effort_hours) ) by 0.6 to get 
what you want.


For example, let's say effort_hours = 1.15; then floor(1.15) is 1; so:

floor(1.15) + ( (1.15 - floor(1.15)) / 0.6 )
  = 1 + ( (1.15 - 1) / 0.6 )
  = 1 + ( 0.15 / 0.60 )
  = 1.25
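
Since the column is double precision, the same arithmetic can be sketched in a few lines of Python (hypothetical sample values, not data from the actual table); note that floating point can leave results like 1.2499999999999998, which a round() absorbs:

```python
import math

def to_decimal_hours(effort_hours):
    """Map an HH.MM-style value (base-60 fraction) to decimal hours."""
    whole = math.floor(effort_hours)
    return whole + (effort_hours - whole) / 0.6

# 1.15 (1 h 15 min) -> 1.25 h, and so on; round() absorbs
# double-precision noise in the division.
for raw, expected in [(1.15, 1.25), (1.30, 1.50), (1.45, 1.75), (2.0, 2.0)]:
    assert round(to_decimal_hours(raw), 2) == expected
```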


Hope it helps. Feel free to ask a question if it's still unclear. :)


On Mon, May 21, 2018 at 6:09 PM, pavan95  wrote:

Hi Abbas,

Thanks for your valuable suggestions. To my surprise, I got the same 
output as what I had executed before.

But unfortunately I'm unable to understand the logic of the code;
specifically, what is a base-60 number? The data type used for the
"effort_hours" column is 'double precision'.

Kindly help me in understanding the logic. Thanks in advance.

Regards,
Pavan



--
Sent from: 
http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html

Re: Help me in reducing the CPU cost for the high cost query below, as it is hitting production seriously!!

2018-05-21 Thread Abbas
Hi. Basically you want to convert a base-60 number to decimal, so you 
don't need conditionals. See if this works for you:


SELECT floor(effort_hours) + ( (effort_hours - floor(effort_hours)) / 0.6 )
FROM tms_timesheet_details detail, tms_wsr_header header
WHERE detail.wsr_header_id = header.id
  AND work_order_no != 'CORPORATE';
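
A quick way to convince yourself the division is equivalent to the four-branch CASE mapping you had: a short Python sketch with hypothetical values, using Decimal to mirror the query's numeric cast and sidestep float noise:

```python
from decimal import Decimal

# The CASE expression's explicit lookup table: minute-fraction -> decimal.
CASE_MAP = {
    Decimal("0.15"): Decimal("0.25"),
    Decimal("0.30"): Decimal("0.50"),
    Decimal("0.45"): Decimal("0.75"),
    Decimal("0"):    Decimal("0"),
}

def via_case(h):
    # MOD(h, 1) picks out the fractional part, then the table maps it.
    return int(h) + CASE_MAP[h % 1]

def via_division(h):
    # Same result without conditionals: divide the fraction by 0.6.
    return int(h) + (h - int(h)) / Decimal("0.6")

for raw in ("7.15", "0.30", "12.45", "3"):
    h = Decimal(raw)
    assert via_case(h) == via_division(h)
```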

Regards,
Abbas

On Mon, May 21, 2018 at 3:43 PM, pavan95  wrote:

Hi mlunon,

Many thanks for your timely response. And yes, it worked when I 
rewrote the query.

The estimated cost improved by approximately 1,000 planner cost units. 
You can see it in the explain plan below:

amp_test=# explain select
sum (
CASE MOD(cast(effort_hours as decimal),1)
WHEN 0.45 THEN cast(effort_hours as int)+0.75
WHEN 0.15 THEN cast(effort_hours as int)+0.25
WHEN 0.30 THEN cast(effort_hours as int)+0.5
WHEN 0 THEN cast(effort_hours as int)
END
)
from tms_timesheet_details detail , tms_wsr_header header  where
wsr_header_id=header.id and work_order_no != 'CORPORATE';
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Aggregate  (cost=8813.60..8813.61 rows=1 width=8)
   ->  Hash Join  (cost=608.27..5647.67 rows=70354 width=8)
 Hash Cond: (detail.wsr_header_id = header.id)
         ->  Seq Scan on tms_timesheet_details detail  (cost=0.00..3431.14 rows=72378 width=12)
   Filter: ((work_order_no)::text <> 'CORPORATE'::text)
 ->  Hash  (cost=399.23..399.23 rows=16723 width=4)
               ->  Seq Scan on tms_wsr_header header  (cost=0.00..399.23 rows=16723 width=4)
(7 rows)


But is this the optimum? Can we reduce the cost further, at least to 
around 5,000 planner units? This is only a subpart of the query, and it 
is called multiple times in the main query.

Sending the main query along with the table descriptions and explain 
plan would make for a very long message, so I have sent you only this 
sub-part.

Please help me tune it further. Thanks in advance.

Regards,
Pavan

