slow query with inline function on AWS RDS with RDS 24x large
below query is slow even with no data

explain ANALYZE
WITH business AS (SELECT * FROM get_businessday_utc_f() start_date)
SELECT ro.order_id,
       ro.date_time,
       round(ro.order_amount, 2) AS order_amount,
       b.branch_id,
       b.branch_name,
       st_x(b.location) AS from_x,
       st_y(b.location) AS from_y,
       b.user_id AS branch_user_id,
       b.contact_info,
       r.restaurant_id,
       c.city_id,
       c.city_name,
       c.city_name_ar,
       st_linefromtext('LINESTRING('::text || st_x(b.location) || ' '::text || st_y(b.location) || ','::text || st_x(ro.location_geometry) || ' '::text || st_y(ro.location_geometry) || ')'::text, 28355) AS from_to,
       to_char(ro.date_time, 'HH24:MI'::text) AS order_time,
       ro.customer_comment,
       'N'::text AS is_new_customer,
       ro.picked_up_time,
       ro.driver_assigned_date_time,
       oom.offer_amount,
       oom.offer_type_code AS offer_type,
       ro.uk_vat
FROM business, restaurant_order ro
JOIN branch b ON b.branch_id = ro.branch_id
JOIN restaurant r ON r.restaurant_id = b.restaurant_id
JOIN city c ON c.city_id = b.city_id
LEFT JOIN order_offer_map oom USING (order_id)
WHERE ro.date_time >= business.start_date
  AND ro.date_time <= f_now_immutable_with_tz();

Hash Left Join  (cost=55497.32..5417639.59 rows=5397276 width=291) (actual time=1056.926..1056.934 rows=0 loops=1)
  Hash Cond: (ro.order_id = oom.order_id)
  ->  Hash Join  (cost=6584.61..3674143.44 rows=5397276 width=209) (actual time=1056.926..1056.932 rows=0 loops=1)
        Hash Cond: (ro.branch_id = b.branch_id)
        ->  Nested Loop  (cost=5427.94..3546726.47 rows=19275986 width=108) (actual time=1036.809..1036.810 rows=0 loops=1)
              ->  Function Scan on start_date  (cost=0.00..0.01 rows=1 width=8) (actual time=0.006..0.008 rows=1 loops=1)
              ->  Bitmap Heap Scan on restaurant_order ro  (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1036.793 rows=0 loops=1)
                    Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))
                    Rows Removed by Index Recheck: 5039976
                    Heap Blocks: lossy=275230
                    ->  Bitmap Index Scan on rest_ord_date_brin  (cost=0.00..608.94 rows=19359111 width=0) (actual time=14.037..14.038 rows=2917120 loops=1)
                          Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))
        ->  Hash  (cost=1083.35..1083.35 rows=5866 width=109) (actual time=20.106..20.109 rows=20949 loops=1)
              Buckets: 32768 (originally 8192)  Batches: 1 (originally 1)  Memory Usage: 3112kB
              ->  Hash Join  (cost=343.29..1083.35 rows=5866 width=109) (actual time=1.620..14.539 rows=20949 loops=1)
                    Hash Cond: (b.restaurant_id = r.restaurant_id)
                    ->  Hash Join  (cost=2.26..726.91 rows=5866 width=109) (actual time=0.029..8.597 rows=20949 loops=1)
                          Hash Cond: (b.city_id = c.city_id)
                          ->  Seq Scan on branch b  (cost=0.00..668.49 rows=20949 width=88) (actual time=0.004..1.609 rows=20949 loops=1)
                          ->  Hash  (cost=1.56..1.56 rows=56 width=29) (actual time=0.020..0.021 rows=56 loops=1)
                                Buckets: 1024  Batches: 1  Memory Usage: 12kB
                                ->  Seq Scan on city c  (cost=0.00..1.56 rows=56 width=29) (actual time=0.004..0.010 rows=56 loops=1)
                    ->  Hash  (cost=233.42..233.42 rows=8609 width=8) (actual time=1.575..1.575 rows=8609 loops=1)
                          Buckets: 16384  Batches: 1  Memory Usage: 465kB
                          ->  Index Only Scan using "restaurant_idx$$_274b003d" on restaurant r  (cost=0.29..233.42 rows=8609 width=8) (actual time=0.006..0.684 rows=8609 loops=1)
                                Heap Fetches: 0
  ->  Hash  (cost=33000.09..33000.09 rows=1273009 width=13) (never executed)
        ->  Seq Scan on order_offer_map oom  (cost=0.00..33000.09 rows=1273009 width=13) (never executed)
Planning Time: 1.180 ms
Execution Time: 1057.535 ms

Could someone explain why it is slow? If I insert 50k records the execution time reaches 20 seconds.
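The plan's hot spot is the BRIN bitmap scan: it returns whole lossy heap blocks (Heap Blocks: lossy=275230), and over 5 million rows are then thrown away in the recheck. The usual thing to try is a plain btree index on the filtered column; a sketch using the names from the thread (the btree index name is an assumption):

```sql
-- Replace the BRIN index on restaurant_order.date_time with a btree,
-- which points at exact rows instead of lossy block ranges.
DROP INDEX rest_ord_date_brin;
CREATE INDEX rest_ord_date_btree ON restaurant_order (date_time);
ANALYZE restaurant_order;  -- refresh statistics so the planner costs the new index
```

A later message in this thread reports exactly this change taking the zero-row case from roughly 1057 ms down to 1.6 ms.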
Re: slow query with inline function on AWS RDS with RDS 24x large
BRIN index is only on the date_time column, I even tried with a btree index with no performance gains.

On Fri, Jun 4, 2021 at 11:23 AM Pavel Stehule wrote:
>
> On Fri, Jun 4, 2021 at 10:07 AM Ayub Khan wrote:
>>
>> below query is slow even with no data
>>
>> [...]
Re: slow query with inline function on AWS RDS with RDS 24x large
You are right, I dropped BRIN and created a btree, and the performance on a table with 0 rows matching the criteria is good. Below is the plan with BTREE. I will test by inserting a lot of data.

Hash Join  (cost=50186.91..3765911.10 rows=5397411 width=291) (actual time=1.501..1.504 rows=0 loops=1)
  Hash Cond: (b.restaurant_id = r.restaurant_id)
  ->  Hash Left Join  (cost=49845.88..2078197.48 rows=5397411 width=216) (actual time=0.079..0.081 rows=0 loops=1)
        Hash Cond: (ro.order_id = oom.order_id)
        ->  Hash Join  (cost=933.18..2007856.35 rows=5397411 width=209) (actual time=0.078..0.080 rows=0 loops=1)
              Hash Cond: (b.city_id = c.city_id)
              ->  Hash Join  (cost=930.92..1956181.11 rows=19276467 width=188) (actual time=0.048..0.050 rows=0 loops=1)
                    Hash Cond: (ro.branch_id = b.branch_id)
                    ->  Nested Loop  (cost=0.56..1904639.80 rows=19276467 width=108) (actual time=0.048..0.048 rows=0 loops=1)
                          ->  Function Scan on start_date  (cost=0.00..0.01 rows=1 width=8) (actual time=0.002..0.003 rows=1 loops=1)
                          ->  Index Scan using rest_ord_date_brin on restaurant_order ro  (cost=0.56..1711875.12 rows=19276467 width=108) (actual time=0.042..0.042 rows=0 loops=1)
                                Index Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:48:45.377833+00'::timestamp with time zone))
                    ->  Hash  (cost=668.49..668.49 rows=20949 width=88) (never executed)
                          ->  Seq Scan on branch b  (cost=0.00..668.49 rows=20949 width=88) (never executed)
              ->  Hash  (cost=1.56..1.56 rows=56 width=29) (actual time=0.026..0.027 rows=56 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 12kB
                    ->  Seq Scan on city c  (cost=0.00..1.56 rows=56 width=29) (actual time=0.009..0.016 rows=56 loops=1)
        ->  Hash  (cost=33000.09..33000.09 rows=1273009 width=13) (never executed)
              ->  Seq Scan on order_offer_map oom  (cost=0.00..33000.09 rows=1273009 width=13) (never executed)
  ->  Hash  (cost=233.42..233.42 rows=8609 width=8) (actual time=1.403..1.403 rows=8609 loops=1)
        Buckets: 16384  Batches: 1  Memory Usage: 465kB
        ->  Index Only Scan using "restaurant_idx$$_274b003d" on restaurant r  (cost=0.29..233.42 rows=8609 width=8) (actual time=0.007..0.634 rows=8609 loops=1)
              Heap Fetches: 0
Planning Time: 1.352 ms
Execution Time: 1.571 ms

On Fri, Jun 4, 2021 at 11:41 AM Pavel Stehule wrote:
> Hi
>
> On Fri, Jun 4, 2021 at 10:32 AM Ayub Khan wrote:
>> BRIN index is only on the date_time column, I even tried with btree index
>> with no performance gains.
>
> ->  Bitmap Heap Scan on restaurant_order ro  (cost=5427.94..3353966.60 rows=19275986 width=108) (actual time=1036.793..1036.793 rows=0 loops=1)
>       Recheck Cond: ((date_time >= start_date.start_date) AND (date_time <= '2021-06-04 08:05:32.784199+00'::timestamp with time zone))
>       Rows Removed by Index Recheck: 5039976
>       Heap Blocks: lossy=275230
>
> When most rows are removed in the recheck, the effectivity of the index is not good.
>
> Pavel
>
>> [...]
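If a BRIN index were to be kept instead, the lossy recheck Pavel points at can sometimes be reduced by shrinking the block range each BRIN entry summarizes. A hedged sketch (128 is the default pages_per_range; 16 is an arbitrary example value, and the index name is invented):

```sql
-- Smaller ranges make BRIN entries more selective, at the cost of a larger index.
CREATE INDEX rest_ord_date_brin_small ON restaurant_order
    USING brin (date_time) WITH (pages_per_range = 16);
```

Note this only helps when date_time correlates with the physical row order of the table, which is a precondition for BRIN being useful at all.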
query planner not using index, instead using sequential scan
I am using postgres 12 on AWS RDS. Could someone clarify why the LEFT JOIN order_offer_map oom using (order_id) in the below query is using a sequential scan instead of using the index on order_id which is defined in the order_offer_map table?

explain ANALYZE
WITH business AS (SELECT * FROM get_businessday_utc_f() start_date)
SELECT ro.order_id,
       ro.date_time,
       round(ro.order_amount, 2) AS order_amount,
       b.branch_id,
       b.branch_name,
       st_x(b.location) AS from_x,
       st_y(b.location) AS from_y,
       b.user_id AS branch_user_id,
       b.contact_info,
       r.restaurant_id,
       c.city_id,
       c.city_name,
       c.city_name_ar,
       st_linefromtext('LINESTRING('::text || st_x(b.location) || ' '::text || st_y(b.location) || ','::text || st_x(ro.location_geometry) || ' '::text || st_y(ro.location_geometry) || ')'::text, 28355) AS from_to,
       to_char(ro.date_time, 'HH24:MI'::text) AS order_time,
       ro.customer_comment,
       'N'::text AS is_new_customer,
       ro.picked_up_time,
       ro.driver_assigned_date_time,
       oom.offer_amount,
       oom.offer_type_code AS offer_type,
       ro.uk_vat
FROM business, restaurant_order ro
JOIN branch b ON b.branch_id = ro.branch_id
JOIN restaurant r ON r.restaurant_id = b.restaurant_id
JOIN city c ON c.city_id = b.city_id
LEFT JOIN order_offer_map oom USING (order_id)
WHERE ro.date_time >= business.start_date
  AND ro.date_time <= f_now_immutable_with_tz();

Hash Join  (cost=39897.37..161872.06 rows=259399 width=494) (actual time=605.767..2060.712 rows=156253 loops=1)
  Hash Cond: (b.city_id = c.city_id)
  ->  Hash Join  (cost=39895.11..78778.16 rows=259399 width=355) (actual time=605.583..789.863 rows=156253 loops=1)
        Hash Cond: (b.restaurant_id = r.restaurant_id)
        ->  Hash Join  (cost=39542.41..77744.20 rows=259399 width=307) (actual time=602.096..738.765 rows=156253 loops=1)
              Hash Cond: (ro.branch_id = b.branch_id)
              ->  Hash Right Join  (cost=38607.06..76127.79 rows=259399 width=225) (actual time=591.342..672.039 rows=156253 loops=1)
                    Hash Cond: (oom.order_id = ro.order_id)
                    ->  Seq Scan on order_offer_map oom  (cost=0.00..34179.09 rows=1273009 width=15) (actual time=0.007..91.121 rows=1273009 loops=1)
                    ->  Hash  (cost=35364.57..35364.57 rows=259399 width=218) (actual time=244.571..244.571 rows=156253 loops=1)
                          Buckets: 262144  Batches: 1  Memory Usage: 29098kB
                          ->  Index Scan using "idx$$_00010001" on restaurant_order ro  (cost=0.56..35364.57 rows=259399 width=218) (actual time=0.033..195.939 rows=156253 loops=1)
                                Index Cond: ((date_time >= '2021-05-27 05:00:00'::timestamp without time zone) AND (date_time <= '2021-06-05 16:38:22.758875+00'::timestamp with time zone))
                                Filter: (order_status_code = 'D'::bpchar)
                                Rows Removed by Filter: 73969
              ->  Hash  (cost=673.49..673.49 rows=20949 width=90) (actual time=10.715..10.715 rows=20949 loops=1)
                    Buckets: 32768  Batches: 1  Memory Usage: 2758kB
                    ->  Seq Scan on branch b  (cost=0.00..673.49 rows=20949 width=90) (actual time=0.006..6.397 rows=20949 loops=1)
        ->  Hash  (cost=245.09..245.09 rows=8609 width=56) (actual time=3.466..3.467 rows=8609 loops=1)
              Buckets: 16384  Batches: 1  Memory Usage: 903kB
              ->  Seq Scan on restaurant r  (cost=0.00..245.09 rows=8609 width=56) (actual time=0.003..2.096 rows=8609 loops=1)
  ->  Hash  (cost=1.56..1.56 rows=56 width=29) (actual time=0.026..0.026 rows=56 loops=1)
        Buckets: 1024  Batches: 1  Memory Usage: 12kB
        ->  Seq Scan on city c  (cost=0.00..1.56 rows=56 width=29) (actual time=0.007..0.015 rows=56 loops=1)
Planning Time: 1.377 ms
Execution Time: 2071.965 ms

-Ayub
Re: query planner not using index, instead using sequential scan
> QUERY PLAN
> ----------
> Hash Join  (cost=37.01..39.28 rows=1 width=4) (actual time=0.124..0.125 rows=0 loops=1)
>   Hash Cond: (t1.id = t.id)
>   ->  Seq Scan on t1  (cost=0.00..2.00 rows=100 width=4) (actual time=0.004..0.008 rows=100 loops=1)
>   ->  Hash  (cost=37.00..37.00 rows=1 width=4) (actual time=0.109..0.109 rows=1 loops=1)
>         Buckets: 1024  Batches: 1  Memory Usage: 9kB
>         ->  Seq Scan on t  (cost=0.00..37.00 rows=1 width=4) (actual time=0.058..0.107 rows=1 loops=1)
>               Filter: (created_on = '2021-06-01 12:48:45.141123'::timestamp without time zone)
>               Rows Removed by Filter: 999  -- as there is no useful index, t_created_on_idx will fetch all pages and then remove rows from them, expensive
> Planning Time: 0.111 ms
> Execution Time: 0.162 ms
> (10 rows)
>
> postgres=# explain analyze select 1 from t join t1 on (t.id = t1.id) where created_on = '2021-06-01 12:48:45.141123';
> QUERY PLAN
> ----------
> Hash Join  (cost=8.32..33.47 rows=1 width=4) (actual time=0.025..0.026 rows=0 loops=1)
>   Hash Cond: (t1.id = t.id)
>   ->  Seq Scan on t1  (cost=0.00..22.00 rows=1200 width=4) (actual time=0.009..0.009 rows=1 loops=1)
>   ->  Hash  (cost=8.31..8.31 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)
>         Buckets: 1024  Batches: 1  Memory Usage: 8kB
>         ->  Index Scan using t_created_on_idx1 on t  (cost=0.29..8.31 rows=1 width=4) (actual time=0.014..0.014 rows=0 loops=1)
>               Index Cond: (created_on = '2021-06-01 12:48:45.141123'::timestamp without time zone)  -- exact match using btree index
> Planning Time: 0.255 ms
> Execution Time: 0.071 ms
> (9 rows)
>
> But from Ayub's plan, the number of rows fetched is large, and it is also removing rows post index scan.
> If that can be improved with a btree index that does not filter unwanted rows, the run may be faster?
> But I guess if there are 156k rows, the planner would have found a win in seq scan.
>
> Ayub,
> just for the sake of understanding,
> can you run the query using
>
> postgres=# set enable_seqscan TO 0;
> SET
> postgres=# -- explain analyze
>
> postgres=# set enable_seqscan TO 1;
> SET
>
> On Sun, 6 Jun 2021 at 00:59, Tom Lane wrote:
>> Ayub Khan writes:
>> > could someone clarify why the LEFT JOIN order_offer_map oom using
>> > (order_id) in the below query is using sequential scan instead of
>> > using index on order_id which is defined in order_offer_map table.
>>
>> Probably because it estimates the hash join to restaurant_order is
>> faster than a nestloop join would be.  I think it's likely right.
>> You'd need very optimistic assumptions about the cost of an
>> individual index probe into order_offer_map to conclude that 156K
>> of them would be faster than the 476ms that are being spent here
>> to read order_offer_map and join it to the result of the
>> indexscan on restaurant_order.
>>
>> If, indeed, that *is* faster on your hardware, you might want
>> to dial down random_page_cost to get more-relevant estimates.
>>
>> regards, tom lane
>
> --
> Thanks,
> Vijay
> Mumbai, India

--
Sun Certified Enterprise Architect 1.5
Sun Certified Java Programmer 1.4
Microsoft Certified Systems Engineer 2000
http://in.linkedin.com/pub/ayub-khan/a/811/b81
mobile:+966-502674604
--
It is proved that Hard Work and knowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!
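Tom's last suggestion can be tried directly at the session level; a sketch (the value 1.1 is an example commonly used for SSD-backed storage, not a figure from the thread):

```sql
-- Try the same query with cheaper random I/O assumed, for this session only.
SET random_page_cost = 1.1;   -- default is 4.0; lower values make index probes look cheaper
-- re-run EXPLAIN ANALYZE on the query here and compare the join strategy chosen
RESET random_page_cost;
```

If the nestloop-with-index plan really does win on this hardware, the lower setting can then be applied more broadly (per database or in the RDS parameter group).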
dexter on AWS RDS auto tune queries
Other than Dexter, is there an auto tune or query performance indicator for postgres? Also, which are the most commonly used monitoring tools (slow query, cpu, index creation for missing indexes) being used for postgres? --Ayub
Re: dexter on AWS RDS auto tune queries
Thank you @Julien @Christophe: yes I am using RDS Performance Insights, however it might be more helpful if it could give more info about the slowness of the queries and what improvements could be made to the queries themselves.

I am using pgMustard to analyze a slow query, and there the suggestion is to create an index on app2.user_id, however app2.user_id is a primary key. Below is the query and its explain:

select * from (
    SELECT act.*,
           app1.user_name AS created_by_username,
           app2.user_name AS last_updated_by_username
    FROM account_transactions AS act
    LEFT OUTER JOIN app_user AS app1 ON app1.user_id = act.created_by
    LEFT OUTER JOIN app_user AS app2 ON app2.user_id = act.last_updated_by
    WHERE act.is_deleted = 'false'
      AND act.CREATION_DATE BETWEEN TO_DATE('06/06/2021', 'DD-MM-') AND TO_DATE('07-06-2021', 'DD-MM-')
    ORDER BY act.ID DESC
) as items order by id desc

Sort  (cost=488871.14..489914.69 rows=417420 width=270) (actual time=2965.815..2979.921 rows=118040 loops=1)
  Sort Key: act.id DESC
  Sort Method: quicksort  Memory: 57607kB
  ->  Merge Left Join  (cost=422961.21..449902.61 rows=417420 width=270) (actual time=2120.021..2884.484 rows=118040 loops=1)
        Merge Cond: (act.last_updated_by = ((app2.user_id)::numeric))
        ->  Sort  (cost=7293.98..7301.62 rows=3054 width=257) (actual time=464.243..481.292 rows=118040 loops=1)
              Sort Key: act.last_updated_by
              Sort Method: quicksort  Memory: 50899kB
              ->  Nested Loop Left Join  (cost=0.87..7117.21 rows=3054 width=257) (actual time=0.307..316.148 rows=118040 loops=1)
                    ->  Index Scan using creation_date on account_transactions act  (cost=0.44..192.55 rows=3054 width=244) (actual time=0.295..67.330 rows=118040 loops=1)
                          Index Cond: ((creation_date >= to_date('06/06/2021'::text, 'DD-MM-'::text)) AND (creation_date <= to_date('07-06-2021'::text, 'DD-MM-'::text)))
                          Filter: ((is_deleted)::text = 'false'::text)
                    ->  Index Scan using app_user_pk on app_user app1  (cost=0.43..2.27 rows=1 width=21) (actual time=0.002..0.002 rows=1 loops=118040)
                          Index Cond: (user_id = act.created_by)
        ->  Sort  (cost=415667.22..423248.65 rows=3032573 width=21) (actual time=1655.748..1876.596 rows=3079326 loops=1)
              Sort Key: ((app2.user_id)::numeric)
              Sort Method: quicksort  Memory: 335248kB
              ->  Seq Scan on app_user app2  (cost=0.00..89178.73 rows=3032573 width=21) (actual time=0.013..575.630 rows=3032702 loops=1)
Planning Time: 2.222 ms
Execution Time: 3009.387 ms

On Mon, Jun 7, 2021 at 8:00 AM Christophe Pettus wrote:
>
>> On Jun 6, 2021, at 21:51, Ayub Khan wrote:
>> Other than Dexter, is there an auto tune or query performance indicator for postgres?
>
> Generally, auto-creating indexes isn't a great idea.  I respect the work
> that went into Dexter, but it's much better to find the queries and study
> them, then decide if index creation is the right thing.
>
> RDS has Performance Insights, which is a very useful tool for finding
> where the load on your server is actually coming from.
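The give-away in this plan is the merge condition (act.last_updated_by = ((app2.user_id)::numeric)): the two join columns have different types, so Postgres casts app_user.user_id to numeric and can no longer use the primary-key index on it, falling back to a 3M-row seq scan plus an in-memory sort. Aligning the column types removes the cast; a sketch (bigint is an assumed target type — use whatever type app_user.user_id actually has):

```sql
-- Make the FK columns match app_user.user_id's type so the PK index is usable.
ALTER TABLE account_transactions
    ALTER COLUMN last_updated_by TYPE bigint USING last_updated_by::bigint,
    ALTER COLUMN created_by      TYPE bigint USING created_by::bigint;
```

Note the app1 join (created_by) already uses the PK index in this plan; an explicit cast in the join condition would be a workaround, but fixing the column type is the cleaner fix, which is what the follow-up in this thread goes on to do.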
Re: dexter on AWS RDS auto tune queries
Julien,

Thank you for the pointer. I will change the data type and verify the query again.

-Ayub

On Mon, Jun 7, 2021 at 7:51 AM Ayub Khan wrote:
>
> Other than Dexter, is there an auto tune or query performance indicator
> for postgres?
> Also which are the most commonly used monitoring (slow query, cpu, index
> creation for missing indexes) tools being used for postgres?
>
> --Ayub
slow query
I checked that all the indexes are defined on the tables, however the query seems slow; below is the plan. Can anyone give any pointers to verify?

SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,
       b.menu_item_category_desc, c.menu_item_variant_id,
       c.menu_item_variant_type_id, c.price, c.size_id,
       c.parent_menu_item_variant_id, d.menu_item_variant_type_desc,
       e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name
FROM menu_item_category AS b, menu_item_variant AS c,
     menu_item_variant_type AS d, item_size AS e, restaurant AS f,
     menu_item AS a
LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id)
WHERE a.menu_item_category_id = b.menu_item_category_id
  AND a.menu_item_id = c.menu_item_id
  AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
  AND d.is_hidden = 'false'
  AND c.size_id = e.size_id
  AND a.restaurant_id = f.restaurant_id
  AND f.restaurant_id = 1528
  AND (a.menu_item_category_id = NULL OR NULL IS NULL)
  AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id)
                                FROM menu_item_variant
                                WHERE menu_item_id = a.menu_item_id
                                  AND deleted = 'N'
                                LIMIT 1)
  AND a.active = 'Y'
  AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',') NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)
  AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'
ORDER BY a.row_order, menu_item_id;

below is the plan

Sort  (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)
  Sort Key: a.row_order, a.menu_item_id
  Sort Method: quicksort  Memory: 48kB
  ->  Nested Loop Left Join  (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1)
        Join Filter: (a.mark_id = m.mark_id)
        Rows Removed by Join Filter: 267
        ->  Nested Loop  (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1)
              ->  Nested Loop  (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1)
                    ->  Nested Loop  (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1)
                          ->  Nested Loop  (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1)
                                ->  Nested Loop  (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1)
                                      ->  Index Scan using menu_item_restaurant_id on menu_item a  (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1)
                                            Index Cond: (restaurant_id = 1528)
                                            Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))
                                            Rows Removed by Filter: 194
                                      ->  Index Scan using menu_item_category_pk on menu_item_category b  (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)
                                            Index Cond: (menu_item_category_id = a.menu_item_category_id)
                                ->  Index Scan using menu_item_variant_pk on menu_item_variant c  (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)
                                      Index Cond: (menu_item_variant_id = (SubPlan 1))
                                      Filter: (a.menu_item_id = menu_item_id)
                                      SubPlan 1
                                        ->  Limit  (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89)
                                              ->  Aggregate  (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)
                                                    ->  Index Scan using "idx$$_023a0001" on menu_item_variant  (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89)
                                                          Index Cond: (menu_item_id = a.menu_item_id)
                                                          Filter: (deleted = 'N'::bpchar)
                                                          Rows Removed by Filter: 4
                          ->  Index Scan using menu_item_variant_type_pk on menu_item_variant_type d  (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89)
                                Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id)
                                Filter: ((is_hidden)::text = 'false'::text)
                    ->  Index Scan using size_pk on item_size e  (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89)
                          Index Cond: (size_id = c.size_id)
              ->  Index Scan using "restaurant_idx$$_274b003d" on restaurant f  (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.002 rows=1 loops=89)
                    Index Cond: (restaurant_id = 1528)
        ->  Seq Scan on mark m  (cost=0.00..1.03 rows=3 width=12)
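One observation on the query shape: the correlated min(menu_item_variant_id) subquery is re-run once per surviving menu_item row (loops=89 in SubPlan 1). An equivalent single-pass formulation is DISTINCT ON; a sketch under the assumption that "lowest non-deleted variant id per item" is the intended semantics (the LIMIT 1 in the original is redundant, since min() always yields one row):

```sql
-- Pick the lowest non-deleted variant per menu item in one pass;
-- this can be joined to menu_item instead of the per-row SubPlan.
SELECT DISTINCT ON (menu_item_id) menu_item_id, menu_item_variant_id
FROM menu_item_variant
WHERE deleted = 'N'
ORDER BY menu_item_id, menu_item_variant_id;
```

At 89 rows the SubPlan is cheap here; this rewrite matters more as the number of qualifying menu items grows.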
Re: slow query
In AWS RDS Performance Insights the client-write wait is high, and the API which receives data on the mobile side is slow during the load test.

On Tue, 8 Jun 2021, 19:03 Ayub Khan wrote:
>
> I checked all the indexes are defined on the tables however the query
> seems slow, below is the plan. Can any one give any pointers to verify ?
>
> [...]
Re: slow query
below is the function definition of is_menu_item_available; for each item it returns, based on the current day and time, whether the item is available or not. The same API works fine on Oracle; I am seeing this slowness after migrating the queries to PostgreSQL RDS on AWS.

CREATE OR REPLACE FUNCTION is_menu_item_available(
    i_menu_item_id bigint,
    i_check_availability character)
RETURNS character
LANGUAGE plpgsql
COST 100 VOLATILE PARALLEL UNSAFE
AS $BODY$
DECLARE
    l_current_day        NUMERIC(1);
    o_time               CHARACTER VARYING(10);
    l_current_interval   INTERVAL DAY TO SECOND(2);
    item_available_count NUMERIC(10);
BEGIN
    item_available_count := 0;
    IF i_check_availability = 'Y' THEN
        SELECT CASE TO_CHAR(now(), 'fmday')
                   WHEN 'monday'    THEN 1
                   WHEN 'tuesday'   THEN 2
                   WHEN 'wednesday' THEN 3
                   WHEN 'thursday'  THEN 4
                   WHEN 'friday'    THEN 5
                   WHEN 'saturday'  THEN 6
                   WHEN 'sunday'    THEN 7
               END AS d
          INTO STRICT l_current_day;

        SELECT (('0 ' || EXTRACT(HOUR FROM ((now() AT TIME ZONE 'UTC') AT TIME ZONE '+03:00'))
                 || ':' || EXTRACT(MINUTE FROM ((now() AT TIME ZONE 'UTC') AT TIME ZONE '+03:00'))
                 || ':00')::interval)
          INTO l_current_interval;

        SELECT COUNT(*)
          INTO STRICT item_available_count
          FROM menu_item_availability
         WHERE menu_item_id = i_menu_item_id;

        IF item_available_count = 0 THEN
            RETURN 'Y';
        ELSE
            SELECT COUNT(*)
              INTO STRICT item_available_count
              FROM menu_item_availability AS mia,
                   availability AS av
             WHERE mia.menu_item_id = i_menu_item_id
               AND mia.availability_id = av.id
               AND date_trunc('DAY', now()) + l_current_interval >=
                   (CASE WHEN l_current_interval <  '6 hour'::INTERVAL
                             THEN date_trunc('DAY', now()) + av.start_time - (1::NUMERIC || ' days')::INTERVAL
                         WHEN l_current_interval >= '6 hour'::INTERVAL
                             THEN date_trunc('DAY', now()) + av.start_time
                    END)
               AND date_trunc('DAY', now()) + l_current_interval <=
                   (CASE WHEN l_current_interval <  '6 hour'::INTERVAL
                             THEN date_trunc('DAY', now()) + av.end_time - (1::NUMERIC || ' days')::INTERVAL
                         WHEN l_current_interval >= '6 hour'::INTERVAL
                             THEN date_trunc('DAY', now()) + av.end_time
                    END)
               AND (av.day_of_week LIKE CONCAT_WS('', '%', l_current_day, '%')
                    OR av.day_of_week LIKE '%0%')
               AND is_deleted = 0;
        END IF;

        IF item_available_count > 0 THEN
            RETURN 'Y';
        ELSE
            RETURN 'N';
        END IF;
    ELSE
        RETURN 'Y';
    END IF;
END;
$BODY$;

On Tue, Jun 8, 2021 at 7:03 PM Ayub Khan wrote:
> I checked that all the indexes are defined on the tables, however the query
> still seems slow; below is the plan. Can anyone give any pointers to verify?
>
> [quoted query snipped; the full statement appears elsewhere in the thread]
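[Editorial aside, not part of the original thread: two hedged simplifications that may be worth testing against the real schema. EXTRACT(isodow ...) yields exactly the 1 = Monday .. 7 = Sunday numbering that the CASE expression above builds by hand, and since the function only reads data, it can likely be marked STABLE rather than VOLATILE; VOLATILE forces re-evaluation on every call and blocks several planner optimizations. A sketch:]

```sql
-- isodow already maps Monday..Sunday to 1..7, replacing the
-- TO_CHAR(now(), 'fmday') CASE expression.
SELECT EXTRACT(isodow FROM now()) AS l_current_day;

-- Hedged: only valid if the function truly has no side effects.
ALTER FUNCTION is_menu_item_available(bigint, character) STABLE;
```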
Re: waiting for client write
@Magnus: there is an EC2 Tomcat server which communicates with PostgreSQL. This is a replica of our production setup, except that here the test database is Postgres RDS while production runs Oracle on an EC2 instance.

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. The API performance is slow. I read that
> this might be due to IOPS on RDS. However we have 80k IOPS on this test
> RDS.
>
> Below is the query which is being load tested
>
> SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,
>        b.menu_item_category_desc, c.menu_item_variant_id,
>        c.menu_item_variant_type_id, c.price, c.size_id,
>        c.parent_menu_item_variant_id, d.menu_item_variant_type_desc,
>        e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name
> FROM .menu_item_category AS b, .menu_item_variant AS c,
>      .menu_item_variant_type AS d, .item_size AS e,
>      .restaurant AS f, .menu_item AS a
>      LEFT OUTER JOIN .mark AS m ON (a.mark_id = m.mark_id)
> WHERE a.menu_item_category_id = b.menu_item_category_id
>   AND a.menu_item_id = c.menu_item_id
>   AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
>   AND d.is_hidden = 'false'
>   AND c.size_id = e.size_id
>   AND a.restaurant_id = f.restaurant_id
>   AND f.restaurant_id = 1528
>   AND (a.menu_item_category_id = NULL OR NULL IS NULL)
>   AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id)
>                                   FROM .menu_item_variant
>                                  WHERE menu_item_id = a.menu_item_id
>                                    AND deleted = 'N' LIMIT 1)
>   AND a.active = 'Y'
>   AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',')
>        NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)
>   AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'
> ORDER BY a.row_order, menu_item_id;
>
> --Ayub

--
Sun Certified Enterprise Architect 1.5
Sun Certified Java Programmer 1.4
Microsoft Certified Systems Engineer 2000
http://in.linkedin.com/pub/ayub-khan/a/811/b81
mobile: +966-502674604

--
It is proved that Hard Work and knowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!
Re: slow query
Below is the test setup:

JMeter --> (load-balanced Tomcat on EC2 instances) --> RDS read replicas

All of these run on different EC2 instances in the AWS cloud, in the same region.

On Tue, 8 Jun 2021, 19:03 Ayub Khan wrote:
> I checked that all the indexes are defined on the tables, however the query
> still seems slow; below is the plan. Can anyone give any pointers to verify?
>
> SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,
>        b.menu_item_category_desc, c.menu_item_variant_id,
>        c.menu_item_variant_type_id, c.price, c.size_id,
>        c.parent_menu_item_variant_id, d.menu_item_variant_type_desc,
>        e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name
> FROM menu_item_category AS b, menu_item_variant AS c,
>      menu_item_variant_type AS d, item_size AS e,
>      restaurant AS f, menu_item AS a
>      LEFT OUTER JOIN mark AS m ON (a.mark_id = m.mark_id)
> WHERE a.menu_item_category_id = b.menu_item_category_id
>   AND a.menu_item_id = c.menu_item_id
>   AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
>   AND d.is_hidden = 'false'
>   AND c.size_id = e.size_id
>   AND a.restaurant_id = f.restaurant_id
>   AND f.restaurant_id = 1528
>   AND (a.menu_item_category_id = NULL OR NULL IS NULL)
>   AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id)
>                                   FROM menu_item_variant
>                                  WHERE menu_item_id = a.menu_item_id
>                                    AND deleted = 'N' LIMIT 1)
>   AND a.active = 'Y'
>   AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',')
>        NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)
>   AND is_menu_item_available(a.menu_item_id, 'Y') = 'Y'
> ORDER BY a.row_order, menu_item_id;
>
> below is the plan
>
> Sort  (cost=189.27..189.27 rows=1 width=152) (actual time=5.876..5.885 rows=89 loops=1)
>   Sort Key: a.row_order, a.menu_item_id
>   Sort Method: quicksort  Memory: 48kB
>   ->  Nested Loop Left Join  (cost=5.19..189.26 rows=1 width=152) (actual time=0.188..5.809 rows=89 loops=1)
>         Join Filter: (a.mark_id = m.mark_id)
>         Rows Removed by Join Filter: 267
>         ->  Nested Loop  (cost=5.19..188.19 rows=1 width=148) (actual time=0.181..5.629 rows=89 loops=1)
>               ->  Nested Loop  (cost=4.90..185.88 rows=1 width=152) (actual time=0.174..5.443 rows=89 loops=1)
>                     ->  Nested Loop  (cost=4.61..185.57 rows=1 width=144) (actual time=0.168..5.272 rows=89 loops=1)
>                           ->  Nested Loop  (cost=4.32..185.25 rows=1 width=136) (actual time=0.162..5.066 rows=89 loops=1)
>                                 ->  Nested Loop  (cost=0.71..179.62 rows=1 width=99) (actual time=0.137..3.986 rows=89 loops=1)
>                                       ->  Index Scan using menu_item_restaurant_id on menu_item a  (cost=0.42..177.31 rows=1 width=87) (actual time=0.130..3.769 rows=89 loops=1)
>                                             Index Cond: (restaurant_id = 1528)
>                                             Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))
>                                             Rows Removed by Filter: 194
>                                       ->  Index Scan using menu_item_category_pk on menu_item_category b  (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)
>                                             Index Cond: (menu_item_category_id = a.menu_item_category_id)
>                                 ->  Index Scan using menu_item_variant_pk on menu_item_variant c  (cost=3.60..5.62 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)
>                                       Index Cond: (menu_item_variant_id = (SubPlan 1))
>                                       Filter: (a.menu_item_id = menu_item_id)
>                                       SubPlan 1
>                                         ->  Limit  (cost=3.17..3.18 rows=1 width=8) (actual time=0.009..0.009 rows=1 loops=89)
>                                               ->  Aggregate  (cost=3.17..3.18 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)
>                                                     ->  Index Scan using "idx$$_023a0001" on menu_item_variant  (cost=0.43..3.15 rows=8 width=8) (actual time=0.004..0.007 rows=7 loops=89)
>                                                           Index Cond: (menu_item_id = a.menu_item_id)
>                                                           Filter: (deleted = 'N'::bpchar)
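[Editorial aside, not part of the original thread: the correlated (SELECT min(menu_item_variant_id) ... LIMIT 1) subquery above runs once per menu_item row (loops=89 in the plan). A hedged alternative, assuming menu_item_variant_id is the intended tie-break, is to pick each item's lowest-numbered live variant once with DISTINCT ON and join to that instead:]

```sql
-- Sketch (untested against this schema): lowest non-deleted variant per item,
-- usable as a derived table or lateral join instead of a per-row min() subquery.
SELECT DISTINCT ON (menu_item_id) *
FROM menu_item_variant
WHERE deleted = 'N'
ORDER BY menu_item_id, menu_item_variant_id;
```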
Re: waiting for client write
I did profiling of the application, and it seems most of the CPU consumption is in executing the stored procedure. Attached is a screenshot of the profile.

--Ayub

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query and signature snipped]
Re: waiting for client write
Vijay, both Tomcat and PostgreSQL are in the same region as the database server. It is RDS, so I do not have shell access to it.

Jeff, the Tomcat profile suggests that it is waiting for a response from the database server. Tomcat and RDS are in the same availability zone, eu-central-1a.

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query and signature snipped]
Re: waiting for client write
Pavan, in jProfiler I see that most CPU is consumed while the Tomcat thread is stuck in PgPreparedStatement.execute. I am using version 42.2.16 of the JDBC driver.

Ranier, below is EXPLAIN ANALYZE output for the query (the same statement quoted elsewhere in the thread):

Nested Loop Left Join  (cost=5.15..162.10 rows=1 width=148) (actual time=0.168..5.070 rows=89 loops=1)
  Join Filter: (a.mark_id = m.mark_id)
  Rows Removed by Join Filter: 267
  ->  Nested Loop  (cost=5.15..161.04 rows=1 width=144) (actual time=0.161..4.901 rows=89 loops=1)
        ->  Nested Loop  (cost=4.86..158.72 rows=1 width=148) (actual time=0.156..4.729 rows=89 loops=1)
              ->  Nested Loop  (cost=4.57..158.41 rows=1 width=140) (actual time=0.151..4.572 rows=89 loops=1)
                    ->  Nested Loop  (cost=4.28..158.10 rows=1 width=132) (actual time=0.145..4.378 rows=89 loops=1)
                          ->  Nested Loop  (cost=0.71..152.51 rows=1 width=95) (actual time=0.121..3.334 rows=89 loops=1)
                                ->  Index Scan using menu_item_restaurant_id on menu_item a  (cost=0.42..150.20 rows=1 width=83) (actual time=0.115..3.129 rows=89 loops=1)
                                      Index Cond: (restaurant_id = 1528)
                                      Filter: ((active = 'Y'::bpchar) AND (is_menu_item_available(menu_item_id, 'Y'::bpchar) = 'Y'::bpchar))
                                      Rows Removed by Filter: 194
                                ->  Index Scan using menu_item_category_pk on menu_item_category b  (cost=0.29..2.31 rows=1 width=20) (actual time=0.002..0.002 rows=1 loops=89)
                                      Index Cond: (menu_item_category_id = a.menu_item_category_id)
                          ->  Index Scan using menu_item_variant_pk on menu_item_variant c  (cost=3.57..5.59 rows=1 width=45) (actual time=0.002..0.002 rows=1 loops=89)
                                Index Cond: (menu_item_variant_id = (SubPlan 1))
                                Filter: (a.menu_item_id = menu_item_id)
                                SubPlan 1
                                  ->  Limit  (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)
                                        ->  Aggregate  (cost=3.13..3.14 rows=1 width=8) (actual time=0.008..0.008 rows=1 loops=89)
                                              ->  Index Scan using "idx$$_023a0001" on menu_item_variant  (cost=0.43..3.11 rows=8 width=8) (actual time=0.003..0.007 rows=7 loops=89)
                                                    Index Cond: (menu_item_id = a.menu_item_id)
                                                    Filter: (deleted = 'N'::bpchar)
                                                    Rows Removed by Filter: 4
                    ->  Index Scan using menu_item_variant_type_pk on menu_item_variant_type d  (cost=0.29..0.31 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=89)
                          Index Cond: (menu_item_variant_type_id = c.menu_item_variant_type_id)
                          Filter: ((is_hidden)::text = 'false'::text)
              ->  Index Scan using size_pk on item_size e  (cost=0.29..0.31 rows=1 width=16) (actual time=0.001..0.001 rows=1 loops=89)
                    Index Cond: (size_id = c.size_id)
        ->  Index Scan using "restaurant_idx$$_274b003d" on restaurant f  (cost=0.29..2.30 rows=1 width=12) (actual time=0.001..0.001 rows=1 loops=89)
              Index Cond: (restaurant_id = 1528)
  ->  Seq Scan on mark m  (cost=0.00..1.03 rows=3 width=12) (actual time=0.000..0.001 rows=3 loops=89)
Planning Time: 2.078 ms
Execution Time: 5.141 ms

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> [quoted text snipped]
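[Editorial aside, not part of the original thread: in the plan above, the index scan on menu_item_restaurant_id discards 194 of 283 rows in its Filter step. is_menu_item_available() is declared VOLATILE, so it cannot appear in an index predicate, but the static part of the filter can. A hedged sketch with a hypothetical index name:]

```sql
-- Hypothetical partial index: serves restaurant_id lookups for active items
-- and stores row_order, which the query's ORDER BY uses.
CREATE INDEX menu_item_active_by_restaurant_idx
    ON menu_item (restaurant_id, row_order)
    WHERE active = 'Y';
```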
Re: waiting for client write
Ranier, I tried VACUUM ANALYZE on the tables involved multiple times, and also tried the statistics approach.

Pavan, I upgraded to version 42.2.21 of the JDBC driver and am using HikariCP 3.1.0 for connection pool management. jProfiler shows the threads are stuck with high CPU usage in org.postgresql.jdbc.PgPreparedStatement.execute. Below is the Java code which calls PostgreSQL:

Connection con = null;
CallableStatement callableStatement = null;
ResultSet rs = null;
ResultSet rs1 = null;
PreparedStatement ps = null;
try {
    con = connectionManager.getConnetion();
    con.setAutoCommit(false);
    callableStatement = con.prepareCall("call menu_pkg$get_menu_items_p_new(?,?,?,?,?,?)");
    if (catId == 0)
        callableStatement.setNull(2, Types.BIGINT);
    else
        callableStatement.setLong(2, catId);
    callableStatement.setString(3, "Y");
    if (branchId == 0)
        callableStatement.setString(4, null);
    else
        callableStatement.setLong(4, branchId);
    callableStatement.setNull(5, Types.OTHER);
    callableStatement.setNull(6, Types.OTHER);
    callableStatement.registerOutParameter(5, Types.OTHER);
    callableStatement.registerOutParameter(6, Types.OTHER);
    callableStatement.execute();
    rs = (ResultSet) callableStatement.getObject(5);
    rs1 = (ResultSet) callableStatement.getObject(6);
    MenuMobile menuMobile;
    try {
        while (rs.next()) {
            // process rs
        }
        MenuCombo menuCombo;
        while (rs1.next()) {
            // process rs1
        }
        menuMobileListCombo.setMenuComboList(menuComboList);
        menuMobileListCombo.setMenuMobileList(menuMobileList);
    } catch (SQLException e) {
        LOG.error(e.getLocalizedMessage(), e);
    }
    con.commit();
    con.setAutoCommit(true);
} catch (SQLException e) {
    LOG.error(e.getLocalizedMessage(), e);
    throw e;
} finally {
    if (rs != null) rs.close();
    if (rs1 != null) rs1.close();
    if (ps != null) ps.close();
    if (callableStatement != null) callableStatement.close();
    if (con != null) con.close();
}
return menuMobileListCombo;
}

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query and signature snipped]
Re: waiting for client write
Ranier, I verified the link you gave and also checked the AWS documentation, and saw exactly the output shown in the AWS docs:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/network_mtu.html

[ec2-user ~]$ tracepath amazon.com
 1?: [LOCALHOST]     pmtu 9001
 1:  ip-xxx-xx-xx-1.us-west-1.compute.internal (xxx.xx.xx.x)   0.187ms pmtu 1500

Should the LOCALHOST pmtu be updated to 1500?

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query and signature snipped]
Re: waiting for client write
Ranier, both production and test VMs are running Ubuntu. The command below, executed from the client VM, shows that it is using a PMTU of 9001:

# tracepath dns-name-of-rds
 1?: [LOCALHOST]     pmtu 9001

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query and signature snipped]
Re: waiting for client write
Jeff, both the Tomcat VM and the RDS VMs have 25 Gbps networking. The PostgreSQL DB class is db.r6g.16xlarge; the Tomcat VM is c5.9xlarge.

--Ayub

On Wed, 9 Jun 2021, 17:47 Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query snipped]
Re: waiting for client write
Ranier, this issue occurs only with queries which are slow; if it were an MTU issue, it would affect all the APIs. I also tried on an Aurora DB and saw the same plan and the same slowness.

On Wed, 9 Jun 2021, 17:47 Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query snipped]
Re: waiting for client write
Ranier, Vijay: sure, will try and check out pgbench and the MTU.

--Ayub

On Wed, 9 Jun 2021, 17:47 Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query snipped]
Re: waiting for client write
Ranier, I did the MTU change and it did seem to bring the clientWrite waits down to about half. The change was to allow inbound ICMP "Destination Unreachable: fragmentation needed and DF set" on the security group:

"When there is a difference in the MTU size in the network between two hosts, first make sure that your network settings don't block path MTU discovery (PMTUD). PMTUD enables the receiving host to respond to the originating host with the following ICMP message: Destination Unreachable: fragmentation needed and DF set (ICMP Type 3, Code 4). This message instructs the originating host to use the lowest MTU size along the network path to resend the request."
https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-drop-issues.html

Vijay, below is the pgbench result, executed from the remote instance pointing at RDS:

postgres@localhost:/archive$ pgbench -h pg-cluster -p 5432 -U testuser -c 50 -j 2 -P 60 -T 600 testdb -f /archive/test.sql
starting vacuum...pgbench: error: ERROR:  relation "pgbench_branches" does not exist
pgbench: (ignoring this error and continuing anyway)
pgbench: error: ERROR:  relation "pgbench_tellers" does not exist
pgbench: (ignoring this error and continuing anyway)
pgbench: error: ERROR:  relation "pgbench_history" does not exist
pgbench: (ignoring this error and continuing anyway)
end.
progress: 60.0 s, 18.3 tps, lat 2631.655 ms stddev 293.592
progress: 120.0 s, 19.6 tps, lat 2533.271 ms stddev 223.722
progress: 180.0 s, 20.3 tps, lat 2446.050 ms stddev 158.397
progress: 240.1 s, 19.2 tps, lat 2575.506 ms stddev 292.418
progress: 300.0 s, 20.0 tps, lat 2482.908 ms stddev 181.770
progress: 360.0 s, 22.1 tps, lat 2245.147 ms stddev 110.855
progress: 420.0 s, 20.7 tps, lat 2397.270 ms stddev 289.324
progress: 480.0 s, 18.8 tps, lat 2625.595 ms stddev 240.250
progress: 540.0 s, 20.1 tps, lat 2467.336 ms stddev 133.121
progress: 600.0 s, 20.2 tps, lat 2455.824 ms stddev 137.976
transaction type: /archive/test.sql
scaling factor: 1
query mode: simple
number of clients: 50
number of threads: 2
duration: 600 s
number of transactions actually processed: 12007
latency average = 2480.042 ms
latency stddev = 242.243 ms
tps = 19.955602 (including connections establishing)
tps = 19.955890 (excluding connections establishing)

On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan wrote:
> attached is the screenshot of RDS performance insights for AWS and it
> shows high waiting client writes. [rest of quoted query snipped]
Re: waiting for client write
Vijay,

I did not change the MTU on the network interface, but created an incoming rule on the security group as per the documentation below:

    PMTUD enables the receiving host to respond to the originating host with the
    following ICMP message: Destination Unreachable: fragmentation needed and DF
    set (ICMP Type 3, Code 4). This message instructs the originating host to use
    the lowest MTU size along the network path to resend the request. Without
    this negotiation, packet drop can occur because the request is too large for
    the receiving host to accept.

I also ran another test: instead of using RDS, I installed PostgreSQL on a VM similar to the one where Oracle is installed and tested against that. Even when both the client and PostgreSQL VMs have the same MTU settings, I could still see "client write" waits in the pg activity table.

-Ayub
--
Sun Certified Enterprise Architect 1.5
Sun Certified Java Programmer 1.4
Microsoft Certified Systems Engineer 2000
http://in.linkedin.com/pub/ayub-khan/a/811/b81
mobile: +966-502674604
--
It is proved that Hard Work and knowledge will get you close but attitude will get you there. However, it's the Love of God that will put you over the top!!
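[Editor's note] The PMTUD behaviour described above can be checked from the shell. This is a minimal sketch assuming a Linux host with iproute2; the interface name "eth0" and the host "db-host.example.com" are placeholders, not values from this thread:

```shell
# Show the MTU configured on an interface ("eth0" is a placeholder).
ip -o link show eth0 | grep -o 'mtu [0-9]*'

# Probe the path MTU toward the DB host: -M do sets the DF bit, so an
# oversized packet fails with "message too long" instead of fragmenting.
# 1472 = 1500 (typical MTU) - 20 (IP header) - 8 (ICMP header).
ping -c 1 -M do -s 1472 db-host.example.com
```

If the ping fails while a smaller `-s` succeeds, some hop between client and server has a lower MTU and PMTUD (or the ICMP rule above) matters.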
Re: waiting for client write
Vijay,

Below is the benchmark result when executed against the bench_mark database instead of running the test with the slow query on the application database. This suggests it might not be an issue with MTU, but rather some issue with the application database itself and the query.

postgres@localhost:~$ pgbench -h test-cluster -p 5432 -U testuser -c 50 -j 2 -P 60 -T 600 bench_mark
starting vacuum...end.
progress: 60.0 s, 17830.3 tps, lat 2.765 ms stddev 0.632
progress: 120.0 s, 18450.3 tps, lat 2.681 ms stddev 0.582
progress: 180.0 s, 18405.0 tps, lat 2.688 ms stddev 0.588
progress: 240.0 s, 17087.9 tps, lat 2.897 ms stddev 0.717
progress: 300.0 s, 18280.6 tps, lat 2.706 ms stddev 0.595
progress: 360.0 s, 18433.9 tps, lat 2.683 ms stddev 0.582
progress: 420.0 s, 18308.4 tps, lat 2.702 ms stddev 0.599
progress: 480.0 s, 18156.7 tps, lat 2.725 ms stddev 0.615
progress: 540.0 s, 16803.3 tps, lat 2.946 ms stddev 0.764
progress: 600.0 s, 18266.6 tps, lat 2.708 ms stddev 0.602
transaction type:
scaling factor: 150
query mode: simple
number of clients: 50
number of threads: 2
duration: 600 s
number of transactions actually processed: 10801425
latency average = 2.747 ms
latency stddev = 0.635 ms
tps = 18001.935315 (including connections establishing)
tps = 18002.205940 (excluding connections establishing)
Re: waiting for client write
Could this be a cursor issue on Postgres? There seems to be a difference in how cursors are handled between Postgres and the Oracle database; it appears cursors are returned as buffers to the client side. Below is the stored procedure code, followed by the steps we take on the JDBC side:

CREATE OR REPLACE PROCEDURE ."menu_pkg$get_menu_items_p_new"(
    i_restaurant_id bigint,
    i_category_id bigint,
    i_check_availability text,
    i_branch_id bigint,
    INOUT o_items refcursor,
    INOUT o_combo refcursor)
LANGUAGE 'plpgsql'
AS $BODY$
BEGIN
  OPEN o_items FOR
    SELECT a.menu_item_id, a.menu_item_name, a.menu_item_category_id,
           b.menu_item_category_desc, c.menu_item_variant_id,
           c.menu_item_variant_type_id, c.price, c.size_id,
           c.parent_menu_item_variant_id, d.menu_item_variant_type_desc,
           e.size_desc, f.currency_code, a.image, a.mark_id, m.mark_name
    FROM .menu_item_category AS b, .menu_item_variant AS c,
         .menu_item_variant_type AS d, .item_size AS e, .restaurant AS f,
         .menu_item AS a
         LEFT OUTER JOIN .mark AS m ON (a.mark_id = m.mark_id)
    WHERE a.menu_item_category_id = b.menu_item_category_id
      AND a.menu_item_id = c.menu_item_id
      AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
      AND d.is_hidden = 'false'
      AND c.size_id = e.size_id
      AND a.restaurant_id = f.restaurant_id
      AND f.restaurant_id = i_restaurant_id
      AND (a.menu_item_category_id = i_category_id OR i_category_id IS NULL)
      AND c.menu_item_variant_id = (SELECT MIN(menu_item_variant_id)
                                    FROM .menu_item_variant
                                    WHERE menu_item_id = a.menu_item_id
                                      AND deleted = 'N')
      AND a.active = 'Y'
      AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',')
             NOT LIKE CONCAT_WS('', '%,', i_branch_id, ',%')
           OR i_branch_id IS NULL)
      AND .is_menu_item_available(a.menu_item_id, i_check_availability) = 'Y'
    ORDER BY a.row_order, menu_item_id;

  OPEN o_combo FOR
    SELECT mc.*, f.currency_code,
           (CASE WHEN blob_id IS NOT NULL THEN 'Y' ELSE 'N' END) AS has_image
    FROM .menu_combo AS mc, .restaurant AS f
    WHERE mc.restaurant_id = i_restaurant_id
      AND active = 'Y'
      AND mc.restaurant_id = f.restaurant_id
      AND (menu_item_category_id = i_category_id OR i_category_id IS NULL)
    ORDER BY combo_id;
END;
$BODY$;

Steps on the JDBC side:
1. open connection
2. set autocommit to false
3. create callable statement
4. execute the call
5. get the results
6. set autocommit to true
7. close the resultset, callable statement and connection
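[Editor's note] The JDBC steps above mirror what the server requires for refcursors: the CALL and the FETCHes must happen inside a single transaction, which is why autocommit is turned off. A minimal psql sketch of the same flow; the cursor names 'c_items' and 'c_combo' are illustrative, chosen by the client:

```sql
BEGIN;  -- a refcursor is only valid for the life of the transaction

-- Pass cursor names in; the procedure opens portals under those names.
CALL "menu_pkg$get_menu_items_p_new"(1528, NULL, 'Y', NULL, 'c_items', 'c_combo');

FETCH ALL FROM c_items;  -- rows transfer to the client here, not at CALL time
FETCH ALL FROM c_combo;

COMMIT;  -- closes both portals
```

This is also why timing the CALL alone understates the work: the query largely executes during the FETCH.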
slow performance with cursor
I am using PostgreSQL 12 with cursors in a stored procedure; executing the procedure which uses a cursor slows down the call. However, if I do not use the cursor and just execute the queries using JDBC (Java client), it's fast.

Is there any setting which needs to be modified to improve the performance of cursors? I am also facing slow response when reading blobs (images) from the db. Storing images in the db is not ideal, but this is a legacy application and I wanted to check if there is a quick tweak which can improve the performance while reading blob data from the db.

--Ayub
Re: slow performance with cursor
The slowness is on the database side, as I see the CPU go high for procedures returning results using cursors. If the same query is executed as a prepared statement from the Java client, there is no slowness.

For example, a query returns 84 rows, all text data. If the result is returned by a cursor from the database (stored procedure A executes query Q and returns cursor1), CPU is high on the db. When the Java client is changed to execute the same query as a prepared statement and read the resultset directly, there is no high CPU on the database.

--Ayub
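[Editor's note] One plausible factor, an assumption rather than something confirmed by the thread: PostgreSQL plans a cursor's query using cursor_tuple_fraction (default 0.1), which biases the planner toward fast-start plans on the expectation that only a fraction of rows will be fetched, while a plain prepared statement is planned for total runtime. The setting can be inspected and changed per session:

```sql
SHOW cursor_tuple_fraction;        -- default is 0.1

SET cursor_tuple_fraction = 1.0;   -- plan cursor queries for total runtime,
                                   -- the same way an ordinary statement is planned
```

With the default, a cursor over a query that the client then fetches in full can land on a plan that is much worse overall than the one the prepared statement gets.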
Re: slow performance with cursor
I set cursor_tuple_fraction to 1, and now I am seeing high CPU for FETCH ALL. The number of rows returned is less than 200. Why is high CPU being shown for FETCH ALL?

-Ayub
Re: slow performance with cursor
Justin,

Below is the stored procedure; is there any scope for improvement? (Note: the original used "mi.image != null", which never matches in SQL; IS NOT NULL is the correct test and is used below. Both branches of the IF also open the same o_menu query, so the IF appears redundant as posted.)

CREATE OR REPLACE PROCEDURE "new_api_pkg$get_menu_details_p"(
    i_user_id bigint,
    i_menu_item_id bigint,
    INOUT o_menu refcursor,
    INOUT o_item refcursor,
    INOUT o_choice refcursor)
LANGUAGE 'plpgsql'
AS $BODY$
BEGIN
  IF i_user_id IS NOT NULL THEN
    OPEN o_menu FOR
      SELECT mi.menu_item_id, mi.menu_item_name, mi.menu_item_title, mi.restaurant_id,
             CASE WHEN mi.image IS NOT NULL THEN 'Y' ELSE 'N' END AS has_image,
             0.0 AS rating, 0 AS votes, 0 AS own_rating
      FROM menu_item AS mi
      WHERE mi.menu_item_id = i_menu_item_id AND mi.active = 'Y';
  ELSE
    OPEN o_menu FOR
      SELECT mi.menu_item_id, mi.menu_item_name, mi.menu_item_title, mi.restaurant_id,
             CASE WHEN mi.image IS NOT NULL THEN 'Y' ELSE 'N' END AS has_image,
             0.0 AS rating, 0 AS votes, 0 AS own_rating
      FROM menu_item AS mi
      WHERE mi.menu_item_id = i_menu_item_id AND mi.active = 'Y';
  END IF;

  OPEN o_item FOR
    SELECT c.menu_item_variant_id, c.menu_item_variant_type_id, c.package_type_code,
           c.packages_only, c.price,
           CASE WHEN c.package_type_code = 'P'
                THEN (SELECT SUM(miv1.calories)
                      FROM package_component AS pkg_cpm1
                      INNER JOIN menu_item_variant AS miv1
                              ON pkg_cpm1.component_id = miv1.menu_item_variant_id
                      WHERE pkg_cpm1.package_id = c.menu_item_variant_id)
                ELSE c.calories
           END AS calories,
           c.size_id, c.parent_menu_item_variant_id,
           d.menu_item_variant_type_desc, d.menu_item_variant_type_desc_ar,
           e.size_desc, e.size_desc_ar, 15 AS preparation_time,
           (SELECT STRING_AGG(CONCAT_WS('', mi.menu_item_name, ' ', s.size_desc),
                              ' + '::TEXT ORDER BY pc.component_id)
            FROM package_component AS pc, menu_item_variant AS miv, menu_item AS mi,
                 menu_item_variant_type AS mivt, item_size AS s
            WHERE pc.component_id = miv.menu_item_variant_id
              AND miv.menu_item_id = mi.menu_item_id
              AND miv.size_id = s.size_id
              AND pc.package_id = c.menu_item_variant_id
              AND mivt.is_hidden = 'false'
              AND mivt.menu_item_variant_type_id = miv.menu_item_variant_type_id
            GROUP BY pc.package_id) AS package_name
    FROM menu_item AS a, menu_item_variant AS c, menu_item_variant_type AS d, item_size AS e
    WHERE a.menu_item_id = c.menu_item_id
      AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
      AND d.is_hidden = 'false'
      AND c.size_id = e.size_id
      AND a.menu_item_id = i_menu_item_id
      AND a.active = 'Y'
      AND c.deleted = 'N'
    ORDER BY c.menu_item_variant_id;

  OPEN o_choice FOR
    SELECT c.choice_id, c.choice_name, c.choice_name_ar, c.calories
    FROM choice AS c, menu_item_choice AS mc, menu_item AS mi
    WHERE c.choice_id = mc.choice_id
      AND mc.menu_item_id = mi.menu_item_id
      AND mc.menu_item_id = i_menu_item_id
      AND mi.active = 'Y';
END;
$BODY$;
slow query to improve performace
Hi,

Could someone review the query below and suggest some steps to improve its performance? The query is created as a view, which is used to get the aggregates of orders based on their current status.

Thanks

HashAggregate  (cost=334063.59..334064.17 rows=58 width=213) (actual time=1180.016..1180.053 rows=49 loops=1)
  Group Key: c.city_id, '2022-02-25 23:09:26.587835'::timestamp without time zone, (count(*))
  ->  Nested Loop Left Join  (cost=154967.02..312398.93 rows=133321 width=92) (actual time=450.918..961.314 rows=156105 loops=1)
        Join Filter: (c_1.city_id = c.city_id)
        Rows Removed by Join Filter: 103951
        ->  Hash Join  (cost=2343.27..154695.90 rows=133321 width=76) (actual time=12.345..456.445 rows=156105 loops=1)
              Hash Cond: (b.city_id = c.city_id)
              ->  Hash Join  (cost=2324.74..154301.83 rows=133321 width=55) (actual time=12.266..420.340 rows=156105 loops=1)
                    Hash Cond: (o.branch_id = b.branch_id)
                    ->  Append  (cost=0.57..151627.65 rows=133321 width=55) (actual time=0.022..347.441 rows=156105 loops=1)
                          Subplans Removed: 23
                          ->  Index Scan using restaurant_order_p_202202_202203_date_time_idx on restaurant_order_p_202202_202203 o  (cost=0.56..150904.37 rows=133298 width=55) (actual time=0.022..333.768 rows=156105 loops=1)
                                Index Cond: ((date_time >= '2022-02-25 05:00:00'::timestamp without time zone) AND (date_time <= '2022-02-25 23:09:26.587835'::timestamp without time zone))
                    ->  Hash  (cost=1942.41..1942.41 rows=30541 width=16) (actual time=12.172..12.172 rows=29242 loops=1)
                          Buckets: 32768  Batches: 1  Memory Usage: 1627kB
                          ->  Seq Scan on branch b  (cost=0.00..1942.41 rows=30541 width=16) (actual time=0.005..7.358 rows=29242 loops=1)
              ->  Hash  (cost=17.80..17.80 rows=58 width=29) (actual time=0.072..0.072 rows=58 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 12kB
                    ->  Index Only Scan using city_idx$$_274b0022 on city c  (cost=0.27..17.80 rows=58 width=29) (actual time=0.011..0.056 rows=58 loops=1)
                          Index Cond: (city_id IS NOT NULL)
                          Heap Fetches: 40
        ->  Materialize  (cost=152623.75..155703.21 rows=1 width=16) (actual time=0.003..0.003 rows=1 loops=156105)
              ->  GroupAggregate  (cost=152623.75..155703.19 rows=1 width=16) (actual time=438.567..438.611 rows=2 loops=1)
                    Group Key: c_1.city_id
                    ->  Nested Loop Left Join  (cost=152623.75..155703.18 rows=1 width=8) (actual time=434.578..438.605 rows=6 loops=1)
                          ->  Nested Loop  (cost=152623.32..155701.62 rows=1 width=16) (actual time=434.569..438.581 rows=6 loops=1)
                                Join Filter: (b_1.city_id = c_1.city_id)
                                Rows Removed by Join Filter: 342
                                ->  Index Only Scan using city_pk on city c_1  (cost=0.27..13.27 rows=58 width=8) (actual time=0.015..0.062 rows=58 loops=1)
                                      Heap Fetches: 40
                                ->  Materialize  (cost=152623.06..155687.49 rows=1 width=16) (actual time=7.492..7.560 rows=6 loops=58)
                                      ->  Nested Loop  (cost=152623.06..155687.48 rows=1 width=16) (actual time=434.549..438.450 rows=6 loops=1)
                                            ->  Nested Loop  (cost=152622.77..155687.17 rows=1 width=32) (actual time=434.539..438.425 rows=6 loops=1)
                                                  Join Filter: (r_1.restaurant_id = b_1.restaurant_id)
                                                  ->  Nested Loop  (cost=152622.48..155685.14 rows=1 width=24) (actual time=434.529..438.396 rows=6 loops=1)
                                                        ->  GroupAggregate  (cost=152621.92..152628.01 rows=79 width=55) (actual time=434.465..438.214 rows=3 loops=1)
                                                              Group Key: (round(ro_1.total_amount, 2)), ro_1.phone, r_1.restaurant_id
                                                              Filter: (count(1) > 1)
                                                              Rows Removed by Filter: 9365
                                                              ->  Sort  (cost=152621.92..152622.51 rows=236 width=55) (actual time=434.273..435.044 rows=9371 loops=1)
                                                                    Sort Key: (round(ro_1.total_amount, 2)), ro_1.phone, r_1.restaurant_id
                                                                    Sort Method: quicksort  Memory: 1117kB
                                                                    ->  Nested Loop Left Join  (cost=1.83..152612.62 [plan truncated]
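[Editor's note] When re-running a plan like the one above, EXPLAIN's BUFFERS option would additionally show how many shared buffers each node hit or read, which helps separate I/O-bound nodes from CPU-bound aggregation. A sketch; the view name is a placeholder, not one given in this thread:

```sql
-- BUFFERS adds "Buffers: shared hit=... read=..." lines under each plan node.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM order_status_summary_view;  -- hypothetical view name
```

High "read" counts under the partition index scan would point at I/O, while high loops counts on the Materialize node (156105 loops above) point at repeated per-row work in the join.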