Re: waiting for client write

2021-06-10 Thread Ayub Khan
I did profiling of the application, and it seems most of the CPU consumption
is for executing the stored procedure. Attached is a screenshot of the
profile.

--Ayub


On Wed, Jun 9, 2021 at 5:47 PM Ayub Khan  wrote:

> Attached is the screenshot of RDS performance insights for AWS, and it
> shows high waiting client writes. The API performance is slow. I read that
> this might be due to IOPS on RDS; however, we have 80k IOPS on this test
> RDS.
>
> Below is the query which is being load tested
>
> SELECT
>     a.menu_item_id,
>     a.menu_item_name,
>     a.menu_item_category_id,
>     b.menu_item_category_desc,
>     c.menu_item_variant_id,
>     c.menu_item_variant_type_id,
>     c.price,
>     c.size_id,
>     c.parent_menu_item_variant_id,
>     d.menu_item_variant_type_desc,
>     e.size_desc,
>     f.currency_code,
>     a.image,
>     a.mark_id,
>     m.mark_name
> FROM .menu_item_category AS b,
>      .menu_item_variant AS c,
>      .menu_item_variant_type AS d,
>      .item_size AS e,
>      .restaurant AS f,
>      .menu_item AS a
>      LEFT OUTER JOIN .mark AS m
>        ON (a.mark_id = m.mark_id)
> WHERE a.menu_item_category_id = b.menu_item_category_id
>   AND a.menu_item_id = c.menu_item_id
>   AND c.menu_item_variant_type_id = d.menu_item_variant_type_id
>   AND d.is_hidden = 'false'
>   AND c.size_id = e.size_id
>   AND a.restaurant_id = f.restaurant_id
>   AND f.restaurant_id = 1528
>   AND (a.menu_item_category_id = NULL OR NULL IS NULL)
>   AND c.menu_item_variant_id = (SELECT min(menu_item_variant_id)
>                                 FROM .menu_item_variant
>                                 WHERE menu_item_id = a.menu_item_id
>                                   AND deleted = 'N'
>                                 LIMIT 1)
>   AND a.active = 'Y'
>   AND (CONCAT_WS('', ',', a.hidden_branch_ids, ',')
>          NOT LIKE CONCAT_WS('', '%,4191,%') OR NULL IS NULL)
>   AND .is_menu_item_available(a.menu_item_id, 'Y') = 'Y'
> ORDER BY a.row_order, menu_item_id;
>
> --Ayub
>


-- 

Sun Certified Enterprise Architect 1.5
Sun Certified Java Programmer 1.4
Microsoft Certified Systems Engineer 2000
http://in.linkedin.com/pub/ayub-khan/a/811/b81
mobile:+966-502674604
--
It is proved that Hard Work and knowledge will get you close but attitude
will get you there. However, it's the Love
of God that will put you over the top!!


Re: Page File Size Reached Critical Threshold PostgreSQL V13

2021-06-10 Thread Justin Pryzby
On Thu, Jun 10, 2021 at 11:41:09AM +0500, Haseeb Khan wrote:
> You mean I should request to increase the system RAM from 32 GB to 64 GB
> and keep the same parameter settings.  Is that it?

No - I don't know how large your DB is, nor the answer to the other question
I asked, so I can't possibly make a suggestion to add RAM.

But I do know that "half" is the worst possible setting for many databases.

I suggest providing some more information, and we can try to suggest a
better configuration.

https://wiki.postgresql.org/wiki/Slow_Query_Questions
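
For example, these queries answer the size question (standard catalog
functions; run them in psql against your database):

    -- total size of the current database, human-readable
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- the ten largest tables, to gauge the "active set"
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;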

On 10-Jun-2021, at 9:28 AM, Justin Pryzby  wrote:
> > On Thu, Jun 10, 2021 at 05:45:45AM +0500, Haseeb Khan wrote:
> >> We have installed PostgreSQL v13 on Windows Server 2016; the server has
> >> 32 GB of RAM and a 270 GB disk.  Later we faced some performance issues
> >> with the database; after a deep dive we increased shared_buffers to 16 GB.
> >> Since that change we have been seeing a "Page file size reached critical
> >> threshold" alert.  Currently the page file size is 9504 MB.
> > 
> > How large is your DB ?  (Or the "active set" of the DB, if parts of it are
> > accessed infrequently).
> > 
> > What was the original performance issue that led you to increase 
> > shared_buffers ?
> > 
> > You've set shared_buffers to half of your RAM, which may be a "worst case"
> > setting, since everything that's read into shared_buffers must first be
> > read into the OS cache.  So it may be that many blocks are cached twice,
> > rather than relying on a smaller shared_buffers only for the "hottest"
> > blocks, and the larger OS cache for everything else.
> > 
> > There are exceptions to the guideline - for example, if your DB is 23 GB
> > in size, it might make sense to have the entire thing in 24 GB of
> > shared_buffers.  But most DBs don't need to fit in shared_buffers, and you
> > shouldn't make that a goal, unless you can measure a performance benefit.
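
To inspect and adjust the setting, something along these lines (the 8GB value
below is only a placeholder; pick it based on measurements, and note that
shared_buffers changes require a server restart):

    -- current value
    SHOW shared_buffers;

    -- a smaller, more conventional setting (placeholder value)
    ALTER SYSTEM SET shared_buffers = '8GB';
    -- then restart the server for it to take effect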




Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Andrew Dunstan


On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:
> Having now successfully migrated from PostgreSQL v9.6 to v13.2 in
> Amazon RDS, I wondered why I am paying AWS for an RDS-based version,
> when I was forced by their POLICY to go through the effort I did?  I'm
> not one of the crowd who thinks, "It works OK, so I don't update
> anything".  I'm usually one who is VERY quick to apply upgrades,
> especially when there is a fallback ability.  However, the initial
> failure to successfully upgrade from v9.6 to any more recent major
> version put me in a time-limited box that I really don't like to be in.
>
> If I'm going to have to deal with maintenance issues, like I easily
> did when I ran native PostgreSQL, why not go back to that?  So, I've
> ported my database back to native PostgreSQL v13.3 on an AWS EC2
> instance.  It looks like I will save about 40% of the cost, which is
> in accord with this article: 
> https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/
>
> Why am I mentioning this here?  Because there were minor issues &
> benefits in porting back to native PostgreSQL, that may be of interest
> here:
>
> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a
> superuser, & it tries to dump protected stuff.  If there is a way
> around that, I'd like to know it, even though it's not an issue now. 
> pg_dump works OK, but of course you don't get the roles dumped. 
> Fortunately, I kept script files that have all the database setup, so
> I just ran them to create all the relationships, & then used the
> pg_dump output.  Worked flawlessly.



This was added in release 12 specifically with RDS in mind:


   pg_dumpall --exclude-database


cheers


andrew


-- 

Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Page File Size Reached Critical Threshold PostgreSQL V13

2021-06-10 Thread Justin Pryzby
Please work through the Slow Query wiki page and try to provide as much
information as possible.  It's too hard to help if each communication
includes only a fraction of the requested information.

On Thu, Jun 10, 2021 at 04:33:51PM +0500, Haseeb Khan wrote:
> PFB the query and there are many other queries like these
> 
> select
> pd.gender,
> count(1) as dispensation_counts
> from cdss_wasfaty.dispensation_fact df
> inner join cdss_wasfaty.patient_dim pd
> on df.patient_key = pd.patient_key
> inner join cdss_wasfaty.date_dim date_dim
> on df.dispensation_date_key = date_dim.date_key
> and date_dim.year IN (2020)
> group by pd.gender
> 
> On Thu, Jun 10, 2021 at 2:48 PM Justin Pryzby  wrote:
> 
> > Can you give an example of a query that performed poorly?
> >
> > Send the query, and its explain (analyze,buffers,settings) for the query,
> > and schema for the relevant queries.
> >
> > > > https://wiki.postgresql.org/wiki/Slow_Query_Questions
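
For the query posted above, that means sending output along these lines (a
sketch wrapping this thread's query):

    EXPLAIN (ANALYZE, BUFFERS, SETTINGS)
    select pd.gender, count(1) as dispensation_counts
    from cdss_wasfaty.dispensation_fact df
    inner join cdss_wasfaty.patient_dim pd
      on df.patient_key = pd.patient_key
    inner join cdss_wasfaty.date_dim date_dim
      on df.dispensation_date_key = date_dim.date_key
      and date_dim.year IN (2020)
    group by pd.gender;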

-- 
Justin Pryzby
System Administrator
Telsasoft
+1-952-707-8581




Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Joshua Drake
On Wed, Jun 9, 2021 at 6:50 PM Dean Gibson (DB Administrator) <
postgre...@mailpen.com> wrote:

> Having now successfully migrated from PostgreSQL v9.6 to v13.2 in Amazon
> RDS, I wondered why I am paying AWS for an RDS-based version, when I was
> forced by their POLICY to go through the effort I did?  I'm not one of the
> crowd who thinks, "It works OK, so I don't update anything".  I'm usually
> one who is VERY quick to apply upgrades, especially when there is a
> fallback ability.  However, the initial failure to successfully upgrade
> from v9.6 to any more recent major version put me in a time-limited box
> that I really don't like to be in.
>

Right, and had you deployed on EC2 you would not have been forced to
upgrade. This is an argument against RDS for this particular problem.


>
> If I'm going to have to deal with maintenance issues, like I easily did
> when I ran native PostgreSQL, why not go back to that?  So, I've ported my
> database back to native PostgreSQL v13.3 on an AWS EC2 instance.  It looks
> like I will save about 40% of the cost, which is in accord with this
> article:  https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/
>

That is correct, it is quite a bit less expensive to host your own EC2
instances.  Where it is not cheaper is when you need to easily configure
backups, take a snapshot, or bring up a replica.  For those in the know,
putting in some work upfront largely removes the burden that RDS addresses,
but a lot of people who deploy RDS are *not* DBAs, or even systems people.
They are front-end developers.

Glad to see you were able to work things out.

JD

-- 

   - Partner, Father, Explorer and Founder.
   - Founder - https://commandprompt.com/ - 24x7x365 Postgres since 1997
   - Founder and Co-Chair - https://postgresconf.org/
   - Founder - https://postgresql.us - United States PostgreSQL
   - Public speaker, published author, postgresql expert, and people
   believer.
   - Host - More than a refresh: A podcast about data and the people who
   wrangle it.


Re: waiting for client write

2021-06-10 Thread Jeff Janes
On Thu, Jun 10, 2021 at 4:06 AM Ayub Khan  wrote:

> I did profiling of the application and it seems most of the CPU
> consumption is for executing the stored procedure. Attached is the
> screenshot of the profile
>

That is of your Tomcat server?  If that is really a profile of your CPU
time (rather than wall-clock time), then it seems pretty clear your problem
is on the client side, so there isn't much that can be done about it on the
database server.
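
One way to check this from the server side while the load test runs (a sketch
using the standard pg_stat_activity view):

    -- sessions stalled waiting for the client to consume results
    SELECT pid, state, wait_event_type, wait_event,
           left(query, 60) AS query
    FROM pg_stat_activity
    WHERE wait_event_type = 'Client';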

Cheers,

Jeff



Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Dean Gibson (DB Administrator)

On 2021-06-10 03:29, Andrew Dunstan wrote:

On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:

First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a superuser, & 
it tries to dump protected stuff.  If there is a way around that, I'd like to know it, 
even though it's not an issue now. pg_dump works OK, but of course you don't get the 
roles dumped. Fortunately, I kept script files that have all the database setup, so I 
just ran them to create all the relationships, & then used the pg_dump output.  
Worked flawlessly.

This was added in release 12 specifically with RDS in mind:

   pg_dumpall --exclude-database

cheers, andrew


I guess I don't understand what that option does:

=>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql
pg_dump: error: could not write to output file: No space left on device
pg_dumpall: error: pg_dump failed on database "MailPen", exiting

I expected a tiny file, not 3.5GB.  "MailPen" is the only database 
(other than what's pre-installed).  Do I need quotes on the command line?




Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Ranier Vilela
On Thu, Jun 10, 2021 at 13:08, Dean Gibson (DB Administrator) <
postgre...@mailpen.com> wrote:

> On 2021-06-10 03:29, Andrew Dunstan wrote:
>
> On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:
>
> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a 
> superuser, & it tries to dump protected stuff.  If there is a way around 
> that, I'd like to know it, even though it's not an issue now. pg_dump works 
> OK, but of course you don't get the roles dumped. Fortunately, I kept script 
> files that have all the database setup, so I just ran them to create all the 
> relationships, & then used the pg_dump output.  Worked flawlessly.
>
> This was added in release 12 specifically with RDS in mind:
>
>   pg_dumpall --exclude-database
>
> cheers, andrew
>
>
> I guess I don't understand what that option does:
>
> =>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql
> pg_dump: error: could not write to output file: No space left on device
> pg_dumpall: error: pg_dump failed on database "MailPen", exiting
>
> I expected a tiny file, not 3.5GB.  "MailPen" is the only database (other
> than what's pre-installed).  Do I need quotes on the command line?
>
See at:
https://www.postgresql.org/docs/13/app-pg-dumpall.html

Your cmd lacks =
=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql

regards,
Ranier Vilela


Re: waiting for client write

2021-06-10 Thread Vijaykumar Jain
Ayub,

Ideally, when I have to deal with this,
I run a pgbench stress test locally on the DB server on the lo interface,
which does not suffer MTU / bandwidth saturation issues,
then run the same pgbench from a remote server in the same subnet as the
app and record the results and compare.
That helps me rule out any non-standard client issues or network latency
issues.


A typical case like the one people above are pointing to:
1) For example, when I am in India and query a server in the US across the
WAN with a client like pgAdmin (which may not handle loading a million rows
efficiently), I have a high chance of getting ClientWrite/ClientRead wait
events (read: client and/or network issues).
Of course this is much worse than EC2 and the DB in the same region, but you
get the point that you have to rule out sketchy networks between the
servers.
Ideally an iperf-like stress test can help to test bandwidth.
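
A crude first check of round-trip latency can also be done from psql (the
reported time for a trivial query is dominated by the network):

    \timing on
    select 1;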

So if you can run pgbench across some test servers and get consistent
results, then you can come back with a reply that more people can help with.
Here is an example of pgbench
using a custom script:

postgres@go:~/pgbench_example$ more pgbench.script
-- you can use these variables below in queries as params, like col = :r1
\set r1 random(0, 1)
\set r2 random(0, 8000)

begin;
select random();
end;

-- put in any query that you use in jmeter between begin/end like above
-- select * from foo where (u1 = :r1  and u2 = :r2);
-- insert into foo values (:u1v, :u2v) on conflict do nothing;
-- update foo set u1 = :u1v where u2 = 100;
-- select pg_sleep(1);


and then run pgbench with the custom script

postgres@go:~/pgbench_example$ pgbench -c 10 -f ./pgbench.script -j 10 -n
-T 30
transaction type: ./pgbench.script
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 10
duration: 30 s
number of transactions actually processed: 528984
latency average = 0.567 ms
tps = 17631.650584 (including connections establishing)
tps = 17642.174229 (excluding connections establishing)


Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Dean Gibson (DB Administrator)

On 2021-06-10 09:54, Ranier Vilela wrote:
On Thu, Jun 10, 2021 at 13:08, Dean Gibson (DB Administrator)
<postgre...@mailpen.com> wrote:



I guess I don't understand what that option does:

=>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql
pg_dump: error: could not write to output file: No space left on
device
pg_dumpall: error: pg_dump failed on database "MailPen", exiting

I expected a tiny file, not 3.5GB.  "MailPen" is the only database
(other than what's pre-installed).  Do I need quotes on the
command line?

See at:
https://www.postgresql.org/docs/13/app-pg-dumpall.html 



Your cmd lacks =
=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql

regards, Ranier Vilela


I read that before posting, but missed that.  Old command line patterns 
die hard!


However, the result was the same:  3.5GB before running out of space.



Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Tom Lane
"Dean Gibson (DB Administrator)"  writes:
> On 2021-06-10 09:54, Ranier Vilela wrote:
>> Your cmd lacks =
>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql

> I read that before posting, but missed that.  Old command line patterns 
> die hard!
> However, the result was the same:  3.5GB before running out of space.

[ experiments... ]  Looks like you gotta do it like this:

pg_dumpall '--exclude-database="MailPen"' ...

This surprises me, as I thought it was project policy not to
case-fold command-line arguments (precisely because you end
up needing weird quoting to prevent that).

regards, tom lane




Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Andrew Dunstan


On 6/10/21 2:00 PM, Tom Lane wrote:
> "Dean Gibson (DB Administrator)"  writes:
>> On 2021-06-10 09:54, Ranier Vilela wrote:
>>> Your cmd lacks =
>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql
>> I read that before posting, but missed that.  Old command line patterns 
>> die hard!
>> However, the result was the same:  3.5GB before running out of space.
> [ experiments... ]  Looks like you gotta do it like this:
>
>   pg_dumpall '--exclude-database="MailPen"' ...
>
> This surprises me, as I thought it was project policy not to
> case-fold command-line arguments (precisely because you end
> up needing weird quoting to prevent that).
>
>   



Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used
the same logic that we use for pg_dump's --exclude-* options, so we need
to check if they have similar issues.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Dean Gibson (DB Administrator)

On 2021-06-10 11:23, Andrew Dunstan wrote:

On 6/10/21 2:00 PM, Tom Lane wrote:

"Dean Gibson (DB Administrator)"  writes:

... Do I need quotes on the command line?
On 2021-06-10 09:54, Ranier Vilela wrote:

Your cmd lacks =
=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql

I read [the manual] before posting, but missed that.  Old command line patterns 
die hard!
However, the result was the same:  3.5GB before running out of space.

[ experiments... ]  Looks like you gotta do it like this:

pg_dumpall '--exclude-database="MailPen"' ...

This surprises me, as I thought it was project policy not to case-fold 
command-line arguments (precisely because you end up needing weird quoting to 
prevent that). 

Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used the same 
logic that we use for pg_dump's --exclude-* options, so we need to check if 
they have similar issues.

cheers, andrew


That works!  I thought it was a quoting/case issue!  I was next going to 
try single quotes just outside double quotes, & that works as well (& is 
a bit more natural):


pg_dumpall -U Admin --exclude-database='"MailPen"' >zzz.sql

Using mixed case has bitten me before, but I am not deterred!  I run 
phpBB 3.0.14 (very old version) because upgrades to more current 
versions fail on the mixed case of the DB name, as well as the use of 
SCHEMAs to isolate the message board from the rest of the data. Yes, I 
reported it years ago.


I use lower-case for column, VIEW, & function names;  mixed (camel) case 
for table, schema, & database names;  & upper-case for SQL keywords.  It 
helps readability (as does murdering a couple semicolons in the prior 
sentence).
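
For reference, the identifier case-folding that bites here (a minimal
illustration of standard PostgreSQL rules):

    CREATE DATABASE MailPen;     -- unquoted: folds to "mailpen"
    CREATE DATABASE "MailPen";   -- quoted: mixed case preserved

    -- which is why pg_dumpall must receive the literal string "MailPen",
    -- double quotes included, to match the mixed-case name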




Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Dean Gibson (DB Administrator)

On 2021-06-10 10:43, Dean Gibson (DB Administrator) wrote:

On 2021-06-10 09:54, Ranier Vilela wrote:
On Thu, Jun 10, 2021 at 13:08, Dean Gibson (DB Administrator)
<postgre...@mailpen.com> wrote:



...  Do I need quotes on the command line?

See at:
https://www.postgresql.org/docs/13/app-pg-dumpall.html 



Your cmd lacks =
=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql

regards, Ranier Vilela


...

However, the result was the same:  3.5GB before running out of space.



It turns out the "=" is not needed.  The double-quoting is (this works):

pg_dumpall -U Admin --exclude-database '"MailPen"'  >zzz.sql