Re: Problems with installing pgwatch2 without docker

2018-07-08 Thread Andrew Dunstan




On 07/08/2018 10:22 AM, Mariel Cherkassky wrote:


> Hi,
>
> I'm trying to install the pgwatch2 tool in our company without using
> the Docker option. I followed the instructions on the GitHub page, but
> I'm facing an error during STEP 4.2, when I try to add my cluster via
> the /dbs page in order to monitor it. After I add it I get the error:
>
> ERROR: Could not connect to InfluxDB
>
> On the same machine I have an InfluxDB database running:
>
> ps -ef | grep influx
> influxdb 3680 1  0 17:12 ?    00:00:01 influxd -config /PostgreSQL/influxdb/config/influxdb.conf
>
> When I look at the InfluxDB log, I see the following row every time I
> press the "New" button on the DBS page:
>
> [httpd] ::1 - root [08/Jul/2018:17:19:04 +0300] "GET /query?q=SHOW+TAG+VALUES+WITH+KEY+%3D+%22dbname%22&db=pgwatch2 HTTP/1.1" 401 33 "-" "python-requests/2.19.1" de27bc5c-82b9-11e8-8003- 141
>
> What else do you recommend checking?
>
> Thanks, Mariel.




Please stop asking questions in inappropriate forums. This is not a
performance issue, so it definitely doesn't belong on this list. If it
belongs on a Postgres forum at all, it belongs on pgsql-general. More
likely, you should be asking in the pgwatch2 forums, not the Postgres forums.


cheers

andrew

--
Andrew Dunstan
https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: amazon aroura config - seriously overcommited defaults? (May be Off Topic)

2018-12-09 Thread Andrew Dunstan



On 12/8/18 6:38 PM, Andres Freund wrote:

On 2018-12-08 15:23:19 -0800, Rob Sargent wrote:



On Dec 8, 2018, at 3:12 PM, Andres Freund  wrote:

On 2018-12-08 12:06:23 -0800, Jeremy Schneider wrote:

On RDS PostgreSQL, the default is 25% of your server memory. This seems
to be pretty widely accepted as a good starting point on PostgreSQL.

FWIW, I think it's widely cited, but also bad advice.  25% for an OLTP
workload on a 1TB machine with a database size above 25% is a terrible
idea.


Sorry, could you please expand “database size above 25%”?  25% of what?

Memory available to postgres (i.e. 100% of the server's memory on a
server dedicated to postgres, less if it's shared duty).




I think the best advice these days is that you need to triangulate to 
find the best setting for shared_buffers. It's very workload dependent, 
and there isn't even a semi-reliable rule of thumb.
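
One concrete input to that triangulation is the buffer cache hit ratio
under a representative workload (pg_stat_database is a standard view;
what counts as a "good" ratio is itself workload-dependent):

    SELECT datname, blks_hit, blks_read,
           round(blks_hit * 100.0 /
                 nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
      FROM pg_stat_database
     WHERE datname = current_database();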



cheers


andrew


--
Andrew Dunstan
https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Poor Performance running Django unit tests after upgrading from 10.6

2020-10-15 Thread Andrew Dunstan


On 10/15/20 1:21 AM, Roger Hunwicks wrote:
>
> I think we have narrowed down the problem to a single, very complex,
> materialized view using CTEs; the unit tests create the test data and
> then refresh the materialized view before executing the actual test
> code.
>


Have you checked to see if the CTE query is affected by the change to
how CTEs are run in release 12?


The release notes say:

Allow common table expressions (CTEs) to be inlined into the outer
query (Andreas Karlsson, Andrew Gierth, David Fetter, Tom Lane)

Specifically, CTEs are automatically inlined if they have no
side-effects, are not recursive, and are referenced only once in the
query. Inlining can be prevented by specifying MATERIALIZED, or
forced for multiply-referenced CTEs by specifying NOT MATERIALIZED.
Previously, CTEs were never inlined and were always evaluated before
the rest of the query.

So if you haven't already, start by putting MATERIALIZED before each CTE
clause:

with foo as MATERIALIZED (select ...),
     bar as MATERIALIZED (select ...),
...

and see if that changes anything.
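
For a self-contained illustration of the difference (table and column
names invented):

    -- pre-12 behavior: the CTE is evaluated in full, once
    WITH recent AS MATERIALIZED (
        SELECT * FROM orders
        WHERE created_at > now() - interval '7 days'
    )
    SELECT * FROM recent WHERE customer_id = 42;

    -- release 12 default for this query: the CTE is inlined, so the
    -- customer_id predicate can reach an index on orders
    WITH recent AS (
        SELECT * FROM orders
        WHERE created_at > now() - interval '7 days'
    )
    SELECT * FROM recent WHERE customer_id = 42;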



cheers


andrew



-- 
Andrew Dunstan
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company





Re: time taking deletion on large tables

2020-12-03 Thread Andrew Dunstan


On 12/3/20 11:16 AM, Tom Lane wrote:
> Justin Pryzby  writes:
>> On Thu, Dec 03, 2020 at 08:43:57PM +0530, Ravikumar Reddy wrote:
>>> When I try to do a delete like this:  it hangs for an entire day, so I
>>> need to kill it with pg_terminate_backend(pid).
>>>
>>> DELETE FROM feed_posts
>>> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'
>>> AND created_at > '2020-05-11 00:00:00'
>>> AND created_at < '2020-05-12 00:00:00';
> 90% of the "delete takes forever" complaints that we hear trace down to
> having a foreign key reference to the deletion-target table that's not
> backed by an index on the referencing column.  Then you end up getting
> a seqscan on the referencing table to look for rows referencing a
> row-to-be-deleted.  And then another one for the next row.  Etc.
>
> You could try "explain analyze" on a query deleting just a single
> one of these rows and see if an RI enforcement trigger is what's
> eating the time.
>
>   



Yeah. IIRC some other RDBMS systems actually create such an index if it
doesn't already exist. Maybe we should have a warning when setting up an
FK constraint if the referencing fields aren't usefully indexed.
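
In the meantime, a catalog query along these lines can flag the
unindexed-FK case Tom describes (a sketch only: it considers just
single-column foreign keys, and only matches an index's leading column):

    SELECT c.conrelid::regclass AS referencing_table,
           a.attname            AS referencing_column,
           c.conname            AS fk_name
      FROM pg_constraint c
      JOIN pg_attribute a
        ON a.attrelid = c.conrelid AND a.attnum = c.conkey[1]
     WHERE c.contype = 'f'
       AND NOT EXISTS (SELECT 1
                         FROM pg_index i
                        WHERE i.indrelid = c.conrelid
                          AND i.indkey[0] = c.conkey[1]);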


cheers


andrew



--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Potential performance issues

2021-03-01 Thread Andrew Dunstan


On 2/28/21 10:04 AM, Jung, Jinho wrote:
> # install DBMS
> sudo su
> make install
> adduser postgres
> rm -rf /usr/local/pgsql/data
> mkdir /usr/local/pgsql/data
> chown -R postgres /usr/local/pgsql/data
> su - postgres
> /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
> /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l logfile start
> /usr/local/pgsql/bin/createdb jjung


Using an untuned Postgres is fairly useless for a performance test. Out
of the box, shared_buffers and work_mem are too low for almost all
situations, and many other settings can also usually be improved. The
default settings are deliberately very conservative.
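
As a hedged illustration only (sensible values depend entirely on the
hardware and workload, so these numbers are not recommendations):

    ALTER SYSTEM SET shared_buffers = '2GB';  -- default is only 128MB
    ALTER SYSTEM SET work_mem = '64MB';       -- default is 4MB
    SELECT pg_reload_conf();                  -- shared_buffers still
                                              -- requires a restart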


cheers


andrew



--
Andrew Dunstan
EDB: https://www.enterprisedb.com




Re: select count(*) is slow

2021-04-06 Thread Andrew Dunstan


On 4/6/21 9:30 AM, aditya desai wrote:
> Thanks Tom. Will try with numeric. Please ignore table and index naming.
>
> On Tue, Apr 6, 2021 at 6:55 PM Tom Lane wrote:
>
> aditya desai writes:
> > Below query takes 12 seconds. We have an index on  postcode.
>
> > select count(*) from table where postcode >= '00420' AND
> postcode <= '00500'
>
> That query does not match this index:
>
> > CREATE INDEX Table_i1
> >     ON table  USING btree
> >     ((postcode::numeric));
>
> You could either change postcode to numeric, change all your queries
> of this sort to include the cast explicitly, or make an index that
> doesn't have a cast.
>
>                       
>


IMNSHO postcodes, zip codes, telephone numbers and the like should never
be numeric under any circumstances. This isn't numeric data (what is the
average postcode?), it's textual data consisting of digits, so they
should always be text/varchar. The index here should just be on the
plain text column, not cast to numeric.
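
Concretely (the real table isn't named in the thread, so these names
are placeholders):

    -- a plain btree index on the text column
    CREATE INDEX licenses_postcode_idx ON licenses (postcode);

    -- this now matches the query as written, with no cast
    SELECT count(*) FROM licenses
    WHERE postcode >= '00420' AND postcode <= '00500';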


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: SHARED LOCKS , EXCLUSIVE LOCKS, ACCESS EXCLUSIVE LOCKS

2021-04-06 Thread Andrew Dunstan


On 4/4/21 6:42 AM, aditya desai wrote:
> Hi,
> We have a few select queries during which we see SHARED LOCKS and
> EXCLUSIVE LOCKS on tables. Can these locks cause slowness? Is there
> any way to reduce the locks?
>
> What must be causing ACCESS EXCLUSIVE LOCKS when the application is
> running select queries? Is it AUTOVACUUM?
>

Suggest you read this part of The Fine Manual:
<https://www.postgresql.org/docs/current/explicit-locking.html>
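
To see what is actually holding or waiting on locks at a given moment,
something like this against the standard catalogs can help:

    SELECT l.locktype, l.mode, l.granted,
           l.relation::regclass AS relation,
           a.pid, a.state, left(a.query, 60) AS query
      FROM pg_locks l
      JOIN pg_stat_activity a ON a.pid = l.pid
     WHERE l.relation IS NOT NULL
     ORDER BY l.granted, a.pid;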


cheers


andrew


-- 

Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: logical replication

2021-05-21 Thread Andrew Dunstan


On 5/21/21 8:41 AM, Marc Millas wrote:
> Hi,
>
> my POC is in Postgres 12 (important?).
>
> If I set up 2 Postgres clusters, and create a publication in one and a
> subscription in the other, and do on the pub an update which does not
> change the data (updating an existing record with the same data), then
> this (useless) update goes through replication (i.e. it consumes
> network resources).
>
> What are ways to avoid this?
> (I thought of a trigger to not execute the useless update, but I
> don't see how to do this.)
> Any ideas?
>
> thanks
> PS: remarks about the meaning of this are off topic, thanks
>
>

Postgres provides exactly such a trigger. See
https://www.postgresql.org/docs/12/functions-trigger.html
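
From that page, usage looks like this (the table name is invented; the
docs suggest a name like z_min_update so the trigger fires after any
other BEFORE triggers):

    CREATE TRIGGER z_min_update
    BEFORE UPDATE ON my_table
    FOR EACH ROW
    EXECUTE FUNCTION suppress_redundant_updates_trigger();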


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-05-28 Thread Andrew Dunstan


On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:
> [Reposted to the proper list]
>
> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4
> at one point), gradually moving to v9.0 w/ replication in 2010.  In
> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,
> & was entirely satisfied with the result.
>
> In March of this year, AWS announced that v9.6 was nearing end of
> support, & AWS would forcibly upgrade everyone to v12 on January 22,
> 2022, if users did not perform the upgrade earlier.  My first attempt
> was successful as far as the upgrade itself, but complex queries that
> normally ran in a couple of seconds on v9.x, were taking minutes in v12.
>
> I didn't have the time in March to diagnose the problem, other than
> some futile adjustments to server parameters, so I reverted back to a
> saved copy of my v9.6 data.
>
> On Sunday, being retired, I decided to attempt to solve the issue in
> earnest.  I have now spent five days (about 14 hours a day), trying
> various things, including adding additional indexes.  Keeping the v9.6
> data online for web users, I've "forked" the data into new copies, &
> updated them in turn to PostgreSQL v10, v11, v12, & v13.  All exhibit
> the same problem:  As you will see below, it appears that versions 10
> & above are doing a sequential scan of some of the "large" (200K rows)
> tables.  Note that the expected & actual run times both differ for
> v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post
> a huge eMail (ha ha), I'll start with this one, that shows an "EXPLAIN
> ANALYZE" from both v9.6 & v13.2, followed by the related table & view
> definitions.  With one exception, table definitions are from the FCC
> (Federal Communications Commission);  the view definitions are my own.
>
>
>

Have you tried reproducing these results outside RDS, say on an EC2
instance running vanilla PostgreSQL?


cheers


andrew



--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-05-28 Thread Andrew Dunstan


On 5/28/21 4:23 PM, Jan Wieck wrote:
> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:
>
> What sticks out for me are these two scans, which balloon from 50-60
> heap fetches to 1.5M each.
>
>>   ->  Nested Loop  (cost=0.29..0.68 rows=1
>> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)
>>     Join Filter: ("_IsoCountry".iso_alpha2 =
>> "_Territory".country_id)
>>     Rows Removed by Join Filter: 0
>>     ->  Index Only Scan using
>> "_IsoCountry_iso_alpha2_key" on "_IsoCountry"  (cost=0.14..0.38
>> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)
>>   Index Cond: (iso_alpha2 =
>> "_GovtRegion".country_id)
>>   Heap Fetches: 1487153
>>     ->  Index Only Scan using
>> "_Territory_pkey" on "_Territory"  (cost=0.14..0.29 rows=1 width=7)
>> (actual time=0.001..0.001 rows=1 loops=1487153)
>>   Index Cond: (territory_id =
>> "_GovtRegion".territory_id)
>>   Heap Fetches: 1550706
>
> How did you load the database? pg_dump -> psql/pg_restore?
>
> If so, did you perform a VACUUM FREEZE after the load?
>
>
>

Jan


AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I
assume you would know better than him or me what it actually does do :-)


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-05-28 Thread Andrew Dunstan


On 5/28/21 10:27 PM, Jan Wieck wrote:
>
>
> On Fri, May 28, 2021, 17:15 Andrew Dunstan wrote:
>
>
>
>
> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I
> assume you would know better than him or me what it actually does
> do :-)
>
>
> Since I am not working at AWS I can't tell for sure. ;)


Apologies, my mistake then.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-05-29 Thread Andrew Dunstan


On 5/29/21 3:59 PM, Dean Gibson (DB Administrator) wrote:
>
>
> Meanwhile, I've been doing some checking.  If I remove "CAST(
> license_status AS CHAR ) = 'A'", the problem disappears.  Changing the
> JOIN to a RIGHT JOIN, & replacing WHERE with ON, also "solves" the
> problem, but there is an extra row where license_status is NULL, due
> to the RIGHT JOIN.  Currently trying to figure that out (why did the
> CAST ... match 'A', if it is null?)...


Why are you using this expression? It's something you almost never want
to do in my experience. Why not use the substr() function to get the
first character?
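
That is, something like (keeping the column name from the quoted query):

    -- instead of:  CAST(license_status AS CHAR) = 'A'
    WHERE substr(license_status, 1, 1) = 'A'
    -- or, equivalently:
    WHERE left(license_status, 1) = 'A'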


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-07 Thread Andrew Dunstan


On 6/6/21 7:49 PM, Dean Gibson (DB Administrator) wrote:
> On 2021-05-29 13:35, Andrew Dunstan wrote:
>> On 5/29/21 3:59 PM, Dean Gibson (DB Administrator) wrote:
>>> Meanwhile, I've been doing some checking.  If I remove "CAST(
>>> license_status AS CHAR ) = 'A'", the problem disappears.  Changing the
>>> JOIN to a RIGHT JOIN, & replacing WHERE with ON, also "solves" the
>>> problem, but there is an extra row where license_status is NULL, due
>>> to the RIGHT JOIN.  Currently trying to figure that out (why did the
>>> CAST ... match 'A', if it is null?)...
>> Why are you using this expression? It's something you almost never want
>> to do in my experience. Why not use the substr() function to get the
>> first character?
>>
>
> Although it doesn't matter in this case, I do it because in general,
> it changes the type of the value from CHAR to bptext or whatever it
> is, & that has caused comparison issues in the past.  It's just a
> matter of habit for me when working with CHAR() types.
>
> But in this case, where it doesn't matter, I'd use LEFT().



That raises the issue of why you're using CHAR(n) fields. Just about
every consultant I know advises simply avoiding them. :-)


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Andrew Dunstan


On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:
> Having now successfully migrated from PostgreSQL v9.6 to v13.2 in
> Amazon RDS, I wondered, why I am paying AWS for an RDS-based version,
> when I was forced by their POLICY to go through the effort I did?  I'm
> not one of the crowd who thinks, "It works OK, so I don't update
> anything".  I'm usually one who is VERY quick to apply upgrades,
> especially when there is a fallback ability.  However, the initial
> failure to successfully upgrade from v9.6 to any more recent major
> version, put me in a time-limited box that I really don't like to be in.
>
> If I'm going to have to deal with maintenance issues, like I easily
> did when I ran native PostgreSQL, why not go back to that?  So, I've
> ported my database back to native PostgreSQL v13.3 on an AWS EC2
> instance.  It looks like I will save about 40% of the cost, which is
> in accord with this article: 
> https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/
>
> Why am I mentioning this here?  Because there were minor issues &
> benefits in porting back to native PostgreSQL, that may be of interest
> here:
>
> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a
> superuser, & it tries to dump protected stuff.  If there is a way
> around that, I'd like to know it, even though it's not an issue now. 
> pg_dump works OK, but of course you don't get the roles dumped. 
> Fortunately, I kept script files that have all the database setup, so
> I just ran them to create all the relationships, & then used the
> pg_dump output.  Worked flawlessly.



This was added in release 12 specifically with RDS in mind:


   pg_dumpall --exclude-database
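
For example (the database name here is illustrative; on RDS the
rdsadmin database is typically the one a non-superuser can't dump):

    pg_dumpall --exclude-database=rdsadmin > cluster.sql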


cheers


andrew


-- 

Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-10 Thread Andrew Dunstan


On 6/10/21 2:00 PM, Tom Lane wrote:
> "Dean Gibson (DB Administrator)"  writes:
>> On 2021-06-10 09:54, Ranier Vilela wrote:
>>> Your cmd lacks =
>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql
>> I read that before posting, but missed that.  Old command line patterns 
>> die hard!
>> However, the result was the same:  3.5GB before running out of space.
> [ experiments... ]  Looks like you gotta do it like this:
>
>   pg_dumpall '--exclude-database="MailPen"' ...
>
> This surprises me, as I thought it was project policy not to
> case-fold command-line arguments (precisely because you end
> up needing weird quoting to prevent that).
>
>   



Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used
the same logic that we use for pg_dump's --exclude-* options, so we need
to check if they have similar issues.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





pg_dumpall --exclude-database case folding, was Re: AWS forcing PG upgrade from v9.6 a disaster

2021-06-14 Thread Andrew Dunstan


On 6/10/21 2:23 PM, Andrew Dunstan wrote:
> On 6/10/21 2:00 PM, Tom Lane wrote:
>> "Dean Gibson (DB Administrator)"  writes:
>>> On 2021-06-10 09:54, Ranier Vilela wrote:
>>>> Your cmd lacks =
>>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql
>>> I read that before posting, but missed that.  Old command line patterns 
>>> die hard!
>>> However, the result was the same:  3.5GB before running out of space.
>> [ experiments... ]  Looks like you gotta do it like this:
>>
>>  pg_dumpall '--exclude-database="MailPen"' ...
>>
>> This surprises me, as I thought it was project policy not to
>> case-fold command-line arguments (precisely because you end
>> up needing weird quoting to prevent that).
>>
>>  
>
>
> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used
> the same logic that we use for pg_dump's --exclude-* options, so we need
> to check if they have similar issues.
>
>



Peter Eisentraut has pointed out to me that this is documented, albeit a
bit obscurely for pg_dumpall. But it is visible on the pg_dump page.


Nevertheless, it's a bit of a POLA violation as we've seen above, and
I'd like to get it fixed, if there's agreement, both for this pg_dumpall
option and for pg_dump's pattern matching options.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-08-22 Thread Andrew Dunstan


On 8/22/21 4:11 PM, Tom Lane wrote:
> "l...@laurent-hasson.com"  writes:
>> I do have a Linux install of 13.3, and things work beautifully, so this is 
>> definitely a Windows thing here that started in V12.
> It's good to have a box around it, but that's still a pretty large
> box :-(.
>
> I'm hoping that one of our Windows-using developers will see if
> they can reproduce this, and if so, try to bisect where it started.
> Not sure how to make further progress without that.
>
>   


Can do. Assuming the assertion that it started in Release 12 is correct,
I should be able to find it by bisecting between the branch point for 12
and the tip of that branch. That's a little over 20 probes by my
calculation.
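
For anyone following along, the bisection looks roughly like this
(REL_12_STABLE is the actual release-12 branch name in the PostgreSQL
git repo; the test at each step is the exception-heavy UDF benchmark
from this thread):

    git bisect start
    git bisect bad REL_12_STABLE                # tip shows the slowdown
    git bisect good "$(git merge-base master REL_12_STABLE)"  # branch point
    # then at each step: build, run the benchmark, and mark the commit
    # with "git bisect good" or "git bisect bad"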


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-08-22 Thread Andrew Dunstan


On 8/22/21 5:59 PM, l...@laurent-hasson.com wrote:
>
>>  -Original Message-
>>  From: Andrew Dunstan
>>  Sent: Sunday, August 22, 2021 17:27
>>  To: Tom Lane ; l...@laurent-hasson.com
>>  Cc: Justin Pryzby ; Ranier Vilela
>>  ; pgsql-performa...@postgresql.org
>>  Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2
>>  and 13.4
>>  
>>  
>>  On 8/22/21 4:11 PM, Tom Lane wrote:
>>  > "l...@laurent-hasson.com"  writes:
>>  >> I do have a Linux install of 13.3, and things work beautifully, so
>>  >> this is definitely a Windows thing here that started in V12.
>>  > It's good to have a box around it, but that's still a pretty large box
>>  > :-(.
>>  >
>>  > I'm hoping that one of our Windows-using developers will see if they
>>  > can reproduce this, and if so, try to bisect where it started.
>>  > Not sure how to make further progress without that.
>>  >
>>  >
>>  
>>  
>>  Can do. Assuming the assertion that it started in Release 12 is
>>  correct, I should be able to find it by bisecting between the branch
>>  point for 12 and the tip of that branch. That's a little over 20
>>  probes by my calculation.
>>  
>>  
>>  cheers
>>  
>>  
>>  andrew
>>  
>>  
>>  --
>>  Andrew Dunstan
>>  EDB: https://www.enterprisedb.com
>
>
> I tried it on 11.13 and 12.3. Is there a place where I could download 12.1 
> and 12.2 and test that? Is it worth it or you think you have all you need?
>


I think I have everything I need.


Step one will be to verify that the difference exists between the branch
point and the tip of release 12. Once that's done it will be a matter of
probing until the commit at fault is identified.


cheers


andrew



--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-08-26 Thread Andrew Dunstan


On 8/26/21 10:47 AM, l...@laurent-hasson.com wrote:
> Hello all,
>
> Any update on this issue?



Please don't top-post.


We are working on the issue. Please be patient.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-08-31 Thread Andrew Dunstan


On 8/30/21 10:18 PM, l...@laurent-hasson.com wrote:
>>  I see on https://www.postgresql.org/download/ that there is a different
>>  installer from 2ndQuadrant. I am going to try that one and see what I
>>  come up with. Are there any other "standard" distros of Postgres that I
>>  could try out?
>>  
>>  I found out I could download Visual Studio community edition so I am
>>  trying this, but may not have the time to get through a build any time
>>  soon as per my unfamiliarity with the process. I'll follow Ranier's 
> steps
>>  and see if that gets me somewhere.
>>  
>>  Thank you,
>>  Laurent.
>
>
> Hello all,
>
> I think I had a breakthrough. I tried to create a local build and wasn't able 
> to. But I downloaded the 2nd Quadrant installer and the issue disappeared!!! 
> I think this is proof that it's not my personal environment, nor something 
> intrinsic in the codebase, but definitely something in the standard EDB 
> installer.
>
>

No, you're on the wrong track. As I reported earlier, I have reproduced
this issue with a vanilla build which has no installer involvement
whatsoever.

I'm pretty sure the reason you are not seeing this with the 2ndQuadrant
installer is quite simple: it wasn't built with NLS support.

Let me repeat what I said earlier. I will get to the bottom of this.
Please be patient and stop running after red herrings.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-08-31 Thread Andrew Dunstan


On 8/31/21 11:37 AM, Julien Rouhaud wrote:
> On Tue, Aug 31, 2021 at 10:51 PM l...@laurent-hasson.com
>  wrote:
>> OK... I thought that track had been abandoned as per Julien's last message. 
>> Anyways, I'll be patient!
>>
> I just happened to have both standard installer and locally compiled
> versions available, so I could confirm that I reproduced the problem
> at least with the standard installer.  Note that my message also said
> " if default build on windows has NLS included".  After looking a bit
> more into the Windows build system, I confirm that NLS isn't included
> by default so this is not the problem, as Andrew said.
>
> After installing gettext and a few other dependencies and adapting
> config.pl, I wish I could also confirm being able to reproduce the
> problem on my build, but apparently I'm missing something as I can't
> get any modification in config.pl have any effect.  I'm not gonna
> waste more time on that since Andrew is already in the middle of the
> investigation.



The culprit turns out to be the precise version of libiconv/libintl
used. There is a slight difference between the versions used in the
11.13 installer and the 13.4 installer. We need to dig into performance
more (e.g. why does the test take much longer on an NLS enabled build
even when we are using 'initdb --no-locale'?) But I'm pretty confident
now that this is the issue. I've started talks with our installer guys
about fixing it.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-09-02 Thread Andrew Dunstan


On 9/2/21 11:34 AM, Julien Rouhaud wrote:
> On Thu, Sep 2, 2021 at 11:22 PM Andrew Dunstan  wrote:
>> Here are a couple of pictures of profiles made with a tool called
>> sleepy. The bad profile is from release 13.4 built with the latest
>> gettext, built with vcpkg. The good profile is the same build but using
>> the intl-8.dll copied from the release 11.13 installer. The good run
>> takes about a minute. The bad run takes about 30 minutes.
>>
>>
>> I'm not exactly sure what the profiles tell us.
> Isn't GetLocaleInfoA suspicious?  Especially since the doc [1] says
> that it shouldn't be called anymore unless you want to have
> compatibility with OS from more than a decade ago?
>
> [1] 
> https://docs.microsoft.com/en-us/windows/win32/api/winnls/nf-winnls-getlocaleinfoa

Possibly, but the profile doesn't show it as having a great impact.

Maybe surrounding code is affected.

cheers

andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Better performance no-throw conversion?

2021-09-08 Thread Andrew Dunstan


On 9/8/21 1:17 PM, l...@laurent-hasson.com wrote:
>
> Hello,
>
>  
>
> Some databases such as SQLServer (try_cast) or BigQuery (safe.cast)
> offer no-throw conversion. In general, these tend to perform better
> than custom UDFs that catch exceptions, and are also simpler to use.
> For example, in Postgres, I have a function that does the following:
>
>  
>
> CREATE OR REPLACE FUNCTION toFloat(str varchar, val real)
> RETURNS real AS $$
> BEGIN
>   RETURN case when str is null then val else str::real end;
> EXCEPTION WHEN OTHERS THEN
>   RETURN val;
> END;
> $$ LANGUAGE plpgsql COST 1 IMMUTABLE;
>
>  
>
> I couldn’t find a reference to such capabilities in Postgres and
> wondered if I missed it, and if not, is there any plan to add such a
> feature?
>
>  
>


Not that I know of, but you could probably do this fairly simply in C.
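
A rough sketch of what such a C function might look like (the function
name and the strtof-based parsing are choices made for this example,
not an existing API; unlike the plpgsql version above, it avoids paying
for a subtransaction on every call):

    /*
     * Hypothetical try_to_float4(text, real) -> real: returns the
     * parsed value, or the fallback (second argument) on failure.
     *
     * SQL declaration for a PGXS module:
     *   CREATE FUNCTION try_to_float4(text, real) RETURNS real
     *     AS 'MODULE_PATHNAME', 'try_to_float4'
     *     LANGUAGE C IMMUTABLE CALLED ON NULL INPUT;
     */
    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/builtins.h"
    #include <errno.h>
    #include <stdlib.h>

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(try_to_float4);

    Datum
    try_to_float4(PG_FUNCTION_ARGS)
    {
        char   *cstr;
        char   *end;
        float   val;

        /* NULL input string: return the fallback (which may be NULL) */
        if (PG_ARGISNULL(0))
        {
            if (PG_ARGISNULL(1))
                PG_RETURN_NULL();
            PG_RETURN_FLOAT4(PG_GETARG_FLOAT4(1));
        }

        cstr = text_to_cstring(PG_GETARG_TEXT_PP(0));
        errno = 0;
        val = strtof(cstr, &end);

        /* reject empty input, trailing garbage, and overflow */
        if (end == cstr || *end != '\0' || errno == ERANGE)
        {
            pfree(cstr);
            if (PG_ARGISNULL(1))
                PG_RETURN_NULL();
            PG_RETURN_FLOAT4(PG_GETARG_FLOAT4(1));
        }

        pfree(cstr);
        PG_RETURN_FLOAT4(val);
    }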


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-09-13 Thread Andrew Dunstan


On 9/13/21 10:32 AM, l...@laurent-hasson.com wrote:
>
> Hello all,
>
> Any further update or guidance on this issue at this time?
>

Wait for a new installer. Our team is working on it. As I have
previously advised you, please be patient.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-09-13 Thread Andrew Dunstan


On 9/13/21 11:53 AM, l...@laurent-hasson.com wrote:
>
>>  -Original Message-
>>  From: Andrew Dunstan
>>  Sent: Monday, September 13, 2021 11:36
>>  To: l...@laurent-hasson.com; Julien Rouhaud 
>>  Cc: Tom Lane ; Ranier Vilela ;
>>  Justin Pryzby ; pgsql-
>>  performa...@postgresql.org
>>  Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2
>>  and 13.4
>>  
>>  
>>  On 9/13/21 10:32 AM, l...@laurent-hasson.com wrote:
>>  >
>>  > Hello all,
>>  >
>>  > Any further update or guidance on this issue at this time?
>>  >
>>  
>>  Wait for a new installer. Our team is working on it. As I have
>>  previously advised you, please be patient.
>>  
>>  
>>  cheers
>>  
>>  
>>  andrew
>>  
>>  --
>>  Andrew Dunstan
>>  EDB: https://www.enterprisedb.com
>
>
> Hello Andrew,
>
> I'll be as patient as is needed and appreciate absolutely all the work you 
> are all doing. I also know V14 is just around the corner too so the team is 
> super busy 😊
>
> Just looking for some super-rough ETA for some rough planning on our end. Is 
> this something potentially for 13.5 later this year? Or something that may 
> happen before the end of Sept? Or still unknown? And I understand all is 
> always tentative.
>

This is not governed at all by the Postgres release cycle. The issue is
not with Postgres but with the version of libintl used in the build. I
can't speak for the team, they will publish an updated installer when
they get it done. But rest assured it's being worked on. I got email
about it just this morning.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Performance for initial copy when using pg_logical to upgrade Postgres

2021-09-24 Thread Andrew Dunstan


On 9/24/21 10:28 AM, Westwood, Giles wrote:
> At Orcid we're trying to upgrade our Postgres database (10 to 13)
> using pg_logical for no downtime. The problem we have is how long the
> initial copy is taking for the ~500GB database. If it takes, say, 20
> days to complete, will we need to have 20 days of WAL files to start
> catching up when it's complete?
>
> I read an earlier thread which pointed me to the tool
> pglogical_create_subscriber which does a pg_basebackup to start the
> initial replication but this is only going to be useful for logical
> clusters on the same version.
>
> I had hoped that the COPY could be parallelized more by
> "max_sync_workers_per_subscription" which is set to 2. However there's
> only a single process:-
>
> postgres 1022196  6.0 24.5 588340 491564 ?       Ds   Sep22 193:19
> postgres: main: xxx  10.xx.xx.xx(59144) COPY
>
> One of the best resources I've found for real-world examples is a thread
> on GitLab's own GitLab about their Postgres migrations. They discussed
> one method that might work:
>
> 1. Setup 9.6 secondary via streaming
> 2. Turn physical secondary into logical secondary
> 3. Shutdown and upgrade secondary to 10
> 4. Turn secondary back on.
>
> In which case we would only need the time required to perform the upgrade.


If you're using the pglogical extension, the best way is often to create
the replica as a physical replica (using pg_basebackup for example), and
then using the extension's utility program pglogical_create_subscriber
to convert the physical replica to a logical replica, which you then
upgrade and switch over to.
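
A rough sketch of that route (host names and paths invented;
pg_basebackup's flags are standard, but verify the exact
pglogical_create_subscriber options against the pglogical documentation
for your version):

    # 1. create a physical replica of the provider
    pg_basebackup -h primary.example.com -D /pgdata/replica -X stream -R

    # 2. convert it into a logical subscriber
    pglogical_create_subscriber \
      --subscriber_name=upgrade_target \
      --subscriber_dsn="host=replica.example.com dbname=app" \
      --provider_dsn="host=primary.example.com dbname=app" \
      -D /pgdata/replica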


Of course, test it out before doing this for real.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-09-24 Thread Andrew Dunstan


On 9/13/21 4:36 PM, Andrew Dunstan wrote:
> On 9/13/21 11:53 AM, l...@laurent-hasson.com wrote:
>>>  -Original Message-
>>>  From: Andrew Dunstan
>>>  Sent: Monday, September 13, 2021 11:36
>>>  To: l...@laurent-hasson.com; Julien Rouhaud 
>>>  Cc: Tom Lane ; Ranier Vilela ;
>>>  Justin Pryzby ; pgsql-
>>>  performa...@postgresql.org
>>>  Subject: Re: Big Performance drop of Exceptions in UDFs between V11.2
>>>  and 13.4
>>>  
>>>  
>>>  On 9/13/21 10:32 AM, l...@laurent-hasson.com wrote:
>>>  >
>>>  > Hello all,
>>>  >
>>>  > Any further update or guidance on this issue at this time?
>>>  >
>>>  
>>>  Wait for a new installer. Our team is working on it. As I have
>>>  previously advised you, please be patient.
>>>  
>>>  
>>>  cheers
>>>  
>>>  
>>>  andrew
>>>  
>>>  --
>>>  Andrew Dunstan
>>>  EDB: https://www.enterprisedb.com
>>
>>
>> Hello Andrew,
>>
>> I'll be as patient as is needed and appreciate absolutely all the work you 
>> are all doing. I also know V14 is just around the corner too so the team is 
>> super busy 😊
>>
>> Just looking for some super-rough ETA for some rough planning on our end. Is 
>> this something potentially for 13.5 later this year? Or something that may 
>> happen before the end of Sept? Or still unknown? And I understand all is 
>> always tentative.
>>
> This is not governed at all by the Postgres release cycle. The issue is
> not with Postgres but with the version of libintl used in the build. I
> can't speak for the team, they will publish an updated installer when
> they get it done. But rest assured it's being worked on. I got email
> about it just this morning.
>
>

EDB has now published new installers for versions later than release 11,
containing Postgres built with an earlier version of gettext that does
not exhibit the problem. Please verify that these fix the issue. If you
already have Postgres installed from our installer you should be able to
upgrade using Stackbuilder. Otherwise, you can download from our usual
download sites.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Big Performance drop of Exceptions in UDFs between V11.2 and 13.4

2021-09-27 Thread Andrew Dunstan


On 9/25/21 9:33 PM, l...@laurent-hasson.com wrote:
>>  > EDB has now published new installers for versions later than release
>>  > 11, containing Postgres built with an earlier version of gettext that
>>  > does not exhibit the problem. Please verify that these fix the issue.
>>  > If you already have Postgres installed from our installer you should
>>  > be able to upgrade using Stackbuilder. Otherwise, you can download
>>  > from our usual download sites.
>>  >
>>  > cheers
>>  >
>>  > andrew
>>  >
>>  > --
>>  > Andrew Dunstan
>>  > EDB: https://www.enterprisedb.com
>  
>
> Hello Andrew,
>
> I just downloaded the 13.4 Windows x86-64 installer from
> https://www.enterprisedb.com/downloads/postgres-postgresql-downloads but it's
> the exact same file, bit for bit, as the previous version I had. Am I looking
> in the wrong place?
>

Thanks. We're dealing with that. However, you can update that version
via stackbuilder. It will show you that 13.4.2 is available. This has
the correct libintl DLL. I just did this to verify it.


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: SQL performance issue after migration from Oracle to Aurora postgres

2022-04-14 Thread Andrew Dunstan


On 2022-04-14 Th 05:35, Goti wrote:
> Hi All,
>
> We migrated from Oracle 12.1 to Aurora postgres 12.8.1. The query in
> Oracle takes less than a millisecond however the same query in aurora
> is taking more than a second. We have a larger number of executions
> for the SQL which is causing an overall latency for the application. I
> am new to postgres and trying to get some ideas around how better we
> can optimize. I have the plan details for the SQL as below. Can
> someone shed some light on possible ways that can make this query to
> meet its original execution time?
>
> https://explain.depesz.com/s/jlVc#html
>

Without knowing much about your data I would suggest trying to rewrite
the query to get rid of the correlated subselect, using a join instead.
I note the use of both implicit and explicit joins in your FROM clause,
which is something I always advise against, as it hurts clarity, but
that's a matter of style rather than performance.
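
As a generic illustration of that rewrite (the table and column names
here are invented, since the real query isn't shown in the thread):

    -- correlated subselect: re-executed for every outer row
    SELECT o.id,
           (SELECT max(e.created_at)
              FROM events e
             WHERE e.order_id = o.id) AS last_event
      FROM orders o;

    -- join form: one aggregation pass, then a single join
    SELECT o.id, e.last_event
      FROM orders o
      LEFT JOIN (SELECT order_id, max(created_at) AS last_event
                   FROM events
                  GROUP BY order_id) e ON e.order_id = o.id;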


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: PostgresSQL 9.5.21 very slow to connect and perform basic queries

2022-07-21 Thread Andrew Dunstan


On 2022-07-21 Th 14:37, bruno da silva wrote:
> Hello.
>
> I'm investigating an issue on a PostgreSQL 9.5.21 installation that
> becomes unusable in an intermittent way. Simple queries like "select
> now();" can take 20s, commits take 2s, and it all gets fixed after an
> engine restart.
>
> I looked into the pg logs and found no signs of errors, and checkpoints
> are always timed. The machine is well provisioned, load isn't too high,
> and cpu io wait is under 1%.
>
> any suggestions on what I should check more?
>
>
>


9.5 has been out of support for nearly 2 years. You should be looking to
upgrade.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: pgbench: could not connect to server: Resource temporarily unavailable

2022-08-21 Thread Andrew Dunstan


On 2022-08-20 Sa 23:20, Tom Lane wrote:
> Kevin McKibbin  writes:
>> What's limiting my DB from allowing more connections?
>> This is a sample of the output I'm getting, which repeats the error 52
>> times (one for each failed connection)
>> -bash-4.2$ pgbench -c 200 -j 200 -t 100 benchy
>> ...
>> connection to database "benchy" failed:
>> could not connect to server: Resource temporarily unavailable
>> Is the server running locally and accepting
>> connections on Unix domain socket
>> "/var/run/postgresql/.s.PGSQL.5432"?
> This is apparently a client-side failure not a server-side failure
> (you could confirm that by seeing whether any corresponding
> failure shows up in the postmaster log).  That means that the
> kernel wouldn't honor pgbench's attempt to open a connection,
> which implies you haven't provisioned enough networking resources
> to support the number of connections you want.  Since you haven't
> mentioned what platform this is on, it's impossible to say more
> than that --- but it doesn't look like Postgres configuration
> settings are at issue at all.



The first question in my mind from the above is where this postgres
instance is actually listening. Is it really /var/run/postgresql? Its
postmaster.pid will tell you. I have often seen client programs pick up
a system libpq which is compiled with a different default socket directory.
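
A quick way to check both sides (paths here are illustrative):

    # the socket directory the server actually uses is recorded in
    # postmaster.pid in the data directory
    cat "$PGDATA/postmaster.pid"
    # or ask a live session
    psql -c 'SHOW unix_socket_directories;'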



cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: pgbench: could not connect to server: Resource temporarily unavailable

2022-08-21 Thread Andrew Dunstan


On 2022-08-21 Su 17:15, Tom Lane wrote:
> Andrew Dunstan  writes:
>> On 2022-08-20 Sa 23:20, Tom Lane wrote:
>>> Kevin McKibbin  writes:
>>>> What's limiting my DB from allowing more connections?
>> The first question in my mind from the above is where this postgres
>> instance is actually listening. Is it really /var/run/postgresql? Its
>> postmaster.pid will tell you. I have often seen client programs pick up
>> a system libpq which is compiled with a different default socket directory.
> I wouldn't think that'd explain a symptom of some connections succeeding
> and others not within the same pgbench run.


Oh, yes, I agree, I missed that aspect of it.


>
> I tried to duplicate this behavior locally (on RHEL8) and got something
> interesting.  After increasing the server's max_connections to 1000,
> I can do
>
> $ pgbench -S -c 200 -j 100 -t 100 bench
>
> and it goes through fine.  But:
>
> $ pgbench -S -c 200 -j 200 -t 100 bench
> pgbench (16devel)
> starting vacuum...end.
> pgbench: error: connection to server on socket "/tmp/.s.PGSQL.5440" failed: 
> Resource temporarily unavailable
> Is the server running locally and accepting connections on that 
> socket?
> pgbench: error: could not create connection for client 154
>
> So whatever is triggering this has nothing to do with the server,
> but with how many threads are created inside pgbench.  I notice
> also that sometimes it works, making it seem like possibly a race
> condition.  Either that or there's some limitation on how fast
> threads within a process can open sockets.
>
> Also, I determined that libpq's connect() call is failing synchronously
> (we get EAGAIN directly from the connect() call, not later).  I wondered
> if libpq should accept EAGAIN as a synonym for EINPROGRESS, but no:
> that just makes it fail on the next touch of the socket.
>
> The only documented reason for connect(2) to fail with EAGAIN is
>
>EAGAIN Insufficient entries in the routing cache.
>
> which seems pretty unlikely to be the issue here, since all these
> connections are being made to the same local address.
>
> On the whole this is smelling more like a Linux kernel bug than
> anything else.
>
>   


*nod*


cheers


andrew


--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Catching up with performance & PostgreSQL 15

2022-11-30 Thread Andrew Dunstan


On 2022-11-29 Tu 16:06, David Rowley wrote:
> On Wed, 30 Nov 2022 at 03:31, Tom Lane  wrote:
>> Alvaro Herrera  writes:
>>> IMO it was a mistake to turn JIT on in the default config, so that's one
>>> thing you'll likely want to change.
>> I wouldn't necessarily go quite that far, but I do think that the
>> default cost thresholds for invoking it are enormously too low,
>> or else there are serious bugs in the cost-estimation algorithms
>> for deciding when to use it.  A nearby example[1] of a sub-1-sec
>> partitioned query that took 30sec after JIT was enabled makes me
>> wonder if we're accounting correctly for per-partition JIT costs.
> I'm very grateful for JIT. However, I do agree that the costs need to work.
>
> The problem is that the threshold to turn JIT on does not consider how
> many expressions need to be compiled. It's quite different to JIT
> compile a simple one-node plan with a total cost of 10 than to JIT
> compile a plan that costs the same but queries 1000 partitions. I
> think we should be compiling expressions based on the cost of the
> individual node rather than the total cost of the plan. We need to
> make some changes so we can more easily determine the number of times
> a given node will be executed before we can determine how worthwhile
> JITting an expression in a node will be.
>

I think Alvaro's point is that it would have been better to work out
these wrinkles before turning on JIT by default. Based on anecdotal
reports from the field I'm inclined to agree.
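
In the meantime, the knobs involved are easy enough to adjust; the
values below are purely illustrative, not recommendations:

    -- per-session (or per-user/per-database) escape hatch
    SET jit = off;
    -- or raise the thresholds cluster-wide (defaults are 100000
    -- and 500000 respectively)
    ALTER SYSTEM SET jit_above_cost = 1000000;
    ALTER SYSTEM SET jit_optimize_above_cost = 5000000;
    SELECT pg_reload_conf();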


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: Catching up with performance & PostgreSQL 15

2022-12-01 Thread Andrew Dunstan


On 2022-11-30 We 11:36, Tom Lane wrote:
> Andres Freund  writes:
>> On November 30, 2022 3:47:32 AM PST, Andrew Dunstan  
>> wrote:
>>> I think Alvaro's point is that it would have been better to work out
>>> these wrinkles before turning on JIT by default. Based on anecdotal
>>> reports from the field I'm inclined to agree.
>> The problem is that back when it was introduced these problems didn't exist 
>> to a significant degree. JIT was developed when partitioning was very 
>> minimal, and the problems we're seeing are almost exclusively with queries 
>> with many partitions. The problems really only started much more recently. 
>> It also wasn't enabled in the first release.
> Well, wherever you want to pin the blame, it seems clear that we
> have a problem now.  And I don't think flipping back to off-by-default
> is the answer -- surely there is some population of users who will
> not be happy with that.  We really need to prioritize fixing the
> cost-estimation problems, and/or tweaking the default thresholds.
>
>   


+1


FTR I am not trying to pin blame anywhere. I think the work that's been
done on JIT is more than impressive.


cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com





Re: INSERT statement going in IPC Wait_event

2023-03-01 Thread Andrew Dunstan


On 2023-03-01 We 15:40, aditya desai wrote:

> Hi All,
> Unfortunately I am unable to share a query plan or query.
>
> I have a SQL which is getting called from a web service. At a certain
> point, where it inserts data in the table, the process goes into a hung
> state. pg_stat_activity shows wait_event='IPC',
> wait_event_type=MessageQueueSend. In the web service log we see an
> "I/O error occurred" message.
>
> Surprisingly, when I run it from psql or pgAdmin it runs fine.




Doesn't this suggest that the problem is probably not with Postgres but 
with your web service (about which you have given us no information 
whatsoever)?



cheers


andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com