Re: Very newbie question

2023-10-26 Thread Peter J. Holzer
On 2023-10-25 17:48:46 +0200, Olivier Gautherot wrote:
> On Wed, Oct 25, 2023 at 16:58, Олег Самойлов  wrote:
> Okey, I see no one was be able to solve this problem. But I could. May be
> for someone this will be useful too. There is solution.
[...]
> Now query is:
> 
> SELECT generate_series(min(id)/1000, max(id)/1000) AS n FROM
> delivery) as part_numbers
>          WHERE (SELECT max(created_at) from delivery where n*1000 <=id
> and id < (n+1)*1000)
>             < CURRENT_DATE-'3 month'::interval;
> 
> Return the same (number of partition need to archive), accelerated by two
> btree index: on id and created_at. Works very quick, less then second.
[...]
> Your fast solution will work as long as you don't have missing sequences (like
> deleted rows).

Why do you think this would break with missing sequence numbers?

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Disk wait problem...

2023-10-26 Thread Peter J. Holzer
On 2023-10-23 16:31:30 -0700, Adrian Klaver wrote:
> On 10/23/23 14:55, p...@pfortin.com wrote:
> > Next, I made a query list for 30 tables, turned on timing and pasted the
> > queries.  Pasting results inline:
> > ncsbe=# select count(*) from ncvhis_2012_12_06;
> > select count(*) from ncvhis_2016_12_03;
> > select count(*) from ncvhis_2020_08_01;
> > select count(*) from ncvhis_2020_10_31;
[...]
> > Time: 72404.786 ms (01:12.405)
> > Time: 301.246 ms
> > Time: 409.974 ms
> > Time: 363.836 ms
> > Time: 351.167 ms
> > Time: 348.378 ms
> > Time: 348.712 ms
> > Time: 351.901 ms
> > Time: 487.837 ms
> > Time: 40987.826 ms (00:40.988)
> > Time: 76964.281 ms (01:16.964)
> > Time: 483.329 ms
> > Time: 18919.267 ms (00:18.919)
> > Time: 50896.978 ms (00:50.897)
> > Time: 25784.000 ms (00:25.784)
> > Time: 45407.599 ms (00:45.408)
> > Time: 75171.609 ms (01:15.172)
> > Time: 88871.004 ms (01:28.871)
> > Time: 128565.127 ms (02:08.565)
> > Time: 210248.222 ms (03:30.248)
> > Time: 246609.561 ms (04:06.610)   4 minutes!  ;p
> > Time: 166406.283 ms (02:46.406)
> > Time: 291483.538 ms (04:51.484)  nearly 5 minutes ;p ;p
> > Time: 2269.961 ms (00:02.270)
> > Time: 413.219 ms
> > Time: 433.817 ms
> > Time: 395.844 ms
> > Time: 7251.637 ms (00:07.252)
> > Time: 384.328 ms
> > Time: 384.887 ms
> > 
> > I don't see a pattern in the above; matches the randomness I saw using
> > WB...

It would be interesting to see whether there's a difference in plan
between the slow and the fast queries.

If your experiment above is repeatable, you can just prefix every
query with 
explain (analyze, buffers)


explain (analyze, buffers) select count(*) from ncvhis_2016_12_03;
explain (analyze, buffers) select count(*) from ncvhis_2020_08_01;
explain (analyze, buffers) select count(*) from ncvhis_2020_10_31;
...

If it is hard to reproduce (happens only sometimes), the best way to
catch it is probably to enable auto_explain
(https://www.postgresql.org/docs/current/auto-explain.html), possibly
with the auto_explain.log_analyze option, and check the logs after it
has happened. (auto_explain is generally a good idea, IMHO.)
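
For example, a minimal per-session sketch (auto_explain.log_min_duration,
log_analyze and log_buffers are the stock auto_explain settings; the 1s
threshold is only an illustration, and LOAD may require superuser):

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '1s';  -- log plans of statements slower than 1 second
SET auto_explain.log_analyze = on;         -- include actual row counts and timings
SET auto_explain.log_buffers = on;         -- include buffer usage (needs log_analyze)
-- then rerun the count(*) queries and check the server log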


> > Before the above finished, I issued this command on another konsole...
> > 
> > $ while true; do ls -l > /tmp/ll; date; done

This is unlikely to generate noticeable disk waits. The current
directory will be in the cache after the first ls, and the writes happen
asynchronously.
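
A loop that provably hits the device would need to bypass that cache,
something like this (a sketch: iflag=direct is GNU dd's way of skipping the
page cache, the device path is a placeholder, and reading a raw device
usually needs root):

while true; do dd if=/dev/nvme0n1 of=/dev/null bs=8k count=1 iflag=direct status=none; date; sleep 1; done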

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Very newbie question

2023-10-26 Thread Olivier Gautherot
Hi,

On Thu, Oct 26, 2023 at 11:15, Peter J. Holzer  wrote:

> On 2023-10-25 17:48:46 +0200, Olivier Gautherot wrote:
> > On Wed, Oct 25, 2023 at 16:58, Олег Самойлов  wrote:
> > Okey, I see no one was be able to solve this problem. But I could.
> May be
> > for someone this will be useful too. There is solution.
> [...]
> > Now query is:
> >
> > SELECT generate_series(min(id)/1000, max(id)/1000) AS n FROM
> > delivery) as part_numbers
> >  WHERE (SELECT max(created_at) from delivery where
> n*1000 <=id
> > and id < (n+1)*1000)
> > < CURRENT_DATE-'3 month'::interval;
> >
> > Return the same (number of partition need to archive), accelerated
> by two
> > btree index: on id and created_at. Works very quick, less then
> second.
> [...]
> > Your fast solution will work as long as you don't have missing sequences
> (like
> > deleted rows).
>
> Why do you think this would break with missing sequence numbers?
>
> hp
>

In the suggested query, the return value contains a list of sequential
numbers from a min to a max - they seem to be markers of the partitions.
Let's assume that a complete partition is deleted in the middle: its index
will still be returned by the query, although it doesn't exist any more in
the table. It can be an issue if the list of indexes is actually used and
partitions are not deleted sequentially.

My two cents' worth, to ensure data integrity.


Regards
Olivier Gautherot

>


pgAdmin view

2023-10-26 Thread Shaozhong SHI
If a pgAdmin view is created and the tables it queries are missing, what will
happen?

If a pgAdmin view is created and its query takes 20 hours to complete, what
will happen?

Would views automatically run when you start pgAdmin?

Regards,

David


Re: Very newbie question

2023-10-26 Thread Peter J. Holzer
On 2023-10-26 11:56:56 +0200, Olivier Gautherot wrote:
> On Thu, Oct 26, 2023 at 11:15, Peter J. Holzer  wrote:
> On 2023-10-25 17:48:46 +0200, Olivier Gautherot wrote:
> > On Wed, Oct 25, 2023 at 16:58, Олег Самойлов  wrote:
> >     Okey, I see no one was be able to solve this problem. But I could.
> > May be
> >     for someone this will be useful too. There is solution.
> [...]
> >     Now query is:
> >
> >     SELECT generate_series(min(id)/1000, max(id)/1000) AS n FROM
> >     delivery) as part_numbers
> >              WHERE (SELECT max(created_at) from delivery where 
> n*1000
> <=id
> >     and id < (n+1)*1000)
> >                 < CURRENT_DATE-'3 month'::interval;

I just realized that this query is mangled. I'm going to assume that it
should have been something like 

with part_numbers as (
SELECT generate_series(min(id)/1000, max(id)/1000) as n
from delivery
)
select * from part_numbers
WHERE (SELECT max(created_at) from delivery where n*1000 <= id and id <
(n+1)*1000) < CURRENT_DATE-'3 month'::interval;

> [...]
> > Your fast solution will work as long as you don't have missing sequences
> (like
> > deleted rows).
> 
> Why do you think this would break with missing sequence numbers?
> 
> 
> In the suggested query, the return value contains a list of sequential numbers
> from a min to a max - they seem to be markers of the partitions. Let's assume
> that a complete partition is deleted in the middle: its index will still be
> returned by the query, although it doesn't exist any more in the table.

I don't think it will. While the generate_series() will produce the
partition number, the where clause will not find any matching rows, so
the query will not return it.

E.g. (this table isn't partitioned, but that shouldn't affect the
result, also I'll reduce the "partition size" to 100 to make it more
readable):

create table delivery (id int, created_at date);
insert into delivery values(200, '2000-01-01');
insert into delivery values(299, '2000-12-01');
insert into delivery values(412, '2002-02-01');
insert into delivery values(439, '2002-03-01');
insert into delivery values(501, '2023-01-01');
insert into delivery values(555, now());

Note that there are no records in "partition" 3, and "partition" 5
contains current data, so we should get only "partition numbers" 2 and
4:

with part_numbers as (
SELECT generate_series(min(id)/100, max(id)/100) as n
from delivery
)
select * from part_numbers
WHERE (SELECT max(created_at) from delivery where n*100 <= id and id < 
(n+1)*100) < CURRENT_DATE-'3 month'::interval;

╔═══╗
║ n ║
╟───╢
║ 2 ║
║ 4 ║
╚═══╝
(2 rows)

Looks ok to me.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: pgAdmin view

2023-10-26 Thread David G. Johnston
On Thursday, October 26, 2023, Shaozhong SHI  wrote:

> If a pgAdmin view is created and the tables it queries are missing, what will
> happen?
>
> If a pgAdmin view is created and its query takes 20 hours to complete, what
> will happen?
>
> Would views automatically run when you start pgAdmin?
>
>
Test it with a view that takes seconds to run.

David J.


Introducing PostgresFly: A New PostgreSQL Installation Tool

2023-10-26 Thread Rozad Khalaf
Hello PostgreSQL Community,
I've recently developed a tool named PostgresFly that I'd like to share
with you. It's a Python-crafted tool designed specifically for Windows
users to remotely install PostgreSQL on Linux machines directly from
source. This approach offers the benefit of customization.

For more details and to access the code, please visit my GitHub repository:
https://github.com/Rozado9/PostgresFly
I'd appreciate any feedback and suggestions!

Best regards
Rozad


Question regarding the new SQL standard

2023-10-26 Thread Anders Kildemand
Hello!

What's the situation with Property Graph Queries in Postgres?

I've heard that it has become part of the SQL standard from this blog: 
https://peter.eisentraut.org/blog/2023/04/04/sql-2023-is-finished-here-is-whats-new

How far along in development is this feature, and is there an expected time
frame for when it'll be finished?

Best regards
Anders Kildemand


Uninstalling Ora2pg

2023-10-26 Thread Yongye Serkfem
Hello PostgreSQL Community,
I am having difficulties uninstalling ora2pg which I manually installed on
our Postgresql servers. I would appreciate any input on how to uninstall
it. Thanks everyone for your time.

Very Respectfully
Yong


Re: Question regarding the new SQL standard

2023-10-26 Thread Tom Lane
Anders Kildemand  writes:
> What's the situation with Property Graph Queries in Postgres?

There isn't any.

> I've heard that it has become part of the SQL standard from this blog: 
> https://peter.eisentraut.org/blog/2023/04/04/sql-2023-is-finished-here-is-whats-new
> How far in development is this feature? and is there an expected time frame 
> for when it'll be finished?

I think you have a fundamental misunderstanding about how Postgres
development works.  There is plenty of stuff in the standard that
nobody has any particular interest in implementing for PG.  Perhaps
someone will take an interest in implementing this feature, or
perhaps not.  But there's no project-wide goal to make it happen,
much less a timetable.  Even if someone does take an interest,
they might not succeed in producing a committable patch.

regards, tom lane




Re: pgAdmin view

2023-10-26 Thread Adrian Klaver

On 10/26/23 03:56, Shaozhong SHI wrote:
If a pgAdmin view is created and the tables it queries are missing, what will
happen?


Easy enough to test:

create view test_vw as select * from non_existent_table;
ERROR:  relation "non_existent_table" does not exist
LINE 1: create view test_vw as select * from non_existent_table;



If a pgAdmin view is created and its query takes 20 hours to complete, what
will happen?



Docs:

https://www.postgresql.org/docs/current/sql-createview.html

"CREATE VIEW defines a view of a query. The view is not physically 
materialized. Instead, the query is run every time the view is 
referenced in a query."
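
A quick way to see that for yourself (a sketch: pg_sleep() stands in for a
long-running query, and the view name is made up):

create view slow_vw as select pg_sleep(5) as waited;  -- CREATE VIEW returns immediately; the query is not run
select * from slow_vw;  -- only now does the underlying query run (~5 seconds here)
drop view slow_vw;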




Would views automatically run when you start pgAdmin?



See above. The view query will not execute until specifically asked for 
and nothing I have seen in my limited use of pgAdmin does that 
automatically.




Regards,

David


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Uninstalling Ora2pg

2023-10-26 Thread Adrian Klaver

On 10/26/23 05:42, Yongye Serkfem wrote:

Hello PostgreSQL Community,
I am having difficulties uninstalling ora2pg which I manually installed 
on our Postgresql servers. I would appreciate any input on how to 
uninstall it. Thanks everyone for your time.


Need way more information:

1) ora2pg version.

2) OS and version.

3) Define 'difficulties' with:

a) Commands used.

b) Error messages received.



Very Respectfully
Yong


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Uninstalling Ora2pg

2023-10-26 Thread Ron

On 10/26/23 07:42, Yongye Serkfem wrote:

Hello PostgreSQL Community,
I am having difficulties uninstalling ora2pg which I manually installed on 
our Postgresql servers. I would appreciate any input on how to uninstall 
it. Thanks everyone for your time.


Why do you have to uninstall it, when "just don't use it" accomplishes the task?

(This ignores the question "why did you install ora2pg on a *PostgreSQL*
server?"  That's backwards.)


--
Born in Arizona, moved to Babylonia.

Re: Question regarding the new SQL standard

2023-10-26 Thread Raymond Brinzer
I'd like to express my appreciation for a project with a feature set so
extensive that a person might mistakenly assume that its goal was to cover
the spec comprehensively.

On Thu, Oct 26, 2023 at 10:32 AM Tom Lane  wrote:

> Anders Kildemand  writes:
> > What's the situation with Property Graph Queries in Postgres?
>
> There isn't any.
>
> > I've heard that it has become part of the SQL standard from this blog:
> https://peter.eisentraut.org/blog/2023/04/04/sql-2023-is-finished-here-is-whats-new
> > How far in development is this feature? and is there an expected time
> frame for when it'll be finished?
>
> I think you have a fundamental misunderstanding about how Postgres
> development works.  There is plenty of stuff in the standard that
> nobody has any particular interest in implementing for PG.  Perhaps
> someone will take an interest in implementing this feature, or
> perhaps not.  But there's no project-wide goal to make it happen,
> much less a timetable.  Even if someone does take an interest,
> they might not succeed in producing a committable patch.
>
> regards, tom lane
>
>
>

-- 
Ray Brinzer


issue with remote backup

2023-10-26 Thread Atul Kumar
Hi,

I have the postgres12 version running on centos7.

In pg_hba.conf file I have an entry like below:

host    all             all             /32            md5

So from the above entry, I understand that all users can connect to all
databases from the remote ip.


But when I use pg_basebackup from remote server using below command I get
the below error:

pg_basebackup -h DB_server_hostname -p 5444 -U postgres -D
/var/lib/edb/backups/10262023_1 -Fp -Xs -c fast -P

Error that I got:

pg_basebackup: error: FATAL:  no pg_hba.conf entry for replication
connection from host "remote_ip", user "postgres", SSL off

So please help me understand why I am getting this error even after having
a related entry in the pg_hba.conf file.




 Regards.


Re: issue with remote backup

2023-10-26 Thread Tom Lane
Atul Kumar  writes:
> In pg_hba.conf file I have an entry like below:

> host    all             all             /32            md5

That is an entry governing ordinary session connections ...

> Error that I got:
> pg_basebackup: error: FATAL:  no pg_hba.conf entry for replication
> connection from host "remote_ip", user "postgres", SSL off

... and that is complaining about the lack of an entry for
replication connections.  They are handled by different pg_hba.conf
entries.
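
For example, an additional line along these lines (a sketch: 192.0.2.10 is a
placeholder for the backup client's address; reload the server afterwards,
e.g. with SELECT pg_reload_conf()):

# placeholder address: replace with the machine running pg_basebackup
host    replication     postgres        192.0.2.10/32           md5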

regards, tom lane




Re: issue with remote backup

2023-10-26 Thread Adrian Klaver

On 10/26/23 09:51, Atul Kumar wrote:

Hi,

I have the postgres12 version running on centos7.

In pg_hba.conf file I have an entry like below:

host    all             all             /32            md5

So from the above entry, I understand that all users can connect to all 
databases from the remote ip.



But when I use pg_basebackup from remote server using below command I 
get the below error:


pg_basebackup -h DB_server_hostname -p 5444 -U postgres -D 
/var/lib/edb/backups/10262023_1 -Fp -Xs -c fast -P


Error that I got:

pg_basebackup: error: FATAL:  no pg_hba.conf entry for replication 
connection from host "remote_ip", user "postgres", SSL off


So please help me understand why I am getting this error even after 
having a related entry in the pg_hba.conf file.



https://www.postgresql.org/docs/current/auth-pg-hba-conf.html

"
database

Specifies which database name(s) this record matches. The value all 
specifies that it matches all databases. The value sameuser specifies 
that the record matches if the requested database has the same name as 
the requested user. The value samerole specifies that the requested user 
must be a member of the role with the same name as the requested 
database. (samegroup is an obsolete but still accepted spelling of 
samerole.) Superusers are not considered to be members of a role for the 
purposes of samerole unless they are explicitly members of the role, 
directly or indirectly, and not just by virtue of being a superuser. The 
value replication specifies that the record matches if a physical 
replication connection is requested, however, it doesn't match with 
logical replication connections. Note that physical replication 
connections do not specify any particular database whereas logical 
replication connections do specify it. Otherwise, this is the name of a 
specific PostgreSQL database or a regular expression. Multiple database 
names and/or regular expressions can be supplied by separating them with 
commas.


If the database name starts with a slash (/), the remainder of the 
name is treated as a regular expression. (See Section 9.7.3.1 for 
details of PostgreSQL's regular expression syntax.)


A separate file containing database names and/or regular 
expressions can be specified by preceding the file name with @.


"






  Regards.


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Question regarding the new SQL standard

2023-10-26 Thread Imre Samu
Anders Kildemand  wrote (on Thu, Oct 26, 2023, 14:38):

> What's the situation with Property Graph Queries in Postgres?
>

If the underlying question is how to utilize graph database functionality
in PostgreSQL, there are similar extensions that may be worth exploring:
- Apache AGE is a PostgreSQL extension that provides graph database
functionality.  https://age.apache.org/
- AgensGraph is a transactional graph database based on PostgreSQL.
https://github.com/bitnine-oss/agensgraph
- etc.

Related : https://www.dylanpaulus.com/posts/postgres-is-a-graph-database/
(  https://news.ycombinator.com/item?id=35386948 )

Best regards,
  Imre


Prepared statements plan_cache_mode considerations

2023-10-26 Thread Zain Kabani
Hi team,

I was looking into using prepared statements and using the auto
plan_cache_mode. The issue is that the current sample of data collected to
determine whether to use custom plans or the generic one is very small and
susceptible to a bad set of queries that might pick the suboptimal choice.
It’s currently hard coded as 5 custom plans [
https://github.com/postgres/postgres/blob/master/src/backend/utils/cache/plancache.c#L1051],
 but I'd like to see this as a configurable parameter if possible. Failing
that, I'd love to hear any recommendations on how to deal with this.

I'd want PG to be accurate at picking between the custom_plan and
generic_plan strategy, so that we could safely realize the benefits that
cached plans can give us.

I’d also be curious to know people’s experience with using this and if
moving to using prepared statements has resulted in a latency regression
due to a bad strategy being used in production.
I’m a contributor to the PgCat [https://github.com/postgresml/pgcat]
project and recently added support for prepared statements so I’m looking
to understand the space a little better as we would be looking to migrate
services that were previously not using prepared statements to using them.

-- 
Thanks,
Zain Kabani


Re: Uninstalling Ora2pg

2023-10-26 Thread Yongye Serkfem
Thanks, I appreciate the input

Regards
Yong

On Thu, Oct 26, 2023, 11:39 AM Ron  wrote:

> On 10/26/23 07:42, Yongye Serkfem wrote:
>
> Hello PostgreSQL Community,
> I am having difficulties uninstalling ora2pg which I manually installed on
> our Postgresql servers. I would appreciate any input on how to uninstall
> it. Thanks everyone for your time.
>
>
> Why do you have to uninstall it, when "just don't use it" accomplishes the
> task?
>
> (This ignores the question "why did you install ora2pg on a *PostgreSQL*
> server?"  That's backwards.)
>
> --
> Born in Arizona, moved to Babylonia.
>


Re: Prepared statements plan_cache_mode considerations

2023-10-26 Thread David G. Johnston
On Thu, Oct 26, 2023 at 10:56 AM Zain Kabani 
wrote:

> I was looking into using prepared statements and using the auto
> plan_cache_mode. The issue is that the current sample of data collected to
> determine whether to use custom plans or the generic one is very small and
> susceptible to a bad set of queries that might pick the suboptimal choice.
>

It seems unreasonable for the live instance of PostgreSQL to figure this
out on-the-fly.  A more useful approach would seem to be: use "always
custom" for inexpensive plans over simple queries until you hit a point of
pain.  Then test whether changing to "always generic" results in a better
outcome given the specific situation you encountered.
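
A quick way to compare the two strategies for a given statement (a sketch:
plan_cache_mode, PREPARE and EXPLAIN EXECUTE are stock PostgreSQL; the
statement and table names are made up):

PREPARE get_cust(int) AS SELECT * FROM orders WHERE customer_id = $1;
SET plan_cache_mode = force_custom_plan;
EXPLAIN (ANALYZE) EXECUTE get_cust(42);  -- plan built for this specific value
SET plan_cache_mode = force_generic_plan;
EXPLAIN (ANALYZE) EXECUTE get_cust(42);  -- generic plan, parameter stays symbolic
RESET plan_cache_mode;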

The choice of 5 is basically the trade-off between people using prepared
statements solely for their SQL injection protection benefits without
actually reusing them, and never getting to the point of switching to a
generic plan because the amount of time it takes to be confident (to get a
statistically significant sample) is too far out.  How would you go about
deciding what an appropriate value for this setting would be if you did
have control of it?

If anything, removing the arbitrary number and basing the decision on, say,
whether or not the incoming parameter is within the MCV array, would seem
to be a way to expend a bit more effort with a cheap lookup that likely
will save the need for a custom plan.  I would think some plans - say, ones
related to PK lookups - would simply be assumed better off done with a
generic plan involving an index scan (maybe based upon a table size check)
derived from the initial custom plan.

David J.


Re: purpose of an entry in pg_hba.conf file

2023-10-26 Thread Atul Kumar
As per the response:

"It allows anyone/anything on the local machine to connect to the database
without authentication.  Whether that impacts any particular one/thing
depends on your personal setup."

There is already one line that serves your stated purpose:
local   all             all                                     trust


That's why I specifically raised this question about the line below to the
PostgreSQL experts:
host    all             all             127.0.0.1/32            trust

So I am still not able to find a valid reason for keeping this entry. Please
help me by explaining it.


Regards,

On Thu, Oct 26, 2023 at 11:56 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> Always reply to the list, it is ok to CC individuals.  Also, the
> convention here is to inline post (or bottom if you must) as in my first
> reply; not top-post as you and I have done here.
>
> I'd suggest also putting into your own words what you believe the entry is
> providing/enabling.  Read the relevant documentation for aid in formulating
> such a description.
>
> It allows anyone/anything on the local machine to connect to the database
> without authentication.  Whether that impacts any particular one/thing
> depends on your personal setup.
>
> David J.
>
> On Thu, Oct 26, 2023 at 11:04 AM Atul Kumar  wrote:
>
>> Hi,
>>
>> Could you elaborate more as it seems that your response doesn't satisfy
>> my query which is "what is the exact purpose of this entry and what
>> would be the impact of removing it on other tools/processes like pgbouncer,
>> pem, replication etc  ?"
>>
>> I am yet to understand the impact of removing this entry.
>>
>>
>> Regards.
>>
>> On Thu, Oct 26, 2023 at 5:52 AM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Wed, Oct 25, 2023 at 5:11 PM Atul Kumar 
>>> wrote:
>>>
 Hi,

 As you know already,  pg_hba.conf file always has the below entry after
 successful installation of postgres.

 host    all             all             127.0.0.1/32            trust


 Please let me know what is the exact purpose of this entry and what
 would be the impact of removing it on other tools/processes like pgbouncer,
 pem, replication etc  ?



>>> While that may be a true statement for installation from source I'm
>>> pretty certain most packagers have a more tightly controlled setup that
>>> doesn't involve "trust" authentication.
>>>
>>> The reason behind choosing to include that specific line is to minimize
>>> the amount of post-install effort needed for one to connect to the server
>>> from the local machine, which is often a personal machine with only the
>>> "DBA" having access to it.
>>>
>>> The better and more widely implemented default is requiring a password
>>> for host while accepting peer for local.
>>>
>>> All external tools should be told what credentials to use to connect to
>>> the server and those credentials added to the system and a more restrictive
>>> pg_hba.conf entry added to permit those connections.  All trust connections
>>> in pg_hba.conf should be removed from it as quickly as possible.
>>>
>>> David J.
>>>
>>>
>>>


Re: purpose of an entry in pg_hba.conf file

2023-10-26 Thread Christophe Pettus



> On Oct 26, 2023, at 11:44, Atul Kumar  wrote:
> There is already one line to serve your stated purpose
> local   all             all                                     trust
> 
> 
> That's why I specifically raised this question for below from postgresql 
> experts
> host    all             all             127.0.0.1/32            trust

The first line applies to local sockets; the second to connections over the 
local loopback network port.
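
In psql terms (a sketch; the database and user names are placeholders):

psql -U someuser somedb                # Unix-domain socket: matched by the "local" line
psql -h 127.0.0.1 -U someuser somedb   # TCP over loopback: matched by the "host ... 127.0.0.1/32" line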



Re: purpose of an entry in pg_hba.conf file

2023-10-26 Thread Atul Kumar
Please share the relevant link with this information in detail; it would
be very helpful to me.



Regards.

On Fri, Oct 27, 2023 at 12:16 AM Christophe Pettus  wrote:

>
>
> > On Oct 26, 2023, at 11:44, Atul Kumar  wrote:
> > There is already one line to serve your stated purpose
> > local   all             all                                     trust
> >
> >
> > That's why I specifically raised this question for below from postgresql
> experts
> > host    all             all             127.0.0.1/32            trust
>
> The first line applies to local sockets; the second to connections over
> the local loopback network port.


Re: purpose of an entry in pg_hba.conf file

2023-10-26 Thread Christophe Pettus



> On Oct 26, 2023, at 11:53, Atul Kumar  wrote:
> 
> Please share the required link having such information in detail, It would be 
> more helpful to me.

https://www.postgresql.org/docs/current/auth-pg-hba-conf.html






Re: Disk wait problem...

2023-10-26 Thread pf
Hi Peter,

All of the following is based on using SQL_workbench/J (WB)  (Mageia Linux
has not provided a viable pgAdmin4 package); WB is set up to autoload the
table row count and 10 rows. I'm sticking to one set of files where they
are all roughly 33.6M rows.

I've been doing a lot of digging and found more interesting info...

At first, I thought the disk was in disk wait the entire time of the
delay; this _appeared_ to be the case since the system journal logged
everything about each query, with every query looking the same; no errors.

I was going to provide some logs; but suffice it to say that the logs,
combined with running iostat every second, have been enlightening...

Every time I switch from one table to another, sometimes the results
display in:
* less than a half-second
* one or more seconds
* up to 5+ minutes.
So far, none has failed to eventually return; just l..o..n..g delays...

This morning, I've been using iostat:
$ while true; do echo -n "$(date +%H:%M:%S)  "; iostat /dev/nvme1n1 |
grep nvme1n1; sleep 1; done

When the response is fast, I see iostat report ~225000 bytes read. On the
other hand, when the query takes a long time to respond, I see the read
bytes spread over time...  a little every second...  So while htop
appears to report a steady D (disk wait) status, I now suspect the short
transfers are fast and most of the time, htop is going to catch the D
state.

It definitely appears to be a problem with the 4TB NVMe SSD drive, which
utilities say still has 95% of its life left [1].  The drive is reporting:
Error Information Log Entries:  1,177
which is growing, so I'll be digging into that.  It also reports:

Error Information (NVMe Log 0x01, 16 of 63 entries)
Num   ErrCount  SQId   CmdId  Status  PELoc  LBA  NSID  VS  Message
  0       1177     0  0xc002  0x4004      -    0     1   -  Invalid Field in Command
  1       1176     0  0xb001  0x4004      -    0     1   -  Invalid Field in Command
  2       1175     0  0x801a  0x4004      -    0     1   -  Invalid Field in Command
...
This list is not growing, so I'll look into it later rather than sooner.

A few moments ago, I decided to switch to the backup which resides on a
17TB mechanical platter drive which is running great.

[1] SSD drives wear out with usage; think of it as a vehicle tire which
slowly wears out.

Also found that there is a firmware update available from:
https://www.seagate.com/support/downloads/

No idea if this will solve the problem; but...

Thanks for your suggestions.

Thought I saw there may be instrumentation within PG which I can access;
suggestions welcome if so...
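
One thing I plan to try, if track_io_timing is what I was remembering (a
sketch; the table is one of mine from above, and the setting may need
superuser or a postgresql.conf change):

SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM ncvhis_2016_12_03;  -- should now report per-node I/O read timings
SELECT blk_read_time, blk_write_time
  FROM pg_stat_database WHERE datname = current_database();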

Cheers,
Pierre


On Thu, 26 Oct 2023 11:50:00 +0200 Peter J. Holzer wrote:

>On 2023-10-23 16:31:30 -0700, Adrian Klaver wrote:
>> On 10/23/23 14:55, p...@pfortin.com wrote:  
>> > Next, I made a query list for 30 tables, turned on timing and pasted the
>> > queries.  Pasting results inline:
>> > ncsbe=# select count(*) from ncvhis_2012_12_06;
>> > select count(*) from ncvhis_2016_12_03;
>> > select count(*) from ncvhis_2020_08_01;
>> > select count(*) from ncvhis_2020_10_31;  
>[...]
>> > Time: 72404.786 ms (01:12.405)
>> > Time: 301.246 ms
>> > Time: 409.974 ms
>> > Time: 363.836 ms
>> > Time: 351.167 ms
>> > Time: 348.378 ms
>> > Time: 348.712 ms
>> > Time: 351.901 ms
>> > Time: 487.837 ms
>> > Time: 40987.826 ms (00:40.988)
>> > Time: 76964.281 ms (01:16.964)
>> > Time: 483.329 ms
>> > Time: 18919.267 ms (00:18.919)
>> > Time: 50896.978 ms (00:50.897)
>> > Time: 25784.000 ms (00:25.784)
>> > Time: 45407.599 ms (00:45.408)
>> > Time: 75171.609 ms (01:15.172)
>> > Time: 88871.004 ms (01:28.871)
>> > Time: 128565.127 ms (02:08.565)
>> > Time: 210248.222 ms (03:30.248)
>> > Time: 246609.561 ms (04:06.610)   4 minutes!  ;p
>> > Time: 166406.283 ms (02:46.406)
>> > Time: 291483.538 ms (04:51.484)  nearly 5 minutes ;p ;p
>> > Time: 2269.961 ms (00:02.270)
>> > Time: 413.219 ms
>> > Time: 433.817 ms
>> > Time: 395.844 ms
>> > Time: 7251.637 ms (00:07.252)
>> > Time: 384.328 ms
>> > Time: 384.887 ms
>> > 
>> > I don't see a pattern in the above; matches the randomness I saw using
>> > WB...  
>
>It would be interesting to see whether there's a difference in plan
>between the slow and the fast queries.
>
>If your experiment above is repeatable, you can just prefix every
>query with 
>explain (analyze, buffers)
>
>
>explain (analyze, buffers) select count(*) from ncvhis_2016_12_03;
>explain (analyze, buffers) select count(*) from ncvhis_2020_08_01;
>explain (analyze, buffers) select count(*) from ncvhis_2020_10_31;
>...
>
>If it is hard to reproduce (happens only sometimes), the best way to
>catch is probably to enable auto_explain
>, possibly
>with the auto_explain.log_analyze option and check the logs after it
>happened. (auto_explain is generally a good idea, IMHO).
>
>
>> > Before the above finished, I issued this command on another konsole...
>> > 
>

Re: Question regarding the new SQL standard

2023-10-26 Thread Bruce Momjian
On Thu, Oct 26, 2023 at 11:46:18AM -0400, Raymond Brinzer wrote:
> I'd like to express my appreciation for a project with a feature set so
> extensive that a person might mistakenly assume that its goal was to cover the
> spec comprehensively.

Okay, I couldn't stop smiling from this comment.  :-)

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  Only you can decide what is important to you.




Re: Disk wait problem... may not be hardware...

2023-10-26 Thread pf
On Thu, 26 Oct 2023 15:50:16 -0400 p...@pfortin.com wrote:

>Hi Peter,
>
>All of the following is based on using SQL_workbench/J (WB)  (Mageia Linux
>has not provided a viable pgAdmin4 package); WB is setup to autoload the
>table row count and 10 rows. I'm sticking to one set of files where they
>are all roughly 33.6M rows.
>
>I've been doing a lot of digging and found more interesting info...
>
>At first, I thought the disk was in disk wait the entire time of the
>delay; this _appeared_ to be the case since the system journal logged
>everything about each query, with every query looking the same; no errors.
>
>I was going to provide some logs; but suffice it to say that the logs,
>combined with running iostat every second has been enlightening...
>
>Every time I switch from one table to another, sometimes the results
>display in:
>* less than a half-second
>* one or more seconds
>* up to 5+ minutes.
>So far, none has failed to eventually return; just l..o..n..g delays...
>
>This morning, I've been using iostat:
>$ while true; do echo -n "$(date +%H:%M:%S)  "; iostat /dev/nvme1n1 |
>grep nvme1n1; sleep 1; done
>
>When the response is fast; I see iostat report ~225000 bytes read. On the
>other hand when the query takes a long time to respond, I see the read
>bytes spread over time...  a little every second...  So while htop
>appears to report a steady D (disk wait) status, I now suspect the short
>transfers are fast and most of the time, htop is going to catch the D
>state.
>
>It definitely appears to be a problem with the 4TB NVMe SSD drive which
>utilities say it still has 95% life left[1].  The drive is reporting:
>Error Information Log Entries:  1,177
>which is growing, so I'll be digging into that.  It also reports:
>
>Error Information (NVMe Log 0x01, 16 of 63 entries)
>Num   ErrCount  SQId   CmdId  Status  PELoc  LBA  NSIDVS  Message
>  0   1177 0  0xc002  0x4004  -0 1 -  Invalid 
> Field in Command
>  1   1176 0  0xb001  0x4004  -0 1 -  Invalid 
> Field in Command
>  2   1175 0  0x801a  0x4004  -0 1 -  Invalid 
> Field in Command
>...
>This list is not growing, so I'll look into it later than sooner.
>
>A few moments ago, I decided to switch to the backup which resides on a
>17TB mechanical platter drive which is running great.
>
>[1] SSD drives wear out with usage; think of it as a vehicle tire which
>slowly wears out.
>
>Also found that there is a firmware update available from:
>https://www.seagate.com/support/downloads/
>
>No idea if this will solve the problem; but...
>
>Thanks for your suggestions.
>
>Thought I saw there may be instrumentation within PG which I can access;
>suggestions welcome if so...
>
>Cheers,
>Pierre
>
>
>On Thu, 26 Oct 2023 11:50:00 +0200 Peter J. Holzer wrote:
>
>>On 2023-10-23 16:31:30 -0700, Adrian Klaver wrote:  
>>> On 10/23/23 14:55, p...@pfortin.com wrote:
>>> > Next, I made a query list for 30 tables, turned on timing and pasted the
>>> > queries.  Pasting results inline:
>>> > ncsbe=# select count(*) from ncvhis_2012_12_06;
>>> > select count(*) from ncvhis_2016_12_03;
>>> > select count(*) from ncvhis_2020_08_01;
>>> > select count(*) from ncvhis_2020_10_31;
>>[...]  
>>> > Time: 72404.786 ms (01:12.405)
>>> > Time: 301.246 ms
>>> > Time: 409.974 ms
>>> > Time: 363.836 ms
>>> > Time: 351.167 ms
>>> > Time: 348.378 ms
>>> > Time: 348.712 ms
>>> > Time: 351.901 ms
>>> > Time: 487.837 ms
>>> > Time: 40987.826 ms (00:40.988)
>>> > Time: 76964.281 ms (01:16.964)
>>> > Time: 483.329 ms
>>> > Time: 18919.267 ms (00:18.919)
>>> > Time: 50896.978 ms (00:50.897)
>>> > Time: 25784.000 ms (00:25.784)
>>> > Time: 45407.599 ms (00:45.408)
>>> > Time: 75171.609 ms (01:15.172)
>>> > Time: 88871.004 ms (01:28.871)
>>> > Time: 128565.127 ms (02:08.565)
>>> > Time: 210248.222 ms (03:30.248)
>>> > Time: 246609.561 ms (04:06.610)   4 minutes!  ;p
>>> > Time: 166406.283 ms (02:46.406)
>>> > Time: 291483.538 ms (04:51.484)  nearly 5 minutes ;p ;p
>>> > Time: 2269.961 ms (00:02.270)
>>> > Time: 413.219 ms
>>> > Time: 433.817 ms
>>> > Time: 395.844 ms
>>> > Time: 7251.637 ms (00:07.252)
>>> > Time: 384.328 ms
>>> > Time: 384.887 ms
>>> > 
>>> > I don't see a pattern in the above; matches the randomness I saw using
>>> > WB...
>>
>>It would be interesting to see whether there's a difference in plan
>>between the slow and the fast queries.
>>
>>If your experiment above is repeatable, you can just prefix every
>>query with 
>>explain (analyze, buffers)
>>
>>
>>explain (analyze, buffers) select count(*) from ncvhis_2016_12_03;
>>explain (analyze, buffers) select count(*) from ncvhis_2020_08_01;
>>explain (analyze, buffers) select count(*) from ncvhis_2020_10_31;
>>...
>>
>>If it is hard to reproduce (happens only sometimes), the best way to
>>catch is probably to enable auto_explain
>>, possibly
>>with the 

Re: Disk wait problem... may not be hardware...

2023-10-26 Thread Justin Clift

On 2023-10-27 12:03, p...@pfortin.com wrote:


I can't think of a common hardware bus/other that would only affect
PostgreSQL disk accesses.


Which file system is PostgreSQL being run on?

Asking because I remember seeing weirdness reported with *some* SSD
drives when used with ZFS:

  https://github.com/openzfs/zfs/discussions/14793

Note - that's not PostgreSQL specific or anything, but more of a
"weird stuff showing up with NVMe drives" thing.

Regards and best wishes,

Justin Clift