Re: Re: Different execution plan between PostgreSQL 8.2 and 12.5

2022-08-18 Thread gzh
Dear Adrian,

I appreciate your reply. Your reply gave me a new idea:

the problem is probably not that the lower() function keeps the unique index
from being used.

I checked the postgresql.conf file and found that shared_buffers, work_mem and
maintenance_work_mem were set to their default values,

but in the postgresql.conf file of PostgreSQL 8.4, the values of these
parameters are very large.

When I changed these parameters to larger values, the problem was solved.
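
For reference, a minimal sketch of how such settings can be inspected and
raised (the values below are placeholders, not the ones from the actual
postgresql.conf):

-- check the current values
SHOW shared_buffers;
SHOW work_mem;
SHOW maintenance_work_mem;

-- raise them (illustrative values only); ALTER SYSTEM writes
-- postgresql.auto.conf, and shared_buffers requires a server restart
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET work_mem = '64MB';
ALTER SYSTEM SET maintenance_work_mem = '512MB';
SELECT pg_reload_conf();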

Re: ERROR: catalog is missing 3 attribute(s) for relid 150243

2022-08-18 Thread Tom Lane
milist ujang  writes:
> On Fri, Aug 19, 2022 at 11:02 AM Julien Rouhaud  wrote:
>> What it means is that you likely have data corruption.  You could try
>> to reindex pg_attribute and see if that fixes that specific problem,
>> but even if it does you will have to investigate how data got
>> corrupted, fix that root problem, and then try to check for other
>> corrupted data or restore from a sane backup.

> yes, we have a problem with the VM; it suddenly restarted without reason

Ugh.

> I've done reindexing pg_class, pg_attribute, pg_constraint without luck.

That was your only chance of an easy way out :-(.  At this point you
clearly have data corruption in one or more system catalogs, and there's
no particular reason to think that the damage is only in the catalogs
and not also in your user-data tables.

If the data is worth a substantial amount of money to you, I'd recommend
hiring a professional Postgres support company with experience in data
recovery.  You can find some links here:
https://www.postgresql.org/support/professional_support/

Otherwise, restore from your latest backup, and resolve to get better
at keeping backups, and ask some hard questions about the reliability
of the storage stack you're sitting on.

regards, tom lane




Re: ERROR: catalog is missing 3 attribute(s) for relid 150243

2022-08-18 Thread milist ujang
On Fri, Aug 19, 2022 at 11:02 AM Julien Rouhaud  wrote:

> Hi,
>
> Please don't top-post on this list (and please trim quoted messages too).
>
>
>
> No, this function is comparing pg_class.relnatts to the rows fetched when
> querying pg_attribute for the given relation.
>
> any concern if I update pg_class.relnatts and/or pg_attribute to make them match?



> What it means is that you likely have data corruption.  You could try to
> reindex pg_attribute and see if that fixes that specific problem, but
> even if
> it does you will have to investigate how data got corrupted, fix that root
> problem, and then try to check for other corrupted data or restore from a
> sane
> backup.
>

yes, we have a problem with the VM; it suddenly restarted without reason

I've done recovery by creating empty files under pg_xact so the cluster could
be brought up, then a file under pg_multixact due to an error on a table.

I've done reindexing pg_class, pg_attribute, pg_constraint without luck.


-- 
regards

ujang jaenudin | DBA Consultant (Freelancer)
http://ora62.wordpress.com
http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab



Re: ERROR: catalog is missing 3 attribute(s) for relid 150243

2022-08-18 Thread Julien Rouhaud
Hi,

Please don't top-post on this list (and please trim quoted messages too).

On Fri, Aug 19, 2022 at 05:55:03AM +0700, milist ujang wrote:
> On Thu, Aug 18, 2022 at 5:33 PM milist ujang  wrote:
> >
> > 3rd query ERROR same as subject (ERROR:  catalog is missing 3 attribute(s)
> > for relid 150243)
>
> my version is 12.9 on x86_64.
>
> reading the source code, I got the message from the RelationBuildTupleDesc() function:
>
> /*
>  * end the scan and close the attribute relation
>  */
> systable_endscan(pg_attribute_scan);
> table_close(pg_attribute_desc, AccessShareLock);
>
> if (need != 0)
>     elog(ERROR, "pg_attribute catalog is missing %d attribute(s) for relation OID %u",
>          need, RelationGetRelid(relation));
>
> I'm not sure: does this function compare the mentioned relation's
> attributes in pg_attribute with the physical table (header) file?

No, this function is comparing pg_class.relnatts to the rows fetched when
querying pg_attribute for the given relation.
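
A quick way to see that mismatch for the relation in question (a diagnostic
sketch, not something posted in the thread; 150243 is the relid from the
error message):

-- compare the declared column count with what pg_attribute actually holds
SELECT c.relnatts,
       (SELECT count(*)
          FROM pg_catalog.pg_attribute a
         WHERE a.attrelid = c.oid
           AND a.attnum > 0) AS attrs_in_catalog
  FROM pg_catalog.pg_class c
 WHERE c.oid = 150243;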

What it means is that you likely have data corruption.  You could try to
reindex pg_attribute and see if that fixes that specific problem, but even if
it does you will have to investigate how data got corrupted, fix that root
problem, and then try to check for other corrupted data or restore from a sane
backup.
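
If reindexing does fix it, one way to probe for further index-level corruption
is the contrib amcheck module (an illustrative use — it is not suggested in
the thread, and it only validates btree indexes):

CREATE EXTENSION IF NOT EXISTS amcheck;
-- check the pg_attribute index that catalog scans rely on; this raises an
-- error if the index structure is inconsistent
SELECT bt_index_check('pg_catalog.pg_attribute_relid_attnum_index'::regclass);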




Re: Unable to Create or Drop Index Concurrently

2022-08-18 Thread Abdul Qoyyuum
Apparently, just leaving it alone until the next day let the index
creation/drop finish. Thank you all very much.

On Thu, Aug 18, 2022 at 5:00 PM hubert depesz lubaczewski 
wrote:

> On Thu, Aug 18, 2022 at 01:57:48PM +0800, Abdul Qoyyuum wrote:
> > Hi list,
> >
> > We have a running Master-Slave High Availability set up. Naturally, we
> > can't run any changes on read-only databases on slave, so we have to do
> it
> > on the master node.
> >
> > When trying to run the following command:
> >
> > create index concurrently idx_cash_deposit_channel_id_batch_id on
> > cash_deposit (channel_id, batch_id);
> >
> >
> > After waiting for a long time, my connection dropped. When checking the
> > table, we get the index as INVALID
> >
> > Indexes:
> > "pk_cash_deposit" PRIMARY KEY, btree (id)
> > "idx_cash_deposit_channel_id_batch_id" btree (channel_id, batch_id)
> > INVALID
> >
> > Dropping the invalid index also takes a long time; my connection
> > timed out, and when logging back in and checking the table, it hadn't
> > been dropped.
>
> This means that you have some very long transactions.
>
> To make/drop an index concurrently, all transactions that started
> before you began the create/drop have to finish.
>
> You can see your oldest transactions by doing:
>
> select * from pg_stat_activity where xact_start is not null order by
> xact_start
>
> Best regards,
>
> depesz
>
>

-- 
Abdul Qoyyuum Bin Haji Abdul Kadir
HP No: +673 720 8043


Re: Is it possible to keep indexes on different disk location?

2022-08-18 Thread Ron

On 8/18/22 14:54, W.P. wrote:
[snip]

But you are right, I am taking backups irregularly.


Use a cron job.


--
Angular momentum makes the world go 'round.




Re: ERROR: catalog is missing 3 attribute(s) for relid 150243

2022-08-18 Thread milist ujang
my version is 12.9 on x86_64.

reading the source code, I got the message from the RelationBuildTupleDesc() function:

/*
 * end the scan and close the attribute relation
 */
systable_endscan(pg_attribute_scan);
table_close(pg_attribute_desc, AccessShareLock);

if (need != 0)
    elog(ERROR, "pg_attribute catalog is missing %d attribute(s) for relation OID %u",
         need, RelationGetRelid(relation));

I'm not sure: does this function compare the mentioned relation's
attributes in pg_attribute with the physical table (header) file?



On Thu, Aug 18, 2022 at 5:33 PM milist ujang  wrote:

> Hi lists,
>
> I have an index anomaly on a table; getting the error as subject.
>
> dumping the queries behind \dS+ of a table, I got 4 queries (see details below)
> 1st query --> OK
> 2nd query --> OK
> 3rd query ERROR same as subject (ERROR:  catalog is missing 3 attribute(s)
> for relid 150243)
>
> if I comment out the pg_catalog.pg_get_indexdef line in the 3rd query, it
> looks good, so my assumption is that we have an issue when calling the
> pg_catalog.pg_get_indexdef function.
>
> select pg_catalog.pg_get_indexdef ( 150243, 0, TRUE );
> returns blank.
>
> any ideas?
>
>
> 1st query:
> 
> SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules,
> c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, false AS
> relhasoids, c.relispartition, pg_catalog.array_to_string(c.reloptions ||
> array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')
> , c.reltablespace, CASE WHEN c.reloftype = 0 THEN '' ELSE
> c.reloftype::pg_catalog.regtype::pg_catalog.text END, c.relpersistence,
> c.relreplident, am.amname
> FROM pg_catalog.pg_class c
>  LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)
> LEFT JOIN pg_catalog.pg_am am ON (c.relam = am.oid)
> WHERE c.oid = '150243';
>
> 2nd query:
> 
> SELECT a.attname,
>   pg_catalog.format_type(a.atttypid, a.atttypmod),
>   (SELECT pg_catalog.pg_get_expr(d.adbin, d.adrelid, true)
>FROM pg_catalog.pg_attrdef d
>WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND
> a.atthasdef),
>   a.attnotnull,
>   (SELECT c.collname FROM pg_catalog.pg_collation c,
> pg_catalog.pg_type t
>WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND
> a.attcollation <> t.typcollation) AS attcollation,
>   a.attidentity,
>   a.attgenerated,
>   a.attstorage,
>   CASE WHEN a.attstattarget=-1 THEN NULL ELSE a.attstattarget END
> AS attstattarget,
>   pg_catalog.col_description(a.attrelid, a.attnum)
> FROM pg_catalog.pg_attribute a
> WHERE a.attrelid = '150243' AND a.attnum > 0 AND NOT a.attisdropped
> ORDER BY a.attnum;
>
> 3rd query:
> 
> SELECT
> c2.relname,
> i.indisprimary,
> i.indisunique,
> i.indisclustered,
> i.indisvalid,
> pg_catalog.pg_get_indexdef ( i.indexrelid, 0, TRUE ),
> pg_catalog.pg_get_constraintdef ( con.oid, TRUE ),
> contype,
> condeferrable,
> condeferred,
> i.indisreplident,
> c2.reltablespace
> FROM
> pg_catalog.pg_class C,
> pg_catalog.pg_class c2,
> pg_catalog.pg_index i
> LEFT JOIN pg_catalog.pg_constraint con ON ( conrelid = i.indrelid AND
> conindid = i.indexrelid AND contype IN ( 'p', 'u', 'x' ) )
> WHERE
> C.oid = '150243'
> AND C.oid = i.indrelid
> AND i.indexrelid = c2.oid
> ORDER BY
> i.indisprimary DESC,
> i.indisunique DESC,
> c2.relname;
>
> 4th query:
> 
> SELECT
> c2.relname,
> i.indisprimary,
> i.indisunique,
> i.indisclustered,
> i.indisvalid,
> pg_catalog.pg_get_indexdef ( i.indexrelid, 0, TRUE ),
> pg_catalog.pg_get_constraintdef ( con.oid, TRUE ),
> contype,
> condeferrable,
> condeferred,
> i.indisreplident,
> c2.reltablespace
> FROM
> pg_catalog.pg_class C,
> pg_catalog.pg_class c2,
> pg_catalog.pg_index i
> LEFT JOIN pg_catalog.pg_constraint con ON ( conrelid = i.indrelid AND
> conindid = i.indexrelid AND contype IN ( 'p', 'u', 'x' ) )
> WHERE
> C.oid = '150243'
> AND C.oid = i.indrelid
> AND i.indexrelid = c2.oid
> ORDER BY
> i.indisprimary DESC,
> i.indisunique DESC,
> c2.relname;
>
>
> --
> regards
>
> ujang jaenudin | DBA Consultant (Freelancer)
> http://ora62.wordpress.com
> http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab
>


-- 
regards

ujang jaenudin | DBA Consultant (Freelancer)
http://ora62.wordpress.com
http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab


SIReadLock vs hot_standby_feedback

2022-08-18 Thread Christophe Pettus
I am reasonably sure the answer to this is "no", but can the oldest xmin that 
hot_standby_feedback sends to the primary also delay SIReadLock cleanup?  
Going through the code, it looks like they're independent, but this isn't a 
part of the system I know particularly well.



Re: Is it possible to keep indexes on different disk location?

2022-08-18 Thread W.P.

On 18.08.2022 at 16:46, Laurenz Albe wrote:

On Thu, 2022-08-18 at 08:39 +0200, W.P. wrote:

I have a PostgreSQL 11 server on OrangePi3 (ARM 64 bit, Armbian, PG from
distro).

Database (the whole cluster) is located on a USB disk. This approach has
already cost me the DB contents twice (it is a replica of a DB on an i7).

But the whole thing (mainly indexes) is about 50 GB, and the internal storage
is only 32 GB.

Is it possible to move the DB tables etc. to this internal storage (reliable
connection) and put only the indexes on the USB HDD?

And would it help in case of losing the connection to the USB disk (DB
recoverable instead of a total crash)?

I'd say that that is a bad idea.  It would not be easy to recover from
losing a tablespace, even if it contains only indexes.
So do you see any (cheaper) solution for using a 40 GB DB with only 32 GB
(max) of storage I can trust? (uSD card)

Get a real computer.  Take backups regularly.


The main DB is on a "real" i7 with 8 GB RAM / 500 GB SSD.

But you are right, I am taking backups irregularly. It is not critical, 
but losing the "main" would not be nice...


W.P.


Yours,
Laurenz Albe







Re: Fwd: Data caching

2022-08-18 Thread Adrian Klaver

On 8/18/22 09:39, Anant ngo wrote:

Hello

Is there a postgres extension or project related to 
application-level/foreign-table data caching? The postgres_fdw 
extension fetches data from the foreign table for each command.


I have seen previous messages in the archive about caching in the form of 
global temp tables, query cache etc. There are good discussions about 
whether support should be built-in, but I did not find any implementation.


Cursors?

https://www.postgresql.org/docs/current/sql-declare.html

"A cursor created with WITH HOLD is closed when an explicit CLOSE 
command is issued on it, or the session ends. In the current 
implementation, the rows represented by a held cursor are copied into a 
temporary file or memory area so that they remain available for 
subsequent transactions."
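
A minimal sketch of that approach (the foreign table name is hypothetical;
with postgres_fdw the remote rows are fetched once, when the held cursor is
materialized at COMMIT):

BEGIN;
DECLARE cached_rows CURSOR WITH HOLD FOR
    SELECT * FROM remote_orders;   -- hypothetical postgres_fdw foreign table
COMMIT;  -- the cursor's rows are now copied to local storage

FETCH 100 FROM cached_rows;        -- later transactions read the local copy
CLOSE cached_rows;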




I have seen the 44 postgres extensions that come pre-installed with 
Ubuntu 16.04, but none of them do this.


Thanks.
Anant.



--
Adrian Klaver
adrian.kla...@aklaver.com




Fwd: Data caching

2022-08-18 Thread Anant ngo
Hello

Is there a postgres extension or project related to
application-level/foreign-table data caching? The postgres_fdw extension
fetches data from the foreign table for each command.

I have seen previous messages in the archive about caching in the form of
global temp tables, query cache etc. There are good discussions about
whether support should be built-in, but I did not find any implementation.

I have seen the 44 postgres extensions that come pre-installed with Ubuntu
16.04, but none of them do this.

Thanks.
Anant.


Re: Different execution plan between PostgreSQL 8.2 and 12.5

2022-08-18 Thread Adrian Klaver

On 8/18/22 02:50, gzh wrote:

Dear Tom,


Thanks for your reply.
Please refer to the information below:




I ran the following SQL in PostgreSQL 8.2 and PostgreSQL 12.5, and it returns 
different execution plans.


There are ~13 years of improvements to the planner and the database as a 
whole; I would be more surprised if the execution plans were the same.





The execution plan shows that PostgreSQL 12.5 takes less time,

but the data cannot be displayed, and the SQL has been stuck in a suspended state.


Per:

https://www.postgresql.org/docs/current/sql-explain.html

"This command displays the execution plan that the PostgreSQL planner 
generates for the supplied statement. The execution plan shows how the 
table(s) referenced by the statement will be scanned — by plain 
sequential scan, index scan, etc. — and if multiple tables are 
referenced, what join algorithms will be used to bring together the 
required rows from each input table."


...

"Keep in mind that the statement is actually executed when the ANALYZE 
option is used. Although EXPLAIN will discard any output that a SELECT 
would return, ..."
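
A generic safeguard when running EXPLAIN ANALYZE on a statement that modifies
data (a sketch, not from the quoted docs; the table and column are
hypothetical):

BEGIN;
EXPLAIN ANALYZE UPDATE some_table SET flag = true;  -- the UPDATE really executes
ROLLBACK;  -- discard the changes, keep the plan and timing output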




When I change the select clause to the following (crew_base.crewid → 
count(*)), I can retrieve the number of data rows.


The crew_base table contains 1,485,255 rows.

The data type of the crew_base.crewid field is text.

The crew_base.crewid field has a unique index: CREATE UNIQUE INDEX 
crew_base_crewid_index ON public.crew_base USING btree (crewid)



select

   count(*)

from crew_base

      left join crew_base as crew_base_introduced on 
crew_base.introduced_by=crew_base_introduced.crewid


where crew_base.status = '1';











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Is it possible to keep indexes on different disk location?

2022-08-18 Thread Laurenz Albe
On Thu, 2022-08-18 at 08:39 +0200, W.P. wrote:
> I have a PostgreSQL 11 server on OrangePi3 (ARM 64 bit, Armbian, PG from 
> distro).
> 
> Database (the whole cluster) is located on a USB disk. This approach has
> already cost me the DB contents twice (it is a replica of a DB on an i7).
> 
> But the whole thing (mainly indexes) is about 50 GB, and the internal storage
> is only 32 GB.
> 
> Is it possible to move the DB tables etc. to this internal storage (reliable
> connection) and put only the indexes on the USB HDD?
> 
> And would it help in case of losing the connection to the USB disk (DB
> recoverable instead of a total crash)?

I'd say that that is a bad idea.  It would not be easy to recover from
losing a tablespace, even if it contains only indexes.

Get a real computer.  Take backups regularly.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




Re: [(catch-ext)] Re: [(catch-ext)] Re: Setting up streaming replication on large database (20+ TB) for the first time

2022-08-18 Thread Ivan N. Ivanov
Yes, pgbackrest seems the best solution for faster backup and restore.

We can close the discussion here for now; replaying a million WAL files is
just slow, and this is normal.

On Thu, Aug 18, 2022 at 12:10 PM Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

> Sorry for top posting, from phone.
>
> But pgbackrest helped with exactly that. With compression and parallel
> processes in the backup, the backup and restore were quick. I used this,
> taking a backup and immediately doing a restore so there were fewer WALs to
> replay; otherwise WAL replay is indeed slow.
>
> On Thu, Aug 18, 2022, 1:03 PM Ivan N. Ivanov 
> wrote:
>
>> Thank you, people. The big problem in my case, which I have not
>> mentioned, is that I think the network is a bottleneck, because I am
>> running pg_basebackup over the internet from my local country to an Amazon
>> instance in Germany, and the copying speed is around 50 MB/sec max; that
>> is why copying takes 2 days.
>>
>> I will try using high compression for the basebackup to reduce the time.
>>
>> pgbackrest is an alternative, too
>>
>> Thank you again!
>>
>>
>>
>> On Wed, Aug 17, 2022 at 11:13 PM Ivan N. Ivanov 
>> wrote:
>>
>>> Thank you for your answer! I have found this tool and I will try it
>>> tomorrow to see if this "read-ahead" feature will speed up the process.
>>>
>>> On Wed, Aug 17, 2022 at 11:09 PM Christophe Pettus 
>>> wrote:
>>>


 > On Aug 17, 2022, at 13:06, Ivan N. Ivanov 
 wrote:
 >
 > How to speed up recovering of WAL files?

 Since you are running on your own hardware, you might take a look at:

 https://github.com/TritonDataCenter/pg_prefaulter

>>>


Re: Re: Different execution plan between PostgreSQL 8.2 and 12.5

2022-08-18 Thread Torsten Krah
But you should do that according to the documentation:

...
After restoring a backup, it is wise to run ANALYZE on each database so the
query optimizer has useful statistics.
...
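
That is, after the restore, something like the following in each database
(vacuumdb --all --analyze-only does the equivalent from the shell):

ANALYZE VERBOSE;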


Re: Re: Different execution plan between PostgreSQL 8.2 and 12.5

2022-08-18 Thread gzh



Dear Adrian,


Thanks for your reply.


>Did you run ANALYZE on the 12.5 server after restoring the data to it?
No, I did not run ANALYZE on the 12.5 server after restoring the data.


When I change the select clause like this ( crew_base.crewid → count(*) ), I 
can retrieve the number of data rows.
Please refer to my previous reply for more information.











At 2022-08-18 12:18:31, "Adrian Klaver"  wrote:
>On 8/17/22 20:01, gzh wrote:
>> Hi,
>> 
>> 
>> I have had a Perl Website working for 7 years and have had no problems
>> 
>> until a few weeks ago, when I replaced my database server with a newer one.
>
>Did you run ANALYZE on the 12.5 server after restoring the data to it?
>
>
>> 
>> gzh
>> 
>> 
>
>
>-- 
>Adrian Klaver
>adrian.kla...@aklaver.com


Re: Re: Different execution plan between PostgreSQL 8.2 and 12.5

2022-08-18 Thread gzh
Dear David,


Thanks for your reply.
>In addition to that, I couldn't help notice that the quoted SQL does
>not seem to belong to the explain.  The EXPLAIN has a Limit node, but
>the query does not. I'm assuming this isn't due to the relations being
>views since we don't pull up subqueries with a LIMIT.
The LIMIT node is there because I connect to PostgreSQL with A5M2; the tool
adds a LIMIT when outputting the execution plan, so please ignore it.



















At 2022-08-18 11:38:58, "David Rowley"  wrote:
>On Thu, 18 Aug 2022 at 15:32, Tom Lane  wrote:
>> The 12.5 plan looks like it thinks that the join condition is not
>> hashable --- and probably not mergeable as well, else it would have
>> done a mergejoin.  This is odd if we assume that the lower()
>> outputs are just text.  But you haven't said anything about the
>> data types involved, nor what locale setting you're using, nor
>> what nondefault settings or extensions you might be using, so
>> speculation about the cause would just be speculation.
>
>In addition to that, I couldn't help notice that the quoted SQL does
>not seem to belong to the explain.  The EXPLAIN has a Limit node, but
>the query does not. I'm assuming this isn't due to the relations being
>views since we don't pull up subqueries with a LIMIT.
>
>The costs for the 12.5 are cheaper than 8.4's, so I imagine the more
>likely cause is the planner favouring an early startup plan.
>
>It's probably more likely that lower() is providing the planner with
>bad estimates and there's likely far less than the expected rows,
>resulting in the LIMIT 1 being a much larger proportion of the
>total rows than the planner expects.
>
>David


ERROR: catalog is missing 3 attribute(s) for relid 150243

2022-08-18 Thread milist ujang
Hi lists,

I have an index anomaly on a table; getting the error as subject.

dumping the queries behind \dS+ of a table, I got 4 queries (see details below)
1st query --> OK
2nd query --> OK
3rd query ERROR same as subject (ERROR:  catalog is missing 3 attribute(s)
for relid 150243)

if I comment out the pg_catalog.pg_get_indexdef line in the 3rd query, it
looks good, so my assumption is that we have an issue when calling the
pg_catalog.pg_get_indexdef function.

select pg_catalog.pg_get_indexdef ( 150243, 0, TRUE );
returns blank.

any ideas?


1st query:

SELECT c.relchecks, c.relkind, c.relhasindex, c.relhasrules,
c.relhastriggers, c.relrowsecurity, c.relforcerowsecurity, false AS
relhasoids, c.relispartition, pg_catalog.array_to_string(c.reloptions ||
array(select 'toast.' || x from pg_catalog.unnest(tc.reloptions) x), ', ')
, c.reltablespace, CASE WHEN c.reloftype = 0 THEN '' ELSE
c.reloftype::pg_catalog.regtype::pg_catalog.text END, c.relpersistence,
c.relreplident, am.amname
FROM pg_catalog.pg_class c
 LEFT JOIN pg_catalog.pg_class tc ON (c.reltoastrelid = tc.oid)
LEFT JOIN pg_catalog.pg_am am ON (c.relam = am.oid)
WHERE c.oid = '150243';

2nd query:

SELECT a.attname,
  pg_catalog.format_type(a.atttypid, a.atttypmod),
  (SELECT pg_catalog.pg_get_expr(d.adbin, d.adrelid, true)
   FROM pg_catalog.pg_attrdef d
   WHERE d.adrelid = a.attrelid AND d.adnum = a.attnum AND
a.atthasdef),
  a.attnotnull,
  (SELECT c.collname FROM pg_catalog.pg_collation c,
pg_catalog.pg_type t
   WHERE c.oid = a.attcollation AND t.oid = a.atttypid AND
a.attcollation <> t.typcollation) AS attcollation,
  a.attidentity,
  a.attgenerated,
  a.attstorage,
  CASE WHEN a.attstattarget=-1 THEN NULL ELSE a.attstattarget END
AS attstattarget,
  pg_catalog.col_description(a.attrelid, a.attnum)
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = '150243' AND a.attnum > 0 AND NOT a.attisdropped
ORDER BY a.attnum;

3rd query:

SELECT
c2.relname,
i.indisprimary,
i.indisunique,
i.indisclustered,
i.indisvalid,
pg_catalog.pg_get_indexdef ( i.indexrelid, 0, TRUE ),
pg_catalog.pg_get_constraintdef ( con.oid, TRUE ),
contype,
condeferrable,
condeferred,
i.indisreplident,
c2.reltablespace
FROM
pg_catalog.pg_class C,
pg_catalog.pg_class c2,
pg_catalog.pg_index i
LEFT JOIN pg_catalog.pg_constraint con ON ( conrelid = i.indrelid AND
conindid = i.indexrelid AND contype IN ( 'p', 'u', 'x' ) )
WHERE
C.oid = '150243'
AND C.oid = i.indrelid
AND i.indexrelid = c2.oid
ORDER BY
i.indisprimary DESC,
i.indisunique DESC,
c2.relname;

4th query:

SELECT
c2.relname,
i.indisprimary,
i.indisunique,
i.indisclustered,
i.indisvalid,
pg_catalog.pg_get_indexdef ( i.indexrelid, 0, TRUE ),
pg_catalog.pg_get_constraintdef ( con.oid, TRUE ),
contype,
condeferrable,
condeferred,
i.indisreplident,
c2.reltablespace
FROM
pg_catalog.pg_class C,
pg_catalog.pg_class c2,
pg_catalog.pg_index i
LEFT JOIN pg_catalog.pg_constraint con ON ( conrelid = i.indrelid AND
conindid = i.indexrelid AND contype IN ( 'p', 'u', 'x' ) )
WHERE
C.oid = '150243'
AND C.oid = i.indrelid
AND i.indexrelid = c2.oid
ORDER BY
i.indisprimary DESC,
i.indisunique DESC,
c2.relname;


-- 
regards

ujang jaenudin | DBA Consultant (Freelancer)
http://ora62.wordpress.com
http://id.linkedin.com/pub/ujang-jaenudin/12/64/bab


Re: Re: Different execution plan between PostgreSQL 8.2 and 12.5

2022-08-18 Thread gzh
Dear Tom,




Thanks for your reply.
Please refer to the information below:




PostgreSQL 8.2



[root@PostgreSQL8 ~]# cat /etc/redhat-release

CentOS release 6.6 (Final)

[root@PostgreSQL8 ~]# locale | grep LANG

LANG=ja_JP.UTF-8




PostgreSQL 12.5



[root@PostgreSQL12 ~]# cat /etc/redhat-release

Red Hat Enterprise Linux release 8.5 (Ootpa)

[root@PostgreSQL12 ~]# locale | grep LANG

LANG=ja_JP.utf8







I ran the following SQL in PostgreSQL 8.2 and PostgreSQL 12.5, and it returns
different execution plans.




--SQL




explain analyze select 

  crew_base.crewid

from crew_base

 left join crew_base as crew_base_introduced on 
crew_base.introduced_by=crew_base_introduced.crewid

where crew_base.status = '1';







--PostgreSQL 8.2

---

QUERY PLAN

Limit  (cost=0.00..229939.53 rows=7921 width=10) (actual time=2.137..4598.114 
rows=4489 loops=1)

  ->  Nested Loop Left Join  (cost=0.00..229939.53 rows=7921 width=10) (actual 
time=2.136..4597.484 rows=4489 loops=1)

->  Seq Scan on crew_base  (cost=0.00..165072.69 rows=7921 width=20) 
(actual time=2.131..4348.423 rows=4489 loops=1)

  Filter: (status = 1)

->  Index Scan using crew_base_crewid_index on crew_base 
crew_base_introduced  (cost=0.00..8.18 rows=1 width=10) (actual 
time=0.055..0.055 rows=0 loops=4489)

  Index Cond: (crew_base.introduced_by = 
crew_base_introduced.crewid)

Total runtime: 4599.985 ms







--PostgreSQL 12.5

---

QUERY PLAN

Limit  (cost=1000.00..119573.46 rows=4759 width=7) (actual time=0.996..633.557 
rows=4489 loops=1)

  ->  Gather  (cost=1000.00..119573.46 rows=4759 width=7) (actual 
time=0.994..633.043 rows=4489 loops=1)

Workers Planned: 2

Workers Launched: 2

->  Parallel Seq Scan on crew_base  (cost=0.00..118097.56 rows=1983 
width=7) (actual time=0.573..611.579 rows=1496 loops=3)

  Filter: (status = 1)

  Rows Removed by Filter: 493589

Planning Time: 15.966 ms

Execution Time: 634.035 ms




The execution plan shows that PostgreSQL 12.5 takes less time,

but the data cannot be displayed, and the SQL has been stuck in a suspended state.

When I change the select clause to the following (crew_base.crewid → 
count(*)), I can retrieve the number of data rows.

The crew_base table contains 1,485,255 rows.

The data type of the crew_base.crewid field is text.

The crew_base.crewid field has a unique index: CREATE UNIQUE INDEX 
crew_base_crewid_index ON public.crew_base USING btree (crewid)




select 

  count(*)

from crew_base

 left join crew_base as crew_base_introduced on 
crew_base.introduced_by=crew_base_introduced.crewid

where crew_base.status = '1';




















At 2022-08-18 11:32:22, "Tom Lane"  wrote:
>gzh  writes:
>> I ran the following SQL in PostgreSQL 8.2 and PostgreSQL 12.5, and it returns 
>> different execution plans.
>
>8.2 is ... well, not stone age maybe, but pretty durn ancient.
>You really ought to update a bit more often than that.  (And
>maybe pay more attention to staying up to date with minor releases?
>Whatever was your reasoning for choosing 12.5, when the latest 12.x
>release is 12.12?)
>
>The 12.5 plan looks like it thinks that the join condition is not
>hashable --- and probably not mergeable as well, else it would have
>done a mergejoin.  This is odd if we assume that the lower()
>outputs are just text.  But you haven't said anything about the
>data types involved, nor what locale setting you're using, nor
>what nondefault settings or extensions you might be using, so
>speculation about the cause would just be speculation.
>
>There is some advice here about how to ask this sort of
>question in a way that would obtain useful answers:
>
>https://wiki.postgresql.org/wiki/Slow_Query_Questions
>
>   regards, tom lane


Re: [(catch-ext)] Re: Setting up streaming replication on large database (20+ TB) for the first time

2022-08-18 Thread Vijaykumar Jain
Sorry for top posting, from phone.

But pgbackrest helped with exactly that. With compression and parallel
processes in the backup, the backup and restore were quick. I used this,
taking a backup and immediately doing a restore so there were fewer WALs to
replay; otherwise WAL replay is indeed slow.

On Thu, Aug 18, 2022, 1:03 PM Ivan N. Ivanov 
wrote:

> Thank you, people. The big problem in my case, which I have not mentioned,
> is that I think the network is a bottleneck, because I am running
> pg_basebackup over the internet from my local country to an Amazon instance in
> Germany, and the copying speed is around 50 MB/sec max; that is why copying
> takes 2 days.
>
> I will try using high compression for the basebackup to reduce the time.
>
> pgbackrest is an alternative, too
>
> Thank you again!
>
>
>
> On Wed, Aug 17, 2022 at 11:13 PM Ivan N. Ivanov 
> wrote:
>
>> Thank you for your answer! I have found this tool and I will try it
>> tomorrow to see if this "read-ahead" feature will speed up the process.
>>
>> On Wed, Aug 17, 2022 at 11:09 PM Christophe Pettus 
>> wrote:
>>
>>>
>>>
>>> > On Aug 17, 2022, at 13:06, Ivan N. Ivanov 
>>> wrote:
>>> >
>>> > How to speed up recovering of WAL files?
>>>
>>> Since you are running on your own hardware, you might take a look at:
>>>
>>> https://github.com/TritonDataCenter/pg_prefaulter
>>>
>>


Re: Unable to Create or Drop Index Concurrently

2022-08-18 Thread hubert depesz lubaczewski
On Thu, Aug 18, 2022 at 01:57:48PM +0800, Abdul Qoyyuum wrote:
> Hi list,
> 
> We have a running Master-Slave High Availability set up. Naturally, we
> can't run any changes on read-only databases on slave, so we have to do it
> on the master node.
> 
> When trying to run the following command:
> 
> create index concurrently idx_cash_deposit_channel_id_batch_id on
> cash_deposit (channel_id, batch_id);
> 
> 
> Waiting for a long time, and my connection dropped. When checking the
> table, we get the index as INVALID
> 
> Indexes:
> "pk_cash_deposit" PRIMARY KEY, btree (id)
> "idx_cash_deposit_channel_id_batch_id" btree (channel_id, batch_id)
> INVALID
> 
> Dropping the invalid index also takes a long time; my connection
> timed out, and when logging back in and checking the table, it hadn't been dropped.

This means that you have some very long transactions.

To make/drop an index concurrently, all transactions that started
before you began the create/drop have to finish.

You can see your oldest transactions by doing:

select * from pg_stat_activity where xact_start is not null order by xact_start
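
If one of those old sessions can safely be ended (the pid below is a
placeholder — check what the query above returns first):

select pg_terminate_backend(12345);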

Best regards,

depesz





Re: Is it possible to keep indexes on different disk location?

2022-08-18 Thread hubert depesz lubaczewski
On Thu, Aug 18, 2022 at 08:39:27AM +0200, W.P. wrote:
> Is it possible to move  DB tables etc to this internal storage (sure
> connection) and put only    indexes on USB  HDD?

Sure. There is a thing called a tablespace, which is basically a directory
where the files for db objects reside.

You can specify it both when making new objects
(https://www.postgresql.org/docs/current/sql-createtable.html,
https://www.postgresql.org/docs/current/sql-createindex.html) and you
can also move objects between tablespaces
(https://www.postgresql.org/docs/current/sql-altertable.html,
https://www.postgresql.org/docs/current/sql-alterindex.html).

Just keep in mind that moving an object means that it will get locked, the
data copied to the new device, then removed from the old one, and unlocked.
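
A minimal sketch of the commands involved (the path, tablespace and object
names are made up for illustration):

-- directory must exist and be owned by the postgres OS user
CREATE TABLESPACE usb_space LOCATION '/mnt/usb/pgdata';

-- move one index; it is locked while its data is copied
ALTER INDEX idx_big_table_col SET TABLESPACE usb_space;

-- or create new indexes there directly
CREATE INDEX idx_big_table_other ON big_table (other_col) TABLESPACE usb_space;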

> And will it help in case of losing connection to USB disk? (DB recoverable
> instead of total crash)?

Well, you need *all* the data from all tablespaces to have a fully working db.

Best regards,

depesz





Re: [(catch-ext)] Re: Setting up streaming replication on large database (20+ TB) for the first time

2022-08-18 Thread Ivan N. Ivanov
Thank you, people. The big problem in my case, which I have not mentioned,
is that I think the network is a bottleneck, because I am running
pg_basebackup over the internet from my local country to an Amazon instance in
Germany, and the copying speed is around 50 MB/sec max; that is why copying
takes 2 days.

I will try using high compression for the basebackup to reduce the time.

pgbackrest is an alternative, too

Thank you again!



On Wed, Aug 17, 2022 at 11:13 PM Ivan N. Ivanov 
wrote:

> Thank you for your answer! I have found this tool and I will try it
> tomorrow to see if this "read-ahead" feature will speed up the process.
>
> On Wed, Aug 17, 2022 at 11:09 PM Christophe Pettus 
> wrote:
>
>>
>>
>> > On Aug 17, 2022, at 13:06, Ivan N. Ivanov 
>> wrote:
>> >
>> > How to speed up recovering of WAL files?
>>
>> Since you are running on your own hardware, you might take a look at:
>>
>> https://github.com/TritonDataCenter/pg_prefaulter
>>
>