Re: Removing duplicate rows in table

2024-09-10 Thread Andreas Kretschmer




Am 10.09.24 um 17:07 schrieb Rich Shepard:
I've no idea how I entered multiple, identical rows in a table but I 
want to

delete all but one of these rows.

Here's an example:

bustrac=# select * from projects where proj_nbr = '4242.01';
 proj_nbr |   proj_name    | start_date |  end_date  |  description  | notes
----------+----------------+------------+------------+---------------+-------
 4242.01  | Expert witness | 2008-10-15 | 2008-10-28 | Consol Energy |
 4242.01  | Expert witness | 2008-10-15 | 2008-10-28 | Consol Energy |
 4242.01  | Expert witness | 2008-10-15 | 2008-10-28 | Consol Energy |
 4242.01  | Expert witness | 2008-10-15 | 2008-10-28 | Consol Energy |
(4 rows)


How do I clean this up so there's only a single row for this project 
number?


TIA,

Rich




you can use the hidden ctid-column:

postgres=# create table demo (id int, val text);
CREATE TABLE
postgres=# insert into demo values (1, 'test1');
INSERT 0 1
postgres=# insert into demo values (1, 'test1');
INSERT 0 1
postgres=# insert into demo values (1, 'test1');
INSERT 0 1
postgres=# insert into demo values (1, 'test1');
INSERT 0 1
postgres=# select ctid, * from demo;
 ctid  | id |  val
-------+----+-------
 (0,1) |  1 | test1
 (0,2) |  1 | test1
 (0,3) |  1 | test1
 (0,4) |  1 | test1
(4 rows)

postgres=# with my_ctid as (select min(ctid) from demo where id = 1 and 
val = 'test1') delete from demo using my_ctid where id=1 and val='test1' 
and ctid != my_ctid.min;

DELETE 3
postgres=# select ctid, * from demo;
 ctid  | id |  val
-------+----+-------
 (0,1) |  1 | test1
(1 row)

postgres=#


--
Andreas Kretschmer
CYBERTEC PostgreSQL Services and Support





Re: Is this a buggy behavior?

2024-03-24 Thread Andreas Kretschmer




Am 24.03.24 um 16:41 schrieb Thiemo Kellner:



Am 24.03.2024 um 16:36 schrieb Andreas Kretschmer:
the nullable constraint addition to a column is pointless, because by 
default all columns are nullable. Definition as a primary key adds 
the NOT NULL constraint.


While this is certainly true, I do not see why the information that a 
not null constraint is to be created or has been created is not 
available.





postgres=# create table bla(i int null primary key);
CREATE TABLE
postgres=# \d bla
    Table "public.bla"
 Column |  Type   | Collation | Nullable | Default
+-+---+--+-
 i  | integer |   | not null |
Indexes:
    "bla_pkey" PRIMARY KEY, btree (i)

postgres=# drop table bla;
DROP TABLE
postgres=# create table bla(i int not null primary key);
CREATE TABLE
postgres=# \d bla
    Table "public.bla"
 Column |  Type   | Collation | Nullable | Default
+-+---+--+-
 i  | integer |   | not null |
Indexes:
    "bla_pkey" PRIMARY KEY, btree (i)

postgres=#


as you can see, there is no difference.  the PK-Constraint is the 
important thing here.


Andreas



--
Andreas Kretschmer
CYBERTEC PostgreSQL Services and Support





Re: Is this a buggy behavior?

2024-03-24 Thread Andreas Kretschmer




Am 24.03.24 um 16:28 schrieb Thiemo Kellner:


Am 24.03.2024 um 16:17 schrieb Tom Lane:


To do that, we'd have to remember that you'd said NULL, which we
don't: the word is just discarded as a noise clause. Considering
that this usage of NULL isn't even permitted by the SQL standard,
that seems like a bit too much work.


If I understood correctly, only the NOT NULL expression gets 
remembered, but the NULL gets discarded. No, I do not quite get it. 
Somehow, it has to be decided whether to create a "check constraint" 
or not, but this information is not available any more when creating 
the primary key? Not even in some kind of intermediary catalogue?


the nullable constraint addition to a column is pointless, because by 
default all columns are nullable. Definition as a primary key adds the 
NOT NULL constraint.


Andreas

--
Andreas Kretschmer
CYBERTEC PostgreSQL Services and Support





Re: PostgreSQL Read-only mode usage

2024-02-28 Thread Andreas Kretschmer




Am 28.02.24 um 13:34 schrieb Jason Long:

Hello,
What is the use of a database in read-only mode?


a standby-database will also be in read-only mode.
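
You can check on any node whether it is such a standby; pg_is_in_recovery() returns true there:

postgres=# select pg_is_in_recovery();
 pg_is_in_recovery
-------------------
 t
(1 row)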

Regards, Andreas

--
Andreas Kretschmer
CYBERTEC PostgreSQL Services and Support





Re: Deleting duplicate rows using ctid ?

2024-02-06 Thread Andreas Kretschmer




Am 06.02.24 um 00:32 schrieb David G. Johnston:
On Mon, Feb 5, 2024 at 4:09 PM David Gauthier  
wrote:



I want the result to be just 2 recs, one for each dog.


My present goto link for this question:

https://harshitjain.home.blog/2019/06/17/postgresql-how-to-delete-duplicate-rows-in-postgresql/

David J.



postgres=# select * from dogs;
 dog
------
 dog1
 dog1
 dog2
 dog2
 dog2
 dog3
(6 rows)

postgres=# select ctid, dog, row_number() over (partition by dog) from 
dogs ;

 ctid  | dog  | row_number
-------+------+------------
 (0,1) | dog1 |          1
 (0,2) | dog1 |          2
 (0,3) | dog2 |          1
 (0,4) | dog2 |          2
 (0,5) | dog2 |          3
 (0,6) | dog3 |          1
(6 rows)

postgres=# with ct as (select ctid, dog, row_number() over (partition by 
dog) from dogs) delete from dogs where ctid in (select ctid from ct 
where row_number != 1) ;

DELETE 3
postgres=# select * from dogs;
 dog
------
 dog1
 dog2
 dog3
(3 rows)

postgres=#


Regards, Andreas


--
Andreas Kretschmer
CYBERTEC PostgreSQL Services and Support





Re: Why scan all columns when we select distinct c1?

2024-01-14 Thread Andreas Kretschmer




Am 14.01.24 um 12:17 schrieb Yongtao Huang:



My question is why scan all columns in PG 16.01?
If `select distinct c1`, scan the column `c1` is enough, like PG 9.4.


good question. I think it is because the seq scan always reads the whole 
row anyway. If you create an index on c1, this changes:


postgres=# create index idx1 on t1(c1);
CREATE INDEX
postgres=# set enable_seqscan to off;
SET
postgres=# explain (costs, verbose) select distinct c1 from t1;
                                      QUERY PLAN
---------------------------------------------------------------------------------------
 Unique  (cost=0.15..63.93 rows=200 width=4)
   Output: c1
   ->  Index Only Scan using idx1 on public.t1  (cost=0.15..61.10 rows=1130 width=4)
         Output: c1
(4 rows)


now we scan only the index and not the heap.

Regards, Andreas

--
Andreas Kretschmer
CYBERTEC PostgreSQL Services and Support





Re: Query runtime differences- trying to understand why.

2023-11-29 Thread Andreas Kretschmer




Am 29.11.23 um 21:25 schrieb Dirschel, Steve:


I have a question on the execution time of a query and the 
fluctuations I'm seeing.  I enabled auto_trace to capture some actual 
executions of a query by an app.  Below are 2 executions of it.  The 
top one took 1.697 milliseconds. The 2nd one took 31.241 
milliseconds.  Note the query has hints in it




what database are you using? PostgreSQL doesn't have hints...

Regards, Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: replication primary writting infinite number of WAL files

2023-11-24 Thread Andreas Kretschmer




Am 24.11.23 um 13:52 schrieb Les:



Andreas Kretschmer  wrote (2023. nov. 24., P, 
13:22):




Am 24.11.23 um 12:39 schrieb Les:
>
> Hello,
>

please check the database log, a VACUUM can also lead to massive wal
generation. Can you find other related messages? by the way, the
picture
is hard to read, please post text instead of pictures.


First I was also thinking about vacuum.  But removing a replication 
slot should have no effect on vacuum on the primary (AFAIK). Please 
correct me if I'm wrong.




yeah, depends. there are 2 processes:

* 1 process generating the WALs, maybe a VACUUM
* an inactive slot holding the WALs

For instance, if a standby is not reachable, the WALs will accumulate 
within the slot until the standby is reachable again.
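
A minimal sketch to spot such a slot (pg_current_wal_lsn()/pg_wal_lsn_diff() exist since PG 10):

postgres=# select slot_name, active,
                  pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) as retained_bytes
           from pg_replication_slots;

An inactive slot whose retained_bytes keeps growing is what holds the WALs back.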



Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: replication primary writting infinite number of WAL files

2023-11-24 Thread Andreas Kretschmer




Am 24.11.23 um 12:39 schrieb Les:


Hello,

Yesterday we were faced with a problem that we do not understand and 
cannot solve ourselves. We have a postgresql cluster using repmgr, 
which has three members. The version of all instances (currently) is 
"PostgreSQL 14.5 (Debian 14.5-2.pgdg110+2) on x86_64-pc-linux-gnu, 
compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit.". Under 
normal circumstances, the number of write operations is relatively 
low, with an average of 4-5 MB/sec total write speed on the disk 
associated with the data directory. Yesterday, the primary server 
suddenly started writing to the pg_wal directory at a crazy pace, 
1.5GB/sec, but sometimes it went up to over 3GB/sec. The pg_wal 
started fattening up and didn't stop until it ran out of disk space.




please check the database log, a VACUUM can also lead to massive wal 
generation. Can you find other related messages? by the way, the picture 
is hard to read, please post text instead of pictures.



Regards, Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: IPV6 issue

2023-11-23 Thread Andreas Kretschmer




Am 23.11.23 um 20:18 schrieb Atul Kumar:


I am not able to understand that my both machines are resolved to IPV6 
then why is my first machine is throwing this error ?




please check 
https://techdocs.broadcom.com/us/en/ca-enterprise-software/it-operations-management/network-flow-analysis/21-2/installing/system-recommendations-and-requirements/linux-servers/disable-ipv6-networking-on-linux-servers.html



Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: Configuration knobs & dials to speed up query optimization

2023-11-23 Thread Andreas Kretschmer




Am 22.11.23 um 17:13 schrieb Ron Johnson:

Pg 9.6.24, which will change by April, but not now.




out of support for many years!




Here are the only non-default config values which I can think of that 
are relevant to the question at hand:

shared_buffers = 16GB
work_mem = 300MB
maintenance_work_mem = 12GB
effective_cache_size = 96GB
default_statistics_target = 200


As Laurenz said, default_statistics_target at 200 is too high in my 
opinion. It makes all planning more expensive and increases the size of 
the internal statistics. It is better to raise it specifically for 
selected tables/columns.


Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Andreas Kretschmer




Am 22.11.23 um 18:44 schrieb Atul Kumar:

I am giving this command
psql -d postgres -U postgres -p 5432 -h localhost
Then only I get that error.


so localhost resolves to an IPv6 address ...



but when I  pass ip or hostname of the local server then I don't get 
such error message

1. psql -d postgres -U postgres -p 5432 -h 
2. psql -d postgres -U postgres -p 5432 -h 


resolves to an IPv4 address. Can you see the difference?

localhost != IPv4 address != hostname with IPv4 address

Andreas




I don;t get that error while using the above two commands.


Regards.


On Wed, Nov 22, 2023 at 10:45 PM Adrian Klaver 
 wrote:


On 11/22/23 09:03, Atul Kumar wrote:
> The entries that I changed were to replace the md5 with
scram-sha-256
> and remove unnecessary remote IPs.

FYI from:

https://www.postgresql.org/docs/current/auth-password.html

md5

     The method md5 uses a custom less secure challenge-response
mechanism. It prevents password sniffing and avoids storing
passwords on
the server in plain text but provides no protection if an attacker
manages to steal the password hash from the server. Also, the MD5
hash
algorithm is nowadays no longer considered secure against determined
attacks.

     The md5 method cannot be used with the db_user_namespace feature.

     To ease transition from the md5 method to the newer SCRAM
method,
if md5 is specified as a method in pg_hba.conf but the user's
password
on the server is encrypted for SCRAM (see below), then SCRAM-based
authentication will automatically be chosen instead.

>
> But it has nothing to do with connecting the server locally with
"psql
> -d postgres -U postgres -h localhost"

The error:

no pg_hba.conf entry for host "::1", user "postgres", database
"postgres


says it does and the error is correct as you do not have an IPv6
entry
for localhost in pg_hba.conf. At least in the snippet you showed us.


>
> But when I try to connect it locally I get this error. So it is
related

When you say connect locally do you mean to localhost or to
local(socket)?

> to local connections only and when I pass the hostname or ip of the
> server it works fine without any issue.
>
>
> Regards.
>

-- 
Adrian Klaver

adrian.kla...@aklaver.com



--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: strange behavior of pg_hba.conf file

2023-11-22 Thread Andreas Kretschmer




Am 22.11.23 um 17:21 schrieb Atul Kumar:



Since then I have started getting the below error:

no pg_hba.conf entry for host "::1", user "postgres", database "postgres




What I am missing here, please suggest.




that sounds like an issue with IPv6. Do you use it? Disable it or add 
an entry for it.
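
A minimal sketch of such an entry in pg_hba.conf (pick the auth method you actually use):

# TYPE  DATABASE  USER  ADDRESS   METHOD
host    all       all   ::1/128   scram-sha-256

Reload the config afterwards, e.g. with: select pg_reload_conf();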



Regards, Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: PITR

2023-11-22 Thread Andreas Kretschmer




Am 22.11.23 um 11:50 schrieb Ron Johnson:
On Wed, Nov 22, 2023 at 3:12 AM Rajesh Kumar 
 wrote:




How do I do PITR. Backup strategy is weekly full backup and daily
differential backup. Using pgbackrest.

Also. In future how do i monitor time of drop commands.



https://blog.hagander.net/locating-the-recovery-point-just-before-a-dropped-table-230/


Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: Postgresql went crazy and flooded all the SSD

2023-11-06 Thread Andreas Kretschmer



On 6 November 2023 12:11:31 CET, Gabriel Dodan  wrote:
>Not sure exactly what happened but Postgresql flooded all the available SSD
>space and obviously crashed. It has written a lot of data in the pg_wal
>folder. Most likely it was caused by replication. The postgresql instance
>that crashed was a subscriber and publisher at the same time. Even so, it
>was not a heavy write replication, I mean the data that needed to be
>replicated was rarely updated. Has anyone else encountered such an issue?
>
>Best!


Check the logs.

Andreas




Re: REINDEX in tables

2023-10-25 Thread Andreas Kretschmer




Am 25.10.23 um 14:11 schrieb Laurenz Albe:

On Wed, 2023-10-25 at 11:59 +0200, Andreas Kretschmer wrote:

Am 25.10.23 um 11:57 schrieb Matthias Apitz:

El día miércoles, octubre 25, 2023 a las 11:33:11 +0200, Andreas Kretschmer 
escribió:

Am 25.10.23 um 11:24 schrieb Matthias Apitz:

We have a client who run REINDEX in certain tables of the database of
our application (on Linux with PostgreSQL 13.x):

REINDEX TABLE CONCURRENTLY d83last;
REINDEX TABLE CONCURRENTLY d86plz;
REINDEX TABLE CONCURRENTLY ig_memtable;
REINDEX TABLE CONCURRENTLY ig_dictionary;
REINDEX TABLE CONCURRENTLY ig_dictionary;
REINDEX TABLE CONCURRENTLY d50zweig ;
REINDEX TABLE CONCURRENTLY d50zweig ;

We as the software vendor and support, do not use or recommend this
procedure, because we have own SQL files for creating or deleting
indices in the around 400 tables.

The client is now concerned about the issue that the number of
rows in some of the above tables has increased. Is this possible?

no, reindex will not add rows to the table.

But if the indexes were corrupted before the REINDEX, it is possible that
a query that didn't find a result before the REINDEX can find one afterwards.


Thanks for the addition my friend, that's correct and could be an 
explanation here.



Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: REINDEX in tables

2023-10-25 Thread Andreas Kretschmer




Am 25.10.23 um 11:57 schrieb Matthias Apitz:

El día miércoles, octubre 25, 2023 a las 11:33:11 +0200, Andreas Kretschmer 
escribió:


Am 25.10.23 um 11:24 schrieb Matthias Apitz:

We have a client who run REINDEX in certain tables of the database of
our application (on Linux with PostgreSQL 13.x):

REINDEX TABLE CONCURRENTLY d83last;
REINDEX TABLE CONCURRENTLY d86plz;
REINDEX TABLE CONCURRENTLY ig_memtable;
REINDEX TABLE CONCURRENTLY ig_dictionary;
REINDEX TABLE CONCURRENTLY ig_dictionary;
REINDEX TABLE CONCURRENTLY d50zweig ;
REINDEX TABLE CONCURRENTLY d50zweig ;

We as the software vendor and support, do not use or recommend this
procedure, because we have own SQL files for creating or deleting
indices in the around 400 tables.

The client is now concerned about the issue that the number of
rows in some of the above tables has increased. Is this possible?

In principle, there is nothing wrong with doing this in a maintenance
window, for example.

But, this wasn't the question. It was: can it happen that the number of
rows in thze table will increase by this operation?


no, reindex will not add rows to the table.


Regards, Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: REINDEX in tables

2023-10-25 Thread Andreas Kretschmer




Am 25.10.23 um 11:24 schrieb Matthias Apitz:

We have a client who run REINDEX in certain tables of the database of
our application (on Linux with PostgreSQL 13.x):

REINDEX TABLE CONCURRENTLY d83last;
REINDEX TABLE CONCURRENTLY d86plz;
REINDEX TABLE CONCURRENTLY ig_memtable;
REINDEX TABLE CONCURRENTLY ig_dictionary;
REINDEX TABLE CONCURRENTLY ig_dictionary;
REINDEX TABLE CONCURRENTLY d50zweig ;
REINDEX TABLE CONCURRENTLY d50zweig ;

We as the software vendor and support, do not use or recommend this
procedure, because we have own SQL files for creating or deleting
indices in the around 400 tables.

The client is now concerned about the issue that the number of
rows in some of the above tables has increased. Is this possible?


In principle, there is nothing wrong with doing this in a maintenance 
window, for example.



Regards, Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: why generated columsn cannot be used in COPY TO?

2023-10-06 Thread Andreas Kretschmer




Am 06.10.23 um 15:47 schrieb Tom Lane:

Luca Ferrari  writes:

I'm wondering why in COPY TO (file or program) I cannot use generated
columns: since I'm pushing data out of the table, why they are not
allowed?

There's a comment about that in copy.c:

  * We don't include generated columns in the generated full list and we don't
  * allow them to be specified explicitly.  They don't make sense for COPY
  * FROM, but we could possibly allow them for COPY TO.  But this way it's at
  * least ensured that whatever we copy out can be copied back in.


ha, as always: read the source ;-)



Not sure how convincing that reasoning is, but it was at least
thought about.  I do agree with it as far as the default column
list goes, but maybe we could allow explicit selection of these
columns in COPY TO.


sounds okay


Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: why generated columsn cannot be used in COPY TO?

2023-10-06 Thread Andreas Kretschmer




Am 06.10.23 um 13:53 schrieb Luca Ferrari:

Hi all,
I'm wondering why in COPY TO (file or program) I cannot use generated
columns: since I'm pushing data out of the table, why they are not
allowed?

Example:

testdb=# CREATE TABLE test( pk int generated always as identity primary key
, ts timestamp default current_timestamp
, month int generated always as ( extract( month from ts ) ) stored
);

testdb=# insert into test( ts ) values( current_timestamp );

testdb=# copy test to program 'head'; -- ok but silently removes the column
COPY 1
testdb=# copy test( month ) to program 'head';
ERROR:  column "month" is a generated column
DETAIL:  Generated columns cannot be used in COPY.


you can use copy (select * ...) to ...

test=*# CREATE TABLE test( pk int generated always as identity primary 
key , ts timestamp default current_timestamp , month int generated 
always as ( extract( month from ts ) ) stored );

CREATE TABLE
test=*# commit;
COMMIT
test=# insert into test( ts ) values( current_timestamp );
INSERT 0 1
test=*# select * from test;
 pk |            ts             | month
----+---------------------------+-------
  1 | 06-OCT-23 14:18:28.742152 |    10
(1 row)

test=*# commit;
COMMIT
test=# copy test to stdout;
1    06-OCT-23 14:18:28.742152
test=*# copy test to stdout;
1    06-OCT-23 14:18:28.742152
test=*# copy (select * from test) to stdout;
1    06-OCT-23 14:18:28.742152    10
test=*#



Andreas

--
Andreas Kretschmer - currently still (garden leave)
Technical Account Manager (TAM)
www.enterprisedb.com





Re: How to investigate deadlocks

2023-10-03 Thread Andreas Kretschmer




Am 02.10.23 um 13:27 schrieb Matthias Apitz:

Hello,

One of our clients running our LMS on top of PostgreSQL 13.1 created a
ticket with these messages:

2023-09-30 16:50:50.951 CEST [18117] ERROR:  deadlock detected
2023-09-30 16:50:50.951 CEST [18117] DETAIL:  Process 18117 waits for ShareLock 
on transaction 150396154; blocked by process 18187.
Process 18187 waits for ShareLock on transaction 150396155; blocked by 
process 18117.
Process 18117: fetch hc_d03geb
Process 18187: fetch hc_d02ben
2023-09-30 16:50:50.951 CEST [18117] HINT:  See server log for query details.
2023-09-30 16:50:50.951 CEST [18117] CONTEXT:  while locking tuple (38,57) in relation 
"d03geb"
2023-09-30 16:50:50.951 CEST [18117] STATEMENT:  fetch hc_d03geb



have you checked the server log?

See server log for query details.



Regards, Andreas

--
Andreas Kretschmer
Technical Account Manager (TAM)
www.enterprisedb.com





Re: How to investigate deadlocks

2023-10-03 Thread Andreas Kretschmer




Am 02.10.23 um 13:27 schrieb Matthias Apitz:

Hello,

One of our clients running our LMS on top of PostgreSQL 13.1 created a
ticket with these messages:

2023-09-30 16:50:50.951 CEST [18117] ERROR:  deadlock detected
2023-09-30 16:50:50.951 CEST [18117] DETAIL:  Process 18117 waits for ShareLock 
on transaction 150396154; blocked by process 18187.
Process 18187 waits for ShareLock on transaction 150396155; blocked by 
process 18117.
Process 18117: fetch hc_d03geb
Process 18187: fetch hc_d02ben
2023-09-30 16:50:50.951 CEST [18117] HINT:  See server log for query details.
2023-09-30 16:50:50.951 CEST [18117] CONTEXT:  while locking tuple (38,57) in relation 
"d03geb"
2023-09-30 16:50:50.951 CEST [18117] STATEMENT:  fetch hc_d03geb

The shown PIDs for sure are the ones of the Postgres backend proc (on Linux).
Is there any chance to investigate it further?


please also check 
https://www.cybertec-postgresql.com/en/postgresql-understanding-deadlocks/



Andreas

--
Andreas Kretschmer
Technical Account Manager (TAM)
www.enterprisedb.com





Re: Operating of synchronous master when no standby is available

2023-10-02 Thread Andreas Kretschmer




Am 02.10.23 um 10:54 schrieb Sergey Cherukhin:

Hello!

I use Postgresql+Pacemaker+Corosync cluster with 2 Postgresql 
instances in synchronous replication mode. When one of the nodes is 
down, clients hang on INSERT operations because the primary server 
waits until standby confirms that it got the data, but the standby 
node is down.


I need clients to feel no difference when they operate with a fully 
operational cluster or with a degraded cluster containing primary 
node alone.


Is it possible to configure synchronous replication to operate 
normally when primary and replica are online, but primary no waits 
confirmation from replica if replica is not connected?


yes, with 3 or more sync. standbys.
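
A minimal sketch with quorum commit (the ANY syntax needs PG 10 or newer; the standby names are placeholders):

# postgresql.conf on the primary
synchronous_standby_names = 'ANY 1 (standby1, standby2)'

With two standbys and ANY 1, a commit is confirmed as soon as one standby acknowledges it, so the primary keeps working while the other standby is down.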


Andreas

--
Andreas Kretschmer
Technical Account Manager (TAM)
www.enterprisedb.com





Re: Will PostgreSQL 16 supports native transparent data encryption ?

2023-08-21 Thread Andreas Kretschmer



On 22 August 2023 06:52:10 CEST, Ron  wrote:
>On 8/21/23 18:49, Bruce Momjian wrote:
>> On Mon, Aug 21, 2023 at 07:02:46PM +0300, Mostafa Fathy wrote:
>>> Hi there,
>>> 
>>> It is mentioned here https://www.postgresql.org/about/press/faq/#:~:text=
>>> Q%3A%20What%20features%20will%20PostgreSQL%2016%20have%3F that native
>>> transparent data encryption is being worked on and it may be delivered with
>>> PostgreSQL 16.
>>> 
>>> Is PostgreSQL 16 beta version includes native transparent data encryption or
>>> not ? because I checked the docs 
>>> https://www.postgresql.org/docs/16/index.html
>>> and couldn't find anything related to transparent data encryption.
>>> 
>>> If not supported yet in the beta version I would like to know if PostgreSQL 
>>> 16
>>> final version will support native transparent data encryption or not?
>> Not, PG 16 will not support it, and I am unclear if later major versions
>> will either.
>
>That's disappointing, since TDE makes PCI audits that much simpler.
>

Sure. You can use EDB products (EPAS or Postgres Extended) with TDE.

Andreas




Re: Fatal Error : Invalid Memory alloc request size 1236252631

2023-08-14 Thread Andreas Kretschmer



On 14 August 2023 11:59:26 CEST, Sai Teja  
wrote:
>Hi Team,
>
>We are trying to fetch the one row of data (bytea data) for one table in
>But getting the error stating that "Invalid Memory alloc request size
>1236252631"
>
>The row which we were trying to fetch have one bytea column which is more
>than 1GB
>
>Could anyone please help me to resolve this issue.
>

You can try to change the bytea_output. Possible values are hex and escape. 
With some luck it will work, but maybe your application will have problems with 
that.

Andreas




Re: PostgreSQL Server Hang​

2023-06-21 Thread Andreas Kretschmer



On 22 June 2023 07:09:26 CEST, KK CHN  wrote:
>*Description of System: *
>1. We are running a Postgres Server (version 12, on CentOS 6) for an
>emergency call attending and  vehicle tracking system fitted with mobile
>devices for vehicles with navigation apps for emergency service.
>
>2.   vehicles every 30 Seconds sending location coordinates( Lat /Long ) and
>getting stored into the DB server at the emergency call center cum control
>room.
>
>*Issue: *
>We are facing an issue of  the database hanging and becoming unresponsive
>for
>applications running which try to connect to the DB.
>
>*Question: *
>How to optimize and fine tune this database performance issue ?  Definitely
>pouring the resources like the above is not a solution.
>
>What to check the root cause and find for the performance bottle neck
>reasons  ?
>

Have you THP disabled?

Andreas




Re: Active Active PostgreSQL Solution

2023-06-09 Thread Andreas Kretschmer



On 9 June 2023 12:38:40 CEST, Mohsin Kazmi  wrote:
>Hello Everyone,
>
>I have been working on PostgreSQL databases for the last three years and I
>have also migrate databases from Oracle to PostgreSQL as well. I configured
>PostgreSQL for logical replication as well.
>
>Now in order to deploy PostgreSQL in our production servers, I need to
>configure it in Active Active mode. Can anyone help me to do so?
>

You can ask EDB, we have a solution, but it is not for free and not open source.

Andreas 




Re: Would PostgreSQL 16 native transparent data encryption support database level encryption?

2023-05-18 Thread Andreas Kretschmer
It is in EPAS 15, but for the whole cluster. Different keys for each database are 
not possible; how should that work, for instance, with the WAL stream?

On 18 May 2023 00:35:39 CEST, Tony Xu  wrote:
>Hi There,
>
>The FAQ (copied below) mentioned that native transparent data encryption
>might be included in 16. Is it fair to assume that it will support database
>level encryption, that is, we can use two encryption keys for two databases
>in the same server, respectively? How can one verify that?
>
>Thanks
>Tony
>
>
>
>
>*https://www.postgresql.org/about/press/faq/
>*
>
>*Q: What features will PostgreSQL 16 have?A: As always, we can't be certain
>what will go in and what won't; the project has strict quality standards
>that not all patches can make before deadline. All we can tell you is
>what's currently being worked on, which includes native transparent data
>encryption support, continued improvements to logical replication,
>parallelism, partitioning, and vacuuming, and many more features. By the
>time 16 is released, though, this list may have changed considerably.*


Re: Additive backup and restore?

2023-05-08 Thread Andreas Kretschmer
Consider table partitioning. You can detach, save and delete partitions, and 
you can restore and attach partitions.
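
A minimal sketch with placeholder names, assuming a range-partitioned table:

-- detach the old partition, dump it with pg_dump, then drop it:
ALTER TABLE measurements DETACH PARTITION measurements_2022;
-- later: restore the dump and attach it again:
ALTER TABLE measurements ATTACH PARTITION measurements_2022
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');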

On 8 May 2023 12:24:06 CEST, Age Apache  wrote:
>I am designing a database for a web application. In the near future I will
>require past data for Audit, Security and Analysis purpose. But storing all
>that data will increase the database size. What are some well known
>approaches to archiving data for later use without increasing the database
>size?
>
>One approach I am thinking of is compressed additive backup and restore
>i.e. copy the rows of the tables that will be required later and store them
>in a compressed form, and then delete those rows from the table. And when
>the data is required then restore them from the backup files in an additive
>way.
>
>Is there an easy way to achieve this, say via pg_dump?


Re: pg_wal directory max size

2022-12-21 Thread Andreas Kretschmer
On 22 December 2022 04:00:57 CET, Yi Sun  wrote:
>Hello guys,
>
>We are planning the server disk space, pg_wal directory max size is wal
>file size*wal_keep_segments? or is it also decided by other parameters
>please? We tried to search for this, but could not find the answer
>
>For example our postgresql is 9.6 below parameters value, is the pg_wal
>directory max size 320*16MB please? Thank you
>wal file size 16MB
>wal_keep_segments = 320
>min_wal_size = 1GB
>max_wal_size = 2GB
>
>Best Regards
>Dennis

No, you need space to store all wal's between 2 checkpoints. So it depends more 
on the checkpoint parameters and your workload. Plus wal_keep_segments...
9.6 is out of support.
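
A rough estimate for these numbers (a lower bound only; a stalled archive_command or an abandoned replication slot can push it far beyond that):

  wal_keep_segments:                     320 * 16MB = 5GB
  max_wal_size (soft checkpoint budget):              2GB

so plan for at least ~7GB in pg_wal.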

Andreas







Re: Drop role cascade ?

2022-11-17 Thread Andreas Kretschmer
On 18 November 2022 01:14:37 CET, Ron  wrote:
>On 11/17/22 17:57, Tom Lane wrote:
>> Ron  writes:
>>> On 11/17/22 17:16, Tom Lane wrote:
>>>> Not directly, but see REASSIGN OWNED and DROP OWNED.
>>> SIMULATE and VERBOSE options would be handy, to see what those commands
>>> would actually do.
>> Hmph.  I'm surprised to realize that those commands don't produce
>> trace output comparable to DROP CASCADE.  If they did, this need
>> would be met by the traditional hack of "BEGIN; DROP ...; ROLLBACK".
>
>So... you'll add it to v16  :D
>

+1 as feature request




-- 
Andreas Kretschmer, EDB




Re: Findout long unused tables in database

2022-09-26 Thread Andreas Kretschmer




Am 26.09.22 um 14:05 schrieb Andreas Fröde:

Hello,
I am looking for a way to find out when a table was last used for 
reading. (Without writing every statement in the logfile or putting a 
trigger on it). Is there such a thing?




not really what you are looking for, I know, but we have 
pg_stat_user_tables. There you can find how often the table was queried 
in the past. Take the data, wait some time, take it again and compare.
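
A minimal sketch for taking such a snapshot:

postgres=# select relname, seq_scan + coalesce(idx_scan, 0) as total_scans
           from pg_stat_user_tables
           order by total_scans;

Tables whose total_scans does not change between two snapshots were not read in that period.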




Regards, Andreas

--
Andreas Kretschmer
Technical Account Manager (TAM)
www.enterprisedb.com





Re: lifetime of the old CTID

2022-07-05 Thread Andreas Kretschmer




Am 06.07.22 um 07:54 schrieb Andreas Kretschmer:



Am 06.07.22 um 07:44 schrieb Christophe Pettus:



On Jul 5, 2022, at 22:35, Matthias Apitz  wrote:
Internally, in the DB layer, the read_where() builds the row list 
matching

the WHERE clause as a SCROLLED CURSOR of

    SELECT ctid, * FROM d01buch WHERE ...

and each fetch() delivers the next row from this cursor. The functions
start_transaction() and end_transaction() do what their names 
suggest and
rewrite_actual_row() does a new SELECT based on the ctid of the 
actual row


    SELECT * FROM d01buch WHERE ctid = ... FOR UPDATE
    ...
    UPDATE ...
On first glance, it appears that you are using the ctid as a primary 
key for a row, and that's highly not-recommended.  The ctid is never 
intended to be stable in the database, as you have discovered.  There 
are really no particular guarantees about ctid values being retained.


I'd suggest having a proper primary key column on the table, and 
using that instead.



100% ACK.

Andreas




it reminds me somehow of how people used the OID in old times, and now 
we have removed the OID completely.


Andreas

--
Andreas Kretschmer
Technical Account Manager (TAM)
www.enterprisedb.com





Re: lifetime of the old CTID

2022-07-05 Thread Andreas Kretschmer




Am 06.07.22 um 07:44 schrieb Christophe Pettus:



On Jul 5, 2022, at 22:35, Matthias Apitz  wrote:
Internally, in the DB layer, the read_where() builds the row list matching
the WHERE clause as a SCROLLED CURSOR of

SELECT ctid, * FROM d01buch WHERE ...

and each fetch() delivers the next row from this cursor. The functions
start_transaction() and end_transaction() do what their names suggest and
rewrite_actual_row() does a new SELECT based on the ctid of the actual row

SELECT * FROM d01buch WHERE ctid = ... FOR UPDATE
...
UPDATE ...

On first glance, it appears that you are using the ctid as a primary key for a 
row, and that's highly not-recommended.  The ctid is never intended to be 
stable in the database, as you have discovered.  There are really no particular 
guarantees about ctid values being retained.

I'd suggest having a proper primary key column on the table, and using that 
instead.



100% ACK.

Andreas


--
Andreas Kretschmer
Technical Account Manager (TAM)
www.enterprisedb.com





Re: Postgres Wal Full

2022-03-03 Thread Andreas Kretschmer
On 3 March 2022 08:46:45 CET, pgdba pgdba  wrote:
>Hi ,
>
>I have a problem from pg_wal. I am using postgresql version 11 and taking 
>backup and writing archive_command in postgresql.conf but did not archive wal 
>and my disk ise full from pg_wal. I research why is my wal is full and dont 
>found anything could  you help me please ?


Check if your archive_command works. Read the log.
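
A minimal sketch to check that, using the pg_stat_archiver view:

postgres=# select archived_count, last_archived_wal,
                  failed_count, last_failed_wal
           from pg_stat_archiver;

A growing failed_count means the archive_command keeps failing, and pg_wal cannot be cleaned up until it succeeds.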


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: Max# of tablespaces

2021-01-05 Thread Andreas Kretschmer
On 3 January 2021 13:59:31 CET, Thomas Flatley  wrote:
>Hello, I've checked the docs but cant seem to find if there is a max #
>of tablespaces allowed - I've come across a 9.5 env with 1600
>tablespaces - they want to double that 

why on earth do you think you will need so many tablespaces? They have a different 
meaning than in Oracle.


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: PostgreSQL HA

2020-12-28 Thread Andreas Kretschmer




Am 26.12.20 um 17:20 schrieb venkata786 k:

Hi Ganesh,

BDR supports postgres 12 & 13 versions ??
I think we have BDR compatible for 9.4 postgres.
Could you plz confirm.


That's true. PG 9.4 is out of support.



Regards, Andreas

--
2ndQuadrant, an EDB company
www.2ndQuadrant.com / www.enterprisedb.com





Re: Upgrade 9.4 to 12 on windows system

2020-11-19 Thread Andreas Kretschmer




Am 19.11.20 um 15:05 schrieb Asya Nevra Buyuksoy:



connection to database failed: could not connect to server: Connection 
refused (0x274D/10061)

Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 50432?
could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 50432?




do you have a firewall installed on the machine? Is port 50432 blocked? 
The default port for PostgreSQL is 5432, not 50432.



Andreas


--
2ndQuadrant, an EDB company
www.2ndQuadrant.com / www.enterprisedb.com





Re: Is it possible to write a generic UPSERT?

2020-11-12 Thread Andreas Kretschmer




Am 12.11.20 um 18:34 schrieb Michael Lewis:
MERGE command is implemented for this use case in some DBMS, but not 
Postgres (yet?).


MERGE is available in 2ndqPostgres, but that's not open source (it's 
available for 2ndQ-customers on request).



Regards, Andreas

--
2ndQuadrant, an EDB company
www.2ndQuadrant.com / www.enterprisedb.com





Re: PANIC: could not write to log file {} at offset {}, length {}: Invalid argument

2020-11-04 Thread Andreas Kretschmer
On 4 November 2020 11:24:03 CET, Shani Israeli  
wrote:
>Hi all,
>
>We are running PostgreSQL v9.5.19 over Windows Server 2012 R2, 16GB
>RAM.
>Lately, postgres started to crash (happened already 3 times ~once a
>month)
>and before its crashes I found this message in Event Log:
>
>PANIC:  could not write to log file {} at offset {}, length {}: Invalid
>argument
>
>(so I assumed it is related).
>
>attached is our configuration.
>
>Any ideas about what is the problem? or anything else I need to check?


wild guess: Antivirus Software?



>
>Thanks is advance,
>Shani Israeli - Software Developer
>
>+972 54 6689920
>sisra...@illusivenetworks.com
>www.illusivenetworks.com


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: archive command in streaming replication in windows server

2020-10-29 Thread Andreas Kretschmer




Am 29.10.20 um 20:12 schrieb Atul Kumar:

hi,

I am trying to configure streaming replication on windows server.

i have postgres version 10

after successful installation of postgres, I create a archive
directory "C:\Program Files\PostgreSQL\10\archive_files"  and here in
archive_flies, I need to copy the wal files.

in postgresql conf file I gave path in archive_command like this:

'copy %p C:\Program Files\PostgreSQL\10\archive_files\%f'

After successfully restarting the postgres service, I could not find
any file in the archive directory.

and when I am giving command
show archive_command

I get this as an output

"copy %p C:Program FilesPostgreSQL archive_files%f"

So please help in setting me the correct path format.


from the fine documentation an example:

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p 
/mnt/server/archivedir/%f'  # Unix
archive_command = 'copy "%p" "C:\\server\\archivedir\\%f"'  # Windows


https://www.postgresql.org/docs/current/continuous-archiving.html


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Feature Requests

2020-10-25 Thread Andreas Kretschmer




Am 25.10.20 um 10:26 schrieb Nikolai Lusan:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA512

Hi,

I was wondering where I can see open feature requests. One I would like
to see is multi-master replication ... I did find a 2016 request that
was marked as "planned", but to the best of my knowledge it hasn't made
production yet.


- -- 
Nikolai Lusan 


Multi-Master exists with BDR, but it's not OpenSource. If you need more 
information, contact me private.



Regards, Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





bug in PG13?

2020-10-14 Thread Andreas Kretschmer

Hi all,

it seems to me to be a bug. I have a partitioned table:


test=*# select version();
                                                             version
---------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13.0 (Ubuntu 13.0-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
(1 row)

test=*# \d+ kunden
                                              Partitioned table "public.kunden"
   Column   |  Type   | Collation | Nullable |              Default               | Storage  | Stats target | Description
------------+---------+-----------+----------+------------------------------------+----------+--------------+-------------
 id         | integer |           | not null | nextval('kunden_id_seq'::regclass) | plain    |              |
 kundenname | text    |           | not null |                                    | extended |              |
 datum      | date    |           | not null |                                    | plain    |              |
Partition key: HASH (kundenname)
Indexes:
    "kunden_pkey" PRIMARY KEY, btree (id, kundenname, datum)
Partitions: kunden_0 FOR VALUES WITH (modulus 4, remainder 0), PARTITIONED,
            kunden_1 FOR VALUES WITH (modulus 4, remainder 1), PARTITIONED,
            kunden_2 FOR VALUES WITH (modulus 4, remainder 2), PARTITIONED,
            kunden_3 FOR VALUES WITH (modulus 4, remainder 3), PARTITIONED

test=*# \d+ kunden_0
                                             Partitioned table "public.kunden_0"
   Column   |  Type   | Collation | Nullable |              Default               | Storage  | Stats target | Description
------------+---------+-----------+----------+------------------------------------+----------+--------------+-------------
 id         | integer |           | not null | nextval('kunden_id_seq'::regclass) | plain    |              |
 kundenname | text    |           | not null |                                    | extended |              |
 datum      | date    |           | not null |                                    | plain    |              |
Partition of: kunden FOR VALUES WITH (modulus 4, remainder 0)
Partition constraint: satisfies_hash_partition('16574'::oid, 4, 0, kundenname)
Partition key: RANGE (datum)
Indexes:
    "kunden_0_pkey" PRIMARY KEY, btree (id, kundenname, datum)
Partitions: kunden_0_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            kunden_0_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            kunden_0_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            kunden_0_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            kunden_0_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            kunden_0_default DEFAULT

test=*#


this plan seems okay, partition pruning is working as expected:


test=*# explain analyse select * from kunden where kundenname = 'Kunde 11' and datum = current_date;
                                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.15..5030.91 rows=6 width=40) (actual time=10.068..14.326 rows=0 loops=1)
   Workers Planned: 1
   Workers Launched: 1
   ->  Parallel Append  (cost=0.15..4030.31 rows=6 width=40) (actual time=0.004..0.005 rows=0 loops=2)
         Subplans Removed: 5
         ->  Parallel Index Only Scan using kunden_0_default_pkey on kunden_0_default kunden_1  (cost=0.15..20.16 rows=1 width=40) (actual time=0.008..0.008 rows=0 loops=1)
               Index Cond: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Heap Fetches: 0
 Planning Time: 0.303 ms
 Execution Time: 14.364 ms
(10 rows)



but, if I switch ``parallel_leader_participation`` to off, the plan 
changes in a bad way:



test=*# set parallel_leader_participation to off;
SET
test=*# explain analyse select * from kunden where kundenname = 'Kunde 11' and datum = current_date;
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..4833.46 rows=6 width=21) (actual time=37.188..40.386 rows=0 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Append  (cost=0.00..3832.86 rows=2 width=21) (actual time=23.559..23.562 rows=0 loops=2)
         ->  Seq Scan on kunden_0_2016 kunden_2  (cost=0.00..1446.92 rows=1 width=17) (actual time=12.094..12.094 rows=0 loops=1)
               Filter: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Rows Removed by Filter: 60624
         ->  Seq Scan on kunden_0_2015 kunden_1  (cost=0.00..1445.22 rows=1 width=17) (actual time=10.313..10.313 rows=0 loops=1)
               Filter: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Rows Removed by Filter: 60527
         ->  Seq Scan on kunden_0_2

Re: Wal_keep_segment value too high

2020-07-10 Thread Andreas Kretschmer
On 10 July 2020 10:26:25 CEST, Brajendra Pratap Singh 
 wrote:
>Hi,
>
>What will happen if the wal_keep_segments value is too high ,is this

wasted disk space.

What do you want to achieve?


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: Postgresql HA Cluster

2020-06-29 Thread Andreas Kretschmer




Am 29.06.20 um 09:33 schrieb Laurenz Albe:

That would not provode a multi-master solution, though.  There are some
commercial solutions for that, but be warned that it would require non-trivial
changes to your application.


not really with BDR3 ;-)


Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: [HELP] query regarding replication

2020-06-09 Thread Andreas Kretschmer




Am 09.06.20 um 10:44 schrieb Praveen Kumar K S:

Thanks. Will this approach replicate DDL changes ?


sure.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: [HELP] query regarding replication

2020-06-09 Thread Andreas Kretschmer




Am 09.06.20 um 09:55 schrieb Praveen Kumar K S:
Can I achieve master/slave streaming replication by setting WAL_LEVEL 
to logical on master ? Are there any drawbacks of it ?


yes, no problem. the wal's would be a bit larger, that's all.
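
A minimal sketch; changing wal_level requires a restart:

# postgresql.conf on the master
wal_level = logical

postgres=# show wal_level;
 wal_level
-----------
 logical
(1 row)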


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Regarding creation of gin index on column that has varchar datatype

2020-05-23 Thread Andreas Kretschmer




Am 23.05.20 um 12:37 schrieb Durgamahesh Manne:

Hi

Respected to PGDG GLOBAL TEAM

I am getting this error( ERROR:  data type character varying has no 
default operator class for access method "gin"
HINT:  You must specify an operator class for the index or define a 
default operator class for the data type ) while i try to create gin 
index on vch_message column of slp01 table (CREATE INDEX ON slp01 
using gin(vch_message);)


vch_message column has lot of this info like 
{"requestBody":{"firstName":"SALVATORE","lastName":"NAPOLITANO","email":"remuda...@hotmail.com 
","personID":"C48268","ClientSSN":"153520698","advisorPersonID":["10214","15270","15271","15272"]},"header":{"cache-control":"no-cache","content-type":"application/json","authorization":"Basic 
Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json, 
text/json, text/x-json, text/javascript, application/xml, text/xml"






the string looks like JSON (but missing the correct end of the string). 
Please check. If you are storing JSON or JSONB values, you should use 
the proper datatype, JSON/JSONB. In this case you can create a 
GIN index on it.



Regards, Andreas
(and please don't email me private the same question)

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_dump crashes

2020-05-22 Thread Andreas Kretschmer




Am 22.05.20 um 14:37 schrieb Nico De Ranter:

Postgres version: 9.5


which minor-version?

Can you check if the table has TOAST-Tables? Can you try to select all 
columns but not TOASTed columns?

Maybe there is data-corruption only in toast-tables.
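
A minimal sketch to find the TOAST table ('your_table' is a placeholder):

select relname, reltoastrelid::regclass as toast_table
from pg_class
where relname = 'your_table';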


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Clarification related to BDR

2020-05-14 Thread Andreas Kretschmer




Am 14.05.20 um 06:37 schrieb Santhosh Kumar:
Can you please help me understand, why the following news is published 
in "postgresql" with an encouraging message acknowledging BDR as an 
open source? We invested time and effort to use BDR only to understand 
at a later point in time, that it is not. Kindly clarify, if I am 
missing anything. 


BDR version 1 was open source; versions 2 and 3 are not. Version 1 
(patched 9.4) and version 2 (community PG 9.6) are not under support 
now; the stable and supported version is 3 (PG 10 and 11, 12 soon).
You need a usage license, which is bundled with a diamond support 
subscription.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg11: Query using index, but only for half the partitions

2020-04-23 Thread Andreas Kretschmer




Am 23.04.20 um 12:30 schrieb Stefan Knecht:
There's no question that this is more expensive than just reading the 
95 rows from the index directly and returning them


not sure, you can play with

enable_seqscan = off

and compare the costs. What is the setting for random_page_cost ?
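
A minimal sketch for that comparison in one session (the WHERE clause is whatever your real query uses):

postgres=# show random_page_cost;
postgres=# set enable_seqscan = off;
postgres=# explain analyse select * from snap_20200225 s where ...;

then compare the estimated costs with the seq scan plan.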


Regards, Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg11: Query using index, but only for half the partitions

2020-04-23 Thread Andreas Kretschmer




Am 23.04.20 um 10:13 schrieb Stefan Knecht:
Seq Scan on snap_20200225 s  (cost=0.00..1.19 rows=1 width=12) 


the partition is very small, so it's cheaper to scan only the table (one 
block) than index + table (1 + 1 block).



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: PostgreSQL native multi-master

2020-04-07 Thread Andreas Kretschmer




Am 07.04.20 um 13:39 schrieb Vano Beridze:

Hello,

What are the plans to support multi-master natively?
What solution would you recommend at this point? preferably free.


BDR3 works well for our customers, but it isn't free. You can ask us for 
more information.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Real application clustering in postgres.

2020-03-09 Thread Andreas Kretschmer




Am 05.03.20 um 13:07 schrieb Laurenz Albe:

There is a closed-source implementation that you can buy:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/

But multi-master replication is complicated to get right, and
an application that uses it has to be specifically designed for that.


Depends. We have an automation tool for setup (TPAexec), and as long as 
you are not using special features like CAMO (Commit At Most Once), you 
can use applications without changes.

(you have to set some parameters, for instance for sequences)

It works well, we have a lot of happy customers. Some of them using it 
for globally distributed databases, other for high availability with 5 
nines.

Here you can find more information:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Who mades the inserts?

2020-03-09 Thread Andreas Kretschmer




Am 09.03.20 um 13:52 schrieb Durumdara:

Do you know any query which can show me the inserts per databases?

And I don't know it works as TPS query? So I need to make differents 
between measured values in two time point?


yes, you can use tup_inserted from pg_stat_database.
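
A minimal sketch; take two snapshots and compare:

postgres=# select datname, tup_inserted from pg_stat_database;

Run it again after some time; the difference per database is the number of rows inserted in that interval.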


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Backup & Restore

2020-02-24 Thread Andreas Kretschmer




Am 24.02.20 um 09:18 schrieb Dor Ben Dov:


Hi All,

What is your backup and restore solution in production when working 
with Postgres ?




most of our customers using Barman: https://www.pgbarman.org/


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: How to avoid UPDATE on same data in table ?

2020-02-02 Thread Andreas Kretschmer




Am 02.02.20 um 18:18 schrieb Tom Lane:


https://www.postgresql.org/docs/current/functions-trigger.html

regards, tom lane

cool.


Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: How to avoid UPDATE on same data in table ?

2020-02-02 Thread Andreas Kretschmer




Am 02.02.20 um 14:37 schrieb Andreas Kretschmer:



Am 02.02.20 um 10:24 schrieb Condor:

CREATE TRIGGER last_changes
  BEFORE UPDATE ON status_table
  FOR EACH ROW
  WHEN (OLD.* IS DISTINCT FROM NEW.*) 


try to exclude the column lastchange from the comparison.




test=*# select ctid, * from status_table ;
 ctid  | rowid | status0 |      lastchage
-------+-------+---------+---------------------
 (0,3) |    11 |       1 |
 (0,5) |    12 |       4 | 2020-02-02 15:40:42
(2 rows)

test=*# UPDATE status_table SET status0 = 4 WHERE rowid = 12;
UPDATE 1
test=*# commit;
COMMIT
test=# select ctid, * from status_table ;
 ctid  | rowid | status0 |  lastchage
---+---+-+-
 (0,3) |    11 |   1 |
 (0,6) |    12 |   4 | 2020-02-02 15:40:42
(2 rows)

test=*# \d status_table
                        Table "public.status_table"
  Column   |              Type              | Collation | Nullable | Default
-----------+--------------------------------+-----------+----------+---------
 rowid     | integer                        |           |          |
 status0   | integer                        |           |          |
 lastchage | timestamp(0) without time zone |           |          |
Triggers:
    last_changes BEFORE UPDATE ON status_table FOR EACH ROW WHEN 
(old.rowid IS DISTINCT FROM new.rowid OR old.status0 IS DISTINCT FROM 
new.status0) EXECUTE FUNCTION log_last_changed()



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: How to avoid UPDATE on same data in table ?

2020-02-02 Thread Andreas Kretschmer




Am 02.02.20 um 10:24 schrieb Condor:

CREATE TRIGGER last_changes
  BEFORE UPDATE ON status_table
  FOR EACH ROW
  WHEN (OLD.* IS DISTINCT FROM NEW.*) 


try to exclude the column lastchange from the comparison.
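
A minimal sketch of the changed trigger, listing the relevant columns explicitly instead of comparing OLD.* with NEW.* (it matches the \d output in the follow-up above):

CREATE TRIGGER last_changes
  BEFORE UPDATE ON status_table
  FOR EACH ROW
  WHEN (OLD.rowid IS DISTINCT FROM NEW.rowid
     OR OLD.status0 IS DISTINCT FROM NEW.status0)
  EXECUTE FUNCTION log_last_changed();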


Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_stat_statements extension

2020-01-16 Thread Andreas Kretschmer
On 13 January 2020 20:15:21 CET, Rushikesh socha  wrote:
>HI, Is there any good link that shows how to install pg_stat_statements
>extension
>I am getting below error
>
>postgres=# CREATE EXTENSION pg_stat_statements;
>ERROR:  could not open extension control file
>"/usr/pgsql-11/share/extension/pg_stat_statements.control": No such
>file or
>directory
>
>
>Thanks.



Have you installed the contrib package and also loaded the shared lib via 
postgresql.conf?
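
A minimal sketch of both steps (the package name varies by distribution, e.g. postgresql11-contrib on RHEL/CentOS; the server must be restarted after changing shared_preload_libraries):

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'

postgres=# CREATE EXTENSION pg_stat_statements;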

Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: Backup and Restore

2019-12-25 Thread Andreas Kretschmer




Am 25.12.19 um 14:34 schrieb Dor Ben Dov:


Hi All,

What Is the best recommended / used tool for backup and restore that 
you suggest or work with postgres ?



depends on your needs, most of our customers using barman.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Backup and Restore

2019-12-12 Thread Andreas Kretschmer




Am 12.12.19 um 16:12 schrieb Dor Ben Dov:

What is the most common used back and restore solution for postgres ?


most of our customers are using Barman, which is not a surprise since it 
is developed by us ;-)



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Upgrade PostgreSQL 9.6 to 10.6

2019-12-10 Thread Andreas Kretschmer




Am 09.12.19 um 23:37 schrieb github kran:
Great, thanks Andreas, So this seems to be a good feature using the 
core concept of replication. Can I use this extension and do the major 
upgrade without paying ?.


yes, this extension is free.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Upgrade PostgreSQL 9.6 to 10.6

2019-12-09 Thread Andreas Kretschmer




Am 09.12.19 um 16:43 schrieb github kran:

Hello PostgreSQL Team,
I would like to know what would be the best way to do Database 
migration from PostgreSQL 9.6 engine to 10.6 by creating a new cluster 
in 10.6 and then copy data.


Size of the cluster is 3.8 TB.

1) It would be a new cluster we want to create on 10.6 and then copy 
data form 9.6 and shutdown 9.6
2) Cluster today contains 1 master instance and another instance for 
reader
3) Can this be done without a downtime ?  what are the options I have 
to do this without making this complicated . We have about 30 -40 
tables to do the migration.





yes, you can use pglogical from us (2ndQuadrant). Please visit our 
homepage; there you can find it for 9.6. There are also some 
blog postings about all that in our blog.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: security on user for replication

2019-11-11 Thread Andreas Kretschmer




Am 11.11.19 um 14:26 schrieb PegoraroF10:
How can I hide that info from users which are connected to my replica 
server



you can use a .pgpass file, see the documentation.
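
A minimal sketch of a ~/.pgpass line (all values are placeholders; the file must have mode 0600):

# hostname:port:database:username:password
primary.example.com:5432:mydb:replicator:secret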


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Can you please suggest how to configure hot_standby_feedback?

2019-10-31 Thread Andreas Kretschmer




Am 31.10.19 um 06:21 schrieb M Tarkeshwar Rao:


Can you please suggest how to configure hot_standby_feedback?




turn it on if you want to execute long-running queries on the standby; keep 
in mind it can lead to more bloat on the master.
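
A minimal sketch; the parameter only needs a reload and is set on the standby:

postgres=# alter system set hot_standby_feedback = on;
postgres=# select pg_reload_conf();
postgres=# show hot_standby_feedback;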



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Andreas Kretschmer
On 18 October 2019 07:59:21 CEST, Daulat Ram  wrote:
>Hello All,
>Can you please share some ideas and scenarios how we can do the PITR in
>case of disaster.
>
>
>Thanks,


Consider Barman.


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: GPS coordinates problem

2019-10-08 Thread Andreas Kretschmer




Am 08.10.19 um 12:50 schrieb Timmy Siu:
Now, I need Global Position System coordinates as a data type. How do 
I define it in Postgresql 11 or 12?


consider PostGIS.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Postgres 12: backend crashes when creating non-deterministic collation

2019-10-04 Thread Andreas Kretschmer




Am 04.10.19 um 12:13 schrieb Thomas Kellerer:
I was trying to learn how the new non-deterministic collations in v12 
work, but the following makes the backend crash:


CREATE COLLATION de_ci (provider = icu, locale = 'de-x-icu', 
deterministic = false);


Which leads to:

2019-10-04 11:54:23 CEST   LOG:  server process (PID 7540) was 
terminated by exception 0xC0000005

2019-10-04 11:54:23 CEST   DETAIL:  Failed process was running:
CREATE COLLATION de_ci (provider = icu, locale = 'de-x-icu', 
deterministic = false)
2019-10-04 11:54:23 CEST   HINT:  See C include file "ntstatus.h" for 
a description of the hexadecimal value.
2019-10-04 11:54:23 CEST   LOG:  terminating any other active server 
processes
2019-10-04 11:54:23 CEST   WARNING:  terminating connection because of 
crash of another server process
2019-10-04 11:54:23 CEST   DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because 
another server process exited abnormally and possibly corrupted shared 
memory.


This is on Windows 10 with the Postgres 12 binaries from EDB.
Exact Postgres version is: PostgreSQL 12.0, compiled by Visual C++ 
build 1914, 64-bit

The database was pg_upgraded if that makes any difference




works for me, with:

psql (12rc1 (Ubuntu 12~rc1-1.pgdg18.04+1))
Type "help" for help.

test=# CREATE COLLATION de_ci (provider = icu, locale = 'de-x-icu', 
deterministic = false);

CREATE COLLATION
test=*# commit;
COMMIT
test=#


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_receivexlog or archive_command

2019-09-23 Thread Andreas Kretschmer




Am 23.09.19 um 13:44 schrieb Luca Ferrari:

On Mon, Sep 23, 2019 at 10:55 AM Andreas Kretschmer
 wrote:

you can use both of them, and you should consider "Barman".

If I remember well Barman uses pg_receivexlog when streaming, and
archive_command when doing a "normal" backup.


Barman before version 2 could only use archive_command; version 2 and higher can use both.


Also pgbackrest is another good tool for backup.
The idea here should be not reinventing the wheel.

ack.

Regards, Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_receivexlog or archive_command

2019-09-23 Thread Andreas Kretschmer




Am 23.09.19 um 10:25 schrieb Vikas Sharma:

Hi,

I am wondering which one is the best way to archive the xlogs for 
Backup and Recovery - pg_receivexlog or archive_command.


pg_receivexlog seems best suited because the copied/archived file is 
streamed as it is being written, while archive_command only copies a 
WAL segment once it has been completely written.




you can use both of them, and you should consider "Barman".


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: BDR: moving a node

2019-08-25 Thread Andreas Kretschmer




Am 25.08.19 um 18:11 schrieb E:
What is the process to update the DSN? I assume I'll have to relay the 
changes in my pg_hba.conf, but do not understand, and don't want to 
tinker, with BDR before obtaining some educated advice.


I apologize if my question comes across as dumb. I understand I have 
to update bdr.bdr_connections, but how, and what do I restart? What 
about the server that was moved, any updates needed there?


the safest way would be to part that node from the cluster and rejoin it 
afterwards.


You have (at least) to check all replication slots; please also check 
pg_replication_identifier (use pg_replication_identifier_drop() to drop 
the node), bdr.bdr_nodes and bdr.bdr_connections. Please keep in mind 
that BDR version 1 will be out of support soon, and PG 9.4 as well. 
Current development of BDR is BDR 3.6.x; it works with PG 10+.
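
A rough sketch of the part/rejoin cycle with the BDR 1 API (node names and 
DSNs are placeholders, please double-check against your setup):

-- on a surviving node, part the node that moved:
SELECT bdr.bdr_part_by_node_names(ARRAY['node2']);

-- check the leftovers mentioned above:
SELECT * FROM bdr.bdr_nodes;
SELECT * FROM pg_replication_slots;

-- on the moved server, rejoin with the new DSN:
SELECT bdr.bdr_group_join(
    local_node_name   := 'node2',
    node_external_dsn := 'host=newhost dbname=mydb',
    join_using_dsn    := 'host=node1 dbname=mydb'
);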




Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: how to upgrade production PostgreSQL from 9.4 to 11.3

2019-06-12 Thread Andreas Kretschmer




Am 12.06.19 um 14:50 schrieb Rahul Chordiya:

postgres=#
postgres=# select subscription_name, status FROM 
pglogical.show_subscription_status();

 subscription_name | status
---+
(0 rows)

postgres=# select pglogical.create_subscription(subscription_name := 
'subscription1', provider_dsn := 'host=10.101.40.70 port=5432 
dbname=registration user=repuser password=password', replication_sets 
:= '{my_set}');
*ERROR:  could not fetch remote node info: ERROR:  schema "pglogical" 
does not exist*


What should I do?



you have created the replication set (on the provider) in the 
postgres database, but now you are referencing another database, 
registration. The replication set is not defined there.
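
A minimal sketch of what has to happen on the provider, connected to the 
registration database itself (DSN taken from your example):

\c registration
CREATE EXTENSION pglogical;
SELECT pglogical.create_node(
    node_name := 'provider1',
    dsn := 'host=10.101.40.70 port=5432 dbname=registration user=repuser'
);
SELECT pglogical.create_replication_set('my_set');

Afterwards the create_subscription() call from your mail should find the 
pglogical schema on the remote side.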



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Featured Big Name Users of Postgres

2019-06-11 Thread Andreas Kretschmer
On 11 June 2019 19:45:27 CEST, Igal Sapir  wrote:
>I'm doing a presentation about Postgres to SQL Server users this
>weekend,
>and I want to showcase some of the big names that use Postgres, e.g.
>MasterCard, Government agencies, Banks, etc.
>
>There used to be a Wiki page of Featured Users but that link is broken
>now.
>
>I also "found" a page about MasterCard at the EnterpriseDB website, but
>that one throws an error with too many redirects.
>
>Any suggestions on where to find a recent user list?  Thanks!


Not so easy, we have a lot of big customers, but we are not allowed to tell the 
names ...

You can visit our page, we have some case studies ...


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: Drive Architecture for new PostgreSQL Environment

2019-06-10 Thread Andreas Kretschmer




Am 10.06.19 um 18:35 schrieb Hilbert, Karin:


We did this as a precaution against disk failure.  If we lose one, we 
would still have the other two to recover from.


Is that really necessary anymore, with having a repmgr cluster?



Repmgr is for HA, not for Backup/Recovery.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: table is hanging

2019-05-31 Thread Andreas Kretschmer




Am 31.05.19 um 14:06 schrieb Saurabh Agrawal:

Which query are you trying to run?


you could also show us the EXPLAIN output.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: with and trigger

2019-05-29 Thread Andreas Kretschmer


>
>Are CTEs still optimization fences?
>https://www.2ndquadrant.com/en/blog/postgresql-ctes-are-optimization-fences/

Yes, but not in 12.
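
In 12 a side-effect-free CTE that is referenced only once is inlined by 
default, and the old fence behaviour can still be forced per query; a 
minimal sketch (table name is made up):

-- PostgreSQL 12 inlines this CTE into the outer query:
WITH w AS (SELECT * FROM big_table)
SELECT * FROM w WHERE id = 1;

-- the pre-12 fence behaviour, forced explicitly:
WITH w AS MATERIALIZED (SELECT * FROM big_table)
SELECT * FROM w WHERE id = 1;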


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: POSTGRES_FSM_RELATIONS CRITICAL: DB control fsm relations used: 79569 of 80000 (99%)

2019-05-28 Thread Andreas Kretschmer
On 28 May 2019 20:20:10 CEST, Julie Nishimura  wrote:
>What is the impact of fsm_relations being maxed out?

https://www.postgresql.org/docs/8.2/runtime-config-resource.html#RUNTIME-CONFIG-RESOURCE-FSM

Please no top-posting with fullquote.


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: How to search using daterange (using gist)

2019-05-16 Thread Andreas Kretschmer




Am 16.05.19 um 11:57 schrieb Winanjaya Amijoyo:

Hi All,

I have records as below that I inserted using exclusion gist constraint

user_id    start_date      end_date         pid
001          2019-01-01    2019-02-10        1
001          2019-02-01    2019-03-12        2
001          2019-03-05    2019-06-15        3

I need to find records based on date range, so for example:

if I search using parameters 001, 2019-01-10, 2019-02-11 .. it will 
be found as pid 1
if I search using parameters 001, 2019-03-10, 2019-07-13 .. it will 
be found as pid 3

but if I search out of the date range, for example:
if I search using parameters 001, 2019-07-10, 2019-09-13 .. nothing 
will be found (pid 0)


how to search with the above scenario in postgresql?




can you please show us the table-definition? Are you sure there is an 
exclusion constraint?


with your data:

test=*# select * from demo;
 user_id | start_date |  end_date  | pid
---------+------------+------------+-----
       1 | 2019-01-01 | 2019-02-10 |   1
       1 | 2019-02-01 | 2019-03-12 |   2
       1 | 2019-03-05 | 2019-06-15 |   3
(3 rows)

test=*# select * from demo where daterange(start_date, end_date,'[)') && 
daterange('2019-01-10','2019-02-11');

 user_id | start_date |  end_date  | pid
---------+------------+------------+-----
       1 | 2019-01-01 | 2019-02-10 |   1
       1 | 2019-02-01 | 2019-03-12 |   2
(2 rows)

test=*# select * from demo where daterange(start_date, end_date,'[)') && 
daterange('2019-03-10','2019-07-13');

 user_id | start_date |  end_date  | pid
---------+------------+------------+-----
       1 | 2019-02-01 | 2019-03-12 |   2
       1 | 2019-03-05 | 2019-06-15 |   3
(2 rows)

test=*# select * from demo where daterange(start_date, end_date,'[)') && 
daterange('2019-07-10','2019-09-13');

 user_id | start_date | end_date | pid
---------+------------+----------+-----
(0 rows)

test=*#

test=*# \d demo
              Table "public.demo"
   Column   |  Type   | Collation | Nullable | Default
------------+---------+-----------+----------+---------
 user_id    | integer |           |          |
 start_date | date    |           |          |
 end_date   | date    |           |          |
 pid        | integer |           |          |

test=*#


Regards, Andreas



--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Import Database

2019-05-06 Thread Andreas Kretschmer




Am 05.05.19 um 19:26 schrieb Ron:

On 5/5/19 12:20 PM, Andreas Kretschmer wrote:



Am 05.05.19 um 18:47 schrieb Sathish Kumar:
Is there a way to speed up the importing process by tweaking 
Postgresql config like maintenance_workmem, work_mem, shared_buffers 
etc.,


sure, take the dump in custom-format and use pg_restore with -j 
<number-of-jobs>.


Custom format or directory format?


both are possible for that.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Import Database

2019-05-05 Thread Andreas Kretschmer




Am 05.05.19 um 18:47 schrieb Sathish Kumar:
Is there a way to speed up the importing process by tweaking 
Postgresql config like maintenance_workmem, work_mem, shared_buffers etc.,


sure, take the dump in custom-format and use pg_restore with -j 
<number-of-jobs>. You can increase maintenance_work_mem, maybe to 4GB, to 
speed up index creation.
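
A minimal sketch (database name and job count are placeholders; the target 
database has to exist):

pg_dump -Fc -f mydb.dump mydb
pg_restore -j 4 -d mydb mydb.dump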



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Multicolumn index for single-column queries?

2019-04-18 Thread Andreas Kretschmer




Am 18.04.19 um 08:52 schrieb rihad:

Hi. Say there are 2 indexes:

    "foo_index" btree (foo_id)

    "multi_index" btree (foo_id, approved, expires_at)


foo_id is an integer. Some queries involve all three columns in their 
WHERE clauses, some involve only foo_id.
Would it be ok from general performance standpoint to remove foo_index 
and rely only on multi_index? I know that
PG would have to do less work updating just one index compared to 
updating them both, but wouldn't searches
on foo_id alone become slower? 


it depends.

it depends on the queries you are using and on your workload. a 
multi-column index will be larger than an index over just one column,

therefore you will have more disk I/O when you read from such an index.
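
You can compare the two sizes easily, for example:

test=# select pg_size_pretty(pg_relation_size('foo_index'));
test=# select pg_size_pretty(pg_relation_size('multi_index'));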


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: SQLSTATE when PostgreSQL crashes during COMMIT statement

2019-04-15 Thread Andreas Kretschmer




Am 15.04.19 um 12:41 schrieb Francisco Olarte:

On Mon, Apr 15, 2019 at 4:11 AM Takahashi, Ryohei
 wrote:

If application executes COMMIT statement and COMMIT failes because of 
PostgreSQL crash,
it is unknown whether the transaction is really committed.
Therefore, I think application should check the transaction is really committed 
after a while when certain SQLSTATE is returned.
Which SQLSTATE should application check? Or, is there any document which is 
written about this?
In my observation, ecpg returns '57P02' and libpq returns 'null' when 
PostgreSQL crashes during COMMIT statement.
Any other SQLSTATE?

My math is rusty, but I doubt you can reliably detect whether a commit
failed. Detecting full success is easy, you get a correct code. Commit
failure without a server crash is easy too, you get a fail code. But if the
library has sent the commit message but not received the ok/fail code,
as the server has to 1st persist the query to disk and 2nd send the
result (OK/FAIL) back, you never know what has happened. So, if the
library gives you an error ( "I have not received the commit
confirmation", not something like "commit failed" ), you will still
need to test the data, if you can, to know how the server came back
up.




since PG 10 you can check the status of a transaction, here is an 
explanation:

https://blog.2ndquadrant.com/postgresql-10-transaction-traceability/
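
A minimal sketch with txid_status() (the txid value is just an example):

test=# begin;
BEGIN
test=*# select txid_current();
 txid_current
--------------
          568
(1 row)

test=*# commit;
COMMIT
test=# select txid_status(568);
 txid_status
-------------
 committed
(1 row)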

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: SELECT query fails after pg_upgrade as the conditional operator fails

2019-04-13 Thread Andreas Kretschmer




Am 13.04.19 um 11:22 schrieb Nithin Johnson:


We are seeing this intermittent problem after we upgrade (using 
pg_upgrade) from postgres 9.3.12 to 9.6.12


Querying  few of the rows in the table using a TEXT field is failing.



sounds like a corrupt index, can you show us the complete error message 
and the table definition?



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Table Export & Import

2019-03-31 Thread Andreas Kretschmer
On 1 April 2019 08:09:37 CEST, Sathish Kumar  wrote:
>Hi Team,
>
>We have a requirement to copy a table from one database server to
>another
>database server. We are looking for a solution to achieve this with
>lesser
>downtime on Prod. Can you help us with this?
>
>Table Size: 160GB
>Postgresql Server Version: 9.5

You can take a dump at any time without downtime. Another solution would be 
logical replication using, for instance, slony, londiste or pglogical from 
2ndQuadrant.
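
For the dump variant, a minimal sketch (host, database and table names are 
placeholders):

pg_dump -h srchost -Fc -t mytable sourcedb > mytable.dump
pg_restore -h dsthost -d targetdb mytable.dump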


Regards, Andreas




-- 
2ndQuadrant - The PostgreSQL Support Company




Re: Archival process of partition tables with filtering few rows from tables.

2019-03-28 Thread Andreas Kretschmer
On 29 March 2019 05:13:31 CET, github kran  wrote:
>Hello Team,
>
>We are using PostgreSQL Version 9.6 and planning to archive our
>partition
>tables containing about 300 - 500 million rows . We have around ~ 50
>partition tables to be archived to a new
>cold path PostgreSQL database , version 10.6.

Consider Version 11 instead, much better features for partitioning.


 We have a requirement to
>filter a few rows before exporting this data from these tables as we don't
>want to archive those rows.
>
> What is a better approach to export and restore these tables ?.
>
>- Does COPY Command with a filter query to filter few rows using select
>   works better ?.
>   - pg_dump with filtering these rows ?.
> - Can i able to export my indexes , check constraints , constraints ?.
>   - Any other options ?.

Yeah, consider logical replication using pglogical from us with row filtering.
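
A rough sketch of the row filtering (set name, table and filter expression 
are placeholders):

SELECT pglogical.replication_set_add_table(
    set_name   := 'archive_set',
    relation   := 'public.events'::regclass,
    row_filter := 'archived = false'
);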



Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: Case Insensitive

2019-03-28 Thread Andreas Kretschmer




Am 28.03.19 um 09:33 schrieb Sameer Kumar:


test=*# select * from emp where ename = 'aaa';
  eid | ename
 -----+-------
    1 | aaa
    2 | AAA
 (2 rows)



Ummm... Will it use an index (a BTree index)?



test=# explain select * from emp where ename = 'aaa';
                      QUERY PLAN
------------------------------------------------------
 Seq Scan on emp  (cost=0.00..25.88 rows=6 width=36)
   Filter: (ename = 'aaa'::citext)
(2 rows)

test=*# set enable_seqscan to off;
SET
test=*# explain select * from emp where ename = 'aaa';
                                QUERY PLAN
--------------------------------------------------------------------------
 Seq Scan on emp  (cost=10000000000.00..10000000025.88 rows=6 width=36)
   Filter: (ename = 'aaa'::citext)
(2 rows)

test=*# create index emp_ename on emp(ename);
CREATE INDEX
test=*# explain select * from emp where ename = 'aaa';
                               QUERY PLAN
----------------------------------------------------------------------
 Index Scan using emp_ename on emp  (cost=0.13..8.14 rows=1 width=36)
   Index Cond: (ename = 'aaa'::citext)
(2 rows)

test=*#


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Case Insensitive

2019-03-28 Thread Andreas Kretschmer




Am 28.03.19 um 09:20 schrieb Sridhar N Bamandlapally:

Hi PG-General and Pgsql-Admin

Can we achieve CASE INSENSITIVE in PostgreSQL?


test=# create extension citext;
CREATE EXTENSION
test=*# create table emp (eid int, ename citext);
CREATE TABLE
test=*# insert into emp values (1, 'aaa');
INSERT 0 1
test=*# insert into emp values (2, 'AAA');
INSERT 0 1
test=*# select * from emp where ename = 'aaa';
 eid | ename
-----+-------
   1 | aaa
   2 | AAA
(2 rows)


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Conditional INSERT

2019-03-15 Thread Andreas Kretschmer




Am 15.03.19 um 18:55 schrieb basti:

Hello,

I want to insert data into table only if condition is true.
For example:

INSERT into  mytable (domainid, hostname, txtdata)
   VALUES (100,'_acme.challenge.example', 'somedata');

The insert should only be done if Hostname like %_acme.challenge%.




you can use a check-constraint:

create table mytable(hostname text, check(hostname like 
'%_acme.challenge%'));



I think you can see the idea...
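
A short demonstration (output sketched):

test=# insert into mytable values ('_acme.challenge.example');
INSERT 0 1
test=# insert into mytable values ('www.example.org');
ERROR:  new row for relation "mytable" violates check constraint 
"mytable_hostname_check"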


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Ran out of memory retrieving query results.

2019-03-11 Thread Andreas Kretschmer




Am 11.03.19 um 06:44 schrieb Nanda Kumar:

Hello Team,

Can you please help with the below issue. The below error occurred when I ran 
the select statement for a huge data volume.

  Error Details :

  Ran out of memory retrieving query results.


you should provide more details, for instance (at least)
* os and pg version
* how much RAM the machine contains
* config-parameters (shared_buffers, work_mem, max_connections)
* running activities
* the query itself
* table-definition
* how large are the tables
* the EXPLAIN of the query
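
You can gather some of that quickly from psql, for instance:

test=# select version();
test=# show shared_buffers;
test=# show work_mem;
test=# show max_connections;
test=# explain select ... ;   -- the failing query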


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Postgres 10 temp tablespace question

2019-03-09 Thread Andreas Kretschmer




Am 09.03.19 um 02:05 schrieb Joseph Dunleavy:


I am building a multi-tenant deployment with multiple database - 1 
tenant per database.


I would like to be able to dedicate specific temp tablespace to 
a specific database or user/schemas.


I understand how to define temp_tablespace in postgresql.conf.


I can't find anything in the documentation on how to dedicate a 
specific temp_tablespaces to a specific database, user or schema.


I also thought maybe I could create a trigger on logon to set a 
specific temp tablespace per user, but I can't do that either.



Is it possible in Postgresql to dedicate a specific temp tablespace to 
a specific database or user/schema?





yes:

test=*# create tablespace tmp_tbsp1 location '/tmp/tbsp1';
ERROR:  CREATE TABLESPACE cannot run inside a transaction block
test=*# commit;
COMMIT
test=# create tablespace tmp_tbsp1 location '/tmp/tbsp1';
CREATE TABLESPACE
test=# create tablespace tmp_tbsp2 location '/tmp/tbsp2';
CREATE TABLESPACE
test=# create tablespace tmp_tbsp3 location '/tmp/tbsp3';
CREATE TABLESPACE
test=# create user usr1;
CREATE ROLE
test=*# create user usr2;
CREATE ROLE
test=*# create user usr3;
CREATE ROLE
test=*# alter user usr1 set temp_tablespaces = 'tmp_tbsp1';
ALTER ROLE
test=*# alter user usr2 set temp_tablespaces = 'tmp_tbsp2';
ALTER ROLE
test=*# alter user usr3 set temp_tablespaces = 'tmp_tbsp3';
ALTER ROLE
test=*#


test=*# show temp_tablespaces;
 temp_tablespaces
------------------

(1 row)

test=*# commit;
COMMIT
test=# \c - usr2;
psql (11.1 (Ubuntu 11.1-3.pgdg16.04+1), server 9.5.15)
You are now connected to database "test" as user "usr2".
test=> show temp_tablespaces;
 temp_tablespaces
------------------
 tmp_tbsp2
(1 row)

test=*>



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Non-pausing table scan on 9.6 replica?

2019-03-05 Thread Andreas Kretschmer




Am 06.03.19 um 06:41 schrieb Mark Fletcher:

Thank you for responding to my email.

On Tue, Mar 5, 2019 at 9:20 PM Andreas Kretschmer 
<andr...@a-kretschmer.de> wrote:



have you set `max_standby_streaming_delay`? The default is 30
seconds, which means that this will be the maximum time allowed for a
replication lag caused by a conflicting query.


Yes, we've bumped that up a lot.


i tought so.



You can use `hot_standby_feedback = on`, but the downside will
be more
bloat on the tables.

I'm not sure I understand. I'm not worried about the query being 
cancelled on the replica. max_standby_streaming_delay fixes that for 
us. I'm worried about the streaming replication being paused while 
this table scan is running. If the table scan takes several minutes, 
then the replica becomes several minutes out of sync with the master. 
I'd prefer that not to happen and I'm wondering if there's a way to do 
that.




You can choose: max_standby_streaming_delay, with a delay in the streaming 
(hence the name), or hot_standby_feedback, with the downside of possibly 
more bloat.



Regards, Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: write on standby

2019-03-05 Thread Andreas Kretschmer
On 6 March 2019 06:26:45 CET, Julie Nishimura  wrote:
>Thank you. Are you going to have any presentations on Postgresql
>conference in NYC soon?
>
>
>From: Andreas Kretschmer 
>Sent: Tuesday, March 5, 2019 9:16 PM
>To: pgsql-general@lists.postgresql.org
>Subject: Re: write on standby
>
>
>
>Am 06.03.19 um 00:34 schrieb Julie Nishimura:
>> Hello there,
>> Is it possible for a test app to connect to the standby dB of an
>> active-Standby dB pair?
>
>that's possible, but ...
>
>
>> While both continue to be connected and replicating? What if it’s
>> needed to write tmp tables that are later dropped?
>
>... that's not. Not with streaming replication, but ...
>
>
>> Can it be done in hot standby scenario on standby? Version is 9.6.2
>>
>
>... it would be possible with logical replication (trigger-based
>solutions like slony or londiste or logical replication using pglogical
>from us, 2ndQ).
>
>https://www.2ndquadrant.com/en/resources/pglogical/
>
>
>You can also use BDR2 with PG9.6, but this is only available for our
>customers.
>
>
>Regards, Andreas
>
>--
>2ndQuadrant - The PostgreSQL Support Company.
>www.2ndQuadrant.com

Hi,

I think so, if you want i can organize a contact with someone from our company.


Andreas (from germany)
-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Non-pausing table scan on 9.6 replica?

2019-03-05 Thread Andreas Kretschmer




Am 06.03.19 um 01:26 schrieb Mark Fletcher:

Hi All,

On a 9.6 streaming replica, we do table scans for stats and other 
things. During these scans, the replication is paused (the 
'recovering' postgres process has 'waiting' appended to it). We're not 
using transactions with these scans. Is there anything we can do to 
prevent the pausing?






have you set `max_standby_streaming_delay`? The default is 30
seconds, which means that this will be the maximum time allowed for a 
replication lag caused by a conflicting query.
You can use `hot_standby_feedback = on`, but the downside will be more
bloat on the tables.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: write on standby

2019-03-05 Thread Andreas Kretschmer




Am 06.03.19 um 00:34 schrieb Julie Nishimura:

Hello there,
Is it possible for a test app to connect to the standby dB of an 
active-Standby dB pair?


that's possible, but ...


While both continue to be connected and replicating? What if it’s 
needed to write tmp tables that are later dropped?


... that's not. Not with streaming replication, but ...



Can it be done in hot standby scenario on standby? Version is 9.6.2



... it would be possible with logical replication (trigger-based 
solutions like slony or londiste or logical replication using pglogical 
from us, 2ndQ).


https://www.2ndquadrant.com/en/resources/pglogical/


You can also use BDR2 with PG9.6, but this is only available for our 
customers.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: LIMIT not showing all results

2019-03-05 Thread Andreas Kretschmer




Am 05.03.19 um 19:41 schrieb Casey Deccio:


On Mar 5, 2019, at 10:37 AM, Andreas Kretschmer 
<andr...@a-kretschmer.de> wrote:


no, but you can set enable_indexscan to off and maybe also 
enable_bitmapscan to off to force the planner to choose a seq-scan.

I'm sure in this case you will get a correct result.


So this (setting enable_indexscan to off) might be a short-term fix, 
while reindexing is the more robust fix?




yes, sure. The index is corrupt.
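
Assuming the suspect index is the analysis_name_date_key mentioned in this 
thread, the robust fix would be a sketch like:

test=# reindex index analysis_name_date_key;
REINDEX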


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: [External] LIMIT not showing all results

2019-03-05 Thread Andreas Kretschmer




Am 05.03.19 um 19:09 schrieb Matthew Pounsett:



On Tue, 5 Mar 2019 at 12:54, Tom Lane <t...@sss.pgh.pa.us> wrote:


    Andreas Kretschmer <andr...@a-kretschmer.de> writes:
> the other thing is, it would be nice to to know why the index is
corrupt.

Given that (a) this was triggered by a server migration and (b)
the leading column of the index looks like it's probably varchar,
I'm suspicious that the new server has different collation behavior.


The migration in question was an rsync from a Debian 9 box 
running 9.4.19-0+deb8u1 to a FreeBSD 11 box  running 9.4.20.


yeah, check and compare the glibc version on both systems (ldd --version).


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: [External] LIMIT not showing all results

2019-03-05 Thread Andreas Kretschmer
On 5 March 2019 18:54:33 CET, Tom Lane  wrote:
>Andreas Kretschmer  writes:
>> the other thing is, it would be nice to to know why the index is
>corrupt.
>
>Given that (a) this was triggered by a server migration and (b)
>the leading column of the index looks like it's probably varchar,
>I'm suspicious that the new server has different collation behavior.
>
>   regards, tom lane


yes, sounds reasonable.



Regards, Andreas.



-- 
2ndQuadrant - The PostgreSQL Support Company



Re: [External] LIMIT not showing all results

2019-03-05 Thread Andreas Kretschmer




Am 05.03.19 um 17:51 schrieb Vijaykumar Jain:

Thanks Tom.


I mean if the instance is a test instance,
probably analysis_name_date_key can be dropped and the query can be
run again so as to check if it still returns the correct rows.
or create an index in parallel with the same col as
analysis_name_date_key and check if the optimizer chooses the right
index,
and then come to the conclusion of a bad index.

Also is there an option where we can force a particular index to be used ?



no, but you can set enable_indexscan to off and maybe also 
enable_bitmapscan to off to force the planner to choose a seq-scan.

I'm sure in this case you will get a correct result.
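
A minimal sketch of that session:

test=# set enable_indexscan to off;
SET
test=# set enable_bitmapscan to off;
SET
-- now re-run the query; the planner will fall back to a seq-scan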

the other thing is, it would be nice to know why the index is corrupt.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




  1   2   3   >