> Not sure why it would matter that the pg_dump connection is read-only;
> this is about locking, because
> pg_dump needs to ensure the schema does not change while it's running.
> pg_dump does not do
>
> LOCK TABLE gm.tableabc;
>
> but
>
> LOCK TABLE gm.tableabc IN ACCESS SHARE MODE;
>
> Wh
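The distinction matters because the two forms take very different lock levels. A minimal sketch (the table name is taken from the thread):

```sql
-- Without an explicit mode, LOCK TABLE takes ACCESS EXCLUSIVE,
-- which blocks all concurrent access, including plain SELECTs:
BEGIN;
LOCK TABLE gm.tableabc;  -- equivalent to IN ACCESS EXCLUSIVE MODE
ROLLBACK;

-- pg_dump only needs to keep the schema stable, so it takes
-- ACCESS SHARE, which conflicts only with ACCESS EXCLUSIVE
-- (DROP TABLE, many ALTER TABLE forms), not with normal reads and writes:
BEGIN;
LOCK TABLE gm.tableabc IN ACCESS SHARE MODE;
ROLLBACK;
```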
Please ignore this thread. After several days of debugging I found a bug in
my application: data became misaligned when internal buffers were
reallocated. After fixing the application, it all works as expected. Sorry
for the false alarm.
On Mon, Oct 7, 2019 at 7:03 PM Konstantin Izmailov wrote:
> Toma
Tomas, thank you for your reply! I cannot upload 2100+ xml files. Some of
them are huge.
I'm not sure whether libpq uses libxml2 on Windows. In the debugger I see
very strange behavior in pqsecure_write: it seems to stop sending data
from the provided buffer after 9,100,000 bytes.
I hoped that someo
Tomas Vondra writes:
> On Mon, Oct 07, 2019 at 11:11:43AM -0400, Ravi Krishna wrote:
>>https://postgresapp.com/downloads.html
>>
>>The link which says PG 12 is actually PG 11.
>>
>
> Not sure if the link is correct or not (it seems to point to
> Postgres-2.3-12.dmg, which seems like it might be v
On 10/7/19 6:17 PM, Michael Lewis wrote:
No, what you want is not possible, and I would expect it probably never will be.
Sure it is. Maybe not the (weird) way that Postgres does partitioning, but
the legacy RDBMS that I still occasionally maintain has for at least 25
years had partition key ind
No, what you want is not possible, and I would expect it probably never
will be. Scanning every partition to validate the primary key isn't scalable.
>
> Hi
>>
>>By inserting data into a JSONB column I got the following error
>> message:
>>
>> ERROR: string too long to represent as jsonb string
>>
>> DETAIL: Due to an implementation restriction, jsonb strings cannot
>> exceed 268435455 bytes.
>>
>> could anyo
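For reference, the limit quoted in the DETAIL line is 2^28 - 1 bytes (jsonb stores each element's length in a 28-bit field), so oversized values can be screened out before insert. A sketch with assumed table and column names:

```sql
-- 268435455 = 2^28 - 1 bytes, the maximum length of a single jsonb string.
-- Skip rows whose text would exceed the limit (names assumed):
INSERT INTO docs (payload)
SELECT jsonb_build_object('body', big_text)
FROM staging
WHERE octet_length(big_text) < 268435455;
```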
>
> My JSON always has a status
>
Why declare the "where" clause when creating the index? It would not seem
needed if status is always set, since the index would then reference all
rows in the table anyway.
Thanks for sharing the trick of having the second column in the index
determine the key based on the f
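To make the point concrete, a partial index only pays off when its predicate excludes rows; a sketch with hypothetical table and column names:

```sql
-- If status is always set, this predicate excludes nothing,
-- so the WHERE clause buys no space or speed:
CREATE INDEX idx_t_status ON t (status) WHERE status IS NOT NULL;

-- A partial index helps when the predicate is selective,
-- e.g. indexing only rows still awaiting processing:
CREATE INDEX idx_t_pending ON t (created_at) WHERE status = 'pending';
```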
On Mon, Oct 07, 2019 at 11:00:08PM +0200, Artur Zając wrote:
Hi,
I cannot reproduce some scenario I found in my PostgreSQL logs.
I have two connections/processes:
Process 24755 is standard pg_backup connection with:
.
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY
On Mon, Oct 07, 2019 at 01:52:41PM -0700, dangal wrote:
Hello everyone, I have a question in case you can help me: I have set
work_mem to 100 MB, but I see the following in pgBadger:
Queries generating the most temporary files (N)
Count  Total size  Min size  Max size  Avg size
58
Hi,
I cannot reproduce some scenario I found in my PostgreSQL logs.
I have two connections/processes:
Process 24755 is standard pg_backup connection with:
.
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ, READ ONLY
.
LOCK TABLE gm.tableabc;
.
COPY FROM gm.tableabc
Hello everyone, I have a question in case you can help me: I have set
work_mem to 100 MB, but I see the following in pgBadger:
Queries generating the most temporary files (N)
Count  Total size  Min size   Max size   Avg size
58     3.24 GiB    57.15 MiB  57.15 MiB  57.15 MiB
How could
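One way to investigate (a sketch; names and values are illustrative, not from the thread): temporary files appear when a sort or hash exceeds work_mem, and the on-disk size reported can be smaller than the in-memory footprint, so 57 MiB temp files do not contradict work_mem = 100 MB.

```sql
-- Raise work_mem for just this session rather than globally:
SET work_mem = '128MB';

-- Then check whether the sort still spills to disk (hypothetical query):
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY some_col;
-- Look for "Sort Method: external merge  Disk: ..." in the plan output.
```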
Actually, I found a nice open-source product, "Acra" ... it seems to do
the whole thing via a proxy. Now I need to see if I can customize the
encryption enough using a plugin (but at least I can fork it and start
from there). A proxy encryption system seems to be the right call,
then all my client a
On Sat, Oct 5, 2019 at 10:50:14AM +0200, Luca Ferrari wrote:
> On Fri, Oct 4, 2019 at 10:38 PM Miles Elam wrote:
> >
> > The event trigger firing matrix lists tags like DROP TABLE and DROP
> > FUNCTION are listed below the ddl_command_end event, but when I created a
> > basic audit table and ev
On Mon, Oct 07, 2019 at 02:51:30PM -0400, Erik Aronesty wrote:
Good idea for "psycopg". It would be easy for a POC, but I think the
only meaningful layer to operate at would be a libpq drop-in
replacement that intercepts PQgetvalue, PQprepare, PQexecParams,
PQexecPrepared ... etc. That way odb
Good idea for "psycopg". It would be easy for a POC, but I think the
only meaningful layer to operate at would be a libpq drop-in
replacement that intercepts PQgetvalue, PQprepare, PQexecParams,
PQexecPrepared ... etc. That way odbc, python, node, etc would "just
work" as long as you used LD
Hi,
I am trying to create a table in Postgres 11 with a timestamp column as the
partition key, using PARTITION BY RANGE (create_dtt). The table definition
also has an id column, which is a primary key.
ERROR: insufficient columns in PRIMARY KEY constraint definition
DETAIL: PRIMARY KEY constraint
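The error is expected behavior: on a partitioned table, every unique constraint, including the primary key, must contain the partition key columns. A sketch with assumed names:

```sql
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    create_dtt timestamptz NOT NULL,
    PRIMARY KEY (id, create_dtt)  -- must include the partition key column
) PARTITION BY RANGE (create_dtt);
```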
On Mon, Oct 07, 2019 at 12:05:16PM -0400, Erik Aronesty wrote:
Currently, it is my understanding that the pgcrypto library requires
the user to send a password or private key up to the server for
decryption.
Correct. In the naive case the key is included in each SQL query, which
does have vari
bhargav kamineni writes:
>> So ... how many is "a bunch"?
> more than 85
Hm. That doesn't seem like it'd be enough to trigger the problem;
you'd need about max_connections excess connections (that are shortly
going to be rejected) to run into this problem, and you said you
had max_connections =
Currently, it is my understanding that the pgcrypto library requires
the user to send a password or private key up to the server for
decryption.
Is there a notion of a client-side encrypt/decrypt plugin when doing a
postgres query?
For example, a user could query postgres, get back data of type
"
Or
CREATE TABLE t();
SELECT *
FROM t;
From: Tatsuo Ishii
Sent: Monday, 7 October 2019 10:00
To: pgsql-gene...@postgresql.org
Subject: RowDescription message
According to the manual of the RowDescription message
https://www.postgresql.org/docs/12/protocol-messa
On Mon, Oct 07, 2019 at 11:11:43AM -0400, Ravi Krishna wrote:
https://postgresapp.com/downloads.html
The link which says PG 12 is actually PG 11.
Not sure if the link is correct or not (it seems to point to
Postgres-2.3-12.dmg, which seems like it might be v12). But more
importantly, that's no
https://postgresapp.com/downloads.html
The link which says PG 12 is actually PG 11.
On 10/7/19 12:41 AM, Arnaud L. wrote:
On 04/10/2019 at 19:08, Adrian Klaver wrote:
On 10/4/19 12:19 AM, Arnaud L. wrote:
OK I can do that. I thought I nailed it down to this line because it
started failing when this line was ~5th in the script, and it kept
failing on that very same line afte
On 07/10/2019 at 16:36, Adrian Klaver wrote:
So you are saying that you have not run the problematic line by itself?
It depends on what you mean by that.
I've run this line by itself many times; in fact, the script has failed
every time.
But until today I had not split the batch script to call
Tatsuo Ishii writes:
>>> According to the manual of the RowDescription message
>>> https://www.postgresql.org/docs/12/protocol-message-formats.html
>>> Specifies the number of fields in a row (can be zero).
>>> Could 0 fields actually happen?
>> Yes, e.g.:
>> SELECT;
> Thanks. Is it a valid SQL
>> According to the manual of the RowDescription message
>> https://www.postgresql.org/docs/12/protocol-message-formats.html
>>
>> Specifies the number of fields in a row (can be zero).
>>
>> Could 0 fields actually happen?
>
> Yes, e.g.:
>
> SELECT;
Thanks. Is it a valid SQL statement
On 04/10/19 21:14, Alvaro Herrera wrote:
On 2019-Oct-04, Moreno Andreo wrote:
On 04/10/19 18:28, Alvaro Herrera wrote:
I wonder if it would work to just clear that multixact with
SELECT ... WHERE ctid='(3160,31)' FOR UPDATE
select ...what? :-) Sorry, but it's totally beyond my knowle
On Mon, Oct 07, 2019 at 01:00:32AM -0400, Mageshwaran Janarthanam wrote:
Hi Team,
I have a question about the Postgres install process. I tried to read the
archives but couldn't find it.
I want to install the Postgres server via rpm and store the
binaries under a non-default location like /pgbin
On Sun, Oct 06, 2019 at 08:45:40PM -0700, Konstantin Izmailov wrote:
Hi,
I'm using libpq (v10) to import lots of XML files into a PG10 table. I
noticed that if the number of records imported exceeds 2100, the following
error is returned:
ERROR: invalid XML content
DETAIL: line 1: Couldn't find end o
Hi,
On 2019-10-07 17:00:24 +0900, Tatsuo Ishii wrote:
> According to the manual of the RowDescription message
> https://www.postgresql.org/docs/12/protocol-message-formats.html
>
> Specifies the number of fields in a row (can be zero).
>
> Could 0 fields actually happen?
Yes, e.g.:
SEL
According to the manual of the RowDescription message
https://www.postgresql.org/docs/12/protocol-message-formats.html
Specifies the number of fields in a row (can be zero).
Could 0 fields actually happen?
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.
On 04/10/2019 at 19:08, Adrian Klaver wrote:
On 10/4/19 12:19 AM, Arnaud L. wrote:
OK I can do that. I thought I nailed it down to this line because it
started failing when this line was ~5th in the script, and it kept
failing on that very same line after I moved it at the very end of the
sc