alter system command

2020-12-26 Thread Atul Kumar
hi,

I have set archive_command like below:

alter system set archive_command ='pgbackrest --stanza=main archive-push %';

Then I reloaded the conf file:

select pg_reload_conf();


Now when I checked it using the SHOW command, it gave me the correct output:

postgres=# show archive_command;
             archive_command
------------------------------------------
 pgbackrest --stanza=main archive-push %


but...

while checking the postgresql.conf file, it didn't show me the above
output; the parameter archive_command is still set to the default value:

archive_command = '/bin/true/'  # command to use to archive a logfile segment

So please help me understand why, even after reloading the conf file,
the value was not set in the postgresql.conf file?


Regards,
Atul
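
ALTER SYSTEM never edits postgresql.conf: it writes the override to
postgresql.auto.conf in the data directory, which is read after
postgresql.conf and takes precedence. That is why SHOW reports the new
value while postgresql.conf still carries the default. A minimal sketch
of how to confirm which file a running setting actually comes from:

SELECT name, setting, sourcefile, sourceline
FROM pg_settings
WHERE name = 'archive_command';
-- for a value set via ALTER SYSTEM, sourcefile points at postgresql.auto.conf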




Re: Multi-column index vs index on individual columns

2020-12-26 Thread Guyren Howe
In a typical web storefront or catalog app, there are a fairly small number of 
queries that are run often that are more complex and slower. It is perfectly 
appropriate to consider creating compound/partial indexes to speed up those 
queries. Just be aware that indexes trade off slower insert/update time against 
faster search time. I would read use-the-index-luke.com for detailed advice 
about using compound and partial indexes effectively.
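For example, a minimal sketch with hypothetical storefront table and column
names:

-- compound index serving a common two-column lookup
CREATE INDEX orders_customer_created_idx
    ON orders (customer_id, created_at);

-- partial index covering only the rows a hot query actually touches
CREATE INDEX orders_pending_created_idx
    ON orders (created_at)
    WHERE status = 'pending';
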
On Dec 26, 2020, 12:18 -0800, Shantanu Shekhar wrote:
> Hi,
>
> I was going through the Postgres documentation on multi-column indexes and
> came across the recommendation at the bottom, which states: "Multicolumn
> indexes should be used sparingly. In most situations, an index on a single
> column is sufficient and saves space and time".
>
> In my experience, in typical webapps the queries are known ahead of time
> and a lot of the time the queries are restricted to a single table (no
> joins). Does this recommendation still apply in this use case? I was under
> the impression that instead of scanning multiple indexes and then combining
> the results, it may be more performant to simply use a single multi-column
> index.
>
> Thanks,
>
> Shantanu


Re: Problem with ssl and psql in Postgresql 13

2020-12-26 Thread Tom Lane
Here's a draft patch for the libpq-side issues.  The core of the
fix is to get rid of pqsecure_open_gss's clearing of allow_ssl_try,
and instead check whether GSS encryption is already enabled before
we try to enable SSL.  While I was at it I also fixed the places
where we drop an attempted GSS connection: they should set
need_new_connection = true rather than incompletely doing it for
themselves.  Notably that coding misses resetting auth_req_received
and password_needed; the consequences of that are minor but not zero.

There are things to fix on the server side, and the documentation
needs work, but this should be enough to solve Mikael's problem
if he's in a position to apply the patch locally.

regards, tom lane

diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 7d04d3664e..ec7c4c5e87 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -2909,11 +2909,16 @@ keep_going:		/* We will come back to here until there is
 #ifdef USE_SSL
 
 /*
- * If SSL is enabled and we haven't already got it running,
- * request it instead of sending the startup message.
+ * If SSL is enabled and we haven't already got encryption of
+ * some sort running, request SSL instead of sending the
+ * startup message.
  */
 if (conn->allow_ssl_try && !conn->wait_ssl_try &&
-	!conn->ssl_in_use)
+	!conn->ssl_in_use
+#ifdef ENABLE_GSS
+	&& !conn->gssenc
+#endif
+	)
 {
 	ProtocolVersion pv;
 
@@ -3042,6 +3047,7 @@ keep_going:		/* We will come back to here until there is
 		}
 		/* Otherwise, proceed with normal startup */
 		conn->allow_ssl_try = false;
+		/* We can proceed using this connection */
 		conn->status = CONNECTION_MADE;
 		return PGRES_POLLING_WRITING;
 	}
@@ -3139,8 +3145,7 @@ keep_going:		/* We will come back to here until there is
 		 * don't hang up the socket, though.
 		 */
 		conn->try_gss = false;
-		pqDropConnection(conn, true);
-		conn->status = CONNECTION_NEEDED;
+		need_new_connection = true;
 		goto keep_going;
 	}
 
@@ -3158,6 +3163,7 @@ keep_going:		/* We will come back to here until there is
 		}
 
 		conn->try_gss = false;
+		/* We can proceed using this connection */
 		conn->status = CONNECTION_MADE;
 		return PGRES_POLLING_WRITING;
 	}
@@ -3186,8 +3192,7 @@ keep_going:		/* We will come back to here until there is
 	 * the current connection to do so, though.
 	 */
 	conn->try_gss = false;
-	pqDropConnection(conn, true);
-	conn->status = CONNECTION_NEEDED;
+	need_new_connection = true;
 	goto keep_going;
 }
 return pollres;
@@ -3354,10 +3359,9 @@ keep_going:		/* We will come back to here until there is
 	 */
 	if (conn->gssenc && conn->gssencmode[0] == 'p')
 	{
-		/* postmaster expects us to drop the connection */
+		/* only retry once */
 		conn->try_gss = false;
-		pqDropConnection(conn, true);
-		conn->status = CONNECTION_NEEDED;
+		need_new_connection = true;
 		goto keep_going;
 	}
 #endif
diff --git a/src/interfaces/libpq/fe-secure-gssapi.c b/src/interfaces/libpq/fe-secure-gssapi.c
index bfc0f55214..9416306eea 100644
--- a/src/interfaces/libpq/fe-secure-gssapi.c
+++ b/src/interfaces/libpq/fe-secure-gssapi.c
@@ -647,17 +647,14 @@ pqsecure_open_gss(PGconn *conn)
 	if (output.length == 0)
 	{
 		/*
-		 * We're done - hooray!  Kind of gross, but we need to disable SSL
-		 * here so that we don't accidentally tunnel one over the other.
+		 * We're done - hooray!  Set flag to tell the low-level I/O routines
+		 * to do GSS wrapping/unwrapping.
 		 */
-#ifdef USE_SSL
-		conn->allow_ssl_try = false;
-#endif
+		conn->gssenc = true;
 
 		/* Clean up */
 		gss_release_cred(&minor, &conn->gcred);
 		conn->gcred = GSS_C_NO_CREDENTIAL;
-		conn->gssenc = true;
 		gss_release_buffer(&minor, &output);
 
 		/*


Re: Problem with ssl and psql in Postgresql 13

2020-12-26 Thread Tom Lane
I wrote:
> So this directly explains the original report: if the only applicable
> pg_hba.conf entry is "hostssl", right here is where libpq will see
> that error, and it will go around and fail again because the next
> try uses neither GSS nor SSL.

> However, in the tests Mikael ran after backing that pg_hba.conf entry
> off to just "host", pg_hba wouldn't have caused an authentication-stage
> failure, so it's not so clear why we'd have looped back at this step.
> We'd need to explain the later cases to have a full theory.  It's
> plenty plausible that something else caused an auth-stage failure,
> but if so, why would the second GSS-free try work?

So, after actually trying to reproduce this error, the answer
to that question is blindingly obvious:

2020-12-26 15:34:01.425 EST [1853] FATAL:  GSSAPI encryption can only be used 
with gss, trust, or reject authentication methods

You'll recall that Mikael was using PAM auth; I hit this with
just regular password auth.

So, given that the server and client are in a Kerberos-enabled
environment, they'll negotiate a GSS-encrypted connection (that the
user did not ask for), then the server refuses to use it because of
the above restriction, then libpq drops down to a new connection
... which, because of this allow_ssl_try bug, is not SSL-encrypted
but just plain, even if the user had set sslmode=require.

I'd say this is not just a nasty bug, but verges on requiring a CVE.

I'm also of the opinion that this restriction on authentication type
is useless damfool nannyism, and we should just drop it.  Sure, if
you've managed to GSS-ify the connection then the client is known
to the Kerberos system, but that proves little about whether the
client is entitled to log into the database under the username
he's asking for.  Why should we restrict how that auth is done?
Even if there's some reason to do so, it's hard to believe that the
reason justifies forcing a whole new round of encrypted-connection
setup to switch to SSL encryption, when we have a perfectly good
encrypted connection already.

regards, tom lane

PS: AFAICS, it's also undocumented useless damfool nannyism.




Multi-column index vs index on individual columns

2020-12-26 Thread Shantanu Shekhar
Hi,
I was going through the Postgres documentation on multi-column indexes and
came across the recommendation at the bottom, which states: "Multicolumn
indexes should be used sparingly. In most situations, an index on a single
column is sufficient and saves space and time".
In my experience, in typical webapps the queries are known ahead of time and
a lot of the time the queries are restricted to a single table (no joins).
Does this recommendation still apply in this use case? I was under the
impression that instead of scanning multiple indexes and then combining the
results, it may be more performant to simply use a single multi-column index.
Thanks,
Shantanu
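
A minimal sketch of the trade-off being asked about, using a hypothetical
table t(a, b): the planner can combine two single-column indexes with a
BitmapAnd, while a multicolumn index can satisfy the same predicate in a
single index scan. Plans depend on statistics, so the comments are
indicative only:

CREATE TABLE t (a int, b int);
CREATE INDEX t_a_idx ON t (a);
CREATE INDEX t_b_idx ON t (b);
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;
-- may show a BitmapAnd of t_a_idx and t_b_idx

CREATE INDEX t_ab_idx ON t (a, b);
EXPLAIN SELECT * FROM t WHERE a = 1 AND b = 2;
-- may now show a single index scan using t_ab_idx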

Re: PostgreSQL HA

2020-12-26 Thread venkata786 k
Hi Ganesh,

I saw SymmetricDS on Google.
Are there any disadvantages with SymmetricDS?


On Sat, 26 Dec 2020 at 16:22, Ganesh Korde wrote:

> You can go for BDR from 2nd Quadrant.
>
> On Sat, 26 Dec 2020, 4:17 pm venkata786 k wrote:
>
>> Hi Team,
>>
>> Could you please share PostgreSQL's best active(R/W)-active(R/W)
>> (multi-master replication) solutions?
>> My team is thinking about implementing active-active, replacing
>> master-slave.
>>
>> Regards,
>> Venkat
>>
>


Re: PostgreSQL HA

2020-12-26 Thread venkata786 k
Hi Ganesh,

Does BDR support Postgres versions 12 & 13?
I think BDR is compatible with Postgres 9.4.
Could you please confirm?


On Sat, 26 Dec 2020 at 16:22, Ganesh Korde wrote:

> You can go for BDR from 2nd Quadrant.
>
> On Sat, 26 Dec 2020, 4:17 pm venkata786 k wrote:
>
>> Hi Team,
>>
>> Could you please share PostgreSQL's best active(R/W)-active(R/W)
>> (multi-master replication) solutions?
>> My team is thinking about implementing active-active, replacing
>> master-slave.
>>
>> Regards,
>> Venkat
>>
>


Re: Postgres read jsonb content from stdin

2020-12-26 Thread Markur Sens
Hello,

Hadn't really thought of using a foreign table up to this point...
thanks for that.

Will the first solution handle formatting issues (e.g. single quotes)
gracefully?

I think I'd tried it in the past and it didn't work.

PS: I have such a script that handles the intricacies, but it'd still emit
to stdout (hence the simplified curl in the example).


On Sat, Dec 26, 2020 at 2:40 PM Ian Lawrence Barwick wrote:

> On Sat, 26 Dec 2020 at 20:19, Markur Sens wrote:
> >
> > Hello,
> >
> > I'm trying to build a few data pipelines with Unix tools but can't
> > figure out how to insert, in slurp mode (i.e. not COPY line by line),
> > content held in a variable.
> >
> > Consider the following script (using a heredoc)
> >
> > json_url="https://file.json";
> > local_file="/tmp/a.json"
> >
> > curl -s -m 10 -A 'Mozilla/5.0 (X11; Linux x86_64; rv:30.0)
> Gecko/20100101 Firefox/30.0' \
> > --max-redirs 0 -o ${local_file} ${json_url}
> >
> > psql "$PG_URI" -qAt <<SQL
> > create table if not exists my_table (data jsonb);
> >
> > insert into my_table(data) values (pg_read_file('${local_file}')::jsonb)
> > on conflict do nothing;
> > SQL
> >
> > The question is, how can I achieve the same result, without having to
> > hit the disk due to the temporary file.
> > I tried running by using pg_read_file('/dev/stdin')::jsonb
>
> It can be done like this:
>
> $ curl http://localhost/json.txt
> {"bar": "baz", "balance": 7.77, "active": false}
>
> $ psql -v jsonval="`curl -s http://localhost/json.txt`" -d
> 'host=localhost dbname=postgres user=postgres' <<SQL
> INSERT INTO json_test values(:'jsonval')
> SQL
>
> INSERT 0 1
> Time: 0.374 ms
>
> though TBH if I were doing that on a regular basis, I'd do it via a script
> which could cope with errors retrieving the remote file, etc.
>
> If the data source (URL) is constant, you could try something along these
> lines with file_fdw:
>
> CREATE EXTENSION file_fdw;
>
> CREATE SERVER json_curl FOREIGN DATA WRAPPER file_fdw;
> CREATE FOREIGN TABLE json_src (
>   json_data jsonb
> )
> SERVER json_curl
> OPTIONS (
>   PROGRAM 'curl -s http://localhost/json.txt'
> );
>
>
> Better alternatives may be available.
>
>
> Regards
>
> Ian Barwick
>
>
>
> --
> EnterpriseDB: https://www.enterprisedb.com
>

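On the single-quote question above: psql's :'variable' interpolation quotes
the variable's value as a SQL literal, doubling any embedded single quotes,
so they shouldn't break the statement. A minimal sketch with a hypothetical
value:

\set jsonval '{"note": "it''s quoted"}'
SELECT :'jsonval'::jsonb;
-- psql sends: SELECT '{"note": "it''s quoted"}'::jsonb;
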

Re: Postgres read jsonb content from stdin

2020-12-26 Thread Ian Lawrence Barwick
On Sat, 26 Dec 2020 at 20:19, Markur Sens wrote:
>
> Hello,
>
> I'm trying to build a few data pipelines with Unix tools but can't figure
> out how to insert, in slurp mode (i.e. not COPY line by line), content held
> in a variable.
>
> Consider the following script (using a heredoc)
>
> json_url="https://file.json";
> local_file="/tmp/a.json"
>
> curl -s -m 10 -A 'Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101 
> Firefox/30.0' \
> --max-redirs 0 -o ${local_file} ${json_url}
>
> psql "$PG_URI" -qAt <<SQL
> create table if not exists my_table (data jsonb);
>
> insert into my_table(data) values (pg_read_file('${local_file}')::jsonb)
> on conflict do nothing;
> SQL
>
> The question is, how can I achieve the same result, without having to hit the 
> disk due to the temporary file.
> I tried running by using pg_read_file('/dev/stdin')::jsonb

It can be done like this:

$ curl http://localhost/json.txt
{"bar": "baz", "balance": 7.77, "active": false}

$ psql -v jsonval="`curl -s http://localhost/json.txt`" -d
'host=localhost dbname=postgres user=postgres' <<SQL
INSERT INTO json_test values(:'jsonval')
SQL

INSERT 0 1
Time: 0.374 ms

though TBH if I were doing that on a regular basis, I'd do it via a script
which could cope with errors retrieving the remote file, etc.

If the data source (URL) is constant, you could try something along these
lines with file_fdw:

CREATE EXTENSION file_fdw;

CREATE SERVER json_curl FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE json_src (
  json_data jsonb
)
SERVER json_curl
OPTIONS (
  PROGRAM 'curl -s http://localhost/json.txt'
);

Better alternatives may be available.

Regards

Ian Barwick

Re: PostgreSQL HA

2020-12-26 Thread Atul Kumar
Use xDB, it's simpler and better.

On Saturday, December 26, 2020, venkata786 k wrote:

> Hi Team,
>
> Could you please share PostgreSQL's best active(R/W)-active(R/W)
> (multi-master replication) solutions?
> My team is thinking about implementing active-active, replacing
> master-slave.
>
> Regards,
> Venkat
>
>


Postgres read jsonb content from stdin

2020-12-26 Thread Markur Sens
Hello,

I'm trying to build a few data pipelines with Unix tools but can't figure
out how to insert, in slurp mode (i.e. not COPY line by line), content held
in a variable.

Consider the following script (using a heredoc)

json_url="https://file.json";
local_file="/tmp/a.json"

curl -s -m 10 -A 'Mozilla/5.0 (X11; Linux x86_64; rv:30.0) Gecko/20100101
Firefox/30.0' \
--max-redirs 0 -o ${local_file} ${json_url}

psql "$PG_URI" -qAt <<SQL
create table if not exists my_table (data jsonb);

insert into my_table(data) values (pg_read_file('${local_file}')::jsonb)
on conflict do nothing;
SQL

The question is, how can I achieve the same result, without having to hit
the disk due to the temporary file.
I tried running by using pg_read_file('/dev/stdin')::jsonb

Re: PostgreSQL HA

2020-12-26 Thread Ganesh Korde
You can go for BDR from 2nd Quadrant.

On Sat, 26 Dec 2020, 4:17 pm venkata786 k wrote:

> Hi Team,
>
> Could you please share PostgreSQL's best active(R/W)-active(R/W)
> (multi-master replication) solutions?
> My team is thinking about implementing active-active, replacing
> master-slave.
>
> Regards,
> Venkat
>
>


PostgreSQL HA

2020-12-26 Thread venkata786 k
Hi Team,

Could you please share PostgreSQL's best active(R/W)-active(R/W)
(multi-master replication) solutions?
My team is thinking about implementing active-active, replacing master-slave.

Regards,
Venkat
