Re: [GENERAL] mild modification to pg_dump

2017-11-17 Thread John R Pierce

On 11/17/2017 12:19 PM, marcelo wrote:

Sorry, I was not exact.
I don't need, nor want, to change pg_dump. Rather, based on the pg_dump 
code, I need to develop a daemon which can receive a TCP message (from 
a privileged app) containing some elements: the database to dump, the 
user under which to do it, and that user's password. (My apps are using 
that same data, of course, encrypted for the common users).



I would just fork and exec pg_dump to do the actual dump rather than trying 
to incorporate its source code into your app.
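A minimal sketch of what the daemon might exec per request; the names, paths, and credentials here are placeholders, not details from the original post:

```shell
#!/bin/sh
# Hypothetical values parsed from the daemon's TCP message:
DBNAME="somedb"; DBUSER="someuser"; DBPASS="secret"

# PGPASSWORD keeps the password off the command line;
# -Fc produces a custom-format archive usable with pg_restore.
PGPASSWORD="$DBPASS" pg_dump -h localhost -U "$DBUSER" -Fc \
    -f "/var/backups/${DBNAME}.dump" "$DBNAME"
```

The daemon only needs to build this argument list and wait for the child's exit status.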



--
john r pierce, recycling bits in santa cruz



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Setting up replication slave on remote high latency host

2017-11-15 Thread John R Pierce

On 11/15/2017 6:02 PM, Rory Falloon wrote:


Right now I am trying to dump the database, gzip, move across, and 
import into the new slave (which is configured as a master to perform 
the initial setup). Ideally I do this dump, move and import during a 
period of inactivity on the master so the new server will come up and 
immediately be able to catch up on replication due to lack of 
activity. However, I have been importing the current db as a test and 
after 90 minutes it seems to have only got 2/3 of the way. I am not 
confident this will work but it seems like the most efficient way to 
start.



you can't use pg_dump to create a slave, as it won't have the same timeline.

I would use pg_basebackup, but in general, streaming replication over a 
high-latency, erratic link will never work very well.
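To seed the standby, a pg_basebackup invocation along these lines is the usual route (host, user, and data directory are assumptions):

```shell
# Copies the cluster over the replication protocol; -X stream includes
# the WAL generated during the copy, and -R writes a recovery.conf
# pointing the standby at the master (pre-v12 layout).
pg_basebackup -h master.example.com -U replicator \
    -D /var/lib/pgsql/9.6/data -X stream -R -P
```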



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] jsonb

2017-11-14 Thread John R Pierce

On 11/14/2017 2:30 PM, hmidi slim wrote:

I'm trying to use the function to_jsonb and create the name:
to_jsonb ('{"key1":" ' || 'text1' || '","key2":" ' || 'text2' || '"}');
But after that I used Objection.js ORM to get data using the query:
Product.query().where('id',1).then(prod => {console.log(prod)})
I think that the problem maybe with the usage of to_jsonb function, 
maybe I miss something. But when I fetch the data with the ORM I found 
that the type was a string and not a jsonb




never heard of your ORM... does it even know what postgres jsonb is ?   
do you know what actual SQL query that piece of ORMism generates ?
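For what it's worth, the likely culprit is independent of the ORM: to_jsonb() applied to a text value yields a JSON string, not an object. A sketch in psql (database name is made up):

```shell
psql -X -d somedb <<'SQL'
-- to_jsonb() of a text value produces a JSON *string*:
SELECT to_jsonb(('{"key1":"text1"}')::text);
-- To get an actual jsonb object, cast the literal or build it explicitly:
SELECT '{"key1":"text1","key2":"text2"}'::jsonb;
SELECT jsonb_build_object('key1', 'text1', 'key2', 'text2');
SQL
```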




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Migrating plattaform

2017-11-12 Thread John R Pierce

On 11/8/2017 11:38 AM, Valdir Kageyama wrote:


I need to migrate Postgres from Linux on IBM Power to Oracle Linux 
on SPARC.


My doubt: is it possible to copy the data files to the new environment, or 
do I need to use other means of copying the data?

For example: pg_dump/pg_restore.



pretty sure you can't copy binary database files between architectures, 
as various data structures have different binary representations.


sure, pg_dump  -Fc | pg_restore, that works fine across architectures.
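For instance, the dump can be piped straight into the new server without an intermediate file (hostnames and database name are placeholders; the target database must already exist):

```shell
# Custom format (-Fc) piped from the old host into pg_restore on the new one.
# Note parallel restore (-j) is unavailable when reading from a pipe.
pg_dump -h old-power-host -Fc mydb | pg_restore -h new-sparc-host -d mydb
```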


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] OpeSSL - PostgreSQL

2017-11-09 Thread John R Pierce

On 11/9/2017 1:59 PM, chiru r wrote:


How do I configure PostgreSQL to allow specific cipher suites for 
different client applications?


see https://www.postgresql.org/docs/9.5/static/ssl-tcp.html
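The relevant knobs live in postgresql.conf; an illustrative fragment (the cipher string shown is the 9.5 default, not a recommendation, and the path comes from $PGDATA):

```shell
# Append illustrative TLS settings to postgresql.conf, then reload.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
ssl = on
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
EOF
pg_ctl reload -D "$PGDATA"
```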


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] database size changed after restoring using pg_restore

2017-11-08 Thread John R Pierce

On 11/8/2017 2:59 PM, Dylan Luong wrote:


Hi

I am upgrading some databases from a PostgreSQL 9.2 server to a new 
PostgreSQL 9.6 server.


I used pg_dump and pg_restore for the upgrade.

.

But when I listed the size of the database (postgres=# \l+) between 
the 9.2 and the upgraded 9.6, they were different.


on 9.2 it was 3776 MB
on 9.6 it was 1570 MB

I also did a few more databases using the same steps and they all 
appeared to be smaller. Is that normal?



yes.  a freshly restored database is stored compactly, with no dead 
tuples left over from prior operations.   databases that take updates 
(or inserts/deletes) tend to bloat.
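A quick way to see this on the old 9.2 server is to look at dead-tuple counts, which a dump/restore discards (database name is a placeholder):

```shell
psql -X -d mydb <<'SQL'
-- Tables with many dead tuples account for most of the size difference:
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
SQL
```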



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Because PostgreSQL is compiling in old versions of OS?

2017-11-08 Thread John R Pierce

On 11/8/2017 11:28 AM, DrakoRod wrote:

Which is the best OS version to compile on, with the goal of building binaries that are 
"standard" or "more compatible"?



thats very difficult because library versions change between major 
distro releases.   Stuff compiled for RHEL6/CentOS6 will run on 
RHEL7/CentOS7 if you install the corresponding -compat libraries, but 
thats about as good as it gets.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Incremental refresh - Materialized view

2017-11-07 Thread John R Pierce

On 11/6/2017 11:34 PM, Krithika Venkatesh wrote:
Materialized view log is one of the feature in oracle. It creates a 
log in which the changes made to the table are recorded. This log is 
required for an asynchronous materialized view that is refreshed 
incrementally.


I read in the below link about incrementally refreshing the 
materialized view in postgresql:


https://medium.com/@hariprasathnallsamy/postgresql-materialized-view-incremental-refresh-44d1ca742599

Can someone let me how to do incremental refresh using Write Ahead Log



I note that the blogger's sample code on GitHub no longer exists.   I 
suspect it was half-baked, and ran into intractable problems.


to do what you want, you would need to implement logical decoding [1] of 
the WAL stream,  you would need to 'understand' the views completely so 
you can tell if a given tuple update affects one of your views or not 
(relatively simple for a view which is just `select fields from table 
where simplecondition`, not so easy for a view which is a N way join 
with complex filtering and/or aggregation, or whatever), then accumulate 
these updates somewhere so your incremental refresh could replay them 
and update the table underlying a given materialized view.
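To get a feel for what the WAL stream exposes, the bundled test_decoding plugin can be tapped with pg_recvlogical (slot and database names are made up); turning that raw change stream into per-view deltas is the hard part described above:

```shell
# Requires wal_level = logical (9.4+) and a role with REPLICATION.
pg_recvlogical -d mydb --slot=mv_changes --create-slot -P test_decoding
pg_recvlogical -d mydb --slot=mv_changes --start -f -   # prints decoded changes
```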


I'm sure there are major complicating aspects I'm not even thinking of.


[1] 
https://www.postgresql.org/docs/current/static/logicaldecoding-explanation.html




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Incremental refresh - Materialized view

2017-11-06 Thread John R Pierce

On 11/6/2017 10:38 PM, Krithika Venkatesh wrote:

I need to implement incremental refresh of materialized view.

Please let me know how to do the incremental refresh of materialized 
view in postgresql 9.5.9 version.


Is there anything similar to materialized view log in postgresql.



you refresh a materialized view with REFRESH MATERIALIZED VIEW name;    
There's no 'incremental' methods, as views can be quite complex.
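Sketch of the two refresh variants (view and column names are placeholders); CONCURRENTLY, available since 9.4, avoids locking out readers but still recomputes the whole view:

```shell
psql -X -d mydb <<'SQL'
REFRESH MATERIALIZED VIEW my_view;                -- takes an exclusive lock
CREATE UNIQUE INDEX ON my_view (id);              -- required for CONCURRENTLY
REFRESH MATERIALIZED VIEW CONCURRENTLY my_view;   -- readers stay unblocked
SQL
```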


I do not know what you mean by 'materialized view log', is this a 
feature of some other database server ?




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Postgresql 9.3 service doesn't start on RedHat Linux 6.8

2017-11-06 Thread John R Pierce

On 11/6/2017 5:28 AM, Vikas Sharma wrote:
I am having issues in starting up postgresql service on RHEL 6.8, It 
was all working fine yesterday but now it doesn't start and throws 
error - same kind of error that is received connecting to remote 
postgres database.


what /exact/ error is it throwing ?



for the meantime I have started postgresql as below -
/usr/pgsql-9.3/bin/postmaster  -D /var/lib/pgsql/9.3/data &
and it is working fine.

Can someone let me know where to look at for why I can't start the 
service?


whats in /var/lib/pgsql/9.3/pgstartup.log  and 
/var/lib/pgsql/9.3/data/(latest).log  ?



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] [HACKERS] SSL and Encryption

2017-11-02 Thread John R Pierce

On 11/2/2017 10:12 PM, Jeff Janes wrote:

https://wiki.postgresql.org/wiki/List_of_drivers

What is 'python native'?  psycopg works as long as you update your libpq.



I thought pythonistas preferred using a native driver that didn't use 
libpq ?


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] [HACKERS] SSL and Encryption

2017-11-02 Thread John R Pierce

On 11/2/2017 9:39 PM, Michael Paquier wrote:

The SCRAM discussion is spread across two threads mainly with hundreds
of emails, which may discourage even the bravest. Here are links to
the important documentation:
https://www.postgresql.org/docs/current/static/auth-methods.html#auth-password


so that says...  ... [scram-sha-256] is the most secure of the currently 
provided methods, but it is not supported by older client libraries


whats the state of the more popular bindings now?   jdbc, python native, 
etc ?


--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] How to access a second database

2017-10-31 Thread John R Pierce

On 10/31/2017 12:41 AM, John R Pierce wrote:
if you're doing a lot of this, why not use two schemas in the same 
database?  then its just ...schema.table...


otherwise, you need to use FDW and foreign tables.

see
https://www.postgresql.org/docs/current/static/postgres-fdw.html



oh, I should add... the advantage of using FDW is the other database can 
be on another server.   the disadvantage is, the remote data has to be 
queried and merged with the local query, the optimizer may not be able 
to do as good a job as it might with tables in different schema of the 
same database (which are treated exactly the same as tables in the same 
schema, other than naming).
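A minimal postgres_fdw setup looks roughly like this (server, database, and credential names are invented):

```shell
psql -X -d localdb <<'SQL'
CREATE EXTENSION postgres_fdw;
CREATE SERVER other_srv FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'otherhost', dbname 'otherdb');
CREATE USER MAPPING FOR CURRENT_USER SERVER other_srv
    OPTIONS (user 'remote_user', password 'secret');
CREATE SCHEMA remote;
IMPORT FOREIGN SCHEMA public FROM SERVER other_srv INTO remote;  -- 9.5+
SQL
```

After that, the remote tables are queried as ordinary `remote.tablename` references, subject to the optimizer caveats above.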



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] How to access a second database

2017-10-31 Thread John R Pierce

On 10/31/2017 12:15 AM, Sherman Willden wrote:
I am trying to access a table from another database. I have the 
permissions to create under my own login. I have performed the 
following so far:

sherman@sql-dev: createdb sandbox01
sherman@sql-dev:~$ createdb sandbox02.
After logging into sandbox02 I performed the following: sandbox02=# 
CREATE TABLE last_names(last_name TEXT);

sandbox02=# INSERT INTO last_names VALUES(willden);

Now I think I want to use a foreign key in sandbox01. Is the following 
how it works after logging into sandbox01?


sandbox01=# CREATE TABLE first_and_last(first_name TEXT, last_name 
FOREIGN KEY last_name REFERENCES sandbox02(last_names(last_name))


and then sandbox01=# INSERT INTO first_and_last(sherman, willden);



if you're doing a lot of this, why not use two schemas in the same 
database?  then its just ...schema.table...


otherwise, you need to use FDW and foreign tables.

see
https://www.postgresql.org/docs/current/static/postgres-fdw.html



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] pg_audit to mask literal sql

2017-10-30 Thread John R Pierce

On 10/30/2017 10:55 AM, rakeshkumar464 wrote:

Is there a way in pgaudit to mask literal sqls like the below:

insert into table (col1,col2) values(1,2)
select * from table where col1 = 1

These sqls are typed by our QA folks using pgadmin. pgaudit records this
verbatim which runs afoul of our HIPAA requirement.  Prepared statements are
not an issue since pgaudit provides a way to suppress values.


if you have a HIPAA requirement that says 'dont run manual sql 
statements', then, well, DONT.


why are QA folks making changes on production databases, anyways?   
thats not within their domain.   QA should be working on development or 
staging databases.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Can't build ODBC -- odbc_config not found

2017-10-27 Thread John R Pierce

On 10/27/2017 12:39 AM, Devrim Gündüz wrote:

On Thu, 2017-10-26 at 23:40 -0700, Chris Albertson wrote:

I just tried to build Postgres ODBC from source (psqlodbc-10.00..tar.gz)

I type "./configure"

Then get this message:
configure: error: odbc_config not found (required for unixODBC build)

You need to install unixODBC package (or equivalent in your distro) for this
command.



and on a RHEL style system, you probably also need unixODBC-devel  to 
compile the psqlodbc driver.
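On RHEL/CentOS that amounts to (package names per the standard repos):

```shell
# unixODBC provides odbc_config; the -devel package adds the headers
# needed to compile the psqlodbc driver.
yum install unixODBC unixODBC-devel
```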



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] using conda environment for plpython3u?

2017-10-23 Thread John R Pierce

On 10/23/2017 2:05 PM, Celia McInnis wrote:
Is it possible for users to use their conda environment for plpython, 
complete with the modules that they have loaded in that environment? 
If so, what do I do?


I am running postgres 9.6.2 and would like to use a conda environment 
for python 3.6 which contains a fair number of modules that I want to 
use (e.g., regex, recordclass, pandas, ...).



plpython runs in the context of the server user, not the end user.   as 
long as you can make this 'conda environment' available to that user, 
and it doesn't violate the single-threaded design of a postgres 
connection, I dunno why not.


that said, everything you do in a PL is running in the process context 
of the core database server.   I'm very very hesitant to drag in large 
complex external systems, and would generally prefer to do that sort of 
thing in an app server context outside the DB server.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Alternative to pgAdmin Postgres Manager that support pgagent (jobs)

2017-10-19 Thread John R Pierce

On 10/19/2017 8:14 PM, Adam Brusselback wrote:

No other tool I
have used will manage pgAgent jobs.


they can be managed with SQL, the schema pgAgent uses really isn't that 
complicated.   each job is a row in a table, IIRC.


there's also pg_cron,  I've never used it, but it is standalone, and 
managed by SQL statements.
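For the record, a pg_cron job is created with a plain SQL call along these lines (schedule and command are illustrative; the extension must be listed in shared_preload_libraries first):

```shell
psql -X -d postgres <<'SQL'
CREATE EXTENSION pg_cron;
-- cron syntax: minute hour day-of-month month day-of-week
SELECT cron.schedule('0 3 * * *', 'VACUUM ANALYZE');
SQL
```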



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Is it OK to create a directory in PGDATA dir

2017-10-19 Thread John R Pierce

On 10/19/2017 1:25 PM, Tomas Vondra wrote:

Is it fine to create a subdir inside PGDATA and store our stuff
there, or will PG freak out seeing a foreign object.


PostgreSQL certainly does not check if there are unknown directories in
the data directory, and it will not crash and burn. But it causes all
sorts of problems, and it increases the probability of human error.



most importantly, ONLY the postgres system process should have access to 
the pgdata directory, it should have permissions 700. your apps should 
be running as a different user, and that user won't have access to said 
PGDATA.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Alternative to pgAdmin Postgres Manager that support pgagent (jobs)

2017-10-19 Thread John R Pierce

On 10/19/2017 3:15 PM, Juliano wrote:


Omnidb looks nice but, I guess, doesn't support pgAgent either. Any 
suggestions?



pgAgent isn't part of postgres, its part of pgAdmin.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Multiple Schemas vs. Multiple Databases

2017-10-13 Thread John R Pierce

On 10/13/2017 12:29 PM, Igal @ Lucee.org wrote:


I have read quite a few articles about multiple schemas vs. multiple 
databases, but they are all very generic so I wanted to ask here for a 
specific use case:


I am migrating a Web Application from MS SQL Server to PostgreSQL.  
For the sake of easier maintenance, on SQL Server I have two separate 
databases:


  1) Primary database containing the data for the application

  2) Secondary database containing "transient" data, e.g. logging of 
different activities on the website in order to generate statistics etc.


Both databases belong to the same application with the same roles and 
permissions.


The secondary database grows much faster, but the data in it is not 
mission-critical , and so the data is aggregated daily and the 
summaries are posted to the primary database, because only the 
aggregates are important here.


To keep the database sizes from growing too large, I periodically 
delete old data from the secondary database since the data becomes 
obsolete after a certain period of time.


At first I thought of doing the same in Postgres, but now it seems 
like the better way to go would be to keep one database with two 
schemas: primary and transient.


The main things that I need to do is:

  a) Be able to backup/restore each "part" separately.  Looks like 
pg_dump allows that for schemas via the --schema=schema argument.


  b) Be able to query aggregates from the secondary "part" and store 
the results in the primary one, which also seems easier with multiple 
schemas than multiple databases.


Am I right to think that two schemas are better in this use case or am 
I missing something important?




generally, yeah, unless you eventually decide to split off the two 
databases onto separate servers for performance reasons.   Of course, to 
access the 'other' database, you'd need to use postgres_fdw or dblink.
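For the per-part backup/restore requirement, the two-schema layout would be handled something like this (database, schema, and file names are placeholders):

```shell
# Dump each schema to its own custom-format archive...
pg_dump -Fc --schema=primary   -f primary.dump   appdb
pg_dump -Fc --schema=transient -f transient.dump appdb
# ...and restore one independently of the other:
pg_restore -d appdb --clean --schema=transient transient.dump
```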



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] core system is getting unresponsive because over 300 cpu load

2017-10-10 Thread John R Pierce

On 10/10/2017 3:28 PM, pinker wrote:

It was exactly my first guess. work_mem is set to ~ 350MB and I see a lot of
stored procedures with unnecessary WITH clauses (i.e. materialization) and
right after it IN query with results of that (hash).


1000 connections all doing queries that need 1 work_mem each will 
consume 1000*350MB == 350GB of your ram.    many queries use several 
work_mem's.


if the vast majority of your operations are OLTP and only access a few 
rows, then large work_mem is NOT a good idea.   If you're doing large 
aggregate operations like OLAP for reporting or whatever, then thats 
another story, but generally doing that sort of thing does NOT use 1000 
connections.
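One common mitigation: keep the global setting small and raise it only for the roles or sessions that do large sorts and hashes (the role name here is hypothetical):

```shell
psql -X <<'SQL'
ALTER SYSTEM SET work_mem = '8MB';            -- modest default for OLTP
ALTER ROLE reporting SET work_mem = '350MB';  -- only the OLAP/reporting role
SELECT pg_reload_conf();
SQL
```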



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Function to return per-column counts?

2017-09-28 Thread John McKown
On Thu, Sep 28, 2017 at 12:15 PM, Tomas Vondra <tomas.von...@2ndquadrant.com
> wrote:

>
>
> On 09/28/2017 04:34 PM, Seamus Abshere wrote:
> > hey,
> >
> > Does anybody have a function lying around (preferably pl/pgsql) that
> > takes a table name and returns coverage counts?
> >
>
> What is "coverage count"?
>

​I'm guessing it's what is described here:
https://www.red-gate.com/blog/sql-cover​

IIUC, this is "code coverage" for things kept in your RDBMS system, such as
triggers, procedures, and other "code" items which are implicitly part of
your application code.



>
> cheers
>
> --
> Tomas Vondra  http://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


-- 
I just child proofed my house.
But the kids still manage to get in.


Maranatha! <><
John McKown


Re: [GENERAL] WAL Archive command.

2017-09-28 Thread John Britto
Hi,

I am running streaming replication with the archive.

As you can see below that Master pg_xlog is at WAL:
000101330093. But archive_status shows way behind:
000101330088.done
What could be the reason behind this? How should I let the PostgreSQL
archive the WAL from 000101330089 to 000101330092.

pg_xlog/
--
-rw--- 1 postgres postgres 16777216 Sep 27 23:30
000101330082
-rw--- 1 postgres postgres 16777216 Sep 27 23:31
000101330083
-rw--- 1 postgres postgres 16777216 Sep 27 23:31
000101330084
-rw--- 1 postgres postgres 16777216 Sep 27 23:31
000101330085
-rw--- 1 postgres postgres 16777216 Sep 27 23:51
000101330086
-rw--- 1 postgres postgres 16777216 Sep 28 00:47
000101330087
-rw--- 1 postgres postgres 16777216 Sep 28 01:55
000101330088
-rw--- 1 postgres postgres 16777216 Sep 28 02:47
000101330089
-rw--- 1 postgres postgres 16777216 Sep 27 18:04
00010133008A
-rw--- 1 postgres postgres 16777216 Sep 27 18:02
00010133008B
-rw--- 1 postgres postgres 16777216 Sep 27 18:02
00010133008C
-rw--- 1 postgres postgres 16777216 Sep 27 18:05
00010133008D
-rw--- 1 postgres postgres 16777216 Sep 27 18:03
00010133008E
-rw--- 1 postgres postgres 16777216 Sep 27 18:02
00010133008F
-rw--- 1 postgres postgres 16777216 Sep 27 18:02
000101330090
-rw--- 1 postgres postgres 16777216 Sep 27 18:37
000101330091
-rw--- 1 postgres postgres 16777216 Sep 27 20:21
000101330092
-rw--- 1 postgres postgres 16777216 Sep 27 21:00
000101330093

pg_xlog/archive_status
--
-rw--- 1 postgres postgres 0 Sep 27 23:30 000101330081.done
-rw--- 1 postgres postgres 0 Sep 27 23:30 000101330082.done
-rw--- 1 postgres postgres 0 Sep 27 23:31 000101330083.done
-rw--- 1 postgres postgres 0 Sep 27 23:31 000101330084.done
-rw--- 1 postgres postgres 0 Sep 27 23:31 000101330085.done
-rw--- 1 postgres postgres 0 Sep 27 23:51 000101330086.done
-rw--- 1 postgres postgres 0 Sep 28 00:47 000101330087.done
-rw--- 1 postgres postgres 0 Sep 28 01:55 000101330088.done

postgresql.conf
#--
wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode = on
archive_command = '/opt/wal_archive.sh "%p" "%f" > archive_command.log 2>&1'

wal_archive.sh

#!/bin/bash -xv
PG_XLOG="$1"
PG_XLOG_FILENAME="$2"
HOST=$(hostname)
ARCHIVE_DIR="/pg_archive/master"
ARCHIVE_TO_KEEP="3" #No of days of archive logs to keep
EMAIL="a...@b.com"
ERROR_COUNT_FILE="/tmp/replication_archive_error_count.txt"

OLD_COUNT=$(cat "${ERROR_COUNT_FILE}" 2>/dev/null || echo 0)
DNOW=`date +%u`
hour=$(date +%H)
D=`date`

#Do the cleanup if the day is Monday or Thursday and the time is between
#11:00 and 22:00 UTC (-o/-a grouped so the hour check applies to both days)
if [ \( "$DNOW" -eq "1" -o "$DNOW" -eq "4" \) -a "$hour" -ge 11 -a "$hour" -lt 22 ]; then
find "${ARCHIVE_DIR}"/ -type f -mtime +"${ARCHIVE_TO_KEEP}" -exec rm -f {} +
if [ "$?" -eq "1" ]; then
echo "The wal_archive script could not cleanup the archive directory of $HOST" | mail -s "ERROR - WAL Archive for $HOST" "$EMAIL"
fi
fi

if [ ! -f "${ARCHIVE_DIR}"/"${PG_XLOG_FILENAME}" ]; then
cp "${PG_XLOG}" "${ARCHIVE_DIR}"/"${PG_XLOG_FILENAME}"
/usr/bin/rsync -W -az "${PG_XLOG}" postgres@standby:"${ARCHIVE_DIR}"/"${PG_XLOG_FILENAME}"
if [ "$?" -ne "0" ]; then
#If rsync fails, then remove the copied file from master, increase the error count, and retry.
rm -rf "${ARCHIVE_DIR}"/"${PG_XLOG_FILENAME}"
NEW_COUNT=`expr $OLD_COUNT + 1`
if [ "$NEW_COUNT" -ge "100" ]; then
    echo -e "${D}""\n""Rsync could not transfer the WAL file from Master to slave." | mail -s "ALERT - WAL Archive for $HOST" "$EMAIL"
echo "0" > $ERROR_COUNT_FILE
else
echo "$NEW_COUNT" > $ERROR_COUNT_FILE
fi
exit 1
else
echo "0" > $ERROR_COUNT_FILE
exit 0
fi
else
exit 0
fi


John Britto, M.Sc.
DevOps Engineer
Sodexis, Inc.
www.sodexis.com
M: +91-0-8012186991
<j...@sodexis.com>


On Thu, Sep 28, 2017 at 1:05 AM, Scott Mead <sco...@openscg.com> wrote:

>
>
> On 

[GENERAL] WAL Archive command.

2017-09-27 Thread John Britto
Hello,

I have a streaming replication setup along with WAL archive.

archive_command = 'test ! -f /var/pg_archive/%f && cp %p %f && scp %p postgres@192.168.0.123:/%f'

When the SCP command fails, the master repeatedly tries to send the
archived WAL to the standby. But during this time, the pg_xlog directory grew
with newer WAL files.

The streaming replication hasn't had the problem because on my check, the
WAL write location on the primary was same with the last WAL location
received/replayed in standby.

Since the pg_xlog in the master had a few newer WAL files, the master's archiver
is lagging behind the current pg_xlog WAL file.  When a new WAL segment appears
in pg_xlog, the master picks the oldest pending WAL file to send to the standby.

How should I force PostgreSQL to batch-copy the lagging WAL files to
pg_archive and then send them to the standby.  Can I do this manually using rsync?
I wonder how PostgreSQL knows the changes, because it maintains info in
archive_status with the extensions .ready and .done.

Please assist.

Thanks,
John Britto


Re: [GENERAL] Adding Cyrillic support

2017-09-23 Thread John R Pierce

On 9/23/2017 1:44 AM, Job wrote:

how can I add more character support to PostgreSQL 9.6?
I need to also store some information in Cyrillic, for Russian users.

Can more characters coexist in the same database?



utf-8 should be able to store just about any character.
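For example, creating a UTF-8 database explicitly (the locale is illustrative):

```shell
# template0 is needed when the encoding differs from the template database's.
createdb -E UTF8 --locale=en_US.UTF-8 -T template0 mydb
```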


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Why can't the database owner create schemas and how can I enable that?

2017-09-23 Thread John R Pierce

On 9/22/2017 10:29 PM, Tim Uckun wrote:
I am setting up a database for an app. So I create a user without 
superuser but with login privileges


I then create a database and set its owner to that user like this...

dropdb --if-exists api_development
dropuser --if-exists api_user

createuser api_user -P -d
createdb api_development -O api_user

The app can now connect to the database but it can't create any 
tables, schemas or anything else unless I give it superuser privileges.


Is there any way I can make this user a superuser for this database 
without making it a superuser on other databases?



that should have worked just fine.


[root@new ~]# useradd fred
[root@new ~]# su - postgres
$ createuser fred
$ createdb fred -O fred
$ logout
[root@new ~]# su - fred
[fred@new ~]$ psql
psql (9.3.19)
Type "help" for help.

fred=> create schema xyzzy;
CREATE SCHEMA
fred=> create table xyzzy.abc (id serial, dat text);
CREATE TABLE
fred=> \q

.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] libpq confusion

2017-09-20 Thread John R Pierce

On 9/20/2017 10:34 AM, Igor Korot wrote:

>From the documentation:
https://www.postgresql.org/docs/9.1/static/libpq-exec.html

[quote]
PGRES_COMMAND_OK

Successful completion of a command returning no data.
[/quote]

No data = no rows, right?


from that same page, a bit farther down, clarifying the potentially 
confusing wording.


   If the result status is PGRES_TUPLES_OK, then the functions described
   below can be used to retrieve the rows returned by the query. Note
   that a SELECT command that happens to retrieve zero rows still
   shows PGRES_TUPLES_OK. PGRES_COMMAND_OK is for commands that can never
   return rows (INSERT, UPDATE, etc.). A response
   of PGRES_EMPTY_QUERY might indicate a bug in the client software.


--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL

2017-09-20 Thread John R Pierce

On 9/20/2017 6:55 AM, Stephen Frost wrote:

If AD is in the mix here, then there's no need to have things happening
at the database level when it comes to passwords- configure PG to use
Kerberos and create a princ in AD and put that on the database server
and then users can authenticate that way.



for web apps?   how does a web browser do kerberos over http ?


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] libpq confusion

2017-09-20 Thread John R Pierce

On 9/20/2017 6:30 AM, Igor Korot wrote:

Hi, guys,

On Wed, Sep 20, 2017 at 12:20 AM, Allan Harvey
<allan.har...@libertyonesteel.com>  wrote:

How do I properly check if the record exists from libpq?

Igor,
I use PQntuples() to check the number of ... tuples, for > 0

I was actually curious - isn't that what "PGRES_COMMAND_OK" is for?
IIUC, this constant indicates a successful query run, but no records were
generated.

Or am I missing something and I will have to check PQntuples()?



a query that returns zero rows is still successful.

--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL

2017-09-19 Thread John R Pierce

On 9/19/2017 3:32 PM, chiru r wrote:

How those application accounts get recognized in database?

Let say  App_user1 authenticated through application ,after that how 
the App_user1 get access to DB?


can you please provide more information ,how the app users are 
accessing database ?



the user isn't accessing the database, the application itself is 
accessing the database.   a web app might have 10s of 1000s of unique 
users, but a web app typically only uses a single application account to 
access the database.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] [HACKERS] USER Profiles for PostgreSQL

2017-09-19 Thread John R Pierce

On 9/19/2017 12:33 PM, chiru r wrote:
Yes, LDAP will do. However, we need to sync the user accounts and 
groups between AD and PG servers, and then AD profiles will apply to 
PG user accounts for authentication.




if you're using LDAP from the AD servers to authenticate, whats to sync?


my database servers, the only 'users' connecting to them directly are 
the database administrators...  the applications connect with 
application accounts, and if more security is required, these use 
certificates, or they use unix 'ident' local connections.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] advisory locks namespace?

2017-09-17 Thread John R Pierce

On 9/17/2017 1:33 PM, Rob Nikander wrote:

Am I right if two applications use advisory locks in the same database, they 
need to know which lock numbers are used to by the other application, to avoid 
conflicts?



indeed.   it also depends if they want to honor each others locks.

--
john r pierce, recycling bits in santa cruz
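As a sketch of one such convention (the class IDs here are illustrative assumptions, not an established standard), each application can reserve its own "class" integer and use the two-argument form of the advisory lock functions, so the lock spaces never overlap:

```sql
-- App A claims class 1001, app B claims class 2002; their locks on the
-- same resource id (42) are distinct and never conflict.
SELECT pg_advisory_lock(1001, 42);      -- app A locks resource 42
SELECT pg_try_advisory_lock(2002, 42);  -- app B: different class, acquires fine
SELECT pg_advisory_unlock(1001, 42);    -- app A releases its lock
```

Honoring each other's locks then simply means agreeing to take locks in the other application's class before touching its resources.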





Re: [GENERAL] New interface to PG from Chapel?

2017-09-15 Thread John R Pierce

On 9/15/2017 12:56 PM, Thelonius Buddha wrote:
I’m interested to know the level of effort to build a psycopg2-like 
library for Chapel: http://chapel.cray.com/ Not being much of a 
programmer myself, does someone have an educated opinion on this?



I don't see any standard database interface frameworks to hang a SQL 
library/driver on.


the fact that it's a heavily concurrent/parallel language would likely 
mean there are many boobytraps en route to successfully using SQL, as you 
need to ensure that one PG connection is only ever used by the thread 
that created it.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] looking for a globally unique row ID

2017-09-14 Thread John R Pierce

On 9/14/2017 12:45 AM, Rafal Pietrak wrote:

Can anybody help me find a way to implement an ID which:

1. guarantees being unique across multiple tables.

2. guarantees its uniqueness not only during INSERT, but also during the
lifetime of the database/application (e.i. during future UPDATES).

3. guarantees persistence of value across database backup/restore/upgrade.



isn't that the problem that GUID are supposed to answer ?


--
john r pierce, recycling bits in santa cruz
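A minimal sketch of the GUID/UUID approach (this assumes PostgreSQL 9.4+ with the pgcrypto extension supplying gen_random_uuid(); table names are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- UUID keys are unique across all tables, survive dump/restore,
-- and never change on UPDATE unless you change them yourself.
CREATE TABLE customers (
    id   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    name text
);

CREATE TABLE orders (
    id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    customer_id uuid REFERENCES customers(id)
);
```

The uniqueness is probabilistic rather than enforced globally, but for randomly generated 128-bit values collisions are not a practical concern.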





Re: [GENERAL] Postgres DB is failed due to pg_Xlog is continues full.

2017-09-13 Thread John R Pierce

On 9/13/2017 9:11 PM, Yogesh Sharma wrote:
>> What you could do is copying its contents to a large disk, and then 
allow it to recover from the crash.
I will copy the PGDATA onto a large disk. After that, is it required to 
execute some specific command, or will recovery start automatically?

If any command is required, please let me know.



you're going to need an experienced postgres admin who understands low 
level disk recovery.    there's a variety of postgres businesses who 
offer such services for hire.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Postgres DB is failed due to pg_Xlog is continues full.

2017-09-13 Thread John R Pierce

On 9/13/2017 8:29 PM, Yogesh Sharma wrote:


We are using Postgres 8.1.18 version.
In Postgres log, we found below logs.
–-
CONTEXT:writing block 0 of relation 1664/0/1260
ERROR: could not write block 0 of relation 1664/0/1260: Bad address


Due to this, the pg_xlog directory has continuously grown until the disk 
filled up, and Postgres has stopped.

Please let me know how to recover this issue.



PostgreSQL 8.1 has been unsupported for quite a long time. 8.1.18 was 
released in 2009, 8.1.23 was the last update of 8.1 in late 2010.


the oldest 'supported' postgres is 9.2, and thats at EOL.

prior to that error, something else catastrophic must have happened to 
the system, that error is more of a side effect. recovering a database 
server that far gone which is running such an obsolete version will 
likely be an expensive proposition. before doing anything, you should 
make a complete backup of the $PGDATA directory (and other tablespace 
directories, if you use any).




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] B-tree index on a VARCHAR(4000) column

2017-09-10 Thread John Turner
On Sun, Sep 10, 2017 at 1:24 PM Tom Lane <t...@sss.pgh.pa.us> wrote:

>
> For every other purpose, PG just pays attention to the actual column
> values' lengths.
>
Thanks for elaborating, Tom.  This would appear to be a(nother) case where
PG represents the voice of sanity as compared with 'the other guys' : )

John


Re: [GENERAL] B-tree index on a VARCHAR(4000) column

2017-09-10 Thread John Turner
On Sun, Sep 10, 2017 at 10:42 AM Merlin Moncure <mmonc...@gmail.com> wrote:

> On Friday, September 8, 2017, John Turner <fenwayri...@gmail.com> wrote:
>
>>
>>
>> On Fri, Sep 8, 2017 at 6:57 AM Tom Lane <t...@sss.pgh.pa.us> wrote:
>>
>>> Ron Johnson <ron.l.john...@cox.net> writes:
>>> > Based on LENGTH(offending_column), none of the values are more than 144
>>> > bytes in this 44.2M row table.  Even though VARCHAR is, by definition,
>>> > variable length, are there any internal design issues which would make
>>> > things more efficient if it were dropped to, for example, VARCHAR(256)?
>>>
>>> No.
>>>
>>> So the declarative column length has no bearing on memory grants during
>> plan generation/execution?
>>
>
> Nope.  Memory usage is proportional to the size of the string, not the
> maximum length for varchar.  Maximum length is a constraint.
>
Ok, thanks for verifying.  I was curious since other platforms seem to
handle this aspect of memory allocation differently (more crudely, perhaps)
based on estimation of how fully populated the column _might_ be given a
size constraint:
https://sqlperformance.com/2017/06/sql-plan/performance-myths-oversizing-strings

John


Re: [GENERAL] B-tree index on a VARCHAR(4000) column

2017-09-08 Thread John Turner
On Fri, Sep 8, 2017 at 6:57 AM Tom Lane  wrote:

> Ron Johnson  writes:
> > Based on LENGTH(offending_column), none of the values are more than 144
> > bytes in this 44.2M row table.  Even though VARCHAR is, by definition,
> > variable length, are there any internal design issues which would make
> > things more efficient if it were dropped to, for example, VARCHAR(256)?
>
> No.
>
> So the declarative column length has no bearing on memory grants during
plan generation/execution?


Re: [GENERAL] SAP Application deployment on PostgreSQL

2017-09-08 Thread John R Pierce

On 9/8/2017 12:34 PM, chiru r wrote:


We have multiple SAP applications running on Oracle as backend and 
looking for an opportunity to migrate from Oracle to PostgreSQL. Has 
anyone ever deployed SAP on PostgreSQL community edition?


Is PostgreSQL community involved in any future road-map of SAP 
application deployment on PostgreSQL?



Does SAP support PostgreSQL ?



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] column names query

2017-09-07 Thread John R Pierce

On 9/7/2017 12:18 AM, haman...@t-online.de wrote:

is there a simple way to retrieve column names from a query - basically the way 
psql adds
column headings when I do a select?


if you're using libpq to do your queries, PQfname(*result, 
column_number) returns the name of that column number.


there are equivalent functions in most other APIs.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Create Action for psql when NOTIFY Recieved

2017-08-29 Thread John McKown
On Tue, Aug 29, 2017 at 10:21 AM, Daniel Verite <dan...@manitou-mail.org>
wrote:

> Jerry Regan wrote:
>
> > I think I could justify the effort to ‘script’ psql. I’m not so sure I
> can
> > justify the effort to write a standalone program.
>
> As a hack around psql, you could have a script that feeds psql
> with "SELECT 1" from time to time and capture only the
> notifications output:
>
>  (echo "listen foo;"; while true; do echo "select 1;"; sleep 1; done) | \
> psql | grep notification
>
> When another session issues NOTIFY foo,  'bar'
>  that output filtered by the above command is, for example:
>
>   Asynchronous notification "foo" with payload "bar" received from
>   server process with PID 20033.
>
> which just needs to be piped into another step that runs your custom
> action.
>

​Sounds like a job for "expect".
https://www.tcl.tk/man/expect5.31/expect.1.html​



>
>
> Best regards,
> --
> Daniel Vérité
>


-- 
Caution! The OP is an hyperpolysyllabicsesquipedalianist and this email may
cause stress to those with hippopotomonstrosesquipedaliophobia.

Maranatha! <><
John McKown


[GENERAL] Installing 9.5 doesn't create 'postgres' unix account on Cent OS 7

2017-08-28 Thread John Lawlor
Hi,


I am installing Postgres 9.5 on a Cent OS 7 server. I have done the following:

yum install 
https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/pgdg-redhat95-9.5-2.noarch.rpm


yum install postgresql95 postgresql95-server.x86_64

Then I try to switch to the 'postgres' user normally created by installing 
postgres:

su - postgres
su: user postgres does not exist

If I install the bundled postgres that comes with Cent OS, the postgres linux 
account is always created.

Not sure why it's not created for 9.5

Regards,

John



Re: [GENERAL] PG and database encryption

2017-08-22 Thread John McKown
On Tue, Aug 22, 2017 at 3:27 PM, rakeshkumar464
<rakeshkumar...@outlook.com> wrote:
> Thanks John and JD.
>
> John: Are you telling that the backup of a database has no protection?

If you use LUKS to encrypt a filesystem and then copy any file data on
that filesystem to another file on an unencrypted filesystem, then the
copy is not encrypted. You'd need to use something like gpg2 to
encrypt it before storing. The same if you dumped the database using a
utility such as pg_dump. I don't know of a way to encrypt a database
such that it is unencrypted transparently for the PostgreSQL server,
but not for some other application which can access the physical
files. And even if this were possible, the pg_dump would output
unencrypted data. This is basically due to your requirement that the
PostgreSQL client (application) not supply a password to PostgreSQL to
do the decryption. Of course, you could embed the
encryption/decryption into the application itself so that the data is
encrypted before it is passed to PostgreSQL to store. But I think that
violates your original requirements.


-- 
If you look around the poker table & don't see an obvious sucker, it's you.

Maranatha! <><
John McKown




Re: [GENERAL] PG and database encryption

2017-08-22 Thread John McKown
On Tue, Aug 22, 2017 at 2:48 PM, rakeshkumar464
<rakeshkumar...@outlook.com> wrote:
> We have a requirement to encrypt the entire database.  What is the best tool
> to accomplish this. Our primary goal is that it should be transparent to the
> application, with no change in the application, as compared to un-encrypted
> database. Reading about pgcrypto module, it seems it is good for few columns
> only and using it to encrypt entire database is not a good use-case.
>
> Is this which can be done best by file level encryption?  What are the good
> tools on Linux (RHES), preferably open-source.
>
> Thanks

In addition to the link that Joshua gave you, there is this:
https://www.enterprisedb.com/blog/postgres-and-transparent-data-encryption-tde

Personally, what I'd do (and actually do at work) is to us LUKS. This
is a "full disk encryption". When the filesystem is mounted, the
system asks for the password. Unfortunately, this method allows all
users who have the proper authority (UNIX & SELinux) to read (maybe
write) the underlying files. Of course, a properly secured environment
would not allow this, but systems can be hacked. And it does not
address any off-filesystem backups, which would need to be separately
encrypted. LUKS is a good method, IMO, to protect the data if the
media is stolen, but not for protecting the individual files from
improper access. SELinux is pretty good at that.


-- 
If you look around the poker table & don't see an obvious sucker, it's you.

Maranatha! <><
John McKown




Re: [GENERAL] Corrupt index

2017-08-15 Thread John R Pierce

On 8/15/2017 1:07 PM, Andreas Kretschmer wrote:

I only wanted to exclude it. Anyway, you should install the latest patches.


he can't, with RDS, he's at the mercy of Amazon for all sysadmin duties.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] pglogical repo

2017-08-10 Thread John R Pierce

On 8/10/2017 10:30 AM, armand pirvu wrote:

Looking at the installatoion steps and the yum repositories

sudo yum install http://packages.2ndquadrant.com/pglogical/yum-repo-rpms/pglogical-rhel-1.0-2.noarch.rpm

[   ] pglogical-rhel-1.0-1.noarch.rpm
[   ] pglogical-rhel-1.0-2.noarch.rpm
[   ] pglogical-rhel-1.0-3.noarch.rpm


1.0-3 is for Postgres 9.6 ? Or can it be used on 9.5 as well ? I am on 9.5



That is clearly answered here, 
https://www.2ndquadrant.com/en/resources/pglogical/pglogical-installation-instructions/



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] About using IMCS moldule

2017-08-10 Thread John McKown
On Tue, Aug 8, 2017 at 4:31 AM, 송기훈 <songk...@gmail.com> wrote:

> Hi, I'm trying to use imcs module to store table space in memory.
> But, It dose not work with 9.6 version anymore.
> Releasing 9.6, lwlockassign() function has been deleted, cause of some
> issues.
>
> So, What I want to ask you something is that postgresql decide not to
> support to imcs module officially (I know imcs module is not updated long
> time and not offcially supporting module). And are there other way to store
> table space in memory only?
>
> ​I just went to https://github.com/knizhnik/imcs to look at IMCS. Just
one day ago (9 Aug 2017), someone put in a new file which says that it
addresses the LWLocks in 9.6.3 . Perhaps if you got this newest version,
this is fixed?​




-- 
If you look around the poker table & don't see an obvious sucker, it's you.

Maranatha! <><
John McKown


Re: [GENERAL] Audit based on role

2017-08-07 Thread John R Pierce

On 8/7/2017 4:33 PM, anand086 wrote:

We are running Postgres 9.6.3 version and have requirement, where we want to
audit any DML action performed by a user whose has module_dml role granted.

What would be the best way to do that? I was thinking to write something
like https://wiki.postgresql.org/wiki/Audit_trigger, but I am not sure on
how to integrate "user whose has module_dml role granted" into the function.

Instead of trigger is there any better way to achieve this?


DML as in select/insert/update/delete ?  or did you mean DDL as in 
CREATE/ALTER TABLE, etc ?



--
john r pierce, recycling bits in santa cruz
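One hedged sketch of wiring the role check into a trigger function: pg_has_role() can test whether the session user has the module_dml role, including via inherited grants. The audit_log table, its columns, and some_table below are assumptions made up for illustration:

```sql
CREATE TABLE audit_log (
    table_name text,
    action     text,
    username   text,
    logged_at  timestamptz
);

CREATE OR REPLACE FUNCTION audit_if_module_dml() RETURNS trigger AS $$
BEGIN
    -- only log actions by users who hold the module_dml role
    IF pg_has_role(session_user, 'module_dml', 'member') THEN
        INSERT INTO audit_log (table_name, action, username, logged_at)
        VALUES (TG_TABLE_NAME, TG_OP, session_user, now());
    END IF;
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_dml
    AFTER INSERT OR UPDATE OR DELETE ON some_table
    FOR EACH STATEMENT EXECUTE PROCEDURE audit_if_module_dml();
```

A statement-level trigger keeps the overhead low; switch to FOR EACH ROW if per-row old/new values need to be captured.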





Re: [GENERAL] bidirectional mapping?

2017-08-02 Thread John McKown
On Wed, Aug 2, 2017 at 10:55 AM, Chris Travers <chris.trav...@gmail.com>
wrote:

>
>
> On Wed, Aug 2, 2017 at 5:44 PM, John McKown <john.archie.mck...@gmail.com>
> wrote:
>
>> Is there a simple way to do bidirectional mapping of a table with itself?
>> I am thinking of a "spousal" type relationship, where it is true that if A
>> is spouse of B, then B is spouse of A. I don't necessarily want "A" and "B"
>> to be a monogamous relationship because that is not always be true world
>> wide. The best I can come up with so far is something like:
>>
>> CREATE TABLE forespouse (PERSON integer PRIMARY KEY,
>> SPOUSE integer UNIQUE
>> CHECK( PERSON != SPOUSE) -- sorry, can't marry self
>> );
>> CREATE UNIQUE INDEX ON forespouse(PERSON, SPOUSE);
>> CREATE UNIQUE INDEX ON forespouse(SPOUSE, PERSON);
>> -- I'm not sure that the above indices are needed.
>>
>> CREATE VIEW backspouse AS SELECT SPOUSE, PERSON FROM forespouse;
>> CREATE VIEW spouse AS
>>  SELECT PERSON, SPOUSE FROM forespouse
>>  UNION
>>  SELECT SPOUSE, PERSON FROM backspouse
>> ;
>>
>
> Usually the way I have done this is to normalise the representation and
> use a table method for converting for joins.  In other words:
>
> create table marriage (party integer primary key, counterparty integer
> unique, check (party < counterparty));
>

​I _knew_ there must be a better way. I just didn't see it. Many thanks!​



>
> This way you can ensure that each relationship is only recorded once.
>
> Then I would create a function that returns an array of the parties.
>
> CREATE OR REPLACE FUNCTION parties(marriage) returns int[] language sql as
> $$
> select array[$1.party, $1.counterparty];
> $$;
>
> Then you can create a gin index:
>

​I need to become familiar with "gin" indices, I guess. I'm a bit behind in
my knowledge of PostgreSQL. I also try to use "plain old SQL" as defined in
the "standard". Mainly because I use both PostgreSQL and SQLite.​



>
> create index marriage_parties_idx on marriage using gin(parties(marriage));
>
> Then you can query on:
> select ... from people p1 where first_name = 'Ashley'
>  join marriage m on p1 = any(marriage.parties)
>  join people p2 on p2 = any(marriage.parties) and p2.id <> p1.id
>
>>
>>
> --
> Best Wishes,
> Chris Travers
>
> Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
> lock-in.
> http://www.efficito.com/learn_more
>



-- 
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown
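Assembling the pieces from this thread into one runnable sketch (note the CHECK expression needs parentheses, the expression function must be IMMUTABLE to be indexable, and the containment operator @> is what lets the gin index be used):

```sql
CREATE TABLE marriage (
    party        integer PRIMARY KEY,
    counterparty integer UNIQUE,
    CHECK (party < counterparty)  -- normalised order: each pair stored once
);

-- functions used in index expressions must be IMMUTABLE
CREATE OR REPLACE FUNCTION parties(marriage) RETURNS int[]
LANGUAGE sql IMMUTABLE AS $$
    SELECT ARRAY[$1.party, $1.counterparty];
$$;

CREATE INDEX marriage_parties_idx ON marriage USING gin (parties(marriage));

INSERT INTO marriage VALUES (3, 7);

-- who is married to person 7?  array containment (@>) can use the gin index
SELECT CASE WHEN m.party = 7 THEN m.counterparty ELSE m.party END AS spouse
FROM marriage m
WHERE parties(m) @> ARRAY[7];
```

Because each pair is stored exactly once in normalised order, polygamous relationships are simply additional rows, and no UNION view is needed.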


[GENERAL] bidirectional mapping?

2017-08-02 Thread John McKown
Is there a simple way to do bidirectional mapping of a table with itself? I
am thinking of a "spousal" type relationship, where it is true that if A is
spouse of B, then B is spouse of A. I don't necessarily want "A" and "B" to
be a monogamous relationship because that is not always be true world wide.
The best I can come up with so far is something like:

CREATE TABLE forespouse (PERSON integer PRIMARY KEY,
SPOUSE integer UNIQUE
CHECK( PERSON != SPOUSE) -- sorry, can't marry self
);
CREATE UNIQUE INDEX ON forespouse(PERSON, SPOUSE);
CREATE UNIQUE INDEX ON forespouse(SPOUSE, PERSON);
-- I'm not sure that the above indices are needed.

CREATE VIEW backspouse AS SELECT SPOUSE, PERSON FROM forespouse;
CREATE VIEW spouse AS
 SELECT PERSON, SPOUSE FROM forespouse
 UNION
 SELECT SPOUSE, PERSON FROM backspouse
;


-- 
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown


Re: [GENERAL] Invalid byte sequence for encoding UTF-8 0xc3\n

2017-07-30 Thread John R Pierce

On 7/30/2017 1:43 PM, Igor Korot wrote:

what encodings are default on your system ?`\l+` in psql should show the
encodings.

Is this "backslash + pipe + plus-sign"?

Trying it gives: "Invalid command".



\ + lower case L + plus sign, thats the psql metacommand to list all 
databases with extra info


postgres=# \l+
List of databases
      Name      |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   | Tablespace |                Description
----------------+-----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------
 junk           | pierce    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6586 kB | pg_default |
 observers_test | observers | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 207 MB  | pg_default |
 pierce         | pierce    | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6786 kB | pg_default |
 postgres       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6610 kB | pg_default | default administrative connection database
 scac           | scac      | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 75 MB   | pg_default |
 scac_save      | scac      | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 105 MB  | pg_default |
 smf            | smf       | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 34 MB   | pg_default |
 sympa          | sympa     | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6898 kB | pg_default |
 template0      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 6457 kB | pg_default | unmodifiable empty database
                |           |          |             |             | postgres=CTc/postgres |         |            |
 template1      | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 6465 kB | pg_default | default template for new databases
                |           |          |             |             | postgres=CTc/postgres |         |            |
 tendenci       | tendenci  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 15 MB   | pg_default |
 test           | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 6634 kB | pg_default |
(12 rows)


for instance, all my databases are UTF8 on that server.


--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Invalid byte sequence for encoding UTF-8 0xc3\n

2017-07-30 Thread John R Pierce

On 7/30/2017 1:19 PM, Igor Korot wrote:

I am using a database for my project that I created inside SQLite3.
This database contains a table called "abc<ALT+225>" (it is "abc" +
symbol with the code 225 -
greek letter "beta or a German symbol for "ss").


in what encoding?   in ISO 8859-1, -15, beta aka sharp S is code 223 
(U+00DF), not 225.  in UTF-8, its C3,9F.



...
Both the database and the table are created with default encoding.


what encodings are default on your system ?`\l+` in psql should show 
the encodings.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Question about paritioning

2017-07-27 Thread John R Pierce

On 7/27/2017 12:43 AM, Alex Samad wrote:


... as long as the queries stay on a small number of partitions, we 
should be okay.


thats true as long as the planner can restrict the queries to the 
correct partition...  but there's lots of queries that end up having to 
hit all partitions because the planner can't guess correctly.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Question about paritioning

2017-07-26 Thread John R Pierce

On 7/26/2017 10:08 PM, Alex Samad wrote:
I have a large table about 3B rows, that I would like to partition on 
a column called _received which is  of type timestamp




a good goal is to have no more than about 100 partitions max, and 
ideally more like 25.


when we partition on time stamp, we typically do it by the week, as 
we're doing 6 month data retention.


IIRC, we're using DATE_TRUNC('week', timestamp)::DATE  for use as the 
partition label and key.




--
john r pierce, recycling bits in santa cruz
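On 9.x this style of partitioning is built with table inheritance plus CHECK constraints; a hedged sketch of one weekly child following the DATE_TRUNC('week', ...) convention above (table and column names are assumptions based on the question):

```sql
CREATE TABLE events (
    id        bigserial,
    _received timestamp NOT NULL,
    payload   text
);

-- one child table per week; the CHECK constraint is the partition key
CREATE TABLE events_2017_07_24 (
    CHECK (_received >= '2017-07-24' AND _received < '2017-07-31')
) INHERITS (events);

-- with constraint_exclusion = partition (the default), queries that
-- compare _received to constants can skip the irrelevant children
SELECT count(*) FROM events
WHERE _received >= '2017-07-24' AND _received < '2017-07-25';
```

A trigger or application-side routing is still needed to direct INSERTs into the correct child; the planner only handles the read side.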





Re: [GENERAL] ODBC driver issue

2017-07-26 Thread John R Pierce

On 7/26/2017 9:06 PM, Igor Korot wrote:

With the char(), is there a "Standard SQL" way to do trimming?



trim(trailing from fieldname)


but really, if you want a variable length string without padding, don't 
use CHAR() as a data type.  use VARCHAR or TEXT.



--
john r pierce, recycling bits in santa cruz
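A small sketch of the difference (column sizes chosen to match the thread; the length()/octet_length() contrast shows that the padding is really stored and returned for char, which is what the ODBC driver was faithfully reporting):

```sql
-- char(n) is blank-padded to n characters; varchar is not
CREATE TABLE demo (c char(129), v varchar(129));
INSERT INTO demo VALUES ('My field text', 'My field text');

SELECT length(c)               AS char_len,    -- trailing blanks not counted
       octet_length(c)         AS char_octets, -- padding included: 129
       trim(trailing from c)   AS trimmed,     -- standard SQL trim
       length(v)               AS varchar_len  -- 13, no padding
FROM demo;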





Re: [GENERAL] ODBC driver issue

2017-07-26 Thread John R Pierce

On 7/26/2017 7:25 PM, Igor Korot wrote:

When I tried to query the database table with the column "char(129)" I get:

"My field text"

(the text with the bunch of spaces at the end).

The driver is not the current one - but I don't remember the version.

Is this known issue? Maybe its already fixed with the latest ODBC driver?


a char(129) field is 129 characters long, and will always be padded with 
spaces to that length.


maybe you wanted a varchar (aka char varying) ?  these are variable 
length and return just what you put in them.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Backward compatibility

2017-07-22 Thread John McKown
On Fri, Jul 21, 2017 at 10:21 PM, Igor Korot <ikoro...@gmail.com> wrote:

> Hi, guys,
> Below query does not even run:
>
> SELECT version(), substring( version() from position( '\s' in version() )
> );
>
> Could you spot the error?
>
>
​works for me.

psql
psql (9.5.7)
Type "help" for help.

joarmc=# SELECT version(), substring( version() from position( '\s' in
version() ) );
  version
  |

 substring
-+

-
PostgreSQL 9.5.7 on x86_64-redhat-linux-gnu, compiled by gcc (GCC) 6.3.1
20161221 (Red Hat 6.3.1-1), 64-bit | PostgreSQL 9.5.7 on
x86_64-redhat-linux-gnu, co
mpiled by gcc (GCC) 6.3.1 20161221 (Red Hat 6.3.1-1), 64-bit
(1 row)

​


-- 
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown


Re: [GENERAL] Logging at schema level

2017-07-21 Thread John R Pierce

On 7/20/2017 11:11 PM, Nikhil wrote:
Schema = tenant. So basically tenant level logging. 



select  from schema1.table1 join schema2.table2 on  where 
.;



if you have per schema logging, where should that get logged ?

you could implement per DATABASE logging, if you A) add the database 
name to the log_prefix, and B) feed your logs to a program that 
understands this and splits them out to a log file per database.you 
could also do this on a  per user basis. but, schema is something very 
dynamic, its a namespace within a database, and queries can touch 
multiiple schemas.



--
john r pierce, recycling bits in santa cruz
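A sketch of the server-side half of that approach: the %d escape puts the database name into each log line, and splitting into per-database (i.e. per-tenant-group) files is then left to the external log processor:

```sql
-- requires superuser; the new prefix takes effect after a config reload
ALTER SYSTEM SET log_line_prefix = '%m [%p] db=%d,user=%u ';
SELECT pg_reload_conf();
```

Setting log_destination to csvlog gives the same fields in machine-parseable form, which makes the downstream splitter simpler.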





Re: [GENERAL] Logging at schema level

2017-07-20 Thread John R Pierce

On 7/20/2017 10:10 PM, Nikhil wrote:


I am using postgresql schema feature for multi-tenancy. can we get 
postgresql logs at schema level. Currently it is for the whole 
database server (pg_log)


if you /were/ able to split the logs by schema, and a query touched 
multiple schemas, then where would that get logged?



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Backward compatibility

2017-07-20 Thread John R Pierce

On 7/20/2017 8:40 PM, Tom Lane wrote:

 Applications might use this function to determine the version of the
 database server they are connected to. The number is formed by
 converting the major, minor, and revision numbers into
 two-decimal-digit numbers and appending them together. For example,
 version 8.1.5 will be returned as 80105, and version 8.2 will be
 returned as 80200 (leading zeroes are not shown). Zero is returned
 if the connection is bad.

Hm, we need to update that text for the new 2-part version numbering
scheme, don't we?



will 10 return like 100100 if its 10.1, or 11 ?


--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Backward compatibility

2017-07-20 Thread John R Pierce

On 7/20/2017 7:57 PM, David G. Johnston wrote:


Actually, The docs do cover how to do this directly in libpq.


odds are pretty good that...

   PQserverVersion

   Returns an integer representing the backend version.

   int PQserverVersion(const PGconn *conn);

   Applications might use this function to determine the version of the
   database server they are connected to. The number is formed by
   converting the major, minor, and revision numbers into
   two-decimal-digit numbers and appending them together. For example,
   version 8.1.5 will be returned as 80105, and version 8.2 will be
   returned as 80200 (leading zeroes are not shown). Zero is returned
   if the connection is bad.


Actually invokes `show server_version_num;'



--
john r pierce, recycling bits in santa cruz
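The same information is available from plain SQL, without going through libpq (server_version_num has been available since 8.2; under the two-part scheme starting with version 10, 10.1 comes back as 100001):

```sql
SHOW server_version_num;                           -- e.g. 90507 for 9.5.7

-- or as an integer usable directly in expressions:
SELECT current_setting('server_version_num')::int;
```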



Re: [GENERAL] Backward compatibility

2017-07-20 Thread John R Pierce

On 7/20/2017 7:46 PM, Igor Korot wrote:

Also, I presume there is no special libpq function, right?



libpq would only be able to return the libpq version, which might not be 
the same as the server version.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] What is exactly a schema?

2017-07-14 Thread John R Pierce

On 7/14/2017 4:59 AM, marcelo wrote:

Now I have a related question.
Could I select a specific schema in the connection string? Say, by 
example database=mydb.schemanumbertwo ? 


the default search_path is $user,public

so if you connect with different SQL usernames for your different 
schemas, and have all your common tables in PUBLIC, then it will just 
fall out.   you'll need to be careful with permissions, of course.



--
john r pierce, recycling bits in santa cruz
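The schema can't be selected inside the database name, but the search_path can be pinned per role so it is applied automatically at connect time; a sketch (role, schema, and password names here are illustrative):

```sql
-- tenant2's login sees its own schema first, then the shared tables
CREATE ROLE tenant2 LOGIN PASSWORD 'secret';
ALTER ROLE tenant2 SET search_path = schemanumbertwo, public;

-- alternatively, rely on the default $user,public by naming
-- the schema after the role:
CREATE SCHEMA AUTHORIZATION tenant2;
```

With libpq-based drivers it can also be passed per connection via the options keyword, e.g. options=-csearch_path=schemanumbertwo in the connection string.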





Re: [GENERAL] Postgresql - commercial version

2017-07-07 Thread John R Pierce

On 7/7/2017 8:22 AM, Terry Schmitt wrote:
You have the open source version installed. Now where it was installed 
from is a different story.



if its a redhat/centos/fedora type system, try...

rpm -qa |grep ^postgres

if it was installed from rpm's, the full names of the rpms will clue you in.

also, see what path postgres's tools are installed in, the RPM versions 
install everything in /usr/pgsql-9.x/bin while EDB typically installs to 
/opt or something.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Postgresql - commercial version

2017-07-07 Thread John McKown
On Fri, Jul 7, 2017 at 9:04 AM, Krithika Venkatesh <
krithikavenkates...@gmail.com> wrote:

>  select version () gives me output something like this
>
> PostgreSQL 9.5.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.1.2
> 20080704 (Red hat 4.1.2-55), 64 bit.
>
> Is there any other way to find
>
>
​I don't have access to EDB, just the community version (from the Fedora
RPM). But from looking at the manual:
https://www.enterprisedb.com/docs/en/9.6/DB_Compat_Oracle_Reference/Database_Compatibility_for_Oracle_Developers_Reference_Guide.1.203.html

you might try:

SELECT * FROM PRODUCT_COMPONENT_VERSION;

This table does not exist in the version which I have, and so I am guessing
that it would be a decent test.


-- 
Veni, Vidi, VISA: I came, I saw, I did a little shopping.

Maranatha! <><
John McKown


Re: [GENERAL] Is PL-PGSQL interpreted or complied?

2017-07-05 Thread John R Pierce

On 7/5/2017 5:10 PM, Tim Uckun wrote:
I am curious about the stored proc languages inside of postgres. When 
I write a stored proc is it compiled to some internal representation 
or just interpreted?




plpgsql is interpreted directly, I don't believe its even pre-tokenized.


How does this work with other languages?


that varies with the language..  PLJava is compiled to java byte codes 
by the javac compiler even before its loaded (as you load the 
precompiled .jar file with the pljava loader),   plpython uses .pyc 
files, same as if python is run from the command line,  plperl is direct 
interpreted, same as perl normally.   embedded C is precompiled to 
machine language as you just load the DLL/SO files into postgres etc 
etc.


Also would it be possible to extract PL-PGSQL into a standalone 
(albeit crippled) language? Is the interpreter/compiler modular like that?



the interpreter *IS* SQL, which is the whole database server.   I don't 
think a standalone plpgsql without SQL would be of much use.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Imperative Query Languages

2017-07-04 Thread John Turner
(copying the list)

On Wed, Jul 5, 2017 at 12:22 AM, Jason Dusek <jason.du...@gmail.com> wrote:

> Are there any “semi-imperative” query languages that have been tried in
> the past?
>
not particularly relevant to the Unix or Windows worlds, but on OpenVMS
there's Datatrieve:
https://en.wikipedia.org/wiki/DATATRIEVE

-John

On Wed, Jul 5, 2017 at 12:22 AM, Jason Dusek <jason.du...@gmail.com> wrote:

> Hi All,
>
> This more of a general interest than specifically Postgres question. Are
> there any “semi-imperative” query languages that have been tried in the
> past? I’m imagining a language where something like this:
>
> for employee in employees:
> for department in department:
> if employee.department == department.department and
>department.name == "infosec":
> yield employee.employee, employee.name, employee.location, 
> employee.favorite_drink
>
> would be planned and executed like this:
>
> SELECT employee.employee, employee.name, employee.location, 
> employee.favorite_drink
>   FROM employee JOIN department USING (department)
>  WHERE department.name == "infosec"
>
> The only language I can think of that is vaguely like this is Fortress, in
> that it attempts to emulate pseudocode and Fortran very closely while being
> fundamentally a dataflow language.
>
> Kind Regards,
>
>   Jason
> ​
>


Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread John R Pierce

On 6/8/2017 6:36 PM, marcinha rocha wrote:

UPDATE tablea a SET migrated = yes WHERE a.id = row.id;
On my original select, the row will have migrated = false. Maybe All I 
need to put is a limit 2000 and the query will do the rest?


 SELECT does not return data in any determinate order unless you use an 
ORDER BY   so LIMIT 2000 would return some 2000 elements, not 
necessarily the 'first' 2000 elements unless you somehow order them by 
however you feel 'first' is defined.



WITH ids AS (INSERT INTO tableb (id) SELECT id FROM tablea WHERE 
migrated=FALSE ORDER BY id LIMIT 2000 RETURNING id)
UPDATE tablea a SET migrated=TRUE FROM ids WHERE a.id = ids.id 
RETURNING COUNT(a.id);




I'm not 100% sure you can do UPDATE  RETURNING COUNT(...), worst 
case the UPDATE RETURNING would be a subquery of a SELECT COUNT()...
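
Since aggregates aren't allowed in a RETURNING list, the count can indeed be pulled out with one more CTE; a sketch using the table names from the thread:

```sql
WITH ids AS (
    -- copy the next batch of unmigrated rows
    INSERT INTO tableb (id)
    SELECT id FROM tablea WHERE migrated = FALSE
    ORDER BY id LIMIT 2000
    RETURNING id
), updated AS (
    -- flag the same rows as migrated in the source table
    UPDATE tablea a SET migrated = TRUE
    FROM ids
    WHERE a.id = ids.id
    RETURNING a.id
)
SELECT count(*) FROM updated;   -- the count happens outside RETURNING
```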



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Function with limit and offset - PostgreSQL 9.3

2017-06-08 Thread John R Pierce

On 6/8/2017 5:53 PM, marcinha rocha wrote:
Hi guys! I have the following queries, which will basically select 
data, insert it onto a new table and update a column on the original 
table.



I'm sure your example is a gross simplification of what you're really 
doing, but if that's really all you're doing, why not do it all at once, 
instead of row at a time?



BEGIN;
insert into tableb (id) select id from tablea;
update tablea set migrated=true;
COMMIT;


thats far more efficient than the row-at-a-time iterative solution you 
showed.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Db backup

2017-06-08 Thread John R Pierce

On 6/8/2017 12:29 PM, John R Pierce wrote:

On 6/8/2017 12:10 PM, Jimmy Lunceford wrote:
A windows 7 computer went bad but the hdd is still operable. Is there 
a way to transfer the db data directories to another computer or does 
it require a dump and restore?



the data directories should work as-is on another Windows computer 
with the same major version and 'build' of postgresql and same 32 vs 
64 bitness.   By 'build' I mean like enterpriseDB installer vs BigSQL 
vs custom-built.   if custom built, the build options will need to be 
pretty much the same (visualC vs gcc/mingw vs cygwin, and the same 
general ./configure options, etc etc).



to restore said file system level backup,  I would...

A) install the compatible postgresql build (if the original was 9.3.5, 
its fine to use 9.3.17)


B) stop the postgresql database service

C) replace the 'data' directory with a copy of the data directory from 
the disk of said dead box


D) restart the postgresql database service

E) verify your databases are intact and complete.


if you used custom tablespaces, be sure to restore all those at step C, 
too, with the same paths as the originals.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Db backup

2017-06-08 Thread John R Pierce

On 6/8/2017 12:10 PM, Jimmy Lunceford wrote:
A windows 7 computer went bad but the hdd is still operable. Is there 
a way to transfer the db data directories to another computer or does 
it require a dump and restore?



the data directories should work as-is on another Windows computer with 
the same major version and 'build' of postgresql and same 32 vs 64 
bitness.   By 'build' I mean like enterpriseDB installer vs BigSQL vs 
custom-built.   if custom built, the build options will need to be 
pretty much the same (visualC vs gcc/mingw vs cygwin, and the same 
general ./configure options, etc etc).




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread John R Pierce

On 6/5/2017 5:49 PM, David G. Johnston wrote:
On Mon, Jun 5, 2017 at 5:40 PM, John R Pierce <pie...@hogranch.com 
<mailto:pie...@hogranch.com>>wrote:


​i​
ndeed, any sort of constraint that invokes a function call which
looks at other tables could later be invalidated if those other
tables change, and postgres would be none the smarter.   the same
goes for trigger based checks.


​ Yes.  I could imagine a new kind of "multi-referential trigger" that 
would specify all relations it touches and the function to fire when 
each of them is updated.  While you'd still have to write the 
functions correctly it would at least allow one to explicitly model 
the multi-table dynamic in pg_catalog.  Lacking that CHECK is no worse 
than TRIGGER and we've decided to say "use triggers".



at $job, the policy is, AVOID ALL TRIGGERS AND FANCY CONSTRAINTS :)

they don't even like using foreign key references, and rely on code 
logic to do most joins in the performance-critical OLTP side of things.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread John R Pierce

On 6/5/2017 5:32 PM, David G. Johnston wrote:
On Mon, Jun 5, 2017 at 5:15 PM, Ken Tanzer <ken.tan...@gmail.com 
<mailto:ken.tan...@gmail.com>>wrote:


From the docs:
https://www.postgresql.org/docs/9.6/static/sql-createtable.html
<https://www.postgresql.org/docs/9.6/static/sql-createtable.html>
"Currently, CHECK expressions cannot contain subqueries nor
refer to variables other than columns of the current row. The
system column tableoid may be referenced, but not any other
system column.


I wonder if that should say "should not," or be followed by
something like this:


Make it say "must not" and I'd agree to change the word "cannot" and 
leave the rest.  Adding a note regarding functions seems appropriate.


Aside from being a bit more verbose there is nothing useful that 
writing this as "CHECK function()" provides that you don't also get by 
writing "CREATE TRIGGER". In a green field we'd probably lock down 
CHECK a bit more but there is too much code that is technically wrong 
but correctly functioning that we don't want to break.  IOW, we cannot 
mandate that the supplied function be immutable even though we 
should.  And we don't even enforce immutable execution if a function 
is defined that way.



indeed, any sort of constraint that invokes a function call which looks 
at other tables could later be invalidated if those other tables change, 
and postgres would be none the smarter.   the same goes for trigger 
based checks.




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Help with restoring a dump in Tar format? (dependencies/ordering)

2017-06-05 Thread John R Pierce

On 6/5/2017 5:15 PM, Ken Tanzer wrote:
I can't really make this an FK.  I can (and probably will) put this 
into a trigger.  Although it seems like an extra layer of wrapping 
just to call a function.  I'm curious if there's any conceptual reason 
why constraints couldn't (as an option) be restored after all the data 
is loaded, and whether there would be any negative consequences of 
that?  I could see if your data still didn't pass the CHECKs, it's 
already loaded.  But the constraint could then be marked not valid?



when you have constraints that rely on calling functions, how would it 
know what order to check things in ?



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Access Management question

2017-05-30 Thread John R Pierce

On 5/30/2017 2:06 PM, chiru r wrote:
Is there any reason, why it is showing roles name as owner of table 
instead of user?


'user' is a synonym for 'role' with login permission.

CREATE USER fred;

is exactly the same as...

CREATE ROLE fred WITH LOGIN;

when you SET ROLE rolename;   its virtually the same as logging in as 
rolename


Why it is not allowed to GRANT SUPERUSER/CREATEDB etc roles Directly 
to user (using GRANT ,not ALTER USER) ?



GRANT is used to grant object related permissions or to grant role 
membership, its not a 'role' nor do roles inherit special attributes 
like SUPERUSER, CREATEDB.
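
For example (role, user, and table names are assumptions, not from the thread):

```sql
CREATE ROLE admins NOLOGIN;           -- a group role
CREATE ROLE fred LOGIN;               -- a user (role with login)
GRANT admins TO fred;                 -- GRANT handles role membership...
GRANT SELECT ON mytable TO admins;    -- ...and object privileges
ALTER ROLE fred CREATEDB;             -- attributes like CREATEDB/SUPERUSER
                                      -- need ALTER ROLE, not GRANT
```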




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?

2017-05-24 Thread John R Pierce

On 5/23/2017 11:39 PM, Ken Tanzer wrote:


Can I also ask y'all a more general question about this, specifically 
related to how Postgres is packaged for RHEL/Centos?  I've got both 
9.6 and 9.2 installed.  In this case though, it seems that the 9.2 
version is privileged/selected by default.  But psql defaults to the 
9.6 version.  Are there other similar things that will default to 
either 9.2 or 9.6?  And if so, what controls that behavior, is it 
easily-changeable, and/or can you go back and forth?


I've never tried running two versions at once before.  Maybe this is 
an isolated incident, but I'm just trying to get my mind around the 
concept, and know what kind of pitfalls if any to expect or beware of. 
Thanks!




when you run multiple versions, you need to keep the path *and* the port 
straight.  each server running is on a separate port. I have one dev box 
at work that runs pg 9.3, 9.4, 9.5, and 9.6, all on separate ports.



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Have just libpg installer

2017-05-23 Thread John R Pierce

On 5/23/2017 7:45 PM, Igor Korot wrote:

Because I can't find libpg.so or libpg.dylib or libpg.Framework anywhere.

Is there a place to get them?



maybe because thats not the correct name?  its libpq as in PQ. I have no 
idea what a .Framework file is, but any of the postgres installers for 
Mac OSX should install libpq.dylib and the appropriate .h files etc.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Installing module for 9.6, not 9.2, on Centos?

2017-05-23 Thread John R Pierce

On 5/23/2017 4:54 PM, Ken Tanzer wrote:

But the install still goes to 9.2:

PATH=/usr/local/pgsql96/bin/:$PATH make USE_PGXS=1 install

/bin/mkdir -p '/usr/pgsql-9.2/share/contrib'



earlier you said something about /usr/pgsql-9.6/bin ...   is it that, or 
is it /usr/local/pgsql96/bin ?


does

/usr/pgsql-9.6/bin/pg_config

output a whole pile of directory assignments that make sense ?

or does

/usr/local/pgsql96/bin/pg_config

do that?



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] libpg sources

2017-05-23 Thread John McKown
On Tue, May 23, 2017 at 9:39 AM, Albe Laurenz <laurenz.a...@wien.gv.at>
wrote:

> Igor Korot wrote:
>
> > Can I put libpg sources into my project? Or I will have to provide
> binaries?
>
> You can do anything you want as long as you stick with the license:
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=
> blob_plain;f=COPYRIGHT;hb=HEAD
>
> All you have to do is include the following information:
>
>
Being as I am (for good or ill), I really wish there was a file entitled
LICENSE which detailed this explicitly.


-- 
Windows. A funny name for a operating system that doesn't let you see
anything.

Maranatha! <><
John McKown


Re: [GENERAL] Weird periodical pg log

2017-05-19 Thread John R Pierce

On 5/19/2017 1:25 AM, cen wrote:
< 2017-05-15 17:00:41.861 CEST >LOG:  parameter "archive_command" 
changed to ""/opt/omni/lbin/pgsqlbar.exe" -stage %p -backup"



I believe /opt/omni is the default installation path for HP Data 
Protector, formerly known as OmniBack.  That comprehensive backup 
system includes database backup capabilities, I'm guessing thats what 
you're seeing here.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Call for users to talk about table partitioning

2017-05-18 Thread John R Pierce

On 5/18/2017 2:29 PM, Robert Eckhardt wrote:
All the code for creating and managing partitions is part of the core 
Postgres code. What we are interested in looking into is what that 
work flow might look like and how that workflow can be supported with 
a GUI management tool. 



only thing I'd expect from a GUI management tool would be to allow me to 
create partitioned tables and its partitions, and display their 
attributes appropriately.




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Error that shouldn't happen?

2017-05-18 Thread John R Pierce

On 5/18/2017 1:40 PM, Andrew Kerber wrote:
It appears to me you might be making this a lot more difficult than 
necessary. Why not just pre-create the required partitions daily or 
weekly or monthly? Or do you have a requirement that a new partition 
only be created the first time it is required?


+1

we create new partitions in advance of their being needed as part of a 
maintenance process that's strictly single threaded.
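
A sketch of such pre-creation, assuming PostgreSQL 10's declarative syntax (on 9.x the same idea applies using inheritance children plus CHECK constraints); table and partition names are illustrative:

```sql
CREATE TABLE events (
    id         bigserial,
    created_at timestamptz NOT NULL
) PARTITION BY RANGE (created_at);

-- run from the single-threaded maintenance job, ahead of need:
CREATE TABLE events_2017_06 PARTITION OF events
    FOR VALUES FROM ('2017-06-01') TO ('2017-07-01');
```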


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] PostgrSQL server : CPU and Memory

2017-05-17 Thread John R Pierce

On 5/17/2017 8:50 PM, kaustubh kelkar wrote:
I am a developer and I want to monitor metrics related to CPU and 
Memory of PostgreSQL server only using SQL queries.


postgres itself doesn't track that level of OS specific stuff.



Do we have any system tables which can give us the data? I have found 
this one
https://aaronparecki.com/2015/02/19/8/monitoring-cpu-memory-usage-from-postgres 


but dont know whether it is exactly the same what I need.



thats reading the /proc/ stuff available on Linux systems via SQL, 
its the same stuff OS level tools like ps, top, free use... it won't 
work on non-Linux systems, even other Unix systems like BSD don't have 
the same /proc stuff, and other OS's like Windows don't have /proc at all.
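
What postgres itself does expose through plain SQL is its own statistics views, for instance:

```sql
-- per-database activity counters maintained by the statistics collector
SELECT datname, numbackends, xact_commit, blks_read, blks_hit
FROM pg_stat_database;

-- currently running backends and their queries
SELECT pid, state, query_start, query
FROM pg_stat_activity;
```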


if you don't know whether that's what you need, I'm not sure how we 
could know what you need.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] column names and dollar sign

2017-05-17 Thread John R Pierce

On 5/17/2017 4:51 PM, Gavin Flower wrote:
Variables ending in '$' date back to at least the early days of BASIC 
- long before the spectre of Microsoft loomed large, let alone 'Visual 
Basic'! 



I note even INT fields have $ names there...   IBM used to like to use $ 
in names for system stuff, SYS$BLAHBLAH or whatever.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] storing large files in database - performance

2017-05-16 Thread John R Pierce

On 5/16/2017 5:25 AM, Eric Hill wrote:
I do have the Sequelize ORM and the pg driver in between my code and 
the database. 



Can you try a similar test without the ORM, just going straight from 
node.js to sql ?



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] storing large files in database - performance

2017-05-16 Thread John R Pierce

On 5/16/2017 7:35 AM, Thomas Kellerer wrote:

When my (JDBC based) SQL client and the database server are on the same 
computer...


node.js is Javascript, not java w/ jdbc


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] EnterpriseDB installed PostgreSQL 9.6 vs. REPMGR. Round 2 - compilation issues.

2017-05-14 Thread John R Pierce

On 5/14/2017 2:20 PM, Joshua D. Drake wrote:

A bit outdated but:

http://raghavt.blogspot.com/2015/01/configuring-automatic-failover-using.html 




the advice on that page to unlink a library in /lib64 on an RPM managed 
system makes me very leery of the rest of the article.



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] How to recover my postgres database ?

2017-05-11 Thread John R Pierce

On 5/11/2017 9:53 AM, Pierre Couderc wrote:
I have the pg_dumpall of last  night, but many dbs have changed 
today... ? 


suggestion in the future, instead of simply pg_dumpall, where all your 
databases are in one opaque lump, try something like...


#!/bin/bash
#
d=`date +\%a`
dst=/home2/backups/pgsql
/usr/pgsql-9.3/bin/pg_dumpall --globals-only | gzip > $dst/pgdumpall.globals.$d.sql.gz
for i in $(psql -tc "select datname from pg_database where not datistemplate"); do
    pg_dump -Fc -f $dst/pgdump.$i.$d.dump $i
done


which uses pg_dumpall to dump the globals only into one file, then uses 
pg_dump -Fc to create compressed format dumps of each individual 
database.  these can be selectively restored with pg_restore (for 
instance, you could restore just one table, or schema only, or data 
only, etc etc).  that script is set up to create a different set of 
files for each day of the week, so you have 7 days backup history; 
change the parameter of the d=`date...  line if you want a different 
backup rotation scheme, and of course, dst is the destination




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Top posting....

2017-05-11 Thread John McKown
On Thu, May 11, 2017 at 12:04 PM, Francisco Olarte <fola...@peoplecall.com>
wrote:

> Slightly unrelated...
>
> ​
>
>
> Full quoting ( I mean the people which even quotes others signatures )
> is especially ugly, combined with top posting I feel it as insulting (
> to me it feels as 'you do not deserve me taking time to edit a bit and
> make things clear' ) ( but well, I started when all the university
> multiplexed over a 9600bps link, so I may be a bit extreme on this )
>

​I feel the same way. Because I started out long ago with a 300bps acoustic
modem with a _dial_ phone!​



>
> Regards.
> Francisco Olarte.
>


-- 
Advertising is a valuable economic factor because it is the cheapest way of
selling goods, particularly if the goods are worthless. -- Sinclair Lewis


Maranatha! <><
John McKown


Re: [GENERAL] How to recover my postgres database ?

2017-05-11 Thread John R Pierce

On 5/11/2017 9:53 AM, Pierre Couderc wrote:

I have broken my postgres database by typing :

psql How can I safely repair, knowing that I have the pg_dumpall of last  
night, but many dbs have changed today... ?


Thanks in advance 



was there anything in the postgres database other than the default?

psql template1 -c "drop database postgres; create database postgres with 
template template0"



should restore it to a virgin stock empty 'postgres'


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL]

2017-05-10 Thread John R Pierce
 varying::information_schema.yes_or_no AS 
is_self_referencing,

'NO'::character varying::information_schema.yes_or_no AS is_identity,
NULL::character varying::information_schema.character_data AS 
identity_generation,
NULL::character varying::information_schema.character_data AS 
identity_start,
NULL::character varying::information_schema.character_data AS 
identity_increment,
NULL::character varying::information_schema.character_data AS 
identity_maximum,
NULL::character varying::information_schema.character_data AS 
identity_minimum,
NULL::character varying::information_schema.yes_or_no AS 
identity_cycle,
'NEVER'::character varying::information_schema.character_data AS 
is_generated,
NULL::character varying::information_schema.character_data AS 
generation_expression,

CASE
    WHEN c.relkind = 'r'::"char" OR (c.relkind = ANY (ARRAY['v'::"char", 'f'::"char"])) AND pg_column_is_updatable(c.oid::regclass, a.attnum, false) THEN 'YES'::text
    ELSE 'NO'::text
END::information_schema.yes_or_no AS is_updatable
   FROM pg_attribute a
 LEFT JOIN pg_attrdef ad ON a.attrelid = ad.adrelid AND a.attnum = 
ad.adnum

 JOIN (pg_class c
 JOIN pg_namespace nc ON c.relnamespace = nc.oid) ON a.attrelid = c.oid
 JOIN (pg_type t
 JOIN pg_namespace nt ON t.typnamespace = nt.oid) ON a.atttypid = t.oid
 LEFT JOIN (pg_type bt
 JOIN pg_namespace nbt ON bt.typnamespace = nbt.oid) ON t.typtype = 
'd'::"char" AND t.typbasetype = bt.oid

 LEFT JOIN (pg_collation co
 JOIN pg_namespace nco ON co.collnamespace = nco.oid) ON 
a.attcollation = co.oid AND (nco.nspname <> 'pg_catalog'::name OR 
co.collname <> 'default'::name)
  WHERE NOT pg_is_other_temp_schema(nc.oid) AND a.attnum > 0 AND NOT a.attisdropped AND (c.relkind = ANY (ARRAY['r'::"char", 'v'::"char", 'f'::"char"])) AND (pg_has_role(c.relowner, 'USAGE'::text) OR has_column_privilege(c.oid, a.attnum, 'SELECT, INSERT, UPDATE, REFERENCES'::text));




--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Python versus Other Languages using PostgreSQL

2017-05-10 Thread John R Pierce

On 5/10/2017 2:43 PM, Adrian Klaver wrote:

1) Uber
2) Yahoo
3) Instagram 


and, each of those giant businesses has their own entirely custom 
'platforms', so its not really fair to call them 'largest web platforms' 
as each of those custom platforms is in use at only one business.   
Sure, largest web SITES by traffic, usage.


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-05 Thread John R Pierce

On 5/5/2017 11:28 AM, Peter J. Holzer wrote:

On 2017-05-04 23:08:25 +0200, Sven R. Kunze wrote:

On 03.05.2017 12:57, Thomas Güttler wrote:

Am 02.05.2017 um 05:43 schrieb Jeff Janes:

No.  You can certainly use PostgreSQL to store blobs.  But then, you
need to store the PostgreSQL data **someplace**.
If you don't store it in S3, you have to store it somewhere else.

I don't understand what you mean here. AFAIK storing blobs in PG is not
recommended since it is not very efficient.

Seems like several people here disagree with this conventional wisdom.

I think it depends very much on what level of "efficiency" you need. On
my home server (i5 processor, 32GB RAM, Samsung 850 SSD - not a piece of
junk, but not super powerful either) I can retrieve a small blob from a
100GB table in about 0.1 ms, and for large blobs the speed approaches
200MB/s. For just about everything I'd do on that server (or even at
work) this is easily fast enough.



S3 is often used for terabyte to petabyte file collections.   I would 
not want to burden my relational database with this.
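
For the moderate sizes in the benchmark above, though, nothing exotic is needed; bytea columns are TOASTed (chunked, and compressed above roughly 2 kB) transparently. An illustrative schema, not from the thread:

```sql
-- blob payloads are stored out-of-line via TOAST automatically
CREATE TABLE blob_store (
    id      bigserial PRIMARY KEY,
    name    text NOT NULL,
    payload bytea NOT NULL
);

-- retrieving a single blob is an ordinary indexed lookup
SELECT payload FROM blob_store WHERE id = 42;
```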



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread John R Pierce

On 5/4/2017 2:50 PM, John R Pierce wrote:

But there's an extension - pg_cron:
https://www.citusdata.com/blog/2016/09/09/pgcron-run-periodic-jobs-in-postgres/ 




there's also pg_agent which is a cron-like extension, usually bundled 
with pg_admin but also available standalone


https://www.pgadmin.org/docs4/dev/pgagent.html



oh, it should be made clear... neither of these runs jobs by magic: 
pg_cron uses a background worker loaded via shared_preload_libraries, 
while pgAgent relies on its own daemon process; either way, a scheduler 
checks the pg_cron or pgagent tables and 
runs whatever sql tasks are due.
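
For the original cleanup-task question, a minimal pg_cron sketch (the table name and schedule are assumptions):

```sql
CREATE EXTENSION pg_cron;   -- requires pg_cron in shared_preload_libraries

-- purge rows older than 90 days, every night at 03:30
SELECT cron.schedule(
    '30 3 * * *',
    $$DELETE FROM events WHERE created_at < now() - interval '90 days'$$
);
```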



--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread John R Pierce

On 5/4/2017 2:08 PM, Sven R. Kunze wrote:
After searching the web, it seems to me that PostgreSQL doesn't offer 
a cron-like background job for cleanup tasks.


http://stackoverflow.com/questions/18187490/postgresql-delete-old-rows-on-a-rolling-basis 




But there's an extension - pg_cron:
https://www.citusdata.com/blog/2016/09/09/pgcron-run-periodic-jobs-in-postgres/



there's also pg_agent which is a cron-like extension, usually bundled 
with pg_admin but also available standalone


https://www.pgadmin.org/docs4/dev/pgagent.html


--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread John R Pierce

On 5/4/2017 2:28 PM, Alan Hodgson wrote:

On Thursday 04 May 2017 14:21:00 John R Pierce wrote:

or EBS, and I've heard from more than a few people that EBS can be
something of a sand trap.


Sorry for following up off-topic, but EBS has actually improved considerably
in the last few years. You can get guaranteed (and very high) IOPS on SSD
storage, and many instance types come with high-speed throughput to EBS. It's
much much better for databases than it was 5 years ago.



has it become more stable when Amazon has their occasional major hiccups?


--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Caching and Blobs in PG? Was: Can PG replace redis, amqp, s3 in the future?

2017-05-04 Thread John R Pierce

On 5/4/2017 2:08 PM, Sven R. Kunze wrote:
No.  You can certainly use PostgreSQL to store blobs.  But then, you 
need to store the PostgreSQL data **someplace**.

If you don't store it in S3, you have to store it somewhere else.


I don't understand what you mean here. AFAIK storing blobs in PG is 
not recommended since it is not very efficient.


Seems like several people here disagree with this conventional wisdom.

I think what he was talking about the data itself. You have to store 
the bits and bytes somewhere (e.g. on S3). 



afaik, S3 is not suitable for the $PGDATA directory, its more of an 
archival block file store for sequential access.for the actual 
database storage in the AWS world, you'd either use EC2 local storage, 
or EBS, and I've heard from more than a few people that EBS can be 
something of a sand trap.


re: storing blobs in postgres, I would be very hesitant to storage LARGE 
amounts of bulk data directly in postgres


--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] Compatibility of libpg

2017-05-04 Thread John R Pierce

On 5/4/2017 2:19 AM, Magnus Hagander wrote:
On Wed, May 3, 2017 at 11:31 PM, John R Pierce <pie...@hogranch.com 
<mailto:pie...@hogranch.com>> wrote:


On 5/3/2017 2:20 PM, Magnus Hagander wrote:


Please note that this method of building libpq has been
removed from Postgres 10, so it's considered to be deprecated
for quite some time.


this page
https://www.postgresql.org/docs/current/static/install-windows-libpq.html
<https://www.postgresql.org/docs/current/static/install-windows-libpq.html>
probably should be updated then.


That page is completely gone in version 10. If you look at 
https://www.postgresql.org/docs/devel/static/install-windows.html you 
will notice the entire 17.2 section has been removed, as well as the 
text on the root page referring to it.



well, dropped in the latest not-yet-released version doesn't really make 
it 'deprecated for quite some time'. if this has been the long term 
intention, the docs should have reflected this some revisions back.


I still think the Windows packagers (EnterpriseDB) should 
have a client-only package which has options to just install the libs, 
or the libs + client utils (psql, pg_dump/restore/dumpall and optionally 
pg_admin).   I realize that this wouldn't be /that/ much smaller than 
the whole 9 yards, but its a psychological thing for the end user, they 
think of the server as 'heavy', also would be good for automated client 
deployments in business environments.



--
john r pierce, recycling bits in santa cruz


