Re: Upgrading old server

2019-09-25 Thread Ekaterina Amez


On 25/9/19 at 20:21, Adrian Klaver wrote:

On 9/25/19 9:00 AM, Tom Lane wrote:

Ron  writes:

On 9/25/19 9:29 AM, Christoph Berg wrote:
Re: Ekaterina Amez 2019-09-25 <8818b028-bd2d-412e-d4e3-e29c49ffe...@zunibal.com>
We've decided to upgrade our PostgreSQL production servers. First task is
to remove an old v7.4 version. It was supposed to be upgraded to a v8.4
server. The server was installed, several databases were released here but
v7.4 was never migrated. The plan is to pg_dump this database and psql it to
the existing 8.4 server. After this, we'll pg_upgrade.



If you're doing dump-restore anyway, why not restore into v11 right away?
I won't use v11 because the existing server where the DB is going to be
re-allocated is v8.4. Our Postgres servers are "a bit" outdated.


Since it's recommended to run the newer pg_dump on the older database, I've
got to wonder if v11 pg_dump can read the v7.4 on-disk structures.


We dropped support for pre-8.0 source servers in pg_dump sometime
recently, though I forget if v11 is affected by that or not.


Version 10.0:

https://www.postgresql.org/docs/10/release-10.html



Ok, the v10 release notes say it explicitly:

 * Remove pg_dump/pg_dumpall support for dumping from pre-8.0 servers
   (Tom Lane)

   Users needing to dump from pre-8.0 servers will need to use dump
   programs from PostgreSQL 9.6 or earlier. The resulting output should
   still load successfully into newer servers.


You could try just dumping with 7.4's pg_dump and seeing if the
output will load into v11 --- ideally it would, but I'd not be
surprised if there are issues that have to be resolved manually.
Or, if you have 8.4's pg_dump at hand, try using that.

Yes, that's what I have for my tests: 8.4's pg_dump.


7.4 to 11 is a big jump to be doing in one step.  There's definitely
something to be said for porting to an intermediate release, just to
break down the work into smaller chunks.  But I'd go for halfway between,
which if I counted releases correctly would be about 9.1, not 8.4.

    regards, tom lane

v8.4 is a mandatory middle step, because we'd like to remove v7.4 ASAP 
and the only available server is 8.4. The upgrade after that is what I'm 
talking about. I was thinking like you, Tom: upgrading to v11 is really a 
big jump. v10 is also a big jump that scares me less, but maybe going 
first to 9.6 (which gives us a couple of years) would be a better 
solution that could let us experiment with some of the new performance 
features we're interested in.


Re: Use of 'get diagnostics'?

2019-09-25 Thread Thiemo Kellner

Hello Adrian

Quoting Adrian Klaver :

To get the above I believe you will need to use GET CURRENT DIAGNOSTICS
PG_CONTEXT:


I actually use "get stacked diagnostics" to retrieve the exception
location. And it works. I am not sure why I did not see it.


However, I noticed that the stack does not include the error location in
dynamic SQL executed by the "execute" command. Maybe I am missing
something again.
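
For reference, a minimal sketch of retrieving that context (the failing
statement is made up for illustration; GET STACKED DIAGNOSTICS is only
valid inside an exception handler):

DO $$
DECLARE
    v_message text;
    v_context text;
BEGIN
    PERFORM 1 / 0;   -- force an error
EXCEPTION WHEN OTHERS THEN
    GET STACKED DIAGNOSTICS
        v_message = MESSAGE_TEXT,
        v_context = PG_EXCEPTION_CONTEXT;
    RAISE NOTICE 'error: %', v_message;
    RAISE NOTICE 'context: %', v_context;
END $$;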


Kind regards

Thiemo

--
S/MIME Public Key: https://oc.gelassene-pferde.biz/index.php/s/eJuAUFONag6ofnH
Signal (Safer than WhatsApp): +49 1578 7723737
Mobile phones: +41 78 947 36 21 | +49 1578 772 37 37
Tox-Id:  
B7282A9E31D944DDAD1ECC5B33C36AAE80B32D119AB7391D602C937A65E8CA0625C495C5322B






Re: updating sequence value for column 'serial'

2019-09-25 Thread Matthias Apitz
On Wednesday, September 25, 2019 at 07:42:11 a.m. -0700, Adrian Klaver
wrote:

> >>> sisis$# DECLARE
> >>> sisis$#maxikatkey integer := ( select max(katkey) from titel_daten );
> >>> sisis$#result integer := 1;
> >>> sisis$# BEGIN
> >>> sisis$#maxikatkey := maxikatkey +1;
> >>> sisis$#RAISE NOTICE '%', maxikatkey ;
> >>> sisis$#result := (SELECT SETVAL('titel_daten_katkey_seq', maxikatkey) 
> >>> );
> >>> sisis$#RAISE NOTICE '%', result ;
> >>> sisis$# END $$;
> >>> NOTICE:  330722
> >>> NOTICE:  330723
> >>> DO
> >>>
> >>> Is there any better way? Thanks
> >>
> >> I have not found a better way. I use ALTER SEQUENCE .. RESTART 330722
> >> though:
> > 
> > Yes, but I found no way to use a variable like 'maxikatkey' in the ALTER 
> > SEQUENCE ...
> > it only accepts digits like 330722.
> 
> DO $$
> DECLARE
>  max_id int;
> BEGIN
>  SELECT INTO max_id max(id) + 1 FROM seq_test;
>  RAISE NOTICE 'Max id is %', max_id;
>  EXECUTE 'ALTER SEQUENCE seq_test_id_seq RESTART ' || max_id::text;
> END;
> $$ LANGUAGE plpgsql;

Hi Adrian,

I adapted your code to the name of my table 'ig_target_ipfilter' and its
SERIAL column 'id'; it does not work (and I don't know how it could
work, because the 'ALTER SEQUENCE ...' statement seems to be missing a
'... WITH value ...' clause -- or do I misunderstand something?):

cat -n /home/apitzm/postgreSQL/test.sql
 1  DO $$
 2  DECLARE
 3  max_id int;
 4  BEGIN
 5  SELECT INTO max_id max(id) + 1 FROM ig_target_ipfilter ;
 6  RAISE NOTICE 'Max id in % is %', 'ig_target_ipfilter', max_id;
 7  EXECUTE 'ALTER SEQUENCE ig_target_ipfilter_id_seq RESTART ' || 
max_id::text;
 8  END;
 9  $$ LANGUAGE plpgsql;

psql -Usisis -dsisis < /home/apitzm/postgreSQL/test.sql
NOTICE:  Max id in ig_target_ipfilter is 
ERROR:  query string argument of EXECUTE is null
KONTEXT:  PL/pgSQL function inline_code_block line 7 at EXECUTE

Please clarify. Thanks
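
The NOTICE above shows max_id printed as empty, i.e. NULL: max(id) over an
empty table is NULL, and 'ALTER ...' || NULL makes the whole EXECUTE string
NULL. A minimal sketch of one way to guard against that, keeping the same
table and sequence names:

DO $$
DECLARE
    max_id int;
BEGIN
    -- COALESCE covers the empty-table case; otherwise max(id) is NULL and
    -- the concatenated EXECUTE string becomes NULL as well.
    SELECT INTO max_id COALESCE(max(id), 0) + 1 FROM ig_target_ipfilter;
    RAISE NOTICE 'Max id in % is %', 'ig_target_ipfilter', max_id;
    EXECUTE 'ALTER SEQUENCE ig_target_ipfilter_id_seq RESTART ' || max_id::text;
END;
$$ LANGUAGE plpgsql;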

matthias


-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub

As long as there is a will to fight, there will be hope of winning.


signature.asc
Description: PGP signature


Re: could not accept SSL connection: sslv3 alert bad certificate

2019-09-25 Thread Adrian Klaver

On 9/25/19 12:34 PM, Marco Ippolito wrote:
Following the indications here: 
https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database 
I'm trying to understand how to correctly set Fabric-CA with a 
PostgreSQL-11 database in Ubuntu 18.04.02 Server Edition.




This is the corresponding part in 
/var/log/postgresql/postgresql-11-fabmnet.log :


     2019-09-25 20:51:52.655 CEST [1096] LOG:  listening on IPv6 address
     "::1", port 5433
     2019-09-25 20:51:52.673 CEST [1096] LOG:  listening on IPv4 address
     "127.0.0.1", port 5433
     2019-09-25 20:51:52.701 CEST [1096] LOG:  listening on Unix socket
     "/var/run/postgresql/.s.PGSQL.5433"
     2019-09-25 20:51:52.912 CEST [1171] LOG:  database system was
     interrupted; last known up at 2019-09-25 09:50:30 CEST
     2019-09-25 20:51:53.001 CEST [1171] LOG:  database system was not
     properly shut down; automatic recovery in progress
     2019-09-25 20:51:53.011 CEST [1171] LOG:  redo starts at 0/1668238
     2019-09-25 20:51:53.011 CEST [1171] LOG:  invalid record length at
     0/1668318: wanted 24, got 0
     2019-09-25 20:51:53.011 CEST [1171] LOG:  redo done at 0/16682E0
     2019-09-25 20:51:53.043 CEST [1096] LOG:  database system is ready to
     accept connections
     2019-09-25 20:51:53.569 CEST [1206] [unknown]@[unknown] LOG:  incomplete
     startup packet
     2019-09-25 20:56:57.540 CEST [4620] [unknown]@[unknown] LOG:  could not
     accept SSL connection: sslv3 alert bad certificate
     2019-09-25 20:56:57.543 CEST [4622] [unknown]@[unknown] LOG:  could not
     accept SSL connection: sslv3 alert bad certificate
     2019-09-25 20:56:57.544 CEST [4623] [unknown]@[unknown] LOG:  could not
     accept SSL connection: sslv3 alert bad certificate



Aargh, I missed the part above.

What happens if you remove the sslmode=verify-full from the *.yaml file?



     And this is the db's configuration in (base) marco@pc:~$ nano
     ./fabric/fabric-ca/fabric-ca-server-config.yaml :

     db:
       type: postgres
       datasource: host=localhost port=5433 user=fabmnet_admin
         password=pwd dbname=fabmnetdb sslmode=verify-full


How to correctly set up an SSL connection to a PostgreSQL-11 db?

Looking forward to your kind help
Marco



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: could not accept SSL connection: sslv3 alert bad certificate

2019-09-25 Thread Adrian Klaver

On 9/25/19 12:34 PM, Marco Ippolito wrote:
Following the indications here: 
https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database 
I'm trying to understand how to correctly set Fabric-CA with a 
PostgreSQL-11 database in Ubuntu 18.04.02 Server Edition.


I created a postgresql-11 db to which I can connect with SSL:

     (base) marco@pc:~$ psql --cluster 11/fabmnet -h 127.0.0.1 -d fabmnetdb -U fabmnet_admin
     Password for user fabmnet_admin:
     psql (11.5 (Ubuntu 11.5-1.pgdg18.04+1))
     SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
     Type "help" for help.

     fabmnetdb=> \l
                                     List of databases
        Name    |     Owner     | Encoding | Collate |  Ctype  |   Access privileges
     -----------+---------------+----------+---------+---------+------------------------
      fabmnetdb | fabmnet_admin | UTF8     | C.UTF-8 | C.UTF-8 |
      postgres  | postgres      | UTF8     | C.UTF-8 | C.UTF-8 |
      template0 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
                |               |          |         |         | postgres=CTc/postgres
      template1 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
                |               |          |         |         | postgres=CTc/postgres
     (4 rows)

     fabmnetdb=>


but when trying to start a fabric-ca-server :

     (base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
     2019/09/25 20:56:57 [INFO] Configuration file location:
     /home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
     2019/09/25 20:56:57 [INFO] Starting server in home directory:
     /home/marco/fabric/fabric-ca
     2019/09/25 20:56:57 [INFO] Server Version: 1.4.4
     2019/09/25 20:56:57 [INFO] Server Levels: &{Identity:2 Affiliation:1
     Certificate:1 Credential:1 RAInfo:1 Nonce:1}
     2019/09/25 20:56:57 [INFO] The CA key and certificate already exist
     2019/09/25 20:56:57 [INFO] The key is stored by BCCSP provider 'SW'
     2019/09/25 20:56:57 [INFO] The certificate is at:
     /home/marco/fabric/fabric-ca/ca-cert.pem
     2019/09/25 20:56:57 [WARNING] Failed to connect to database 'fabmnetdb'
     2019/09/25 20:56:57 [WARNING] Failed to connect to database 'postgres'
     2019/09/25 20:56:57 [WARNING] Failed to connect to database 'template1'
     2019/09/25 20:56:57 [ERROR] Error occurred initializing database: Failed
     to connect to Postgres database. Postgres requires connecting to a
     specific database, the following databases were tried: [fabmnetdb
     postgres template1]. Please create one of these database before continuing
     2019/09/25 20:56:57 [INFO] Home directory for default CA:
     /home/marco/fabric/fabric-ca
     2019/09/25 20:56:57 [INFO] Operation Server Listening on 127.0.0.1:9443
     2019/09/25 20:56:57 [INFO] Listening on http://0.0.0.0:7054

This is the corresponding part in 
/var/log/postgresql/postgresql-11-fabmnet.log :


     2019-09-25 20:51:52.655 CEST [1096] LOG:  listening on IPv6 address
     "::1", port 5433
     2019-09-25 20:51:52.673 CEST [1096] LOG:  listening on IPv4 address
     "127.0.0.1", port 5433
     2019-09-25 20:51:52.701 CEST [1096] LOG:  listening on Unix socket
     "/var/run/postgresql/.s.PGSQL.5433"
     2019-09-25 20:51:52.912 CEST [1171] LOG:  database system was
     interrupted; last known up at 2019-09-25 09:50:30 CEST
     2019-09-25 20:51:53.001 CEST [1171] LOG:  database system was not
     properly shut down; automatic recovery in progress
     2019-09-25 20:51:53.011 CEST [1171] LOG:  redo starts at 0/1668238
     2019-09-25 20:51:53.011 CEST [1171] LOG:  invalid record length at
     0/1668318: wanted 24, got 0
     2019-09-25 20:51:53.011 CEST [1171] LOG:  redo done at 0/16682E0
     2019-09-25 20:51:53.043 CEST [1096] LOG:  database system is ready to
     accept connections
     2019-09-25 20:51:53.569 CEST [1206] [unknown]@[unknown] LOG:  incomplete
     startup packet
     2019-09-25 20:56:57.540 CEST [4620] [unknown]@[unknown] LOG:  could not
     accept SSL connection: sslv3 alert bad certificate
     2019-09-25 20:56:57.543 CEST [4622] [unknown]@[unknown] LOG:  could not
     accept SSL connection: sslv3 alert bad certificate
     2019-09-25 20:56:57.544 CEST [4623] [unknown]@[unknown] LOG:  could not
     accept SSL connection: sslv3 alert bad certificate


This is how I set the pg_hba.conf file in the fabmnet postgresql cluster :

     (base) marco@pc:~$ sudo -su postgres
     (base) postgres@pc:~$ nano /etc/postgresql/11/fabmnet/pg_hba.conf
     Unable to create directory /home/marco/.local/share/nano/: Permission denied
     It is required for saving/loading search history or cursor positions.

     Press Enter to continue

     # TYPE  DATABASE        USER            ADDRESS                 METH

Re: lc_numeric and negative-prefix

2019-09-25 Thread Andreas Joseph Krogh
On Thursday, 26 September 2019 at 00:53:28, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Andreas Joseph Krogh  writes:
> > Can anybody shed some light on when the negative prefix is supposed to
> > be respected by PG's formatting functions? In lc_numeric='nb_NO.UTF-8'
> > the negative prefix is '−' (8722), not '-' (45), at least in the Java
> > world, but it seems the JDBC driver must use lc_numeric='C' when parsing
> > server output.
>
> PG does not consider LC_NUMERIC at all when producing output from
> the standard numeric data types (and we aren't going to start).
> AFAIR the only functions that do pay attention to LC_NUMERIC are
> to_char() and friends.
>
> regards, tom lane

Thanks for clarifying.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: lc_numeric and negative-prefix

2019-09-25 Thread Tom Lane
Andreas Joseph Krogh  writes:
> Can anybody shed some light on when the negative prefix is supposed to be
> respected by PG's formatting functions? In lc_numeric='nb_NO.UTF-8' the
> negative prefix is '−' (8722), not '-' (45), at least in the Java world,
> but it seems the JDBC driver must use lc_numeric='C' when parsing server
> output. -- Andreas Joseph Krogh CTO / Partner

PG does not consider LC_NUMERIC at all when producing output from
the standard numeric data types (and we aren't going to start).
AFAIR the only functions that do pay attention to LC_NUMERIC are
to_char() and friends.

regards, tom lane
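
A small illustration of that distinction (assuming the nb_NO.UTF-8 locale
is installed on the server):

SET lc_numeric = 'nb_NO.UTF-8';
SELECT (-1234.5)::numeric;               -- plain output, always '-1234.5'
SELECT to_char(-1234.5, 'S999G999D99');  -- sign and separators follow lc_numeric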




lc_numeric and negative-prefix

2019-09-25 Thread Andreas Joseph Krogh
Hi. We're having this thread over at
https://github.com/impossibl/pgjdbc-ng/issues/420
Can anybody shed some light on when the negative prefix is supposed to be
respected by PG's formatting functions? In lc_numeric='nb_NO.UTF-8' the
negative prefix is '−' (8722), not '-' (45), at least in the Java world, but
it seems the JDBC driver must use lc_numeric='C' when parsing server output.

--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com
www.visena.com


Re: Arrays and ANY problem

2019-09-25 Thread Tom Lane
Alban Hertroys  writes:
>> On 25 Sep 2019, at 22:50, Alban Hertroys  wrote:
>> You probably meant:
>> select name from table_name_ds_tmp where categoryid = ANY ( select 
>> string_to_array( '200,400', ',')::bigint[] );

> Or rather:
> select name from table_name_ds_tmp where categoryid = ANY ( string_to_array( 
> '200,400', ',')::bigint[] );

Yeah, this is fairly confusing, because there are multiple different
features with barely distinguishable syntaxes here.  You can do

value = ANY (SELECT ...)

which compares "value" to each row of the sub-SELECT result (and the
sub-SELECT had better return one column, of a type comparable to
"value").  Or you can do

value = ANY (array-expression)

which compares "value" to each element of the array value (which had
better have elements of a type comparable to "value").  What you
can't do is generate the array value from a sub-select, because that
will be taken as being an instance of the first feature.

David didn't say what his real problem was, but I'd suggest that
making his sub-select return a rowset result rather than an array
result might be the best way to resolve things.  It's more SQL-y,
for sure.
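
As a concrete sketch of that suggestion, using the sample table from this
thread: unnest() turns the array into a set of rows, so the sub-select
form of ANY applies:

select name
from table_name_ds_tmp
where categoryid = ANY (
    select unnest(string_to_array('200,400', ',')::bigint[])
);
-- returns 'two' and 'four' for the sample data elsewhere in this thread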

regards, tom lane




Re: Arrays and ANY problem

2019-09-25 Thread David G. Johnston
On Wed, Sep 25, 2019 at 3:08 PM David Salisbury  wrote:

> Thanks,
>
> Unfortunately I believe I need to include a postgres module to get the
> "<@" operator, which I have no power to do.  This is what I get with that
> operator..
>
>  select name from table_name_ds_tmp where categoryid <@ ANY ( ARRAY[ 200,
> 400]::BIGINT[] );
>  ERROR:  operator does not exist: bigint <@ bigint
>

The "<@" operator is standard...you were even provided an example of how to
use it.  That its doesn't work when you do something different isn't
surprising.


> The second query does work, but in the end I'll need to have a select in
> that area to pick out my numbers, can't hard code it, and that seems to be
> what screws my query up, the select,  and that makes no sense.
>

Then how about providing what you will eventually need so people aren't
wasting their time with stuff you won't be able to use.

There are two "ANY" constructs documented.  One covers a subquery and one
encapsulates an array.  The presence of "select" forces the subquery
interpretation even if the select just happens to be providing a set of
arrays (a set of cardinality one).


> Here's what happens without the ARRAY wrapping around categoryid, as it
> your second thought...
>
> select name from table_name_ds_tmp where categoryid = ANY ( select
> string_to_array( '200,400', ',')::bigint[] );
> ERROR:  operator does not exist: bigint = bigint[]
>
> At least it runs with ARRAY[categoryid], it just doesn't return anything.
> :-(
>

This is all documented and the specific reason this doesn't match has been
previously explained in this thread.

See:

https://www.postgresql.org/docs/11/functions-subquery.html#FUNCTIONS-SUBQUERY-ANY-SOME


compared to

https://www.postgresql.org/docs/11/functions-comparisons.html#id-1.5.8.28.16

David J.


Re: Arrays and ANY problem

2019-09-25 Thread David Salisbury
Thanks,

Unfortunately I believe I need to include a postgres module to get the "<@"
operator, which I have no power to do.  This is what I get with that
operator..

 select name from table_name_ds_tmp where categoryid <@ ANY ( ARRAY[ 200,
400]::BIGINT[] );
 ERROR:  operator does not exist: bigint <@ bigint

The second query does work, but in the end I'll need to have a select in
that area to pick out my numbers, can't hard code it, and that seems to be
what screws my query up, the select,  and that makes no sense.

Here's what happens without the ARRAY wrapping around categoryid, as in
your second thought...

select name from table_name_ds_tmp where categoryid = ANY ( select
string_to_array( '200,400', ',')::bigint[] );
ERROR:  operator does not exist: bigint = bigint[]

At least it runs with ARRAY[categoryid], it just doesn't return anything.
:-((

On Wed, Sep 25, 2019 at 2:48 PM Michael Lewis  wrote:

> db=# select name from table_name_ds_tmp where ARRAY[categoryid] = ANY (
>> select string_to_array( '200,400', ',')::bigint[] );
>>
>
> Using either of the below instead, I get the proper result. Why doesn't
> ANY work? I do not know.
>
> select name from table_name_ds_tmp where ARRAY[categoryid] <@ ( select
> (string_to_array( '200,400', ','))::bigint[] );
> select name from table_name_ds_tmp where categoryid = ANY ( ARRAY[ 200,
> 400]::BIGINT[] );
>
> I used-
>
> drop table if exists pg_temp.table_name_ds_tmp;
> create temp table table_name_ds_tmp AS(
> SELECT 100::BIGINT AS categoryid, 'one'::VARCHAR AS name UNION ALL
> SELECT 200::BIGINT, 'two'::VARCHAR UNION ALL
> SELECT 300::BIGINT, 'three'::VARCHAR UNION ALL
> SELECT 400::BIGINT, 'four'::VARCHAR
> );
>


Re: Arrays and ANY problem

2019-09-25 Thread Alban Hertroys


> On 25 Sep 2019, at 22:50, Alban Hertroys  wrote:
> 
> 
>> On 25 Sep 2019, at 22:25, David Salisbury  wrote:
>> db=# select name from table_name_ds_tmp where ARRAY[categoryid] = ANY ( 
>> select string_to_array( '200,400', ',')::bigint[] );
>> name 
>> --
>> (0 rows)
> 
> You are comparing two arrays for equality. Since the left-hand array has only 
> 1 item and the right-hand one has two, there’s not much equality between them.
> 
> You probably meant:
> select name from table_name_ds_tmp where categoryid = ANY ( select 
> string_to_array( '200,400', ',')::bigint[] );

Or rather:
select name from table_name_ds_tmp where categoryid = ANY ( string_to_array( 
'200,400', ',')::bigint[] );

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





Re: Arrays and ANY problem

2019-09-25 Thread Alban Hertroys


> On 25 Sep 2019, at 22:25, David Salisbury  wrote:
> db=# select name from table_name_ds_tmp where ARRAY[categoryid] = ANY ( 
> select string_to_array( '200,400', ',')::bigint[] );
>  name 
> --
> (0 rows)

You are comparing two arrays for equality. Since the left-hand array has only 1 
item and the right-hand one has two, there’s not much equality between them.

You probably meant:
select name from table_name_ds_tmp where categoryid = ANY ( select 
string_to_array( '200,400', ',')::bigint[] );


Alban Hertroys
--
There is always an exception to always.








Re: Arrays and ANY problem

2019-09-25 Thread Michael Lewis
>
> db=# select name from table_name_ds_tmp where ARRAY[categoryid] = ANY (
> select string_to_array( '200,400', ',')::bigint[] );
>

Using either of the below instead, I get the proper result. Why doesn't ANY
work? I do not know.

select name from table_name_ds_tmp where ARRAY[categoryid] <@ ( select
(string_to_array( '200,400', ','))::bigint[] );
select name from table_name_ds_tmp where categoryid = ANY ( ARRAY[ 200,
400]::BIGINT[] );

I used-

drop table if exists pg_temp.table_name_ds_tmp;
create temp table table_name_ds_tmp AS(
SELECT 100::BIGINT AS categoryid, 'one'::VARCHAR AS name UNION ALL
SELECT 200::BIGINT, 'two'::VARCHAR UNION ALL
SELECT 300::BIGINT, 'three'::VARCHAR UNION ALL
SELECT 400::BIGINT, 'four'::VARCHAR
);


Arrays and ANY problem

2019-09-25 Thread David Salisbury
Perhaps someone can guide me here as I'm having a "moment". :)

Not sure why I am getting 0 rows returned here:

db=# \d table_name_ds_tmp

    Column   |       Type        | Modifiers
 ------------+-------------------+-----------
  categoryid | bigint            |
  name       | character varying |

db=# select * from table_name_ds_tmp;
 categoryid | name
------------+-------
        100 | one
        200 | two
        300 | three
        400 | four
(4 rows)

db=# select name from table_name_ds_tmp where ARRAY[categoryid] = ANY (
select string_to_array( '200,400', ',')::bigint[] );
 name
--
(0 rows)

I would expect, and am in need of, the query returning these 2 rows,
"two"
"four"
which I plan to aggregate together later on with array_to_string.

Basically I hope to take a string and substitute the values in it with
their associated values.

Other incantations and permutations just seem to give me various error
messages.

Thanks for any help.


Re: Operator is not unique

2019-09-25 Thread PegoraroF10
format_type | format_type      | castfunc | castcontext | castmethod
------------+------------------+----------+-------------+-----------
bigint      | smallint         |      714 | a           | f
bigint      | integer          |      480 | a           | f
bigint      | real             |      652 | i           | f
bigint      | double precision |      482 | i           | f
bigint      | numeric          |     1781 | i           | f
bigint      | regclass         |     1287 | i           | f
bigint      | regtype          |     1287 | i           | f
bigint      | regconfig        |     1287 | i           | f
bigint      | regdictionary    |     1287 | i           | f
bigint      | regrole          |     1287 | i           | f
bigint      | regnamespace     |     1287 | i           | f
bigint      | bit              |     2075 | e           | f
bigint      | money            |     3812 | a           | f
bigint      | oid              |     1287 | i           | f
bigint      | regproc          |     1287 | i           | f
bigint      | regprocedure     |     1287 | i           | f
bigint      | regoper          |     1287 | i           | f
bigint      | regoperator      |     1287 | i           | f
numeric     | bigint           |     1779 | a           | f
numeric     | smallint         |     1783 | a           | f
numeric     | integer          |     1744 | a           | f
numeric     | real             |     1745 | i           | f
numeric     | double precision |     1746 | i           | f
numeric     | money            |     3824 | a           | f
numeric     | numeric          |     1703 | i           | f





Re: Operator is not unique

2019-09-25 Thread Adrian Klaver

On 9/25/19 6:38 AM, PegoraroF10 wrote:
Doing an inner join with pg_proc I'll get this result. So, none of these 
casts were created by me. Column prorettype is different, is this the 
problem? select format_type(castsource, NULL), format_type(casttarget, NULL), 
castfunc, castcontext, castmethod, pr.* from pg_cast inner join pg_proc 
pr on castfunc = pr.oid where (castsource = 'numeric'::regtype or 
castsource = 'bigint'::regtype) and format_type(casttarget, NULL) in 
('numeric','bigint') order by castsource;


Please run the query provided:

select format_type(castsource, NULL), format_type(casttarget, NULL), 
castfunc, castcontext, castmethod  from pg_cast  where castsource = 
'numeric'::regtype or castsource = 'bigint'::regtype order by castsource;



(three result rows, shown transposed)

column          | row 1        | row 2        | row 3
----------------+--------------+--------------+------------------
format_type     | bigint       | numeric      | numeric
format_type     | numeric      | bigint       | numeric
castfunc        | 1781         | 1779         | 1703
castcontext     | i            | a            | i
castmethod      | f            | f            | f
proname         | numeric      | int8         | numeric
pronamespace    | 11           | 11           | 11
proowner        | 10           | 10           | 10
prolang         | 12           | 12           | 12
procost         | 1            | 1            | 1
prorows         | 0            | 0            | 0
provariadic     | 0            | 0            | 0
protransform    | -            | -            | numeric_transform
prokind         | f            | f            | f
prosecdef       | false        | false        | false
proleakproof    | false        | false        | false
proisstrict     | true         | true         | true
proretset       | false        | false        | false
provolatile     | i            | i            | i
proparallel     | s            | s            | s
pronargs        | 1            | 1            | 2
pronargdefaults | 0            | 0            | 0
prorettype      | 1700         | 20           | 1700
proargtypes     | 20           | 1700         | 1700 23
proallargtypes  | NULL         | NULL         | NULL
proargmodes     | NULL         | NULL         | NULL
proargnames     | NULL         | NULL         | NULL
proargdefaults  | NULL         | NULL         | NULL
protrftypes     | NULL         | NULL         | NULL
prosrc          | int8_numeric | numeric_int8 | numeric
probin          | NULL         | NULL         | NULL
proconfig       | NULL         | NULL         | NULL
proacl          | NULL         | NULL         | NULL







--
Adrian Klaver
adrian.kla...@aklaver.com




could not accept SSL connection: sslv3 alert bad certificate

2019-09-25 Thread Marco Ippolito
Following the indications here:
https://hyperledger-fabric-ca.readthedocs.io/en/release-1.4/users-guide.html#configuring-the-database
I'm trying to understand how to correctly set Fabric-CA with a
PostgreSQL-11 database in Ubuntu 18.04.02 Server Edition.

I created a postgresql-11 db to which I can connect with SSL:

(base) marco@pc:~$ psql --cluster 11/fabmnet -h 127.0.0.1 -d fabmnetdb
-U fabmnet_admin
Password for user fabmnet_admin:
psql (11.5 (Ubuntu 11.5-1.pgdg18.04+1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384,
bits: 256, compression: off)
Type "help" for help.

fabmnetdb=> \l
                                 List of databases
   Name    |     Owner     | Encoding | Collate |  Ctype  |   Access privileges
-----------+---------------+----------+---------+---------+------------------------
 fabmnetdb | fabmnet_admin | UTF8     | C.UTF-8 | C.UTF-8 |
 postgres  | postgres      | UTF8     | C.UTF-8 | C.UTF-8 |
 template0 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |               |          |         |         | postgres=CTc/postgres
 template1 | postgres      | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |               |          |         |         | postgres=CTc/postgres
(4 rows)

fabmnetdb=>


but when trying to start a fabric-ca-server :

(base) marco@pc:~/fabric/fabric-ca$ fabric-ca-server start -b admin:adminpw
2019/09/25 20:56:57 [INFO] Configuration file location:
/home/marco/fabric/fabric-ca/fabric-ca-server-config.yaml
2019/09/25 20:56:57 [INFO] Starting server in home directory:
/home/marco/fabric/fabric-ca
2019/09/25 20:56:57 [INFO] Server Version: 1.4.4
2019/09/25 20:56:57 [INFO] Server Levels: &{Identity:2 Affiliation:1
Certificate:1 Credential:1 RAInfo:1 Nonce:1}
2019/09/25 20:56:57 [INFO] The CA key and certificate already exist
2019/09/25 20:56:57 [INFO] The key is stored by BCCSP provider 'SW'
2019/09/25 20:56:57 [INFO] The certificate is at:
/home/marco/fabric/fabric-ca/ca-cert.pem
2019/09/25 20:56:57 [WARNING] Failed to connect to database 'fabmnetdb'
2019/09/25 20:56:57 [WARNING] Failed to connect to database 'postgres'
2019/09/25 20:56:57 [WARNING] Failed to connect to database 'template1'
2019/09/25 20:56:57 [ERROR] Error occurred initializing database: Failed
to connect to Postgres database. Postgres requires connecting to a
specific database, the following databases were tried: [fabmnetdb
postgres template1]. Please create one of these database before continuing
2019/09/25 20:56:57 [INFO] Home directory for default CA:
/home/marco/fabric/fabric-ca
2019/09/25 20:56:57 [INFO] Operation Server Listening on 127.0.0.1:9443
2019/09/25 20:56:57 [INFO] Listening on http://0.0.0.0:7054

This is the corresponding part in
/var/log/postgresql/postgresql-11-fabmnet.log :

2019-09-25 20:51:52.655 CEST [1096] LOG:  listening on IPv6 address
"::1", port 5433
2019-09-25 20:51:52.673 CEST [1096] LOG:  listening on IPv4 address
"127.0.0.1", port 5433
2019-09-25 20:51:52.701 CEST [1096] LOG:  listening on Unix socket
"/var/run/postgresql/.s.PGSQL.5433"
2019-09-25 20:51:52.912 CEST [1171] LOG:  database system was
interrupted; last known up at 2019-09-25 09:50:30 CEST
2019-09-25 20:51:53.001 CEST [1171] LOG:  database system was not
properly shut down; automatic recovery in progress
2019-09-25 20:51:53.011 CEST [1171] LOG:  redo starts at 0/1668238
2019-09-25 20:51:53.011 CEST [1171] LOG:  invalid record length at
0/1668318: wanted 24, got 0
2019-09-25 20:51:53.011 CEST [1171] LOG:  redo done at 0/16682E0
2019-09-25 20:51:53.043 CEST [1096] LOG:  database system is ready to
accept connections
2019-09-25 20:51:53.569 CEST [1206] [unknown]@[unknown] LOG:  incomplete
startup packet
2019-09-25 20:56:57.540 CEST [4620] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate
2019-09-25 20:56:57.543 CEST [4622] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate
2019-09-25 20:56:57.544 CEST [4623] [unknown]@[unknown] LOG:  could not
accept SSL connection: sslv3 alert bad certificate


This is how I set the pg_hba.conf file in the fabmnet postgresql cluster :

(base) marco@pc:~$ sudo -su postgres
(base) postgres@pc:~$ nano /etc/postgresql/11/fabmnet/pg_hba.conf
Unable to create directory /home/marco/.local/share/nano/: Permission
denied
It is required for saving/loading search history or cursor positions.

Press Enter to continue

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRE

Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Adrian Klaver

On 9/25/19 8:04 AM, Rob Sargent wrote:



On Sep 25, 2019, at 8:24 AM, Krishnakant Mane wrote:




On 25/09/19 7:50 PM, Adrian Klaver wrote:

On 9/25/19 12:15 AM, Krishnakant Mane wrote:

Hello all,

I have been using postgresql for an enterprise quality account's 
automation and inventory management software called GNUKhata 



Our team is planning to add backup and restore function in the 
software.


But we don't want to dump the entire database and then restore the 
same.


What we are trying to do is to copy data specific to an organization.

The challenge here is that I might copy all data (account heads, 
bills, vouchers etc ) for one organization from an instance on one 
machine.


I take the archive in what ever format to another machine and now 
attempt to restore.


The risk here is for example if the primary key value for orgcode in 
the organization table is 5, it might conflict with the data where I 
am attempting it to be restored.


Same holds true for bills, invoices etc.

A certain account head with accountcode 1 might be already present 
on the second machine.


I am not expecting the users to empty all data from the destination 
machine before restoring a backup.


The reason is that an auditor may have many client's data and one 
can't predict what primary key values are going to come from a backup.


Basically I can even say this is a copy paste instead of a pure 
backup and restore.


Can any one suggest how to handle such conflicts?


Hard to say. If the data is held in common tables (bills, vouchers,
etc) then the only thing I see happening is changing the PK values to
an unused value. That could turn into a nightmare though. Not only
that, you lose the connection to the original data source. If the data
can be broken out into separate tables then I could see placing them
in their own schema.





--
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
(Opensource Accounting, Billing and Inventory Management Software)




Hi Adrian,

Even I am thinking of doing some kind of upsert in this situation.


So to be clear the tables you are working can have records from multiple 
organizations in a single table?




And I would have to set the pkey to an unassigned value when there is 
conflict.


I am seeing nextval() in your future:)



I may also choose to revamp the serial by timestamps but don't know if 
the target customers would like it.


I would avoid that. In my opinion timestamps are too volatile to 
serve as a PK. If you are going to change, I would go with the previous 
suggestion of UUID:

https://www.postgresql.org/docs/11/datatype-uuid.html

Not sure your customers would like that either.
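
For illustration, a minimal sketch of such a UUID key (table and column
names are made up; on PostgreSQL 11 gen_random_uuid() comes from the
pgcrypto extension, it is in core only from v13):

CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE organization (
    orgcode uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    orgname text NOT NULL
);

-- Keys generated on different machines are globally unique, so restoring
-- one organization's rows into another instance cannot collide on the PK.
INSERT INTO organization (orgname) VALUES ('Acme Traders');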



--
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
(Opensource Accounting, Billing and Inventory Management Software)
It would likely be easier to rethink your backup and restore plan. 
Putting each restore into its own space would be one tack.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Upgrading old server

2019-09-25 Thread Adrian Klaver

On 9/25/19 9:00 AM, Tom Lane wrote:

Ron  writes:

On 9/25/19 9:29 AM, Christoph Berg wrote:

Re: Ekaterina Amez 2019-09-25 <8818b028-bd2d-412e-d4e3-e29c49ffe...@zunibal.com>

We've decided to upgrade our PostgreSQL production servers. First task is
to remove an old v7.4 version. It was supposed to be upgraded to a v8.4
server. The server was installed, several databases were released here but
v7.4 was never migrated. The plan is to pg_dump this database and psql it to
the existing 8.4 server. After this, we'll pg_upgrade.



If you're doing dump-restore anyway, why not restore into v11 right away?



Since it's recommended to run the newer pg_dump on the older database, I've
got to wonder if v11 pg_dump can read the v7.4 on-disk structures.


We dropped support for pre-8.0 source servers in pg_dump sometime
recently, though I forget if v11 is affected by that or not.


Version 10.0:

https://www.postgresql.org/docs/10/release-10.html



You could try just dumping with 7.4's pg_dump and seeing if the
output will load into v11 --- ideally it would, but I'd not be
surprised if there are issues that have to be resolved manually.
Or, if you have 8.4's pg_dump at hand, try using that.

7.4 to 11 is a big jump to be doing in one step.  There's definitely
something to be said for porting to an intermediate release, just to
break down the work into smaller chunks.  But I'd go for halfway between,
which if I counted releases correctly would be about 9.1, not 8.4.

regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Pg_auto_failover

2019-09-25 Thread Paul Jungwirth

On 9/25/19 8:55 AM, Sonam Sharma wrote:

My user has permission to touch files inside the FS.
I am not getting why it is checking for /backup.

Any suggestions on this , what else I can check


In addition to the code already linked to, I would look at 
https://github.com/citusdata/pg_auto_failover/blob/1290edd0bab54c627f577cf4462bd16a56b20a1a/src/bin/pg_autoctl/pgctl.c#L429-L437


You see that pg_auto_failover is trying to `mkdir -p $pgdata/../backup` 
(where pgdata is a C variable, not a shell/environment variable). You 
might want to read through the rest of that code to see where pgdata is 
coming from, and see if it's being set correctly. Do you really want 
/backup at the root level?


Clearly whatever user is running pg_basebackup doesn't have permission 
to make that directory. But maybe that's okay and the directory should 
be somewhere else.


--
Paul  ~{:-)
p...@illuminatedcomputing.com




Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Michael Lewis
>
> If the data is held in common tables (bills, vouchers, etc) then the only
> thing I see happening is changing the PK values to an unused value. That
> could turn into a nightmare though. Not only that, you lose the connection
> to the original data source. If the data can be broken out into separate
> tables then I could see placing them in their own schema.
>
>
Why not have separate databases for each "restore"? They can run together
on one machine still. What is the user doing with this copy of the data?


Re: Upgrading old server

2019-09-25 Thread Tom Lane
Ron  writes:
> On 9/25/19 9:29 AM, Christoph Berg wrote:
>> Re: Ekaterina Amez 2019-09-25 
>> <8818b028-bd2d-412e-d4e3-e29c49ffe...@zunibal.com>
>>> We've decided to upgrade our PostgreSQL production servers. First task is
>>> to remove an old v7.4 version. It was supposed to be upgraded to a v8.4
>>> server. The server was installed, several databases were released here but
>>> v7.4 was never migrated. The plan is to pg_dump this database and psql it to
>>> the existing 8.4 server. After this, we'll pg_upgrade.

>> If you're doing dump-restore anyway, why not restore into v11 right away?

> Since it's recommended to run the newer pg_dump on the older database, I've 
> got to wonder if v11 pg_dump can read the v7.4 on-disk structures.

We dropped support for pre-8.0 source servers in pg_dump sometime
recently, though I forget if v11 is affected by that or not.
You could try just dumping with 7.4's pg_dump and seeing if the
output will load into v11 --- ideally it would, but I'd not be
surprised if there are issues that have to be resolved manually.
Or, if you have 8.4's pg_dump at hand, try using that.

7.4 to 11 is a big jump to be doing in one step.  There's definitely
something to be said for porting to an intermediate release, just to
break down the work into smaller chunks.  But I'd go for halfway between,
which if I counted releases correctly would be about 9.1, not 8.4.

regards, tom lane




Re: Pg_auto_failover

2019-09-25 Thread Sonam Sharma
My user has permission to touch files inside the FS.
I am not getting why it is checking for /backup.

Any suggestions on this , what else I can check

On Wed, Sep 25, 2019, 12:15 AM bricklen  wrote:

>
>
> On Tue, Sep 24, 2019 at 8:41 AM Sonam Sharma  wrote:
>
>> I was setting up pg_auto_failover. Have successfully set up and monitor
>> and primary instance. While setting up secondary it's failing with below :
>>
>> ERROR Failed to ensure empty directory "//backup" : Permission denied
>> Error Failed initialise standby server , see above for details.
>>
>
> If you're using the tool from Citus, per
> https://github.com/citusdata/pg_auto_failover/blob/d8ba26f47a60eaf1e1dc63ae67279553984d84f5/src/bin/pg_autoctl/file_utils.c#L112
> Is the problem literally that the OS user executing the commands does not
> have sufficient permission to work in that directory? If you are creating
> your data directory manually in "/", you might want to create a directory
> under that path that the user executing the commands has sufficient
> permission to modify (delete, create, write to)
>


Re: Upgrading old server

2019-09-25 Thread Ron

On 9/25/19 9:29 AM, Christoph Berg wrote:

Re: Ekaterina Amez 2019-09-25 <8818b028-bd2d-412e-d4e3-e29c49ffe...@zunibal.com>

We've decided to upgrade our PostgreSQL production servers. First task is
to remove an old v7.4 version. It was supposed to be upgraded to a v8.4
server. The server was installed, several databases were released here but
v7.4 was never migrated. The plan is to pg_dump this database and psql it to
the existing 8.4 server. After this, we'll pg_upgrade.

If you're doing dump-restore anyway, why not restore into v11 right away?


Since it's recommended to run the newer pg_dump on the older database, I've 
got to wonder if v11 pg_dump can read the v7.4 on-disk structures.


--
Angular momentum makes the world go 'round.




Re: Receiving error while altering the table column datatype

2019-09-25 Thread David G. Johnston
On Wed, Sep 25, 2019 at 5:08 AM Raghavendra Rao J S V <
raghavendra...@gmail.com> wrote:

> *Reason:*
>
> *SQL Error [0A000]: ERROR: cannot alter type of a column used in a trigger
> definition  Detail: trigger extensiontrigger on table tele depends on
> column "phonenumber"*
>
> Do I need to drop the trigger and alter the table column to modify the
> datatype of the columns. Am I correct?
>

Seems like a reasonable course of action, and simple enough to just
experiment with on a test relation.
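
A hedged sketch of that course of action, using the table, column and
trigger names from the error message; the target type and the trigger
definition below are placeholders, the real trigger would have to be
recreated exactly as it exists:

BEGIN;
-- 1. Drop the trigger that references the column.
DROP TRIGGER extensiontrigger ON tele;
-- 2. Alter the column type (USING supplies the cast if one is needed).
ALTER TABLE tele
    ALTER COLUMN phonenumber TYPE varchar(20)
    USING phonenumber::varchar(20);
-- 3. Recreate the trigger from its original definition (placeholder body).
CREATE TRIGGER extensiontrigger
    BEFORE INSERT OR UPDATE ON tele
    FOR EACH ROW EXECUTE PROCEDURE extensiontrigger_fn();
COMMIT;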

> or is there any other way to resolve it.
>

Doubtful

David J.


Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Rob Sargent


> On Sep 25, 2019, at 8:24 AM, Krishnakant Mane  wrote:
> 
> 
>> On 25/09/19 7:50 PM, Adrian Klaver   wrote:
>>> On 9/25/19 12:15 AM, Krishnakant Mane wrote: 
>>> Hello all, 
>>> 
>>> I have been using postgresql for an enterprise quality account's automation 
>>> and inventory management software called GNUKhata  
>>> 
>>> Our team is planning to add backup and restore function in the software. 
>>> 
>>> But we don't want to dump the entire database and then restore the same. 
>>> 
>>> What we are trying to do is to copy data specific to an organization. 
>>> 
>>> The challenge here is that I might copy all data (account heads, bills, 
>>> vouchers etc ) for one organization from an instance on one machine. 
>>> 
>>> I take the archive in what ever format to another machine and now attempt 
>>> to restore. 
>>> 
>>> The risk here is for example if the primary key value for orgcode in the 
>>> organization table is 5, it might conflict with the data where I am 
>>> attempting it to be restored. 
>>> 
>>> Same holds true for bills, invoices etc. 
>>> 
>>> A certain account head with accountcode 1 might be already present on the 
>>> second machine. 
>>> 
>>> I am not expecting the users to empty all data from the destination machine 
>>> before restoring a backup. 
>>> 
>>> The reason is that an auditor may have many client's data and one can't 
>>> predict what primary key values are going to come from a backup. 
>>> 
>>> Basically I can even say this is a copy paste instead of a pure backup and 
>>> restore. 
>>> 
>>> Can any one suggest how to handle such conflicts? 
>> 
>> Hard to say. If the data is held in common tables (bills, vouchers, etc)
>> then the only thing I see happening is changing the PK values to an unused
>> value. That could turn into a nightmare though. Not only that, you lose the
>> connection to the original data source. If the data can be broken out into
>> separate tables then I could see placing them in their own schema.
>> 
>>> 
>>> 
>>> -- 
>>> Regards, 
>>> Krishnakant Mane, 
>>> Project Founder and Leader, 
>>> GNUKhata  
>>> (Opensource Accounting, Billing and Inventory Management Software)
>> 
> 
> Hi Adrian,
> 
> Even I am thinking of doing some kind of upsert in this situation.
> 
> And I would have to set the pkey to an unassigned value when there is 
> conflict.
> 
> I may also choose to revamp the serial by timestamps but don't know if the 
> target customers would like it.
> 
> -- 
> Regards, 
> Krishnakant Mane,
> Project Founder and Leader,
> GNUKhata
> (Opensource Accounting, Billing and Inventory Management Software)
It would likely be easier to rethink your backup and restore plan. Putting each 
restore into its own space would be one tack. 

Re: updating sequence value for column 'serial'

2019-09-25 Thread Adrian Klaver

On 9/24/19 10:40 PM, Matthias Apitz wrote:

On Tuesday, September 24, 2019 at 08:01:46 a.m. -0700, Adrian Klaver
wrote:


On 9/24/19 7:47 AM, Matthias Apitz wrote:


Hello,

We have in a database some 400 tables, 75 of them have a 'serial'
column, like the one in the example table 'titel_daten', column 'katkey'.

I want to create a SQL script to adjust all these sequences to the
max+1 value in their columns after loading the database from CSV files.
I found no other way than the code below (the RAISE NOTICE is
only for testing at the moment and the output is correct for this table,
i.e. the current max in 'katkey' is 330721):

sisis=# DO $$
sisis$# DECLARE
sisis$#maxikatkey integer := ( select max(katkey) from titel_daten );
sisis$#result integer := 1;
sisis$# BEGIN
sisis$#maxikatkey := maxikatkey +1;
sisis$#RAISE NOTICE '%', maxikatkey ;
sisis$#result := (SELECT SETVAL('titel_daten_katkey_seq', maxikatkey) );
sisis$#RAISE NOTICE '%', result ;
sisis$# END $$;
NOTICE:  330722
NOTICE:  330723
DO

Is there any better way? Thanks


I have not found a better way. I use ALTER SEQUENCE .. RESTART 330722
though:


Yes, but I found no way to use a variable like 'maxikatkey' in the ALTER 
SEQUENCE ...
it only accepts digits like 330722.


DO $$
DECLARE
max_id int;
BEGIN
SELECT INTO max_id max(id) + 1 FROM seq_test;
RAISE NOTICE 'Max id is %', max_id;
EXECUTE 'ALTER SEQUENCE seq_test_id_seq RESTART ' || max_id::text;
END;
$$ LANGUAGE plpgsql;



Sometimes, when the table has no rows for example, the SELECT MAX(...) FROM ...
returns NULL. I'm surprised that even maxikatkey := maxikatkey + 1; does
not set 'maxikatkey' to 1 (NULL + 1 is still NULL). Should I worry about this
in SELECT SETVAL(...), or can I make it somehow 1 or 0?


It's just going to leave it where it is:

test=# create sequence test_seq;
CREATE SEQUENCE
test=# select * from test_seq ;
 last_value | log_cnt | is_called
+-+---
  1 |   0 | f
(1 row)

test=# SELECT SETVAL('test_seq', NULL);
 setval

   NULL
(1 row)

test=# select * from test_seq ;
 last_value | log_cnt | is_called
+-+---
  1 |   0 | f
(1 row)

test=# SELECT SETVAL('test_seq', 15);
 setval

 15
(1 row)

test=# SELECT SETVAL('test_seq', NULL);
 setval

   NULL
(1 row)

test=# select * from test_seq ;
 last_value | log_cnt | is_called
+-+---
 15 |   0 | t
(1 row)
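
Since the original question was about adjusting all 75 serial columns,
here is a hedged sketch of a generic loop over them (assuming the tables
are on the search_path; pg_get_serial_sequence() resolves the sequence
owned by each serial column):

DO $$
DECLARE
    rec record;
    seq text;
BEGIN
    -- Every column whose default pulls from a sequence, i.e. every serial.
    FOR rec IN
        SELECT c.relname AS tbl, a.attname AS col
        FROM pg_class c
        JOIN pg_attribute a ON a.attrelid = c.oid
                           AND a.attnum > 0 AND NOT a.attisdropped
        JOIN pg_attrdef d ON d.adrelid = c.oid AND d.adnum = a.attnum
        WHERE c.relkind = 'r'
          AND pg_get_expr(d.adbin, d.adrelid) LIKE 'nextval%'
    LOOP
        seq := pg_get_serial_sequence(rec.tbl, rec.col);
        IF seq IS NOT NULL THEN
            -- Third setval() argument false: the next nextval() returns
            -- exactly max+1; COALESCE covers empty tables.
            EXECUTE format(
                'SELECT setval(%L, COALESCE((SELECT max(%I) FROM %I), 0) + 1, false)',
                seq, rec.col, rec.tbl);
        END IF;
    END LOOP;
END;
$$ LANGUAGE plpgsql;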



matthias




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Mapping view columns to their source columns

2019-09-25 Thread Tom Lane
Matt Andrews  writes:
> Incidentally, I have become interested in the pg_node_tree type. I can't
> find much info on it. Would that be in the source? What would be the first
> steps for writing an extension for this sort of thing?

pg_node_tree is just a serialization of the tree-of-Node-structs
representation used inside the server.  See
src/include/nodes/primnodes.h and adjacent files for Node definitions.
The actual serialization code is in src/backend/nodes/outfuncs.c
and adjacent files.

(Beware, however, that there's proposals floating around to change
the serialization format.)
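
For a quick look at one such serialized tree without writing any C, a
sketch that selects the pg_node_tree column of a view's rewrite rule
('some_view' is a placeholder):

-- pg_rewrite.ev_action is of type pg_node_tree: the serialized Node tree
-- of the view's _RETURN rule, produced by the outfuncs.c machinery.
SELECT r.ev_action
FROM pg_rewrite r
JOIN pg_class c ON c.oid = r.ev_class
WHERE c.relname = 'some_view';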

regards, tom lane




Re: Upgrading old server

2019-09-25 Thread Christoph Berg
Re: Ekaterina Amez 2019-09-25 <8818b028-bd2d-412e-d4e3-e29c49ffe...@zunibal.com>
> We've decided to upgrade our PostgreSQL production servers. First task is
> to remove an old v7.4 version. It was supposed to be upgraded to a v8.4
> server. The server was installed, several databases were released here but
> v7.4 was never migrated. The plan is to pg_dump this database and psql it to
> the existing 8.4 server. After this, we'll pg_upgrade.

If you're doing dump-restore anyway, why not restore into v11 right away?

Christoph




Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Krishnakant Mane

On 25/09/19 7:50 PM, Adrian Klaver wrote:
> On 9/25/19 12:15 AM, Krishnakant Mane wrote:
>> Hello all,
>>
>> I have been using postgresql for an enterprise quality account's
>> automation and inventory management software called GNUKhata
>> 
>>
>> Our team is planning to add backup and restore function in the software.
>>
>> But we don't want to dump the entire database and then restore the same.
>>
>> What we are trying to do is to copy data specific to an organization.
>>
>> The challenge here is that I might copy all data (account heads,
>> bills, vouchers etc ) for one organization from an instance on one
>> machine.
>>
>> I take the archive in what ever format to another machine and now
>> attempt to restore.
>>
>> The risk here is for example if the primary key value for orgcode in
>> the organization table is 5, it might conflict with the data where I
>> am attempting it to be restored.
>>
>> Same holds true for bills, invoices etc.
>>
>> A certain account head with accountcode 1 might be already present on
>> the second machine.
>>
>> I am not expecting the users to empty all data from the destination
>> machine before restoring a backup.
>>
>> The reason is that an auditor may have many client's data and one
>> can't predict what primary key values are going to come from a backup.
>>
>> Basically I can even say this is a copy paste instead of a pure
>> backup and restore.
>>
>> Can any one suggest how to handle such conflicts?
>
> Hard to say. If the data is held in common tables (bills, vouchers,
> etc) then the only thing I see happening is changing the PK values to
> an unused value. That could turn into a nightmare though. Not only
> that, you lose the connection to the original data source. If the data
> can be broken out into separate tables then I could see placing them
> in their own schema.
>
>>
>>
>> -- 
>> Regards,
>> Krishnakant Mane,
>> Project Founder and Leader,
>> GNUKhata 
>> (Opensource Accounting, Billing and Inventory Management Software)
>

Hi Adrian,

Even I am thinking of doing some kind of upsert in this situation.

And I would have to set the pkey to an unassigned value when there is
conflict.

I may also choose to revamp the serial by timestamps but don't know if
the target customers would like it.

-- 
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
(Opensource Accounting, Billing and Inventory Management Software)


Upgrading old server

2019-09-25 Thread Ekaterina Amez

Hi All,

We've decided to upgrade our PostgreSQL production servers. First task 
is to remove an old v7.4 version. It was supposed to be upgraded to a v8.4 
server. The server was installed, several databases were released here 
but v7.4 was never migrated. The plan is to pg_dump this database and psql 
it to the existing 8.4 server. After this, we'll pg_upgrade. In order to 
run some tests to be ready for the production server I'd like to know what 
would be the best approach: upgrade to v10 or maybe v11? Or start by 
upgrading to 9.6 and, if everything goes fine, migrate to v10/v11?


If it's useful: the server is CentOS 6.8.

I've successfully installed a backup of the old v7.4 version in my v8.4 
test server (with some help from the psql-admin list), and now that I've 
studied a bit about pg_upgrade-ing I'm going to upgrade the installed PG 
version to the one that's best for this situation.



Thank you all,

Ekaterina





Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Adrian Klaver

On 9/25/19 12:15 AM, Krishnakant Mane wrote:

Hello all,

I have been using postgresql for an enterprise quality account's 
automation and inventory management software called GNUKhata 



Our team is planning to add backup and restore function in the software.

But we don't want to dump the entire database and then restore the same.

What we are trying to do is to copy data specific to an organization.

The challenge here is that I might copy all data (account heads, bills, 
vouchers etc ) for one organization from an instance on one machine.


I take the archive in what ever format to another machine and now 
attempt to restore.


The risk here is for example if the primary key value for orgcode in the 
organization table is 5, it might conflict with the data where I am 
attempting it to be restored.


Same holds true for bills, invoices etc.

A certain account head with accountcode 1 might be already present on 
the second machine.


I am not expecting the users to empty all data from the destination 
machine before restoring a backup.


The reason is that an auditor may have many client's data and one can't 
predict what primary key values are going to come from a backup.


Basically I can even say this is a copy paste instead of a pure backup 
and restore.


Can any one suggest how to handle such conflicts?


Hard to say. If the data is held in common tables (bills, vouchers, 
etc) then the only thing I see happening is changing the PK values to an 
unused value. That could turn into a nightmare though. Not only that, you 
lose the connection to the original data source. If the data can be 
broken out into separate tables then I could see placing them in their 
own schema.





--
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
(Opensource Accounting, Billing and Inventory Management Software)



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Operator is not unique

2019-09-25 Thread PegoraroF10
Doing an inner join with pg_proc I'll get this result. So, none of these
casts were created by me. Column prorettype is different, is this the problem?

select format_type(castsource, NULL), format_type(casttarget, NULL),
       castfunc, castcontext, castmethod, pr.*
from pg_cast
inner join pg_proc pr on castfunc = pr.oid
where (castsource = 'numeric'::regtype or castsource = 'bigint'::regtype)
  and format_type(casttarget, NULL) in ('numeric','bigint')
order by castsource;
(three result rows, shown transposed)

column          | row 1        | row 2        | row 3
----------------+--------------+--------------+------------------
format_type     | bigint       | numeric      | numeric
format_type     | numeric      | bigint       | numeric
castfunc        | 1781         | 1779         | 1703
castcontext     | i            | a            | i
castmethod      | f            | f            | f
proname         | numeric      | int8         | numeric
pronamespace    | 11           | 11           | 11
proowner        | 10           | 10           | 10
prolang         | 12           | 12           | 12
procost         | 1            | 1            | 1
prorows         | 0            | 0            | 0
provariadic     | 0            | 0            | 0
protransform    | -            | -            | numeric_transform
prokind         | f            | f            | f
prosecdef       | false        | false        | false
proleakproof    | false        | false        | false
proisstrict     | true         | true         | true
proretset       | false        | false        | false
provolatile     | i            | i            | i
proparallel     | s            | s            | s
pronargs        | 1            | 1            | 2
pronargdefaults | 0            | 0            | 0
prorettype      | 1700         | 20           | 1700
proargtypes     | 20           | 1700         | 1700 23
proallargtypes  | NULL         | NULL         | NULL
proargmodes     | NULL         | NULL         | NULL
proargnames     | NULL         | NULL         | NULL
proargdefaults  | NULL         | NULL         | NULL
protrftypes     | NULL         | NULL         | NULL
prosrc          | int8_numeric | numeric_int8 | numeric
probin          | NULL         | NULL         | NULL
proconfig       | NULL         | NULL         | NULL
proacl          | NULL         | NULL         | NULL





Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Krishnakant Mane

On 25/09/19 4:32 PM, Rob Sargent wrote:
>
>
> On Sep 25, 2019, at 1:15 AM, Krishnakant Mane  > wrote:
>
>> Hello all,
>>
>> I have been using postgresql for an enterprise quality account's
>> automation and inventory management software called GNUKhata
>> 
>>
>> Our team is planning to add backup and restore function in the software.
>>
>> But we don't want to dump the entire database and then restore the same.
>>
>> What we are trying to do is to copy data specific to an organization.
>>
>> The challenge here is that I might copy all data (account heads,
>> bills, vouchers etc ) for one organization from an instance on one
>> machine.
>>
>> I take the archive in what ever format to another machine and now
>> attempt to restore.
>>
>> The risk here is for example if the primary key value for orgcode in
>> the organization table is 5, it might conflict with the data where I
>> am attempting it to be restored.
>>
>> Same holds true for bills, invoices etc.
>>
>> A certain account head with accountcode 1 might be already present on
>> the second machine.
>>
>> I am not expecting the users to empty all data from the destination
>> machine before restoring a backup.
>>
>> The reason is that an auditor may have many client's data and one
>> can't predict what primary key values are going to come from a backup.
>>
>> Basically I can even say this is a copy paste instead of a pure
>> backup and restore.
>>
>> Can any one suggest how to handle such conflicts?
>>
>>
>> -- 
>> Regards,
>> Krishnakant Mane,
>> Project Founder and Leader,
>> GNUKhata 
>> (Opensource Accounting, Billing and Inventory Management Software)
> I’m not sure I like your definition of ‘backup and restore’ but you
> might get away with your approach if your keys were UUIDs. But I’ll
> bet dollars to doughnuts you’re using serial keys. Those seem to
> appeal to accounting types.  
Hi Rob,

Yes, you are right, they are serial keys, because timestamps or anything
thereof are aggressively despised by the accountants and the like.
Now I am really stuck as to what could be done.

--
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata
(Opensource Accounting, Billing and Inventory Management Software)


Receiving error while altering the table column datatype

2019-09-25 Thread Raghavendra Rao J S V
Hi All,

I am receiving the below error while altering the table column datatype.

Reason:

SQL Error [0A000]: ERROR: cannot alter type of a column used in a trigger
definition  Detail: trigger extensiontrigger on table tele depends
on column "phonenumber"

Do I need to drop the trigger and alter the table column to modify the
datatype of the column. Am I correct? Or is there any other way to resolve
it?

-- 
Regards,
Raghavendra Rao J S V


Re: managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Rob Sargent


> On Sep 25, 2019, at 1:15 AM, Krishnakant Mane  wrote:
> 
> Hello all,
> 
> I have been using postgresql for an enterprise quality account's automation 
> and inventory management software called GNUKhata 
> 
> Our team is planning to add backup and restore function in the software.
> 
> But we don't want to dump the entire database and then restore the same.
> 
> What we are trying to do is to copy data specific to an organization.
> 
> The challenge here is that I might copy all data (account heads, bills, 
> vouchers etc ) for one organization from an instance on one machine.
> 
> I take the archive in what ever format to another machine and now attempt to 
> restore.
> 
> The risk here is for example if the primary key value for orgcode in the 
> organization table is 5, it might conflict with the data where I am 
> attempting it to be restored.
> 
> Same holds true for bills, invoices etc.
> 
> A certain account head with accountcode 1 might be already present on the 
> second machine.
> 
> I am not expecting the users to empty all data from the destination machine 
> before restoring a backup.
> 
> The reason is that an auditor may have many client's data and one can't 
> predict what primary key values are going to come from a backup.
> 
> Basically I can even say this is a copy paste instead of a pure backup and 
> restore.
> 
> Can any one suggest how to handle such conflicts?
> 
> 
> 
> -- 
> Regards, 
> Krishnakant Mane,
> Project Founder and Leader,
> GNUKhata
> (Opensource Accounting, Billing and Inventory Management Software)
I’m not sure I like your definition of ‘backup and restore’ but you might get 
away with your approach if your keys were UUIDs. But I’ll bet dollars to 
doughnuts you’re using serial keys. Those seem to appeal to accounting types. 

pgq is one of the most underrated pg-related tools

2019-09-25 Thread Миша Тюрин


Ok! How are you doing with or without pgq?

// another underrated one was multicorn. Is it still alive?

Thank you! And sorry for the provocative way of asking.
— Misha



managing primary key conflicts while restoring data to table with existing data

2019-09-25 Thread Krishnakant Mane
Hello all,

I have been using postgresql for an enterprise quality account's
automation and inventory management software called GNUKhata


Our team is planning to add backup and restore function in the software.

But we don't want to dump the entire database and then restore the same.

What we are trying to do is to copy data specific to an organization.

The challenge here is that I might copy all data (account heads, bills,
vouchers etc ) for one organization from an instance on one machine.

I take the archive in what ever format to another machine and now
attempt to restore.

The risk here is for example if the primary key value for orgcode in the
organization table is 5, it might conflict with the data where I am
attempting it to be restored.

Same holds true for bills, invoices etc.

A certain account head with accountcode 1 might be already present on
the second machine.

I am not expecting the users to empty all data from the destination
machine before restoring a backup.

The reason is that an auditor may have many client's data and one can't
predict what primary key values are going to come from a backup.

Basically I can even say this is a copy paste instead of a pure backup
and restore.

Can any one suggest how to handle such conflicts?


-- 
Regards,
Krishnakant Mane,
Project Founder and Leader,
GNUKhata 
(Opensource Accounting, Billing and Inventory Management Software)