Re: Oracle BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Pavel Stehule
út 15. 1. 2019 v 12:56 odesílatel Daniel Verite 
napsal:

> Mihalidesová Jana wrote:
>
> > nipjd=> select distinct encode(serializable_value, 'escape') from
> > alf_node_properties_zaloha where serializable_value is not null;
> >
> >   encode
> > ------------------------------------------------------------
> > decode(E'aced00057e72002c6f72672e616c66726573636f2e736572766963652e636d722e76657273696f6e2e56657273696f6e54797065127872000e6a6176612e6c616e672e456e756d1278707400054d494e4f52', 'hex')
>
> These contents are code, in the form of function calls, instead of data.
> Possibly if you ran these function calls they would output the
> original data, but that's a pretty weird form of storage.  If the
> insert was done by Ora2Pg, you want to report this as a bug.
>

+1

https://github.com/darold/ora2pg

the author is very cooperative

Regards

Pavel

>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>


Re: Read consistency when using synchronous_commit=off

2019-01-15 Thread Ravi Krishna
Sorry I misunderstood.  The term "read consistency" is generally used
either in the context of isolation level or in the context of slaves.
> We don't have standby instance, as I have mentioned we are using just
> one instance of postgres serving local clients running on the same
> machine, do you know in this case what is the behavior?

You are good.  All transactions update the buffer cache as well as the
WAL buffer, so other sessions can see the changes immediately.
synchronous_commit=off only reduces fsync calls, which makes the most
recent commits less crash-safe, but database consistency is not compromised.
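A minimal sketch of the setting being discussed (the table name is a placeholder, not from the thread):

```sql
-- synchronous_commit can be changed per session or per transaction.
-- It trades durability of the most recent commits for lower commit
-- latency; visibility to other sessions is unaffected either way.
SET synchronous_commit = off;           -- whole session

BEGIN;
SET LOCAL synchronous_commit = off;     -- this transaction only
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- placeholder table
COMMIT;  -- may return before the WAL record is flushed to disk
```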





Re: Read consistency when using synchronous_commit=off

2019-01-15 Thread pshadangi
 > After better thinking, I have to reply to myself since I'm not entirely
sure of my previous question. (I'm digging into the docs, but I do not want
to mislead you in the meanwhile)
Yes, I was not able to get some statement regarding this in the docs,
please let me know if you find something. Thanks.

> No.  On the standby the buffer cache has to be populated with the updates
before other client sessions can read it.  AFAIK other client sessions do
not read WAL buffers.
> That is why synchronous_commit=ON option is there.
We don't have standby instance, as I have mentioned we are using just one
instance of postgres serving local clients running on the same machine, do
you know in this case what is the behavior ?


On Tue, Jan 15, 2019 at 5:24 PM Ravi Krishna  wrote:

>
> > I m not sure other clients are able to read from WAL buffer, therefore i
> m not sure the data is
> > available to other clients at that specific point in time.
>
> No.  On the standby the buffer cache has to be populated with the updates
> before other client sessions can read it.  AFAIK other client sessions do
> not read WAL buffers.
>
> That is why synchronous_commit=ON option is there.
>
>
>


Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread Alexander Farber
Thank you, the following seems to have worked -

On Tue, Jan 15, 2019 at 8:49 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:
>
> UPDATE users
> SET avg_time = diffs.average_time_for_the_grouped_by_user
> FROM diffs
> WHERE users.uid = diffs.uid --< the missing "where" I commented about
earlier
>
> But you need to construct the "diffs" CTE/subquery so that it Group[s] By
uid
>

WITH diffs AS (
  SELECT
  gid,
  uid,
  played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
  FROM moves
),
avg_diffs AS (
  SELECT uid, AVG(diff) as avg_diff FROM diffs GROUP BY uid
)
UPDATE users SET avg_time = avg_diff
FROM avg_diffs
WHERE users.uid = avg_diffs.uid;

https://www.db-fiddle.com/f/w1AYGpoZiWW9bLCYjHDk7H/9

Or did you mean something else?

Regards
Alex


RE: Can anyone please provide me list of customers using postgreSQL

2019-01-15 Thread Igor Neyman

From: Ramamoorthi, Meenakshi [mailto:meenakshi.ramamoor...@cpuc.ca.gov]
Sent: Tuesday, January 15, 2019 2:42 PM
To: pgsql-general 
Subject: Can anyone please provide me list of customers using postgreSQL

Dear folks:


1)  Can someone please send me a link of all companies using PostgreSQL ?

2)  Both government and private companies using PostgreSQL

3)  Any security issues found earlier and the steps taken for resolution or 
how it was mitigated.

4)  Advantages of PostgreSQL compared to other databases like Oracle, MySQL 
etc.


Thanks and best regards
Meenakshi Ramamoorthi


Some are listed here:

https://stackshare.io/postgresql/in-stacks

Or, just do a Google search.

Regards,
Igor Neyman


Re: Can anyone please provide me list of customers using postgreSQL

2019-01-15 Thread David G. Johnston
On Tue, Jan 15, 2019 at 12:42 PM Ramamoorthi, Meenakshi
 wrote:
>
> Dear folks:

> 1) Can someone please send me a link of all companies using PostgreSQL ?
>
> 2) Both government and private companies using PostgreSQL
>
> 3) Any security issues found earlier and the steps taken for resolution 
> or how it was mitigated.
>
> 4) Advantages of PostgreSQL compared to other databases like Oracle, 
> MySQL etc.

Doubtful.  Though I suspect there are quite a few published resources
for questions 3 and 4 that you can go search for.  CVEs are reported
against the product, so searching those - and reviewing the release
notes and commits - will provide insight into #3.

Many people have written comparisons of various databases - search for
and read those to accomplish #4.

You are delusional if you actually expect a comprehensive answer to
numbers one and two.

David J.



Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread David G. Johnston
On Tue, Jan 15, 2019 at 12:42 PM Alexander Farber
 wrote:
>
> Last question please - how to run the query for all users?
>
> I know I could use the FOR-loop from PL/PgSQL, but is there also a pure SQL 
> way?
>
> How to refer to the outside "uid" from inside the CTE in the query below?
>
> WITH diffs AS (
>   SELECT
>   gid,
>   uid,
>   played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
>   FROM moves
>   WHERE uid = how_to_refer_to_the_outside_uid
> )
> UPDATE users SET avg_time =
> (
>   SELECT
> AVG(diff)
>   FROM diffs
>  GROUP BY uid
> )
> WHERE uid IN (SELECT uid FROM users);

UPDATE users
SET avg_time = diffs.average_time_for_the_grouped_by_user
FROM diffs
WHERE users.uid = diffs.uid --< the missing "where" I commented about earlier

But you need to construct the "diffs" CTE/subquery so that it Group[s] By uid

David J.



Re: Can anyone please provide me list of customers using postgreSQL

2019-01-15 Thread Rob Sargent


On 1/15/19 12:42 PM, Ramamoorthi, Meenakshi wrote:


Dear folks:

1)Can someone please send me a link of all companies using PostgreSQL ?


Postgres is free.  There is no list of licensed sites.


2)Both government and private companies using PostgreSQL

3)Any security issues found earlier and the steps taken for resolution 
or how it was mitigated.


4)Advantages of PostgreSQL compared to other databases like Oracle, 
MySQL etc.


*Thanks and best regards*

*Meenakshi Ramamoorthi*



Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread Alexander Farber
Last question please - how to run the query for all users?

I know I could use the FOR-loop from PL/PgSQL, but is there also a pure SQL
way?

How to refer to the outside "uid" from inside the CTE in the query below?

WITH diffs AS (
  SELECT
  gid,
  uid,
  played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
  FROM moves
  WHERE uid = how_to_refer_to_the_outside_uid
)
UPDATE users SET avg_time =
(
  SELECT
AVG(diff)
  FROM diffs
 GROUP BY uid
)
WHERE uid IN (SELECT uid FROM users);

https://www.db-fiddle.com/f/w1AYGpoZiWW9bLCYjHDk7H/2

Regards
Alex


Can anyone please provide me list of customers using postgreSQL

2019-01-15 Thread Ramamoorthi, Meenakshi
Dear folks:


1) Can someone please send me a link of all companies using PostgreSQL ?

2) Both government and private companies using PostgreSQL

3) Any security issues found earlier and the steps taken for resolution or 
how it was mitigated.

4) Advantages of PostgreSQL compared to other databases like Oracle, MySQL 
etc.


Thanks and best regards
Meenakshi Ramamoorthi




From: Alexander Farber 
Sent: Tuesday, January 15, 2019 11:24 AM
Cc: pgsql-general 
Subject: Re: aggregate functions are not allowed in UPDATE

And I should better change the avg_time column from TEXT to TIMESTAMPTZ (and 
use TO_CHAR on it later down the road) so that I can compare my players

Regards
Alex


Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread Alexander Farber
And I should better change the avg_time column from TEXT to TIMESTAMPTZ
(and use TO_CHAR on it later down the road) so that I can compare my players

Regards
Alex

>


Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread Alexander Farber
Ahh, the subqueries -

On Tue, Jan 15, 2019 at 5:59 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Jan 15, 2019 at 9:52 AM Alexander Farber
>  wrote:
> >> So calculate the average somewhere else, put the result in a column,
> >> and then reference that column in the SET clause.
> >
> > do you suggest to add a second CTE?
>
> That would qualify as "somewhere else" - as would a simple subquery in
> FROM.
>

they escaped my mind for some reason! :-)

WITH diffs AS (
  SELECT
  gid,
  uid,
  played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
  FROM moves
  WHERE uid = 1
)
UPDATE users SET avg_time =
(
  SELECT
AVG(diff)
  FROM diffs
 GROUP BY uid
)
WHERE uid = 1;

https://www.db-fiddle.com/f/w1AYGpoZiWW9bLCYjHDk7H/1

seems to work, thank you


Re: Refining query statement

2019-01-15 Thread Rob Sargent



On 1/15/19 9:02 AM, Ron wrote:

 select distinct on (C.contact_id) C.contact_id, C.lname, C.fname, C.direct_phone, O.org_name, A.next_contact

 from Contacts as C
   join Organizations as O on C.org_id = O.org_id
   join Activities as A on C.contact_id = A.contact_id
 where A.next_contact <= 'today'
   and A.next_contact > '2018-12-31'
 order by c.contact_id, a.next_contact DESC;


And I've never liked this method (though I'm old and crotchety)

I suspect I can match your old and crotchety, but jump in, the water is 
fine (once you get used to it)




Re: Refining query statement

2019-01-15 Thread Thomas Kellerer

Adrian Klaver wrote on 15.01.2019 at 17:44:

So we end up with something like this:

 select distinct on (C.contact_id) C.contact_id, C.lname, C.fname, 
C.direct_phone, O.org_name, A.next_contact
 from Contacts as C
   join Organizations as O on C.org_id = O.org_id
   join Activities as A on C.contact_id = A.contact_id
 where A.next_contact <= 'today'
   and A.next_contact > '2018-12-31'
 order by c.contact_id, a.next_contact DESC;


And I've never liked this method (though I'm old and crotchety)



What is the specific objection?

To me:

1) Plus side. It saves an explicit sub-query

2) Minus side. It is not portable. But then complete portability is a unicorn 
in my opinion.


I think Ron was referring to the JOIN operator, rather than the DISTINCT ON

 





Re: Refining query statement

2019-01-15 Thread Rich Shepard

On Tue, 15 Jan 2019, Adrian Klaver wrote:


Use BETWEEN?:
https://www.postgresql.org/docs/10/functions-comparison.html
a BETWEEN x AND y   between

So:

next_contact BETWEEN '01/01/2019'::date AND '01/15/2019'::date

Or a range function:

https://www.postgresql.org/docs/10/functions-range.html


Adrian,

  Ah, yes! I do need to refresh my SQL knowledge and am working on that.

Thanks very much again,

Rich



Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread David G. Johnston
On Tue, Jan 15, 2019 at 9:52 AM Alexander Farber
 wrote:
>> So calculate the average somewhere else, put the result in a column,
>> and then reference that column in the SET clause.
>>
>
> do you suggest to add a second CTE?

That would qualify as "somewhere else" - as would a simple subquery in FROM.
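A hedged sketch of the subquery-in-FROM variant, reusing the table and column names from the thread:

```sql
-- Aggregate in a derived table instead of a second CTE, then join it
-- back in the UPDATE's FROM clause.
UPDATE users
SET avg_time = d.avg_diff
FROM (
  SELECT uid, AVG(diff) AS avg_diff
  FROM (
    SELECT uid,
           played - LAG(played) OVER (PARTITION BY gid ORDER BY played) AS diff
    FROM moves
  ) AS diffs
  GROUP BY uid
) AS d
WHERE users.uid = d.uid;
```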

David J.



Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread Alexander Farber
Unfortunately, I don't understand your advice, David -

On Tue, Jan 15, 2019 at 5:46 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Tue, Jan 15, 2019 at 9:42 AM Alexander Farber
>  wrote:
> > When I am trying
> >
> > WITH diffs AS (
> >   SELECT
> >   gid,
> >   uid,
> >   played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
> >   FROM moves
> >   WHERE uid = 1
> > )
> > UPDATE users SET
> > avg_time = TO_CHAR(AVG(diff), 'HH24:MI')
> > FROM diffs;
> >
> > the syntax error is unfortunately printed by PostgreSQL 10:
> >
> > aggregate functions are not allowed in UPDATE
>
> So calculate the average somewhere else, put the result in a column,
> and then reference that column in the SET clause.
>
>
do you suggest to add a second CTE?

Regards
Alex


Re: aggregate functions are not allowed in UPDATE

2019-01-15 Thread David G. Johnston
On Tue, Jan 15, 2019 at 9:42 AM Alexander Farber
 wrote:
> When I am trying
>
> WITH diffs AS (
>   SELECT
>   gid,
>   uid,
>   played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
>   FROM moves
>   WHERE uid = 1
> )
> UPDATE users SET
> avg_time = TO_CHAR(AVG(diff), 'HH24:MI')
> FROM diffs;
>
> the syntax error is unfortunately printed by PostgreSQL 10:
>
> aggregate functions are not allowed in UPDATE

So calculate the average somewhere else, put the result in a column,
and then reference that column in the SET clause.

It's also unusual to have FROM with UPDATE but no WHERE clause...

David J.



Re: Refining query statement

2019-01-15 Thread Adrian Klaver

On 1/15/19 8:02 AM, Ron wrote:


To get the "most recent one" in Postgres, DISTINCT ON () is usually the best way to do it:

So we end up with something like this:

 select distinct on (C.contact_id) C.contact_id, C.lname, C.fname, 
C.direct_phone, O.org_name, A.next_contact

 from Contacts as C
   join Organizations as O on C.org_id = O.org_id
   join Activities as A on C.contact_id = A.contact_id
 where A.next_contact <= 'today'
   and A.next_contact > '2018-12-31'
 order by c.contact_id, a.next_contact DESC;


And I've never liked this method (though I'm old and crotchety)



What is the specific objection?

To me:

1) Plus side. It saves an explicit sub-query

2) Minus side. It is not portable. But then complete portability is a 
unicorn in my opinion.


--
Adrian Klaver
adrian.kla...@aklaver.com



aggregate functions are not allowed in UPDATE

2019-01-15 Thread Alexander Farber
Good evening,

I have prepared a simple test case for my question:
https://www.db-fiddle.com/f/w1AYGpoZiWW9bLCYjHDk7H/0

There I create 3 tables:

CREATE TABLE users (
  uid SERIAL PRIMARY KEY,
  avg_time TEXT
);

CREATE TABLE games (
  gid SERIAL PRIMARY KEY,
  player1 INTEGER NOT NULL REFERENCES users(uid) ON DELETE CASCADE,
  player2 INTEGER NOT NULL REFERENCES users(uid) ON DELETE CASCADE CHECK
(player1 <> player2)
);

CREATE TABLE moves (
  mid BIGSERIAL PRIMARY KEY,
  played  timestamptz NOT NULL,
  gid INTEGER NOT NULL REFERENCES games(gid) ON DELETE CASCADE,
  uid INTEGER NOT NULL REFERENCES users(uid) ON DELETE CASCADE
);

And then I fill them with 3 test players and 2 test games (1 vs. 3 and 1
vs. 2):

INSERT INTO users (uid) VALUES
  (1),
  (2),
  (3);

INSERT INTO games (gid, player1, player2) VALUES
  (1, 2, 3),
  (2, 1, 3),
  (3, 1, 2);

INSERT INTO moves (played, gid, uid) VALUES
  (NOW() + INTERVAL '1 hour', 2, 1),
  (NOW() + INTERVAL '2 hour', 2, 3),
  (NOW() + INTERVAL '3 hour', 2, 1), -- +1 hour
  (NOW() + INTERVAL '4 hour', 2, 3),
  (NOW() + INTERVAL '5 hour', 2, 1), -- +1 hour
  (NOW() + INTERVAL '6 hour', 2, 3),
  (NOW() + INTERVAL '7 hour', 2, 1), -- +1 hour
  (NOW() + INTERVAL '8 hour', 2, 3),

  (NOW() + INTERVAL '10 hour', 3, 1),
  (NOW() + INTERVAL '20 hour', 3, 2),
  (NOW() + INTERVAL '30 hour', 3, 1), -- +10 hours
  (NOW() + INTERVAL '40 hour', 3, 2),
  (NOW() + INTERVAL '50 hour', 3, 1), -- +10 hours
  (NOW() + INTERVAL '60 hour', 3, 2),
  (NOW() + INTERVAL '70 hour', 3, 1), -- +10 hours
  (NOW() + INTERVAL '80 hour', 3, 2);

After that I am able to calculate the average time that the player 1 needs
to perform a move (11 hours):

WITH diffs AS (
  SELECT
  gid,
  uid,
  played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
  FROM moves
  WHERE uid = 1
)
SELECT
  uid, AVG(diff)
FROM diffs
GROUP BY uid;

But how to take that calculated value and store it in the "avg_time" text
column of the users table?

When I am trying

WITH diffs AS (
  SELECT
  gid,
  uid,
  played - LAG(played) OVER(PARTITION BY gid ORDER BY played) AS diff
  FROM moves
  WHERE uid = 1
)
UPDATE users SET
avg_time = TO_CHAR(AVG(diff), 'HH24:MI')
FROM diffs;

the syntax error is unfortunately printed by PostgreSQL 10:

aggregate functions are not allowed in UPDATE

Regards
Alex


Re: Refining query statement

2019-01-15 Thread Adrian Klaver

On 1/15/19 8:26 AM, Rich Shepard wrote:

On Tue, 15 Jan 2019, Adrian Klaver wrote:


For the above I could see using a datepicker widget that allows for
multidate select. The specifics would depend on the software you are 
using

to write the UI.


Adrian,

   I'm using wxPython4, and I will use a calendar with that. But, is there a
way to enhance the psql version, too?


Use BETWEEN?:
https://www.postgresql.org/docs/10/functions-comparison.html
a BETWEEN x AND y   between

So:

next_contact BETWEEN '01/01/2019'::date AND '01/15/2019'::date

Or a range function:

https://www.postgresql.org/docs/10/functions-range.html

<@   element is contained by
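A hedged sketch of the containment operator with a daterange, using the Activities table from the thread and assuming next_contact is a date column (use tstzrange for a timestamptz column):

```sql
-- '[)' makes the range inclusive of the start date and exclusive of
-- the end date, so this covers Jan 1 through Jan 15 inclusive.
SELECT *
FROM Activities
WHERE next_contact <@ daterange('2019-01-01', '2019-01-16', '[)');
```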





Thanks,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Refining query statement

2019-01-15 Thread Rich Shepard

On Tue, 15 Jan 2019, Ron wrote:


And I've never liked this method (though I'm old and crotchety)


Ron,

  I'm older but not crotchety (most days), and I'm taking my SQL knowledge
beyond what I've used in the past. I would appreciate your explaining why
you don't like the explicit JOINs presented by Thomas. The more I learn from
you more experienced SQL programmers, the better I am.

Regards,

Rich



Re: Refining query statement

2019-01-15 Thread Rich Shepard

On Tue, 15 Jan 2019, Thomas Kellerer wrote:


With regards to "cleaner": the first thing to do is to remove the
parentheses around the column list. In Postgres "(a,b,c)" creates a single
column with an anonymous record type (that contains three fields), rather
than selecting three columns. In other DBMS those parentheses are simply
useless.


Thomas,

I should have seen that myself. Thanks for pointing out the obvious to me.


"cleaner" is always subjective, but I find explicit JOIN operators a lot
cleaner than the old implicit joins.


  This does make sense; I need to refresh my knowledge of JOIN operators and
will do so.


The condition "A.next_contact is not null" is actually not necessary
because you already have a condition on that column, so NULL values won't
be returned anyway.


  True that.


To get the "most recent one" in Postgres, DISTINCT ON () is usually the
best way to do it: So we end up with something like this:

   select distinct on (C.contact_id) C.contact_id, C.lname, C.fname, 
C.direct_phone, O.org_name, A.next_contact
   from Contacts as C
 join Organizations as O on C.org_id = O.org_id
 join Activities as A on C.contact_id = A.contact_id
   where A.next_contact <= 'today'
 and A.next_contact > '2018-12-31'
   order by c.contact_id, a.next_contact DESC;


  This insight really helps. Now I know how to approach other queries.

Many thanks,

Rich



Re: Refining query statement

2019-01-15 Thread Rich Shepard

On Tue, 15 Jan 2019, Adrian Klaver wrote:


For the above I could see using a datepicker widget that allows for
multidate select. The specifics would depend on the software you are using
to write the UI.


Adrian,

  I'm using wxPython4, and I will use a calendar with that. But, is there a
way to enhance the psql version, too?

Thanks,

Rich



Re: Refining query statement

2019-01-15 Thread Ron

On 1/15/19 9:47 AM, Thomas Kellerer wrote:

Rich Shepard wrote on 15.01.2019 at 16:39:

   Working with my sales/client management system using psql I have a select
statement to identify contacts to be made. This statement works:

select (C.contact_id, C.lname, C.fname, C.direct_phone, O.org_name, 
A.next_contact)
from Contacts as C, Organizations as O, Activities as A
where C.org_id = O.org_id and C.contact_id = A.contact_id and
   A.next_contact <= 'today' and A.next_contact > '2018-12-31' and
   A.next_contact is not null;

but would benefit from tweaking. When I have had multiple contacts with
someone I want only the most recent one displayed, not all, and they should
be no more ancient than a defined period (e.g., a month).

   I want to learn how to make this query cleaner and more flexible. When I
write the UI for this I want to be able to specify a date range in addition
to a fixed 'today'. Pointers on what to read will be very helpful.

With regards to "cleaner": the first thing to do is to remove the parentheses 
around the column list.
In Postgres "(a,b,c)" creates a single column with an anonymous record type 
(that contains three fields), rather than selecting three columns.
In other DBMS those parentheses are simply useless.

"cleaner" is always subjective, but I find explicit JOIN operators a lot 
cleaner than the old implicit joins.

The condition "A.next_contact is not null" is actually not necessary because you 
already have a condition on that column, so NULL values won't be returned anyway.

To get the "most recent one" in Postgres, DISTINCT ON () is usually the best 
way to do it:

So we end up with something like this:

 select distinct on (C.contact_id) C.contact_id, C.lname, C.fname, 
C.direct_phone, O.org_name, A.next_contact
 from Contacts as C
   join Organizations as O on C.org_id = O.org_id
   join Activities as A on C.contact_id = A.contact_id
 where A.next_contact <= 'today'
   and A.next_contact > '2018-12-31'
 order by c.contact_id, a.next_contact DESC;


And I've never liked this method (though I'm old and crotchety)

--
Angular momentum makes the world go 'round.



Sv: Using psql variables in DO-blocks

2019-01-15 Thread Andreas Joseph Krogh
On Tuesday, 15 January 2019 at 16:51:09, Andreas Joseph Krogh <
andr...@visena.com > wrote:
Hi all.
 
I'm trying to use a psql variable in a DO-block, but it fails:
 
[snip] 
 
Seems I was a bit lazy, here's what works:
 
\set resource_group 'Ressurser'
\set quoted_resource_group '\'' :resource_group '\''
set myvars.quoted_resource_group to :quoted_resource_group;
DO $$
begin
  if not exists(SELECT * FROM onp_group WHERE groupname = current_setting('myvars.quoted_resource_group')) then
    raise notice 'Group % not found, creating it.', current_setting('myvars.quoted_resource_group');
  end if;
end;
$$;
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: Using psql variables in DO-blocks

2019-01-15 Thread David G. Johnston
On Tue, Jan 15, 2019 at 8:48 AM Andreas Joseph Krogh 
wrote:

> Hi all.
>
> I'm trying to use a psql variable in a DO-block, but it fails:
>
 [...]

> Any hints?
>

Don't do that.  The body of a DO block is a string literal and psql won't
and shouldn't mess with its contents.  You'll need to use an actual
function and pass in the psql variable data via an input parameter.

set_config(...)/current_setting(...) is another option to consider.
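Two hedged sketches of these options, reusing the tbl_group table and resource_group variable from the thread (the function name is made up for illustration):

```sql
-- Option 1: a real function taking the value as a parameter.
CREATE OR REPLACE FUNCTION ensure_group(p_name text) RETURNS void AS $$
BEGIN
  IF NOT EXISTS (SELECT 1 FROM tbl_group WHERE groupname = p_name) THEN
    RAISE NOTICE 'Group % not found, creating it.', p_name;
  END IF;
END;
$$ LANGUAGE plpgsql;

\set resource_group 'Ressurser'
SELECT ensure_group(:'resource_group');  -- :'var' interpolates as a quoted literal

-- Option 2: pass the value through a custom setting.
SELECT set_config('myvars.resource_group', :'resource_group', false);
DO $$
BEGIN
  RAISE NOTICE 'group is %', current_setting('myvars.resource_group');
END;
$$;
```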

David J.


Re: Refining query statement

2019-01-15 Thread Adrian Klaver

On 1/15/19 7:39 AM, Rich Shepard wrote:
   Working with my sales/client management system using psql I have a select
statement to identify contacts to be made. This statement works:

select (C.contact_id, C.lname, C.fname, C.direct_phone, O.org_name, 
A.next_contact)

from Contacts as C, Organizations as O, Activities as A
where C.org_id = O.org_id and C.contact_id = A.contact_id and
   A.next_contact <= 'today' and A.next_contact > '2018-12-31' and
   A.next_contact is not null;

but would benefit from tweaking. When I have had multiple contacts with
someone I want only the most recent one displayed, not all, and they should
be no more ancient than a defined period (e.g., a month).

   I want to learn how to make this query cleaner and more flexible. When I
write the UI for this I want to be able to specify a date range in addition
to a fixed 'today'. Pointers on what to read will be very helpful.


For the above I could see using a datepicker widget that allows for 
multidate select. The specifics would depend on the software you are 
using to write the UI.




TIA,

Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Refining query statement

2019-01-15 Thread Thomas Kellerer
Rich Shepard wrote on 15.01.2019 at 16:39:
>   Working with my sales/client management system using psql I have a select
> statement to identify contacts to be made. This statement works:
> 
> select (C.contact_id, C.lname, C.fname, C.direct_phone, O.org_name, 
> A.next_contact)
> from Contacts as C, Organizations as O, Activities as A
> where C.org_id = O.org_id and C.contact_id = A.contact_id and
>   A.next_contact <= 'today' and A.next_contact > '2018-12-31' and
>   A.next_contact is not null;
> 
> but would benefit from tweaking. When I have had multiple contacts with
> someone I want only the most recent one displayed, not all, and they should
> be no more ancient than a defined period (e.g., a month).
> 
>   I want to learn how to make this query cleaner and more flexible. When I
> write the UI for this I want to be able to specify a date range in addition
> to a fixed 'today'. Pointers on what to read will be very helpful.

With regards to "cleaner": the first thing to do is to remove the parentheses 
around the column list. 
In Postgres "(a,b,c)" creates a single column with an anonymous record type 
(that contains three fields), rather than selecting three columns.
In other DBMS those parentheses are simply useless.

"cleaner" is always subjective, but I find explicit JOIN operators a lot 
cleaner than the old implicit joins. 

The condition "A.next_contact is not null" is actually not necessary because you 
already have a condition on that column, so NULL values won't be returned 
anyway. 

To get the "most recent one" in Postgres, DISTINCT ON () is usually the best 
way to do it: 

So we end up with something like this: 

select distinct on (C.contact_id) C.contact_id, C.lname, C.fname, 
C.direct_phone, O.org_name, A.next_contact
from Contacts as C
  join Organizations as O on C.org_id = O.org_id
  join Activities as A on C.contact_id = A.contact_id 
where A.next_contact <= 'today' 
  and A.next_contact > '2018-12-31' 
order by c.contact_id, a.next_contact DESC;

  




Using psql variables in DO-blocks

2019-01-15 Thread Andreas Joseph Krogh
Hi all.
 
I'm trying to use a psql variable in a DO-block, but it fails:
 
\set resource_group 'Ressurser'
\set quoted_resource_group '\'' :resource_group '\''
DO $$
begin
  if not exists(SELECT * FROM tbl_group WHERE groupname = :quoted_resource_group) then
    raise notice 'Group % not found, creating it.', :quoted_resource_group;
  end if;
end;
$$;
 
ERROR:  syntax error at or near ":" 
 LINE 3: ... exists(SELECT * FROM tbl_group WHERE groupname = :quoted_re...
                                                            ^
  
Any hints?
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 




Refining query statement

2019-01-15 Thread Rich Shepard

  Working with my sales/client management system using psql I have a select
statement to identify contacts to be made. This statement works:

select (C.contact_id, C.lname, C.fname, C.direct_phone, O.org_name, 
A.next_contact)

from Contacts as C, Organizations as O, Activities as A
where C.org_id = O.org_id and C.contact_id = A.contact_id and
  A.next_contact <= 'today' and A.next_contact > '2018-12-31' and
  A.next_contact is not null;

but would benefit from tweaking. When I have had multiple contacts with
someone I want only the most recent one displayed, not all, and they should
be no more ancient than a defined period (e.g., a month).

  I want to learn how to make this query cleaner and more flexible. When I
write the UI for this I want to be able to specify a date range in addition
to a fixed 'today'. Pointers on what to read will be very helpful.

TIA,

Rich



Re: pg_restore restores privileges differently from psql

2019-01-15 Thread Sherrylyn Branchaw
Yeah, this is a known issue --- the various GRANTs for a specific object
are stored in a single "TOC entry" in the archive, which pg_restore will
send to the server in a single PQexec call, causing them to be effectively
one transaction.  The easiest way to deal with it is to not send
pg_restore's output directly to the target server, but feed it through
psql, something like

pg_restore  ... | psql [connection parameters]

There's been some discussion of a real fix, but it seems messy.
pg_restore doesn't have a parser that would be adequate to separate
out multiple SQL commands in a TOC entry, and we'd rather not try
to give it one (mainly because of fear of cross-version compatibility
issues).

Okay, thank you. I thought that might be the case: undesirable behavior
where the costs outweigh the benefits of fixing. Given that, would it be
worth making it more obvious in the pg_restore documentation that
pg_restore and its psql output don't always give the same results in the
target database?

Best,
Sherrylyn


RE: repmgr and SSH

2019-01-15 Thread ROS Didier
Hi Ian
If we setup SSH between all the nodes (master, standby and witness) 
there is a big security problem when repmgr monitors several configurations :
For instance this architecture :
- configuration A : primary A, standby A
- configuration B : primary B, standby B
- Configuration C : primary C, standby C
- witness : monitors the three configurations

If I can connect to primary A as the "postgres" user, I can log in on 
the witness, and then I can connect to all the nodes (primary B, primary C, 
standby B, standby C).

Question: is it OK if I set up SSH between only the primary and the 
standby nodes of each configuration?

Thanks in advance

Best Regards
Didier ROS
Expertise SGBD
DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD

-----Original Message-----
From: ian.barw...@2ndquadrant.com [mailto:ian.barw...@2ndquadrant.com] 
Sent: Tuesday, 15 January 2019 12:53
To: ROS Didier ; pgsql-general@lists.postgresql.org
Objet : Re: repmgr and SSH

On 1/14/19 6:29 PM, ROS Didier wrote:
> Hi
> 
> I would like to setup a repmgr configuration with one primary node, one 
> standby node and one witness node.
> 
>     Regarding SSH configuration, the documentation is not clear, 
> I think.
> 
> Do we need to setup SSH between the three nodes or only between primary and 
> standby nodes ?

Between all three, though outbound SSH connections from the witness node are 
not essential.

We'll clarify the documentation.


Regards

Ian Barwick


-- 
  Ian Barwick   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services





This message and any attachments (the 'Message') are intended solely for the 
addressees. The information contained in this Message is confidential. Any use 
of information contained in this Message not in accord with its purpose, any 
dissemination or disclosure, either whole or partial, is prohibited except 
formal approval.

If you are not the addressee, you may not copy, forward, disclose or use any 
part of it. If you have received this message in error, please delete it and 
all copies from your system and notify the sender immediately by return message.

E-mail communication cannot be guaranteed to be timely secure, error or 
virus-free.




RE: Oracke BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Daniel Verite
Mihalidesová Jana wrote:

> nipjd=> select distinct encode(serializable_value, 'escape') from
> alf_node_properties_zaloha where serializable_value is not null; 
>   
>   encode
> 
> decode(E'aced00057e72002c6f72672e616c66726573636f2e736572766963652e636d722e76657273696f6e2e56657273696f6e54797065127872000e6a6176612e6c616e672e456e756d1278707400054d494e4f52',
> 'hex') 

These contents are code, in the form of function calls, instead of data.
Possibly if you ran these function calls they would output the
original data, but that's a pretty weird form of storage.  If the
insert was done by Ora2Pg, you want to report this as a bug.
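A quick way to see this double encoding outside the database (a minimal Python 
sketch; the hex prefix is copied verbatim from the encode(..., 'hex') output 
above):

```python
# The bytea values above are hex-encoded ASCII text, not the original
# binary payload: decoding the first bytes of the dump yields the text
# of a decode(...) function call.
prefix_hex = "6465636f6465284527616365643030303537653732"
decoded = bytes.fromhex(prefix_hex)
print(decoded)  # b"decode(E'aced00057e72"
```

So what was inserted is the literal string `decode(E'aced...', 'hex')`; the 
real payload (starting with `aced0005`, the Java serialization stream magic) 
is still one more hex layer down.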


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: Read consistency when using synchronous_commit=off

2019-01-15 Thread Ravi Krishna

> I'm not sure other clients are able to read from the WAL buffer, therefore
> I'm not sure the data is available to other clients at that specific
> point in time.
No.  On the standby the buffer cache has to be populated with the
updates before other client sessions can read it.  AFAIK other client
sessions do not read WAL buffers.
That is why synchronous_commit=ON option is there.




Re: repmgr and SSH

2019-01-15 Thread Ian Barwick

On 1/14/19 6:29 PM, ROS Didier wrote:

Hi

I would like to setup a repmgr configuration with one primary node, one standby 
node and one witness node.

    Regarding SSH configuration, the documentation is not clear, I 
think.

Do we need to setup SSH between the three nodes or only between primary and 
standby nodes ?


Between all three, though outbound SSH connections from the witness node are 
not essential.

We'll clarify the documentation.


Regards

Ian Barwick


--
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: repmgr and STONITH - SPLIT BRAIN

2019-01-15 Thread Ian Barwick

Hi

On 1/15/19 5:54 PM, ROS Didier wrote:

Hi

We are going to implement repmgr according to the following configuration :

-Primary node on the site A

-Standby node on the site B

-Witness node on the same site A

Do we need to use the repmgr LOCATION parameter in this case to avoid 
split-brain and permit STONITH?


In this case, the "location" parameter should be the same for all nodes.

FYI the "location" parameter is intended for the use case where you have a large
number of nodes in multiple data centres and need to ensure that only a node in 
the
same data centre (location) as the primary can get promoted.

STONITH is something you'll need to implement yourself.


Regards

Ian Barwick


--
 Ian Barwick   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
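A sketch of what the advice above looks like in repmgr.conf (node name and 
conninfo are hypothetical; the point is that "location" carries the same 
value on every node in this setup):

```
# repmgr.conf (same "location" value on primary, standby and witness)
node_id=1
node_name='node1'
conninfo='host=node1 user=repmgr dbname=repmgr'
location='default'   # identical everywhere, so promotion is not restricted by location
```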



Re: Read consistency when using synchronous_commit=off

2019-01-15 Thread Fabio Pardi
Hi,

After thinking about it more, I have to reply to myself, since I'm not 
entirely sure of my previous answer. (I'm digging into the docs, but I don't 
want to mislead you in the meantime.)

If I recall correctly, written data is parked in the WAL buffer before being 
synced to disk (to the transaction log).

I'm not sure other clients are able to read from the WAL buffer, therefore I'm 
not sure the data is available to other clients at that specific point in time.

Maybe somebody else in the ML knows the details by heart?

regards,

fabio pardi



On 15/01/2019 12:15, Fabio Pardi wrote:
> Hi,
>
> all clients will get the latest version of the row (from RAM, that is). The 
> only thing is that in case of server crash, not-yet-written-to-disk commits 
> will be lost.
>
> detailed explanation can be found here:
>
> https://www.postgresql.org/docs/current/wal-async-commit.html
>
> regards,
>
> fabio pardi 
>
>
> On 15/01/2019 11:58, pshadangi wrote:
>> To improve commit performance we are planning to use 
>> "synchronous_commit=off". With this setting, if multiple clients are 
>> reading the same data/row, will they always get the latest updated 
>> data/row? (Clients are using read committed isolation and we are not using 
>> a clustered environment; we have just one instance of Postgres serving 
>> local clients running on the same machine.)
>> For example, if client1 updates a row, is the updated value available to 
>> client2 immediately after the commit, or is there a delay because the 
>> commit is now asynchronous?
>



Re: Read consistency when using synchronous_commit=off

2019-01-15 Thread Fabio Pardi
Hi,

all clients will get the latest version of the row (from RAM, that is). The 
only thing is that in case of server crash, not-yet-written-to-disk commits 
will be lost.

detailed explanation can be found here:

https://www.postgresql.org/docs/current/wal-async-commit.html

regards,

fabio pardi 


On 15/01/2019 11:58, pshadangi wrote:
> To improve commit performance we are planning to use 
> "synchronous_commit=off". With this setting, if multiple clients are reading 
> the same data/row, will they always get the latest updated data/row? 
> (Clients are using read committed isolation and we are not using a clustered 
> environment; we have just one instance of Postgres serving local clients 
> running on the same machine.)
> For example, if client1 updates a row, is the updated value available to 
> client2 immediately after the commit, or is there a delay because the commit 
> is now asynchronous?
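For completeness, synchronous_commit is also settable per transaction rather 
than instance-wide, so the crash-loss risk can be limited to commits that can 
tolerate it (a sketch; the table and values are hypothetical):

```sql
-- postgresql.conf: keep the safe default instance-wide
-- synchronous_commit = on

-- opt out only where losing a (still consistent) commit on crash is acceptable:
BEGIN;
SET LOCAL synchronous_commit TO off;   -- applies to this transaction only
UPDATE session_stats SET hits = hits + 1 WHERE id = 42;  -- hypothetical table
COMMIT;  -- returns before the WAL is flushed; the change is visible to others immediately
```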



Read consistency when using synchronous_commit=off

2019-01-15 Thread pshadangi
To improve commit performance we are planning to use
"synchronous_commit=off". With this setting, if multiple clients are reading
the same data/row, will they always get the latest updated data/row?
(Clients are using read committed isolation and we are not using a clustered
environment; we have just one instance of Postgres serving local clients
running on the same machine.)
For example, if client1 updates a row, is the updated value available to
client2 immediately after the commit, or is there a delay because the commit
is now asynchronous?


Re: Oracke BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Pavel Stehule
On Tue 15. 1. 2019 at 11:37, Pavel Stehule 
wrote:

>
>
> On Tue 15. 1. 2019 at 11:13, Mihalidesová Jana <
> jana.mihalides...@cetin.cz> wrote:
>
>> Hi,
>>
>>
>>
>> These are original data in blob on oracle
>>
>>
>>
>> SYS@CENIPR_1 > select distinct
>> UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(serializable_value, 4000,1)) from
>> NIP_NE.ALF_NODE_PROPERTIES where serializable_value is not null;
>>
>>
>>
>> UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(SERIALIZABLE_VALUE,4000,1))
>>
>>
>> --
>>
>> ▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
>>
>>   xpw 1.0x
>>
>> ▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
>>
>>   xpw 1.2.1212270143x
>>
>> ▒▒ sr java.util.ArrayListxa▒ I sizexpwx
>>
>> ▒▒ ~r ,org.alfresco.service.cmr.version.VersionType  xr
>> java.lang.Enum  xpt MAJOR
>>
>> ▒▒ ~r ,org.alfresco.service.cmr.version.VersionType  xr
>> java.lang.Enum  xpt MINOR
>>
>> ▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
>>
>>  xpw 2.0.1407280322x
>>
>>
>>
>> After import into the postgres using ora2pg the data looks
>>
>>
>>
>>
>>
>>
>>
>> nipjd=> select distinct serializable_value from
>> alf_node_properties_zaloha where serializable_value is not null;
>>
>>
>>  
>>  
>> serializable_value
>>
>>
>> -
>>
>>
>> \x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
>>
>>
>> \x6465636f6465284527616365643030303537333732303031333661363137363631326537353734363936633265343137323732363137393463363937333734373838316432316439396337363139643033303030313439303030343733363937613635373837303030303030303030373730343030
>>
>>
>> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>>
>>
>> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>>
>>
>> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>>
>>
>> \x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
>>
>> (6 rows)
>>
>>
>>
>> nipjd=>
>>
>>
>>
>> when I use convert function to text, the result is
>>
>>
>>
>> nipjd=> select distinct encode(serializable_value, 'hex') from
>> alf_node_properties_zaloha where serializable_value is not null;
>>
>>
>> encode
>>
>>
>> -
>>
>>
>> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>>
>>
>> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>>
>>
>> 6465636f646528452761636564303030353765373230303263366637323637326536313663363637323635373336333666326537333635373237363639363336353265363336643732326537363635373237333639366636653265353636353732373336393666366535343739373036353030303030
>>
>>
>> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>>
>>
>> 

Re: Oracke BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Pavel Stehule
On Tue 15. 1. 2019 at 11:13, Mihalidesová Jana <
jana.mihalides...@cetin.cz> wrote:

> Hi,
>
>
>
> These are original data in blob on oracle
>
>
>
> SYS@CENIPR_1 > select distinct
> UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(serializable_value, 4000,1)) from
> NIP_NE.ALF_NODE_PROPERTIES where serializable_value is not null;
>
>
>
> UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(SERIALIZABLE_VALUE,4000,1))
>
>
> --
>
> ▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
>
>   xpw 1.0x
>
> ▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
>
>   xpw 1.2.1212270143x
>
> ▒▒ sr java.util.ArrayListxa▒ I sizexpwx
>
> ▒▒ ~r ,org.alfresco.service.cmr.version.VersionType  xr
> java.lang.Enum  xpt MAJOR
>
> ▒▒ ~r ,org.alfresco.service.cmr.version.VersionType  xr
> java.lang.Enum  xpt MINOR
>
> ▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
>
>  xpw 2.0.1407280322x
>
>
>
> After import into the postgres using ora2pg the data looks
>
>
>
>
>
>
>
> nipjd=> select distinct serializable_value from alf_node_properties_zaloha
> where serializable_value is not null;
>
>
>   
>   
>   serializable_value
>
>
> -
>
>
> \x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
>
>
> \x6465636f6465284527616365643030303537333732303031333661363137363631326537353734363936633265343137323732363137393463363937333734373838316432316439396337363139643033303030313439303030343733363937613635373837303030303030303030373730343030
>
>
> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>
>
> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>
>
> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>
>
> \x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
>
> (6 rows)
>
>
>
> nipjd=>
>
>
>
> when I use convert function to text, the result is
>
>
>
> nipjd=> select distinct encode(serializable_value, 'hex') from
> alf_node_properties_zaloha where serializable_value is not null;
>
>
> encode
>
>
> -
>
>
> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>
>
> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>
>
> 6465636f646528452761636564303030353765373230303263366637323637326536313663363637323635373336333666326537333635373237363639363336353265363336643732326537363635373237333639366636653265353636353732373336393666366535343739373036353030303030
>
>
> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>
>
> 6465636f646528452761636564303030353765373230303263366637323637326536313663363637323635373336333666326537333635373237363639363336353265363336643732326537363635373237333639366636653265353636353732373336393666366535343739373036353030303030
>
>
> 

RE: Oracke BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Mihalidesová Jana
Hi,

These are original data in blob on oracle

SYS@CENIPR_1 > select distinct 
UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(serializable_value, 4000,1)) from 
NIP_NE.ALF_NODE_PROPERTIES where serializable_value is not null;

UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(SERIALIZABLE_VALUE,4000,1))
--
▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
  xpw 1.0x
▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
  xpw 1.2.1212270143x
▒▒ sr java.util.ArrayListxa▒ I sizexpwx
▒▒ ~r ,org.alfresco.service.cmr.version.VersionType  xr java.lang.Enum  
xpt MAJOR
▒▒ ~r ,org.alfresco.service.cmr.version.VersionType  xr java.lang.Enum  
xpt MINOR
▒▒ sr ,org.alfresco.repo.module.ModuleVersionNumberwG=▒▒▒�
 xpw 2.0.1407280322x

After import into the postgres using ora2pg the data looks



nipjd=> select distinct serializable_value from alf_node_properties_zaloha 
where serializable_value is not null;


   serializable_value
-
\x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
\x6465636f6465284527616365643030303537333732303031333661363137363631326537353734363936633265343137323732363137393463363937333734373838316432316439396337363139643033303030313439303030343733363937613635373837303030303030303030373730343030
\x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
\x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
\x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
\x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
(6 rows)

nipjd=>

when I use convert function to text, the result is

nipjd=> select distinct encode(serializable_value, 'hex') from 
alf_node_properties_zaloha where serializable_value is not null;


encode
-
6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
6465636f646528452761636564303030353765373230303263366637323637326536313663363637323635373336333666326537333635373237363639363336353265363336643732326537363635373237333639366636653265353636353732373336393666366535343739373036353030303030
6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
6465636f646528452761636564303030353765373230303263366637323637326536313663363637323635373336333666326537333635373237363639363336353265363336643732326537363635373237333639366636653265353636353732373336393666366535343739373036353030303030

Re: Oracke BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Achilleas Mantzios

Good morning

On 15/1/19 10:39 a.m., Mihalidesová Jana wrote:


Hi,

We are trying to migrate from Oracle to Postgres using ora2pg, but we hit some 
weird behavior with bytea. Or it's just our ignorance.

The table migration was OK, but we are not able to read the bytea data. What 
did we do wrong?



Assuming your data are JPEG images, try something like:
psql -c "copy (select encode(serializable_value,'hex') from alf_node_properties_zaloha 
where serializable_value is not null LIMIT 1) to stdout" | xxd -p -r > foo.jpg

then use some program (gwenview) to see the image.

If you want to dump all images (assuming again we are dealing with images 
here), then loop over each row and do the same as above.

But it is essential that you know what those values represent. Are they pdf 
docs? images? binaries? octet-streams?


Thank you for your help,




--
Achilleas Mantzios
IT DEV Lead
IT DEPT
Dynacom Tankers Mgmt
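In this case the magic bytes already answer the question: once the stray 
`decode(E'...` wrapper text is stripped, the payload hex in this thread starts 
with `aced0005`, the Java serialization stream magic, so these are serialized 
Java objects (Alfresco properties) rather than images or PDFs. A minimal 
sketch (the `sniff` helper and its `MAGIC` table are illustrative, not from 
any library):

```python
# Sniff what kind of binary a blob is from its leading magic bytes.
MAGIC = {
    b"\xff\xd8\xff": "JPEG image",
    b"%PDF": "PDF document",
    b"\x89PNG": "PNG image",
    b"\xac\xed\x00\x05": "Java serialized object",
}

def sniff(data: bytes) -> str:
    for magic, kind in MAGIC.items():
        if data.startswith(magic):
            return kind
    return "unknown"

payload = bytes.fromhex("aced00057e72002c")  # prefix from the thread's dump
print(sniff(payload))  # Java serialized object
```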



repmgr and STONITH - SPLIT BRAIN

2019-01-15 Thread ROS Didier
Hi
   We are going to implement repmgr according to the following 
configuration:

-Primary node on site A

-Standby node on site B

-Witness node on the same site A

Do we need to use the repmgr LOCATION parameter in this case to avoid 
split-brain and permit STONITH?

   Thanks in advance for your advice

Best Regards


Didier ROS
Expertise SGBD
DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD









Re: Oracke BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Pavel Stehule
Hi

On Tue 15. 1. 2019 at 9:40, Mihalidesová Jana <
jana.mihalides...@cetin.cz> wrote:

> Hi,
>
>
>
> We are trying to migrate from Oracle to Postgres using ora2pg, but we hit
> some weird behavior with bytea. Or it's just our ignorance.
>
> The table migration was OK, but we are not able to read the bytea data.
> What did we do wrong?
>
>
>
> Thank you for your help,
>
> JM
>
>
>
> nipjd=> \d alf_node_properties_zaloha
>
>   Table "nip_dms.alf_node_properties_zaloha"
>
>Column   |  Type   | Collation | Nullable |
> Default
>
>
> +-+---+--+-
>
> node_id| bigint  |   |  |
>
> actual_type_n  | integer |   |  |
>
> persisted_type_n   | integer |   |  |
>
> boolean_value  | boolean |   |  |
>
> long_value | bigint  |   |  |
>
> float_value| real|   |  |
>
> double_value   | double precision|   |  |
>
> string_value   | character varying(1024) |   |  |
>
> serializable_value | bytea   |   |  |
>
> qname_id   | bigint  |   |  |
>
> list_index | integer |   |  |
>
> locale_id  | bigint  |   |  |
>
>
>
>
>
> nipjd=> select distinct (serializable_value) from
> alf_node_properties_zaloha where serializable_value is not null;
>
>
>   
>   
> serializable_value
>
>
> -
>
>
> \x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
>
>
> \x6465636f6465284527616365643030303537333732303031333661363137363631326537353734363936633265343137323732363137393463363937333734373838316432316439396337363139643033303030313439303030343733363937613635373837303030303030303030373730343030
>
>
> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>
>
> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>
>
> \x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
>
>
> \x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
>
> (6 rows)
>
>
>
>
>
> nipjd=> select distinct encode(serializable_value, 'hex') from
> alf_node_properties_zaloha where serializable_value is not null;
>
>
> encode
>
>
> -
>
>
> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>
>
> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>
>
> 6465636f646528452761636564303030353765373230303263366637323637326536313663363637323635373336333666326537333635373237363639363336353265363336643732326537363635373237333639366636653265353636353732373336393666366535343739373036353030303030
>
>
> 6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
>
>
> 

Oracke BLOB to Postgres BYTEA using ora2pg

2019-01-15 Thread Mihalidesová Jana
Hi,

We are trying to migrate from Oracle to Postgres using ora2pg, but we hit some 
weird behavior with bytea. Or it's just our ignorance.
The table migration was OK, but we are not able to read the bytea data. What 
did we do wrong?

Thank you for your help,
JM

nipjd=> \d alf_node_properties_zaloha
  Table "nip_dms.alf_node_properties_zaloha"
   Column   |  Type   | Collation | Nullable | Default
+-+---+--+-
node_id| bigint  |   |  |
actual_type_n  | integer |   |  |
persisted_type_n   | integer |   |  |
boolean_value  | boolean |   |  |
long_value | bigint  |   |  |
float_value| real|   |  |
double_value   | double precision|   |  |
string_value   | character varying(1024) |   |  |
serializable_value | bytea   |   |  |
qname_id   | bigint  |   |  |
list_index | integer |   |  |
locale_id  | bigint  |   |  |


nipjd=> select distinct (serializable_value) from alf_node_properties_zaloha 
where serializable_value is not null;


   serializable_value
-
\x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
\x6465636f6465284527616365643030303537333732303031333661363137363631326537353734363936633265343137323732363137393463363937333734373838316432316439396337363139643033303030313439303030343733363937613635373837303030303030303030373730343030
\x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
\x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
\x6465636f6465284527616365643030303537333732303032633666373236373265363136633636373236353733363336663265373236353730366632653664366636343735366336353265346436663634373536633635353636353732373336393666366534653735366436323635373237373437
\x6465636f6465284527616365643030303537653732303032633666373236373265363136633636373236353733363336663265373336353732373636393633363532653633366437323265373636353732373336393666366532653536363537323733363936663665353437393730363530303030
(6 rows)


nipjd=> select distinct encode(serializable_value, 'hex') from 
alf_node_properties_zaloha where serializable_value is not null;


encode
-
6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733
6465636f646528452761636564303030353765373230303263366637323637326536313663363637323635373336333666326537333635373237363639363336353265363336643732326537363635373237333639366636653265353636353732373336393666366535343739373036353030303030
6465636f646528452761636564303030353733373230303263366637323637326536313663363637323635373336333666326537323635373036663265366436663634373536633635326534643666363437353663363535363635373237333639366636653465373536643632363537323737343733