Re: [GENERAL] Archiving data to another server using copy, psql with pipe

2017-04-06 Thread Moreno Andreo

On 05/04/2017 23:26, pinker wrote:

Hi,
I'm trying to write an archive manager which will first copy data from
tables using a WHERE clause and then, after a successful load into the second
server, delete the copied rows.
The simplest (and probably fastest) solution I came up with is to use copy:
psql -h localhost postgres -c "copy (SELECT * FROM a WHERE time < now()) to
stdout " | psql -h localhost  postgres   -c "copy b from stdin"

Both psql -h commands point to localhost. Is it a typo?


I have made a very simple test to check whether I can be sure about "transactional"
safety. It's not two-phase commit, of course, but it seems to throw an error
if something goes wrong, and it is atomic (I assume). The test was:

CREATE TABLE public.a
(
   id integer,
   k01 numeric (3)
);

CREATE TABLE public.b
(
   id integer,
   k01 numeric (1)
);

insert into a select n,n from generate_series(1,100) n;

and then:
psql -h localhost postgres -c "copy a to stdout "|psql -h localhost
postgres   -c "copy b from stdin"

so psql has thrown an error

... and what is the error?

and no rows were inserted into the b table - so it
seems to be OK.

Is there maybe something I'm missing?
Some specific condition under which something could go wrong and make the process
not atomic? (I don't care about data consistency in this particular case.)

Without knowing the OS and psql version of both servers, how they are
connected, or what error you get, it's hard for me to help you further.
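
For a start, it may help to capture what each side reports. A rough sketch,
assuming bash (the .err file names are just examples):

psql -h localhost postgres -c "copy a to stdout" 2>source.err \
    | psql -h localhost postgres -c "copy b from stdin" 2>target.err
echo "source exit=${PIPESTATUS[0]} target exit=${PIPESTATUS[1]}"

PIPESTATUS is bash-specific; it tells you which side of the pipe failed, and
the two .err files hold the actual error messages.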


Best regards
Moreno.







Re: [GENERAL] Archiving data to another server using copy, psql with pipe

2017-04-06 Thread pinker


On 2017-04-06 13:24:16, Moreno Andreo wrote:
> On 05/04/2017 23:26, pinker wrote:
> > Hi,
> > I'm trying to write an archive manager which will be first copying data from
> > tables with where clause and then, after successful load into second server
> > - delete them.
> > The simplest (and probably fastest) solution I came up with is to use copy:
> > psql -h localhost postgres -c "copy (SELECT * FROM a WHERE time < now()) to
> > stdout " | psql -h localhost  postgres   -c "copy b from stdin"
> both  psql -h are on localhost. Is it a typo?

No, it's not a typo, just a test ;)

> >
> > I have made very simple test to check if I can be sure about "transactional"
> > safety. It's not two phase commit of course but it's seems to throw an error
> > if something went wrong and it's atomic (i assume). The test was:
> >
> > CREATE TABLE public.a
> > (
> >id integer,
> >k01 numeric (3)
> > );
> >
> > CREATE TABLE public.b
> > (
> >id integer,
> >k01 numeric (1)
> > );
> >
> > insert into a select n,n from generate_series(1,100) n;
> >
> > and then:
> > psql -h localhost postgres -c "copy a to stdout "|psql -h localhost
> > postgres   -c "copy b from stdin"
> >
> > so psql has thrown an error
> ... and what is the error?
> > and no rows were inserted to the b table - so it
> > seems to be ok.
> >
> > Is there maybe something I'm missing?
> > Some specific condition when something could go wrong and make the process
> > not atomic? (i don't care about data consistency in this particular case).
> Without knowing OS and psql version of both servers, how they are 
> connected, or what error you get, it's hard for me to help you further.

psql version 9.6, and the OS is Red Hat 7.
Does the OS version really make any difference?

Best regards,
A. Kucharczyk


> 
> Best regards
> Moreno.







Re: [GENERAL] Archiving data to another server using copy, psql with pipe

2017-04-06 Thread Moreno Andreo

On 06/04/2017 13:58, pinker wrote:


On 2017-04-06 13:24:16, Moreno Andreo wrote:

On 05/04/2017 23:26, pinker wrote:

Hi,
I'm trying to write an archive manager which will be first copying data from
tables with where clause and then, after successful load into second server
- delete them.
The simplest (and probably fastest) solution I came up with is to use copy:
psql -h localhost postgres -c "copy (SELECT * FROM a WHERE time < now()) to
stdout " | psql -h localhost  postgres   -c "copy b from stdin"

both  psql -h are on localhost. Is it a typo?

No, It's not a typo, just a test ;)

... so the source and destination databases are the same? (just guessing...)



I have made very simple test to check if I can be sure about "transactional"
safety. It's not two phase commit of course but it's seems to throw an error
if something went wrong and it's atomic (i assume). The test was:

CREATE TABLE public.a
(
id integer,
k01 numeric (3)
);

CREATE TABLE public.b
(
id integer,
k01 numeric (1)
);

insert into a select n,n from generate_series(1,100) n;

and then:
psql -h localhost postgres -c "copy a to stdout "|psql -h localhost
postgres   -c "copy b from stdin"

so psql has thrown an error

... and what is the error?

and no rows were inserted to the b table - so it
seems to be ok.

Is there maybe something I'm missing?
Some specific condition when something could go wrong and make the process
not atomic? (i don't care about data consistency in this particular case).

Without knowing OS and psql version of both servers, how they are
connected, or what error you get, it's hard for me to help you further.

psql in version 9.6 and OS: Red Hat 7
Does Os version really make any difference?

AFAIK the biggest differences are among different OS families, say
Windows and Linux, but there could be some small differences among Linux
distributions. Going any deeper into this topic is beyond my knowledge.
... but you did not report the error message; with it, it is much easier
to help you without guessing too much :-)


Best regards,
A. Kucharczyk



Best regards
Moreno.






Re: [GENERAL] Archiving data to another server using copy, psql with pipe

2017-04-06 Thread pinker


On 2017-04-06 14:28:04, Moreno Andreo wrote:
> On 06/04/2017 13:58, pinker wrote:
> >
> > On 2017-04-06 13:24:16, Moreno Andreo wrote:
> >> On 05/04/2017 23:26, pinker wrote:
> >>> Hi,
> >>> I'm trying to write an archive manager which will be first copying data 
> >>> from
> >>> tables with where clause and then, after successful load into second 
> >>> server
> >>> - delete them.
> >>> The simplest (and probably fastest) solution I came up with is to use 
> >>> copy:
> >>> psql -h localhost postgres -c "copy (SELECT * FROM a WHERE time < now()) 
> >>> to
> >>> stdout " | psql -h localhost  postgres   -c "copy b from stdin"
> >> both  psql -h are on localhost. Is it a typo?
> > No, It's not a typo, just a test ;)

> ... so source and destination database are the same? (just guessing...)
Yes, they are, as you can easily read - it's postgres in both cases.
This is just an easy-to-reproduce example.

> >
> >>> I have made very simple test to check if I can be sure about 
> >>> "transactional"
> >>> safety. It's not two phase commit of course but it's seems to throw an 
> >>> error
> >>> if something went wrong and it's atomic (i assume). The test was:
> >>>
> >>> CREATE TABLE public.a
> >>> (
> >>> id integer,
> >>> k01 numeric (3)
> >>> );
> >>>
> >>> CREATE TABLE public.b
> >>> (
> >>> id integer,
> >>> k01 numeric (1)
> >>> );
> >>>
> >>> insert into a select n,n from generate_series(1,100) n;
> >>>
> >>> and then:
> >>> psql -h localhost postgres -c "copy a to stdout "|psql -h localhost
> >>> postgres   -c "copy b from stdin"
> >>>
> >>> so psql has thrown an error
> >> ... and what is the error?
> >>> and no rows were inserted to the b table - so it
> >>> seems to be ok.
> >>>
> >>> Is there maybe something I'm missing?
> >>> Some specific condition when something could go wrong and make the process
> >>> not atomic? (i don't care about data consistency in this particular case).
> >> Without knowing OS and psql version of both servers, how they are
> >> connected, or what error you get, it's hard for me to help you further.
> > psql in version 9.6 and OS: Red Hat 7
> > Does Os version really make any difference?
> AFAIK the biggest differences are among different OS families, say 
> Windows and Linux, but there could be some small things among linux 
> distributions. More depth about this topic is beyond my knowledge.
> ... but you did not report the error message, with this is much easier 
> to help you without guessing too much :-)

The error message says, as one could expect, that the second table has a smaller
precision...
The question isn't about this particular error - which was induced on purpose
- but about the atomicity of this operation.
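
For completeness, the full flow I have in mind looks roughly like this - an
untested sketch, assuming bash, with the host names and the cutoff variable as
placeholders. The DELETE on the source only runs if both sides of the pipe
exited cleanly:

set -o pipefail
cutoff=$(psql -h source_host postgres -tAc "select now()")   # compute the cutoff once
if psql -h source_host postgres -c "copy (select * from a where time < '$cutoff') to stdout" \
    | psql -h target_host postgres -c "copy b from stdin"
then
    psql -h source_host postgres -c "delete from a where time < '$cutoff'"
fi

Computing the cutoff once also means the DELETE cannot touch rows that arrived
after the COPY started.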

> >
> > Best regards,
> > A. Kucharczyk
> >
> >
> >> Best regards
> >> Moreno.


Re: [GENERAL] Archiving data to another server using copy, psql with pipe

2017-04-06 Thread Moreno Andreo

On 06/04/2017 16:10, pinker wrote:


Error message says, as one could expect, that the second table has got smaller 
precision...
The question isn't about this particular error - which was induced for purpose 
- but about atomicity of this operation

Sorry, I read your message without paying the proper attention and I
totally missed the table structure, plus the answer about transactional
safety.
Honestly, I can't help you on the theory side, I'm sorry. I've never gone so
far with transactions.
On the practical side, instead, I can say that rubyrep (a replication system
written in Ruby) uses roughly this kind of approach in its replication
process (extremely simplified below):

- open transaction
- read from db 1
- write to db 2
- close transaction
When the process encounters a problem (say, connection issues or a
table structure mismatch) an exception is thrown, the transaction is
rolled back, and everything is as it was before it started.
I have had this process running widely for about 4 years on a few hundred
machines and I have never had a transactional problem reported.
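
In psql terms, the closest thing to that pattern I can think of is to let the
receiving side do all of its work inside one transaction and stop at the first
error - an untested sketch (the second -c is only a hypothetical follow-up
statement, and more than one -c needs the 9.6 psql you mentioned):

psql -h localhost postgres -c "copy a to stdout" \
    | psql -h localhost postgres -v ON_ERROR_STOP=1 --single-transaction \
          -c "copy b from stdin" \
          -c "analyze b"

If anything on the target side fails, nothing is committed - the same "all or
nothing" behaviour as above. With a single COPY you already get that for free,
since one statement is one transaction.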


HTH
Moreno.







Re: [GENERAL] Archiving data to another server using copy, psql with pipe

2017-04-06 Thread David G. Johnston
On Thu, Apr 6, 2017 at 4:24 AM, Moreno Andreo 
wrote:

> psql -h localhost postgres -c "copy (SELECT * FROM a WHERE time < now()) to
> stdout " | psql -h localhost  postgres   -c "copy b from stdin"


The first question at hand is whether the source psql command will provoke
an EOF (which is the only thing that will stop the copy-from) even though
the complete contents of the copy-to have not yet been sent.

The second question is whether, even if it does send EOF, the second
command will be allowed to see that EOF and complete its command.  Pipeline
failure mode might impact this (from bash experience).

Unfortunately I do not know the answers to those questions.

The source code might be of some help on the first.

Another option is to replace the first psql process with a custom program
that will send data to stdout and, in the middle of doing so, die with a
non-zero exit code.  That should be a workable simulation of the copy-to
command, and the target can then be evaluated to see what behavior is
exhibited.
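
Something along these lines might serve as that simulation - a rough, untested
sketch in bash, with only nine rows so the values still fit b's numeric(1)
column from the earlier test:

# producer: emit some COPY-format rows, then die with a non-zero exit status
{ for i in $(seq 1 9); do printf '%s\t%s\n' "$i" "$i"; done; exit 1; } \
    | psql -h localhost postgres -c "copy b from stdin"
echo "producer=${PIPESTATUS[0]} loader=${PIPESTATUS[1]}"
# then inspect the target to see whether the partial stream was committed
psql -h localhost postgres -tAc "select count(*) from b"

Whatever count comes back answers the question of what the target does when
the producer dies before finishing.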

David J.