[ADMIN] DBLink
hello, pgAdmins! :D

I am trying to use dblink inside a trigger. However, when the source computer running the trigger cannot reach the target computer over Ethernet, dblink raises an error, the trigger is terminated instead of running as planned, and the inserted record is not written to the table. My idea is that the insert is always done in the table on the local computer; if the local computer has Ethernet access it should also send the data to another computer using dblink, and if it has no Ethernet access it should just write to the local table. Thank you.

INSERT INTO teste(id, valor) VALUES (1,'valor'); -- Ethernet connected: OK
INSERT INTO teste(id, valor) VALUES (2,'valor'); -- Ethernet disconnected: ERROR

Error (messages translated from Portuguese):

ERROR: could not establish connection
DETAIL: could not connect to server: No route to host (0x2751/10065)
        Is the server running on host 192.168.102.23 and accepting TCP/IP connections on port 5432?
CONTEXT: SQL statement SELECT (select count(*) from dblink_exec('host=192.168.102.23 port=5432 dbname=teste password=admin ', '' || $1 || '', false))
        PL/pgSQL function senddatato line 6 at assignment
        SQL statement SELECT (SELECT SendDataTo(SQL))
        PL/pgSQL function teste_after_insert line 8 at PERFORM

** Error **
ERROR: could not establish connection
SQL state: 08001
Context: (same as above)

Code:

CREATE DATABASE teste
  WITH OWNER = postgres
       ENCODING = 'UTF8'
       TABLESPACE = pg_default
       LC_COLLATE = 'Portuguese, Brazil'
       LC_CTYPE = 'Portuguese, Brazil'
       CONNECTION LIMIT = -1;

CREATE TABLE teste
(
  id bigint NOT NULL,
  valor text,
  CONSTRAINT teste_pkey PRIMARY KEY (id)
)
WITH (OIDS=FALSE);
ALTER TABLE teste OWNER TO postgres;

CREATE OR REPLACE FUNCTION senddatato(sql text)
  RETURNS integer AS
$BODY$
DECLARE
  ServerON int;
BEGIN
  ServerON := (select count(*)
               from dblink_exec('host=192.168.102.23 port=5432 dbname=teste password=admin',
                                '' || $1 || '', false));
  return ServerON;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION senddatato(text) OWNER TO postgres;

CREATE OR REPLACE FUNCTION teste_after_insert()
  RETURNS trigger AS
$BODY$
DECLARE
  SQL text;
BEGIN
  -- quote_literal() protects the text value in the generated statement
  SQL := 'insert into teste(id,valor) values (' || NEW.id || ',' || quote_literal(NEW.valor) || ')';
  IF (true) THEN
    PERFORM SendDataTo(SQL);
    RETURN NEW;
  END IF;
  RAISE NOTICE 'gravado local!';
  RETURN NEW;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
ALTER FUNCTION teste_after_insert() OWNER TO postgres;

--
-- Thomaz Luiz Santos
-- Linux User: #359356
-- http://thomaz.santos.googlepages.com/
Re: [ADMIN] DBLink
From: pgsql-admin-ow...@postgresql.org [mailto:pgsql-admin-ow...@postgresql.org] On Behalf Of Thomaz Luiz Santos
Sent: Thursday, April 25, 2013 10:22 AM
To: pgsql-admin@postgresql.org
Subject: [ADMIN] DBLink

[original message quoted in full]

Did you try to intercept this error with an exception handler inside the senddatato(...) function?
See the PG docs: http://www.postgresql.org/docs/9.2/interactive/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Regards,
Igor Neyman

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
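[Editor's note: Igor's suggestion can be sketched as follows. This is a minimal, illustrative rewrite of senddatato(), not the poster's final code; the connection string is copied from the original post, and the convention that 1 means "sent remotely" and 0 means "stored locally only" is an assumption.]

```sql
-- Sketch: trap the dblink connection failure inside senddatato() so the
-- trigger's local INSERT still commits when the remote host is unreachable.
CREATE OR REPLACE FUNCTION senddatato(sql text)
  RETURNS integer AS
$BODY$
BEGIN
  PERFORM dblink_exec('host=192.168.102.23 port=5432 dbname=teste password=admin',
                      sql, false);
  RETURN 1;  -- statement sent to the remote server
EXCEPTION
  WHEN SQLSTATE '08001' THEN  -- sqlclient_unable_to_establish_sqlconnection
    RAISE NOTICE 'remote server unreachable, record stored locally only';
    RETURN 0;
END;
$BODY$
  LANGUAGE plpgsql VOLATILE;
```

Because the exception is caught inside the function, the trigger's PERFORM succeeds either way and the local row is kept.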
Re: [ADMIN] DBLink
Isn't that basically what replication is supposed to do?

RobR, quite possibly revealing his ignorance about replication.
Re: [ADMIN] DBLink
No. Here it's the master pushing changes to the slave, rather than the slave pulling them (as you'd normally expect), hence the inherent fragility of the model. Moreover, replication is about having exactly the same thing in two places, while this could be an audit log he keeps in its historical completeness on a remote box while pruning the local table on a regular basis; so not exactly replication.

Bèrto

On 25 April 2013 16:17, Rob Richardson rdrichard...@rad-con.com wrote:
> Isn't that basically what replication is supposed to do?
> RobR, quite possibly revealing his ignorance about replication.

--
== If Pac-Man had affected us as kids, we'd all be running around in a darkened room munching pills and listening to repetitive music.
Re: [ADMIN] Simultaneous index creates on different schemas cause deadlock?
Paul Hinze paul.t.hi...@gmail.com writes:
> [ multiple CREATE INDEX CONCURRENTLY commands will deadlock with each other ]

Hm. I guess the reason nobody noticed this before now is that generally the idea with CREATE INDEX CONCURRENTLY is to minimize the impact on system load, hence you wouldn't do more than one at a time. Still, it's surely a POLA violation that you *can't* do more than one at a time.

The cause is that each one will wait for all older snapshots to be gone --- and it does that before dropping its own snapshot, so that the other ones will see it as something to be waited out too.

Since we know that C.I.C. executes in its own transaction, and there can't be more than one on the same table due to locking, it seems to me that it'd be safe to drop our own snapshot before waiting for other xacts to end. That is, we could just rearrange the last few steps in DefineIndex(), taking care to save snapshot->xmin before we destroy the snapshot so that we still have that value to pass to GetCurrentVirtualXIDs(). Anybody see a flaw in that solution?

regards, tom lane
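[Editor's note: for readers reproducing the report, the deadlock can be provoked with two concurrent builds; the schema, table, and index names below are hypothetical.]

```sql
-- Session 1:
CREATE INDEX CONCURRENTLY idx_a ON schema_a.big_table (some_col);

-- Session 2, started while session 1 is still running:
CREATE INDEX CONCURRENTLY idx_b ON schema_b.big_table (some_col);

-- Each command waits for all older snapshots to be gone before completing,
-- while still holding its own snapshot, so the two wait on each other
-- until the deadlock detector aborts one of them.
```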
Re: [HACKERS] [ADMIN] Simultaneous index creates on different schemas cause deadlock?
On 2013-04-25 13:17:31 -0400, Tom Lane wrote:
> Paul Hinze paul.t.hi...@gmail.com writes:
>> [ multiple CREATE INDEX CONCURRENTLY commands will deadlock with each other ]
>
> Hm. I guess the reason nobody noticed this before now is that generally the idea with CREATE INDEX CONCURRENTLY is to minimize the impact on system load, hence you wouldn't do more than one at a time. Still, it's surely a POLA violation that you *can't* do more than one at a time.
>
> The cause is that each one will wait for all older snapshots to be gone --- and it does that before dropping its own snapshot, so that the other ones will see it as something to be waited out too.

Makes sense.

> Since we know that C.I.C. executes in its own transaction, and there can't be more than one on the same table due to locking, it seems to me that it'd be safe to drop our own snapshot before waiting for other xacts to end. That is, we could just rearrange the last few steps in DefineIndex(), taking care to save snapshot->xmin before we destroy the snapshot so that we still have that value to pass to GetCurrentVirtualXIDs(). Anybody see a flaw in that solution?

Except that it still will unnecessarily wait for other CICs, just not deadlock, I don't see a problem. We could have a PROC_IN_CIC flag or something so we can ignore other index creations, but I am not sure if it's worth the complication.

Greetings,

Andres Freund

--
Andres Freund  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] [ADMIN] Simultaneous index creates on different schemas cause deadlock?
Andres Freund and...@2ndquadrant.com writes:
> On 2013-04-25 13:17:31 -0400, Tom Lane wrote:
>> [ proposal: drop our own snapshot in DefineIndex() before waiting for other xacts to end ]
>
> Except that it still will unnecessarily wait for other CICs, just not deadlock, I don't see a problem. We could have a PROC_IN_CIC flag or something so we can ignore other index creations, but I am not sure if it's worth the complication.

I'm not sure it's a good idea to ignore other CICs altogether --- they could be executing user-defined index functions that do strange things like consult other tables. Since this seems to me to be a bit outside the intended use-case for CIC anyway, I think it's good enough if they just don't deadlock.

regards, tom lane
Re: [HACKERS] [ADMIN] Simultaneous index creates on different schemas cause deadlock?
Tom Lane t...@sss.pgh.pa.us schrieb:
> Andres Freund and...@2ndquadrant.com writes:
>> We could have a PROC_IN_CIC flag or something so we can ignore other index creations, but I am not sure if it's worth the complication.
>
> I'm not sure it's a good idea to ignore other CICs altogether --- they could be executing user-defined index functions that do strange things like consult other tables. Since this seems to me to be a bit outside the intended use-case for CIC anyway, I think it's good enough if they just don't deadlock.

Fine with me, especially as nobody seems to have complained so far other than the OP, so it doesn't seem to be too common.

I don't have access to the code ATM, and I wonder whether DROP CONCURRENTLY has a similar problem? Depends a bit on how the waiting is done...

Andres

---
Please excuse brevity and formatting - I am writing this on my mobile phone.
Re: [HACKERS] [ADMIN] Simultaneous index creates on different schemas cause deadlock?
anara...@anarazel.de and...@anarazel.de writes:
> I don't have access to the code ATM, and I wonder whether DROP CONCURRENTLY has a similar problem? Depends a bit on how the waiting is done...

It's not a problem --- that code doesn't depend on waiting for snapshots to expire, it just checks for other sessions holding locks on the target table. (I also did some experimental testing to verify this.)

regards, tom lane
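[Editor's note: a small illustration of the lock-based wait Tom describes; the index names are hypothetical.]

```sql
-- These can run in two sessions at once without deadlocking:
-- DROP INDEX CONCURRENTLY waits only for sessions holding locks on the
-- target table, not for old snapshots to expire.
DROP INDEX CONCURRENTLY schema_a.idx_a;
DROP INDEX CONCURRENTLY schema_b.idx_b;
```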
Re: [ADMIN] archive falling behind
How can the archive process fall behind? Postgres will never reuse WAL files that have not yet been archived.

Regarding your question about slowing down WAL generation: that is not possible, unless you slow down the application that is writing into the database.

Regards,
Strahinja

On Mon, Apr 22, 2013 at 6:03 PM, German Becker german.bec...@gmail.com wrote:
> Hi, I am working with version 9.1.9. The database is in archiving mode. The archives are then sent to a hot-standby replication server. I have seen that under heavy load the archive process will fall behind the WAL generation, and thus some WAL segments won't get archived, so the replication stops. To recover from this I need to do a new cold backup and send it to the replication server. Is there a way to limit the WAL generation (i.e. slowing down inserts and deletes) so as to prevent the archive from falling behind? Thanks!!
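[Editor's note: for context, archiving is driven by archive_command in postgresql.conf; a failed command is retried and the segment is never recycled until it succeeds. A minimal sketch, with an archive path that is an example, not a recommendation.]

```
# postgresql.conf (sketch; /mnt/archive is a hypothetical destination)
wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'
# If archive_command exits nonzero, the WAL segment stays in pg_xlog and is
# retried; PostgreSQL will not recycle it until archiving has succeeded.
```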
[ADMIN] Size of pgstat.stat
Hello, guys. I was getting a lot of "pgstat wait timeout" messages from Postgres, and I saw people and the Postgres docs suggesting to put this file on a RAM disk to reduce I/O. But how large can this file be? I'm worried that it gets so large that it will eat all the memory. Does anybody know?
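[Editor's note: the setting usually meant here is stats_temp_directory; a sketch assuming a tmpfs mount, where the path is an example only.]

```
# postgresql.conf (sketch; /var/run/postgresql/pgstat is a hypothetical tmpfs mount)
stats_temp_directory = '/var/run/postgresql/pgstat'
# The stats file's size grows roughly with the number of databases, tables,
# and indexes being tracked; it is normally small relative to RAM.
```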