Re: [GENERAL] archive_cleanup_command recovery.conf Standby server error
1. I tried the below command in recovery.conf, with the exe pointing to the location where it is present. Result:

2012-02-29 19:56:49 MYT WARNING: archive_cleanup_command ""D:\Program Files (x86)\PostgreSQL\9.1\data\pg_archivecleanup.exe" -d \\NOBE-HGG-SRV-04\DB_Stream_Share\ %r": return code 1
'D:\Program' is not recognized as an internal or external command, operable program or batch file.

Clearly the path is not recognized by the exe.

2. I tried running it manually from the DOS command line:

D:\Program Files (x86)\PostgreSQL\9.1\data\pg_archivecleanup.exe -d "\\NOBE-HGG-SRV-04\DB_Stream_Share\" %r
Error: Pg_archivecleanup: must specify restartfilename

3. pg_archivecleanup.exe -d "\\NOBE-HGG-SRV-04\DB_Stream_Share\" %r
error: Invalid filename input

4. Tried: pg_archivecleanup.exe -d "d:\DB_Stream_Share\" %r
Error: archive location doesn't exist (tried with \\ as well, same result)

5. Tried: pg_archivecleanup.exe -d "d:\DB_Stream_Share\10002D" %r
Error: must specify restartfilename

Does archive_cleanup_command run its command as a DOS command? The tool is not able to recognize the path.

Adrian, the share and the WAL-file folder clearly grant permissions to Everyone. Please let me know if you have found anything on this.

--
View this message in context: http://postgresql.1045698.n5.nabble.com/archive-cleanup-command-recovery-conf-Standby-server-error-tp5520966p5530337.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Quoted strings on CLI
Hi,

I just migrated from mysql and am running into an issue I found no solution for while researching. Using the web interface I can insert values that contain single quotes, but using the CLI I found no way to select or insert single quotes:

PetWork=# select * from persons where firstname='\'Peter\'';
Invalid command \'';. Try \? for help.
PetWork-# select * from persons where firstname='\\'Peter\\'';
Invalid command \. Try \? for help.
PetWork-# select * from persons where firstname='\\\'Peter\\\'';
Invalid command \. Try \? for help.
PetWork-# select * from persons where firstname=''Peter'';
Invalid command \. Try \? for help.
PetWork-# select * from persons where firstname=''Peter'';

On different forums the suggested solution is to prepend backslashes, but no matter how many backslashes I prepend, it always fails. What does one have to do to achieve this?

Thanks for any help
Re: [GENERAL] Quoted strings on CLI
Hello,

You cannot use \ escaping when you have standard_conforming_strings enabled. There are four ways.

First, the usual way of writing quotes: double the single quote.

postgres=# select 'Peter''s toy';
  ?column?
─────────────
 Peter's toy
(1 row)

Or you can use your own string delimiters:

postgres=# select $$Peter's toy$$;
  ?column?
─────────────
 Peter's toy
(1 row)

Or you can disable standard_conforming_strings:

postgres=# set standard_conforming_strings to off;
SET
postgres=# select 'Peter\'s toy';
WARNING:  nonstandard use of \' in a string literal
LINE 1: select 'Peter\'s toy';
               ^
HINT:  Use '' to write quotes in strings, or use the escape string syntax (E'...').
  ?column?
─────────────
 Peter's toy
(1 row)

Or you can use PostgreSQL escape strings:

postgres=# select E'Peter\'s toy';
  ?column?
─────────────
 Peter's toy
(1 row)

Regards

Pavel Stehule

2012/3/2 Alexander Reichstadt :
> Hi,
>
> I just migrated from mysql and am running into an issue I found no solution
> for when researching. Using the web interface I can insert values that
> contain single-quotes. But using the CLI I found no way to select or insert
> single quotes:
> [...]
> On different forums the solution suggested is to prepend backslashes. But it
> doesn't matter how many backslashes I prepend, it always fails. What does one
> have to do to achieve that?
>
> Thanks for any help
Re: [GENERAL] Quoted strings on CLI
Thanks a lot. I used the '' approach; it works for my case.

Regards

On 02.03.2012 at 10:14, Pavel Stehule wrote:
> Hello
>
> you cannot use a \ escaping when you have standard_conforming_strings enabled.
>
> There are four ways: [...]
Re: [GENERAL] Yearly date comparison?
On Tuesday, February 28, 2012 at 20:14 -0800, Nick wrote:
> What is the best way to find an event with a yearly occurrence?
>
> start_date DATE,
> end_date DATE,
> recurring TEXT
> );

Hi Nick,

Your problem seems similar to that of managing subscriptions. If you can do anything about it, you might make things simpler with a table structure like this:

CREATE TABLE events (
    last_date DATE,
    duration integer,
    recurring integer
);

where last_date is the date when the event was last held, and duration and recurring are a number of units (chosen as appropriate: hours, days, weeks, months, years...).

> INSERT INTO events (start_date, end_date, recurring) VALUES
> ('2010-02-28','2010-03-01','yearly');

Using days as the unit, this becomes:

INSERT INTO events (last_date, duration, recurring) VALUES ('2010-02-28', 3, 365);

You then run daily:

SELECT * FROM events WHERE (last_date + recurring) <= NOW();

For all records that show up:
- start the event
- update the db with: UPDATE events SET last_date=NOW() WHERE ...

You may want to add a field initial_date that stays untouched, if you want to record when the event was first held.

> Since I may not know how many years back the start/end_date is, is
> there a way to just ignore the year or make it the current year,
> without killing performance?

With the structure you have now, you'll have to refactor your code (or add a function that does it for you) every year.

--
Vincent Veyron
http://marica.fr/
Claims and litigation management software for legal departments
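The pieces suggested above, assembled into a single self-contained sketch (same table and column names as in the message; the data values are from the example):

```sql
-- Suggested schema: track when the event last ran and how often
-- it recurs, with both expressed in days.
CREATE TABLE events (
    last_date  DATE,     -- date the event was last held
    duration   integer,  -- length of the event, in days
    recurring  integer   -- repeat interval, in days
);

-- A three-day event recurring yearly, last held 2010-02-28:
INSERT INTO events (last_date, duration, recurring)
VALUES ('2010-02-28', 3, 365);

-- Run daily: find events that are due again:
SELECT * FROM events WHERE (last_date + recurring) <= NOW();

-- After starting each due event, move its last_date forward:
UPDATE events SET last_date = NOW()
WHERE (last_date + recurring) <= NOW();
```

Note that `date + integer` adds days, which is what makes the integer-based recurring column work here.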
[GENERAL] Problems with non use of indexes
Hi,

I can't figure out why the query planner doesn't use the proper index; can anyone help me?

This query properly uses indexes:

mydb=# EXPLAIN SELECT U0."object_id" FROM "activity_follow" U0 WHERE (U0."content_type_id" = 3 AND U0."user_id" = 1);

                                     QUERY PLAN
 Index Scan using activity_follow_user_id on activity_follow u0  (cost=0.00..4875.15 rows=4898 width=4)
   Index Cond: (user_id = 1)
   Filter: (content_type_id = 3)
(3 rows)

But the same query inside an "IN" clause doesn't. The query planner uses a Seq Scan on *U0."user_id" = 1*:

mydb=# EXPLAIN SELECT "activity_action"."id", "activity_action"."actor_id", "activity_action"."verb", "activity_action"."action_content_type_id", "activity_action"."action_object_id", "activity_action"."target_content_type_id", "activity_action"."target_object_id", "activity_action"."public", "activity_action"."created", "auth_user"."id", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."password", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."is_superuser", "auth_user"."last_login", "auth_user"."date_joined" FROM "activity_action" INNER JOIN "auth_user" ON ("activity_action"."actor_id" = "auth_user"."id") WHERE "activity_action"."actor_id" IN (SELECT U0."object_id" FROM "activity_follow" U0 WHERE (U0."content_type_id" = 3 AND *U0."user_id" = 1*)) ORDER BY "activity_action"."created" DESC LIMIT 100;

                                     QUERY PLAN
--
 Limit  (cost=9206.97..9207.22 rows=100 width=155)
   ->  Sort  (cost=9206.97..9320.34 rows=45347 width=155)
         Sort Key: activity_action.created
         ->  Hash Join  (cost=5447.39..7473.84 rows=45347 width=155)
               Hash Cond: (activity_action.actor_id = auth_user.id)
               ->  Nested Loop  (cost=4887.39..5020.58 rows=45347 width=55)
                     ->  HashAggregate  (cost=4887.39..4887.41 rows=2 width=4)
                           ->  Index Scan using activity_follow_user_id on activity_follow u0  (cost=0.00..4875.15 rows=4898 width=4)
                                 Index Cond: (user_id = 1)
                                 Filter: (content_type_id = 3)
                     ->  Index Scan using activity_action_actor_id on activity_action  (cost=0.00..65.20 rows=111 width=51)
                           Index Cond: (activity_action.actor_id = u0.object_id)
               ->  Hash  (cost=278.00..278.00 rows=1 width=104)
                     ->  Seq Scan on auth_user  (cost=0.00..278.00 rows=1 width=104)

If I do SET enable_seqscan TO 'off'; it uses the index but is also slow.

---

*TABLES:*

CREATE TABLE "auth_user" (
    "id" serial NOT NULL PRIMARY KEY,
    "username" varchar(30) NOT NULL UNIQUE,
    "first_name" varchar(30) NOT NULL,
    "last_name" varchar(30) NOT NULL,
    "email" varchar(75) NOT NULL,
    "password" varchar(128) NOT NULL,
    "is_staff" boolean NOT NULL,
    "is_active" boolean NOT NULL,
    "is_superuser" boolean NOT NULL,
    "last_login" timestamp with time zone NOT NULL,
    "date_joined" timestamp with time zone NOT NULL
);

CREATE TABLE "activity_follow" (
    "id" serial NOT NULL PRIMARY KEY,
    "user_id" integer NOT NULL REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED,
    "content_type_id" integer NOT NULL REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED,
    "object_id" integer CHECK ("object_id" >= 0),
    UNIQUE ("user_id", "content_type_id", "object_id")
);

CREATE TABLE "activity_action" (
    "id" serial NOT NULL PRIMARY KEY,
    "actor_id" integer NOT NULL REFERENCES "auth_user" ("id") DEFERRABLE INITIALLY DEFERRED,
    "verb" varchar(50) NOT NULL,
    "action_content_type_id" integer REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED,
    "action_object_id" integer CHECK ("action_object_id" >= 0),
    "target_content_type_id" integer REFERENCES "django_content_type" ("id") DEFERRABLE INITIALLY DEFERRED,
    "target_object_id" integer CHECK ("target_object_id" >= 0),
    "public" boolean NOT NULL,
    "created" timestamp with time zone NOT NULL
);

CREATE INDEX "activity_follow_user_id" ON "activity_follow" ("user_id");
CREATE INDEX "activity_follow_content_type_id" ON "activity_follow" ("content_type_id");
CREATE INDEX "activity_follow_object_id" ON "activity_follow" ("object_id");
CREATE INDEX "activity_action_actor_id" ON "activity_action" ("actor_id");
CREATE INDEX "activity_action_action_content_type_id" ON "activity_action" ("action_content_type_id");
CREATE INDEX "activity_action_action_object_id" ON "activity_action" ("action_object_id");
CREATE INDEX "activity_action_target_content_type_id" ON "activity_action" ("target_content_type_id");
CREATE INDEX "activity_action_target_object_id" ON "activity_action" ("target_objec
Re: [GENERAL] Problems with non use of indexes
Hi,

On 2 March 2012, 13:12, Tyler Durden wrote:
> Hi,
> I can't figure out why query planner doesn't use the proper index, anyone
> can help me?
> [...]
> But the same query on a "IN" statement doesn't. The query planner uses Seq
> Scan on *U0."user_id" = 1*
> [...]
> If I do a SET enable_seqscan TO 'off'; It uses the index but is also slow.

Errr, what? The only sequential scan in that explain output is on auth_user, not activity_follow, which is the table referenced in the original query. It actually uses an index scan to read activity_follow:

   ->  Index Scan using activity_follow_user_id on activity_follow u0  (cost=0.00..4875.15 rows=4898 width=4)
         Index Cond: (user_id = 1)
         Filter: (content_type_id = 3)

kind regards
Tomas
Re: [GENERAL] Problems with non use of indexes
Hi,

On Fri, Mar 2, 2012 at 12:23 PM, Tomas Vondra wrote:
> Errr, what? The only sequential scan in that explain output is on
> auth_user, not activity_follow which is the table referenced in the
> original query. It actually uses index scan to read activity_follow
> [...]

Yes, but if I remove *U0."user_id" = 1* it will use the index:

EXPLAIN SELECT "activity_action"."id", "activity_action"."actor_id", "activity_action"."verb", "activity_action"."action_content_type_id", "activity_action"."action_object_id", "activity_action"."target_content_type_id", "activity_action"."target_object_id", "activity_action"."public", "activity_action"."created", "auth_user"."id", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."password", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."is_superuser", "auth_user"."last_login", "auth_user"."date_joined" FROM "activity_action" INNER JOIN "auth_user" ON ("activity_action"."actor_id" = "auth_user"."id") WHERE "activity_action"."actor_id" IN (SELECT U0."object_id" FROM "activity_follow" U0 WHERE (U0."content_type_id" = 3)) ORDER BY "activity_action"."created" DESC LIMIT 100;

                                     QUERY PLAN
 Limit  (cost=0.00..4502.18 rows=100 width=155)
   ->  Nested Loop  (cost=0.00..2041605.23 rows=45347 width=155)
         ->  Nested Loop Semi Join  (cost=0.00..1907985.65 rows=45347 width=55)
               ->  Index Scan u
Re: [GENERAL] Problems with non use of indexes
On 2 March 2012, 13:45, Tyler Durden wrote:
> Yes, but if I remove *U0."user_id" = 1* it will use the index:

Which PostgreSQL version is that? Post the EXPLAIN ANALYZE output for all three queries, not just EXPLAIN. And use explain.depesz.com if possible; it's much more readable.

kind regards
Tomas
Re: [GENERAL] Problems with non use of indexes
On Fri, Mar 2, 2012 at 12:55 PM, Tomas Vondra wrote:
> Which PostgreSQL version is that? Post EXPLAIN ANALYZE output for all
> three queries, not just EXPLAIN. And use explain.depesz.com if possible,
> it's much more readable.

I'm using PostgreSQL 8.4.2, and you can find the EXPLAIN ANALYSE VERBOSE at http://explain.depesz.com/s/hk2

Thanks!
Re: [GENERAL] Problems with non use of indexes
On 2 March 2012, 14:34, Tyler Durden wrote:
> I'm using PostgreSQL 8.4.2 and you can find the EXPLAIN ANALYSE VERBOSE in
> http://explain.depesz.com/s/hk2

For all three queries, please.

Tomas
Re: [GENERAL] Problems with non use of indexes
http://explain.depesz.com/s/f92O

EXPLAIN ANALYSE VERBOSE SELECT "activity_action"."id", "activity_action"."actor_id", "activity_action"."verb", "activity_action"."action_content_type_id", "activity_action"."action_object_id", "activity_action"."target_content_type_id", "activity_action"."target_object_id", "activity_action"."public", "activity_action"."created", "auth_user"."id", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."password", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."is_superuser", "auth_user"."last_login", "auth_user"."date_joined" FROM "activity_action" INNER JOIN "auth_user" ON ("activity_action"."actor_id" = "auth_user"."id") WHERE "activity_action"."actor_id" IN (SELECT U0."object_id" FROM "activity_follow" U0 WHERE (U0."content_type_id" = 3 AND U0."user_id" = 1)) ORDER BY "activity_action"."created" DESC LIMIT 100;

---

http://explain.depesz.com/s/o3w

EXPLAIN ANALYSE VERBOSE SELECT "activity_action"."id", "activity_action"."actor_id", "activity_action"."verb", "activity_action"."action_content_type_id", "activity_action"."action_object_id", "activity_action"."target_content_type_id", "activity_action"."target_object_id", "activity_action"."public", "activity_action"."created", "auth_user"."id", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."password", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."is_superuser", "auth_user"."last_login", "auth_user"."date_joined" FROM "activity_action" INNER JOIN "auth_user" ON ("activity_action"."actor_id" = "auth_user"."id") WHERE "activity_action"."actor_id" IN (SELECT U0."object_id" FROM "activity_follow" U0 WHERE (U0."content_type_id" = 3)) ORDER BY "activity_action"."created" DESC LIMIT 100;

---

http://explain.depesz.com/s/ccJ

EXPLAIN ANALYSE VERBOSE SELECT U0."object_id" FROM "activity_follow" U0 WHERE (U0."content_type_id" = 3 AND U0."user_id" = 1);

On Fri, Mar 2, 2012 at 2:23 PM, Tomas Vondra wrote:
> For all three queries, please.
>
> Tomas
Re: [GENERAL] archive_cleanup_command recovery.conf Standby server error
On Friday, March 02, 2012 12:28:48 am chinnaobi wrote:
> 1. I tried with below command in recovery.conf with the exe pointing to the
> location where it is present.
> Result:
>
> 2012-02-29 19:56:49 MYT WARNING: archive_cleanup_command ""D:\Program
> Files (x86)\PostgreSQL\9.1\data\pg_archivecleanup.exe" -d
> \\NOBE-HGG-SRV-04\DB_Stream_Share\ %r": return code 1
> 'D:\Program' is not recognized as an internal or external command,
> operable program or batch file.

It would seem it is having a problem with the space in the path name, and is trying to run D:\Program as a program and failing. Try quoting that part of the path, i.e. "Program Files (x86)".

> It is clearly the path is not recognized by the exe.
>
> 2. I tried manually running in the DOS command
>
> D:\Program Files (x86)\PostgreSQL\9.1\data\pg_archivecleanup.exe -d
> "\\NOBE-HGG-SRV-04\DB_Stream_Share\" %r
>
> Error: Pg_archivecleanup: must specify restartfilename

Again, please read the information in the link below:

http://www.postgresql.org/docs/9.1/interactive/pgarchivecleanup.html

Running from the command line is different from running from recovery.conf. The error is telling you what to do, and the link will give you the information on how to do it.

> 3. pg_archivecleanup.exe -d "\\NOBE-HGG-SRV-04\DB_Stream_Share\" %r
> error: Invalid filename input
>
> 4. Tried pg_archivecleanup.exe -d "d:\DB_Stream_Share\" %r
> Error: archive location doesn't exist, tried with \\ as well, same
>
> 5. Tried pg_archivecleanup.exe -d "d:\DB_Stream_Share\10002D" %r
> Error: must specify restartfilename
>
> archive_cleanup_command: the command it runs is the DOS command ??
> The tool is not able to recognize the path

Just the same problem repeated. As stated before, %r is not recognized when the program is run from the command line.

> Adrian,
> clearly the share or the folder WAL files have permissions to everyone.
> Please let me know if you got something on this ..

--
Adrian Klaver
adrian.kla...@gmail.com
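A sketch of what the quoting Adrian suggests might look like in recovery.conf, using the paths from the original post (untested; exact backslash handling by the config-file parser may need checking):

```
# The program path contains spaces, so the whole path is wrapped in
# double quotes inside the single-quoted setting value; %r is replaced
# by the server with the last restart file name.
archive_cleanup_command = '"D:\Program Files (x86)\PostgreSQL\9.1\data\pg_archivecleanup.exe" -d \\NOBE-HGG-SRV-04\DB_Stream_Share %r'
```

Note that %r only has meaning when the command is run by the server from recovery.conf, not from a manual DOS prompt.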
Re: [GENERAL] what Linux to run
Ivan Voras wrote:
> On 28/02/2012 17:57, mgo...@isstrucksoftware.net wrote:
>> Our application runs on Windows; however, we have been told that we can
>> pick any OS to run our server on. I'm thinking Linux because, from
>> everything I've read, it appears to be better on performance, and there
>> are other features, like tablespaces, which we could take advantage of.
>> On our hosted solution the application runs in a Software as a Service
>> model, and being able to keep each company's tables in their own
>> tablespace would be nice. Additionally, it appears that there are a lot
>> more ways to tune the engine if we need to than under Windows, plus the
>> capability to hold more connections.
>>
>> If we move to Linux, what is the preferred Linux for running Postgres on?
>> This machine would be dedicated to the database only. I'd like a
>> recommendation for both a GUI hosted version and a non-GUI version. I
>> haven't used Linux in the past, but did spend several years in a mixed
>> Unix and IBM mainframe environment at the console level.
>
> Hi,
>
> PostgreSQL administration would not benefit much from a GUI, as it is
> basically centered around editing and tuning configuration files (either
> its own or the OS's).
>
> For Linux, if you want stability and decent performance, you should
> probably choose either CentOS or, if you want commercial support, Red Hat
> Enterprise Linux (which is basically the same thing, only commercial).
>
> Personally, I'd recommend FreeBSD (it's not a Linux, it's more Unix-like)
> but I'm probably biased ;)

+1 from me.

http://leaf.dragonflybsd.org/mailarchive/kernel/2011-11/msg00017.html

Nice numbers; notably, BSD excels not in raw numbers but in stability, surviving all the tests.

--
Sphinx of black quartz judge my vow.
[GENERAL] Opposite function of hstore each function
The each() function returns key/value pairs from an hstore value. Is there an opposite function that returns an hstore value from key/value rows?

I know hstore(text[], text[]) can construct it. Is there any other choice?

I have a table with ID & Description columns and want to return an hstore value (ID1=>Desc1, ID2=>Desc2, ...) that matches a query condition. Off the top of my head, this is the only query I can think of:

SELECT hstore ((SELECT array(SELECT CAST(ID as TEXT) FROM T1 WHERE ...)),
               (SELECT array(SELECT Description FROM T1 WHERE ...)))

But I don't want to run the same WHERE condition twice. If I could construct a single array (K1, V1, K2, V2, ...) from the key/value rows, then I could use the hstore(text[]) function. Of course, I could create a stored function that loops through the rows and constructs it, but that's a last resort and I don't like that approach.

Any idea?

Thank you,
Choon Park
Re: [GENERAL] Opposite function of hstore each function
ChoonSoo Park wrote:
> Each function returns key/value pairs from hstore type.
> Is there any opposite function that returns hstore type from key/value rows?
>
> I know hstore (text[], text[]) can construct it. Is there any other choice?
>
> I have a table with ID & Description columns and want to return hstore type
> (ID1=>Desc1, ID2=>Desc2, ...) that matches a query condition.
> From the top of my head, this is the only query I can think of
>
> SELECT hstore ((SELECT array(SELECT CAST(ID as TEXT) FROM T1 WHERE ...)),
> (SELECT array(SELECT Description FROM T1 WHERE ...)))
>
> But I don't want to run the same where condition twice.

You could try:

select hstore(array_agg(CAST(ID as TEXT)), array_agg(Description)) from T1 where ...

I don't have hstore loaded, but that seems like it should work.

HTH

Bosco.
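Bosco's suggestion, fleshed out into a self-contained sketch (the table and data here are hypothetical, and it assumes the hstore extension is available):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TEMP TABLE t1 (id integer, description text);
INSERT INTO t1 VALUES (1, 'first'), (2, 'second'), (3, 'third');

-- Aggregate keys and values in a single pass over the table,
-- so the WHERE condition is evaluated only once:
SELECT hstore(array_agg(id::text), array_agg(description))
FROM t1
WHERE id < 3;
-- "1"=>"first", "2"=>"second"   (pair order may vary)
```

Since both array_agg() calls run in the same aggregate query, the rows are scanned once, which is exactly what the original poster wanted to avoid with the two-subquery form.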
Re: [GENERAL] Opposite function of hstore each function
It works! Thank you,
Choon Park

On Fri, Mar 2, 2012 at 12:19 PM, Bosco Rama wrote:
> You could try:
>   select hstore(array_agg(CAST(ID as TEXT)), array_agg(Description)) from T1 where ...
>
> I don't have hstore loaded but that seems like it should work.
[GENERAL] concatenating text and bytea
Hi,

I got recently bitten by this:

# select 'ascii'::text || E'\\xdeadbeef'::bytea,
         pg_typeof('ascii'::text || '\xdeadbeef'::bytea),
         'ascii'::bytea || E'\\xdeadbeef'::bytea;
    ?column?     | pg_typeof |       ?column?
-----------------+-----------+----------------------
 ascii\xdeadbeef | text      | \x6173636969deadbeef

I would have expected a result cast as bytea or an error message telling me about incompatible types, but the result from the first column is a nasty gotcha. Is it the intended behaviour?

--
Vincent de Phily
Re: [GENERAL] concatenating text and bytea
Vincent de Phily writes:
> I got recently bitten by this:
> # select 'ascii'::text || E'\\xdeadbeef'::bytea,
>          pg_typeof('ascii'::text || '\xdeadbeef'::bytea),
>          'ascii'::bytea || E'\\xdeadbeef'::bytea;
>     ?column?     | pg_typeof |       ?column?
> -----------------+-----------+----------------------
>  ascii\xdeadbeef | text      | \x6173636969deadbeef
> I would have expected a result cast as bytea or an error message telling me
> about incompatible types, but the result from the first column is a nasty
> gotcha. Is it the intended behaviour?

Yes, it is. Personally I'd prefer this sort of thing to be rejected, but allowing text concatenation to still accept non-text inputs was one of the compromises that was made when we got rid of the former behavior of implicitly casting to text in darn near *any* context:

http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=31edbadf4af45dd4eecebcb732702ec6d7ae1819
http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=2d4db3675fa7a2f4831b755bc98242421901042f

Considering the volume and the persistence of the complaints about that change, I'm not about to propose tightening it up any more.

regards, tom lane
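If a bytea result is what's wanted, making both operands bytea explicitly sidesteps the gotcha. A sketch; the 'UTF8' argument to convert_to is an assumption, substitute your database encoding:

```sql
-- Both operands bytea: the result is bytea, as in the third column of
-- Vincent's example.
SELECT 'ascii'::bytea || '\xdeadbeef'::bytea;   -- \x6173636969deadbeef

-- For text values that may contain non-ASCII characters, convert through
-- a named encoding instead of relying on a literal cast:
SELECT convert_to('ascii', 'UTF8') || '\xdeadbeef'::bytea;
```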
[GENERAL] || versus concat( ), diff behavior
Can anybody please point me to where this "difference of behavior" is explained/documented?

Thanks,
-dvs-

-- version = 9.1.3
do $$
declare
   v_str char(10);
begin
   v_str := 'abc' ;
   raise info '%', concat(v_str, v_str) ;
   raise info '%', v_str||v_str ;
end
$$;

INFO:  abc       abc
INFO:  abcabc
Re: [GENERAL] || versus concat( ), diff behavior
On Mar 2, 2012, at 12:58 PM, wrote:
> Can anybody please point me to where this "difference of behavior" is
> explained/documented?
> Thanks,
> -dvs-
>
> -- version = 9.1.3
> do $$
> declare
>    v_str char(10);
> begin
>    v_str := 'abc' ;
>    raise info '%', concat(v_str, v_str) ;
>    raise info '%', v_str||v_str ;
> end
> $$;
>
> INFO:  abc       abc
> INFO:  abcabc

I'm not sure why || works differently than concat(), but char() is whitespace-padded. You told the db you wanted "10 characters", so that's what it gave you. Perhaps you wanted varchar(10)? For that matter, do you even want that restriction of 10 characters in the first place? Perhaps the type text is what you're really after?
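To illustrate the suggestion: with varchar (or text) there is no padding, so the two spellings agree. A minimal sketch:

```sql
-- varchar stores 'abc' as-is, so concat() and || give the same result:
SELECT concat('abc'::varchar(10), 'abc'::varchar(10));  -- abcabc
SELECT 'abc'::varchar(10) || 'abc'::varchar(10);        -- abcabc
```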
Re: [GENERAL] || versus concat( ), diff behavior
On 02/03/12 20:58, david.sahag...@emc.com wrote:
> Can anybody please point me to where this "difference of behavior" is
> explained/documented?
> Thanks,
> -dvs-
>
> -- version = 9.1.3
> do $$
> declare
>    v_str char(10);
> begin
>    v_str := 'abc' ;
>    raise info '%', concat(v_str, v_str) ;
>    raise info '%', v_str||v_str ;
> end
> $$;
>
> INFO:  abc       abc
> INFO:  abcabc

Concat is a function which concatenates whatever you give it blindly. Hence it has the behavior that includes the blanks. The || operator reflects the more general PostgreSQL principle that trailing blanks are insignificant for char fields. You see the same behavior when comparing char variables.

This can be found in the manual:
http://www.postgresql.org/docs/current/static/datatype-character.html

"Values of type character are physically padded with spaces to the specified width n, and are stored and displayed that way. However, the padding spaces are treated as semantically insignificant. Trailing spaces are disregarded when comparing two values of type character, and they will be removed when converting a character value to one of the other string types. Note that trailing spaces are semantically significant in character varying and text values, and when using pattern matching, e.g. LIKE, regular expressions."

Hope this makes it just a little clearer.

Regards
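The padding rule quoted from the manual can be seen in plain SQL as well, outside plpgsql. A sketch; the brackets only serve to make trailing spaces visible:

```sql
-- || converts each char value to text first, which discards the padding:
SELECT '[' || ('abc'::char(10) || 'abc'::char(10)) || ']';   -- [abcabc]

-- concat() renders each char value with its space padding intact,
-- so the padding shows up between (and after) the two values:
SELECT '[' || concat('abc'::char(10), 'abc'::char(10)) || ']';
```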
[GENERAL] Restoring selected records from backup file
hi,

1. How can we restore selected records from a plain text backup (backup.sql)?
2. Is it possible using psql? If yes, how?

Please help.

Thanks & Regards
Piyush
Re: [GENERAL] || versus concat( ), diff behavior
-----Original Message-----
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Philip Couling
Sent: Friday, March 02, 2012 4:47 PM
To: david.sahag...@emc.com
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] || versus concat( ), diff behavior

On 02/03/12 20:58, david.sahag...@emc.com wrote:
> Can anybody please point me to where this "difference of behavior" is explained/documented?
> Thanks,
> -dvs-
>
> -- version = 9.1.3
> do $$
> declare
>    v_str char(10);
> begin
>    v_str := 'abc' ;
>    raise info '%', concat(v_str, v_str) ;
>    raise info '%', v_str||v_str ;
> end
> $$;
>
> INFO:  abc       abc
> INFO:  abcabc

Concat is a function which concatenates whatever you give it blindly. Hence it has the behavior that includes the blanks. The || operator reflects the more general PostgreSQL principle that trailing blanks are insignificant for char fields. You see the same behavior when comparing char variables.

This can be found in the manual:
http://www.postgresql.org/docs/current/static/datatype-character.html

"Values of type character are physically padded with spaces to the specified width n, and are stored and displayed that way. However, the padding spaces are treated as semantically insignificant. Trailing spaces are disregarded when comparing two values of type character, and they will be removed when converting a character value to one of the other string types. Note that trailing spaces are semantically significant in character varying and text values, and when using pattern matching, e.g. LIKE, regular expressions."

Hope this makes it just a little clearer.

Regards

-----

Philip,

The question to ask is whether the behavior of the "concat" function is intentionally different than that of the "||" operator. Aside from the ability to take more than two arguments, I would suggest they should behave identically. Given the newness of the "concat" function I would guess the difference is unintentional.

Regardless, either the documentation or the function code needs to be modified: either to synchronize the behavior or to explicitly point out the different treatment of "character" types. I'd argue that the "||" behavior is incorrect, but at this point it doesn't matter. Prior to the introduction of the "concat" function, how would one perform a concatenation with a "character" type and preserve the trailing whitespace? If the new function intends to fix that behavior, documenting it would be helpful.

DVS,

From a curiosity standpoint, I presume that the "concat" output leaves whitespace surrounding the second half as well? In the future, when debugging string content, I would suggest you bracket your output so you know when there is trailing whitespace, i.e.:

'[' || string_to_view || ']'  =>  '[string with trailing whitespace ]'

Dave
[GENERAL] role missing in dump
Hi,

when upgrading from 8.4 to 9.1 (Ubuntu 10.10 to 11.10) I did face some problems (probably due to a non-standard data directory), but I don't care about that. Finally I decided to use the dump to recreate this very small database.

When starting the application that uses the db I realized that a role (the only one I have created) was missing. Is this some bug in 8.4 that roles are not included in the dump, or did I miss something?

Regards,
Thomas
Re: [GENERAL] role missing in dump
On Friday, March 02, 2012 2:39:22 pm Thomas Prause wrote:
> Hi,
> when upgrading from 8.4 to 9.1 (Ubuntu 10.10 to 11.10) I did face some
> problems (probably due to a non standard data directory). But I don't
> care about that. Finally I decided to use the dump to recreate this very
> small database.
>
> When starting the application that uses the db I realized that a role
> (the only one I have created) was missing. Is this some bug in 8.4 that
> roles are not included in the dump or did I miss something?

Roles are global to a cluster. If you do a pg_dump you will get only the information/data for a particular database. If you do pg_dumpall you will get the information/data for all the databases in the cluster as well as the cluster-wide information. A compromise solution is to pg_dump a particular database and then do pg_dumpall -g, which dumps only the global information.

See here for more info:
http://www.postgresql.org/docs/9.0/interactive/app-pg-dumpall.html
http://www.postgresql.org/docs/9.0/interactive/app-pgdump.html

> Regards,
> Thomas

--
Adrian Klaver
adrian.kla...@gmail.com
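As a command-line sketch of Adrian's compromise; the database name and file names are placeholders:

```shell
# Dump one database plus the cluster-wide globals (roles, tablespaces).
pg_dump mydb > mydb.sql        # schema and data for one database; no roles
pg_dumpall -g > globals.sql    # global objects only

# On the target cluster, restore the globals first, then the database.
psql -f globals.sql postgres
createdb mydb
psql -d mydb -f mydb.sql
```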
Re: [GENERAL] Restoring selected records from backup file
On Friday, March 02, 2012 2:20:14 am Piyush Lenka wrote:
> hi,
>
> 1. How can we restore selected records from a plain text backup
> (backup.sql).

The only way I know is to use some text processing tool to walk through and pull out the records.

> 2. Is it possible using psql. If yes how

I assume you mean the psql client program. There is no way to get it to read a backup file and pull data, as far as I know. You can use the \copy command to pull out selected records from a table and dump them to a disk file. See here for more info:
http://www.postgresql.org/docs/9.0/interactive/app-psql.html

You could also do a CREATE TABLE AS to create a table with the records you are interested in and then do pg_dump -t to get that information.
http://www.postgresql.org/docs/9.0/interactive/sql-createtableas.html

> Please Help
>
> Thanks & Regards
> Piyush

--
Adrian Klaver
adrian.kla...@gmail.com
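A sketch of the "text processing tool" approach: in a plain-format dump, each table's rows sit in a COPY ... FROM stdin block that ends with a line containing only "\.". The table and file names below are hypothetical, and the demo file merely stands in for a real backup.sql:

```shell
# Demo input standing in for a real plain-text dump (backup.sql).
printf '%s\n' \
  'COPY public.persons (id, firstname) FROM stdin;' \
  '1	Peter' \
  '\.' \
  'COPY public.pets (id, name) FROM stdin;' \
  '1	Rex' \
  '\.' > backup.sql

# Extract just the persons block: from its COPY header line down to the
# terminating "\." line.
sed -n '/^COPY public\.persons /,/^\\\.$/p' backup.sql > persons.sql
cat persons.sql

# persons.sql can then be replayed with:  psql -d targetdb -f persons.sql
# (after copying any SET lines from the top of the dump, if needed).
```

The same idea generalizes with awk or csplit to split every table's data into its own file.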