[GENERAL] How to update multiple rows
Dear All,

I want to update multiple rows in a single query. I am trying the query below and getting this error:

ERROR: more than one row returned by a subquery used as an expression
SQL state: 21000

Here is my query:

update parcelsdata set gid=(select random() * 10),
  kasarano=(select kasarano from parcelsdata),
  murabano=(select murabano from parcelsdata),
  the_geom = (select (st_dump(st_polygonize(st_union(st_boundary(the_geom), the_line.geom as the_geom
    from (select gid,kasarano,murabano,st_linefromtext('LINESTRING(' ||(st_xmin(the_geom)-1)::text||' '||(st_ymax(the_geom)-the_length)||', '||st_xmax(the_geom)+1||' '||st_ymax(the_geom)-the_length||')',24047) as the_line, the_geom
          from (select 100 as the_length, * from parcelsdata) a) b
    where gid = 113
    GROUP BY gid,kasarano,murabano)
where kasarano='1' and murabano='119'

Please let me know. I am waiting for your great response.

Thanks and Regards,
Venkat
Re: [GENERAL] How to update multiple rows
On 26 Oct 2010, at 9:07, venkat wrote:
> Dear All,
> I want to update multiple rows in a single query. I am trying the query below and getting this error:
> ERROR: more than one row returned by a subquery used as an expression
> SQL state: 21000

You're probably looking for UPDATE table ... FROM other_table.

That said, I think your subqueries are rather under-constrained - you don't correlate the records in your subqueries to the records you're updating at all! The result will be that all your rows will be based on the last row selected by each subquery. I can't imagine why you'd want that, so I assume you don't.

> Here is my query:
> update parcelsdata set gid=(select random() * 10),
>   kasarano=(select kasarano from parcelsdata),
>   murabano=(select murabano from parcelsdata),
>   the_geom = (select (st_dump(st_polygonize(st_union(st_boundary(the_geom), the_line.geom as the_geom
>     from (select gid,kasarano,murabano,st_linefromtext('LINESTRING(' ||(st_xmin(the_geom)-1)::text||' '||(st_ymax(the_geom)-the_length)||', '||st_xmax(the_geom)+1||' '||st_ymax(the_geom)-the_length||')',24047) as the_line, the_geom
>           from (select 100 as the_length, * from parcelsdata) a) b
>     where gid = 113
>     GROUP BY gid,kasarano,murabano)
> where kasarano='1' and murabano='119'

You would rewrite that to, for example:

update parcelsdata
set gid = random() * 10,
    kasarano = pd2.kasarano,
    murabano = pd2.murabano
from parcelsdata pd2
where id = pd2.id -- substitute for whatever your primary key/condition is
and kasarano = '1'
and murabano = '119'

Yeah, I left out the geometry thing as I'm too lazy to figure out where your brackets start and end ;)

Alban Hertroys
--
Screwing up is an excellent way to attach something to the ceiling.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
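[Editor's note: the UPDATE ... FROM pattern suggested above generalizes. A minimal sketch with hypothetical table and column names, for illustration only - the join must hit a unique key so each target row matches exactly one source row:]

```sql
-- Hypothetical tables, not from the thread.
CREATE TABLE target (id int PRIMARY KEY, val text);
CREATE TABLE source (id int PRIMARY KEY, val text);

-- Each target row is updated from the single source row with the same id;
-- joining on a unique key guarantees at most one match per target row,
-- which avoids the "more than one row returned by a subquery" error.
UPDATE target
SET    val = source.val
FROM   source
WHERE  target.id = source.id;
```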
[GENERAL] what can depend on index
Hi,

today I noticed that in the documentation there is DROP INDEX CASCADE. I've got one question: what is that for? What can depend on an index?

regards
Szymon
Re: [GENERAL] How to update multiple rows
Dear Alban,

Thanks for your great response. I am not able to get the query you gave to compile, and I am not able to understand it. Please alter my code.

If I run the following on its own, it gives me 2 records:

(select kasarano from parcelsdata), murabano=(select murabano from parcelsdata),
the_geom = (select (st_dump(st_polygonize(st_union(st_boundary(the_geom), the_line.geom as the_geom
  from (select gid,kasarano,murabano,st_linefromtext('LINESTRING(' ||(st_xmin(the_geom)-1)::text||' '||(st_ymax(the_geom)-the_length)||', '||st_xmax(the_geom)+1||' '||st_ymax(the_geom)-the_length||')',24047) as the_line, the_geom
        from (select 100 as the_length, * from parcelsdata) a) b
  where gid = 113
  GROUP BY gid,kasarano,murabano)

But when I try to update the table using the code below:

update parcelsdata set gid=(select random() * 10),
  kasarano=(select kasarano from parcelsdata),
  murabano=(select murabano from parcelsdata),
  the_geom = (select (st_dump(st_polygonize(st_union(st_boundary(the_geom), the_line.geom as the_geom
    from (select gid,kasarano,murabano,st_linefromtext('LINESTRING(' ||(st_xmin(the_geom)-1)::text||' '||(st_ymax(the_geom)-the_length)||', '||st_xmax(the_geom)+1||' '||st_ymax(the_geom)-the_length||')',24047) as the_line, the_geom
          from (select 100 as the_length, * from parcelsdata) a) b
    where gid = 113
    GROUP BY gid,kasarano,murabano)

I am getting this error:

ERROR: more than one row returned by a subquery used as an expression
SQL state: 21000

Please let me know where I am going wrong, and guide me on how to update those multiple records. I am waiting for your great response.
Thanks and Regards,
Venkat

On Tue, Oct 26, 2010 at 1:32 PM, Alban Hertroys dal...@solfertje.student.utwente.nl wrote:
> On 26 Oct 2010, at 9:07, venkat wrote:
>> I want to update multiple rows in a single query. I am getting the error: ERROR: more than one row returned by a subquery used as an expression SQL state: 21000
>
> You're probably looking for UPDATE table ... FROM other_table.
>
> That said, I think your subqueries are rather under-constrained - you don't correlate the records in your subqueries to the records you're updating at all! The result will be that all your rows will be based on the last row selected by each subquery. I can't imagine why you'd want that, so I assume you don't.
>
>> Here is my Query. [original query snipped]
>
> You would rewrite that to, for example:
>
> update parcelsdata
> set gid = random() * 10,
>     kasarano = pd2.kasarano,
>     murabano = pd2.murabano
> from parcelsdata pd2
> where id = pd2.id -- substitute for whatever your primary key/condition is
> and kasarano = '1' and murabano = '119'
>
> Yeah, I left out the geometry thing as I'm too lazy to figure out where your brackets start and end ;)
>
> Alban Hertroys
Re: [GENERAL] How to update multiple rows
If you do not understand what you are doing, it might be a good idea to find someone in your organization who understands SQL. You were given the suggestion

UPDATE table SET column list FROM from list ...

which is documented here: http://www.postgresql.org/docs/9.0/interactive/sql-update.html

In the end, your query should probably end up as an update with an inner join in the FROM section. Do you know what the unique indexes are on the tables in question? If you do not use unique indexes in the join, or some other way to ensure that there is a one-to-one correspondence, you will be altering great big patches of data. If you are performing an update query, which modifies the data, you should be very careful to get it right. It is risky to have someone else write your query for you, because it is your organization that is intimately familiar with your data. Do you have any SQL experts in your company? These two kinds of knowledge are essential:

1) Knowledge of SQL
2) Knowledge of your company's data

Without both of those qualifications, the query produced will not create correct results.

From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of venkat
Sent: Tuesday, October 26, 2010 2:15 AM
To: Alban Hertroys
Cc: pgsql-general@postgresql.org; pgsql-...@postgresql.org
Subject: Re: [GENERAL] How to update multiple rows

> Dear Alban,
> Thanks for your great response. I am not able to compile the query which you have given. I am not able to understand it. Please alter my code.
> [original queries and error snipped]
> Please let me know where I am going wrong, and guide me on how to update those multiple records. I am waiting for your great response.
> Thanks and Regards, Venkat
>
> On Tue, Oct 26, 2010 at 1:32 PM, Alban Hertroys dal...@solfertje.student.utwente.nl wrote:
>> On 26 Oct 2010, at 9:07, venkat wrote: I want to update multiple rows in a single query. I am getting the error: ERROR: more than one row returned by a subquery used as an expression SQL state: 21000
>> You're probably looking for UPDATE table ... FROM other_table.
>> That said, I think your subqueries are rather under-constrained - you don't correlate the records in your subqueries to the records you're updating at all! The result will be that all your rows will be based on the last row selected by each subquery. I can't imagine why you'd want that, so I assume you don't.
>> [original query snipped]
>> You would rewrite that to, for example:
>> update parcelsdata
>> set gid = random() * 10, kasarano = pd2.kasarano, murabano = pd2.murabano
>> from parcelsdata pd2
>> where id = pd2.id -- substitute for whatever your primary key/condition is
>> and kasarano = '1' and murabano = '119'
[GENERAL] Slow connection once the PC is network connected
Hello,

I am deploying a database to a customer PC, which will only be connected to by a local application. However, I have realized that once the PC is network connected, the connection takes an extremely long time (around 10 seconds).

May I know why this happens? Is there any workaround to overcome this?

I connect to the database using C++ code:

PQconnectdb("dbname = ProductionFactory user = postgres password = password");

Thanks.

Thanks and Regards
Yan Cheng CHEOK
Re: [GENERAL] Slow connection once the PC is network connected
Hello,

check your DNS service first. Second, try to use an IP address instead of a host name.

Regards
Pavel Stehule

2010/10/26 Yan Cheng CHEOK ycch...@yahoo.com:
> I am deploying a database to a customer PC, which will only be connected to by a local application. However, once the PC is network connected, the connection takes an extremely long time (around 10 seconds). May I know why this happens?
Re: [GENERAL] Slow connection once the PC is network connected
On 26 Oct 2010, at 11:39, Yan Cheng CHEOK wrote:
> I am deploying a database to a customer PC, which will only be connected to by a local application. However, once the PC is network connected, the connection takes an extremely long time (around 10 seconds).

That sounds suspiciously like the default timeout for DNS lookups. What platform is this? Windows? Is that machine able to look up localhost?

> May I know why this happens? Is there any workaround to overcome this? I connect to the database using C++ code: PQconnectdb("dbname = ProductionFactory user = postgres password = password")

Does it help any to add hostaddr = 127.0.0.1 to that string?

Alban Hertroys
--
Screwing up is an excellent way to attach something to the ceiling.
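[Editor's note: as a concrete sketch of Alban's suggestion, the conninfo string would look like the following; host and hostaddr are standard libpq parameters, and supplying hostaddr lets libpq skip name resolution entirely:]

```
dbname = ProductionFactory user = postgres password = password hostaddr = 127.0.0.1
```

Note that adding hostaddr forces a TCP connection to that address instead of whatever default (Unix socket or localhost lookup) was used before, which is exactly what sidesteps the DNS timeout.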
Re: [GENERAL] rule which unpredictable modify a sequence number
On Sat, Oct 23, 2010 at 9:53 AM, Armand Turpel geocontex...@gmail.com wrote:
> I have a problem with a rule which unpredictably modifies a sequence number. When I add a new table entry, the new id_keyword hasn't the value expected, e.g. 1000, 1001, 1002, ... If I remove the rule it works. Here are the table, rule and sequence definitions:
>
> CREATE TABLE geocontexter.gc_keyword(
>     id_keyword bigint NOT NULL DEFAULT nextval('geocontexter.seq_gc_keyword'::regclass),
>     id_parent bigint NOT NULL DEFAULT 0,
>     id_attribute_group bigint,
>     id_status smallint NOT NULL DEFAULT 100,
>     update_time timestamp without time zone NOT NULL,
>     preferred_order smallint,
>     lang character varying(30) NOT NULL DEFAULT 'en'::character varying,
>     title character varying(126) NOT NULL,
>     description character varying(1) NOT NULL DEFAULT ''::character varying,
>     attribute_value text,
>     CONSTRAINT gc_keyword_id_keyword PRIMARY KEY (id_keyword)
> ) WITH (OIDS=FALSE);
>
> CREATE OR REPLACE RULE keyword_insert_or_replace AS
>     ON INSERT TO geocontexter.gc_keyword
>     WHERE (EXISTS (SELECT 1 FROM geocontexter.gc_keyword
>                    WHERE gc_keyword.id_keyword = new.id_keyword))
>     DO INSTEAD UPDATE geocontexter.gc_keyword
>     SET lang = new.lang, description = new.description, title = new.title,
>         update_time = new.update_time, id_parent = new.id_parent,
>         preferred_order = new.preferred_order,
>         id_attribute_group = new.id_attribute_group,
>         attribute_value = new.attribute_value
>     WHERE gc_keyword.id_keyword = new.id_keyword
>       AND gc_keyword.update_time < new.update_time;
>
> CREATE SEQUENCE geocontexter.seq_gc_keyword
>     INCREMENT 1 MINVALUE -1999 MAXVALUE 1999 START 1000 CACHE 1 CYCLE;

Default values unfortunately don't play well with rules. Rules in fact are a giant headache, and your strategy of an upsert in a rule is probably going to need a rethink if you want to expose SQL-like behaviors to the calling query. I would advise moving your upsert into a function call, or doing it in the application.
merlin
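[Editor's note: a sketch of the function-based upsert Merlin recommends, abbreviated to a few of the thread's columns; the function name and parameter names are made up here, and this simple UPDATE-then-INSERT form is not safe under concurrent inserts without retry logic:]

```sql
CREATE OR REPLACE FUNCTION geocontexter.upsert_keyword(
    _id_keyword  bigint,
    _lang        varchar,
    _title       varchar,
    _update_time timestamp
) RETURNS void AS $$
BEGIN
    -- Try the update first, honoring the "only if newer" condition
    -- that the original rule expressed.
    UPDATE geocontexter.gc_keyword
    SET    lang = _lang, title = _title, update_time = _update_time
    WHERE  id_keyword = _id_keyword
      AND  update_time < _update_time;

    -- If no row with that key exists, insert one; the sequence default
    -- on id_keyword now fires exactly once, as expected.
    IF NOT FOUND AND NOT EXISTS (
        SELECT 1 FROM geocontexter.gc_keyword WHERE id_keyword = _id_keyword
    ) THEN
        INSERT INTO geocontexter.gc_keyword (id_keyword, lang, title, update_time)
        VALUES (_id_keyword, _lang, _title, _update_time);
    END IF;
END;
$$ LANGUAGE plpgsql;
```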
Re: [GENERAL] what can depend on index
On Tue, Oct 26, 2010 at 10:13:04AM +0200, Szymon Guz wrote:
> today I noticed that in the documentation there is DROP INDEX CASCADE. I've got one question: what is that for? What can depend on an index?

A foreign key can, if the index is unique.

Cheers,
David.
--
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter  Skype: davidfetter
XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Re: [GENERAL] what can depend on index
Szymon Guz mabew...@gmail.com writes:
> today I noticed that in the documentation there is DROP INDEX CASCADE. I've got one question: what is that for? What can depend on an index?

Unique constraints and foreign key constraints, for starters. But even if there weren't anything, we'd still provide the CASCADE syntax for future-proofing purposes.

regards, tom lane
Re: [GENERAL] what can depend on index
On 26 October 2010 16:21, David Fetter da...@fetter.org wrote:
> On Tue, Oct 26, 2010 at 10:13:04AM +0200, Szymon Guz wrote:
>> today I noticed that in the documentation there is DROP INDEX CASCADE. I've got one question: what is that for? What can depend on index?
> A foreign key can, if the index is unique.

Thanks for the answer.

regards
Szymon
Re: [GENERAL] what can depend on index
On 26 October 2010 16:29, Tom Lane t...@sss.pgh.pa.us wrote:
> Szymon Guz mabew...@gmail.com writes:
>> today I noticed that in the documentation there is DROP INDEX CASCADE. I've got one question: what is that for? What can depend on index?
> Unique constraints and foreign key constraints, for starters. But even if there weren't anything, we'd still provide the CASCADE syntax for future-proofing purposes.

Thanks for the answer. I didn't know about the foreign key; I had only thought about the future use.

regards
Szymon
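[Editor's note: a sketch of the foreign-key case mentioned above, with made-up table names. PostgreSQL accepts a plain unique index, not only a declared unique constraint, as the target of a REFERENCES clause, so the foreign key ends up depending directly on the index:]

```sql
CREATE TABLE parent (id int NOT NULL);
CREATE UNIQUE INDEX parent_id_idx ON parent (id);

-- The FK below depends on parent_id_idx, the only thing
-- guaranteeing uniqueness of parent.id.
CREATE TABLE child (
    parent_id int REFERENCES parent (id)
);

DROP INDEX parent_id_idx;          -- refused: the FK depends on the index
DROP INDEX parent_id_idx CASCADE;  -- drops the FK constraint along with it
```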
[GENERAL] Why Select Count(*) from table - took over 20 minutes?
I am the only user on this system right now, and one table select count(*) took over 20 minutes. wikitags exists and has 58,988,656 records. The structure (in Pascal) is:

quer.SQL.Add('create table '+DBTags+' (');
quer.SQL.Add('   pagename    '+SQL_TITLE+'(100) not null,');
quer.SQL.Add('   tagword     '+SQL_TITLE+'(15) not null,');
quer.SQL.Add('   soundex2    '+SQL_TITLE+'(4) not null,');
quer.SQL.Add('   metaphone   '+SQL_TITLE+'(15) not null,');
quer.SQL.Add('   metaphone2  '+SQL_TITLE+'(22) not null,');
quer.SQL.Add('   carverphone '+SQL_TITLE+'(22) not null,');
quer.SQL.Add('   instances   '+SQL_INT32+' not null,');
if SQL_NAME_PRIMARY_KEYS then quer.SQL.Add('   constraint '+DBTags+'_PK');
quer.SQL.Add('   primary key(pagename, tagword, instances)');
quer.SQL.Add(')');

where SQL_TITLE = 'varchar' and SQL_INT32 = 'int'.

I have hung indexes off each column, to resolve my previous performance issue from 3+ weeks ago. However, COUNT() is still dog slow - this table is write once, read many... *never* updated, nor deleted from. Any suggestions?
Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?
In response to Ozz Nixon ozzni...@gmail.com:
> I am the only user on this system right now, and one table select count(*) took over 20 minutes. wikitags exists and has 58,988,656 records. [table definition snipped]
> I have hung indexes off each column, to resolve my previous performance issue from 3+ weeks ago. However, COUNT() is still dog slow - this table is write once, read many... *never* updated, nor deleted from. Any suggestions?

Generate the count one time and store it somewhere for quick retrieval.

In an MVCC database, count(*) is designed to be accurate, which requires a scan of the entire table (which appears to take about 20 mins on your hardware). MVCC just isn't optimized for a table that never changes. However, it's easy to cache that value; since it never changes, the cache never needs to be updated.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
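[Editor's note: a sketch of the cached-count approach Bill describes, using the thread's table name but otherwise hypothetical names. Since the table is write-once with no updates or deletes, a single AFTER INSERT trigger is enough to keep the counter exact:]

```sql
-- One-row side table holding the cached count.
CREATE TABLE wikitags_count (n bigint NOT NULL);
INSERT INTO wikitags_count VALUES ((SELECT count(*) FROM wikitags));

-- Bump the counter on every insert into the big table.
CREATE OR REPLACE FUNCTION bump_wikitags_count() RETURNS trigger AS $$
BEGIN
    UPDATE wikitags_count SET n = n + 1;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER wikitags_count_trig
    AFTER INSERT ON wikitags
    FOR EACH ROW EXECUTE PROCEDURE bump_wikitags_count();

-- Reading the count is now a single-row lookup instead of a 20-minute scan:
SELECT n FROM wikitags_count;
```

Under heavy concurrent insert load this single counter row becomes a lock hotspot; that is the trade-off for an exact cached count.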
Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?
On 10/26/10 10:18 AM, Ozz Nixon wrote:
> I am the only user on this system right now, and one table select count(*) took over 20 minutes. wikitags exists and has 58,988,656 records. [table definition snipped]

count(*) has to read the whole table to get an accurate count. The reason for this is that different clients can see different versions of the table: for instance, if client A is already in a transaction, and client B then does an INSERT, the two clients will see different values for the count.
Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?
On October 26, 2010 10:18:41 am Ozz Nixon wrote:
> I have hung indexes off each column, to resolve my previous performance issue from 3+ weeks ago. However, COUNT() is still dog slow - this table is write once, read many... *never* updated, nor deleted from. Any suggestions?

If you need to do count(*) on 60 million row tables, you will probably need faster hardware.
Re: [GENERAL] Full text search in Chinese
At 11:42 AM 10/25/2010, Mike Chamberlain wrote:
> Has anyone implemented FTS in Chinese on PG? I guess I need a Chinese ispell dictionary and parser, neither of which I can find after a lot of googling. I have a bounty on this question on Stackoverflow if anyone wants to claim it: http://stackoverflow.com/questions/3994504/how-do-i-implement-full-text-search-in-chinese-on-postgresql
> Thanks, Mike

What sort of usage would you be expecting? e.g. search terms.

Written Chinese is a character-based language, not an alphabet-style language. To complicate things a bit, there are two main character sets - Traditional Chinese and Simplified Chinese.

Chinese characters would be the equivalent of an English keyword, but lots of words/meanings require two or more characters. You might be able to handle this similarly to the way English phrases are handled (indexed and searched for); after all, "bee's knees" usually means a different thing from an actual bee's knees.

Japanese, on the other hand, has _three_ main scripts: two alphabet-style, and one Chinese-character style...

Regards,
Link.
[GENERAL] What is better method to backup postgresql DB.
New to PostgreSQL. I need to back up a PostgreSQL DB - which way is better? From training, I learned that we can back up the whole PGDATA and other directories to achieve the backup goal; originally I had planned to schedule jobs that use pg_dump to back up all DBs.

So which way is better for backup, or is there another way to back up a PostgreSQL DB?

Thanks
Re: [GENERAL] What is better method to backup postgresql DB.
In response to Steeles stee...@gmail.com:
> new to postgresql. need to backup postgresql DB, which way is better to backup DB. from training, I learned that we can backup the whole PGdata and other directories to achieve backup goal, originally I was planned to schedule jobs to use pgdump to backup all DBs. so which way is better for backup or there is other way to backup PGDB.

Use pg_dump or pg_dumpall. If you're at the level that you have to ask this question, then you'll have nothing but trouble getting reliable backups by backing up directories.

Also, since you're new to PostgreSQL, I _HIGHLY_ recommend that you don't assume that you're getting backups until you can demonstrate that you can restore them.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
Re: [GENERAL] Advice needed on application/database authentication/authorization/auditing model
On 10/23/10 11:01 AM, Craig Ringer wrote:
> Yep. As for not explicitly mentioning lower roles when granting a higher role (ie admin isn't also a user) - role inheritance.

I knew about role inheritance, I just didn't know about the pg_has_role() function for determining if a user has a role. That's helpful, but I really don't want to be hitting the database with a pg_has_role() call every time I want to check whether a user should have access to a certain page or function in my application.

Normally, when the user logs in, I'd cache their user info and any roles they have, either directly or indirectly. But how can I do this if I'm not directly making administrators members of the other groups whose rights they inherit? In other words, is there a convenience function or view I can use to get a list of all roles the user has access to, both directly and indirectly?
Re: [GENERAL] Why Select Count(*) from table - took over 20 minutes?
On Tue, Oct 26, 2010 at 2:18 PM, Ozz Nixon ozzni...@gmail.com wrote:
> I am the only user on this system right now, and one table select count(*) took over 20 minutes. wikitags exists and has 58,988,656 records. [table definition snipped]
> However, COUNT() is still dog slow - this table is write once, read many... *never* updated, nor deleted from. Any suggestions?

You could try something like what's suggested in this blog post: http://jakub.fedyczak.net/post/26

I haven't actually tried it, but I think it should work OK.

cheers,
diego
Re: [GENERAL] Advice needed on application/database authentication/authorization/auditing model
Hey Tony,

2010/10/27 Tony Cebzanov tony...@andrew.cmu.edu
> I knew about role inheritance, I just didn't know about the pg_has_role() function for determining if a user has a role. That's helpful, but I really don't want to be hitting the database with a pg_has_role() call every time I want to check if a user should have access to a certain page or function in my application.

Why not? Performance? It's just one function call.

> Normally, when the user logs in, I'd cache their user info, and any roles they have, either directly or indirectly. But how can I do this if I'm not directly making administrators members of the other groups they inherit the rights of? In other words, is there a convenience function or view I can use to get a list of all roles the user has access to, both directly or indirectly?

Please, see http://www.postgresql.org/docs/9.0/static/infoschema-applicable-roles.html

--
// Dmitriy.
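[Editor's note: the view Dmitriy links to can be queried once at login time and cached application-side; it lists the roles applicable to the current user, including memberships reachable through inheritance:]

```sql
-- Roles the logged-in user holds, directly or via inherited membership.
SELECT role_name
FROM   information_schema.applicable_roles;
```

The result can be cached alongside the user's session data, which answers the "check access without a round trip per page" concern raised above.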
[GENERAL] DB become enormous with continuos insert and update
Does anyone know how to solve the problem of a DB that grew from 700 MB to 16 GB in 10 days and keeps growing? Vacuum doesn't solve the problem, and VACUUM FULL is too invasive. The only way we have found to reduce the DB is dump and restore, but we cannot stop the service... I know that in a similar DB with a lot of updates the problem was solved, but I don't know how... Does anyone know this problem?

Thanks :)
Re: [GENERAL] DB become enormous with continuos insert and update
On Tue, 2010-10-26 at 14:01 -0700, Hfe80 wrote:
> Does anyone know how to solve the problem of a DB that grew from 700 MB to 16 GB in 10 days and keeps growing? Vacuum doesn't solve the problem, and VACUUM FULL is too invasive; the only way to reduce the DB is dump and restore, but we cannot stop the service...

You are not vacuuming enough. If you vacuum more often you should be able to keep bloat down.

Joshua D. Drake

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
Re: [GENERAL] DB become enormous with continuos insert and update
Hi, On 26/10/10 23:01, Hfe80 wrote: Does anyone know how to solve the problem of a database that grew from 700 MB to 16 GB in 10 days and keeps growing? Which version of PostgreSQL are you using? Does anyone know this problem? Yes. Welcome to the Bloating club. :) Ciao, Gabriele -- Gabriele Bartolini - 2ndQuadrant Italia PostgreSQL Training, Services and Support gabriele.bartol...@2ndquadrant.it | www.2ndQuadrant.it
Re: [GENERAL] DB become enormous with continuos insert and update
On 10/26/10 2:01 PM, Hfe80 wrote: Does anyone know how to solve the problem of a database that grew from 700 MB to 16 GB in 10 days and keeps growing? Plain VACUUM doesn't solve the problem, and VACUUM FULL is too invasive; the only way to shrink the DB is dump and restore, but we cannot stop the service. If you have a lot of updates to small non-indexed fields, use a fillfactor of something like 60 or 70% so that HOT can do its job (assuming 8.3+). Do be sure to have autovacuum running and cranked up sufficiently that it's keeping up with the workload. Never do VACUUM FULL. Rather, use CLUSTER to rebuild heavily used tables in the order of the most frequently used key (typically the PK); however, this requires an exclusive table lock for the duration, so it should only be used when your application is relatively inactive.
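Both suggestions sketched out, with a hypothetical table `parcels_hot` and its primary-key index:

```sql
-- Leave ~30% of each page free so HOT updates can stay on the same page (8.3+).
-- Note: fillfactor only affects newly written pages, so existing pages
-- only benefit after the table is rewritten.
ALTER TABLE parcels_hot SET (fillfactor = 70);

-- CLUSTER rewrites the table compactly in index order (instead of VACUUM FULL),
-- but it holds an exclusive lock on the table for the duration:
CLUSTER parcels_hot USING parcels_hot_pkey;
```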
Re: [GENERAL] DB become enormous with continuos insert and update
On 26 October 2010 22:01, Hfe80 federico...@gmail.com wrote: Does anyone know how to solve the problem of a database that grew from 700 MB to 16 GB in 10 days and keeps growing? Plain VACUUM doesn't solve the problem, and VACUUM FULL is too invasive; the only way to shrink the DB is dump and restore, but we cannot stop the service. This article describes why you should almost never use vacuum full: http://it.toolbox.com/blogs/database-soup/getting-rid-of-vacuum-full-feedback-needed-33959 You need to vacuum more aggressively. -- Regards, Peter Geoghegan
Re: [GENERAL] DB become enormous with continuos insert and update
We tried everything... vacuum does work. The problem is that updates need more space, because data is not overwritten in place: when there is a huge update, the DB grows to make the space necessary (about 700 MB each time), and vacuum only frees that space in time for the next update, by which point the DB has grown another 700 MB, and another. It seems like a bug. Sorry for my English, which isn't so good :( -- View this message in context: http://postgresql.1045698.n5.nabble.com/DB-become-enormous-with-continuos-insert-and-update-tp3237903p3237903.html Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Re: [GENERAL] DB become enormous with continuos insert and update
On 27 Oct 2010, at 24:22, Hfe80 wrote: We tried everything... vacuum does work, but every big update grows the DB by another 700 MB, and vacuum only frees the space afterwards. It seems like a bug. Do you have any long-running transactions? Rows can't be reclaimed while there are transactions active in which those rows are still visible. So if you have long-running transactions, those could be pinning the deleted rows' space and preventing vacuum from re-using that space for new rows. That's one reason that could explain your database growth. Alban Hertroys -- If you can't see the forest for the trees, cut the trees and you'll see there is no forest.
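One way to check for the long-running transactions Alban describes. The column names `procpid` and `current_query` match the 8.3-9.1 releases this thread dates from; on 9.2 and later they became `pid` and `query`:

```sql
-- Sessions whose current transaction has been open for more than 5 minutes:
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start < now() - interval '5 minutes'
ORDER BY xact_start;
```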
[GENERAL] PostgreSQL 32 bit DB on 64 bit machine
Can a 32 bit PostgreSQL v. 9.0 db be copied from a 32 bit machine over the db on a 64 bit PostgreSQL machine? I'm trying to get a good initial starting point before setting up hot standby write-ahead-log (WAL) replication between a 32 bit and a 64 bit machine. When I copy the 32 bit db to the 64 bit machine, the PostgreSQL logs report a pg_control checksum mismatch. When I back up and restore the 64 bit pg_control, the log files instead report a system identifier mismatch. Thank you, Brian Martin
Re: [GENERAL] PostgreSQL 32 bit DB on 64 bit machine
On Tue, 2010-10-26 at 18:53 -0500, Martin, Brian D. (JSC-OD)[UNITED SPACE ALLIANCE LLC] wrote: Can a 32 bit PostgreSQL v. 9.0 db be copied from a 32 bit machine over the db on a 64 bit PostgreSQL machine? I'm trying to get a good initial starting point before setting up hot standby write-ahead-log (WAL) replication between a 32 bit and a 64 bit machine. When I copy the 32 bit db to the 64 bit machine, the PostgreSQL logs report a pg_control checksum mismatch. When I back up and restore the 64 bit pg_control, the log files instead report a system identifier mismatch. You must run the same architecture: 32-32 or 64-64. -- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
Re: [GENERAL] PostgreSQL 32 bit DB on 64 bit machine
-Original Message- From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Joshua D. Drake Sent: Tuesday, October 26, 2010 5:18 PM To: Martin, Brian D. (JSC-OD)[UNITED SPACE ALLIANCE LLC] Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] PostgreSQL 32 bit DB on 64 bit machine You must run the same architecture: 32-32 or 64-64. I think he can get what he wants by doing a SQL dump instead of a file-level copy of the data directory. For instance, from pgAdmin III, right-click on the database and choose the plain-text options. You'll be left with a command something like this: pg_dump --host localhost --port 5433 --username postgres --format plain --create --clean --inserts --column-inserts --verbose --file q:\donzbrane.sql Chess (Here, I am sending the output of a database called 'Chess' to a SQL file called donzbrane.sql.) By examining the options and command-line choices for pg_dump in the manuals, you can create exactly what you want.
Re: [GENERAL] PostgreSQL 32 bit DB on 64 bit machine
On 27/10/10 08:34, Dann Corbit wrote: You must run the same architecture: 32-32 or 64-64. It's also possible to run 32-bit PostgreSQL binaries on a 64-bit machine. In that case, a 32-bit data directory may be used. I think he can get what he wants by doing a SQL dump instead of a file-level copy. I tend to agree. Running a 32-bit DB on the 64-bit host won't tell them anything much for this case. -- Craig Ringer Tech-related writing: http://soapyfrogs.blogspot.com/
Re: [GENERAL] Advice needed on application/database authentication/authorization/auditing model
On 27/10/10 04:49, Dmitriy Igrishin wrote: Hey Tony, 2010/10/27 Tony Cebzanov tony...@andrew.cmu.edu On 10/23/10 11:01 AM, Craig Ringer wrote: Yep. As for not explicitly mentioning lower roles when granting a higher role (i.e. admin isn't also a user) - role inheritance. I knew about role inheritance, I just didn't know about the pg_has_role() function for determining whether a user has a role. That's helpful, but I really don't want to hit the database with a pg_has_role() call every time I want to check whether a user should have access to a certain page or function in my application. Why not? Performance? It's just one function call. It's potentially a fair bit more than that. It requires a new connection (TCP connection, backend startup, auth, etc.) or borrowing one from a pool. If the pool is server-side there's still a TCP connection with the associated latency. Then there's a round trip for the query and result, processing the result, etc. It's not trivial, especially if your client and server aren't co-located. Like you, I'd suggest using information_schema for the job. -- Craig Ringer Tech-related writing: http://soapyfrogs.blogspot.com/
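A sketch of the two approaches being compared; the user and role names are hypothetical, and `applicable_roles` only lists memberships visible to the current session:

```sql
-- Per-check approach: one round trip to the server per permission test.
SELECT pg_has_role('app_user', 'admin_role', 'member');

-- Cache-once approach: fetch the membership set at login via information_schema
-- and keep it in the application, avoiding a query per page view.
SELECT role_name
FROM information_schema.applicable_roles
WHERE grantee = 'app_user';
```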