Re: [GENERAL] Database/Table Design for Global Country Statistics
SELECT f.year, f.id, c.name, (f.value / p.value) AS per_capita
FROM fish_catch AS f
JOIN pop_total AS p USING (year, id)
INNER JOIN countries AS c ON f.id = c.id
ORDER BY (year = 2005), value, name

This seems to never end. Why is redesigning tables so difficult?! :-)) And furthermore, it is really difficult to pin down and formulate the problem that I have now. Let's see: the SQL above produces a correct result, but the (year = 2005) term changes the pattern of the output completely. Before, without sorting by a specific year, it would look like this:

year  value   name
1995  NULL    Afghanistan
2000  NULL    Afghanistan
2005  NULL    Afghanistan
2000  2365    Albania
2005  2065    Albania
1995  1160    Albania
2000  113157  Algeria
2005  126259  Algeria
1995  105872  Algeria
2000  832     American Samoa
2005  3943    American Samoa
1995  152     American Samoa

With ORDER BY (year = 2005), value, name I have this:

year  value  name
1995  0      Ethiopia
2000  0      Ethiopia
2000  0.5    Bosnia and Herzegovina
1995  0.5    Bosnia and Herzegovina
2000  0.5    Christmas Island
1995  0.5    Christmas Island
2005  0      Bosnia and Herzegovina
2005  0      Ethiopia
2005  0.5    Christmas Island
2005  0.5    Cocos (Keeling) Islands

But what I would need is this:

1995  0.5  Bosnia and Herzegovina
2000  0.5  Bosnia and Herzegovina
2005  0    Bosnia and Herzegovina
1995  0    Ethiopia
2000  0    Ethiopia
2005  0    Ethiopia
1995  0.5  Christmas Island
2000  0.5  Christmas Island
2005  0.5  Christmas Island

It looks similar to the first result, but everything would be sorted by each country's value for the year 2005, without separating that year from the other years. Hmmm, I don't know if this is clear... Most grateful for any feedback, Stef ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
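[Editor's sketch, not from the thread: one way to get the desired ordering is to self-join each row to its country's row for 2005 and sort on that value first, then on name and year, so each country's years stay together. The snippet below demonstrates the idea with Python's sqlite3 on made-up sample data; the per_capita table and its columns are hypothetical stand-ins for the output of the real query.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE per_capita (year INTEGER, value REAL, name TEXT)")
cur.executemany(
    "INSERT INTO per_capita (year, value, name) VALUES (?, ?, ?)",
    [
        (1995, 0.5, "Bosnia and Herzegovina"),
        (2000, 0.5, "Bosnia and Herzegovina"),
        (2005, 0.0, "Bosnia and Herzegovina"),
        (1995, 0.0, "Ethiopia"),
        (2000, 0.0, "Ethiopia"),
        (2005, 0.0, "Ethiopia"),
        (1995, 0.5, "Christmas Island"),
        (2000, 0.5, "Christmas Island"),
        (2005, 0.5, "Christmas Island"),
    ],
)

# Sort every row by its own country's 2005 value (via the self-join),
# then keep each country's years together in ascending order.
rows = cur.execute(
    """
    SELECT t.year, t.value, t.name
    FROM per_capita AS t
    JOIN per_capita AS s ON s.name = t.name AND s.year = 2005
    ORDER BY s.value, t.name, t.year
    """
).fetchall()

for year, value, name in rows:
    print(year, value, name)
```

The same self-join works verbatim on PostgreSQL; countries with no 2005 row would need a LEFT JOIN variant.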
[GENERAL] Tablespaces on tertiary media
Where does PostgreSQL stand with storing /really/ large amounts of data offline? Specifically, if a FUSE is used to move a tablespace to something like a tape archiver can the planner be warned that access might take an extended period? I know that at one point (v6?) there were hooks in the code for experimental Berkeley code to do this sort of thing but as far as I know there has never been anything publicly available. -- Mark Morgan Lloyd markMLl .AT. telemetry.co .DOT. uk [Opinions above are the author's, not those of his employers or colleagues] ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Issue with uninstalling postgres 8.1.9
We have a system that came with pg 8.1.9. When I try to uninstall those RPMs, it works for all the rpms except for libs:

  rpm -ev postgresql-libs-8.1.9-1.el5
  error: Failed dependencies:
          libpq.so.4 is needed by (installed) apr-util-1.2.7-6.i386

I am not sure what this is about or how to uninstall it. I cannot leave the libs rpm in place, because with it on the system the equivalent for 8.2.4 will not install. For instance, while installing the compatibility stuff prior to 8.2.4:

  rpm -Uhv compat-postgresql-libs-4-1PGDG.rhel5.i686.rpm
  warning: waiting for transaction lock on /var/lib/rpm/__db.000
  Preparing... ### [100%]
  file /usr/lib/libpq.so.4 from install of compat-postgresql-libs-4-1PGDG.rhel5 conflicts with file from package postgresql-libs-8.1.9-1.el5
  file /usr/lib/libpq.so.4.1 from install of compat-postgresql-libs-4-1PGDG.rhel5 conflicts with file from package postgresql-libs-8.1.9-1.el5

How does one resolve this? How can I uninstall libpq.so.4? I am on CentOS 5, and I'm using the rpms for Red Hat ES 5. Thanks for any pointers!
Re: [GENERAL] Issue with uninstalling postgres 8.1.9
On 14/09/2007, Phoenix Kiula [EMAIL PROTECTED] wrote: We have a system that came with pg 8.1.9. When I try to uninstall those RPMs, it works for all the rpms except for libs: [...] How does one resolve this? How can I uninstall libpq.so.4?

Ok, I found the answer to my own question here: http://www.westnet.com/~gsmith/content/postgresql/pgrpm.htm

Hope someone in the same situation finds this bit useful. The mail archives did not seem to have info about this yet, and quite a bit of googling brought up that site. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Tablespaces on tertiary media
Mark Morgan Lloyd [EMAIL PROTECTED] writes: Where does PostgreSQL stand with storing /really/ large amounts of data offline? Specifically, if a FUSE is used to move a tablespace to something like a tape archiver can the planner be warned that access might take an extended period? No, Postgres can't deal with this. You'll have to dump the tables with pg_dump or COPY or something like that and then drop them from the database. If you need them again you have to load them again. Actually if the tables are missing but nobody tries to access them (including autovacuum) then nothing will notice they're missing. But if you do try to access them you'll get an error. And if you leave it in this situation too long your database will shut down from getting too close to transaction wraparound. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Tablespaces on tertiary media
Gregory Stark wrote: Where does PostgreSQL stand with storing /really/ large amounts of data offline? Specifically, if a FUSE is used to move a tablespace to something like a tape archiver can the planner be warned that access might take an extended period? No, Postgres can't deal with this. You'll have to dump the tables with pg_dump or COPY or something like that and then drop them from the database. If you need them again you have to load them again. Actually if the tables are missing but nobody tries to access them (including autovacuum) then nothing will notice they're missing. But if you do try to access them you'll get an error. And if you leave it in this situation too long your database will shut down from getting too close to transaction wraparound. Thanks. If the tables were in a tablespace that was stored on something that looked like a conventional filesystem would the server code be prepared to wait the minutes that it took the operating system and FUSE implementation to load the tables onto disc? The earlier work e.g. http://www.vldb.org/conf/1996/P156.PDF apparently warned the planner about long-latency devices but that's probably unnecessary if the application program was aware that a table had been partitioned by age and accessing old data could be slow. -- Mark Morgan Lloyd markMLl .AT. telemetry.co .DOT. uk [Opinions above are the author's, not those of his employers or colleagues] ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Tablespaces on tertiary media
Mark Morgan Lloyd [EMAIL PROTECTED] writes: Thanks. If the tables were in a tablespace that was stored on something that looked like a conventional filesystem would the server code be prepared to wait the minutes that it took the operating system and FUSE implementation to load the tables onto disc? Ah, I see what you mean now. I think you might have a problem with the planner opening the files to do an lseek to measure how large they are. I'm not sure if that gets triggered before or after constraint exclusion. That's the only problem I can think of. The earlier work e.g. http://www.vldb.org/conf/1996/P156.PDF apparently warned the planner about long-latency devices but that's probably unnecessary if the application program was aware that a table had been partitioned by age and accessing old data could be slow. Well it's not like there are any alternative plans that will avoid the need to access the data at all. I assume the FUSE setup will always have to load the entire file, so there's not even any difference between indexed and sequential access. (Unless the table is over 1G, in which case you might want to avoid sequential scans if index scans would avoid accessing some segments.) -- Gregory Stark EnterpriseDB http://www.enterprisedb.com
Re: [GENERAL] Scalability Design Questions
Hi, novnov wrote: OK, this has been very informative and I'd like to thank the three of you. Asynchronous replication to readonly slaves is something I will look into. I've never touched postgres replication; and Scott mentioned that he was not familiar with PGCluster, so there must be some other replication system he's referencing, maybe Slony-I? Not sure if you've found those, but just to make sure: there's a nice chapter in the official Postgres documentation about High Availability and Load Balancing [1]. Another starting point might be the advocacy wiki at [2]. Regards Markus [1]: Postgres Documentation, Chapter 24. High Availability and Load Balancing: http://www.postgresql.org/docs/8.2/static/high-availability.html [2]: Postgres Advocacy Wiki, Replication: http://developer.postgresql.org/index.php/Replication%2C_Clustering%2C_and_Connection_Pooling
Re: [GENERAL] oracle rank() over partition by queries
Thanks Markus. Markus Schiltknecht [EMAIL PROTECTED] wrote: Hello Sharmi Joe, sharmi Joe wrote: Is there a way to get the oracle's rank() over partition by queries in postgresql? These are known as window functions. AFAIK Gavin Sherry is working on an implementation for Postgres. Regards Markus ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] oracle rank() over partition by queries
Hello Sharmi Joe, sharmi Joe wrote: Is there a way to get the oracle's rank() over partition by queries in postgresql? These are known as window functions. AFAIK Gavin Sherry is working on an implementation for Postgres. Regards Markus ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
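[Editor's sketch, not from the thread: until window functions are available, rank-within-partition can be emulated with a correlated subquery that counts how many rows in the same partition rank ahead. Demonstrated below with Python's sqlite3; the emp table, its columns, and the data are all made up for illustration, but the same SQL runs unchanged on Postgres 8.x.]

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INTEGER)")
cur.executemany(
    "INSERT INTO emp VALUES (?, ?, ?)",
    [("alice", "d1", 100), ("bob", "d1", 200), ("carol", "d2", 150)],
)

# Emulates rank() OVER (PARTITION BY dept ORDER BY salary DESC):
# rank = 1 + (number of rows in the same dept with a strictly higher salary).
rows = cur.execute(
    """
    SELECT name, dept, salary,
           (SELECT COUNT(*) FROM emp AS e2
             WHERE e2.dept = e1.dept AND e2.salary > e1.salary) + 1 AS rnk
    FROM emp AS e1
    ORDER BY dept, rnk
    """
).fetchall()

for row in rows:
    print(row)
```

Note that, like Oracle's rank(), ties share a rank number and leave a gap; for dense_rank semantics the subquery would count DISTINCT salaries instead.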
Re: [GENERAL] Issue with uninstalling postgres 8.1.9
Phoenix Kiula wrote: We have a system that came with pg 8.1.9. When I try to uninstall those RPMs, it works for all the rpms except for libs: [...] How does one resolve this? How can I uninstall libpq.so.4?

If compat-postgresql-libs-4 replaces postgresql-libs-8.1.9, you could do:

  rpm -ev --nodeps postgresql-libs

and then

  rpm -Uhv compat-postgresql-libs-4-1PGDG.rhel5.i686.rpm

The second command will restore the dependency that the first one breaks. Yours, Laurenz Albe
[GENERAL] How to recover database instance from a disaster
Hi, I'm supporting Sun Grid Engine, and it uses a Postgres DB as a backend server for the ARCo accounting and reporting module. One of my customers is asking how to recover data if the Postgres DB server crashes. SGE constantly generates accounting data and records it into the Postgres DB. I think one way is to periodically back up the database using pg_dump. But this is only a snapshot and will lose any information generated after it was taken. I guess running the Postgres DB as an HA server is another solution, although I am not sure if this is feasible. Is there any other way? Thanks, - Chansup ---(end of broadcast)--- TIP 6: explain analyze is your friend
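[Editor's note: besides periodic pg_dump snapshots, Postgres of this era also supports continuous archiving (point-in-time recovery): take a base backup once, then archive every completed WAL segment, which lets you replay right up to the moment of a crash. A minimal sketch of the postgresql.conf side, assuming an 8.1/8.2 server, where setting archive_command is what enables archiving; the destination path is a placeholder.]

```
# postgresql.conf -- archive each finished WAL segment as it completes
# (%p = path of the segment, %f = its file name; target path is a placeholder)
archive_command = 'cp "%p" /mnt/backup/wal/"%f"'
```

A base backup (pg_start_backup(), a filesystem copy, pg_stop_backup()) plus the archived segments can then be restored with a recovery.conf restore_command; the Continuous Archiving chapter of the manual covers the full procedure.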
[GENERAL] Locking entire database
Hello, Is there some way of locking all database tables in a transaction without knowing their names, or even better, just locking the entire database? I know this is bad tactics, but there is a specific case where I need it. Can it be done? Thank you Panagiotis
Re: [GENERAL] Tablespaces on tertiary media
Gregory Stark wrote: Thanks. If the tables were in a tablespace that was stored on something that looked like a conventional filesystem would the server code be prepared to wait the minutes that it took the operating system and FUSE implementation to load the tables onto disc? Ah, I see what you mean now. I think you might have a problem with the planner opening the files to do an lseek to measure how large they are. I'm not sure if that gets triggered before or after constraint exclusion. That's the only problem I can think of. The size could be stored in the catalogue though. However at that point I guess that anything that was used before constraint exclusion would have to be in the catalogue and anything after would have to initiate retrieval from tertiary media if it's not already cached. The earlier work e.g. http://www.vldb.org/conf/1996/P156.PDF apparently warned the planner about long-latency devices but that's probably unnecessary if the application program was aware that a table had been partitioned by age and accessing old data could be slow. Well it's not like there are any alternative plans that will avoid the need to access the data at all. I assume the FUSE setup will always have to load the entire file so there's no even any difference between indexed and sequential access. (Unless the table is over 1G in which case you might want to avoid sequential scans if index scans would avoid accessing some segments.) I'd imagine in most cases that sequential scan time would be dwarfed by medium-load and seek time. It would be important here that the server didn't time out assuming that it had hit a hardware problem when in actual fact the table was still being pulled from tape. I'd presume that when Sarawagi (who I believe is now with IBM) was doing the work that there wasn't a straightforward way to partition tables (as is currently described in section 5.9 of the manual) so she had to add internal hooks. 
Now granted that I don't pretend to really understand how things work (I'm a luser, not a guru) but it seems to me that it would not be difficult to extend the tablespace definition from CREATE TABLESPACE tablespacename LOCATION '/directory' to something like CREATE TABLESPACE tablespacename LOCATION '|check_loaded.pl /directory' where the check_loaded.pl script could check that the table was cached and return its name when available. However I guess that the script would probably need to see the initial lseek or whatever as well... there's probably a whole lot of non-obvious details that I've totally overlooked. Just my 2d-worth :-) -- Mark Morgan Lloyd markMLl .AT. telemetry.co .DOT. uk [Opinions above are the author's, not those of his employers or colleagues] ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Locking entire database
On 9/14/07, Panagiotis Pediaditis [EMAIL PROTECTED] wrote: ... there is a specific case where i need it. I don't really know, but explain what the case is, and maybe someone could help you.
Re: [GENERAL] How to recover database instance from a disaster
On 9/14/07, Chansup Byun [EMAIL PROTECTED] wrote: Is there any other way? See: http://www.postgresql.org/docs/8.2/static/backup.html
Re: [GENERAL] Locking entire database
Well, the problem is that I am working on an RDF query engine for persistent RDF data. The data is stored/structured in a specific way in the database. When I perform updates in parallel, because there are cross-table dependencies, I end up with inconsistencies. For example: one transaction reads to see if there is a resource, so as to add a property where it is a subject. Then another transaction deletes the resource after the first has decided that the resource is there, but before it added the property. Thus it would be helpful for me to avoid the difficult task of dependency-based locking and just lock the whole database. Any ideas? Rodrigo De León wrote: On 9/14/07, Panagiotis Pediaditis [EMAIL PROTECTED] wrote: ... there is a specific case where i need it. Don't really know, but, explain what the case is, and maybe someone could help you.
Re: [GENERAL] Locking entire database
On 9/14/07, Panagiotis Pediaditis [EMAIL PROTECTED] wrote: Hello, Is there some way of locking all database tables in a transaction without knowing their names or even better just locking the entire database? I know this is bad tactics but there is a specific case where i need it. Can it be done? AFAIK locking the entire database is not an option in PostgreSQL. However, you can effectively restrict everyone's access to a particular database via pg_hba.conf. More details here: http://www.postgresql.org/docs/8.2/static/auth-pg-hba-conf.html regards, -- Sibte Abbas ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Locking entire database
On Fri, Sep 14, 2007 at 05:45:07PM +0300, Panagiotis Pediaditis wrote: Well the problem is I am working on rdf query engine for persistent RDF data. The data is stored/structured in a specific way in the database. When i perform updates in parallel, because there are cross table dependencies, I end up with inconsistencies, For example One transaction reads to see if there is a resource so as to add a property where it is a subject. Then an other transaction deletes the resource after the first has decided that the resource is there but before it added the property. Sounds like what you need is serializable transactions. Then the server will tell you if something conflicts. Have a nice day, -- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
Re: [GENERAL] Locking entire database
Panagiotis Pediaditis, 14.09.2007 16:45: Well the problem is I am working on rdf query engine for persistent RDF data. [...] Thus it would be helpful for me to avoid the difficult task of dependency based locking and just lock the whole database. any ideas? Hmm. To me this sounds like all those steps should in fact be _one_ transaction and not several transactions. Thomas
Re: [GENERAL] problems with large table
Thanks, recreating the table solved my problems. Our team is working on implementing some performance tuning based on other recommendations from the list (FSM, etc). Mike Joshua D. Drake wrote: At this point, you are in a world of hurt :). If you stop a vacuum you have created a huge mess of dead rows in that table. My suggestion is this: create a new table that is populated from the old table, rename the old table out of the way, then rename the new table to the old table's name. Run analyze.
Re: [GENERAL] Documentation fix regarding atan2
Change made. Thanks. Your documentation changes can be viewed in five minutes using links on the developer's page, http://www.postgresql.org/developer/testing. --- Andrew Maclean wrote: In Table 9.4 of the documentation atan2 is described as follows: atan2(x, y): inverse tangent of x/y. I am sure it should read: atan2(y, x): inverse tangent of y/x. This matches the standard C/C++ atan2(y, x) function. You can easily test this: if y = 2 and x = 1, then degrees(atan(y/x)) = 63.4; but if we proceed according to the documentation, degrees(atan2(x, y)) = degrees(atan2(1, 2)) = 26.6, which is not the same as degrees(atan(y/x)). So it must be degrees(atan2(y, x)) = degrees(atan2(2, 1)) = 63.4. Thanks Andrew -- ___ Andrew J. P. Maclean Centre for Autonomous Systems The Rose Street Building J04 The University of Sydney 2006 NSW AUSTRALIA Ph: +61 2 9351 3283 Fax: +61 2 9351 7474 URL: http://www.acfr.usyd.edu.au/ ___ -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
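[Editor's note: the correction is easy to sanity-check against the C-library convention that atan2 takes (y, x). A quick check in Python, whose math.atan2 follows the same libm convention:]

```python
import math

y, x = 2.0, 1.0
documented = math.degrees(math.atan2(x, y))  # argument order the old docs implied
corrected = math.degrees(math.atan2(y, x))   # argument order the report proposes
reference = math.degrees(math.atan(y / x))   # inverse tangent of y/x

print(round(documented, 1), round(corrected, 1), round(reference, 1))
```

Only the (y, x) order agrees with atan(y/x), confirming the documentation fix.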
Re: [GENERAL] Locking entire database
A simpler example: in the context of one transaction I do many queries of the form INSERT INTO table value WHERE value NOT IN TABLE; If I have 2 processes running hundreds of these at the same time, I end up with duplicates, even with isolation set to serializable. Any ideas? thnx Panagiotis Thomas Kellerer wrote: Panagiotis Pediaditis, 14.09.2007 16:45: [...] Hmm. To me this sounds like all those steps should in fact be _one_ transaction and not several transactions. Thomas
Re: [GENERAL] Inherited FK Indexing
On 9/14/07, Ketema Harris [EMAIL PROTECTED] wrote: I have the following table set up: CREATE TABLE states ( state_id integer NOT NULL DEFAULT nextval ('state_province_id_seq'::regclass), state character(2), full_name character varying, timezone character varying, CONSTRAINT PK_state_id PRIMARY KEY (state_id) ) CREATE TABLE canadian_provinces ( -- Inherited: state_id integer NOT NULL DEFAULT nextval ('state_province_id_seq'::regclass), -- Inherited: state character(2), -- Inherited: full_name character varying, -- Inherited: timezone character varying, CONSTRAINT PK_province_id PRIMARY KEY (state_id) ) as expected I can do select * from states and get everything out of the child table as well. What I can't do is create a FK to the states table and have it look in the child table as well. Is this on purpose? Not so much on purpose as an artifact of the design process. PostgreSQL can't span multiple tables with indexes, a unique one of which is required for a FK to point to a field. Is it possible to have FK that spans into child tables? Not really. You might be able to write your own function that approximates such behavior. I would think some kind of intermediate table with every value from all the children for that one column could be used, but performance would suffer. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Locking entire database
On 9/14/07, Panagiotis Pediaditis [EMAIL PROTECTED] wrote: A simpler example, In the context of one transaction i do many queries of the form INSERT INTO table value WHERE value NOT IN TABLE; If i have 2 processes running the same 100s of these at the same time i end up with duplicates. Even with isolation set to serializable any ideas? Unique index?
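[Editor's sketch expanding on the suggestion: with a unique constraint, the database itself rejects whichever insert arrives second, so the check-then-insert race disappears; the loser just catches the error. Demonstrated below with Python's sqlite3; the resources table and the insert_if_absent helper are made up for illustration (on Postgres the failing statement raises a unique_violation instead of sqlite3.IntegrityError).]

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE resources (value TEXT NOT NULL UNIQUE)")

def insert_if_absent(value):
    """Attempt the insert and let the unique index arbitrate the race."""
    try:
        with con:  # one transaction per attempt; rolls back on error
            con.execute("INSERT INTO resources (value) VALUES (?)", (value,))
        return True
    except sqlite3.IntegrityError:
        return False  # some other transaction inserted it first

first = insert_if_absent("subject:propertyX")
second = insert_if_absent("subject:propertyX")
```

The second call fails cleanly and the table still holds exactly one copy, which is precisely what the NOT IN check could not guarantee under concurrency.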
Re: [GENERAL] Inherited FK Indexing
On Friday 14 September 2007, Ketema Harris [EMAIL PROTECTED] wrote: as expected I can do select * from states and get everything out of the child table as well. What I can't do is create a FK to the states table and have it look in the child table as well. Is this on purpose? Is it possible to have FK that spans into child tables? No. -- The only difference between conservatives and liberals regarding budget cuts is tense. Conservatives say they will cut the budget, and then they increase it. After the budget has increased, liberals say that it has been cut.
Re: [GENERAL] Inherited FK Indexing
On Sep 14, 2007, at 10:35 AM, Ketema Harris wrote: I have the following table set up: [...] as expected I can do select * from states and get everything out of the child table as well. What I can't do is create a FK to the states table and have it look in the child table as well. Is this on purpose? Is it possible to have FK that spans into child tables? I'm assuming you just left out an INHERITS clause or ALTER TABLE statement to add the inheritance? Anyways, the answer to your question is no, you'll need to create any dependencies to child tables separately. Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com
Re: [GENERAL] Inherited FK Indexing
Hi, Ketema Harris wrote: as expected I can do select * from states and get everything out of the child table as well. What I can't do is create a FK to the states table and have it look in the child table as well. Is this on purpose? Is it possible to have FK that spans into child tables? This is a well known (and documented, see [1]) deficiency. It's due to the current implementation of indices, which are bound to exactly one table, meaning they do return a position within the table, but cannot point to different tables. Regards Markus [1]: Postgres Documentation, Chapter 5.8.1 Caveats (of Inheritance): http://www.postgresql.org/docs/8.2/static/ddl-inherit.html ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Inherited FK Indexing
I have the following table set up: CREATE TABLE states ( state_id integer NOT NULL DEFAULT nextval ('state_province_id_seq'::regclass), state character(2), full_name character varying, timezone character varying, CONSTRAINT PK_state_id PRIMARY KEY (state_id) ) CREATE TABLE canadian_provinces ( -- Inherited: state_id integer NOT NULL DEFAULT nextval ('state_province_id_seq'::regclass), -- Inherited: state character(2), -- Inherited: full_name character varying, -- Inherited: timezone character varying, CONSTRAINT PK_province_id PRIMARY KEY (state_id) ) as expected I can do select * from states and get everything out of the child table as well. What I can't do is create a FK to the states table and have it look in the child table as well. Is this on purpose? Is it possible to have FK that spans into child tables? Thanks ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Tablespaces on tertiary media
On 09/14/07 04:06, Mark Morgan Lloyd wrote: Where does PostgreSQL stand with storing /really/ large amounts of data offline? Specifically, if a FUSE is used to move a tablespace to something like a tape archiver can the planner be warned that access might take an extended period? I know that at one point (v6?) there were hooks in the code for experimental Berkeley code to do this sort of thing but as far as I know there has never been anything publicly available. While tertiary media certainly was relevant 10 years ago, is it really necessary in 2007? A couple of MSA-1000s stuffed with 1TB disks would hold an l-o-t *lot* of historical data. -- Ron Johnson, Jr. Jefferson LA USA Give a man a fish, and he eats for a day. Hit him with a fish, and he goes away for good!
[GENERAL] Pgsql roles, SQL injection, and utility statements
Hi all; I have a bit of concern about writing applications which use Pgsql roles for security. Since the utility statements are not parameterized, the easiest way to manage the roles in an application is to use stored procedures which EXECUTE strings to create SQL queries. These EXECUTE statements include user-supplied data, and since these would generally run with some sort of administrative rights, I am worried about people doing things like: select * from add_user_to_role('username', 'rolename; drop table foo;'); Is this a problem? Is there a way to do this safely? Best Wishes, Chris Travers
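[Editor's note: on the PL/pgSQL side, identifiers and literals interpolated into an EXECUTE string can be passed through quote_ident() and quote_literal() respectively, which neutralize embedded quotes. The underlying danger is easy to demonstrate outside the server; below is a sketch with Python's sqlite3 (the user_roles table and the payload are made up) contrasting string interpolation with driver-bound parameters, which keep the payload inert data:]

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE user_roles (username TEXT, rolename TEXT)")

payload = "rolename'; DROP TABLE user_roles;--"

# Unsafe shape: interpolating the value lets the embedded quote terminate
# the literal and smuggle extra statements into the query string.
unsafe_sql = "INSERT INTO user_roles VALUES ('username', '%s');" % payload

# Safe shape: the driver binds the value as data, never as SQL text,
# so the whole payload lands in the column unaltered.
cur.execute(
    "INSERT INTO user_roles (username, rolename) VALUES (?, ?)",
    ("username", payload),
)
stored = cur.execute("SELECT rolename FROM user_roles").fetchone()[0]
print(stored)
```

The same principle applies inside stored procedures: anything that ends up concatenated into an EXECUTE string must be quoted as an identifier or literal, never pasted in raw.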
Re: [GENERAL] Tablespaces on tertiary media
Ron Johnson wrote: I know that at one point (v6?) there were hooks in the code for experimental Berkeley code to do this sort of thing but as far as I know there has never been anything publicly available. While tertiary media certainly was relevant 10 years ago, is it really necessary in 2007? A couple of MSA-1000s stuffed with 1TB disks would hold a *lot* of historical data. I was considering it from the point-of-view of completeness rather than anything else, but as a specific example I seem to recall that one of the particle accelerator sites uses PostgreSQL for cataloging captured data but actually stores it on either tape or optical disc (I forget which). I'm sure that there would be advantages to being able to retrieve both metadata and data using the same API, rather than using database queries for the former and something like an AMANDA-compatible interface for the latter. -- Mark Morgan Lloyd markMLl .AT. telemetry.co .DOT. uk [Opinions above are the author's, not those of his employers or colleagues] ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] [GENERAL] AutoVacuum Behaviour Question
Andrew Hammond wrote: On 9/13/07, Bruce Momjian [EMAIL PROTECTED] wrote: Alvaro Herrera wrote: Bruce Momjian wrote: Is this item closed? No, it isn't. Please add a TODO item about it: * Prevent long-lived temp tables from causing frozen-Xid advancement starvation Sorry, I don't understand this. Can you give me more text? Thanks. s/long-lived/orphaned/ ? And possibly this means better orphan detection and removal. Added: o Prevent long-lived temporary tables from causing frozen-xid advancement starvation http://archives.postgresql.org/pgsql-general/2007-06/msg01645.php -- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://www.enterprisedb.com + If your life is a hard drive, Christ can be your backup. + ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] GRANT on group does not give access to group members
Well, after further searching and reviewing the code I believe the problem was the NOINHERIT in the login role creation. So the remaining question is: if I want to grant privileges on all tables, do I have to do them ONE BY ONE? Granting the privileges on the database or the schema won't recursively grant them on the tables, am I correct? Is the only solution the usage of scripts such as this one: http://pgedit.com/public/sql/acl_admin/index.html ? wild_oscar wrote: Dear all, I'm a bit confused about privilege management in PostgreSQL. I have a database db1, schema schema1 and table table1, created with a superuser. Now, following the documentation (and what I've learnt about user management), I created a group called admin and a user login, and gave the login user the admin privileges. CREATE ROLE admin NOINHERIT; CREATE ROLE login LOGIN ENCRYPTED PASSWORD 'md5c2740ac0c81b17602438f3ac849fea08' NOINHERIT; GRANT admin TO login; Now, if I grant: GRANT ALL ON TABLE schema1.table1 TO GROUP admin; selecting * from the table with user login won't work: ERROR: permission denied for relation table1 If I grant directly to the user: GRANT ALL ON TABLE schema1.table1 TO login; it WORKS. Now, that makes the idea of creating a few role groups, setting privileges on them, and later adding maybe a lot of users just by adding them to the role groups pointless. So why does PostgreSQL work like this, and how can I achieve the common grant-to-group approach? On another question: if I want to grant privileges on all tables, do I have to do them ONE BY ONE? Granting the privileges on the database or the schema won't recursively grant them on the tables, am I correct? Thanks a lot for your help! -- View this message in context: http://www.nabble.com/GRANT-on-group-does-not-give-access-to-group-members-tf4435748.html#a12655884 Sent from the PostgreSQL - general mailing list archive at Nabble.com. 
---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
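Given the diagnosis above (NOINHERIT on the login role), a sketch of the two usual remedies, using the role names from the thread:

```sql
-- Option 1: let "login" automatically inherit the privileges of the
-- roles it is a member of.
ALTER ROLE login INHERIT;

-- Option 2: keep NOINHERIT, but switch to the group role explicitly
-- before touching its objects.
SET ROLE admin;
SELECT * FROM schema1.table1;
```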
[GENERAL] tsearch2 and parsing host strings
A question related to tsearch2 functionality in postgres: When I run the following query: select to_tsvector('default', 'website.com') I get 'website.com':1. What I need to get back is 'website':1 instead. I can see that the parser correctly determines term website.com as a host token, which then are routed (on my, and I believe default, configuration) to 'default' dictionary (en_stem for me). Has anyone written a special dictionary for cases just like the above, so that I could change the pg_ts_cfgmap to map it to that special dictionary? Is there any way I can accomplish this with tsearch2? Thanks, Laimis
[GENERAL] use COPY TO on normalized database
I am working with PostgreSQL 8.2.4. I need to use the SQL COPY (COPY table FROM file) statement to populate my database. I have created a normalized data model (up to 3NF). Is it possible to use COPY TO on a particular table (that is linked to other tables using foreign keys) whilst keeping the foreign keys and other tables updated? (Also, will de-normalizing the model to 2NF or 1NF help?) Thank you. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
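The usual pattern for loading a flat file into a normalized schema (a sketch only; the table and column names here are illustrative, not from the question) is to COPY into a staging table and then distribute rows with INSERT ... SELECT, parents before children, so the foreign keys stay satisfied:

```sql
-- Illustrative staging-table approach for a normalized target schema.
CREATE TEMP TABLE staging (customer_name text, order_total numeric);
COPY staging FROM '/tmp/data.csv' WITH CSV;

-- Load parent rows first so the FK below can be satisfied:
INSERT INTO customer (name)
SELECT DISTINCT customer_name FROM staging;

-- Then load child rows, resolving generated keys by join:
INSERT INTO orders (customer_id, total)
SELECT c.id, s.order_total
FROM staging s JOIN customer c ON c.name = s.customer_name;
```

De-normalizing to 2NF or 1NF only trades this load-time join for redundancy in the schema itself; the staging approach keeps the 3NF model intact.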
[GENERAL] su: adduser: command not found mac osx
I've done everything I can find related to this error, including "su -" instead of "su" and "useradd" instead of "adduser", but nothing is helping. Is there anyone out there using bash on mac os who knows how to fix this? Thanks, Jason ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] plpgsql trigger original query
Hi, I am looking for a way to get the original query that caused a trigger to fire. I need to be able to get this query either inside the trigger itself (and then send it to the function the trigger calls) or get it in the end function. Is this doable? The reason i am asking is that I would like to be able to send a variable (this variable would come from php where the original query is executed) to the end function called by the trigger. Any help with this would be greatly appreciated. Thanks, Dan ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] NOT NULL Issue
Hi Why is '' not considered null in postgres (8.1.3)? Currently I have to use the following workaround where I have zero-length strings in char fields: select * from security.users where length(us_username)=0; Surely this is a null. Apparently not in Postgres. Currently I have to use the following SQL to reject zero-length strings: alter table security.users add constraint notnull_username check (us_username <> ''); Thanks Gustav ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] NOT NULL Issue
Gustav Lindenberg [EMAIL PROTECTED] writes: select * from security.users where length(us_username)=0; Surely this is a null. Surely not. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] database still hanging
What would cause psql to hang indefinitely when the backend disappears? We have a script that uses psql to insert a record (TCP connection to DB on different machine). The command is basically psql connection_stuff -c insert into... A while back I had to restart the server and today discovered that some of the client machines have psql processes dating back several months. Obviously no TCP connection on the server end but client-side shows the connection as ESTABLISHED. -- View this message in context: http://www.nabble.com/database-still-hanging-tf4440057.html#a12668226 Sent from the PostgreSQL - general mailing list archive at Nabble.com. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] constrains on two tables
In response to finecur [EMAIL PROTECTED]: On Sep 10, 9:55 pm, finecur [EMAIL PROTECTED] wrote: Hi Here is my table: Table School ( id integer, name text ); Table Department ( id integer, school_id integer references school(id), name text ); Table Course ( department_id integer references department(id), name text, course_number text ) I would like to make the course_number unique, but not in the course table, nor in department. I would like to make the course_number unique in the scope of School. So, you may see the same course_number in the Course table, but (school_id, course_number) should always be unique. How can I make the constraint? ALTER TABLE Course ADD PRIMARY KEY (school_id, course_number); and add the school_id column to the Course table. -- Bill Moran http://www.potentialtech.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
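Spelling out the suggestion above as runnable DDL (a sketch; if (school_id, course_number) should not become the primary key, a UNIQUE constraint gives the same per-school uniqueness):

```sql
-- Add the school_id column the reply calls for, then enforce that a
-- course number appears at most once per school.
ALTER TABLE Course ADD COLUMN school_id integer REFERENCES School (id);
ALTER TABLE Course
    ADD CONSTRAINT course_unique_per_school UNIQUE (school_id, course_number);
```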
Re: [GENERAL] Event-driven programming?
On Sep 12, 3:05 pm, [EMAIL PROTECTED] (D. Dante Lorenso) wrote: Pavel Stehule wrote: 2007/9/12, Jay Dickon Glanville [EMAIL PROTECTED]: - I write a function (it doesn't matter what language it's in: PL/pgSQL, PL/Java, etc) - I register that function as a post-commit callback function - when a client commits a transaction, the function gets called, and the database passes the function some general information as to the content of the transaction Note how similar this process is to writing triggers. The only problem I have with triggers is that events get generated per-table. I'd like to get notifications based on transactions, not table changes. What I'd like to be able to do with this event is to notify any applications of this change, so they can update their cached view of the database. Although I'm happy to use triggers as-is (not per transaction, etc) I've also wondered about firing events from the database. I'm curious to know if anyone has attempted to write a trigger that will open a socket and send an event packet to an application server on the network. I've considered using a message queue like JMS to manage events on my network and have PostgreSQL fire off UDP messages to a socket server that would insert jobs into the message queue as triggers get fired in the database. Doing this would be an alternative to storing the queue as a database table and having to use polling to constantly check the database for events in the queue. I am interested what anybody might contribute to this thread. Let us know what you tried whether it worked or not, it might be useful. -- Dante Depending on your reliability requirements UDP may not be a great choice. But, since you asked about what's been tried, my (successful so far) production setup is along the lines of: 1. process A accepts multiple data flows, inserts work-to-be-done items into a table in batches, and calls NOTIFY. 2. process B LISTENs for notifications (with a blocking read on the socket connection to Postgres) and takes them as a signal to look for work items to be done. It also checks every N minutes of idle time for work items to be done in case the NOTIFY/LISTEN mechanism is broken (haven't seen that situation yet). As for recovery, process B looks for work items on startup, then drops into the LISTEN / blocking_read mode. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
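The queue-plus-notification setup described above can be sketched in SQL (the channel and table names are illustrative, not from the thread):

```sql
-- Producer (process A): enqueue work in a batch, then signal listeners.
INSERT INTO work_queue (payload) VALUES ('item 1'), ('item 2');
NOTIFY work_available;

-- Consumer (process B): register interest once, then block on the
-- connection socket; on each wakeup, drain the queue.
LISTEN work_available;
SELECT * FROM work_queue;  -- fetch pending work items
```

The periodic idle-time poll the poster mentions guards against a lost notification, since NOTIFY delivery is tied to the connection.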
Re: [GENERAL] constrains on two tables
On Sep 10, 9:55 pm, finecur [EMAIL PROTECTED] wrote: Hi Here is my table: Table School ( id integer, name text ); Table Department ( id integer, school_id integer references school(id), name text ); Table Course ( department_id integer references department(id), name text, course_number text ) I would like to make the course_number unique, but not in the course table, nor in department. I would like to make the course_number unique in the scope of School. So, you may see the same course_number in the Course table, but (school_id, course_number) should always be unique. How can I make the constraint? Thanks, ff Anyone know??? ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] arrays of foreign keys
Hello, Thanks everyone for your input. It sounds like I won't use an array of foreign keys, then; I was just curious about the array functionality. However, I hadn't thought about setting up a view above the intermediary table with array_accum, as I had never heard of array_accum before. I did some research in the online docs. It's a cool piece of functionality, but what's its performance? Would using array_accum slow down a view? Thanks Max ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
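For reference, array_accum is not built in; it is the example custom aggregate from the PostgreSQL documentation:

```sql
-- Example aggregate from the PostgreSQL manual: collects values into
-- an array, one row at a time, via array_append.
CREATE AGGREGATE array_accum (anyelement)
(
    sfunc = array_append,
    stype = anyarray,
    initcond = '{}'
);
```

Since it appends element by element, its cost grows with the number of rows aggregated per group, which is worth keeping in mind for the performance question above.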
Re: [GENERAL] su: adduser: command not found mac osx
Have you tried sudo? su won't do anything if you haven't explicitly enabled the root account. On Sep 12, 2007, at 1:53 AM, Jason Nerida wrote: I've done everything I can find related to this error including su - instead of su useradd instead of adduser but nothing is helping, is there anyone out there using bash on mac os who knows how to fix this? Thanks, Jason ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] GRANT on group does not give access to group members
Dear all, I'm a bit confused about privilege management in PostgreSQL. I have a database db1, schema schema1 and table table1, created with a superuser. Now, following the documentation (and what I've learnt about user management), I created a group called admin and a user login, and gave the login user the admin privileges. CREATE ROLE admin NOINHERIT; CREATE ROLE login LOGIN ENCRYPTED PASSWORD 'md5c2740ac0c81b17602438f3ac849fea08' NOINHERIT; GRANT admin TO login; Now, if I grant: GRANT ALL ON TABLE schema1.table1 TO GROUP admin; selecting * from the table with user login won't work: ERROR: permission denied for relation table1 If I grant directly to the user: GRANT ALL ON TABLE schema1.table1 TO login; it WORKS. Now, that makes the idea of creating a few role groups, setting privileges on them, and later adding maybe a lot of users just by adding them to the role groups pointless. So why does PostgreSQL work like this, and how can I achieve the common grant-to-group approach? On another question: if I want to grant privileges on all tables, do I have to do them ONE BY ONE? Granting the privileges on the database or the schema won't recursively grant them on the tables, am I correct? Thanks a lot for your help! -- View this message in context: http://www.nabble.com/GRANT-on-group-does-not-give-access-to-group-members-tf4435748.html#a12654908 Sent from the PostgreSQL - general mailing list archive at Nabble.com. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] constrains on two tables
Hi Here is my table: Table School ( id integer, name text ); Table Department ( id integer, school_id integer references school(id), name text ); Table Course ( department_id integer references department(id), name text, course_number text ) I would like to make the course_number unique, but not in the course table, nor in department. I would like to make the course_number unique in the scope of School. So, you may see the same course_number in the Course table, but (school_id, course_number) should always be unique. How can I make the constraint? Thanks, ff ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] tsearch2 and parsing host strings
On Tue, 11 Sep 2007, Laimonas Simutis wrote: A question related to tsearch2 functionality in postgres: When I run the following query: select to_tsvector('default', 'website.com') I get 'website.com':1. What I need to get back is 'website':1 instead. I can see that the parser correctly determines term website.com as a host token, which then are routed (on my, and I believe default, configuration) to 'default' dictionary (en_stem for me). Has anyone written a special dictionary for cases just like the above, so that I could change the pg_ts_cfgmap to map it to that special dictionary? Is there any way I can accomplish this with tsearch2? Check my reply about pg_regex dictionary. Simple regex will save you. Thanks, Laimis Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Inserting a timestamp in a timestamp column.
Hi all, I have a column declared as timestamp without time zone that I vainly want to insert a raw timestamp into (i.e. in the format returned by Perl's or PHP's time()). I know of SQL NOW(), but I want to insert a cooked timestamp from the outside most efficiently. How? Thanks. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] plpgsql trigger original query
Hi, I am looking for a way to get the original query that caused a trigger to fire. I need to be able to get this query either inside the trigger itself (and then send it to the function the trigger calls) or get it in the end function. Is this doable? The reason i am asking is that I would like to be able to send a variable (this variable would come from php where the original query is executed) to the end function called by the trigger. Any help with this would be greatly appreciated. Thanks, Dan

currently this feature isn't supported. You can look to pg_stat_activity table for top outer statement:

create or replace function current_statement() returns varchar as $$
    select current_query from pg_stat_activity
     where procpid = pg_backend_pid();
$$ language sql;

postgres=# select now(), current_statement();
            now             |         current_statement
----------------------------+------------------------------------
 2007-09-14 22:29:58.285+02 | select now(), current_statement();
(1 row)

Regards Pavel Stehule ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Inserting a timestamp in a timestamp column.
I normally use (for php) something like date('Y-m-d H:i:s', time()) to get a string version that Postgres will accept. On Sep 14, 2007, at 3:23 PM, rihad wrote: Hi all, I have a column declared as timestamp without time zone that I vainly want to insert a raw timestamp into (i.e. in the format returned by Perl's or PHP's time()). I know of SQL NOW(), but I want to insert a cooked timestamp from the outside most efficiently. How? Thanks. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster Erik Jones Software Developer | Emma® [EMAIL PROTECTED] 800.595.4401 or 615.292.5888 615.292.0777 (fax) Emma helps organizations everywhere communicate market in style. Visit us online at http://www.myemma.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
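The conversion can also be done on the server side (a sketch; the table and column names are illustrative — note that to_timestamp() yields timestamp with time zone, so for a timestamp-without-time-zone column it helps to pin the zone down explicitly):

```sql
-- Convert a raw Unix epoch value (as produced by Perl/PHP time()).
INSERT INTO t (ts) VALUES (to_timestamp(1189809000));

-- For a "timestamp without time zone" column, state the zone:
INSERT INTO t (ts) VALUES (to_timestamp(1189809000) AT TIME ZONE 'UTC');
```

This avoids formatting the value as a string in the client at all.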
Re: [GENERAL] importing pgsql schema into visio (for diagramming)
On Sep 12, 2007, at 7:32 PM, Andrew Hammond wrote: Does anyone know where I could find a tool which allows importing schema information from a postgres database into visio? The boss guys want some pretty pictures... See SQLFairy. it can generate pretty pictures directly from the schemas. I also have some script somewhere that generates graphviz output which makes optimal graphs. ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Pgsql roles, SQL injection, and utility statements
[EMAIL PROTECTED] (Chris Travers) writes: Since the utility statements are not parameterized, the easiest way to manage the roles in an application is to use stored procedures which EXECUTE strings to create SQL queries. These EXECUTE statements include user-supplied data, and since these would generally run with some sort of administrative rights, I am worried about people doing things like: select * from add_user_to_role('username', 'rolename; drop table foo;'); Is this a problem? Is there a way to do this safely? Sure - validate that 'rolename; drop table foo;' is the legitimate name of a role, and raise an exception if it isn't. And have the stored function use double quotes to make sure that the names are suitably quoted. That provides a belt and a pair of suspenders for this case... -- output = (cbbrowne @ acm.org) http://www3.sympatico.ca/cbbrowne/nonrdbms.html I just removed the instructions in MC:COMMON;LINS which specify that it should be installed on AI. We'll certainly miss that machine, and probably spend the rest of our lives fixing programs that mention it. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
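A sketch of the validate-then-quote approach described above (the function and parameter names follow the example in the question and are otherwise illustrative):

```sql
CREATE OR REPLACE FUNCTION add_user_to_role(p_user text, p_role text)
RETURNS void AS $$
BEGIN
    -- Belt: refuse anything that is not an existing role name.
    PERFORM 1 FROM pg_roles WHERE rolname = p_role;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'no such role: %', p_role;
    END IF;
    -- Suspenders: quote_ident() double-quotes the names, so a value
    -- like 'rolename; drop table foo;' cannot escape the identifier.
    EXECUTE 'GRANT ' || quote_ident(p_role) || ' TO ' || quote_ident(p_user);
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```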
[GENERAL] %tsearch2-affix parse error dictionary spanish
Hi I had installed postgresql-8.2.4. After some problems with the tsearch2 installation I have the following problem:

prueba=# select to_tsvector('espanol','melón');
ERROR:  Affix parse error at 506 line

If I execute:

prueba=# select lexize('sp','melón');
 lexize
--------
 {melon}
(1 row)

Where can I investigate to resolve the problem? I tried many dictionaries with the same results. My dictionary, at line 506, has:

flag *J:   # isimo
    E > -E, ÍSIMO     # grande > grandísimo
    E > -E, ÍSIMOS    # grande > grandísimos
    E > -E, ÍSIMA     # grande > grandísima
    E > -E, ÍSIMAS    # grande > grandísimas
    O > -O, ÍSIMO     # tonto > tontísimo
    O > -O, ÍSIMA     # tonto > tontísima
    O > -O, ÍSIMOS    # tonto > tontísimos
    O > -O, ÍSIMAS    # tonto > tontísimas
    L > ÍSIMO         # formal > formalísimo
    L > ÍSIMA         # formal > formalísima
    L > ÍSIMOS        # formal > formalísimos
    L > ÍSIMAS        # formal > formalísimas

If I remove Í then I don't have the problem, but the lexeme is incorrect. I saw the post http://archives.postgresql.org/pgsql-general/2007-07/msg00888.php Maybe Marcelo had resolved the problem. best regards ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] NOT NULL Issue
Gustav Lindenberg [EMAIL PROTECTED] writes: Why is '' not considered null is postgres (8.1.3) Because they're different. The SQL spec says that an empty string is different from NULL, and so does every database in the world except Oracle. Oracle, however, does not define the standard. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
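A quick illustration of the distinction (standard PostgreSQL behaviour; the empty string is a known value of length zero, while NULL is the absence of a value):

```sql
SELECT '' IS NULL;        -- f: an empty string is a value, not NULL
SELECT length('');        -- 0
SELECT NULL::text = '';   -- NULL, not false: comparisons with NULL are unknown
```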