Re: [GENERAL] Child program using parent program's transaction?
I would say that in certain contexts it would definitely be sane/preferable to have a worker process work in its boss's transaction. In the traditional bank example: open a transfer transaction; one application goes to take money out of one account; a second application goes to put money into the other account; a third application might do other things. This way you can do parallel processing with separate physical applications and still keep the integrity of the transaction. The transaction would then be committed when all of the child applications came back and reported success, and would be rolled back if one reported a failure.

How to do it is environment specific, and I couldn't tell you how to pass a transaction or connection as a parameter in Ruby.

Sim

Wayne Conrad wrote: I work with a system designed as lots of little cooperating worker programs, with boss programs that... well, boss the worker programs around. Boss and workers all use the same database. Sometimes it would be convenient to have a boss start a transaction and then have the workers do their work in the context of that transaction. Each time, I've decided that since I don't know how to do that, I never really wanted to do that. You know how it goes. Today, it would once again be convenient to have an exec'd program do its work in the context of its parent program's transaction. So, before I once again decide that I don't actually want to do that, can you tell me... is it possible? And, would any sane person do it? We're using Linux. All of the programs are written in Ruby 1.8, using the venerable (or is it just ancient?) pgsql library.

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Certificate, login php question ? krb / sso
On 11 Sep 06, at 05:57, Michael Fuhr wrote: On Sun, Sep 10, 2006 at 09:39:59PM -0600, Michael Fuhr wrote: On Mon, Sep 11, 2006 at 02:32:26AM +0200, Jean-Gerard Pailloncy wrote: 1) Is it possible to use the SSL authentication done by Apache with PostgreSQL?

I'm not aware of a way for Apache to proxy PostgreSQL's SSL negotiation with the PHP script back to the HTTP client. If such a capability existed then it could arguably be considered a flaw in SSL, because it would allow a server to impersonate one of its clients to another server, or to hijack a client's secure connection with another server. Secure protocols are designed to prevent such attacks.

The point is to USE AGAIN the authentication done by Apache with PostgreSQL, not DO AGAIN the authentication.

Googling around, I found: mod_auth_krb with "AuthType KerberosV5SaveCredentials". The auth is done by mod_auth_krb and mod_perl is able to use the same ticket for PostgreSQL. It is in the doc of PG. I found a page that presents phpkrb5, which may do the same thing for mod_php: http://www.stacken.kth.se/lists/heimdal-discuss/2003-04/msg00026.html The project is hosted on http://savannah.nongnu.org/projects/phpkrb5/ but is not really up to date (3 years old, and only for php4).

In fact, things may look simple after reading http://archives.postgresql.org/pgsql-php/2004-08/msg00031.php "I'VE DONE IT! THE HOLY GRAIL OF WEB/DB APPS! :) All it takes is this line in your PHP script: putenv("KRB5CCNAME={$_SERVER['KRB5CCNAME']}"); Then pg_connect works :)" but it is not reliable (http://archives.postgresql.org/pgsql-php/2004-08/msg00033.php).

Sorry for the noise, but my question seems to me less and less PostgreSQL centric. One heavy solution may be SSO with Kerberos. Many new questions then... If someone has already done that, I would be glad to have some good URLs.

Pailloncy Jean-Gerard
[GENERAL] Plan for outer joins
Hi,

I was tuning a join on a few tables for a SELECT COUNT(*) query, when I realized that the content of the second table didn't actually matter to the count. So, I figured a LEFT OUTER JOIN would be faster, but... apparently it's not. Shouldn't the planner notice that the right part of SELECT COUNT(*) FROM x LEFT OUTER JOIN y USING (id) isn't relevant for the result and skip the join? Attached are the explain plans for both versions and a few variations (TB wraps it if I paste).

Regards,
-- Alban Hertroys [EMAIL PROTECTED] magproductions b.v. T: ++31(0)534346874 F: ++31(0)534346876 M: I: www.magproductions.nl A: Postbus 416 7500 AK Enschede // Integrate Your World //

explain analyze select count(*) from mm_medical_care_container_table JOIN mm_object USING (number);
                              QUERY PLAN
------------------------------------------------------------------
 Aggregate  (cost=73149.73..73149.73 rows=1 width=0) (actual time=2780.516..2780.517 rows=1 loops=1)
   ->  Hash Join  (cost=2025.33..72927.66 rows=88826 width=0) (actual time=406.166..2774.618 rows=5936 loops=1)
         Hash Cond: (outer.number = inner.number)
         ->  Seq Scan on mm_object  (cost=0.00..28666.63 rows=1651963 width=4) (actual time=0.015..1074.074 rows=892597 loops=1)
         ->  Hash  (cost=1369.26..1369.26 rows=88826 width=4) (actual time=22.724..22.724 rows=0 loops=1)
               ->  Seq Scan on mm_medical_care_container_table  (cost=0.00..1369.26 rows=88826 width=4) (actual time=0.028..13.889 rows=5936 loops=1)
 Total runtime: 2780.599 ms

explain analyze select count(*) from mm_medical_care_container_table LEFT OUTER JOIN mm_object USING (number);
                              QUERY PLAN
------------------------------------------------------------------
 Aggregate  (cost=95099.40..95099.40 rows=1 width=0) (actual time=2919.999..2920.000 rows=1 loops=1)
   ->  Hash Left Join  (cost=40863.54..94877.34 rows=88826 width=0) (actual time=2283.399..2913.955 rows=5936 loops=1)
         Hash Cond: (outer.number = inner.number)
         ->  Seq Scan on mm_medical_care_container_table  (cost=0.00..1369.26 rows=88826 width=4) (actual time=0.021..12.450 rows=5936 loops=1)
         ->  Hash  (cost=28666.63..28666.63 rows=1651963 width=4) (actual time=2263.407..2263.407 rows=0 loops=1)
               ->  Seq Scan on mm_object  (cost=0.00..28666.63 rows=1651963 width=4) (actual time=0.022..1067.016 rows=892597 loops=1)
 Total runtime: 2920.260 ms

I figured maybe the '*' was the culprit, so I tried this as well:

explain analyze select count(mm_medical_care_container_table.*) from mm_medical_care_container_table LEFT OUTER JOIN mm_object USING (number);
                              QUERY PLAN
------------------------------------------------------------------
 Aggregate  (cost=95099.40..95099.40 rows=1 width=4) (actual time=2984.603..2984.603 rows=1 loops=1)
   ->  Hash Left Join  (cost=40863.54..94877.34 rows=88826 width=4) (actual time=2318.856..2978.470 rows=5936 loops=1)
         Hash Cond: (outer.number = inner.number)
         ->  Seq Scan on mm_medical_care_container_table  (cost=0.00..1369.26 rows=88826 width=8) (actual time=0.024..16.904 rows=5936 loops=1)
         ->  Hash  (cost=28666.63..28666.63 rows=1651963 width=4) (actual time=2295.350..2295.350 rows=0 loops=1)
               ->  Seq Scan on mm_object  (cost=0.00..28666.63 rows=1651963 width=4) (actual time=0.018..1074.118 rows=892597 loops=1)
 Total runtime: 2985.101 ms

And here's what I expected to happen:

explain analyze select count(*) from mm_medical_care_container_table;
                              QUERY PLAN
------------------------------------------------------------------
 Aggregate  (cost=1591.33..1591.33 rows=1 width=0) (actual time=17.427..17.428 rows=1 loops=1)
   ->  Seq Scan on mm_medical_care_container_table  (cost=0.00..1369.26 rows=88826 width=0) (actual time=0.021..12.186 rows=5936 loops=1)
 Total runtime: 17.482 ms
Re: [GENERAL] Child program using parent program's transaction?
On Wed, Sep 06, 2006 at 03:21:04PM -0700, Wayne Conrad wrote: I work with a system designed as lots of little cooperating worker programs, with boss programs that... well, boss the worker programs around. Boss and workers all use the same database. snip Today, it would once again be convenient to have an exec'd program do its work in the context of its parent program's transaction. So, before I once again decide that I don't actually want to do that, can you tell me... is it possible? And, would any sane person do it?

The answer is: not really. If you're only fork()ing you can get away with it, as long as you make sure no two processes are accessing the db at the same time. There's a little too much state to transfer a connection from one process to another.

What you could do is have a single process that handles the actual connection to the database, and have the other processes talk to it to do the queries. That way you have complete control over who accesses what in which connection.

Hope this helps,
-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.
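Martijn's gatekeeper idea, where one process owns the connection and everyone else sends it work over a channel, can be sketched in a few lines. This is Python rather than the thread's Ruby, assumes a fork-capable platform (Linux, as in the thread), and uses a plain list as a stand-in for the real database connection; none of the names come from the original system.

```python
# Sketch of the single-gatekeeper pattern: one process "owns" the (stub)
# database connection; boss and workers send queries over a queue and never
# touch the connection themselves, so all work shares one transaction.
import multiprocessing as mp

def gatekeeper(requests, replies):
    transaction = []                  # stub for state held on the one connection
    while True:
        query = requests.get()
        if query is None:             # boss signals: commit and shut down
            replies.put(("COMMIT", transaction))
            return
        transaction.append(query)     # stub: would run the query on the connection
        replies.put(("OK", query))

def run_query(requests, replies, sql):
    requests.put(sql)
    return replies.get()

requests, replies = mp.Queue(), mp.Queue()
proc = mp.Process(target=gatekeeper, args=(requests, replies))
proc.start()
run_query(requests, replies, "UPDATE accounts SET ...")  # one worker's step
run_query(requests, replies, "INSERT INTO audit ...")    # another worker's step
requests.put(None)                                       # boss decides to commit
status, executed = replies.get()
proc.join()
print(status, len(executed))  # COMMIT 2
```

Because only the gatekeeper ever talks to the database, the "whose transaction is it" question disappears: there is exactly one connection and one transaction.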
Re: [GENERAL] Plan for outer joins
On Mon, Sep 11, 2006 at 11:58:56AM +0200, Alban Hertroys wrote: Hi, I was tuning a join on a few tables for a SELECT COUNT(*) query, when I realized that the content of the second table didn't actually matter to the count. So, I figured a LEFT OUTER JOIN would be faster, but... apparently it's not.

Well, it would matter if the outer side of the join could return more than one row. You may know that won't happen, but the database may not.

Have a nice day,
-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
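The point Martijn is making, that the planner cannot drop the join unless the join key is provably unique on the nullable side, is easy to see by counting by hand. A small Python model of LEFT JOIN row counting (the key lists are invented for illustration):

```python
# Why COUNT(*) over a LEFT JOIN can't just count the left table: if the join
# key is not unique on the nullable side, matching left rows are multiplied.
from collections import defaultdict

def left_join_count(left_keys, right_keys):
    matches = defaultdict(int)
    for k in right_keys:
        matches[k] += 1
    # Each left row contributes one result row per match, or one row of NULLs
    # if there is no match at all.
    return sum(max(1, matches[k]) for k in left_keys)

left = [1, 2, 3]
assert left_join_count(left, [1, 2, 3]) == 3  # unique key: same as len(left)
assert left_join_count(left, [1, 1, 2]) == 4  # duplicate key 1 adds a row
```

Only with a uniqueness guarantee on mm_object.number could the count be answered from mm_medical_care_container_table alone, which is the plan Alban expected.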
Re: [GENERAL] Plan for outer joins
Martijn van Oosterhout wrote: snip Well, it would matter if the outer side of the join could return more than one row. You may know that won't happen, but the database may not.

Oh, of course. Thanks.
-- Alban Hertroys
[GENERAL] references/tutorial/tricks on dynamic generation of sql ( plpgsql functions) editing/coding
I'd write a trigger that generates a function. This is done for performance reasons. The function will be called several times and it is a list of delete statements according to the content of a table. The content of the table will seldom change, and it is linked to the creation of other tables (metadata of other tables), and I want to provide a consistent interface to the downstream developers (if any in the future) and me.

Since the code inside the function is getting a bit longer than what I was used to, I'm getting crazy about double quotes, syntax highlight and such. I'm using pg 7.4, so the $$ trick shouldn't work.

Do you have any advice about dynamically generated functions? Starting from a good vi/kate/whatever syntax highlighter to coding tricks to make the process less painful?

Even if this is just the framework, it looks unreadable enough:

create or replace function SP_GarbageCollectionGenerate() returns bool as '
begin
  execute ''create or replace function SP_GarbageCollection() returns bool as ''''begin return null; end;'''' language plpgsql'';  -- what a mess!
  return true;
end;
' language plpgsql;

thx
-- Ivan Sergio Borgonovo http://www.webthatworks.it
Re: [GENERAL] [ANNOUNCE] == PostgreSQL Weekly News - September 10 2006 ==
== PostgreSQL Product News == pgpool-II-1.0.0 is out, now supporting more than two servers and with the new pgpoolAdmin tool written in PHP. http://pgfoundry.org/projects/pgpool/

You dropped the most important feature in pgpool-II: parallel query.
-- Tatsuo Ishii SRA OSS, Inc. Japan
Re: [GENERAL] references/tutorial/tricks on dynamic generation of sql ( plpgsql functions) editing/coding
On 9/11/06, Ivan Sergio Borgonovo wrote: snip Since the code inside the function is getting a bit longer than what I was used to, I'm getting crazy about double quotes, syntax highlight and such. I'm using pg 7.4 so the $$ trick shouldn't work. Do you have any advice about dynamically generated functions? Starting from a good vi/kate/whatever syntax highlighter to coding tricks to make the process less painful?

Although I wrote such functions for a while, I found them to be unmaintainable. As soon as 8.0 came out, I converted everything I had to dollar quoting as quickly as possible. Dollar quoting literally transformed pl/pgsql into an amazingly productive language. I strongly advise you to weigh this against whatever objections you have to upgrading PostgreSQL to a recent version.

merlin
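The quoting pain Ivan and Merlin are talking about grows fast: every level of single-quote nesting doubles every embedded quote. A quick Python illustration of that doubling (the SQL string is invented; PostgreSQL's dollar quoting sidesteps the escaping entirely):

```python
# Each level of single-quote nesting in pre-8.0 plpgsql doubles every embedded
# quote, so quote counts grow exponentially with nesting depth.

def quote_once(sql: str) -> str:
    # Embed `sql` as a '-quoted literal: double its quotes, wrap in quotes.
    return "'" + sql.replace("'", "''") + "'"

inner = "select 'x'"            # 2 quote characters at the innermost level
level1 = quote_once(inner)      # embedded once, e.g. as a function body
level2 = quote_once(level1)     # embedded again, e.g. inside an EXECUTE

print(level1)                   # 'select ''x'''
print(level2)                   # '''select ''''x'''''''
assert level1.count("'") == 6
assert level2.count("'") == 14
```

With dollar quoting the body stays `$$select 'x'$$` at every level (with distinct tags like `$q$...$q$` when nested), which is why Merlin calls the 8.0 upgrade transformative.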
Re: [GENERAL] references/tutorial/tricks on dynamic generation of sql ( plpgsql functions) editing/coding
I'm using pg 7.4 so the $$ trick shouldn't work.

Should work, if you put in enough $$: somebody might backport this for you ;)

How about:

create or replace function SP_GarbageCollectionGenerate() returns bool as '
declare
  v_query text;
begin
  v_query = ''create or replace blah blah :para1 blah blah blah blah :para2'';
  v_query = replace(v_query, '':para1'', quote_literal(value_1));
  v_query = replace(v_query, '':para2'', quote_ident(value_2));
  execute v_query;
  return true;
end;
' language plpgsql;

A bit more readable, I think.

Greetings
Marcin
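Marcin's placeholder-substitution idea can be prototyped outside the database to see what the generated statement will look like. A Python sketch, where `quote_literal`/`quote_ident` are simplified stand-ins for PostgreSQL's built-in functions of the same names, and the template and table name are invented:

```python
# Sketch of building a dynamic SQL string by substituting quoted values into
# named placeholders, mirroring the replace()-based plpgsql approach above.
# quote_literal/quote_ident are simplified versions of PostgreSQL's built-ins.

def quote_literal(value: str) -> str:
    # Double embedded single quotes and wrap in single quotes.
    return "'" + value.replace("'", "''") + "'"

def quote_ident(name: str) -> str:
    # Double embedded double quotes and wrap in double quotes.
    return '"' + name.replace('"', '""') + '"'

def build_query(template: str, params: dict) -> str:
    query = template
    for placeholder, (kind, value) in params.items():
        quoted = quote_literal(value) if kind == "literal" else quote_ident(value)
        query = query.replace(placeholder, quoted)
    return query

template = "DELETE FROM :para2 WHERE name = :para1"
query = build_query(template, {
    ":para1": ("literal", "O'Hara"),
    ":para2": ("ident", "staging_table"),
})
print(query)  # DELETE FROM "staging_table" WHERE name = 'O''Hara'
```

Quoting every substituted value through quote_literal/quote_ident is what keeps the generated statement safe from stray quotes in the data.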
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
Hi Marco Bizzarri and Martijn van Oosterhout,

Thanks for your valuable reply. I am trying to execute all queries from VC++ through the CDatabase::ExecuteSQL(sQueryString) function call, i.e. via programming, not by manually entering query statements. So, in my situation I can construct a string and pass it on to this function to execute the query string. Whatever string I pass to ExecuteSQL() gets executed, but I cannot get any result returned from the passed query string [like OID - 198705, from lo_create()].

Now, can you suggest how to achieve it? And one more clarification: what is the value 131072? How can I get this value? According to the PostgreSQL documentation, they gave C syntax as client-side functions. Should I use those C API calls?

:) Purusothaman A

On 9/11/06, Marco Bizzarri wrote: I will try to explain it with a sample session: this is for creating and writing a blob. From the psql prompt (the statements are the commands, the others are the results):

begin;
BEGIN
SELECT lo_creat(131072);
 lo_creat
----------
   198705
(1 row)

(this is the OID number of the newly created large object)

select lo_open(198705, 131072);
 lo_open
---------
       0
(1 row)

(this is the file handle which you will use in the operations)

SELECT lowrite(0, '');
 lowrite
---------
       4
(1 row)

(you wrote 4 characters into a large object)

select lo_close(0);
 lo_close
----------
        0
(1 row)

(you closed the file)

commit;
COMMIT

In this way, you created a new large object and stored a string of 4 bytes inside of it.

Regards
Marco

On 9/11/06, Purusothaman A wrote: Thanks Martijn van Oosterhout and Marco Bizzarri. But, according to the syntax of the client-side lo_import and lo_export, we should have 2 variables: PGconn (for the established connection) and lobjId (imported file ID in PostgreSQL). I don't know how to do this in SQL statements. Please give me sample client-side SQL statements.

:) Purusothaman A

On 9/11/06, Martijn van Oosterhout wrote: On Mon, Sep 11, 2006 at 03:27:09PM +0530, Purusothaman A wrote: Thanks Martijn van Oosterhout, So, I have to write my own wrapper function upon the functions below. 1. Oid lo_import(PGconn *conn, const char *filename); 2. int lo_export(PGconn *conn, Oid lobjId, const char *filename);

Not sure why you need a wrapper (you didn't say which language you were using), but those functions work exactly like the version you put in the SQL statements, except the filenames are for the client computer with client permissions. Hope this helps,
-- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
-- Marco Bizzarri http://notenotturne.blogspot.com/
Re: [GENERAL] references/tutorial/tricks on dynamic generation of sql ( plpgsql functions) editing/coding
On Sep 11, 2006, at 8:34 AM, Ivan Sergio Borgonovo wrote: Since the code inside the function is getting a bit longer than what I was used to, I'm getting crazy about double quotes, syntax highlight and such. I'm using pg 7.4 so the $$ trick shouldn't work.

If you can't update to 8.0 or later, you might want to look at pgEdit. It can correctly syntax highlight pl/pgsql functions with nested quotes.

John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL
Re: [GENERAL] Certificate, login php question ? krb / sso
On Mon, Sep 11, 2006 at 10:44:18AM +0200, Jean-Gerard Pailloncy wrote: On 11 Sep 06, at 05:57, Michael Fuhr wrote: If such a capability existed then it could arguably be considered a flaw in SSL because it would allow a server to impersonate one of its clients to another server or to hijack a client's secure connection with another server. Secure protocols are designed to prevent such attacks. The point is to USE AGAIN the authentication done by Apache with PostgreSQL, not DO AGAIN the authentication.

To use again the client's authentication you'd have to do again the authentication. If the PHP script makes an SSL connection to PostgreSQL, then that connection is independent of the HTTP client's SSL connection with the web server. If the server in the new connection (PostgreSQL) sends an SSL CertificateRequest message, then the client (the PHP script) needs access to a certificate and the corresponding private key to reply with Certificate and CertificateVerify messages. The HTTP client's certificate would be available, but that client's private key would not, so the PHP script would have no way to build a valid CertificateVerify message.

Googling around, I found: mod_auth_krb with "AuthType KerberosV5SaveCredentials". The auth is done by mod_auth_krb and mod_perl is able to use the same ticket for PostgreSQL. It is in the doc of PG.

You didn't mention Kerberos before; you said you wanted to reuse the client's SSL authentication. Kerberos works differently than SSL; I'd have to research the links you mentioned to see if reusing the client's credentials would indeed be possible using Kerberos.
-- Michael Fuhr
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
On 9/11/06, Purusothaman A wrote: snip Whatever string I pass to the ExecuteSQL() function gets executed, and I cannot get any result returned from the passed query string [like OID - 198705, from lo_create()].

I'm not an expert in VC++. I think you should obtain some sort of ResultSet object. You could then check that.

Now, can you suggest how to achieve it? And one more clarification: what is the value 131072? How can I get this value?

That is 0x20000 in hexadecimal. Check the large object interface in the PostgreSQL documentation (C API).

According to the PostgreSQL documentation, they gave C syntax as client-side functions. Should I use those C API calls?

If you're working from inside C, you can check: http://www.postgresql.org/docs/8.1/static/lo-interfaces.html

Regards
Marco
-- Marco Bizzarri http://notenotturne.blogspot.com/
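The magic number 131072 discussed above is one of the large-object open-mode flags from libpq's libpq-fs.h header. Spelled out (values as commonly defined there; worth double-checking against your installed headers):

```python
# Large-object open-mode flags as defined in libpq's libpq-fs.h.
INV_WRITE = 0x20000   # open for writing
INV_READ  = 0x40000   # open for reading

# The psql session earlier used lo_creat(131072) / lo_open(oid, 131072):
assert INV_WRITE == 131072               # so 131072 means "write mode"
assert INV_READ | INV_WRITE == 393216    # combined read/write mode
print(hex(INV_WRITE))  # 0x20000
```

So `lo_open(198705, 131072)` opens the large object for writing; passing `INV_READ | INV_WRITE` would open it for both.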
Re: [GENERAL] references/tutorial/tricks on dynamic generation of
On Mon, 11 Sep 2006 09:36:28 -0400 John DeSoi wrote:

No matter how many $$ I put in my sql code, it won't work in 7.4 ;) And I bet (ooh, not seriously, since it's not the most proficient way to invest my $$) pgsql 8.X has been backported in sarge and it should already be in etch (which should be ready before I'll finish my pgsql project). Anyway, I'm quite conservative in administration-related stuff... since I'm not a sysadmin and I don't want to rely on anything that gives me a feeling of not being under my control.

On Sep 11, 2006, at 8:34 AM, Ivan Sergio Borgonovo wrote: snip I'm using pg 7.4 so the $$ trick shouldn't work.

If you can't update to 8.0 or later, you might want to look at pgEdit. It can correctly syntax highlight pl/pgsql functions with nested quotes.

I came across your website just a few minutes ago looking exactly for that. Second thing I did was feel solidarity towards your brave choice of using pg with Drupal. Third thing I did was to aptitude search pgedit in my sid, with no luck :( At a second look I saw your tool is not available for Linux.

Syntax highlighting would help. Any other technique/suggestion to code dynamically generated functions?

thx
-- Ivan Sergio Borgonovo http://www.webthatworks.it
[GENERAL] Groups, permissions and visibility
Suppose I have two groups of users and want to keep some sensitive information "hidden" from one group, as described in the simple example below. When connecting as user 'visitor' via pgAdmin, I am surprised that I can easily browse the structure of tables and the code of functions owned by 'trusted'. I can't edit the data in the tables, but ideally I would like to restrict visibility of the objects as well. Am I missing something simple?

-- create two groups and a couple of users
create group trusted;
create user the_boss in group trusted password 'secret';
create group guests;
create user visitor in group guests password 'easy';

drop database test;
-- create a test database
CREATE DATABASE test ENCODING = 'UTF8';
-- ensure database is owned by trusted group
ALTER DATABASE test OWNER TO trusted;
-- add required languages
CREATE PROCEDURAL LANGUAGE plpgsql;

-- connect to the new database
\connect test

-- create a schema for the trusted group
CREATE SCHEMA confidential AUTHORIZATION trusted;
ALTER SCHEMA confidential OWNER TO trusted;

-- add some tables to confidential schema
CREATE TABLE confidential.application (
    application_key character varying(32) NOT NULL PRIMARY KEY,
    application_type character varying(16) NOT NULL
);
ALTER TABLE confidential.application OWNER TO trusted;

-- add some test functions
CREATE FUNCTION confidential.get_applications() RETURNS refcursor AS $$
declare
    r refcursor;
begin
    open r for select * from application;
    return (r);
end;
$$ LANGUAGE plpgsql;
ALTER FUNCTION confidential.get_applications() OWNER TO trusted;

revoke all on function confidential.get_applications() from public;
revoke all on schema confidential from public;
[GENERAL] Database migration and redesign
I've been tasked with the unenviable job of migrating a MySQL 4.0 database to something more usable (namely, PostgreSQL 8). MySQL 4.0 doesn't even support basic things like subqueries, and in order to emulate the effects that RULEs, TRIGGERs and VIEWs bring, they had been using PHP scripts to replicate and build tables across the database (not from one server to another, but within the DB itself). The database was built across 5 separate schemata simply to organize the 50-odd tables, and all the tables are using the MyISAM engine, which means no transactions, no row-level locking, and no foreign key constraints. Yeah. It's ugly. You should see the front-end.

My question relates to primary keys. The vast majority of tables have a primary key on a single char or varchar field. Is it considered better practice to create a serial type id key to use as the primary key for the table, and then create a unique index on the char and varchar fields? Should foreign keys reference the new primary id or the old unique key? What about compound [primary] keys?

Also, any suggestions for good DB design books would be appreciated. I no longer have any good DB design books, and I don't know what's good anymore.

-- Brandon Aiken CS/IT Systems Engineer

Confidentiality Notice: This email, including attachments, may include confidential and/or proprietary information, and may be used only by the person or entity to which it is addressed. If the reader of this email is not the intended recipient or his/her authorized agent, the reader is hereby notified that any dissemination, distribution or copying of this email is prohibited. If you have received this email in error, please notify the sender by replying to this message and deleting this email immediately.
Re: [GENERAL] Database migration and redesign
Since you're in the process of modifying a database, you could find this useful: Agile Database Techniques, by Scott Ambler.

Regards
Marco

On 9/11/06, Brandon Aiken wrote: snip

-- Marco Bizzarri http://notenotturne.blogspot.com/
Re: [GENERAL] Database migration and redesign
On 9/11/06, Brandon Aiken wrote: My question relates to primary keys. The vast majority of tables have a primary key on a single char or varchar field. Is it considered better practice to create a serial type id key to use as the primary key for the table, and then create a unique index on the char and varchar fields? Should foreign keys reference the new primary id or the old unique key? What about compound [primary] keys?

This is a somewhat controversial topic. I personally am not a big fan of Ambler or his approach to database design. This is more or less the surrogate/natural key debate, which is a perennial flamewar in the database world. I tend to think natural keys are generally superior, but surrogates are useful in some situations. Automatic use of an ID column in every table leads to lazy thinking and overcomplicated designs. However, at times they can really optimize your database. So I will lay down some extremes and leave it up to you to find the middle ground.

A trivial case of when not to use a serial key is like this:

create table sex(id serial, sex char(1));
insert into sex(sex) select 'M' union all select 'F';

The reason is obvious: you add a join for no reason whenever you need to know the sex, albeit a simple one, and expand the 1-character type to an int type.

A somewhat more interesting case is:

create table email(id serial, email text);

In this case, while it may seem like a waste to store the full email in every table that references it, you are optimizing the join out in such cases, which can be a big win, and since there are no other properties of the email, the email table only serves the purpose of maintaining relational integrity, iow no duplicates. In the event the email changes, we allow RI to clean up the other tables... an integer proxy would be (at least to me) a meaningless abstraction of the email.

A case of when not to use a natural key for relating is a bit more complex: sometimes you just get sick and tired of writing the key fields over and over, or you have measured and determined the natural key to be wasteful in terms of index performance. Another reason is if the p-key data changes frequently and the RI mechanism is too expensive. In this case I would advise you to strictly key on the natural key and make a candidate, serial key:

create table foo (
    nat_key1 text,
    nat_key2 int,
    foo_id serial,
    [...]
    primary key(nat_key1, nat_key2),
    unique(foo_id) -- candidate unique key
);

This is my middle ground: there are highly situational cases where an id column is a win for practical reasons. However, a strong design around natural keys tends to make you think the problem through much more carefully and leads to a tighter database.

Also, any suggestions for good DB design books would be appreciated. I no longer have any good DB design books, and I don't know what's good anymore.

http://www.amazon.com/exec/obidos/ASIN/0201485559/databasede095-20?creative=327641camp=14573adid=07TEH0J3FS9SYN309QMSlink_code=as1

merlin
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
Hi Martijn van Oosterhout, Thanks for your valuable reply. Yes, I misunderstood the documentation. Then, I searched the PostgreSQL documentation for equivalent client-side functions. But I found only C programming APIs instead of SQL functions. I want functions which are usable in SQL statements. Can you give an example or equivalent client-side function syntax? Thanks in advance. :) Purusothaman A On 9/9/06, Martijn van Oosterhout kleptog@svana.org wrote: On Sat, Sep 09, 2006 at 05:19:26PM +0530, Purusothaman A wrote: I usually log in to the postgresql server with an admin username. But whatever it is, according to the documentation, if the client machine's user has write access permission, this query should successfully be executed. Read carefully, you are using the server-side functions, therefore: These two functions read and write files in the server's file system, using the permissions of the database's owning user. What your client user is is irrelevant. Perhaps you actually want to use the client-side functions? Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFFAq41IB7bNG8LQkwRAsptAKCDSWuRI5T+JMqpWEQt7r69kJE6CwCfVTOOmOqGDNQBTZZDs4WVf6NM+wQ==kT4s -END PGP SIGNATURE-
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
Short answer: you can't. The database server can obviously only access things on the database server. Since SQL is also executed on the server, no SQL statements can access stuff on the client computer. If you want to load a file on the client side to the server, you need to open the file and copy it over using the lo_ functions. There's no way (IIRC) to access the contents of large objects from just SQL. Hope this helps, On Mon, Sep 11, 2006 at 02:45:10PM +0530, Purusothaman A wrote: Hi Martijn van Oosterhout, Thanks for your valuable reply. Yes, I misunderstood the documentation. Then, I searched the PostgreSQL documentation for equivalent client-side functions. But I found only C programming APIs instead of SQL functions. I want functions which are usable in SQL statements. Can you give an example or equivalent client-side function syntax? Thanks in advance. :) Purusothaman A On 9/9/06, Martijn van Oosterhout kleptog@svana.org wrote: On Sat, Sep 09, 2006 at 05:19:26PM +0530, Purusothaman A wrote: I usually log in to the postgresql server with an admin username. But whatever it is, according to the documentation, if the client machine's user has write access permission, this query should successfully be executed. Read carefully, you are using the server-side functions, therefore: These two functions read and write files in the server's file system, using the permissions of the database's owning user. What your client user is is irrelevant. Perhaps you actually want to use the client-side functions? Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. 
signature.asc Description: Digital signature
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
Thanks Martijn van Oosterhout, So, I have to write my own wrapper function on top of the functions below. 1. Oid lo_import(PGconn *conn, const char *filename); 2. int lo_export(PGconn *conn, Oid lobjId, const char *filename); Am I right? :) Purusothaman A On 9/11/06, Martijn van Oosterhout kleptog@svana.org wrote: Short answer: you can't. The database server can obviously only access things on the database server. Since SQL is also executed on the server, no SQL statements can access stuff on the client computer. If you want to load a file on the client side to the server, you need to open the file and copy it over using the lo_ functions. There's no way (IIRC) to access the contents of large objects from just SQL. Hope this helps, On Mon, Sep 11, 2006 at 02:45:10PM +0530, Purusothaman A wrote: Hi Martijn van Oosterhout, Thanks for your valuable reply. Yes, I misunderstood the documentation. Then, I searched the PostgreSQL documentation for equivalent client-side functions. But I found only C programming APIs instead of SQL functions. I want functions which are usable in SQL statements. Can you give an example or equivalent client-side function syntax? Thanks in advance. :) Purusothaman A On 9/9/06, Martijn van Oosterhout kleptog@svana.org wrote: On Sat, Sep 09, 2006 at 05:19:26PM +0530, Purusothaman A wrote: I usually log in to the postgresql server with an admin username. But whatever it is, according to the documentation, if the client machine's user has write access permission, this query should successfully be executed. Read carefully, you are using the server-side functions, therefore: These two functions read and write files in the server's file system, using the permissions of the database's owning user. What your client user is is irrelevant. Perhaps you actually want to use the client-side functions? Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. 
-BEGIN PGP SIGNATURE- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFFAq41IB7bNG8LQkwRAsptAKCDSWuRI5T+JMqpWEQt7r69kJE6CwCfVTOO mOqGDNQBTZZDs4WVf6NM+wQ= =kT4s -END PGP SIGNATURE- --Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate.-BEGIN PGP SIGNATURE-Version: GnuPG v1.4.1 (GNU/Linux)iD8DBQFFBS6LIB7bNG8LQkwRAvI6AJ9OW7cxZiJR0QsEsSOwkYHKkYDZ6gCbBrDAGVPAoBeOhE+2toFa2zNbN3M==9W8I-END PGP SIGNATURE-
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
On Mon, Sep 11, 2006 at 03:27:09PM +0530, Purusothaman A wrote: Thanks Martijn van Oosterhout, So, I have to write my own wrapper function upon the functions below. 1. Oid lo_import(PGconn *conn, const char *filename); 2. int lo_export(PGconn *conn, Oid lobjId, const char *filename); Not sure why you need a wrapper (you didn't say which language you were using) but those functions work exactly like the version you put in the SQL statements, except the filenames are for the client computer with client permissions. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. signature.asc Description: Digital signature
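As an aside, if the client happens to be psql itself, its built-in large object backslash commands already perform client-side transfers, so no wrapper is needed at all. A sketch (the file paths are hypothetical, and the OID printed will differ):

```sql
-- run at the psql prompt; the files live on the *client* machine
\lo_import '/tmp/photo.jpg'            -- prints the OID of the new large object
\lo_export 16391 '/tmp/photo_copy.jpg' -- writes large object 16391 to the client
```

Under the hood these meta-commands call the libpq lo_import()/lo_export() functions discussed above, which is why they behave differently from the identically named server-side SQL functions.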
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
Actually, you can use the lo_create, lo_open, lo_read and lo_write functions directly in your SQL code, instead of having to write a wrapper function. It is not simple, but it can be done. I don't have a general example: we have done this in a couple of our projects, one being publicly available (www.paflow.it): look in the Documents.py source for the lo_xxx queries. This is independent of whether client and server are running on the same machine. Regards Marco On 9/11/06, Purusothaman A [EMAIL PROTECTED] wrote: Thanks Martijn van Oosterhout, So, I have to write my own wrapper function on top of the functions below. 1. Oid lo_import(PGconn *conn, const char *filename); 2. int lo_export(PGconn *conn, Oid lobjId, const char *filename); Am I right? :) Purusothaman A On 9/11/06, Martijn van Oosterhout kleptog@svana.org wrote: Short answer: you can't. The database server can obviously only access things on the database server. Since SQL is also executed on the server, no SQL statements can access stuff on the client computer. If you want to load a file on the client side to the server, you need to open the file and copy it over using the lo_ functions. There's no way (IIRC) to access the contents of large objects from just SQL. Hope this helps, On Mon, Sep 11, 2006 at 02:45:10PM +0530, Purusothaman A wrote: Hi Martijn van Oosterhout, Thanks for your valuable reply. Yes, I misunderstood the documentation. Then, I searched the PostgreSQL documentation for equivalent client-side functions. But I found only C programming APIs instead of SQL functions. I want functions which are usable in SQL statements. Can you give an example or equivalent client-side function syntax? Thanks in advance. :) Purusothaman A On 9/9/06, Martijn van Oosterhout kleptog@svana.org wrote: On Sat, Sep 09, 2006 at 05:19:26PM +0530, Purusothaman A wrote: I usually log in to the postgresql server with an admin username. 
But whatever its, according to documentation, if client machine's user has write access permission, this query should successfully be executed. Read carefully, you are using the server-side functions, therefore: These two functions read and write files in the server's file system, using the permissions of the database's owning user. What your client user is is irrelevent. Perhaps you actually want to use the client-side functions? Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFFAq41IB7bNG8LQkwRAsptAKCDSWuRI5T+JMqpWEQt7r69kJE6CwCfVTOO mOqGDNQBTZZDs4WVf6NM+wQ= =kT4s -END PGP SIGNATURE- -- Martijn van Oosterhout kleptog@svana.orghttp://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFFBS6LIB7bNG8LQkwRAvI6AJ9OW7cxZiJR0QsEsSOwkYHKkYDZ6gCbBrDA GVPAoBeOhE+2toFa2zNbN3M= =9W8I -END PGP SIGNATURE- -- Marco Bizzarri http://notenotturne.blogspot.com/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
Thanks Martijn van Oosterhout and Marco Bizzarri. But, according to the syntax of the client-side lo_import and lo_export, we need two variables: a PGconn (for the established connection) and a lobjId (the imported file's ID in PostgreSQL). I don't know how to do this in SQL statements. Please give me sample client-side SQL statements. :) Purusothaman A On 9/11/06, Martijn van Oosterhout kleptog@svana.org wrote: On Mon, Sep 11, 2006 at 03:27:09PM +0530, Purusothaman A wrote: Thanks Martijn van Oosterhout, So, I have to write my own wrapper function on top of the functions below. 1. Oid lo_import(PGconn *conn, const char *filename); 2. int lo_export(PGconn *conn, Oid lobjId, const char *filename); Not sure why you need a wrapper (you didn't say which language you were using) but those functions work exactly like the version you put in the SQL statements, except the filenames are for the client computer with client permissions. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. -BEGIN PGP SIGNATURE- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFFBTR4IB7bNG8LQkwRAvhPAJ9KHp9DO1EjPqbkGwBdaSaKx5J90wCfQtZ8ijq1n/SgAlwIiEgDI6zfICg==Xk7N-END PGP SIGNATURE-
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
I will try to explain it with a sample session; this is for creating and writing a blob, from the psql prompt (the SQL statements are the commands I typed; the table-shaped output is the result).

begin ;
BEGIN

SELECT lo_creat(131072) ;
 lo_creat
----------
   198705
(1 row)

(this is the OID of the newly created large object)

select lo_open(198705, 131072) ;
 lo_open
---------
       0
(1 row)

(this is the file handle which you will use in the operations)

SELECT lowrite(0, '');
 lowrite
---------
       4
(1 row)

(you wrote 4 characters into the large object)

select lo_close(0);
 lo_close
----------
        0
(1 row)

(you closed the handle)

commit ;
COMMIT

In this way, you created a new large object, and stored a string of 4 bytes inside of it. Regards Marco On 9/11/06, Purusothaman A [EMAIL PROTECTED] wrote: Thanks Martijn van Oosterhout and Marco Bizzarri. But, according to the syntax of the client-side lo_import and lo_export, we need two variables: a PGconn (for the established connection) and a lobjId (the imported file's ID in PostgreSQL). I don't know how to do this in SQL statements. Please give me sample client-side SQL statements. :) Purusothaman A On 9/11/06, Martijn van Oosterhout kleptog@svana.org wrote: On Mon, Sep 11, 2006 at 03:27:09PM +0530, Purusothaman A wrote: Thanks Martijn van Oosterhout, So, I have to write my own wrapper function on top of the functions below. 1. Oid lo_import(PGconn *conn, const char *filename); 2. int lo_export(PGconn *conn, Oid lobjId, const char *filename); Not sure why you need a wrapper (you didn't say which language you were using) but those functions work exactly like the version you put in the SQL statements, except the filenames are for the client computer with client permissions. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. 
-BEGIN PGP SIGNATURE- Version: GnuPG v1.4.1 (GNU/Linux) iD8DBQFFBTR4IB7bNG8LQkwRAvhPAJ9KHp9DO1EjPqbkGwBdaSaKx5J90wCfQtZ8 ijq1n/SgAlwIiEgDI6zfICg= =Xk7N -END PGP SIGNATURE- -- Marco Bizzarri http://notenotturne.blogspot.com/ ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
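For reference, the magic numbers in Marco's session are, as far as I can tell, the inversion mode flags from libpq's libpq-fs.h header; combining them opens the object for both reading and writing (the OID is the one from the session above):

```sql
-- INV_WRITE = 131072 (0x20000), INV_READ = 262144 (0x40000), per libpq-fs.h
select lo_open(198705, 131072 | 262144);  -- open the large object read/write
```

Using the named constants in client code (rather than bare numbers in SQL) makes the intent much clearer.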
Re: [GENERAL] [ADMIN] Problem with lo_export() and lo_import() from remote machine.
On Mon, Sep 11, 2006 at 12:56:11PM +0200, Marco Bizzarri wrote: I will try to explain it with a sample session: this is for creating and writing a blob. Oh, I was looking for lo_read/lo_write, which don't exist. It's loread/lowrite. Inconsistent naming is irritating. It doesn't help the OP's problem with lo_import/lo_export, though. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ From each according to his ability. To each according to his ability to litigate. signature.asc Description: Digital signature
Re: [GENERAL] Database migration and redesign
Excellent, I managed to find one of the major sticking points all by myself! This is exactly what I was looking for. Thanks! There seems to be a lot of that in the DB world. Practical vs theoretical. Or pragmatic vs strict. It seems to be whether you came from a math background -- in which case you're concerned with logical data sets in the most effective theory possible -- or from a software engineering background -- in which case you're concerned about the usability and performance of the database software, particularly in how other applications reference and access the database. There are advantages to both schools. A highly theoretical database can also be highly normalized and therefore have very agile data models built on top of them. However, the practical limitation of computerized relational databases means that they can't implement all features of the relational database model very well or very efficiently. Additionally, while the logical models built on top of the theoretical DB structure can be very agile, designing those models is very complex due to the complex nature of the highly normalized relational structure involved. If you have to do a multiple nested join of 10 tables just to run what will be a query of the most basic object in the final app, you've probably normalized way too far. I think I can draw some lines of distinction now, though. There are some places where the index is just a means to relate the otherwise unrelated tables (if that makes sense), and sometimes where using natural keys will save several table joins on many queries. This was what I was seeing, and I was wondering if it was acceptable to mix it up and do both where it seems to make the most sense. My next task is a bit more difficult simply to define what the relationship needs to be. The app is business management software that manages, among other things, Jobs and Quotes. The relationship between Jobs and Quotes is a little odd. 
Quotes are sent to customers, and if those customers accept the Quote then a Job is created and the Quote is assigned to it. So Quotes exist without Jobs. Additionally, customers will often request additional services after the original Quote and these services get Quoted in the same way, so multiple Quotes can be assigned to a single Job. It gets worse. We use Internal Jobs for cost tracking, and these Jobs do not have Quotes at all. Also, there are some Jobs that get 'verbal' quotes (that is, quotes given outside the normal quoting system). So Jobs exist without Quotes. So, one Job can be associated with many Quotes, so Quote is the child of the relationship even though it gets created first. There can be 0 or 1 parents, and 0, 1, or many children in any combination. I can simply define foreign keys normally and use NULL values where no relationship exists, but isn't there a better way to do this? Allowing NULLs is one of the problems with many of these tables. Should I create another table with two fields, one for the Quote number and one for the Job number, and then have those two fields comprise a compound primary key (and Quote having a unique constraint for itself as well)? That way I don't think I can get logically orphaned children since both tables are parents to this third table. Is that how it's supposed to be done? There are several places across the DB where this style relationship occurs, and I'd like to try to conform to best practices (especially since the last guy managed to miss just about every single one). -- Brandon Aiken CS/IT Systems Engineer -Original Message- From: Merlin Moncure [mailto:[EMAIL PROTECTED] Sent: Monday, September 11, 2006 1:08 PM To: Brandon Aiken Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] Database migration and redesign On 9/11/06, Brandon Aiken [EMAIL PROTECTED] wrote: My question relates to primary keys. The vast majority of tables have a primary key on a single char or varchar field. 
Is it considered better practice to create a serial type id key to use as the primary key for the table, and then create a unique index on the char and varchar fields? Should foreign keys reference the new primary id or the old unique key? What about compound [primary] keys? This is a somewhat controversial topic. I personally am not a big fan of Ambler or his approach to database design. This is more or less the surrogate/natural key debate which is a perennial flamewar in the database world. I tend to think natural keys are generally superior but surrogates are useful in some situations. Automatic use of ID column in every table leads to lazy thinking and overcomplicated designs. However at times they can really optimize your database. So I will lay down some extremes and leave it up to you to find the middle ground. A trivial case of when not to use a serial key is like this: create table sex(id serial, sex char(1)); insert into sex(sex) select 'M' union all select 'F'; The
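For the Job/Quote relationship Brandon describes, the junction-table idea he asks about might look like this (all table and column names are invented for illustration):

```sql
create table job_quote (
    job_no   integer not null references job (job_no),
    quote_no integer not null references quote (quote_no),
    primary key (job_no, quote_no),
    unique (quote_no)   -- a quote can be assigned to at most one job
);
```

Jobs without quotes and quotes without jobs simply have no row in job_quote, so neither parent table needs a nullable foreign key, and a logically orphaned link is impossible because both columns are themselves foreign keys.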
Re: [GENERAL] plz unsubscribe me
You can eliminate such things by checking the Subject line for Re: AW: SV: and such.

:0
* ! ^Subject:.*Re:
* ^Subject:.*(subscribe|subscribel|suscribe)
/dev/null

... which still doesn't cover RE translated into most of the world's languages, which I'm sure occasionally pop up from people who use localized mail clients. And of course it can't ever cover the legitimate usages of unsubscribe, which could be common for example when you're speaking about a (slony or other) replication node... Cheers, Csaba. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Need Help w/ Timestamps
Hi! I moved all of our PostgreSQL DBs from Windows to Linux last weekend. Everything went well and performance is great, BUT there is one issue that I need to solve. My problem is that the precision for timestamps on Linux is greater than on Windows. That would be fine, but MS Access cannot handle it. E.g.: Windows Timestamp: 2006-09-08 15:25:42.332 Linux Timestamp: 2006-09-09 21:25:06.947069 Is there any way (even somehow through the ODBC driver) to restrict the timestamp accuracy to M$ Win accuracy? Thanks for your help! -Enrico ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] off topic - web shop
Hi. We are looking to open a small web shop. I looked around to see if there are any open source web shops. Can anyone recommend any web shop system (free or non-free)? thanks stig ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] pg_dump and cluster
Thanks Alvaro! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Database design and triggers...
Hello Again, Yes, you're right. What I am talking about is a materialized view. Since I'm not sure yet which would be best (materialized view vs. normal view), I will first try a normal view, and if it turns out to be too slow I can always switch to a materialized view. Thanks! ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Need Help w/ Timestamps
On Mon, 11 Sep 2006 10:05:14 -0500 Enrico Riedel [EMAIL PROTECTED] wrote: My problem is, that the precision for timestamps in Linux is greater than in Windows. That seems to be fine, but MS Access cannot handle it. E.g.: Windows Timestamp: 2006-09-08 15:25:42.332 Linux Timestamp: 2006-09-09 21:25:06.947069 Is there any way (even somehow through the ODBC driver) to restrict the timestamp accuracy to M$ Win accuracy? http://www.postgresql.org/docs/7.4/static/functions-datetime.html#FUNCTIONS-DATETIME-TRUNC ??? -- Ivan Sergio Borgonovo http://www.webthatworks.it ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
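A quick sketch of the date_trunc approach from the linked page, applied per query (the column name here is hypothetical):

```sql
-- truncate to millisecond precision, matching the Windows-side values
select date_trunc('milliseconds', ts_column) from mytable;
```

This fixes individual queries; changing the column's declared precision (or the view Access reads from) would avoid repeating it everywhere.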
Re: [GENERAL] Need Help w/ Timestamps
On Mon, 2006-09-11 at 10:05, Enrico Riedel wrote: Hi! I moved all of our PostGRE DBs from Windows to Linux last weekend. Everything went well, performance is great, BUT there is one issue that I need to solve. My problem is, that the precision for timestamps in Linux is greater than in Windows. That seems to be fine, but MS Access cannot handle it. E.g.: Windows Timestamp: 2006-09-08 15:25:42.332 Linux Timestamp: 2006-09-09 21:25:06.947069 Is there any way (even somehow through the ODBC driver) to restrict the timestamp accuracy to M$ Win accuracy? select now()::timestamp(3); ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
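If you would rather fix this once in the schema than casting in every query, it should also be possible to cap the precision on the column itself (table and column names are hypothetical; ALTER COLUMN TYPE needs PostgreSQL 8.0 or later):

```sql
-- store at most 3 fractional digits, matching what MS Access can handle
alter table mytable alter column ts_column type timestamp(3);
```

Existing values are rounded to the new precision when the column is rewritten.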
[GENERAL] Superuser lost access to particular database
I have one database, owned by user1, in which as of 4 days ago the superuser, pgsql, can't see any tables. I noticed my pg_dumpall runs from 4 days ago were stuck. Upon research I discovered that if I log in as the superuser to the problem database, it cannot see any of the tables owned by the regular user. The superuser is able to see system tables with \dS, but none of the regular ones with \d. If I log in as 'user1' all the tables are there. I tried grant all on pgsql to database mydb, but that did not help. Also tried to do a grant for a particular table, but got an error that it was not found. Tried a pg_dump as the database owner, but it didn't work. Basically I have this database that only the DB owner can use, the postgresql superuser can't see any tables, and it is freezing the pg_dumpall process. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Superuser lost access to particular database
Francisco Reyes [EMAIL PROTECTED] writes: I have one database owned by user1 which as of 4 days ago the superuser, pgsql, can't see any tables. Transaction ID wraparound ... I take it this is not PG 8.1? As long as it's at least 7.4, vacuuming the system catalogs should help. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
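A sketch of what Tom suggests, run while connected to the affected database as a superuser (a plain database-wide VACUUM also covers the catalogs):

```sql
-- vacuum the system catalogs explicitly
vacuum pg_catalog.pg_class;
vacuum pg_catalog.pg_attribute;
vacuum pg_catalog.pg_proc;
```

On 8.1 and later, autovacuum is the usual safeguard against transaction ID wraparound; on older releases a regular database-wide VACUUM by the administrator serves the same purpose.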
Re: [GENERAL] Superuser lost access to particular database
Tom Lane writes: Francisco Reyes [EMAIL PROTECTED] writes: I have one database owned by user1 which as of 4 days ago the superuser, pgsql, can't see any tables. After further research I found that I could not see the tables because I have them in different schemas, and the superuser didn't have them in the path. However, the pg_dumpall is still freezing in that database. Transaction ID wraparound ... I take it this is not PG 8.1? 8.1.3, compiled from ports on FreeBSD 6.1-STABLE. select version(); version PostgreSQL 8.1.3 on i386-portbld-freebsd6.0, compiled by GCC cc (GCC) 3.4.4 [FreeBSD] 20050518 As long as it's at least 7.4, vacuuming the system catalogs should help. Vacuum the database in question? From that database I tried vacuum, vacuum analyze and vacuum full. Tried both as superuser and as the user that owns the database. Also tried running: vacuumdb -azv So the problem is only doing the backup. I am able to see the tables by doing \d schema.* The database in question is fairly small, so if I could somehow do a backup I could drop it and reload it. Trying to pg_dump as superuser or as the database owner freezes. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Superuser lost access to particular database
[EMAIL PROTECTED] writes: This may sound a bit weird, but have you tried logging in as user1 and then granting the permission's to the superuser? Found that superuser could not see them because I had the tables in two schemas which the superuser did not have path to. Doing \dt schema.* I was able to see them. Backup still freezing though. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings