Re: [GENERAL] SUBSTRING performance for large BYTEA
On Sat, Aug 18, 2007 at 12:20:42PM -0400, Tom Lane wrote:
> Vance Maverick <[EMAIL PROTECTED]> writes:
>> My question is about performance in the postgres server. When I execute
>> SELECT SUBSTRING (my_bytea FROM ? FOR ?) FROM my_table WHERE id = ?,
>> does it fetch the whole BYTEA into memory? Or does it access only the
>> pages that contain the requested substring?
>
> Recent releases will do what you want if the column has been marked SET
> STORAGE EXTERNAL (before storing anything in it...) See the ALTER TABLE
> reference page.

Ah, thanks, good to know! "Recent releases" seems to mean at least as far
back as 8.1, going by the docs.

Now, to convert an existing bytea column I would need to add a new bytea
column with SET STORAGE EXTERNAL, move the data from the old column to the
new column, remove the old column, and give the new column the original
name, correct? Or is there an easier way?

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346

---(end of broadcast)---
TIP 4: Have you searched our list archives?
       http://archives.postgresql.org/
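Spelled out, the column-swap conversion Karsten describes might look like the following sketch. The table and column names (`my_table`, `my_bytea`) are hypothetical, borrowed from the original question; this is one way to do it, not an official recipe:

```sql
BEGIN;
-- add the replacement column and mark it EXTERNAL *before* filling it
ALTER TABLE my_table ADD COLUMN my_bytea_new bytea;
ALTER TABLE my_table ALTER COLUMN my_bytea_new SET STORAGE EXTERNAL;
-- copy the data; the new values are stored under the EXTERNAL strategy
UPDATE my_table SET my_bytea_new = my_bytea;
-- swap the columns
ALTER TABLE my_table DROP COLUMN my_bytea;
ALTER TABLE my_table RENAME COLUMN my_bytea_new TO my_bytea;
COMMIT;
```

Note the UPDATE rewrites every row, so a VACUUM afterwards is advisable, as discussed further down in this thread.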
Re: [GENERAL] SUBSTRING performance for large BYTEA
On Sat, Aug 18, 2007 at 12:49:09PM -0400, Tom Lane wrote:
> "Joshua D. Drake" <[EMAIL PROTECTED]> writes:
>> Should we consider setting storage external by default for the type?
>
> No. That would be counterproductive for the more typical case of bytea
> values in the range of some-small-number-of-kilobytes. Or at least I
> think that's more typical than values that are so large you have to go
> out of your way to fetch them in chunks.

Would it be feasible to add an ALTER TABLE mode

	... SET STORAGE externally-extended CUTOFF <size> ...

where <size> is the user-configurable size of the column data at which
PostgreSQL switches from the extended to the external storage strategy?
That way large bytea values would be chunkable while smaller ones
wouldn't, at the discretion of the DBA.

Karsten
Re: [GENERAL] SUBSTRING performance for large BYTEA
On Sat, Aug 18, 2007 at 10:23:42AM -0700, Joshua D. Drake wrote:
>>> SET STORAGE EXTERNAL (before storing anything in it...) See the ALTER
>>> TABLE reference page.
>> Now, to convert an existing bytea column I would need to add a new
>> bytea column with SET STORAGE EXTERNAL, move the data from the old
>> column to the new column, remove the old column, and give the new
>> column the original name, correct?
>
> Set the existing column to storage external, then update the existing
> column with its existing data: UPDATE foo SET bar = bar;
>
> Now the downside to this is you are going to create a dead row for
> every update, which means a vacuum (probably full) afterward, but the
> way you describe above will do the same thing as well.

Sure. I was a bit uneasy about the docs saying SET STORAGE doesn't affect
existing data but only sets the strategy for new inserts/updates, and
hence thought using a wholly new column would somehow be safer.

But maybe this can be nefariously interpreted such that I could sort of
implement cutoff-based extended/external switching by prepending

	ALTER TABLE ... SET STORAGE EXTERNAL/EXTENDED ...

to INSERTs/UPDATEs based on bytea parameter size. Or even by writing a
trigger issuing ALTER TABLE depending on the size of the insert?

Karsten
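Joshua's in-place recipe, written out as a sketch (hypothetical table and column names):

```sql
-- change the storage strategy for future values
ALTER TABLE foo ALTER COLUMN bar SET STORAGE EXTERNAL;
-- force every existing value to be re-stored under the new strategy
UPDATE foo SET bar = bar;
-- reclaim the dead rows the UPDATE left behind
VACUUM FULL foo;
```

The no-op UPDATE works because TOAST decisions are made when a value is written, so rewriting each row applies the new strategy to the old data.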
Re: [GENERAL] SUBSTRING performance for large BYTEA
On Sat, Aug 18, 2007 at 09:32:33PM +0100, Gregory Stark wrote:
> I do have to wonder how you're getting the data *in* though. If it's
> large enough to have to stream out like this then how do you initially
> load the data?

Well, in my particular case it isn't so much that I *want* to access
bytea in chunks, but rather that under certain not-yet-pinned-down
circumstances Windows clients tend to go out-of-memory on the socket
during *retrieval* (insertion is fine, as is put/get access from Linux
clients). Doing chunked retrieval works on those boxen, too, so it's an
option in our application (the user defines a chunk size that works; a
size of 0 is treated as no-chunking).

Karsten
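The chunked-retrieval scheme being discussed amounts to a series of SUBSTRING queries; a sketch with a made-up chunk size and the hypothetical names from earlier in the thread:

```sql
-- fetch a large bytea value 64 KB at a time; the client loops,
-- incrementing the offset, until a chunk comes back shorter than
-- the requested length
SELECT substring(my_bytea FROM 1      FOR 65536) FROM my_table WHERE id = 42;
SELECT substring(my_bytea FROM 65537  FOR 65536) FROM my_table WHERE id = 42;
SELECT substring(my_bytea FROM 131073 FOR 65536) FROM my_table WHERE id = 42;
```

With SET STORAGE EXTERNAL on the column, each of these touches only the TOAST pages covering the requested slice instead of detoasting the whole value.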
Re: [GENERAL] SUBSTRING performance for large BYTEA
On Sat, Aug 18, 2007 at 01:51:18PM -0400, Tom Lane wrote:
> Karsten Hilbert <[EMAIL PROTECTED]> writes:
>> Would it be feasible to add an ALTER TABLE mode ... SET STORAGE
>> externally-extended CUTOFF <size> ... where <size> is the
>> user-configurable size of the column data at which PostgreSQL switches
>> from the extended to the external storage strategy?
>
> Actually, it just occurred to me that this ties into the recent
> discussion of compression parameters
> http://archives.postgresql.org/pgsql-hackers/2007-08/msg00082.php
> (which hasn't gone further than discussion yet). Perhaps we need an
> additional parameter which is a maximum input size to attempt
> compression at all. IOW, the current force_input_size is not only
> useless but exactly backwards ...

I can see that a maximum size can be relevant for the decision as to
whether to *attempt* compression, since large things compress slowly and
may unduly slow down queries. As can a minimum size below which not to
bother compressing, quite obviously. OTOH, I'd like to be able to tell
PostgreSQL to be so kind and refrain from attempting to compress values
above a certain size even if it thought it'd make sense.

> There was some discussion in that thread (or maybe the earlier one on
> -patches) of exposing the lzcompress parameters directly to users,
> perhaps as an extended form of the current SET STORAGE command. That
> won't happen for 8.3 but it might later.

Sounds good.

> In the meantime, if the defaults included not attempting to compress
> multi-megabyte values, I think it'd Just Work for cases like yours.

Not as tweakable as I'd eventually want, but yes, that would sort of
Just Work.

Karsten
Re: [GENERAL] check if database is correctly created
On Wed, Aug 15, 2007 at 10:08:36AM +0200, Alain Roger wrote:
> I would like to check (via PHP or C#) if my database has been correctly
> created. For that I use the following SQL:
>
>	select * from pg_tables where tablename = 'xxx' AND schemaname = 'yyy';
>
> This I repeat till I have checked all tables. But how to check
> sequences, indexes, functions, and so on?

Use psql with -E and then issue any variety of \d style commands to find
out what psql does to display indexes, functions, sequences etc.

However, be aware that checking for the existence of an appropriately
named table/function doesn't give any guarantee about what they really
*are*.

We do something similar during database upgrade migrations: we calculate
a hash over our tables with columns and column datatypes (tables only,
as they hold the real data). Only if the hash matches an expected value
do we migrate (change) the tables themselves. Views, functions, indexes,
and constraints can all be re-run from scratch upon failure without
affecting the data in the tables.

http://cvs.savannah.gnu.org/viewvc/gnumed/gnumed/server/sql/gmSchemaRevisionViews.sql?root=gnumed&view=markup

and now

http://cvs.savannah.gnu.org/viewvc/gnumed/gnumed/server/sql/v5-v6/dynamic/gm-schema.sql?root=gnumed&view=markup

Karsten
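The schema-hashing idea can be restated as a single query over the information_schema. This sketch uses string_agg(), which only exists in PostgreSQL 9.0 and later, so it is a modern paraphrase rather than the actual GNUmed code linked above:

```sql
-- hash table names, column names, and column datatypes of one schema;
-- compare the result against a known-good value before migrating
SELECT md5(
	string_agg(
		table_name || ':' || column_name || ':' || data_type,
		','
		ORDER BY table_name, ordinal_position
	)
)
FROM information_schema.columns
WHERE table_schema = 'public';
```

Any change to a table, column, or column type in the schema changes the hash, which is exactly the "did the data-bearing structure change?" signal a migration script needs.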
Re: [GENERAL] Moving to postgresql and some ignorant questions
On Tue, Aug 14, 2007 at 11:09:36PM +0800, Phoenix Kiula wrote:
> If I am reading this right, does this mean it is probably better to
> leave fsync as fsync=off on production machines?

No, you want fsync=on on any machine which holds data you care about.
And you want hardware which doesn't lie to you, so that "fsync is
finished" really means the data is on disk. Else PostgreSQL cannot
ensure ACID compliance.

> Specifying explicit BEGIN and COMMIT blocks should only commit when I
> want the DB to, yes, or will each UPDATE in the middle of this block
> get executed?

It will get executed, but the effects will only become publicly visible
after COMMIT (assuming no errors in the queries, in which case you could
issue COMMITs all day long and still see no change in the database from
the transaction in which the error occurred).

> So, again, in the conf file, is this what you recommend: fsync=off

No.

> max_connections=100

Yes, depending on your usage patterns.

> The problem with simple CHECK constraints is that they can only
> reference the primary key in another table.

Not so. Or you need to explain what you mean by "simple CHECK
constraints".

> I am happy to do this, but I don't see an ALTER DATABASE command.

ALTER DATABASE is there, of course, but it doesn't help you.

> I would really like not to have to execute the CREATE DATABASE command
> again!

You'll have to, unfortunately, I fear. Once you go about it, take the
opportunity and make sure the locale and encoding settings of initdb are
compatible with a UTF8 database.

Karsten
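The COMMIT-visibility point above can be illustrated with a minimal two-session sketch (the accounts table is hypothetical):

```sql
-- session A
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
-- session A already sees the new balance here;
-- any other session still sees the old one
COMMIT;
-- only after COMMIT does the change become visible to other sessions
```

If any statement inside the block errors out, the whole transaction is aborted and COMMIT leaves the database unchanged, which is the "COMMITs all day long" remark above.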
Re: [GENERAL] timestamp skew during 7.4 - 8.2 upgrade
On Fri, Aug 10, 2007 at 10:11:29AM +0200, Louis-David Mitterrand wrote:
> So if I understand correctly, a timestamp_tz is ...

... stored as UTC in the backend

... sent to clients shifted by whatever timezone was requested by the
client by one of several mechanisms:

- "set timezone to ..." used by the client
- "select ... at time zone ..." used by the client
- the server timezone, according to the host's timezone configuration,
  if neither of the above is used

> For example if I travel with my server and cross several timezones, my
> timestamp_tz's will display a different time (provided I run the
> tzselect utility in Linux)?

Yes, unless the client tells the server to send them shifted to a
different timezone (see above).

Karsten
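The client-side mechanisms listed above, as a sketch (table and column names are made up):

```sql
-- per-session setting: all timestamptz output is shifted to this zone
SET timezone TO 'Europe/Paris';
SELECT ts_col FROM t;

-- per-expression override, independent of the session setting
SELECT ts_col AT TIME ZONE 'UTC' FROM t;
```

Either way the stored value is the same UTC instant; only the presentation changes.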
Re: [GENERAL] russian case-insensitive regexp search not working
On Tue, Jul 10, 2007 at 08:40:24AM +0400, alexander lunyov wrote:
>> Just to clarify: lower() on both sides of a comparison should still
>> work as expected on multibyte encodings? It's been suggested here
>> before.
>
> lower() on both sides also does not work in my case, it still searches
> case-sensitively. The string in this example has its first char
> capitalized, and the result is the same. Seems that lower() can't
> lower a multibyte character.
>
> db=# select lower('Зелен');

Well, no,

	select my_string where lower(my_string) ~ lower(search_fragment);

Does that help? (~ does work for e.g. German in my experience)

Karsten
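Against a hypothetical table, the suggested pattern looks like this (whether lower() actually folds Cyrillic depends on the server encoding and locale, which is the very issue under discussion in this thread):

```sql
-- case-insensitive regex match by lowercasing both sides
SELECT name
FROM my_table
WHERE lower(name) ~ lower('Зелен');
```

If the server locale cannot lowercase the multibyte characters involved, this degrades to a case-sensitive match, as the poster observed.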
Re: [GENERAL] russian case-insensitive regexp search not working
On Mon, Jul 09, 2007 at 04:00:01PM +0400, alexander lunyov wrote:
> I found this bug report:
> http://archives.postgresql.org/pgsql-bugs/2006-09/msg00065.php
> Is it about this issue?

Yes.

> And will it be fixed someday?

Likely. In the meantime lower() can come to the rescue.

Karsten
Re: [GENERAL] russian case-insensitive regexp search not working
On Mon, Jul 09, 2007 at 09:50:42AM -0400, Tom Lane wrote:
>>> On Mon, Jul 09, 2007 at 04:00:01PM +0400, alexander lunyov wrote:
>>>> I found this bug report:
>>>> http://archives.postgresql.org/pgsql-bugs/2006-09/msg00065.php
>>>> Is it about this issue?
>>> Yes.
>>>> And will it be fixed someday?
>>> Likely. In the meantime lower() can come to the rescue.
>
> It's not nearly as easy to fix as the reporter of that bug imagines,
> because we don't know the relationship between the pg_wchar
> representation and the wchar_t representation assumed by the wctype.h
> functions.

Just to clarify: lower() on both sides of a comparison should still work
as expected on multibyte encodings? It's been suggested here before.

Karsten
Re: [GENERAL] multimaster
On Sun, Jun 03, 2007 at 07:47:04PM +0200, Alexander Staubo wrote:
> For example, part of the point of having validations declared on the
> model is so that you can raise user-friendly errors (and pipe them
> through gettext for localization) such as "Your password must be at
> least 4 characters long and contain only letters and digits."

If anyone is interested, we have sort of re-implemented gettext in SQL:

http://cvs.savannah.gnu.org/viewvc/gnumed/gnumed/server/sql/?root=gnumed

(see the gmI18n-*.sql stuff)

In essence it enables you to write queries like so:

	select pk, name, _(name) as l10n_name from states;

which will give you a localized name for states.name in l10n_name.

Karsten
Re: [GENERAL] In theory question
On Wed, May 09, 2007 at 10:29:02AM -0700, Joshua D. Drake wrote:
>>> This is exactly what I was asking about. So my theoretical idea has
>>> already been implemented. Now if only *all* my ideas were done for me
>>> by the time I came up with them :)
>> Then you wouldn't be able to eventually patent them ;)

I think you are overly optimistic ;-)

Karsten
Re: [GENERAL] Any guide to indexes exists?
On Mon, May 07, 2007 at 10:47:24AM -0500, Jim Nasby wrote:
> GiST can also be useful if you have to query in multiple dimensions,
> which can occur outside the normal case of geometry. The best example I
> know of is a table containing duration information in the form of
> start_time and end_time. Trying to query for what events happened on
> 5/28/2005 will generally be much cheaper with a GiST index than a
> b-tree.

Are you referring to queries with

	... where some_timestamp between some_start and some_end ...

or

	... where some_timestamp > some_start and some_timestamp < some_end ...

?

Thanks,
Karsten
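For the record, on modern PostgreSQL (9.2 and later, so not available at the time of this thread) the duration case Jim describes maps naturally onto range types with a GiST index. A sketch with hypothetical names:

```sql
CREATE TABLE events (
	id int,
	start_time timestamptz,
	end_time timestamptz
);

-- index the (start, end) interval as a single range value
CREATE INDEX idx_events_period ON events
	USING gist (tstzrange(start_time, end_time));

-- "what events happened on 2005-05-28?" - range overlap, served by GiST
SELECT id FROM events
WHERE tstzrange(start_time, end_time)
      && tstzrange('2005-05-28', '2005-05-29');
```

A b-tree on (start_time, end_time) serves this query poorly because the predicate constrains both columns from opposite directions; the GiST range index treats the interval as one searchable object.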
Re: [GENERAL] An alternatives to rules and triggers
On Sat, May 05, 2007 at 03:01:36PM +1200, Glen Eustace wrote:
> Is there some way that one can determine whether a table has changed,
> i.e. an insert, delete, or update, without having to resort to setting
> a flag in another table using a trigger or rule?

You can NOTIFY a LISTENing client, which does involve a trigger but does
not require another table holding flags or such.

Karsten
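A minimal sketch of the trigger-plus-NOTIFY approach (table and names are hypothetical):

```sql
-- trigger function: signal listeners that the table changed
CREATE OR REPLACE FUNCTION notify_my_table_changed() RETURNS trigger
LANGUAGE plpgsql AS $notify_change$
BEGIN
	NOTIFY my_table_changed;
	RETURN NULL;  -- AFTER trigger, return value is ignored
END;
$notify_change$;

CREATE TRIGGER trg_my_table_changed
	AFTER INSERT OR UPDATE OR DELETE ON my_table
	FOR EACH STATEMENT EXECUTE PROCEDURE notify_my_table_changed();

-- client side: subscribe once, then poll the connection for notifications
LISTEN my_table_changed;
```

No flag table is needed; the notification queue carries the "something changed" signal to every listening connection.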
Re: [GENERAL] dollar-quoting trouble
On Fri, Apr 20, 2007 at 06:08:36PM +0930, Shane Ambler wrote:
> If you do want 2 versions installed (both in different prefix dirs) at
> the same time then you probably want to make sure that your shell
> $PATH setting includes the path to the newer version of psql, and then
> specify the full path to the older version of psql when you want to
> use the older version. Or just use the full path every time you run
> psql.

Thanks, I knew that much from following the list. However, I was under
the impression that - on Debian - by using update-alternatives I could
set the default PG to the 8.1 install. Which apparently wasn't the case.

Thanks for your answer, though,
Karsten
Re: [GENERAL] dollar-quoting trouble
On Fri, Apr 20, 2007 at 12:40:45PM +1000, Klint Gore wrote:
>> I can't spot the trouble with this function definition: ...
>> PostgreSQL 8.1.8 (Debian/Etch) is telling me:

Actually, Lenny.

>> psql:dem-identity.sql:43: ERROR: unterminated dollar-quoted string at
>> or near "$null_empty_title$ begin if (NEW.title is null) then return
>> NEW;" at character 83
>
> Does psql --version match select version()? There's a message in the
> archive pgsql-bugs where this happened.

That was indeed part of the trouble: with both 7.4.16 and 8.1.8
installed on Debian I couldn't get the default psql to be 8.1 despite
using update-alternatives. So psql 7.4 was being used against the 8.1
server, which is bound to lead to trouble. That being solved by
deinstalling 7.4 (which I was transitioning off anyway), psql created
the function just fine.

My own bootstrapper code still throws the error (it is using psycopg2
which is using libpq4) ... but, wait, deinstalling libpq3 apparently
makes psycopg2 use libpq4 (?!), as it now works ...

Thanks for the assistance, from the GNUmed team,
Karsten
[GENERAL] dollar-quoting trouble
Hi all,

I can't spot the trouble with this function definition:

	create function dem.trf_null_empty_title()
		returns trigger
		language plpgsql
		as $null_empty_title$
	begin
		if (NEW.title is null) then
			return NEW;
		end if;
		if trim(NEW.title) <> '' then
			return NEW;
		end if;
		NEW.title := NULL;
		return NEW;
	end;
	$null_empty_title$;

PostgreSQL 8.1.8 (Debian/Etch) is telling me:

	psql:dem-identity.sql:43: ERROR: unterminated dollar-quoted string at
	or near "$null_empty_title$ begin if (NEW.title is null) then return
	NEW;" at character 83

Can anyone help?

Thanks,
Karsten
Re: [GENERAL] Creation of a read-only role.
On Sat, Mar 17, 2007 at 01:47:11AM +0300, Dmitry Koterov wrote:
> When we start using any replication system (e.g. Slony) we need to
> create a read-only role for accessing the database. This role must be
> able to read anything, but should NOT be able to INSERT, UPDATE or
> DELETE on any database object.

It may be possible to set the session to read-only

	SET SESSION CHARACTERISTICS AS TRANSACTION READ ONLY;

and make that the default (along the lines of ALTER DATABASE ... SET
...) for the read-only role. There may be a way to disallow that role
from changing that characteristic.

> Overall, we need 3 roles:
>
> 1. Administrator: can do anything with a database (by default this
>    user already exists - postgres).
> 2. Read-only: can only read. Runs on all slave nodes.
> 3. Read-write: can write, but cannot change the database schema. Runs
>    on master node only.
>
> Is there any way to easily create and maintain these standard roles?
> Now I have written a stored procedure which iterates over the
> pg_catalog and runs a lot of REVOKE/GRANT commands, but it seems not
> to be a universal solution, because:
>
> 1. I have to re-run this procedure after I change the database schema.
>    (Very bad item! Can we avoid it?)
> 2. It looks like a brute-force method, and nothing is said about it in
>    the Slony documentation (strange).
> 3. In MySQL (e.g.) there is a one-command way to create these three
>    roles. Again, these 3 roles seem to be a de-facto standard for
>    replication systems, but I found nothing about this question on
>    Google.

Karsten
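Karsten's suggestion as a sketch. Note the caveat: a per-role setting only changes the *default*; the role can still issue SET transaction_read_only = off in its session, so REVOKE/GRANT remains the actual enforcement:

```sql
-- create a login role whose sessions start out read-only
CREATE ROLE readonly LOGIN PASSWORD 'secret';  -- password is a placeholder
ALTER ROLE readonly SET default_transaction_read_only = on;

-- a session of this role can verify the default with:
SHOW default_transaction_read_only;
```

Read-only transactions reject INSERT, UPDATE, DELETE and DDL, which covers the slave-node use case as long as the client code does not deliberately flip the setting back.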
Re: HIPPA (was Re: [GENERAL] Anyone know ...)
On Fri, Mar 09, 2007 at 08:08:11AM -0500, Kenneth Downs wrote:
> First, security is defined directly in terms of tables, it is not
> arbitrated by code. The public group has SELECT access to the articles
> table and the schedules tables, that's it. If a person figures out how
> our links work and tries to access the claims table it will simply
> come up blank (and we get an email).

How?

Karsten
Re: HIPPA (was Re: [GENERAL] Anyone know ...)
On Fri, Mar 09, 2007 at 11:02:45AM -0500, Kenneth Downs wrote:
>>> First, security is defined directly in terms of tables, it is not
>>> arbitrated by code. The public group has SELECT access to the
>>> articles table and the schedules tables, that's it. If a person
>>> figures out how our links work and tries to access the claims table
>>> it will simply come up blank (and we get an email).
>
> If a user has not logged in, that is, if they are an anonymous
> visitor, the web framework will connect to the database as the default
> "public" user. Our system is deny-by-default, so this user cannot
> actually read from any table unless specifically granted permission.
> In the case being discussed, the public user is given SELECT
> permission on some columns of the insurance carriers table, and on the
> schedules table. The column-level security is important, as you don't
> want anybody seeing the provider id! If the user figures out our URL
> scheme, they might try something like ?gp_page=patients and say "Wow,
> I'm clever, I'm going to look at the patients table", except that the
> public user has no privilege on the table. The db server will throw a
> "permission denied" error.

My interest was more towards the "we get an email" part. What level do
you send that from? A trigger?

Karsten
Re: HIPPA (was Re: [GENERAL] Anyone know ...)
On Fri, Mar 09, 2007 at 12:22:19PM -0500, Kenneth Downs wrote:
>> My interest was more towards the "we get an email" part. What level
>> do you send that from? A trigger?
>
> The web framework does that.

I see. IOW, if a violation happens below the web layer the e-mail
doesn't get sent. I thought you had maybe written a trigger to do it
in, say, pl/Perl or so.

Karsten
Re: [GENERAL] Why *exactly* is date_trunc() not immutable ?
On Mon, Feb 19, 2007 at 10:16:12AM +0100, Alban Hertroys wrote:
> What I'm trying to say is not that it _is_ immutable, but that it
> _behaves_ immutable (under said conditions). This could imply that if
> a certain condition is available in a query on which such a function
> operates, it would behave immutable.

That is precisely why I didn't get the idea upfront that date_trunc()
wouldn't be immutable just so. I'll solve it with a date_trunc_utc()
wrapper.

Thanks to all who chipped in. Something new to learn every day.

Karsten
Re: [GENERAL] Why *exactly* is date_trunc() not immutable ?
On Mon, Feb 19, 2007 at 10:58:50AM +0100, Martijn van Oosterhout wrote:
>> I'll solve it with a date_trunc_utc() wrapper.
>
> It should be noted that date_trunc(timestamptz) is not immutable,
> whereas date_trunc(timestamp) is. Thus you should be able to make an
> index on:
>
>	date_trunc('day', timestamptz_column AT TIME ZONE 'UTC')

Ah, that makes it clear *why* this should work. I would assume that to
get meaningful results from a query using that index I'd have to
normalize input timestamps to UTC, too, before putting them into the
query, right?

> OTOH, if you're only storing times in UTC, then timestamp without time
> zone might be better anyway.

Well, PostgreSQL itself is storing UTC anyway, but we need the timezone
bit since our frontend delivers timestamps from various timezones and
they are not normalized to UTC before they get to the database. IOW, I
want the database to force programmers to think about which timezone
they deliver timestamps into a date-of-birth field from.

Karsten
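The resulting index and a matching query might look like this (the table name and literal are hypothetical; note the query expression must match the index expression exactly for the index to be used):

```sql
-- AT TIME ZONE 'UTC' turns the timestamptz into a plain timestamp,
-- making the date_trunc() call immutable and thus indexable
CREATE INDEX idx_patients_dob_day
	ON patients ((date_trunc('day', dob AT TIME ZONE 'UTC')));

-- the query normalizes its input the same way
SELECT *
FROM patients
WHERE date_trunc('day', dob AT TIME ZONE 'UTC')
    = date_trunc('day', '1960-07-21 12:00 +02'::timestamptz AT TIME ZONE 'UTC');
```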
Re: [GENERAL] Why *exactly* is date_trunc() not immutable ?
On Mon, Feb 19, 2007 at 12:53:15PM +0100, Martijn van Oosterhout wrote:
> Well, your queries need to use the same form, i.e.:
>
>	SELECT blah FROM foo
>	WHERE date_trunc('day', entered_timestamp AT TIME ZONE 'UTC') = ...

Thought so.

> That seems a bit error prone though, so your idea of making a simple
> SQL function to wrap it will probably save you much heartache.

It will also make it clearer to people reading the code *why* it is
written that way. Yep, and the COMMENT ON FUNCTION provides for a nice
place to document it :-)

>> Well, PostgreSQL itself is storing UTC anyways but we need the
>> timezone bit since our frontend delivers timestamps from various
>> timezones and they are not normalized to UTC before they get to the
>> database.
>
> Yeah, the AT TIME ZONE 'UTC' needs to be put somewhere, and in the
> index is probably easier than everywhere else. Just checking you'd
> thought about it. :)

Thanks!

Karsten
Re: [GENERAL] Why *exactly* is date_trunc() not immutable ?
On Mon, Feb 19, 2007 at 12:41:11PM -0600, Bruno Wolff III wrote:
> Karsten Hilbert <[EMAIL PROTECTED]> wrote:
>> The date-of-birth field in our table holding patients is of type
>> timestamp with time zone. One of our patient search queries uses the
>> date-of-birth field to find matches. Since users enter day, month,
>> and year but not hour, minute, and second of the DOB we run the query
>> with
>
> That seems like an odd choice. Is there some reason they didn't use a
> type of date? Maybe you could get them to change it?

What time of day were you born?

http://en.wikipedia.org/wiki/Apgar

What is the technical reason that makes you wonder?

Karsten
Re: [GENERAL] Why *exactly* is date_trunc() not immutable ?
On Mon, Feb 19, 2007 at 03:28:01PM -0600, Bruno Wolff III wrote:
>> What is the technical reason that makes you wonder?
>
> Because it would make doing the queries simpler. If you aren't
> collecting the data, it doesn't make sense to deal with the extra
> headaches involved with pretending you know what time of day someone
> was born.

Oh, I see. When I said that users don't enter the hour and minute, that
was targeted at search time. They do enter the time part when entering a
new patient, of course. So it's surely collected. It's just not used for
searching.

Karsten
Re: [GENERAL] [ANNOUNCE] Advisory on possibly insecure security definer functions
On Sat, Feb 17, 2007 at 11:31:19AM -0700, Michael Fuhr wrote:
> If you schema-qualify objects instead of setting search_path then
> don't forget about operators.

I knew I had missed something.

>	SELECT col FROM schemaname.tablename
>	WHERE othercol OPERATOR(pg_catalog.=) schemaname.funcname(someval);

Good to know that.

Thanks,
Karsten
[GENERAL] Why *exactly* is date_trunc() not immutable ?
Hi all,

we (GNUmed) run a medical database on PostgreSQL. We are very pleased
with it (PostgreSQL, that is ;-) in all aspects.

The date-of-birth field in our table holding patients is of type
timestamp with time zone. One of our patient search queries uses the
date-of-birth field to find matches. Since users enter day, month, and
year but not hour, minute, and second of the DOB we run the query with

	select ... where ...
		and date_trunc('day', dob) = date_trunc('day', <what_the_user_entered_as_dob>)
		and ...;

(appropriately escaped, of course)

The actual DOB is indeed stored with hour, minute and second, so the
time information is not redundant, but we don't need it for searching.
So I figured it would make sense to add a functional index on
date_trunc('day', dob) to the patients table. Which worked (appeared to,
at least) with PG 7.4.

One of our users is on PG 8.2 and gets the warning that date_trunc() is
not immutable and can thus not be used in a functional index. Makes
sense all by itself. What I don't understand, however, is exactly *why*
date_trunc() is not immutable. All it does is extract part of the
information that's there anyway. One would assume it to be the timestamp
equivalent of substring(), no? (not a good example, perhaps, as that
might depend on encoding settings...) It *shouldn't* really depend on,
say, date/time-related locale settings, should it?

I'd be happy to provide more details if needed, for which I'd have to
contact the user in question.

Thanks for any insight offered,
Karsten
Re: [GENERAL] Why *exactly* is date_trunc() not immutable ?
Followup to self:

On Sun, Feb 18, 2007 at 12:29:17PM +0100, Karsten Hilbert wrote:
> So I figured it would make sense to add a functional index on
> date_trunc('day', dob) to the patients table. Which worked (appeared
> to, at least) with PG 7.4. One of our users is on PG 8.2

PostgreSQL 8.1, I meant to say.

> and gets the warning that date_trunc() is not immutable and can thus
> not be used in a functional index.

The code to create the index:

	create index idx_identity_dob_ymd on dem.identity(date_trunc('day', dob));

The exact error it emits:

	functions in index expression must be marked IMMUTABLE

Those were lifted from the error log without further editing.

I know that I could fake immutability by wrapping date_trunc() in a
stored procedure marked IMMUTABLE, but I wonder what pitfalls that might
hold.

Thanks,
Karsten
Re: [GENERAL] Why *exactly* is date_trunc() not immutable ?
On Sun, Feb 18, 2007 at 09:19:43PM +0900, Michael Glaesemann wrote: What I don't understand, however, is exactly *why* date_trunc is not immutable ? I believe it's because the result of date_trunc depends on the time zone setting for the session. ... So, given the same arguments ('day' and current_timestamp), date_trunc is returning two different results. (Casting to date has the same issue.) Ah, I see. That makes sense. Now, if I wrote a date_trunc_utc(precision, timestamp with time zone) which converted input timestamps to UTC, I could fairly safely mark that IMMUTABLE, no ? Karsten
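The date_trunc_utc() idea mooted above might be sketched roughly like this (function and index names are invented; the key point is that timestamptz AT TIME ZONE 'UTC' yields a plain timestamp, so the result no longer depends on the session's TimeZone setting):

```sql
-- Sketch only: truncate in a fixed zone so the result is stable
-- across sessions, which makes the IMMUTABLE marking defensible.
create function date_trunc_utc(text, timestamp with time zone)
    returns timestamp
    language sql
    immutable
    as 'select date_trunc($1, $2 at time zone ''UTC'')';

create index idx_identity_dob_ymd
    on dem.identity (date_trunc_utc('day', dob));
```

Note that queries must then use the same expression, date_trunc_utc('day', dob), for the planner to consider the index.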
Re: [GENERAL] [ANNOUNCE] Advisory on possibly insecure security definer functions
On Sat, Feb 17, 2007 at 01:26:34PM +0900, Tatsuo Ishii wrote: But if we insert a set schema search_path command in an SQL function, the caller will be affected by it. Doing reset search_path before returning to the caller might solve some of the problems, but it will not recover the caller's special search_path. How do you solve the problem? Schema-qualifying object accesses would be tedious and omission-prone, but not liable to the above problem. Karsten
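To illustrate the schema-qualification approach (all names here are invented): with every object reference fully qualified, the caller's search_path has nothing left to redirect, so no search_path juggling inside the function is needed at all.

```sql
-- Sketch: a SECURITY DEFINER function that never consults search_path
-- because every referenced object carries its schema explicitly.
create function audit.log_event(text)
    returns void
    security definer
    language sql
    as 'insert into audit.event_log (payload) values ($1)';
```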
Re: [GENERAL] DBMS Engines and Performance
On Wed, Jan 31, 2007 at 09:57:21AM +0100, Mikael Carneholm wrote: I'm tired of teenage 1337 skill0rz PHP hackers who go whoaah, 0ms! after running select count(*) from forum_posts in a single thread (the developer himself testing his app), and then claim MySQL rocks! I tested the postgres 7.1 that came with insert linux distro of choice here, but it was twice as slow Postgres sucks! Well, tell you what, twice 0ms is 0ms as well, so I guess Postgre don't suck as bad !! ;-)) Karsten
[GENERAL] self-referential UPDATE problem on 7.4
Hello ! I am trying to run this query in psql: update clin.episode set fk_patient = clin.health_issue.fk_patient from clin.health_issue where clin.episode.fk_patient is NULL and clin.episode.fk_health_issue = clin.health_issue.pk; It returns UPDATE 2 which is what I expect from the data. However, the rows in question are not actually updated. What I am trying to do: - clin.episode has a nullable foreign key fk_health_issue to clin.health_issue.pk - clin.health_issue has a not-nullable fk_patient - clin.episode also has an fk_patient which is nullable I want to transfer the value of clin.health_issue.fk_patient to clin.episode.fk_patient for those clin.episode rows in which fk_patient is NULL. I'm sure I am doing something wrong in a fairly basic way. I tried with a subselect, too, but get the same result: update clin.episode set fk_patient = ( select fk_patient from clin.health_issue chi where chi.pk = clin.episode.fk_health_issue ) where fk_patient is NULL; This is on 7.4.14 on Debian/Etch. Thanks, Karsten
Re: [GENERAL] self-referential UPDATE problem on 7.4
On Thu, Jan 04, 2007 at 11:36:35AM -0500, Tom Lane wrote: update clin.episode set fk_patient = clin.health_issue.fk_patient from clin.health_issue where clin.episode.fk_patient is NULL and clin.episode.fk_health_issue = clin.health_issue.pk; It returns UPDATE 2 which is what I expect from the data. However, the rows in question are not actually updated. That seems very strange. Could you perhaps have a BEFORE UPDATE trigger that's changing the values back to null again? I do but I drop that one in the script before doing the above update. I may have an old one hanging around, though, I'll double-check. Thanks, Karsten
Re: [GENERAL] self-referential UPDATE problem on 7.4
On Thu, Jan 04, 2007 at 06:37:23PM +0100, Karsten Hilbert wrote: It returns UPDATE 2 which is what I expect from the data. However, the rows in question are not actually updated. That seems very strange. Could you perhaps have a BEFORE UPDATE trigger that's changing the values back to null again? I do but I drop that one in the script before doing the above update. I may have an old one hanging around, though, I'll double-check. I was missing a schema qualification in a drop statement, hence the culprit trigger function wasn't dropped properly. Sometimes a hint helps. Thanks Tom, Karsten
Re: [GENERAL] Large IN query optimization
On Wed, Dec 13, 2006 at 02:08:44PM -0500, Tom Lane wrote: Are you on 8.2? 8.1 is pretty stupid about joins to inheritance trees. To put a number on stupid: for GNUmed, a particular query involving one-level inheritance went down from over ten minutes to under one second just by running against 8.2 instead of an earlier release. Karsten
[GENERAL] inheritance and index use (similar to UNION ALL)
Hi, we have a parent table root_item with a few common fields (one is a text field) from which a whole bunch of child tables derives. We need to run queries against the text field across the whole bunch of child tables. What naturally comes to mind is to run the query against root_item.text_field, thereby catching all child table text_field values as well. However, the planner doesn't really seem to consider indices of the parent table. It was said that 8.2 would be making improvements related to this and the Release Notes do have a comment on planner improvements for UNION/inherited tables. And, yes, the data does warrant using indices over seq scans. Explicitly joining the subtables one by one yields orders of magnitude faster results (10 minutes going down to 2 seconds) and uses indices. What I am wondering is: Should this really work (better) in 8.2 ? Do I need to provide more data (schema, explain plan etc) ? Am I doing something wrong (apart from perhaps choosing a non-performant schema design) ? Thanks, Karsten
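A minimal sketch of the layout described above (child table and index names are invented). One detail that matters here: indexes are strictly per-table in PostgreSQL, so an index on the parent covers only rows physically stored in the parent; each child needs its own index on the shared column.

```sql
-- Parent with the shared text field, plus two derived children.
create table root_item (
    pk serial primary key,
    text_field text
);
create table child_a (extra_a integer) inherits (root_item);
create table child_b (extra_b integer) inherits (root_item);

-- One index per child so scans of the inheritance tree can use them.
create index idx_child_a_txt on child_a (text_field);
create index idx_child_b_txt on child_b (text_field);

-- This scans the parent plus all children; pre-8.2 planners
-- handled such inherited scans poorly.
select pk, text_field from root_item where text_field = 'needle';
```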
Re: FW: [GENERAL] Male/female
On Fri, Dec 08, 2006 at 04:50:16PM +0100, Alban Hertroys wrote: Why not use unicode symbols 0x2640 and 0x2642? A clever idea; however, it does not cover transsexual, female phenotype transsexual, male phenotype hermaphrodite, intersexual phenotype which we (GNUmed, that is) need to support in a medical database. In fact, most (all?) of the approaches I have seen in this thread lack that. Using bool would even preclude extension of the constraint in that direction if it were ever needed. So, one is better off with, say, char(2) or something similar. I would also suggest using a *coded* gender, not male, female strings, which will make gender-based calculations a lot easier down the road. Karsten
Re: FW: [GENERAL] Male/female
On Mon, Dec 11, 2006 at 01:34:17PM +0100, H.J. Sanders wrote: I would also suggest using a *coded* gender, not male, female strings which will make gender-based calculations a lot easier down the road. Which will also make it easier to have a multi-lingual solution. Which, precisely, is one of the reasons GNUmed does it that way. We then have a table gender_label which can translate the code into a string (both English as a base language and a local language) via select _(label) from gender_label. (Yes, we sort of re-implemented gettext in SQL ;-) Karsten
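The coded-gender-plus-lookup-table approach might look roughly like this (the column names and the particular codes below are invented for illustration, not GNUmed's actual schema):

```sql
-- Codes live in a lookup table; labels can be translated per locale.
create table gender_label (
    tag char(2) primary key,
    label text not null    -- base-language label, translated via _(label)
);
insert into gender_label (tag, label) values ('m', 'male');
insert into gender_label (tag, label) values ('f', 'female');
insert into gender_label (tag, label) values ('tf', 'transsexual, phenotype female');
insert into gender_label (tag, label) values ('tm', 'transsexual, phenotype male');
insert into gender_label (tag, label) values ('h', 'hermaphrodite');

-- The patient table stores only the code; the FK constrains it and
-- new codes can be added later without altering this table.
create table patient (
    pk serial primary key,
    gender char(2) references gender_label (tag)
);
```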
Re: [GENERAL] inheritance and index use (similar to UNION ALL)
Further testing has revealed that, indeed, PG 8.2 speeds up our use of child tables ! The query in question went down from 10 minutes to *under a second* just by running against 8.2 :-) Now, that's some gain ! Thanks to the PostgreSQL developers. Karsten, GNUmed team On Sun, Dec 10, 2006 at 09:43:35AM +0100, Karsten Hilbert wrote: Hi, we have a parent table root_item with a few common fields (one is a text field) from which a whole bunch of child tables derives. We need to run queries against the text field across the whole bunch of child tables. What naturally comes to mind is to run the query against root_item.text_field, thereby catching all child table text_field values as well. However, the planner doesn't really seem to consider indices of the parent table. It was said that 8.2 would be making improvements related to this and the Release Notes do have a comment on planner improvements for UNION/inherited tables. And, yes, the data does warrant using indices over seq scans. Explicitly joining the subtables one by one yields orders of magnitude faster results (10 minutes going down to 2 seconds) and uses indices. What I am wondering is: Should this really work (better) in 8.2 ? Do I need to provide more data (schema, explain plan etc) ? Am I doing something wrong (apart from perhaps choosing a non-performant schema design) ? Thanks, Karsten
Re: [GENERAL] Restore database from files (not dump files)?
On Wed, Dec 06, 2006 at 12:16:35PM -0800, wheel wrote: re Bruce Momjian Wow, what an unfriendly dude! Well, he's one of the very guys who make all this (PostgreSQL, that is) happen for us. Karsten
Re: [GENERAL] How to implement backup protocol
On Tue, Nov 28, 2006 at 06:01:43PM +0200, Andrus wrote: 5. Server has *only* 5432 port open. pg_read_file() can read only text files and is restricted only to superusers. How to add a function pg_read_backup() to Postgres which creates and returns backup file with download speed ? You could use an *un*trusted procedural language to create a function to binary-read the backup from disk and return it as a bytea field. Not sure how efficient that is, though. You could then simply do select get_backup(); If you allow for parameters you could make it return certain backups based on, perhaps, timestamp of creation. select list_available_backups(); might complete the suite of tools. One could then always use some hashing tools (mhash with PG bindings comes to mind) to verify whether a backup has arrived safely: on the local machine compute ripemd160(backupfile), then compare it against select yhash.ripemd160(get_backup()); Karsten
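As a rough sketch of such an untrusted-language file reader (here PL/PythonU; the function name, directory path, and the confinement to a single directory are all invented for illustration - real code should whitelist file names, and the details of bytea value passing vary between PL/Python versions):

```sql
-- Sketch only: read a file server-side and hand it back as bytea.
-- args[0] is the requested file name; basename() confines reads
-- to the one backup directory.
create function get_backup(text)
    returns bytea
    language plpythonu
    as '
import os
path = os.path.join("/var/backups/pg", os.path.basename(args[0]))
f = open(path, "rb")
try:
    return f.read()
finally:
    f.close()
';
```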
Re: [GENERAL] How to implement backup protocol
On Tue, Nov 28, 2006 at 07:23:44PM +0200, Andrus wrote: Please, can you recommend a solution which uses port 5432 owned by Postgres If you think you know your usage pattern: Have cron stop PostgreSQL at, say, 2am. Have cron start ssh on port 5432 at 2:05am if PG is down. Have cron shut down ssh on port 5432 at 2:55am. Have cron KILL ssh on port 5432 if need be. Have cron start PostgreSQL at 3am if ssh is down. How to force postmaster to run the tool which system provides when it receives backup request instead of postgres child process ? There is no (builtin/obvious/easy) way, for good reason. Karsten
Re: [GENERAL] How to implement backup protocol
On Tue, Nov 28, 2006 at 07:34:56PM +0200, Andrus wrote: This is a good idea but it forces use of the Postgres protocol for downloading. Why, of course. This protocol has some timeouts which are too small for large file download. For sane values of large I doubt this is true. A field in PG can store about 1 GB of data (says the FAQ) and the protocol had better be able to hand out as much. It may be that you need to increase statement_timeout - which can be done on a per-session basis. Postgres protocol has also a lot of overhead added to downloadable data. Yes. But you wanted to use port 5432 on a machine already running PG. Not sure, but using a binary cursor might improve things. Using a client library capable of the v3 (?) protocol should significantly lower the overhead, too. It also requires that the whole downloadable file must fit into memory. My PG knowledge isn't up to this task but I have a sneaking suspicion this isn't really enforced by PG itself. ODBC I tried this but was forced to store big files in 1 MB chunks in bytea fields and create the file from downloaded blocks Other client libraries may do better here. Or should I really write code which divides backup file to 1 MB chunks and stores them in bytea field ? No. I would not even store them in the database at all. I would use the untrusted language function to read the file from disk and return a (virtual) bytea field (which doesn't exist in the database). Karsten
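If one does store the backup in a bytea column after all, the memory concern can be sidestepped by fetching it in slices, as discussed elsewhere on this list: with the column's storage mode set to EXTERNAL, SUBSTRING only reads the TOAST pages holding the requested slice (table and column names below are invented):

```sql
-- Prerequisite: disable compression so substring() can seek
-- directly to the requested chunk (must be set before data is stored).
alter table backups alter column data set storage external;

-- Fetch in 1 MB slices; substring() offsets are 1-based.
select substring(data from 1       for 1048576) from backups where id = 1;
select substring(data from 1048577 for 1048576) from backups where id = 1;
-- ...continue advancing the offset until the result is empty.
```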
Re: [GENERAL] Data transfer between databases over the Internet
John McCawley wrote: I think I may not have described my problem clearly enough...I *already* have a server-side app written in PHP with a Postgres backend...This is the ultimate destination of the data. The problem is that I am being forced by my client to deploy a 3rd party app on all of my field guys' laptops...This app (that I have NO ability to modify) is set in stone, I cannot use a different one or modify it. I am trying to write an app client side that exports the data from the .mdb backend of this 3rd party program and into my Postgres backend. If that's really all you need you might just write a Python script. Or do you need a frontend for the PostgreSQL data as well ? If so it might still be useful to separate it from the ex-/importer. This needs to work over the Internet, as my guys are not necessarily on my LAN. No problem. I'd just make sure it works over an encrypted pipe (as you said) - be it ssh or ssl. Karsten
Re: [GENERAL] Encoding, Unicode, locales, etc.
On Tue, Oct 31, 2006 at 11:47:56PM -0500, Tom Lane wrote: Because we depend on libc's locale support, which (on many platforms) isn't designed to switch between locales cheaply. The fact that we allow a per-database encoding spec at all was probably a bad idea in hindsight --- it's out front of what the code can really deal with. My recollection is that the Japanese contingent argued for it on the grounds that they needed to deal with multiple encodings and didn't care about encoding/locale mismatch because they were going to use C locale anyway. For everybody else though, it's a gotcha waiting to happen. Could this paragraph be put into the docs and/or the FAQ, please ? Along with the recommendation that if you require multiple encodings for your databases you had better have your OS locale configured properly for UTF-8 and use UNICODE databases, or do initdb with the C locale. This stuff is certainly far from ideal, but the amount of work involved to fix it is daunting; see many past pg-hackers discussions. Here are a few data points from my Debian/Testing system in favour of not worrying too much about installed ICU size, as it is being used by other packages anyway:

libicu36 Reverse Depends:
    openoffice.org-writer        * OOo
    openoffice.org-filter-so52
    openoffice.org-core
    libxerces27                  * Xerces XML parser (Apache camp)
    libboost-regex1.33.1
    libboost-dbg

icu Reverse Depends:
    libicu36
    libicu36
    libxercesicu26               * Xerces, again
    libxercesicu25
    libicu28-dev
    libicu28
    libicu21c102
    icu-i18ndata
    icu-data
    libwine                      * Wine

This, of course, does not decrease the work required to get this going in PostgreSQL. Thanks for the great work, Karsten
Re: [GENERAL] Encoding, Unicode, locales, etc.
On Wed, Nov 01, 2006 at 08:50:30PM +0100, Martijn van Oosterhout wrote: Could this paragraph be put into the docs and/or the FAQ, please ? Along with the recommendation that if you require multiple encodings for your databases you had better have your OS locale configured properly for UTF-8 and use UNICODE databases, or do initdb with the C locale. Err, multiple encodings don't work full-stop. Well, yes, I was thinking of multiple client encodings, which can be supported either via a C-locale initdb with the databases set to the encoding you require (but sorting/etc won't work, I know) or by doing a unicode initdb and using unicode databases. In each case the client encodings can be multiple ones - as long as conversion is possible. Sorting etc may still be wrong, but at least the proper characters are going in and coming back. Any particular locale (as defined by POSIX) is only really designed to work with one encoding. Sure. What I meant is that if you have a unicode database you can use several client encodings and get back the properly encoded characters. The fact that the C locale produces an order when sorting UTF8 text is really just luck. Yes. Here are a few data points from my Debian/Testing system in favour of not worrying too much about installed ICU size as it is being used by other packages anyway: We'd need a suitable patch first before we start worrying about that. I think diskspace is less of an issue now. Well, size did come up in a recent discussion so I thought I'd mention the above facts. Karsten
Re: [GENERAL] SQL injection in a ~ or LIKE statement
On Mon, Oct 23, 2006 at 07:58:30AM +0200, Harald Armin Massa wrote: adding: Judging from the mails of Federico, developer of psycopg2, he was also in the early notify circle of the 8.1.3-8.1.4 escaping improvement. So, if done correctly the DB API way, all escaping with psycopg2 is fine. On the psycopg2 list Federico writes that, technically, psycopg2 currently uses PQescapeStringConn and he plans on adding out-of-query bind parameter support at some point in the future. Karsten
Re: [GENERAL] exploiting features of pg to obtain polymorphism
On Mon, Oct 16, 2006 at 11:41:25AM +0200, Ivan Sergio Borgonovo wrote: You can put a unique constraint and a serial default on the parent table (such as a primary key). Insertion on a child table will fail if the key in question already exists in the base table. It may have come from another child table. Is it really true? http://www.postgresql.org/docs/8.1/static/ddl-inherit.html True enough. I just tried it against 7.4.13. However, it works for us because a) we don't need unique parent table fields apart from the parent table pk b) we never insert *explicitly* into the primary key field, neither via the base table nor via any child tables To make this safe we should probably put triggers onto the tables to make sure the pk isn't altered (IOW set it to DEFAULT in a BEFORE INSERT/UPDATE trigger). Updating the base table updates all the relevant child tables, too. Deletes will extend from base to child tables, too. That way I'll have a unique serial across all the child tables. I just need to take care to not use ONLY on update/delete on the base table or to INSERT into the base table directly (the latter isn't really harmful to the issue, however). It would be nice if at least delete fired by triggers on the parent worked. But it doesn't since rows inserted in children don't get inserted in parents (that's OK on a OO perspective). They do get inserted into the parent. But actions on the child tables do not fire parent table triggers :-( Audit tables have their own pk/fk relationships and their triggers but according to my knowledge they won't be considered unless you operate on those tables directly. If you operate on the data tables those triggers pk/fk won't be seen. True. But I still get the unique pks since I don't operate on them directly. Eventually, PG will enforce those constraints, too. You get a serial in children, not uniqueness. I do, but only because I never change the PKs explicitly, not (yet) because PG enforces it. 
Establishing use cases may over time contribute to raising inheritance improvements further up the TODO list in terms of priorities. even if I suspect the code is not complete enough to implement the features Yes. Eventually it is going to be something like Veil. Or rather, I suppose it will *be* (as in use) Veil. I didn't understand. Are you referring to this? http://veil.projects.postgresql.org/curdocs/index.html Yes. And, BTW, it's got nothing much to do with inheritance :-) But it could, thinking that tables might inherit from a Veil-enabled parent table or some such. Karsten
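The "set the pk back to DEFAULT in a BEFORE trigger" safeguard mentioned in this thread could be sketched like so (table, sequence, and trigger names are invented; the sequence name assumes a serial column pk on root_item):

```sql
-- Sketch: on INSERT force the inherited pk to its sequence default,
-- on UPDATE preserve the old value, so no statement on a child can
-- inject an explicit (possibly duplicate) key.
create function force_pk_default()
    returns trigger
    language plpgsql
    as '
begin
    if TG_OP = ''INSERT'' then
        NEW.pk := nextval(''root_item_pk_seq'');
    else
        NEW.pk := OLD.pk;
    end if;
    return NEW;
end;';

create trigger trg_force_pk
    before insert or update on child_a
    for each row execute procedure force_pk_default();
```

One such trigger per child table keeps the serial unique across the whole inheritance tree even against explicit key injection.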
Re: [GENERAL] exploiting features of pg to obtain polymorphism
On Thu, Oct 12, 2006 at 04:40:32PM +0200, Ivan Sergio Borgonovo wrote: Anyway it doesn't solve the problem of having lists that can contain different elements with the same parent and maintain referential integrity. Only to some degree. You can put a unique constraint and a serial default on the parent table (such as a primary key). Insertion on a child table will fail if the key in question already exists in the base table. It may have come from another child table. Updating the base table updates all the relevant child tables, too. Deletes will extend from base to child tables, too. That way I'll have a unique serial across all the child tables. I just need to take care to not use ONLY on update/delete on the base table or to INSERT into the base table directly (the latter isn't really harmful to the issue, however). Now back to gm code. I see you've data tables with their pk/fk relations and triggers in one schema that inherit from audit tables in another. Yes. You've a function that helps to put tables that have to be audited in another table, nothing special compared with an insert with the exception of some extra control on input. Yes. Audit tables have their own pk/fk relationships and their triggers but according to my knowledge they won't be considered unless you operate on those tables directly. If you operate on the data tables those triggers pk/fk won't be seen. True. But I still get the unique pks since I don't operate on them directly. Eventually, PG will enforce those constraints, too. Considering you forbid direct insert, update and delete on those tables, while pk/fk etc... are still a security net, it seems that those relationships will never really be used. True as of today. Later on you grant the same operations to gm-doctors. This further puzzles me Well, all objects are owned by gm-dbo. Our bootstrapper does that. So I need to grant access rights to some people. Namely those in the group gm-doctors. 
even if I suspect the code is not complete enough to implement the features Yes. Eventually it is going to be something like Veil. Or rather, I suppose it will *be* (as in use) Veil. Finally I read: comment on table audit.audited_tables is 'All tables that need standard auditing must be recorded in this table. Audit triggers will be generated automatically for all tables recorded here.'; But I can't see anything doing this. gmAuditSchemaGenerator.py in server/bootstrap/ There is one point of contact between what I did already and what I would like to do but I still haven't had a good idea how to implement it. The use of metadata. But definitely I can't see polymorphism in your use of inheritance. Surely not to the extent a C++ programmer would hope for. Any second chance to find an OO use of inherits, Not that I know of. cos this seems to be the only OO construct of pg. Surely not. SPs can be overloaded. Datatypes can be extended. Karsten
Re: [GENERAL] Is it possible to return custom type as proper ROW?
On Wed, Oct 11, 2006 at 02:08:03PM -0700, Jeff Davis wrote: select q.*, bar from ( select a, b, c from foo ) q; What is bar? XMIN, for example Karsten
Re: [GENERAL] Storing images in PostgreSQL databases (again)
On Thu, Oct 05, 2006 at 05:08:27PM -0600, Leonel Nunez wrote: I think the arguments for keeping stuff inside the database are (a) far easier to maintain transactional semantics for insert/delete, and (b) easier to serve the data out to clients that aren't on the same machine. You aren't going to find a performance win though. (c) easy to replicate I don't follow that. Suppose your database minus images is 3 GB, and your images are another 50 gigabytes. Which is easier to replicate, 3 or 53? Put the images on a file server, separate from the DBs - no need to replicate them. yes, 3 GB are *faster* than 53 GB, but 53 is as easy as 3 or 100 And, the above only applies to *initial* costs of replication. Karsten
Re: [GENERAL] exploiting features of pg to obtain polymorphism maintaining ref. integrity
On Fri, Oct 06, 2006 at 11:09:29PM +0200, Ivan Sergio Borgonovo wrote: Is there any good documentation, example, tutorial, pamphlet, discussion... to exploit pg features to obtain polymorphic behavior without renouncing to referential integrity? In GNUmed we use it to aggregate text fields over a range of child tables and for auditing: http://cvs.savannah.gnu.org/viewcvs/gnumed/gnumed/server/sql/?root=gnumed look at gmAudit*.sql and gmclinical.sql Karsten
Re: [GENERAL] in failed sql transaction
On Mon, Sep 25, 2006 at 03:16:07PM +0530, Gurjeet Singh wrote: All other databases I used up to now just ignore the statement violating the constraint, but leave the transaction intact. Which databases behave that way? Does COMMIT succeed even if some statements failed? Oracle, for one, behaves that way... Yes, COMMIT does succeed even if some statement(s) threw errors. Probably, the 'other' DBs have implemented that by an implicit savepoint just before a command, rolling back to it automatically if the transaction fails. This is quite a desirable feature... Why bother with transactions at all if autocommit is enabled ?? Karsten
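The implicit-savepoint behaviour being described can be spelled out explicitly in PostgreSQL (8.0 or later) with SAVEPOINT; the table name here is invented:

```sql
-- Emulating per-statement error recovery by hand:
begin;
insert into t values (1);
savepoint before_risky;
insert into t values (1);            -- fails, e.g. on a unique constraint
rollback to savepoint before_risky;  -- transaction is usable again
insert into t values (2);
commit;                              -- 1 and 2 commit despite the error
```

This is also what psql's ON_ERROR_ROLLBACK setting automates in later releases.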
Re: [GENERAL] in failed sql transaction
On Mon, Sep 25, 2006 at 05:40:56PM +0530, Gurjeet Singh wrote: In this case PostgreSQL does the right thing; something went wrong, queries after the error may very well depend on that data - you can't rely on the current state. And it's what the SQL specs say too, of course... In an automated/programmatic access to the database, this might be desirable; but when there's someone manually doing some activity, it sure does get on one's nerves if the transaction till now was a long one. Instead, the operator would love to edit just that one query and fire again! Well, psql does it just that way. It implements auto-commit on behalf of the user unless a transaction is explicitly started. Also, in automated/programmatic access, the programs are supposed to catch the error and rollback/correct on their own. Sure, but that of course does not relieve the database of aborting the transaction on its own as soon as something goes wrong. And for the sake of efficiency the transaction should be aborted right there and then, and subsequent queries can be ignored until the end of the transaction. This is easier on CPU cycles and memory consumption. Karsten
[GENERAL] execute/perform and FOUND
Hi, I am utterly confused now. Running the attached script on Debian:

postgresql:
  Installed: 7.5.21
  Candidate: 7.5.21
  Version table:
 *** 7.5.21 0
        990 ftp://ftp.gwdg.de testing/main Packages
        100 /var/lib/dpkg/status
     7.4.7-6sarge3 0
        500 ftp://ftp.gwdg.de stable/main Packages
        500 ftp://ftp.de.debian.org stable/main Packages
     7.4.7-6sarge2 0
        500 http://security.debian.org stable/updates/main Packages

gives the result below. It seems inconsistent to me with regard to the FOUND variable. I would expect FOUND to always be false regardless of whether I use EXECUTE or PERFORM. I certainly do not expect it to be true for the third EXECUTE even assuming that PERFORM may have a bug. What is it that I am missing out on here ?

BEGIN
CREATE TABLE
CREATE FUNCTION
select * from test;
 fk_item
---------
(0 rows)

select test();
psql:00-test.sql:33: NOTICE: running: select 1 from test where fk_item=1324314
psql:00-test.sql:33: NOTICE: found (execute 1): f
psql:00-test.sql:33: NOTICE: found (execute 2): f
psql:00-test.sql:33: NOTICE: found (perform): t
psql:00-test.sql:33: NOTICE: found (execute 3): t
 test
------
 t
(1 row)

rollback;
ROLLBACK

Thanks, Karsten

begin;

create table test (fk_item integer);

create function test() returns boolean language 'plpgsql' as '
declare
    cmd text;
begin
    cmd := ''select 1 from test where fk_item=1324314'';
    raise notice ''running: %'', cmd;
    execute cmd;
    raise notice ''found (execute 1): %'', found;
    execute cmd;
    raise notice ''found (execute 2): %'', found;
    perform cmd;
    raise notice ''found (perform): %'', found;
    execute cmd;
    raise notice ''found (execute 3): %'', found;
    return true;
end;
';

\set ECHO queries
select * from test;
select test();
rollback;
Re: [GENERAL] execute/perform and FOUND
On Thu, Sep 21, 2006 at 01:06:32PM +0200, Karsten Hilbert wrote: I am utterly confused now. Running the attached script on Debian: [...] PostgreSQL 7.4.13 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2 20060729 (prerelease) (Debian 4.1.1-10) The 7.4 docs don't say anything about EXECUTE setting FOUND, but regardless of that the PERFORM result still seems faulty: A PERFORM statement sets FOUND true if it produces (and discards) a row, false if no row is produced. I don't expect a row to be produced by the example. Assuming EXECUTE does not touch FOUND at all, the EXECUTE part behaves consistently (namely the default FOUND=false at the beginning, and later whatever it was after the PERFORM). So, what about the PERFORM? Why does it set FOUND to true? Karsten
Re: [GENERAL] execute/perform and FOUND
On Thu, Sep 21, 2006 at 01:06:32PM +0200, Karsten Hilbert wrote: gives the result below. It seems inconsistent to me with regard to the FOUND variable. I would expect FOUND to always be false regardless of whether I use EXECUTE or PERFORM. I certainly do not expect it to be true for the third EXECUTE, even assuming that PERFORM may have a bug. What is it that I am missing out on here? [...] And, no, in the actual situation I cannot get rid of the EXECUTE or PERFORM because I need a dynamically generated query. It's just that the example presented here was reduced to a static query that could simply be run as is, too. Karsten
Re: [GENERAL] execute/perform and FOUND
On Thu, Sep 21, 2006 at 01:32:02PM +0200, Martijn van Oosterhout wrote: So, what about the PERFORM? Why does it set FOUND to true? I believe that since PERFORM doesn't return anything, it sets the FOUND variable to at least indicate whether it did something. The docs say that it sets FOUND to true if it found rows and discarded them, and sets FOUND to false if it did NOT find rows (and thus did not discard any, either). This would indeed make sense. However, reality seems to look different. In the example I posted it *sets* FOUND to true even though it couldn't find any rows. This is counter-intuitive. I just want to make sure this is the intended behaviour (in which case I will have to work around it) or whether it's me misunderstanding something - because it seems counter-intuitive. It's a bit weird, but I think later versions changed EXECUTE to set FOUND also, just to be consistent. That would be good but doesn't influence the PERFORM issue. Thanks anyways, Karsten
Re: [GENERAL] execute/perform and FOUND
On Thu, Sep 21, 2006 at 01:32:02PM +0200, Martijn van Oosterhout wrote: I believe that since PERFORM doesn't return anything, it sets the FOUND variable to at least indicate whether it did something. The weird thing is that in the example it sets FOUND to true even if it did NOT do anything. Setting FOUND to true because it *succeeded* in doing nothing is not helpful because not succeeding would abort the transaction anyway and throw an exception. Regards, Karsten
Re: [GENERAL] execute/perform and FOUND
On Thu, Sep 21, 2006 at 02:50:08PM +0200, Marcin Mank wrote: With: perform cmd; raise notice ''found (perform): %'', found; You effectively do: select 'select 1 from test where fk_item=1324314' ; Try: perform 1 from test where fk_item=1324314 Marcin, you saved my day. I knew I was being stupid somewhere. It's not like I never used PERFORM before but, hey, there you go :-)) The docs do hint at how to do it properly: PERFORM create_mv(''cs_session_page_requests_mv'', my_query); but it might be helpful to point this out explicitly: PERFORM create_mv(''cs_session_page_requests_mv'', my_query); Note that the PERFORM replaces the SELECT in the query. Thanks, Karsten
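For the archives, a sketch of the distinction (based on the example in this thread): PERFORM substitutes for the SELECT keyword and does not evaluate a string variable as a query; a dynamically built statement needs EXECUTE.

```sql
-- Inside a plpgsql function, with
-- cmd := ''select 1 from test where fk_item=1324314'':

PERFORM cmd;                                  -- wrong: selects the *string* cmd;
                                              -- one row is produced, FOUND becomes true

PERFORM 1 FROM test WHERE fk_item = 1324314;  -- right for a static query

EXECUTE cmd;                                  -- right for a dynamic query
                                              -- (7.4 EXECUTE does not set FOUND;
                                              --  later releases changed that)
```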
Re: [GENERAL] IF EXISTS
On Wed, Sep 20, 2006 at 04:12:16PM -0700, Jeff Davis wrote: In 8.2, which is currently still in development, they have added the feature where you can do things like: DROP SEQUENCE IF EXISTS mysequence; CREATE SEQUENCE mysequence; which makes writing SQL scripts much easier. They also made it work for other objects, like DROP TABLE IF EXISTS, etc. Hurrah! Karsten
Re: [GENERAL] Create user or role from inside a function?
On Fri, Sep 01, 2006 at 08:13:14PM +1000, Dan wrote: I am running PostgreSQL 8.1.4 and I want to create a user from inside a function. Is this possible in 8.1? ... I have a function like this: ... Executing this function yields: # SELECT user_create('bob',1234,'bobspassword'); ERROR: column CREATE USER does not exist CONTEXT: SQL statement SELECT CREATE USER || $1 || WITH PASSWORD || $2 PL/pgSQL function user_create line 2 at execute statement ... Any tips would be appreciated. http://cvs.savannah.gnu.org/viewcvs/gnumed/gnumed/server/sql/gmCreateUserFunction.sql?rev=1.5&root=gnumed&view=markup This might help. Karsten
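The error quoted above typically means the CREATE USER string was not run via EXECUTE, so plpgsql tried to parse it as an expression. A sketch of one way it can be done (function and parameter names invented here, not taken from the linked file):

```sql
-- Must be created by a superuser; SECURITY DEFINER lets non-superusers
-- call it, which is exactly why EXECUTE rights on it should be restricted.
CREATE FUNCTION user_create(name, text) RETURNS boolean AS '
BEGIN
    -- quote_ident/quote_literal guard against broken or malicious input
    EXECUTE ''CREATE USER '' || quote_ident($1)
         || '' WITH PASSWORD '' || quote_literal($2);
    RETURN true;
END;
' LANGUAGE 'plpgsql' SECURITY DEFINER;
```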
Re: [GENERAL] invalid byte sequence ?
On Fri, Aug 25, 2006 at 01:53:30PM +0200, Peter Eisentraut wrote: In that case I would suggest to also emit a suitable warning (with a postgresql.conf option to switch that off which defaults to ON). libpq can neither read postgresql.conf nor does it have the liberty to write messages anywhere. LOL, duh, of course. Don't know how I got that idea. Karsten
Re: [GENERAL] Saving a jpg into Postgresql table
On Fri, Aug 25, 2006 at 10:17:34AM -0700, Jeff Davis wrote: It takes approximately 25-30% more disk space but is much easier for me to operate with it. When I read the object from the database I decode it and I have the file in the original format. Why not go a step further and do this: (1) encode the image as base64 (2) insert into mytable(image) values(decode('base64 encoding of image','base64')); Then, to get it back: (1) select encode(image,'base64') from mytable; (2) decode the base64 into your image Should that jumping through the base64 hoops even be necessary? I thought that I could just send the raw data when it ends up in a bytea field. However, your advice and previous suggestions make me have a sneaking suspicion that it still depends on how my PG library puts the data on the wire: a) as a string inside the query itself (in which case it should not be touched by encoding conversions as it's headed towards a bytea field, but still needs to be quoted properly which, again, the library should do) b) in raw binary if bound parameters are used (values transmitted separately from the query) Am I correct? Karsten
Re: [GENERAL] invalid byte sequence ?
On Thu, Aug 24, 2006 at 01:17:49PM -0400, Tom Lane wrote: I guess the key point might be what do we do if the client locale is C? Perhaps if it's C, we continue to use the server encoding as we have in the past. This would be a reasonable fallback in other cases where we fail to deduce an encoding from the locale, too. In that case I would suggest to also emit a suitable warning (with a postgresql.conf option to switch that off which defaults to ON). Karsten
Re: [GENERAL] How do i store arbitrary questions and answers in SQL?
On Wed, Aug 23, 2006 at 04:13:24PM +0300, Enver ALTIN wrote: CREATE TABLE Answer(ID INT, questionID INT, answer_text VARCHAR(255), datatype INT) You can actually store answer_text as a BLOB in the database that is packed in any format your application can handle. You can serialize a class that represents the answer you need and store it, for example. There was an article on that on Elein's General Bits not too long ago. Karsten
Re: [GENERAL] [8.1.4] Create index on timestamp fails
On Wed, Aug 23, 2006 at 09:07:35AM -0400, Alvaro Herrera wrote: Another idea would be to separate the date column (which would have the index) from the time column (which would have the timezone). The timezone is important -- if you have bloggers from all around the world you're gonna have serious problems with the archived time. Would that indeed work? I mean, depending on the time zone the *date* might be different by +/-1, too? Karsten
Re: [GENERAL] [8.1.4] Create index on timestamp fails
On Wed, Aug 23, 2006 at 09:42:00AM -0400, Tom Lane wrote: It sounds a bit bogus to me too. Another possibility is to keep the data storage as timestamptz (which is really the recommended type for any sort of real time values), and define the index on date_part('day', entry_time AT TIME ZONE 'GMT') That definitely sounds reasonable. (or whatever zone you choose to use for reference). However, to use the index you'd have to spell the queries exactly like that, so the PITA factor might be too high. An SQL function gmt_tz(timestamptz) might help to cut down on the fuss: select ... from tbl where gmt_tz(tbl.a_tz) between ...; Karsten
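The helper-function idea can be sketched like this (function, table and column names invented; a wrapper must be IMMUTABLE to be usable in an expression index, which holds here because the time zone is a constant):

```sql
-- wrap the exact expression so queries do not have to repeat it
CREATE FUNCTION gmt_day(timestamptz) RETURNS double precision AS
    'SELECT date_part(''day'', $1 AT TIME ZONE ''GMT'')'
LANGUAGE 'sql' IMMUTABLE;

CREATE INDEX idx_entry_gmt_day ON blog_entry (gmt_day(entry_time));

-- the query must use the same expression for the index to be considered:
SELECT * FROM blog_entry WHERE gmt_day(entry_time) = 23;
```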
[GENERAL] GNUmed release (uses PostgreSQL)
Dear fellow PostgreSQL users, we are happy to announce the release of GNUmed 0.2 Librarian. GNUmed is an electronic medical record which uses PostgreSQL for its backend. Some of the new features since the last release:

- server can be installed on MS/Windows, too (courtesy of PostgreSQL now running natively on Windows :-)
- a complete document management system
  - importing documents from files
  - scanning in documents
  - handling of review status
  - handling of clinical relevance
  - taking patient pictures with a webcam
  - completely integrated with the clinical structure of the medical record (courtesy of PostgreSQL's bytea support)

The advanced structured electronic medical record has improved upon its existing robustness and offers:

- grouping of health data by
  - underlying health issue
  - episodes of activity of a health issue
  - doctor-patient encounter
- structuring of progress notes into
  - Subjective
  - Objective
  - Assessment
  - Plan

Courtesy of PostgreSQL's robust support for foreign keys, constraints, inherited tables, stored procedures etc.

Packages are here: http://wiki.gnumed.de/bin/view/Gnumed/InstallerGuideHome Debian packages will be available in a few days' time (in testing). Find more documentation here: http://wiki.gnumed.de Thanks for a dependable ORDBMS. Regards, Karsten Hilbert, MD GNUmed developer
Re: [GENERAL] LISTEN considered dangerous
On Tue, Aug 01, 2006 at 07:16:39PM +0200, Flemming Frandsen wrote: This way we could even have wildcard listens, imagine doing a listen % and getting all the generated events:) That'd be awesome. Along with a data field in the listen structure, please :-) Karsten
Re: [GENERAL] Mapping/DB Migration tool
On Wed, Jul 26, 2006 at 08:48:14AM -0700, Reece Hart wrote: In case you're interested in these pgtools views, I've uploaded them to http://harts.net/reece/pgtools/ . I am looking into it. Any chance you could do a text dump with --no-owner --no-acl? Thanks, Karsten
timestamp with definable accuracy, was: Re: [GENERAL] empty text fields
On Thu, Jun 29, 2006 at 12:02:40PM +0200, Alban Hertroys wrote: This kind of reeks like a begin/end date and an accuracy quantifier, though that wouldn't account for option 6. Your cases 0 to 5 and 7 would be transformed into something like: ... Where I defined '5' as being accurate, and lower values less accurate. You may want to use values with a wider spread, it'll allow more fuzziness about how sure you are about a certain date. Just for your information: In our Python implementation of a fuzzy timestamp type we used accuracy values ranging from 1 to 7 denoting the precision of a complete timestamp definition:

7 - full subsecond accuracy (7 digits precision)
6 - seconds
5 - minutes
4 - hours
3 - days
2 - months
1 - years

Karsten
Re: [GENERAL] Long term database archival
On Fri, Jul 07, 2006 at 09:09:22AM -0700, Richard Broersma Jr wrote: I think that in twenty years, most of us will be more worried about our retirement than the long-term data concerns of the companies we will no longer be working for. :-D You may want to take precautions now such that you start getting *more* healthy towards retirement rather than less. Because your old medical record cannot be accessed any longer. Karsten
Re: [GENERAL] different sort order in windows and linux version
On Sun, Jul 02, 2006 at 12:13:02PM +0200, Martijn van Oosterhout wrote: However, the most important point is that people have said they'll take the speed hit if they could get consistent collation. I can second that. Karsten
Re: [GENERAL] Question concerning arrays
On Tue, Jun 27, 2006 at 01:43:21PM +0200, Christian Rengstl wrote: I am in the middle of breaking my head over designing a database and came to the following question/problem: I have persons whose values (integer) have to be entered in the db, but per person the amount of values ranges from 10 to around 50. Now my question is if it makes sense, concerning performance, to store these integer values in arrays or if it is better to change the design so that the values are stored separately in fields in tables. I have to add that it is not unlikely (or at least it won't happen very often) that select queries will try to find one of those specific values, but rather something like select * from persons where person_id=... If each value has a distinct medical meaning (think blood sugar readings) which makes sense on its own apart from all the other values, it should very likely be stored as a separate value in the database. If, however, it is just one integer in a large batch of them (think raw values of EEG readings) which only really make sense in conjunction with each other - and even then mainly to the application - they should probably be stored as arrays or some other aggregate datatype. You might still want to query for single values out of the batch at times - detecting outliers, fitting curves, detecting peaks of activity, etc. Karsten Resident physician for pediatrics, surgery and general medicine, Leipzig
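The two designs side by side, as a sketch (table and column names invented):

```sql
-- (a) one row per value: each reading is individually addressable and
--     indexable; appropriate when single values carry meaning of their own
CREATE TABLE person_value (
    fk_person integer,
    val integer
);

-- (b) one array per person: compact, fetched as a whole; appropriate for
--     raw batches that are mainly decoded by the application
CREATE TABLE person_values (
    fk_person integer,
    vals integer[]
);

-- single elements remain reachable even in the array design,
-- e.g. the third reading of one person:
SELECT vals[3] FROM person_values WHERE fk_person = 1;
```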
Re: [GENERAL] empty text fields
On Wed, Jun 28, 2006 at 06:25:22PM +0200, Leif B. Kristensen wrote: event_date CHAR(18) NOT NULL DEFAULT The event_date field is a fuzzy date construct. It will allow the storage of such dates as 1784, ca. 1810, May 1852, 1798 or 1799, between 1820 and 1830 and so on. It's very useful in historical research to handle such dates meaningfully. How and where do you handle the fuzziness of it? In the application? We have to deal with the same thing in medical history data and haven't yet satisfactorily solved it. By the way, I was also going to ask sometime if there's a better way to handle such a construct than an unspecified CHAR(18) column. A composite type comes to mind. Tagged types (google for tagged_type) would probably help, too. A full-blown implementation of a fuzzy timestamp type which a) preserves the input/update timestamp b) allows setting the accuracy of the value per row c) allows for known modifiers and terms (mid-summer, second half of ...) d) allows for an arbitrary textual addition (ca., probably) would be great. I know I should be helping to write one instead of hoping someone does it for me. But I lack the expertise to do it properly. I am willing to help, at any rate, though. Karsten
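One shape the composite-type idea might take in PostgreSQL 8.0 or later (a sketch only; all field names invented, and points c) and d) reduced to a plain text field):

```sql
CREATE TYPE fuzzy_timestamp AS (
    val_earliest timestamp,  -- lower bound of the fuzzy range
    val_latest   timestamp,  -- upper bound of the fuzzy range
    accuracy     smallint,   -- e.g. 1 = years ... 7 = subsecond
    modifier     text        -- free text: 'ca.', 'probably', ...
);

CREATE TABLE historical_event (
    id serial,
    happened fuzzy_timestamp
);
```

Range queries would then compare against val_earliest/val_latest, while the application renders the value according to accuracy and modifier.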
Re: [GENERAL] empty text fields
self-replying with additional information On Wed, Jun 28, 2006 at 09:33:18PM +0200, Karsten Hilbert wrote: A full-blown implementation of a fuzzy timestamp type which a) preserves the input/update timestamp which tagged_types is able to handle b) allows setting the accuracy of the value per row which I have a Python (application) class for wrapping mx.DateTime.DateTime c) allows for known modifiers and terms (mid-summer, second half of ...) d) allows for an arbitrary textual addition (ca., probably) which tagged_types should be able to handle as well Karsten
Re: [GENERAL] New DBs from existing schemas
On Wed, May 24, 2006 at 04:14:46PM -0700, Nishad Prakash wrote: I want to create a new database with the exact schema of an existing one, but a different name. After some reading, it seems pg_dump -s old_db old_schema createdb -t old_schema new_db Now new_db is a structural copy of old_db - triggers and all, though without the data, since -s dumps the schema only. You may need to set up login permissions in pg_hba for users to be able to connect to new_db. Karsten
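Spelled out with standard tool invocations (file and database names are placeholders; note pg_dump -s dumps the schema without any data):

```shell
pg_dump -s old_db > old_schema.sql   # schema only: tables, triggers, functions ...
createdb new_db                      # empty database under the new name
psql -d new_db -f old_schema.sql     # replay the schema into the new database
```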
Re: [GENERAL] Feature-Request: Login-Procedure
On Thu, May 11, 2006 at 12:30:13PM +0200, Tino Wildenhain wrote: - create a greeting ;-) (who has birthday?, ...) I was not aware so many people work at the psql console regularly :-) But of course! :-) Karsten
Re: [GENERAL] sudo-like behavior
On Thu, Apr 20, 2006 at 04:06:19PM -0400, A.M. wrote: The problem is that I wish to run arbitrary SQL as an unprivileged user Would wrapping the SQL in a stored procedure with security definer help any? Karsten
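A minimal sketch of that suggestion (all names invented): the body of a SECURITY DEFINER function runs with the privileges of the function's owner, not the caller, so an unprivileged user can be allowed to run exactly this one piece of SQL and nothing more.

```sql
CREATE FUNCTION purge_old_audit_rows() RETURNS void AS '
BEGIN
    -- runs with the owner''s rights even when called by an unprivileged user
    DELETE FROM audit_log WHERE logged_at < now() - interval ''1 year'';
END;
' LANGUAGE 'plpgsql' SECURITY DEFINER;

-- restrict who may trigger the privileged action
REVOKE EXECUTE ON FUNCTION purge_old_audit_rows() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION purge_old_audit_rows() TO unprivileged_user;
```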
Re: [GENERAL] questions?
On Fri, Mar 10, 2006 at 01:59:13PM +1100, Chris wrote: xia_pw wrote: Hi, I have read the source code of pgsql these days, and I want to know which part of the source code deals with executing SQL (SELECT, ALTER, and so on), and which function deals with query processing. Thanks! Why? If you want to add functionality you'll need to discuss it first Nope. Only if xia_pw wants it to propagate into the official sources. Karsten
Re: [GENERAL] record OID to table
On Fri, Mar 03, 2006 at 09:01:53PM -0700, Michael Fuhr wrote: Could you explain what you're trying to do without reference to how you're trying to do it? It sounds like the goal is to take an arbitrary string and find out what rows in what tables contain that string. Is that right? If so then I'm not sure how to best solve that problem; maybe somebody else will have some ideas. If it happens to be one text field per table it can be solved with inheritance. Inherit the text field from a parent table and search through that, eventually deducing the appropriate child table with tableoid where needed. Karsten
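A sketch of the inheritance idea (table and column names invented): the parent carries the text column, the children inherit it, and tableoid reveals which child table a match came from.

```sql
CREATE TABLE searchable (txt text);
CREATE TABLE note (author text) INHERITS (searchable);
CREATE TABLE memo (topic  text) INHERITS (searchable);

-- scans the parent and all child tables in one go:
SELECT tableoid::regclass AS source_table, txt
FROM searchable
WHERE txt LIKE '%needle%';
```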
Re: [GENERAL] Wish: remove ancient constructs from Postgres
On Mon, Feb 27, 2006 at 12:25:57AM +0300, Nikolay Samokhvalov wrote: Please help. how? ... PostgreSQL has very-very good documentation, but it teaches to go Pg's way, which is not right in that sense, unfortunately... By supplying documentation patches, perhaps? Karsten
Re: [GENERAL] Wish: remove ancient constructs from Postgres
On Sun, Feb 26, 2006 at 10:36:23AM +0200, Andrus Moor wrote: It is difficult to write standard-compliant code in Postgres. There are a lot of constructs which have SQL equivalents but are still used widely, even in samples in the docs! For example, there are suggestions using ... Bruce seems to be attempting to start this process by trying to implement escape_string_warning in postgresql.conf. However, this is only a very minor step. Please clean up Postgres. Please help. Thanks, Karsten
Re: [GENERAL] How to specify infinity for intervals ?
On Fri, Feb 24, 2006 at 10:09:25PM -0500, Bruce Momjian wrote: Karsten Hilbert wrote: I will also stay with the hope that one day before long we will have 'infinite'::interval. We have this TODO: o Allow infinite dates just like infinite timestamps Do we need to add intervals to this? Yes. Thanks. Karsten
Re: [GENERAL] How to specify infinity for intervals ?
Thanks to all for the suggestions. For the time being I will stay with using NULL. I will also stay with the hope that one day before long we will have 'infinite'::interval. Karsten
Re: [GENERAL] How to specify infinity for intervals ?
On Tue, Feb 21, 2006 at 12:24:09PM +0900, Michael Glaesemann wrote: I don't know the details of your database schema, If you want to you can look it up here: http://salaam.homeunix.com/twiki/bin/view/Gnumed/DatabaseSchema Feel free to comment ! but I think the relationally proper way to do would be to have a separate table for the maximum ages for vaccinations that have them. I know, but, oh no, yet another table ;-( And it would also just lead to another form of NULL via left joins as you point out below. I specifically wanted to avoid that by something like 'infinite'::interval in some way or other such that I could always do ... where now date_of_birth + max_age ... and not need an ... or max_age is null ... in all the places. Vaccinations that *don't* have a maximum age would not have an entry in the table. As you notice further down my predicate was wrong, actually. You found the proper predicate by yourself, though: Do not care about the age of the patient when deciding whether to give this vaccination. The special value method, e.g., 999 years is another way of indicated a special value, but in this case I think it's a bit different. As I see it, the predicate for the vaccination_max_ages table is The vaccination 'vaccination' must be given before the patient is 'maximum_age'. Using a special value changes this predicate to The vaccination 'vaccination' can be given at any time in the patient's life. As you point out, using a sufficiently large interval for maximum_age makes that statement very likely to be true, but the predicate is not exactly the same. Not having an entry in vaccination_max_ages is much closer to the idea that the vaccination has no maximum age. Well, but there's not really a medical difference between the two AFAICT. That's the theory, anyway. Hope this helps a bit. It confirms my thinking isn't entirely wrong. Currently on the todo list there's a mention of adding infinite dates, similar to infinite timestamps. 
Perhaps infinite intervals could be added as well? I'd be delighted to have that happen. Karsten
[GENERAL] How to specify infinity for intervals ?
I am storing the maximum age a vaccination is due in a patient (e.g. don't give this vaccination beyond the age of 10 years or some such). Some vaccinations are to be given regardless of age. Up to now I have used NULL to mean no maximum age. That doesn't really feel right and also complicates the SQL needed for retrieving data. I *could*, of course, use something like '999 years' as a special value to indicate no upper limit, figuring that no one is going to live that long in the foreseeable future. However, the technically elegant and satisfying solution would be to be able to use 'infinite' with interval data types much like 'infinity' with timestamps. I have tried various syntax attempts, calculations and casts but haven't found any returning an interval of infinite length. The docs and Google don't help, either. I am running 7.4.9 on Debian 4.0. Anyone want to comment/suggest something? Thanks, Karsten GNUmed developer
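Until such a type exists, the two workarounds can be combined so that queries stay free of the extra OR max_age IS NULL clause; a sketch with invented table and column names:

```sql
-- max_age IS NULL means "no maximum age"; COALESCE folds that
-- into a practically unreachable limit at query time:
SELECT v.vaccination
FROM vaccination_schedule v, patient p
WHERE p.id = 42
  AND now() < p.date_of_birth + coalesce(v.max_age, '999 years'::interval);
```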
Re: [GENERAL] Last modification time
Use LISTEN/NOTIFY and a trigger. Karsten
Re: [GENERAL] creating users per database
On Sat, Jan 28, 2006 at 06:17:16PM -0500, Tom Lane wrote: I am using a create_user() SP created by postgres with security definer (gasp). This works just fine, however, it transfers createuser rights to *anyone* allowed to connect to the database the function is in. Not if you restrict who's allowed to execute the function. Revoke the default public EXECUTE right on it, and grant to just who you want. Duh, I forgot about this privilege. Yeah, that serves the purpose pretty well. A good way to manage this is to grant the EXECUTE right to a group (say wheel) and then be careful who you grant membership in wheel to. We have a dedicated DB account gm-dbo (GNUmed DataBase Operator) who owns all the database objects but which I did not want to give superuser rights to if avoidable. I'll grant execute to that user only. Then I'll use the standard procedure of requesting the gm-dbo password from the user inside the application for establishing a gm-dbo db connection when gm_create_user() is to be called. Much like su - root or switching to System mode in the KDE control center. It'll go away as soon as we think of a better way ;-). I wouldn't expect to lose functionality, but the syntax will likely change. Well, that's fine :-) You do not understand: samegroup means PG users who are members of the PG group named the same as the database can connect to the database. This has *nothing* to do with any OS-level notions. Aha! Indeed, the 7.4 documentation wasn't clear enough on that (for me): ... The value samegroup specifies that the requested user must be a member of the group with the same name as the requested database. ... Might this be amended to say ... a member of the *database* group with ...? 
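The scheme Tom describes, sketched against the function from this thread (the exact argument list of gm_create_user() is assumed here):

```sql
-- by default anyone may call a function; take that away first
REVOKE EXECUTE ON FUNCTION gm_create_user(name, text) FROM PUBLIC;

-- then hand execution back to the one account that should have it
GRANT EXECUTE ON FUNCTION gm_create_user(name, text) TO "gm-dbo";
```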
Karsten Hilbert, MD, GNUmed developer
Re: [GENERAL] creating users per database
On Sun, Jan 29, 2006 at 02:01:51PM -0500, Tom Lane wrote: The 8.1 documentation uses the term role, which seems unlikely to be confused with Unix groups: ... Good enough? Yes :-) Karsten
[GENERAL] creating users per database
Hi all, I have a faint memory of it being possible to create users inside *one* given database by way of a particular create user syntax along the lines of: create user [EMAIL PROTECTED] ...; or similar. Was this ever possible in PostgreSQL or is my memory playing tricks on me ? I went back through the manuals all the way to 6.5 and searched archives.postgresql.com/Google, alas, to no avail. I am aware that I can (and should) restrict users to databases by way of pg_hba and friends. Karsten
Re: [GENERAL] creating users per database
On Sat, Jan 28, 2006 at 11:04:09AM -0500, Tom Lane wrote: I have a faint memory of it being possible to create users inside *one* given database by way of a particular create user syntax along the lines of: create user [EMAIL PROTECTED] ...; There is the db_user_namespace kluge^H^H^H^H^Hparameter, but it's pretty ugly. Ah, I see. So my memory didn't fail that much. Do you really need DB-specific user names, or just a more convenient way to limit which DBs a user can connect to? Well, yes and no. The situation is rather complex. Basically I am using a create_user() SP created by postgres with security definer (gasp). This works just fine, however, it transfers createuser rights to *anyone* allowed to connect to the database the function is in. I wanted to limit any potential damage that could be done by users created that way by restricting where they can possibly be used. I thought that if a user only exists in the context of my database it cannot be used to attack any other database no matter which way pg_hba.conf and friends might be misconfigured. Compartmentalization. But perhaps I shouldn't count on the db_user_namespace feature to exist in the future ? If the latter, the samegroup technique Philippe mentions seems like a good bet. It isn't a given that all database users will have system-level equivalents, hence samegroup won't work in all cases. Also, will samegroup work predictably across remote connections ? Karsten
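[Editor's note: for reference, the db_user_namespace parameter Tom mentions works roughly like this; the user and database names are invented for illustration.]

```sql
-- In postgresql.conf:  db_user_namespace = on
-- With that set, appending @dbname at creation time makes the user
-- specific to that database:
CREATE USER "joe@mydb" WITH PASSWORD 'secret';
-- A client connecting to mydb then logs in simply as "joe"; the server
-- appends @mydb internally before looking the user up.
```

Note also that samegroup in pg_hba.conf refers purely to PostgreSQL group membership, not to any OS-level account, so the concern about "system level equivalents" does not apply (as Tom clarifies in the first message of this thread).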
Re: [GENERAL] Arrays
On Thu, Jan 26, 2006 at 10:15:22AM -0800, Bob Pawley wrote: I would like to make a table of 20 plus columns the majority of columns being arrays. The following test works. The array will hold up to five characteristics of each parameter including the unit of measurement used. Using traditional methods I would need six columns to accomplish the same end (Min, Max, Norm plus a unit column for each). And why would that be undesirable ? Karsten
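[Editor's note: the two designs under discussion look roughly like this; table and column names are invented for illustration.]

```sql
-- Array variant: one column packs min, max, norm and the unit together.
CREATE TABLE parameter_a (
    id   serial PRIMARY KEY,
    temp text[]            -- e.g. '{10, 50, 30, degC}'
);

-- Traditional variant: one ordinary column per characteristic.
CREATE TABLE parameter_b (
    id        serial PRIMARY KEY,
    temp_min  numeric,
    temp_max  numeric,
    temp_norm numeric,
    temp_unit text
);
```

The second form lets each value keep a proper type, a per-column constraint, and a name, which is presumably why Karsten questions what would be undesirable about it.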
Re: [GENERAL] Plans for 8.2?
On Thu, Jan 12, 2006 at 07:46:18PM -0500, Tom Lane wrote:

/*
 * Transaction was committed/aborted/crashed - we have to update pg_clog
 * if transaction is still marked as running.
 */
if (!TransactionIdDidCommit(xid) && !TransactionIdDidAbort(xid))
    TransactionIdAbort(xid);

The comment's "have to" is an overstatement. The transaction would be treated as crashed anyway, it's just that this is a convenient place to make pg_clog a bit cleaner. I'm not real sure why we bother, actually. Because that's what makes PostgreSQL such a reliable product. You follow your intuition and taste and bother doing cleanup even if you cannot immediately tell whether it's *really* needed. Better safe than sorry. Not a bad idea for a database. Karsten (who is storing clinical data in PostgreSQL)
Re: [GENERAL] I want to know how to improve the security of postgresql
On Thu, Dec 29, 2005 at 09:22:09AM -0800, Marc Munro wrote: http://pgfoundry.org/projects/veil/ Marc, is there a higher-level written summary available somewhere to be read to understand conceptually how you implemented row-level security ? We will (in GNUmed) eventually have to implement row-level security. The current thinking is to restrict access to the tables and set up views that always do ... where user=current_user to limit the viewable data set. Karsten
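[Editor's note: the view-based approach described here can be sketched as follows; table, column and view names are hypothetical.]

```sql
-- Underlying table, not directly readable by ordinary users.
CREATE TABLE clin_note (
    id        serial PRIMARY KEY,
    owner     name NOT NULL DEFAULT current_user,
    narrative text
);
REVOKE ALL ON clin_note FROM PUBLIC;

-- View exposing only the calling user's own rows.  Views are executed
-- with the rights of their owner, so the view can read a table the
-- caller cannot touch directly.
CREATE VIEW v_my_notes AS
    SELECT id, narrative
    FROM clin_note
    WHERE owner = current_user;
GRANT SELECT ON v_my_notes TO PUBLIC;
```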
[GENERAL] veil docs
Sorry, found them. They were the first hit on a Google search I fired off while writing the last post. Karsten
Re: [GENERAL] I want to know how to improve the security of postgresql
On Sat, Dec 31, 2005 at 05:18:19PM +0100, Karsten Hilbert wrote: We will (in GNUmed) eventually have to implement row level security. The current thinking is by restricting access to the tables and setting up views that always do ... where user=current_user to limit the viewable data set. Which is - roughly speaking - what Veil does anyways :-) I'll link Veil with our TODO item on that part. Karsten