Re: [GENERAL] How do I implement a .XSD in Postgres?
On Sat, 2016-01-02 at 00:20, ERR ORR wrote:
> I need to import some DB schemas which are defined in .XSD (an XML
> derivative) format.
> I've googled about this but have found no satisfactory answer. Perhaps I
> just don't know what exactly to ask Google.
>
> So please:
> - What tool can I use in order to import .XSD schema definitions into
>   Postgresql 9.4.5?
> - If there is no tool, can you please direct me to a document at least
>   hinting at how to import a .XSD schema into a DB?
>
> Thanks and happy new year to all.

If I understand your requirement, you have a database definition in an XML dialect, and you want to convert that into a database.

If you can convert the XML dialect using XSLT or something similar, you can use skit (http://pgfoundry.org/projects/skit/) to convert the resulting XML into DDL, which can then be run using psql.

Unfortunately, I have not gotten around to documenting the XML schema for skit, but if you run the regression tests you should be able to get enough examples to figure it out.

If you want to give this a try, you can email me off list and I'll do what I can to help.

__
Marc

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] [pgsql-general] Daily digest v1.13732 (15 messages)
On Tue, 2015-08-25 at 15:41, Neil Tiffin ne...@neiltiffin.com wrote:
> I really like the standardization that PostgreSQL uses in auto-generating
> default names. The rule I use is to always use the auto-generated names
> unless the object is referenced routinely in code. In most cases
> developers don't care about index, unique, foreign key, or primary key
> names (from a coding standpoint), so why should they be creating the
> names? Since the PostgreSQL standard uses auto-generated names with
> '_pkey' for PRIMARY KEY, '_fkey' for FOREIGN KEY, and '_key' for UNIQUE,
> why not use the same rules for consistency? So I disagree with 6 and
> would extend 10 to include these other names if they are manually
> generated.

I prefer to take control of names in order to be certain that on multiple database instances the names will *always* be the same. This allows schema-diff tools (like my own skit) to provide more useful results. Although, as you point out, Postgres does a pretty good job of naming things, there are (or at least have been in the past) cases where names have not been predictable.

Furthermore, a policy of explicit naming seems to me a relatively light burden on a developer or DBA, and one that may even lead to more thought being applied during database object design. If the developer has to think of a name, they may be more inclined to think more deeply about the purpose of that object.

For the record, I favour using a double underscore to separate the table_name part of constraint names, etc, from any other parts of the name. So: account__name_idx would be an index on the name field of the accounts table; account_name__pk would be a primary key on the account_names table. It's a personal preference and works for me; your mileage may vary.

__
Marc
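To make the convention concrete, here is a sketch of explicitly naming everything with the double-underscore separator. All table and column names are invented for illustration; the convention itself is the only thing taken from the post.

```sql
-- Hypothetical table; every constraint is named explicitly so the
-- names are identical on every database instance.
CREATE TABLE accounts (
    account_id  integer NOT NULL,
    name        text    NOT NULL,
    manager_id  integer,
    CONSTRAINT accounts__pk PRIMARY KEY (account_id),
    CONSTRAINT accounts__name_key UNIQUE (name),
    CONSTRAINT accounts__manager_id_fkey
        FOREIGN KEY (manager_id) REFERENCES accounts (account_id)
);

-- accounts__manager_id_idx: an index on the manager_id column of the
-- accounts table, named the same way.
CREATE INDEX accounts__manager_id_idx ON accounts (manager_id);
```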
[GENERAL] PostgreSQL Developer Best Practices
On Sat, 2015-08-22 at 15:15, Melvin Davidson wrote:
> I've been searching for a PostgreSQL Developer Best Practices with not
> much luck, so I've started my own. At the risk of stirring up a storm of
> controversy, I would appreciate additional suggestions and feedback.

You might add:

Create all relation names as plurals. Or, if your site uses predominantly singular names, make that the standard. Consistency within the site is more important than any dogmatic belief about whether the singular or plural form is better. If you don't put it in the standard, someone will eventually create tables with names that don't gel with everything else.

__
Marc
[GENERAL] configuring library path for debian build of postgres 9.2
I'm trying to build postgres 9.2 beta1 on debian stable from sources, in a way that plays nicely with the debian cluster management stuff.

If I configure with libdir=/usr/lib/postgres everything works, but the shared libraries end up in the wrong place (ie just where I specified). If I configure with libdir=/usr/lib/postgresql/9.2, which is where I want the libraries to go, psql complains:

$ /usr/lib/postgresql/9.2/bin/psql
/usr/lib/postgresql/9.2/bin/psql: symbol lookup error: /usr/lib/postgresql/9.2/bin/psql: undefined symbol: PQconnectdbParams

Here is my configure command:

$ ./configure --build=i486-linux-gnu --prefix=/usr \
    --includedir=/usr/include/postgresql/9.2 \
    --mandir=/usr/share/postgresql/9.2/man \
    --infodir=/usr/share/postgresql/9.2/info \
    --sysconfdir=/etc/postgresql-common \
    --localstatedir=/var \
    --libexecdir=/usr/lib/postgresql/9.2 \
    --libdir=/usr/lib/postgresql/9.2 \
    --srcdir=. \
    --docdir=/usr/share/doc/postgresql-doc-9.2 \
    --datadir=/usr/share/postgresql/9.2/ \
    --bindir=/usr/lib/postgresql/9.2/bin \
    --enable-nls --enable-integer-datetimes --enable-thread-safety \
    --enable-debug --disable-rpath --with-perl --with-python --with-pam \
    --with-krb5 --with-gssapi --with-openssl --with-libxml --with-libxslt \
    --with-ldap --with-ossp-uuid --with-gnu-ld \
    --with-system-tzdata=/usr/share/zoneinfo \
    CFLAGS='-g -O2 -g -Wall -O2 -fPIC' LDFLAGS=' -Wl,--as-needed'

I'd appreciate any suggestions. In the short term I can live with the libraries being in the wrong place for one postgres version, but it isn't viable in the longer term. Ultimately I'd like to be able to build a debian-compatible postgres cluster directly from the repository.

Thanks.

__
Marc
Re: [HACKERS] [GENERAL] Dropping extensions
On Sat, 2011-07-30 at 22:46 +0200, Dimitri Fontaine wrote:
> Tom Lane t...@sss.pgh.pa.us writes:
> > Hmm. I don't think we have any code in there to prohibit the same
> > object from being made a member of two different extensions ... but
> > this example suggests that maybe we had better check that.

I see you did take care of that, thank you!
http://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=988620dd8c16d77f88ede167b22056176324

I thought I'd document how I fixed Veil's drop extension issue.

The problem is that veil_init() needs to be able to do different things depending on how Veil has been extended. In the past, we simply re-wrote veil_init() for the application. Now that we have proper extensions this is no longer viable.

So, I have modified veil_init() to call functions that have been defined in a configuration table. An extension can now register its own init functions by inserting their details into the config table. This is almost perfect, except that when an extension is dropped, the inserted records must be deleted.

We achieve this by creating a new config table for each extension, which inherits from the Veil config table. When Veil queries its config table, it sees the inherited tables too, and can find their init functions. When the extension is dropped, the inherited table is also dropped and veil_init() reverts to its previous behaviour. Yay.

__
Marc
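A minimal sketch of the inheritance trick described above. The table, column, and function names here are invented for illustration; the real Veil objects differ.

```sql
-- Veil's config table (hypothetical simplified form).
CREATE TABLE veil_config (
    init_fn  text NOT NULL    -- name of an init function for veil_init() to call
);

-- The extension keeps its registration in its own table, which inherits
-- from veil_config.  Because the table belongs to the extension,
-- DROP EXTENSION removes it, and with it the registration.
CREATE TABLE veil_demo_config () INHERITS (veil_config);
INSERT INTO veil_demo_config (init_fn) VALUES ('veil_demo_init');

-- veil_init() sees the registration because a query on the parent table
-- includes rows from inheriting tables by default.
SELECT init_fn FROM veil_config;
```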
[GENERAL] Dropping extensions
In postgres 9.1 I have created 2 extensions, veil and veil_demo. When I install veil, it creates a default (not very useful) version of a function: veil_init(). When I create veil_demo, it replaces this version of the function with its own (useful) version.

If I drop the extension veil_demo, I am left with the veil_demo version of veil_init().

Is this a feature or a bug? Is there a work-around?

Thanks.

__
Marc
[GENERAL] waiting for notifications on the server
I'd like to be able to wait for notify events using a user-defined C function. All of the examples and code I can find use libpq. Is there some way I can synchronously wait for notify events using SPI?

Is there some reason this is a dumb idea? I wondered about the risk of deadlocking a backend but can't see why using libpq would eliminate this risk. Anyone care to enlighten me?

Thanks.

__
Marc
Re: [HACKERS] [GENERAL] Date conversion using day of week
On Thu, 2011-03-31 at 08:00 -0700, Adrian Klaver wrote:
> On Wednesday, March 30, 2011 8:39:25 pm Brendan Jurd wrote:
> > On 31 March 2011 03:15, Steve Crawford
> > scrawf...@pinpointresearch.com wrote:
> > > On 03/29/2011 04:24 PM, Adrian Klaver wrote:
> > > ... Well the strange part is only fails for SUN: ...
> [. . .]
> > We *could* make the OP's query return the Sunday of ISO week 2011-13,
> > which would be properly written 2011-13-7, but I think the right move
> > here would be to throw the error for illegal mixture of format tokens.
> > This is a trivial change -- just a matter of changing the from_date
> > type on the DAY, Day, day, DY, Dy, dy keys.
> [. . .]
> Just to play Devil's advocate here, but why not? The day name is the
> same either way, it is the index that changes. I am not sure why that
> could not be context specific?

Just to be clear, the reason I was mixing things in this way was that I wanted to validate that the dayname being passed was valid for the current locale, and I could find no easier way of doing it.

FTR, I have now resorted to finding the given dayname in the results of this query:

select day, to_char(day, 'dy') as dayname,
       extract('dow' from day) as dayno
  from (select current_date + n as day
          from generate_series(0, 6) as n) d;

If there is an easier way of doing this, please let me know.

As far as the postgres API goes, exposing a function that would validate a dayname, returning a day number, would resolve all of this for considerably less complexity. Also, throwing an error in the to_date function for unexpectedly mixed input formats seems quite reasonable.

Thanks for your time and attention. The commercial RDBMS vendors could learn a lot about customer support from this forum.

__
Marc Munro
[GENERAL] Date conversion using day of week
I'm trying to validate a day of the week, and thought that to_date would do the job for me. But I found a case where it cannot tell the difference between Sunday and Monday. Is this a bug or intended behaviour?

dev=# select to_date('2011-13-Mon', 'IYYY-IW-DY');
  to_date
------------
 2011-03-28
(1 row)

dev=# select to_date('2011-13-Sun', 'IYYY-IW-DY');
  to_date
------------
 2011-03-28
(1 row)

dev=# select to_date('2011-13-Tue', 'IYYY-IW-DY');
  to_date
------------
 2011-03-29
(1 row)

This is on postgres 8.3.14.

__
Marc
Re: [GENERAL] Hiding data in postgresql
On Mon, May 24, 2010 at 2:16 PM, Hector Beyers hqbey...@gmail.com wrote:
> Hi guys, does ANYONE have any tips on hiding data on a database server?
> This means that data is stored in places that is not necessarily picked
> up in the schema of the database. I am doing some research on databases
> and need some direction. Any help or direction will be highly
> appreciated.

Like everyone else who has responded, I am unsure exactly what you mean, but it might be that you want to implement something like a virtual private database.

The basic idea is that every user connects to the same database but gets to see different subsets of data depending on what rights they have. You implement this using views. No-one gets access to the underlying tables, instead having access to a secured view. The secured view on table x looks like this:

create view x as select * from real.x where i_can_see(x.key);

The function i_can_see() determines whether you can see a particular row. Naturally the access function, i_can_see(), needs to know who a particular user is and what rights they have. This involves some careful session management, particularly in today's web-centric applications.

If you are interested in this technique, then my project, Veil (http://veil.projects.postgresql.org/), provides tools for building virtual private databases in Postgres. Be warned though: this is a difficult thing to do, may have unacceptable overhead, and may still leave channels open for data compromise.

__
Marc
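Spelled out a little further, the secured-view idea might look like the sketch below. All names are illustrative (this is not Veil's actual API), and the privilege check is a deliberate stub.

```sql
-- Schema holding the real tables; no grants to ordinary users.
CREATE SCHEMA real;

CREATE TABLE real.x (
    key   integer PRIMARY KEY,
    data  text
);

-- Row-visibility predicate.  A real implementation would consult the
-- connected user's session privileges; this placeholder just shows
-- the shape of the function.
CREATE FUNCTION i_can_see(row_key integer) RETURNS boolean AS $$
    SELECT true;   -- stub: replace with a real privilege check
$$ LANGUAGE sql STABLE;

-- The secured view is all that ordinary users may query.
CREATE VIEW x AS
    SELECT * FROM real.x WHERE i_can_see(real.x.key);

GRANT SELECT ON x TO PUBLIC;
```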
[GENERAL] How to determine the operator class that an operator belongs to
I'm trying to reconstruct create operator class statements from the catalogs. My problem is that access method operators are related only to their operator families.

My current assumption is that all operators for an operator class will have the same left and right operand types as the data type (opcintype) of the operator class, or will be None. Here is my query:

-- List all operator family operators.
select am.amopfamily as family_oid,
       oc.oid as class_oid
  from pg_catalog.pg_amop am
 inner join pg_catalog.pg_operator op
    on op.oid = am.amopopr
  left outer join pg_catalog.pg_opclass oc
    on oc.opcfamily = am.amopfamily
   and (oc.opcintype = op.oprleft or op.oprleft = 0)
   and (oc.opcintype = op.oprright or op.oprright = 0)

Is my assumption correct? Does the query seem correct?

Thanks in advance.

__
Marc
Re: [GENERAL] postgresql.key secure storage
On Sun, 2009-09-13 at 12:04 -0300, pgsql-general-ow...@postgresql.org wrote:
> > A user must have the TRUNCATE privilege to truncate a table or be the
> > table's owner.
>
> Well the TRUNCATE example I mentioned is perhaps not explicit of what I
> meant to say. A user who can modify data in a client application can
> also modify data if he connects directly to the database, bypassing the
> client application, with commands like 'UPDATE tbl SET col = value'.
> Even if only a few rows are concerned, the data is then inconsistent.
> The only way to prevent this is by preventing direct access to the
> server via a client like psql, for example. With or without use of SSL,
> it is not possible, unless I'm missing something.

If I understand you correctly, you want to allow a user to use an application that has certain database rights without allowing that same user to have general database access through, say, psql.

The usual way of doing this is to have the application run on a separate application server from the one the user has direct access to. You can then ensure that the database only allows connections from the application server. The user cannot gain knowledge of the account used by the application server without breaking into that machine, and if the user somehow guesses the authentication details they will still be unable to use psql, as connections from their own machine would be denied.

I suspect that the problem you are trying to deal with is more complex than you have actually explained. If the answers you have received so far don't help, you'll have to provide some more clues.

__
Marc
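For illustration only, pg_hba.conf entries along these lines enforce that policy. The database name, role name, and addresses are invented; adjust to your network.

```
# Allow the application server (192.168.1.10 here, hypothetically)
# to connect as the application account.
host    appdb    appuser    192.168.1.10/32    md5

# Reject everything else, including users' own machines.
host    all      all        0.0.0.0/0          reject
```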
[GENERAL] Looking for proper escape string syntax from pg_get_constraintdef
I am trying to recreate a domain definition from information in pg_catalog and getting the following warnings:

WARNING:  nonstandard use of \\ in a string literal
LINE 3:  CHECK (((VALUE ~ '^\\d{4}$'::text) OR (VALUE ~ '^\\d{5}-\\...
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
WARNING:  nonstandard use of \\ in a string literal
LINE 3:  CHECK (((VALUE ~ '^\\d{4}$'::text) OR (VALUE ~ '^\\d{5}-\\...
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.

My original domain definition is this:

create domain public.us_postal_code as pg_catalog.text
  CHECK (((VALUE ~ E'^\\d{4}$'::text)
      OR (VALUE ~ E'^\\d{5}-\\d{4}$'::text)));

but retrieving the constraint definition using pg_get_constraintdef(oid) gives me this:

CHECK (((VALUE ~ '^\\d{4}$'::text) OR (VALUE ~ '^\\d{5}-\\d{4}$'::text)))

How can I get the constraint definition with proper escaping from the catalog? Is there another function I should be using?

__
Marc
Re: [GENERAL] Design Database, 3 degrees of Users.
On Fri, 31 Jul 2009 12:38:30 +0100, Andre Lopes lopes80an...@gmail.com wrote:
> I need to design a Database that will handle 3 degrees of users:
>
> Administrators - They can see all the information in the database.
> Managers - They can only see the information of their dependants.
> Dependants - Their actions must be approved by the managers.
>
> Which is the best way to implement this in PostgreSQL? Are there some
> database examples doing this? Some open-source project?

If I understand your requirement, I think you are hoping to implement Virtual Private Databases for each of your users. With a VPD, each user connects to the same database but can see different subsets of data.

You can do this with Veil:
http://veil.projects.postgresql.org/curdocs/index.html

Note that this is a hard problem, and good solutions, even using Veil, require a lot of work. My advice to anyone thinking that they want to do this is: consider very carefully whether the benefits of a VPD are worth the cost. It is generally much easier to place the sort of controls you need in your application than in the database.

__
Marc Munro (developer of Veil)
Re: [GENERAL] pgmemcache status
On Wed, 2008-07-09 at 20:42 -0300, [EMAIL PROTECTED] wrote:
> > Is pgmemcache still being actively supported/developed? I have
> > experienced a database crash with postgres 8.3.3 and pgmemcache
> > 1.2beta1
>
> Yes and no. I just joined up, and hope to be submitting some patches to
> it soon, as well as push to get a new version out the door, but there
> has not been much movement on it lately.

Greg, I have a backtrace, which follows below. Is this the best forum for this, or would you prefer private email, a bugzilla entry, etc?

Program received signal SIGSEGV, Segmentation fault.
0xb7d1aae3 in strlen () from /lib/libc.so.6
(gdb) bt
#0  0xb7d1aae3 in strlen () from /lib/libc.so.6
#1  0x082bbf6a in MemoryContextStrdup (context=0x842f328, string=0x0) at mcxt.c:658
#2  0x082b4bd1 in _ShowOption (record=0x83e6fb8, use_units=-48 '�') at guc.c:5938
#3  0x082b4e45 in GetConfigOptionByNum (varnum=137, values=0xbf8fb23c, noshow=0xbf8fb26b "") at guc.c:5531
#4  0x082b9cbb in show_all_settings (fcinfo=0xbf8fb2c0) at guc.c:5773
#5  0x0817201b in ExecMakeTableFunctionResult (funcexpr=0x847c228, econtext=0x847b4b8, expectedDesc=0x847ba08, returnDesc=0xbf8fb558) at execQual.c:1566
#6  0x0817daf0 in FunctionNext (node=0x847b698) at nodeFunctionscan.c:68
#7  0x08173424 in ExecScan (node=0x847b698, accessMtd=0x817da80 <FunctionNext>) at execScan.c:68
#8  0x0817da79 in ExecFunctionScan (node=0x847b698) at nodeFunctionscan.c:119
#9  0x0816cb4e in ExecProcNode (node=0x847b698) at execProcnode.c:356
#10 0x0816bd4b in ExecutorRun (queryDesc=0x847b038, direction=ForwardScanDirection, count=0) at execMain.c:1248
#11 0x08202cd8 in PortalRunSelect (portal=0x845d7a0, forward=<value optimized out>, count=0, dest=0x846e240) at pquery.c:943
#12 0x08203cca in PortalRun (portal=0x845d7a0, count=2147483647, isTopLevel=1 '\001', dest=0x846e240, altdest=0x846e240, completionTag=0xbf8fb7fa "") at pquery.c:797
#13 0x081ff223 in exec_simple_query (query_string=0x8457700 "select * from pg_catalog.pg_settings;") at postgres.c:986
#14 0x082000c6 in PostgresMain (argc=4, argv=0x83cad18, username=0x83cacf0 "marc") at postgres.c:3572
#15 0x081d51fc in ServerLoop () at postmaster.c:3207
#16 0x081d5eaa in PostmasterMain (argc=5, argv=0x83c7120) at postmaster.c:1029
#17 0x0818db39 in main (argc=5, argv=0x83c7120) at main.c:188

__
Marc
[GENERAL] pgmemcache status
Hope this is the right place to ask. Apologies if not.

Is pgmemcache still being actively supported/developed? I have experienced a database crash with postgres 8.3.3 and pgmemcache 1.2beta1.

Doing select * from pg_settings (with shared_preload_libraries defined and no custom variable definitions) causes a server reset.

I will happily provide more information to anyone who wants it, and intend to attach a gdb session to it tomorrow when I will have more time to investigate the cause.

Thanks.

__
Marc
[GENERAL] Surprising syntax error
The statement:

revoke all on view internal.all_objects from public;

yields a syntax error. The docs show that the word "view" is not acceptable in this statement, which is fine, but the surprising thing is that:

revoke all on table internal.all_objects from public;

works fine even though all_objects is a view and not a table.

Now that I know about it, this doesn't bother me, but it was a surprise, and I wonder whether the parser/planner/whatever should be a bit smarter about allowing the word "table" to apply to non-tables, and whether the word "view" ought to be allowed.

__
Marc
[GENERAL] Storage sizes for dates/times (documentation bug?)
I was just looking at the 8.3.1 documentation on the postgresql web site. According to the docs, timestamp with time zone takes less space than time with time zone with the same resolution.

Is this a documentation bug? It makes no sense to me that by storing the date with the time you can save 4 bytes.

__
Marc
[GENERAL] catalog info for sequences
Can someone please tell me how to extract the min, max, increment by, etc, values for a sequence from the system catalogs. Is this in the manual somewhere (I couldn't find it)?

Thanks

__
Marc
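For what it's worth, one way to get at these values is that each sequence is itself a one-row relation that can be selected from directly. A sketch, with a hypothetical sequence name:

```sql
-- my_seq is an invented name; substitute your own sequence.
CREATE SEQUENCE my_seq;

-- The sequence's single row carries its parameters.
SELECT min_value, max_value, increment_by, last_value
  FROM my_seq;
```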
[GENERAL] Oddity in column specifications for table creation
It seems that in create table I cannot specify the schema of a built-in type that has a length specifier. Nor can I double-quote the type name. Is this correct/expected behaviour, and are there work-arounds?

This works fine:

create table public.additional (
    str1    pg_catalog.text not null,
    str2    pg_catalog.varchar
);

This does not:

create table public.additional (
    str1    pg_catalog.text not null,
    str2    pg_catalog.varchar(40)
);
ERROR:  syntax error at or near "("
LINE 3:     str2    pg_catalog.varchar(40)

But this does:

create table public.additional (
    str1    pg_catalog.text not null,
    str2    varchar(40)
);

For now, I will simply not quote or schema-specify anything from pg_catalog (though I guess I should force the search path to only include pg_catalog in this case), but I find the limitation kinda odd.

This is in 8.1 and 8.2.

__
Marc
Re: [GENERAL] Oddity in column specifications for table creation
On Tue, 2007-11-12 at 19:32 -0500, Tom Lane wrote:
> Marc Munro [EMAIL PROTECTED] writes:
> > This works fine:
> >     str2    varchar(40)
> > This does not:
> >     str2    pg_catalog.varchar(40)
>
> Yeah. That's because in all existing PG releases, type modifiers are
> handled by hard-wired grammar productions that *only* work for VARCHAR,
> CHARACTER VARYING, and so on, treated as keywords.
>
> Teodor did some remarkable work for 8.3, fixing things so that any type
> name could have modifiers attached, without (we hope ;-)) breaking any
> cases that worked before. It had previously been assumed that this was
> impossible, because a type-name-plus-modifier looks just about
> indistinguishable from a function call, but he managed to find a way
> that side-stepped all the grammatical ambiguities.
>
> Your examples all work fine in CVS HEAD.

Cool. Thanks, Tom, for the response, and Teodor for fixing my problem before I even knew I had it.

__
Marc
[GENERAL] Identifying casts
Is there any way of identifying whether a cast was built-in or is user-defined?

I am tempted to just assume that if the cast is to/from a user-defined type or uses a user-defined function that it is user-defined. I suspect though that a user could define a new cast on pre-defined types using a pre-defined function.

__
Marc
[GENERAL] Coordinating database user accounts with active directory
I have a client that wants to use active directory for authentication of connections to a postgres database. They want to be able to monitor what each user is doing, so believe that the best solution is to have one database account per user. However, they do not want to have to manage the database accounts.

Is there a simple way of coordinating active directory users with database accounts? I am happy to write tools to deal with the database end of things, but have no experience with active directory.

Any and all suggestions would be appreciated.

__
Marc
[GENERAL] cascade and restrict options to alter domain drop constraint
I am puzzled by the cascade and restrict options to the alter domain drop constraint command. I do not see how dropping a check constraint should cascade to anything, or indeed be restricted by anything.

My reasoning is simple: if I drop a check constraint on a domain, no data should be affected, so no dependent objects should be affected.

Could someone please explain what I am missing?

Thanks in advance

__
Marc
Re: [GENERAL] [pgsql-general] Separation of clients' data within a database
On Thu, 2006-30-11 at 17:22 -0400, [EMAIL PROTECTED] wrote:
> From: John McCawley [EMAIL PROTECTED]
> Subject: Separation of clients' data within a database
> ...
> I would assume there are no row level permissions, right? (Even the
> thought of it seems way too much to maintain)

You could take a look at Veil (http://veil.projects.postgresql.org/), which gives you row-level access controls.

Whatever solution you choose has its problems though:

1) Veil
   You have to manage user permissions, implement a bunch of access
   functions and secured views, and add connection functions to your
   sessions.

2) Separate databases
   You have to manage separate databases.

3) Separate schemas
   You have to manage the separate schemas, and also consider whether
   access to the underlying catalogs is allowed (making it impossible
   for one client to infer the existence of another may be important
   to you).

__
Marc
Re: [GENERAL] [pgsql-general] Daily digest v1.6578 (20 messages)
On Sat, 2006-04-11 at 06:13 -0400, [EMAIL PROTECTED] wrote:
> From: Alexander Staubo [EMAIL PROTECTED]
> Subject: Per-row security
>
> I am designing an application which requires fine-grained role-based
> security, where every logical object in the system has an ACL which
> expresses the permissions allowed by roles.

A fairly cursory look at your proposed model suggests that it will work, but is likely to have serious performance problems. The issue is not so much the simple queries on single views, but the complex queries your developers will almost certainly build from them. A three-table join becomes a nine-table join, and planner, optimiser and executor all have to work very hard at that point.

For an alternative approach, you might want to check out Veil:
http://pgfoundry.org/projects/veil

This is a postgres add-on designed to help you build row-level security implementations. I believe that with some rework, you could implement your proposed security model using Veil to good effect. Veil records privileges, etc, in shared and session memory, and so determining whether the connected user can see row x requires no extra fetches from the database.

__
Marc
Re: [GENERAL] per-row security
On Mon, 2006-06-11 at 22:27 +0100, Alexander Staubo wrote:
> On Nov 6, 2006, at 21:00, Marc Munro wrote:
> > For an alternative approach, you might want to check out Veil:
> > http://pgfoundry.org/projects/veil
>
> Addendum: I took Veil to be undocumented since the source archive only
> comes with Doxygen scripts; I thought the small "here" link on the Veil
> home page pointed to the same API docs, but it's actually a lot better
> than that. Apologies.

No worries. Glad you found it in the end. Note though that the online documentation at pgfoundry is identical to that shipped with the source.

> Will Veil work in a replicated Slony-I setup?

I can see no reason why not. The fact that the security system triggers will be placed on the secured views rather than on the underlying tables should mean that Slony has less trigger manipulation to do than might otherwise be the case.

You will of course be replicating the underlying tables and not the views, so your replication user will have to have full access to the unsecured data. This is natural and should not be a concern, but may be worth explicitly documenting.

__
Marc
[GENERAL] Alter table alter column
Am I right in thinking that altering a column from varchar(n) to varchar(n+m) requires each tuple to be visited? Recent experience suggests this is the case, but my reading of the docs has left me uncertain why this should be so.

We are not changing the fundamental type of the column, nor are we attempting an operation that will fail due to existing data being incompatible with the new definition. Is there some fundamental reason why placing a higher limit on the length of existing data cannot be done solely by changing the system catalogs? Is this an optimisation that could be added to the TODO list?

__
Marc
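For reference, the operation in question is of this shape (table and column names invented for illustration):

```sql
-- Widening a varchar limit.  On the versions current at the time this
-- visited every tuple; later releases (9.2, if memory serves) made a
-- pure length-limit increase a catalog-only change.
ALTER TABLE customers ALTER COLUMN name TYPE varchar(120);
```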
[GENERAL] Something blocking in libpq_gettext?
Often, when we get short-lived network problems (like when our network admins break the firewall), postgres client apps will lock up. They do not recover once the network is back to normal, do not time out, and do not fail with any sort of error. This has been happening since at least postgres 7.4.7. We are currently running 8.0.3 (now upgrading to 8.0.8).

As near as we can tell, it looks like libpq blocked inside poll inside libpq_gettext:

#0  0xe405 in __kernel_vsyscall ()
#1  0x005a31d4 in poll () from /lib/tls/libc.so.6
#2  0xf7fd71ff in libpq_gettext () from /usr/lib/libpq.so.3
#3  0xf7fd7331 in pqWaitTimed () from /usr/lib/libpq.so.3
#4  0xf7fd73a1 in pqWait () from /usr/lib/libpq.so.3
#5  0xf7fd53fb in PQgetResult () from /usr/lib/libpq.so.3
#6  0xf7fd5524 in PQgetResult () from /usr/lib/libpq.so.3
#7  0x081b43b3 in SQLInterface::execute (this=0xf7ce3080, cmd=0xf7ce0074 "execute lock_games ( '100' )") at SQLInterface.cpp:138

Can anyone offer any solutions, suggestions, fixes? We cannot reproduce this at will, but are willing to provide more information when next it occurs.

__
Marc
Re: [GENERAL] Something blocking in libpq_gettext?
On Fri, 2006-08-25 at 15:10 -0400, Alvaro Herrera wrote: Wow, that's strange. Maybe it's trying to fetch something in the message catalogs. What are your locale settings? I'm not sure exactly what you need to know. I do have this tho: LANG=en_US.UTF-8 What other information would be helpful? __ Marc signature.asc Description: This is a digitally signed message part
Re: [GENERAL] Something blocking in libpq_gettext?
On Fri, 2006-08-25 at 15:42 -0400, Alvaro Herrera wrote: SHOW lc_messages, I think. Is it something else than C? If so, it will probably try to read the corresponding postgres.mo file. Do the hung processes have that file open? (I'm not sure if you can find that out from only the core file.) Database=# SHOW lc_messages; lc_messages - en_US.UTF-8 (1 row) Database=# We don't think we can find this from the core file either. Next time it happens we will check whether postgres.mo is in use. Unfortunately we are no longer seeing the problem. We'll let you know when we have more. Thanks for the quick response. __ Marc signature.asc Description: This is a digitally signed message part
Re: [GENERAL] Something blocking in libpq_gettext?
On Fri, 2006-08-25 at 15:43 -0400, Tom Lane wrote: Marc Munro [EMAIL PROTECTED] writes: As near as we can tell, it looks like libpq blocked inside poll inside libpq_gettext.
#0 0xe405 in __kernel_vsyscall ()
#1 0x005a31d4 in poll () from /lib/tls/libc.so.6
#2 0xf7fd71ff in libpq_gettext () from /usr/lib/libpq.so.3
#3 0xf7fd7331 in pqWaitTimed () from /usr/lib/libpq.so.3
#4 0xf7fd73a1 in pqWait () from /usr/lib/libpq.so.3
#5 0xf7fd53fb in PQgetResult () from /usr/lib/libpq.so.3
#6 0xf7fd5524 in PQgetResult () from /usr/lib/libpq.so.3
#7 0x081b43b3 in SQLInterface::execute (this=0xf7ce3080, cmd=0xf7ce0074 execute lock_games ( '100' )) at SQLInterface.cpp:138
That backtrace is silly on its face --- apparently you are using a stripped executable and gdb is providing the nearest global symbol rather than the actual function name. I think you can safely assume however that you are looking at libpq waiting for input from the backend. Does the kernel at each end still think the connection is live? (Try netstat) If this is a common result from short-lived network problems then you have a beef with the TCP stack at one end or the other ... TCP is supposed to be more robust than that. Yes indeed, it is a stripped executable. We are having issues with rpms and debug symbols and this is the best we have been able to do so far. We will try netstat next time this happens, and we are (still) trying to get proper debug information. More and better information will follow when we have it. signature.asc Description: This is a digitally signed message part
[GENERAL] Spontaneous character encoding change?
Has anyone ever seen a database spontaneously change character sets? I could have sworn that all databases in my replication slave database cluster were set up as SQL_ASCII. Now they are all UNICODE, and slony is failing to replicate, due to invalid byte sequences. Now I can't really *swear* that the databases weren't UNICODE all along but I do know for sure: - that I intended them to be SQL_ASCII - that I would make an effort to avoid UNICODE until we upgrade to 8.1 - that slony has been replicating without problem on this database for some months The only unusual thing that has been done with that database recently is a failed attempt to upgrade slony from 1.1 to 1.5. This was about 3 days before slony ran into character set problems. I have looked through the logs and found nothing of interest. Any ideas? This is non-production, on PostgreSQL 8.0.3 __ Marc signature.asc Description: This is a digitally signed message part
Re: [GENERAL] Thoughts on a Isolation/Security problem
You are talking about row-level security. Different users must be able to see different subsets of data. This sounds like a job for Veil: http://veil.projects.postgresql.org/ Veil allows you to implement row-level access controls based upon any criteria you can define. In your case, this would probably be based upon being a representative of a specific company or vessel. Taking the Veil approach, you will need to modify your apps only to the extent that they must identify the user on connection. If you are interested in using Veil, I would be pleased to help. __ Marc On Tue, 2006-04-18 at 07:24 -0300, [EMAIL PROTECTED] wrote: Date: Tue, 18 Apr 2006 13:44:34 +0300 (EEST) From: Achilleus Mantzios [EMAIL PROTECTED] To: pgsql-sql@postgresql.org, pgsql-admin@postgresql.org, pgsql-general@postgresql.org Cc: pgsql-jdbc@postgresql.org Subject: Thoughts on a Isolation/Security problem. Message-ID: [EMAIL PROTECTED] Hi, i have run into the following problem. Let me describe the context first. When i joined the company(ies) i work for (a group of Shipping Mgmt/Ownership/Agent companies), the only thing i was told when i started designing the DB/Apps was just one company. So i built everything into one single DB, and i wrote the apps having one company in mind. Our architecture is based on jboss3/postgresql (currently 7.4.12). There is one .ear file, which authenticates users against a lotus notes ldap server. At the time, the corporate organisational model was a little bit weird: - Many Independent ownership companies - Many Independent Mgmt companies (but all business was with one company in mind). Each App user is a member of one or more ldap groups, each group mapping to a mgmt company. So i ended up with - one DB with 173 tables - one DB user (postgres) - one .EAR application with 148,827 lines of code. Now the requirements start to change. The boss now bought some other types of vessels too. 
So virtually there must be N separate distinct apps, where N is the number of Mgmt companies (roughly one for every type of vessel), where each app sees and manages only its data. Moreover there are some apps that should see global data for some specific tables. (like the crew data, people in the crew move from one type of vessel to the other so they are not tied to a Mgmt company). These new requirements are of legal nature, as well as of operational. (People managing a type of vessels don't want to mess with another type, and auditors must see each company completely separated from the rest). Doing it with extra code would be a real pain, since i would have to refine all security/authentication based on the groups ([EMAIL PROTECTED]) that a person belongs to. Also this way no inherent isolation/security would hold. Now i am thinking of restructuring the whole architecture as: - Create one EAR app for every mgmt company - Create one DB USER for every mgmt company - Create one SCHEMA (same as the USER) for every mgmt company (mgmtcompany1,mgmtcompany2,etc...) - Find a way (links/xdoclet/eclipse?) to have *one* common code base for the N EAR apps. - Tweak with jboss*.xml to map java:comp/env/jdbc/mgmtcompanyDB to java:/mgmtcompanypgsql, where mgmtcompanypgsql authenticates with the corresponding DB USER. - Classify the tables into - The ones that apply to ALL mgmt companies (and leave them in the public schema) - The ones that apply *only* to a mgmt company and so create one under each SCHEMA - Load the data in *each* SCHEMA, except the tables that apply to all. - Define a process of mgmtcompany-fying the tables in each schema (e.g. delete from mgmtcompany1.vessels the vessels that don't belong to mgmtcompany1, and so forth) - Resolve FK constraint issues - The default search_path in psql (what's the equivalent in jdbc?) 
is $user,public, so effectively *each* EAR will hit automagically the correct mgmtcompanyN.* tables, or the public.* tables if these tables apply to all mgmt companies. With this way, the hard work is DB oriented, and not APP oriented. However i wonder whether someone else has gone thru a similar process, or if someone finds some assumption conceptually flawed. Thanx for reading, and for any possible thoughts. -- -Achilleus signature.asc Description: This is a digitally signed message part
Re: [GENERAL] Grant Priviliges on column
No, but Veil allows you to do it: http://veil.projects.postgresql.org/ Be warned, implementing column or row-level privileges is not trivial. If you are sure you need to do it and want to try Veil, I'll give you what help I can. __ Marc On Fri, 2006-03-17 at 11:50 -0400, [EMAIL PROTECTED] wrote: Date: Fri, 17 Mar 2006 09:09:16 -0500 From: Sean Hamilton [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: Grant Priviliges on column Message-ID: [EMAIL PROTECTED] Does Postgres support granting privileges at the column level? ex. grant update on tableA (id, description) to user signature.asc Description: This is a digitally signed message part
Re: [GENERAL] [pgsql-general] Daily digest v1.5986 (24 messages)
Sylvain, All of the things you want to do can be done using Veil: http://veil.projects.postgresql.org/ Be warned though, it is not simple. If you want privileges at the column level, or based on a where clause, you will have to use techniques like Veil's secured views. It's better to avoid this sort of complexity if you can. Try reading the documentation though, it may give you some ideas. __ Marc On Sat, 2006-03-04 at 23:45 -0400, [EMAIL PROTECTED] wrote: Date: 4 Mar 2006 05:08:27 -0800 From: [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: Questions about privileges Message-ID: [EMAIL PROTECTED] Hello, For an exercise at university, I have several SQL queries to find to manage privileges on a database. I didn't find all the queries. So, this is my problem: I have got one table named books with several columns. * The first question is: - All users can find titles, codes and prices of books (which are columns of the table books) For the moment, I did this: grant select on table books to public; But with this solution, all users can find information about all the attributes of the table books and not only about titles, codes and prices. So, I would like to know if a solution exists for that (to allow all users to access only the columns titles, codes and prices). * My second question is: - Mister X can create an index on the table books For the moment, I did this: grant create on tablespace books to X; Is that good? * My third question is: - Mister X can update the structure of the table books For that, I don't know how I can do it. Would someone have an idea how to do that? * My last question is: - Mister X can update the column quantity of the table books where the column codes is equal to 2, but he can't access other data in the table books. 
For the moment, I found how to limit the update to the column quantity of the table books with the following query: grant update(quantity) on books to X; But I don't know how to limit the update of the column quantity to only the rows where column codes is equal to 2. Would someone have an idea how to do that? Thanks for helping me. Sylvain. signature.asc Description: This is a digitally signed message part
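For the first question above, a conventional approach (with or without Veil) is a view exposing only the permitted columns. This is a sketch; it assumes the columns are named title, code and price:

```sql
-- Hide the base table and expose only the permitted columns via a view.
CREATE VIEW books_public AS
    SELECT title, code, price
    FROM books;

REVOKE ALL ON books FROM PUBLIC;
GRANT SELECT ON books_public TO PUBLIC;
```

Because a view is executed with its owner's privileges, users can select from books_public without having any access to books itself.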
Re: [GENERAL] altering objects owned by other user
Tom, Thanks. Good suggestions, both. I'm going to defer this problem until we are able to upgrade. __ Marc On Mon, 2006-02-13 at 14:18 -0500, Tom Lane wrote: Marc Munro [EMAIL PROTECTED] writes: I want to allow a non-superuser to alter objects owned by another user. Use 8.1, have the objects in question be owned by a group (role), grant membership in the group as appropriate. This should be an audited operation (logging a notice of what was done to the postgres logs is sufficient). Perhaps log_statement = ddl? regards, tom lane signature.asc Description: This is a digitally signed message part
[GENERAL] altering objects owned by other user
I want to allow a non-superuser to alter objects owned by another user. This should be an audited operation (logging a notice of what was done to the postgres logs is sufficient). This is so that I can allow trusted users to perform maintenance operations without having to give them either superuser privilege, or the password for the object owner account. This should allow us to satisfy an outside auditor that no-one outside of the sysadmin group has unrestricted (ie unaudited) superuser access. I had hoped to implement this using set session authorization within a security-definer plpgsql function but security-definer is inadequate for passing on superuser status. Does anyone have any suggestions? My current thinking is to implement a C language function which is only accessible to my trusted users. This would simply call SetSessionAuthorization with the is_superuser argument set to true. Is this a horrible idea? Thanks. __ Marc signature.asc Description: This is a digitally signed message part
[GENERAL] Privilege for seeing queries using pg_stat_get_backend_activity
I want certain users to be able to examine running queries using pg_stat_get_backend_activity. Unfortunately, this will only show other users' activity if you have superuser privilege. I do not want to give monitoring users superuser privilege, but I do need to allow them to perform monitoring tasks. I've tried tricks with security definer functions but this does not help as pg_stat_get_backend_activity explicitly checks for the caller being a superuser. Aside from implementing my own version of pg_stat_get_backend_activity in C, does anyone have any suggestions? Should there be a standard privilege that allows this (please say yes)? __ Marc signature.asc Description: This is a digitally signed message part
Re: [GENERAL] Privilege for seeing queries using
Thanks Tom, On further investigation it seems that the problem is that I can create an equivalent function with security definer, and I can create a wrapper function with security definer, but I cannot modify the existing function to be security definer. This is a problem because the monitoring users use pgadmin, which uses pg_stat_get_backend_activity directly and also through pg_stat_activity, so I cannot simply rewrite the monitoring queries to use a wrapper function. My solution is to create a new function with the same name in the public schema, and to redefine pg_stat_activity to call the public function. This seems a little kludgy though I am content with it for now. It does make me wonder though if there should be something like a monitoring privilege so that we don't have to go through this. FWIW, here is the new function defn: create or replace function public.pg_stat_get_backend_activity(integer) returns text as ' begin return pg_catalog.pg_stat_get_backend_activity($1); end; ' language plpgsql security definer; __ Marc On Thu, 2006-01-19 at 12:38 -0500, Tom Lane wrote: Marc Munro [EMAIL PROTECTED] writes: I want certain users to be able to examine running queries using pg_stat_get_backend_activity. Unfortunately, this will only show other users' activity if you have superuser privilege. I do not want to give monitoring users superuser privilege, but I do need to allow them to perform monitoring tasks. I've tried tricks with security definer functions but this does not help as pg_stat_get_backend_activity explicitly checks for the caller being a superuser. That should work fine, as the test is on the current effective userid which will change inside a security-definer function. Take a closer look at what you did, or post a complete example if you can't get it to work. regards, tom lane signature.asc Description: This is a digitally signed message part
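For completeness, the pg_stat_activity redefinition mentioned above looks roughly like this. The column list here is abbreviated; in practice, copy the real view definition for your catalog version from pg_views and change only the function reference (and note that CREATE OR REPLACE VIEW requires the same column list as the original, so you may need to drop and recreate the view instead):

```sql
-- Sketch only: the genuine pg_stat_activity has more columns than this.
CREATE OR REPLACE VIEW pg_stat_activity AS
    SELECT s.backendid,
           pg_stat_get_backend_pid(s.backendid) AS procpid,
           pg_stat_get_backend_userid(s.backendid) AS usesysid,
           public.pg_stat_get_backend_activity(s.backendid) AS current_query
    FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
```

With the view pointing at the security-definer wrapper, non-superuser monitoring users see all backends' queries through pg_stat_activity.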
Re: [GENERAL] I want to know how to improve the security of postgresql
Karsten, The project homepage has a reference to the project documentation. It can be found here: http://veil.projects.postgresql.org/curdocs/index.html If this does not contain what you are looking for, please explain what you need to know as I'd like to improve the documentation. I'm guessing that you'd like a conceptual overview. I intended this to be provided in the section Overview: a quick introduction to Veil but maybe it falls short. If there is a better way to introduce the concepts, I'd like to hear any ideas. Feel free to contact me directly if you have any questions at all. I'd be pleased to help GNUmed. __ Marc On Sat, 2005-12-31 at 16:49 -0400, [EMAIL PROTECTED] wrote: Date: Sat, 31 Dec 2005 17:18:19 +0100 From: Karsten Hilbert [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: Re: I want to know how to improve the security of postgresql Message-ID: [EMAIL PROTECTED] On Thu, Dec 29, 2005 at 09:22:09AM -0800, Marc Munro wrote: http://pgfoundry.org/projects/veil/ Marc, is there a higher level written summary available somewhere to be read to understand conceptually how you implemented row level security ? We will (in GNUmed) eventually have to implement row level security. The current thinking is by restricting access to the tables and setting up views that always do ... where user=current_user to limit the viewable data set. Karsten -- GPG key ID E4071346 @ wwwkeys.pgp.net E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346 signature.asc Description: This is a digitally signed message part
[GENERAL] bit/integer operations in postgres
If I understand this correctly, I think you want to implement the sort of security that Veil provides. Take a look at http://pgfoundry.org/projects/veil/ The documentation is at http://veil.projects.postgresql.org/curdocs/index.html __ Marc On Sun, 2006-01-01 at 17:02 -0400, littlebutty wrote: Date: 22 Dec 2005 17:04:30 -0800 From: littlebutty [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: bit/integer operations in postgres Message-ID: [EMAIL PROTECTED] Does anyone know how I can solve this problem in Postgres: I am creating a table of user permissions. I want to represent each permission as a bit location. Each new permission I add to my system would be assigned to the next available bit (or column if you will) That way I can simply turn on a particular bit location for a particular user and then AND their permission level with a particular permission bit. Just as an example: Access to financial data is the first bit and would be represented by 1. Access to the admin section would be the 2nd bit, represented by 01. Access to the reporting section would be the 3rd bit, represented by 001. Thus a user with access to the admin section but nothing else would have 010 and a user with access to everything would have 111. All possible permissions are stored in one table and a users permission level is stored in the user table, but both fields are integer data types. HERE IS WHAT I WANT TO DO: How do I setup a postgres constraint that will not allow an insert on the permission table with an integer value that has more than one bit set. In other words you could insert 1, 2, 4, 8, 16, 32, etc. because there is only a single bit on in all of those integers, but not insert 3, 5, 6, 7, 9, 15, 19, etc. because they have multiple??? signature.asc Description: This is a digitally signed message part
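On the constraint question itself (independent of Veil): a positive integer is a power of two, i.e. has exactly one bit set, precisely when x & (x - 1) = 0. A sketch, assuming the permission table is named permissions with an integer column perm_bit:

```sql
-- Reject zero and any value with more than one bit set.
ALTER TABLE permissions
    ADD CONSTRAINT perm_single_bit
    CHECK (perm_bit > 0 AND (perm_bit & (perm_bit - 1)) = 0);
```

This accepts 1, 2, 4, 8, 16 and so on, and rejects 0, 3, 5, 6, 7, 9, 15, 19, etc.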
Re: [GENERAL] I want to know how to improve the security of postgresql
Sting, I'm not entirely sure what you mean by improving the security of postgresql but if you want to implement smart, efficient row-level security you could take a look at Veil http://pgfoundry.org/projects/veil/ Check the project home page for documentation. Veil is still an alpha release but I am prepared to work with any potential users to bring it up to a production release. __ Marc On Thu, 2005-12-29 at 06:48 -0400, [EMAIL PROTECTED] wrote: Date: Thu, 29 Dec 2005 10:09:01 +0800 From: xiapw [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: I want to know how to improve the security of postgresql Message-ID: [EMAIL PROTECTED] Hi guys,can you give me some advices about how to improve the security of postgresql? Now I major in the security of postgresql and the destination is create a database with security level of B1(TCSEC),what should I do now,what program language should I use? Thanks! Sting [Attachment of type text/html removed.] signature.asc Description: This is a digitally signed message part
Re: [GENERAL] Replicating databases
Carlos, What you are asking for is a multi-master replication scheme. Slony-I is a single master system, meaning that for each slony cluster only one node can make changes to the data. Without breaking slony's normal rules, I believe that there might be a way to do it, though it will not be pretty. Basically you would create a slony cluster for each store, which replicates store data back to the central system. You will also have a master cluster that replicates central data to all stores. For each store you will create a slony cluster CS (ie for store 1, you create cluster C1, for store 2 cluster C2, etc). For the central (master) database you will create a cluster CM that replicates to all stores. For each application table, T, you will do the following: - create a table T_S at each source store S, and on the central database - add T_S to the replication set for CS - on the central db create a master table T_M - on the central db, add triggers on T_S that copy all changes into the master table T_M (T_M will then contain the full set of data from all stores) - add T_M to the replication set for cluster CM - at each store create a view T that does select * from T_S union select * from T_M - create instead of triggers on T that cause updates to be performed only on the underlying local table T_S - at the central node create a view T that does select * from T_M, (you don't need instead of triggers for this as the data can only be updated at the stores) So, for N stores you will have created N+1 slony clusters, N+1 distinct tables for each distributed table. This is horrible and a lot of maintenance. It might work though if the number of stores is quite small. You should probably ask the question again on slony1-general. The experts there may suggest a better solution. I have seen talk of disabling the standard slony triggers to allow this sort of thing but whether that is more or less nasty is questionable. 
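To make the per-table arrangement concrete, here is a sketch for one table at one store. The names are hypothetical (t_s is the local store table, t_m the replicated master copy, with made-up columns id, store_id and data), and note that views take rewrite rules rather than triggers:

```sql
-- The view the application uses; t_m holds every store's rows, so plain
-- UNION removes the duplicates of this store's own data.
CREATE VIEW t AS
    SELECT * FROM t_s
    UNION
    SELECT * FROM t_m;

-- Redirect writes to the local table only; the central db's triggers on
-- its copy of t_s then fold the change into t_m for redistribution.
CREATE RULE t_ins AS ON INSERT TO t
    DO INSTEAD INSERT INTO t_s VALUES (NEW.id, NEW.store_id, NEW.data);
CREATE RULE t_upd AS ON UPDATE TO t
    DO INSTEAD UPDATE t_s SET data = NEW.data WHERE id = OLD.id;
```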
Good luck __ Marc On Wed, 2005-11-02 at 12:18 -0400, [EMAIL PROTECTED] wrote: Date: Wed, 2 Nov 2005 12:06:36 + (GMT) From: Carlos Benkendorf [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: Replicating databases Message-ID: [EMAIL PROTECTED] Hello, Currently our company has a lot of small stores distributed around the country, and in the actual database configuration we have a central database and all the small stores accessing it remotely. All primary key tables were designed with a column identifying the store they belong to. In other words, only the owning store can update a line; other stores can read it, but the system was designed in such a way that other stores cannot update information that does not belong to them. The performance is not good because the line speed that connects the store to the central database is sometimes overloaded. We're thinking of replicating the central database to each store. The store would be able to read all the information from the local database but should only update lines that belong to that store. When a store needs to read information about other stores, it does not need to be up to date; it can be a yesterday snapshot. During the night all the local store databases will be consolidated into one database and replicated again to the stores. In the morning, when the store opens, the local database has updated and consolidated data. I would appreciate suggestions about the best way to implement such a solution. Slony-I? SQL scripts? Thanks in advance! Benkendorf signature.asc Description: This is a digitally signed message part
Re: [GENERAL] [pgsql-general] Daily digest v1.5657 (16 messages)
Florian, Responses, from an ex-Oracle DBA, below. On Mon, 2005-10-24 at 11:51 -0300, [EMAIL PROTECTED] wrote: Date: Mon, 24 Oct 2005 14:29:24 +0200 From: Florian Ledoux [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: pg_dump, MVCC and consistency Message-ID: [EMAIL PROTECTED] Hello everybody! I am coming from the (expensive) Oracle world and I am a newbie in PG administration. I am currently working on backup concerns... I am using pg_dump and I have not encountered any problems, but I have some questions about the internal management of data consistency in the PG server. I have read some articles about the MVCC mechanism but I can't see how it handles a consistent snapshot of the database during the whole export process. The whole secret, as I understand it, is that updates and deletes do not overwrite the original tuple. The original tuple remains in place, marked with transaction ids describing the transactions to which it is visible. These old tuples remain until a vacuum is performed. The vacuum removes only those tuples which are no longer visible to any running transaction. If I have understood correctly, the default transaction isolation level in PG is the read committed isolation level. If that is the isolation scheme used by pg_dump, how can I be sure that tables accessed at the end of my export are consistent with those accessed at the beginning? Does pg_dump use a serializable isolation scheme? I believe pg_dump uses serializable. We have this kind of concern with Oracle, and a CONSISTENT flag can be set in the exp utility to use a consistent snapshot of the database from the beginning to the end of the export process. Unfortunately, this mode uses rollback segments intensively and can lead to obsolete data (also known as Snapshot too old). Is there an equivalent of rollback segments in PG? Are there issues like snapshot too old with intensive multi-user and transactional databases? 
One of the great joys of postgres is never seeing a snapshot too old error. There is no rollback or undo space required as the original tuples remain in place. This has other benefits too - you don't have to reconstruct the original tuple from rollback in order to retrieve it, making selects faster, and you don't have to write rollback data, making writes faster. I do not have a good knowledge of PG's internal mechanisms; I hope that my questions are clear enough... Yep. I hope the answers were too. Florian __ Marc signature.asc Description: This is a digitally signed message part
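In outline, the consistent-snapshot behaviour described above amounts to pg_dump doing the whole export inside a single transaction, roughly like this (the table name is hypothetical):

```sql
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- one COPY (or SELECT) per table; every table is read as of the
-- snapshot established by the transaction's first query
COPY accounts TO STDOUT;
-- ... remaining tables ...
COMMIT;
```

Since old tuple versions stay on disk until vacuumed, long exports cost nothing beyond deferring vacuum's cleanup of those versions.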
Re: [GENERAL] dynamic loading of .so
If you need user-accessible shared variables, you could take a look at how Veil http://pgfoundry.org/projects/veil/ achieves this. Veil provides a limited number of shared variables with an API for SQL access. The variables may only be set during initialisation or reset to avoid system state changing part-way through a transaction. Veil goes to considerable trouble to deal with this. If this is not a concern for you, you may be able to do something simpler but you would have to perform extra locking on assignment. Feel free to take whatever you can from Veil. __ Marc Date: Sun, 16 Oct 2005 18:54:21 -0500 (CDT) From: [EMAIL PROTECTED] To: [EMAIL PROTECTED], [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Subject: Re: dynamic loading of .so Message-ID: [EMAIL PROTECTED] is there any way to make them global for all the instances? On 10/14/2005, Douglas McNaught [EMAIL PROTECTED] wrote: TJ O'Donnell [EMAIL PROTECTED] writes: I have begun to use some static variables in my c-language functions to maintain state (3rd party licensing issues) during the course of a session (postgres process, spawned by postmaster). These are declared static outside the scope of any function. (is global the correct term anymore?) When I use dynamic loading of my .so, each session is independent, with its own static variables. Will the same be true if I were to load the .so once when the database starts up? Or will there be just one set of static variables for all sessions? Each backend process has its own memory space, so the variables will still be independent. -Doug signature.asc Description: This is a digitally signed message part
Re: [GENERAL] [pgsql-general] Daily digest v1.5632 (18 messages)
Nikolay, I think I must be missing your point. Even if updatable views existed, I would still have created Veil. The real point of Veil is to control each type of operation (insert, update, etc) on each record. With updatable views, you could avoid a lot of the tedium of creating the instead-of triggers for update, insert, etc. but you would have reduced the granularity of your access controls. If a user has select access to a record, they would also have insert, update and delete (as long as they have those privileges on the view). That is, if you can update *any* record in the view, you can update all of those records that you can see. You cannot have update privilege on some records, and select-only on others. This may suit your requirements but it may not suit everyone's. In any case, the main goal of Veil is to support efficient, context-specific, querying of privileges. I don't see how any updatable view implementation will do this. I have tried to do exactly this with Oracle's updatable views and have never achieved usable levels of performance. Only by providing in-memory bitmaps of user privileges have I been able to achieve the level of performance that I require. The only alternative implementation that I considered was something like Oracle's Virtual Private Database, in which extra user-generated predicates are dynamically added to the where clauses of queries on protected tables. This would have been much harder to create, and would not work on existing installations with older versions of Postgres. Having looked quite closely at Oracle's VPD, I am also not convinced that it offers either greater flexibility or the likelihood of better performance. I very much appreciate you taking the time to provide this feedback, and would like to hear your response to the above. Thanks. __ Marc On Mon, 2005-10-17 at 20:38 +0400, Nikolay Samokhvalov wrote: IMHO, Veil is a very strange project. 
Instead of concentrating on good support of updatable views, developers are trying to reinvent the wheel. Actually, if restriction-and-projection views would be updatable w/o overhead (such as creating rules), there'll no need in such project. It's one of the major roles of views - provide mechanism to secure the data. Am I right? signature.asc Description: This is a digitally signed message part
Re: [GENERAL] Setting up a fine-grained permission system
Dave, Sorry to be so late in responding to this but I may have just the solution for you. Please check out Veil at pgfoundry. This is an add-on to Postgres that I think does just what you are looking for. As the developer of this project, I would be pleased to offer you assistance. http://veil.projects.postgresql.org/ __ Marc Date: Thu, 29 Sep 2005 10:36:23 +0700 From: David Garamond [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: Setting up a fine-grained permission system Message-ID: [EMAIL PROTECTED] Hi, Our current project requires a fine-grained permission system (row-level and possibly column-level as well). We have a pretty large (tens of thousands) of users in the 'party' table. I'm thinking of choosing Unix-style security for now (adding 'ugo' and 'owner' and 'group' columns to each table which access need to be regulated), but am unsure about the column-level permission. Anyone has experiences to share on a similar system/requirement? Do you do Unix-style or ACL? Is there a possibility in the medium/far future that Postgres will have such a fine-grained permission system. Regards, Dave signature.asc Description: This is a digitally signed message part
Re: [GENERAL] pg_ctl reload breaks our client
Tom, Thanks for this. I am going to push for reproducing the problem in a test environment. If I have any success, I will follow up with results and more details. __ Marc On Sat, 2005-09-17 at 01:23 -0400, Tom Lane wrote: Michael Fuhr [EMAIL PROTECTED] writes: On Fri, Sep 16, 2005 at 04:34:46PM -0700, Marc Munro wrote: On Fri, 2005-09-16 at 17:10 -0600, Michael Fuhr wrote: This is not currently seen as a priority (the work-around of don't do that is seen as sufficient). I'm simply hoping to get someone to say for sure that the client app should not be able to tell that a reload has happened. At that point I may be able to raise the priority of this issue. As far as I know clients shouldn't notice a reload (which is effected via a SIGHUP); I just did some tests and didn't see any problems. Existing client connections should not be able to notice a reload that changes pg_hba.conf or pg_ident.conf; however they definitely *should* notice a reload that changes postgresql.conf (at least for parameters that aren't overridden by other cases, such as a SET in the current session). So the blanket statement Marc is making is simply wrong. Whether there is a bug here is impossible to say given the limited amount of information provided. I'd not expect a reload to cause an existing connection to become totally dysfunctional, which is what Marc seems to be claiming ... but without more evidence or a test case, there's not much to be done. regards, tom lane signature.asc Description: This is a digitally signed message part
Re: [GENERAL] [pgsql-general] Daily digest v1.5557 (21 messages)
Brew, Yep we tried that. It did nothing. The same pg_hba.conf change worked when we later tried it with the client disconnected. __ Marc Date: Fri, 16 Sep 2005 21:36:01 -0400 (EDT) From: [EMAIL PROTECTED] To: pgsql-general@postgresql.org Subject: Re: pg_ctl reload breaks our client Message-ID: [EMAIL PROTECTED] Marc Yesterday a client application broke immediately after we issued a pg_ctl reload command. The only change we had made was to pg_hba.conf to enable trusted connections from localhost. Can you change pg_hba.conf back to what it had been prior and do a reload again and check if the clients start working? I've gotten confused and shot myself in the foot when setting pg_hba.conf a few times, myself. brew == Strange Brew ([EMAIL PROTECTED]) Check out my Stock Option Covered Call website http://www.callpix.com and my Musician's Online Database Exchange http://www.TheMode.com ==
[GENERAL] pg_ctl reload breaks our client
Yesterday a client application broke immediately after we issued a pg_ctl reload command. The only change we had made was to pg_hba.conf to enable trusted connections from localhost. My question is, how should the client application be affected by such a reload? My impression was that the client should be totally unaware of a reload, but reality does not bear this out. Any ideas/informed responses will be welcomed. I suspect that this has uncovered a bug in our client, but without knowing what the client experience should be, it's hard to narrow down where the bug may lie. Thanks. __ Marc Munro
Re: [GENERAL] pg_ctl reload breaks our client
Michael, It is Postgres 7.3.6. The client is a multi-threaded C++ client. The breakage was that one group of connections simply stopped. Others continued without problem. It is not clear exactly what was going on. Nothing in our application logs gives us any clue to this. As for reproducibility, it has happened before in test environments when we have bounced the database. This is not too shocking as I would expect the client to notice this :-) It is a little more shocking when it's a reload. Or maybe I have simply misunderstood what reload does. I am simply looking for clues here and don't expect definitive answers. That's why I was a little vague. Am I right though, in thinking that a reload should be pretty much invisible to the client, or will certain operations fail and require a re-try? __ Marc On Fri, 2005-09-16 at 14:40 -0600, Michael Fuhr wrote: On Fri, Sep 16, 2005 at 01:28:13PM -0700, Marc Munro wrote: Yesterday a client application broke immediately after we issued a pg_ctl reload command. How did the client break? What behavior did it exhibit? Were there any errors in the server's logs? Can you duplicate the problem? What version of PostgreSQL are you using, and on what platform?
Re: [GENERAL] pg_ctl reload breaks our client
Michael, Many thanks for your response; it is much appreciated. My responses are embedded below: On Fri, 2005-09-16 at 17:10 -0600, Michael Fuhr wrote: On Fri, Sep 16, 2005 at 02:16:29PM -0700, Marc Munro wrote: It is Postgres 7.3.6. The client is a multi-threaded C++ client. The breakage was that one group of connections simply stopped. Others continued without problem. It is not clear exactly what was going on. How did the connections stop? Were the connections broken, causing queries to fail? Or did queries block and never return? Or something else? What was happening that shouldn't happen, or what wasn't happening that should happen? From the server side, there were simply connections (1 or 2) that appeared idle. From the client side it looked like a query had been initiated but the client thread was stuck in a library call (as near as we can tell). This, vague though it is, is as much as I know right now. We were unable to do much debugging as it is a production system and the priority was to get it back up. If the connections were still active but not returning, did you do a process trace on the connection's postmaster or attach a debugger to it to see what it was doing? No, time pressure prevented this. Could the timing of the problem have been coincidence? Have you ever seen the problem without a reload? How often do you see the problem after a reload? Do you know for certain that the application was working immediately before the reload and not working immediately after it? It *could* be coincidence, but the problem began within 5 seconds of the reload. Coincidence is unlikely. What operating system are you using? Linux 2.4.20 smp i686 Nothing in our application logs gives us any clue to this. What about the postmaster logs? Ah, now there's another story. Unavailable I'm afraid. Resolving that is also on my priority list. As for reproducibility, it has happened before in test environments when we have bounced the database. 
This is not too shocking as I would expect the client to notice this :-) It is a little more shocking when it's a reload. Or maybe I have simply misunderstood what reload does. Can you reproduce the problem with a reload? A stop and start will terminate client connections, but a reload shouldn't. This is not currently seen as a priority (the work-around of "don't do that" is seen as sufficient). I'm simply hoping to get someone to say for sure that the client app should not be able to tell that a reload has happened. At that point I may be able to raise the priority of this issue. I would certainly like to do more investigation. If postgresql hackers are interested in this strange event (please tell me for sure that it *is* strange) that may also help me to get the necessary resources to run more tests. Thanks again. __ Marc Munro
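Whatever the root cause turns out to be, a client that must survive server-side events (restarts certainly, and apparently sometimes reloads) usually wraps its queries in a retry loop. A generic sketch of that pattern follows; this is not the C++ client from the thread, and all names here are illustrative:

```python
import time

def run_with_retry(execute, attempts=3, delay=0.5):
    """Call execute(); on failure, retry with exponential backoff.

    execute is any zero-argument callable that raises on a transient
    failure (e.g. a dropped connection). A real client would catch its
    driver's specific error class and reconnect before retrying.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return execute()
        except Exception as exc:                # driver-specific in real code
            last_error = exc
            time.sleep(delay * (2 ** attempt))  # back off before retrying
    raise last_error
```

The point of the pattern is that the application treats a failed query as retriable rather than fatal, so a brief server-side hiccup does not leave a thread permanently stuck.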
[GENERAL] Transaction id wraparound questions
It seems that we have not been vacuuming our production database properly. We have been explicitly vacuuming all tables individually but have not vacuumed the entire database. A recent vacuum of the entire database gave us the dreaded "You may have already suffered transaction-wraparound data loss." warning. We have so far encountered no problems but I am wondering about the safest course of action right now. We cannot easily take an outage to perform a full dump and restore. Questions:

1) What is likely to happen if we encounter transaction id wraparound?
2) Will a full database vacuum fix the problem?
3) Can it make things worse?
4) Other than dump and restore, what options do we have?

Information: This query: select datname, datvacuumxid, datfrozenxid from pg_database; returns this:

 datname    | datvacuumxid | datfrozenxid
------------+--------------+--------------
 Production |   1173213507 |   2246955329
 template1  |    938856359 |    938856359
 template0  |          427 |          427

All responses welcomed. __ Marc Munro
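For anyone wanting to reason about these numbers: Postgres transaction IDs are 32-bit and compared circularly, so each xid is "older" than roughly the 2 billion xids that follow it, and rows whose xid falls outside that window become invisible after wraparound (which is the loss the warning refers to; a database-wide vacuum advances datfrozenxid and removes the risk). A minimal sketch of the circular comparison, mirroring the idea behind Postgres's TransactionIdPrecedes (the function name and framing here are my own):

```python
def xid_precedes(a: int, b: int) -> bool:
    """True if xid a logically precedes xid b under 32-bit wraparound.

    The signed 32-bit difference decides the order, so the comparison
    works even when the xid counter has wrapped past 2**32.
    """
    diff = (a - b) & 0xFFFFFFFF
    if diff >= 0x80000000:          # reinterpret as a negative signed 32-bit value
        diff -= 0x100000000
    return diff < 0
```

Note how an xid just below 2**32 correctly precedes a small xid: the counter has wrapped, but the signed difference still orders them.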
[GENERAL] Waiting for select
Can someone explain this? I seem to have a query which is being blocked by a lock. I was under the impression that selects are never blocked. Am I missing something or is this bad behaviour? I am using slony and am synchronising a slave for the first time. In the hope of seeing some progress on the slave I attempt to perform a select count(*) on one of the tables. The select just stops. ps shows this:

postgres  5987  0.0  0.4  19180  4188 ?  S  15:16  0:00 postgres: postgres testdb 192.168.1.111(33598) SELECT waiting

A query of blocking locks shows this:

           object            | trans | pid  |        mode         | blocker
-----------------------------+-------+------+---------------------+---------
 testdb.campaign_cost_pk     |       | 5754 | AccessShareLock     |
 testdb.csn_log_pk           |       | 5754 | RowExclusiveLock    |
 testdb.pg_trigger           |       | 5754 | AccessShareLock     |
 testdb.pg_trigger           |       | 5754 | RowExclusiveLock    |
 testdb.sl_subscribe         |       | 5754 | AccessShareLock     |
 testdb.csn_log_idx2         |       | 5754 | RowExclusiveLock    |
 testdb.campaign             |       | 5754 | AccessShareLock     |
 testdb.campaign             |       | 5754 | RowExclusiveLock    |
 testdb.campaign             |       | 5754 | AccessExclusiveLock |
 testdb.pg_rewrite           |       | 5754 | AccessShareLock     |
 testdb.pg_rewrite           |       | 5754 | RowExclusiveLock    |
 testdb.computer_sn_log      |       | 5754 | AccessShareLock     |
 testdb.computer_sn_log      |       | 5754 | RowExclusiveLock    |
 testdb.computer_sn_log      |       | 5754 | AccessExclusiveLock |
 testdb.address_type_pk      |       | 5754 | AccessShareLock     |
 testdb.campaign_pk          |       | 5754 | AccessShareLock     |
 testdb.sl_log_1             |       | 5754 | AccessShareLock     |
 testdb.sl_log_1             |       | 5754 | RowExclusiveLock    |
 testdb.csn_cookie_idx1      |       | 5754 | AccessShareLock     |
 testdb.pg_index             |       | 5754 | AccessShareLock     |
 testdb.pg_index             |       | 5754 | RowShareLock        |
 testdb.csn_log_idx3         |       | 5754 | RowExclusiveLock    |
 testdb.csn_cookie_pk        |       | 5754 | AccessShareLock     |
 testdb.sl_log_2             |       | 5754 | AccessShareLock     |
 testdb.sl_log_2             |       | 5754 | RowExclusiveLock    |
 testdb.sl_set               |       | 5754 | AccessShareLock     |
 testdb.sl_set               |       | 5754 | RowShareLock        |
 testdb.campaign_cost        |       | 5754 | AccessShareLock     |
 testdb.campaign_cost        |       | 5754 | RowExclusiveLock    |
 testdb.campaign_cost        |       | 5754 | AccessExclusiveLock |
 testdb.sl_table             |       | 5754 | AccessShareLock     |
 testdb.sl_table             |       | 5754 | RowShareLock        |
 testdb.sl_table             |       | 5754 | RowExclusiveLock    |
                             |  9182 | 5754 | ExclusiveLock       |
 testdb.computer_sn_cookie   |       | 5754 | AccessShareLock     |
 testdb.computer_sn_cookie   |       | 5754 | RowExclusiveLock    |
 testdb.computer_sn_cookie   |       | 5754 | AccessExclusiveLock |
 testdb.pg_attribute         |       | 5754 | AccessShareLock     |
 testdb.sl_config_lock       |       | 5754 | AccessExclusiveLock |
 testdb.sl_trigger           |       | 5754 | AccessShareLock     |
 testdb.csn_pk               |       | 5754 | AccessShareLock     |
 testdb.pg_class             |       | 5754 | AccessShareLock     |
 testdb.pg_class             |       | 5754 | RowShareLock        |
 testdb.pg_class             |       | 5754 | RowExclusiveLock    |
 testdb.address_type         |       | 5754 | AccessShareLock     |
 testdb.address_type         |       | 5754 | RowExclusiveLock    |
 testdb.address_type         |       | 5754 | AccessExclusiveLock |
 testdb.pg_namespace         |       | 5754 | AccessShareLock     |
 testdb.pg_namespace         |       | 5754 | RowShareLock        |
 testdb.csn_log_idx1         |       | 5754 | RowExclusiveLock    |
 testdb.computer_sn          |       | 5754 | AccessShareLock     |
 testdb.computer_sn          |       | 5754 | RowExclusiveLock    |
 testdb.computer_sn          |       | 5754 | AccessExclusiveLock |
 testdb.address_type         |       | 5987 | AccessShareLock     |    5754
                             |  9422 | 5987 | ExclusiveLock       |    5754
(55 rows)

All responses will be welcomed. __ Marc
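The listing can be read mechanically: a lock request waits when its mode conflicts with a mode already held on the same object. The conflict relation below reproduces the standard PostgreSQL table-level lock conflict matrix; the function and data layout around it are an illustrative sketch, not server code:

```python
# Table-level lock conflict test. CONFLICTS reproduces the standard
# PostgreSQL lock conflict matrix (each mode maps to the set of held
# modes it must wait behind); the matrix is symmetric.

CONFLICTS = {
    "AccessShareLock":          {"AccessExclusiveLock"},
    "RowShareLock":             {"ExclusiveLock", "AccessExclusiveLock"},
    "RowExclusiveLock":         {"ShareLock", "ShareRowExclusiveLock",
                                 "ExclusiveLock", "AccessExclusiveLock"},
    "ShareUpdateExclusiveLock": {"ShareUpdateExclusiveLock", "ShareLock",
                                 "ShareRowExclusiveLock", "ExclusiveLock",
                                 "AccessExclusiveLock"},
    "ShareLock":                {"RowExclusiveLock", "ShareUpdateExclusiveLock",
                                 "ShareRowExclusiveLock", "ExclusiveLock",
                                 "AccessExclusiveLock"},
    "ShareRowExclusiveLock":    {"RowExclusiveLock", "ShareUpdateExclusiveLock",
                                 "ShareLock", "ShareRowExclusiveLock",
                                 "ExclusiveLock", "AccessExclusiveLock"},
    "ExclusiveLock":            {"RowShareLock", "RowExclusiveLock",
                                 "ShareUpdateExclusiveLock", "ShareLock",
                                 "ShareRowExclusiveLock", "ExclusiveLock",
                                 "AccessExclusiveLock"},
    "AccessExclusiveLock":      {"AccessShareLock", "RowShareLock",
                                 "RowExclusiveLock", "ShareUpdateExclusiveLock",
                                 "ShareLock", "ShareRowExclusiveLock",
                                 "ExclusiveLock", "AccessExclusiveLock"},
}

def conflicts(wanted: str, held: str) -> bool:
    """True when a request for `wanted` must wait behind a held `held` lock."""
    return held in CONFLICTS[wanted]
```

Applying it to the listing: pid 5987's AccessShareLock on testdb.address_type conflicts only with AccessExclusiveLock, and pid 5754 holds exactly that mode on the same table, which is why the SELECT waits.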
Re: [GENERAL] Waiting for select
It's doing something in slony. Part of the initial sync operation I guess. I guess it must be doing an alter table or reindex or something. I don't understand why though. I'll repeat my question on the slony mailing list. Thanks for the response. __ Marc On Fri, 2005-06-10 at 18:48 -0400, Tom Lane wrote: Marc Munro [EMAIL PROTECTED] writes: Can someone explain this? I seem to have a query which is being blocked by a lock. I was under the impression that selects are never blocked. AccessExclusiveLock blocks anything. A query of blocking locks shows this: object | trans | pid | mode | blocker ---+---+--+-+- testdb.address_type | | 5754 | AccessExclusiveLock | testdb.address_type | | 5987 | AccessShareLock |5754 So what's process 5754 doing? regards, tom lane
Thanks. Was: [GENERAL] Need to determine how badly tables need vacuuming
Thanks to both Elein and Tom. pgstattuple (and dbsize) from contrib gave me exactly what I wanted. The statistics views give me something extra. __ Marc
[GENERAL] Need to determine how badly tables need vacuuming
On a 7.3 production system with limited downtime available, we can rarely take the time to run vacuum full. From time to time though, performance of some of the tables becomes an issue and we have to perform a full vacuum on those tables. We'd like to be able to better plan these operations, so: Is there a query that will return an estimated row count as well as an estimated unused tuple count for each table? Right now we're figuring this stuff out manually by reading the vacuum report. It'd be nice to have a query on hand that returns a list of tables with more than 30% unused and more than 100k rows unused. I envision these columns being returned: table_name, %unused, tuples, unused_tuples, MB_of_disk It looks like the estimated row count can be found in pg_class. I can figure out the size on disk by looking at the physical files. But how can I figure out the number of dead tuples without actually doing a vacuum? Thanks. __ Marc Munro
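Once a source of per-table numbers exists (the follow-up in this thread found contrib's pgstattuple suitable), the report described above is just arithmetic over those numbers. A sketch of the reporting step, where the input layout — name mapped to (tuples, unused_tuples, bytes) — is an assumption made for illustration:

```python
def vacuum_candidates(tables, min_unused_pct=30.0, min_unused_tuples=100_000):
    """Return (table_name, pct_unused, tuples, unused_tuples, mb_of_disk)
    for each table exceeding both thresholds, worst first.

    `tables` maps name -> (tuples, unused_tuples, bytes); this layout is
    an illustrative assumption, standing in for whatever tool supplies
    the raw per-table figures.
    """
    report = []
    for name, (tuples, unused, nbytes) in tables.items():
        total = tuples + unused
        pct = 100.0 * unused / total if total else 0.0
        if pct > min_unused_pct and unused > min_unused_tuples:
            report.append((name, round(pct, 1), tuples, unused, nbytes / 2**20))
    return sorted(report, key=lambda r: r[1], reverse=True)
```

The two thresholds match the criteria stated above (more than 30% unused and more than 100k unused rows); both are parameters so the plan can be tuned.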