Re: [GENERAL] split string by special characters
On 24 July, 23:22, a.w...@netzmeister-st-pauli.de (Andreas Wenk) wrote:
> Hi,
> I was thinking about that and in my opinion the approach of letting the
> database do that is the wrong direction. Sure, you can do a lot with
> regexp_split_to_table or regexp_split_to_array but they are kind of

Yes, I see. You're quite right; the split was intended to give me everything in processed chunks in some easy way, as the last part of the interpretation of the text.

> limited compared to a programming language using regular expressions. If
> I had to try to get your job done, I would try regexp_matches() like:
>
> SELECT regexp_matches('This is just a text, that contain special
> characters such as, (comma),"(", ")" (left and right parenthesis) as
> well as "?" question, mark.How do I split it up with PostgreSQL?',
> E'(\\w*.)\\s+','g');
>
> regexp_matches
> {This} {is} {just} {a} {"text,"} {that} {contain} {special} {characters}
> {such} {"as,"} {","} {"\""} {left} {and} {right} {parenthesis)} {as}
> {well} {as} {"\""} {"question,"} {How} {do} {I} {split} {it} {up} {with}
> (29 rows)
>
> So, you have the ability to catch the separators like ','. But for now,
> the example just catches the comma. But you want to catch a lot of other

Yes, but then I ran into problems with separators that the regexp engine considers part of the expression, and with how to dynamically build the right expression in some unified way for each language.

> separators as well. I suggest you do that within the logic of your
> coding language because I don't think this will be an easy way to walk

Guess you're right, because I didn't know how to handle it with the regexp approach. I sat down yesterday and wrote a function that does the job for me in PL/pgSQL. I'm not quite finished, but I can see the light at the end of the tunnel.
The basic approach I'm working with now is to let it find the position of each delimiter combination within the text, then sort the resulting array to get it ordered and extract each part. It won't be as fast as lightning, but it is sufficient for now and, as it seems, allows me to parse text from various files written in different languages (e.g. programming) just by specifying the delimiters.

> ;-). This is no database job in my opinion.

I didn't intend to try it either before I spotted some of those functions... :-) Then I figured it would be nice to do it within the db engine, as all the data is present there. I wrote code outside the db engine some time ago, but then other aspects made it less desirable to use.

> Cheers
> Andy

Thank you Andy for the code example and your advice. I really appreciate that you took the time to show me how and explain why. //Jan-Erik -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
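[Editorial note: the split-keeping-delimiters problem discussed in this thread can also be approximated in a single query. This is a hedged sketch, not Jan-Erik's PL/pgSQL function; the delimiter class is illustrative, whereas his version reads the delimiters from a table.]

```sql
-- Sketch: split text into words AND delimiters, keeping the delimiters
-- as their own tokens. The class [ ,."?()] stands in for the contents
-- of the "delimiters" table mentioned in the thread.
SELECT (regexp_matches(
         'This is just a text, "(" how do I split it?',
         '([[:alnum:]]+|[ ,."?()])',
         'g'))[1] AS token;
```

The trick is the alternation: each match is either a run of word characters or exactly one delimiter character, so nothing is discarded.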
Re: [GENERAL] Find difference between two Text fields
Hi, there might be a better solution out there, but it seemed like an interesting problem so I wrote this function:

CREATE OR REPLACE FUNCTION stringdiff(text, text) RETURNS TEXT AS $$
  SELECT array_to_string(ARRAY(
    SELECT CASE WHEN substring($1 FROM n FOR 1) = substring($2 FROM n FOR 1)
                THEN ' '
                ELSE substring($2 FROM n FOR 1)
           END
    FROM generate_series(1, character_length($1)) AS n), '');
$$ LANGUAGE sql;

Use it like this:

SELECT stringdiff('aaaaaaaaaa', 'axaaacaaza');
 stringdiff
------------
  x   c  z

Regards, Aleksander

Peter Hunsberger wrote: Can anyone give me a way to find the difference between two text fields on a character by character basis? Essentially, I'd like to logically AND the two together and for any position that has a non-zero result show whatever character is in that position for the second string. The solution can be postgres specific but something approaching ANSI SQL would also be helpful (if possible).
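[Editorial note: the function above only scans character_length($1) positions, so any trailing characters of a longer second argument are silently ignored. A hedged variant, not part of Aleksander's post, that iterates over the longer of the two strings:]

```sql
-- Sketch: like stringdiff(), but generate positions up to the longer
-- of the two arguments so neither string is silently truncated.
CREATE OR REPLACE FUNCTION stringdiff2(text, text) RETURNS text AS $$
  SELECT array_to_string(ARRAY(
    SELECT CASE WHEN substring($1 FROM n FOR 1) = substring($2 FROM n FOR 1)
                THEN ' '
                ELSE substring($2 FROM n FOR 1)
           END
    FROM generate_series(1, greatest(character_length($1),
                                     character_length($2))) AS n), '');
$$ LANGUAGE sql;
```

Note that where $2 is the shorter string, substring() returns an empty string for the out-of-range positions, so those trailing differences simply produce no output character.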
Re: [GENERAL] Disable database listing for non-superuser (\l) ?
> So, is this a misguided attempt at security through obscurity, or are > you looking at limiting the noise that users see when they look at > databases? The answer to that question would be "yes". ~BAS
Re: [GENERAL] Disable database listing for non-superuser (\l) ?
On Fri, Jul 24, 2009 at 5:02 PM, Brian A. Seklecki wrote: > All: > > Any suggestions on how-to, or comments on a potential NFR, to disable > non-superusers from viewing the database list via \l? So, is this a misguided attempt at security through obscurity, or are you looking at limiting the noise that users see when they look at databases? If it's for security through obscurity, I don't have any real advice, as any changes to do that in a manner that really limits the ability of the user to look up that data are likely to have unintended negative consequences. OTOH, if you just want to limit what folks see to make their life easier, then I'd suggest making the changes in psql and changing the queries it uses to look up the databases the user has access to.
Re: [GENERAL] FATAL: root page 3 of "pg_class_oid_index" has level 0, expected 1
Hi Tom, Thanks for the reply! The reindex command you provided did not work; it gave the same error as the one I got when logging in earlier. However, I was feeling reckless, so I did reindex database ; and that completed as follows:

NOTICE: table "pg_class" was reindexed
NOTICE: table "sql_features" was reindexed
NOTICE: table "sql_implementation_info" was reindexed
NOTICE: table "sql_languages" was reindexed
NOTICE: table "sql_packages" was reindexed
NOTICE: table "sql_sizing" was reindexed
NOTICE: table "sql_sizing_profiles" was reindexed
NOTICE: table "pg_statistic" was reindexed
ERROR: invalid page header in block 43 of relation "pg_attribute"

So I tried the reindex index pg_class_oid_index; again, and it responded with REINDEX, so I assume it succeeded. I can also log in to the database as normal now (without the PGOPTIONS="-P"). I'm assuming I should probably worry about the ERROR above. Is there another repair command to fix this, or should I try to dump/initdb/reload? Thanks, Andrew

On Fri, Jul 24, 2009 at 2:33 PM, Tom Lane wrote: > Andrew Radamis writes: > > I'm getting this error when I try to log into my database. > > FATAL: root page 3 of "pg_class_oid_index" has level 0, expected 1 > > You've got a corrupted index. You can probably fix it like this: > > export PGOPTIONS="-P" > psql ..usual options.. > reindex index pg_class_oid_index; > > After that, a dump/initdb/reload might be in order, since it's hard to tell whether there are other problems lurking. Some testing of your hardware might be advisable too; and I'd suggest updating your Postgres if it's not a recent release. > > regards, tom lane
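[Editorial note: for the remaining "invalid page header" error on pg_attribute, a commonly used escape hatch is the zero_damaged_pages setting, which makes PostgreSQL zero out pages with invalid headers as they are read. This is a hedged sketch, not advice given in this thread; it permanently discards the rows on the damaged page, so a dump/initdb/reload afterwards is still the safe path.]

```sql
-- DESTRUCTIVE: zeroes any page whose header fails validation.
-- Take a filesystem-level backup of the data directory first.
SET zero_damaged_pages = on;   -- superuser-only setting
VACUUM FULL pg_attribute;      -- forces every page of the table to be read
SET zero_damaged_pages = off;
-- Then dump, initdb, and reload to get back to a trustworthy state.
```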
Re: [GENERAL] Re: user/grant - best practices handling permission in production system
On Jul 24, 2009, at 1:11 AM, Stefano Nichele wrote: Hi Greg, thanks for your suggestions. See below for my comments. Greg Stark wrote: Well there isn't a way to do step 2 in one shot either. Maybe my explanation was not clear. Step 2 means run the DDL/DML script to create and populate all the tables. You'll have to issue a CREATE statement for each object; it's no extra work to issue a GRANT for each object with the specific rights the application should have at that time. Think of it as an important part of the process of creating a new object. You are right, but I don't like this approach so much since I'd want to provide the DDL/DML script and let the DBA decide the database name and user name (i.e., I don't want to put the username in the DDL/DML script file). It's not necessary that you give the DDL/DML script to anyone to run, or that you embed any user-specific info in any of your DDL. An alternative approach would be to store the DDL that you intend to be run for others in a locked-down schema and then create a function, or functions depending on how you implement it all, owned by an admin role (one that has permissions to create whatever's needed) as SECURITY DEFINER. In this way you always maintain control of who, what, when, and where things are created and, more importantly, dropped. Note that it's probably not necessary to grant all rights to every table. Most applications have some tables that are read-only or insert-only from the point of view of the application. Your system will be more secure if the application does not have unnecessary privileges. So thinking about what rights to grant to the application for each object when it's created is not a bad thing. You are right also about that, but maybe it's too much for me. But to have a really secure DB this should be taken into account. At the end, these are the steps: 1. using postgres user (or another user with grant for creating database) create the database 2. 
using the user used in the previous step, create a new user (the one the webapp will use) 3. give to the new user all permissions on the database 4. using the new user, create and populate the tables with the DDL/DML script. In this way the user is not the db owner but is the owner of all tables. Do you think it's acceptable? I don't think it's a good idea to have the webapp user creating database objects like tables and such, as it would then have the ability to drop said objects. Picture a developer thinking they want to log in and run some queries to check out some data in order to decide how better to implement some data crunching application level algorithms and thinking, "Hey! I've got the app user's login right here!". All it would then take is them not paying attention to what they're doing for one second and BAM! you're getting a call from the poor sap sounding like she's about to get sick, asking you when the last backup was... Thinking through the necessary permissions for the app user on each table is worth it for a number of reasons. One good one, in addition to security, is that it defines a data level access interface that can guide application db access APIs, discouraging ad-hoc SQL in the app. It's also another way to force yourself to think about your design from another angle. Another way to look at permissions is that if you give too much, you're just creating the risk of more work for yourself if someone later abuses them, as you'll be the one asked to fix their mess. Erik Jones, Database Administrator Engine Yard Support, Scalability, Reliability 866.518.9273 x 260 Location: US/Pacific IRC: mage2k
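[Editorial note: the least-privilege setup discussed above can be sketched as follows. Role and table names are illustrative, not from the thread.]

```sql
-- Sketch: a dedicated login role for the webapp with only the rights
-- the application actually uses; no ownership, no DDL ability.
CREATE ROLE webapp LOGIN PASSWORD 'secret';

GRANT CONNECT ON DATABASE appdb TO webapp;
GRANT USAGE   ON SCHEMA public  TO webapp;

GRANT SELECT, INSERT, UPDATE, DELETE ON orders    TO webapp;
GRANT SELECT                         ON countries TO webapp;  -- read-only lookup
GRANT INSERT                         ON audit_log TO webapp;  -- insert-only
```

Because webapp owns nothing, a stray DROP TABLE from an application connection simply fails with a permission error.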
Re: [GENERAL] Very slow joins
On Fri, Jul 24, 2009 at 4:40 PM, MS wrote: > >> I never cease to be amazed at how many times people have these monster >> CPUs, like dual quad core 3GHz processors, with 16GB or whatever of ram, >> and then try and run a database off a single 7200 rpm desktop SATA >> drive. At work our production databases often run on dozens of 10000 >> or 15000 rpm drives, organized as raid1+0's. > > Yeah. I just took the effort and copied all data from those tables to > mysql and ran an equivalent query - all took around 1 minute to > execute. > So either me or postgres is seriously broken. ;) I'm going back to > mysql. :( Can we see an explain analyze at least? merlin
Re: [GENERAL] Very slow joins
> I never cease to be amazed at how many times people have these monster > CPUs, like dual quad core 3GHz processors, with 16GB or whatever of ram, > and then try and run a database off a single 7200 rpm desktop SATA > drive. At work our production databases often run on dozens of 10000 > or 15000 rpm drives, organized as raid1+0's. Yeah. I just took the effort and copied all data from those tables to mysql and ran an equivalent query - all took around 1 minute to execute. So either me or postgres is seriously broken. ;) I'm going back to mysql. :(
[GENERAL] Disable database listing for non-superuser (\l) ?
All: Any suggestions on how-to, or comments on a potential NFR, to disable non-superusers from viewing the database list via \l? Possibly a postgresql.conf toggle or restrictions on the internal views that constitute, say, 'pg_catalog.pg_database'. Something equivalent, in principle, to the FreeBSD sysctl: % security.bsd.see_other_uids=0 Just a thought... ~BAS
Re: [GENERAL] split string by special characters
Jan-Erik wrote: I wonder if you could please help me out to extract a character string to an array or, better yet, a table. I'd like to split strings of text up into words and delimiters (but not delete the delimiters). The delimiters are defined as comma, space, dot, single/double quotation mark, question mark etc.¹ in a separate table (delimiters) depending on what rules apply for the input. regexp_split_to_array/table seem quite suitable, but I have difficulties forming the right expression with them, apart from the fact that they remove the delimiters as well. Example: This is just a text that contain special characters such as , (comma), "(", ")" (left and right parenthesis) as well as "?" question mark. How do I split it up with PostgreSQL? Expected result: {This, " ", is, " ", just, " ", a, ..., PostgreSQL, "?" } __ ¹) Also later on tags such as and at other times something else depending on the circumstances. //Jan-Erik

Hi, I was thinking about that and in my opinion the approach of letting the database do that is the wrong direction. Sure, you can do a lot with regexp_split_to_table or regexp_split_to_array, but they are kind of limited compared to a programming language using regular expressions. If I had to try to get your job done, I would try regexp_matches() like:

SELECT regexp_matches('This is just a text, that contain special characters such as, (comma),"(", ")" (left and right parenthesis) as well as "?" question, mark.How do I split it up with PostgreSQL?', E'(\\w*.)\\s+','g');

regexp_matches
{This} {is} {just} {a} {"text,"} {that} {contain} {special} {characters} {such} {"as,"} {","} {"\""} {left} {and} {right} {parenthesis)} {as} {well} {as} {"\""} {"question,"} {How} {do} {I} {split} {it} {up} {with}
(29 rows)

So, you have the ability to catch the separators like ','. But for now, the example just catches the comma. But you want to catch a lot of other separators as well. 
I suggest you do that within the logic of your coding language because I don't think this will be an easy way to walk ;-). This is no database job in my opinion. Cheers Andy
[GENERAL] Find difference between two Text fields
Can anyone give me a way to find the difference between two text fields on a character by character basis. Essentially, I'd like to logically AND the two together and for any position that has a non-zero result show whatever character is in that position for the second string. The solution can be postgres specific but something approaching ANSI SQL would also be helpful (if possible). -- Peter Hunsberger
Re: [GENERAL] Best way to import data in postgresl (not "COPY")
Hello everyone, Denis BUCHER wrote: > I have a system that must each day import lots of data from another one. > Our system is in postgresql and we connect to the other via ODBC. > > Currently we do something like : > > SELECT ... FROM ODBC source > foreach row { > INSERT INTO postgresql > } > > The problem is that this method is very slow... > Does someone have a better suggestion ?

Thanks a lot for the help of everyone! Here are the first results of my tries; it's very interesting!

a) ON THE DESTINATION (PHP/PostgreSQL)
1. Preparing INSERT statements (to Postgres) was already a better idea
2. Then wrapping everything in BEGIN WORK / COMMIT improved even more
3. At first I didn't realise I could remove quote escaping thanks to prepare; this improved a little more
4. Then I found something very interesting: pg_send_execute! (asynchronous)

Inserted lines : 134297
Required time : 292 seconds ([0] without prepare)
Required time : 253 seconds ([1] with prepare) (13% better)
Required time : 224 seconds ([2] with prepare and BEGIN COMMIT) (12% better)
Required time : 221 seconds ([3] removed escaping)
Required time : 214 seconds ([4] 4% better)

b) ON THE SOURCE (PHP/ODBC)
5. Believe it or not, but changing from PHP ODBC to PHP PDO ODBC
From : http://us2.php.net/manual/en/ref.uodbc.php
To : http://fr.php.net/manual/en/class.pdostatement.php
...helped a LOT:

Inserted lines : 134297
Required time : 25 seconds ([1] [2] [3] [4] [5] + PDO)

Hope it will help other people! Thanks a lot again to everyone that helped me :-) Denis
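[Editorial note: at the SQL level, the prepare-plus-transaction pattern Denis describes looks roughly like this. Table and parameter names are illustrative, not from the thread.]

```sql
-- Sketch: push many INSERTs through one prepared statement inside a
-- single transaction, so per-row parsing/planning and per-row commits
-- (the two costs Denis measured) are both avoided.
BEGIN;
PREPARE ins (integer, text) AS
  INSERT INTO target_table (id, payload) VALUES ($1, $2);

EXECUTE ins (1, 'first row');
EXECUTE ins (2, 'second row');
-- ... one EXECUTE per imported row ...

COMMIT;
DEALLOCATE ins;
```

Parameters passed to a prepared statement are never interpolated into SQL text, which is why the quote-escaping step in the PHP code could be dropped.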
Re: [GENERAL] Replication from Postgres to EDB
On Fri, 2009-07-24 at 11:31 +0100, Jazz Johal wrote: > Hi > > Is it possible to setup replication from EDB to Postgres? Probably. Using Slony. As I understand it they explicitly keep backward compatibility. Joshua D. Drake -- PostgreSQL - XMPP: jdr...@jabber.postgresql.org Consulting, Development, Support, Training 503-667-4564 - http://www.commandprompt.com/ The PostgreSQL Company, serving since 1997
Re: [GENERAL] FATAL: root page 3 of "pg_class_oid_index" has level 0, expected 1
Andrew Radamis writes: > I'm getting this error when I try to log into my database. > FATAL: root page 3 of "pg_class_oid_index" has level 0, expected 1

You've got a corrupted index. You can probably fix it like this:

export PGOPTIONS="-P"
psql ..usual options..
reindex index pg_class_oid_index;

After that, a dump/initdb/reload might be in order, since it's hard to tell whether there are other problems lurking. Some testing of your hardware might be advisable too; and I'd suggest updating your Postgres if it's not a recent release.

regards, tom lane
[GENERAL] FATAL: root page 3 of "pg_class_oid_index" has level 0, expected 1
Hello, I'm pretty new to pgsql, so speak slowly and draw plenty of pictures please :P I'm getting this error when I try to log into my database: FATAL: root page 3 of "pg_class_oid_index" has level 0, expected 1 I've done some googling and I found one mailing list item in another language, and I couldn't decipher any useful information out of it. I have no idea what caused this or how it happened. The computer has been up for months, but the database doesn't get very much use, just when I occasionally use it with sql-ledger. Thanks for your help, Andrew
Re: [GENERAL] Very slow joins
MS wrote: Btw. It looks like this issue: http://archives.postgresql.org/pgsql-performance/2007-09/msg00374.php In my case the CPU usage is low too (3%) but IO wait is high (95%). I'm using Postgresql 8.3. For more info on disk iowaits, use `iostat -x 5` (5 means sample every 5 seconds), and ignore the first sample as it's the average since system boot. This will give you drive by drive and file system by file system details of disk IO. The exact details shown vary by operating system. Note, on many linux distributions, iostat is part of the sysstat package, which often isn't installed by default, especially on a 'minimum' install... on RH/Fedora/Centos type systems, try `yum install sysstat` to install it. I never cease to be amazed at how many times people have these monster CPUs, like dual quad core 3GHz processors, with 16GB or whatever of ram, and then try and run a database off a single 7200 rpm desktop SATA drive. At work our production databases often run on dozens of 10000 or 15000 rpm drives, organized as raid1+0's.
Re: [GENERAL] Very slow joins
Btw. It looks like this issue: http://archives.postgresql.org/pgsql-performance/2007-09/msg00374.php In my case the CPU usage is low too (3%) but IO wait is high (95%). I'm using Postgresql 8.3.
Re: [GENERAL] Replication from Postgres to EDB
On Fri, Jul 24, 2009 at 6:31 AM, Jazz Johal wrote: > Hi > > Is it possible to setup replication from EDB to Postgres? > You can with slony. I recommend you contact supp...@enterprisedb.com for details. --Scott
Re: [GENERAL] Copying only incremental records to another DB..
Alban Hertroys, 24.07.2009 13:07: It would be nice if there were a tool that could do a diff between two dumps resulting in a new dump with just the statements necessary to apply the differences. I don't think there is such a tool yet though (some light Googling does bring up such a tool for SQLite). Implementing it does have a few challenges, changes to records with foreign keys for example. You might want to have a look at my SQL Workbench/J. It has a command that can compare the data of two databases for differences and can write the necessary DML statements to update the target database to match the data from the source. Details can be found here: http://www.sql-workbench.net/manual/wb-commands.html#command-data-diff As it does not compare two dumps, but the databases directly, it requires that connections can be made to both databases at the same time (so it's not possible to do an "offline diff"). Feel free to contact me if you have any questions (the support email address is on the homepage). Regards Thomas
Re: [GENERAL] Copying only incremental records to another DB..
On 29 Jun 2009, at 6:10, Phoenix Kiula wrote: Hi We're trying PG on a new machine, so we copied our current (live) database to that server. Tested the code and it's all working. Now, to make that second server the main live server, we will need to copy the db again, including the new records since we copied for testing. Is there any way to copy only the incremental records in all the tables? It would be nice if there were a tool that could do a diff between two dumps resulting in a new dump with just the statements necessary to apply the differences. I don't think there is such a tool yet though (some light Googling does bring up such a tool for SQLite). Implementing it does have a few challenges, changes to records with foreign keys for example. Barring the availability of such a tool, there are some tools out there that can 'diff' two XML files and, seeing that the xml module can export tables to a pre-defined XML format, you may be able to do something using that. What to do with the resulting XML file is another story; some XSLT could probably turn it back into SQL again. If the changes aren't many, you could probably also work from the results of a normal diff of two text dumps and glue them back together into a usable dump file. Or just apply the changes by hand... All of these methods involve a bit of work and none is foolproof (unless the dump-diff tool does exist), but if approached well it could result in a rather useful tool. Alban Hertroys -- If you can't see the forest for the trees, cut the trees and you'll see there is no forest.
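[Editorial note: for the simple case where both copies are reachable from one session (e.g. the old snapshot restored into a separate schema), part of such a diff can be sketched directly in SQL with EXCEPT. Schema and table names are illustrative; this is not the dump-diff tool discussed above.]

```sql
-- Sketch: rows present in the live table but missing from the snapshot,
-- i.e. the "incremental records" to copy over. Catches inserted rows;
-- updates and deletes of keyed rows need a comparison on the key columns.
SELECT * FROM live.accounts
EXCEPT
SELECT * FROM snapshot.accounts;
```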
[GENERAL] Replication from Postgres to EDB
Hi Is it possible to setup replication from EDB to Postgres? Thanks
Re: [GENERAL] Search Path vs Synonyms
Whilst I need EDB for a few of their features, I am keen to stay as PSQL compliant as possible. Thanks for the help. ALTER USER userid SET search_path TO schema1,schema2; worked nicely. -Original Message- From: Magnus Hagander [mailto:mag...@hagander.net] Sent: 24 July 2009 08:57 To: Matthew Seaborn Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] Search Path vs Synonyms On Fri, Jul 24, 2009 at 09:38, Matthew Seaborn wrote: > Ah, I am actually using EnterpriseDB (a commercial 'fork' of PSQL) which does > have synonyms. For support on EnterpriseDB you should contact EnterpriseDB, not the PostgreSQL community. We can only answer about the opensource product. > Is it possible to define the default search_path for a given user? In the community version you can. I don't know if this works in EnterpriseDB. You'd just use ALTER USER userid SET search_path='schema1,schema2' -- Magnus Hagander Self: http://www.hagander.net/ Work: http://www.redpill-linpro.com/
Re: [GENERAL] ERROR: unexpected data beyond EOF in block of relation "RelationName"
Hi, Thanks for that. The question for me is how to find an appropriate OS patch for this. Is there a list of required patches for this Postgres release on SuSE? Regards, Marcin --- On Thu, 23/7/09, Tom Lane wrote: > From: Tom Lane > Subject: Re: [GENERAL] ERROR: unexpected data beyond EOF in block of relation "RelationName" > To: "Marcin Gon" > Cc: pgsql-general@postgresql.org > Date: Thursday, 23 July, 2009, 2:48 PM > Marcin Gon writes: > > I'm getting the following error from my Postgres database while inserting: > > ERROR: unexpected data beyond EOF in block of relation "RelationName". > > You omitted the HINT that says this has only been known to occur in connection with buggy kernels. I see you're running a fairly new kernel, but it might still pay to inquire whether it's subject to the wrong-lseek-result race condition that was around a while back. > > As far as getting out of the immediate problem is concerned, I think restarting the postmaster ought to fix it. > > regards, tom lane
Re: [GENERAL] Converting SQL to pg
mar...@cornhobble.com wrote: I must be tired. Any suggestions on converting the following to postgresql?

UPDATE IGNORE yose5_user_newflags AS flags, yose5_messages AS msg SET flags.forum_id = msg.forum_id WHERE flags.message_id = msg.message_id AND flags.message_id IN (15580, 15581, 15706, 15712, 15713, 15719, 15888)

Untested, but I imagine it would be something like this:

UPDATE yose5_user_newflags AS flags
SET forum_id = msg.forum_id
FROM yose5_messages AS msg
WHERE flags.message_id = msg.message_id
AND flags.message_id IN (15580, 15581, 15706, 15712, 15713, 15719, 15888)

PostgreSQL doesn't have anything similar to IGNORE (afaik), but depending on what you want to do, you may be able to work around that with deferred constraints etc. -- Tommy Gildseth DBA, Gruppe for databasedrift Universitetet i Oslo, USIT m: +47 45 86 38 50 t: +47 22 85 29 39
Re: [GENERAL] Converting SQL to pg
In article , Albe Laurenz wrote: >martin wrote: >To PostgreSQL from what? From MySQL, unless the person trying to port the code made some changes to it. >Can you explain what the statement is supposed to do? It's updating a table of what messages have been read during a thread move. It's part of Phorum (http://www.phorum.org), for which only MySQL is officially supported.
Re: [GENERAL] Converting SQL to pg
martin wrote: > I must be tired. Any suggestions on converting the following > to postgresql? > > UPDATE IGNORE yose5_user_newflags AS flags, yose5_messages AS > msg SET flags.forum_id = msg.forum_id WHERE flags.message_id = > msg.message_id AND flags.message_id IN (15580, 15581, 15706, > 15712, 15713, 15719, 15888) To PostgreSQL from what? Can you explain what the statement is supposed to do? Yours, Laurenz Albe
Re: [GENERAL] user/grant - best practices handling permission in production system
Stefano Nichele wrote: Hi All, I have some questions for you about the best way to handle permissions on a database in a production system. The final goal is to have a web application connected to the db using a single user that must run select/delete/insert/update (and maybe truncate). In my opinion that user should NOT own the db and the db itself should NOT be created using that user. Of course that user should NOT be able to create databases or other users. The steps could be: 1. using postgres user (or another user with grant for creating database) create the database 2. using the user used in step 1, create the schema and populate tables with initial data 3. using the user used in the previous step, create a new user (the one the webapp will use) 4. give to the new user the grant on all database objects for select/delete/insert/update I totally agree with Greg's answer but just want to give a hint for granting privileges to several objects in one shot as in step 4. pgAdmin III gives this ability with its grant wizard... this may help if you don't want to put all the steps in an "init script" for automatic db setup. Cheers Andy
[GENERAL] Re: user/grant - best practices handling permission in production system
Hi Greg, thanks for your suggestions. See below for my comments.

Greg Stark wrote:
> Well, there isn't a way to do step 2 in one shot either.

Maybe my explanation was not clear. Step 2 means running the DDL/DML
script to create and populate all the tables.

> You'll have to issue a CREATE statement for each object, so it's no
> extra work to issue a GRANT for each object with the specific rights
> the application should have at that time. Think of it as an important
> part of the process of creating a new object.

You are right, but I don't like this approach so much, since I want to
provide the DDL/DML script and let the DBA decide the database name and
user name (i.e., I don't want to put the username in the DDL/DML script
file).

> Note that it's probably not necessary to grant all rights to every
> table. Most applications have some tables that are read-only or
> insert-only from the point of view of the application. Your system
> will be more secure if the application does not have unnecessary
> privileges. So thinking about what rights to grant to the application
> for each object when it's created is not a bad thing.

You are right about that too, but maybe it's too much for me. Still, to
have a really secure DB, this should be taken into account.

In the end, these are the steps:

1. using the postgres user (or another user with the privilege to
   create databases), create the database
2. using the user from the previous step, create a new user (the one
   the webapp will use)
3. give the new user all permissions on the database
4. using the new user, create and populate the tables with the DDL/DML
   script

This way the user is not the db owner but is the owner of all tables.
Do you think that's acceptable?

Ste
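[The four steps above might look like this as SQL; the database name, role name, and password are placeholders:]

```sql
-- 1. as postgres (or another role with CREATEDB):
CREATE DATABASE appdb;
-- 2. create the role the web application will use:
CREATE USER webapp_user PASSWORD 'secret';
-- 3. database-level privileges for that role:
GRANT ALL ON DATABASE appdb TO webapp_user;
-- 4. then connect to appdb as webapp_user and run the DDL/DML script,
--    so webapp_user owns the tables without owning the database.
```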
Re: [GENERAL] Search Path vs Synonyms
On Fri, Jul 24, 2009 at 09:38, Matthew Seaborn wrote:
> Ah, I am actually using EnterpriseDB (a commercial 'fork' of PSQL)
> which does have synonyms.

For support on EnterpriseDB you should contact EnterpriseDB, not the
PostgreSQL community. We can only answer about the open-source product.

> Is it possible to define the default search_path for a given user?

In the community version you can. I don't know if this works in
EnterpriseDB. You'd just use:

    ALTER USER userid SET search_path = 'schema1,schema2';

--
 Magnus Hagander
 Self: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/
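[A minimal worked example of the command Magnus describes; the role and schema names are placeholders. The setting takes effect for new sessions of that role:]

```sql
ALTER USER app_reader SET search_path = schema1, schema2;
-- then, in a new session as app_reader, verify with:
SHOW search_path;
```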
Re: [GENERAL] Search Path vs Synonyms
Matthew Seaborn wrote:
> Ah, I am actually using EnterpriseDB (a commercial 'fork' of PSQL)
> which does have synonyms.

Then you will likely get more help from EnterpriseDB's technical
support, as only they are really familiar with their proprietary,
commercial product.
Re: [GENERAL] Search Path vs Synonyms
Ah, I am actually using EnterpriseDB (a commercial 'fork' of PSQL)
which does have synonyms.

Is it possible to define the default search_path for a given user?

-----Original Message-----
From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at]
Sent: 24 July 2009 07:22
To: Matthew Seaborn; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Search Path vs Synonyms

Matthew Seaborn wrote:
> Given the situation where a user connecting to the database needs
> access to two separate schemas: the primary schema, which contains the
> data they will be updating, and a second schema, which contains
> read-only reference data, used by many users, that will be used in
> joins on queries.
>
> I don't want to have to use fully qualified names (I am migrating code
> from Oracle which uses synonyms), so what is the best way (in both
> performance and reliability) to refer to the tables; search_path or
> synonyms?

There are no synonyms in PostgreSQL: synonyms are Oracle's way of
achieving what PostgreSQL does with search_path (though more
selectively). You excluded the best solution, namely to schema-qualify
the objects. If your user should be able to access *all* tables in both
schemas unqualified, and there are no name collisions between objects
in the schemas, I would recommend search_path. Otherwise, use views.

Yours,
Laurenz Albe

CONFIDENTIALITY - This email and any files transmitted with it are
confidential, may be legally privileged and are intended solely for the
use of the individual or entity to whom they are addressed. If this has
come to you in error, you must not copy, distribute, disclose or use
any of the information it contains. Please notify the sender
immediately and delete them from your system.

SECURITY - Please be aware that communication by email, by its very
nature, is not 100% secure, and by communicating with Perform Group by
email you consent to us monitoring and reading any such correspondence.

VIRUSES - Although this email message has been scanned for the presence
of computer viruses, the sender accepts no liability for any damage
sustained as a result of a computer virus, and it is the recipient's
responsibility to ensure that email is virus free.

AUTHORITY - Any views or opinions expressed in this email are solely
those of the sender and do not necessarily represent those of Perform
Group.

COPYRIGHT - Copyright of this email and any attachments belongs to
Perform Group, Companies House Registration number 6324278.
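[Laurenz's "use views" suggestion can play the role of an Oracle synonym for a single object. A sketch with placeholder schema and table names:]

```sql
-- A view in a schema the user already searches can stand in for a
-- synonym pointing at a read-only reference table in another schema:
CREATE VIEW app_schema.countries AS
    SELECT * FROM reference_schema.countries;
```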
Re: [GENERAL] ECPG Deallocate PREPARE statement - bug ?
l...@crysberg.dk wrote:
> I have a program that I need to compile using PostgreSQL 8.4.0 (or
> later) and it must be able to run on an 8.3.5-based system as well as
> 8.4.0. I'm using embedded SQL for C and I have the following sequence
> of statements:
>
>     snprintf( stmt, 3000, "SELECT count(*) FROM %s WHERE %s",
>               *table, *where );
>     EXEC SQL AT :_thisDbConn PREPARE cntstmt FROM :stmt;
>     EXEC SQL AT :_thisDbConn EXECUTE cntstmt INTO :recCount :fnull;
>     .
>     .
>     EXEC SQL DEALLOCATE PREPARE cntstmt;
>
> This seems to be ok running on the 8.4.0 system, but when running it
> on 8.3.5, it complains that it is an 'Invalid statement name cntstmt'
> for the deallocation.
>
> I then tried to add the 'AT :_thisDbConn' to the DEALLOCATE statement,
> but ecpg complained that there was no "at" allowed for deallocate.
> However, looking at the output (the .c file) I noticed that it had
> generated an apparently correct ECPG_deallocate() call. Manually
> compiling this and linking the program produced a binary that runs on
> both the 8.3.5 and the 8.4.0 system without problems.
>
> Is this a bug in ecpg or am I doing something wrong?

I cannot reproduce this.
I used the following program, which is based on your samples:

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char** argv)
    {
        EXEC SQL BEGIN DECLARE SECTION;
        char stmt[100], _thisDbConn[] = "mydb";
        int fnull, recCount;
        EXEC SQL END DECLARE SECTION;
        const char table[] = "sometable", where[] = "TRUE";

        EXEC SQL WHENEVER SQLWARNING SQLPRINT;
        EXEC SQL WHENEVER SQLERROR SQLPRINT;

        EXEC SQL CONNECT TO "unix:postgresql://localhost:1237/test" AS :_thisDbConn;

        snprintf( stmt, sizeof(stmt), "SELECT count(*) FROM %s WHERE %s", table, where );
        EXEC SQL AT :_thisDbConn PREPARE cntstmt FROM :stmt;
        EXEC SQL AT :_thisDbConn EXECUTE cntstmt INTO :recCount :fnull;
        EXEC SQL DEALLOCATE PREPARE cntstmt;

        EXEC SQL DISCONNECT :_thisDbConn;

        printf("Ergebnis: %d\n", recCount);
        return 0;
    }

I preprocessed and compiled it on both 8.4.0 and 8.3.5 and ran both
binaries against both server versions (all 4 combinations). There were
no errors or warnings, and the correct result was returned.

Yours,
Laurenz Albe