Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Oliver Jowett <[EMAIL PROTECTED]> writes:
> Bernard was also objecting to the overhead of pushing the data down a
> TCP pipe when it's already available locally, I think.. I didn't find
> any real difference there when I compared the two methods, though.

What makes you think it's necessarily available locally?

-- greg

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a file
Andrew

On Fri, 19 Aug 2005 04:17:16 -, you wrote:

>> In the majority of bulk load cases, the input exists as a file already
>
>But not necessarily on the server.

True. But I am concerned with the server, and there I want things handled on the server, not on the client.

>> The use of psql in our case requires the launching of an external
>> process from within the running Java application, which is an overhead
>> in processing and code maintenance that must not be under-estimated.
>
>Certainly supporting COPY via STDIN within the java code seems preferable.

Why do you say that? That option does not exist, because the PostgreSQL JDBC driver does not support it.

>> My suggestions for improving the COPY command so it can be used by
>> non-superuser users would be as follows:
>>
>> 1) Add an optional PostgreSQL user permission to use the COPY command
>> with files.
>
>Not acceptable, since the ability to copy from a file permits you to
>read from the internals of the database itself bypassing security
>restrictions; in particular, if there is a password for the postgres
>superuser, then it would be trivially exposed by this method. A user
>with permission to use COPY thus becomes security-equivalent to a
>superuser in any case.

Maybe. But not acceptable by whom? If the owner of an application owning the connections trusts the application and gets the postgres superuser to grant it the right to read from files, then it is obviously acceptable to the owner of the application and to the postgres superuser. There is no doubt about that, and the owner of the application is not concerned with third-party acceptability. This would be a solution even if Postgres system files were totally exposed. Better than nothing.

But we can take this one step further, so that we don't even need to trust ourselves. The logical next step is that for a non-superuser, COPY FROM files have to be world-readable, and COPY TO files and directories have to be world-writable. The server checks the file attributes and grants copy permission depending on them. Obviously no Postgres system files should be world-readable or world-writable. Problem solved. One doesn't need to be a genius to figure this out.

Not having at least this primitive solution leaves users quite powerless. Simply rejecting the command when the user is not a superuser can only be considered a temporary workaround. It is long overdue for replacement. And trust me, it is quite frustrating to hit such a barrier after having seen this feature implemented in MySQL for the last ten years. I am not talking about myself only. Just do a Google Groups search for "jdbc postgres COPY STDIN" and you will see what I mean. Lots of frustration and improvised stuff, but no generic solution to this simple problem.

>> or
>>
>> 2) Split up security risk calculations between the two directions "TO"
>> and "FROM" and relax security. Look at MySQL for clues. The
>> application developer can manage security on file system permission
>> level.
>
>Same problem as above. COPY FROM is not in any sense less of a security
>risk than COPY TO.

There is obviously a difference between the permission to read system files and the permission to destroy them. But this was only a suggestion. The distinction might not be required at all.

>> or
>>
>> 3) Close the ident loop in such a way that if a PostgreSQL user was
>> granted access via ident as an operating system user, then the COPY
>> command is executed as a process with that user ID and not as the postgres
>> superuser.
>
>Postgres does not itself run as root, therefore it lacks the ability to
>spawn a program that runs under a different userid to itself.

I did not know the internals of whether this could be done or not. It is just a conceptual idea where somehow the server may be able to utilise file permission information. I have my own reservations, too.

>Over the local socket, which is the only context in which ident auth is
>at all trustable, it would in theory be possible to implement COPY to a
>file descriptor opened by the client and passed through the socket. I
>personally think it is unlikely that this would be worth the (not
>inconsiderable) amount of work needed to implement it, since the performance
>overhead of copying the data via the socket instead is not a large factor
>in the overall cost of a large copy.

I agree one has to compare costs carefully.

Regards,
Bernard

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Timing out connections?
Am Donnerstag, den 18.08.2005, 12:36 -0500 schrieb Peter Fein:
> Hi-
>
> Is there any way to have the *server* timeout disconnected clients? I'm
> connecting over a sometimes flaky WiFi connection & when it goes down,
> I'm left with several open idle sessions. Is there a way to have the
> server disconnect these? A coworker and I have searched the docs/faqs &
> haven't been able to find an answer. The best we could come up with was
> tunneling over SSH and lowering sshd's timeout setting, but this seems
> less than ideal. TIA.

You might try OpenVPN (worthwhile for WiFi connections anyway, since WEP isn't really enough). It is fully transparent (you don't need to start up the tunneling per connection as with ssh), and it maintains connection keep-alives and such.

---(end of broadcast)---
TIP 4: Have you searched our list archives?
       http://archives.postgresql.org
Re: [GENERAL] Timing out connections?
am 18.08.2005, um 12:36:26 -0500 mailte Peter Fein folgendes:
> Hi-
>
> Is there any way to have the *server* timeout disconnected clients? I'm
> connecting over a sometimes flaky WiFi connection & when it goes down,
> I'm left with several open idle sessions. Is there a way to have the
> server disconnect these? A coworker and I have searched the docs/faqs &

I know a better solution: use screen on the server. If the connection goes down and you later make a new connection, you can reattach to the old screen session. Screen is a very nice tool ;-)

Regards, Andreas
--
Andreas Kretschmer (Kontakt: siehe Header)
Heynitz: 035242/47212, D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
===Schollglas Unternehmensgruppe===

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Tom Lane wrote:
> Oliver Jowett <[EMAIL PROTECTED]> writes:
>
>> It sounds like what you really want is the ability to grant something
>> like FILE access without granting all superuser rights? Sounds like a
>> feature request, not a bug, to me :-)
>
> AFAICT, the complaint really boils down to there not being any support
> for COPY-from-client in the JDBC driver.

Bernard was also objecting to the overhead of pushing the data down a TCP pipe when it's already available locally, I think.. I didn't find any real difference there when I compared the two methods, though.

-O

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Tom Lane wrote:
> What is the story on JDBC COPY support, anyway? I'm aware that there's
> an unofficial patch for that, but I'm not clear about why it hasn't made
> it into the accepted version.

I didn't like the whole "here is an undifferentiated stream of data" approach -- there were some JDBC interfaces we could adapt to read/write typed data. That never happened, though. I suppose we could apply a patch similar to the original one, given that there doesn't seem to be much interest in a typed version, but it's likely to need rework, as there's been at least one overhaul of the driver's protocol handling layer since then.

-O

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Oliver Jowett <[EMAIL PROTECTED]> writes:
> It sounds like what you really want is the ability to grant something
> like FILE access without granting all superuser rights? Sounds like a
> feature request, not a bug, to me :-)

AFAICT, the complaint really boils down to there not being any support for COPY-from-client in the JDBC driver. Which is definitely a feature request, but not one directed to the server geeks ;-)

What is the story on JDBC COPY support, anyway? I'm aware that there's an unofficial patch for that, but I'm not clear about why it hasn't made it into the accepted version.

regards, tom lane

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
[ A bit off topic, but... ]

Oliver Jowett <[EMAIL PROTECTED]> writes:
> And please fix your anti-spam system so it doesn't send me a "you must
> jump through these hoops to send me email" message every time please!

It's standard policy on the PG lists that we boot subscribers who auto-reply to list mail like that. If you find yourself getting unsolicited 'bot replies from list postings, let Marc know.

(Note: if you replied To: somebody and cc: to the list, and you got the antispam challenge due to the To: copy, that's not grounds for list removal. But it's still a sign of a jerk. If I take the time to answer someone's question, I'm not going to look very favorably on a demand to confirm that I'm a human before they'll deign to read my answer.)

regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
Re: [GENERAL] How to determine table schema in trigger function
On Thu, 2005-08-18 at 17:02 +0300, Andrus wrote:
> I created a generic (for tables in different schemas) trigger function:
>
> CREATE OR REPLACE FUNCTION setlastchange() RETURNS "trigger"
> AS $$BEGIN
>     UPDATE serverti SET lastchange='now'
>     WHERE tablename=TG_RELNAME and schemaname=TG_SCHEMA;
>     RETURN NULL;
> END$$ LANGUAGE plpgsql STRICT;
>
> Unfortunately, this does not work, since TG_SCHEMA is not valid in PL/pgSQL.
>
> How do I determine the schema that table TG_RELNAME belongs to in a trigger
> function?

How about extracting relnamespace from pg_catalog.pg_class?

UPDATE serverti SET lastchange='now'
WHERE tablename=TG_RELNAME
  and schemaname=(
      SELECT n.nspname
      FROM pg_catalog.pg_namespace AS n, pg_catalog.pg_class AS c
      WHERE c.relnamespace = n.oid AND c.oid = TG_RELID
  );

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
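[Editor's note: folding that subquery back into the original trigger function gives something like the following sketch. It assumes the serverti columns named in the original post, and RETURN NULL presumes an AFTER trigger; untested.]

```sql
-- Sketch only: serverti(tablename, schemaname, lastchange) is assumed
-- from the original post. TG_RELID is the OID of the table that fired
-- the trigger, so the schema can be looked up in the system catalogs.
CREATE OR REPLACE FUNCTION setlastchange() RETURNS trigger AS $$
BEGIN
    UPDATE serverti
       SET lastchange = now()
     WHERE tablename = TG_RELNAME
       AND schemaname = (SELECT n.nspname
                           FROM pg_catalog.pg_class c
                           JOIN pg_catalog.pg_namespace n
                             ON n.oid = c.relnamespace
                          WHERE c.oid = TG_RELID);
    RETURN NULL;  -- fine for an AFTER trigger; a BEFORE trigger would need RETURN NEW
END
$$ LANGUAGE plpgsql;
```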
Re: [GENERAL] Generating random values.
Mike Nolan wrote:
> Your new password is 87&3jiwkjIJiwkjikmkq,^^2v12hqIwLbvCQQQi18152
>
> Do not write it down or save it in a password manager, as doing so
> creates security problems.

There is a solution here. Initialize passwords with a random string. Flag these accounts as "Password Temporary." When the user logs in (via the app), prompt him/her to change his/her password. You can do the latter by creating a change_password() function like this:

CREATE FUNCTION change_password(varchar) returns bool as '
DECLARE
    passwd ALIAS FOR $1;
    query VARCHAR;
BEGIN
    query := ''ALTER USER '' || SESSION_USER
          || '' WITH ENCRYPTED PASSWORD '' || quote_literal(passwd);
    EXECUTE query;
    RETURN TRUE;
END;
' LANGUAGE PLPGSQL SECURITY DEFINER;

This is off the top of my head, so something might need to be reordered. This way you can let users choose their own passwords :-) If you like, you can put other checks in it to make sure you have any three of uppercase/lowercase/numbers/other characters, or whatever else you like.

A word of caution: DO NOT USE CURRENT_USER instead of SESSION_USER, or else everyone will have access to the superuser account which created this function :-)

Best Wishes,
Chris Travers
Metatron Technology Consulting

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
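[Editor's note: the "initialize passwords with a random string" step above can be done entirely in SQL. A hedged sketch follows; the alphabet (which omits easily-confused characters like l/I/O/0/1) and the length of 12 are arbitrary choices, and `::int` rounding is avoided with floor() so the index stays within the 56-character alphabet.]

```sql
-- Sketch: build a 12-character random temporary password by picking one
-- random character per row of generate_series and concatenating them.
SELECT array_to_string(ARRAY(
         SELECT substr('abcdefghijkmnpqrstuvwxyz'
                       || 'ABCDEFGHJKLMNPQRSTUVWXYZ'
                       || '23456789',
                       floor(random() * 56)::int + 1,  -- index 1..56
                       1)
           FROM generate_series(1, 12)), '') AS temp_password;
```

Note that random() is not a cryptographic generator; for real deployments the string would better come from the application side or a dedicated source of randomness.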
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
On Fri, 19 Aug 2005, Bernard wrote:
> My suggestions for improving the COPY command so it can be used by
> non-superuser users would be as follows:

If you want to do this without switching to a different UNIX user, can't you already write a small SECURITY DEFINER function as a superuser that does the COPY from file based on arguments, and then grant permissions on that function to the appropriate non-superusers?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
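[Editor's note: a hedged sketch of the SECURITY DEFINER approach suggested above. All names (load_mytable, mytable, the directory, loader_role) are hypothetical. Because the function runs with its definer's (superuser) privileges, the file location is constrained to a fixed directory rather than taken verbatim from the caller.]

```sql
-- Sketch: created by a superuser. Rejecting path separators and
-- hard-coding the directory prevents callers from pointing COPY
-- at arbitrary server files.
CREATE OR REPLACE FUNCTION load_mytable(fname text) RETURNS void AS $$
BEGIN
    IF fname ~ '[/\\]' THEN
        RAISE EXCEPTION 'plain file names only';
    END IF;
    EXECUTE 'COPY mytable FROM '
         || quote_literal('/var/lib/pgsql/import/' || fname);
END
$$ LANGUAGE plpgsql SECURITY DEFINER;

REVOKE ALL ON FUNCTION load_mytable(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION load_mytable(text) TO loader_role;
```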
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> 2) Split up security risk calculations between the two directions "TO"
> and "FROM" and relax security. Look at MySQL for clues. The
> application developer can manage security on file system permission
> level.

I looked at MySQL's docs briefly, and its behaviour seems almost the same as PostgreSQL's, with some minor differences:

- the equivalent to COPY is "LOAD DATA INFILE"
- the equivalent to FROM STDIN is "LOCAL"
- for non-LOCAL loads, the DB user must have the FILE privilege, which is "file access on server host". Given the FILE privilege in MySQL, you can read existing files and create new files based on the access the server user has.

It sounds like what you really want is the ability to grant something like FILE access without granting all superuser rights? Sounds like a feature request, not a bug, to me :-)

Also, you had better hope that there's no sensitive information readable by the server user that could be used to gain superuser access.. such as .pgpass files or info from pg_hba.conf, for example.

-O

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> Oliver and interested list members:
[...]

And please fix your anti-spam system so it doesn't send me a "you must jump through these hoops to send me email" message every time, please! (usual cc: to poster removed for that reason)

-O

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> This difference of performance is the main reason for the COPY
> command, and this is also the reason why bulk loading through the JDBC
> interface will never match the performance of the COPY with files
> command.

In some admittedly unscientific tests, I see less than 10% difference between server-side COPY and client-side COPY FROM STDIN (psql's \copy command) on a 28 MB input file. That's down in the per-run noise. Doing it via JDBC will undoubtedly add some extra overhead, but I'd estimate that it's about the same sort of overhead as writing your data out to a file from Java in the first place.

If you've already got the data in a file, why not just use psql's \copy command? This uses COPY FROM STDIN, reads the file as the user running psql, and does not require superuser permissions.

> The whole architectural setup for such "bulk" loading is a mess.

Do you have a concrete suggestion for improving bulk loading that doesn't open security holes?

-O

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
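[Editor's note: a concrete illustration of the \copy route suggested above; the table and file names are hypothetical.]

```sql
-- In psql, connected as an ordinary (non-superuser) role. \copy reads
-- the file on the client side and streams it to the server as
-- COPY ... FROM STDIN, so no server-side file access is needed.
\copy mytable FROM '/home/app/data/mytable.dat' WITH DELIMITER '\t'
```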
Re: [GENERAL] Generating random values.
On 8/18/05, Mike Nolan <[EMAIL PROTECTED]> wrote:
> As I indicated in my original response, there is no best answer to the
> issue of password choices, though there are probably a few 'worst'
> answers. :-)
>
> Once someone has established a password scheme, either randomly generated
> or user selected, it should not be that difficult to write routines to
> generate acceptable passwords or to enforce standards for user-generated
> passwords.

Good point, Mike. In my case, for instance, the users will have the opportunity to change their password. There is no problem with passwords a user can actually remember; at least the user will not trouble you with a password reset request. :D

Thanks for all the replies and suggestions.

Fernando Lujan

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Oliver and interested list members:

I was referring to the majority of users wanting to "bulk" load tables, not to the majority of all users, who may or may not know or care about the difference in performance between INSERT and COPY.

This difference in performance is the main reason for the COPY command, and it is also the reason why bulk loading through the JDBC interface will never match the performance of COPY with files.

The COPY command with STDIN or STDOUT is a speciality that the majority of users would not normally ask for, because they usually think in terms of files, and rightly so. Comparable with a STDIN/STDOUT workaround would be to pipe input and output to and from SQL stored procedures.

What I mean to say is that we want this to be strictly server side for best performance, and we don't want to get the client involved in the raw processing, which is in violation of any 3-tier client-server architecture. In addition, not only will the client and network be loaded with additional processing demand, but the server load will also increase because it has to service the JDBC interface for I/O.

The whole architectural setup for such "bulk" loading is a mess.

Regards,
Bernard

On Fri, 19 Aug 2005 12:27:01 +1200, you wrote:

>Bernard wrote:
>
>> The majority of JDBC users trying to bulk load tables would not want
>> to send the data through their connection. This connection is designed
>> to send commands and to transfer only as much data as necessary and as
>> little as possible.
>
>I don't understand why this is true at all -- for example, our
>application currently does bulk INSERTs over a JDBC connection, and
>moving to COPY has been an option I looked at in the past. Importing
>lots of data from a remote machine is hardly an uncommon case.
>
>> The need is only created by the limitations of the Postgres COPY
>> command.
>>
>> I can't see why a workaround should be developed instead of or before
>> fixing the COPY command.
>>
>> It works in other DB engines.
>
>I guess that other DB engines don't care about unprivileged DB users
>reading any file that the backend can access.
>
>-O

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] total db lockup
On Thu, Aug 18, 2005 at 05:20:55PM -0400, Eugene wrote:
> Guys, I really need help on this. Can whoever is in charge of the
> mailing list change it so that it doesn't delete my messages?

I am not in charge of the mail server. However: may I suggest you change to a less broken mail client? Your messages show up featuring really strange MIME content descriptions. Even the standard gmail.com accounts seem to work better, which is surprising for a web-based email setup. This brokenness may prompt the spam processor on the postgresql.org servers to silently drop your emails.

Alternatively, you may want to "paste" your error messages and stuff on places like http://rafb.net/paste, and then post the URLs here.

--
Alvaro Herrera ()
"The meaning of things does not come from the things themselves, but from the intelligences that apply them to their daily problems in pursuit of progress." (Ernesto Hernández-Novich)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> The majority of JDBC users trying to bulk load tables would not want
> to send the data through their connection. This connection is designed
> to send commands and to transfer only as much data as necessary and as
> little as possible.

I don't understand why this is true at all -- for example, our application currently does bulk INSERTs over a JDBC connection, and moving to COPY has been an option I looked at in the past. Importing lots of data from a remote machine is hardly an uncommon case.

> The need is only created by the limitations of the Postgres COPY
> command.
>
> I can't see why a workaround should be developed instead of or before
> fixing the COPY command.
>
> It works in other DB engines.

I guess that other DB engines don't care about unprivileged DB users reading any file that the backend can access.

-O

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
Re: [GENERAL] in transaction
I am also seeing this situation using Hibernate. Some of the IDLE-in-transaction connections are sitting there holding locks, which is a BIG problem.

The query I use to see the processes and locks is this:

select procpid, usename, (now() - query_start) as age,
       c.relname, l.mode, l.granted
from pg_stat_activity a
     LEFT OUTER JOIN pg_locks l ON (a.procpid = l.pid)
     LEFT OUTER JOIN pg_class c ON (l.relation = c.oid)
where (current_query = '<IDLE> in transaction' or current_query like '%vacuum%')
-- and query_start < now() - '1 hours'::interval
order by pid;

Note the commented-out part. Change the interval to what you like. You cannot see a query because there is none.

Some of these I-i-t connections come and go after a while. Some stick around for DAYS. If ANYONE has any brilliant ideas as to the source of, and dare I say a correction to, this problem, many people, especially myself, would be very very happy.

--elein
--
[EMAIL PROTECTED] Varlena, LLC www.varlena.com (510)655-2584(o) (510)543-6079(c)
PostgreSQL Consulting, Support & Training
PostgreSQL General Bits http://www.varlena.com/GeneralBits/
--
AIM: varlenallc Yahoo: AElein Skype: varlenallc
--
I have always depended on the [QA] of strangers.

On Thu, Aug 18, 2005 at 04:16:27PM -0700, Junaili Lie wrote:
> Hi,
> We have applications that sit on top of a Java application server. Our
> code is written in Java, sitting on top of JBoss with Hibernate, and
> we use the JDBC driver pg74.215.jdbc2ee.jar. We have observed a number of
> "<IDLE> in transaction" entries in pg_stat_activity.
> I am wondering if there's a command/view/system table/tool out there
> that will allow us to look at the in-flight transactions that are
> waiting to be committed.
> We have some "<IDLE> in transaction" and would like to see which part
> of the code causes it.
>
> Thanks in advance.
>
> J

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
[GENERAL] download binary version for Win32
Hello:

I can't download this version... can you help me?

regards,
Ulises

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Oliver and interested list members:

Thanks for the related information.

The majority of JDBC users trying to bulk load tables would not want to send the data through their connection. This connection is designed to send commands and to transfer only as much data as necessary and as little as possible. In other words, COPY, and its corresponding commands in other database engines, are designed to transfer data using the fastest possible method, which is typically to and from files.

For the majority of JDBC users, there is no real need for a STDIN/STDOUT option, because they can always write to a file. The need is only created by the limitations of the Postgres COPY command.

I can't see why a workaround should be developed instead of, or before, fixing the COPY command. It works in other DB engines.

Regards,
Bernard

On Fri, 19 Aug 2005 11:10:42 +1200, you wrote:

>Bernard wrote:
>
>> I want to follow what is suggested here. How are STDIN and STDOUT
>> addressed when using the JDBC driver?
>
>The current JDBC driver doesn't support this mode of COPY.
>
>There was some work done in the past to support this but it never got to
>the point of making it into the official driver; see the pgsql-jdbc
>archives for details.
>
>-O

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not match
[GENERAL] in transaction
Hi,

We have applications that sit on top of a Java application server. Our code is written in Java, sitting on top of JBoss with Hibernate, and we use the JDBC driver pg74.215.jdbc2ee.jar. We have observed a number of "<IDLE> in transaction" entries in pg_stat_activity. I am wondering if there's a command/view/system table/tool out there that will allow us to look at the in-flight transactions that are waiting to be committed. We have some "<IDLE> in transaction" and would like to see which part of the code causes it.

Thanks in advance.

J

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a
Bernard wrote:
> I want to follow what is suggested here. How are STDIN and STDOUT
> addressed when using the JDBC driver?

The current JDBC driver doesn't support this mode of COPY.

There was some work done in the past to support this but it never got to the point of making it into the official driver; see the pgsql-jdbc archives for details.

-O

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
Re: [GENERAL] Weird performance hit
On Thursday 18 August 2005 12:03 am, WireSpot wrote:
> I have two practically identical Debian-testing systems installed
> on two harddrives on the same machine. I've compiled Postgres 8.0.3
> with exactly the same options on both. Both HDDs use the same
> kernel, have DMA enabled and so on. I have the same database and
> web applications installed in both systems.
>
> However, one application is for some weird reason taking a serious
> performance hit on certain pages. There are some intensive joins
> and selects there, but somehow one install manages a couple of
> seconds and the other takes about 10.
>
> Any ideas? I've tried copying Postgres and the repository over from
> the good install, it does the same thing. Does this suggest that
> it's not a Postgres issue?

A number of ideas spring to mind:

1) "practically identical" != "identical" -- perhaps they aren't as similar as you assume.

2) Hardware problems. Are there any indications in /var/log/messages of drive retries or other problems? Have you used S.M.A.R.T. tools, hdbench or similar to verify that both drives are problem-free and have similar performance?

3) Different data layout. I assume both are using the same filesystem -- if not, all bets are off. Even if they are identical, the data could end up in different areas of the disk, forcing more seeks on one drive than the other. Does one setup seem to "work harder", i.e. more seeking/thrashing, than the other?

4) Different connections. Are you actually reconnecting the drives so each is connected the same way, or are they both in the system simultaneously? IDE devices only run at the speed of the slowest device on the bus, so if your fast drive is alone while the "slow drive" is sharing its bus with a slow device, you might see this problem. Google "IDE slowest device" and you will find plenty on this subject.

5) In general a fresh copy (via pg_dump and restore, not filesystem copying) will have better performance, due to the fact that it's almost like doing a vacuum full and reindex on everything. However, you may need to run "analyze" after the restore to make sure the planner makes intelligent decisions.

Cheers,
Steve

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a file
On Fri, Aug 19, 2005 at 10:16:29AM +1200, Bernard wrote:
> Bruno and interested list members
>
> I want to follow what is suggested here. How are STDIN and STDOUT
> addressed when using the JDBC driver?
>
> Or in other words where can I write or receive megabytes of data?

I don't know how JDBC does it, but as an example the Perl Pg module has these functions:

  $ret = $conn->putline($string)
      Sends a string to the backend. The application must explicitly send
      the two characters "\." to indicate to the backend that it has
      finished sending its data.

  $ret = $conn->putnbytes($buffer, $nbytes)
      Sends n bytes to the backend. Returns 0 if OK, EOF if not.

  $ret = $conn->endcopy
      This function waits until the backend has finished the copy. It
      should either be issued when the last string has been sent to the
      backend using putline or when the last string has been received from
      the backend using getline. endcopy returns 0 on success, 1 on
      failure.

Looking at Google I see the following:

http://archives.postgresql.org/pgsql-jdbc/2004-10/msg00166.php

saying it's not supported, but there is a patch. That was a year ago, though. Some later emails suggest it is possible.

Hope this helps,
--
Martijn van Oosterhout http://svana.org/kleptog/
> Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a
> tool for doing 5% of the work and then sitting around waiting for someone
> else to do the other 95% so you can sue them.
Re: [GENERAL] [BUGS] BUG #1830: Non-super-user must be able to copy from a file
Bruno and interested list members I want to follow what is suggested here. How are STDIN and STDOUT addressed when using the JDBC driver? Or in other words where can I write or receive megabytes of data? I would not want to append this to the String of a SQL Statement in Java because that is a String in memory. Thanks Bernard On Wed, 17 Aug 2005 06:51:12 -0500, you wrote: >On Wed, Aug 17, 2005 at 09:22:16 +0100, > Bernard <[EMAIL PROTECTED]> wrote: >> >> The following bug has been logged online: > >This isn't a bug and you really should have asked this question on >another list. I am moving the discussion over to the general list. > >> >> Bug reference: 1830 >> Logged by: Bernard >> Email address: [EMAIL PROTECTED] >> PostgreSQL version: 8.0.3 >> Operating system: Linux RedHat 9 >> Description:Non-super-user must be able to copy from a file >> Details: >> >> On the attempt to bulk load a table from a file that is owned by the >> non-superuser current database user, the following error message is >> printed: >> >> "must be superuser to COPY to or from a file" >> >> What is the reason for this limitation? > >This is described in the documentation for the copy command. > >> >> It can't justifiably be for security reasons because if a web application >> such as tomcat requires to bulk load tables automatically on a regular basis >> then one would be forced to let the web application connect as superuser, >> which is very bad for security. > >No, because you can have the app read the file and then pass the data to >the copy command. To do this you use STDIN as the file name. > >> >> In MySQL bulk loading works for all users. > >You can use the \copy command in psql to load data from files. > >> >> We need a Postgresql solution. >> >> We have a web application where both MySQL and Postresql are supported. With >> Postgresql, the application would have to connect as user postgres. We have >> to explain this security risk to our clients very clearly. 
---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
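The \copy alternative mentioned in the thread above reads the file on the client and streams it to the server as COPY ... FROM STDIN, so no superuser privilege is required. A minimal sketch (table and file names are hypothetical):

```sql
-- run inside psql as an ordinary, non-superuser role;
-- the file is opened by psql on the client, not by the backend
\copy mytable FROM '/home/appuser/bulkdata.txt'
```

Because the server process never opens the file itself, the usual "must be superuser to COPY to or from a file" restriction does not apply.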
Re: [GENERAL] total db lockup
Well, perhaps they were blocked for being too large? Seriously, some of your messages appear to be getting through fine so if you can't attach them put them on a web or ftp server and post a link. Much better than copying it to several hundred mailboxes. On Thu, Aug 18, 2005 at 05:20:55PM -0400, Eugene wrote: > Guys, I really need help on this. Can whoever is in charge of the > mailing list change it so that it doesn't delete my messages? -- Martijn van Oosterhout http://svana.org/kleptog/ > Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a > tool for doing 5% of the work and then sitting around waiting for someone > else to do the other 95% so you can sue them.
Re: [GENERAL] total db lockup
Guys, I really need help on this. Can whoever is in charge of the mailing list change it so that it doesn't delete my messages? thanks, Eugene > > From: Eugene <[EMAIL PROTECTED]> > Date: 2005/08/18 Thu PM 04:45:58 EST > To: Tom Lane <[EMAIL PROTECTED]> > CC: > Subject: Re: [GENERAL] total db lockup > > It seems that I can't. For whatever reason, my messages are being blocked. > > Eugene > > > > > > From: Tom Lane <[EMAIL PROTECTED]> > > Date: 2005/08/18 Thu PM 02:11:54 EST > > To: [EMAIL PROTECTED] > > CC: pgsql-general@postgresql.org > > Subject: Re: [GENERAL] total db lockup ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] total db lockup
It seems that I can't. For whatever reason, my messages are being blocked. Eugene > > From: Tom Lane <[EMAIL PROTECTED]> > Date: 2005/08/18 Thu PM 02:11:54 EST > To: [EMAIL PROTECTED] > CC: pgsql-general@postgresql.org > Subject: Re: [GENERAL] total db lockup ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Adding contrib modules
> -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Jonathan Villa schrieb: > >> Yes, I'm running on Linux >> >> I did not try ldconfig, however I just have... and same result >> >> tsearch2.so is in /usr/local/pgsql/lib and my home /usr/local/pgsql >> >> Also, logs say the same thing that fts.out says > > I'm a little confused about your problem. I use postgresql 7.4.8 on > production server and postgresql 8.0.3 on development server and your > problem I never had. > Is your server a clean database engine or is it in use for other > databases? Can you execute an new initdb after removing the old data > folder? > Yes, this problem is very confusing... PgSQL is a brand new installation, and nothing else is running on it. I've tried rebuilding many times and during those times I've removed the entire /usr/local/pgsql/* directory and then ran initdb all over again. Right now the issue is happening on FC3, I'll try these same steps on the production server which is RHEL 3 ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Adding contrib modules
Jonathan Villa schrieb: > Yes, I'm running on Linux > > I did not try ldconfig, however I just have... and same result > > tsearch2.so is in /usr/local/pgsql/lib and my home /usr/local/pgsql > > Also, logs say the same thing that fts.out says I'm a little confused about your problem. I use postgresql 7.4.8 on the production server and postgresql 8.0.3 on the development server and I have never had your problem. Is your server a clean database engine or is it in use for other databases? Can you execute a new initdb after removing the old data folder? With best regards -- Mario Günterberg mattheis. werbeagentur IT Engineer / Projektleiter Zillestrasse 105a. D - 10585 Berlin Tel#49-(0)30 . 34 80 633 - 0 Fax#49-(0)30 . 34 80 633 50 http://www.mattheis-berlin.de If you think this is a big problem, just wait until we try to find the solution. (Walter Matthau) ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Generating random values.
> This way you can let users choose their own passwords :-) > > If you like you can put other checks in it to make sure you have any > three of uppercase/lowercase/numbers/other characters or whatever else > you like. Allowing users to choose their own permanent passwords does not make them any more secure, though it would hopefully make them easier to remember. Users tend to choose passwords that are easy to guess, and they tend to use the same password for multiple accounts. As I indicated in my original response, there is no best answer to the issue of password choices, though there are probably a few 'worst' answers. :-) Once someone has established a password scheme, either randomly generated or user selected, it should not be that difficult to write routines to generate acceptable passwords or to enforce standards for user-generated passwords. -- Mike Nolan ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Same database, different query plans
On Thursday 18 August 2005 10:16 am, [EMAIL PROTECTED] wrote: > BTW, the default postgresql settings are WAY too conservative. The archives are full of discussion on this issue. PG runs on several operating systems and is installed on machines ranging from slow single-cpu multi-service machines with well under 100MB RAM to fast multi-cpu machines chock-full of memory and fast drive arrays dedicated to PG only. The default configuration allows you to get PG running out-of-the-box on almost any machine. The end-user is responsible for adjusting the settings to match the particulars of the system on which it is installed. > I am now looking into tuning and there are a lot of things that need > to be turned up. Google on "postgresql performance tuning" and you will find lots of useful articles on the first page of results. Chapter 16 of the manual describes the various settings (but is not terribly helpful in recommending the appropriate setting). The stats collector can provide you with useful input for determining settings. When you run out of ideas from those sources, subscribe to the "Performance" mailing list. Cheers, Steve ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] total db lockup
<[EMAIL PROTECTED]> writes: > So can anyone offer any insight on this? Not without information. You haven't even told us what PG version you are running, much less provided the necessary details like the pg_locks status. > BTW, I tried reposting it with attachments and it didn't show up. Fix your mail setup and try again. Or wait a bit --- it's quite possible the previous messages are just stuck in the moderator's approval queue. regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Adding contrib modules
> Jonathan Villa schrieb: > >> Thanks... at least know I'm doing to correctly... but I still get the >> errors. I've done everything as it states on the tsearch-V2-intro.html >> page... and then I run >> >> psql ftstest < tsearch2.sql &> fts.out >> >> for testing of course >> >> the fts.out file has things like >> >> ERROR: type "tsvector" does not exist >> ERROR: type "tsquery" does not exist >> ERROR: function lexize("unknown", "unknown") does not exist >> >> I'm totally baffled... I'm running 7.4.8 by the way > > I think you running a linux/unix machine? > Have you executed ldconfig or similiar after you installed tsearch2? > Is the tsearch2.so in your library path? > Yes, I'm running on Linux I did not try ldconfig, however I just have... and same result tsearch2.so is in /usr/local/pgsql/lib and my home /usr/local/pgsql Also, logs say the same thing that fts.out says *shrug* ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
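Since the errors above say the tsearch2 types don't exist, one diagnostic is to check whether the tsearch2 objects actually landed in the database the application connects to. A sketch (run it in that same database):

```sql
-- lists the tsearch2 types and the schema they were installed into, if any;
-- an empty result suggests tsearch2.sql was loaded into a different database
-- or failed early
SELECT t.typname, n.nspname
  FROM pg_type t
  JOIN pg_namespace n ON n.oid = t.typnamespace
 WHERE t.typname IN ('tsvector', 'tsquery');
```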
Re: [GENERAL] Waiting on a transaction
On Thu, Aug 18, 2005 at 01:33:18PM -0400, Vivek Khera wrote: Hey Vivek! Nice to hear from you over here. ;) > The trick is dealing with statement timeouts on shared pool > connections over mod_perl and Apache::DBI. I haven't satisfied > myself yet that the timeout will be unset when the next connection > uses the DB... You mean other than setting "alarm 0;"? -- Bill Moseley [EMAIL PROTECTED] ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Postmaster fails in select() in strange way
Zbigniew Zagórski <[EMAIL PROTECTED]> writes: > postgres[89874]: [1-1] LOG: XX000: select() failed in postmaster: > Inappropriate ioctl for device Wow, that's bizarre. > After closing all connections, postmaster exits leaving no message in > logs - these above are last before postmaster dies. Yeah, the postmaster just throws up its hands and quits. Everything you've said follows directly from the unexpected select() failure. > Does it look like OS or PostgreSQL bug? I'd say definitely an OS bug. Time to enlist some FreeBSD hackers. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] PostgreSQL 8.0.3 limiting max_connections to 64 ?
Alexandre Barros <[EMAIL PROTECTED]> writes: > [ can't get more than 64 connections ] Have you looked into the postmaster log (stderr or syslog output) to see if any error conditions are being reported? It occurs to me that the postmaster might be running in an account that has a max-number-of-user-processes limit. On machines where such limits are enforced, 70 or so is a pretty common value, so that would fit with your experience. I can't tell you exactly how to check or change that limit, because it's different on different variants of Unix, but it's something to look into. If this is the problem, you'd be seeing "fork failed" messages in the postmaster log. regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] Timing out connections?
Hi- Is there any way to have the *server* timeout disconnected clients? I'm connecting over a sometimes flaky WiFi connection & when it goes down, I'm left with several open idle sessions. Is there a way to have the server disconnect these? A coworker and I have searched the docs/faqs & haven't been able to find an answer. The best we could come up with was tunneling over SSH and lowering sshd's timeout setting, but this seems less than ideal. TIA. --Pete -- Peter Fein [EMAIL PROTECTED] 773-575-0694 Basically, if you're not a utopianist, you're a schmuck. -J. Feldman ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Waiting on a transaction
On Aug 16, 2005, at 3:01 PM, Bill Moseley wrote: So then I wondered if my application should set an alarm and timeout with an error if, by odd chance, an update hangs. Trying to be a bit more robust -- not that the application could recover, but at least it could spit out an error other than hang. Robustness in the face of breakage is always good. Since you're programming in transactions already, you have planned for failed commits anyhow, so adding timeouts will probably not cause any significant harm to your program. The trick is dealing with statement timeouts on shared pool connections over mod_perl and Apache::DBI. I haven't satisfied myself yet that the timeout will be unset when the next connection uses the DB... Vivek Khera, Ph.D. +1-301-869-4449 x806 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
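For the pooled-connection concern above, one option is to scope the timeout at the SQL level rather than with alarm, so the setting cannot leak to the next user of the connection. A sketch (the timeout value, table, and row are illustrative only):

```sql
SET statement_timeout = 5000;   -- milliseconds; applies to this session only
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
RESET statement_timeout;        -- restore the default before the pool reuses the connection
```

If the RESET can be skipped on error paths, issuing it unconditionally in the pool's connection-checkin hook would be the safer design.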
Re: [GENERAL] Same database, different query plans
I think I can answer this cause I recently had a similar problem. There is a setting in postgresql called "default_statistics_target". It controls how much statistical information is kept per column. This information affects the query planner. If it makes a bad guess based on insufficient statistics data, it will absolutely kill performance (BTW, the documentation never makes it explicit). Increase default_statistics_target at least tenfold (it can be set per session or in postgresql.conf), and run ANALYZE again. BTW, the default postgresql settings are WAY too conservative. I am now looking into tuning and there are a lot of things that need to be turned up. hope this helps, Eugene > > From: Michael Fuhr <[EMAIL PROTECTED]> > Date: 2005/08/18 Thu AM 10:05:14 EST > To: WireSpot <[EMAIL PROTECTED]> > CC: pgsql-general@postgresql.org > Subject: Re: [GENERAL] Same database, different query plans > > On Thu, Aug 18, 2005 at 12:03:59PM +0300, WireSpot wrote: > > The actual SELECT results (ie. non EXPLAIN) are identical in both > > cases. The indexes and so on are identical. I've done a reindexing and > > vacuuming on both of them just to be sure. > > > > As you can see, there's quite a bit of a difference between 0.3 ms and > > 398 ms, and it shows. I haven't touched the query planning options. > > Why the different planning and what can I do to fix the misguided one? > > Have you run ANALYZE or VACUUM ANALYZE in both databases to update > the planner's statistics? If you have and get the same results, > then it might be interesting to see the output of the following on > both systems: > > SET enable_mergejoin TO off; > SET enable_nestloop TO on; > EXPLAIN ANALYZE SELECT ... > > SET enable_mergejoin TO on; > SET enable_nestloop TO off; > EXPLAIN ANALYZE SELECT ... 
> > -- > Michael Fuhr ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
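The statistics-target adjustment described above (the setting's exact name is default_statistics_target) can be sketched as follows; the table and column names are hypothetical:

```sql
-- raise the default for this session, then re-analyze
SET default_statistics_target = 100;   -- the 7.4/8.0 default is 10
ANALYZE;

-- or raise it permanently for one problematic column only
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
ANALYZE orders;
```

A session-level SET is enough for experimentation, since ANALYZE reads the setting when it runs; no server restart is needed.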
Re: [GENERAL] total db lockup
So can anyone offer any insight on this? BTW, I tried reposting it with attachments and it didn't show up. thanks, Eugene > > From: <[EMAIL PROTECTED]> > Date: 2005/08/18 Thu AM 09:24:30 EST > To: > Subject: [GENERAL] total db lockup > > (NOTE: reposting this for the *fifth* time because my previous messages > didn't go through). > > > Hi all, > > We have experienced a really weird problem with > postgresql yesterday. When I was called in to take a > look, all the non-superuser connections were used up > and they were all in a waiting state (SELECT waiting, > UPDATE waiting, etc.). I couldn't figure out what the > problem is, but I saved the ps, pg_stat_activity, and > pg_locks state at the time (attached as > condor_db_stats.txt). BTW, when looking at the queries > in pg_stat_activity, postgresql cuts them off so you > can't see the entire string. Is there any way to > prevent that, or at least increase the character > limit? > > We restarted postgresql server and the problem > reappeared a few minutes later. I saved that state > also (condor_db_stats2.txt). At that point, after we > restarted postgresql (again), I ran VACUUM on the > entire database and did a few more things that seem to > have solved the problem (see below). > > This particular database is essentially just one flat > table (level) with a few small supporting tables. Only > the level table is heavily used. (table definition is > attached as table.txt). I noticed that one of the > indexes (level_owner_index) was a hash index. I > remembered what postgresql manual says about hash > indexes and concurrency > (http://www.postgresql.org/docs/7.4/interactive/locking-indexes.html) > and, after VACUUM finished, replaced the hash index > with a btree. I then did a REINDEX of the level table > and ANALYZE. This seems to have solved the problem -- > at least as of this morning we still have not seen any > deadlocks. > > My question is, what could have caused this to happen? 
> Can anyone explain this paragraph from the manual: > > "Share/exclusive page-level locks are used for > read/write access. Locks are released after the page > is processed. Page-level locks provide better > concurrency than index-level ones but are liable to > deadlocks." > > > Any other pointers to help me figure out what went > wrong and how to fix it? > > thanks, > > Eugene > > WTF? My message doesn't appear. Trying again without > attachments or inline text. > > > ---(end of broadcast)--- > TIP 2: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 6: explain analyze is your friend
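The index replacement described in the message above can be sketched as follows. The indexed column name is an assumption here, since the actual table definition was in an attachment that did not survive:

```sql
-- replace the hash index (poor concurrency behavior, per the 7.4 docs)
-- with an equivalent btree index
BEGIN;
DROP INDEX level_owner_index;
CREATE INDEX level_owner_index ON level USING btree (owner);
COMMIT;

-- refresh planner statistics afterwards
ANALYZE level;
```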
Re: [GENERAL] Startup ...
On Wed, Aug 17, 2005 at 18:36:55 -0700, Gary Fay <[EMAIL PROTECTED]> wrote: > I have a fedora core 3 system. I was able to start the > postgres server with the tool but I have no idea how > to create the databases. I am getting errors about my > user id and I should connect with the one that started > the server, but I used the tool, not a command. If you had the server actually running, then you should be able to create a database using the createdb command. Probably you want to run createuser first to create a postgres user name matching your normal account name that has the right to create databases. When you run createuser you will want to use the -U option to run the command as the postgres user 'postgres' (or whatever user ran initdb). ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Generating random values.
On Wed, Aug 17, 2005 at 15:54:40 -0600, Edmund <[EMAIL PROTECTED]> wrote: > > Great! a simple, dumb program can generate all your passwords very > quickly. My 2.4 Ghz Pentium 4 did it in under 10 minutes. A token set of > 16 characters, and a fixed length of 8 characters just isn't a very big > search space. If you are worried about that you shouldn't be using any old random number generator either. I doubt the plain random function is cryptographically secure. You may want to use something like /dev/random as a source. ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Generating random values.
> Great! a simple, dumb program can generate all your passwords very > quickly. My 2.4 Ghz Pentium 4 did it in under 10 minutes. A token set of > 16 characters, and a fixed length of 8 characters just isn't a very big > search space. Your new password is 87&3jiwkjIJiwkjikmkq,^^2v12hqIwLbvCQQQi18152 Do not write it down or save it in a password manager, as doing so creates security problems. -- Mike Nolan ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Generating random values.
Edmund wrote: [EMAIL PROTECTED] ("Joshua D. Drake") writes: Fernando Lujan wrote: Hi folks, I have a table which contains my users... I want to insert for each user a random password, so I need a random function. Is there such a function in Postgres? I just found RANDOM, which generates values between 0.0 and 1.0. Any help or suggestion will be appreciated. :) I would do something like: select substring(md5(random() || random()), 5, 8); Sincerely, Joshua D. Drake Great! a simple, dumb program can generate all your passwords very quickly. My 2.4 Ghz Pentium 4 did it in under 10 minutes. A token set of 16 characters, and a fixed length of 8 characters just isn't a very big search space. This is why I suggested a variable-length random string function. But this is not as trivial to implement. Best Wishes, Chris Travers ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
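The variable-length random string function mentioned above could be sketched in PL/pgSQL like this. This is a sketch only: random() is not cryptographically secure, so real password generation should use a stronger entropy source, and the dollar-quoted body requires 8.0 (on 7.4 it would need escaped single quotes):

```sql
CREATE OR REPLACE FUNCTION random_string(minlen integer, maxlen integer)
RETURNS text AS $$
DECLARE
    -- alphabet chosen to avoid easily confused characters (0/O, 1/l)
    chars  text := 'abcdefghijkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789';
    len    integer := minlen + floor(random() * (maxlen - minlen + 1))::integer;
    result text := '';
BEGIN
    FOR i IN 1 .. len LOOP
        -- pick one character at a random position in the alphabet (1-based)
        result := result || substr(chars, 1 + floor(random() * length(chars))::integer, 1);
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql;

-- usage: SELECT random_string(10, 16);
```

A 57-character alphabet at 10 to 16 characters gives a far larger search space than the fixed 16-character, 8-position scheme criticized above.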
Re: [GENERAL] PostgreSQL 8.0.3 limiting max_connections to 64 ?
Can't help you with the rest of your problem, but you need to up this by a factor of 10 at least. This is 8MB of shared buffers for 500 connections... On Thu, Aug 18, 2005 at 12:43:49PM -0300, Alexandre Barros wrote: > shared_buffers = 1004 # min 16, at least max_connections*2, 8KB each Hope this helps, -- Martijn van Oosterhout http://svana.org/kleptog/ > Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a > tool for doing 5% of the work and then sitting around waiting for someone > else to do the other 95% so you can sue them.
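As a rough illustration of Martijn's sizing point (the values below are illustrative only, not recommendations; appropriate settings depend on available RAM and would need a matching kernel shmmax):

```sql
-- postgresql.conf sketch for a box with around 1 GB of RAM
-- max_connections = 500
-- shared_buffers  = 10000   -- 10000 * 8 KB = ~80 MB, vs. 1004 = ~8 MB
```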
Re: [GENERAL] PostgreSQL 8.0.3 limiting max_connections to 64 ?
First of all, thanks in advance for all the replies. Joshua D. Drake wrote: max_connections, shared_buffers, shmmax were tweaked, but the server does not seem to respect the 500 max_connections... i *know* i'm missing something obvious, but, what could be wrong ?... i'm lost... any help would be most appreciated... please. A completely stupid response but don't take it personally. Did you remove the # from in front of the max_connections parameter and do a COMPLETE restart? no worries, i didn't take it personally. ;) --- Tom Lane wrote: Alexandre Barros <[EMAIL PROTECTED]> writes: max_connections, shared_buffers, shmmax were tweaked, but the server does not seem to respect the 500 max_connections... Er ... did you restart the postmaster after increasing those parameters? regards, tom lane Er... at least twice, once because of a power failure with a defective no-break, and the second time i restarted all services ( and before that, i kept using -- over and over -- the "kill -HUP" on postmaster, and trusted the "reloading parameters" message on the logfile... ); --- Sebastian Hennebrueder wrote: 500 parallel connections is a very large number. i agree... but i needed to test how ( and why ) things were ( not ) working... and the "sorry too many clients already" message was driving me crazy... You should verify that an application is not leaving connections open, or see if you can create a connection pool. the connection pool ( was testing pgpool for that ) was a possibility, but i need things at least "barely working" before... and an obscene value on "max_connections" was my best try. Use select * from pg_stat_activity to see which connections are open from which client and if they are idle or not. See the postgreSQL doc for more information on these queries. 
i have lots of idle connections showing on a "ps|grep" from squirrel ( apache with persistent connections ) amavisd, courier-authlib all keep idle connections for a long time, but that pg_stat query only shows stuff like that: postfix=# select * from pg_stat_activity ; datid | datname | procpid | usesysid | usename | current_query | query_start ---+--+-+--+--+--+- 17230 | postfix | 29852 | 100 | postfix | | (...) (47 rows) i'm yet not sure what this means, we'll be looking over postgresql docs... --- Sven Willenberger wrote: Can you post the relevent portions of your postgresql.conf file? Do you see any error messsages when starting the database up (perhaps about shared memory or such)? everything not commented out from the postgresql.conf: max_connections = 500 # note: increasing max_connections costs about 500 bytes of shared # memory per connection slot, in addition to costs from shared_buffers # and max_locks_per_transaction. superuser_reserved_connections = 2 shared_buffers = 1004 # min 16, at least max_connections*2, 8KB each debug_print_parse = false debug_print_rewritten = false debug_print_plan = false debug_pretty_print = false log_statement = 'none' log_parser_stats = false log_planner_stats = false log_executor_stats = false log_statement_stats = false statement_timeout = 6 # 0 is disabled, in milliseconds /proc/sys/kernel/shmmax == 33554432 and no error messages at all on the startup logfile... We also run a postfix mailserver (with maildrop, courier-imap/vauthchkpw, etc) against a postgresql database with max_connections set to 512 (FreeBSD 5.x machine). On the delivery end of things we pool the connections from the postfix server using proxymap(8) (which helped ease the connections load). Sven aha ! 512 connections ! someone as "exaggerated" as me, i feel happy now ! ;) i will also study proxymap, thanks ! Alexandre Barros
[GENERAL] Postmaster fails in select() in strange way
Hi, I've encountered probably similar problem to one described in http://archives.postgresql.org/pgsql-general/2005-08/msg00847.php but have more information. After some time (about 1000-3000 connections, each one transaction with 1-50 of selects, updates are rare) postmaster stops receiving connections (but is still alive and silently waits for children). Snippet from logs at this moment: postgres[89874]: [1-1] LOG: XX000: select() failed in postmaster: Inappropriate ioctl for device postgres[89874]: [1-2] LOCATION: ServerLoop, postmaster.c:1183 Children (I'm not sure name - child processes of postmaster) are still alive and established connections work fine, but no new connection can be established. After closing all connections, postmaster exits leaving no message in logs - these above are last before postmaster dies. Also, there are no other strange, or even not strange messages/errors in logs. When I start postmaster again I can see this in logs: --- postgres[13035]: [1-1] LOG: 0: database system was interrupted at 2005-08-18 16:12:10 CEST postgres[13035]: [1-2] LOCATION: StartupXLOG, xlog.c:4063 postgres[13035]: [2-1] LOG: 0: checkpoint record is at 0/BFE8748 postgres[13035]: [2-2] LOCATION: StartupXLOG, xlog.c:4132 postgres[13035]: [3-1] LOG: 0: redo record is at 0/BFE8748; undo record is at 0/0; shutdown FALSE postgres[13035]: [3-2] LOCATION: StartupXLOG, xlog.c:4160 postgres[13035]: [4-1] LOG: 0: next transaction ID: 688728; next OID: 639822 postgres[13035]: [4-2] LOCATION: StartupXLOG, xlog.c:4163 postgres[13035]: [5-1] LOG: 0: database system was not properly shut down; automatic recovery in progress postgres[13035]: [5-2] LOCATION: StartupXLOG, xlog.c:4219 postgres[13035]: [6-1] LOG: 0: record with zero length at 0/BFE8784 postgres[13035]: [6-2] LOCATION: ReadRecord, xlog.c:2496 postgres[13035]: [7-1] LOG: 0: redo is not required postgres[13035]: [7-2] LOCATION: StartupXLOG, xlog.c:4321 postgres[13035]: [8-1] LOG: 0: database system is ready 
postgres[13035]: [8-2] LOCATION: StartupXLOG, xlog.c:4526 --- Looks OK, I think. The strangest part of this is that ENOTTY (the errno code for 'Inappropriate ioctl for device') is not listed in the FreeBSD manual page for select(2). In a previous thread, Csaba Nagy wrote: > Is it possible that your application is not closing connections, and > the server has a limit on connection count, and that is reached in a few I'm sure that my application closes all connections correctly. Does it look like an OS or a PostgreSQL bug? Platform: PostgreSQL: psql (PostgreSQL) 8.0.3 FreeBSD: 5.2-RELEASE Thanks, Greetings. -- :: zbigg : Zbigniew Zagórski : ::: zzbigg (at) o2 (dot) pl ::: GG:5280474 ::: : 2B OR (NOT 2B) That is the question. The answer is FF. : ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] How to determine table schema in trigger function
On Thu, Aug 18, 2005 at 05:02:58PM +0300, Andrus wrote: > How to determine schema name where table TG_RELNAME belongs in trigger > function ? You could use TG_RELID to query the system catalogs. See the documentation for pg_class and pg_namespace. -- Michael Fuhr ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
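Putting that suggestion together, the trigger function from the original question could be written roughly like this (a sketch against the serverti table defined there):

```sql
CREATE OR REPLACE FUNCTION setlastchange() RETURNS trigger AS $$
DECLARE
    schname text;
BEGIN
    -- TG_RELID is the OID of the table that fired the trigger;
    -- join pg_class to pg_namespace to recover its schema name
    SELECT n.nspname INTO schname
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.oid = TG_RELID;

    UPDATE serverti
       SET lastchange = now()
     WHERE tablename = TG_RELNAME AND schemaname = schname;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;
```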
[GENERAL] Startup ...
I have a fedora core 3 system. I was able to start the postgres server with the tool but I have no idea how to create the databases. I am getting errors about my user id and I should connect with the one that started the server, but I used the tool, not a command. (Stopping the server and using a command did not work so well either.) Documentation was read but was found wanting. (KISS) Google gave high level stuff but I just want the basics, its running, who is defined to access and or how do I start as the postgres user? Gary Start your day with Yahoo! - make it your home page http://www.yahoo.com/r/hs ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] [JDBC] pg_locks.transaction field type
On Thu, Aug 18, 2005 at 03:55:43PM +1200, Oliver Jowett wrote: > Joseph Shraibman wrote: > > >>The column's type is 'xid' which the driver doesn't currently handle, > >>so it gets put into the "wrap it in PGobject" bucket. > > > >Is xid a type of number? > > It's an internal backend type; I'm not familiar with the details. It's an unsigned 4 byte integer. -- Alvaro Herrera () "Cada quien es cada cual y baja las escaleras como quiere" (JMSerrat) ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Generating random values.
[EMAIL PROTECTED] ("Joshua D. Drake") writes: > Fernando Lujan wrote: > > Hi folks, > > I have a table which contains my users... I want to insert for each > > user > > a random password, so I need a random function. Is there such a function > > in Postgres? I just found RANDOM, which generates values between > > 0.0 and 1.0. > > Any help or suggestion will be appreciated. :) > > I would do something like: > > select substring(md5(random() || random()), 5, 8); > > Sincerely, > > Joshua D. Drake Great! a simple, dumb program can generate all your passwords very quickly. My 2.4 Ghz Pentium 4 did it in under 10 minutes. A token set of 16 characters, and a fixed length of 8 characters just isn't a very big search space. ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Set autocommit to off
Michael Fuhr escreveu: On Wed, Aug 17, 2005 at 08:24:00AM +, Aliomar Mariano Rego wrote: Does somebody knows why the Postgresql 7.4.8 or later doesn't supports the option "SET AUTOCOMMIT TO OFF"? Because server-side autocommit was a bad idea. See the 7.4 Release Notes. http://www.postgresql.org/docs/7.4/static/release-7-4.html "The server-side autocommit setting was removed and reimplemented in client applications and languages. Server-side autocommit was causing too many problems with languages and applications that wanted to control their own autocommit behavior, so autocommit was removed from the server and added to individual client APIs as appropriate." Thanks for your help. I had to install Dspace release 1.3.1 (http://sourceforge.net/projects/dspace/) and during the installation process I have had an error "SET AUTOCOMMIT TO OFF is no longer supported" at Postgresql-7.4.8 and Postgresql-8.0.3. So I installed Postgres-7.3.10 and everything worked well. Thanks for your attention. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[GENERAL] How to determine table schema in trigger function
I created generic (for tables in different schemas) trigger function : CREATE OR REPLACE FUNCTION setlastchange() RETURNS "trigger" AS $$BEGIN UPDATE serverti SET lastchange='now' WHERE tablename=TG_RELNAME and schemaname=TG_SCHEMA; RETURN NULL; END$$ LANGUAGE plpgsql STRICT; Unfortunately, this does not work since TG_SCHEMA is not valid in PL/pgSQL How to determine schema name where table TG_RELNAME belongs in trigger function ? Andrus. table structure is: CREATE TABLE serverti ( schemaname CHARACTER(7), tablename CHARACTER(8) , lastchange timestamp, primary key (schemaname, tablename) ); ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Same database, different query plans
On Thu, Aug 18, 2005 at 12:03:59PM +0300, WireSpot wrote:

> The actual SELECT results (ie. non EXPLAIN) are identical in both
> cases. The indexes and so on are identical. I've done a reindexing and
> vacuuming on both of them just to be sure.
>
> As you can see, there's quite a bit of a difference between 0.3 ms and
> 398 ms, and it shows. I haven't touched the query planning options.
> Why the different planning and what can I do to fix the misguided one?

Have you run ANALYZE or VACUUM ANALYZE in both databases to update the
planner's statistics? If you have and get the same results, then it
might be interesting to see the output of the following on both
systems:

SET enable_mergejoin TO off;
SET enable_nestloop TO on;
EXPLAIN ANALYZE SELECT ...

SET enable_mergejoin TO on;
SET enable_nestloop TO off;
EXPLAIN ANALYZE SELECT ...

-- 
Michael Fuhr

---(end of broadcast)---
TIP 6: explain analyze is your friend
[GENERAL]
Hi,

I'm currently using triggers with PostgreSQL 7.4.6 on Fedora Core 3.
I've implemented a dynamic library in C that receives all registered
triggers. I'm having a hard time right now, since my trigger function
is executed before the transaction is committed (even if the trigger is
set to fire AFTER).

Here's my question: is it possible to create triggers that are executed
only after a committed transaction?

Thanks!

Michel
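AFTER triggers do still fire inside the transaction. One way to push a trigger to the end of the transaction (a sketch, not an answer from this thread; mytable and mytrigfunc are hypothetical names) is a deferrable constraint trigger:

```sql
-- A deferred constraint trigger fires at the end of the
-- transaction, just before COMMIT completes, rather than
-- immediately after each row change. Note it still runs
-- inside the transaction, not after the commit.
CREATE CONSTRAINT TRIGGER mytrig
  AFTER INSERT OR UPDATE ON mytable
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW EXECUTE PROCEDURE mytrigfunc();
```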
[GENERAL] total db lockup
(NOTE: reposting this for the *fifth* time, because my previous
messages didn't go through.)

Hi all,

We experienced a really weird problem with postgresql yesterday. When I
was called in to take a look, all the non-superuser connections were
used up and they were all in a waiting state (SELECT waiting, UPDATE
waiting, etc.). I couldn't figure out what the problem was, but I saved
the ps, pg_stat_activity, and pg_locks state at the time (attached as
condor_db_stats.txt). BTW, when looking at the queries in
pg_stat_activity, postgresql cuts them off so you can't see the entire
string. Is there any way to prevent that, or at least increase the
character limit?

We restarted the postgresql server and the problem reappeared a few
minutes later. I saved that state also (condor_db_stats2.txt). At that
point, after we restarted postgresql (again), I ran VACUUM on the
entire database and did a few more things that seem to have solved the
problem (see below).

This particular database is essentially just one flat table (level)
with a few small supporting tables. Only the level table is heavily
used (table definition is attached as table.txt). I noticed that one of
the indexes (level_owner_index) was a hash index. I remembered what the
postgresql manual says about hash indexes and concurrency
(http://www.postgresql.org/docs/7.4/interactive/locking-indexes.html)
and, after VACUUM finished, replaced the hash index with a btree. I
then did a REINDEX of the level table and an ANALYZE. This seems to
have solved the problem -- at least as of this morning we still have
not seen any deadlocks.

My question is, what could have caused this to happen? Can anyone
explain this paragraph from the manual:

"Share/exclusive page-level locks are used for read/write access. Locks
are released after the page is processed. Page-level locks provide
better concurrency than index-level ones but are liable to deadlocks."

Any other pointers to help me figure out what went wrong and how to fix
it?
thanks,
Eugene

WTF? My message doesn't appear. Trying again without attachments or
inline text.

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
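The hash-to-btree index replacement described in the post might look like this (a sketch: the index name is from the post, but the indexed column, here assumed to be "owner", is not shown in the thread):

```sql
-- Replace the hash index (page-level locking, deadlock-prone
-- under concurrent access on these versions) with a btree.
BEGIN;
DROP INDEX level_owner_index;
CREATE INDEX level_owner_index ON level USING btree (owner);
COMMIT;
```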
Re: [GENERAL] Weird performance hit
On Thu, Aug 18, 2005 at 06:26:41AM -0600, Michael Fuhr wrote:

> On Thu, Aug 18, 2005 at 10:03:38AM +0300, WireSpot wrote:
> > However, one application is for some weird reason taking a serious
> > performance hit on certain pages. There are some intensive joins
> > and selects there, but somehow one install manages a couple of
> > seconds and the other takes about 10.
>
> Could you post the query and the EXPLAIN ANALYZE output for both
> systems?

Never mind, I see that you posted that info in another thread (please
don't start multiple threads on the same subject).

-- 
Michael Fuhr

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
Re: [GENERAL] Weird performance hit
On Thu, Aug 18, 2005 at 10:03:38AM +0300, WireSpot wrote:

> However, one application is for some weird reason taking a serious
> performance hit on certain pages. There are some intensive joins and
> selects there, but somehow one install manages a couple of seconds
> and the other takes about 10.

Could you post the query and the EXPLAIN ANALYZE output for both
systems?

-- 
Michael Fuhr

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to [EMAIL PROTECTED] so that your
       message can get through to the mailing list cleanly
R: [GENERAL] Linux Postgres authentication against active directory
Finally I followed your suggestion; I use LDAP and it seems to work
(for now). Thanks for your help.

dario

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On behalf of Magnus Hagander
> Sent: Monday, August 15, 2005 21.01
> To: Ronzani Dario; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Linux Postgres authentication against
> active directory
>
> > > > Actually I am trying to authenticate my Linux Postgres
> > > > installation against Active Directory. I found 3 solutions:
> > > >
> > > > 1) LDAP
> > > > 2) PAM and Kerberos
> > > > 3) Kerberos alone
> > >
> > > (3) is the one I've been using, and it works very well. I've been
> > > working on a HOWTO, but it's not done yet.
> > >
> > > Note that if your clients are on win32, you need at least version
> > > 8.0.2.
> >
> > Great to know that someone is able to use this solution. I don't
> > have any clients; my application is a web (PHP, Java) application
> > with a requirement for AD (or LDAP, depending on the customer) user
> > authentication.
>
> The easiest way around that is to authenticate the user to the
> webserver, and then use a single account to connect to the database
> (or a couple, depending on group membership, or whatever suits your
> app).
>
> > > Naturally you will have to recreate the service account in AD
> > > with the proper SPN.
> >
> > My fear is that I have misunderstood how Kerberos works. I
> > understand that I must kerberize the postgres application to give
> > it access to AD, and then I need a principal for every user that
> > needs to authenticate?
>
> No, you do not need to kerberize postgres to access AD. If you want
> to "check AD passwords", it's probably best to use LDAP.
>
> Kerberos is used to achieve *single sign on*, meaning your app never
> sees the password. If this is not what you need, it's overly complex.
>
> Any user who needs to authenticate needs a principal, yes. That's
> their normal Windows account. On top of that, every *service* the
> user should authenticate *to* also needs a principal - for mutual
> authentication.
>
> If your client connects to your webserver only, your webserver needs
> this principal. If the clients connect to the database server, the
> database server needs it. There is apparently some way, if you use
> mod_perl, to forward Kerberos credentials from the webserver to the
> database server, but I don't know any details about that.
>
> But again, if you intend to provide a userid/password box to the user
> and then authenticate those credentials, it's going to be a lot
> easier to use, for example, LDAP.
>
> You can, of course, use Kerberos between your webserver and the
> database server, but that's most likely overkill as you'll only be
> able to access it as a single user anyway (I think).
>
> > This is my principal on the linux box; as you say (I think), I
> > created this principal in uppercase:
> >
> > ktutil: rkt /usr/etc/postgresql/krb5.keytab
> > ktutil: list
> > slot KVNO Principal
> > ---- ---- -------------------------------------------------
> >    1    3 POSTGRES/[EMAIL PROTECTED]
> >    2    3 RONZANID/[EMAIL PROTECTED]
>
> The postgres keytab only needs the POSTGRES principal. The other is
> your user (I assume), and it should not be stored *anywhere* - only
> transient while you are logged in.
>
> > Maybe my problem was related to this compilation line?
> >
> > checking for library containing com_err... -lkrb5
> > checking for library containing krb5_encrypt... none required
> > checking for library containing krb5_sendauth... none required
> > checking krb5.h usability... yes
> > checking krb5.h presence... yes
> > checking for krb5.h... yes
> > checking for krb5_ticket.enc_part2... yes
> > checking for krb5_error.text.data... yes
> >
> > In particular I refer to krb5_encrypt and krb5_sendauth.
>
> No, that looks correct. It just means that they were found in the
> krb5 library, and no *additional* libraries are needed. If you were
> missing the required libs, you'd get an error and not just a notice.
>
> > With more debug I received this error when I try to authenticate:
> >
> > postgres: Bad application version was sent (via sendauth) from
> > krb5_recvauth
>
> That probably indicates that your server has a different principal
> name than the client (libpq library).
>
> //Magnus
>
> ---(end of broadcast)---
> TIP 3: Have you checked our extensive FAQ?
>
> http://www.postgresql.org/docs/faq

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

http://www.postgresql.org/docs/faq
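For reference, the server-side configuration for option (3), Kerberos alone, is roughly the following (a sketch; the address range and keytab path are assumptions, though the keytab path matches the one shown in the thread):

```
# pg_hba.conf: authenticate TCP clients via Kerberos 5
# TYPE  DATABASE  USER  CIDR-ADDRESS    METHOD
host    all       all   192.168.0.0/24  krb5

# postgresql.conf: point the server at its keytab
krb_server_keyfile = '/usr/etc/postgresql/krb5.keytab'
```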
[GENERAL] Same database, different query plans
I have the exact same database on two different systems, both using
8.0.3. I've installed the database from the same dump. Here's the query
I'm trying on each:

EXPLAIN ANALYZE
SELECT answers.*, i18n.field1, i18n.field2, i18n.field3, i18n.field4,
       i18n.field5
FROM answers
LEFT OUTER JOIN i18n ON answers.i18n_id = i18n.id
                    AND i18n.languages_id = 2
WHERE question_id = 2938
ORDER BY display_order;

Here's the result from one of them:

 Sort  (cost=30.46..30.47 rows=2 width=125) (actual time=0.110..0.111 rows=1 loops=1)
   Sort Key: answers.display_order
   ->  Nested Loop Left Join  (cost=0.00..30.45 rows=2 width=125) (actual time=0.081..0.088 rows=1 loops=1)
         ->  Index Scan using question_id_answers_key on answers  (cost=0.00..9.07 rows=2 width=38) (actual time=0.042..0.047 rows=1 loops=1)
               Index Cond: (question_id = 2938)
         ->  Index Scan using i18n_pkey on i18n  (cost=0.00..10.68 rows=1 width=91) (actual time=0.019..0.019 rows=0 loops=1)
               Index Cond: ("outer".i18n_id = i18n.id)
               Filter: (languages_id = 2)
 Total runtime: 0.306 ms
(9 rows)

And the other:

 Sort  (cost=1025.08..1025.14 rows=22 width=223) (actual time=397.154..397.155 rows=1 loops=1)
   Sort Key: answers.display_order
   ->  Merge Left Join  (cost=1023.34..1024.59 rows=22 width=223) (actual time=396.695..396.700 rows=1 loops=1)
         Merge Cond: ("outer".i18n_id = "inner".id)
         ->  Sort  (cost=71.81..71.86 rows=22 width=63) (actual time=0.346..0.349 rows=1 loops=1)
               Sort Key: answers.i18n_id
               ->  Index Scan using question_id_answers_key on answers  (cost=0.00..71.31 rows=22 width=63) (actual time=0.320..0.327 rows=1 loops=1)
                     Index Cond: (question_id = 2938)
         ->  Sort  (cost=951.53..952.00 rows=187 width=164) (actual time=375.092..385.246 rows=5651 loops=1)
               Sort Key: i18n.id
               ->  Seq Scan on i18n  (cost=0.00..944.48 rows=187 width=164) (actual time=0.127..132.919 rows=10940 loops=1)
                     Filter: (languages_id = 2)
 Total runtime: 398.751 ms
(13 rows)

The actual SELECT results (ie. non EXPLAIN) are identical in both
cases. The indexes and so on are identical. I've done a reindexing and
vacuuming on both of them just to be sure.

As you can see, there's quite a bit of a difference between 0.3 ms and
398 ms, and it shows. I haven't touched the query planning options. Why
the different planning, and what can I do to fix the misguided one?

---(end of broadcast)---
TIP 4: Have you searched our list archives?

http://archives.postgresql.org
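The estimates in the second plan look stale: the seq scan on i18n is estimated at rows=187 but actually returns 10940 rows. Refreshing the planner's statistics would be a natural first step (a sketch):

```sql
-- Update the planner's statistics for both tables,
-- then re-examine the plan.
ANALYZE answers;
ANALYZE i18n;
-- EXPLAIN ANALYZE SELECT ...  (re-run the query above)
```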
Re: [GENERAL] Adding contrib modules
On Wed, Aug 17, 2005 at 01:14:43PM -0500, Jonathan Villa wrote:

> Thanks... at least now I know I'm doing it correctly... but I still
> get the errors. I've done everything as it states on the
> tsearch-V2-intro.html page... and then I run
>
> psql ftstest < tsearch2.sql &> fts.out
>
> for testing of course
>
> the fts.out file has things like
>
> ERROR: type "tsvector" does not exist
> ERROR: type "tsquery" does not exist
> ERROR: function lexize("unknown", "unknown") does not exist
>
> I'm totally baffled... I'm running 7.4.8 by the way

Have you looked in the server log?

-- 
marko

---(end of broadcast)---
TIP 4: Have you searched our list archives?

http://archives.postgresql.org
[GENERAL] Weird performance hit
I have two practically identical Debian-testing systems installed on
two hard drives in the same machine. I've compiled Postgres 8.0.3 with
exactly the same options on both. Both installs use the same kernel,
have DMA enabled, and so on. I have the same database and web
applications installed on both systems.

However, one application is for some weird reason taking a serious
performance hit on certain pages. There are some intensive joins and
selects there, but somehow one install manages a couple of seconds and
the other takes about 10.

Any ideas? I've tried copying Postgres and the repository over from the
good install, and it does the same thing. Does this suggest that it's
not a Postgres issue?

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings