Re: [GENERAL] relid and relname
Michael Fuhr <[EMAIL PROTECTED]> writes: > On Thu, Mar 24, 2005 at 11:01:23PM -0300, Edson Vilhena de Carvalho wrote: >> Can anyone tell me what is a relid, a relname and >> schemaname data outputs resulting from the SQL: select >> * from pg_stat_all_tables; > relid = object ID (oid) of the relation (table) More specifically, it's the OID of the pg_class row for the table. So you can join the relid from that view to pg_class.oid to find out more about the table. > In the system catalogs "relation" doesn't always refer to a table, > but in pg_stat_all_tables it does (pg_stat_all_tables is a view > that shows only tables). We really define "relation" as "anything that has a pg_class entry"; this includes tables, indexes, views, composite types, and some other weirder cases. Some but not all of these objects have associated disk files. regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2
I tried it in the tsearch2 directory, not the postgres src directory. On Mar 24, 2005, at 9:35 PM, Oleg Bartunov wrote: Did you try 'make clean' first ? On Thu, 24 Mar 2005, Ben wrote: I'm trying to install tsearch2 into an empty database on a new 8.0.1 postgres install. The machine already has an older 7.4 install of postgres on it, so I gave configure a --prefix=/usr/local/pg801 option. Postgres installed and started fine (after changing the port), and I was able to create my new empty database without issues. Now comes the part where I fail to install tsearch2. I go to the contrib/tsearch2 directory, run make and make install without issues. make installcheck tries to connect to the older postgres install (I don't see an option to set the port it attempts to use), so I try to pipe tsearch2.sql into the new database. It starts working fine, and then says: ERROR: could not find function "tsvector_cmp" in file "/usr/local/pgsql/lib/tsearch2.so" which is interesting, because it's not trying to use /usr/local/pg801/ like it's supposed to. Thoughts? Regards, Oleg _ Oleg Bartunov, sci.researcher, hostmaster of AstroNet, Sternberg Astronomical Institute, Moscow University (Russia) Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(095)939-16-83, +007(095)939-23-83 ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2
Did you try 'make clean' first ? On Thu, 24 Mar 2005, Ben wrote: I'm trying to install tsearch2 into an empty database on a new 8.0.1 postgres install. The machine already has an older 7.4 install of postgres on it, so I gave configure a --prefix=/usr/local/pg801 option. Postgres installed and started fine (after changing the port), and I was able to create my new empty database without issues. Now comes the part where I fail to install tsearch2. I go to the contrib/tsearch2 directory, run make and make install without issues. make installcheck tries to connect to the older postgres install (I don't see an option to set the port it attempts to use), so I try to pipe tsearch2.sql into the new database. It starts working fine, and then says: ERROR: could not find function "tsvector_cmp" in file "/usr/local/pgsql/lib/tsearch2.so" which is interesting, because it's not trying to use /usr/local/pg801/ like it's supposed to. Thoughts? ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq Regards, Oleg _ Oleg Bartunov, sci.researcher, hostmaster of AstroNet, Sternberg Astronomical Institute, Moscow University (Russia) Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(095)939-16-83, +007(095)939-23-83 ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] relid and relname
On Thu, Mar 24, 2005 at 11:01:23PM -0300, Edson Vilhena de Carvalho wrote: > Hi, I'm a new guy from Portugal Bem-vindo! > Can anyone tell me what is a relid, a relname and > schemaname data outputs resulting from the SQL: select > * from pg_stat_all_tables; relid = object ID (oid) of the relation (table) relname = name of the relation (table) schemaname = name of the relation's (table's) schema In the system catalogs "relation" doesn't always refer to a table, but in pg_stat_all_tables it does (pg_stat_all_tables is a view that shows only tables). See the "Data Definition" chapter in the documentation for more information about tables and schemas. http://www.postgresql.org/docs/8.0/static/ddl.html > There are also other data outputs that I don't > understand but they are probably related: > indexrelid (oid) > indexrelname (name) indexrelid = object ID (oid) of the index indexrelname = name of the index See the "Indexes" chapter in the documentation for more information about indexes. http://www.postgresql.org/docs/8.0/static/indexes.html Studying the "System Catalogs" chapter might also be useful. http://www.postgresql.org/docs/8.0/static/catalogs.html -- Michael Fuhr http://www.fuhr.org/~mfuhr/ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
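To make the join concrete, a sketch along these lines ties each row of the stats view back to its table; the column names are from the 8.0 catalogs, and the extra pg_class columns shown (relpages, reltuples) are just examples of what becomes available through the join:

```sql
-- relid in pg_stat_all_tables is the OID of the table's pg_class row,
-- so it joins directly against pg_class.oid:
SELECT s.schemaname, s.relname, c.relpages, c.reltuples
FROM pg_stat_all_tables AS s
JOIN pg_class AS c ON c.oid = s.relid
ORDER BY s.schemaname, s.relname;
```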
Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2
Nifty. On Mar 24, 2005, at 7:34 PM, Tom Lane wrote: Ben <[EMAIL PROTECTED]> writes: make installcheck tries to connect to the older postgres install (I don't see an option to set the port it attempts to use), Set PGPORT, eg export PGPORT= regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Pgsql 8.0 on Win32 Production env.
I have been using the win32 version of PG 8.0 on a Windows 2003 server since Sept 2004; it's for a financial daily pricing application using apache 2.5x. I have a Delphi win32 app that the users use to update the database with data from a spreadsheet; the app then updates the tables used for the daily pricing web CGI, and this happens twice per day. The web page is then hit all day long, pulling the info from the PG tables. I ported the app from MS SQL Server 7 and I can tell you it runs just as well, if not better. We even ran this whole thing on the early win32 betas from Aug 04 and never had one hiccup. Very impressive if you ask me (PG 8.x). later, Tony

On Thursday 24 March 2005 10:31 am, fabrizio ravazzini wrote: > Hello all, I have to build a web application and for > this I have to use one of my client's Windows32 > servers or workstations. > Is postgresql 8.0 native suitable for a production > environment? > Thanks for any advice. > Fabri ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] 8.0.1 in a non-standard location and tsearch2
Ben <[EMAIL PROTECTED]> writes: > make installcheck tries to connect to the older postgres install (I > don't see an option to set the port it attempts to use), Set PGPORT, eg export PGPORT= regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
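For completeness, the environment-variable approach looks like this; 5433 is a hypothetical value, so substitute whatever port the new 8.0.1 postmaster actually listens on:

```shell
# PGPORT is read by libpq, so psql, pg_dump, and contrib's
# "make installcheck" will all connect to this port by default.
export PGPORT=5433
echo "libpq tools will now connect to port $PGPORT"
```

With that exported in the same shell, re-running make installcheck in contrib/tsearch2 should target the new install rather than the old 7.4 server.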
Re: [GENERAL] building 8.0.1 on OS X
Ben <[EMAIL PROTECTED]> writes: > I'm trying to build 8.0.1 on OS X, and after a simple ./configure > without any options, the build progresses smoothly until, when building > pg_ctl: > pg_ctl.c: In function `test_postmaster_connection': > pg_ctl.c:439: error: `PQnoPasswordSupplied' undeclared (first use in > this function) That should be defined in libpq-fe.h. I suspect your build is picking up an old version of libpq-fe.h from somewhere. Check include paths. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[GENERAL] 8.0.1 in a non-standard location and tsearch2
I'm trying to install tsearch2 into an empty database on a new 8.0.1 postgres install. The machine already has an older 7.4 install of postgres on it, so I gave configure a --prefix=/usr/local/pg801 option. Postgres installed and started fine (after changing the port), and I was able to create my new empty database without issues. Now comes the part where I fail to install tsearch2. I go to the contrib/tsearch2 directory, run make and make install without issues. make installcheck tries to connect to the older postgres install (I don't see an option to set the port it attempts to use), so I try to pipe tsearch2.sql into the new database. It starts working fine, and then says: ERROR: could not find function "tsvector_cmp" in file "/usr/local/pgsql/lib/tsearch2.so" which is interesting, because it's not trying to use /usr/local/pg801/ like it's supposed to. Thoughts? ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] building 8.0.1 on OS X
I'm trying to build 8.0.1 on OS X, and after a simple ./configure without any options, the build progresses smoothly until, when building pg_ctl: pg_ctl.c: In function `test_postmaster_connection': pg_ctl.c:439: error: `PQnoPasswordSupplied' undeclared (first use in this function) pg_ctl.c:439: error: (Each undeclared identifier is reported only once pg_ctl.c:439: error: for each function it appears in.) Any thoughts? ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] [JDBC] plpgsql function with RETURNS SETOF refcursor
Hi Kris,

I use the code found here: http://www.postgresql.org/docs/7.4/interactive/jdbc-callproc.html But I don't think there is a way to make it work with SETOF refcursor. I will try your code, which seems to work.

SQL string: ? = call usp_Comptabilite_JournalVentes (?, ?, ?, ?, ? )

Java code:

    CallableStatement cs = (CallableStatement) dbCon.getPreparedStatement();
    cs.registerOutParameter(1, Types.OTHER);
    cs.setString(2, (String) parameters.get("companyId"));
    .

After I call this function I can get the refcursor with rsTmp.getObject(1). That works, if the function returns only one refcursor. I will try your way ( select usp_Comptabilite_JournalVentes (?, ?, ?, ?, ? ) ) to get my SETOF refcursor.

    public ResultSet executePreparedStatementQueryCursor() throws SQLException {
        ResultSet rsTmp = ps.executeQuery();
        rs = new ResultSet[1];
        rs[0] = (ResultSet) rsTmp.getObject(1);
        rsTmp.close();
        return rs[0];
    }

Thanks for your help! /David

P.S.: By the way, I think it should be possible to get my SETOF refcursor using a CallableStatement. Regarding how the JDBC driver handles refcursors returned by a CallableStatement, I'm not sure it is written to handle my case.

Kris Jurka wrote: On Thu, 24 Mar 2005, David Gagnon wrote: Hi Kris, I don't get error with the rsTmp.close() statement but with " (rsTmp.next()) ". The arraycopy is because I want to shrink the original array (size 50) to it real size. It's not intended to be a deep copy. Right, my bad. I see nothing wrong with your code, but you haven't included a complete example. There aren't any thread safety problems in your code where the executePreparedStatementQueryMultipleCursor function is called simultaneously? I've attached the test code I've used to verify that this is not a driver problem.
Kris Jurka

    import java.sql.*;

    public class MultRefCursor {
        public static void main(String args[]) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost:5432/jurka", "jurka", "");
            Statement stmt = conn.createStatement();
            stmt.execute("CREATE OR REPLACE FUNCTION multcurfunc() RETURNS SETOF refcursor AS 'DECLARE ref1 refcursor; ref2 refcursor; BEGIN OPEN ref1 FOR SELECT 1; RETURN NEXT ref1; OPEN ref2 FOR SELECT 2; RETURN NEXT ref2; RETURN; END;' LANGUAGE plpgsql");
            stmt.close();
            conn.setAutoCommit(false);
            PreparedStatement ps = conn.prepareStatement("SELECT * FROM multcurfunc()");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1));
                ResultSet rs2 = (ResultSet) rs.getObject(1);
                while (rs2.next()) {
                    System.out.println(rs2.getInt(1));
                }
                rs2.close();
            }
            rs.close();
            ps.close();
            conn.close();
        }
    }

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[GENERAL] relid and relname
Hi, I'm a new guy from Portugal. Can anyone tell me what the relid, relname and schemaname outputs are that result from the SQL: select * from pg_stat_all_tables;

There are also other outputs that I don't understand, but they are probably related: indexrelid (oid) indexrelname (name)

I know what the types mean but would like to know what they refer to. Thank you very much, Edson Carvalho ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] [JDBC] plpgsql function with RETURNS SETOF refcursor AS. How
On Thu, 24 Mar 2005, David Gagnon wrote:

> Hi Kris,
>
> I don't get error with the rsTmp.close() statement but with
> " (rsTmp.next()) ". The arraycopy is because I want to shrink the
> original array (size 50) to it real size. It's not intended to be a
> deep copy.

Right, my bad. I see nothing wrong with your code, but you haven't included a complete example. There aren't any thread safety problems in your code where the executePreparedStatementQueryMultipleCursor function is called simultaneously? I've attached the test code I've used to verify that this is not a driver problem.

Kris Jurka

    import java.sql.*;

    public class MultRefCursor {
        public static void main(String args[]) throws Exception {
            Class.forName("org.postgresql.Driver");
            Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost:5432/jurka", "jurka", "");
            Statement stmt = conn.createStatement();
            stmt.execute("CREATE OR REPLACE FUNCTION multcurfunc() RETURNS SETOF refcursor AS 'DECLARE ref1 refcursor; ref2 refcursor; BEGIN OPEN ref1 FOR SELECT 1; RETURN NEXT ref1; OPEN ref2 FOR SELECT 2; RETURN NEXT ref2; RETURN; END;' LANGUAGE plpgsql");
            stmt.close();
            conn.setAutoCommit(false);
            PreparedStatement ps = conn.prepareStatement("SELECT * FROM multcurfunc()");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString(1));
                ResultSet rs2 = (ResultSet) rs.getObject(1);
                while (rs2.next()) {
                    System.out.println(rs2.getInt(1));
                }
                rs2.close();
            }
            rs.close();
            ps.close();
            conn.close();
        }
    }

---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] [JDBC] plpgsql function with RETURNS SETOF refcursor AS. How to
Hi Kris,

I don't get the error with the rsTmp.close() statement but with (rsTmp.next()). The arraycopy is because I want to shrink the original array (size 50) to its real size. It's not intended to be a deep copy.

A plpgsql function can return multiple refcursors, so the question is how I can get them via JDBC. Below I included the function that doesn't work (it throws an exception in the while condition). I also included a "WORKING" function that returns only one refcursor. Thanks for your help; if you see something wrong I'll be happy to know it :-)

/David

    public ResultSet[] executePreparedStatementQueryMultipleCursor() throws SQLException {
        ResultSet rsTmp = ps.executeQuery();
        ResultSet[] tempArray = new ResultSet[50]; // Should be enough
        int j = 0;
        while (rsTmp.next()) {
            tempArray[j] = (ResultSet) rsTmp.getObject(1);
            j++;
        }
        rs = new ResultSet[j];
        System.arraycopy(tempArray, 0, rs, 0, j);
        rsTmp.close();
        return rs;
    }

    public ResultSet executePreparedStatementQueryCursor() throws SQLException {
        ResultSet rsTmp = ps.executeQuery();
        rs = new ResultSet[1];
        rs[0] = (ResultSet) rsTmp.getObject(1);
        rsTmp.close();
        return rs[0];
    }

Kris Jurka wrote: On Thu, 24 Mar 2005, David Gagnon wrote: I'm already able to get Refcursor from a stored procedure. But now I need to get a SETOF refcursor and I can't make it work... Is that possible to do this via JDBC? He is the code I did. The rsTmp.next() throws a Connection is closed. Operation is not permitted. Exception. rs = new ResultSet[j]; System.arraycopy(tempArray, 0, rs, 0, j); rsTmp.close(); System.arraycopy does not make a deep copy, so the rsTmp.close() closes the ResultSet. You really can't copy resources around like that. Consider how you would copy a Connection object. Does that establish a new connection? The underlying tcp/ip connection can't be copied. Kris Jurka ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Converting from single user w/pool to multiple users
Jeff Amiel wrote: > We intended to do that very thing (read the earlier parts of the > thread between myself and Tom Lane) Sorry, you're correct. Somehow I missed that exchange (just read the archives.) -- Guy Rouillier ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] plperl doesn't release memory
Greg Stark <[EMAIL PROTECTED]> writes: > Two mallocs can work fine alongside each other. They each call mmap or > sbrk to allocate new pages and they each manage the pages they've > received. They won't have any idea why the allocator seems to be > skipping pages, but they should be careful not to touch those pages. However, it's quite likely that such a setup will fail to release memory back to the OS very effectively, and it could easily suffer bad fragmentation problems even without thinking about whether the program break address can be moved back. I think what Sven is seeing is exactly fragmentation inefficiency. regards, tom lane ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] plperl doesn't release memory
Dan Sugalski <[EMAIL PROTECTED]> writes: > Anyway, if perl's using its own memory allocator you'll want to rebuild it > to not do that. You would need to do that if you wanted to use a debugging malloc. But there's no particular reason to think that you should need to do this just to work properly. Two mallocs can work fine alongside each other. They each call mmap or sbrk to allocate new pages and they each manage the pages they've received. They won't have any idea why the allocator seems to be skipping pages, but they should be careful not to touch those pages. -- greg ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] [JDBC] plpgsql function with RETURNS SETOF refcursor AS. How
On Thu, 24 Mar 2005, David Gagnon wrote: > I'm already able to get Refcursor from a stored procedure. But now I > need to get a SETOF refcursor and I can't make it work... Is that > possible to do this via JDBC? > > He is the code I did. The rsTmp.next() throws a Connection is > closed. Operation is not permitted. Exception. > > > rs = new ResultSet[j]; > System.arraycopy(tempArray, 0, rs, 0, j); > rsTmp.close(); System.arraycopy does not make a deep copy, so the rsTmp.close() closes the ResultSet. You really can't copy resources around like that. Consider how you would copy a Connection object. Does that establish a new connection? The underlying tcp/ip connection can't be copied. Kris Jurka ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] plpgsql function with RETURNS SETOF refcursor AS. How to get it work via JDBC
Hi all,

I'm already able to get a Refcursor from a stored procedure, but now I need to get a SETOF refcursor and I can't make it work. Is it possible to do this via JDBC?

Here is the code I wrote. The rsTmp.next() throws a "Connection is closed. Operation is not permitted." exception.

    public ResultSet[] executePreparedStatementQueryMultipleCursor() throws SQLException {
        ResultSet rsTmp = ps.executeQuery();
        ResultSet[] tempArray = new ResultSet[50]; // Should be enough
        int j = 0;
        while (rsTmp.next()) {
            tempArray[j] = (ResultSet) rsTmp.getObject(1);
            j++;
        }
        rs = new ResultSet[j];
        System.arraycopy(tempArray, 0, rs, 0, j);
        rsTmp.close();
        return rs;
    }

Here is a part of my function (see below), which seems to work correctly. If it's not supported, is there a workaround? Is support planned soon? Thanks for your help, it's really appreciated!

/David

    CREATE OR REPLACE FUNCTION usp_Comptabilite_JournalVentes(VARCHAR, DATE, DATE, VARCHAR, VARCHAR) RETURNS SETOF refcursor AS '
    DECLARE
        companyId ALIAS FOR $1;
        startDate ALIAS FOR $2;
        endDate ALIAS FOR $3;
        periodIdFrom ALIAS FOR $4;
        periodIdTo ALIAS FOR $5;
        ref1 refcursor;
        ref2 refcursor;
        statement varchar(4000);
        appliedStr varchar(10);
        printedStr varchar(10);
    BEGIN
        OPEN ref1 FOR EXECUTE statement;
        RETURN NEXT ref1;
        ...
        OPEN ref2 FOR EXECUTE statement;
        RETURN NEXT ref2;
        RETURN;
    END;
    ' LANGUAGE 'plpgsql';

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] Any easy ways to change configuration
I get the following message in a psql command prompt:

Warning: Console code page (437) differs from Windows code page (1252) 8-bit characters may not work correctly. See psql reference page "Notes for Windows users" for details.

This problem prevents a JDBC connection. Is there an easy way to change the configuration, or do I need to uninstall and reinstall the server? BTW, I would like the server to work for Chinese on Windows XP with the Chinese language installed. Thanks, Vernon ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Converting from single user w/pool to multiple users
Jeff Amiel wrote: > The issue is really propagating the authentication credentials to the > database itself. It's our ON INSERT/ON UPDATE/ON DELETE triggers > that are doing the auditing, and they need the user ID to accurately > log changes. In lieu of any other per-connection persistent data > option, this seems like the best bet.

I still don't like the idea of thousands of connections, most of which will probably be doing nothing most of the time except consuming lots of memory. You might want to explore creating a small wrapper around the JBoss connection pool that uses SET SESSION AUTHORIZATION after obtaining a connection. That way you can still have a small number of pooled connections but have the real user id associated with the connection. -- Guy Rouillier ---(end of broadcast)--- TIP 8: explain analyze is your friend
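On the SQL side, the pool-wrapper idea could look roughly like this. It is only a sketch: 'webuser_42' is a hypothetical per-application-user role, and note that SET SESSION AUTHORIZATION requires the pool's own login role to be a superuser:

```sql
-- After the pool hands out a connection, assume the real user's identity:
SET SESSION AUTHORIZATION 'webuser_42';
-- Audit triggers can now read the real user via current_user.
-- ... application queries run here ...
-- Before the connection goes back to the pool, revert to the login role:
RESET SESSION AUTHORIZATION;
```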
Re: [GENERAL] Converting from single user w/pool to multiple users
We intended to do that very thing (read the earlier parts of the thread between myself and Tom Lane) Jeff Guy Rouillier wrote: Jeff Amiel wrote: The issue is really propogating the authenticaion credentials to the database itself.it's our ON INSERT/ON UPDATE/ON DELETE triggers that are doing the auditing and they need the user ID to accurately log changes. In lieu of any other per-connection persistant data option, this seems like the best bet. I still don't like the idea of thousands of connections, most of which will probably most of the time be doing nothing except consuming lots of memory. You might want to explore creating a small wrapper around the JBoss connection pool that uses SET SESSION AUTHORIZATION after obtaining a connection. That way you can still have a small number of pooled connections but have the real user id associated with the connection. ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Converting from single user w/pool to multiple users
Guy Rouillier wrote: Do you really want to try to administer thousands of database user accounts in addition to wherever you maintain these same accounts for non-DB authentication? This assumes, of course, that if you are interested in accountability at all, you authenticate somewhere. And I certainly wouldn't want to sacrifice memory that could be put to good use processing database requests to holding several thousand idle database connections.

Actually, we don't think that the management of the database accounts will be an issue. Our user administration system will create the users in postgres at the same time it creates the appropriate 'profile' in our application databases. Any change (such as disabling a user) is also propagated to both places via a stored procedure (function) that modifies both as appropriate.

Instead, again assuming you authenticate users, you can propagate that security context to JBoss. Then you can secure the EJB or MBean method that obtains the database connection from the pool (and presumably does something useful) to respect the security context. If you want to audit, you have the security context information, so you can extract the authentication credentials from there to write to an audit table (or even include them in every row you write to the database).

The issue is really propagating the authentication credentials to the database itself. It's our ON INSERT/ON UPDATE/ON DELETE triggers that are doing the auditing, and they need the user ID to accurately log changes. In lieu of any other per-connection persistent data option, this seems like the best bet. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] plperl doesn't release memory
On Thu, 2005-03-24 at 15:52 -0500, Tom Lane wrote: > Sven Willenberger <[EMAIL PROTECTED]> writes: > > Yes, on these systems, perl was build with -Dusemymalloc (and > > concurrently -Ui_malloc) so there could very well be an issue with > > malloc pools going awry. Doing a quick dig reveals that Linux tends to > > build perl (by default) with the system malloc which may explain why > > your script did not display this same behavior. > > I can confirm that my copy is not using mymalloc: > > $ perl -V | grep -i alloc > usemymalloc=n, bincompat5005=undef

I went ahead and rebuilt perl using the system malloc instead, and what I found was that on the function that started this whole topic, the memory allocation went overboard and postgresql bailed with "out of memory". Using the perl malloc, apparently postgres/plperl/libperl was able to manage the memory load, although it got "stuck" for the session. Closing the session (psql) did end up freeing all the memory, at least from top's perspective (since the process no longer existed); running the query from the command line (psql -c "select function()") worked over several iterations, so I do believe that the memory does get freed upon closing the connection. In fact we were able to run the query using this method with 4 simultaneous connections and, although we went heavy into swap, all four did complete.

So the issue can be narrowed down to a per-connection basis where the amount of memory needed by the function would normally exceed available memory; the handler for this "overflow" does something with the extra memory such that subsequent invocations of the function during the same connection eventually produce a malloc error. (This is inconsistent with the behavior on the 8G opteron system ... but I will reevaluate the issue I saw there and see if it is related to something else.) Sven ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Converting from single user w/pool to multiple users
Jeff Amiel wrote: > Web based application that utilizes PostgreSQL (7.4.5 currently). > > A debate is raging in the office regarding the idea of switching from > using a connection pool (that utilizes a single god-like database > user) to a model where each web user would have a mirror postgresql > user. All connections to the database (from the web/app server) would > be established with that user id. > > Some questions: > > Anyone see any issues with having thousands of postgresql users > established? Are there any internal limits? > > Previously, the connection pool (provided by jboss) would 'wait' for > an available connection until a timeout period was reached before > returning an error. Under the new scheme, we are limited by > max_connections (postgresql.conf)...and would return an error > immediately when no connections were available. Is there any way to > mitigate this? > > Does anyone else do this? Is it standard/recommended/taboo? Our > primary reason for this is database auditing. Our audit triggers > would now be able to pick up the user id directly instead of relying on > the application programmer to provide it (or some other potentially > unreliable method). Secondarily is the obvious benefit of security. > We could divide our users into groups and lock down table access as > appropriate.

We use JBoss also. I understand that using database authentication provides an additional layer of security and accountability, but alternatives are available without the high overhead (both administrative and runtime). Do you really want to try to administer thousands of database user accounts in addition to wherever you maintain these same accounts for non-DB authentication? This assumes, of course, that if you are interested in accountability at all, you authenticate somewhere. And I certainly wouldn't want to sacrifice memory that could be put to good use processing database requests to holding several thousand idle database connections.
Instead, again assuming you authenticate users, you can propogate that security context to JBoss. Then you can secure the EJB or MBean method that obtains the database connection from the pool (and presumably does something useful) to respect the security context. If you want to audit, you have the security context information, so you can extract the authentication credentials from there to write to an audit table (or even include in every row you write to the database.) -- Guy Rouillier ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] plperl doesn't release memory
Tom Lane <[EMAIL PROTECTED]> writes: > (Come to think of it, doesn't Perl normally use its very own private malloc? > Maybe there's an issue right there ...) Perl can be built either way. It should work to have two different malloc's running side by side as long as the correct free() is always called. Ie, as long as perl doesn't hand any data structures to postgres expecting postgres to free it or vice versa. -- greg ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] plperl doesn't release memory
Sven Willenberger <[EMAIL PROTECTED]> writes: > Yes, on these systems, perl was build with -Dusemymalloc (and > concurrently -Ui_malloc) so there could very well be an issue with > malloc pools going awry. Doing a quick dig reveals that Linux tends to > build perl (by default) with the system malloc which may explain why > your script did not display this same behavior. I can confirm that my copy is not using mymalloc: $ perl -V | grep -i alloc usemymalloc=n, bincompat5005=undef $ regards, tom lane ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] sort array optimisation in pl/perl
On Thu, 2005-03-24 at 15:49 +0100, GIROIRE Nicolas (COFRAMI) wrote: > I create an array which is the result of a query on a postgresql database, and > then I want to sort the rows in a particular way (impossible by a query on the > database).

can you give us more details on this particular sort order?

> My solution consists of putting the row at index m+1 into a temporary variable, > then moving all elements from index n to m up to indexes n+1 to m+1, and > finally putting my temporary variable at index n. > I want to know if somebody knows a better solution. > > I think of 2 solutions but I haven't managed to apply them: > - the first is to use a list in which I could move references, as in a > linked list > - the second is to move tab[n..m] to tab[n+1..m+1] in one > instruction, as in the Ada language

it all depends on the expected sizes of your arrays, but perl has some nice array operations, such as slices and splice(). these come to mind:

$x = $arr[$m+1]; @arr[$n+1 .. $m+1] = @arr[$n .. $m]; $arr[$n] = $x;

or:

@arr[$n .. $m+1] = ($arr[$m+1], @arr[$n .. $m]);

gnari ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] plperl doesn't release memory
At 3:14 PM -0500 3/24/05, Tom Lane wrote: Dan Sugalski <[EMAIL PROTECTED]> writes: ... I'm pretty sure for embedding use it uses whatever routines the embedder defines, but it's been a while since I've poked around in there. Hmm. plperl is definitely not doing anything to try to manipulate that behavior; maybe it should? Where can we find out about this? I'll have to go dig, but this: > Anyway, if perl's using its own memory allocator you'll want to rebuild it to not do that. When I tried to test this it seemed that memory did get released at the conclusion of each query --- at least "top" showed the backend process size dropping back down. But, again, I wouldn't be surprised if Sven's perl installation is configured differently than mine. ...implies perl's doing the Right Thing, otherwise there'd be no release of memory to the system. -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] plperl doesn't release memory
On Thu, 2005-03-24 at 13:51 -0500, Tom Lane wrote: > Sven Willenberger <[EMAIL PROTECTED]> writes: > > On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote: > >> The first thing to figure out is whether the leak is inside Perl or in > >> Postgres proper. If I were trying to do this I'd run the function a > >> couple times, then attach to the (idle) backend with gdb and do > >> call MemoryContextStats(TopMemoryContext) > > > Not sure entirely how to interpret the results ... a cursory examination > > shows 516096 total in cachememory but I don't know if that reflects the > > state of "unfreed" memory (or perhaps the 354728 used is unfreed?): > > That looks like the normal steady-state condition. The leak must be > inside Perl then. > > [ thinks for a bit... ] Actually it seems possible that there's a > problem with poor interaction between Postgres and Perl. During the SPI > query they will both be making pretty substantial memory demands, and it > could be that the underlying malloc library isn't coping gracefully and > is ending up with very fragmented memory. That could result in > out-of-memory problems when in fact neither package is leaking anything > per se. > > What you probably ought to do next is build Postgres with a debugging > malloc library to learn more about who's eating up what. I am not sure > whether libperl will automatically use the malloc attached to the main > executable or whether you need to whack it around too. (Come to think > of it, doesn't Perl normally use its very own private malloc? Maybe > there's an issue right there ...) > > regards, tom lane > Yes, on these systems, perl was build with -Dusemymalloc (and concurrently -Ui_malloc) so there could very well be an issue with malloc pools going awry. Doing a quick dig reveals that Linux tends to build perl (by default) with the system malloc which may explain why your script did not display this same behavior. I will try to rebuild perl using system malloc and see how that affects things. 
Sven
Re: [GENERAL] plperl doesn't release memory
Dan Sugalski <[EMAIL PROTECTED]> writes: > ... I'm pretty sure for embedding use it uses whatever routines > the embedder defines, but it's been a while since I've poked around > in there. Hmm. plperl is definitely not doing anything to try to manipulate that behavior; maybe it should? Where can we find out about this? > Anyway, if perl's using its own memory allocator you'll want to > rebuild it to not do that. When I tried to test this it seemed that memory did get released at the conclusion of each query --- at least "top" showed the backend process size dropping back down. But, again, I wouldn't be surprised if Sven's perl installation is configured differently than mine. regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] sort array optimisation in pl/perl
Hi, I build an array from the result of a query on a PostgreSQL database, and then I want to sort the rows in a particular way (one that is impossible to express in the query itself). My current solution is to copy the row at index m+1 into a temporary variable, shift all the elements from index n through m up to indices n+1 through m+1, and finally put the temporary variable back at index n. I want to know if somebody knows a better solution. I can think of 2 approaches but have not managed to apply them: - the first is to use a list in which I could move references around, as in a linked list - the second would be to move tab[n..m] to tab[n+1..m+1] in one instruction, as in the Ada language Does either of these exist, and is it better than my current solution? If yes, can you help me implement it? Best regards, Nicolas
Re: [GENERAL] plperl doesn't release memory
At 1:51 PM -0500 3/24/05, Tom Lane wrote: What you probably ought to do next is build Postgres with a debugging malloc library to learn more about who's eating up what. I am not sure whether libperl will automatically use the malloc attached to the main executable or whether you need to whack it around too. (Come to think of it, doesn't Perl normally use its very own private malloc? Maybe there's an issue right there ...) Perl can, yeah. If a perl -V shows a "usemymalloc=y" in the output somewhere then perl's using its own internal malloc and you're definitely never going to release memory to anything. If it's 'n' then it'll use the default malloc scheme -- I'm pretty sure for embedding use it uses whatever routines the embedder defines, but it's been a while since I've poked around in there. Anyway, if perl's using its own memory allocator you'll want to rebuild it to not do that. -- Dan --it's like this--- Dan Sugalski even samurai [EMAIL PROTECTED] have teddy bears and even teddy bears get drunk ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] How to get the size in bytes of a table data
Check out dbsize in contrib. -tfo -- Thomas F. O'Connell Co-Founder, Information Architect Sitening, LLC Strategic Open Source — Open Your i™ http://www.sitening.com/ 110 30th Avenue North, Suite 6 Nashville, TN 37203-6320 615-260-0005 On Mar 24, 2005, at 9:53 AM, Ricardo Fonseca e Campos wrote: Hi! Does someone know an easy way to get the size (KB) of the data stored in one or more tables? I read about the ANALYZE command but it doesn’t deal with such information. Thanks in advance, Ricardo. ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] plperl doesn't release memory
Sven Willenberger <[EMAIL PROTECTED]> writes: > On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote: >> The first thing to figure out is whether the leak is inside Perl or in >> Postgres proper. If I were trying to do this I'd run the function a >> couple times, then attach to the (idle) backend with gdb and do >> call MemoryContextStats(TopMemoryContext) > Not sure entirely how to interpret the results ... a cursory examination > shows 516096 total in cachememory but I don't know if that reflects the > state of "unfreed" memory (or perhaps the 354728 used is unfreed?): That looks like the normal steady-state condition. The leak must be inside Perl then. [ thinks for a bit... ] Actually it seems possible that there's a problem with poor interaction between Postgres and Perl. During the SPI query they will both be making pretty substantial memory demands, and it could be that the underlying malloc library isn't coping gracefully and is ending up with very fragmented memory. That could result in out-of-memory problems when in fact neither package is leaking anything per se. What you probably ought to do next is build Postgres with a debugging malloc library to learn more about who's eating up what. I am not sure whether libperl will automatically use the malloc attached to the main executable or whether you need to whack it around too. (Come to think of it, doesn't Perl normally use its very own private malloc? Maybe there's an issue right there ...) regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] plperl doesn't release memory
On Thu, 2005-03-24 at 11:34 -0500, Tom Lane wrote: > Sven Willenberger <[EMAIL PROTECTED]> writes: > > Any suggestions on how to trace what is going on? Debug output methods? > > The first thing to figure out is whether the leak is inside Perl or in > Postgres proper. If I were trying to do this I'd run the function a > couple times, then attach to the (idle) backend with gdb and do > call MemoryContextStats(TopMemoryContext) > to dump a summary of Postgres' memory usage to stderr. If that doesn't > show any remarkable bloat then the problem is inside Perl (and beyond my > ability to do much with). > > One thing I'm still a bit baffled about is why my test didn't show a > problem; it sure looks identical to yours. Maybe the issue is Perl > version specific? I tested using the current FC3 version, which is > perl-5.8.5-9. > > regards, tom lane Not sure entirely how to interpret the results ... a cursory examination shows 516096 total in cachememory but I don't know if that reflects the state of "unfreed" memory (or perhaps the 354728 used is unfreed?): TopMemoryContext: 32768 total in 3 blocks; 7392 free (51 chunks); 25376 used MessageContext: 8192 total in 1 blocks; 7912 free (1 chunks); 280 used PortalMemory: 8192 total in 1 blocks; 8176 free (1 chunks); 16 used CacheMemoryContext: 516096 total in 6 blocks; 161368 free (1 chunks); 354728 used lookup_state_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used lookup_ocn_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used lookup_lata_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used locate_npanxx_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used matall_intra_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used matall_inter_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used matall_npanxx_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used offshore_pkey: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_index_indrelid_index: 1024 total in 1 blocks; 912 free (0 
chunks); 112 used pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_namespace_nspname_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_shadow_usesysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_aggregate_fnoid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_class_relname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_cast_source_target_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_type_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_language_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_class_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_operator_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 712 free (0 chunks); 312 used pg_statistic_relid_att_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_proc_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_amop_opc_strat_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used pg_index_indexrelid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_conversion_name_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_conversion_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_amproc_opc_proc_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used pg_language_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_conversion_default_index: 1024 
total in 1 blocks; 712 free (0 chunks); 312 used pg_shadow_usename_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_namespace_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_group_sysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 712 free (0 chunks); 312 used pg_group_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_type_typname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used pg_opclass_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used pg_amop_opr_opc_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used MdSmgr: 8192 total in 1 blocks; 7000 free (0 chunks); 1192 used DynaHash: 8192 total in 1 blocks; 6776 free (0 chunks); 1416 used Operator class cache: 8192 total in 1 blocks; 5080 free (0 chunks); 31
Re: [GENERAL] Good Books
On Thu, 2005-03-24 at 10:59 -0500, [EMAIL PROTECTED] wrote: > "...and very interesting." > > The Date book should be required reading in computer science 101, but "very > interesting" implies that your only other hobby is watching grass grow. > > Rick [ referring to "An Introduction to Database Systems" by CJ Date ] Hah. I'm sure there are many that would find it boring, but I find it much more readable than, say, the Knuth books. It's all relative, I suppose. I found it interesting because at the time I was reading it it helped me solve some practical problems I was having without resorting to kludges and client-side programming. Regards, Jeff Davis ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Good Books
On Thursday 24 March 2005 4:37 am, [EMAIL PROTECTED] wrote: > Hi > > I am new to PostgreSQL and find that the standard documentation is > very thin. I would like to buy a more comprehensive book. Funny. I just downloaded and printed the comprehensive manual available at: http://www.postgresql.org/docs/manuals/ At 1422 US letter pages printed double-sided it ended up being over 3" or about 8cm thick. And it's available in commented and non-commented on-line versions as well. Although there is always room for improvement, I have not personally encountered any other open-source project with such comprehensive documentation. What specific information are you seeking. Maybe someone can point you in the right direction. Cheers, Steve ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Pgsql 8.0 on Win32 Production env.
On Thu, 2005-03-24 at 10:31, fabrizio ravazzini wrote: > Hello all, I have to build a web application and for > this I have to use one of my client's Windows32 > server or workstation. > Is postgresql 8.0 native suitable for a production > environment? > Thanks for any advice. > Fabri That really depends on what you mean by production. Are you gonna handle a workgroup of 10 or 15 people doing a phonebook type application, or are you going to handle 250,000 transactions a day in a reservation system? For certain values of production environment, pgsql on windows is probably fine. But until YOU test it under your own load, you won't really know, and neither will we. ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] building postgresql-8.0.1 on Linux with GCC 4
Gianni Mariani <[EMAIL PROTECTED]> writes: > Firstly, the gcc version is : > gcc (GCC) 4.0.0 20050130 (experimental) > 1. While running ./configure, it failed on not being able to detect the > type of parameters to accept(). You probably need a newer gcc4. I know that PG 8.0.1 builds in Red Hat rawhide, with a fairly recent gcc4 ... and I also know that there were bugs of this general ilk in gcc4 as recently as early March, because I hit some. https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=149098 > rtget.c:99: warning: left-hand operand of comma expression has no effect > They all seem related to : > PageGetItem(page, iid); > Is this an issue ? If you compiled without --enable-cassert I'd not worry too much ... it's probably just being overly noisy about AssertMacro. regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] building postgresql-8.0.1 on Linux with GCC 4
I had some interesting issues come up, I'm not sure they're related to GCC 4, but that's the only thing that I can think of. Firstly, the gcc version is : gcc (GCC) 4.0.0 20050130 (experimental) 1. While running ./configure, it failed on not being able to detect the type of parameters to accept(). It turns out that the second parameter to accept() is __SOCKADDR_ARG, which is not one of the options in ac_func_accept_argtypes.m4. Adding __SOCKADDR_ARG to : for ac_cv_func_accept_arg2 in 'struct sockaddr *' 'const struct sockaddr *' 'void *'; do like so for ac_cv_func_accept_arg2 in '__SOCKADDR_ARG' 'struct sockaddr *' 'const struct sockaddr *' 'void *'; do corrected the issue. 2. The compile ran successfully, however there were a number of warnings like: rtget.c:99: warning: left-hand operand of comma expression has no effect They all seem related to : PageGetItem(page, iid); Is this an issue ?
Re: [GENERAL] plperl doesn't release memory
Sven Willenberger <[EMAIL PROTECTED]> writes: > Any suggestions on how to trace what is going on? Debug output methods? The first thing to figure out is whether the leak is inside Perl or in Postgres proper. If I were trying to do this I'd run the function a couple times, then attach to the (idle) backend with gdb and do call MemoryContextStats(TopMemoryContext) to dump a summary of Postgres' memory usage to stderr. If that doesn't show any remarkable bloat then the problem is inside Perl (and beyond my ability to do much with). One thing I'm still a bit baffled about is why my test didn't show a problem; it sure looks identical to yours. Maybe the issue is Perl version specific? I tested using the current FC3 version, which is perl-5.8.5-9. regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Pgsql 8.0 on Win32 Production env.
Hello all, I have to build a web application and for this I have to use one of my client's Windows32 server or workstation. Is postgresql 8.0 native suitable for a production environment? Thanks for any advice. Fabri
Re: [GENERAL] Converting from single user w/pool to multiple users
I wrote: > There's no hard limit. Offhand the only thing I can think of that might > be a bit slow is password lookup during connection --- I think that does > a linear scan through a list of usernames. This is only an issue if you > use passwords of course, but I suppose you probably would. (It would > likely not be hard to improve the search algorithm, if it did become > a bottleneck.) BTW, I take that back --- it already is a binary search, so there shouldn't be any problem with thousands of users. Still, I like your idea of continuing to pool the connections better. Backend startup is a bit expensive. regards, tom lane ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Converting from single user w/pool to multiple users
> Could we continue to use our existing connection pool (via our app server) and every time the application 'gets' a connection (as a superuser), we then 'SET SESSION AUTHORIZATION' to the appropriate user who is performing the action? > Under the new scheme, we are limited by max_connections (postgresql.conf)...and would return an error immediately when no connections were available. Is there any way to mitigate this? This is doubtless the worst problem... regards, tom lane
Re: [GENERAL] Converting from single user w/pool to multiple users
Jeff Amiel <[EMAIL PROTECTED]> writes: > Could we continue to use our existing connection pool (via our app > server) and every time the application 'gets' a connection (as a > superuser) , we then 'SET SESSION AUTHORIZATION' to the appropriate user > who is performing the action? That would work to the extent that you filter SQL commands so a nefarious user can't issue his own 'SET SESSION AUTHORIZATION' to become someone else ... regards, tom lane ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
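Tom's caveat is the crux: if the pool hands out superuser connections and switches identity itself, the wrapper must build the SET SESSION AUTHORIZATION statement on the application's behalf and must never interpolate a user-supplied name without quoting it as a SQL identifier. A minimal sketch of that quoting step (hypothetical helper names, added for illustration; a real driver may offer its own quoting facilities):

```python
def quote_ident(name: str) -> str:
    """Quote a role name as a SQL identifier: wrap in double quotes
    and double any embedded double quotes, per SQL quoting rules."""
    return '"' + name.replace('"', '""') + '"'

def set_session_auth(name: str) -> str:
    """Build the statement a pool wrapper would issue on connection checkout."""
    return "SET SESSION AUTHORIZATION " + quote_ident(name)

print(set_session_auth('web_user'))  # SET SESSION AUTHORIZATION "web_user"
print(set_session_auth('odd"name'))  # SET SESSION AUTHORIZATION "odd""name"
```

The wrapper would also need to restore the superuser identity (RESET SESSION AUTHORIZATION) before returning a connection to the pool, and, as Tom notes, refuse to pass through any client-supplied SQL containing its own SET SESSION AUTHORIZATION.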
Re: [GENERAL] Good Books
"...and very interesting." The Date book should be required reading in computer science 101, but "very interesting" implies that your only other hobby is watching grass grow. Rick Jeff Davis <[EMAIL PROTECTED]> wrote on 03/24/2005 09:57 AM: If you are looking for a database theory book, I highly recommend "An Introduction to Database Systems" by C.J. Date. It's very theoretical, but it's precisely worded and very interesting. Regards, Jeff Davis On Thu, 2005-03-24 at 06:54 -0600, Scott Marlowe wrote: > On Thu, 2005-03-24 at 06:37, [EMAIL PROTECTED] wrote: > > Hi > > > > I am new to PostgreSQL and find that the standard documentation is very > > thin. I would like to buy a more comprehensive book. All the available > > books seem very out of date!!! Firstly, does it matter if I buy a book > > that was published in Feb 2003? Will such a book be hopelessly out of > > date? > > > > Does anyone know of any books that will be releasing imminently? > > The first books to pick up and become familiar with are the ones on > database theory, not specifically postgresql. While a lot of other > databases come with documentation that explains a lot of relational > theory, PostgreSQL comes with documentation that pretty much assumes you > already understand database basics and are looking for how to do some > particular thing in postgresql. So it's not that it's thin in terms of > covering PostgreSQL, but thin in terms that it's focus is not general > database theory, but postgresql specifics. > > A lot of the books written about PostgreSQL have more information about > theory, but honestly, a good generic database oriented book is usually > more helpful at first for most folks. > > If you're already familiar with basic db theory, then look for some of > the pgsql books that have been placed online by their publishers.
Re: [GENERAL] Converting from single user w/pool to multiple users
Jeff Amiel <[EMAIL PROTECTED]> writes: > Anyone see any issues with having thousands of postgresql users > established? Are there any internal limits? There's no hard limit. Offhand the only thing I can think of that might be a bit slow is password lookup during connection --- I think that does a linear scan through a list of usernames. This is only an issue if you use passwords of course, but I suppose you probably would. (It would likely not be hard to improve the search algorithm, if it did become a bottleneck.) > Under the new scheme, we are limited by max_connections > (postgresql.conf)...and would return an error immediately when no > connections were available. Is there any way to mitigate this? This is doubtless the worst problem... regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
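For a sense of why the binary search Tom mentions holds up at the scale Jeff describes, here is a sketch (in Python, purely illustrative of the technique; not Postgres's actual lookup code) of finding a name in a sorted list of thousands of users:

```python
import bisect

def find_user(sorted_users, name):
    """O(log n) membership test on a sorted username list,
    as opposed to a linear scan over every entry."""
    i = bisect.bisect_left(sorted_users, name)
    return i < len(sorted_users) and sorted_users[i] == name

# Ten thousand users: a binary search touches ~14 entries per lookup,
# where a linear scan would touch up to all 10,000.
users = sorted("user%05d" % i for i in range(10000))
print(find_user(users, "user00042"))  # True
print(find_user(users, "nobody"))     # False
```

With log2(10000) ≈ 13.3 comparisons per connection attempt, thousands of users cost essentially nothing at lookup time, which matches Tom's conclusion that there shouldn't be a problem.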
[GENERAL] How to get the size in bytes of a table data
Hi! Does someone know an easy way to get the size (KB) of the data stored in one or more tables? I read about the ANALYZE command but it doesn’t deal with such information. Thanks in advance, Ricardo.
Re: [GENERAL] Changing constraints to deferrable
On Mar 24, 2005, at 12:42 AM, Greg Stark wrote: There could be some tricky bits around making a deferrable constraint not deferrable. And disabling a constraint would be nice too, reenabling it would require rechecking but at least it would eliminate the error-prone manual process of reentering the definition. there are some tricky bits. check the archives for either this list or the performance list for what I did to mark my reference checks deferrable. it was within the last few months (no more than 6). Vivek Khera, Ph.D. +1-301-869-4449 x806 ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[GENERAL] Converting from single user w/pool to multiple users
Web based application that utilizes PostgreSQL (7.4.5 currently). A debate is raging in the office regarding the idea of switching from using a connection pool (that utilizes a single god-like database user) to a model where each web user would have a mirror postgresql user. All connections to the database (from the web/app server) would be established with that user id. Some questions: Anyone see any issues with having thousands of postgresql users established? Are there any internal limits? Previously, the connection pool (provided by jboss) would 'wait' for an available connection until a timeout period was reached before returning an error. Under the new scheme, we are limited by max_connections (postgresql.conf)...and would return an error immediately when no connections were available. Is there any way to mitigate this? Does anyone else do this? Is it standard/recommended/taboo? Our primary reason for this is database auditing. Our audit triggers would now be able to pick up the user id directly instead of relying on the application programmer to provide it (or some other potentially unreliable method). Secondarily is the obvious benefit of security. We could divide our users into groups and lock down table access as appropriate.
Re: [GENERAL] Questions regarding interaction of stored functions and transactions
Bill Moran <[EMAIL PROTECTED]> writes: > Let's take the following fictional scenerio: > BEGIN; > INSERT INTO table1 VALUES ('somestring'); > INSERT INTO table1 VALUES ('anotherstring'); > SELECT user_defined_function(); > COMMIT; > In this case, user_defined_function() does a lot more table manipulation. > I don't want that to be done if any statement prior fails, but it seems as > if it's always done, regardless. It seems as if the second INSERT is not > executed if the first fails, but the function is always called. Sorry, I don't believe a word of that. If the first insert fails, everything will be rejected until COMMIT. Possibly you need to show a less fictionalized version of your problem. regards, tom lane ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Good Books
There's actually a list of recommended books up on techdocs: http://techdocs.postgresql.org/#books It could use a little updating, but is still pretty good. Robert Treat On Thu, 2005-03-24 at 09:57, Jeff Davis wrote: > If you are looking for a database theory book, I highly recommend "An > Introduction to Database Systems" by C.J. Date. It's very theoretical, > but it's precisely worded and very interesting. > > Regards, > Jeff Davis > > > On Thu, 2005-03-24 at 06:54 -0600, Scott Marlowe wrote: > > On Thu, 2005-03-24 at 06:37, [EMAIL PROTECTED] wrote: > > > Hi > > > > > > I am new to PostgreSQL and find that the standard documentation is very > > > thin. I would like to buy a more comprehensive book. All the available > > > books seem very out of date!!! Firstly, does it matter if I buy a book > > > that was published in Feb 2003? Will such a book be hopelessly out of > > > date? > > > > > > Does anyone know of any books that will be releasing imminently? > > > > The first books to pick up and become familiar with are the ones on > > database theory, not specifically postgresql. While a lot of other > > databases come with documentation that explains a lot of relational > > theory, PostgreSQL comes with documentation that pretty much assumes you > > already understand database basics and are looking for how to do some > > particular thing in postgresql. So it's not that it's thin in terms of > > covering PostgreSQL, but thin in terms that it's focus is not general > > database theory, but postgresql specifics. > > > > A lot of the books written about PostgreSQL have more information about > > theory, but honestly, a good generic database oriented book is usually > > more helpful at first for most folks. > > > > If you're already familiar with basic db theory, then look for some of > > the pgsql books that have been placed online by their publishers. 
> > > > ---(end of broadcast)--- > > TIP 9: the planner will ignore your desire to choose an index scan if your > > joining column's datatypes do not match > > > ---(end of broadcast)--- > TIP 6: Have you searched our list archives? > >http://archives.postgresql.org -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Good Books
Robert Treat <[EMAIL PROTECTED]> writes: > On Thu, 2005-03-24 at 08:41, [EMAIL PROTECTED] wrote: >> Refering to my mention of exceptions above, the following is information >> that I could not find in the standard docs: >> How to use(Return using RAISE) the actual exception code and message once >> you have handled the error. This is especially usefull if you have used >> WHEN OTHERS to catch the error. > Of course this assumes you can do it at all ;-) Which you can't. That isn't a documentation shortcoming, it's an implementation shortcoming ... regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] How to retrieve the comment for a constraint?
Wolfgang Drotschmann <[EMAIL PROTECTED]> writes: > Now, imagine you know the name of a constraint or all of them for a given > table, e.g. destilled via > SELECT * > FROM information_schema.table_constraints >WHERE table_name = ''; > How can I get the comment for each of these constraints? Something like this... regression=# alter table foo add constraint bar check(id > 0); ALTER TABLE regression=# comment on constraint bar on foo is 'check its positive'; COMMENT regression=# select obj_description(oid, 'pg_constraint') from pg_constraint where conname = 'bar' and conrelid = 'foo'::regclass; obj_description check its positive (1 row) You could join to pg_description explicitly instead of using obj_description(), and/or join to pg_class instead of using regclass. regards, tom lane ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Good Books
If you are looking for a database theory book, I highly recommend "An Introduction to Database Systems" by C.J. Date. It's very theoretical, but it's precisely worded and very interesting. Regards, Jeff Davis On Thu, 2005-03-24 at 06:54 -0600, Scott Marlowe wrote: > On Thu, 2005-03-24 at 06:37, [EMAIL PROTECTED] wrote: > > Hi > > > > I am new to PostgreSQL and find that the standard documentation is very > > thin. I would like to buy a more comprehensive book. All the available > > books seem very out of date!!! Firstly, does it matter if I buy a book > > that was published in Feb 2003? Will such a book be hopelessly out of > > date? > > > > Does anyone know of any books that will be releasing imminently? > > The first books to pick up and become familiar with are the ones on > database theory, not specifically postgresql. While a lot of other > databases come with documentation that explains a lot of relational > theory, PostgreSQL comes with documentation that pretty much assumes you > already understand database basics and are looking for how to do some > particular thing in postgresql. So it's not that it's thin in terms of > covering PostgreSQL, but thin in terms that its focus is not general > database theory, but postgresql specifics. > > A lot of the books written about PostgreSQL have more information about > theory, but honestly, a good generic database-oriented book is usually > more helpful at first for most folks. > > If you're already familiar with basic db theory, then look for some of > the pgsql books that have been placed online by their publishers. > > ---(end of broadcast)--- > TIP 9: the planner will ignore your desire to choose an index scan if your > joining column's datatypes do not match ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Questions regarding interaction of stored functions
Let's take the following fictional scenario:

BEGIN;
INSERT INTO table1 VALUES ('somestring');
INSERT INTO table1 VALUES ('anotherstring');
SELECT user_defined_function();
COMMIT;

In this case, user_defined_function() does a lot more table manipulation. I don't want that to be done if any statement prior fails, but it seems as if it's always done, regardless. It seems as if the second INSERT is not executed if the first fails, but the function is always called.

If any one of the statements within the transaction (including the function) fails, the entire transaction will need to roll back. If you are running 8.0 you can use savepoints to have only certain parts of the transaction roll back, and then continue forward.

Sincerely,

Joshua D. Drake

So ... I'm a little fuzzy on this. Is there a doc that details this behaviour? TIA.

-- Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC Postgresql support, programming shared hosting and dedicated hosting. +1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com PostgreSQL Replicator -- production quality replication for PostgreSQL ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
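To make the savepoint suggestion concrete, here is a sketch of how a failing statement can be rolled back without abandoning the whole transaction (PostgreSQL 8.0 or later; the table and function names are the fictional ones from the example above):

```sql
BEGIN;
INSERT INTO table1 VALUES ('somestring');
SAVEPOINT before_second;
INSERT INTO table1 VALUES ('anotherstring');
-- If the INSERT above failed, undo just that part of the work:
ROLLBACK TO SAVEPOINT before_second;
-- The transaction is still alive, so the function can still run:
SELECT user_defined_function();
COMMIT;
```

Without the savepoint, any error puts the whole transaction into an aborted state, and everything — including the function's work — is discarded at COMMIT/ROLLBACK.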
[GENERAL] Questions regarding interaction of stored functions and transactions
I'm a little fuzzy on this, and I've been unable to find docs that clear it up for me. A pointer to a helpful doc would be just as welcome as an outright explanation ;) Let's take the following fictional scenario:

BEGIN;
INSERT INTO table1 VALUES ('somestring');
INSERT INTO table1 VALUES ('anotherstring');
SELECT user_defined_function();
COMMIT;

In this case, user_defined_function() does a lot more table manipulation. I don't want that to be done if any statement prior fails, but it seems as if it's always done, regardless. It seems as if the second INSERT is not executed if the first fails, but the function is always called. So ... I'm a little fuzzy on this. Is there a doc that details this behaviour? TIA. -- Bill Moran Potential Technologies http://www.potentialtech.com ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] multi line text data/query ?bug?
Martijn van Oosterhout writes: > IMHO, if you're trying to write portably, don't just hit enter when you > want an end of line, use \n or \r to indicate *exactly* what you mean. Indeed. We are already permissive about line endings in SQL text, so this discussion really boils down only to whether we should reinterpret data that's inside a string literal. There are good style reasons why you should never put an unescaped newline into a string literal in the first place ... and if you do, I don't think it's the database's job to second-guess what you meant. regards, tom lane ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
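As a sketch of what "say exactly what you mean" looks like in practice: in 8.0-era PostgreSQL, backslash escapes are interpreted inside ordinary single-quoted literals, so the line ending can be written out explicitly instead of typed as a raw keystroke (the table `notes` here is hypothetical):

```sql
-- Portable: the newline is an explicit \n escape, not a raw keypress,
-- so the stored value is the same regardless of the client's platform.
INSERT INTO notes (body) VALUES ('first line\nsecond line');

-- A Windows-style CRLF ending, spelled out explicitly:
INSERT INTO notes (body) VALUES ('first line\r\nsecond line');
```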
Re: [GENERAL] Good Books
On Thu, 2005-03-24 at 08:41, [EMAIL PROTECTED] wrote: > Hi Bruno > > There are a lot of topics that I feel could have better (or more > comprehensive) examples. In particular I am thinking of plpgsql. One > example is information about working with exceptions (PostgreSQL-specific) > and another is the small amount of info about the RAISE statement. > > Referring to my mention of exceptions above, the following is information > that I could not find in the standard docs: > How to use (return using RAISE) the actual exception code and message once > you have handled the error. This is especially useful if you have used > WHEN OTHERS to catch the error. > To be honest, you're not going to find any books covering that because the code is just too new to have been in the pipeline for any of the books currently in print. The only one I can think of that might have something is Korry Douglas's PostgreSQL, 2nd Edition, which I have heard has some expanded information on plpgsql, though I haven't read it so I can't verify that for you (read that as: don't buy that book based on the above... but see if you can find it and verify the above). I think the only other book that is soon to be out is the Beginning Databases with PostgreSQL book from Apress, but I know it doesn't cover that topic at the depth you're looking for. Of the other books I know about in the works... some of them may include that kind of info, but they won't be out until summer at the earliest and more likely the end of the year. Of course this assumes you can do it at all ;-) Robert Treat -- Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Good Books
On Thu, Mar 24, 2005 at 03:41:06PM +0200, [EMAIL PROTECTED] wrote: Craig, > There are a lot of topics that I feel could have better (or more > comprehensive) examples. In particular I am thinking of plpgsql. One > example is information about working with exceptions (PostgreSQL-specific) > and another is the small amount of info about the RAISE statement. > > Referring to my mention of exceptions above, the following is information > that I could not find in the standard docs: > How to use (return using RAISE) the actual exception code and message once > you have handled the error. This is especially useful if you have used > WHEN OTHERS to catch the error. No book will tell you how to do that, because the EXCEPTION support in plpgsql is new as of Postgres 8.0, and some things are not there yet. If the documentation doesn't say how to do it, then maybe there is no way. You'd need to convince a hacker that it's a useful feature so they can add it. -- Alvaro Herrera (<[EMAIL PROTECTED]>) "I personally became interested in Linux while I was dating an English major who wouldn't know an operating system if it walked up and bit him." (Val Henson) ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] How to retrieve the comment for a constraint?
Hello all, using at least PostgreSQL 7.4.2, one can use the command

COMMENT ON CONSTRAINT <constraint name> ON <table name> IS '<text>';

to document a constraint defined via CONSTRAINT ... in the context of a table. Now, imagine you know the name of a constraint or all of them for a given table, e.g. distilled via

SELECT *
FROM information_schema.table_constraints
WHERE table_name = '<table name>';

How can I get the comment for each of these constraints? I mean, I can query pg_catalog.pg_description with

SELECT *
FROM pg_description
WHERE description ~ '<some pattern>';

and get the tuple (objoid | classoid | objsubid | description) with the string I want to extract. I can see that it is there. In short: given the names of schema, table and constraint, how can I get the description out of pg_catalog.pg_description? Am I overlooking something in the forest of system catalogs, OIDs, the information schema...? Thank you! Best regards, Wolfgang ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] plperl doesn't release memory
On Wed, 2005-03-23 at 18:25 -0500, Tom Lane wrote:
> Sven Willenberger <[EMAIL PROTECTED]> writes:
> > I have been experiencing an issue with plperl and PostgreSQL 8.0.1 in
> > that after calling a plperl function memory does not get released.
>
> AFAICT the result of spi_exec_query gets released fine, as soon as it's
> no longer referenced within perl. Perhaps your perl code is written in
> such a way that a reference to the hash result value remains live after
> the function exit?
>
> I tried this:
>
> create or replace function nrows(text) returns int as $$
> my ($tabname) = @_;
> my $rv = spi_exec_query("select * from $tabname");
> return $rv->{processed};
> $$ LANGUAGE plperl;
>
> and ran it repeatedly against a large table. The memory usage went
> up as expected, but back down again as soon as the function exited.
>
> If you think it's actually a plperl bug, please show a self-contained
> example.

The query in question that we used for testing is:

CREATE OR REPLACE FUNCTION f_dom_e_lcr() RETURNS text AS $$
my $on_shore = "select root_decks.id, material_all.npanxx, material_all.carrier, material_all.inter
    from root_decks, material_all, lookup
    where lookup.state not in (select state from offshore)
      and lookup.npanxx = material_all.npanxx
      and root_decks.type = 'ie'
      and root_decks.carrier = material_all.carrier;";
my $rv = spi_exec_query($on_shore);
my $nrows = $rv->{processed};
return $nrows;
$$ LANGUAGE plperl;

Now, thinking that perhaps the $nrows variable was getting stuck, we tried the following, which resulted in the exact same memory issue:

CREATE OR REPLACE FUNCTION f_dom_e_lcr() RETURNS text AS $$
my $on_shore = "select root_decks.id, material_all.npanxx, material_all.carrier, material_all.inter
    from root_decks, material_all, lookup
    where lookup.state not in (select state from offshore)
      and lookup.npanxx = material_all.npanxx
      and root_decks.type = 'ie'
      and root_decks.carrier = material_all.carrier;";
my $rv = spi_exec_query($on_shore);
return $rv->{processed};
$$ LANGUAGE plperl;

The result set is just under 1 million rows, and top shows postgres using some 600MB of memory. After the 3rd run of this function on a 1GB RAM box, the error mentioned in the original part of this thread occurs and the database restarts. Any suggestions on how to trace what is going on? Debug output methods?

Sven

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [GENERAL] Function results written to memory, then sent?
On Mon, Mar 14, 2005 at 11:35:12PM -0700, Steve - DND wrote: > I was reading through the docs today, and came across a paragraph that > indicated when plpgsql queries are executed on the server, the results are > all written to memory(or disk if necessary), and not streamed as available. > I can't find the doc page which said it, but does anyone know if this > applies to regular SQL as well, or is it just plpgsql specific. If it > applies to either or both, are there any current plans to not have PG not > behave in this manner, and stream the results of a query as they become > available? It is only plpgsql. Pl/pgsql uses the C interface, which allows both things to happen (so you can write a function in C which does one thing or the other). SQL is implemented internally (not through the C interface) and streams the results as they are available. -- Alvaro Herrera (<[EMAIL PROTECTED]>) "When the proper man does nothing (wu-wei), his thought is felt ten thousand miles." (Lao Tse) ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
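The difference Alvaro describes can be made concrete by comparing a set-returning function written in each language — a sketch, assuming a large table named `big_table`: the SQL-language version can hand rows to the caller as they are produced, while the plpgsql version accumulates the complete result set before anything is sent.

```sql
-- SQL-language function: rows can be streamed to the caller.
CREATE FUNCTION all_rows_sql() RETURNS SETOF big_table AS
'SELECT * FROM big_table' LANGUAGE sql;

-- plpgsql function: the full result set is materialized first.
CREATE FUNCTION all_rows_plpgsql() RETURNS SETOF big_table AS $$
DECLARE
    r big_table%ROWTYPE;
BEGIN
    FOR r IN SELECT * FROM big_table LOOP
        RETURN NEXT r;  -- row is buffered, not sent immediately
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```

With a large table, the second form's buffering is where the memory (or disk spill) cost that the docs warn about comes from.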
Re: [GENERAL] Good Books
Hi Bruno There are a lot of topics that I feel could have better (or more comprehensive) examples. In particular I am thinking of plpgsql. One example is information about working with exceptions (PostgreSQL-specific) and another is the small amount of info about the RAISE statement. Referring to my mention of exceptions above, the following is information that I could not find in the standard docs: How to use (return using RAISE) the actual exception code and message once you have handled the error. This is especially useful if you have used WHEN OTHERS to catch the error. Thanks Craig > On Thu, Mar 24, 2005 at 14:37:59 +0200, > [EMAIL PROTECTED] wrote: >> Hi >> >> I am new to PostgreSQL and find that the standard documentation is very >> thin. I would like to buy a more comprehensive book. All the available >> books seem very out of date!!! Firstly, does it matter if I buy a book >> that was published in Feb 2003? Will such a book be hopelessly out of >> date? > > What parts of the documentation do you find thin? Without knowing what > you need extra info on, it will be hard to recommend other documentation. > > ---(end of broadcast)--- > TIP 5: Have you checked our extensive FAQ? > >http://www.postgresql.org/docs/faq > ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Simple query takes a long time on win2K
On Wed, Mar 23, 2005 at 08:49:22PM -0700, A. Mous wrote: > case, all test win2K machines (Celeron 400 up to pIII 800) retrieved the > data from disk in under 100ms but took an additional 4000ms to send to the > local client. This is observed even if QoS packet scheduler is installed. If you connect to the local IP (i.e. not 127.0.0.1 but some other interface), does the same thing happen? (This would tell you whether the problem lies in some sort of special problem routing localhost, or whether it's something else.) A -- Andrew Sullivan | [EMAIL PROTECTED] The fact that technology doesn't work is no bar to success in the marketplace. --Philip Greenspun ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Good Books
[EMAIL PROTECTED] wrote: > I am new to PostgreSQL and find that the standard documentation is > very thin. It's about 1200 pages, which is thicker than any other book you will find. Certainly, some books cover specific topics in more depth or different style, but you need to decide which topics you would be interested in. > I would like to buy a more comprehensive book. I think that is impossible. You can get more specialized or in-depth books, but not more comprehensive ones. > All the > available books seem very out of date!!! Firstly, does it matter if I > buy a book that was published in Feb 2003? Will such a book be > hopelessly out of date? That, too, depends on the topic. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Good Books
On Thu, Mar 24, 2005 at 14:37:59 +0200, [EMAIL PROTECTED] wrote: > Hi > > I am new to PostgreSQL and find that the standard documentation is very > thin. I would like to buy a more comprehensive book. All the available > books seem very out of date!!! Firstly, does it matter if I buy a book > that was published in Feb 2003? Will such a book be hopelessly out of > date? What parts of the documentation do you find thin? Without knowing what you need extra info on, it will be hard to recommend other documentation. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Good Books
On Thu, 2005-03-24 at 06:37, [EMAIL PROTECTED] wrote: > Hi > > I am new to PostgreSQL and find that the standard documentation is very > thin. I would like to buy a more comprehensive book. All the available > books seem very out of date!!! Firstly, does it matter if I buy a book > that was published in Feb 2003? Will such a book be hopelessly out of > date? > > Does anyone know of any books that will be released imminently? The first books to pick up and become familiar with are the ones on database theory, not specifically PostgreSQL. While a lot of other databases come with documentation that explains a lot of relational theory, PostgreSQL comes with documentation that pretty much assumes you already understand database basics and are looking for how to do some particular thing in PostgreSQL. So it's not that the documentation is thin in its coverage of PostgreSQL; rather, its focus is PostgreSQL specifics, not general database theory. A lot of the books written about PostgreSQL have more information about theory, but honestly, a good generic database-oriented book is usually more helpful at first for most folks. If you're already familiar with basic db theory, then look for some of the pgsql books that have been placed online by their publishers. ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Good Books
Hi, I am new to PostgreSQL and find that the standard documentation is very thin. I would like to buy a more comprehensive book. All the available books seem very out of date!!! Firstly, does it matter if I buy a book that was published in Feb 2003? Will such a book be hopelessly out of date? Does anyone know of any books that will be released imminently? Thanks Craig ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] MS Access to PostgreSQL
Hi Edward, Michael Fuhr's solution worked (Mar 11). I had to write a method to insert four backslashes into the path being searched for. For example, if the stored path in the DB was folder1\folder2\folder3\, then in order for PostgreSQL to search against this I had to search for path LIKE 'folder1\\\\folder2\\\\folder3\\\\%'. Thanks to all for help and suggestions. WS On Sun, 20 Mar 2005 00:21:00 +, Edward Macnaghten <[EMAIL PROTECTED]> wrote: > William Shatner wrote: > > I have recently migrated from MS Access to PostgreSQL.Previously I had > > a SQL command > > > > ResultSet aGroupResultSet = aGroupPathStmt.executeQuery( > > "SELECT \"groupID\",\"fullpath\" FROM \"groups\" WHERE > > \"fullpath\" Like '" + > > aPath + "'"); > > > > > > > > where aPath was equal to 'folder\another folder\%'. > > > > > > Are you sure? In MS-Access JET engine it uses the '*' character instead > of the '%' one as a "like" wildcard. > > Two things you can try... If you have attached the table "groups" in > MS-Access and are using it through JET (as the code you provided would > suggest) then try changing the "%" character to "*" - the JET engine > will convert that to % for you, whereas it may escape the "%" character > you have supplied to keep the behaviour the same as JET. > > The other possibility is to use the "dbPassThrough" parameter and > execute it as a pass through query, here the SQL is sent to the > PostgreSQL engine unchanged. > > All in all I am sure this is an MS-Access problem rather than a Postgres > one. > > Eddy > > ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
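For reference, this is why four backslashes end up in the pattern — a sketch, using the column names from the message above: the string-literal parser halves each pair once, and LIKE's own escape handling halves the result again, so `\\\\` in the source becomes a single literal backslash by the time the comparison runs.

```sql
-- Each \\\\ survives string-literal parsing as \\, which LIKE
-- then reads as one escaped literal backslash.
SELECT "groupID", "fullpath"
FROM "groups"
WHERE "fullpath" LIKE 'folder1\\\\folder2\\\\folder3\\\\%';
```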
Re: [GENERAL] multi line text data/query ?bug?
On Thu, Mar 24, 2005 at 09:24:11AM +0200, Sim Zacks wrote: > The difference between a Tab and a newline is that tab is a universally > recognized single ascii character while newline is in flux. Aside from this, > a tab is a quasi-viewable character as the cursor will not go to the middle > of the tab. Meaning if the tab takes up the space of 10 characters, you > could not scroll to the place where the 5th character would be if it were in > fact 10 spaces. You cannot highlight half of a tab in editors that allow > text highlighting. I would therefore say that a tab is as visible as a space > and can be easily differentiated. On the other hand, it is impossible to > determine which binary characters the editor stuck in at the end of a newline > without looking at the binary/hex code. Actually, Emacs has a space-through-tab mode in which you can just move through a tab as if it were spaces. If you delete or insert, it automatically reforms the tabs and spaces around it. Several editors have an auto line-ending mode in which they'll detect the end-of-line character and apply that everywhere. Admittedly this is an extreme case, but editors regularly show things that don't reflect the underlying file. > I understand the complexity of dealing with multiple operating systems, but > seriously, how many non-viewable characters can be embedded in text that > actually make a difference between operating systems? Are there any besides > newline? Sure, the character 0xE9 means different things depending on the encoding and will sort differently based on the locale. Text files generally don't indicate what encoding they are, leading to all sorts of confusion. Unix tends to use Latin1 or UTF8. Windows has its own encoding. IMHO, if you're trying to write portably, don't just hit enter when you want an end of line, use \n or \r to indicate *exactly* what you mean. Using a variable behaviour and expecting the server to fix it for you is wrong. 
I believe the server should take exactly what the client gives, since the client is the only one who knows the type for sure. -- Martijn van Oosterhout http://svana.org/kleptog/ > Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a > tool for doing 5% of the work and then sitting around waiting for someone > else to do the other 95% so you can sue them.
Re: [GENERAL] postgresql unicode lower/upper problem
I modified pg_createcluster and added locale options:

exec $initdb, '--locale', 'ru_RU.UTF-8', '--encoding', 'UTF-8', '-D', $datadir

then dropped and recreated the cluster with my locale. Now lower and upper work correctly with Unicode. P.S. It's unfortunate that we cannot modify lc_ctype and lc_collate from postgresql.conf the way we can lc_messages, lc_monetary, ... I think it would be a good idea to add locale-specific options to pg_createcluster... On Wed, 23 Mar 2005 12:17:14 -0500, Tom Lane <[EMAIL PROTECTED]> wrote: > Sergey Levchenko <[EMAIL PROTECTED]> writes: > > I am not able to get work lower and upper functions on postgresql > > v8.0.1 and 8.1b(current cvs copy). I use Debian SID i686 GNU/Linux. > > Locale: ru_RU.KOI8-R > > > createdb -E UNICODE test > > I think the problem is you selected a database encoding that doesn't > match what the locale expects. You can't really mix-and-match if you > expect locale-specific stuff like upper/lower to work. For that > locale you must use -E KOI8. > > regards, tom lane > ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster