Re: [GENERAL] password cookie
> My suggestion is "don't do that". I tried to do it once, years ago, and regretted it deeply.

Do you mean "don't try to fake Postgres' authorisation" (which I don't want to), or "don't set up your webservice so that users receive data according to their own rights in the database, where each frontend user equals a database user" (which I do want to)?

WBL
Re: [GENERAL] Reducing pg_dump & pg_restore times
Ron Johnson wrote:
> On 10/26/06 01:20, Chris wrote:
>>> Coder At Heart wrote:
>>>> Hi! Restore & backup take time in the tune of 6 hrs on a Linux, 4 proc,
>>>> 32 GB RAM machine for a 60GB database.
>>>
>>> Please always cc the mailing list.
>>>
>>> I've never played with databases that large so I don't know if that's
>>> good or bad... others on the list will be able to offer advice/suggestions.
>
> RAM certainly helps, but pg_dump is single-threaded, and, by
> definition, backing up and restoring 60GB of data is *extremely* IO
> constrained.

Well, from what I have seen, on large databases with a sufficiently fast disk-IO subsystem it is actually the CPU (or rather the speed of a single core) that starts to be the bottleneck with current PostgreSQL versions. That is true for COPY and, to a much greater extent, for index creation (I have seen restores that took days, and most of that was purely index creation). 8.2 has improved considerably on that due to the massive improvements in the external sorting code, but it will still be bottlenecked by the single-process nature of psql.

> So,
> - What OS?
> - What version of PG?
> - What kind of disk system does the DB live on?
> - How many controllers?
> - What kind of tape drive?
> - Is it on its own SCSI controller?

In addition to that basic information, there are:
*) increasing maintenance_work_mem and work_mem massively
*) increasing the number of checkpoint segments
*) disabling fsync during the load

Stefan

---(end of broadcast)---
TIP 6: explain analyze is your friend
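The three knobs Stefan lists map onto postgresql.conf settings. A sketch of load-time values (the numbers are illustrative, not recommendations, and fsync = off is acceptable only for a load you can redo from scratch):

```
# temporary settings for a bulk restore -- revert after the load!
maintenance_work_mem = 1048576   # 1 GB, in kB: speeds up CREATE INDEX during restore
work_mem = 262144                # 256 MB, in kB: helps sorts in the loading session
checkpoint_segments = 64         # fewer, larger checkpoints during heavy COPY traffic
fsync = off                      # UNSAFE except for a repeatable, throwaway load
```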
Re: [GENERAL] pg_dumpall failing from possible corrupted shared memory
Richard Broersma Jr <[EMAIL PROTECTED]> writes:
> mydb=# select pg_get_indexdef(indexrelid) from pg_index where indrelid = '16737';
> server closed the connection unexpectedly

So what do you get from 'select * from pg_index where indrelid = 16737' ? If that crashes, which individual columns can you select from the row?

regards, tom lane
Re: [GENERAL] pg_dumpall failing from possible corrupted shared memory
> > > but before you do that, I'd urge you to try to get as much info as
> > > you can about the nature of the catalog corruption. If there's a
> > > bug here, as opposed to random cosmic-ray damage, we can't fix it
> > > without more info.
>
> I excluded the offending index with this query:
>
> select pg_get_indexdef(indexrelid)
> from pg_index
> where indexrelid <> 604251  -- this is the index with the problem
> order by indexrelid;
>
> How do I go about determining if the crash is caused by faulty hardware
> or a possible bug?

I finally narrowed the search down to the offending column and rows that cause the crash. The following two queries work as long as I don't combine indexname = 'index_daily' and indexdef.

select * from pg_indexes where indexname <> 'index_daily';
-- returns all rows except the affected row

select schemaname, tablename, indexname, tablespace  -- returns all columns except the affected column
from pg_indexes where indexname = 'index_daily';

 schemaname | tablename | indexname   | tablespace
------------+-----------+-------------+------------
 public     | process   | index_daily |

And finally, to show that it crashes:

select indexdef from pg_indexes where indexname = 'index_daily';
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

Regards,
Richard Broersma Jr.
Re: [GENERAL] Reducing pg_dump & pg_restore times
On Thursday 26 October 2006 02:20, Chris wrote:
> Coder At Heart wrote:
> > Hi! Restore & backup take time in the tune of 6 hrs on a Linux, 4 proc,
> > 32 GB RAM machine for a 60GB database.
>
> Please always cc the mailing list.
>
> I've never played with databases that large so I don't know if that's
> good or bad... others on the list will be able to offer advice/suggestions.

A while back I wrote something up on reducing restore times; might be worth checking out.
http://people.planetpostgresql.org/xzilla/index.php?/archives/133-Getting-faster-database-restores-on-postgresql-8.1.html

-- 
Robert Treat
Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL
Re: [GENERAL] [SQL] Can we convert from Postgres to Oracle !!???
On Sun, Oct 22, 2006 at 12:03:38AM +0300, Devrim GUNDUZ wrote:
> On Tue, 2006-10-17 at 14:21 +0530, Sandeep Kumar Jakkaraju wrote:
> > Can we convert from Postgres to Oracle !!???

You can also run our software and get Oracle syntax for 1/25th the cost.

-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
Re: [GENERAL] regarding PostgreSQL
On Mon, Oct 23, 2006 at 11:19:13PM +0530, sumit kumar wrote:
> Hello,
> could anybody help me out by telling how PostgreSQL estimates the
> cardinality of the LIKE operator?

Try asking on pgsql-hackers... (sorry, I don't know the answer myself).

-- 
Jim Nasby  [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
Re: [GENERAL] pg_dumpall failing from possible corrupted shared memory
> > Worst-case, you can probably fix things by dropping and recreating the
> > constraint or index ... but before you do that, I'd urge you to try to
> > get as much info as you can about the nature of the catalog corruption.
> > If there's a bug here, as opposed to random cosmic-ray damage, we can't
> > fix it without more info.

I excluded the offending index with this query:

select pg_get_indexdef(indexrelid)
from pg_index
where indexrelid <> 604251  -- this is the index with the problem
order by indexrelid;

How do I go about determining if the crash is caused by faulty hardware or a possible bug?

Regards,
Richard Broersma Jr.
Re: [GENERAL] database not enforcing unique constraint
On 10/27/06, Joshua D. Drake <[EMAIL PROTECTED]> wrote:
> Merlin Moncure wrote:
> > My previous employer contacted me today. Apparently they are having an
> > increasing frequency of occurrences where they discover violations of
> > the primary key constraint not being caught by the database. This
> > system is an ISAM emulation system, and it relies heavily on the
> > database throwing unique constraint violations to approximately
> > provide a COBOL rewrite statement.
> >
> > These errors are never caught when they occur. One way they are caught
> > is during a dump/reload: the record fails to reinsert.
> >
> > I brought this up around a year ago, and the issue was never
> > successfully resolved. These are Windows systems running PostgreSQL 8.0.x.
> >
> > any thoughts?
>
> Do they vacuum enough? I have seen problems with PostgreSQL (albeit not
> since 7.3) where a unique constraint would not enforce because of index bloat.

IIRC vacuum was set up to run weekly via cron. Based on turnover, which was high but not super high, that seemed appropriate.

merlin
Re: [GENERAL] database not enforcing unique constraint
Merlin Moncure wrote:
> My previous employer contacted me today. Apparently they are having an
> increasing frequency of occurrences where they discover violations of
> the primary key constraint not being caught by the database. This
> system is an ISAM emulation system, and it relies heavily on the
> database throwing unique constraint violations to approximately
> provide a COBOL rewrite statement.
>
> These errors are never caught when they occur. One way they are caught
> is during a dump/reload: the record fails to reinsert.
>
> I brought this up around a year ago, and the issue was never
> successfully resolved. These are Windows systems running PostgreSQL 8.0.x.
>
> any thoughts?

Do they vacuum enough? I have seen problems with PostgreSQL (albeit not since 7.3) where a unique constraint would not enforce because of index bloat.

Sincerely,

Joshua D. Drake

-- 
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
Re: [GENERAL] pg_dumpall failing from possible corrupted shared memory
> What this looks like is that there's a mangled expression string in
> the catalog entry for either a check constraint or an expression index.
> The backend is dying while trying to decompile the expression for
> pg_dump.

You were right; it appears that my problem is with one of four functional date_trunc indexes that I created for a timestamp column. Here is a greatly simplified query that still succeeds:

mydb=# select indexrelid from pg_index where indrelid = '16737';
 indexrelid
------------
     604243
     604251
     604252
     604253
     604254
(5 rows)

However, when I add the following function it breaks:

mydb=# select pg_get_indexdef(indexrelid) from pg_index where indrelid = '16737';
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.

> What I suggest doing is turning on log_statement = all and running
> pg_dump, so that you can see the query it's dying on. That would give
> you enough info to look into the catalog entry for the constraint or
> index at hand. Worst-case, you can probably fix things by dropping and
> recreating the constraint or index ... but before you do that, I'd urge
> you to try to get as much info as you can about the nature of the
> catalog corruption. If there's a bug here, as opposed to random
> cosmic-ray damage, we can't fix it without more info.

The logs aren't really providing any more information, since I was already using "all". However, I turned log_min_messages from "warning" to "info".
%mydb LOG: statement: select pg_get_indexdef(indexrelid) from pg_index where indrelid = '16737';
% LOG: server process (PID 16099) was terminated by signal 11
% LOG: terminating any other active server processes
% LOG: all server processes terminated; reinitializing
% LOG: database system was interrupted at 2006-10-26 18:09:09 PDT
% LOG: checkpoint record is at 0/8E9161A8
% LOG: redo record is at 0/8E9161A8; undo record is at 0/0; shutdown TRUE
% LOG: next transaction ID: 174591; next OID: 621178
% LOG: next MultiXactId: 53; next MultiXactOffset: 115
% LOG: database system was not properly shut down; automatic recovery in progress
%mydb FATAL: the database system is starting up
% LOG: record with zero length at 0/8E9161EC
% LOG: redo is not required
% LOG: database system is ready
% LOG: transaction ID wrap limit is 1073790580, limited by database "postgres"

Thanks for the help, Tom.

Regards,
Richard Broersma Jr.
[GENERAL] database not enforcing unique constraint
My previous employer contacted me today. Apparently they are having an increasing frequency of occurrences where they discover violations of the primary key constraint not being caught by the database. This system is an ISAM emulation system, and it relies heavily on the database throwing unique constraint violations to approximately provide a COBOL rewrite statement.

These errors are never caught when they occur. One way they are caught is during a dump/reload: the record fails to reinsert.

I brought this up around a year ago, and the issue was never successfully resolved. These are Windows systems running PostgreSQL 8.0.x.

any thoughts?

merlin
Re: [GENERAL] Possible problem with PQescapeStringConn and standard_conforming_strings
On Thu, 2006-10-26 at 19:46 -0400, Tom Lane wrote:
> Jeff Davis <[EMAIL PROTECTED]> writes:
> > You can set standard_conforming_strings in postgresql.conf at any time
> > and reload the config, changing the value for all active connections.
> > That means that if a client opens a connection, and you SIGHUP postgres,
> > and then the client issues a PQescapeStringConn, the client will get an
> > incorrectly-escaped string.
>
> The window for this is pretty narrow, because PQescapeStringConn will
> use the latest-delivered parameter status, but it's certainly true that
> randomly changing standard_conforming_strings wouldn't be a bright idea.
> Probably a documentation note recommending against changing it via
> SIGHUP would be sufficient.

It's not a narrow time window (which was my original test), but you're right that it is narrow in the sense that any command executed on that connection will update the status. So, a potential attacker has one chance :)

It seems like a documentation note would be sufficient to prevent people from changing it too haphazardly. You wouldn't want to change it at runtime if the bulk of your queries involved escape sequences.

Regards,
Jeff Davis
Re: [GENERAL] pg_dumpall failing from possible corrupted shared memory
Richard Broersma Jr <[EMAIL PROTECTED]> writes:
> You'll have to forgive me. I've never used gdb before; I am not sure I
> am executing gdb with the correct syntax. However, I was able to
> produce a core file. Here is the syntax that I used that produced the
> most output, although it doesn't look like what you would want.

No, this is good --- it'd be better with debug symbols, but there's enough here to give us an idea where the problem is:

> gdb> stack
> #0  0xb7cc871b in strtouq () from /lib/libc.so.6
> #1  0xb7cc847f in __strtol_internal () from /lib/libc.so.6
> #2  0x08178404 in nodeToString ()
> #3  0x081794fa in parseNodeString ()
> #4  0x0817b2ff in nodeRead ()
> #5  0x0817b16f in nodeRead ()
> #6  0x081780ed in nodeToString ()
> #7  0x081795fa in parseNodeString ()
> #8  0x0817b2ff in nodeRead ()
> #9  0x0817b16f in nodeRead ()
> #10 0x0817b839 in stringToNode ()
> #11 0x082264fb in pg_get_constraintdef ()
> #12 0x0822667e in pg_get_indexdef ()
> #13 0x08144ff5 in ExecProject ()
> #14 0x081449ef in ExecProject ()
> #15 0x08153aa5 in ExecMergeJoin ()

What this looks like is that there's a mangled expression string in the catalog entry for either a check constraint or an expression index. The backend is dying while trying to decompile the expression for pg_dump.

What I suggest doing is turning on log_statement = all and running pg_dump, so that you can see the query it's dying on. That would give you enough info to look into the catalog entry for the constraint or index at hand. Worst-case, you can probably fix things by dropping and recreating the constraint or index ... but before you do that, I'd urge you to try to get as much info as you can about the nature of the catalog corruption. If there's a bug here, as opposed to random cosmic-ray damage, we can't fix it without more info.

regards, tom lane
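A sketch of the worst-case fix Tom describes, using the index name that surfaces elsewhere in this thread (the recreate DDL is hypothetical; substitute the table, column, and expression from your own original definition):

```sql
-- 1. in postgresql.conf (then SIGHUP): log every statement so the dying query is visible
--    log_statement = all

-- 2. worst case, drop the damaged expression index:
DROP INDEX index_daily;

-- 3. recreate it; "event_time" here is a placeholder column name
CREATE INDEX index_daily ON process (date_trunc('day', event_time));
```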
Re: [GENERAL] pg_dumpall failing from possible corrupted shared memory
> No, you just need to make sure the postmaster is started under
> "ulimit -c unlimited", not "ulimit -c 0" which is the default
> under many Linuxen. Adding this to the start script is the
> easiest way usually.
>
> If you can't get anything but numbers from gdb's stack trace
> then you have a "stripped" executable and you will need to
> rebuild to get a useful stack trace.

You'll have to forgive me. I've never used gdb before; I am not sure I am executing gdb with the correct syntax. However, I was able to produce a core file. Here is the syntax that I used that produced the most output, although it doesn't look like what you would want.

[EMAIL PROTECTED] /home/data $ gdb postgres ./core
GNU gdb 6.4
Copyright 2005 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu"...(no debugging symbols found)
Using host libthread_db library "/lib/libthread_db.so.1".

warning: core file may not match specified executable file.
(no debugging symbols found)
Core was generated by `postgres: postgres mydb [local] SELECT '.
Program terminated with signal 11, Segmentation fault.

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /lib/libpam.so.0...(no debugging symbols found)...done.
Loaded symbols for /lib/libpam.so.0
Reading symbols from /usr/lib/libssl.so.0.9.7...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libssl.so.0.9.7
Reading symbols from /usr/lib/libcrypto.so.0.9.7...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libcrypto.so.0.9.7
Reading symbols from /lib/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/libz.so.1
Reading symbols from /lib/libreadline.so.5...(no debugging symbols found)...done.
Loaded symbols for /lib/libreadline.so.5
Reading symbols from /lib/libcrypt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/libcrypt.so.1
Reading symbols from /lib/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libresolv.so.2
Reading symbols from /lib/libnsl.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/libnsl.so.1
Reading symbols from /lib/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libdl.so.2
Reading symbols from /lib/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libm.so.6
Reading symbols from /lib/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/libc.so.6
Reading symbols from /lib/libncurses.so.5...(no debugging symbols found)...done.
Loaded symbols for /lib/libncurses.so.5
Reading symbols from /lib/ld-linux.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /lib/libnss_compat.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libnss_compat.so.2
Reading symbols from /lib/libnss_nis.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libnss_nis.so.2
Reading symbols from /lib/libnss_files.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libnss_files.so.2
#0  0xb7cc871b in strtouq () from /lib/libc.so.6
gdb> stack
#0  0xb7cc871b in strtouq () from /lib/libc.so.6
#1  0xb7cc847f in __strtol_internal () from /lib/libc.so.6
#2  0x08178404 in nodeToString ()
#3  0x081794fa in parseNodeString ()
#4  0x0817b2ff in nodeRead ()
#5  0x0817b16f in nodeRead ()
#6  0x081780ed in nodeToString ()
#7  0x081795fa in parseNodeString ()
#8  0x0817b2ff in nodeRead ()
#9  0x0817b16f in nodeRead ()
#10 0x0817b839 in stringToNode ()
#11 0x082264fb in pg_get_constraintdef ()
#12 0x0822667e in pg_get_indexdef ()
#13 0x08144ff5 in ExecProject ()
#14 0x081449ef in ExecProject ()
#15 0x08153aa5 in ExecMergeJoin ()
#16 0x081441bf in ExecProcNode ()
#17 0x081552f5 in ExecSort ()
#18 0x081441ed in ExecProcNode ()
#19 0x08143574 in ExecutorRun ()
#20 0x081dcbd9 in PostgresMain ()
#21 0x081dd1d8 in PortalRun ()
#22 0x081d8fab in pg_parse_and_rewrite ()
#23 0x081db364 in PostgresMain ()
#24 0x081a7db4 in ClosePostmasterPorts ()
#25 0x081a8ecc in PostmasterMain ()
#26 0x08165629 in main ()
gdb>

Please let me know what I can do to produce output that is more in line with what you expect. Thanks for the help.

Regards,
Richard Broersma Jr.
Re: [GENERAL] Possible problem with PQescapeStringConn and standard_conforming_strings
Jeff Davis <[EMAIL PROTECTED]> writes:
> You can set standard_conforming_strings in postgresql.conf at any time
> and reload the config, changing the value for all active connections.
> That means that if a client opens a connection, and you SIGHUP postgres,
> and then the client issues a PQescapeStringConn, the client will get an
> incorrectly-escaped string.

The window for this is pretty narrow, because PQescapeStringConn will use the latest-delivered parameter status, but it's certainly true that randomly changing standard_conforming_strings wouldn't be a bright idea. Probably a documentation note recommending against changing it via SIGHUP would be sufficient.

> This could be a security vulnerability. Webservers which hold open
> connections for long periods of time could be incorrectly escaping
> values for long periods of time

Not if they're using PQescapeStringConn. Handmade escaping code that thinks it needn't recheck the status could be at risk ... but realistically, handmade escaping code is likely already broken by the mere existence of standard_conforming_strings.

regards, tom lane
Re: [GENERAL] Possible problem with PQescapeStringConn and standard_conforming_strings
On Thu, 2006-10-26 at 16:19 -0700, Jeff Davis wrote:
> You can set standard_conforming_strings in postgresql.conf at any time
> and reload the config, changing the value for all active connections.

I should have mentioned, my email only applies to the upcoming 8.2 release.

Regards,
Jeff Davis
[GENERAL] Possible problem with PQescapeStringConn and standard_conforming_strings
You can set standard_conforming_strings in postgresql.conf at any time and reload the config, changing the value for all active connections. That means that if a client opens a connection, and you SIGHUP postgres, and then the client issues a PQescapeStringConn, the client will get an incorrectly-escaped string.

This could be a security vulnerability. Webservers which hold open connections for long periods of time could be incorrectly escaping values for long periods of time -- between the SIGHUP that changed standard_conforming_strings and the time the connection is closed.

Should we change standard_conforming_strings so that it only takes effect on new connections (or server restart, if we must)? Are there other similar settings that affect PQescapeStringConn?

Regards,
Jeff Davis
Re: [GENERAL] Compiling ELF 64-bit on Solaris
On Thu, Oct 26, 2006 at 12:40:41PM -0700, [EMAIL PROTECTED] wrote:
> How can I get the "make" to generate ELF 64-bit executables on Solaris 10?
>
> We're on Fujitsu hardware; uname -a displays this:
> SunOS 5.10 Generic_118822-26 sun4us sparc FJSV,GPUZC-M

Well, to start with, are you using a compiler that can generate 64-bit binaries? How about your libs?

A

-- 
Andrew Sullivan | [EMAIL PROTECTED]
Unfortunately reformatting the Internet is a little more painful than
reformatting your hard drive when it gets out of whack. --Scott Morris
Re: [GENERAL] Effect of large text field data on queries
Glen Parker <[EMAIL PROTECTED]> writes:
> How can I determine the default storage type for a given TOASTable
> data type (text in this case)?

Look in pg_type ... but they mostly default to "extended".

regards, tom lane
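Concretely, the default strategy lives in pg_type.typstorage ('x' = extended, 'e' = external, 'm' = main, 'p' = plain), and per-column overrides go through ALTER TABLE. A sketch; the table and column names are placeholders:

```sql
-- default storage strategy for the text type:
SELECT typname, typstorage FROM pg_type WHERE typname = 'text';

-- effective per-column strategy on an existing table:
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'my_table'::regclass AND attnum > 0;

-- override one column, e.g. out-of-line but uncompressed:
ALTER TABLE my_table ALTER COLUMN big_text SET STORAGE EXTERNAL;
```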
Re: [GENERAL] schema 8.1.5
km <[EMAIL PROTECTED]> writes:
> how do i set a user account to default to a predefined schema ?

If you make the choice user name = schema name, this happens for free. See
http://www.postgresql.org/docs/8.1/static/ddl-schemas.html
The bit about common usage patterns might help in particular.

> i have tried :
> SET search_path TO myschema;
> but that's a temporary setting i suppose.

Doesn't have to be. See postgresql.conf and ALTER USER.

regards, tom lane
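The persistent form Tom points at can look like this (role and schema names are placeholders):

```sql
-- per-role default, applied at every login:
ALTER USER myuser SET search_path TO myschema, public;

-- or lean on the default search_path of "$user", public:
-- a schema named after the user is picked up automatically
CREATE SCHEMA myuser AUTHORIZATION myuser;
```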
Re: [GENERAL] Effect of large text field data on queries
Tom Lane wrote:
> Glen Parker <[EMAIL PROTECTED]> writes:
> > How much of a TOAST'd field is actually stored in the main heap table?
> > Is there a way to configure that amount?
>
> A pushed-out-of-line value is replaced by a 20-byte pointer structure.
> There's no such thing as partially out-of-line. See
> http://www.postgresql.org/docs/8.1/static/storage-toast.html

Ah, thanks. I think possibly changing the storage on this field may help some (I will gladly trade storage space for speed). How can I determine the default storage type for a given TOASTable data type (text in this case)?

-Glen
Re: [GENERAL] schema 8.1.5
On Thursday 26 October 2006 14:04, km <[EMAIL PROTECTED]> wrote:
> so that when the user logs in and accesses a database via psql he should
> be able to land in his schema. how do i do that ? any tips ?

alter role rolename set search_path=path1[,path2...];

-- 
Ginsberg's Theorem:
1) You can't win.
2) You can't break even.
3) You can't quit the game.
[GENERAL] schema 8.1.5
Hi all,

I have a general doubt about the default schema "public" in PostgreSQL 8.1.5: I would like to know whether, for every database a valid user creates, PostgreSQL by default creates a public schema, and whether that is optional. Also, is it possible to know which schema I am currently in?

How do I set a user account to default to a predefined schema? I have tried:

SET search_path TO myschema;

but that's a temporary setting, I suppose. The goal is that when the user logs in and accesses a database via psql, he should land in his own schema. How do I do that? Any tips?

regards,
KM
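For the "which schema am I in" part of the question, two standard built-ins answer it directly:

```sql
-- first existing schema on the search path (where unqualified CREATEs go):
SELECT current_schema();

-- the full effective search path for this session:
SHOW search_path;
```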
Re: [GENERAL] Effect of large text field data on queries
Glen Parker <[EMAIL PROTECTED]> writes:
> How much of a TOAST'd field is actually stored in the main heap table?
> Is there a way to configure that amount?

A pushed-out-of-line value is replaced by a 20-byte pointer structure. There's no such thing as partially out-of-line. See
http://www.postgresql.org/docs/8.1/static/storage-toast.html

regards, tom lane
Re: [GENERAL] Effect of large text field data on queries
On Thursday 26 October 2006 12:51, Glen Parker <[EMAIL PROTECTED]> wrote:
> Why would very large text values affect the speed of a seq scan that
> does not actually evaluate those values?

Seq scan reads the whole table. The limiting factor is the size of the table on disk.

-- 
"If a nation expects to be ignorant and free, in a state of civilization,
it expects what never was and never will be." -- Thomas Jefferson
Re: [GENERAL] Effect of large text field data on queries
Tom Lane wrote:
> Glen Parker <[EMAIL PROTECTED]> writes:
> > Why would very large text values affect the speed of a seq scan that
> > does not actually evaluate those values?
>
> I'd actually suppose it's the smaller values (up to a few hundred bytes)
> that impact this the most. Really wide fields would be pushed out-of-line.

Meaning that the portion of the text value stored in the companion TOAST table would be ignored for this type of query, correct? That's why I'm concerned.

How much of a TOAST'd field is actually stored in the main heap table? Is there a way to configure that amount?

-Glen
Re: [GENERAL] Effect of large text field data on queries
Glen Parker <[EMAIL PROTECTED]> writes:
> Why would very large text values affect the speed of a seq scan that
> does not actually evaluate those values?

More bytes to scan over? Have you checked the physical table sizes?

I'd actually suppose it's the smaller values (up to a few hundred bytes) that impact this the most. Really wide fields would be pushed out-of-line.

regards, tom lane
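Checking the physical sizes Tom asks about (these functions are built in as of 8.1; the table name is a placeholder):

```sql
-- heap only, excluding TOAST and indexes:
SELECT pg_size_pretty(pg_relation_size('big_table'));

-- heap + TOAST + indexes together:
SELECT pg_size_pretty(pg_total_relation_size('big_table'));
```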
[GENERAL] Effect of large text field data on queries
I am having some query problems on a table with large text fields. The table contains 6.7M rows. It is vacuumed every night, and since the last vacuum, rows have been inserted only, never updated or deleted. There are many large values in one text field, some in excess of 6MB.

I have another table with 13.8M rows and several times as many columns, but no large text values. I am testing select count() queries on both tables, counting on an integer field only. The first table takes almost 300 seconds to complete, while the second table takes about 90 seconds.

Why would very large text values affect the speed of a seq scan that does not actually evaluate those values?

Oh, PG 8.1.3 on FC 5, 64-bit.

TIA,
-Glen Parker
Re: [GENERAL] more than 32 parameters to a function?
Ottavio Campana schrieb:
> Erik Jones wrote:
> > Put them in arrays and pass them as the arguments. But, I would like
> > to know what function could ever need 65 arguments?
>
> Consider that I have to invoke the function from a ZSQL method in Zope.
> Do you know if it works?
>
> I need all these arguments because we have a table where we store items
> for an e-commerce site. The problem is that these items might have a
> lot of peculiarities, and more than 40 fields are boolean, to fully
> describe them.

You mean item descriptions? Why would you update them all? Or what are you doing with them? If it's a kind of product search, you won't need more than about 6 different attributes to choose from at the same time. That's attribute name + desired value = 12 params at best. Otherwise, could you give an example usage of that more-than-32-values call?

Regards,
Tino
Re: [GENERAL] more than 32 parameters to a function?
I just implemented the same function using an array holding all the boolean fields describing the objects. It works well.

Thank you to all of you.

-- 
Non c'e' piu' forza nella normalita', c'e' solo monotonia.
("There is no more strength in normality; there is only monotony.")
[GENERAL] Compiling ELF 64-bit on Solaris
How can I get the "make" to generate ELF 64-bit executables on Solaris 10?

We're on Fujitsu hardware; uname -a displays this:
SunOS 5.10 Generic_118822-26 sun4us sparc FJSV,GPUZC-M

Thanks,
Victor Odhner
Re: [GENERAL] UK Hosting Providers
On Oct 26, 2006, at 7:46 AM, [EMAIL PROTECTED] wrote: Now they need someone to host the site, preferably UK based and not costing the earth supporting PHP and of course postgres. You might find some here: http://www.postgresql.org/support/professional_hosting John DeSoi, Ph.D. http://pgedit.com/ Power Tools for PostgreSQL
Re: [GENERAL] more than 32 parameters to a function?
Ottavio Campana <[EMAIL PROTECTED]> writes: > I'm writing some stored procedures in pl/pgsql for a database using > postgresql 7.4.7. > I need to write a complex function with 65 arguments, but when I try to > run it I get an error complaining that arguments can be up to 32. Update to 8.1, which allows 100 by default. Or modify FUNC_MAX_ARGS and recompile --- but be aware that in 7.4.x that change requires an initdb. Or, as suggested elsewhere, rethink that function's API. A list of 65 arguments seems terribly error-prone to me... regards, tom lane
Re: [GENERAL] more than 32 parameters to a function?
On Thu, Oct 26, 2006 at 10:03:27AM -0700, Ottavio Campana wrote: > I'm writing some stored procedures in pl/pgsql for a database using > postgresql 7.4.7. > > I need to write a complex function with 65 arguments, but when I try > to run it I get an error complaining that arguments can be up to 32. > > Is there a way to solve this problem or do I have to try to split > the function into three new ones? You can pass the function one complex type that has as many parts as you like. Or several complex types if that makes more sense. Cheers, D -- David Fetter <[EMAIL PROTECTED]> http://fetter.org/ phone: +1 415 235 3778 AIM: dfetter666 Skype: davidfetter Remember to vote!
Re: [GENERAL] Monitoring Postgres - Get the SQL queries which are sent to postgres
On Oct 25, 2006, at 10:11 AM, A. Kretschmer wrote:
> Set this in your postgresql.conf: log_statement = all Reload the server, and then you can find all your statements in the log.

Or, preferably, on a per-connection basis, execute this SQL statement: set log_min_duration_statement = 0 Then only the queries for that connection will be logged; otherwise you get *way* too much stuff to sort out. Another useful setting which I always enable (in my postgresql.conf file) is log_min_error_statement = error so that any statement that generates an error will be appended to the error log entry; otherwise you just see the error notice and have no clue what caused it.
Re: [GENERAL] more than 32 parameters to a function?
On Oct 26 10:55, Ottavio Campana wrote: > Erik Jones wrote: > > Put them in arrays and pass them as the arguments. But, I would like to > > know what function could ever need 65 arguments? > > Consider that I have to invoke the function from a ZSQL method in zope. > Do you know if it works? > > I need all these arguments because we have a table where we store items > for an e-commerce site. The problem is that these items might have a > lot of peculiarities and more than 40 fields are boolean to fully > describe them. I don't have a clue about the ZSQL issue, but it can be a solution to OR those booleans together, with some bitwise logic. Regards.
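One way to sketch that bitwise idea: pack the 40-odd flags into a single integer bitmask instead of 40 boolean columns or arguments. The table and flag names below are made up for illustration, and the bit positions are an arbitrary convention you would have to document:

```sql
-- Each peculiarity gets one bit; 40 flags fit comfortably in a bigint.
-- Bit positions (hypothetical): 0 = fragile, 3 = oversized, ...
CREATE TABLE items (
    id    integer PRIMARY KEY,
    flags bigint NOT NULL DEFAULT 0
);

-- Set bits 0 and 3 with bitwise OR:
INSERT INTO items VALUES (1, (1::bigint << 0) | (1::bigint << 3));

-- Test bit 3 ("is the item oversized?") with bitwise AND:
SELECT id FROM items WHERE (flags & (1::bigint << 3)) <> 0;
```

The trade-off is that individual flags are no longer self-describing columns, so this only pays off when the flags are always read and written together.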
Re: [GENERAL] more than 32 parameters to a function?
Erik Jones wrote: > Put them in arrays and pass them as the arguments. But, I would like to > know what function could ever need 65 arguments? Consider that I have to invoke the function from a ZSQL method in zope. Do you know if it works? I need all these arguments because we have a table where we store items for an e-commerce site. The problem is that these items might have a lot of peculiarities and more than 40 fields are boolean to fully describe them. Bye > Ottavio Campana wrote: >> I'm writing some stored procedures in pl/pgsql for a database using >> postgresql 7.4.7. >> >> I need to write a complex function with 65 arguments, but when I try to >> run it I get an error complaining that arguments can be up to 32. >> >> Is there a way to solve this problem or do I have to try to split the >> function into three new ones? -- There is no strength left in normality, there is only monotony.
Re: [GENERAL] more than 32 parameters to a function?
On Oct 26 10:03, Ottavio Campana wrote: > I'm writing some stored procedures in pl/pgsql for a database using > postgresql 7.4.7. > > I need to write a complex function with 65 arguments, but when I try to > run it I get an error complaining that arguments can be up to 32. > > Is there a way to solve this problem or do I have to try to split the > function into three new ones? Why don't you use a record (or array) type instead? Regards.
Re: [GENERAL] more than 32 parameters to a function?
Create a type with the fields?! Ex.:

Create type myType as (field1 integer, field2 integer, ...);

Create or replace function func_type(myType) returns integer AS $$
DECLARE
    param ALIAS FOR $1;
BEGIN
    RAISE LOG 'Param fields: %, %, %, ...', param.field1, param.field2, ...;
    RETURN 0;
END;
$$ LANGUAGE plpgsql;

2006/10/26, Ottavio Campana <[EMAIL PROTECTED]>:
> I'm writing some stored procedures in pl/pgsql for a database using postgresql 7.4.7. I need to write a complex function with 65 arguments, but when I try to run it I get an error complaining that arguments can be up to 32. Is there a way to solve this problem or do I have to try to split the function into three new ones? -- There is no strength left in normality, there is only monotony.

-- William Leite Araújo
Re: [GENERAL] more than 32 parameters to a function?
Put them in arrays and pass them as the arguments. But, I would like to know what function could ever need 65 arguments? Ottavio Campana wrote: I'm writing some stored procedures in pl/pgsql for a database using postgresql 7.4.7. I need to write a complex function with 65 arguments, but when I try to run it I get an error complaining that arguments can be up to 32. Is there a way to solve this problem or do I have to try to split the function into three new ones? -- erik jones <[EMAIL PROTECTED]> software development emma(r)
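The array suggestion above might look like the following sketch. All names (set_item_flags, items, the flag columns) are hypothetical, and the dollar-quoted body is 8.0+ syntax; on the poster's 7.4 the function body would have to be a single-quoted string with doubled internal quotes:

```sql
-- Hypothetical: update all boolean peculiarities of one item
-- from a single boolean[] argument instead of 65 parameters.
CREATE OR REPLACE FUNCTION set_item_flags(integer, boolean[]) RETURNS integer AS $$
DECLARE
    item_id ALIAS FOR $1;
    flags   ALIAS FOR $2;
BEGIN
    UPDATE items
       SET is_fragile   = flags[1],
           is_oversized = flags[2]   -- ...and so on for the remaining flags
     WHERE id = item_id;
    RETURN 1;
END;
$$ LANGUAGE plpgsql;

-- Called with one array literal instead of a 65-item argument list:
SELECT set_item_flags(42, ARRAY[true, false, true]);
```

The caller and the function must of course agree on the meaning of each array position, which is the main fragility of this approach.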
Re: [GENERAL] NOTICE: word is too long INSERT 0 3014
Any thoughts on the below? Joshua D. Drake wrote: > Hello, > > I am running into this limitation a lot with Tsearch2. What are my > options to get around it? Do I have to compile PostgreSQL with a > different block size? > > If yes, what are the downsides to doing so (outside of not being able to > do straight upgrades)? > > Sincerely, > > Joshua D. Drake > -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
[GENERAL] more than 32 parameters to a function?
I'm writing some stored procedures in pl/pgsql for a database using postgresql 7.4.7. I need to write a complex function with 65 arguments, but when I try to run it I get an error complaining that arguments can be up to 32. Is there a way to solve this problem or do I have to try to split the function into three new ones? -- There is no strength left in normality, there is only monotony.
Re: [GENERAL] grouping excluding some text results
Off the top of my head (and not sure how optimized)...

select t1.id, t1.score, t1.description
from scoretable t1,
     ( select id, max(score) as score
       from scoretable
       group by id ) as t2
where t1.id = t2.id
  and t1.score = t2.score
order by t1.id

If you get duplicated rows back, then try using select distinct t1.id, t1.score, t1.description ... instead. Hope that helps. John

Fip wrote:
> Hi, how can I select just one result per type, excluding the rest? I have something like this as the result of a join:
>
> | ID   | score | description |
> | MG01 | 56    | "textual description1..."  |
> | MG02 | 47    | "another text ..."         |
> | MG02 | 55    | "textual description, text" | < note this
> | MG02 | 55    | "textual description, text" | < note this
> | MG01 | 35    | "this is a different text" |
> | MG02 | 61    | "random chars..."          |
> (...)
>
> I know that it is possible to select by grouping if I use an aggregate function: select ID, max(score) from table group by ID. But I want the description too, only one description, and I have a very large dataset; I cannot cycle with more subselects, that requires too much time. I also want to exclude duplicates: only one ID. What can I do?
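For completeness, PostgreSQL also has a non-standard shortcut for this "top row per group" problem. A sketch against the poster's table (scoretable and its columns are taken from the question above):

```sql
-- One row per id, picking the row with the highest score.
-- DISTINCT ON keeps the first row of each id group as ordered by
-- the ORDER BY clause; ties on score are broken arbitrarily unless
-- further ORDER BY columns are added.
SELECT DISTINCT ON (id) id, score, description
FROM scoretable
ORDER BY id, score DESC;
```

This is PostgreSQL-specific, so it is not portable to other databases, but it avoids the self-join and also removes the duplicate rows in one pass.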
[GENERAL] UK Hosting Providers
Sorry to intrude on the community just to ask a dumb question, but I don't know postgres and I know how helpful and friendly newsgroups are to ignorant newbies ;) I have a client who has had some software developed that requires a website element, and the site uses postgres for the back end database (of course, it's the best after all!) Now they need someone to host the site, preferably UK based and not costing the earth, supporting PHP and of course postgres. Any suggestions?
[GENERAL] grouping excluding some text results
Hi, how can I select just one result per type, excluding the rest? I have something like this as the result of a join:

| ID   | score | description |
| MG01 | 56    | "textual description1..."  |
| MG02 | 47    | "another text ..."         |
| MG02 | 55    | "textual description, text" | < note this
| MG02 | 55    | "textual description, text" | < note this
| MG01 | 35    | "this is a different text" |
| MG02 | 61    | "random chars..."          |
(...)

I know that it is possible to select by grouping if I use an aggregate function: > select ID, max(score) from table group by ID. But I want the description too, only one description, and I have a very large dataset; I cannot cycle with more subselects, that requires too much time. I also want to exclude duplicates: only one ID. What can I do?
[GENERAL] [Triggers] Check if OLD or NEW records are NULL
Dear All, can I check whether the OLD record is NULL in a trigger function? I'm using tg_op and check whether it's UPDATE, etc., but I need to check whether the OLD record exists. Is it possible? For instance:

IF OLD IS NOT NULL THEN
    IF OLD.identity IS NULL THEN
        tg_op_final := ''INSERT'';
    ELSE
        tg_op_final := ''UPDATE'';
    END IF;
ELSE
    tg_op_final := ''FILL_BY_DATALOADER'';
END IF;

Thanks! -- Best Regards Maciek
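One detail worth noting: in an INSERT trigger, OLD is not merely NULL, it is not defined at all, so referencing it raises an error. The usual pattern is to branch on TG_OP first and only touch OLD in the UPDATE/DELETE branches. A minimal self-contained sketch; the identity and last_op columns are hypothetical stand-ins for the poster's schema, and the branch semantics are only an illustration, not a reproduction of the poster's exact logic:

```sql
CREATE OR REPLACE FUNCTION classify_op() RETURNS trigger AS $$
DECLARE
    tg_op_final text;
BEGIN
    IF TG_OP = 'INSERT' THEN
        tg_op_final := 'INSERT';            -- OLD does not exist here
    ELSIF OLD.identity IS NULL THEN         -- safe: only reached on UPDATE
        tg_op_final := 'FILL_BY_DATALOADER';
    ELSE
        tg_op_final := 'UPDATE';
    END IF;
    NEW.last_op := tg_op_final;             -- hypothetical audit column
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

(The dollar-quoted body is 8.0+ syntax; in a single-quoted 7.4-style body the string literals would need doubled quotes, as in the poster's snippet.)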
[GENERAL] Updating client table definitions from server tables without losing client data
Hi... I want to create an automatic update application. In this application, I need to update the database because new versions of a program may use newly created tables or modified tables, compared to the older version of the program. So, I have a database version 2.0 on Computer 'A', and I need to make a script that I can run on Computer 'B' (still running version 1.0 of the database). After the execution of the script, Computer 'B' tables should be exactly the same as Computer 'A' tables. But no data should be involved in this: all data on Computer 'B' should be kept, and no data from Computer 'A' should be transferred to Computer 'B'. I tried to use pg_dump (on 'A') and pg_restore (on 'B'). I only had success with pg_restore using the --clean option, but that erases all data. Can anyone help me on this? Thanks! (sorry for the not-so-good English ;) )
Re: [GENERAL] What is causing 'canceling statement due to user request' ?
On 23.10.2006 16:14 Csaba Nagy wrote: Hi all, I know of 2 causes: - hit CTRL-C in the psql client; - have a non-zero statement timeout and have the statement actually time out; But I am seeing this via JDBC which can't cancel a statement AFAIK JDBC *can* cancel a running statement. You just need to call cancel() from another thread. http://java.sun.com/j2se/1.5.0/docs/api/java/sql/Statement.html#cancel() Works great in my SQL front end. Thomas
Re: [GENERAL] Wordpress & PostgreSQL ...
the wordpress guys have basically said they do not want to support postgres, which is mainly why we swapped to s9y on planetpg. you can read some more info here: http://people.planetpostgresql.org/xzilla/index.php?/archives/13-One-Good-Port.html Robert Treat On Saturday 21 October 2006 23:31, Marc G. Fournier wrote: > Ya, I found the wordpress-pg.sourceforge.org project, but that was for 1.2 > ... major old :( > > I have one client running Blogsom w/ PostgreSQL that works well, but a > whack running MySQL that I'd love nothing more than to get off of it :( If > we ever have performance problems on our servers, the "fix" is usually (and > I mean 99.9% of the time) to restart the mysql processes *sigh* > > > --On Saturday, October 21, 2006 20:07:02 -0700 "Joshua D. Drake" > > <[EMAIL PROTECTED]> wrote: > > Marc G. Fournier wrote: > >> Does anyone know of any work being done to get wordpress ported to > >> PostgreSQL? My search on the web finds emails from March of this year > >> concerning some ppl more or less "looking into it", but I can't find > >> anything that indicates they've done much more than talk :( > > > > There was an experimental patch at one time. However the devs don't > > really have much interest in PostgreSQL AFAICT. > > > > Joshua D. Drake > > > >> > >> Marc G. Fournier Hub.Org Networking Services > >> (http://www.hub.org) Email . [EMAIL PROTECTED] > >> MSN . [EMAIL PROTECTED] Yahoo . yscrappy Skype: hub.org > >> ICQ . 7615664 > > -- > >=== The PostgreSQL Company: Command Prompt, Inc. === > > Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 > > Providing the most comprehensive PostgreSQL solutions since 1997 > > http://www.commandprompt.com/ > > > > Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate > > > Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) > Email . [EMAIL PROTECTED] MSN . 
[EMAIL PROTECTED] > Yahoo . yscrappy Skype: hub.org ICQ . 7615664 -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL
[GENERAL] PostgreSQL uses in the weather service
Hi All, I saw that "The National Weather Service" uses PostgreSQL. I'm interested in implementing PostgreSQL in a similar functional area, and I need some experience feedback regarding the database volume and PostgreSQL performance at The National Weather Service. Thanks in advance, Best Regards
Re: [GENERAL] PostgreSQL in article I wrote
Roman Neuhauser wrote regarding: http://www.sema.org/main/semaorghome.aspx?id=56095 The firefox / thunderbird section should rather warn them to pay attention to bugtraq. All those buffer overflows in libwhathaveyou will make sure their single-user (root) KDE installations will need mkfs as often as they needed format c: in windows. Yeah, I was concerned about that as well, so I worded it "a safer environment" in the context of the typical reader, who will be a MS-Windows user. I knew I couldn't win, because the only safe browser is composed of clay (a brick) rather than silicon. On the other hand (to bring this back on topic), the editor did choose to put a nice screenshot of postgreSQL's website in the paper version of the article. -- Walter
Re: [GENERAL] password cookie
On Thu, Oct 26, 2006 at 12:27:49AM +0200, Willy-Bas Loos wrote: > or will not receive those, because of the rights granted to him. These > granted rights and roles will be determined by the regular postgres > functionality (and some views). Ah, that's a different matter. My suggestion is "don't do that". I tried to do it once, years ago, and regretted it deeply. Of course, my code was awful, and yours might be better. But in my view, that's a security problem just waiting to happen. You're better off to have one user in your application that does the authentication for you. You can use Kerberos or something to authenticate it; much easier to lock down one such user carefully, that comes only from boxes under your control, than to secure many users' accounts. If you want to do it this way, I sure wouldn't use cookies to store the password. I think you're asking for a compromise that way. A -- Andrew Sullivan | [EMAIL PROTECTED] The fact that technology doesn't work is no bar to success in the marketplace. --Philip Greenspun
Re: [GENERAL] pg_dumpall failing from possible corrupted shared memory
Richard Broersma Jr <[EMAIL PROTECTED]> writes: > I did not find any coredump files. Will I need to rebuild postgres to > produce coredumps? No, you just need to make sure the postmaster is started under "ulimit -c unlimited", not "ulimit -c 0" which is the default under many Linuxen. Adding this to the start script is the easiest way usually. If you can't get anything but numbers from gdb's stack trace then you have a "stripped" executable and you will need to rebuild to get a useful stack trace. regards, tom lane
Re: [GENERAL] Problem with 8.1.5 RPMs for Redhat AS 4
Hello, On Wed, 2006-10-25 at 13:41 -0500, Will Reese wrote: > Any idea when the new RPMs might be available for AS 4? Ah sorry. I just pushed them. They will be in the main FTP site in an hour. It may take some time for your local mirror to pick it up. BTW, you can use RHEL ES 4 rpms for RHEL AS 4. They are identical. Regards, -- The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564 PostgreSQL Replication, Consulting, Custom Development, 24x7 support Managed Services, Shared and Dedicated Hosting Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/
Re: [GENERAL] pg_autovacuum / pg_class
Tomas Vondra wrote: > Hello, > > I have a simple question about auto-vacuuming. We are running a Postgres > 8.1 and I've noticed that some of the tables are not analyzed by the > pg_autovacuum daemon. That is - when I select the rows from pg_class, > all the important values (relpages, reltuples) are 0. > > I've noticed this in case of newly created tables (that is not older > than 1 month). My 'theory' is this - the table has to be analyzed by > hand at the beginning, otherwise the pg_autovacuum won't process it. Am > I right? Hum. Just by inserting tuples into those tables, autovacuum should "be able to see" them. Autovacuum won't touch, and indeed won't even consider, tables that are just created. But as soon as you insert a single tuple in them, it will start to consider vacuuming or analyzing them. If you insert a hundred thousand rows in a table, autovacuum most certainly should analyze it. If it's not, we'd really like to know why. -- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
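The analyze decision can be sanity-checked by hand. A sketch comparing per-table row-change counts against the analyze threshold; the 500 / 0.2 figures are the documented 8.1 defaults (autovacuum_analyze_threshold and autovacuum_analyze_scale_factor) and may be tuned differently, or overridden per table via the pg_autovacuum table, on a given installation:

```sql
-- Tables whose accumulated changes exceed the (assumed default)
-- analyze threshold: threshold = 500 + 0.2 * reltuples.
SELECT s.relname,
       c.reltuples,
       s.n_tup_ins + s.n_tup_upd + s.n_tup_del AS changed_tuples,
       500 + 0.2 * c.reltuples                 AS analyze_threshold
FROM pg_stat_user_tables s
JOIN pg_class c ON c.oid = s.relid
WHERE s.n_tup_ins + s.n_tup_upd + s.n_tup_del > 500 + 0.2 * c.reltuples;
```

Note that the statistics counters in pg_stat_user_tables only accumulate when stats_row_level is enabled, which autovacuum itself also requires on 8.1.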
Re: [GENERAL] DBI-Link, Oracle, database encoding
I have it working fine now. It seems PG indeed did not have access to the env vars, because of the init script I was using.

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
export NLS_NCHAR=AL32UTF8
pg_ctl restart

fixed it for me. > If you are using DBI-Link, please sign up for its mailing list on pgfoundry. I will, though it works fine for me now; I have some more questions :) > DBI-Link 2.0beta1 provides some infrastructure for setting environment variables. Any suggestions would be welcome. Did I miss it in the Readme or is it not documented? Anyway, thanks for creating this piece of software. It saved me days of work and some ugly hacks. -- Regards, Hannes Dorbath
[GENERAL] pg_autovacuum / pg_class
Hello, I have a simple question about auto-vacuuming. We are running Postgres 8.1 and I've noticed that some of the tables are not analyzed by the pg_autovacuum daemon. That is - when I select the rows from pg_class, all the important values (relpages, reltuples) are 0. I've noticed this in the case of newly created tables (that is, not older than 1 month). My 'theory' is this - the table has to be analyzed by hand at the beginning, otherwise pg_autovacuum won't process it. Am I right? We use default values for all pg_autovacuum related parameters in postgresql.conf, and there are no rows in pg_autovacuum. The new tables are growing pretty fast (about 1,000,000 rows each month), so the thresholds should be exceeded very fast. But maybe this is caused by some stupid misconfiguration and/or is a known feature ... thanks for all your advice Tomas
Re: [GENERAL] Reducing pg_dump & pg_restore times
On 10/26/06 01:20, Chris wrote:
> Coder At Heart wrote:
>> Hi! Restore & backup take time in the tune of 6 hrs on a Linux, 4 proc, 32 G RAM machine for a 60GB database.
> Please always cc the mailing list. I've never played with databases that large so I don't know if that's good or bad.. others on the list will be able to offer advice/suggestions.

RAM certainly helps, but pg_dump is single-threaded, and, by definition, backing up and restoring 60GB of data is *extremely* IO constrained.

So,
- What OS?
- What version of PG?
- What kind of disk system does the DB live on?
- How many controllers?
- What kind of tape drive?
- Is it on its own SCSI controller?

-- Ron Johnson, Jr. Jefferson LA USA Is "common sense" really valid? For example, it is "common sense" to white-power racists that whites are superior to blacks, and that those with brown skins are mud people. However, that "common sense" is obviously wrong.
Re: [GENERAL] Enableing contrib modules on Debian
On 10/25/06, Tom Lane <[EMAIL PROTECTED]> wrote: "Stuart Grimshaw" <[EMAIL PROTECTED]> writes: > I've installed the postgresql-contrib8.1 deb package to get at the > earthdistance function, but it doesn't seem to be recognised, even > after a restart of the server. > Is there a final step I need to take after using apt-get to install? Yeah, you have to run the SQL script included in the module to define its functions within a particular database. Look for earthdistance.sql among the files installed by the .deb (it's likely under /usr/share). Thanks Tom, the .sql is stored in /usr/share/postgres/8.1/contrib All sorted now. -- -S Sports Photography in South Yorkshire & Derbyshire http://www.stuartgrimshaw.co.uk