[GENERAL] C-procedure crashed in Postgres 8.3.3 when using 'text' variable (WinXP) - additional
Hello. I'm trying to create a C procedure returning a text value - again :). Postgres 8.3.3 (standard binaries, so built with MSVC), WinXP SP2. I also use MSVC 2005 to compile my library.

Configuration type: Dynamic Library (.dll)
Additional include directories: D:\pgsql83\include;D:\pgsql83\include\server;D:\pgsql83\include\server\port\win32
Additional library directories: D:\pgsql83\lib
Additional dependencies: postgres.lib
Compile as: C Code (/TC)

(By the way, I can't compile it as C++ Code (/TP). In that case I get a lot of errors, e.g.:

d:\pgsql83\include\server\nodes\primnodes.h(1078) : error C2238: unexpected token(s) preceding ';'
d:\pgsql83\include\server\nodes\parsenodes.h(167) : error C2059: syntax error : 'typeid'
)

So, here is the code:

#include "postgres.h"
#include "fmgr.h"
#include "executor/executor.h"
#include "utils/timestamp.h"
#include "utils/builtins.h"
#include "utils/formatting.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

#define GET_TEXT(cstrp) DatumGetTextP(DirectFunctionCall1(textin, CStringGetDatum(cstrp)))

PG_FUNCTION_INFO_V1(getTimeFromApplication);
Datum getTimeFromApplication(PG_FUNCTION_ARGS)
{
    PG_RETURN_TEXT_P(GET_TEXT(success));
}

I can compile it and get the library, but there are some warnings:

d:\pgsql83\getstring\c_getstring.c(10) : warning C4273: 'Pg_magic_func' : inconsistent dll linkage
d:\pgsql83\getstring\c_getstring.c(10) : see previous definition of 'Pg_magic_func'
d:\pgsql83\getstring\c_getstring.c(24) : warning C4273: 'pg_finfo_getTimeFromApplication' : inconsistent dll linkage
d:\pgsql83\getstring\c_getstring.c(24) : see previous definition of 'pg_finfo_getTimeFromApplication'
d:\pgsql83\getstring\c_getstring.c(75) : warning C4311: 'type cast' : pointer truncation from 'char [8]' to 'Datum'
d:\pgsql83\getstring\c_getstring.c(75) : warning C4312: 'type cast' : conversion from 'Datum' to 'Pointer' of greater size
d:\pgsql83\getstring\c_getstring.c(75) : warning C4311: 'type cast' : pointer truncation from 'varlena *' to 'Datum'

Then I put the library into the 'lib' directory and create the stored procedure:

CREATE OR REPLACE FUNCTION service.get_app_time()
RETURNS text
AS '$libdir/getstring', 'pg_finfo_getTimeFromApplication'
LANGUAGE C STRICT;

Then I try to run it:

select * from service.get_app_time();

And get an error:

ERROR: invalid memory alloc request size 4294967293

What did I do wrong? Thanks in advance, Marina.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
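One detail worth flagging for readers: the CREATE FUNCTION above registers the pg_finfo_ metadata symbol rather than the function itself. Assuming the exported function is getTimeFromApplication (as the PG_FUNCTION_INFO_V1 line suggests), the declaration would normally reference that name. A hedged sketch, not confirmed by the thread:

```sql
-- Sketch only: assumes the SQL-callable symbol is getTimeFromApplication,
-- matching the PG_FUNCTION_INFO_V1 declaration above. The pg_finfo_
-- symbol is version metadata, not the function to call.
CREATE OR REPLACE FUNCTION service.get_app_time()
RETURNS text
AS '$libdir/getstring', 'getTimeFromApplication'
LANGUAGE C STRICT;
```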
[GENERAL] tsearch dictionary list?
Hi all. I'm new to the postgresql world, and I have to extend an existing product for Thailand - that product has some features based on tsearch, and I was wondering if there is an existing dictionary for that language... I failed to find any reference to such a dictionary on the web, and of course I don't speak Thai at all! In fact, is there somewhere a list of existing tsearch dictionaries? Are Japanese, Chinese, or other exotic languages supported by tsearch? Thanks for your attention, Daniel Chiaramello
[GENERAL] Initdb problem on debian mips cobalt: Bus error
Hi Chaps, I'm attempting to run 8.3.3 on an old Cobalt Qube, with Debian Etch. It appeared to compile OK (however I didn't stick around to watch, that'd be painful) and said PostgreSQL compiled successfully and is ready to install, but when I run make check, it fails in initdb. Here is the message and the initdb log file contents showing the bus error: http://privatepaste.com/47jTTGw5XC

I've configured it as I usually do:

./configure --with-perl --with-python --with-tcl --with-openssl --with-pam --with-krb5

Of course, running on a more obscure machine I don't expect too much to be honest, but does anyone have any ideas? Perhaps the kernel is missing support for something? Thanks, Glyn
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Glyn Astill wrote:
> I'm attempting to run 8.3.3 on an old cobalt qube, with debian etch. [...] when I run make check, it fails in initdb. Here is the message and the initdb log file contents showing the bus error: http://privatepaste.com/47jTTGw5XC

Looks like a problem trying to determine how much shared memory it can allocate:

selecting default shared_buffers/max_fsm_pages ... sh: line 1: 22630 Bus error (core dumped) /usr/pgsql_src/postgresql-8.3.3/src/test/regress/tmp_check/install/usr/local/pgsql/bin/postgres --boot -x0 -F -c max_connections=10 -c shared_buffers=4096 -c max_fsm_pages=204800 < /dev/null > /dev/null 2>&1

Can you support 4096 shared buffers on this? Afraid I don't know anything about shared-mem handling on mips, but it might be that the max-shared-memory detection got something wrong.

-- Richard Huxton Archonet Ltd
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
I assume it's doing it correctly:

deb:/home/glyn# cat /proc/sys/kernel/shmmax
33554432

That's right, isn't it? 4096 * 8192 = 33554432

----- Original Message -----
From: Richard Huxton [EMAIL PROTECTED]
Sent: Friday, 18 July, 2008 11:48:39 AM
Subject: Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
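The arithmetic does check out; as a quick sanity check (plain shell, not from the original post), shared_buffers memory is the buffer count times the 8 kB block size:

```shell
# 4096 buffers of 8192 bytes each, compared against shmmax above
echo $((4096 * 8192))    # prints 33554432
```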
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Glyn Astill wrote:
> I assume it's doing it correctly:
> deb:/home/glyn# cat /proc/sys/kernel/shmmax
> 33554432
> That's right, isn't it? 4096 * 8192 = 33554432

Does shmall allow for any more? Other processes may be preventing you from allocating all of that. Of course, ideally you'd get an error message regarding allocation rather than a bus error.

-- Richard Huxton Archonet Ltd
Re: [GENERAL] Reducing memory usage of insert into select operations?
Francisco Reyes wrote:
> The OS triggered the out-of-memory killer (oom-killer). The table I am selecting from has a few hundred million rows. The table I am inserting into has partitions. I am benchmarking breaking up a large table into smaller partitions.

Is the partition split done with triggers or rules?

-- Richard Huxton Archonet Ltd
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Yes, I think it does:

deb:/home/glyn# cat /proc/sys/kernel/shmall
2097152
deb:/home/glyn# getconf PAGE_SIZE
4096

----- Original Message -----
From: Richard Huxton [EMAIL PROTECTED]
Sent: Friday, 18 July, 2008 12:36:30 PM
Subject: Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Glyn Astill wrote:
> Yes, I think it does:
> deb:/home/glyn# cat /proc/sys/kernel/shmall
> 2097152
> deb:/home/glyn# getconf PAGE_SIZE
> 4096

Well, if it's using PAGE_SIZE then that's 8GB, which sounds optimistic for a Qube. Presumably it represents some theoretical maximum. Did a previous version of PG work on this box? Actually, does Debian's package of PG (whatever version) work on this box?

-- Richard Huxton Archonet Ltd
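For readers following the arithmetic: shmall is counted in pages rather than bytes, which is where the 8GB figure comes from (quick shell check, not from the original thread):

```shell
# shmall limit in bytes = pages * page size
echo $((2097152 * 4096))    # prints 8589934592, i.e. 8 GB
```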
Re: [GENERAL] TODO list and hyphen
Bruce Momjian wrote:
> Scara Maccai wrote:
>> Hi, I was looking at the TODO: http://www.postgresql.org/docs/faqs.TODO.html "A hyphen, -, marks changes that will appear in the upcoming 8.4 release." Well, searching for the - sign is complicated... it's obviously used for a lot of other things... could you use another character?
> I think a search for ' -' will show you the items you want.

Might it be worth documenting that at the top of the page? I know I've struggled to look down the list before myself.
Re: [GENERAL] TODO list and hyphen
Russ Brown wrote:
> Bruce Momjian wrote:
>> I think a search for ' -' will show you the items you want.
> Might it be worth documenting that at the top of the page? I know I've struggled to look down the list before myself.

You are the first person to ever ask, and searching for ' -' is pretty basic. If it is a problem, I think some other symbol should be used.

-- Bruce Momjian [EMAIL PROTECTED] http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [GENERAL] Reducing memory usage of insert into select operations?
On Fri, Jul 18, 2008 at 12:18 AM, Francisco Reyes [EMAIL PROTECTED] wrote:
> Douglas McNaught writes:
>> It does seem that reducing work_mem might help you, but others on this
> I reduced it from 256MB to 64MB. It seems it is helping.

You should also look at your memory overcommit settings (in /proc/sys/vm). You can set things up so that Postgres gets a malloc() failure (which it is generally prepared to cope with cleanly) when the system runs out of RAM, rather than having the OOM killer go off and hit it with SIGKILL. Overcommit is useful in some contexts (Java apps tend to map a lot more memory than they actually use), but for a dedicated database server you really don't ever want to have the OOM killer triggered.

-Doug
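On Linux this is the vm.overcommit_memory knob; a minimal sketch of what the advice amounts to (values illustrative, not taken from the post; requires root):

```shell
# Strict overcommit accounting: malloc() fails cleanly instead of the
# OOM killer firing. Make it persistent via /etc/sysctl.conf.
sysctl -w vm.overcommit_memory=2
```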
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
I thought similar; I assumed it was some sort of generic maximum. The Qube is very old, so there's only 196MB of RAM in it. Nope, never tried Postgres on this Qube before. I've not tried Debian's package; it was 8.1 as I recall. I'll give the package a go purely as a test, unless anyone can see any other reasons for the errors? I wanted to have 8.3.3 from source on it so I could poke around.

----- Original Message -----
From: Richard Huxton [EMAIL PROTECTED]
Sent: Friday, 18 July, 2008 2:09:28 PM
Subject: Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
[GENERAL] SSPI/Kerberos support for ODBC on Windows 2008 Server
I have my Windows 2008 stand-alone server configured so I can log on using my MIT Kerberos credentials. I have an SSPI-enabled PuTTY which allows single sign-on with Kerberos. This much is working correctly. My question: does the current ODBC driver support this type of 'native' configuration on Windows? By that, I mean I haven't installed anything from MIT on the Windows machine; I have simply run these commands:

ksetup /setdomain DOMAIN.TLD
ksetup /addkdc DOMAIN.TLD host.domain.tld
ksetup /setcomputerpassword x
ksetup /mapuser user [EMAIL PROTECTED]

Of course the needed principals have been created on the KDC. When I test the connection, I get the following error:

pg_krb5_init: krb5_cc_get_principal: No credentials cache found

Thanks, Blake
Re: [GENERAL] query optimization
I sent this follow-up in yesterday, but it did not show up. Must be doing something wrong. Here is the second try. -- kd

select * from security sec where getsectypekey('OP') = sec.securitytypekey

returns 690 rows in 1625ms.

EXPLAIN:
Seq Scan on security sec (cost=0.00..507.54 rows=602 width=374)
  Filter: (getsectypekey('OP'::bpchar) = securitytypekey)

select * from security sec where ( select getsectypekey('OP') ) = sec.securitytypekey

returns 690 rows in 172ms.

EXPLAIN:
Bitmap Heap Scan on security sec (cost=16.93..368.36 rows=602 width=374)
  Recheck Cond: ($0 = securitytypekey)
  InitPlan
    ->  Result (cost=0.00..0.01 rows=1 width=0)
  ->  Bitmap Index Scan on security_sectypekey (cost=0.00..16.77 rows=602 width=0)
        Index Cond: ($0 = securitytypekey)

So this proves it is using the index. But I think the issue is in the interaction between the number of rows returned from positions_gsco and security.

kd

-----Original Message-----
From: Klint Gore [mailto:[EMAIL PROTECTED]]
Sent: Thursday, July 17, 2008 7:41 PM
To: Kevin Duffy
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] query optimization

Kevin Duffy wrote:
> So here are the questions for the PSQL gurus: Is getsectypekey('CFD') executing for every join (or possible join) between positions_gsco and security, causing a scan of security for every possible join? Does '(select getsectypekey('CFD'))' cause the getsectypekey() function to be executed once, thus allowing the index on security to be used? And of course '5' makes things simple: the index on security is used. Am I posting this in the right place? If not, please help me correct my error and point me to the correct spot.

Is the function stable or volatile? As Scott Marlowe suggested, you need to look at the explain results to find out what the plan is in each case.

klint.

-- Klint Gore Database Manager Sheep CRC A.G.B.U. University of New England Armidale NSW 2350 Ph: 02 6773 3789 Fax: 02 6773 3266 EMail: [EMAIL PROTECTED]
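For readers wondering why volatility matters here: a VOLATILE function (the default) must be re-evaluated per row, which prevents the planner from treating it as a constant and using the index. If getsectypekey() always returns the same key for the same input within a statement, it can be marked STABLE. A hedged sketch, with the argument type assumed from the queries above:

```sql
-- Assumes getsectypekey takes a single bpchar argument, as the
-- 'OP'::bpchar cast in the plan suggests; adjust to the real signature.
ALTER FUNCTION getsectypekey(bpchar) STABLE;
```

With the function STABLE, the subselect workaround in the second query should no longer be necessary.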
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 9:53 am 07/18/08 Douglas McNaught [EMAIL PROTECTED] wrote:
> for a dedicated database server you really don't ever want to have the OOM killer triggered.

Found that yesterday (vm.overcommit_memory=2). Agreed that this is better than the OOM killer. I still ran out of memory last night and postgres just failed on the malloc(), which as you mentioned is better. Reduced work_mem to 8MB and trying again.
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 8:13 am 07/18/08 Richard Huxton [EMAIL PROTECTED] wrote:
> Is the partition split done with triggers or rules?

I have a single trigger+function combo that dynamically computes which partition the data has to go to.
Re: [GENERAL] C-procedure crashed in Postgres 8.3.3 when using 'text' variable (WinXP) - additional
el dorado [EMAIL PROTECTED] writes:
> (By the way, I can't compile it as C++ Code (/TP))

No, you can't.

> d:\pgsql83\getstring\c_getstring.c(75) : warning C4311: 'type cast' : pointer truncation from 'char [8]' to 'Datum'
> d:\pgsql83\getstring\c_getstring.c(75) : warning C4312: 'type cast' : conversion from 'Datum' to 'Pointer' of greater size
> d:\pgsql83\getstring\c_getstring.c(75) : warning C4311: 'type cast' : pointer truncation from 'varlena *' to 'Datum'

These look like you are trying to compile in a 64-bit environment. We don't (yet) support building 64-bit on Windows. Use 32-bit.

regards, tom lane
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Glyn Astill [EMAIL PROTECTED] writes:
> I'm attempting to run 8.3.3 on an old cobalt qube, with debian etch. [...] when I run make check, it fails in initdb. Here is the message and the initdb log file contents showing the bus error.

Could we see a gdb stack trace from that core dump?

regards, tom lane
Re: [GENERAL] Reducing memory usage of insert into select operations?
Francisco Reyes wrote:
> On 8:13 am 07/18/08 Richard Huxton [EMAIL PROTECTED] wrote:
>> Is the partition split done with triggers or rules?
> I have a single trigger+function combo that dynamically computes which partition the data has to go to.

I'm wondering whether it's memory usage either for the trigger itself, or for the function (pl/pgsql?). If you're doing something like:

INSERT INTO partitioned_table SELECT * FROM big_table

then that's not only taking place within a single transaction, but within a single statement. Without being a hacker, I'd say it's entirely plausible that PG might not clean up trigger state until the end of a statement, meaning you would need memory for 200 million+ trigger firings. Alternatively, it could be a memory leak somewhere in the pl/pgsql or trigger code. Wouldn't have to be much to affect this particular case.

What happens if you do the insert/select in stages, but all in one transaction? Do you see PG's memory requirement stay constant or grow in steps? That will show whether the memory is growing over the duration of a statement or a transaction.

BEGIN;
INSERT ... SELECT ... WHERE id BETWEEN 0 AND 99;
INSERT ... SELECT ... WHERE id BETWEEN 100 AND 199;
...
COMMIT;

-- Richard Huxton Archonet Ltd
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 11:25 am 07/18/08 Richard Huxton [EMAIL PROTECTED] wrote:
> I'm wondering whether it's memory usage either for the trigger itself, or for the function (pl/pgsql?).

Good point.

> If you're doing something like: INSERT INTO partitioned_table SELECT * FROM big_table, then that's not only taking place within a single transaction, but within a single statement.

Correct. I have kept decreasing work_mem and that does not seem to help.

> Without being a hacker, I'd say it's entirely plausible that PG might clean up triggers at the end of a statement, meaning you would need memory for 200 million+ triggers.

Sure hope that is not the case.

> Alternatively, it could be a memory leak somewhere in the pl/pgsql or trigger code. Wouldn't have to be much to affect this particular case.

Will post an strace.

> What happens if you do the insert/select in stages but all in one transaction?

Will test. The data is about a year's worth. I will try to do one month at a time, within a single transaction. A single month finishes fine.

> Do you see PG's memory requirement stay constant or grow in steps? That will show whether the memory is growing over the duration of a statement or a transaction.

Right now, for the single statement/transaction (the one big process), it is growing slowly over time. It may be a leak. It seems to start growing somewhere between the 1st and 2nd hour, and it seems to always fail around 4 hours. I wrote a little process that shows the amount of free memory every 15 minutes. I will post the strace for the big process, and then will try breaking the process down by month (but within a single transaction) and report the results later.
Re: [GENERAL] Reducing memory usage of insert into select operations?
On Fri, Jul 18, 2008 at 10:40:02AM -0400, Francisco Reyes wrote:
> Found that yesterday (vm.overcommit_memory=2). Agree that this is better than OOM. I still ran out of memory last night and postgres just failed on the malloc(), which as you mentioned is better. Reduced work_mem to 8MB and trying again.

Perhaps you can try reducing shared_buffers, to see if that helps more? 8MB is quite small for work_mem; more shared_buffers is not necessarily better. Also, how much swap are you running? Overcommit disabled while not having any swap set up is a great way to ensure you run out of memory quickly.

Have a nice day,
-- Martijn van Oosterhout [EMAIL PROTECTED] http://svana.org/kleptog/ Please line up in a tree and maintain the heap invariant while boarding. Thank you for flying nlogn airlines.
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 11:25 am 07/18/08 Richard Huxton [EMAIL PROTECTED] wrote:

Strace of the single/large process. Again, all the query is doing is: insert into file select subquery. The strace is pretty much a repetition of the lines below:

semop(557057, 0x7fbfffdfb0, 1) = 0
lseek(100, 0, SEEK_END) = 671719424
write(100, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
lseek(508, 0, SEEK_END) = 55697408
write(508, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
read(381, \0\0\0\0\224\21\0\225o\10\0\30\331c\0c\225%w(\0\0\0\0\0\0\0\0\0\5\0..., 8192) = 8192
semop(557057, 0x7fbfffd1a0, 1) = 0
lseek(100, 0, SEEK_END) = 671727616
write(100, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
semop(557057, 0x7fbfffd1c0, 1) = 0
semop(557057, 0x7fbfffd1a0, 1) = 0
semop(557057, 0x7fbfffd1c0, 1) = 0
read(381, w\317\21\0]9\0\177\246eA(\0\0\0\0\0\0\0\0\0\5\0\2\0\30\0.\v\0\0..., 8192) = 8192
semop(557057, 0x7fbfffd1a0, 1) = 0
lseek(512, 0, SEEK_END) = 48144384
write(512, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
semop(557057, 0x7fbfffd1c0, 1) = 0
lseek(100, 0, SEEK_END) = 671735808
write(100, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
lseek(517, 0, SEEK_END) = 89309184
write(517, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
semop(557057, 0x7fbfffd1c0, 1) = 0
semop(557057, 0x7fbfffddd0, 1) = 0
lseek(100, 0, SEEK_END) = 671744000
write(100, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
read(381, \212\225\202(\0\0\0\0\0\0\0\0\0\5\0\2\0\30\\v\0\0\1\0\23\2\0\0\0\t..., 8192) = 8192
lseek(510, 0, SEEK_END) = 29351936
write(510, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
lseek(100, 0, SEEK_END) = 671752192
write(100, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
semop(557057, 0x7fbfffddf0, 1) = 0
read(381, \0\0\0\0\0\0\5\0\2\0\30\0001\v\0\0\0\0\23\2\0\0\0\30\0\4\20\0\302\326\0\0..., 8192) = 8192
lseek(513, 0, SEEK_END) = 19316736
write(513, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
lseek(100, 0, SEEK_END) = 671760384
write(100, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
read(381, [EMAIL PROTECTED]..., 8192) = 8192
lseek(100, 0, SEEK_END) = 671768576
write(100, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
lseek(518, 0, SEEK_END) = 55025664
write(518, \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0..., 8192) = 8192
semop(557057, 0x7fbfffd1c0, 1) = 0
semop(557057, 0x7fbfffd1c0, 1) = 0
semop(557057, 0x7fbfffd1c0, 1) = 0
lseek(100, 0, SEEK_END) = 671776768
[GENERAL] Writing a user defined function
Hello, I am trying to code a simple UDF in Postgres. How do I write SQL commands in pl/pgsql? The following code doesn't work:

CREATE OR REPLACE FUNCTION udf() RETURNS integer AS $$
BEGIN
for i in 1..2000 loop
for j in 1...1 loop
end loop;
begin work;
declare cust scroll cursor for select * from tpcd.customer;
FETCH FORWARD 5 FROM cust;
end loop;
CLOSE cust;
COMMIT work;
return 1;
end;
$$ LANGUAGE plpgsql;

select udf();

thanks, Suresh
Re: [GENERAL] TODO list and hyphen
> You are the first person to ever ask, and searching for ' -' is pretty basic. If it is a problem, I think some other symbol should be used.

Using Opera it doesn't work... and with Firefox you still get a lot of unwanted matches... Of course, this is not a problem; I was just thinking that another character would be better.
Re: [GENERAL] Reducing memory usage of insert into select operations?
Francisco Reyes wrote:
> Strace of the single/large process. Again, all the query is doing is: insert into file select subquery. The strace is pretty much a repetition of the lines below.

Do you have long-running transactions? (For example, transactions that have been idle for a long time.)

-- Alvaro Herrera http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [GENERAL] Writing a user defined function
On Fri, Jul 18, 2008 at 12:07 PM, Suresh_ [EMAIL PROTECTED] wrote:
> Hello, I am trying to code a simple UDF in Postgres. How do I write SQL commands in pl/pgsql? The following code doesn't work. [...]
> CREATE OR REPLACE FUNCTION udf() RETURNS integer AS $$
> BEGIN
> for i in 1..2000 loop
> for j in 1...1 loop
> end loop;
> begin work;

Postgres doesn't let you do transactions inside a function. Take out the BEGIN and COMMIT, and if you still get errors, post the function code and the error message that you get.

-Doug
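Beyond removing the transaction commands, the explicit cursor declaration can be replaced by plpgsql's FOR loop over a query, which opens and closes the cursor implicitly. A hedged sketch of the function (the table name is taken from the original post; the loop body is a placeholder):

```sql
CREATE OR REPLACE FUNCTION udf() RETURNS integer AS $$
DECLARE
    r tpcd.customer%ROWTYPE;
BEGIN
    FOR r IN SELECT * FROM tpcd.customer LOOP
        NULL;  -- placeholder: process each row here
    END LOOP;
    RETURN 1;
END;
$$ LANGUAGE plpgsql;
```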
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Could this be any less informative?

Core was generated by `/usr/pgsql_src/postgresql-8.3.3/src/test/regress/tmp_check/install/usr/local/pg'.
Program terminated with signal 10, Bus error.
#0  0x007572d0 in ?? ()

----- Original Message -----
From: Tom Lane [EMAIL PROTECTED]
Sent: Friday, 18 July, 2008 4:01:23 PM
Subject: Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error

> Could we see a gdb stack trace from that core dump?
> regards, tom lane
Re: [GENERAL] autovacuum logging 8.1
[EMAIL PROTECTED] wrote:
> Is there a way to disable some of the autovacuum logging?

Yes: upgrade to a newer version. Autovacuum was primitive in 8.1; it was the first release to include it. Bear with us.

-- Alvaro Herrera http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
[GENERAL] Returns cursor?
Hi Guru, can a function return a cursor? Thank you! Domenico
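Yes -- a PL/pgSQL function can return a refcursor. A minimal sketch (the function and table names here are made up for illustration):

```sql
CREATE OR REPLACE FUNCTION get_rows() RETURNS refcursor AS $$
DECLARE
    c refcursor := 'mycursor';  -- give the portal a predictable name
BEGIN
    OPEN c FOR SELECT * FROM some_table;  -- some_table is hypothetical
    RETURN c;
END;
$$ LANGUAGE plpgsql;

-- Cursors only live inside a transaction, so the caller does:
-- BEGIN;
-- SELECT get_rows();
-- FETCH ALL IN mycursor;
-- COMMIT;
```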
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 12:03 pm 07/18/08 Martijn van Oosterhout [EMAIL PROTECTED] wrote: Perhaps you can try reducing the shared_buffers, to see if that helps more? Will try. 8MB is quite small for work_mem. More shared_buffers is not necessarily better. Ok, but from everything I had read, shared_buffers of 1/4 of RAM seemed like a starting point. Will try reducing it to 2GB. Also, how much swap are you running? Started out with 12GB (same as memory) and last night I added 24GB more. I had 2 instances of inserts going, so each exhausted about 18GB of RAM!
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 12:23 pm 07/18/08 Alvaro Herrera [EMAIL PROTECTED] wrote: Do you have long-running transactions? (For example transactions that have been idle for a long time). No. The two inserts I was running were the only processes. I even did a restart to make sure there was absolutely nothing else running and to make sure all my postgresql.conf settings were in effect. Given that memory grows over time, I am beginning to wonder if it is some type of memory leak. Just installed the postgresql debug rpm, but not sure if it did anything.. strace doesn't look any different.. read(81, 2\1\0\0\260~!\16\1\0\0\0\370\1\0\2\0 \4 \0\0\0\0\300\237r\0\200\237r\0..., 8192) = 8192 write(191, Q=J\313\253]1\0\0\0\1\0007\33\4\0\2\0\2\t\30\0\3\302\204\0;a1OjG..., 8192) = 8192 write(160, XQxbqQEx+yo=H\333o\2371\0\0\0\1\0.\33C\0\2\0\2\t\30\0..., 8192) = 8192 read(81, 2\1\0\0\320(\301\17\1\0\0\0\370\1\0\2\0 \4 \0\0\0\0\300\237r\0\200\237r\0..., 8192) = 8192
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Glyn Astill [EMAIL PROTECTED] writes: Could this be any less informative? Core was generated by `/usr/pgsql_src/postgresql-8.3.3/src/test/regress/tmp_check/install/usr/local/pg'. Program terminated with signal 10, Bus error. #0 0x007572d0 in ?? () Probably not :-(. Did you build with --enable-debug? regards, tom lane
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
No. Will recompile with debug info and post back when done. - Original Message From: Tom Lane [EMAIL PROTECTED] To: Glyn Astill [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Friday, 18 July, 2008 5:50:05 PM Subject: Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Re: [GENERAL] Reducing memory usage of insert into select operations?
Francisco Reyes [EMAIL PROTECTED] writes: Given that memory grows over time I am beggining to wonder if it is some type of memory leak. Are there any AFTER triggers (including foreign key constraints) on the table being inserted into? If so the list of pending trigger events might be your problem. If you can get Postgres to report an actual out-of-memory error (as opposed to crashing from OOM kill) then it should dump a memory usage map into the postmaster log. Looking at that would be informative. regards, tom lane
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
Glyn Astill [EMAIL PROTECTED] writes: No. Will recompile with debug info and post back when done. FWIW, the most likely issue here is the MIPS-specific assembly code in src/include/storage/s_lock.h --- I'm not sure how many MIPS platforms that's really been exercised on, but it may not work on yours. While you're waiting for the rebuild you might try to find a MIPS guru to show that code to. regards, tom lane
Re: [GENERAL] Initdb problem on debian mips cobalt: Bus error
--- On Fri, 7/18/08, Glyn Astill [EMAIL PROTECTED] wrote: From: Glyn Astill [EMAIL PROTECTED] Subject: [GENERAL] Initdb problem on debian mips cobalt: Bus error To: pgsql-general@postgresql.org Date: Friday, July 18, 2008, 10:26 AM Hi Chaps, I'm attempting to run 8.3.3 on an old cobalt qube, with debian etch. It appeared to compile ok (however I didn't stick around to watch, that'd be painful) and said PostgreSQL compiled successfully and ready to install or whatever, but when I run make check, it fails in initdb. Here is the message and the initdb log file contents showing the bus error: http://privatepaste.com/47jTTGw5XC The user who executes initdb must be the owner of the directory where the cluster is created; check the directory's owner and permissions. I've configured it as I usually do: ./configure --with-perl --with-python --with-tcl --with-openssl --with-pam --with-krb5. Of course running on a more obscure machine I don't expect too much to be honest, but does anyone have any ideas? Perhaps the kernel is missing support for something? Thanks Glyn
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 1:00 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote: Are there any AFTER triggers (including foreign key constraints) I have two foreign key constraints. the table being inserted into? If so the list of pending trigger events might be your problem. I guess I can try disabling the foreign keys, but that would be less than ideal for production. This is an analytics environment so all operations are in bulk. If you can get Postgres to report an actual out-of-memory error (as opposed to crashing from OOM kill) Disabled oom with vm.overcommit_memory=2. then it should dump a memory usage map into the postmaster log. Looking at that would be informative. Got it. -- AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6 chunks); 1055316 used ExecutorState: 122880 total in 4 blocks; 68040 free (8 chunks); 54840 used Operator lookup cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 used Operator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used MessageContext: 131072 total in 5 blocks; 50712 free (291 chunks); 80360 used smgr relation table: 24576 total in 2 blocks; 3584 free (4 chunks); 20992 used TransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 used Portal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 used PortalMemory: 8192 total in 1 blocks; 7888 free (0 chunks); 304 used PortalHeapMemory: 1024 total in 1 blocks; 768 free (0 chunks); 256 used ExecutorState: 98784 total in 8 blocks; 24064 free (22 chunks); 74720 used ExprContext: 8192 total in 1 blocks; 8016 free (0 chunks); 176 used HashTableContext: 8192 total in 1 blocks; 8064 free (1 chunks); 128 used HashBatchContext: 532676656 total in 74 blocks; 1863936 free (5 chunks); 530812720 used HashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used HashBatchContext: 415227952 total in 59 blocks; 6589744 free (5 chunks); 408638208 used ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used ExprContext: 0 total in 0 blocks; 0 free 
(0 chunks); 0 used ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used ExprContext: 8192 total in 1 blocks; 8136 free (0 chunks); 56 used Relcache by OID: 24576 total in 2 blocks; 8672 free (3 chunks); 15904 used CacheMemoryContext: 2390256 total in 22 blocks; 751904 free (2 chunks); 1638352 used CachedPlan: 1024 total in 1 blocks; 336 free (0 chunks); 688 used CachedPlanSource: 1024 total in 1 blocks; 80 free (0 chunks); 944 used SPI Plan: 1024 total in 1 blocks; 808 free (0 chunks); 216 used CachedPlan: 7168 total in 3 blocks; 3120 free (0 chunks); 4048 used CachedPlanSource: 7168 total in 3 blocks; 1816 free (0 chunks); 5352 used SPI Plan: 1024 total in 1 blocks; 784 free (0 chunks); 240 used CachedPlan: 3072 total in 2 blocks; 792 free (0 chunks); 2280 used CachedPlanSource: 7168 total in 3 blocks; 3600 free (0 chunks); 3568 used SPI Plan: 1024 total in 1 blocks; 800 free (0 chunks); 224 used pg_cast_source_target_index: 2048 total in 1 blocks; 608 free (0 chunks); 1440 used pg_language_oid_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used pg_toast_2619_index: 2048 total in 1 blocks; 608 free (0 chunks); 1440 used pg_amop_opr_fam_index: 2048 total in 1 blocks; 608 free (0 chunks); 1440 used tcf_mnfoids_partid: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used tcf_mnfoids_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used cards_cardnum_key: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used cards_pkey: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used tcf_original_trans_partid_cardnum: 2048 total in 1 blocks; 656 free (0 chunks); 1392 used tcf_original_trans_yearmo: 2048 total in 1 blocks; 752 free (0 chunks); 1296 used pg_constraint_contypid_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used pg_constraint_conname_nsp_index: 2048 
total in 1 blocks; 608 free (0 chunks); 1440 used pg_operator_oprname_l_r_n_index: 2048 total in 1 blocks; 392 free (0 chunks); 1656 used pg_proc_proname_args_nsp_index: 2048 total in 1 blocks; 584 free (0 chunks); 1464 used pg_proc_oid_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used pg_shdepend_reference_index: 2048 total in 1 blocks; 608 free (0 chunks); 1440 used pg_namespace_oid_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used pg_statistic_relid_att_index: 2048 total in 1 blocks; 608 free (0 chunks); 1440 used pg_inherits_relid_seqno_index: 2048 total in 1 blocks; 608 free (0 chunks); 1440 used pg_constraint_oid_index: 2048 total in 1 blocks; 704 free (0 chunks); 1344 used
Re: [GENERAL] Reducing memory usage of insert into select operations?
Francisco Reyes [EMAIL PROTECTED] writes: On 1:00 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote: If you can get Postgres to report an actual out-of-memory error (as opposed to crashing from OOM kill) then it should dump a memory usage map into the postmaster log. Looking at that would be informative. Got it. AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6 chunks); 1055316 used Well, that's definitely your problem ... HashBatchContext: 532676656 total in 74 blocks; 1863936 free (5 chunks); 530812720 used HashBatchContext: 415227952 total in 59 blocks; 6589744 free (5 chunks); 408638208 used although these numbers seem way outta line too. What did you say you had work_mem set to? regards, tom lane
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 3:55 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote: AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6 chunks); 1055316 used Well, that's definitely your problem ... So I need to remove the foreign constraints? HashBatchContext: 415227952 total in 59 blocks; 6589744 free (5 chunks); 408638208 used although these numbers seem way outta line too. What did you say you had work_mem set to? Initially on the first crash it was 256MB. I believe at the time of the crash I got the dump for it was down to 64MB or 8MB. I kept trying lower values. Also tried reducing shared_buffers as someone suggested. I will bump my shared_buffers back to 3GB and work_mem back to 64MB.
Re: [GENERAL] Writing a user defined function
On 18/07/2008 17:07, Suresh_ wrote: CREATE OR REPLACE FUNCTION udf() RETURNS integer AS $$ BEGIN for i in 1..2000 loop for j in 1...1 loop end loop; begin work; ^^- Here's your problem! You can't have a transaction inside a function - the function is already executed inside a transaction. Ray. -- Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland [EMAIL PROTECTED] Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
[GENERAL] Calling Python functions with parameters
I am having a problem with the simplest of Python functions, so I must be doing something wrong. After hours of searching and trying many options, I need the key that my puny brain is missing here. I cannot pass parameters to a plpythonu function. I have tried within psql and with pgAdmin III (which adds IN or INOUT to the parameter list - which I can't find documented). I'm an advanced Python programmer but a beginning PostgreSQL user. Here is what I have, which I copied almost verbatim from example code: test_dev-# \p create or replace function testf5i(a integer,b integer) RETURNS integer AS $$ if a > b: return a return b $$ language plpythonu test_dev-# \g CREATE FUNCTION test_dev=# select testf5i(1,2); ERROR: plpython: function testf5i failed DETAIL: exceptions.NameError: global name 'a' is not defined If I remove the parameters and replace the a and b variables with numbers, it works fine. Any clues for me would be much appreciated! I'm using PostgreSQL 8.1.10 on Windows (for dev) and 8.1.2 in production on Linux. --Puzzled in Portland PS. What I need to do, for which I also could not find examples on the mailing lists or the Internet, is to de-normalize some tables (user, addresses, phones, emails) into one big view and then update the proper tables upon updates. The web application then can just get one row and not have to deal with all the different tables. I have this working in theory by using a rule for the user's fields and another rule for when a fax number changes. The problem being I would need too many rules to be easily editable (I would prefer one or two source files so I can search-replace, and put in version control.) I would need five rules for each field (ON UPDATE .. UPDATE if the values change, ON UPDATE .. INSERT if a value were blank and now exists, ON UPDATE .. DELETE if the value was set and is now blank, ON INSERT, and ON DELETE). There are four sets of fields (address, fax, phone, email), and 5 copies (email1, email2, email3, email4, email5).
I REALLY don't want to maintain 80 rules! So I thought I would just create a function and call it with the built-ins OLD and NEW and program a few simple loops. But I cannot pass parameters correctly to the function. (And I'm surprised that I could find no examples of anyone already doing this? Is there something inherently wrong with this approach?)
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 3:55 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote: AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6 chunks); 1055316 used Well, that's definitely your problem ... What is the overhead for each AfterTriggerEvent? I guess I can write a program to process so many rows at a time, if I know how much overhead each AfterTriggerEvent uses. I know 15 million at a time worked fine, so I could do 5 or 10 million at a time. When does the memory usage for those AfterTriggerEvents get released? At commit?
Re: [GENERAL] Reducing memory usage of insert into select operations?
Francisco Reyes [EMAIL PROTECTED] writes: On 3:55 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote: AfterTriggerEvents: 10553909248 total in 1268 blocks; 20432 free (6 chunks); 1055316 used Well, that's definitely your problem ... So I need to remove the foreign constraints? Either that or do the update in sections. But working through umpteen gig of pending trigger events would take forever anyway --- dropping and re-adding the FK constraint is almost certainly a better way. HashBatchContext: 415227952 total in 59 blocks; 6589744 free (5 chunks); 408638208 used although these numbers seem way outta line too. What did you say you had work_mem set to? Initially on the first crash it was 256MB. I believe at the time of the crash I got the dump for it was down to 64MB or 8MB. Something fishy about that. The max size of a HashBatchContext should be work_mem, more or less (the accounting isn't perfectly accurate I think, but it's not off by an order of magnitude). The only thing I can think of is that you had a huge number of rows with all the same hash value, so that there wasn't any way to split the batch into smaller sections. What are the join keys exactly in this query, and what can you tell us about their data distributions? regards, tom lane
Re: [GENERAL] Reducing memory usage of insert into select operations?
Francisco Reyes [EMAIL PROTECTED] writes: What is the overhead for each AfterTriggerEvent? On a 64-bit machine it looks like they'd cost you about 80 bytes each :-(. A good deal of that is palloc overhead --- I wonder if we should get rid of the separate-palloc-for-each-event design? When does the memory usage for those AfterTriggerEvents gets released? At commit? Whenever the check is done; you'd have to read the rules about deferred constraints ... regards, tom lane
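As a rough back-of-the-envelope check (my arithmetic, not from the thread): at ~80 bytes per pending event, the ~234M rows estimated in the EXPLAIN output shown later in this thread would queue on the order of 17-18 GB of trigger events for a single FK, which lines up with the ~18 GB growth per insert reported earlier.

```python
# Back-of-the-envelope estimate of memory held by pending AFTER-trigger
# events (FK checks) during a bulk insert. The 80 bytes/event figure is
# the one quoted above for a 64-bit build; whether each FK constraint
# queues its own event per row is an assumption exposed as a parameter.
BYTES_PER_EVENT = 80

def trigger_event_memory(rows, fk_constraints=1):
    """Approximate bytes queued as deferred trigger events."""
    return rows * fk_constraints * BYTES_PER_EVENT

rows = 234_402_352  # planner's row estimate from the EXPLAIN output
gib = trigger_event_memory(rows) / 2**30
print(f"{gib:.1f} GiB")  # prints "17.5 GiB"
```

This also suggests why batching works: capping each batch at, say, 10 million rows bounds the event list at roughly 0.75 GiB per FK.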
Re: [GENERAL] Calling Python functions with parameters
-- Original message -- From: user [EMAIL PROTECTED] I am having a problem with the simplest of Python functions, so I must be doing something wrong. After hours of searching and trying many options, I need the key that my puny brain is missing here. I cannot pass parameters to a plpythonu function. I have tried within psql and with pgAdmin III (which adds IN or INOUT to the parameter list - which I can't find documented). I'm an advanced Python programmer but a beginning PostgreSQL user. Here is what I have, which I copied almost verbatim from example code: test_dev-# \p create or replace function testf5i(a integer,b integer) RETURNS integer AS $$ if a > b: return a return b $$ language plpythonu test_dev-# \g CREATE FUNCTION test_dev=# select testf5i(1,2); ERROR: plpython: function testf5i failed DETAIL: exceptions.NameError: global name 'a' is not defined Take a look at: http://www.postgresql.org/docs/8.1/interactive/plpython.html#PLPYTHON-FUNCS The form you are trying to use is supported in 8.2+. -- Adrian Klaver [EMAIL PROTECTED]
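On 8.1, a sketch of the same function using the global args list, which is how the linked documentation says parameters arrive before named parameters were added in 8.2 (untested here):

```sql
-- PostgreSQL 8.1 plpythonu: parameter names are not injected into the
-- Python namespace; arguments arrive in the global "args" list instead.
create or replace function testf5i(integer, integer) RETURNS integer AS $$
    a, b = args[0], args[1]
    if a > b:
        return a
    return b
$$ language plpythonu;

-- select testf5i(1,2);  -- should now return 2 instead of raising NameError
```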
Re: [GENERAL] Reducing memory usage of insert into select operations?
On 4:55 pm 07/18/08 Tom Lane [EMAIL PROTECTED] wrote: The only thing I can think of is that you had a huge number of rows with all the same hash value, so that there wasn't any way to split the batch into smaller sections. What are the join keys exactly in this query, and what can you tell us about their data distributions? I can't put actual table or column names, so I am putting the actual select and explain with all names changed.

insert into customer_transactions (record_id, date, type, amount, ids, groupid) select ca.record_id, coh.date, coh.type, coh.amount, coh.ids, ids.groupid from customer_original_historical coh, cards ca, customer_ids ids where ca.natural_key = coh.natural_key and ids.ids = coh.ids and coh.yearmo > '200703';

Hash Join (cost=712213.57..27293913.33 rows=234402352 width=24)
  Hash Cond: (coh.id = ids.id)
  -> Hash Join (cost=551387.26..18799378.16 rows=234402352 width=22)
       Hash Cond: (coh.user_id = ca.user_id)
       -> Seq Scan on customer_original_historical coh (cost=0.00..6702501.40 rows=234402352 width=47)
            Filter: (yearmo > '200703'::bpchar)
       -> Hash (cost=268355.67..268355.67 rows=14637567 width=32)
            -> Seq Scan on cards ca (cost=0.00..268355.67 rows=14637567 width=32)
  -> Hash (cost=77883.25..77883.25 rows=5055525 width=6)
       -> Seq Scan on customer_ids ids (cost=0.00..77883.25 rows=5055525 width=6)

There was a single table, customer_original_historical, which was using a natural key with a text field. Most queries used customer_original_historical by itself or joined against a single other table which we should call area. The new schema I am testing splits the one single table into 12 tables per month. In addition I replaced the natural keys with a synthetic integer key. I also replaced the area table with a customer_ids table which only has two columns: a synthetic key for historical and a region. In order to have 12 tables per month I grouped all the regions into 12 groups.
Queries are usually within a single region, so what I am trying to benchmark is whether dividing 24 months of data into 24 sets of 12 regions will perform better than a single large table. The distribution of the joins is: there are about 1,000,000 unique natural keys. Each natural key has on average 15 rows per month. ids are regions where the natural_keys are. Figure 10s of thousands of natural_keys to an id. Is that along the lines of what you were looking for?
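One way to act on the earlier suggestion of doing the insert in sections, using the column names from the query above (a sketch; the month values are illustrative) -- committing between batches lets the pending FK trigger events be checked and released instead of accumulating for the whole 234M-row insert:

```sql
-- Sketch: insert one month at a time so the pending AFTER-trigger event
-- list (the FK checks) stays bounded; commit between batches.
insert into customer_transactions (record_id, date, type, amount, ids, groupid)
select ca.record_id, coh.date, coh.type, coh.amount, coh.ids, ids.groupid
from customer_original_historical coh, cards ca, customer_ids ids
where ca.natural_key = coh.natural_key
  and ids.ids = coh.ids
  and coh.yearmo = '200704';   -- then repeat for '200705', '200706', ...
```

Alternatively, as suggested above, dropping the FK constraints before the bulk load and re-adding them afterward avoids the per-row event queue entirely.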
Re: [GENERAL] tsearch dictionary list?
On Fri, 18 Jul 2008, Daniel Chiaramello wrote: Hi all. I'm new to the postgresql world, and I have to extend an existing product for Thailand - that product has some features based on tsearch, and I was wondering if there is an existing dictionary for that language... I failed to find any reference to such a dictionary on the web, and of course I don't speak Thai at all! Read the documentation first and you'll be surprised. In brief, tsearch supports all dictionaries of OpenOffice http://wiki.services.openoffice.org/wiki/Dictionaries In fact, is there somewhere a list of existing tsearch dictionaries? Are Japanese, Chinese, or other exotic languages supported by tsearch? Thanks for your attention, Daniel Chiaramello Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: [EMAIL PROTECTED], http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83, +007(495)939-23-83
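For 8.3, an OpenOffice (ispell-format) dictionary is registered roughly like this -- the "thai" file names are hypothetical, and the converted .dict/.affix/.stop files would first have to be placed in $SHAREDIR/tsearch_data:

```sql
-- Sketch: wire up an ispell-format dictionary (file names hypothetical;
-- assumes thai.dict, thai.affix, thai.stop under $SHAREDIR/tsearch_data).
CREATE TEXT SEARCH DICTIONARY thai_ispell (
    TEMPLATE  = ispell,
    DictFile  = thai,
    AffFile   = thai,
    StopWords = thai
);
```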
Re: [GENERAL] Backup/Restore of single table in multi TB database
Simon Riggs wrote: Have a look at pg_snapclone. It's specifically designed to significantly improve dump times for very large objects. http://pgfoundry.org/projects/snapclone/ Also, in case the original poster is not aware, by default pg_dump allows backing up a single table. Just add -t tablename. Does pg_snapclone work mostly on large rows, or will it also be faster than pg_dump for narrow tables?
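For example (the database and table names below are placeholders):

```shell
# Dump a single table's schema and data with stock pg_dump
pg_dump -t mytable mydb > mytable.sql

# Restore it later with:
# psql mydb < mytable.sql
```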
[GENERAL] using regexp_matches and array manipulation
Can anyone give me an example of how to use regexp_matches and use the captured values? For instance, if I have a delimited string a,b,c and I want to put each letter into a variable so I can subsequently use those variables in an insert statement, how would I do that? I know regexp_matches returns a text array, but how do I assign the value to an array and then access those values? leaselog=# select regexp_matches('a,b,c','(.*),(.*),(.*)');
 regexp_matches
----------------
 {a,b,c}
(1 row)
I've tried select into, but that just created a table and didn't put the values into an array variable. leaselog=# select regexp_matches('a,b,c','(.*),(.*),(.*)') into foo; Thanks in advance! Rgds, Bret
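In PL/pgSQL, SELECT ... INTO targets a variable rather than creating a table, so the result can go straight into a text[]. A sketch (the function name is made up for illustration):

```sql
-- Sketch: capture regexp_matches() output into a text[] variable.
CREATE OR REPLACE FUNCTION split_demo(s text) RETURNS text AS $$
DECLARE
    parts text[];
BEGIN
    SELECT regexp_matches(s, '(.*),(.*),(.*)') INTO parts;
    -- parts[1], parts[2], parts[3] now hold the captured groups
    RETURN parts[2];
END;
$$ LANGUAGE plpgsql;

-- select split_demo('a,b,c');  -- returns 'b'
```

Plain SQL's SELECT INTO, by contrast, is the table-creating form the poster ran into; the variable-assignment form only exists inside PL/pgSQL.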
Re: [GENERAL] UPDATE runs slow in a transaction
Viktor Rosenfeld [EMAIL PROTECTED] writes: Postgres is indeed selecting a bad plan. Turns out that the index I created to speed up the UPDATE isn't used inside a transaction block. That doesn't make any sense to me, and in fact I cannot replicate any such behavior here. What PG version are you running, exactly? The exact test case I tried is attached --- it's just your original incomplete example with some dummy data created beforehand. I get a plan using the tmp_id index in all supported PG versions. regards, tom lane

drop table _struct, _rank;
create table _struct(token_index int, id int);
create table _rank(struct_ref int, pre int, post int);
insert into _struct select i, i from generate_series(1,1000) g(i);
insert into _rank select i, i, i from generate_series(1,1000) g(i);
analyze _struct;
analyze _rank;
begin;
-- add columns left_token, right_token and copy values from token_index
ALTER TABLE _struct ADD left_token integer;
ALTER TABLE _struct ADD right_token integer;
UPDATE _struct SET left_token = token_index;
UPDATE _struct SET right_token = token_index;
-- set left, right values for non-terminals
-- (use temporary table to get rid of joins between struct and rank)
CREATE TABLE tmp AS SELECT r.pre, r.post, s.id, s.left_token, s.right_token FROM _rank r, _struct s WHERE r.struct_ref = s.id;
CREATE INDEX idx_tmp_pre_post ON tmp (pre, post);
UPDATE tmp SET left_token = (SELECT min(t2.left_token) FROM tmp t2 WHERE t2.pre >= tmp.pre AND t2.pre <= tmp.post);
UPDATE tmp SET right_token = (SELECT max(t2.right_token) FROM tmp t2 WHERE t2.pre >= tmp.pre AND t2.pre <= tmp.post);
-- copy left, right values for everything
CREATE INDEX tmp_id ON tmp (id);
--analyze tmp;
explain UPDATE _struct SET left_token = (SELECT DISTINCT left_token FROM tmp WHERE _struct.id = tmp.id);
-- the UPDATE above takes ages when called within a transaction
UPDATE _struct SET right_token = (SELECT DISTINCT right_token FROM tmp WHERE _struct.id = tmp.id);
-- clean up
DROP TABLE tmp;
rollback;
Re: [GENERAL] 10.5 OS X ppc64 problem
Shane Ambler [EMAIL PROTECTED] writes: There has been some talk about getting postgres to build as a universal binary. The current makefiles don't support the option but a couple of people have come up with workarounds. I do believe that there are plans to add this to future releases. FYI, we are making some progress: http://archives.postgresql.org/pgsql-hackers/2008-07/msg00884.php I am not sure we'll ever bother to fix the configuration-file issue, but other than that it does seem to be feasible to build universal binaries without major hacking on the source code as of CVS HEAD. (I fixed the postgres.bki issue a few minutes ago.) regards, tom lane