[GENERAL] libpq.so.3 problem, PostgreSQL >= 8.0.2 and RPM installations
Hi,

AFAICS from the user requests, many people are not aware of the compatibility RPM we built:

http://developer.PostgreSQL.org/~devrim/compat-postgresql-libs-3-2PGDG.i686.rpm

is the compatibility RPM that fixes the problem which arose with PostgreSQL 8.0.2. With 8.0.2, the major number of libpq was bumped, and all software that depends on libpq.so.3 failed or needed to be recompiled. With this RPM, you can overcome the problem. First install this package and then install/upgrade PostgreSQL.

Once we had compat-postgresql-libs-3-1PGDG.i686.rpm (initial version), but it was *extremely* buggy.

This RPM applies to all Red Hat / Fedora Core releases. Please report any problems to [EMAIL PROTECTED]

HTH.

Regards,
--
Devrim GUNDUZ
devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr
http://www.tdmsoft.com.tr
http://www.gunduz.org

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] another failover testing question
"David Parker" <[EMAIL PROTECTED]> writes:
> Sorry, neglected the version yet again: 7.4.5. What happens is that we
> have active connections accessing tables that are being replicated by
> slony. Then somebody does an uninstall of slony, which removes the slony
> trigger from those tables. Then we start getting the OID error.

> If this should indeed not be an issue in 7.4.5, I will try to come up
> with a test case independent of a slony install.

It should not be ... at least, assuming that Slony is using the standard
DROP TRIGGER operation, rather than playing directly with the system
catalogs ...

			regards, tom lane
Re: [GENERAL] Locking rows
"gabriele zelasco" <[EMAIL PROTECTED]> writes:
> I would like to start a transaction with a sql function.
> When user press "edit" button on my form, i would lock the current row.
> After user has modified data on form, pressing "save" button I would save
> the modified row by sql update function and so commit.

This is widely considered a very bad way to design an application.
Consider what happens when the user leaves for lunch, or otherwise lets
the app sit for a long time. See the list archives for prior discussions
of the issue.

But in any case, the answer to your question is to use "SELECT FOR UPDATE"
to retrieve the row. And you can't start a transaction inside a function,
because by definition you'll already be in one.

			regards, tom lane
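Tom's two points can be sketched together in one hedged example (the table, column, and function names here are illustrative, not taken from the original post; the client drives the transaction, the function merely locks):

```sql
-- A minimal sketch: no BEGIN/COMMIT inside the function, since it
-- always runs inside the caller's transaction.
CREATE OR REPLACE FUNCTION lock_row(text)
RETURNS SETOF mytable AS $$
    SELECT * FROM mytable WHERE field = $1 FOR UPDATE;
$$ LANGUAGE sql VOLATILE;

-- Client-side flow (each statement issued by the application):
--   BEGIN;                          -- open the transaction first
--   SELECT * FROM lock_row('x');    -- the matching row is now locked
--   UPDATE mytable SET ... WHERE field = 'x';
--   COMMIT;                         -- lock is released here
```

The FOR UPDATE lock only lasts until the surrounding COMMIT, which is why the application, not the function, must own the transaction boundaries.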
Re: [GENERAL] Just a crazy idea!
On Thu, May 26, 2005 at 05:04:37PM -0400, Hrishikesh Deshmukh wrote:
> Is it possible to connect a DB in Postgresql to a DB in MySQL! I
> know its a crazy idea!

It's called DBI-Link. http://pgfoundry.org/projects/dbi-link/

Cheers,
D
--
David Fetter [EMAIL PROTECTED] http://fetter.org/
phone: +1 510 893 6100 mobile: +1 415 235 3778

Remember to vote!
Re: [GENERAL] postgresql, pgaccess and tcl/tk (in X11)
"daniellewis" <[EMAIL PROTECTED]> writes:
> PostgreSQL (Version 8.0.1). I installed from fink; tcl/tk 8.4.1-12, and
> changed the pgaccess bash script to read wish8.4. I tried to run this
> and I got the following error:
> Application initialization failed: no display name and no $DISPLAY
> environment variable
> Error in startup script: invalid command name "image"
> while executing
> "image create bitmap dnarw -data {

I believe this implies that you are running plain Tcl, not Tcl/Tk (since
image is a Tk command). Check whether the script is really being executed
by wish and not by plain tclsh.

			regards, tom lane
Re: [GENERAL] Just a crazy idea!
LOL..not looney!

On 5/26/05, Matt Miller <[EMAIL PROTECTED]> wrote:
> On Thu, 2005-05-26 at 17:21 -0400, Hrishikesh Deshmukh wrote:
> > I have a little schema in pgsql and some annotation in mysql;
> > ...
> > if i could make these two talk
> > ...
> > So the question and frankly i thought it was crazy thought!
> > The replies so far indicate that i am not looney at all ;)
>
> Well the replies seem to indicate that the question does have an
> affirmative answer. Whether Hrishikesh is looney, however, is probably
> still an open question ...
Re: [GENERAL] Just a crazy idea!
On Thu, 2005-05-26 at 17:21 -0400, Hrishikesh Deshmukh wrote:
> I have a little schema in pgsql and some annotation in mysql;
> ...
> if i could make these two talk
> ...
> So the question and frankly i thought it was crazy thought!
> The replies so far indicate that i am not looney at all ;)

Well the replies seem to indicate that the question does have an
affirmative answer. Whether Hrishikesh is looney, however, is probably
still an open question ...
Re: [GENERAL] another failover testing question
Sorry, neglected the version yet again: 7.4.5. What happens is that we
have active connections accessing tables that are being replicated by
slony. Then somebody does an uninstall of slony, which removes the slony
trigger from those tables. Then we start getting the OID error.

If this should indeed not be an issue in 7.4.5, I will try to come up
with a test case independent of a slony install.

Thanks.

- DAP

>-----Original Message-----
>From: Tom Lane [mailto:[EMAIL PROTECTED]
>Sent: Thursday, May 26, 2005 4:30 PM
>To: David Parker
>Cc: postgres general
>Subject: Re: [GENERAL] another failover testing question
>
>"David Parker" <[EMAIL PROTECTED]> writes:
>> Something that we end up doing sometimes in our failover testing is
>> removing slony replication from an "active" (data provider) server.
>> Because this involves removing triggers from tables, we end up with
>> currently connected clients getting a bunch of "OID 123 not found"
>> errors, where the OID is that of the recently removed trigger.
>
>> Is there any way short of cycling all client connections to have the
>> server processes clean that information out of their cache when an
>> object disappears like this from the database?
>
>AFAICS, there already *is* adequate interlocking for this.
>What PG version are you testing, and can you provide a
>self-contained test case?
>
>			regards, tom lane
Re: [GENERAL] Just a crazy idea!
I have a little schema in pgsql and some annotation in mysql; either way,
transfer of the schema might result in data type etc. conflicts. So if i
could make these two talk then i don't have to worry about schema transfer.
So the question, and frankly i thought it was a crazy thought! The replies
so far indicate that i am not looney at all ;)

Hrishi

On 5/26/05, Dawid Kuroczko <[EMAIL PROTECTED]> wrote:
> On 5/26/05, Hrishikesh Deshmukh <[EMAIL PROTECTED]> wrote:
> > Is it possible to connect a DB in Postgresql to a DB in MySQL!
> > I know its a crazy idea!
>
> Why, of course. Been doing that.
>
> All you need is to write a set of functions, for example in PL/perlU,
> some of them being set returning functions.
>
> For my purposes I did a function which connects to mysql to
> call its encrypt function, and a function which returns "show databases"
> and similar stuff.
>
> Basically -- yes you can do it, though things get tricky if you want
> to have _read_ access to _big_ tables. :)
>
>    Regards,
>       Dawid
>
> PS: For the reference, why do you need to connect to mysql?
Re: [GENERAL] Just a crazy idea!
On 5/26/05, Hrishikesh Deshmukh <[EMAIL PROTECTED]> wrote:
> Is it possible to connect a DB in Postgresql to a DB in MySQL!
> I know its a crazy idea!

Why, of course. Been doing that.

All you need is to write a set of functions, for example in PL/perlU,
some of them being set returning functions.

For my purposes I did a function which connects to mysql to call its
encrypt function, and a function which returns "show databases" and
similar stuff.

Basically -- yes you can do it, though things get tricky if you want
to have _read_ access to _big_ tables. :)

   Regards,
      Dawid

PS: For the reference, why do you need to connect to mysql?
Re: [GENERAL] Just a crazy idea!
Could you point to documentation regarding this? This would be a big help.

Thanks,
Hrishi

On 5/26/05, Dann Corbit <[EMAIL PROTECTED]> wrote:
> Of course it is possible. In fact, it's easy.
>
> Just use JDBC or ODBC or OLEDB or a .NET provider and join to both
> database systems.
>
> There is nothing to it.
>
> I can make a join where tables from RMS and DB/2 and Oracle and
> PostgreSQL and MySQL are all participating in the SQL statement with
> ease.
>
> This technology is 15 years old.
>
> > -----Original Message-----
> > From: [EMAIL PROTECTED] [mailto:pgsql-general-
> > [EMAIL PROTECTED] On Behalf Of Hrishikesh Deshmukh
> > Sent: Thursday, May 26, 2005 2:05 PM
> > To: Postgresql-General
> > Subject: [GENERAL] Just a crazy idea!
> >
> > Is it possible to connect a DB in Postgresql to a DB in MySQL!
> > I know its a crazy idea!
> > H
[GENERAL] Locking rows
Hi. I'm using postgresql 8.0.3 under win2000 and developing with VS2003
(npgsql .NET provider). I would like to start a transaction with a sql
function. When the user presses the "edit" button on my form, I would lock
the current row. After the user has modified data on the form, pressing the
"save" button I would save the modified row by a sql update function and so
commit. Well, the update function works fine. My problem is to lock the
current row. My code is something like:

    CREATE OR REPLACE FUNCTION "public"."new_function" ()
    RETURNS SETOF "public"."table" AS
    $body$
    begin;
    select * from table where field = 'value' for update;
    $body$
    LANGUAGE 'sql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER;

But I get:

    ERROR, BEGIN IS NOT ALLOWED IN SQL FUNCTION

What's wrong? Please tell me if it's a syntax problem. Maybe I have to use
plpgsql? How can I lock a row in plpgsql?

Thanks in advance.
[GENERAL] postgresql, pgaccess and tcl/tk (in X11)
Hello, I'm quite new to postgreSQL, pgaccess and TCL/TK... Here is my
scenario:

PROBLEM 1: I'm running X11R6 on Apple X11 (on OS X 10.3.8), and I want to
run pgaccess (of which I have version 0.98.7 from http://ns.flex.ro/pgaccess/).
I have PostgreSQL (Version 8.0.1). I installed from fink; tcl/tk 8.4.1-12,
and changed the pgaccess bash script to read wish8.4. I tried to run this
and I got the following error:

    Application initialization failed: no display name and no $DISPLAY
    environment variable
    Error in startup script: invalid command name "image"
    while executing
    "image create bitmap dnarw -data {
    #define down_arrow_width 15
    #define down_arrow_height 15
    static char down_arrow_bits[] = {
    0x00,0x80,0x00,0x80,0x0..."
    (file "/usr/local/pgaccess/main.tcl" line 5)

That's my main problem.

PROBLEM 2: I tried to fix this by downloading Tcl/Tk Aqua
(TclTkAquaBI-8.4.9.1, for Mac OS X 10.3 and later only). When I tried to
load the main.tcl script for pgaccess I got the following error:

    "Error: dyld: /Library/Frameworks/Tk.framework/Versions/8.4/Resources/Wish
    Shell.app/Contents/MacOS/Wish Shell can't open library libpgtcl.dylib
    (No such file or directory, errno = 2)"

With the following details:

    dyld: /Library/Frameworks/Tk.framework/Versions/8.4/Resources/Wish
    Shell.app/Contents/MacOS/Wish Shell can't open library: libpgtcl.dylib
    (No such file or directory, errno = 2)
    dyld: /Library/Frameworks/Tk.framework/Versions/8.4/Resources/Wish
    Shell.app/Contents/MacOS/Wish Shell can't open library: libpgtcl.dylib
    (No such file or directory, errno = 2)
    while executing
    "load libpgtcl[info sharedlibextension]"
    (procedure "main" line 3)
    invoked from within"

I have a feeling that this is something to do with the User; I should be
logged in as postgres. Any help will be appreciated... as I have to learn
PostgreSQL for work, which I start in two weeks time.

Thanks again.

Daniel Lewis
Re: [GENERAL] Just a crazy idea!
Of course it is possible. In fact, it's easy.

Just use JDBC or ODBC or OLEDB or a .NET provider and join to both
database systems.

There is nothing to it.

I can make a join where tables from RMS and DB/2 and Oracle and
PostgreSQL and MySQL are all participating in the SQL statement with
ease.

This technology is 15 years old.

> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:pgsql-general-
> [EMAIL PROTECTED] On Behalf Of Hrishikesh Deshmukh
> Sent: Thursday, May 26, 2005 2:05 PM
> To: Postgresql-General
> Subject: [GENERAL] Just a crazy idea!
>
> Is it possible to connect a DB in Postgresql to a DB in MySQL!
> I know its a crazy idea!
> H
[GENERAL] Just a crazy idea!
Is it possible to connect a DB in Postgresql to a DB in MySQL!
I know its a crazy idea!
H
Re: [GENERAL] Trigger and arguments question
On 5/26/05, Hervé Inisan <[EMAIL PROTECTED]> wrote:
> Hi everybody!
> I have a trigger like this:
>
> CREATE TRIGGER mytrigger
>    AFTER INSERT OR UPDATE OR DELETE
>    ON myschema.mytable
>    FOR EACH ROW
>    EXECUTE PROCEDURE myschema.myfunction(myarg);
>
> It sends an argument to myfunction(), and I can retrieve this value in
> TG_ARGV[0]. Fine.
> What I'm trying to do is using TG_ARGV[0] to point to a field in NEW or OLD.
> Is it possible?
>
> Something like NEW.TG_ARGV[0]...
>
> I'm trying to write a kind of generic function which I could use on multiple
> tables with different field names (myarg being the field name).
> But I can't get it to work.
>
> Any clues or other solutions?

No. The argument of the trigger must be a string literal defined at
creation time. Maybe your better solution is simply a function.

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)
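One hedged sketch of a workaround: PL/pgSQL can't dereference NEW by a runtime field name, but trigger languages that expose the row as a data structure can. The table, column, and function names below are hypothetical, and this assumes PL/Perl is installed:

```sql
-- Sketch only: a generic PL/Perl trigger that reads whichever column is
-- named by the trigger argument, looked up at runtime via the $_TD hash.
CREATE OR REPLACE FUNCTION log_named_field() RETURNS trigger AS $$
    my $field = $_TD->{args}[0];       # trigger argument (the field name)
    my $value = $_TD->{new}{$field};   # NEW.<field>, indexed by that name
    elog(NOTICE, "$field = $value");
    return;                            # AFTER trigger: return value ignored
$$ LANGUAGE plperl;

CREATE TRIGGER mytrigger
    AFTER INSERT OR UPDATE ON myschema.mytable
    FOR EACH ROW EXECUTE PROCEDURE log_named_field('myfield');
```

The same function can then be attached to multiple tables, passing each table's field name as the trigger argument.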
Re: [GENERAL] another failover testing question
"David Parker" <[EMAIL PROTECTED]> writes:
> Something that we end up doing sometimes in our failover testing is
> removing slony replication from an "active" (data provider) server.
> Because this involves removing triggers from tables, we end up with
> currently connected clients getting a bunch of "OID 123 not found"
> errors, where the OID is that of the recently removed trigger.

> Is there any way short of cycling all client connections to have the
> server processes clean that information out of their cache when an
> object disappears like this from the database?

AFAICS, there already *is* adequate interlocking for this.
What PG version are you testing, and can you provide a
self-contained test case?

			regards, tom lane
[GENERAL] Trigger and arguments question
Hi everybody!

I have a trigger like this:

    CREATE TRIGGER mytrigger
       AFTER INSERT OR UPDATE OR DELETE
       ON myschema.mytable
       FOR EACH ROW
       EXECUTE PROCEDURE myschema.myfunction(myarg);

It sends an argument to myfunction(), and I can retrieve this value in
TG_ARGV[0]. Fine.

What I'm trying to do is using TG_ARGV[0] to point to a field in NEW or
OLD. Is it possible? Something like NEW.TG_ARGV[0]...

I'm trying to write a kind of generic function which I could use on
multiple tables with different field names (myarg being the field name).
But I can't get it to work.

Any clues or other solutions?

Thanks,
--
Hervé Inisan.
Re: [GENERAL] pg_listener records
On May 26, 2005, at 2:41 PM, David Parker wrote:
> But I'm wondering - shouldn't that be part of normal server startup,
> cleaning out the pg_listener table? Or has this been addressed in
> 8.X? Or is there a reason this isn't a good idea?

Try slony 1.0.5, which fixed *many* issues and bugs. I believe this is
one of them. The upgrade should be uneventful.
Re: [GENERAL] maintenance_work_mem upper limit =1gb??
Tom Lane writes:
> Himanshu Baweja <[EMAIL PROTECTED]> writes:
>> why has been maintenance_work_mem and work_mem been
>> restricted to 1gb...
>
> So as not to overflow on 32-bit machines.

Then why not add a check during configure (before compiling) to check
whether it's a 32 or 64 bit machine? In the past we have seen servers
like DEC and Sun which, although 32 bit, used to support much more
memory. Although I can just give a suggestion... it's up to you to see
what's good and what's not.

By the way, I have started to like postgres.. it really rocks... thanks
to all the developers :)

Regards
Himanshu
[GENERAL] Archiving solution
Hi,

Does anybody know any commercial or open source archiving solutions
available out there? We need to be able to archive data/records from
certain tables that are more than 1 year old.

Thank you in advance.

J
Re: [GENERAL] pg_listener records
"David Parker" <[EMAIL PROTECTED]> writes:
> But I'm wondering - shouldn't that be part of normal server startup,
> cleaning out the pg_listener table?

Perhaps, but the code is written such that it's unlikely to be a major
problem --- notifying processes automatically clean out entries that
don't correspond to active backend PIDs. The long-term solution to
LISTEN/NOTIFY performance issues is to get rid of the table altogether,
so I don't see a lot of point in putting effort into band-aids like
cleaning out the table during restart.

			regards, tom lane
Re: [GENERAL] Can postgresql catch all the sentences applies to one table?
On May 26, 2005, at 11:55 AM, Manuel García wrote:
> Hello, does somebody know if it is possible to catch all the statements
> applied to one table, using triggers and a function in C maybe? That's
> because I need to create one log table with all the statements. Once I
> have that, I am going to use all the statements to replicate that table
> in another database, and after the first load of this table I am going
> to try to apply only the last statements registered on the source table.

There are at least two table replication projects that sound like they
will solve whatever problem you're trying to solve. No need to do all
that work re-inventing it.
[GENERAL] another failover testing question
Something that we end up doing sometimes in our failover testing is
removing slony replication from an "active" (data provider) server.
Because this involves removing triggers from tables, we end up with
currently connected clients getting a bunch of "OID 123 not found"
errors, where the OID is that of the recently removed trigger.

Is there any way short of cycling all client connections to have the
server processes clean that information out of their cache when an
object disappears like this from the database?

(I'm posting here rather than the slony list because it seems like a
general question)

Thanks.

- DAP
--
David Parker
Tazz Networks
(401) 709-5130
Re: [GENERAL] maintenance_work_mem upper limit =1gb??
Himanshu Baweja <[EMAIL PROTECTED]> writes:
> why has been maintenance_work_mem and work_mem been
> restricted to 1gb...

So as not to overflow on 32-bit machines.

			regards, tom lane
Re: [GENERAL] pg_listener records
Thanks. Yeah, I know slony 1.0.5 cleans up after itself, and is better in
general, and I want to get there, but upgrading is not an option at the
moment, unfortunately. Same for postgres 8. But it still seems like this
is something the server itself should be taking care of, not a client
process.

- DAP

>-----Original Message-----
>From: Scott Marlowe [mailto:[EMAIL PROTECTED]
>Sent: Thursday, May 26, 2005 2:51 PM
>To: David Parker
>Cc: postgres general
>Subject: Re: [GENERAL] pg_listener records
>
>On Thu, 2005-05-26 at 13:41, David Parker wrote:
>> In failover testing we have been doing recently (postgres 7.4.5 w/
>> slony 1.0.2) we have seen several times when the database comes back
>> up after a power failure it still has old pg_listener records hanging
>> around from its previous life. This causes some problems with slony,
>> but of course it is easy enough to implement a procedure to clean
>> those records out, which we have done.
>>
>> But I'm wondering - shouldn't that be part of normal server startup,
>> cleaning out the pg_listener table? Or has this been addressed in
>> 8.X? Or is there a reason this isn't a good idea?
>
>You should really be running the latest version of slony, 1.0.5.
>There were plenty of little niggling bugs in the earlier versions that
>have been fixed. I'd upgrade postgresql while I was at it too, but
>slony DEFINITELY needs to be the latest version.
>
>I'm pretty sure the problem you speak of was in fact fixed in later
>versions, btw.
Re: [GENERAL] pg_listener records
On Thu, 2005-05-26 at 13:41, David Parker wrote:
> In failover testing we have been doing recently (postgres 7.4.5 w/
> slony 1.0.2) we have seen several times when the database comes back
> up after a power failure it still has old pg_listener records hanging
> around from its previous life. This causes some problems with slony,
> but of course it is easy enough to implement a procedure to clean
> those records out, which we have done.
>
> But I'm wondering - shouldn't that be part of normal server startup,
> cleaning out the pg_listener table? Or has this been addressed in
> 8.X? Or is there a reason this isn't a good idea?

You should really be running the latest version of slony, 1.0.5. There
were plenty of little niggling bugs in the earlier versions that have
been fixed. I'd upgrade postgresql while I was at it too, but slony
DEFINITELY needs to be the latest version.

I'm pretty sure the problem you speak of was in fact fixed in later
versions, btw.
[GENERAL] pg_listener records
In failover testing we have been doing recently (postgres 7.4.5 w/
slony 1.0.2) we have seen several times that when the database comes back
up after a power failure it still has old pg_listener records hanging
around from its previous life. This causes some problems with slony, but
of course it is easy enough to implement a procedure to clean those
records out, which we have done.

But I'm wondering - shouldn't that be part of normal server startup,
cleaning out the pg_listener table? Or has this been addressed in 8.X?
Or is there a reason this isn't a good idea?

- DAP
--
David Parker
Tazz Networks
(401) 709-5130
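A cleanup of the sort described might be sketched like this (a hedged example, not the poster's actual procedure; it assumes the stats collector is enabled so pg_stat_activity reports backend PIDs):

```sql
-- Sketch: remove pg_listener rows whose listening backend no longer
-- exists. Run once at startup, before clients reconnect.
DELETE FROM pg_listener
 WHERE listenerpid NOT IN (SELECT procpid FROM pg_stat_activity);
```

Since stale rows name PIDs from before the power failure, none of them can correspond to a live backend at startup time.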
[GENERAL] maintenance_work_mem upper limit =1gb??
Why have maintenance_work_mem and work_mem been restricted to 1gb?
Although I don't need it, on the kind of server I am working on I
wouldn't mind allocating more... (32gb ram)

    #define MaxAllocSize ((Size) 0x3fffffff) /* 1 gigabyte - 1 */

Also, for those who don't know, the max shared_buffers = 262143
(= 2^31 (INT_MAX) / 8192 (size of each buffer)...), which comes to
around 2Gb. Although I would like to set this high also, the reasoning
for its limit seems logical.

Developers might want to increase these limits, with the kind of 64 bits
available now... might speed things up a little bit ;)

Regards
Himanshu
[GENERAL] Strongly-Typed Refcursor (PowerBuilder Datawindow Clients)
I'd like a function to return a strongly-typed refcursor. My goal is to
allow callers of the function to know, based on the function's return
type, the number and data types of the columns that it can expect in the
refcursor. From what I see in plpgsql, all refcursors are allowed to
point to any query at all.

In particular, my caller's environment will be Sybase PowerBuilder, and
the function will be tied to a PowerBuilder datawindow. In the past these
datawindow clients have used Oracle ref cursors, but we've preferred that
these Oracle ref cursors be strongly typed. Oracle defines a
strongly-typed ref cursor as one that includes the 'return' clause in the
ref cursor definition. For example, in Oracle:

    --beginning of code--
    create or replace package x_pkg is
      type x_rec is record (col1 number);
      type x_type is ref cursor return x_rec;
    end;
    /
    create or replace function x_func return x_pkg.x_type as
      cur x_pkg.x_type;
    begin
      open cur for select 1 from dual;
      return cur;
    end;
    /
    --end of code--

This code will create a function 'x_func' that must return a ref cursor
whose query contains a single numeric column. If I 'describe' the
function I will see that the function returns a record that contains a
single numeric column.

How can I write a plpgsql function that behaves similarly?
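There is no direct equivalent, since a plpgsql refcursor is untyped, but one hedged sketch of a common workaround is to declare the row shape in the function's return type instead, so callers can introspect the column count and types from the signature (type and names below are illustrative, mirroring the Oracle example):

```sql
-- Sketch: the declared composite return type plays the role of
-- Oracle's 'return' clause on the ref cursor.
CREATE TYPE x_rec AS (col1 numeric);

CREATE OR REPLACE FUNCTION x_func() RETURNS SETOF x_rec AS $$
DECLARE
    r x_rec;
BEGIN
    FOR r IN SELECT 1::numeric LOOP
        RETURN NEXT r;  -- each row is guaranteed to match x_rec
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```

The trade-off is that the result is materialized as a set rather than streamed through a cursor, but the caller-visible contract (one numeric column) is enforced by the type system.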
Re: [GENERAL] Compiling Postgresql 8.0.3 on Solaris 10
Greg Stark <[EMAIL PROTECTED]> writes:
> I suspect something stranger going on.

I'm still wondering about the theory that it's not the aliases at issue,
but some scripts in the PATH ahead of the normal /bin/ls and friends.

			regards, tom lane
Re: [GENERAL] Compiling Postgresql 8.0.3 on Solaris 10
Hello Tom,

I hope that you are well. Thank you for your guidance, but these are
indeed defined in my .bashrc:

    # .bashrc

    # User specific aliases and functions

    # Source global definitions
    if [ -f /etc/bashrc ]; then
        . /etc/bashrc
    fi

    if [ "$PS1" ]; then
        # your settings:
        PS1="[EMAIL PROTECTED]::\@::\w]\\$ "
    fi

    alias cls=clear
    alias e\-mail=pine
    alias e='emacs -nw $1'
    alias rmf='/bin/rm -f'
    alias rmp='/bin/rm'
    alias rm='rm -i'
    alias logout=exit
    alias lo=exit
    alias rmtmp='rm -i core *~ *.*~ .*~ .pine-debug*'
    alias mproc='ps -ef | grep $USER'
    alias allproc='ps -ef | less'
    alias ll='colorls -l'
    alias ls='colorls -al'

I don't see why colorls would do anything different, or for that matter
rm -i; shouldn't the shell scripts *not* use the user's environment, and
detect that there is /bin/rm and that ls is /bin/ls etc.?

Secondly, I did login and typed sh, which dumped me into the sh shell;
in my previous e-mail I showed the alias listings in that shell. I tried
compiling in that shell and it seems that it presents the same problems.

Cheers,

Aly.

Tom Lane wrote:
> Aly Dharshi <[EMAIL PROTECTED]> writes:
>> alias ls='colorls -al'
>> alias rm='rm -i'
>
>> I don't see any aliases that are going to break the compile process.
>
> I beg to differ --- I think the ones quoted above match your symptoms
> pretty well. So the question is: why are they getting used in a
> noninteractive script?
>
> My bet is that you've defined these aliases in the wrong place.
> I'm not sure about Solaris, but on Linux one conventionally puts
> aliases like these in ~/.bashrc, which I think is not read by
> plain sh. If you've put them in ~/.profile they are very likely
> to break shell scripts.
>
> 			regards, tom lane

--
Aly Dharshi
[EMAIL PROTECTED]

  "A good speech is like a good dress
   that's short enough to be interesting
   and long enough to cover the subject"
Re: [GENERAL] Portability of pgsql database files.
"David M. Lee" <[EMAIL PROTECTED]> writes:
> I have a system that is dual bootable for both i686 and x86_64. Would
> there be any issues using the PostgreSQL database files generated for
> i686 on x86_64, or vice versa?

You'd probably have problems with the different data alignment rules for
the two architectures (I'm supposing MAXALIGN is different, though
perhaps it is not?).

			regards, tom lane
Re: [GENERAL] Portability of pgsql database files.
David M. Lee wrote:
> I have a system that is dual bootable for both i686 and x86_64. Would
> there be any issues using the PostgreSQL database files generated for
> i686 on x86_64, or vice versa?

Uh, if the padding is the same, it would work, but we never test such
things.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
Re: [GENERAL] Compiling Postgresql 8.0.3 on Solaris 10
Tom Lane <[EMAIL PROTECTED]> writes:

> Aly Dharshi <[EMAIL PROTECTED]> writes:
>> alias ls='colorls -al'
>> alias rm='rm -i'
>
>> I don't see any aliases that are going to break the compile process.
>
> I beg to differ --- I think the ones quoted above match your symptoms
> pretty well. So the question is: why are they getting used in a
> noninteractive script?
>
> My bet is that you've defined these aliases in the wrong place.
> I'm not sure about Solaris, but on Linux one conventionally puts
> aliases like these in ~/.bashrc, which I think is not read by
> plain sh. If you've put them in ~/.profile they are very likely
> to break shell scripts.

This doesn't really seem like a sufficient explanation. Putting such
things in .profile is pretty standard too. .profile should only ever be
run by login shells -- not even all interactive shells -- in the first
place.

Moreover, bash at least will not expand aliases for non-interactive
shells:

    Aliases are not expanded when the shell is not interactive, unless
    the expand_aliases shell option is set using shopt (see the
    description of shopt under SHELL BUILTIN COMMANDS below).

So either this user has the expand_aliases shell option set, or he's
using Solaris's /bin/sh, which is a non-POSIX ksh derivative, and that
shell is both 1) running .profile for a non-interactive shell (which is
utterly bogus) and 2) expanding aliases for a non-interactive shell.

I'm skeptical that it's doing that, and if it is then it would be
causing problems for virtually any /bin/sh script. *Many* users would
have commands in .profile that are intended to only run once at login
time.

I suspect something stranger is going on. Perhaps /bin/sh on this
machine is bash, and this user has some environment variable set that
enables this non-standard behaviour?

--
greg
[GENERAL] Portability of pgsql database files.
I have a system that is dual bootable for both i686 and x86_64. Would there be any issues using the PostgreSQL database files generated for i686 on x86_64, or vice versa? Thanks! dave <>< ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Modulus operator returns negative values / numeric
On Thu, May 26, 2005 at 08:56:34AM -0400, Paul Tillotson wrote:
> Tom Lane wrote:
> > Paul Tillotson <[EMAIL PROTECTED]> writes:
> In other words, no arbitrary number of extra decimal places when calling
> div_var() will always be sufficient to prevent rounding up at some other
> decimal place.

No, an arbitrary number won't do. I found I could make it work by adding as many extra decimal places as there are digits in the divisor. At least it worked for the test cases I made up (attached).

> > I cannot believe that that won't create problems at least as bad as it
> > solves. Have you even tried the regression tests on this?
> No. Can you tell me how to do that?

make installcheck in src/test/regress

--
Alvaro Herrera ()
"How we put our fingers into each other's clay. That is friendship: playing
potter and seeing what shapes can be drawn out of the other"
(C. Halloway in Something Wicked This Way Comes, R. Bradbury)

drop table modtest;
create table modtest (name text, arg1 numeric, arg2 numeric);
copy modtest from stdin;
uno	4385200147210375820	123
dos	12345678901234567934	123
tres	539379618106876228181	122
cuatro	539379618106876228319	123
cinco	66343693027145776082375	123
seis	8160274242338930458142210	123
siete	1003713731807688446351500562	123
ocho	123456789012345678901234567895	123
nueve	1234567890123456789012345678901234567856	123
diez	12345678901234567890123456789012345678901234567817	123
once	123456789012345678901234567890123456789012345678901234567901	123
doce	1234567890123456789012345678901234567890123456789012345678901234567859	12
trece	1234567890123456789012345678901234567890123456789012345678901234567862	123
catorce	1234567890123456789012345678901234567890123456789012345678901234566713	1234
quince	1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567321	1234
dieciseis	1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234558889	12345
\.
select name, arg1 % arg2 from modtest;
\echo 0
select name, arg2, arg1 % arg2::numeric(10,0), log(arg2)::int from modtest;
\echo 1
select name, arg2, arg1 % arg2::numeric(10,1), log(arg2)::int from modtest;
\echo 2
select name, arg2, arg1 % arg2::numeric(10,2), log(arg2)::int from modtest;
\echo 3
select name, arg2, arg1 % arg2::numeric(10,3), log(arg2)::int from modtest;
\echo 4
select name, arg2, arg1 % arg2::numeric(10,4), log(arg2)::int from modtest;

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Query for importing and exporting data from a database
"Yateen Joshi" <[EMAIL PROTECTED]> writes:
> Hi,
>
> I am using postgres 7.4.2 on Solaris. My unix system does not place a
> limitation of 2 GB on file size. If I export data from my database that
> causes the file size to be more than 2 GB, then that export fails (and
> vice versa for importing, i.e. if the file size is more than 3 GB, it can
> not import). Two questions -
>
> Why does this happen?
>
> Is there any way to avoid this?

Probably, your Postgres was not compiled with the LARGEFILE option (I forget exactly what it's called). You may be able to work around it by doing something like:

    pg_dump -t mytable mydb | cat > output.sql

-Doug

---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
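Doug's pipe trick works because the 2 GB limit lives in the non-largefile-aware process that opens the output file, not in the pipe. Another common workaround (standard tools, nothing Postgres-specific) is to keep each output file under the limit with something like `pg_dump mydb | split -b 1000m - dump.part.`. The chunking idea itself is trivial; a minimal Python sketch of it, with a hypothetical `open_chunk` helper standing in for opening the numbered part files:

```python
import io

def split_stream(src, chunk_bytes, open_chunk):
    """Copy the binary stream `src` into files of at most `chunk_bytes` each.

    `open_chunk(i)` is a hypothetical helper that returns a writable binary
    file for chunk i (for real dumps it might open 'dump.part.%03d' % i).
    Returns the number of chunks written.
    """
    n = 0
    while True:
        data = src.read(chunk_bytes)  # regular files/BytesIO read up to EOF
        if not data:
            return n
        with open_chunk(n) as out:
            out.write(data)
        n += 1
```

Restoring is then the reverse: concatenate the parts (`cat dump.part.* | psql mydb`), again never presenting the server-side tools with one giant file.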
[GENERAL] Query for importing and exporting data from a database
Hi,

I am using postgres 7.4.2 on Solaris. My unix system does not place a limitation of 2 GB on file size. If I export data from my database such that the file size would be more than 2 GB, the export fails (and vice versa for importing, i.e. if the file size is more than 3 GB, it cannot import). Two questions:

Why does this happen?

Is there any way to avoid this?

Basically, I want to export huge chunks of data to files and then import the same. Please help me.

Thanks and regards,
Yateen V. Joshi
[GENERAL] Can postgresql catch all the sentences applies to one table?
Hello,

Does anybody know whether it is possible to capture all the statements applied to one table, perhaps using triggers and a function written in C? I need to create a log table with all of those statements. Once I have that, I am going to use the statements to replicate the table in another database: after the first full load of the table, I am going to try to apply only the latest statements recorded from the source table.

If this is possible I would like to know, or maybe somebody can advise me about some technical documentation. I am using PostgreSQL 7.3.

Can somebody help me? Thanks in advance.

Atte. Manuel
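One thing to keep in mind: in PostgreSQL 7.3 triggers fire per row, not per statement, so what a C (or PL/pgSQL) trigger can record in the log table is the row-level effect of each statement. Replication then means replaying that log against the other database after the initial full load. A toy Python simulation of the replay side (no PostgreSQL API involved; the log entry shape and names are made up for illustration):

```python
# Each log entry is what a row-level trigger could record:
# (sequence_no, operation, primary_key, row_data_or_None).
log = [
    (1, "INSERT", 1, {"name": "ana"}),
    (2, "INSERT", 2, {"name": "bob"}),
    (3, "UPDATE", 1, {"name": "anna"}),
    (4, "DELETE", 2, None),
]

def replay(replica, entries, already_applied=0):
    """Apply log entries newer than `already_applied` to the replica table,
    modelled here as a dict keyed by primary key."""
    for seq, op, pk, row in entries:
        if seq <= already_applied:
            continue  # skip entries already covered by the initial full load
        if op == "DELETE":
            replica.pop(pk, None)
        else:         # INSERT and UPDATE both end up storing the row
            replica[pk] = row
    return replica

print(replay({}, log))   # {1: {'name': 'anna'}}
```

The sequence number is the important part: remembering the highest applied entry is what lets you ship "only the last sentences" after the first load, exactly as the question describes.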
[GENERAL] unsupported frontend protocol
I've painted myself into a little corner here: I pg_dumped a 7.4.3 database, created a database of the same name on a 7.3.4 server, psql'd into the new database, and \i'd the dump file. The database was created, although there were a variety of errors which I realized were due to 7.4.3 and 7.3.4 SQL incompatibilities. I had also added a new entry to pg_hba.conf.

After restarting the 7.3.4 postmaster I started getting an endless series of "FATAL: unsupported frontend protocol" messages, even when not using psql. The only other access to the cluster is by PHP 4.2.2, and nothing accesses the newly restored database. Since the db had errors anyway, I restored the old pg_hba.conf file, dropped the new database, and restarted the postmaster -- i.e. attempting to wipe out all changes. HOWEVER, the protocol error message persists, endlessly and infuriatingly! Searching on the error message text doesn't return anything enlightening.

Can anyone suggest why this is happening, or where I can find a log with further details?

Thank you in advance
John Gunther

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Bgwriter--- BufferSync() and StrategyStatsDump()
Himanshu Baweja <[EMAIL PROTECTED]> writes: > this would greatly help ppl in determining the > appropriate value of bgwriter parameters it would > require a simple patch to written which will add two > else statements in StrategyDirtyBufferList() and > returning a struct instead of int... > also a line in function BufferSync()=> > elog(DEBUG1,.. That whole algorithm is gone in CVS tip, so there's no point in creating such a patch ... regards, tom lane ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] unsupported frontend protocol
Bucks vs Bytes Inc <[EMAIL PROTECTED]> writes: > Any thoughts on what could make both clients attempt wrong protocol? They are both using 7.4-or-later libpq. Whether you think so or not. regards, tom lane ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] enable_sort optimization problem
Dave E Martin <[EMAIL PROTECTED]> writes:
> I have noticed that if I set enable_sort=false in the .conf file, my
> queries are running faster.

You mean one specific example is running faster. If you do that you'll probably slow down other queries. It looks like the main estimation error is here:

> -> Nested Loop (cost=0.00..1012208.81 rows=159 width=76) (actual time=126.277..254.709 rows=31 loops=1)
>    -> Index Scan Backward using pk_outageid on outages (cost=0.00..252480.62 rows=132245 width=52) (actual time=77.021..77.484 rows=31 loops=1)
>    -> Index Scan using ipinterface_nodeid_ipaddr_ismanaged_idx on ipinterface (cost=0.00..5.73 rows=1 width=40) (actual time=5.304..5.686 rows=1 loops=31)
>       Index Cond: (("outer".nodeid = ipinterface.nodeid) AND (("outer".ipaddr)::text = (ipinterface.ipaddr)::text))

Why does it think that only 159 of the 132245 rows in outages will have join partners in ipinterface? The actual results look like they all do. It might be worth looking at the pg_stats rows for the join columns to see if there's something odd about the statistics.

regards, tom lane

---(end of broadcast)--- TIP 8: explain analyze is your friend
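Tom's "159 of the 132245" reading comes straight out of the two estimates in the plan: the join's estimated output over the outer scan's estimated input gives the fraction of outages rows the planner believes will find a partner. A back-of-the-envelope check (just arithmetic on the plan numbers above, not how the planner actually computes selectivity):

```python
outer_rows = 132245      # planner's row estimate for the outages index scan
join_rows_est = 159      # planner's row estimate for the nested-loop output

match_fraction = join_rows_est / outer_rows
print(f"{match_fraction:.4%}")   # 0.1202%
```

An implied match rate of about 0.1% against an actual match rate near 100% is exactly the kind of misestimate that makes the planner prefer a cheap-looking nested loop with a LIMIT, or conversely a sort-based plan, at the wrong times.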
Re: [GENERAL] PostgreSQL release schedule
Marc G. Fournier wrote:
> I'd almost think that this should be much more prominently put in a
> section on the main page of the web site, actually ... make it nice and
> visible instead of buried on a sub page ...

I agree it would be good to have a link on the main page. Possibly near "What's new in current_version" we could have a link to "coming up in our next release" or something similar.

Just my $0.02, I know where the todo page is now. :)

---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Modulus operator returns negative values / numeric
Tom Lane wrote:
> Paul Tillotson <[EMAIL PROTECTED]> writes:
>> I don't think anyone wants to defend the negative modulus as such, but
>> to fix it, we have to do one of these:
>> (1) Keep rounding division, but rewrite the numeric modulus operator to
>> use a form of division that always rounds towards zero. or
>> (2) Give up rounding division in favor of truncating towards zero.
>
> or (3) increase the calculation precision (rscale), as suggested by
> Alvaro's message. Possibly that cannot work, but I haven't seen a proof.

I don't think that will work. Before switching round_var() to trunc_var() at the end of div_var(), I tried recompiling it to say

    div_var(var1, var2, &tmp, rscale + 1);

instead of

    div_var(var1, var2, &tmp, rscale);

around line 4129 in mod_var() (which performs the division with one extra decimal place when calculating a modulus). It fixed the case that Alvaro used as a test, but I was still able to get a negative modulus by trying other values. I think that adding digits to rscale will make the negative modulus rarer, but it will always be possible to get one.

For example, 12345678901234567890 / 123 is 100371373180768844.6341 (rounded to 4 decimal places). If you divide with no extra decimal places, you get 45 in the tens and ones digits. If you divide with one extra decimal place, you get 44.6, which is truncated to 44. But suppose that dividing had given you 100371373180768844.97: in that case, you would need to work it to at least 6 extra places before truncation would actually give you the expected 44 rather than 45, because even when working it to 5 decimal places, the carry propagation would eventually carry into the ones digit, changing the 4 to a 5.

In other words, no fixed number of extra decimal places when calling div_var() will always be sufficient to prevent rounding up at some other decimal place.
> > It looks like the "bug" can be easily fixed by changing the end of
> > div_var where it says
> >
> >     round_var(result, rscale);
> >
> > to
> >
> >     trunc_var(result, rscale);
>
> I cannot believe that that won't create problems at least as bad as it
> solves. Have you even tried the regression tests on this?
>
> regards, tom lane

No. Can you tell me how to do that?

Paul

---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
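The effect Paul describes can be reproduced outside PostgreSQL with plain integer arithmetic: compute the modulus as a - q*b twice, once with q obtained by rounding the quotient (the behaviour of numeric's div_var that causes the bug) and once with q truncated toward zero. A sketch in Python, using the dividend from the example above:

```python
def mod_with_rounded_quotient(a, b):
    # q = round-half-up of a/b for positive operands; mimics a division
    # that rounds the quotient before the remainder is formed.
    q = (2 * a + b) // (2 * b)
    return a - q * b

def mod_with_truncated_quotient(a, b):
    q = a // b      # truncation (same as floor for positive operands)
    return a - q * b

a, b = 12345678901234567890, 123
print(mod_with_truncated_quotient(a, b))  # 78, the expected modulus
print(mod_with_rounded_quotient(a, b))    # -45: the quotient rounded up
```

Because the fractional part of a/b here is 78/123 ≈ 0.634 > 0.5, the rounded quotient overshoots by one and the "remainder" lands at 78 - 123 = -45, which is exactly the negative-modulus symptom being discussed.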
[GENERAL] Bgwriter--- BufferSync() and StrategyStatsDump()
What StrategyStatsDump prints is the ARC clean buffers at the LRU of T1 and T2. Now let's say I have a dirty buffer at position 31 from the LRU and the next one is at position 3500; in cases like this, t1_clean and t2_clean are of no use.

A better option would be to have a function like StrategyDirtyBufferList(), which scans until it finds max_buffers dirty buffers or hits the end of the list. But a function like this would have high overhead, so an even better method is to print the number of buffers it had to scan each time the bgwriter wrote, something like: "to find 100 dirty buffers it had to scan 15000 buffers".

This would greatly help people in determining appropriate values for the bgwriter parameters. It would require a simple patch which adds two else statements in StrategyDirtyBufferList() and returns a struct instead of an int, plus a line in BufferSync(): elog(DEBUG1, ...

I would really love to see something like this added to postgres, as then we won't have to spend one whole day trying to determine bgwriter values only to find in the end that there is nothing much to find, if there is one... please tell... (I have already used iowait time, but its results are not really encouraging.)

Regards
Himanshu

---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] unsupported frontend protocol
Connection logging shows an unvarying pattern: every connection attempt, regardless of target database or source (PHP or psql), first uses a wrong protocol and then succeeds on a second attempt, presumably after falling back:

LOG: connection received: host=[local]
FATAL: unsupported frontend protocol
LOG: connection received: host=[local]
LOG: connection authorized: user=testuser database=test

Any thoughts on what could make both clients attempt wrong protocol?

/usr/lib has the following libpq files:

-rw-r--r--  1 root root 592784 Jun 24  2004 /usr/lib/libpq.a
lrwxrwxrwx  1 root root     12 Jul  8  2004 /usr/lib/libpq.so -> libpq.so.3.1
lrwxrwxrwx  1 root root     12 Nov 28  2003 /usr/lib/libpq.so.2 -> libpq.so.2.2
-rwxr-xr-x  1 root root  61252 Nov  4  2003 /usr/lib/libpq.so.2.2
lrwxrwxrwx  1 root root     12 Jul  8  2004 /usr/lib/libpq.so.3 -> libpq.so.3.1
-rwxr-xr-x  1 root root 112040 Jun 24  2004 /usr/lib/libpq.so.3.1

Tom Lane wrote:
> Bucks vs Bytes Inc <[EMAIL PROTECTED]> writes:
> > Is there any postmaster logging I can turn on that will detail what's
> > triggering the error?
>
> Well, you could change the error report in postmaster.c to show the
> specific protocol version code it's receiving (7.4 and up do this, but it
> hadn't occurred to us as of 7.3). I have little doubt what you will find
> though. Maybe what you really want is to enable log_connections so you
> can find out where the problematic connections are coming from.
>
> regards, tom lane
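The two-connection pattern in the log above is the documented fallback dance: the postmaster rejects or accepts a session based on the protocol version number at the front of the startup packet, and a 7.4-era libpq first offers protocol 3.0 and then reconnects offering 2.0 when the server complains. A sketch of the v3 packet framing (field layout taken from the frontend/backend protocol description; this is an illustration, not a complete client):

```python
import struct

def startup_packet(user, database, major=3, minor=0):
    version = (major << 16) | minor          # 3.0 -> 196608, 2.0 -> 131072
    body = struct.pack('!i', version)
    for key, val in (("user", user), ("database", database)):
        body += key.encode() + b'\0' + val.encode() + b'\0'
    body += b'\0'                            # terminates the key/value list
    return struct.pack('!i', len(body) + 4) + body   # length includes itself

pkt = startup_packet("testuser", "test")
print(struct.unpack('!i', pkt[4:8])[0])      # 196608: the 3.0 marker
```

A 7.3 postmaster only understands version codes up to 2.0, so the first packet (196608) draws "unsupported frontend protocol" and the immediate retry with 131072 succeeds, which matches John's log exactly.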
Re: [GENERAL] bulk loading of bplus index tree
On Thu, 26 May 2005 06:06 pm, Surabhi Ahuja wrote:
> I have heard about "bulk loading algorithm" for indexes.. for eg. if u
> have values like 1, 2,3,4,5, etc...till a very large number. in case of
> simple mechanism of indexing, the values will be inserted one by one for
> eg..1 then 2 and so on however in bulk loading ..the mechanism of
> building the index (bplus)tree is quite diffreent and very fast
> ezpecially if u consider a very large number of values.
>
> My question is : is this algorith implemented by postgreSQL. If yes
> please tell in what cases can i make use of it.

Bulk loading for B+trees is implemented in PostgreSQL. It is used on index creation, or reindex. I don't believe it's used in other places, but others may have more to say.

Regards
Russell Smith.

> Thank You
> Regards
> Surabhi Ahuja

---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
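The bulk-loading idea Russell refers to can be sketched generically: with the keys already sorted, the leaf level is written left to right and the parent levels are stacked on top of it, instead of performing one root-to-leaf insertion per key. A compact Python illustration (a toy dict-based structure, not PostgreSQL's nbtree code; node fan-out `order` is arbitrary):

```python
def bulk_load_bplus(sorted_keys, order=4):
    """Build a toy B+ tree bottom-up from already-sorted keys; return root."""
    # Leaf level: consecutive slices of the sorted input.
    level = [{"keys": sorted_keys[i:i + order]}
             for i in range(0, len(sorted_keys), order)]
    while len(level) > 1:
        # Each parent takes up to `order` children; separator i is the
        # smallest key in child i+1's subtree.
        level = [{"keys": [smallest(c) for c in group[1:]],
                  "children": group}
                 for group in (level[i:i + order]
                               for i in range(0, len(level), order))]
    return level[0]

def smallest(node):
    # Leftmost key of a subtree (used as the separator key above).
    while "children" in node:
        node = node["children"][0]
    return node["keys"][0]

def search(node, key):
    while "children" in node:
        i = 0
        while i < len(node["keys"]) and key >= node["keys"][i]:
            i += 1
        node = node["children"][i]
    return key in node["keys"]

root = bulk_load_bplus(list(range(0, 200, 2)))
print(search(root, 42), search(root, 43))   # True False
```

Because every node is filled sequentially, the build does O(n) work on top of the sort instead of n separate tree descents, which is the speedup the question is asking about; this is why CREATE INDEX and REINDEX on existing data are so much faster than inserting rows into an already-indexed table.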
[GENERAL] bulk loading of bplus index tree
I have heard about a "bulk loading algorithm" for indexes. For example, if you have values like 1, 2, 3, 4, 5, and so on up to a very large number: with the simple indexing mechanism, the values are inserted one by one, first 1, then 2, and so on. With bulk loading, however, the mechanism of building the (B+) index tree is quite different and very fast, especially for a very large number of values.

My question is: is this algorithm implemented by PostgreSQL? If yes, please tell me in which cases I can make use of it.

Thank You
Regards
Surabhi Ahuja
Re: [GENERAL] enable_sort optimization problem
Dave E Martin wrote:
> (8.0.1 on debian/linux 2.6.11 kernel)
>
> I have noticed that if I set enable_sort=false in the .conf file, my
> queries are running faster. I had a query which if I did a limit 20, ran
> in 6 milliseconds, but if I changed it to limit 21, it took around 19
> seconds (or 19000 milliseconds). It also took longer if I did limit 19
> offset 2. (I don't know what it is about the 21st record.) In any case,
> I noticed that in the analysis, the long version was doing a sort and
> the quick version was not, so I tried the enable_sort=false setting, and
> now things are generally running faster. I HAVE done analyzes, and
> vacuums, and vacuum analyzes. In prior experimenting with this, there
> were even some seq_scans, which turned into index_scans when I set
> enable_seqscan=false, and became moderately faster.

This sort of thing is useful as a way of testing whether a better plan exists. It's not terribly good as a way of tuning a live system.

> I am using 8.0.1, and below are the two query plans, first the
> enable_sort=true version, then the enable_sort=false version; note the
> VAST difference in speed. What is the problem, and how can I convince
> the query optimizer to do the right thing (short of enable_sort=false)?
>
> from the config file:
>
> # - Planner Cost Constants -
> #effective_cache_size = 1000    # typically 8KB each
> #random_page_cost = 4           # units are one sequential page fetch cost
> #cpu_tuple_cost = 0.01          # (same)
> #cpu_index_tuple_cost = 0.001   # (same)
> #cpu_operator_cost = 0.0025     # (same)

You should probably start with the performance-tuning articles here:

http://www.powerpostgresql.com/PerfList
http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php

Certainly your effective_cache_size is (hopefully) too low, and random_page_cost might well be so too. If sorts in particular seem slow, you might want to increase work_mem (called "sort_mem" in older releases).
BUT make changes one step at a time and look at the total impact on the system, otherwise you can end up making one query fast and nine slow. -- Richard Huxton Archonet Ltd ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings