Re: [GENERAL] ctid access is slow
Ilja Golshtein [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> On Tue, Aug 23, 2005 at 09:15:42AM -0400, Robert Treat wrote:
>> On Tuesday 23 August 2005 08:39, Ilja Golshtein wrote:
>>> select ctid from aaa where ctid in (select ctid from aaa limit 10);
>> Aside from that, ctid is of type tid, and its equality operator isn't hashable.
> That is the piece of knowledge I failed to deduce from exploring query plans ;(. So I have no better solution than creating an indexed field of serial type, have I? The only thing I am curious about: is ctid good for anything from the user's point of view?

The ctid value can be useful in a multi-user application, to check whether a record has been changed by another user before committing changes. Whenever a record is updated its ctid value will change, so by storing the ctid value when first fetching the record, it can be compared with the current ctid value before doing the update.

> Thanks a lot. -- Best regards, Ilja Golshtein

---(end of broadcast)--- TIP 6: explain analyze is your friend
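Since the tid equality operator isn't hashable, the indexed serial column mentioned above is the practical workaround. A minimal sketch, using the table name "aaa" from the question (on older releases you may need to add the column, sequence, and default in separate steps):

```sql
-- Add an indexed serial column to use in place of ctid:
ALTER TABLE aaa ADD COLUMN id serial;
CREATE UNIQUE INDEX aaa_id_idx ON aaa (id);

-- The original query, rewritten against the hashable integer key:
SELECT * FROM aaa WHERE id IN (SELECT id FROM aaa LIMIT 10);
```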
Re: [GENERAL] ctid access is slow
--- Jim C. Nasby [EMAIL PROTECTED] wrote:
> On Wed, Aug 24, 2005 at 09:26:10AM +0930, Jeff Eckermann wrote:
>> The ctid value can be useful in a multi-user application, to check whether a record has been changed by another user before committing changes. Whenever a record is updated its ctid value will change, so by storing the ctid value when first fetching the record, it can be compared with the current ctid value before doing the update.
> I believe that's not necessarily true. If you select a tuple and its ctid, and it's updated more than once with a vacuum in between, I believe it could end up back in the same position, which would mean the same ctid.

True. But the probability of that happening would generally be low enough not to bother the designers of most applications.

> -- Jim C. Nasby, Sr. Engineering Consultant [EMAIL PROTECTED] Pervasive Software http://pervasive.com 512-569-9461
Re: [GENERAL] ctid access is slow
--- Tom Lane [EMAIL PROTECTED] wrote:
> Jim C. Nasby [EMAIL PROTECTED] writes:
>> I believe that's not necessarily true. If you select a tuple and its ctid, and it's updated more than once with a vacuum in between, I believe it could end up back in the same position, which would mean the same ctid.
> This is the reason for the recommendation that you not trust a TID for longer than one transaction. If you select a row and see it has TID such-and-such, and then later try to fetch/update/delete that row by TID, the worst that can happen is that you'll not find the row, because some other xact has already updated or deleted it. You cannot find a different row in the TID slot, because VACUUM will not have removed a row that is possibly still visible to your transaction. (VACUUM has no idea whether you're running under SERIALIZABLE rules or not, and so it takes the conservative approach that any row you could ever possibly have seen as good is still interesting.) But this guarantee only lasts as long as your current transaction. regards, tom lane

Just in case anyone following this thread gets a little confused: my response was somewhat tangential to the main discussion. I was talking of fetching the record by primary key or such, and then comparing the ctid values. Agreed that any other use of ctid is dubious.
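Within a single transaction, the ctid comparison described above can be sketched like this (the table "accounts" and the ctid literal are hypothetical, for illustration only):

```sql
BEGIN;
-- Fetch the row by primary key, remembering its ctid:
SELECT ctid, balance FROM accounts WHERE id = 42;

-- Later in the same transaction, update only if the row hasn't been
-- updated meanwhile (an update would have given it a new ctid):
UPDATE accounts
   SET balance = balance - 100
 WHERE id = 42
   AND ctid = '(0,7)';   -- the ctid value saved at fetch time
-- Zero rows updated means another session changed the record.
COMMIT;
```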
Re: [GENERAL] download binary version for Win32
Ulises Poirier [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Hello: I can't download this version... can you help me? Regards, Ulises

What did you do, and what happened? A copy-and-paste of a terminal session would be ideal.
Re: [GENERAL] linking access using ODBC driver with geometric data
Jeff D. Hamann [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Sorry for the seemingly novice posting, but I couldn't find a solution for this on the web so far... I've been developing a database using PostgreSQL (and loving it) and have started running into problems with attempting to update records using the ODBC connection. The tables that contain geometric fields are causing the problems. Whenever I try to update a record that contains a geometric field (any field) I get the following error message:
>
>     ODBC -- update on a linked table 'public_locations' failed.
>     ERROR: invalid input syntax for type point: (100),(100) (#7)
>
> When I attempt to update the record without the quotes, as if typing 100,100 in the point field, I get the error: record has been changed... blah, blah, blah...

You mean the old record has been changed by another user? If so, try setting the "Row Versioning" driver option to true.

> Is there some trick to being able to use the geometric data types if you're going to interface with Access using the ODBC driver?
>
> Jeff. --- Jeff D. Hamann, Forest Informatics, Inc., PO Box 1421, Corvallis, Oregon USA 97339-1421, 541-754-1428, [EMAIL PROTECTED], www.forestinformatics.com
Re: [GENERAL] For Tom Lane
Scott Marlowe [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> On Fri, 2005-05-27 at 09:57, [EMAIL PROTECTED] wrote:
>> Thanks for the answer, Tom.
>>> Consider what happens when the user leaves for lunch
>> Well, I've already thought about it. But I'm working with VS2003 and a disconnected dataset... so when the user edits data he's modifying an old disconnected row, while the real updated row is in the database. So my strategy would be (as I have already written): 1. refresh the data, recalling the current row from the database into the form's fields; 2. lock the row; 3. update the modified data in the database through a stored procedure (function); 4. commit and unlock the row. Have you another idea that could work better with disconnected objects?
> While this ensures that the update is atomic, it doesn't ensure that no one else is trying to edit it at the same time. What you might want to do is either optimistically lock it, or use application-level locking. To use optimistic locking, you'll need to do something like make an md5 of all the fields being edited; then, right before you write back the data, check whether the md5 you created at the beginning still matches, by re-reading the data and md5ing it again. If it doesn't match, then you can throw a "mid-air collision" error, so to speak, and tell them that the record changed underneath them, or do some kind of merging or whatnot.

The ODBC driver uses the ctid value to check whether a record has changed; an updated row will always have a new ctid. That would probably be the most economical way to check.

> If you want to do application-level locking, then create a field and use that for locks. Just make it a timestamp field and put in the current time value when the lock is taken. When the predetermined timeout occurs, the user lock is removed by the next person to access it, or offer them the chance to, or email the original locker, etc. Handle it the way you want or need to.
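The md5-based optimistic check described above can be sketched in SQL; the table "customers" and its columns are hypothetical, and the checksum placeholder stands for a value your application saved at fetch time:

```sql
-- 1. At fetch time, remember a checksum of the editable fields:
SELECT id, name, phone,
       md5(name || '|' || phone) AS checksum
  FROM customers
 WHERE id = 42;

-- 2. At save time, write back only if the checksum still matches:
UPDATE customers
   SET name = 'New Name', phone = '555-0100'
 WHERE id = 42
   AND md5(name || '|' || phone) = '<checksum saved at fetch time>';
-- Zero rows updated is the "mid-air collision": the record changed
-- underneath you, so refetch and let the user merge or retry.
```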
Re: [GENERAL] double quotes inside VBA string ?
Zlatko Matić [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Hello. I have a problem when working with MS Access/PostgreSQL. Namely, as PostgreSQL syntax uses double quotes for table names and field names, when I write it as the string of a pass-through query, or as the CommandText of an ADO Command object, it looks like: select * from public."Customers", and VBA considers that the first quote after public. is the end of the statement. How do I solve it?

This question is OT really, but... Make your command string something like: "select * from public." & Chr(34) & "Customers" & Chr(34). Better yet, change the table names to be all lower case. Access won't care at all, and your life with PostgreSQL will be easier.
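The PostgreSQL rule behind the advice above, with illustrative table names: unquoted identifiers are folded to lower case, while double-quoted identifiers keep their exact spelling.

```sql
CREATE TABLE "Customers" (id integer);  -- mixed case: must always be quoted

SELECT * FROM "Customers";  -- works
SELECT * FROM Customers;    -- fails: folds to customers, which doesn't exist

-- An all-lower-case name needs no quoting, whatever case you type:
CREATE TABLE customers2 (id integer);
SELECT * FROM CUSTOMERS2;   -- works: folds to customers2
```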
Re: [GENERAL] Delphi 2005, Postgresql, ZEOS optimistic locking
Philippe Lang [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Hi, yes, I'm sure this kind of feature is application-based. There is nothing in the driver itself. If I'm not wrong, MS Access uses a timestamp column to check if the record was updated meanwhile, or the column values if no timestamp is available. This might be true only with an MS SQL database, though.

My understanding is that MS Access uses the row's ctid value to check uniqueness, on the basis that an updated row (even if all the data remains the same) will have a new ctid. This value is provided via the driver, when the "Row Versioning" option is set to true. Otherwise, Access will check the value of every data field, which can produce painful results. So, the short answer is: you could just check the ctid value for the row you are about to update.

> I'll try doing something similar in a Delphi event handler. Thanks, bye. Philippe
>
> -Original Message- From: Tony Caduto [mailto:[EMAIL PROTECTED]] Sent: Thursday, 12 May 2005 21:07 To: Philippe Lang Cc: pgsql-general@postgresql.org Subject: Re: [GENERAL] Delphi 2005, Postgresql, ZEOS optimistic locking
>> Why not just add an OnEnter event handler to the form's field in question, and just check the data before they edit it? If it's different, update the field with the most current data. Access probably just does something similar under the hood for you. I don't think that's a feature of the ODBC driver, or is it? If it is, you could always use the ODBC driver from Delphi as well.
> With Delphi/BDE/ODBC, this is different: as soon as you try updating a field that has been modified by someone else meanwhile, the field is automatically updated for you before you start making your own changes, and of course before you try to commit them. That's fine too. I would have preferred an error, personally. Is there a way to do the same with ZEOS? Or maybe is there another mechanism that could be used to do optimistic locking?
> Some kind of long transactions, in the database server? Philippe Lang
Re: [GENERAL] MS-Access and Stored procedures
Zlatko Matic [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> I was using the ADO Command object, and both the Refresh method and the method of creating Parameter objects, while working with an Access Project... but I didn't try to use it with PostgreSQL... I would rather like to have all queries on the client side anyway. Therefore I use pass-through queries. But they don't allow using parameters (except by concatenation). Also, you can't base subforms on pass-through queries, so now I use a strange combination of local tables, append queries with parameters based on pass-through queries, etc. It works, but I'm aware that it is not very clever :)...

I find it hard to imagine why you would want to do it that way. Perhaps if you explain what you are trying to do, others can offer suggestions.

> I think that it would be great if pass-through queries could accept parameters. That would be a powerful way of executing queries on the client, while keeping all the code on the front-end side... But I doubt that Microsoft will work on further improving Access anymore. It seems that Access has been left behind, while VS.NET is the top technology. Too bad... Is there any good book covering MS Access usage as a front end for database servers other than MSDE?

I find the Access XX Developer's Handbook (where XX is the Access version) gives the best all-round coverage of Access development issues, including client-server.

> Do you have form/subform/subform... based on stored procedures? If so, how do you synchronize the form with the subform? Greetings, Zlatko
>
> - Original Message - From: Hervé Inisan [EMAIL PROTECTED] To: pgsql-general@postgresql.org Sent: Thursday, May 12, 2005 11:06 PM Subject: Re: [GENERAL] MS-Access and Stored procedures
>> Hello... This is very interesting. I have also asked myself how to prepare and execute stored procedures on PostgreSQL from MS Access. Could you, please, give some example of a PostgreSQL function with parameters that is executed as a stored procedure from MS Access?
> How would you pass parameters? Using the ADO Command object?

AFAIK, there are two ways to send parameters from Access to a PG function, using ADO:

1. Write the parameters as part of the CommandText string:

    Set cmd = New ADODB.Command
    cmd.ActiveConnection = cnn
    cmd.CommandText = "mypgfunction('this is a parameter', 25)"
    cmd.CommandType = adCmdStoredProc
    cmd.Execute
    Set cmd = Nothing

The CommandText string can be the result of a concatenation:

    cmd.CommandText = "mypgfunction('" & strMyString & "', " & intMyValue & ")"

2. Another way is to use true ADO parameters:

    Set cmd = New ADODB.Command
    cmd.ActiveConnection = cnn
    cmd.CommandText = "mypgfunction"
    cmd.CommandType = adCmdStoredProc

    Dim prm1 As ADODB.Parameter
    Set prm1 = New ADODB.Parameter
    With prm1
        .Type = adVarChar
        .Direction = adParamInput
        .Value = "another string sent to PG"
        .Name = "param1"
        .Size = 30
    End With

    Dim prm2 As ADODB.Parameter
    Set prm2 = New ADODB.Parameter
    With prm2
        .Type = adInteger
        .Direction = adParamInput
        .Value = 25
        .Name = "param2"
        .Size = 0
    End With

    cmd.Parameters.Append prm1
    cmd.Parameters.Append prm2
    cmd.Execute
    Set cmd = Nothing

Voilà! -- Hervé Inisan, www.self-access.com
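For completeness, the PostgreSQL side of a function matching those calls might look like this; only the name and parameter types are taken from the ADO code above, the body is a made-up illustration (dollar quoting requires 8.0 or later):

```sql
CREATE OR REPLACE FUNCTION mypgfunction(p_text varchar, p_num integer)
RETURNS integer AS $$
BEGIN
    -- Illustrative body: report the text parameter, return a derived value.
    RAISE NOTICE 'got %', p_text;
    RETURN p_num * 2;
END;
$$ LANGUAGE plpgsql;
```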
Re: [GENERAL] Adventures in Quest for GUI RAD
[EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> It is just too bad that there is not a better, more user-friendly, well-documented front-end tool for PostgreSQL in open source. It really is kind of a marketing issue, in a way. I mean, if someone could really put together some sort of "PostgreSQL for Dummies" series, with something like Rekall for a front end, with some REAL LIFE examples or projects that ACTUALLY DO SOMETHING USEFUL, rather than just paint a form to go first/last/next/add/save... if someone could put together something like THAT, then perhaps things would really take off. I don't know. Just a thought. All the ingredients and raw materials are lying around, just waiting for someone to do that.

Hear, hear. Speaking as another self-taught database user and programmer, I have a gripe about most (nearly all) books on programming, which don't (or else poorly) address questions like:

* What useful stuff can you do with this?
* Why was this technology (language etc.) invented, i.e. what problem was it designed to solve, and why is it better than competing technologies?
* What is the point of exotic feature xyz (i.e. what is it really there for)?
* How do you put together a working application that does something useful?

If anyone here can suggest books that really address those questions, I for one will be all ears.
Re: [GENERAL] Howto prevent write based on date
Fmiser [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> I'm a newbie to database admin, but I'm not afraid to try - but this one has me stumped. I'm using SQL-Ledger as a front end to PostgreSQL 7.4.2 on (mostly) Debian Testing, i386. My fiscal year is over and I would _like_ to prevent any changes to the data from last year.

A simple trigger would work well. A rule could do it too, but rules can have gotchas if you don't know exactly what you are doing, and triggers give finer-grained control, because you get to use procedural logic.

> I looked/searched in the manual, but I don't even know what to call what it is that I'm trying to do! "Lock" has another meaning for databases. :) "Write" seems to bring up lots of user-related stuff. So I'm hoping one of you geniuses can tell me where to look, what to look for, or how to do it. *smile* Thanks! Philip, wannabe-admin
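A trigger along the suggested lines might look like this. The table and column names are hypothetical (SQL-Ledger's real schema will differ), the cutoff date is illustrative, and the function body uses old-style quoted strings since 7.4 lacks dollar quoting:

```sql
-- Reject updates and deletes of rows dated before the cutoff.
CREATE OR REPLACE FUNCTION forbid_closed_period() RETURNS trigger AS '
BEGIN
    IF OLD.transdate < DATE ''2005-01-01'' THEN
        RAISE EXCEPTION ''fiscal year is closed for this record'';
    END IF;
    IF TG_OP = ''DELETE'' THEN
        RETURN OLD;   -- deletes have no NEW row
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER no_closed_edits
    BEFORE UPDATE OR DELETE ON gl
    FOR EACH ROW EXECUTE PROCEDURE forbid_closed_period();
```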
Re: [GENERAL] [INTERFACES] calculated identity field in views, again...
--- Zlatko Matic [EMAIL PROTECTED] wrote:
> I asked this question several weeks ago, but nobody proposed a solution, so I am repeating the same question again... I have an MS Access front-end for a database on PostgreSQL. I could use pass-through queries as record sources for reports, and it works fine... Unfortunately, MS Access doesn't allow pass-through queries to be record sources for subforms.

Unless you use an unbound form/controls. That means handling everything in code, which might work out best for you, depending on what you want (this is effectively equivalent to the VB-only option which someone else mentioned).

> Therefore I tried to base subforms on regular JET queries on linked tables. It was too slow... Then I tried to base subforms on DAO recordset code generated from pass-through QueryDef objects. Although it worked, it was very unstable... Now it seems to me that PostgreSQL views are the best solution, but Access considers views as tables (!) and needs a column with unique values.

AFAIK a composite key (a combination of several columns) should work fine as a primary key for Access. When linking to the view, just select the columns you want to use. Or are you saying that you tried this, and it didn't work? Alternatively, you could try including in your view definition the oid column for each of the constituent tables. If I understand right, oids are globally unique within your database. This assumes that you have created your tables with oids, which may not be the case. Basing a subform on a multi-table join sounds like odd database design. Perhaps if you can explain more about what you are trying to do, people can offer more suggestions.

> All those views are complicated queries on several tables, so I can't use any table's column as a primary key. I need a calculated column in the view that Access will consider the primary key column. In regular tables I use a bigserial field, but how can I create a calculated bigserial column in a view? Thanks.
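One way to give Access a usable key without a calculated serial is to expose the constituent tables' own keys in the view and let Access treat them together as a composite key. All names here are hypothetical:

```sql
-- Carry each underlying table's primary key through the join:
CREATE VIEW order_details AS
SELECT o.id AS order_id,      -- primary key of orders
       i.id AS item_id,       -- primary key of items
       o.order_date, i.descr, i.qty
  FROM orders o
  JOIN items i ON i.order_id = o.id;
-- When linking the view, tell Access that (order_id, item_id) is the key.
```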
Re: [GENERAL] [ODBC] Adventures with P2P and Scripts in Windows
--- [EMAIL PROTECTED] wrote:
> I DID succeed in doing the sort of things I want in VBA script in MS Access, but that does me no good at work, because we can't get the money to purchase copies of Access. I own a copy of Visual Basic, and if I could find a good example of accessing Postgres in VBA through the ODBC driver, then I could do what I want, compile it, and put it on the workstations.

Try this one: http://gborg.postgresql.org/project/psqlodbc/genpage.php?howto-visualbasic

If you already have some acquaintance with VBA in Access, then you can leverage that knowledge by using Visual Basic. Since you already own Visual Basic, that could be the no-(extra-)cost solution that you are looking for.
Re: [GENERAL] Calculated bigserial column in a view
Bruno Wolff III [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Please keep replies copied to the list, so that other people can learn from and comment on the discussion, unless you have a good reason to make the thread private. On Mon, Apr 25, 2005 at 21:46:20 +0200, Zlatko Matic [EMAIL PROTECTED] wrote:
>> The reason for such a crazy idea is my front-end, MS Access, which considers views as tables and has problems with tables (and views also) if there is no unique numeric field... Anyway, how to do it?

You can tell Access to use any combination of fields as a primary key when linking to the view. There must be some unique combination that will work as a key. But this is really of no significance unless you want to issue inserts or updates against the view, which raises another set of issues.

> Thanks.
> I don't know MS Access, so I am not going to be able to help much with this. There may be some way to trick MS Access into thinking that your view has a unique numeric field. It would help to know how it decides that a column has those properties, and how it uses the information.
>
> - Original Message - From: Bruno Wolff III [EMAIL PROTECTED] To: Zlatko Matic [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Monday, April 25, 2005 6:31 AM Subject: Re: [GENERAL] Calculated bigserial column in a view. On Sun, Apr 24, 2005 at 22:16:14 +0200, Zlatko Matic [EMAIL PROTECTED] wrote:
>> I know that it sounds crazy, but I need a bigserial column in a view that is composed of several tables. That column should not be based on the bigserial column of any table, but should be a calculated column... How can I accomplish it?
> This doesn't really make sense. It sounds like you want a key that is unique across the union of several tables and that doesn't change when other rows in the union are added or removed. There are a couple of ways you might go about doing something like this. Is this what you really want?
Re: [GENERAL] artificial keys or not?
Scott Ribe [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
>> ... so the only question is: drop the serials as PKs and just use short text handles? Our database size is around 290 MB. There shouldn't be speed issues as long as proper indexes exist, right? Some of the conditions are dynamic, though, so there can't be a proper index in every case. Any other concerns?
> Someone (a manager) somewhere (probably marketing) sometime (when you least expect it) will demand that those short text names be changed, possibly for a completely irrational reason. I know there are still textbooks that claim that artificial keys are evil for some inscrutable reason, but hard experience has taught me to never, ever, under any circumstance, make a primary key out of data that comes from humans, and to be very suspicious of using data that will be visible to humans.

This is my experience also. But opinions vary, and I wouldn't be surprised to see a further post from someone which argues the exact opposite. :-)

> -- Scott Ribe [EMAIL PROTECTED] http://www.killerbytes.com/ (303) 665-7007 voice
Re: [GENERAL] Playing with PostgreSQL and Access VBA
Tony Caduto [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> In a real application that uses a client/server type database you should NEVER construct queries that return a million rows. You have to keep in mind that each record has to be sent across the wire via TCP/IP, and it is going to take a lot of time to bring back 1 million rows; I am sure your network admin would notice the increased network traffic. In a real application you would use something like the Delphi TDataset-based components, which bring back only the records that can be seen on the screen; then, when you scroll, they bring back more. You can manually create something like this using PostgreSQL's handy LIMIT and OFFSET SQL keywords in your query, to build a paging system.
> Access is actually a very poor choice for client/server application development; like I said before, invest some time learning Delphi...

Access may be a poor choice for a Delphi developer to use. But for many of us, it's an excellent choice. Everything depends on your situation and on your needs. The OP is starting from ground zero in learning about Access (or anything, apparently) and client-server apps. Of course there will be difficulty, regardless of the choice of interface.

> ... it's not difficult at all, and you won't have these problems. You can get a copy of Delphi 7 Personal Edition, and you can use the Zeos components with it: http://www.zeoslib.net Tony
>> When I had it loop and add 1000 rows, it was ok... When I told it to add a million rows, then after 250,000 the Access application hung (but not the whole machine, and not the
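The LIMIT/OFFSET paging Tony mentions looks like this; the table name is hypothetical, and a stable ORDER BY is needed so consecutive pages don't overlap:

```sql
-- Rows 101-150, i.e. "page 3" at 50 rows per page:
SELECT *
  FROM locations
 ORDER BY id
 LIMIT 50 OFFSET 100;
```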
Re: [GENERAL] psqlodbc MSAccess and Postgresql
[EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Thanks for the help with \dt... Mike, your post helped me to discover the psql command window; I had been trying out pgAdmin... I found an interesting article regarding MS Access as a front end to PostgreSQL, so I downloaded the psqlodbc installer. I will be getting up the nerve to run the install, go into Windows ODBC, and install that driver, so I can use MS Access with PostgreSQL... any tips or caveats appreciated.

Be sure to check out the FAQs and How-tos at the psqlodbc project site: http://gborg.postgresql.org/project/psqlodbc/projdisplay.php

You will find that Access and PostgreSQL get on well together. You can get by with the default driver settings for the most part; I suggest that you make sure the "Row Versioning" and "True is -1" options are set to true. If you want to use the memo datatype in Access, then you will need to check the "Text as LongVarChar" option. If you have further questions, try posting to the pgsql-odbc list.

> This is the link which is guiding me: http://database.sarang.net/database/postgres/postodbc/faq.html
Re: [GENERAL] Postgresql Windows ODBC
[EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Tonight I shall install the PostgreSQL ODBC driver which I downloaded, and experiment. Is anyone using ODBC? I was looking at REALbasic Standard Edition (not Professional), and thinking of downloading the demo and trying it with the ODBC. Anyone here use REALbasic at all? In Windows, what would you say is the easiest language to use which can talk to PostgreSQL? Just curious, e.g. I have Liberty BASIC, which is very easy, but does not easily support ODBC... Is anyone working with some form of Python on Windows? I downloaded Python, but it seems to run only in some DOS window. I mentioned this on a Dreamcard/Revolution board, and they said "Oh, you want PythonCard". Anyway, I would be interested to learn of an easy language which can access PostgreSQL, together with some decent examples or tutorials to get me going. I have a feeling that REALbasic will do the trick.

Your question is too broad to be easily answered. What kind of application do you want to create, to meet what need, for whom? In what kind of business environment will it be running?
Re: [GENERAL] ADO and PostgreSQL functions
Zlatko Matić [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED]:
> Hello. While I was working with Access Projects (Access front-end with MSDE) I was able to call stored procedures by using ADO Command and Parameter objects. Now I am trying to migrate my database from MSDE to PostgreSQL, and I'm wondering about stored procedures in PostgreSQL... I couldn't find such an expression in the PG documentation, so I suppose there is no such thing in PostgreSQL. On the other hand, I could see that functions could play such a role. Am I right?

Yes. Functions can be written to take pretty much whatever parameters you want, and can return a value, a row, a resultset, or nothing if you wish. You also have a wide choice of languages to use. The documentation has plenty more on this.

> If that is so, how can I call such functions from the client (MS Access, for example)? I couldn't call MSDE functions from MS Access using ADO, because ADO recognizes only stored procedures (adCmdStoredProc), so I suppose the same problem could arise with PostgreSQL functions... How can I use PostgreSQL functions as recordsets for my forms and reports in MS Access? How can I call and execute parameterized functions?

The usual syntax for calling functions is: select myfunction(param1, param2...);. For set-returning functions, you need to write: select * from myfunction(param1, param2...);. Note that a function has to return a datatype recognised as such by the database. Every table automatically has a type created which corresponds to its rowtype; you can also create any type you want.

In Access, there are two approaches you can use to run a parameterised query, with the parameters set at runtime:

1. Construct a query string using the supplied parameters, and then explicitly set the SQL property of your QueryDef to that string, or:
2. Send the query string to the backend as a pass-through query.
You may have trouble getting an updatable recordset this way (certainly with #2), which will give you trouble updating when using bound controls. But if you don't mind writing some code, you can handle updates/inserts easily enough in your app. You may find that using SQL for your updates and inserts, rather than ADO methods, is more robust.

Some versions of the ODBC driver have trouble with queries using set-returning functions, because they don't recognize that the function name does not refer to a table. I believe that this has been fixed in recent driver versions, but I haven't tested this. This problem would not apply in any case to pass-through queries. I suggest that you search the archives of the pgsql-odbc list for information, and direct any further questions to that list, where you are more likely to get good answers.

HTH

> Thanks.
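A minimal set-returning function illustrating the select-from calling syntax described above; the table, column, and function names are made up, and the body uses old-style string quoting with $1 for the parameter, as required on pre-8.0 servers:

```sql
-- Returns a set of rows having the rowtype of the customers table:
CREATE OR REPLACE FUNCTION active_customers(varchar)
RETURNS SETOF customers AS '
    SELECT * FROM customers WHERE region = $1 AND active;
' LANGUAGE sql;

-- Called with select-from syntax, like any set-returning function:
SELECT * FROM active_customers('west');
```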
Re: [GENERAL] Crystal reports 9 fails to recognise data on upgrade to 8.0.1
Looks like your ODBC settings might have changed in the switch. In particular, look at the text as longvarchar setting. If you cannot find a solution, try posting to the pgsql-odbc list. Kristina Magwood [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Hi, We are updating from v7.3 to v8.0.1 as well as transferring the database to a new machine. We use Crystal Reports 9 to access the data on a production basis. Unfortunately, Crystal Reports 9 fails to recognise some of the data fields on the new database. In particular, it does not recognise VARCHAR 256 and larger. Any ideas? Thanks, Kristina Magwood
Re: [GENERAL] ADO adCmdStoredProc PlPgSql-SP Parameters
[EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] Hi, How do I call a StoredProcedure, written in PlPgSql, which returns a set of records (or table, whatever) and wants parameters, from ADO while using adCmdStoredProc? I don't believe that works (anyone who thinks I'm wrong, please correct me). You will need to use select syntax, e.g. select * from setreturningfunction(param1, param2); If you are using ODBC, you may have a problem with that syntax, though I believe that the driver code has been patched to deal with that, and a new release is on its way out (or you could compile the source). ERROR: set-valued function called in context that cannot accept a set -- VERY LONG VERSION OF THIS QUESTION - When I use Microsoft SQL Server I make an SP like CREATE PROCEDURE _test AS select * from evt_event return 42 GO I call it just via its name. When I use PostgreSQL I make a SQL SP like CREATE OR REPLACE FUNCTION public._test () RETURNS SETOF public.t1 AS $body$ select * from t1; $body$ LANGUAGE 'sql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER; I call it just via its name too. When I use PostgreSQL I make a PLPGSQL SP like CREATE OR REPLACE FUNCTION public._test () RETURNS SETOF public.t1 AS $body$ declare rs record; begin for rs in select * from t1 loop return next rs; end loop; RAISE NOTICE '42'; return; end; $body$ LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER; How do I call this one?
(using adCmdStoredProc) In the SQL-Editor I may call both pg-SPs via select * from _test() EMS PostgreSql Manager SQL-Editor: VariantePG1: select * from _test() : ok VariantePG2: select * from _test() : ok Ado: RECORDSET rs = adoRecordSet.open command,, adCmdStoredProc VarianteMS : ok VariantePG1: ok VariantePG2: ERROR: set-valued function called in context that cannot accept a set Help, Andreas
Re: [GENERAL] Certifications in military environment
--- Marco Bizzarri [EMAIL PROTECTED] wrote: Hi all. I would like to know if postgresql has any certification for the military environment. There are no official certifications, nor are there likely to be. But certifications may be offered by individual companies, like Red Hat does with Linux. Here is one which might be of interest: http://groups.google.com/groups?hl=enlr=selm=20041014.104826.128620349.t-ishii%40sra.co.jp If possible, please answer me directly (I'm not on this mailing list). Otherwise, I will read from the web interface. Regards Marco
Re: [GENERAL] PostgreSQL certification
--- Konstantin Danilov [EMAIL PROTECTED] wrote: Hello everyone! I have a question.. Does PostgreSQL have official certification? If yes, where can I find certification courses to get a certificate? There are no official certifications. Anyone who wants to can offer their own certifications, like Red Hat does with Linux. Here is a link that may be of interest: http://groups.google.com/groups?hl=enlr=selm=20041014.104826.128620349.t-ishii%40sra.co.jp Konstantin
Re: [GENERAL] Last value inserted
--- Jerry III [EMAIL PROTECTED] wrote: Which means that sometimes they do not return the correct value - if you have a trigger that inserts another record you will not get the right value. If you are new to PostgreSQL, as you say, then why are you so sure of this? Perhaps you may profit from looking a little more at how currval() works. MSSQL has @@IDENTITY and SCOPE_IDENTITY() to handle this case, I'm new to pgsql so I don't know if it has anything like that. Jerry Richard Huxton [EMAIL PROTECTED] wrote in message news:[EMAIL PROTECTED] MaRCeLO PeReiRA wrote: How can I know (for sure) which value was generated by the sequence to fill the field ID? (There are lots of users using the software at the same time, so I am not able to use the last_value() function on the sequence.) Yes you are: nextval()/currval() are multi-user safe. They return the next/current value *in the current connection*. -- Richard Huxton Archonet Ltd
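A minimal sketch of the per-connection behaviour described above (the table name is hypothetical); currval() reports the value most recently handed to this connection by nextval(), not the sequence's global last value, so concurrent sessions cannot interfere with it:

```sql
-- serial creates a sequence named orders_id_seq behind the scenes
CREATE TABLE orders (id serial PRIMARY KEY, item text);

INSERT INTO orders (item) VALUES ('widget');

-- Safe under concurrency: returns the id just assigned *in this session*,
-- even if other connections have since advanced the sequence.
SELECT currval('orders_id_seq');
```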
Re: [GENERAL] OID's
--- Joolz [EMAIL PROTECTED] wrote: Thanks. Btw what's going on on the list??? It seems someone is posting a lot of obscene nonsense, using my email address. I saw this happen before with someone else. Is there anything I can do about it? This is the down side of free will. Human cleverness can be used for bad purposes as well as good. Those who do things like this gain gratification from the responses that they get from others. If we ignore them (as everyone else on the list appears to be doing) they will go away eventually. Thanks again!
Re: [GENERAL] I spoke with Marc from the postgresql mailing list.
--- Brian {Hamilton Kelly} [EMAIL PROTECTED] wrote: Which only confirms my opinion that he's a fuckwit. Evidently some of the nastiness out there on Usenet is propagating into our lists in advance of any change.
Re: [GENERAL] Important Info on comp.databases.postgresql.general
--- William Yu [EMAIL PROTECTED] wrote: ??? As a longstanding reader of the pgsql- mailing lists (including via news.postgresql.org on occasion), all I see is some outsiders trying to help us fix a problem that does not exist. And yes, I have read most of the messages that have passed by in these threads. After all that, I still don't see the benefit. To me, this sounds pretty elitist and exclusionary. Is there a reason why you would not want more people participating in the discussions here? I myself never participated here until I figured out I could point a usenet reader to news.postgresql.org to view and reply to the list. No way I was going to get hundreds of messages a day in my inbox. As for why not just keep the same system, there are times when news.postgresql.org runs like a dog. 2 or 3 days out of a month, messages take 5 or more seconds to come up. If I and others could use our local news servers, that's less bandwidth, less load for the postgres servers. And that means someone, somewhere is paying less for their contributions to the PG project. William, Thank you for your useful post. I was speaking from my personal point of view, as indeed you were. To explain: I have read the pgsql- lists for several years, and in that time I have seen many, many issues discussed, often to death and beyond. But never once have I seen a post saying gee, wouldn't it be wonderful if we were a big 8 newsgroup, or wow, it really is difficult to participate in these lists, perhaps we should get on Usenet. In short, the issue that is now being raised with us, the list participants, is not an issue for us. So, next we have a person whom we have never heard of, coming to us and saying Hey, I am solving your problem; I am going to change the way your lists work, and I am saving you trouble by doing it myself without even asking your opinion. Do you blame me, and some others, for responding with Huh?, and What was that problem again?.
The problem that is being solved is, as presented so far, too abstract to be convincing, at least for me. Your post is the first I have read that actually talks about something concrete, i.e. meaningful to an actual user. If we continue to see more discussion of this subject, as we no doubt will, I hope to see a higher ratio of useful posts like yours. Regards, Jeff
Re: [GENERAL] Important Info on comp.databases.postgresql.general
--- Gary L. Burnore [EMAIL PROTECTED] wrote: On 9 Nov 2004 15:51:24 -0800, [EMAIL PROTECTED] wrote: This is a very unfair limitation. Stop top posting. If there is a ballot, it should be posted to each of the individual mailing lists. That's not how it works. When/if a ballot appears in News Announce Newgroups, I will copy/paste and e-mail the ballot to each of the PostgreSql mailing lists, once per week during the 3 weeks of voting. The mailing list users will be most affected by any voting outcome. I will not allow them to be left out of the process. Many of the list subscribers do not use Usenet. I never posted to a newsgroup before today. Additionally, I encourage all list members to vote yes. Please get everyone you know to vote. We need to exceed the no votes by 100 in order to pass. I am affiliated with a network of over 60 PostgreSql users/developers, and I will e-mail each one of those people a ballot, and it is safe to presume that essentially all of them will vote yes. I encourage all proponents of saving the lists to do exactly the same, within their scope of influence. You're trying to get it to fail, eh? Nice going. I am stupefied by the downright arrogance of most of the news.groups readership. Please. We're trying to help you fix something that's broken. ??? As a longstanding reader of the pgsql- mailinglists, (including via news.postgresql.org on occasion), all I see is some outsiders trying to help us fix a problem that does not exist. And yes, I have read most of the messages that have passed by in these threads. After all that, I still don't see the benefit. Perhaps that is why these conversations have been carried on almost totally by people who do not post to the pgsql lists. You people have the audacity try and dictate rules of matters that do not affect your lives in the least. I've dealt with the day to day workings of USENet for more than 10 years and there are a LOT of people who've been doing it longer. 
Try not making so many assumptions. Mr. Pauli Vaurio -- [EMAIL PROTECTED] dot com --- How you look depends on where you go. --- Gary L. Burnore | DataBasix | Black Helicopter Repair Svcs Division | Official Proof of Purchase === Want one? GET one! http://signup.databasix.com ===
Re: [GENERAL] Lost databases
--- Liviu BURCUSEL [EMAIL PROTECTED] wrote: Hi ! I'm in the ultimate state of despair. Over the night some of my databases just disappeared. But they did not disappear totally. I can still connect to them but I cannot dump them. When doing a select * from pg_database I see only 2 databases I hardly ever use (they are used once in a couple of months). Template0 and template1 are missing. All the databases that I currently use disappeared! Please help me to restore the integrity of the system. I'm in dense fog here :( What PostgreSQL version is this? Your problem sounds like transaction ID wraparound. There was a recent discussion of this on the list; try the list archives to see what was recommended in that case. Liviu
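Rows vanishing from pg_database are the classic symptom of wraparound of the transaction counter. On later releases, a query along these lines (the age()/datfrozenxid machinery; exact availability depends on the server version) shows how close each database is to trouble:

```sql
-- How many transactions old is each database's frozen-xid horizon?
-- Values approaching 2 billion mean wraparound is imminent; routine
-- database-wide VACUUMs keep this number down.
SELECT datname, age(datfrozenxid) FROM pg_database;
```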
Re: [GENERAL] Rows created by a stored proc prompt Access' dreaded write conflict
--- Sim Zacks [EMAIL PROTECTED] wrote: After the stored procedure is run, call requery on the form that was updated. We are in the middle of moving Access implementations to PostgreSQL. I'd be happy to trade war stories, if you'd like. I hope that you do it on the list, so that the rest of us can profit from your experience. Thank You Sim Zacks IT Manager CompuLab 04-829-0145 - Office 04-832-5251 - Fax Hi all, I am using an Access client linked to a PG 7.4 server via ODBC. I have a stored proc on the server that inserts rows into a particular table, accomplished via an INSERT within the body of the stored proc. The procedure does not explicitly commit this data, as no transactions are invoked. The problem is that Access will not modify these records via table or form view, giving its generic Write conflict: another user has modified this record message. It does just fine for any other records in the table, but it will not modify those created by the stored proc. It will, however, execute an UPDATE or DELETE query to modify these records. This stored procedure is pretty key for us to go forward. Does anyone have any ideas of what's going on and how to fix it? I can post more details, but I wanted to see if this was a known problem before doing so. Many thanks, Eric
Re: [GENERAL] ERROR: parser: unterminated quoted string
--- Daniel Secomb [EMAIL PROTECTED] wrote: Hi, I'm using PostgreSQL 7.3.4 on phpPgAdmin 2.4.2 and I'm getting the following error when I try and run a SQL query on a script: Error - /usr/local/psa/admin/htdocs/domains/databases/phpPgAdmin/db_readdump.php -- Line: 33 PostgreSQL said: ERROR: parser: unterminated quoted string at or near 'Black Tea\015\012Fresh range of leaf tea for drinking black or with milk,\015\012including Ceylons, Assams and favourites like Irish Breakfast.\015\012\015\012Green Tea\015\012First quality Japanese Chinese popular green teas, including Bancha, Genmaicha flavoured green teas.\015\012\015\012Flavoured Tea\015\012Exotic and delicious, favourites are Vanilla, Arctic Fire Lady Grey.\015\012\015\012Herbal Infusions\015\012Pure flowers and whole leaves of calming herbs, a\015\012delightful range of specially blended herbal infusions Fruit Melanges; at character 566 Any idea as to why this is doing this? Because there is an unterminated quoted string? I see a single quote just before Black Tea, but I don't see another anywhere in that string. Thank you for your help. Dan
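The usual culprit in errors like this is a single quote embedded in the data itself. In SQL string literals a quote is escaped by doubling it, and client code can do that before assembling the query. A minimal sketch (the helper name is hypothetical; the phpPgAdmin internals are not shown):

```python
def quote_sql_literal(s):
    # In SQL, a single quote inside a string literal is written as two
    # single quotes: 'O''Brien'.  Doubling them before wrapping the value
    # in quotes keeps the literal properly terminated.
    return "'" + s.replace("'", "''") + "'"

broken = "Black Tea's finest"      # raw value with an embedded quote
print(quote_sql_literal(broken))   # 'Black Tea''s finest'
```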
Re: [GENERAL] Single Row Table?
--- Scott Marlowe [EMAIL PROTECTED] wrote: On Sun, 2004-08-29 at 15:30, Bruno Wolff III wrote: On Fri, Aug 27, 2004 at 13:32:07 +0530, Yateen Joshi [EMAIL PROTECTED] wrote: Hi, I have got a table which is supposed to contain only one row. It does not have any primary keys defined. So, essentially, when a new insert happens in that table, I would like it (the insert) to fail if there is already a row existing in that table. How can I do that? Can I add any constraints? Or do I need to write a separate trigger for the same? A simple way to force this is to add a primary key and a constraint that forces the primary key to be a particular value. Is it reasonable / possible to add a check constraint something like select count(*) from table =1? ISTM most natural to do this with a rule, e.g.: CREATE RULE my_insert_rule AS ON INSERT TO my_table DO INSTEAD NOTHING; Which will cause all inserts to be silently dropped. If you want to return a message to the application, you could use a statement trigger, which I believe we don't have yet, or you could use a rule like: CREATE RULE my_insert_rule AS ON INSERT TO my_table DO INSTEAD SELECT 'Inserts to my_table not allowed!'; Although the application may not be expecting a return message, and might not handle it.
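A sketch of the primary-key approach suggested above (table and column names are hypothetical): the CHECK constraint pins the key to a single value, so a second row can never be inserted, and the attempt fails with a constraint violation rather than being silently dropped.

```sql
CREATE TABLE config (
    id integer PRIMARY KEY DEFAULT 1 CHECK (id = 1),
    setting text
);

INSERT INTO config (setting) VALUES ('first');   -- succeeds, id = 1
INSERT INTO config (setting) VALUES ('second');  -- fails: duplicate key
```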
Re: [GENERAL] Replication options?
--- Tom Lane [EMAIL PROTECTED] wrote: Jeff Eckermann [EMAIL PROTECTED] writes: That probably won't impress your bosses. If you need a track record, then erServer might be what you need. erServer is a commercially produced product that was (is still?) used by Afilias, the provider of registry services for the .info and .org domains. That's serious testing; very large databases and lots of traffic. Note that Afilias paid for the development of Slony-I; ... because they were quite unhappy with erServer ... Now erServer did work for them, but it required significant amounts of tuning and constant babysitting by the DBA. (If Andrew Sullivan is paying attention to this thread, he can offer lots of gory details.) I can also personally testify that getting erServer set up is a major pain in the rear. I haven't messed with Slony, but all reports are that it's a substantially better piece of code. regards, tom lane Granted... Liam's bosses want something with a history of successful use in a serious production situation, and erServer at least has that. Slony has been around for too short a time to make that claim, yet.
Re: [GENERAL] New to Postgres
--- Darkcamel [EMAIL PROTECTED] wrote: Hello all, I am new to postgres and don't really understand how the database is set-up. I am very fluent with mysql and sql2000, but postgres is new to me. If anyone can point me to some good links I would appreciate it very much. This is worth a book, and there are several written for that purpose. I will try a summary. First, you need a working PostgreSQL installation. If you do not have this now, I suggest looking for a package for your platform, rather than attempting to compile source yourself (unless you are feeling adventurous). Make sure you have some user(s) and database(s) set up to play with. Example (assuming that the database setup was initialized by user postgres, which is usually the case): su postgres #you probably need to su to root first createuser myuser createdb mydb Next, look over the documentation (easiest just to go to http://www.postgresql.org for that). Don't spend too much time reading the content at first; instead, spend some time familiarizing yourself with the layout of the contents. Once you become a little familiar with the layout of the docs, you will find that you will be easily able in most cases to drill down and find answers to your questions. Then, just jump in and start playing. Connect to the server, and log into the database using psql [databasename]. Psql is something like Oracle's sql*plus; man psql will tell you more. From the psql prompt, you can issue SQL commands. There are many psql-specific commands, which you can list by typing \? at the psql prompt. Some examples: \dt list all tables \d tablename List structure (fields, datatypes etc.) for tablename. \h command Show syntax for SQL command command If you have been using another RDBMS, you can learn a lot by attempting to import a dump from that system into PostgreSQL. Depending on how many vendor-specific extensions are in use, you may find that you can do it with little (or possibly no) editing of the dump file. 
To import a dump file, just do (from the OS command line): psql databasename < dump.file This, plus a little research, should be enough to get you started. The docs are the definitive source for information. You will also find lots of useful stuff at http://techdocs.postgresql.org. If you have specific questions that you are having trouble finding answers to, post them to the list. Good luck. Thanks, Darkcamel
Re: [GENERAL] Getting started with replication
Slony seems to be moving to the head of the pack of replication solutions. Still too new to say definitively. Slony author Jan Wieck has said repeatedly that replication is used in practice to meet various needs, and there is no one replication solution that can be ideal for all of them. For that reason, there is likely to continue to be a variety of replication solutions for PostgreSQL, with none of them part of the core release. Anyone who would write a guide to replication in general, with critiques of the available PostgreSQL solutions, would be doing a great service. But I am not qualified to do it. --- CSN [EMAIL PROTECTED] wrote: Are there any docs that deal with replication and Postgresql? Intro's, faq's, etc. Can anybody recommend a Postgresql replication package? I'm really not familiar with any - seems like there are several, with possibly Slony rising to the forefront. Are there any docs that compare the various options? Would there be any problems replicating data between Linux and Windows/Cygwin servers? Thanks, CSN
Re: [GENERAL] Training and certification
I recall an extended discussion of certifications and training some time back. You may recall it, too. The outcome (if it may be so called) was the view that, given the nature of the PostgreSQL project, there exists no entity that can bless a certification and thereby make it official, any more than there can be (for example) an official linux certification. Any certifications would have to be offered by those with an interest in doing so (Red Hat is an often cited example). Another outcome I recall was that one or more people (apparently) stated their intention to develop training materials that could be available for use by people wishing to learn about PostgreSQL. But I haven't heard another thing about that. AFAICS nothing has changed. I am a little frustrated by that, because a generally-acknowledged PostgreSQL curriculum, with training materials available, would be a very good thing. But I can't talk too loudly, because I don't have time to contribute to that. --- Bret Busby [EMAIL PROTECTED] wrote: I realise that these have been discussed before, but a couple of things have happened that caused me to bring this up again, and to raise some questions. A couple of nights ago, a seminar was presented in Perth, Western Australia, by an institution offering IT masters degrees. One of the masters degrees, was a Master of Systems Development (MSD). Their masters degrees incorporated industry certification. The MSD incorporated the Microsoft MCAD and MCSD, and related to .NET systems development. The institution is also investigating possibly offering a similar masters degree, relating specifically to, or emphasising, databases. After the presentation, I asked the presenter about the possibility of incorporating open source, rather than proprietary system development, with open source industry certification.
Unfortunately, the only open source industry certification that was relevant, that I could cite, was the MySQL certification set, along with LPI and RHCE certification, with no apparent industry certification for PostgreSQL. The presenter said that the institution was considering Oracle as a possibility in the future, Oracle having industry certification. Thus, recognised, international, industry certification of open source application systems development, either involving PostgreSQL as a database backend by itself, or, involving PostgreSQL as a factor (eg, open source database applications development involving either PostgreSQL or MySQL as separate options, or, competency with both), could be useful, apart from having the internationally recognised industry certification in its own right, encouraging acceptance of open source software development in such qualifications as these masters degrees with their incorporated recognised international industry certification. In trying to find the organisation that I had understood to be the main one for providing PostgreSQL certification (I had understood that it was PostgreSQL.com, or something like that, or, possibly pgsql.com), and, being unable to find any details of any competency-based certification at these sites, I did a bit of searching, and I found a postgresql.org web page at http://www.postgresql.org/survey.php?View=1SurveyID=22 , which gave the results of a survey, with the question Should we create a standard worldwide PostgreSQL training course?, with 79.691% of the votes, being votes for the yes side - and of that percentage, 50.386 strongly yes. The survey is not dated, so I do not know how long ago it was held. However, it appears to have indicated support for a standard worldwide PostgreSQL training course. 
On the web page, whilst other survey questions are listed, no further reference is made to the result of the survey, for example, The PostgreSQL gurus are investigating the possibility of developing a standard worldwide PostgreSQL training course, in response to the survey results. So, I ask, given the result of the poll, however old it is, is any action being taken, to develop a standard worldwide PostgreSQL training course, or set of such courses (eg, core, DBA, developer, PHP|Perl/PostgreSQL web developer, etc)? I also found a web page at http://advocacy.postgresql.org/advantages/?lang=en , in which the text was included; our training programs are generally regarded as being more cost-effective, manageable, and practical in the real world than that of the leading proprietary database vendors.. To what training programs, does this refer? Are they standardised, or, does this refer to separate, independent, autonomous, individual training programs that are not standardised? Also, in my searching, I did manage to find a certification at http://www.postgresql.at/certification.html , which also has a link to training provided by that company. From the
Re: [GENERAL] PostgreSQL delete the blank in the end of the String automatically. how can I avoid it?
--- Paul Thomas [EMAIL PROTECTED] wrote: On 28/05/2004 20:46 Jeff Eckermann wrote: I had a thought that the ODBC driver could be stripping the trailing spaces, but the test I just did (using MS Access via ODBC) worked fine. But that's using a different ODBC driver so won't show a possible bug in the PG driver. PgAdminII uses the psqlodbc driver. It's pgAdminIII that does not. Thinking some more, I used a text type field for my test. I have some recollection that varchar(n) fields _will_ strip trailing spaces. But I'm too lazy to test that right now. I was testing using a varchar(n) field. Which I just tested, and found to work fine. My recollection of the space-stripping behaviour of varchar(n) appears to be off. All of which is academic, seeing that the OP solved his problem by using char(n). But that leaves wide open the question of why that would work, when varchar(n) or text do not. -- Paul Thomas +--+-+ | Thomas Micro Systems Limited | Software Solutions for Business | | Computer Consultants | http://www.thomas-micro-systems-ltd.co.uk | +--+-+
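One likely answer to the open question above is comparison semantics rather than storage: char(n) comparisons ignore trailing blanks, while varchar/text comparisons treat them as significant. A quick sketch:

```sql
-- char(n) (bpchar) comparison pads/ignores trailing blanks: true
SELECT 'abc  '::char(5) = 'abc'::char(5);

-- varchar comparison keeps trailing blanks: false
SELECT 'abc  '::varchar(10) = 'abc'::varchar(10);
```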
Re: [GENERAL] insert/update
--- Tom Allison [EMAIL PROTECTED] wrote: Jeff Eckermann wrote: --- Tom Allison [EMAIL PROTECTED] wrote: I seemed to remember being able to do this but I can't find the docs. Can I run a sql query to insert new or update existing rows in one query? So I have to watch out for transactions on this? Essentially what I'm trying to do is one of the following two: if exists update a field to field+1 on one record if it doesn't exist, insert a row with field = 1 I'm not sure what you are asking here that is not already covered. I suggest you spend some time reading the documentation on concurrency, and searching the archives for some of the lengthy past discussions on this topic.
Re: [GENERAL] insert/update
--- Tom Allison [EMAIL PROTECTED] wrote: I seemed to remember being able to do this but I can't find the docs. Can I run a sql query to insert new or update existing rows in one query? Otherwise I have to run a select query to see if it's there and then another one to update/insert. This is what you have to do. This question comes up a lot on the lists. You can read endless discussions about it if you want to search the archives. The issue is concurrency, i.e. multiple users accessing the data at the same time, and perhaps two of them wanting to do the same update-else-insert combination at the same time. Then you have the so called race condition, i.e. user1 does a select, finds the record does not exist, attempts to insert; in between those, user2 inserts the row. So, you now either have duplicate data (bad), or user1's insert fails because of a unique constraint (also bad, because the operation has failed). The only way to guarantee against this is to lock the table for the duration of the exercise, which prevents any concurrent access at all. This may be acceptable if you have few users, or a low insert/update load, but may be a performance killer otherwise. Every now and then someone pops up on the list(s) claiming to have found some new miracle method for getting around these limitations, but no such has yet been proven. What I'm trying to do is create a counter for each key, insert a value of 1 or increment the value by 1 and then set another specific row (where key = $key) to always increment by 1. And the more I type, the more this sounds like the answer is going to be part function, part trigger Maybe I should post to 'novice' for a while! ;)
http://messenger.yahoo.com/ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
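The lock-then-check pattern described in this thread can be sketched as follows. Table and column names ("counters", "key", "n") are hypothetical, and the lock is deliberately coarse, as the post warns:

```sql
-- A minimal sketch of update-else-insert under an exclusive lock;
-- no other session can slip an insert in between the two statements.
BEGIN;
LOCK TABLE counters IN EXCLUSIVE MODE;
UPDATE counters SET n = n + 1 WHERE key = 'foo';
-- if the UPDATE touched 0 rows, the row does not exist yet:
INSERT INTO counters (key, n)
    SELECT 'foo', 1
    WHERE NOT EXISTS (SELECT 1 FROM counters WHERE key = 'foo');
COMMIT;
```

The WHERE NOT EXISTS guard makes the INSERT a no-op when the UPDATE already succeeded, so the pair is safe to run unconditionally inside the locked transaction.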
Re: [GENERAL] eval in plpgsl
--- Mage [EMAIL PROTECTED] wrote: Bruce Momjian wrote: Uh, we have EXECUTE, but I don't know if it will allow you to set a local variable. Maybe you can do: execute doesn't support select into. It does, if you use the "for variablename in execute ..." syntax. Which may seem like overkill for setting just one value, but at least it will work for you. Or you could investigate other procedural languages, which may offer the capability you want. I believe pltcl does, and perhaps plpython also. Mage
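The workaround suggested above can be sketched like this. The function and table names are hypothetical; the loop runs once per result row and captures the single value:

```sql
-- A sketch of setting one variable from a dynamically built query via
-- FOR ... IN EXECUTE, since EXECUTE ... INTO was not yet available:
CREATE FUNCTION get_count(text) RETURNS integer AS '
DECLARE
    tablename ALIAS FOR $1;
    rec record;
    result integer;
BEGIN
    FOR rec IN EXECUTE ''SELECT count(*) AS n FROM '' || tablename LOOP
        result := rec.n;
    END LOOP;
    RETURN result;
END;
' LANGUAGE 'plpgsql';
```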
Re: [GENERAL] Native Win32 port - PLEASE!
--- Rick Gigger [EMAIL PROTECTED] wrote: So can I quietly beg the Win32 group to expedite this port. I believe you will be utterly astonished at the demand. Please. I'm sure quietly begging certain developers with your pocketbook probably wouldn't hurt your cause either. :) Actually though, from what I read here on this list, native win32 support is almost certainly going to be included in the next major release of postgres, which will probably come out sometime in the next 6-9 months. According to recent posts on the win32 hackers list, current development code compiles and runs on Windows now, albeit with a few magical incantations needed along the way. There are still issues to resolve, but work is proceeding rapidly. Now, as to the timing of the next release, I don't believe that anyone has made an estimate yet. After lurking on the win32 hackers list for a while, I am very impressed by the energy and capability of the people who are doing the work. Anyone who wants to test the development code, and report back on any new issues encountered, would no doubt be doing the developers a favour. Instructions for compiling on Windows are on the project page on Bruce Momjian's site. But I am feeling too lazy to look up the URL right now ;-) Am I right on this?
Re: [GENERAL] check date validity
--- Harald Fuchs [EMAIL PROTECTED] wrote: In article [EMAIL PROTECTED], LitelWang [EMAIL PROTECTED] writes: I need this function : CheckDate('2002-02-29') return false CheckDate('2002-02-28') return true Why would you want to do that? Just try to insert '2002-02-29' into your DATE column, and PostgreSQL will complain. That will cause the whole transaction to abort, which is probably not what is wanted. I don't know any way around this in Postgres. Best to check this in application code. __ Do you Yahoo!? Yahoo! Hotjobs: Enter the Signing Bonus Sweepstakes http://hotjobs.sweepstakes.yahoo.com/signingbonus ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
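One way to approximate the requested CheckDate without risking an aborted transaction is to avoid the cast entirely and compare the day against the length of the month. This is a sketch only, assuming the caller supplies year, month and day as integers rather than a string:

```sql
-- A cast-free validity check: build the (always valid) first-of-month
-- date, add one month, subtract one day, and compare against it.
CREATE FUNCTION checkdate(integer, integer, integer) RETURNS boolean AS '
DECLARE
    y ALIAS FOR $1;
    m ALIAS FOR $2;
    d ALIAS FOR $3;
    last_day integer;
BEGIN
    IF m < 1 OR m > 12 OR d < 1 THEN
        RETURN false;
    END IF;
    last_day := date_part(''day'',
        (to_char(y, ''FM0000'') || ''-'' || to_char(m, ''FM00'') || ''-01'')::date
        + ''1 month''::interval - ''1 day''::interval);
    RETURN d <= last_day;
END;
' LANGUAGE 'plpgsql';
```

With this, checkdate(2002, 2, 29) is intended to return false and checkdate(2002, 2, 28) true, matching the examples in the question.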
Re: [GENERAL] what we need to use postgresql in the enterprise
--- [EMAIL PROTECTED] wrote: I write this to tell you why we won't use postgresql even though we wish we could at a large company. Don't get me wrong, I love postgresql in many ways and for many reasons, but fact is fact. If you need more detail I would be glad to prove all my points. Our goal is to make logical systems. We don't want php, perl, or c++ making all the procedure calls and having the host language be checking for errors and handling all the transaction commits and rollbacks. That is not very logical in a large batch system. Also please don't tell me to code the changes myself. I'm not at that part of the food chain. That is too low level for me and I don't have the time to put that hat on. I build the applications that use the database systems. Also please feel free to correct me in any area; I don't know everything, I'm just stating my opinion here. 1. Need commit/rollback in pl/pgsql much like Oracle does. 2. Need exception handling in pl/pgsql much like Oracle does. 3. A) Need subtransactions. B) And if an inner transaction fails it should not cause all the others to fail. If #2 was robust enough then #3 B) might not be an issue. #1 and #3 both refer to the same thing, i.e. nested transactions. Alvaro Herrera has been working on this for some time, and recently stated on (I think) the pgsql-hackers list that he intended to have nested transactions ready for the next release of PostgreSQL. On the other hand, Tom Lane recently responded to a question about nested transactions by warning about the complexity of the problems needing to be overcome to make that happen, and expressing doubt about an early solution. So you could say that the status is unclear. A question on the -hackers list would probably get more information. Agreed that Oracle-style exception handling in pl/pgsql would be a good thing. If I understand things correctly, the new error codes scheme in PostgreSQL version 7.4 makes that easier to implement than before.
But I am not aware of anyone working on it. *** Note To Developers *** Adding Oracle-style exception handling to pl/pgsql would greatly ease the migration path from Oracle to PostgreSQL, and could easily result in many more instances of Postgres being used for enterprise apps that are now using Oracle. But I'm not up to the task, so I'm flagging it here for anyone else who might want to take a crack at it. With those two things I could accomplish pretty much everything with postgresql that we're currently doing in Oracle. 1. It's a must if you have long-running, complicated and time-consuming batch processing. There is no reason why one should, say, do all of the commits and rollbacks from the client. Our current batch system gets fired off by running some sql scripts that start an entry point into the batch system. Once the first stored procedure is called it will call all the rest. This encapsulates all logic and processing in the database where it belongs. There is no client traffic because once that first script kicks off there is no outside process running, just our pl/sql. Now I'm not a postgresql expert at all, but when I read up on it, it looks like this is something you can't accomplish, and I see no word of this being worked on. 2. Without this you can't trust complicated code, as far as I'm concerned. I need to log some errors and continue processing, and for others log and exit, which I think you can do now in pl/pgsql. Point being, pl/pgsql exception handling is almost nonexistent at best. 3. We use this all the time in pl/sql and we need to. To write this off as not needed is wrong and will not get postgresql to where it can be (AT THE TOP).
Re: [pgsql-advocacy] [GENERAL] Is my MySQL Gaining ?
--- Bret Busby [EMAIL PROTECTED] wrote: On Mon, 29 Dec 2003, Jeff Eckermann wrote: Isn't this what books are supposed to be for? i.e. to fill in the gaps or provide the coverage, tips, howtos etc. etc. that no-one really expects formal documentation to cover. There are quite a few good books out there, including two accessible online, with links from the www.postgresql.org page (that must have been modesty on your part ;-) ). Bruce's book, even referring to an outdated version of PostgreSQL, still gives a pretty good introduction for an SQL newbie in how to get started. We have plenty of good stuff already out there; the issue here appears to be more one of presentation and organization. But, do these things have set exercises, relevant to the material, to ensure the reader understands the material? It is one thing to present a worked example, but quite another, without getting a student to perform an exercise (create a database named supermarket, with tables groceryline and stocklevel and itemprice; input 100 stock lines of varying stock levels, and of varying values; then create a report of the total value of the stock, and a report listing the stock lines with an item value over $5.00, and the total value of stock with item prices over $5.00), to show whether the student actually understands what to do, and how to do it, so the student can realise whether the student needs to go back and cover the material again, or whether the student can move on. To give a person knowledge increases the person's memorised information; to require the person to use the knowledge makes the person learn, and increases the person's skills. That is why I have repeatedly referred to the need for a Teach Yourself PostgreSQL in 21 Days book, to have such exercises, etc. My post was more a response to the suggestion that the core docs should be expanded to encompass more howto stuff.
You are right, there is a need for some self-learning tool (speaking as someone who is almost entirely self-taught on PostgreSQL and computing in general), and I am not aware of any existing book which addresses that need. A problem is that much of what a newbie needs to learn is not PostgreSQL specific; much of it comes down to generic SQL and RDBMS functionality. Should we try to replicate the existing masses of material on that? For example, I first learned SQL from the SQL in 21 Days book, which gave a good general introduction, almost all of it directly applicable to PostgreSQL. I even found the introduction to Oracle pl/sql to be valuable as a quick start on pl/pgsql. Maybe we need some more suggestions from people about what they would have liked to have had when they first got started with PostgreSQL, and get some ideas from that. I was happy with the existing resources, but I am not most people.
Re: [GENERAL] Is my MySQL Gaining ?
This has been an interesting thread, with lots of well considered contributions. The consensus seems to be PostgreSQL is plenty good enough and more, we just need more people to know it, and an easier learning path. What bothers me a little here is an apparent lack of awareness of the work of the Advocacy Group. They have been organized for a little over one full release cycle, but have already begun to achieve some impressive things. The release of version 7.4 saw a well prepared press release, which was subsequently picked up by journalists and featured (often lifted word for word) in articles in a variety of IT industry publications around the world. The effect was to get our marketing material in front of the eyes of many readers, without them having to go looking for it at all. When did that happen before? I cite that as just one example of what can be achieved by an organized and co-ordinated approach, which is just what the Advocacy Group is working on. The scope for more development along these lines is huge, all that is needed is the passage of time, and hopefully more contributions from more people. I recommend to all those whose interest was caught by this thread to check out the pgsql-advocacy list, if you have not already done so, and think about what you might be able to add. In answer to the obvious question, I have been lurking on that list for a while, and intend to make a contribution where I feel fitted to do so. Maybe we need to invent some new solutions, but for advocacy at least, we already have one. --- Ericson Smith [EMAIL PROTECTED] wrote: A documentation system like the one over at http://php.net, would be fantastic for Postgresql. There could be lookups based on SQL commands, Functions, and Sitewide Searches. This alone would go a long way to expose PHP to the masses. 
In terms of using MySQL or Postgresql, let's all face it: most data storage work could be easily and efficiently handled by text files, since there needs to be just infrequent inserts and updates, and mostly reads. The majority of interfaces exposed on the web follow this paradigm, and include: * Content management * Catalogs * Shopping cart stuff * User management Yes, our powerful and easy to use PG can do all of that too, but SQLite, Sleepycat DBM files and MySQL can do it as well. There are going to be even more migrations from Oracle to MySQL than from Oracle to PG, because so many of those Oracle installations were overkill in the first place. Our place is in that hoary back end that runs the world, the un-sexy part of any organization that no one outside of the Development team, or System Administrators know about. Getting mindshare is a different problem. That requires PG to have a full time effective press person. This press person would need to be in touch with the press constantly to tell them things like: * PG is a great back end for Windows clients using ODBC/MS Access/Excel * PG is a real database comparable to Oracle * PG costs nothing * Free support is fabulous, and paid support is available * Development is constant In the end, I believe that PG needs to move into an organizational structure so that its considerable assets can be fully realized, its wonderful developers may be fully compensated, and commercial users (our bread and butter), can have an official place to help sponsor features of the system and so on. All this is more than a website. Someone posted pictures of the PG booth at a show recently. It was nice, but there was this one sad guy shrouded in darkness -- I felt depressed, because that's how PG advocacy felt. Warm regards, Ericson Smith DBA/Developer +---++ | http://www.did-it.com | When I'm paid, I always | | [EMAIL PROTECTED] | follow the job through. | | 516-255-0500 | You know that. 
-Angel Eyes| +---++ Karsten Hilbert wrote: I'm in a similar situation. My app is currently PG-only (although I _might_ be able to get it to work with Firebird eventually). Currently I have to sell Linux to prospective clients in addition to my app. A native Windows version would make my life a bit easier. Same here. Our clients use legacy medical office software that 99% runs on Windows. We offer add-ons (tailored mini-versions of our main application :-) and thus get OSS (Python, PostgreSQL, wxWindows, sometimes Linux itself) into their offices and onto their networks. Most of the time the main difficulty is to figure out how to offer PostgreSQL in their environment (yes, we know about CygWin). (clients because we don't do business as in selling stuff) Karsten Hilbert, MD www.gnumed.org
Re: [GENERAL] [NOVICE] PostgreSQL Training
Chris, Your persistence and your ideas suggest that you have something to contribute in this area. Why not sign up on the pgsql-advocacy list, and carry your ideas forward there? That would be an appropriate forum for this kind of discussion. The arguments on both sides (if there are only two sides) are strong, which is why this is such a difficult problem. But I see some promise of the emergence from further discussion of some workable formula. The Wise Heads have been convinced to change their minds before, and can be again. Best of luck! --- Chris Travers [EMAIL PROTECTED] wrote: On Fri, 2003-12-12 at 21:40, Bruce Momjian wrote: Imagine if Linus or the Linux kernel guys tried to standardize Linux training --- it would be a mess. Exactly, but that is what community is for :-) Also, though lots of people want training, it seems they want _free_ training. They aren't flooding my Atlanta classes, that's for sure. I give classes at many conferences around the world too, and I get usually 20-40 people --- not exactly a flood either. Maybe they want me to come to their house? :-) Tell me what your wife is cooking for dinner before I decide. :-) I guess I see this from a different angle. The problem is not only that people only want free training, but that the PostgreSQL community by and large has a very small novice component. Most people who turn to PostgreSQL understand what it is they are looking for and have experience with other relational database systems. As a result these people (myself included) can easily pick up the manual and run with it. Compared to MySQL's, our community is sparse, widely dispersed, and MUCH more experienced/professional. This puts a damper on the training unless we can create a larger interest in the database among novices. This is partly what the job of the advocacy community is. But really it crosses all boundaries. 
I am wondering if you are interested in helping with some sort of skills outline project-- what skills we as a community think are important for someone to claim basic mastery over the database manager. Not as if you don't have enough to do already ;-) Maybe at least as a mentor. Best Wishes, Chris Travers ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org __ Do you Yahoo!? Protect your identity with Yahoo! Mail AddressGuard http://antispam.yahoo.com/whatsnewfree ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [GENERAL] Export CSV from psql
--- Craig O'Shannessy [EMAIL PROTECTED] wrote: As it's very unlikely that there are tabs in your strings, tab makes a much safer field separator. Probably, but I wouldn't bet the farm on it. I have found plenty of instances of unexpected tabs, especially with data generated from character-based apps, where users tend to forget that they can't tab from one field to another... There is no substitute for checking.
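For reference, a tab-delimited export along the lines discussed might look like this. The table name and file path are hypothetical, and the USING DELIMITERS spelling is the old 7.x COPY syntax:

```sql
-- Server-side export with tab as the field separator and an empty
-- string for NULLs; from psql, \copy does the same on the client side.
COPY mytable TO '/tmp/mytable.tsv' USING DELIMITERS '\t' WITH NULL AS '';
```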
Re: [GENERAL] plpgsql: return multiple result sets or temp table
--- Oksana Yasynska [EMAIL PROTECTED] wrote: Jeff, thank you for the time and suggestion. I'm also trying to use SETOF custom_type as a solution I don't believe that works in 7.2. But if you do make it work, be sure to tell me how ;-) That will work with 7.3, but I am assuming that, as your first message said, you cannot upgrade the server. Best of luck. __ Do you Yahoo!? The New Yahoo! Shopping - with improved product search http://shopping.yahoo.com ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] plpgsql: return multiple result sets
--- Oksana Yasynska [EMAIL PROTECTED] wrote: Hi all, I'm running Postgres 7.2.1 and I need to return multiple row sets from a plpgsql function. I'm new in plpgsql, but according to the documentation and everything I could find in the mailing list, I need to switch to 7.3 to get at least a SETOF rows as a result. I can't really upgrade Postgres now. Is there any workaround to retrieve multiple row sets? I have up to 50 tables in the database to join and pass this data to another application. I had an idea to build a view and retrieve a cursor on this view (if I stay with 7.2) or generate a custom type based on the columns of all 50 tables and retrieve a SETOF custom type (if I use 7.3). Can anybody give me any suggestion? You can return a cursor from your function, which you can then use in your application. Sort of like: create function my_cursor_test(refcursor, integer) returns refcursor as 'begin open $1 for select * from mytable where id = $2; return $1; end;' language 'plpgsql'; Then call it like: begin; select my_cursor_test('mycursor', 1); fetch all in mycursor; (output comes here) end; Note the need to wrap the statements in an explicit transaction. With statements being autocommitted, the cursor would be closed immediately following the function call. Better check the syntax too, I just dashed that off (hey, it's Saturday). It's all there in the 7.2 docs under procedural languages.
Re: [GENERAL] PG tools
--- G Lam [EMAIL PROTECTED] wrote: Hi, I am new to postgresql. I have done some small applications with MS Access. Are there some PG tools out there that can build applications like MS Access does? MS Access works well with PostgreSQL via ODBC. Check here: http://gborg.postgresql.org/project/psqlodbc There are no free/open source products which can be said to do what MS Access does, although we hope that pgAccess might do it. A recent recommendation on this list pointed to a commercial product: http://www.thekompany.com/products/rekall/ I haven't tried it so I can't say.
Re: [GENERAL] Migrating Access to Postgres
--- Fabrizio Mazzoni [EMAIL PROTECTED] wrote: Hello .. i did the migration and my advice is that you forget about the automatic translation tools... I have used the migration wizard that comes as a plugin for PgAdminII, and was very happy with the result. Yes, you can do everything by hand, but why? I took the access database and recreated all of the tables and relationships between them manually in postgres. I also added all the necessary indexes at design time .. Basically you have to open each table in access in design mode and recreate it in postgres (maybe first write it in a text file) with sql commands, e.g.: create table foo (a serial primary key, b varchar) .. After that you have to migrate the queries and recreate them as views in PG... Then i dumped all the access tables to csv files and reimported them in pg with the copy command. Another solution to export the data from access to PG would be to link all the pg tables in access and execute an insert query from access. This is a faster solution but sometimes access can run out of memory, or you can get differences and errors in the datatypes which are very annoying... I kept access only as a frontend and believe me this solution gave us a huge boost in production in our company ... Best Regards, Fabrizio Mazzoni On Wed, 1 Oct 2003 09:23:44 -0600 (CST) Bernardo Robelo [EMAIL PROTECTED] wrote: Hi, I am interested in migrating a Microsoft Access database to a Postgres database. But I do not have any idea how to get started. Maybe some tool exists for this problem. Thank you. Bernardo
Re: [GENERAL] Picture with Postgres and Delphi
This discussion provides an opportunity to capture the essentials of how to store and retrieve images using PostgreSQL, and the issues (performance, administration etc.) associated with each possible method. A discussion article on Techdocs (or even General Bits?) would be a fine thing. My observation is that questions on this topic are coming up on the lists more and more frequently. I'm not volunteering, not having the competence to write on the subject. But maybe someone else will be moved to attempt it. __ Do you Yahoo!? Yahoo! SiteBuilder - Free, easy-to-use web site design software http://sitebuilder.yahoo.com ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Picture with Postgres and Delphi
--- Jonathan Bartlett [EMAIL PROTECTED] wrote: I'm a big fan of bytea. In every case where I've done the filesystem method I wished I hadn't. For the education of me and maybe others too, why was that? i.e. what problems did you run into, that bytea avoids? __ Do you Yahoo!? Yahoo! SiteBuilder - Free, easy-to-use web site design software http://sitebuilder.yahoo.com ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Trunc in Postgres
Look at the trim function. While you are about it, looking over the other available functions would be worth your while too (look under Functions and Operators in the docs). --- Amin Schoeib [EMAIL PROTECTED] wrote: Hi, Is there an equivalent for the trunc function of Oracle in Postgres??? I need to trunc(the zeros) a number which is stored as a char with a lot of zeros Like that : 004 In oracle you can make that by trunc(YOUR_COLUMNNAME,0) Thanxx Schoeib 4Tek Gesellschaft für angewandte Informationstechnologien mbH Schoeib Amin Tel. +49 (0) 69 697688-132 Fax. +49 (0) 69 697688-111 http://www.4tek.de __ Do you Yahoo!? Yahoo! SiteBuilder - Free, easy-to-use web site design software http://sitebuilder.yahoo.com ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
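To make the pointer above concrete, trim can strip the leading zeros from the value given in the question:

```sql
-- trim with the "leading" keyword removes the padding characters
-- from the front of the string only:
SELECT trim(leading '0' from '004');  -- yields '4'
```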
Re: [GENERAL] Oracle decode Function in Postgres
--- Amin Schoeib [EMAIL PROTECTED] wrote: Hi, Like I see there is no equivalent to the Oracle decode Function In Postgres.Is there maybe somebody who wrote decode as a Function? The problem for any such body is that there is no one function possible in PostgreSQL that will cover all of the possible use cases of decode in Oracle. As I understand things, decode takes a variable number of arguments, for a variety of datatypes. In PostgreSQL you would need to create a separate decode function for every distinct set of arguments (number and datatypes) that you are likely to encounter. That would be a lot of work for a generalized case. You may be best served by searching your code for usage of decode, and writing only those versions that you need. The coding for each one would be trivial, using case constructions. Schoeib 4Tek Gesellschaft für angewandte Informationstechnologien mbH Schoeib Amin Tel. +49 (0) 69 697688-132 Fax. +49 (0) 69 697688-111 http://www.4tek.de __ Do you Yahoo!? Yahoo! SiteBuilder - Free, easy-to-use web site design software http://sitebuilder.yahoo.com ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
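As an illustration of the suggested approach, one specific Oracle decode() call might be rewritten as a CASE expression like this (the column, values and table name are hypothetical):

```sql
-- Oracle: decode(status, 'A', 'active', 'I', 'inactive', 'unknown')
-- PostgreSQL equivalent using a searched CASE on the same column:
SELECT CASE status
           WHEN 'A' THEN 'active'
           WHEN 'I' THEN 'inactive'
           ELSE 'unknown'
       END AS status_text
FROM mytable;
```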
Re: [GENERAL] Case sensitivity issue
use ILIKE instead of LIKE or use a regular expression match: ~* 'red' - Original Message - From: Michael Gay [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Sunday, September 09, 2001 11:02 PM Subject: [GENERAL] Case sensitivity issue If I am doing a command such as select * from Apples where color like '%red%'; how do I make it case insensitive? Thanks. mike ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
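Applied to the query in the question, the two suggestions read:

```sql
-- ILIKE is PostgreSQL's case-insensitive LIKE; ~* is the
-- case-insensitive regular expression match operator.
SELECT * FROM Apples WHERE color ILIKE '%red%';
SELECT * FROM Apples WHERE color ~* 'red';
```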
Re: [GENERAL] joinning tables
Outer joins are supported in PostgreSQL, but not before version 7.1. If you are using version 7.1, please post your failing query, and someone will be able to help you. Emmanuel SARACCO [EMAIL PROTECTED] wrote in message news:9nnjc3$68p$[EMAIL PROTECTED]... hi, it seems like postgresql does not interpret a LEFT JOIN instruction like other sql engines. for example, if there are some null values on keys, data tables are not joined... same as if it was an INNER JOIN instruction. any idea? thanks
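A minimal sketch of the expected behaviour, with hypothetical tables: a LEFT JOIN preserves left-hand rows that have no match, but a NULL key never compares equal to anything, so rows with NULL join keys come back unmatched under either join type, which may be what the poster was seeing:

```sql
-- Left rows always survive a LEFT JOIN; b's columns are NULL where no
-- right-hand row matches. A NULL a.id matches no b.a_id at all.
SELECT a.id, b.val
FROM a LEFT JOIN b ON a.id = b.a_id;
```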
Re: [GENERAL] error while generating and EXECUTEing a query dynamically
Your function worked fine for me (I copied and pasted). What PostgreSQL version are you running? EXECUTE is a new feature for version 7.1. - Original Message - From: Gaurav Priyolkar [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, September 12, 2001 1:15 PM Subject: [GENERAL] error while generating and EXECUTEing a query dynamically Hi all, The machine I am sending this mail from does not have postgres, but I did this just this morning so am typing from memory: code CREATE FUNCTION footable() RETURNS INTEGER AS ' DECLARE sql VARCHAR(1000); BEGIN sql := ''CREATE TABLE foo (x INT, y CHAR) ''; EXECUTE sql; RETURN 1; END; ' LANGUAGE 'plpgsql'; code Now when I try to execute this function I get an error message saying "parse error at or near". I will get the exact function and error output from the other machine tomorrow (the actual query I am generating is more complex) but in the meantime if anyone can help with this example, I would be most grateful. Where am I going wrong? TIA Gaurav
Re: [GENERAL] COPY command with timestamp and NULLs
COPY expects NULL to be represented by \N by default. Probably easiest to explicitly specify on the command line what you want COPY to recognize as a NULL: copy table from wherever with null as 'something'; - Original Message - From: Dwayne Miller [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Monday, September 10, 2001 2:51 PM Subject: [GENERAL] COPY command with timestamp and NULLs I recently upgraded to 7.1.3. I was experimenting with a script to export data from FoxPro into an SQL file and multiple data files. The SQL file creates the tables, indexes, foreign keys, etc, and calls the COPY command to load the data from the appropriate data files. It appears, and I could easily be mistaken, that the COPY command does not allow NULLs into a timestamp field, even though the field is defined to accept nulls. Actually, it appears that the behavior of the COPY command changed as I believe it would accept nulls in the prior release 7.1.2. In any case, I'm using the COPY command WITH NULL AS '^N'. And the datafile contains ^N in timestamp fields that could be NULL, but the command fails with an invalid timestamp error, referencing the first line that contains the '^N' null sequence. Any thoughts? Thanks, Dwayne ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
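Spelled out, the suggestion above for the poster's data would be (path and table name hypothetical):

```sql
-- Declare '^N' as the NULL representation so timestamp fields
-- containing ^N load as NULL instead of failing the timestamp cast:
COPY mytable FROM '/tmp/data.txt' WITH NULL AS '^N';
```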
Re: [GENERAL] SQL Loader?
COPY is the command used for bulk loading in PostgreSQL (check the SQL Commands in the Reference Manual part of the docs). COPY is not intelligent about interpreting varying data formats. Your flexibility is limited to specifying the data delimiter and whatever string you want to represent NULL. If your data is not in the correct format to match the PostgreSQL data type for that column in the table, COPY will error out. If you have data that doesn't meet these criteria, you have two choices: 1. Write a script to edit your data into the required form (which is what people generally do), or 2. Go to the source code for COPY, and develop it into a more intelligent/flexible form. Requests for no. 2 don't seem to be frequent, so I am guessing that development of COPY is not high on the developers' todo list. - Original Message - From: Guy Fraser [EMAIL PROTECTED] To: Joel Pang [EMAIL PROTECTED]; PostgreSQL general [EMAIL PROTECTED] Sent: Monday, September 10, 2001 12:10 PM Subject: Re: [GENERAL] SQL Loader? Joel Pang wrote: Hi Fraser, I've read your mail in the postgresql web site that you've a procedure that will do bulk loading of records into a database. I've been looking for a utility just like the sql loader of Oracle for the Postgres database. I was wondering if you would share your procedure with me? Or give me some URL links that I can get reference on how to write my own sql loader procedure? Thanks in advance for your help. Cheers, Joel Please note that the script below uses a special program to modify radius accounting files for import as an ordered tab-delimited file. The script also shows a way to import into an intermediate table for separating different unique records into separate files, then moving the duplicate entries to a table for data integrity. This also demonstrates a method of using tables determined from the data being imported. I wrote this software a long time ago and have spent little time patching or rewriting. 
Since the script was initially written, I now know some better ways of performing some of these tasks. I don't have time to rewrite the script and it has worked for over 3 years, so it is low priority. Guy Fraser

begin radimport

#!/bin/bash
ORIG=`pwd`
WORKDIR='/usr/local/pgsql'
cd $WORKDIR
echo Start collecting files
wget -P $WORKDIR/ -t 10 -T 30 -c \
ftp://username:[EMAIL PROTECTED]/path/*.acct.gz # modified for security
echo Finished collecting files
echo Start decompressing files
gunzip -v *.acct.gz
echo Finished decompressing files
for fname in `ls *.acct`; do {
[ -f $fname ] && {
date
echo Start inputing $fname
# parserad - convert radius detail file to tab delimited format.
/usr/local/sbin/parserad -i $fname \
| /usr/local/pgsql/bin/psql -c "copy detail from stdin;" radius
echo Finished inputing $fname
echo Start compressing $fname
gzip -v $fname
echo Finished compressing $fname
#
#
# Clean up detail
#
/usr/local/bin/psql -c "vacuum detail;" radius
#
# If more than one month determine Current and Previous, Month and Year.
#
#
MINTIME=`/usr/local/pgsql/bin/psql -c \
"select date_trunc('month',min(\"Time-Stamp\")::abstime) from detail;" \
-A -t radius`
MINMON=`echo $MINTIME | awk '{print $2}' -`
MINYEAR=`echo $MINTIME | awk '{print $5}' -`
MAXTIME=`/usr/local/pgsql/bin/psql -c \
"select date_trunc('month',max(\"Time-Stamp\")::abstime) from detail;" \
-A -t radius`
MAXMON=`echo $MAXTIME | awk '{print $2}' -`
MAXYEAR=`echo $MAXTIME | awk '{print $5}' -`
[ "$MAXYEAR" = "" ] && (echo Exiting: No Data in detail table. || exit 1)
echo Moving $fname
mv $fname.gz /mnt/sdb3/done/$MAXYEAR
echo Start processing data from $fname
#
# Process records in detail file and create a unique record file called radius.
#
#
echo Creating lookup table
/usr/local/bin/psql -c \
"select min(oid) as recnum,max(\"Acct-Session-Time\"), \
\"Acct-Status-Type\",\"Acct-Session-Id\",\"NAS-IP-Address\",\"NAS-Port-Id\", \
\"User-Name\",\"Realm\",\"Framed-IP-Address\",\"Calling-Station-Id\" \
into radius \
from detail \
group by \"Acct-Session-Id\",\"NAS-IP-Address\",\"NAS-Port-Id\",\"User-Name\", \
\"Realm\",\"Framed-IP-Address\",\"Calling-Station-Id\",\"Acct-Status-Type\";" \
radius
/usr/local/bin/psql -c "vacuum radius;" radius
#
#
# Move stop records to stop table
#
echo Creating unique stop record table
/usr/local/bin/psql -c \
"select recnum into radstop from radius where \"Acct-Status-Type\" = 'Stop';" \
radius
echo Filtering stop records
/usr/local/bin/psql -c \
"select a.* into stop from detail as a,radstop as b where a.oid = b.recnum;" \
radius
/usr/local/bin/psql -c "select count(*) as \"Filtered\" from stop;" radius
echo Cleaning stop records
/usr/local/bin/psql -c \
"delete from detail where oid = radstop.recnum;" \
radius
echo Cleaning temporary tables
/usr/local/bin/psql -c "drop table radstop;" radius
/usr/local/bin/psql -c "vacuum detail;" radius
#
# Move start and alive records to start table
#
echo
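The core of the pipeline above is just "convert the raw records to tab-delimited rows, pipe them into COPY from stdin". A minimal stand-alone sketch, with awk standing in for the author's parserad tool, and the psql step shown commented because it needs a live server and the radius database:

```shell
# Convert comma-separated accounting records to tab-delimited rows for COPY.
awk -F',' -v OFS='\t' '{print $1, $2}' > /tmp/detail.tab <<'EOF'
user1,3600
user2,7200
EOF
cat /tmp/detail.tab
# /usr/local/pgsql/bin/psql -c "copy detail from stdin;" radius < /tmp/detail.tab
```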
Re: [GENERAL] template0 and template1 databases
No. template0 is new for version 7.1. No doubt a developer can tell you exactly why dropping template0 would be a very bad idea; I am just getting in quickly to make sure you don't do so in the interim :-) - Original Message - From: Andrew Gould [EMAIL PROTECTED] To: J.H.M. Dassen (Ray) [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Thursday, September 06, 2001 7:51 AM Subject: Re: [GENERAL] template0 and template1 databases Thanks for the info and link. So if I'm running PostgreSQL 7.1.3 and have pg_dumped my databases, I can 'dropdb template0'? Andrew --- J.H.M. Dassen (Ray) [EMAIL PROTECTED] wrote: Andrew Gould [EMAIL PROTECTED] wrote: I notice that template0 can't be vacuumed; and that template1 gets vacuumed but has no relations per \d. What do these databases do? template0 is probably a leftover you got from importing a dump from an older version of PostgreSQL. template1 is the master database from which new databases are cloned; see http://www.postgresql.org/idocs/index.php?managing-databases.html#MANAGE-AG-CREATEDB HTH, Ray -- a infinite number of monkeys typing into GNU emacs would never make a good program .../linux/Documentation/CodingStyle
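An illustration of the template mechanism described above (printed as SQL text only, nothing here talks to a server; the object and database names are invented):

```shell
cat <<'EOF' > /tmp/template_demo.sql
-- connect to template1 and add an object...
\c template1
CREATE TABLE common_lookup (id integer, label text);
-- ...then every database created afterwards contains a copy of it
CREATE DATABASE mydb;
EOF
cat /tmp/template_demo.sql
```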
Re: [GENERAL] Regarding Vacuumdb
Before doing anything so drastic, you may want to check the archives for recent messages on a similar error. IIRC, this problem can be caused by rolling back a drop table command. The effect is that the table structure in the database remains, but the physical file is gone. The fix is to create a file with the correct name in the data directory for that database, then drop the table. If you are running version 7.1, the trick will be to figure out the correct name for that file. You can find discussion on that in the archives too. - Original Message - From: Shaun Thomas [EMAIL PROTECTED] To: [EMAIL PROTECTED] Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Friday, August 31, 2001 11:48 AM Subject: Re: [GENERAL] Regarding Vacuumdb On Tue, 28 Aug 2001, Bhuvaneswari wrote: hi, I am getting the following error while doing vacuumdb, ERROR: mdopen: couldn't open test1: No such file or directory vacuumdb: database vacuum failed on db1. We got this error a lot in 6.5. Usually it means your table has somehow been corrupted, and postgres doesn't want anything to do with it. It'll show up, and you can even select from it, but doing so will crash the back-end, and you can't run a vacuum or pg_dump on that database successfully. You'll have to do a table-by-table pg_dump, destroy the DB, and reimport everything. You'll have to rebuild the corrupted table from scratch, since you might not be able to dump it. Either way, it's a lot of work. Just be careful. -- +-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+ | Shaun M. ThomasINN Database Programmer | | Phone: (309) 743-0812 Fax : (309) 743-0830| | Email: [EMAIL PROTECTED]AIM : trifthen | | Web : hamster.lee.net | | | | Most of our lives are about proving something, either to | | ourselves or to someone else. 
| | -- Anonymous | +-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-+
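The workaround described earlier in the thread (create a file with the right name, then drop the table) can be sketched like this. The directory and oid are hypothetical; under 7.1 the table's heap file is named after its oid, which you would look up via pg_class (see the archives):

```shell
mkdir -p /tmp/pgfix_demo          # stand-in for $PGDATA/base/<database-oid>
touch /tmp/pgfix_demo/18745       # 18745: hypothetical oid of the vanished table
ls /tmp/pgfix_demo
# afterwards:  psql -c 'DROP TABLE test1;' db1   # the drop can now proceed
```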
Re: [GENERAL] Newbie ...Function error (Stored Procedure)?
I think you need to double the single quotes around the sequence name: (''seq1''). Statements inside function definitions go through an extra level of parsing, which strips off one set of single quotes. - Original Message - From: Ron S [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Monday, August 27, 2001 2:34 PM Subject: [GENERAL] Newbie ...Function error (Stored Procedure)? I have a sequence called seq1. In psql I am trying to create a function which simply calls this sequence with the nextval() function. CREATE FUNCTION testid() RETURNS INTEGER AS 'SELECT NEXTVAL('seq1');' LANGUAGE 'SQL'; I get the following error ERROR: parser: parser error at or near seq1 I can call nextval('seq1') by itself with no error. What am I doing wrong? I can't for the life of me figure this seemingly simple error out ;) Postgresql 7.0.3 on Mandrake 8.0 Thanks, Ron
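With the quotes doubled as suggested, the definition would read as below (printed as SQL text; 7.0-era syntax, where the whole function body is one single-quoted string):

```shell
cat <<'EOF' > /tmp/testid.sql
CREATE FUNCTION testid() RETURNS integer AS '
    SELECT nextval(''seq1'');
' LANGUAGE 'SQL';
EOF
cat /tmp/testid.sql
```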
[GENERAL] Re: problems transfering databases
I very much doubt that the data wouldn't "dump right". The dump will include any data that is in the table, including CRs if they are in there. In the current case, the CRs were not already in there, because PostgreSQL would not have recognized that field as null ("\N") if they were. Any transfer of a file via a Windows machine is apt to cause line endings to be silently changed, which would account for this case. The real danger is where the rightmost field is a text type, because the CRs would be silently imported into the new installation, where they could cause real problems with any app that tries to use those fields, and a headache to debug. - Original Message - From: Rob Arnold To: Jeff Eckermann ; Miroslav Koncar Cc: [EMAIL PROTECTED] Sent: Thursday, August 23, 2001 6:32 PM Subject: Re: problems transfering databases I've had this problem before. The data is stored correctly in PG, it just doesn't dump right (the CR thing I guess. I didn't know the reason at the time). I think this was on 7.0.x. I worked around it by doing a pg_dump that dumped the table as "proper inserts". That make the load MUCH slower, but it works around the issue. Now that I know what the cause might have been, I guess I better go look at my data . . . --rob - Original Message ----- From: Jeff Eckermann To: Miroslav Koncar Cc: [EMAIL PROTECTED] Sent: Thursday, August 23, 2001 10:59 AM Subject: Re: problems transfering databases Looks like you already have the answer. The error message is a dead giveaway. Some field, in this case having a null value (represented as '\N') contains a carriage return character, like: '\NCR'. PostgreSQL cannot make sense out of this combination, so it returns an error. The CR contained in the error message causes the output to return to the start of the line, outputting the rest of the message from the beginning: see how the final " ' " overwrites the "p" in "psql". Edit out the CR characters, and the problem will go away. 
This is what Tom already said; I am expanding on it. - Original Message - From: Miroslav Koncar To: Tom Lane Cc: [EMAIL PROTECTED] Sent: Thursday, August 23, 2001 3:18 AM Subject: Re: [GENERAL] problems transfering databases Hello Tom, thanks for the answer. This is the procedure, how we tried to transfer the data from one machine to another. My collegue has used the pg_dumpall command, in the following way: pg_dumpall backups/2001-06-04db.out Since it is a simple test database, the 2001-06-04db.out file was around 40kB, so he has sent it to me via email. I tried to load the data from the file in two different ways: psql -f 2001-08-21db.out cat 2001-06-04db.out |psql but got the error like stated before. I've tried to do this on Solaris and Linux (I've set postgreSQL up on my Linux box for testing purposes), but the same error pops up. Could it have something to do with the versions of the databases, and how it was installed? The version I use is psql (PostgreSQL) 7.1.2 (the output of psql -V command), but I'm not sure what version did my collegue use (I'll find that out, but I think it is 7.x.x). Regards, Miroslav Tom Lane wrote: Miroslav Koncar [EMAIL PROTECTED] writes: 'sql:2001-08-21db.out:5: ERROR: copy: line 1, Bad abstime external representation '\N psql:2001-08-21db.out:5: PQendcopy: resetting connection Given the weird line-wrapping of the message, I'm going to bet that the problem is that newlines in the dump file have gotten converted to DOS format (ie, \n became \r\n), and COPY is unhappy because it sees the \r as part of the data. When it echoes the data it didn't like, the presence of the \r messes up the format of the error message. Not sure *how* that happened in a Linux-to-Solaris transfer, though; what tool did you use to transfer the dump file? regards, tom lane-- Miroslav Koncar Software Engineer Ericsson Nikola Tesla ETK/D/R Tel: +385 1 365 3479 Fax: +385 1 365 3548 mailto:[EMAIL PROTECTED]
Re: [GENERAL] problems transfering databases
I think we may be using too many words in this discussion. Perhaps there is a message in this thread that I have not seen. Miroslav, have you tested for carriage returns in your data, and if so, what did you find? If that is the problem, the carriage returns are easily edited out, and speculation about the cause may use more time than it is worth... If that is not the problem, we definitely have something weird. The original error message shows the problem occurring at the first line of your data. Perhaps you could pipe the first few lines through some utility that shows nonprinting characters (like vis -lw), and see what that shows? - Original Message - From: Tom Lane [EMAIL PROTECTED] To: Miroslav Koncar [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Friday, August 24, 2001 8:13 AM Subject: Re: [GENERAL] problems transfering databases Miroslav Koncar [EMAIL PROTECTED] writes: The only thing I can think of is the PostgreSQL versions. My colleague is using 7.0.3, and I use 7.1.2. Comments? That wouldn't produce a carriage-return problem. It's possible that we're barking up the wrong tree, and that the issue is not carriage returns but something else --- I just jumped to that conclusion based on the way you presented the error message. But maybe the error message was mangled because you were sloppy about cutting-and-pasting it into your email, or some such. Have you looked at the data to try to narrow down exactly which line is being rejected? regards, tom lane
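A quick way to carry out that test, and to strip the carriage returns if they are there (the sample dump data here is invented):

```shell
printf 'f1\t\\N\r\nf2\tvalue\r\n' > /tmp/dump.out   # sample dump with DOS line endings
grep -c "$(printf '\r')" /tmp/dump.out              # count lines carrying a CR
tr -d '\r' < /tmp/dump.out > /tmp/dump.clean        # strip them before restoring
```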
RE: [GENERAL] Calling stored procedures.
You need to declare the variable ret, i.e. add a line before BEGIN like: DECLARE ret integer; Your function will not work without that. You may still get an error for some other unknown reason, or you may not. I have found that plpgsql errors can be highly uninformative, or downright misleading. Hi. I've created some stored procedures. For example: create function validate_user(varchar, varchar) RETURNS integer AS ' BEGIN SELECT count(*) INTO ret FROM users WHERE userid = $1 and passwd = $2; RETURN ret; END; ' LANGUAGE 'plpgsql'; When I try to call it with the command, select validate_user('user', 'passwd'), I get this error message: Error Number: -2147467259 Error Description: Error while executing the query; ERROR: fmgr_info: function 19264: cache lookup failed I did some research and found that this error message occurs when a stored procedure refers to another function that has been dropped from the database, and that to avoid this error you must drop all objects referring to this stored procedure and recreate them. The problem is that I get this error message for NEWLY created procedures. Any ideas about this? I don't know what this OID 19264 is referring to. Thanks. John
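With the missing declaration added, the function would read as below (printed as SQL text; pre-7.3 quoting style, body as one single-quoted string):

```shell
cat <<'EOF' > /tmp/validate_user.sql
CREATE FUNCTION validate_user(varchar, varchar) RETURNS integer AS '
DECLARE
    ret integer;
BEGIN
    SELECT count(*) INTO ret FROM users WHERE userid = $1 AND passwd = $2;
    RETURN ret;
END;
' LANGUAGE 'plpgsql';
EOF
cat /tmp/validate_user.sql
```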
RE: [GENERAL] Large OR query
I would load the 1000 values into a temporary table, and join on the ID field. I usually find that approach gives much faster results. -Original Message- From: Zak McGregor [SMTP:[EMAIL PROTECTED]] Sent: Tuesday, June 12, 2001 7:42 AM To: [EMAIL PROTECTED] Subject: [GENERAL] Large OR query Hi all If I have say 1000 values for an ID field, what is the best way to select from a table all the corresponding records? I have tried select * from blah where id in (id1,id2,id3...id1000) and select * from blah where id=id1 or id=id2 ... or id=id1000 and both are pretty slow. Is there a better way to do this please? Thanks Ciao -- Zak McGregor http://www.carfolio.com - Over 7000 car specs online Web mercenary - currently for hire. Perl/html/.js/sql/cgi/GNUlinux/php + - Trying to make bits uncopyable is like trying to make water not wet. The sooner people accept this, and build business models that take this into account, the sooner people will start making money again. -- Bruce Schneier
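A sketch of the temporary-table approach (printed as SQL text; the blah table and id column are from the post, the ids file is invented):

```shell
cat <<'EOF' > /tmp/temp_join.sql
-- load the 1000 ids once, then join; usually much faster than a huge IN list
CREATE TEMP TABLE wanted (id integer);
COPY wanted FROM '/tmp/ids.txt';      -- one id per line
SELECT b.* FROM blah b, wanted w WHERE b.id = w.id;
EOF
cat /tmp/temp_join.sql
```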
RE: [GENERAL] pg_dump pg_dumpall problem.
I was bitten by the same problem recently. It means that the owner of the pgsql call handler no longer exists. To find out the id of the owner, do a select like: select * from pg_proc where proname like 'plpg%'; Then create a user having that id. You may need to edit pg_shadow to get the desired result. -Original Message- From: Luis Magaña [SMTP:[EMAIL PROTECTED]] Sent: Thursday, May 31, 2001 12:26 PM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: [GENERAL] pg_dump pg_dumpall problem. Hi there: Would like to know how to fix this: Every time I do a pg_dump or pg_dumpall I get this error message: dumpProcLangs(): handler procedure for language plpgsql not found Any help would be appreciated. Thank you in advance. -- Ing. Luis Magaña Gnovus Networks Software www.gnovus.com
RE: [GENERAL] MS SQL 7.0 to PostgreSQL 7.1
Also: * Watch out for embedded tabs and carriage returns in your data: these will cause problems during or after your COPY into PostgreSQL. * Check the value used in the exported file to represent NULL values (could be an empty string, or NULL, or something else), and use that in your COPY statement: COPY table from stdin with null as 'whatever'; -Original Message- From: Brent R. Matzelle [SMTP:[EMAIL PROTECTED]] Sent: Wednesday, May 09, 2001 2:40 PM To: [EMAIL PROTECTED] Subject: Re: [GENERAL] MS SQL 7.0 to PostgreSQL 7.1 --- Ryan C. Bonham [EMAIL PROTECTED] wrote: My question is what is the best way to import my data from MS SQL. I installed the ODBC Drivers and tried exporting using the MS import/export tool. It successfully creates the tables, but fails to import any data, with an error stating that the relationship doesn't exist. So there goes the easy route and the MS POS tool. What's the correct way of doing this? Thanks in advance First, verify that all of the data types of the old SQL Server tables were correctly converted to Postgres. Then to transfer the rest of the data over, use the MS bulk copy (bcp) tool or the export tool (MSSQL 7 or higher I believe) to create tab-delimited dump files for each table. Then you must add this to the beginning of each dump file: { COPY table_name FROM stdin; } And add a { \. } at the end of the dump file. Then you can use { psql -f table_name.dump } to import the data from the dump files. Brent
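Brent's wrapping step can be sketched as a small script (table name and sample rows invented; note the export's NULL token is declared to COPY, per the advice above):

```shell
printf '1\tAcme\tNULL\n2\tZenith\t2001-05-09\n' > /tmp/export.tab  # bcp-style rows
{
  echo "COPY customers FROM stdin WITH NULL AS 'NULL';"
  cat /tmp/export.tab
  printf '%s\n' '\.'
} > /tmp/customers.dump
cat /tmp/customers.dump
# then:  psql -f /tmp/customers.dump mydb
```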
RE: [GENERAL] Improve a query...
Try: SELECT DISTINCT ON (org_id) rpt_id, org_id, period,... FROM reports ORDER BY org_id, period DESC; (The ORDER BY list must start with the DISTINCT ON expressions.) -Original Message- From: Eric G. Miller [SMTP:[EMAIL PROTECTED]] Sent: Tuesday, May 01, 2001 4:04 AM To: PostgreSQL General Subject: [GENERAL] Improve a query... Looking for the best way to formulate a query to select the most recent entry for an organization in a table like: CREATE TABLE reports ( -- Report Id used to link up related 1:M rather than multi-key rpt_id SERIAL NOT NULL PRIMARY KEY, -- A Unique ID for the organization org_id char(10) NOT NULL CHECK(CHARACTER_LENGTH(org_id) = 10), -- The reporting period period integer NOT NULL -- Various and Sundry ... . . . UNIQUE (org_id,period) ); If max(period) for an organization yields the most recent reporting, I want to get whatever is the most recent report for each organization. This query works, but seems expensive... SELECT a.rpt_id, a.org_id, a.period, ... FROM reports As a INNER JOIN (SELECT b.org_id, max(b.period) As period FROM reports b group by b.org_id) As c ON a.org_id = c.org_id and a.period = c.period; EXPLAIN looks thusly: NOTICE: QUERY PLAN: Merge Join (cost=147.98..164.48 rows=10 width=48) -> Sort (cost=69.83..69.83 rows=1000 width=32) -> Seq Scan on reports a (cost=0.00..20.00 rows=1000 width=32) -> Sort (cost=78.15..78.15 rows=100 width=16) -> Subquery Scan c (cost=69.83..74.83 rows=100 width=16) -> Aggregate (cost=69.83..74.83 rows=100 width=16) -> Group (cost=69.83..72.33 rows=1000 width=16) -> Sort (cost=69.83..69.83 rows=1000 width=16) -> Seq Scan on reports b (cost=0.00..20.00 rows=1000 width=16) The data is very hierarchical, so I didn't want to carry around a lot of key fields in related many-sided tables which may also have related many-sided tables. Any ideas on how to minimize the multiple scans on the table? The numbers for explain probably aren't telling much since there's not much real data in the table at this time... -- Eric G.
Miller [EMAIL PROTECTED]
RE: [GENERAL] Data Export from PostGreSQL
I got good results in a test case, just using a standard PostgreSQL dump file created with INSERT statements (pg_dump -d). All it needed was a little editing to change some of the data types in the table definition, e.g. from text and varchar to varchar2 (if your Oracle table is set up, you won't even have to do that: just do a pg_dump -a -d). The only other thing I needed to do was edit dates into the form that Oracle likes: DD-MMM-YY. I believe that this default can be changed, though; I just couldn't find a reference quickly enough to suit. This approach was necessary because I was testing a development version of Oracle (the freely downloadable one), which doesn't include the import/export tools, so I had to use the sqlplus interface. If you have the import tools, you can just import from a delimited text file, as can be created using COPY: read the documentation for your Oracle installation. -Original Message- From: [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED]] Sent: Thursday, April 26, 2001 10:13 AM To: [EMAIL PROTECTED] Subject: [GENERAL] Data Export from PostGreSQL For the purposes of a demo I have to export data from a PostGresSQL system to Oracle (I know, I know, not my choice). Is there an easy way of dumping all the data in a format Oracle can understand? Any help appreciated, Thanks, Martin C. -- NOTICE: The information contained in this electronic mail transmission is intended by Convergys Corporation for the use of the named individual or entity to which it is directed and may contain information that is privileged or otherwise confidential. If you have received this electronic mail transmission in error, please delete it from your system without copying or forwarding it, and notify the sender of the error by reply email or by telephone (collect), so that the sender's address records can be corrected.
RE: [GENERAL] pg_dump failure: handler procedure for language plpgsql not foun d
Tom, Absolutely right! I suppose I could have figured that out if I had spent three days looking at it :-) I didn't find anything in the docs about assigning a sysid to a new user, so I just did a createuser and an UPDATE to pg_shadow to reset the sysid. If that is likely to cause a problem, please let me know. In any case, pg_dump is now working for me. Thank you once again for your valuable help. Regards -Original Message- From: Tom Lane [SMTP:[EMAIL PROTECTED]] Sent: Monday, April 23, 2001 6:23 PM To: Jeff Eckermann Cc: '[EMAIL PROTECTED]' Subject: Re: [GENERAL] pg_dump failure: handler procedure for language plpgsql not found Jeff Eckermann [EMAIL PROTECTED] writes: I am attempting to do a pg_dump, preparing to upgrade to 7.1. I have attempted a pg_dumpall, as well as pg_dump for various individual databases, and each fails with the message dumpProcLangs(): handler procedure for language plpgsql not found Hmm, does plpgsql_call_handler's proowner appear in pg_shadow? If not, add a user with the correct sysid... regards, tom lane
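The recovery steps, written out as a sketch (printed as SQL text; the user name and sysid 42 are placeholders for whatever the pg_proc query actually returns):

```shell
cat <<'EOF' > /tmp/fix_owner.sql
-- find the orphaned owner id of the call handler...
SELECT proname, proowner FROM pg_proc WHERE proname LIKE 'plpg%';
-- ...then recreate a user and force its sysid to match (42 is a placeholder)
CREATE USER recovered_owner;
UPDATE pg_shadow SET usesysid = 42 WHERE usename = 'recovered_owner';
EOF
cat /tmp/fix_owner.sql
```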
RE: [GENERAL] PL/pgSQL problems
Have you dropped and recreated the function since creating the trigger? Try dropping and recreating the trigger. Errors like this happen because database objects (tables, functions etc) are referenced by object ids (oids) behind the scenes. Recreating an object (like a function) will give it a different oid, even with the same name. -Original Message- From: Keith Menard [SMTP:[EMAIL PROTECTED]] Sent: Wednesday, April 18, 2001 3:51 PM To: [EMAIL PROTECTED] Subject: [GENERAL] PL/pgSQL problems All, I am new to pgSQL so here goes. I am having a problem using the PL/pgSQL language extension to create triggers. I get all the items installed, including PL/pgSQL, the trigger and the function the trigger calls, and I get the following error during the firing of the trigger: Apr 18 09:49:02 host postgres[2477]: ERROR: fmgr_info: function 57216: cache lookup failed Any help would be appreciated since I can find only scant docs on the PL/pgSQL extensions. Keith Menard [EMAIL PROTECTED]
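Rebinding a trigger after its function has been recreated looks like this sketch (printed as SQL text; all object names are hypothetical):

```shell
cat <<'EOF' > /tmp/rebind_trigger.sql
-- recreating a function changes its oid, so rebind the trigger to the new one
DROP TRIGGER my_trigger ON my_table;
CREATE TRIGGER my_trigger BEFORE INSERT OR UPDATE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE my_trigger_func();
EOF
cat /tmp/rebind_trigger.sql
```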
RE: [GENERAL] Re: Load Text File into DB
COPY expects tab delimiters by default, so specifying them is not strictly necessary in this case. Specifying "null as xxx" is a good idea, because COPY interprets an empty field as an empty string, which will cause failure for non text-type fields. A couple of other things to watch out for when bulk-loading data with COPY (based on my painful experiences): * Embedded delimiters, i.e. tabs in your data. Start by grepping for lines with more than the expected number of tabs. * Fields ending with a backslash. Sounds unlikely, but this has happened to me several times. The result is that your delimiter is escaped, shifting all of your following fields down by one. This will either cause COPY to fail (because of data type mismatches), or messed up data in that record. * For files imported from a Windows machine, better edit out the carriage returns. -Original Message- From: Chris Jones [SMTP:[EMAIL PROTECTED]] Sent: Friday, April 13, 2001 1:27 PM To: [EMAIL PROTECTED] Subject: [GENERAL] Re: Load Text File into DB This is what I have used with success: copy {table name} from '{delimited file name}' using delimiters '\t' with null as ''; Of course, don't put in the brackets, and you can use whatever null character you might need... Hope this helps Chris Karl J. Stubsjoen wrote in message 9b20s1$1b3d$[EMAIL PROTECTED]... Hello, I have a tab delimited text file located on the server and would like to load this into a table. How do I accomplish this? Thanks! Karl
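The first check above, finding lines with the wrong number of tabs, is easy with awk (sample file invented; here 3 fields per row are expected):

```shell
printf 'a\tb\tc\nd\te\tf\tg\nh\ti\tj\n' > /tmp/data.tab   # middle row has an extra tab
# flag any row that does not split into exactly 3 tab-separated fields
awk -F'\t' 'NF != 3 {print NR ": " NF " fields"}' /tmp/data.tab
```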
[GENERAL] Backend sent D message without prior T
My main server is down, so I am using my backup copy on a different database. Based on past posts I have read, this error is usually associated with running out of memory for the query result. Problem is, I am only expecting about 30 lines of moderate length to be returned. I see nothing in the query which would produce unexpected results: unless my berserk use of regular expressions has something to do with it? Note the strange switch to "copy" mode following the error. I would appreciate any suggestions. PostgreSQL version 7.0.3 running on FreeBSD 4.0-RELEASE. 256M memory available... TIA extracts=# \o ls_dsl.txt extracts=# select p.* from products p, customers c where p.arbor_acct_no = c.external_id and p.product_description ~ '^VGS' or (p.product_description ~* 'dsl' and (c.rev_rcv_cost_ctr ~ '^180' or c.bill_state ~* '^la$')); Backend sent D message without prior T Enter data to be copied followed by a newline. End with a backslash and a period on a line by itself. ^C \. Unknown protocol character 'l' read from backend. (The protocol character is the first character the backend sends in response to a query it receives). PQendcopy: resetting connection extracts=#
RE: [GENERAL] dynamic field names in a function.
In my experience, the best way to find out answers like this is to try it out and see. Usually I find that I need to fiddle around with the syntax a bit (I believe it's called debugging) before getting something to work. Postgresql is very powerful; the capability to utilize that power comes at a price. In other words, be prepared to put in a solid investment if you want to see a return. (I'm not accustomed to preaching, but sometimes this just needs to be said). -Original Message- From: Soma Interesting [SMTP:[EMAIL PROTECTED]] Sent: Thursday, March 29, 2001 4:39 PM To: [EMAIL PROTECTED] Subject: [GENERAL] dynamic field names in a function. I want to be able to reference NEW.field_0 through NEW.field_x, where x is coming from NEW.qty, in a FOR loop of a pl/pgsql function. Is this possible? In other words: FOR j IN 0..NEW.str LOOP ans := ''q'' || i || ''a'' || j; cor := ''q'' || i || ''c'' || j; eval := ''q'' || i || ''e'' || j; IF NEW.ans = NEW.cor THEN NEW.eval := 1; END IF; END LOOP;
RE: [GENERAL] converting .xls to delimited file
Excel can do this for you, if you use File - Save As... You will find various file types available. The list will vary according to the completeness of your Excel installation. I believe that at minimum you should have "tab delimited text" available, and probably csv as well (don't use csv, it's evil). If I'm wrong, you'll need to get the MS Office CD and reinstall Excel with more options checked... If you want to import the result into Postgres, beware of CR characters, and leading/trailing spaces in your fields. Excel will tolerate those spaces in non text-type fields, but Postgres will not. -Original Message- From: William Staniewicz [SMTP:[EMAIL PROTECTED]] Sent: Wednesday, January 10, 2001 10:23 AM To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: [GENERAL] converting .xls to delimited file Is there any way to convert an Excel file (".xls") to a delimited file? Maybe using sed? Bill
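The CR characters and stray spaces Jeff warns about can be scrubbed with a short script before handing the file to COPY. A minimal sketch in Python (the sample row is made up for illustration): it drops the carriage return Excel appends to each line and trims leading/trailing spaces from every tab-separated field.

```python
def clean_line(line):
    # Drop the trailing CR/LF that an Excel "tab delimited text"
    # export leaves on each line, split on tabs, and trim
    # leading/trailing spaces from every field.
    fields = line.rstrip("\r\n").split("\t")
    return "\t".join(f.strip() for f in fields)

# Example row as Excel might export it: CR line ending, padded fields.
raw = "42 \t Sesame Street \t19.95\r\n"
print(clean_line(raw))
```

Running every line of the export through something like this avoids Postgres rejecting rows over invisible whitespace in non-text columns.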
[GENERAL] Cluster means tangle for me
I would appreciate any advice on getting out of this strange situation. My table now doesn't exist, but I can't recreate it either (at least under that name).

jeffe@kiyoko= psql -V
psql (PostgreSQL) 7.0.0
jeffe@kiyoko= uname -a
FreeBSD kiyoko.la.verio.net 4.0-STABLE FreeBSD 4.0-STABLE #0: Thu Apr 27 10:44:07 CDT 2000

extracts=# create index dc_et_i on dedcolo (equip_type);
CREATE
extracts=# cluster dc_et_i on dedcolo;
ERROR: temp_286bbc3 is an index relation
extracts=# drop index dc_et_i;
ERROR: index "dc_et_i" nonexistent
extracts=# \d dedcolo
Did not find any relation named "dedcolo".
extracts=# vacuum verbose analyze;
(snip)
NOTICE: mdopen: couldn't open temp_28421e0: No such file or directory
NOTICE: RelationIdBuildRelation: smgropen(temp_28421e0): No such file or directory
NOTICE: --Relation temp_28421e0--
NOTICE: mdopen: couldn't open temp_28421e0: No such file or directory
ERROR: cannot open relation temp_28421e0
extracts=# drop table temp_28421e0;
NOTICE: mdopen: couldn't open temp_28421e0: No such file or directory
NOTICE: mdopen: couldn't open temp_28421e0: No such file or directory
NOTICE: mdopen: couldn't open temp_28421e0: No such file or directory
RE: [GENERAL] Help with SQL Function
Thanks, Tom, and also to Alex Pilosov for his answer. I was extrapolating from the plpgsql docs, which I probably didn't understand correctly. Programming By Example (which is what we non-programmers are obliged to do) doesn't work so well when the docs are somewhat sparse. Are there any plans to expand the docs on plpgsql, since there are new features coming? Plpgsql looks like a good tool for enhancing my development of financial etc. reports, but in working with it, I've had two weeks of frustration and not much of anything else so far. -Original Message- From: Tom Lane [SMTP:[EMAIL PROTECTED]] Sent: Saturday, January 06, 2001 3:41 PM To: Jeff Eckermann Cc: [EMAIL PROTECTED] Subject: Re: [GENERAL] Help with SQL Function Jeff Eckermann [EMAIL PROTECTED] writes:

extracts=# create function testfunc(text) returns int4 as '
extracts'# select count (*) from dedcolo where equip_type = ''$1''  --- That's doubled single quotes
extracts'# ' language 'sql';
CREATE

That's looking for rows where equip_type = '$1' ... i.e., the literal string $1. What you probably wanted is:

create function testfunc(text) returns int4 as '
select count (*) from dedcolo where equip_type = $1
' language 'sql';

regards, tom lane
RE: [GENERAL] Off topic? - Solution for a delimiter problem
Isn't this overkill? Something like: sed -e 's/\" \"/^I/g' -e 's/\"//g' file would do the job nicely. I'm sure people on this list can suggest even simpler ways (note: the "^I", or tab character, in the example is created by typing Ctrl-V then Ctrl-I). Sed has been (and is) my friend for many such cases. -Original Message- From: Andrew Gould [SMTP:[EMAIL PROTECTED]] Sent: Wednesday, December 20, 2000 5:58 PM To: Postgres Mailing List Subject: Re: [GENERAL] Off topic? - Solution for a delimiter problem If space was used as the delimiter without accounting for the quotation marks, text fields might be broken apart whenever a space occurred. For example, the street address "123 Sesame Street" might be split into 3 fields containing '"123', 'Sesame' and 'Street"' respectively. In the past, I imported the file into MySQL, then exported the data into a tab delimited text file for importing into PostgreSQL. The Python script saves many steps; so I was offering it up for use and improvement by others. I didn't know if I should post the script to the mailing list or send it directly to people who are interested. (It's only 1k.) Andrew Gould --- "Poul L. Christiansen" [EMAIL PROTECTED] wrote: Maybe you can specify the delimiter to be space. radius=# \h copy Command: COPY Description: Copies data between files and tables Syntax: COPY [ BINARY ] table [ WITH OIDS ] FROM { 'filename' | stdin } [ [USING] DELIMITERS 'delimiter' ] [ WITH NULL AS 'null string' ] COPY [ BINARY ] table [ WITH OIDS ] TO { 'filename' | stdout } [ [USING] DELIMITERS 'delimiter' ] [ WITH NULL AS 'null string' ] That would probably be COPY table FROM filename USING DELIMITERS ' '; HTH, Poul L. Christiansen Andrew Gould wrote: I receive space delimited files that have spaces within double quoted text fields. 
Since PostgreSQL's COPY doesn't deal with multiple delimiters, I've written a crude Python script to convert the files to a tab delimited format without the double quotes and without disturbing the spaces within text fields. I'm sure someone could make it more efficient. (Probably by rewriting it in Perl.) Does anyone else need the script? Is the script something I can/should post? Andrew Gould
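Andrew's space-delimited-with-quotes format is exactly what a CSV parser with a configurable delimiter handles; his script's job can be sketched in a few lines with Python's standard csv module (the sample input line is invented for illustration), reading space-delimited quoted fields and writing them back tab-delimited with the quotes gone:

```python
import csv
import io

# Input: space-delimited, with double quotes protecting text fields
# that contain spaces (so "123 Sesame Street" stays one field).
src = io.StringIO('"123 Sesame Street" 10019 "New York"\n')
dst = io.StringIO()

reader = csv.reader(src, delimiter=" ", quotechar='"')
writer = csv.writer(dst, delimiter="\t", quoting=csv.QUOTE_NONE)
for row in reader:
    writer.writerow(row)

# dst now holds tab-delimited output suitable for COPY ... USING DELIMITERS.
print(dst.getvalue())
```

With real files, `src` and `dst` would be `open(...)` file objects instead of StringIO buffers.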
[GENERAL] String substitution with translate function
I have been trying to use the translate function to delete a specific string from entries in a particular column, as in "translate(column, 'string', '')". What I find is that "translate" is treating the nominated string as a character class, and is deleting every instance of every character in that class. I have tried every which way, but cannot get the behaviour I want. Is it possible, or am I just seeing the built-in behaviour of "translate", which cannot be worked around? TIA
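That is indeed translate's built-in behaviour: it maps or deletes individual characters, not whole strings (later PostgreSQL releases added a replace(string, from, to) function for whole-string substitution). The difference can be sketched with Python's analogous string operations, which behave the same way:

```python
s = "cab-abc-xyz"

# Character-class behaviour (what translate does): every 'a', 'b',
# and 'c' is deleted individually, wherever it appears.
translated = s.translate(str.maketrans("", "", "abc"))

# Substring behaviour (what the poster wants): only the exact
# sequence "abc" is removed; the stray 'c', 'a', 'b' survive.
replaced = s.replace("abc", "")

print(translated)  # --xyz
print(replaced)    # cab--xyz
```

On a PostgreSQL old enough to lack replace(), the usual workarounds were a small PL function or doing the substitution client-side as above.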
[GENERAL] String substitution with translate function
(My first attempt at posting didn't make it to my mailbox, so I am reposting. Apologies to any who get this twice.) I have been trying to use the translate function to delete a specific string from entries in a particular column, as in "translate(column, 'string', '')". What I find is that "translate" is treating the nominated string as a character class, and is deleting every instance of every character in that class. I have tried every which way, but cannot get the behaviour I want. Is it possible, or am I just seeing the built-in behaviour of "translate", which cannot be worked around? TIA
RE: [GENERAL] Index on substring?
Tom, Thanks very much for your full and clear answer. It's hard to imagine a general use for this facility, anyway. For me this is a one-off exercise, albeit a big one. Regards -Original Message- From: Tom Lane [SMTP:[EMAIL PROTECTED]] Sent: Thursday, October 12, 2000 12:49 AM To: Jeff Eckermann Cc: '[EMAIL PROTECTED]' Subject: Re: [GENERAL] Index on substring? Jeff Eckermann [EMAIL PROTECTED] writes: extracts=# create index c_namesum_i on customers (substr(bill_company,1,5)); ERROR: parser: parse error at or near "1" The functional-index syntax only allows a function name applied to simple column names. You can work around this by defining a function that handles any additional computation needed, eg, create index c_namesum_i on customers (mysubstr15(bill_company)); where mysubstr15(foo) returns substr(foo,1,5). In current releases the intermediate function has to be in C or a PL language. 7.1 will allow a SQL-language function too (although frankly I'd recommend against using a SQL function for indexing, on performance grounds). There's been some talk of generalizing the functional-index support into arbitrary-expression-index support, but it doesn't seem to be real high on anyone's priority list. regards, tom lane
[GENERAL] Index on substring?
I thought this was possible, but searching the archives and docs I can't find any reference to it... Am I doing something wrong?

jeffe@kiyoko= psql -V
psql (PostgreSQL) 7.0.0
jeffe@kiyoko= uname -a
FreeBSD kiyoko.la.verio.net 4.0-STABLE FreeBSD 4.0-STABLE #0: Thu Apr 27 10:44:07 CDT 2000 root@:/usr/src/sys/compile/KIYOKO i386

extracts=# create index c_namesum_i on customers (substr (bill_company,1,5));
ERROR: parser: parse error at or near "1"
extracts=# select substr (bill_company, 1, 5) from customers limit 10;
substr
RoadW
Beliz
Radio
Trill
R2000
Data
Inter
AEC M
G2 Gr
MindB
(10 rows)
extracts=# create index c_namesum_i on customers (substring(bill_company from 1 for 5));
ERROR: parser: parse error at or near "substring"
extracts=# select substring (bill_company from 1 for 5) from customers limit 10;
substr
RoadW
Beliz
Radio
Trill
R2000
Data
Inter
AEC M
G2 Gr
MindB
(10 rows)
extracts=#
RE: [GENERAL] Query bombed: why?
Thanks for your reply. I was expecting not much more than 50 rows to be returned, with an absolute maximum of 70. I was trying to simulate an outer join by using the "where not exists" clause, so that I would get back my full list of 70 and be able to see the unmatched entries... -Original Message- From: Tom Lane [SMTP:[EMAIL PROTECTED]] Sent: Tuesday, May 09, 2000 12:35 PM To: Jeff Eckermann Cc: [EMAIL PROTECTED] Subject: Re: [GENERAL] Query bombed: why? Jeff Eckermann [EMAIL PROTECTED] writes: After about 25 minutes of running a query with a "where not exists 'correlated subquery'", I got a whole bunch of lines printing out: "Backend sent D message without prior T". Could someone give me an idea of what that means, and how to deal with it? How many rows were you expecting the query to produce? (It might be worth redoing it as a SELECT count(*) FROM ... to find out how many it really produced.) My first bet is that your frontend application ran out of memory while trying to absorb the query result. libpq is designed to collect the whole result before handing it back to the application, which is nice for some things but starts to look like a bad idea when you have a huge query result. Also, libpq doesn't react very gracefully to running out of memory :-( --- the symptoms you describe sound like one likely failure mode. (We need to fix that...) You might be able to increase your process memory limit; otherwise, consider using DECLARE CURSOR and FETCH to retrieve the query result a few hundred rows at a time. regards, tom lane
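Tom's DECLARE CURSOR / FETCH suggestion amounts to pulling the result in fixed-size chunks instead of materializing the whole result set in client memory at once. A minimal sketch of that chunking pattern in Python (the `rows` iterable here is just a stand-in for a real database cursor, and the function name is made up):

```python
from itertools import islice

def fetch_in_batches(rows, batch_size=500):
    # Mimic DECLARE CURSOR ... then repeated FETCH <batch_size>:
    # yield lists of up to batch_size rows until the source runs dry.
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            break
        yield batch

# 1050 fake rows come back as batches of 500, 500, and 50,
# so peak client memory is roughly one batch, not the full result.
batches = list(fetch_in_batches(range(1050), 500))
print([len(b) for b in batches])  # [500, 500, 50]
```

With a real driver the inner `islice` would be a FETCH (or the driver's fetchmany equivalent), but the control flow is the same.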
[GENERAL] Memory leak in FreeBSD?
We upgraded to version 6.5.2 recently, running on FreeBSD 3.0. Now we are having problems with moderately complex queries failing to complete (backend terminating unexpectedly; last one crashed the server). The most likely explanation appears to be a memory leak. Is there any known problem with FreeBSD?
RE: [GENERAL] Memory leak in FreeBSD?
FreeBSD port: I don't know enough to know what difference that might make. Any suggestion you have would be appreciated: thanks. -Original Message- From: admin [SMTP:[EMAIL PROTECTED]] Sent: Tuesday, January 11, 2000 12:18 PM To: Jeff Eckermann Cc: '[EMAIL PROTECTED]' Subject: Re: [GENERAL] Memory leak in FreeBSD? Did you upgrade from source or from the freebsd ports? We upgraded to version 6.5.2 recently, running on FreeBSD 3.0. Now we are having problems with moderately complex queries failing to complete (backend terminating unexpectedly; last one crashed the server). The most likely explanation appears to be a memory leak. Is there any known problem with FreeBSD?
RE: [GENERAL] Memory leak in FreeBSD?
Maxusers is set to 128. RAM is 256MB. Do you think this could be the problem? -Original Message- From: admin [SMTP:[EMAIL PROTECTED]] Sent: Tuesday, January 11, 2000 12:50 PM To: Jeff Eckermann Cc: '[EMAIL PROTECTED]' Subject: RE: [GENERAL] Memory leak in FreeBSD? What is maxusers set to in your kernel? One problem I had was that postgresql was using more file descriptors than my kernel could handle. If you'd like to check your current file descriptor status and your max, try: pstat -T. If that is your problem, change your maxusers to a suitable number and recompile your kernel. FreeBSD port: I don't know enough to know what difference that might make. Any suggestion you have would be appreciated: thanks. Did you upgrade from source or from the freebsd ports? We upgraded to version 6.5.2 recently, running on FreeBSD 3.0. Now we are having problems with moderately complex queries failing to complete (backend terminating unexpectedly; last one crashed the server). The most likely explanation appears to be a memory leak. Is there any known problem with FreeBSD?