Re: [HACKERS] [GENERAL] Empty arrays with ARRAY[]
On Nov 30, 2007 9:09 PM, Gregory Stark [EMAIL PROTECTED] wrote:
> I'm sorry to suggest anything at this point, but... would it be less invasive if, instead of requiring the immediate cast, you created a special case in the array code to allow a placeholder object for an "empty array of unknown type"? The only operation which would be allowed on it would be to cast it to some specific array type. That way things like
>   UPDATE foo SET col = array[];
>   INSERT INTO foo (col) VALUES (array[]);
> could be allowed if they could be contrived to introduce an assignment cast.

Hi Gregory. Not sure it would be less invasive, but I do like the outcome of being able to create an empty array pending assignment. In addition to your examples, it might also make it possible to do things like this in plpgsql:

  DECLARE a text[] := array[];

whereas my patch requires you to write

  DECLARE a text[] := array[]::text[];

... which seems pretty stupid. So I like your idea a lot from a usability point of view. But I really, really hate it from a "just spent half a week on this patch" point of view =/

Any suggestions about how you would enforce the "only allow casts to array types" restriction on the empty array?

Cheers, BJ

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
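A minimal sketch of the two syntaxes under discussion, assuming a hypothetical table `foo` with a `text[]` column (the table name is invented for illustration); the explicit casts are what the patch as posted requires, and the bare `array[]` forms are the proposed placeholder behavior:

```sql
-- With the patch as posted: the empty array must be
-- cast to a concrete array type immediately.
UPDATE foo SET col = array[]::text[];
INSERT INTO foo (col) VALUES (array[]::text[]);

-- Proposed alternative: allow a typeless empty-array placeholder,
-- resolved later by an assignment cast to the target column's type.
UPDATE foo SET col = array[];
INSERT INTO foo (col) VALUES (array[]);
```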
Re: [HACKERS] [GENERAL] plperl and regexps with accented characters - incompatible?
I wrote:
> OK, I have a fairly ugly manual workaround that I don't yet understand, but it seems to work for me. In your session, run the following code before you do anything else:
>
>   CREATE OR REPLACE FUNCTION test(text) RETURNS bool LANGUAGE plperl AS $$
>       return shift =~ /\xa9/i ? 'true' : 'false';
>   $$;
>   SELECT test('a');
>   DROP FUNCTION test(text);
>
> After that we seem to be good to go with any old UTF8 chars. I'm looking at automating this so the workaround can be hidden, but I'd rather understand it first. (Core guys: if we can hold RC1 for a bit while I get this fixed, that would be good.)

The attached patch works for me to eliminate the errors. Please test ASAP. Given our time constraints I intend to apply this to HEAD and backpatch it to 8.2 and 8.1, unless there's a strenuous objection. That will give us some buildfarm coverage on it, although we don't seem to have any perl 5.6.x on the buildfarm that I could see. We've had a positive test report, no negative reports, and I'm fairly sure the patch is at worst harmless.

cheers

andrew

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] 8.3 beta testing suggestions welcome
On Tue, Aug 21, 2007 at 1:12 PM, in message [EMAIL PROTECTED], Greg Smith [EMAIL PROTECTED] wrote:
> My suggestion for a starting 8.3 configuration for you would be adjusting these settings as follows: shared_buffers = 1GB, checkpoint_segments = 50

I'm not into the most meaningful tests yet, but I'm trying to garner what useful information I can from the load of the 8.2.5 database into a server running the cvs tip copy taken a few hours ago. After letting the cache settle in, while in the middle of a COPY of a large table, I see the attached through a couple checkpoint cycles. The pattern is pretty consistent with this over the other checkpoint cycles I've examined.

Note that the system can sustain over 2 writes per second without the disk being a bottleneck. The current tip behavior is smoother than 8.2.x without background writer, for sure. It still isn't as smooth as 8.2.x with an aggressive background writer, at least for loading a dump. Note how it lounges around at as low as 5500 writes per second for prolonged periods with occasional spikes into the 55000 range, causing I/O wait time, although these only last a few seconds.

Without more rigorous tests for comparison, I'd say that this is probably a net loss for this operation -- I seem to recall seeing a pretty steady 2 bo under 8.2.x. More info to follow as testing progresses.

-Kevin

vmstat.sample1.out Description: Binary data

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] 8.3 beta testing suggestions welcome
On Fri, 30 Nov 2007, Kevin Grittner wrote:
> checkpoint_segments = 50

Here's how I'm reading your data:

  16:43:11 : checkpoint starts
  16:44:23 : checkpoint ends [1:22 long]
  (1:01 passes)
  16:45:24 : checkpoint starts
  16:46:36 : checkpoint ends [1:22 long]

If you're getting a checkpoint every minute or so, the number of checkpoint segments you're using is still far too low to really take advantage of the new checkpoint spreading features. You want to make that high enough that it's several minutes between them. It's not unusual to set a much higher checkpoint_segments during bulk loading than you'd necessarily want for actual production use.

One thing that would make your reports more useful: in the spots in your log where you're labeling [checkpoint starting] etc., if you could include the new message spit out by turning on log_checkpoints there, it would help. That's the one that says how much data was written. Did you ever write something to save snapshots of pg_stat_bgwriter? Those would be interesting to see on the same time scale as well.

-- * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD

---(end of broadcast)--- TIP 6: explain analyze is your friend
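A minimal sketch of the kind of snapshotting being asked for here, assuming a scratch table (the name `bgwriter_snap` and the specific setting values are invented for illustration; the exact pg_stat_bgwriter columns vary by version):

```sql
-- Bulk-load-friendly settings (postgresql.conf), illustrative only:
--   checkpoint_segments = 256            -- several minutes between checkpoints
--   checkpoint_completion_target = 0.9   -- spread checkpoint I/O out
--   log_checkpoints = on                 -- log how much each checkpoint wrote

-- Create a table seeded with one timestamped pg_stat_bgwriter snapshot:
CREATE TABLE bgwriter_snap AS
  SELECT now() AS snap_time, * FROM pg_stat_bgwriter;

-- Run this periodically (e.g. from cron) while the COPY is in progress;
-- deltas between rows show checkpoint vs. bgwriter vs. backend writes:
INSERT INTO bgwriter_snap
  SELECT now(), * FROM pg_stat_bgwriter;
```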
Re: [HACKERS] PostGreSQL and recursive queries...
Sam Mason [EMAIL PROTECTED] writes:
> Sounds as though you need some sort of type inference algorithm. There are quite a few decidable ones around, the one by Hindley-Milner being very popular/common. Decidable means you get the correct answer out in a reasonable amount of time or it fails, and, barring implementation bugs, it'll never get stuck trying to figure out what you meant.

I think some closer reading of the SQL spec might be called for. I'm pretty sure the spec authors did not intend to require any especially abstruse algorithm to infer the types involved in a recursive query. In fact, if they have not completely abandoned their duty as spec writers, the spec itself should spell out any algorithms required to determine the meaning of a query. (As distinct from algorithms needed to produce an efficient implementation, which is a topic outside the purview of the spec. But "what type is this result column" is surely something the spec is required to define.)

regards, tom lane

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] CommandCounterIncrement versus plan caching
Alvaro Herrera [EMAIL PROTECTED] writes:
> Tom Lane wrote:
>> ... I am tempted to remove that from CCI and call it from just a selected few CCI call sites, instead --- maybe only CommitTransactionCommand. OTOH this step might reasonably be considered too risky for late beta, since it would affect asynchronous backend interactions, which are way harder to test properly than within-a-backend behavior.
> I agree that it seems risky to remove it at this point. It could have severe performance impact if it turns out that not calling it enough causes the queue to overflow. It seems safer to be calling it as frequently as possible.

Yeah, I left it alone for the moment. It should be revisited someday.

> I was going to say, what would happen if vacuum were to run on a large table and a high vacuum_delay setting, but then I noticed that currently it doesn't call CCI at all. Is this a concern?

It's always been the case that long-running queries might not call AcceptInvalidationMessages for a very long time. The worst-case consequence is a system cache flush, which might be annoying from a performance point of view but shouldn't be a problem otherwise. Again, it's not something I care to mess with right now.

One thought is that the typical case would probably involve only one or two laggard backends. Right now, we handle queue overflow by discarding *all* pending messages and forcing a cache reset instead in *all* backends. That's simple, but it makes every backend pay the performance price. Might be worthwhile to teach it to discard, say, the oldest half of the queued messages and only force a reset for those backends that hadn't eaten those yet. Then a reset would be expected to occur only within a backend that'd just finished a long-running query, and so the relative performance cost should be small.

regards, tom lane

---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] Sorting Improvements for 8.4
On Tue, 2007-11-27 at 18:03 +, Simon Riggs wrote: 5. DYNAMIC RUN HANDLING (in Final Merge) Another way of addressing a) is to simply make better use of memory itself. Let's look at that in more detail: Number of runs that can be merged at once is currently fixed, based upon available memory. This has the underlying assumption that all runs will be concurrently active during final merging, which may not always be true. If we have random data then almost all runs will overlap with all other runs, i.e. the min and max values are sufficiently wide that the runs do all overlap. In many cases, data arrives in somewhat sorted order, e.g. financial data is fairly regular with some late payers but not many, and those trail off with a fairly tight decay. In the somewhat sorted case we find that the actual overlap is less than total, so there are many later runs that don't overlap the earlier ones. In the best case we would find run 1 and 2 overlap, runs 2 and 3 overlap, then 3 and 4 overlap. I have spoken with Len Shapiro, a professor at Portland State University, regarding sorting before. He suggests that PostgreSQL should implement forecasting, which is similar to what you're describing. Forecasting does not require that entire runs are disjoint, it works by tracking the maximum values from the last block read from every run. This allows you to know which run you will need more blocks from the soonest. I'm still looking into the problem to understand it better, but the algorithm is in Knuth Vol 3. I can look at it in more detail, but have you already looked into this idea? Is there a reason we don't do this currently? Regards, Jeff Davis ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] PostGreSQL and recursive queries...
[ I'm not very sure of my WITH RECURSIVE syntax, so please excuse any mistakes ]

On Fri, Nov 30, 2007 at 01:00:27PM +, Gregory Stark wrote:
> Hopefully at the cte call sites we'll be able to gin up enough information to fill in the subquery information enough for the planner above to work with it. I could imagine problems the planner would have to deal with though, such as "what type is bogon in this query?"
>   WITH RECURSIVE x(bogon) AS (select bogon from x) select * from x;

That shouldn't be allowed; no types could be deduced. The following would be allowed though:

  WITH RECURSIVE x(bogon) AS (select bogon from x) select * from x WHERE bogon > 1;

The WHERE clause will constrain bogon to be an INTEGER, which can be unified with everything else, allowing the query to run.

> what about something like:
>   WITH RECURSIVE x(bogon) AS (select bogon+1 from x) select * from x;

As above, that'll return an integer.

> note that the usual case is something like:
>   WITH RECURSIVE x(bogon) AS (SELECT 1 UNION ALL SELECT bogon+1 FROM x) SELECT * FROM x WHERE bogon < ?
> So we can't refuse just anything where the types are recursively dependent.

Sounds as though you need some sort of type inference algorithm. There are quite a few decidable ones around, the one by Hindley-Milner being very popular/common. Decidable means you get the correct answer out in a reasonable amount of time or it fails, and, barring implementation bugs, it'll never get stuck trying to figure out what you meant.

> We might have to do something weird like make the types of a recursive call unknown until it's planned, then go back and replan recursive queries making use of the new information to catch things like:
>   create function foo(int) returns text ...
>   create function foo(text) returns int ...
>   with recursive x(bogon) as (select 1 union all select foo(bogon) from x) select * from x

When would something like the above actually be used in practice? Supporting things like that would open up a whole bag of undecidable nastiness (+ associated confusion for the user, when it all goes wrong) for what I would think is a small increase in expressiveness.

Sam

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
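As a concrete illustration of the "usual case" under discussion (WITH RECURSIVE was not yet implemented in PostgreSQL at the time of this thread, so this is the SQL-standard form rather than working syntax on the versions being discussed), the recursion bounded so the query both type-checks and terminates:

```sql
-- The non-recursive branch (SELECT 1) pins bogon's type to integer;
-- the recursive branch just propagates it, so inference is trivial.
WITH RECURSIVE x(bogon) AS (
    SELECT 1
  UNION ALL
    SELECT bogon + 1 FROM x WHERE bogon < 10
)
SELECT * FROM x;
-- The WHERE bound in the recursive branch makes the recursion finite.
```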
Re: [HACKERS] .NET or Mono functions in PG
On Nov 30, 2007 1:21 PM, Magnus Hagander [EMAIL PROTECTED] wrote: Just searched my backups and can't seem to find it :( It was on gborg CVS. Where is that now? Dead. But Marc has a tarball of the whole thing somewhere and can find it. OK, I'll see if I can get a copy of it. -- Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324 EnterpriseDB Corporation| fax: 732.331.1301 499 Thornall Street, 2nd Floor | [EMAIL PROTECTED] Edison, NJ 08837| http://www.enterprisedb.com/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] .NET or Mono functions in PG
On Nov 30, 2007 10:09 AM, Jonah H. Harris [EMAIL PROTECTED] wrote: Actually, it was PL/Mono. I'll dig through my stuff and post it somewhere for you. Just searched my backups and can't seem to find it :( It was on gborg CVS. Where is that now? -- Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324 EnterpriseDB Corporation| fax: 732.331.1301 499 Thornall Street, 2nd Floor | [EMAIL PROTECTED] Edison, NJ 08837| http://www.enterprisedb.com/ ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Release Note Changes
On Nov 30, 2007 11:07 PM, Tom Lane [EMAIL PROTECTED] wrote:
> Josh Berkus [EMAIL PROTECTED] writes:
>> I disagree. For people who want a quick summary of the major user-facing things changed we'll have multiple sources: (a) the announcement, (b) the press features list, (c) the Feature-Version matrix. The Release notes should have a *complete* list of changes.
> Define "complete".
>> Why? Because we don't use a bug/feature tracker. So a user trying to figure out "hey, was my issue XXX fixed so that I should upgrade?" has *no other source* than the Release notes to look at, except CVS history. And if we start asking sysadmins and application vendors to read the CVS history, we're gonna simply push them towards other DBMSes which have this information more clearly.
> So in other words, you don't *really* want "complete".

I think he means a list meant for end users which mentions all features and bug fixes done for that release. Your argument of "go read the CVS logs" is valid, but there are just too many for someone to go through to get the complete picture. I mean, people may end up reading 1000+ logs in a worst-case scenario to find out if a bug they are interested in is fixed, and the someone who compiled the release notes didn't think it was important enough to make it into the notes. Going through a 5K release notes document would be half that time; granted that over time their ability to read through logs quicker will improve, but that's a learning curve they have to be willing to go through, and not everyone will be interested to do that.

If I would have to find a word to describe what we need, I would say we need something *compendious*, i.e. what is at once full in scope and brief and concise in treatment. It is, however, work that someone will have to do, but it can be managed such that it is a by-product of the process, instead of a 'one time in the end' job.

> This discussion is all about finding a suitable balance between length and detail. Simplistic pronouncements don't help us strike that balance. FWIW, I tend to agree with the folks who think Bruce trimmed too much this time. But the release notes are, and always have been, intended to boil the CVS history down to something useful by eliminating irrelevant detail. For the vast majority of people, the details that are being mentioned here are indeed irrelevant. There will be some for whom they are not. But depending on the question, almost any detail might not be irrelevant, and at that point you have to be prepared to go check the archives.
> regards, tom lane

-- Usama Munir Dar http://linkedin.com/in/usamadar Consultant Architect Cell:+92 321 5020666 Skype: usamadar
Re: [HACKERS] PostGreSQL and recursive queries...
Tom Lane [EMAIL PROTECTED] writes: Gregory Stark [EMAIL PROTECTED] writes: I could imagine problems the planner would have to deal with though, such as what type is bogon in this query? WITH RECURSIVE x(bogon) AS (select bogon from x) select * from x; Just a note --- that's not the planner's problem, either. Semantic interpretation of the meaning of a query is supposed to be completed during parse analysis. I was being sloppy. I just mean as opposed to the executor. Ie, that the code to build the plan is harder than actually running it. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] .NET or Mono functions in PG
On Fri, Nov 30, 2007 at 04:43:03PM +0100, Gevik Babakhani wrote:
>> I did look at this at some earlier point as well. One big problem at that time was that once you embedded mono, you had all sorts of threads running in your backend ;-)
> Yes, threads running around could become a show stopper for both Mono and MS.NET. This is something I want to be sure of before I begin.

Yeah.

>> Another way to do it is the PL/J way (I think). Which is starting up a separate process with the VM in it and then do RPC of some kind to it. Which has more overhead per call, but lower per backend etc. And a lot less dangerous.
> This is actually my plan B, hence it is less dangerous. I am thinking the started process can stay alive and act as a service of some kind to handle/answer calls from multiple backends and shut itself down after a period of time being idle.

Yes, you should have just one such process with .NET managed threads running inside it. You just need to find a good RPC library to pass things back and forth between the backend running natively and this special process running in .NET.

//Magnus
Re: [HACKERS] .NET or Mono functions in PG
I did look at this at some earlier point as well. One big problem at that time was that once you embedded mono, you had all sorts of threads running in your backend ;-) yes, threads running around could become a show stopper for both Mono and MS.NET This is something I want to be sure of before I begin. Another way to do it is the PL/J way (I think). Which is starting up a separate process with the VM in it and then do RPC of some kind to it. Which has more overhead per call, but lower per backend etc. And a lot less dangerous. This is actually my plan B, hence it is less dangerous. I am thinking the started process can stay alive and act as a service of some kind to handle/answer calls form multiple backends and shutdown itself after a period of time being idle. Regards, Gevik Babakhani PostgreSQL NL http://www.postgresql.nl TrueSoftware BV http://www.truesoftware.nl ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] PostGreSQL and recursive queries...
Gregory Stark [EMAIL PROTECTED] writes: I could imagine problems the planner would have to deal with though, such as what type is bogon in this query? WITH RECURSIVE x(bogon) AS (select bogon from x) select * from x; Just a note --- that's not the planner's problem, either. Semantic interpretation of the meaning of a query is supposed to be completed during parse analysis. regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] .NET or Mono functions in PG
Jonah H. Harris wrote: On Nov 30, 2007 10:09 AM, Jonah H. Harris [EMAIL PROTECTED] wrote: Actually, it was PL/Mono. I'll dig through my stuff and post it somewhere for you. Just searched my backups and can't seem to find it :( It was on gborg CVS. Where is that now? Dead. But Marc has a tarball of the whole thing somewhere and can find it. //Magnus ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Release Note Changes
Simon Riggs wrote:
> On Fri, 2007-11-30 at 06:31 +, Simon Riggs wrote:
>> I also notice that two performance features have disappeared from the release notes. (Presumably they have been removed from source). Both of them have changes that can be seen by users, so can't see why we would want them removed.
> Wow, just realised 3 of Heikki's performance patches aren't mentioned either:
> - CheckpointStartLock removal

I don't think it's worth mentioning, given that we have the Load Distributed Checkpoints item in there. That alone will tell people that there's been some major changes to checkpoints.

> - I/O reduction during recovery

This might be worth mentioning, since it can be quite a big difference in the right circumstances, and it helps a bit with the scalability problem of the recovery. Should mention that it only helps with full_page_writes=on. One more reason to not gamble with data integrity ;-).

> - Tuning of Visibility code

I don't think that was release notes worthy. The release notes are quite long already...

-- Heikki Linnakangas EnterpriseDB http://www.enterprisedb.com
Re: [HACKERS] .NET or Mono functions in PG
It would we great. Thank you. Regards, Gevik Babakhani PostgreSQL NL http://www.postgresql.nl TrueSoftware BV http://www.truesoftware.nl -Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Jonah H. Harris Sent: Friday, November 30, 2007 4:10 PM To: Gevik Babakhani Cc: pgsql-hackers@postgresql.org Subject: Re: [HACKERS] .NET or Mono functions in PG On Nov 30, 2007 9:20 AM, Jonah H. Harris [EMAIL PROTECTED] wrote: Yeah, I have a copy of the old PL/.NET backed up somewhere. IIRC, it needs to be rewritten, but it would be possible to make it work for both Mono and .NET. Actually, it was PL/Mono. I'll dig through my stuff and post it somewhere for you. -- Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324 EnterpriseDB Corporation| fax: 732.331.1301 499 Thornall Street, 2nd Floor | [EMAIL PROTECTED] Edison, NJ 08837| http://www.enterprisedb.com/ ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] .NET or Mono functions in PG
On Nov 30, 2007 9:20 AM, Jonah H. Harris [EMAIL PROTECTED] wrote: Yeah, I have a copy of the old PL/.NET backed up somewhere. IIRC, it needs to be rewritten, but it would be possible to make it work for both Mono and .NET. Actually, it was PL/Mono. I'll dig through my stuff and post it somewhere for you. -- Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324 EnterpriseDB Corporation| fax: 732.331.1301 499 Thornall Street, 2nd Floor | [EMAIL PROTECTED] Edison, NJ 08837| http://www.enterprisedb.com/ ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] Release Note Changes
Greg, Frankly I think the release notes are already too long. People who judge a release by counting the number of items in the release notes are not worth appeasing. Including every individual lock removed or code path optimized will only obscure the important points on which people should be judging the relevance of the release to them. Things like smoothing checkpoint i/o which could be removing a show-stopper problem for them. I disagree. For people who want a quick summary of the major user-facing things changed we'll have multiple sources: (a) the announcement, (b) the press features list, (c) the Feature-Version matrix. The Release notes should have a *complete* list of changes. Why? Because we don't use a bug/feature tracker. So a user trying to figure out hey, was my issue XXX fixed so that I should upgrade? has *no other source* than the Release notes to look at, except CVS history. And if we start asking sysadmins and application vendors to read the CVS history, we're gonna simply push them towards other DBMSes which have this information more clearly. If we want to shorten the release notes, then we should adopt an issue tracker. -- Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Release Note Changes
On Nov 30, 2007 4:49 AM, Gregory Stark [EMAIL PROTECTED] wrote:
> Simon Riggs [EMAIL PROTECTED] writes:
>> If people understand there aren't 13 performance improvements there are at *least* 19+ that is a positive message to help people decide to upgrade.
> Frankly I think the release notes are already too long. People who judge a release by counting the number of items in the release notes are not worth appeasing. Including every individual lock removed or code path optimized will only obscure the important points on which people should be judging the relevance of the release to them. Things like smoothing checkpoint i/o which could be removing a show-stopper problem for them.

IMO, it's probably good to include things that materially affect how people operate the database. An example is improvements to statistics gathering, because it eliminates a historical trade-off in configuring the server. I agree with you regarding basic operations though.

merlin
[HACKERS] .NET or Mono functions in PG
I am investigating the possibility of having stored procedures and functions written in .NET language flavors. I remember a long time ago there was a gborg project wanting to implement it, but that is long gone I guess. Anyway, I think there are a few possibilities:

a) Use the MS .NET core and implement it only for Windows.
b) Use the Mono core and have the possibility to run it also on *nix systems.
c) Or perhaps a hybrid of the above would be possible.

Any implementation would require invoking/calling an external runtime (Mono or the MS CLR). I wonder what the performance would be. I would like to have your opinion about this idea.

Regards, Gevik Babakhani PostgreSQL NL http://www.postgresql.nl TrueSoftware BV http://www.truesoftware.nl
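For concreteness, a hypothetical sketch of what such a procedural language might look like from the SQL side, using PostgreSQL's standard language-handler mechanism; the language name `plmono`, the library path, and the function body conventions are all invented here for illustration (nothing like this existed in core at the time):

```sql
-- Hypothetical: register a call handler implemented in C that
-- embeds (or talks RPC to) the .NET runtime. Names are invented.
CREATE FUNCTION plmono_call_handler() RETURNS language_handler
    AS '$libdir/plmono' LANGUAGE C;
CREATE LANGUAGE plmono HANDLER plmono_call_handler;

-- Hypothetical: a function whose body would be compiled and
-- executed by the managed runtime.
CREATE FUNCTION add_ints(a int, b int) RETURNS int AS $$
    return a + b;
$$ LANGUAGE plmono;
```

This mirrors how PL/Perl and PL/Java hook into the backend; the open design question in the thread is whether the handler embeds the VM in-process or forwards calls to a single shared external process.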
Re: [HACKERS] PostGreSQL and recursive queries...
Tatsuo Ishii [EMAIL PROTECTED] writes:
> We decided to start working on WITH RECURSIVE too. Currently one of our engineers is about to start to look at what has been done and what is remaining. We hope to work together with you!

Here's the original message where I posted what I think we need in the executor to make this work:
  http://archives.postgresql.org/pgsql-hackers/2007-01/msg01495.php
Here's another thread where we discussed some further issues:
  http://archives.postgresql.org/pgsql-hackers/2007-02/msg01229.php

This is all about the executor though, which I've since learned not to expect to be the source of the headaches. The planner is infinitely more complex and subtle. Hopefully at the cte call sites we'll be able to gin up enough information to fill in the subquery information enough for the planner above to work with it.

I could imagine problems the planner would have to deal with though, such as "what type is bogon in this query?"

  WITH RECURSIVE x(bogon) AS (select bogon from x) select * from x;

what about something like:

  WITH RECURSIVE x(bogon) AS (select bogon+1 from x) select * from x;

note that the usual case is something like:

  WITH RECURSIVE x(bogon) AS (SELECT 1 UNION ALL SELECT bogon+1 FROM x) SELECT * FROM x WHERE bogon < ?

So we can't refuse just anything where the types are recursively dependent. We might have to do something weird like make the types of a recursive call unknown until it's planned, then go back and replan recursive queries making use of the new information to catch things like:

  create function foo(int) returns text ...
  create function foo(text) returns int ...
  with recursive x(bogon) as (select 1 union all select foo(bogon) from x) select * from x

-- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support!
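A sketch of why the overloaded-function case at the end is nasty. With both `foo` variants defined (the function bodies below are invented placeholders, not from the original mail), no single type for bogon makes the UNION ALL branches agree:

```sql
-- Two overloads that flip between int and text (bodies illustrative):
CREATE FUNCTION foo(int)  RETURNS text AS $$ SELECT $1::text  $$ LANGUAGE sql;
CREATE FUNCTION foo(text) RETURNS int  AS $$ SELECT length($1) $$ LANGUAGE sql;

-- If bogon is int (as SELECT 1 suggests), foo(bogon) yields text,
-- contradicting the first branch; if bogon is text, SELECT 1 doesn't
-- match. Detecting this requires the replanning / inference being
-- discussed rather than a single bottom-up pass:
WITH RECURSIVE x(bogon) AS (
    SELECT 1
  UNION ALL
    SELECT foo(bogon) FROM x
)
SELECT * FROM x;
```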
Re: [HACKERS] Release Note Changes
Simon Riggs wrote:
> - Heap-Only Tuples (HOT) accelerate space reuse for UPDATEs
> change to
> - Heap-Only Tuples (HOT) improve performance of frequent UPDATEs

I think we need to qualify this, or it could be quite misleading. Perhaps add "that don't affect indexed columns" or something like that.

cheers

andrew
Re: [HACKERS] pgwin32_open returning EINVAL
On Thu, Nov 29, 2007 at 05:48:23PM -0500, Tom Lane wrote:
> Alvaro Herrera [EMAIL PROTECTED] writes:
>> Tom Lane wrote:
>>> Well, since EINVAL is the default result from _dosmaperr, and none of the cases it represents are expected, why don't we just remove all of the explicit mappings to EINVAL from doserrors[]?
>> Well, the problematic routine is not already using _dosmaperr currently. It is doing its own mapping and neglecting to report anything.
> Oh, well then why are we arguing? There is no reason at all to assume that _dosmaperr wouldn't give us a sufficiently good fix on the error if it were only being used. At the very least, I think we should put in Magnus' patch and find out whether it gives sufficient information. If it doesn't, then we can think about changing to a higher log level.

Applied.

//Magnus
Re: [HACKERS] PostGreSQL and recursive queries...
Hubert FONGARNAND [EMAIL PROTECTED] writes:
> I started working on WITH RECURSIVE a while back and still intend to get back to it. But there's no guarantee that what I turn up will be to the liking of everyone else. I also think the connectby() patch should be possible to port forward but I haven't looked too much into it. I know it's a big patch; just the sheer amount of code that has to be gone through carefully to port it forward might make it kind of hard.

Hi,

We decided to start working on WITH RECURSIVE too. Currently one of our engineers is about to start to look at what has been done and what is remaining. We hope to work together with you!

-- Tatsuo Ishii SRA OSS, Inc. Japan
Re: [HACKERS] [GENERAL] Empty arrays with ARRAY[]
Brendan Jurd [EMAIL PROTECTED] writes: The patch is very invasive (at least compared to any of my previous patches), but so far I haven't managed to find any broken behaviour. I'm sorry to suggest anything at this point, but... would it be less invasive if instead of requiring the immediate cast you created a special case in the array code to allow a placeholder object for empty array of unknown type. The only operation which would be allowed on it would be to cast it to some specific array type. That way things like UPDATE foo SET col = array[]; INSERT INTO foo (col) VALUES (array[]); could be allowed if they could be contrived to introduce an assignment cast. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] Release Note Changes
On Fri, 2007-11-30 at 09:49 +, Gregory Stark wrote: Simon Riggs [EMAIL PROTECTED] writes: If people understand there aren't 13 performance improvements there are at *least* 19+ that is a positive message to help people decide to upgrade. Frankly I think the release notes are already too long. So why do we have stuff in there that the users will never see? We already have a release summary, so why summarise *some* of the detail as well, but not all of it??? I see no reason to diminish yours, Heikki's or my own contributions, all of which were in the area of performance, which people do care about. None of the ones I mentioned were trivial patches, nor were their effects small. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] Release Note Changes
On 11/30/07, Gregory Stark [EMAIL PROTECTED] wrote: Simon Riggs [EMAIL PROTECTED] writes: If people understand there aren't 13 performance improvements there are at *least* 19+ that is a positive message to help people decide to upgrade. Frankly I think the release notes are already too long. People who judge a release by counting the number of items in the release notes are not worth appeasing. Including every individual lock removed or code path optimized will only obscure the important points on which people should be judging the relevance of the release to them. Things like smoothing checkpoint i/o which could be removing a show-stopper problem for them. If they're mentioned at all a single release note bullet point saying Many optimizations and concurrency improvements in areas such as transaction start and finish, checkpoint start, record visibility checking, merge join plans, ... would suffice. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support! ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings I agree that release notes should not be too long, but maybe there should be (if there isn't one already) something like a change log where people can find out all the changes made since the previous release, if they are interested? -- Usama Munir Dar http://linkedin.com/in/usamadar Consultant Architect Cell:+92 321 5020666 Skype: usamadar
Re: [HACKERS] [GENERAL] Empty arrays with ARRAY[]
As discussed on -hackers, this patch allows the construction of an empty array if an explicit cast to an array type is given (as in, ARRAY[]::int[]). postgres=# select array[]::int[]; array --- {} postgres=# select array[]; ERROR: no target type for empty array HINT: Empty arrays must be explicitly cast to the desired array type, e.g. ARRAY[]::int[] A few notes on the implementation: * The syntax now allows an ARRAY constructor with an empty expression list (array_expr_list may be empty). * I've added a new parsenode for arrays, A_ArrayExpr (previously the parser would create ArrayExpr primnodes). * transformArrayExpr() now takes two extra arguments, a type oid and a typmod. When transforming a typecast which casts an A_ArrayExpr to an array type, transformExpr passes these type details down to transformArrayExpr, and skips the typecast. * transformArrayExpr() behaves slightly differently when passed type information. The overall type of the array is set to the given type, and all elements are explicitly coerced to the equivalent element type. If it was not passed a type, then the behaviour is as previous; the function looks for a common type among the elements, and coerces them to that type. The overall type of the array is derived from the common element type. The patch is very invasive (at least compared to any of my previous patches), but so far I haven't managed to find any broken behaviour. All regression tests pass, and the regression tests for arrays seem to be quite comprehensive. I did add a couple of new tests for the empty array behaviours, but the rest I've left alone. I look forward to your comments -- although given the length of the 8.4 patch review queue, that will probably be an exercise in extreme patience! Major thanks go out to Tom for all his guidance on -hackers while I developed the patch.
Regards, BJ *** ./doc/src/sgml/syntax.sgml.orig Fri Nov 30 19:31:29 2007 --- ./doc/src/sgml/syntax.sgml Fri Nov 30 19:32:11 2007 *** *** 1497,1503 array value from values for its member elements. A simple array constructor consists of the key word literalARRAY/literal, a left square bracket ! literal[/, one or more expressions (separated by commas) for the array element values, and finally a right square bracket literal]/. For example: programlisting --- 1497,1503 array value from values for its member elements. A simple array constructor consists of the key word literalARRAY/literal, a left square bracket ! literal[/, a list of expressions (separated by commas) for the array element values, and finally a right square bracket literal]/. For example: programlisting *** *** 1507,1515 {1,2,7} (1 row) /programlisting ! The array element type is the common type of the member expressions, ! determined using the same rules as for literalUNION/ or ! literalCASE/ constructs (see xref linkend=typeconv-union-case). /para para --- 1507,1516 {1,2,7} (1 row) /programlisting ! If the array is not explicitly cast to a particular type, the array element ! type is the common type of the member expressions, determined using the ! same rules as for literalUNION/ or literalCASE/ constructs (see ! xref linkend=typeconv-union-case). /para para *** *** 1554,1559 --- 1555,1573 /para para +You can construct an empty array, but since it's impossible to have an array +with no type, you must explicitly cast your empty array to the desired type. For example: + programlisting + SELECT ARRAY[]::int[]; + int4 + -- + {} + (1 row) + /programlisting +For more on casting, see xref linkend=sql-syntax-type-casts. + /para + + para It is also possible to construct an array from the results of a subquery. 
In this form, the array constructor is written with the key word literalARRAY/literal followed by a parenthesized (not *** ./src/backend/nodes/copyfuncs.c.orig Fri Nov 30 19:29:16 2007 --- ./src/backend/nodes/copyfuncs.c Fri Nov 30 19:32:11 2007 *** *** 1704,1709 --- 1704,1719 return newnode; } + static A_ArrayExpr * + _copyA_ArrayExpr(A_ArrayExpr *from) + { + A_ArrayExpr *newnode = makeNode(A_ArrayExpr); + + COPY_NODE_FIELD(elements); + + return newnode; + } + static ResTarget * _copyResTarget(ResTarget *from) { *** *** 3538,3543 --- 3548,3556 case T_A_Indirection: retval = _copyA_Indirection(from); break; + case T_A_ArrayExpr: + retval = _copyA_ArrayExpr(from); + break; case T_ResTarget: retval = _copyResTarget(from); break; *** ./src/backend/nodes/outfuncs.c.orig
Re: [HACKERS] CommandCounterIncrement versus plan caching
I wrote: One fairly simple answer is to insert a CCI call at the start of RevalidateCachedPlan. I dislike that solution, at least by itself, on two grounds: ... I've also thought about rearranging the current conventions for where to call CCI. This particular form of the problem would go away if SPI command execution did CCI after, instead of before, each non-read-only command. Or perhaps safer, before each such command and after the last one. After further thought, I've concluded that the second of these approaches is the more attractive, because it avoids adding CCI calls into read-only functions. While I haven't yet tested any of this, the plan that is in my head is: 1. Add if (!read_only) CommandCounterIncrement(); at the end of _SPI_execute_plan(). We keep the before call, though, so that a volatile function still sees the partial results of a calling query; that's how it's worked historically and I don't want to be tinkering with those semantics right now. 2. Remove the CCI call at the top of _SPI_prepare_plan. It should be unnecessary given that we are now expecting any previous DDL to have been followed by CCI. (If it *isn't* unnecessary, then this whole idea is wrong, because paths that involve re-using a previously prepared plan instead of making a new one will still be broken.) 3. Do something to ameliorate the consequences of the increased number of CCI calls. As previously mentioned, the main problem with this approach is that for the typical case of one SQL command per _SPI_execute_plan call, we'd be doubling the number of CCI calls and thus consuming command IDs twice as fast. I propose fixing that by not allocating a new command ID unless the previous ID was actually used to mark a tuple. Looking at the uses of GetCurrentCommandId, it seems that we can distinguish read only from read/write calls easily in many places, but there is one problem: the Executor uses the passed-in snapshot's curcid as the CommandId to write tuples with. 
When we set up a snapshot we typically don't know whether it will be used with a SELECT or an updating query, so we cannot decide at that point whether the command ID has been dirtied or not. I think this can be fixed by changing the Executor so that it doesn't use snapshot->curcid for this purpose. Instead, add a field to EState showing the CommandID to mark tuples with. ExecutorStart, which has enough information to know whether the query is read-only or not, can set this field, or not, and tell GetCurrentCommandId to mark the command ID dirty (or not). In practice, it appears that all callers of the Executor pass in snapshots that have current curcid, and so this would not result in any actual change of the CID being used. (If a caller did pass in a snap with an older CID, there'd still not be any real change of behavior --- correct behavior ensues as long as the executor's output CID is >= snapshot CID.) One fine point is that we have to mark the ID dirty at ExecutorStart time, whether or not the query actually ends up marking any tuples with it; we cannot wait until a heap_insert/update/delete actually happens with it, as I'd first thought. The problem is that the query might call a volatile function before it first inserts any tuple, and that function needs to take a new command ID for itself; if it doesn't, we could conflate the output of the function with the output of the calling query later on. Once we have the knowledge of whether the current command ID is dirty, we can skip everything inside CommandCounterIncrement when it is not; except for the AtStart_Cache() call, ie, AcceptInvalidationMessages(). What that is looking for is asynchronous DDL-change notifications from other backends. I believe that it is actually not necessary for correctness for CCI to do that, because we should (had better) have adequate locking to ensure that messages about any particular table are absorbed before we touch that table. 
Rather, the reasoning for having this in CCI is to make sure we do it often enough in a long-running transaction to keep the sinval message queue from overflowing. I am tempted to remove that from CCI and call it from just a selected few CCI call sites, instead --- maybe only CommitTransactionCommand. OTOH this step might reasonably be considered too risky for late beta, since it would affect asynchronous backend interactions, which are way harder to test properly than within-a-backend behavior. Comments? regards, tom lane ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] .NET or Mono functions in PG
On Nov 30, 2007 8:31 AM, Gevik Babakhani [EMAIL PROTECTED] wrote: I am investigating the possibility of having stored procedures and functions written in .NET language flavors. I remember a long time ago there was a gborg project wanting to implement it, but that is long gone I guess. anyway Yeah, I have a copy of the old PL/.NET backed up somewhere. IIRC, it needs to be rewritten, but it would be possible to make it work for both Mono and .NET. -- Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324 EnterpriseDB Corporation| fax: 732.331.1301 499 Thornall Street, 2nd Floor | [EMAIL PROTECTED] Edison, NJ 08837| http://www.enterprisedb.com/ ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
[HACKERS] compiling postgres in winxp
Hi, I'm trying to compile postgres in winxp for the first time in order to try the recent Magnus patch discussed here http://archives.postgresql.org/pgsql-hackers/2007-11/msg01307.php I've installed mingw, msys and msysDTK. I tried ./configure --without-zlib and then make install, and got this error. Can someone give me a clue here? make[2]: Entering directory `/home/General/postgresql-8.2.5/src/port' gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -Wendif-labels -fno-strict-aliasing -I../../src/port -DFRONTEND -I../../src/include -I./src/include/port/win32 -DEXEC_BACKEND -I../../src/include/port/win32 -c -o getrusage.o getrusage.c In file included from ../../src/include/rusagestub.h:17, from getrusage.c:18: /mingw/lib/gcc/mingw32/3.4.5/../../../../include/sys/time.h:27: error: redefinition of `struct timezone' /mingw/lib/gcc/mingw32/3.4.5/../../../../include/sys/time.h:40: error: conflicting types for 'gettimeofday' ../../src/include/port.h:292: error: previous declaration of 'gettimeofday' was here /mingw/lib/gcc/mingw32/3.4.5/../../../../include/sys/time.h:40: error: conflicting types for 'gettimeofday' ../../src/include/port.h:292: error: previous declaration of 'gettimeofday' was here make[2]: *** [getrusage.o] Error 1 make[2]: Leaving directory `/home/General/postgresql-8.2.5/src/port' make[1]: *** [install] Error 2 make[1]: Leaving directory `/home/General/postgresql-8.2.5/src' make: *** [install] Error 2 -- regards, Jaime Casanova Programming today is a race between software engineers striving to build bigger and better idiot-proof programs and the universe trying to produce bigger and better idiots. So far, the universe is winning. Richard Cook ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Release Note Changes
On Fri, 2007-11-30 at 06:31 +, Simon Riggs wrote: I also notice that two performance features have disappeared from the release notes. (Presumably they have been removed from source). Both of them have changes that can be seen by users, so can't see why we would want them removed. Wow, just realised 3 of Heikki's performance patches aren't mentioned either: - CheckpointStartLock removal - I/O reduction during recovery - Tuning of Visibility code I'm not sure what the rationale is for not mentioning these things. They're at least as important, if not more so, than mentioning minor source code changes. If people understand there aren't 13 performance improvements there are at *least* 19+ that is a positive message to help people decide to upgrade. -- Simon Riggs 2ndQuadrant http://www.2ndQuadrant.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Release Note Changes
Simon Riggs [EMAIL PROTECTED] writes: If people understand there aren't 13 performance improvements there are at *least* 19+ that is a positive message to help people decide to upgrade. Frankly I think the release notes are already too long. People who judge a release by counting the number of items in the release notes are not worth appeasing. Including every individual lock removed or code path optimized will only obscure the important points on which people should be judging the relevance of the release to them. Things like smoothing checkpoint i/o which could be removing a show-stopper problem for them. If they're mentioned at all a single release note bullet point saying Many optimizations and concurrency improvements in areas such as transaction start and finish, checkpoint start, record visibility checking, merge join plans, ... would suffice. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support! ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] .NET or Mono functions in PG
Hello I thought about it. It could be a great project. Oracle has a similar project. Performance will probably be similar to PL/Java; only the first call is slow. Regards Pavel Stehule On 30/11/2007, Gevik Babakhani [EMAIL PROTECTED] wrote: I am investigating the possibility of having stored procedures and functions written in .NET language flavors. I remember a long time ago there was a gborg project wanting to implement it, but that is long gone I guess. anyway I think there are two possibilities: a) Use of MS.NET core and implement it only for Windows. b) Use of Mono core and have the possibility to run it also on *nix systems. c) Or perhaps a hybrid of the above would be possible. Any implementation would require to invoke/call an external runtime (Mono or MS.CRL). I wonder what the performance would be. I would like to have your opinion about this idea. Regards, Gevik Babakhani PostgreSQL NL http://www.postgresql.nl TrueSoftware BV http://www.truesoftware.nl ---(end of broadcast)--- TIP 6: explain analyze is your friend ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] .NET or Mono functions in PG
On Fri, Nov 30, 2007 at 02:31:44PM +0100, Gevik Babakhani wrote: I am investigating the possibility of having stored procedures and functions written in .NET language flavors. I remember a long time ago there was a gborg project wanting to implement it, but that is long gone I guess. anyway I think there are two possibilities: a) Use of MS.NET core and implement it only for Windows. b) Use of Mono core and have the possibility to run it also on *nix systems. c) Or perhaps a hybrid of the above would be possible. Any implementation would require to invoke/call an external runtime (Mono or MS.CRL). I wonder what the performance would be. I would like to have your opinion about this idea. I did look at this at some earlier point as well. One big problem at that time was that once you embedded mono, you had all sorts of threads running in your backend ;-) Another way to do it is the PL/J way (I think). Which is starting up a separate process with the VM in it and then do RPC of some kind to it. Which has more overhead per call, but lower per backend etc. And a lot less dangerous. I never got past the thinking stage though. But it would certainly be nice to have! //Magnus ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] Release Note Changes
Usama Dar [EMAIL PROTECTED] writes: i agree that release notes should not be too long, but may be there should be (if there isn't one already) something like a change log where people can find out all the changes done from the previous release, if they are intrested ? The CVS history (either direct from the CVS server, or in the pgsql-committers archives) will give you as much detail as you could possibly want. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] Release Note Changes
Heikki, This might be worth mentioning, since it can be quite a big difference in the right circumstances, and it helps a bit with the scalability problem of the recovery. Should mention that it only helps with full_page_writes=on. One more reason to not gamble with data integrity ;-). Does this mean that recovery from logs with full_page_writes will be faster than recovery from logs without them? -- Josh Berkus PostgreSQL @ Sun San Francisco ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] Release Note Changes
Josh Berkus [EMAIL PROTECTED] writes: I disagree. For people who want a quick summary of the major user-facing things changed we'll have multiple sources: (a) the announcement, (b) the press features list, (c) the Feature-Version matrix. The Release notes should have a *complete* list of changes. Define complete. Why? Because we don't use a bug/feature tracker. So a user trying to figure out hey, was my issue XXX fixed so that I should upgrade? has *no other source* than the Release notes to look at, except CVS history. And if we start asking sysadmins and application vendors to read the CVS history, we're gonna simply push them towards other DBMSes which have this information more clearly. So in other words, you don't *really* want complete. This discussion is all about finding a suitable balance between length and detail. Simplistic pronouncements don't help us strike that balance. FWIW, I tend to agree with the folks who think Bruce trimmed too much this time. But the release notes are, and always have been, intended to boil the CVS history down to something useful by eliminating irrelevant detail. For the vast majority of people, the details that are being mentioned here are indeed irrelevant. There will be some for whom they are not. But depending on the question, almost any detail might not be irrelevant, and at that point you have to be prepared to go check the archives. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] CommandCounterIncrement versus plan caching
Tom Lane wrote: Once we have the knowledge of whether the current command ID is dirty, we can skip everything inside CommandCounterIncrement when it is not; except for the AtStart_Cache() call, ie, AcceptInvalidationMessages(). What that is looking for is asynchronous DDL-change notifications from other backends. I believe that it is actually not necessary for correctness for CCI to do that, because we should (had better) have adequate locking to ensure that messages about any particular table are absorbed before we touch that table. Rather, the reasoning for having this in CCI is to make sure we do it often enough in a long-running transaction to keep the sinval message queue from overflowing. I am tempted to remove that from CCI and call it from just a selected few CCI call sites, instead --- maybe only CommitTransactionCommand. OTOH this step might reasonably be considered too risky for late beta, since it would affect asynchronous backend interactions, which are way harder to test properly than within-a-backend behavior. I agree that it seems risky to remove it at this point. It could have severe performance impact if it turns out that not calling it enough causes the queue to overflow. It seems safer to be calling it as frequently as possible. I was going to say, what would happen if vacuum were to run on a large table and a high vacuum_delay setting, but then I noticed that currently it doesn't call CCI at all. Is this a concern? -- Alvaro Herrera Developer, http://www.PostgreSQL.org/ To create is as difficult as to be free (Elsa Triolet) ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [HACKERS] compiling postgres in winxp
On Sat, 1 Dec 2007, Jaime Casanova wrote: i'm trying to compile postgres in winxp for the first time in order to try recent Magnus patch discussed here http://archives.postgresql.org/pgsql-hackers/2007-11/msg01307.php /mingw/lib/gcc/mingw32/3.4.5/../../../../include/sys/time.h:27: error: redefinition of `struct timezone' /mingw/lib/gcc/mingw32/3.4.5/../../../../include/sys/time.h:40: error: conflicting types for 'gettimeofday' .../../src/include/port.h:292: error: previous declaration of 'gettimeofday' was here Only very recently can the 8.2 release be built with recent mingw releases. Upgrade pg to the top of REL8_2_STABLE or downgrade mingw. http://archives.postgresql.org/pgsql-committers/2007-11/msg00566.php Kris Jurka ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster