> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Still, I would say that it is extremely bad behavior for not having
>> stats, wouldn't you think?
>
> Think of it as a kernel bug.
While I respect your viewpoint that the Linux kernel should not kill an offending process when the system runs out of memory, I somewhat disagree: the OOM killer is a disaster preventer. It should be viewed as a last-ditch "him or me" choice the kernel has to make, and the system should not get into that position in the first place.

Regardless, it is troubling that missing current stats can cause the system, with a large data set, to exceed its working memory limits. I think it is still a bug. While it may manifest as a PG crash on Linux because of a feature you take issue with, the fact remains that PG is exceeding its working memory limit. Should failing to run ANALYZE cause this behavior? If so, how does this get clearly documented? If not, can it be prevented?

>>> Meanwhile, I'd strongly recommend turning off OOM kill. That's got to
>>> be the single worst design decision in the entire Linux kernel.
>
>> How is this any different from FreeBSD having a default 512M process
>> size limit? On FreeBSD, the process would have been killed earlier.
>
> No, the process would have been politely told it was out of memory, and
> would have told you the same. If the kernel's way of notifying a
> process that it's out of memory is SIGKILL, there is not a damn thing
> that we can do to operate robustly.

Let's not waste time on a Linux-versus-FreeBSD discussion. Linux and FreeBSD each have their strengths, and a debate over the dubious merits of either is long and contentious. Both systems are fine, just with some subtle differences in design goals.

---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings