Greg Stark <[EMAIL PROTECTED]> writes:
> I was just trying to clarify the situation since someone made some comment
> about it having to do with memory being swapped out and then finding nowhere
> to swap in when needed. That's not exactly what's happening.
No. I believe the case that is actually
On Fri, Feb 10, 2006 at 12:16:04PM -0500, Mark Woodward wrote:
> > And even when PostgreSQL has the server all to itself, having a hashagg
> > spill to disk is *way* better than pushing the machine into a swap
> > storm. At least if you spill the hashagg you only have one backend
> > running at a s
Greg Stark <[EMAIL PROTECTED]> writes:
> That's why merely allocating tons of swap doesn't necessarily protect you.
> It's still possible for a process (or several processes if you allocate more
> swap than you have address space) to mmap gigabytes of memory without touching
> it and then start tou
Martijn van Oosterhout writes:
> Disabling overcommit has a serious cost in that most of your VM will
> never be used. Are people really suggesting that I can't run a few
> daemons, X and a web-browser on FreeBSD without allocating 3 times my
> physical memory in swap?
There's a possibility you
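For reference, on Linux (where the reported crash happened) disabling overcommit is a two-line sysctl change; the ratio shown is only an illustrative value:

```
# /etc/sysctl.conf -- strict overcommit accounting (Linux)
vm.overcommit_memory = 2   # malloc fails with ENOMEM up front instead of
                           # the OOM killer firing later
vm.overcommit_ratio = 80   # commit limit = swap + 80% of RAM (tune to taste)
```

Apply with `sysctl -p`; this is exactly the trade-off Martijn describes, since the commit limit now has to cover every allocation whether or not it is ever touched.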
Martijn van Oosterhout writes:
> Disclaimer: The Linux OOM killer has never killed the wrong process for
> me, so I don't have any bad experiences with overcommit.
You haven't tried real hard. What I've seen recently when I do something
that makes a PG backend go overboard is that the kernel zap
On Fri, Feb 10, 2006 at 10:01:18AM -0600, Jim C. Nasby wrote:
> BTW, I was shocked when I found out that FreeBSD actually has an OOM
> killer itself. Yet I've never heard of anyone having problems with it.
> Granted, the FreeBSD OOM could be better designed to pick the right
> process to kill, but
On Fri, Feb 10, 2006 at 09:57:12AM -0500, Mark Woodward wrote:
> > In most practical situations, I think
> > exceeding work_mem is really the best solution, as long as it's not
> > by more than 10x or 100x. It's when the estimate is off by many
> > orders of magnitude that you've got a problem. R
Rick Gigger <[EMAIL PROTECTED]> writes:
> However if hashagg truly does not obey the limit that is supposed to
> be imposed by work_mem then it really ought to be documented. Is
> there a misunderstanding here and it really does obey it? Or is
> hashagg an exception but the other work_mem a
Martijn van Oosterhout writes:
> When people talk about disabling the OOM killer, it doesn't stop the
> SIGKILL behaviour,
Yes it does, because the situation will never arise.
> it just causes the kernel to return -ENOMEM for
> malloc() much much earlier... (ie when you still actually have memor
* Tom Lane ([EMAIL PROTECTED]) wrote:
> Greg Stark <[EMAIL PROTECTED]> writes:
> > It doesn't seem like a bad idea to have a max_memory parameter that if a
> > backend ever exceeded it would immediately abort the current
> > transaction.
>
> See ulimit (or local equivalent).
As much as setting ul
Greg Stark <[EMAIL PROTECTED]> writes:
> Except not quite, since I think an out of memory error still means that
> backend exits instead of just that query failing.
Not at all! PG will recover from this perfectly well ... if it's given
the opportunity, rather than being SIGKILLed.
> It doesn't s
Stephen Frost <[EMAIL PROTECTED]> writes:
> Unless I've missed something here, disabling the OOM killer doesn't
> really solve the problem here.
Well in a way it does. Postgres would get an out-of-memory error from malloc
which it would handle properly and the world would be happy.
Except not
* Jim C. Nasby ([EMAIL PROTECTED]) wrote:
> On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote:
> > If it is not something that can be fixed, it should be clearly documented.
>
> work_mem (integer)
>
> Specifies the amount of memory to be used by internal sort
> operations and
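Worth underlining from that doc snippet: work_mem is a per-operation limit, not a per-backend one, so a query with several sort or hash steps can legitimately use several multiples of it; hashagg, as discussed in this thread, can overshoot it further when the planner's group-count estimate is badly off. As a plain-config illustration:

```
# postgresql.conf (value is in kB when no unit is given)
work_mem = 1024        # 1 MB per sort/hash operation, *not* per backend;
                       # a query with N such nodes may use about N times this
```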
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> Again, regardless of OS used, hashagg will exceed "working memory" as
> defined in postgresql.conf.
So? If you've got OOM kill enabled, it can zap a process whether it's
strictly adhered to work_mem or not. The OOM killer is entirely capable
of choos
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> I think it is still a bug. While it may manifest itself as a pg crash on
>> Linux because of a feature with which you have issue, the fact remains
>> that PG is exeeding its working memory limit.
>
> The problem is that *we have no way to know what t
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> I think it is still a bug. While it may manifest itself as a pg crash on
> Linux because of a feature with which you have issue, the fact remains
> that PG is exceeding its working memory limit.
The problem is that *we have no way to know what that limi
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> Still, I would say that it is extremely bad behavior for not having
>> stats, wouldn't you think?
>
> Think of it as a kernel bug.
While I respect your viewpoint that the Linux kernel should not kill an
offending process if the system runs out of mem
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> Still, I would say that it is extremely bad behavior for not having
> stats, wouldn't you think?
Think of it as a kernel bug.
>> Meanwhile, I'd strongly recommend turning off OOM kill. That's got to
>> be the single worst design decision in the entire
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> -> HashAggregate (cost=106527.68..106528.68 rows=200
>> width=32)
>>Filter: (count(ucode) > 1)
>>-> Seq Scan on cdtitles (cost=0.00..96888.12
>> rows=1927912
>> width=32)
>
>> Well, shouldn't hash aggregat
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> -> HashAggregate (cost=106527.68..106528.68 rows=200 width=32)
>Filter: (count(ucode) > 1)
>-> Seq Scan on cdtitles (cost=0.00..96888.12 rows=1927912
> width=32)
> Well, shouldn't hash aggregate respect work
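The rows=200 on that HashAggregate is the planner's default guess for a grouped column with no statistics; with the table actually holding ~1.9M rows, the hash table built at execution time dwarfs the 200-group budget the planner sized against work_mem. The thread's implicit workaround is simply to gather stats first:

```sql
-- Collect statistics so the planner can estimate the number of groups
-- realistically instead of falling back to its default guess of 200.
ANALYZE cdtitles;
```

With a realistic group count the planner can reject hashagg in favor of a sort-based aggregate that spills to disk within work_mem.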
> "Mark Woodward" <[EMAIL PROTECTED]> writes:
>> PostgreSQL promptly uses all available memory for the query and
>> subsequently crashes.
>
> I'll bet a nickel this is on a Linux machine with OOM kill enabled.
> What does the postmaster log show --- or look in the kernel log to
> see if it mentions
"Mark Woodward" <[EMAIL PROTECTED]> writes:
> PostgreSQL promptly uses all available memory for the query and
> subsequently crashes.
I'll bet a nickel this is on a Linux machine with OOM kill enabled.
What does the postmaster log show --- or look in the kernel log to
see if it mentions anything a
More info: the machine has 512 MB RAM and 512 MB swap.
work_mem is set to: work_mem = 1024 (i.e. 1 MB, since the value is in kB)
This shouldn't have crashed, should it?
PostgreSQL promptly uses all available memory for the query and
subsequently crashes.
I'm sure it can be corrected with a setting, but should it crash?
freedb=# create table ucode as select distinct ucode from cdtitles group
by ucode having count(ucode)>1 ;
server closed the connection unexpectedly