Having written some desktop apps with a large appetite for memory and a
demand to run well on everybody's machine, I started to favor NIO for
storing most of the data within the application, using wrapper
objects to provide access to the stored information. By doing this you
end up with a smaller number of long-term objects in memory, and the
NIO memory-mapped storage lives outside of the JVM heap.
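A minimal sketch of that pattern, assuming a made-up record layout (a 4-byte id plus an 8-byte value) and invented class names: the raw data lives in a direct ByteBuffer outside the heap, and small wrapper objects are handed out as views onto it.

```java
import java.nio.ByteBuffer;

// Sketch: off-heap storage via a direct ByteBuffer, accessed through
// cheap wrapper objects so the GC only ever sees small, short-lived views.
public class OffHeapStore {
    public static class Record {
        public final int id;
        public final double value;
        Record(int id, double value) { this.id = id; this.value = value; }
    }

    private static final int RECORD_SIZE = 12; // 4-byte id + 8-byte value
    private final ByteBuffer buf;

    public OffHeapStore(int capacity) {
        // allocateDirect puts the backing memory outside the Java heap
        buf = ByteBuffer.allocateDirect(capacity * RECORD_SIZE);
    }

    public void put(int index, int id, double value) {
        int base = index * RECORD_SIZE;
        buf.putInt(base, id);          // absolute put: no position changes
        buf.putDouble(base + 4, value);
    }

    public Record get(int index) {
        int base = index * RECORD_SIZE;
        return new Record(buf.getInt(base), buf.getDouble(base + 4));
    }

    public static void main(String[] args) {
        OffHeapStore store = new OffHeapStore(1000);
        store.put(0, 42, 3.5);
        Record r = store.get(0);
        System.out.println(r.id + " " + r.value); // prints 42 3.5
    }
}
```

The wrappers here are created per access; a real app might pool or reuse them, but the point stands: the bulk of the data never becomes long-lived heap objects.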

Ruben

On Thu, Jan 1, 2009 at 8:48 AM, Reinier Zwitserloot <reini...@gmail.com> wrote:
>
> Sherod, your rant doesn't really work:
>
> There is simply no one right way to do memory limits. If you allow an
> app to eat up as much memory as it needs, there are plenty of
> situations where THAT is clearly the wrong thing to do, particularly
> on the client, which, contrary to some insinuations in this thread,
> should be the default case to design for, because server folk are
> more likely to know what they are doing than client users. There's no
> magic solution that makes everyone happy.
>
> One thing that Java certainly can work on is being more useful when
> an OOME does happen. Granted, not every app actually prints the OOME
> to a readable place, especially since, when an OOME happens, I don't
> think the JVM guarantees that your code will even continue normally.
> But there's still lots of improvement possible in this area, starting
> with having the JVM state in the OOME's message whether a larger heap
> would have fixed it!
>
> Even that has some practical problems, though: what if you do increase
> the heap size but the actual problem is a memory leak? Then you'll get
> the same error again, just later. Should the exception include a whole
> paragraph stating that, if you've already increased the heap, it might
> be a memory leak? Maybe. Tough situation.
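The JVM doesn't offer that hint today, but an app can approximate it with the standard Runtime API. A minimal sketch, where the 90% threshold and the message wording are arbitrary choices of mine:

```java
// Sketch: a heap-fullness hint an app could log alongside an OOME.
public class OomeReport {
    // Crude heuristic: was the heap close to its -Xmx ceiling when we
    // looked? If yes, a bigger heap might help (unless it's a leak);
    // if no, more heap probably won't.
    public static String describe() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                      // the -Xmx ceiling
        long used = rt.totalMemory() - rt.freeMemory(); // currently in use
        if ((double) used / max > 0.9) {
            return "Heap nearly full (" + used + " of " + max
                 + " bytes): a larger -Xmx may help, unless this is a leak.";
        }
        return "Heap not near its limit (" + used + " of " + max
             + " bytes): a larger -Xmx is unlikely to help.";
    }

    public static void main(String[] args) {
        // Catching OutOfMemoryError like this is best-effort only: as
        // noted above, the JVM makes no guarantee your code still runs
        // normally once one has been thrown.
        try {
            doWork();
        } catch (OutOfMemoryError e) {
            System.err.println(describe());
            throw e;
        }
        System.out.println(describe());
    }

    private static void doWork() {
        // placeholder for real work that might exhaust the heap
    }
}
```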
>
> I don't think anyone is hand-waving OOMEs away by stating that you
> should just 'know how much you need'. Who is saying that? I get the
> feeling you think this problem is easy to solve. It isn't. *THAT* is
> the issue.
>
> A recap of issues with eating as much memory as you want:
>
> FACT A: OSes manage virtual memory and make it relatively hard for
> userland apps to meddle with and/or inspect it.
>
> FACT B: There's no way for the JVM to see any difference between an
> app leaking memory and an app that has a legitimate use for its
> continued increased memory requirements. Even if we can somehow
> magically create a garbage collector that can do perfect cleanup in
> near zero extra time compared to the more heuristic (pretty good but
> not perfect) approach used by the current garbage collector, that
> still doesn't allow us to assume that a JVM that wants more memory
> should really get it.
>
> FACT C: It's possible to do a complete garbage collect, but this takes
> precious CPU cycles. That's worth it to avoid swapping, but it is a
> complete waste of time if the host computer has real memory to spare.
> It's not like unused memory saves power or some such; it's effectively
> a free resource, if it's there.
>
> Problem #1: On a 64-bit OS, addressable memory is in the terabytes.
> Depending on the host OS configuration, it is imaginable that you
> could fill an entire disk with virtual memory if the JVM keeps
> asking for more. There's got to be some limit, somewhere, especially
> if the box isn't dedicated to the JVM instance; continued memory
> munching would flood other apps out into swap and eventually turn
> everything into molasses. In theory OSes should do a better job of
> prioritizing the GUI (so that the user can force-kill the misbehaving
> JVM), but in practice even Linux doesn't always get this right, let
> alone OS X and Windows (which almost never do). So what IS a useful
> limit? 64MB is low, sure, especially on big iron desktops, but you
> can't magically say that there isn't a limit.
>
> Problem #2: Even IF you had full introspection of the swap mechanism,
> it's still hard to 'do the right thing'. Clearly you could increase the
> frequency of full garbage collections if you notice that lots of
> swapping is going on, but for security reasons you don't usually get
> any insight into the swapping behaviour of OTHER apps. So what if the
> JVM's continued memory munching is not causing the JVM itself to swap,
> but is causing other apps to swap in and out so often that it would
> have been far more efficient overall for the JVM to start swapping
> more? How do you even figure this stuff out?
>
> Problem #3: Most developer boxes have massive amounts of memory. But
> that's not true for every computer. How do you make sure that a
> developer gets timely notification that his app uses rather a lot of
> memory? What about memory leaks? If the JVM will happily munch 8+GB,
> then odds are excellent that a leaking app will never be found by the
> developers. Sure, Java apps don't often leak memory, but it does still
> happen. You could move the onus to the developers and force them to
> use -Xmx to fake a low-resource box, but if you think about it, that
> isn't such a good idea either: a few developers forget, and soon
> you'll have users who get confronted with the problem. I expect at
> least as many problems as we're seeing now with developers releasing
> programs that should have shipped with a JVM launch script or some
> such that adds more memory.
>
>
> I honestly don't know what C# does, but are you certain that there
> isn't an -Xmx equivalent that's simply been defaulted to something
> higher?
>
> I'm not trying to argue that this is the best solution. Far from it.
> I'm merely saying that there's no easy fix. Here are some hard fixes:
>
>  - profiling: Pretty much force memory profiling onto the developer:
> any app that steadily eats more memory should automatically trigger a
> popup of some sort explaining "perhaps your app leaks memory over
> time". (The JVM would need a 'developer mode' of some sort; possibly
> we could just use 'debugger attached' as the flag for this, but plenty
> of people don't develop using a debugger, ouch.) Unfortunately this
> isn't the only fashion in which memory can leak.
>
>  - expected memory usage feedback: Have an API of sorts where the
> developer can tell the JVM how much memory the app ought to be taking,
> in very rough terms, in as many axes as you want. In other words, if
> you're a web server, you might report the number of running
> connections as one axis, and the number of cached files as another.
> The JVM can then apply some heuristics to tell the difference between
> a big app and a leaking app: if similar numbers on each axis result in
> wildly different memory requirements, generally growing larger the
> longer the JVM has been running, a leak is likely. Just having better
> methods to interact with the JVM could allow you to write a library
> that does this.
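No such JVM API exists; purely as a sketch of what a library built on this idea might look like, with every name here (MemoryAxes, reportAxis, bytesPerUnitLoad) invented for illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the "expected memory usage feedback" idea:
// the app reports rough load axes, and the library correlates them
// with actual heap use. This is not a real JVM facility.
public class MemoryAxes {
    private final Map<String, Long> axes = new LinkedHashMap<String, Long>();

    // e.g. reportAxis("connections", 120) or reportAxis("cachedFiles", 30)
    public void reportAxis(String name, long value) {
        axes.put(name, value);
    }

    // Crude heuristic: heap bytes used per unit of total reported load.
    // If this ratio keeps climbing while the axes stay flat, a leak is
    // the likely explanation rather than legitimate growth.
    public double bytesPerUnitLoad() {
        long load = 0;
        for (long v : axes.values()) load += v;
        if (load == 0) return 0;
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / load;
    }
}
```

A real implementation would sample this ratio over time and alert on a sustained upward trend, not on any single reading.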
>
>  - A simple one: Have a 'greedy' flag that tells the JVM to assume it's
> the only important thing running on the box: eat all the memory you
> need, don't worry about nicing up your threads for the rest of the OS
> too much, etcetera. I don't think this is very practical, because
> usually there are slave processes running on the box that assist the
> JVM and shouldn't be drowned out, such as, say, the database. In
> other words, the number of apps where -Xgreedy would be appropriate is
> too small to bother with having the setting.
>
> On Jan 1, 4:00 am, sherod <steven.he...@gmail.com> wrote:
>> It's things like this that give Java such a low reputation in some
>> operational areas.
>>
>> This is how the 'real world' should work:
>>
>> System monitoring software watches memory usage on a process /
>> machine.
>> It alerts operational staff when usage moves beyond acceptable bounds.
>> Investigation starts, action is taken, and the app stays up.
>> As RAM usage goes up and we move to virtual memory/swap, you'll find
>> that the machine may become very slow as it spends most of its time
>> paging, but it stays up.
>>
>> With a hard limit on heap size, instead, my app / app server hits a
>> limit and dies in some creative and unusual manner.
>>
>> We can all say 'oh, you should know how much memory your app uses',
>> etc. etc. ... but that's just like saying we could eliminate the road
>> toll by all driving safely; despite best intentions, this isn't always
>> possible or even likely.
>>
>> A case this week at work: two MicroStrategy developers are running a
>> report which crashes with an Out of Memory error on the Java VM.
>> They say 'oh, we need to reboot the server' and 'we should move to a
>> 64-bit server'. They are using a Java-based product but aren't Java
>> developers; they have no idea there even is a Java VM. On
>> investigation, the embedded Java VM is set to have a 256M heap size!
>>
>> Yes, you can bitch and moan about it, but this is how the real world
>> works.
>>
>> On Jan 1, 5:45 am, "Matthew Beldyk" <matthew.bel...@gmail.com> wrote:
>>
>> > My impression was that a fixed size of the heap was designed to keep
>> > the program from using all the RAM on the machine.  Personally, I'd
>> > rather have a single program crash than have an entire machine come
>> > to its knees (I no longer have any machines running a single
>> > application unless they are running some low-level embedded
>> > operation, and that's a completely different beast).
>>
>> > I will admit, 64M is a little on the low side for modern desktop
>> > applications (this from a guy with 8 gigs of RAM on his workstation,
>> > so take that statement with a grain of salt).  But 64M is actually
>> > on the large side for some embedded systems.  I'm going to guess
>> > that 64M was decided upon as a good place to start; I imagine it
>> > would cover most usual programs.
>>
>> > Configuring the maximum heap size is also fairly trivial (the -Xms and
>> > -Xmx flags, I believe).  And having a ballpark idea how much ram your
>> > program should use should be a basic benchmarking test before you put
>> > something into production (I've been guilty of missing that test
>> > before and won't make that mistake again).
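One way to fold that check into the app itself: Runtime.maxMemory() reports the effective -Xmx, so a startup sanity check can confirm the flag took effect. A small sketch, where the 128 MB threshold is just an example figure from an imagined benchmarking pass:

```java
// Sketch: a startup sanity check against the benchmarked heap size.
public class HeapCheck {
    public static void main(String[] args) {
        // Runtime.maxMemory() reflects the -Xmx setting (or its default)
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: ~" + maxMb + " MB");
        if (maxMb < 128) { // example threshold from a benchmarking pass
            System.out.println(
                "Warning: heap is below the size this app was benchmarked at.");
        }
    }
}
```

Running it as `java -Xmx256m HeapCheck` versus plain `java HeapCheck` makes the difference visible immediately, which is exactly the kind of check that catches a forgotten flag before production.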
>>
>> > All this being said, I have been burnt by this with tomcat before when
>> > we forgot to configure this correctly and ended up with some very
>> > strange behaviors (suddenly some of our applications could no longer
>> > find libraries that tomcat was dropping out of the heap; I don't know
>> > the exact details as I was on vacation that week and this is all
>> > hearsay).
>>
>> > In my opinion, the fixed max heap size is a required annoyance.
>> > Unless I were better able to manually manage memory usage in Java,
>> > I'm disinclined to allow a program's RAM usage to grow unchecked.
>>
>> > -Matt
>>
>> > On Wed, Dec 31, 2008 at 10:54 AM, phil.swen...@gmail.com
>>
>> > <phil.swen...@gmail.com> wrote:
>>
>> > > I've never understood why Sun chose to have a "max heap size"
>> > > setting and default it to 64 megs.  To figure out what your max
>> > > heap size should be, you pretty much have to use trial and error.
>> > > This makes Java inherently unstable.  I can't count the # of times
>> > > I've had processes crash with an OutOfMemoryError because the heap
>> > > size is set either to the default 64 megs or too low.
>>
>> > > Why not do what every other runtime does and just allocate memory as
>> > > needed?  And what exactly does the max heap size setting do anyway?
>>
>> > --
>> > Calvin: Know what I pray for?
>> > Hobbes: What?
>> > Calvin: The strength to change what I can, the inability to accept
>> > what I can't, and the incapacity to tell the difference.

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups "The 
Java Posse" group.
To post to this group, send email to javaposse@googlegroups.com
To unsubscribe from this group, send email to 
javaposse+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/javaposse?hl=en
-~----------~----~----~----~------~----~------~--~---
