The changes look awesome. I'll be taking a closer look at them in the near
future. Great work.

Seems I'm going to have to look into switching to the Berkeley DB for my
memory cache if that's the case.

I'm in the midst of (finally) getting my project set up with my brand new
CVS login/password...

And I'm having a tough time getting the JISP stuff to work.

I could only find v3.0 of JISP, but apparently JCS is using v2.x, and
there's been a minor change to the interface.

How do we deal with situations like this? The solutions I can see are:
tell me to commit the upgrade to v3.0 (perhaps in a branch?), or clue me
in to the location of the v2.x jisp.jar.

Thanks!

-Travis Savo <[EMAIL PROTECTED]>

-----Original Message-----
From: Aaron Smuts [mailto:[EMAIL PROTECTED]
Sent: Thursday, July 15, 2004 5:08 PM
To: 'Turbine JCS Developers List'
Subject: RE: MaxObjects not meaningful?


The Berkeley DB JE uses a memory cache.  They manage it based on either
size (I guess they serialize, as we do for the JCSAdmin.jsp) or on heap
percentage.  The heap percentage approach will not work properly on all
VMs, but it would still be nice.  The problem is that our memory is
managed on a per-region basis.  To do effective heap size management we
might need a memory cache that works across all regions, one that can
be plugged in like the other auxiliaries.  All regions could then
compete for space, which could be good and bad.  There is no easy way
to do this given the current architecture.  Right now you could create
a memory auxiliary and configure it like you do the disk cache.  You
could set the default memory size for the region to 0 and it would be
bypassed, sort of: a 0-size default memory cache would still try to
spool items if you also had a disk cache for the region.
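
For example, a pure-disk region might look something like this in
cache.ccf (the region name and DiskPath are made-up placeholders, just
to show the shape of the config):

  jcs.region.myRegion=DC
  jcs.region.myRegion.cacheattributes=org.apache.jcs.engine.CompositeCacheAttributes
  jcs.region.myRegion.cacheattributes.MaxObjects=0
  jcs.region.myRegion.cacheattributes.MemoryCacheName=org.apache.jcs.engine.memory.lru.LRUMemoryCache

  jcs.auxiliary.DC=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheFactory
  jcs.auxiliary.DC.attributes=org.apache.jcs.auxiliary.disk.indexed.IndexedDiskCacheAttributes
  jcs.auxiliary.DC.attributes.DiskPath=/tmp/jcs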

One annoying thing about the current memory cache setup is that it uses
the cache configuration object.  This forces us to put in unused
configuration parameters...

I've been thinking of a way to do this.  

One option, after my changes (which I'll put in tonight), is to just
use the disk cache for a region and set the memory size to 0.  I
changed the behavior so that items are not removed from disk purgatory
on get, and so that when the memory size is 0, items pulled from the
disk are not sent to the memory cache only to be respooled.  This makes
a pure disk cache configuration run very efficiently.  Also, I now have
a recycle bin for the disk cache that allows it to reuse empty slots,
and I have 99% of the problems worked out of a real-time disk
defragmentation option.  I'll have this stuff in later tonight.
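
The recycle bin is essentially a free list of empty slot positions
keyed by slot size.  Conceptually it's something like this (a sketch of
the idea only, not the actual code going in):

  import java.util.LinkedList;
  import java.util.SortedMap;
  import java.util.TreeMap;

  // Tracks freed slots in the data file so new items can reuse them
  // instead of always being appended to the end.
  public class RecycleBin
  {
      // free slot positions keyed by slot size
      private final SortedMap freeSlots = new TreeMap();

      // record a slot freed by a removal
      public synchronized void add( int size, long position )
      {
          Integer key = new Integer( size );
          LinkedList list = (LinkedList) freeSlots.get( key );
          if ( list == null )
          {
              list = new LinkedList();
              freeSlots.put( key, list );
          }
          list.add( new Long( position ) );
      }

      // return the position of a free slot at least as big as
      // size, or -1 to tell the caller to append to the file
      public synchronized long take( int size )
      {
          SortedMap bigEnough = freeSlots.tailMap( new Integer( size ) );
          if ( bigEnough.isEmpty() )
          {
              return -1;
          }
          Integer key = (Integer) bigEnough.firstKey();
          LinkedList list = (LinkedList) bigEnough.get( key );
          long position = ( (Long) list.removeFirst() ).longValue();
          if ( list.isEmpty() )
          {
              freeSlots.remove( key );
          }
          return position;
      }
  }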

Aaron

> -----Original Message-----
> From: Travis Savo [mailto:[EMAIL PROTECTED]
> Sent: Thursday, July 15, 2004 3:54 PM
> To: 'Turbine JCS Developers List'
> Subject: MaxObjects not meaningful?
> 
> In my experience, one particular setting has proven to be the most
> difficult to tune correctly: MaxObjects.
> 
> I should qualify that I have a pretty specialized setup: I have about
> 100 regions with the capacity to generate many more objects than I
> have room for, despite having 1.8 GB of memory available, and these
> objects get generated once, read all the time, and are changed almost
> never. Latency is our primary enemy, but even a remote cache get is
> significantly faster than 'doing the work'. The LRU is ideal for our
> setup because the most common items stay in memory indefinitely
> (maxTime of several months) while idle and infrequently used objects
> drop out.
> 
> Unfortunately, for any given region the size of a cached object
> varies pretty wildly, which forms the basis of my dilemma.
> 
> The problem is that at peak times the cache can do one of two things:
> load itself up with small objects, hitting MaxObjects and stopping,
> thus not making use of the available resources and incurring
> additional (and unnecessary) load on the rest of the infrastructure
> (disk, remote cache, and the DB); or load itself up with large
> objects and run out of memory before hitting MaxObjects.
> 
> The obvious solution here is (in my scenario) to replace MaxObjects
> with MaxMemory.
> 
> Unfortunately this is, of course, easier said than done.
> 
> The only option I've encountered so far is to serialize the object,
> and even that's only a rough guess of what it's actually taking up in
> memory: it adds the overhead of serialization to the size and doesn't
> account for transient fields, which definitely do take up space in
> memory. Of course, if we're already doing this for 'Deep Copy' it may
> be a non-issue.
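> 
> Something like this sketch is what I have in mind (the class is made
> up for illustration; it just counts the bytes writeObject produces):
> 
>   import java.io.IOException;
>   import java.io.ObjectOutputStream;
>   import java.io.OutputStream;
>   import java.io.Serializable;
> 
>   public class SizeEstimator
>   {
>       // rough size estimate: the object's serialized length in bytes
>       public static long estimate( Serializable obj ) throws IOException
>       {
>           CountingOutputStream counter = new CountingOutputStream();
>           ObjectOutputStream oos = new ObjectOutputStream( counter );
>           oos.writeObject( obj );
>           oos.close();
>           return counter.count;
>       }
> 
>       // discards the bytes and just counts them
>       private static class CountingOutputStream extends OutputStream
>       {
>           long count = 0;
> 
>           public void write( int b )
>           {
>               count++;
>           }
> 
>           public void write( byte[] b, int off, int len )
>           {
>               count += len;
>           }
>       }
>   }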
> 
> The other solution I thought of was using
> (Runtime.totalMemory() - Runtime.freeMemory()). When this number rises
> above a threshold, the memory cache can't grow any further (which is
> the same as if we'd reached MaxObjects). This is perfect for my
> purposes, and it actually provides a more accurate heuristic of memory
> utilization, even if it's a less fine-grained one.
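> 
> i.e. something like this (sketch only; the threshold is made up):
> 
>   // refuse to grow the memory cache when free heap runs low
>   public class HeapGate
>   {
>       private static final long MIN_FREE_BYTES = 64 * 1024 * 1024;
> 
>       public static boolean canGrow()
>       {
>           Runtime rt = Runtime.getRuntime();
>           // free space in the current heap, plus room the heap
>           // can still expand into before hitting -Xmx
>           long free = rt.freeMemory()
>               + ( rt.maxMemory() - rt.totalMemory() );
>           return free > MIN_FREE_BYTES;
>       }
>   }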
> 
> Thoughts? Comments? Suggestions? Screams of agony? Does anyone else
> think this might be useful, or am I just off on a tangent?
> 
> -Travis Savo <[EMAIL PROTECTED]>