Re: [appengine-java] Re: memcache best practice or framework

2010-07-16 Thread John Patterson


On 17 Jul 2010, at 08:33, Shawn Brown wrote:

Aaah, stick-cache has dependencies on twig and
http://code.google.com/p/guava-libraries/

The utilities from twig can be pulled out easily but I haven't looked
at guava-libraries/


Sorry, the Twig ones slipped in by mistake.  I've just pushed an update
to remove them.  Mainly it was a class that helped reduce the size of
serialized instances by not including a class descriptor.  Now it uses
plain Java serialization.
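
For context, "plain Java serialization" here just means the standard
ObjectOutputStream round trip.  The sketch below is an illustration only
(not stick-cache's actual code); it shows a cache value being turned into
bytes with the class descriptor included, which is exactly the overhead
the old Twig helper tried to avoid.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Minimal sketch (not stick-cache source): plain Java serialization of a
// cache value, class descriptor and all.
public class PlainSerialization {

    static byte[] toBytes(Serializable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(value);  // writes the class descriptor plus field data
        out.close();
        return bytes.toByteArray();
    }

    static Object fromBytes(byte[] data) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data));
        return in.readObject();
    }
}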





Re: [appengine-java] Re: memcache best practice or framework

2010-07-16 Thread Shawn Brown
>
>> What's the usage for in-memory caching via stick-cache?
>
> It now supports bulk gets for all caches - i.e. memcache and datastore using
> their respective bulk get methods

Aaah, stick-cache has dependencies on twig and
http://code.google.com/p/guava-libraries/

The utilities from twig can be pulled out easily but I haven't looked
at guava-libraries/

I do wonder whether the cost of extra jars on the classpath at cold-start
time outweighs what is saved by using local/memcache instead of the
datastore alone for session storage.  In other words, is loading Guava
worth it?  Maybe I can just snip out the relevant code.

Shawn




Re: [appengine-java] Re: memcache best practice or framework

2010-07-10 Thread John Patterson


On 11 Jul 2010, at 09:12, Shawn Brown wrote:


What's the usage for in-memory caching via stick-cache?


It now supports bulk gets for all caches - i.e. memcache and the datastore,
using their respective bulk-get methods.
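
To make that concrete, here is a rough sketch of what a combined bulk lookup
can look like against the low-level App Engine APIs (MemcacheService.getAll
and the multi-key DatastoreService.get).  It is an illustration of the idea
only, not stick-cache's actual implementation, and it assumes entities were
cached in memcache under their datastore Keys.

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: bulk lookup that tries memcache first, then falls back to a
// single bulk datastore get for the keys that were missing.
public class BulkGetExample {

    public static Map<Key, Entity> bulkGet(Collection<Key> keys) {
        MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();
        DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

        // One round trip for all the memcache hits.
        Map<Object, Object> cached = memcache.getAll(new ArrayList<Object>(keys));

        Map<Key, Entity> result = new HashMap<Key, Entity>();
        List<Key> missing = new ArrayList<Key>();
        for (Key key : keys) {
            if (cached.containsKey(key)) {
                result.put(key, (Entity) cached.get(key));
            } else {
                missing.add(key);
            }
        }

        // One bulk datastore get for the misses; keys not found are simply absent.
        if (!missing.isEmpty()) {
            result.putAll(datastore.get(missing));
            // A real cache would also write these entities back to memcache here.
        }
        return result;
    }
}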





Re: [appengine-java] Re: memcache best practice or framework

2010-07-10 Thread John Patterson


On 11 Jul 2010, at 09:12, Shawn Brown wrote:


MemoryCache mc = new MemoryCache(50);


but on Jun 22, 2010, that class was deleted:

revision ded84586e4: Delete /src/main/java/com/vercer/cache/MemoryCache.java


Is it no longer supported?


Try an update now - it is back. 
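
For anyone wondering what MemoryCache(50) roughly amounts to, here is a
purely illustrative sketch of a bounded in-memory cache with LRU eviction
built on LinkedHashMap.  It is not the actual com.vercer.cache.MemoryCache
source - the real class may use a different eviction policy or concurrency
strategy.

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only - not the real MemoryCache.  A bounded map with LRU
// eviction, wrapped so concurrent requests hitting the same JVM instance
// don't corrupt it.
public class BoundedMemoryCache<K, V> {

    private final Map<K, V> map;

    public BoundedMemoryCache(final int capacity) {
        this.map = Collections.synchronizedMap(
                new LinkedHashMap<K, V>(capacity, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                        return size() > capacity;  // drop the least recently used entry
                    }
                });
    }

    public void put(K key, V value) {
        map.put(key, value);
    }

    public V get(K key) {
        return map.get(key);
    }
}

Usage would then be along the lines of
BoundedMemoryCache<String, Object> cache = new BoundedMemoryCache<String, Object>(50);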
 





Re: [appengine-java] Re: memcache best practice or framework

2010-07-10 Thread Shawn Brown
Hi John,


> There is also a CompositeCache class that allows you to layer the caches so
> that it first checks in-memory, then memcache , then the datastore.  Puts go
> to all levels and cache hits refresh the higher levels.  e.g. if an item is
> not in-memory and has been flushed from memcache but is still present in the
> datastore then the other two will be updated.
>
> http://code.google.com/p/stick-cache/

What's the usage for in-memory caching via stick-cache?

The docs show this:
MemoryCache mc = new MemoryCache(50);


but on Jun 22, 2010, that class was deleted:

revision ded84586e4: Delete /src/main/java/com/vercer/cache/MemoryCache.java

Is it no longer supported?


Shawn




[appengine-java] Re: memcache best practice or framework

2010-06-23 Thread Nacho Coloma
SimpleDS also includes a two-level cache implementation. We use the
same approach as Hibernate: the first-level cache is bound to the
current thread and is discarded after the response is committed, and
the second-level cache relies on Memcache.

The tricky part was dealing with partial cache results when invoking a
multi-key get().

Feel free to inspect the code to get your own ideas:
http://code.google.com/p/simpleds/source/browse/#svn/trunk/src/main/java/org/simpleds/cache
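
As a rough illustration of that pattern (not the actual SimpleDS code - the
class and method names below are made up), a thread-bound first level in
front of memcache, with a multi-key get that may be satisfied only partially
by each level, could look like this:

import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a two-level cache in the style described above (not SimpleDS
// code): level 1 is a per-thread map discarded at the end of the request,
// level 2 is memcache.
public class TwoLevelCache {

    // First-level cache, bound to the current request thread.
    private static final ThreadLocal<Map<String, Object>> FIRST_LEVEL =
            new ThreadLocal<Map<String, Object>>() {
                @Override
                protected Map<String, Object> initialValue() {
                    return new HashMap<String, Object>();
                }
            };

    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    // Returns whatever could be found; missing keys are simply absent from the result.
    public Map<String, Object> getAll(Collection<String> keys) {
        Map<String, Object> local = FIRST_LEVEL.get();
        Map<String, Object> result = new HashMap<String, Object>();

        // Level 1: thread-local hits.
        List<String> stillMissing = new ArrayList<String>();
        for (String key : keys) {
            if (local.containsKey(key)) {
                result.put(key, local.get(key));
            } else {
                stillMissing.add(key);
            }
        }

        // Level 2: one bulk memcache call for the rest; promote hits into level 1.
        if (!stillMissing.isEmpty()) {
            Map<Object, Object> fromMemcache =
                    memcache.getAll(new ArrayList<Object>(stillMissing));
            for (Map.Entry<Object, Object> entry : fromMemcache.entrySet()) {
                String key = (String) entry.getKey();
                result.put(key, entry.getValue());
                local.put(key, entry.getValue());
            }
        }
        // Keys still absent from 'result' have to be loaded from the datastore
        // by the caller and written back to both levels.
        return result;
    }

    // Called after the response is committed, e.g. from a servlet filter.
    public static void clearRequestCache() {
        FIRST_LEVEL.remove();
    }
}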

On Jun 18, 10:08 am, Toby  wrote:
> Hello Ikai,
>
> Sorry but I might have worded my post incorrectly. I do not doubt
> memcache, in fact I use it very heavily and it is great.
>
> On Google I/O I learned that memcache is actually hosted on another
> machine and that for each request a significant network overhead is
> involved. In appstats I see about 20ms for a single request. Simple
> data-store requests are also between 20-50ms. So in fact memcache is
> good as second level cache but shows to be a bottleneck for data that
> I heavily use over and over again (plus it costs valuable API time).
>
> So the idea was to have some "first level" in-memory cache living on
> the same machine that is holding heavily used data to prevent the
> server round trip to the cache machine. Now you might tell me that my
> application is designed wrong. And indeed I could just put this in by
> myself. But as you also pointed out there are tricky parts like memory
> boundaries and other things to take care of. This is why I started
> this thread to see if someone has come up with a good solution.
>
> I think multiple cache layers are kind of a standard approach that has
> shown its usefulness in many places.  It would be good to have that as
> part of GAE. Of course this is not the most urgent issue.
>
> Cheers,
> Toby
>
> On Jun 17, 6:46 pm, "Ikai L (Google)"  wrote:
>
> > What aspect of Memcache is too slow? Have you run AppStats yet?
>
> > The overhead of Memcache is low enough for many of the top sites on the
> > internet to use. Some sites are listed on the main page here:
>
> >http://memcached.org/
>
> > As you move closer and closer to local memory, the volatility of your cache
> > will increase, so the only items I would store in local memory are items
> > that are okay to lose. If you want, you can probably layer your application
> > to fetch from memcache -> fetch from authoritative source and place into
> > local memory on a cache miss. Just be aware that there are process memory
> > limits, and exceeding these will force a restart.
>
> > On Thu, Jun 17, 2010 at 2:30 AM, Toby  wrote:
> > > Hello,
>
> > > I wonder if there is a framework (such as Objectify) also for
> > > memcache.  As memcache is not on the local machine it is rather slow,
> > > especially for reoccurring requests. So on Google I/O they suggested
> > > to build your own in-memory layer around that. I know that is an easy
> > > task, still I wonder if there might already be a framework for
> > > that :-)
>
> > > Also I wonder if someone can give me some ideas about how to build an
> > > in-memory cache. I guess it is just a static hashmap. But will it
> > > survive multiple requests? How much can I put in there?
>
> > > As the problem of memcache is apparently the high latency for the
> > > network traffic to the server I had the idea to store the in-memory
> > > cache in the memcache, de-serialize it and then use it?
>
> > > Do you have other ideas how to speed up caching?
>
> > > Thank you for your advice,
>
> > > Toby
>




Re: [appengine-java] Re: memcache best practice or framework

2010-06-22 Thread John Patterson
Hey Toby, yes that is exactly what Stick tries to do - by also storing
data in the datastore it can survive memcache restarts, while still
giving the speed of in-memory access for some data.


I have always hated with a passion the APIs of JCache and EHCache.

On 22 Jun 2010, at 14:43, Toby wrote:


Hello John,

Thank you for your message. I was looking at your project and indeed
it does pretty much what I was looking for.

I think memcache is good for basic caching problems but it could be
made more efficient if it would keep most used data on the local
machine memory to cut the costs for the network overhead. The problem
is that from application side our local data gets lost when the
application is cycled out. Google could provide a solution that
survives this time-out and that uses resources more efficiently.
Other discussions about ehcache show that there is a need for
something more sophisticated. Maybe something for the roadmap?

Cheers,
Toby


[appengine-java] Re: memcache best practice or framework

2010-06-22 Thread Toby
Hello John,

Thank you for your message. I was looking at your project and indeed
it does pretty much what I was looking for.

I think memcache is good for basic caching problems, but it could be
made more efficient if it kept the most-used data in local machine
memory to cut the network overhead. The problem is that, from the
application side, our local data is lost when the application instance
is cycled out. Google could provide a solution that survives this
time-out and uses resources more efficiently. Other discussions about
ehcache show that there is a need for something more sophisticated.
Maybe something for the roadmap?

Cheers,
Toby

On Jun 18, 10:33 am, John Patterson  wrote:
> Hi Toby,
>
> I made some code public that does what you describe: it is a simple
> cache interface that has implementations for in-memory, memcache and
> the datastore.  You get about 100MB of heap space to use which can
> significantly speed up your caching.
>
> There is also a CompositeCache class that allows you to layer the
> caches so that it first checks in-memory, then memcache, then the
> datastore.  Puts go to all levels and cache hits refresh the higher
> levels.  e.g. if an item is not in-memory and has been flushed from
> memcache but is still present in the datastore then the other two will
> be updated.
>
> http://code.google.com/p/stick-cache/
>
> Hope this helps,
>
> John

Re: [appengine-java] Re: memcache best practice or framework

2010-06-18 Thread John Patterson

Hi Toby,

I made some code public that does what you describe: it is a simple  
cache interface that has implementations for in-memory, memcache and  
the datastore.  You get about 100MB of heap space to use which can  
significantly speed up your caching.


There is also a CompositeCache class that allows you to layer the
caches so that it first checks in-memory, then memcache, then the
datastore.  Puts go to all levels and cache hits refresh the higher
levels.  e.g. if an item is not in-memory and has been flushed from
memcache but is still present in the datastore then the other two will
be updated.


http://code.google.com/p/stick-cache/
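
To illustrate the layering idea, here is a small sketch against a made-up
interface - it is not the actual com.vercer.cache API, whose signatures may
differ.  Caches are ordered fastest first, a hit at a lower level is copied
back into the faster levels that missed it, and puts go to every level.

import java.util.Arrays;
import java.util.List;

// Illustrative layering sketch, not the real stick-cache classes.
interface SimpleCache<K, V> {
    V get(K key);
    void put(K key, V value);
}

class LayeredCache<K, V> implements SimpleCache<K, V> {

    private final List<SimpleCache<K, V>> levels;  // e.g. [memory, memcache, datastore]

    LayeredCache(SimpleCache<K, V>... levels) {
        this.levels = Arrays.asList(levels);
    }

    public V get(K key) {
        for (int i = 0; i < levels.size(); i++) {
            V value = levels.get(i).get(key);
            if (value != null) {
                // Refresh the faster levels that missed.
                for (int j = 0; j < i; j++) {
                    levels.get(j).put(key, value);
                }
                return value;
            }
        }
        return null;
    }

    public void put(K key, V value) {
        for (SimpleCache<K, V> level : levels) {
            level.put(key, value);
        }
    }
}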

Hope this helps,

John


On 18 Jun 2010, at 15:08, Toby wrote:


Hello Ikai,

Sorry but I might have worded my post incorrectly. I do not doubt
memcache, in fact I use it very heavily and it is great.

On Google I/O I learned that memcache is actually hosted on another
machine and that for each request a significant network overhead is
involved. In appstats I see about 20ms for a single request. Simple
data-store requests are also between 20-50ms. So in fact memcache is
good as second level cache but shows to be a bottleneck for data that
I heavily use over and over again (plus it costs valuable API time).

So the idea was to have some "first level" in-memory cache living on
the same machine that is holding heavily used data to prevent the
server round trip to the cache machine. Now you might tell me that my
application is designed wrong. And indeed I could just put this in by
myself. But as you also pointed out there are tricky parts like memory
boundaries and other things to take care of. This is why I started
this thread to see if someone has come up with a good solution.

I think multiple cache layers are kind of a standard approach that has
shown its usefulness in many places.  It would be good to have that as
part of GAE. Of course this is not the most urgent issue.

Cheers,
Toby







[appengine-java] Re: memcache best practice or framework

2010-06-18 Thread Toby
Hello Ikai,

Sorry but I might have worded my post incorrectly. I do not doubt
memcache, in fact I use it very heavily and it is great.

On Google I/O I learned that memcache is actually hosted on another
machine and that for each request a significant network overhead is
involved. In appstats I see about 20ms for a single request. Simple
data-store requests are also between 20-50ms. So in fact memcache is
good as second level cache but shows to be a bottleneck for data that
I heavily use over and over again (plus it costs valuable API time).

So the idea was to have some "first level" in-memory cache living on
the same machine that is holding heavily used data to prevent the
server round trip to the cache machine. Now you might tell me that my
application is designed wrong. And indeed I could just put this in by
myself. But as you also pointed out there are tricky parts like memory
boundaries and other things to take care of. This is why I started
this thread to see if someone has come up with a good solution.

I think multiple cache layers are kind of a standard approach that has
shown its usefulness in many places.  It would be good to have that as
part of GAE. Of course this is not the most urgent issue.

Cheers,
Toby


On Jun 17, 6:46 pm, "Ikai L (Google)"  wrote:
> What aspect of Memcache is too slow? Have you run AppStats yet?
>
> The overhead of Memcache is low enough for many of the top sites on the
> internet to use. Some sites are listed on the main page here:
>
> http://memcached.org/
>
> As you move closer and closer to local memory, the volatility of your cache
> will increase, so the only items I would store in local memory are items
> that are okay to lose. If you want, you can probably layer your application
> to fetch from memcache -> fetch from authoritative source and place into
> local memory on a cache miss. Just be aware that there are process memory
> limits, and exceeding these will force a restart.
>
>
>
> On Thu, Jun 17, 2010 at 2:30 AM, Toby  wrote:
> > Hello,
>
> > I wonder if there is a framework (such as Objectify) also for
> > memcache.  As memcache is not on the local machine it is rather slow,
> > especially for reoccurring requests. So on Google I/O they suggested
> > to build your own in-memory layer around that. I know that is an easy
> > task, still I wonder if there might already be a framework for
> > that :-)
>
> > Also I wonder if someone can give me some ideas about how to build an
> > in-memory cache. I guess it is just a static hashmap. But will it
> > survive multiple requests? How much can I put in there?
>
> > As the problem of memcache is apparently the high latency for the
> > network traffic to the server I had the idea to store the in-memory
> > cache in the memcache, de-serialize it and then use it?
>
> > Do you have other ideas how to speed up caching?
>
> > Thank you for your advice,
>
> > Toby
>
>
> --
> Ikai Lan
> Developer Programs Engineer, Google App Engine
> Blog: http://googleappengine.blogspot.com
> Twitter: http://twitter.com/app_engine
> Reddit: http://www.reddit.com/r/appengine
