[appengine-java] Re: GAE + PDFJet

2010-03-12 Thread mobilekid
Here's a code snippet where I get java.lang.IllegalStateException:

byte[] rawData = pdf.getData().toByteArray();
resp.getOutputStream().write(rawData);
resp.getOutputStream().flush();

Please advise how to fix this. Thanks!

On Mar 11, 8:31 am, mobilekid  wrote:
> Hi there,
> I've created a servlet, which simply reads requests using Apache
> Commons FileUpload, and then if an image has been uploaded, it
> converts it into a .pdf file using the PDFJet open-source library. My question
> is: how do I send the generated PDF back to the client? Thanks!
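
A minimal sketch of sending the generated PDF back to the client (not from the
original post; the helper name is made up, and it assumes pdf.getData() returns
a ByteArrayOutputStream as in the snippet above). One common cause of the
IllegalStateException is mixing getWriter() and getOutputStream() on the same
response.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServletResponse;

public class PdfResponseHelper {

    // Writes an in-memory PDF (e.g. the ByteArrayOutputStream behind
    // pdf.getData() above) to the servlet response.
    public static void writePdf(HttpServletResponse resp, ByteArrayOutputStream pdfData)
            throws IOException {
        byte[] rawData = pdfData.toByteArray();

        // Set headers before writing the body, and use only getOutputStream();
        // mixing it with getWriter() on the same response throws IllegalStateException.
        resp.setContentType("application/pdf");
        resp.setContentLength(rawData.length);

        ServletOutputStream out = resp.getOutputStream();
        out.write(rawData);
        out.flush();
    }
}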




[appengine-java] Re: Not able to write string arrays in data store

2010-03-12 Thread datanucleus
You mean you now have an ArrayList<ArrayList<String>>, as opposed to the array
mentioned in your first post ?

ArrayList<ArrayList<String>> is not a supported property type; ArrayList<String> is.
You could obviously add a dummy class (e.g. MyTempClass) as persistable
with the inner ArrayList<String> in it as a field, so the original field
becomes ArrayList<MyTempClass>.
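
A rough JDO sketch of that dummy-class workaround (names are illustrative; the
exact key type required for owned child objects should be checked against the
GAE docs):

import java.util.ArrayList;
import java.util.List;
import javax.jdo.annotations.IdGeneratorStrategy;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;
import com.google.appengine.api.datastore.Key;

// Wraps the inner ArrayList<String> so that the outer field can be a
// supported collection type: ArrayList<MyTempClass>.
@PersistenceCapable
public class MyTempClass {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Key id;

    @Persistent
    private List<String> values = new ArrayList<String>();

    public List<String> getValues() { return values; }
}

// In the owning class the original field then becomes:
//   @Persistent
//   private List<MyTempClass> rows = new ArrayList<MyTempClass>();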




Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread Jeff Schnitzer
On Thu, Mar 11, 2010 at 8:56 PM, John Patterson  wrote:
>
> But for typesafe changes large or small Twig supports data migration in a
> much safer, more flexible way than Objectify.  Read on for details.

You are increasing my suspicion that you've never actually performed
schema migrations on big, rapidly changing datasets.

> Cool, the @AlsoLoad is quite a neat feature.  Although very limited to
> simple naming changes and nothing structural.  All this is based on a
> dangerous assumption that you can modify "live" data in place.  Hardly
> bullet proof.

Actually, @AlsoLoad (in conjunction with @LoadOnly and the @PrePersist
and @PostLoad lifecycle callbacks) provides an enormous range of
ability to transform your data.  I know, I've had to do more of it
than I would like to admit.  You can rename fields, change types,
arbitrarily munge data, split entities into multiple parts, combine
multiple entities into one, convert between child entities and
embedded parts, etc.

In most cases you can do this on a live running system.  That is the
entire point, actually - our goal is zero downtime for schema
migration.  The general approach:

 * Modify your entities to save in your new format.
 * Use Objectify's primitives so that data loads in both the old
format and the new format.
 * Test your code against your local datastore, or if you're deeply
concerned, against exported data in another appid.
 * Deploy your new code, letting the natural churn update your database.
 * Fire off a batch job at your leisure to finish it off.
 * Remove the extra loading logic from your code when you're done.

Not every migration works exactly the same way, but the tools are
there.  I know from experience that it works and works well.
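
As a small sketch of the rename case (class and property names are hypothetical,
and annotation packages can differ between Objectify versions):

import javax.persistence.Id;
import com.googlecode.objectify.annotation.AlsoLoad;

public class Person {
    @Id Long id;

    // Old entities stored this property as "fullName"; @AlsoLoad lets them
    // load into the renamed field, while new saves write it as "name".
    // A batch job can later re-save the stragglers.
    @AlsoLoad("fullName")
    String name;
}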

> The Twig solution is to create a new version of the type (v2) and process
> your changes while leaving the live data completely isolated and safe.  Then
> after you have tested your changes you bump up the version number of your
> live app.

This is cumbersome and inelegant compared to Objectify's solution.
You require the developers to 1) create a parallel hierarchy of
classes and 2) create code (possibly scattered across the app) to
write out both formats.  You require a complete duplication of the
datastore kind - potentially billions of entities occupying hell only
knows how much space.  It could take *weeks* to do even minor schema
migrations this way.  And if you want to make another minor change
halfway through the process?  Start from scratch!  In the meantime,
your customers are wondering why the new feature isn't live yet.

Also... do you realize how slow and expensive deletes are in
appengine?  Duplicating the database is just not an option.  Not with
the Mobcast 2.0 dataset (not live yet, I should be able to talk about
it more freely in a month or two).  Certainly not with Scott's
dataset, which may end up caching a significant chunk of Flickr,
Picasa, and Facebook if it takes off.

> What is with your obsession with batch gets?  I understand they are central
> in Objectify because you are always loading keys.  As I said already - even
> though this is not as essential in Twig it will be added to a new load
> command.

Batch gets are *the* core feature of NoSQL databases, including the
GAE datastore.  Look at these graphs:

http://code.google.com/status/appengine/detail/datastore/2010/03/12#ae-trust-detail-datastore-get-latency
http://code.google.com/status/appengine/detail/datastore/2010/03/12#ae-trust-detail-datastore-query-latency

Notice that a get()'s average latency is 50ms and a query()'s average
latency is 500ms.  Last week the typical query was averaging
800-1000ms with frequent spikes into 1200ms or so.

Deep down in the fiber of its being, BigTable is a key-value store.
It is very very efficient at doing batch gets.  It wants to do batch
gets all day long.  Queries require touching indexes maintained in
alternative tablets and comparatively, the performance sucks.

I'm by no means a BigTable expert, but I have a significant
professional interest in being able to read & write a lot of data.  I
could not implement (perhaps better said I couldn't scale) Mobcast
without batch gets and sets.

To be honest, I'm not wholly thrilled with the performance of batch
get/put operations on appengine either.  Cassandra folks are claiming
10k/s writes *per machine*.  Tokyo Tyrant folks are claiming 20k+/sec
writes.  Reads are even faster!  True, these systems are not as
full-featured as the appengine datastore... but we're talking at least
two full orders of magnitude difference!  Ouch.

Why am I obsessed with batch gets?  Because they're essential for
making an application perform.  They're why there is such a thing as a
NoSQL movement in the first place.
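
For concreteness, a sketch of a batch get with the low-level datastore API (the
"Person" kind is illustrative): one round trip fetches every requested entity
by key, instead of a query or N single gets.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

public class BatchGetExample {
    public static Map<Key, Entity> fetchPersons(long... ids) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        List<Key> keys = new ArrayList<Key>();
        for (long id : ids) {
            keys.add(KeyFactory.createKey("Person", id));
        }
        return ds.get(keys);  // one batch round trip for all keys
    }
}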

> Oops I didn't post the CookBook page in the end.  Rest assured it is a
> trivial addition and I'll update the docs.
> It is also often better to cache above the data layer - hardly the killer
> feature you claim.

If you have a read-heavy app (and most are), nothing gives you
bang-for-the-buck like adding one little annotation and pulling your
data out of memcache instead of the datastore.  Caching at higher
levels *might* save you some additional cpu cycles, but it's certainly
a lot more work.

Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread Scott Hernandez
I hope the general public is enjoying this discussion. There *are*
lots of useful points in this thread, really :) It isn't just about
Twig and Objectify; it is clearly coming down to "philosophical
differences".

On Fri, Mar 12, 2010 at 6:52 AM, John Patterson  wrote:
>
> On 12 Mar 2010, at 13:01, Scott Hernandez wrote:
>>
>> We have a different idea about live systems and managing
>> upgrades/deployments. To answer the question below, you can always
>> upgrade the data in place because you will always need a way to load
>> that data into the current object representation. It just may be that
>> you need to take the system offline to migrate the data.
>
> In my app taking the system off-line while re-processing the data is not an
> option.  The actual reprocessing can take days.
>
> I see your point though.  My data is mainly static so I don't have the
> issues you describe of keeping the new version in sync with the live
> version.  However, I can think of several solutions to this while still
> having the safety advantage of independent "tables" in the datastore.
>
> The simple way would be to, during the migration period, create both a
> Person and a PersonNew whenever a user makes an edit.
>
> Another way would be to do this transparently by creating a simple "forked
> translator" that stored the Person as two Entities instead of one.  I won't
> go into details but it is certainly not a major extension.
>
> I would probably go for the first method if it was just for my app - but
> this is the kind of feature that could be pushed into the framework to make
> it easier for others to benefit too.

Right, that brings up another issue: code maintenance. You are now
changing the types of your POJOs (two versions are represented by two
different Java classes in your project at the same time). If you pass
those objects through many layers of code (Entities -> DAO ->
Service-Framework -> GWT/JS) you might need to either abstract the
implementation or change the objects across all the layers of your
application. This may be a good practice in large, versioned systems,
but in small apps, requiring that schema changes double your entity
classes, and possibly change your interfaces with new (temporary)
classes, will probably only make things more complicated. The code is
going to get more complicated, and possibly in the wrong places.

I would assume you do something like this when versioning an object:
1) duplicate the class code, 2) rename the copy to something temporary
(with the old version number), and 3) increment the version on the
original class. Now what? It gets complicated. You now have two
versions of an object, where only the old one has data until you do a
mass migration. If you do just-in-time migration you have all kinds of
issues and you are back to a solution like we have come up with in
Objectify. If you wait and do a full data migration then you have your
data offline until that migration is done.

It seems like the better solution, if you are concerned about
data loss and creating backups, would be to have a goal of in-place
migration with a backup option. The framework could create a temporary
(backup) copy of the old state (call it <kind>-migration-x, or
something else unlikely to be used). Then you can clean up your backup
data when things go well, or you can reprocess the backup data if
there are problems (the first n times).

Also, it isn't that there is no testing for me, using Objectify. You
can easily write unit tests that populate the local datastore with the
old version of your data and then run tests for the migration
(in-place, or otherwise). Either way you need to be careful when
migrating data from one schema to another. It is the same in SQL, or
any other database. It would be much more helpful if App Engine had some
sort of backup, or snapshot, for us to leverage.
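
A sketch of such a test using the SDK's local service test helpers (kind and
property names, and the migration under test, are hypothetical):

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.tools.development.testing.LocalDatastoreServiceTestConfig;
import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

public class MigrationTest {
    private final LocalServiceTestHelper helper =
            new LocalServiceTestHelper(new LocalDatastoreServiceTestConfig());

    @Before public void setUp() { helper.setUp(); }
    @After public void tearDown() { helper.tearDown(); }

    @Test
    public void oldFormatEntitiesStillLoad() throws Exception {
        // Seed the local datastore with an old-schema entity.
        Entity old = new Entity("Person");
        old.setProperty("fullName", "Ada Lovelace");  // old property name
        DatastoreServiceFactory.getDatastoreService().put(old);

        // ... load it through the persistence layer under test and assert
        // that the migrated/renamed field is populated ...
    }
}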

>>> The versions remain completely separate.  Modifying live data in place
>>> gives
>>> me heart pains.
>>
>> Duplicating data (during upgrades) is unacceptable, for my app. It may
>> be safer to leave the old data, but it is not always possible.
>
> There is no maximum stored data on App Engine.  If you delete the data after
> one day is that still a problem?
>

There may be no limit, but there is a cost. The more data you have to
move, the slower it will be. I don't disagree, full data migration is
probably best. If you can take the system down for a short enough
period, why not reprocess and migrate all the data?

>> I have
>> a *lot* of data, that was costly to generate (both in terms of network
>> and cpu) and to store. Jeff, and others on the Objectify list, have
>> spent a lot of time working on a solution with the goal of keeping the
>> app up during an upgrade (in simple upgrades), without needing to
>> migrate all the existing data first, which would require downtime.
>
> Ok I understand.  But that is a big trade-off - saving a day or two worth of
> storage costs in exchange for testability, ability-to-r

Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread John Patterson


On 12 Mar 2010, at 16:28, Jeff Schnitzer wrote:

> You are increasing my suspicion that you've never actually performed
> schema migrations on big, rapidly changing datasets.


You are increasing my suspicion that you like to make inflammatory  
remarks without thinking them through just for the sake of trolling.   
Perhaps cutting back on coffee might help ;)


>> Cool, the @AlsoLoad is quite a neat feature.  Although very limited to
>> simple naming changes and nothing structural.  All this is based on a
>> dangerous assumption that you can modify "live" data in place.  Hardly
>> bullet proof.


> Actually, @AlsoLoad (in conjunction with @LoadOnly and the @PrePersist
> and @PostLoad lifecycle callbacks) provides an enormous range of
> ability to transform your data.  I know, I've had to do more of it
> than I would like to admit.  You can rename fields, change types,
> arbitrarily munge data, split entities into multiple parts, combine
> multiple entities into one, convert between child entities and
> embedded parts, etc.


Firstly, I want to say that I can see advantages in "in-place"
updates in terms of CPU usage and storage.  I accept that some apps
can have such huge amounts of data that this is the only practical
solution for them.  When that is not the case - as in many apps - I
still stand by my point that keeping the new and old data types in
separate kinds has advantages.


I would like Twig to offer both techniques.

>> The Twig solution is to create a new version of the type (v2) and process
>> your changes while leaving the live data completely isolated and safe.  Then
>> after you have tested your changes you bump up the version number of your
>> live app.


> This is cumbersome and inelegant compared to Objectify's solution.


Your perception of "elegance" is a little biased.  Your solution  
requires the developer to keep track of which fields in the *same*  
entity belong to which version of your schema.


I would say that keeping separate versions is more elegant in terms of  
simplicity and safety.  But I agree that in-place updates have  
advantages in terms of CPU usage and storage.  I'll be thinking more  
about ways to support in place schema changes in Twig.



> You require the developers to 1) create a parallel hierarchy of
> classes and


I would actually say that it is cleaner to have two separate classes  
than two classes munged into one.



> Batch gets are *the* core feature of NoSQL databases, including the
> GAE datastore.  Look at these graphs:


Batch gets *are* used in Twig - they are just not a part of the API.
But please - enough already about batch gets.  This is a trivial
feature that I have said three times now will be included in the next
update when I add an update command.

There just hasn't yet been the need for them due to the non-central
role keys play in Twig.



> If you have a read-heavy app (and most are), nothing gives you
> bang-for-the-buck like adding one little annotation and pulling your
> data out of memcache instead of the datastore.  Caching at higher
> levels *might* save you some additional cpu cycles, but it's certainly
> a lot more work.


The simplicity of adding a single annotation to cache a type is
something that fits very well with the Twig philosophy of moving
complexity into the framework.  It is definitely a feature that will
be added - I was just pointing out that it's not hard to implement
right now.


Jeff, no more coffee today - promise?





[appengine-java] Re: How to delete all entities of a kind with the datastore viewer

2010-03-12 Thread Toby
Hello,

deleting 500 should not take a lot of time. Did you try the
deletePersistentAll method?
Query q = pm.newQuery(persistentClass);
q.deletePersistentAll();

Otherwise, what I do is use the task queue. I delete 1000 entities
and then put a task in the queue to delete another 1000. In that case
you cannot use deletePersistentAll; you need to query by object
type and limit:

// persistentClass is the entity class, as in the snippet above
Query q = pm.newQuery(persistentClass);
q.setRange(0, size);

try {
    List<T> resultList = (List<T>) q.execute();
    int resultSize = resultList.size();
    for (T t : resultList) {
        pm.deletePersistent(t);
    }
    return resultSize;
} finally {
    q.closeAll();
}

Since I return the size I can see how many records were affected. If it
is less than 1000 I know that I can stop the queuing.
This is a bit difficult to set up but works fine. I believe there is
no better way to do that, but I am happy to hear any other suggestions.

Cheers,
Toby
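
The re-queueing step could look roughly like this (a sketch, not Toby's code;
the worker URL is hypothetical, and in 1.3.x-era SDKs the task queue API lived
under com.google.appengine.api.labs.taskqueue instead):

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

public class DeleteChainer {
    // Enqueue a task that will delete the next batch of entities.
    public static void enqueueNextBatch() {
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder.withUrl("/tasks/deleteBatch"));
    }
}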





On Mar 12, 7:08 am, 杨浩  wrote:
> in the admin console: click the next 20 entities, then change the browser's
> location.href to set *size=200* and *offset=0*, and press Enter ^ ^
> The offset's max is *1000*.
> Good luck!
>
> 2010/3/12 Spines 
>
>
>
> > I'm only able to delete 20 entities at a time,  I have over 500
> > entities of a certain kind. Is there a  way I can delete them all from
> > the admin console?




Re: [appengine-java] Unable to upload app: Error posting to URL:

2010-03-12 Thread Chummar Maly
How and where do I specify the proxy settings?


On Thu, Mar 11, 2010 at 3:25 PM, Ikai L (Google)  wrote:

> Yes, you will have to specify your proxy settings.
>
> On Tue, Mar 9, 2010 at 8:32 AM, WillSpecht  wrote:
> > I am getting the following error when trying to upload my app to app
> > engine.  What does the 302 Redirected mean?  I am behind a proxy could
> > this be part of the problem?
> >
> > Error
> > Tue Mar 09 11:12:47 EST 2010
> > Unable to upload app: Error posting to URL:
> >
> http://appengine.google.com/api/appversion/create?app_id=black3live&version=1&302
> > Redirected
> >
> >
> > See the deployment console for more details
> >
> > com.google.appengine.tools.admin.AdminException: Unable to upload app:
> > Error posting to URL:
> http://appengine.google.com/api/appversion/create?app_id=black3live&version=1&;
> > 302 Redirected
> >
> > at
> > com.google.appengine.tools.admin.AppAdminImpl.update(AppAdminImpl.java:
> > 59)
> > at
> >
> com.google.appengine.eclipse.core.proxy.AppEngineBridgeImpl.deploy(AppEngineBridgeImpl.java:
> > 271)
> > at
> >
> com.google.appengine.eclipse.core.deploy.DeployProjectJob.runInWorkspace(DeployProjectJob.java:
> > 148)
> > at
> >
> org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:
> > 38)
> > at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
> > Caused by: java.io.IOException: Error posting to URL:
> >
> http://appengine.google.com/api/appversion/create?app_id=black3live&version=1&;
> > 302 Redirected
> >
> > at
> >
> com.google.appengine.tools.admin.ServerConnection.send(ServerConnection.java:
> > 143)
> > at
> >
> com.google.appengine.tools.admin.ServerConnection.post(ServerConnection.java:
> > 81)
> > at
> >
> com.google.appengine.tools.admin.AppVersionUpload.send(AppVersionUpload.java:
> > 415)
> > at
> >
> com.google.appengine.tools.admin.AppVersionUpload.beginTransaction(AppVersionUpload.java:
> > 229)
> > at
> >
> com.google.appengine.tools.admin.AppVersionUpload.doUpload(AppVersionUpload.java:
> > 98)
> > at
> > com.google.appengine.tools.admin.AppAdminImpl.update(AppAdminImpl.java:
> > 53)
> >
>
>
>
> --
> Ikai Lan
> Developer Programs Engineer, Google App Engine
> http://googleappengine.blogspot.com | http://twitter.com/app_engine
>


-- 
Chummar Maly
http://servetube.appspot.com




[appengine-java] Re: Objectify - Twig - approaches to persistence

2010-03-12 Thread Nacho Coloma
> > You are increasing my suspicion that you've never actually performed
> > schema migrations on big, rapidly changing datasets.
>
> You are increasing my suspicion that you like to make inflammatory  
> remarks without thinking them through just for the sake of trolling.  

I have one question more or less related to this thread. Why are both
Objectify and Twig tying the schema upgrade to the persistence
framework? As far as I can recall no persistence framework does this,
and it's not as if Hibernate didn't have the chance.

If I am following correctly, you are proposing to specify the
migration path in the persistence metadata, but I just don't get it.
Why?




[appengine-java] Re: Help with modeling JDO persistent classes

2010-03-12 Thread objectuser
One way to do this would be to duplicate A.name on the associated Bs.

class A {
  Long id;
  String name;
  ...
}

class B {
  Long id;
  Long Aid;     // key of the associated A
  String Aname; // denormalized copy of A.name, used for sorting
  ...
}

Then you'd be able to do your query on just the B entities and it
would work.
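
A sketch of the query that this denormalization enables (field names follow
the outline above):

import java.util.List;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class BQueries {
    // Sort B directly on its copy of A's name, no join needed.
    @SuppressWarnings("unchecked")
    public static List<B> findAllSortedByAName(PersistenceManager pm) {
        Query q = pm.newQuery(B.class);
        q.setOrdering("Aname ascending");
        return (List<B>) q.execute();
    }
}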

On Mar 10, 2:59 am, kattus  wrote:
> Hi,
>
> I have 2 persistent classes:
>
> 1. class A that has a primary key (Long) and a property called
> "name" (String).
> 2. class B that is referencing class A (one to many relationship, each
> B has one A, but A can belong to many B's)
>
> I need to retrieve the B's sorted by the "name" property of A. In
> other words if it was relational database I would make something like
> this (simplified):
>
> SELECT * FROM A, B WHERE A.id=B.Aid ORDER BY A.name
>
> The question is how to make this with JDO. I don't want to put A and
> B in the same entity group, and it seems it is not necessary either. I
> think using unowned relationships may be a good direction; the help is
> too basic though:
>
> http://code.google.com/appengine/docs/java/datastore/relationships.ht...
>
> The questions are:
>
> Is it possible to define such a relationship between A and B? If yes
> how (which annotations)? Do I have to use the Key class in B to
> reference A?
> How to use the Google query language to write the correct query?
>
> Thank you,
> Gil




[appengine-java] Re: Do I need unowned relationships to accomplish this...?

2010-03-12 Thread objectuser
How about something like this?

class User {
  Long id;
  ...
}

class Deck {
  Long id;
  Long userId;
  ...
}

class Card {
  Long id;
  Long deckId;
  ...
}

Then inserting a card into the deck is a simple insert and finding all
cards in a deck is a single query.  The same for adding a deck to a
user.

This structure has its own trade-offs, of course.
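
A sketch of the two operations this layout enables (class and field names
follow the outline above):

import java.util.List;
import javax.jdo.PersistenceManager;
import javax.jdo.Query;

public class CardDao {
    // Adding a card never touches the Deck or its card collection.
    public static void addCard(PersistenceManager pm, Card card) {
        pm.makePersistent(card);
    }

    // Listing a deck's cards is a single filtered query on Card.deckId.
    @SuppressWarnings("unchecked")
    public static List<Card> cardsInDeck(PersistenceManager pm, Long deckId) {
        Query q = pm.newQuery(Card.class, "deckId == :id");
        return (List<Card>) q.execute(deckId);
    }
}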

On Mar 10, 4:15 pm, tempy  wrote:
> Exactly, CardList is potentially very large and I want to avoid having
> to load it just to add the card reference.
>
> On Mar 10, 10:43 pm, WillSpecht  wrote:
>
> > So if you have a reference to a new card and do
>
> > cardList.add(cardRefference)
>
> > all you are loading into memory is the card list and the new card.
>
> > Is this what you are trying to avoid?
>
> > On Mar 10, 4:06 pm, tempy  wrote:
>
> > > Actually cards can only be owned by one deck... so that's not a
> > > problem.  Deck<--1...0toN-->card.
>
> > > The thing that I am looking for is a way to add new cards without
> > > loading a deck's entire card collection, and to add decks without
> > > loading a User's entire deck collection.
>
> > > On Mar 10, 9:15 pm, WillSpecht  wrote:
>
> > > > The way I understand it, if an object can be owned by more than one
> > > > object it must be unowned.  I would assume that cards can be in
> > > > multiple decks so they must be unowned.  I would assume each deck
> > > > would belong to one user so decks could be owned.  I don't know a good
> > > > way to store cards that can be queried in one query unless you have
> > > > each card store what decks they are in.  This could be even more
> > > > difficult if cards appear more than once in a deck.  If that is true I
> > > > would suggest a join table.
>
> > > > On Mar 10, 2:20 pm, tempy  wrote:
>
> > > > > I have the following datastructure:
>
> > > > > "Users" are the root entities, and each "user" can have one or more
> > > > > "decks", and each deck can have one or more "cards."
>
> > > > > When a user wants to add a deck, I would like to be able to add the
> > > > > deck to the user's collection of decks without first fetching all of
> > > > > the user's decks (potentially a large amount of data), then adding the
> > > > > new deck to that collection, and then persisting the user.  Rather, I
> > > > > would like to simply instantiate the deck and append it to the user's
> > > > > collection of decks, without ever retrieving the entire collection.
>
> > > > > Similarly, if a user wants to add a new card to an existing deck, I
> > > > > would like to add the card to the deck without first retrieving the
> > > > > entire deck (that is, the deck with all of its cards).
>
> > > > > I would like to preserve the option of fetching a user with a
> > > > > populated collection of all their decks and to retrieve a deck with a
> > > > > populated collection of all its cards, which is possible with owned
> > > > > relationships.  But to accomplish what I have mentioned above, would I
> > > > > be forced to use unowned relationships? (Collections of keys instead
> > > > > of collections of objects.)
>
> > > > > Thanks,
> > > > > Mike




Re: [appengine-java] Re: Objectify - Twig - approaches to persistence

2010-03-12 Thread John Patterson


On 12 Mar 2010, at 19:30, Nacho Coloma wrote:

> I have one question more or less related to this thread. Why are both
> Objectify and Twig tying the schema upgrade to the persistence
> framework? As far as I can recall no persistence framework does this,
> and it's not as if Hibernate didn't have the chance.
>
> If I am following correctly, you are proposing to specify the
> migration path in the persistence metadata, but I just don't get it.
> Why?


Because the datastore has no schema, the interfaces need to define the
schema themselves - as Java objects and some annotations.  So if you
are already defining the schema at the data model level it makes sense
to define changes there too.

Do you mean use the low-level API instead to alter the data?  In that
case you still need to configure the interface to read/write the new
format - so you may as well reuse it, no?

I do remember something in Hibernate that could generate "update
table" scripts from changed config files.  But I guess with all the back
ends they had to support it was a much tougher job.





[appengine-java] Error when deleting entities - Id cannot be zero

2010-03-12 Thread Pavel Byles
I'm trying to delete all entities in my datastore but I receive the
following error:

javax.jdo.JDOUserException: One or more instances could not be deleted...
NestedThrowablesStackTrace:
java.lang.IllegalArgumentException: id cannot be zero...

Caused by: java.lang.IllegalArgumentException: id cannot be zero


For the following code:

  public void deleteAllMyType() {
PersistenceManager pm = PMF.get().getPersistenceManager();
Query query = pm.newQuery(MyType.class);
try {
  query.deletePersistentAll();
  //List<MyType> clist = (List<MyType>) query.execute();
  //pm.deletePersistentAll(clist); // This doesn't work either
} finally {
  query.closeAll();
  pm.close();
}
  }

My entity class looks like this:

@PersistenceCapable(identityType = IdentityType.APPLICATION)//, detachable =
"false")
public class MyType implements Serializable {
  @PrimaryKey
  @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
  private Long id;

  @Persistent
  private String name;
  .
  .
  .
}

-- 
-Pav




[appengine-java] App Engine Bug

2010-03-12 Thread Henning
Hello,

if I have a servlet that only the admin has access to, and I logged in /
registered to App Engine with my Google Apps account, there is no way
to run this servlet even though I am an admin, because it wants me to
authenticate/login as admin with my regular Google account (it does
not support the Google Apps login).

Regards,
Henning




Re: [appengine-java] Re: Objectify - Twig - approaches to persistence

2010-03-12 Thread Nacho Coloma
> Because the datastore has no schema the interfaces need to define the schema
> themselves - as Java objects and some annotations.  So if you are already
> defining the schema at the data model level it makes sense to define changes
> there too.
>
> Do you mean use the low-level api instead to alter the data?  In that case
> you still need to configure the interface to read/write the new format - so
> may as well reuse it, no?

This is the point where we differ. To me it makes perfect sense to prepare
a low-level API for schema migration, but I would not bind it to the
current mappings. I mean, the possibilities here are just too many,
and the up-to-date persistence mappings are not relevant to doing the
migration. They seem to be more of a hassle than helpful for that
purpose.

I may be wrong, so I am just asking. I was comparing the examples
given here with what would be the equivalent of porting to GAE the
typical schema upgrade script, and didn't see much benefit in it.

> I do remember something in Hibernate that could generate "update table"
> scripts from changed config files.  But I guess with all the back ends they had
> to support it was a much tougher job.

AFAIK they have something to create the DDL, but then you should use
your own database tools to create the upgrade script.




Re: [appengine-java] Re: Reporting on GAE in Java

2010-03-12 Thread Sandeep Sathaye
Try our product called Cloud2db. It works with
any JDBC-compliant client tool, including JasperReports. Just install
Cloud2db on your appspot, create your tables, and use the provided JDBC driver
with JasperReports. The complete JasperReports suite will work, including iReport.

On Thu, Mar 11, 2010 at 11:12 PM, David  wrote:

> Update:
>
> This website (http://www.jscriptive.org/2009/08/jasperreports-and-
> google-appengine.html) appears to confirm that it's a real issue
> rather than simply something I've done wrong or overlooked.
>
> This website (http://code.google.com/p/g2-report-engine/wiki/
> AppEngineSupport) seems to be a basic reporting tool that's
> specifically designed to work with GAE/J, albeit without the support
> of report building tools such as JasperReports' iReport.
>
> I'm still on the lookout for alternative reporting solutions if anyone
> can help me out?
>
> Cheers,
> David
>




[appengine-java] WebSockets

2010-03-12 Thread Dan Billings
Has anyone had any success implementing a Java websockets server on
GAE?




[appengine-java] 1.3.1 Security Fallout, SocketLogger,

2010-03-12 Thread Steve Pritchard

In a previous post

http://groups.google.com/group/google-appengine-java/browse_thread/thread/cdd3abc956a2fba1#

I INCORRECTLY suspected the security model had changed from 1.3.0 to
1.3.1.

It turns out that I had removed a <load-on-startup>5</load-on-startup> element
from my legacy web.xml during the upgrade, and that caused the problem.

The conclusion, which may be of interest to some, is the following.

(1) The <load-on-startup> parameter in the
standard web.xml is honoured by the development-mode Jetty and ignored
(as the docs indicate) by the production-mode engine.
(2) During this special pre-startup call made to the init method of
the servlet in development mode only, it is possible to open a socket
by placing the SocketLogger class in the shared folder of the App
Engine SDK bundle.

I find this technique invaluable as a development aid because I route
my log messages to a second screen sitting beside my main screen.  I
can then look at the code being debugged and the logged messages
without having to flip windows on the same screen.
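
The SocketLogger itself isn't shown here, but for a rough idea of the
technique, the JDK's built-in java.util.logging.SocketHandler can ship log
records to a listener on another machine (host and port are placeholders;
development mode only, since the production sandbox does not allow opening
sockets):

import java.io.IOException;
import java.util.logging.Logger;
import java.util.logging.SocketHandler;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class DevLoggingServlet extends HttpServlet {
    @Override
    public void init() throws ServletException {
        try {
            // Route all log records to a listener displaying them on a second screen.
            SocketHandler handler = new SocketHandler("192.168.1.10", 4560);
            Logger.getLogger("").addHandler(handler);
        } catch (IOException e) {
            throw new ServletException(e);
        }
    }
}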

The SocketLogger is efficient enough in an internal network
environment to trace JavaScript statements via a 'log' AJAX request
to the server.  As a result I can debug complex JavaScript
code, with mouseovers etc. being logged instantaneously to a visible
screen.

If there is any interest in this code I can make it available.

Steve Pritchard




Re: [appengine-java] Re: Objectify - Twig - approaches to persistence

2010-03-12 Thread Scott Hernandez
Nacho, this may be a silly question, but have you used the datastore API
or App Engine?

On Fri, Mar 12, 2010 at 2:19 PM, Nacho Coloma  wrote:
>> Because the datastore has no schema the interfaces need to define the schema
>> themselves - as Java objects and some annotations.  So if you are already
>> defining the schema at the data model level it makes sense to define changes
>> there too.
>>
>> Do you mean use the low-level api instead to alter the data?  In that case
>> you still need to configure the interface to read/write the new format - so
>> may as well reuse it, no?
>
> This is the point where we differ. To me it makes perfect sense to prepare
> a low-level API for schema migration, but I would not bind it to the
> current mappings. I mean, the possibilities here are just too many,
> and the up-to-date persistence mappings are not relevant to doing the
> migration. They seem to be more of a hassle than helpful for that
> purpose.

Okay, I'll bite. How do you do schema migration (both on a live system
and in a batch way) in the low-level API? In the datastore every
entity has its own schema. There is no way to alter the schema without
altering the data that is stored in an entity.

> I may be wrong, so I am just asking. I was comparing the examples
> given here with what would be the equivalent of porting to GAE the
> typical schema upgrade script, and didn't see much benefit in it.

What "update script" are you talking about? The only way to update
data in the datastore is to write it back to the datastore; there is no
DDL, metadata language, or data transformation system for the
datastore that I know of.

>> I do remember something in Hibernate that could generate "update table"
>> scripts from changed config files.  But I guess with all the back ends they had
>> to support it was a much tougher job.
>
> AFAIK they have something to create the DDL, but then you should use
> your own database tools to create the upgrade script.

They can create schema, and migrate schema automatically (generating
the alter scripts without dropping the tables), assuming the types of
changes you do are supported. I've found hibernate to do a good job in
creating and migrating schema automatically for development, and if
careful, production too. It is considered good practice to generate
the migration scripts from hibernate, and then hand-fix them to be
safe.




Re: [appengine-java] Re: How exactly do the App Engine Logs work?

2010-03-12 Thread Don Schwarz
I'm not sure that we document or guarantee this anywhere, but we currently
seem to be preserving these for 90 days.  I don't know how feasible it is to
retrieve all 90 days of data via appcfg, though.

On Thu, Mar 11, 2010 at 8:50 PM, Spines  wrote:

> Thanks for your help Don.  Just one more question (hopefully :)).
>
> I think just the request logs will be good enough for my purposes, and
> I won't actually have to use the diagnostic logs.  How much past data
> is stored for the request logs?
>
> On Mar 11, 8:16 am, Don Schwarz  wrote:
> > On Wed, Mar 10, 2010 at 8:05 PM, Spines  wrote:
> > > Sorry, I don't think I really understood the task queue approach. To
> > > ensure no loss of data, would the task have to update in the datastore
> > > every time?  And the benefit over just doing it directly in the
> > > servlet handler would be the faster response time to the user?
> >
> > Basically, yes.
> >
> > > I think the memcache solution may be my best bet.  I can tolerate some
> > > chance of loss of data.  I'm wondering how likely loss of data would
> > > be? How often does the data get booted out? If I persist from memcache
> > > to datastore every 10 seconds would data loss be super rare?
> >
> > You would have to experiment with this as we don't make any guarantees,
> but
> > I believe that memcache data generally survives much longer than 10
> seconds
> > of inactivity, yes.
> >
> > On Mar 10, 4:11 pm, Spines  wrote:
> >
> >
> >
> > > > Thanks Don,
> > > > I thought about the task queue, but that caps at being able to
> execute
> > > > like 5 tasks per second right?
> >
> > > > So, as long as the log data doesn't get full before I download it
> then
> > > > it would be fine?
> >
> > > > On Mar 10, 3:42 pm, Don Schwarz  wrote:
> >
> > > > > Yeah, those are diagnostic logs.  They effectively go into a ring
> > > buffer per
> > > > > logging level, so the maximum data stored at any given time is
> capped.
> > >  The
> > > > > more you log, the more frequently you would have to download the
> logs
> > > to
> > > > > avoid missing any.  You would also be competing with log space with
> any
> > > > > other log messages generated by your application.
> >
> > > > > What I would suggest instead is either to increment counters in
> > > memcache,
> > > > > and flush them to the datastore periodically if you need durability
> > > (I'm
> > > > > assuming you can tolerate some chance of data loss here).  If you
> > > cannot
> > > > > tolerate any loss of data, then I would suggest enqueueing tasks to
> a
> > > task
> > > > > queue for each request that maintains a summary in memcache and/or
> the
> > > > > datastore.
> >
> > > > > On Wed, Mar 10, 2010 at 4:59 PM, Spines 
> wrote:
> > > > > > I'm talking about the logs that get written when I call
> > > > > > Logger.info("something").
> >
> > > > > > Basically this is what I'm thinking: I have certain data that
> needs
> > > to
> > > > > > get written very often, but hardly ever needs to be read (stuff
> like
> > > > > > what users view what pages of my site).  The datastore is
> optimized
> > > > > > for read efficiency. So, I want to output this data to the logs.
> I
> > > > > > will have an offsite computer download these logs, do
> calculations on
> > > > > > them, and upload the result of the calculations to the datastore.
> >
> > > > > > On Mar 10, 2:03 pm, Don Schwarz  wrote:
> > > > > > > Are you talking about request logs or diagnostic logs?
>  Although we
> > > > > > conflate
> > > > > > > them a bit in both the Admin Console viewer and the appcfg
> command,
> > > but
> > > > > > they
> > > > > > > are stored and tracked separately.
> >
> > > > > > > On Wed, Mar 10, 2010 at 4:01 PM, Spines 
> > > wrote:
> > > > > > > > Hmm, that is my biggest concern, log reliability.  Can
> someone
> > > from
> > > > > > > > Google confirm whether or not I can rely on the logs having
> all
> > > of the
> > > > > > > > log data? Or might certain entries just disappear?
> >
> > > > > > > > On Mar 10, 1:24 pm, thierry Le conniat  >
> > > wrote:
> > > > > > > > > Hello,
> > > > > > > > > I think Google logs are stored in files.
> > > > > > > > > My experience with log reliability is that when the app is
> > > > > > > > > working very hard, not all the logs are stored.
> > > > > > > > > It's confusing, but I can't explain it.
> >
> > > > > > > > > Bye
> >
> > > > > > > > > On 10 mar, 22:04, Spines  wrote:
> >
> > > > > > > > > > Where does Google store the logs when you do a Logging
> > > statement?
> > > > > > > > > > Logging statements seem to be pretty fast, so it doesn't
> seem
> > > like
> > > > > > > > > > they are stored in the datastore.
> >
> > > > > > > > > > How reliable are the logs? If I do a logging statement
> and it
> > > > > > > > > > succeeds, is it pretty much guaranteed that it will show
> up
> > > in the
> > > > > > > > > > logs?
> >
> > > > > > > > > > How much past history of logs is stored?
> >
> > > > > > > > > > The reason I'm interested in 

[appengine-java] Re: Spring MVC - File upload problem

2010-03-12 Thread amit
Is there a demo for Spring MVC as well?
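
A rough sketch of what such a controller could look like (assuming a
GAE-friendly multipart resolver like the StreamingMultipartResolver discussed
below is registered; names are illustrative and persistence is left out):

import java.io.IOException;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.multipart.MultipartFile;
import com.google.appengine.api.datastore.Blob;

@Controller
public class ImageUploadController {

    @RequestMapping(value = "/upload", method = RequestMethod.POST)
    public String handleUpload(@RequestParam("imageData") MultipartFile file)
            throws IOException {
        // Wrap the uploaded bytes in a datastore Blob, per the advice in the
        // quoted thread below, then persist an entity holding it with JDO/JPA.
        Blob imageBlob = new Blob(file.getBytes());
        // ... persist an entity containing imageBlob ...
        return "redirect:/uploaded";
    }
}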

On Mar 2, 7:22 am, yjun hu  wrote:
> hi, I have a demo here: http://hapeblog.appspot.com/blog.shtml?id=2002
>
> On Tue, Mar 2, 2010 at 4:36 AM, Sebastian Cartier 
> wrote:
>
>
>
> > Hi
> > my first solution i wrote isn't working any more. I had to add
>
> > beans = {
>
> > multipartResolver(is.hax.spring.web.multipart.StreamingMultipartResolver)
> > }
>
> > to /grails-app/conf/spring/resources.xml
> > Now it works again
>
> > @Markus: you cannot use byte arrays with Google App Engine. You have
> > to use blobs! See my previous solution.
> > I put the library in /lib
>
> > On 2 Feb., 03:08, Markus Paaso  wrote:
> > > Hi
>
> > > I tried it with Grails 1.2.0 and app-engine plugin 0.8.8 but got just
> > > an another error:
>
> > > java.lang.NoClassDefFoundError: Could not initialize class
> > > org.apache.commons.fileupload.disk.DiskFileItem
> > >         at
> > org.apache.commons.fileupload.disk.DiskFileItemFactory.createItem
> > > (DiskFileItemFactory.java:196)
> > >         at org.apache.commons.fileupload.FileUploadBase.parseRequest
> > > (FileUploadBase.java:358)
> > >         at
> > > org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest
> > > (ServletFileUpload.java:126)
> > >         at
>
> > org.springframework.web.multipart.commons.CommonsMultipartResolver.parseRequest
> > > (CommonsMultipartResolver.java:155)
>
> > > my controller:
>
> > >         def imageInstance = new Image(imageParams)
> > >         def f = request.getFile('imageData')
> > >         imageInstance.imageData = f.getBytes()
> > >         Image.withTransaction {
> > >                 if(imageInstance.save(flush:true)) {
> > >                     flash.message = "Image ${imageInstance.id} created"
> > >                     redirect(action:show,id:imageInstance.id)
> > >                 }
> > >                 else {
>
> > render(view:'create',model:[imageInstance:imageInstance])
> > >                 }
> > >         }
>
> > > and domain-class:
>
> > > import javax.persistence.*;
> > > // import com.google.appengine.api.datastore.Key;
>
> > > @Entity
> > > class Image implements Serializable {
>
> > >     @Id
> > >     @GeneratedValue(strategy = GenerationType.IDENTITY)
> > >     Long id
> > >     byte[] imageData
>
> > >     static constraints = {
> > >         id visible:false
> > >     }
>
> > > }
>
> > > It seems like the multipart resolver is not replaced with the new one.
> > > I placed the jar file into /lib and /lib
> > > directories.
> > > Maybe I didn't place the jar to the right directory?
> > > Would you like to tell more about how you got it to work?
>
> > > Markus
>
> > > On 2 helmi, 00:00, Sebastian Cartier  wrote:
>
> > > > this works also for grails!
> > > > Add the library to your lib directory and add
> > > >     <bean id="multipartResolver"
> > > >         class="is.hax.spring.web.multipart.StreamingMultipartResolver"/>
> > > > to your applicationContext.xml
>
> > > > For saving the image in a google blob i used the following functions:
> > > >         @Persistent
> > > >         Blob imageBlob
>
> > > >         byte [] getImage(){
> > > >                 if(imageBlob){
> > > >                         imageBlob.getBytes()
> > > >                 }else{
> > > >                         null;
> > > >                 }
> > > >         }
>
> > > >         void setImage(byte [] imageBytes){
> > > >                 imageBlob = new Blob(imageBytes)
> > > >         }
>
>
> --
> dream or truth




[appengine-java] Re: App instance recycling and response times - is there solution?

2010-03-12 Thread xcdesz
This is NOT just a problem with Spring -- stop talking like
optimization is going to fix things.  It takes too much time for a
naked servlet to load (i.e., 5-10 seconds).  The only jars that I have
are for JPA.

On Jan 12, 8:32 pm, Jeff Schnitzer  wrote:
> I've been thinking about this issue a little.  It's not quite as
> straightforward as just keeping an instance warm.  Even if you have an
> app that gets multiple hits per second, there will still be cold
> starts:
>
>  * When a new instance comes online to serve more demand.
>  * When you redeploy a version of your app.
>
> Is appengine smart about letting new instances added to the pool "warm
> up" before serving requests?  It's hard to tell from my logs but it
> doesn't look like it.
>
> I know appengine is *not* smart about warming up an instance before
> redeploying.  When I redeploy, some large number of users must wait
> while the appserver(s) startup.
>
> One thing to keep in mind during these discussions is how other Java
> EE environments solve this problem:  They *don't*.  For a long time
> it's been assumed in the EE development that server initialization
> time is irrelevant, and we grew fat libraries that take tens of
> seconds to minutes to start up.  The problem is, this time has *never*
> been irrelevant - even in a production environment you must deploy new
> versions of your app, and none of the appservers I'm familiar with are
> smart enough to keep serving off the old version while the new one
> loads.  Users with unlucky timing always got screwed.
>
> We just didn't care because we only deployed code once a week and we
> added/removed server instances far less often than that.  Well guess
> what, now it's easy - you can deploy up to 1,000 times per day just by
> clicking a button in eclipse, and server provisioning is now not only
> trivial but 100% transparent to you.  Just try that with WebSphere!
>
> You aren't going to like this, but here's the only answer that isn't
> going to piss off your customers:  Stop using Spring.  Stop performing
> eager initialization.  Stop assuming that users don't see startup
> time.  Yes, change the way you write code.
>
> Jeff
>
>
>
> On Tue, Jan 12, 2010 at 1:11 PM, Don Schwarz  wrote:
> > Make sure you are using offline precompilation.  We are always working on
> > optimizations to decrease the latency of loading requests, but here are some
> > other tips:
> >http://googleappengine.blogspot.com/2009/12/request-performance-in-ja...
> > On Tue, Jan 12, 2010 at 3:01 PM, Locke  wrote:
>
> >> I agree that making users wait 20 seconds for your app to load is not
> >> adequate for the vast majority of apps. I also agree that
> >> reengineering everything to try and hide load times from users is a
> >> poor solution in most cases.
>
> >> Using cron to keep your app loaded will not consume your quota; it
> >> will actually conserve your quota. Every time your app loads you will
> >> be billed for 20s of CPU time. If you keep it loaded, you will only be
> >> billed for a few milliseconds per 'keep-alive' cron execution.
>
> >> However, the Google engineers who post here have recommended against
> >> doing this. If everyone did it, appengine might run out of resources
> >> (RAM, I assume).
>
> >> I imagine that Google will need to either find a way to load apps in
> >> 1/10th the time (the ideal solution), raise prices significantly, or
> >> ration  resources in some other way.
>
> >> If I may make a suggestion to the Google engineers: offer a "keep my
> >> app loaded" option and make it available ONLY for billing-enabled
> >> apps. Disable cron for apps which are not billing-enabled, so that
> >> people who just want free hosting or are merely toying with appengine
> >> won't be using up resources all the time.
>
> >> This way, the people who have shown that they are serious about
> >> appengine (by laying their cash down) won't be driven away by the
> >> people who are just fooling with it.
>
> > Yes, we are seriously considering something like this.  Please star this
> > issue for updates:
> >http://code.google.com/p/googleappengine/issues/detail?id=2456
>
> >> On Jan 12, 1:43 pm, Konrad  wrote:
> >> > I asked same question on Stack Overflow (http://stackoverflow.com/
> >> > questions/2051036/google-app-engine-application-instance-recycling-and-
> >> > response-times).
>
> >> > So far proposed solutions (in SO thread and found on other websites)
> >> > do not satisfy me. Creating cron or any other kind of periodic HTTP
> >> > requests to keep instance up and running make no sense. First - there
> >> > is no evidence that this instance will serve next coming request (eg.
> >> > from different network location etc.), second - it will consume Quota
> >> > (which is less a problem).
>
> >> > Other solution - refactoring app - replacing critical functionality
> >> > with lightweight servlet - sounds better, but is GAE forcing us to go
> >> > back to a CGI programming style? And I could replace, let's say - API
> >> >

[appengine-java] Re: Flash arcade game GAE based

2010-03-12 Thread Ahmed Khalifa
thanks for the great help ..
the problem is that GAE provided an excellent online development
platform that allowed me to upload my code, update the versions of
the game, and test it on a daily basis .. it was a great disappointment
to learn that sandbox restrictions prevent me from opening a socket, but
anyhow .. I just need to move to another platform ..
the problem is, it seems that my server will need fewer restrictions
than the free hosts usually provide ..
I would really appreciate it if you can help me with a start .. if
there is any website that hosts web applications of the type of my
game for free (at least in the beginning of deployment) I would be
very happy to know about it ..
best regards,
A. Khalifa

On Mar 8, 10:50 pm, nicolas melendez  wrote:
> Hi, there are defferent types of games, first you should know in which
> category your game  is:
>
> 1) Real time, like Quake, you need a Socket connection, TCP when you want
> reliable data and also UDP when reliable data is not important and will
> change soon, like position in a map. So GAE won't help you, HTTP and also
> XMPP are very slow for that kind of game where 500ms is eternity.
>
> 2) Games with turns, like a chess game. Here GAE can help you with HTTP or
> XMPP.
> I am making a game based on turns; I have chosen the XMPP way, but I haven't
> finished it yet, so I can't say it was the right choice. But for the moment I can
> say that your main difficult task will be time optimization with the datastore.
> Also you should know that XMPP works right in deployment, but in
> development you can't test your application. Here you need to do some hack to
> simulate XMPP responses.
>
> 3) Web-based games, which are very static and where time isn't important; here GAE
> can also help you. An example is http://www.mafia-family.com.
>
> Hope I help.
> NM
>
> On Mon, Mar 8, 2010 at 5:11 PM, Ikai L (Google)  wrote:
>
> > HTTP is just not a great protocol for real time games. Even with XMPP,
> > there's a chance outgoing responses will be queued and delayed for a
> > bit as capacity requires.
>
> > If you want it to be truly real time, you'll have to develop your own
> > server that is capable of maintaining an open socket connection.
> > Trying to fit client/server communications in a real time game into
> > the HTTP model is like trying to watch a movie by having your friend
> > record it on his camera phone and MMSing it to you as quickly as
> > possible.
>
> > On Mon, Mar 8, 2010 at 9:08 AM, Ahmed Khalifa 
> > wrote:
> > > I have to tell you that i had to slow down the rate by which flash
> > > sends requests to the server in order to give GAE some time to respond
> > > which consequently kills the real-time game play intended ..
> > > after all, even if GAE was not a good choice for a semi-real time game
> > > can anyone please give me valid reasons for why to disregard GAE??
> > > many people are keeping telling me that but no one is giving me real
> > > reasons for why .. plus, i have been really distracted by the amount
> > > of suggestions that do not converge to on direction .. some are
> > > suggesting using XMPP, others say long polling is the way, some say
> > > GraniteDS is a valid option along with memcache .. I am really
> > > confused about the decision to be taken ..
> > > regards,
> > > A. Khalifa
>
> > > On Mar 8, 6:21 pm, Robert Lancer  wrote:
> > >> Yeah, I certainly would not use GAE Java for anything that has to be
> > >> semi real time. Have you checked out Red5.org or Web Orb at
> > >> themidnightcoders.com? Those are designed to work directly with
> > >> Flash.
>
> > >> On Mar 8, 10:56 am, Ahmed Khalifa  wrote:
>
> > >> > thanks a lot ..
> > >> > however, i am still doubtful about the source of latency .. is it GAE
> > >> > itself not supporting a certain feature that allows real-time response
> > >> > or it is something that i lacked ..
> > >> > thanks in advance ..
>
> > >> > On Mar 4, 3:27 pm, Toby  wrote:
>
> > >> > > Hi Ahmed,
>
> > >> > > take a look at GraniteDS. It is a bit like blazeds with the ability
> > to
> > >> > > push and synchronize data between multiple clients. It uses a very
> > >> > > efficient serialization and it runs on GAE:
> >http://graniteds.blogspot.com/2009/04/graniteds-20-on-google-app-engi...
> > >> > > on server side you just need to make sure that the datastore updates
> > >> > > do not block your clients. I would suggest using memcache and to add
> > >> > > datastore updates to the task queue.
> > >> > > So when you receive a data change from a client you update the
> > >> > > memcache value, you propagate the change to all clients and you put
> > on
> > >> > > the task queue a request for the data to be saved. This will be done
> > >> > > asynchronously.
>
> > >> > > Cheers,
> > >> > > Toby
>
> > >> > > On Mar 3, 5:47 am, nicolas melendez  wrote:
>
> > >> > > > I want to do the same. Is there any good XMPP Java framework to
> > include in
> > >> > > > my Applet? Recommendations?
> > >> > > > Thanks
> > >> > > > 

Re: [appengine-java] Re: Objectify - Twig - approaches to persistence

2010-03-12 Thread Nacho Coloma
> Nacho, this may be a silly question, have you used the datastore api
> or app-engine?

Quite a bit. Why?

I have not performed big live schema changes yet, though. That's why I
am asking.

>> This is the point were we differ. To me it makes all sense to prepare
>> a low-level API for schema migration, but I would not bind it to the
>> current mappings. I mean, the possibilities here are just too many,
>> and the up-to-date persistence mappings are not relevant to do the
>> migration. They seem to be more of a hassle than helpful for that
>> purpose.
>
> Okay, I'll bite. How do you do schema migration (both on a live system,
> and in a batch way) in the low-level API? In the datastore every
> entity has its own schema. There is no way to alter the schema without
> altering the data that is stored in an entity.

Maybe if we stick to an example it is going to be easier for us both to follow.

* Say I have a data model that is persisted as two classes, A and B. B
is nested inside A.
* Now I decide to refactor. B is not going to be nested inside
A anymore. Instead, it's going to be a root class and use a Key
attribute to reference A.

How do you use the schema migration features of any of these
frameworks to make this migration easier? From my point of view the
easiest way is to use the low-level API to migrate the data into a
copy that will be used by the next application version, but maybe I am
missing something.
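
For concreteness, a rough low-level sketch of that copy-style migration (kind
and property names are invented, and batching/error handling is omitted):

// Uses the low-level com.google.appengine.api.datastore API.
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
for (Entity a : ds.prepare(new Query("A")).asIterable()) {
    // B was embedded in A; pull its properties out into a new root B entity.
    Entity b = new Entity("B");
    b.setProperty("owner", a.getKey());                   // Key reference back to A
    b.setProperty("street", a.getProperty("b_street"));   // hypothetical embedded property
    ds.put(b);
}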

>> I may be wrong, so I am just asking. I was comparing the examples
>> given here with what would be the equivalent of porting to GAE the
>> typical schema upgrade script, and didn't see much benefit in it.
>
> What "update script" are you talking about? The only way to update
> data in the datastore is write it back to the datastore; There is no
> DDL, or metadata language, or data transformation system for the
> datastore, that I know about.

Excuse my poor use of language. I meant "to translate into Java
code, using the Datastore API, the equivalent of a DDL script", as
opposed to "introducing into the annotation system all kinds of
madness associated with any kind of migration I could think of in the
future". Maybe I oversimplified the sentence.

> They can create schema, and migrate schema automatically (generating
> the alter scripts without dropping the tables), assuming the types of
> changes you do are supported.

It's provided as a standalone tool, separate from the Hibernate core.
There are no annotations to help in this process, for example. That
was the original question.

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Re: Cryptography on App Engine

2010-03-12 Thread Ikai L (Google)
It's not on our "Will it play" page:

http://groups.google.com/group/google-appengine-java/web/will-it-play-in-app-engine

This doesn't mean it won't work, it just means no one has tried it.
Can you try it and let us know so we can update the page?

On Thu, Mar 11, 2010 at 8:44 PM, Spines  wrote:
> I think bouncy castle is a good library to use, does anyone know if it
> works on the app engine?
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine for Java" group.
> To post to this group, send email to google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to 
> google-appengine-java+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/google-appengine-java?hl=en.
>
>



-- 
Ikai Lan
Developer Programs Engineer, Google App Engine
http://googleappengine.blogspot.com | http://twitter.com/app_engine

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] WebSockets

2010-03-12 Thread Ikai L (Google)
Here are a few issues to star:

http://code.google.com/p/googleappengine/issues/list?can=2&q=websockets

App Engine currently doesn't support web sockets or long polling.

On Fri, Mar 12, 2010 at 7:06 AM, Dan Billings  wrote:
> Has anyone had any success implementing a Java websockets server on
> GAE?
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine for Java" group.
> To post to this group, send email to google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to 
> google-appengine-java+unsubscr...@googlegroups.com.
> For more options, visit this group at 
> http://groups.google.com/group/google-appengine-java?hl=en.
>
>



-- 
Ikai Lan
Developer Programs Engineer, Google App Engine
http://googleappengine.blogspot.com | http://twitter.com/app_engine

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] JPA enhancement problem (DataNucleus)

2010-03-12 Thread Rajeev Dayal
Hi there,

Can you provide the full error message that you're seeing when you get this
error?

Also, can you navigate to the Error Log (Window -> Show View -> Error Log)
and see if there are any errors related to this problem listed there?


Thanks,
Rajeev

On Thu, Mar 11, 2010 at 2:21 PM, Sekhar  wrote:

> I'm using the Eclipse Google plugin, and every once in a while after a
> build I get the dreaded "this class is not enhanced!" errors for all
> my entities (even when I don't edit any of them). Any idea why this
> is? If I touch the files, they get built/enhanced again fine, but this
> is getting to be a real annoyance. I'd appreciate any pointers you can
> give!
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Re: Flash arcade game GAE based

2010-03-12 Thread Ikai L (Google)
I don't know of any. You could try a search for a cheap VPS. I like
Slicehost and Linode for these sorts of things, but the smallest plans
are $20 a month and you'll have to configure your own stack.

I understand your disappointment, but we do document very clearly that
App Engine is built for web applications with support for email and
XMPP. Our goal is to try to do as much as we can within this domain,
and it's not apparent that allowing raw socket access fits into this plan
(web sockets and long-polling definitely do, but they are not what you
are looking for here).

On Thu, Mar 11, 2010 at 12:24 PM, Ahmed Khalifa  wrote:
> thanks for the great help ..
> the problem is that GAE provided an excellent online developing
> platform that allowed me to upload my code and update the versions of
> the game and test it on a daily basis .. it was a great disappointment
> to know that sandbox restrictions prevent me from opening a socket but
> any how .. i just need to move to another platform ..
> the problem is, it seems that my server will need less restrictions
> than the free hosts usually provide ..
> i would really appreciate it if you can help me with a start .. if
> there's any website that hosts web applications from the type of my
> game for free (at least in the beginning of deployment ) i would be
> very happy to know about it ..
> best regards,
> A. Khalifa
>
> On Mar 8, 10:50 pm, nicolas melendez  wrote:
>> Hi, there are different types of games; first you should know which
>> category your game is in:
>>
>> 1) Real time, like Quake: you need a socket connection - TCP when you want
>> reliable data, and also UDP when reliable data is not important and will
>> change soon, like position on a map. So GAE won't help you; HTTP and also
>> XMPP are very slow for that kind of game, where 500ms is an eternity.
>>
>> 2) Games with turns, like a chess game. Here GAE can help you with HTTP or
>> XMPP.
>> I am making a game based on turns; I have chosen the XMPP way, but I haven't
>> finished it yet, so I can't say it was the right choice. But for the moment I can
>> say that your main difficult task will be time optimization with the datastore.
>> Also you should know that XMPP works right in deployment, but in
>> development you can't test your application. Here you need to do some hack to
>> simulate XMPP responses.
>>
>> 3) Web-based games, which are very static and where time isn't important; here GAE
>> can also help you. An example is http://www.mafia-family.com.
>>
>> Hope I help.
>> NM
>>
>> On Mon, Mar 8, 2010 at 5:11 PM, Ikai L (Google)  wrote:
>>
>> > HTTP is just not a great protocol for real time games. Even with XMPP,
>> > there's a chance outgoing responses will be queued and delayed for a
>> > bit as capacity requires.
>>
>> > If you want it to be truly real time, you'll have to develop your own
>> > server that is capable of maintaining an open socket connection.
>> > Trying to fit client/server communications in a real time game into
>> > the HTTP model is like trying to watch a movie by having your friend
>> > record it on his camera phone and MMSing it to you as quickly as
>> > possible.
>>
>> > On Mon, Mar 8, 2010 at 9:08 AM, Ahmed Khalifa 
>> > wrote:
>> > > I have to tell you that i had to slow down the rate by which flash
>> > > sends requests to the server in order to give GAE some time to respond
>> > > which consequently kills the real-time game play intended ..
>> > > after all, even if GAE was not a good choice for a semi-real time game
>> > > can anyone please give me valid reasons for why to disregard GAE??
>> > > many people are keeping telling me that but no one is giving me real
>> > > reasons for why .. plus, i have been really distracted by the amount
>> > > of suggestions that do not converge to one direction .. some are
>> > > suggesting using XMPP, others say long polling is the way, some say
>> > > GraniteDS is a valid option along with memcache .. I am really
>> > > confused about the decision to be taken ..
>> > > regards,
>> > > A. Khalifa
>>
>> > > On Mar 8, 6:21 pm, Robert Lancer  wrote:
>> > >> Yeah, I certainly would not use GAE Java for anything that has to be
>> > >> semi real time. Have you checked out Red5.org or Web Orb at
>> > >> themidnightcoders.com? Those are designed to work directly with
>> > >> Flash.
>>
>> > >> On Mar 8, 10:56 am, Ahmed Khalifa  wrote:
>>
>> > >> > thanks a lot ..
>> > >> > however, i am still doubtful about the source of latency .. is it GAE
>> > >> > itself not supporting a certain feature that allows real-time response
>> > >> > or it is something that i lacked ..
>> > >> > thanks in advance ..
>>
>> > >> > On Mar 4, 3:27 pm, Toby  wrote:
>>
>> > >> > > Hi Ahmed,
>>
>> > >> > > take a look at GraniteDS. It is a bit like blazeds with the ability
>> > to
>> > >> > > push and synchronize data between multiple clients. It uses a very
>> > >> > > efficient serialization and it runs on GAE:
>> >http://graniteds.blogspot.com/2009/04/graniteds-20-on-google-app-engi..

[appengine-java] Re: How do I write data in my Google App Engine Datastore to com.google.appengine.api.datastore.Text

2010-03-12 Thread Jake
Hey,

I presume setMethod() refers to a getter/setter.  So, your persisted
class would look like:

@Persistent
Text text;   // com.google.appengine.api.datastore.Text

public void setText(String s) {
   this.text = new Text(s);
}

public String getText() {
   return this.text.getValue();
}
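
A quick usage sketch (entity and variable names are assumed); Text is what gets
you past the 500-character limit on indexed String properties, but note that
Text values are not indexed, so you can't filter queries on them:

MyEntity e = new MyEntity();
e.setText(reallyLongString);   // stored as com.google.appengine.api.datastore.Text
pm.makePersistent(e);          // JDO persist
String back = e.getText();     // plain String again on the way out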

The App Engine API is your friend:  
http://code.google.com/appengine/docs/java/javadoc/

Jake

On Mar 11, 8:37 pm, Tristan  wrote:
> Lloyd,
>
> String reallyLong = "It was the best of times, it was the worst of
> times. (...) ..";
>
> Text myText = new Text(reallyLong);
>
> I don't understand your reference to "setMethod()".
>
> Cheers!
>
> On Mar 10, 10:53 pm, Lloyd  wrote:
>
> > I have a persistent object with a string property that is often over
> > 500 characters. Google App Engine says I need to save it as a
> > com.google.appengine.api.datastore.Text.
>
> > How do I either convert a String type to a
> > com.google.appengine.api.datastore.Text type so I can use a
> > setMethod() on the property, or otherwise get my long string data into
> > that persistent value?

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Re: Cryptography on App Engine

2010-03-12 Thread Spines
I ended up just using javax.crypto. If I try out Bouncy Castle in the
future I'll post the results here.
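
For anyone finding this later, a minimal javax.crypto sketch (plain JCE,
nothing App Engine specific; the key handling is for illustration only):

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Inside a method declared to throw Exception:
SecretKey key = KeyGenerator.getInstance("AES").generateKey();
Cipher cipher = Cipher.getInstance("AES");     // AES/ECB/PKCS5Padding by default
cipher.init(Cipher.ENCRYPT_MODE, key);
byte[] ciphertext = cipher.doFinal("some plaintext".getBytes("UTF-8"));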

On Mar 12, 10:24 am, "Ikai L (Google)"  wrote:
> It's not on our "Will it play" page:
>
> http://groups.google.com/group/google-appengine-java/web/will-it-play...
>
> This doesn't mean it won't work, it just means no one has tried it.
> Can you try it and let us know so we can update the page?
>
> On Thu, Mar 11, 2010 at 8:44 PM, Spines  wrote:
> > I think bouncy castle is a good library to use, does anyone know if it
> > works on the app engine?
>
> > --
> > You received this message because you are subscribed to the Google Groups 
> > "Google App Engine for Java" group.
> > To post to this group, send email to google-appengine-j...@googlegroups.com.
> > To unsubscribe from this group, send email to 
> > google-appengine-java+unsubscr...@googlegroups.com.
> > For more options, visit this group 
> > athttp://groups.google.com/group/google-appengine-java?hl=en.
>
> --
> Ikai Lan
> Developer Programs Engineer, Google App 
> Enginehttp://googleappengine.blogspot.com|http://twitter.com/app_engine

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Re: Objectify - Twig - approaches to persistence

2010-03-12 Thread Jeff Schnitzer
Scott:  Nacho is the author of SimpleDS.

Schema migration is something that Hibernate and RDBMSes actually do
rather poorly.  The typical process is to prepare a series of scripts
(ALTER TABLE and then any relevant data transmogrification), shut down
the application, run the scripts, then bring up new code that
understands the new schema.  Hibernate may help provide the scripts,
but it won't prevent your downtime.

With large data volumes and significant changes, this process could
take a very very long time.

The schemaless nature of AppEngine makes migration both easier and
harder.  Any entity can have any shape, but there are no bulk
operations.  Furthermore, iterating through your dataset and resaving
each entity is a *very slow* operation.  Batch jobs on my entities
(which have very few indexes) convert less than 100 instances per
second.  Want to convert 1 million entities?  Nearly 3 hours.  10
million?  100 million?  The math is easy.

We designed the Objectify schema migration tools with the assumptions:

 1) You have vast quantities of data
 2) The data is changing rapidly
 3) Any amount of downtime is unacceptable

All transformations within a single entity are pretty easy and
straightforward using @AlsoLoad on a field (lets you load the old
name as well as the new) or on a method parameter (lets you obtain the
raw data in whatever format you need, munge it, and save it to
whatever set of fields you care about).
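
For example, a plain field rename (field names here are invented) looks roughly like:

class Person {
    @Id Long id;
    @AlsoLoad("name") String fullName;   // loads the legacy "name" property, saves as "fullName"
}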

Condensing multiple entities into a single entity is also fairly
straightforward using @PostLoad and @PrePersist.  I'll provide an
example if you want.
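
A rough sketch of what that can look like (entity and field names are invented,
and it assumes the old find()-returns-null lookup; treat it as an illustration,
not the canonical recipe):

class Person {
    @Id Long id;
    Key<Profile> profile;   // legacy reference to a separate Profile entity
    String bio;             // new field that used to live on Profile

    @PostLoad void onLoad() {
        // Fold the legacy Profile data into this entity the first time it is loaded.
        if (bio == null && profile != null) {
            Profile p = ObjectifyService.begin().find(profile);
            if (p != null) {
                bio = p.bio;
            }
        }
    }

    @PrePersist void onSave() {
        // Once the data has been copied, drop the legacy reference.
        if (bio != null) {
            profile = null;
        }
    }
}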

Splitting apart one entity into multiple is by far the hardest
transformation, and there isn't any one solution that works for
everyone.  However, I have done it.  Here is one strategy:

 Starting Entity 
class Person {
    @Id Long id;
    String addressStreet;
    String addressCity;
}

Goal: Create an Address entity and add a foreign key reference in Person.

 Translating Entity 
class Address {
    @Id Long id;
    String street;
    String city;
}
class Person {
    @Id Long id;
    @LoadOnly String addressStreet;
    @LoadOnly String addressCity;
    Key<Address> address;

    @PrePersist void onSave() {
        if (addressStreet != null || addressCity != null) {
            Address addy = new Address(addressStreet, addressCity);
            address = ObjectifyService.begin().put(addy);
        }
    }
}

You might also need to implement the getAddressStreet() and
getAddressCity() methods with a check on the address Key and a load of
the Address object.
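
Roughly like this (sketch only, again assuming the old find()-returns-null lookup):

public String getAddressStreet() {
    if (addressStreet != null) {
        return addressStreet;             // old-format data still on this entity
    }
    if (address == null) {
        return null;
    }
    Address addy = ObjectifyService.begin().find(address);
    return (addy == null) ? null : addy.street;
}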

It's somewhat convoluted, but it does work.  Another alternative,
which gets rid of the complicated getAddressStreet() implementation, is
to perform the conversion in onLoad():

class Person {
    @Id Long id;
    @LoadOnly String addressStreet;
    @LoadOnly String addressCity;
    Key<Address> address;

    @PostLoad void onLoad() {
        if (addressStreet != null || addressCity != null) {
            Address addy = new Address(addressStreet, addressCity);
            address = ObjectifyService.begin().put(addy);
            ObjectifyService.begin().put(this);
        }
    }
}

It really depends on the shape of your data and your query profile -
if your app reads a *lot* of entities, you might find the performance
profile of convert-on-save to be better.  Keep in mind that this type
of conversion is a "worst case scenario".  Most other schema
migrations are fairly simple.

Jeff

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] log4j init fails

2010-03-12 Thread Rajeev Dayal
Can you post a copy of your log4j.properties file?
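
For reference, a minimal log4j.properties (standard log4j 1.2 syntax; the
appender and pattern below are only an example) that defines the root appender
the "No appenders could be found" warning refers to:

log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
# Optional: quiet down DataNucleus logging
log4j.logger.DataNucleus=WARN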

On Thu, Mar 11, 2010 at 12:45 PM, AJ Chen  wrote:

> yes, log4j.properties is copied by the build. the app uses it.  the warning
> message is weird.  thanks.
>
>
> On Thu, Mar 11, 2010 at 8:10 AM, Rajeev Dayal  wrote:
>
>> If you have your log4j.properties file at the root of your source tree, it
>> should automatically be copied over to war/WEB-INF/classes whenever Eclipse
>> performs a build of your project; you should not have to copy it over
>> manually.
>>
>> I'm not sure why you're getting the error with regard to
>> Datanucleus.Connection; I've added Don to this thread; he may have some
>> insight into this.
>>
>>
>> On Thu, Mar 11, 2010 at 3:33 AM, AJ Chen  wrote:
>>
>>> I have the default log4j.properties in WEB-INF/classes dir. but the
>>> warning always comes up. the file is visible because I can change the log
>>> level to ERROR to get rid of the warning.
>>> -aj
>>>
>>>
>>>
>>> On Fri, Feb 19, 2010 at 7:24 PM, Rusty Wright 
>>> wrote:
>>>
 I think you can simply put the log4j.properties file in the
 WEB-INF/classes dir and you don't need any appengine-web.xml stuff for it.
  Log4j looks for its configuration file "on the classpath" which means it
 looks in WEB-INF/classes (and also in all of the jars in the lib 
 directory).


 AJ Chen wrote:

> I have  log4j config in appengine-web.xml,
> 
> value="WEB-INF/logging.properties"/>
> value="file:WEB-INF/classes/log4j.properties"/>
> value="WEB-INF/monitor.properties"/>
>
>   but GAE still complains about it:
> log4j:WARN No appenders could be found for logger
> (DataNucleus.Connection).
> log4j:WARN Please initialize the log4j system properly.
>
> Is there anything else that should be set?
>
> thanks,
> -aj
>
> --
> You received this message because you are subscribed to the Google
> Groups "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>

 --
 0x2B | ~0x2b  --  Hamlet

 --
 You received this message because you are subscribed to the Google
 Groups "Google App Engine for Java" group.
 To post to this group, send email to
 google-appengine-j...@googlegroups.com.
 To unsubscribe from this group, send email to
 google-appengine-java+unsubscr...@googlegroups.com
 .
 For more options, visit this group at
 http://groups.google.com/group/google-appengine-java?hl=en.


>>>
>>>
>>> --
>>> AJ Chen, PhD
>>> Chair, Semantic Web SIG, sdforum.org
>>> http://web2express.org
>>> twitter @web2express
>>> Palo Alto, CA, USA
>>> 650-283-4091
>>> *Building social media monitoring pipeline, and connecting social
>>> customers to CRM*
>>>
>>>  --
>>> You received this message because you are subscribed to the Google Groups
>>> "Google App Engine for Java" group.
>>> To post to this group, send email to
>>> google-appengine-j...@googlegroups.com.
>>> To unsubscribe from this group, send email to
>>> google-appengine-java+unsubscr...@googlegroups.com
>>> .
>>> For more options, visit this group at
>>> http://groups.google.com/group/google-appengine-java?hl=en.
>>>
>>
>>  --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine for Java" group.
>> To post to this group, send email to
>> google-appengine-j...@googlegroups.com.
>> To unsubscribe from this group, send email to
>> google-appengine-java+unsubscr...@googlegroups.com
>> .
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine-java?hl=en.
>>
>
>
>
> --
> AJ Chen, PhD
> Chair, Semantic Web SIG, sdforum.org
> http://web2express.org
> twitter @web2express
> Palo Alto, CA, USA
> 650-283-4091
> *Building social media monitoring pipeline, and connecting social customers
> to CRM*
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Memory Leak in the EntityManagerFactory?

2010-03-12 Thread David Fuelling
I have a JUnit test class that is attempting to test some JPA
datastore "create" operations, and I'm getting results that *seem* to
indicate a memory leak in the EntityManagerFactory (?)  Basically, if
I use "test1a" (see below), the heap in use by the JUnit test process
continually increases until the JUnit test fails with an OutOfMemory
error.  Test1b suffers from no such problem.

I would not expect this type of behavior from test1a because even
though I'm creating a new EntityManager upon every for-loop iteration,
that "em" should go away after every for-loop iteration since the
variable reference is replaced with a new EntityManager each time.

Now, one might argue that my test is just going too fast, and the GC
isn't getting a chance to Garbage Collect.  However, Test1a takes a
pretty long time to execute on my machine (> 120 seconds), so I
*should* be getting some GC, right?  Unless the EntityManagerFactory
is holding onto a reference to each created EntityManager?

Any input here would be much appreciated...

Thanks!

david

ps - my "UserImpl" is a standard JPA entity.


///
//Begin JUnit Test #1a
///

User user = null;
EntityManager em = null;
for (int i = 0; i < 5000; i++)
{
  //See how I get an em here:
http://code.google.com/appengine/docs/java/datastore/usingjpa.html#Getting_an_EntityManager_Instance
  em = EMF.get().createEntityManager();
  user = new UserImpl("test" + i);
  em.persist(user);
  em.close();
}

///
//End Test #1a
///

///
//Begin JUnit Test #1b
///

User user = null;
EntityManager em = EMF.get().createEntityManager();
for(int i = 0; i < 5000; i++)
{
  user = new UserImpl("test" + i);
  em.persist(user);
}
em.close();

///
//End Test #1b
///

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Re: Error when deleting entities - Id cannot be zero

2010-03-12 Thread Pavel Byles
Anyone?

On Fri, Mar 12, 2010 at 8:41 AM, Pavel Byles  wrote:

> I'm trying to delete all entities in my datastore but I receive the
> following error:
>
> javax.jdo.JDOUserException: One or more instances could not be deleted...
> NestedThrowablesStackTrace:
> java.lang.IllegalArgumentException: id cannot be zero...
>
> Caused by:java.lang.IllegalArgumentException: id cannot be zero
>
>
> For the following code:
>
>   public void deleteAllMyType() {
> PersistenceManager pm = PMF.get().getPersistenceManager();
> Query query = pm.newQuery(MyType.class);
> try {
>   query.deletePersistentAll();
>   //List clist = (List) query.execute();
>   //pm.deletePersistentAll(clist); // This doesn't work either
> } finally {
>   query.closeAll();
>   pm.close();
> }
>   }
>
> My entity class looks like this:
>
> @PersistenceCapable(identityType = IdentityType.APPLICATION)//, detachable
> = "false")
> public class MyType implements Serializable {
>   @PrimaryKey
>   @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
>   private Long id;
>
>   @Persistent
>   private String name;
>   .
>   .
>   .
> }
>
> --
> -Pav
>



-- 
-Pav

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Memory Leak in the EntityManagerFactory?

2010-03-12 Thread Max Ross (Google)
Thanks for the report David, this certainly seems suspicious.  There is at
least one memory leak I'm aware of but it's related to transactions so
that's probably not what you're bumping into.  Have you tried taking a heap
dump to see what exactly is building up?
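
One cheap way to watch the growth before reaching for a full heap dump (a
sketch reusing the loop from the original post; Runtime numbers are only
approximate):

Runtime rt = Runtime.getRuntime();
for (int i = 0; i < 5000; i++) {
    EntityManager em = EMF.get().createEntityManager();
    em.persist(new UserImpl("test" + i));
    em.close();
    if (i % 500 == 0) {
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println("iteration " + i + ": ~" + usedMb + " MB in use");
    }
}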

On Fri, Mar 12, 2010 at 1:27 PM, David Fuelling  wrote:

> I have a JUnit test class that is attempting to test some JPA
> datastore "create" operations, and I'm getting results that *seem* to
> indicate a memory leak in the EntityManagerFactory (?)  Basically, if
> I use "test1a" (see below), the heap in use by the JUnit test process
> continually increases until the JUnit test fails with an OutOfMemory
> error.  Test1b suffers from no such problem.
>
> I would not expect this type of behavior from test1a because even
> though I'm creating a new EntityManager upon every for-loop iteration,
> that "em" should go away after every for-loop iteration since the
> variable reference is replaced with a new EntityManager each time.
>
> Now, one might argue that my test is just going too fast, and the GC
> isn't getting a chance to Garbage Collect.  However, Test1a takes a
> pretty long time to execute on my machine (> 120 seconds), so I
> *should* be getting some GC, right?  Unless the EntityManagerFactory
> is holding onto a reference to each created EntityManager?
>
> Any input here would be much appreciated...
>
> Thanks!
>
> david
>
> ps - my "UserImpl" is a standard JPA entity.
>
>
> ///
> //Begin JUnit Test #1a
> ///
>
> User user = null;
> EntityManager em = null;
> for (int i = 0; i < 5000; i++)
> {
>  //See how I get an em here:
>
> http://code.google.com/appengine/docs/java/datastore/usingjpa.html#Getting_an_EntityManager_Instance
>  em = EMF.get().createEntityManager();
>  user = new UserImpl("test" + i);
>  em.persist(user);
>  em.close();
> }
>
> ///
> //End Test #1a
> ///
>
> ///
> //Begin JUnit Test #1b
> ///
>
> User user = null;
> EntityManager em = EMF.get().createEntityManager();
> for(int i = 0; i < 5000; i++)
> {
>  user = new UserImpl("test" + i);
>  em.persist(user);
> }
> em.close();
>
> ///
> //End Test #1b
> ///
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Re: Error when deleting entities - Id cannot be zero

2010-03-12 Thread Max Ross (Google)
What version of the sdk are you using?

On Fri, Mar 12, 2010 at 1:36 PM, Pavel Byles  wrote:

> Anyone?
>
>
> On Fri, Mar 12, 2010 at 8:41 AM, Pavel Byles  wrote:
>
>> I'm trying to delete all entities in my datastore but I receive the
>> following error:
>>
>> javax.jdo.JDOUserException: One or more instances could not be deleted...
>> NestedThrowablesStackTrace:
>> java.lang.IllegalArgumentException: id cannot be zero...
>>
>>
>> Caused by:java.lang.IllegalArgumentException: id cannot be zero
>>
>>
>> For the following code:
>>
>>   public void deleteAllMyType() {
>> PersistenceManager pm = PMF.get().getPersistenceManager();
>> Query query = pm.newQuery(MyType.class);
>> try {
>>   query.deletePersistentAll();
>>   //List clist = (List) query.execute();
>>   //pm.deletePersistentAll(clist); // This doesn't work either
>> } finally {
>>   query.closeAll();
>>   pm.close();
>> }
>>   }
>>
>> My entity class looks like this:
>>
>> @PersistenceCapable(identityType = IdentityType.APPLICATION)//,
>> detachable = "false")
>> public class MyType implements Serializable {
>>   @PrimaryKey
>>   @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
>>   private Long id;
>>
>>   @Persistent
>>   private String name;
>>   .
>>   .
>>   .
>> }
>>
>> --
>> -Pav
>>
>
>
>
> --
> -Pav
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Re: Error when deleting entities - Id cannot be zero

2010-03-12 Thread Pavel Byles
GWT: 2.0.3

On Fri, Mar 12, 2010 at 4:38 PM, Max Ross (Google) <
maxr+appeng...@google.com > wrote:

> What version of the sdk are you using?
>
> On Fri, Mar 12, 2010 at 1:36 PM, Pavel Byles  wrote:
>
>> Anyone?
>>
>>
>> On Fri, Mar 12, 2010 at 8:41 AM, Pavel Byles wrote:
>>
>>> I'm trying to delete all entities in my datastore but I receive the
>>> following error:
>>>
>>> javax.jdo.JDOUserException: One or more instances could not be deleted...
>>> NestedThrowablesStackTrace:
>>> java.lang.IllegalArgumentException: id cannot be zero...
>>>
>>>
>>>
>>>
>>> Caused by:java.lang.IllegalArgumentException: id cannot be zero
>>>
>>>
>>> For the following code:
>>>
>>>   public void deleteAllMyType() {
>>> PersistenceManager pm = PMF.get().getPersistenceManager();
>>> Query query = pm.newQuery(MyType.class);
>>> try {
>>>   query.deletePersistentAll();
>>>   //List clist = (List) query.execute();
>>>   //pm.deletePersistentAll(clist); // This doesn't work either
>>> } finally {
>>>   query.closeAll();
>>>   pm.close();
>>> }
>>>   }
>>>
>>> My entity class looks like this:
>>>
>>> @PersistenceCapable(identityType = IdentityType.APPLICATION)//,
>>> detachable = "false")
>>> public class MyType implements Serializable {
>>>   @PrimaryKey
>>>   @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
>>>   private Long id;
>>>
>>>   @Persistent
>>>   private String name;
>>>   .
>>>   .
>>>   .
>>> }
>>>
>>> --
>>> -Pav
>>>
>>
>>
>>
>> --
>> -Pav
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine for Java" group.
>> To post to this group, send email to
>> google-appengine-j...@googlegroups.com.
>> To unsubscribe from this group, send email to
>> google-appengine-java+unsubscr...@googlegroups.com
>> .
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine-java?hl=en.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>



-- 
-Pav

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Re: Error when deleting entities - Id cannot be zero

2010-03-12 Thread Max Ross (Google)
Which version of the App Engine SDK?


On Fri, Mar 12, 2010 at 1:43 PM, Pavel Byles  wrote:

> GWT: 2.0.3
>
> On Fri, Mar 12, 2010 at 4:38 PM, Max Ross (Google) <
> maxr+appeng...@google.com > wrote:
>
>> What version of the sdk are you using?
>>
>> On Fri, Mar 12, 2010 at 1:36 PM, Pavel Byles wrote:
>>
>>> Anyone?
>>>
>>>
>>> On Fri, Mar 12, 2010 at 8:41 AM, Pavel Byles wrote:
>>>
 I'm trying to delete all entities in my datastore but I receive the
 following error:

 javax.jdo.JDOUserException: One or more instances could not be deleted...
 NestedThrowablesStackTrace:
 java.lang.IllegalArgumentException: id cannot be zero...





 Caused by:java.lang.IllegalArgumentException: id cannot be zero


 For the following code:

   public void deleteAllMyType() {
 PersistenceManager pm = PMF.get().getPersistenceManager();
 Query query = pm.newQuery(MyType.class);
 try {
   query.deletePersistentAll();
   //List clist = (List) query.execute();
   //pm.deletePersistentAll(clist); // This doesn't work either
 } finally {
   query.closeAll();
   pm.close();
 }
   }

 My entity class looks like this:

 @PersistenceCapable(identityType = IdentityType.APPLICATION)//,
 detachable = "false")
 public class MyType implements Serializable {
   @PrimaryKey
   @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
   private Long id;

   @Persistent
   private String name;
   .
   .
   .
 }

 --
 -Pav

>>>
>>>
>>>
>>> --
>>> -Pav
>>>
>>> --
>>> You received this message because you are subscribed to the Google Groups
>>> "Google App Engine for Java" group.
>>> To post to this group, send email to
>>> google-appengine-j...@googlegroups.com.
>>> To unsubscribe from this group, send email to
>>> google-appengine-java+unsubscr...@googlegroups.com
>>> .
>>> For more options, visit this group at
>>> http://groups.google.com/group/google-appengine-java?hl=en.
>>>
>>
>>  --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine for Java" group.
>> To post to this group, send email to
>> google-appengine-j...@googlegroups.com.
>> To unsubscribe from this group, send email to
>> google-appengine-java+unsubscr...@googlegroups.com
>> .
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine-java?hl=en.
>>
>
>
>
> --
> -Pav
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Re: Error when deleting entities - Id cannot be zero

2010-03-12 Thread Pavel Byles
1.3.1

On Fri, Mar 12, 2010 at 4:51 PM, Max Ross (Google) <
maxr+appeng...@google.com > wrote:

> Which version of the App Engine SDK?
>
>
>
> On Fri, Mar 12, 2010 at 1:43 PM, Pavel Byles  wrote:
>
>> GWT: 2.0.3
>>
>> On Fri, Mar 12, 2010 at 4:38 PM, Max Ross (Google) <
>> maxr+appeng...@google.com > wrote:
>>
>>> What version of the sdk are you using?
>>>
>>> On Fri, Mar 12, 2010 at 1:36 PM, Pavel Byles wrote:
>>>
 Anyone?


 On Fri, Mar 12, 2010 at 8:41 AM, Pavel Byles wrote:

> I'm trying to delete all entities in my datastore but I receive the
> following error:
>
> javax.jdo.JDOUserException: One or more instances could not be deleted...
> NestedThrowablesStackTrace:
> java.lang.IllegalArgumentException: id cannot be zero...
>
>
>
>
>
>
>
> Caused by:java.lang.IllegalArgumentException: id cannot be zero
>
>
> For the following code:
>
>   public void deleteAllMyType() {
> PersistenceManager pm = PMF.get().getPersistenceManager();
> Query query = pm.newQuery(MyType.class);
> try {
>   query.deletePersistentAll();
>   //List clist = (List) query.execute();
>   //pm.deletePersistentAll(clist); // This doesn't work either
> } finally {
>   query.closeAll();
>   pm.close();
> }
>   }
>
> My entity class looks like this:
>
> @PersistenceCapable(identityType = IdentityType.APPLICATION)//,
> detachable = "false")
> public class MyType implements Serializable {
>   @PrimaryKey
>   @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
>   private Long id;
>
>   @Persistent
>   private String name;
>   .
>   .
>   .
> }
>
> --
> -Pav
>



 --
 -Pav

 --
 You received this message because you are subscribed to the Google
 Groups "Google App Engine for Java" group.
 To post to this group, send email to
 google-appengine-j...@googlegroups.com.
 To unsubscribe from this group, send email to
 google-appengine-java+unsubscr...@googlegroups.com
 .
 For more options, visit this group at
 http://groups.google.com/group/google-appengine-java?hl=en.

>>>
>>>  --
>>> You received this message because you are subscribed to the Google Groups
>>> "Google App Engine for Java" group.
>>> To post to this group, send email to
>>> google-appengine-j...@googlegroups.com.
>>> To unsubscribe from this group, send email to
>>> google-appengine-java+unsubscr...@googlegroups.com
>>> .
>>> For more options, visit this group at
>>> http://groups.google.com/group/google-appengine-java?hl=en.
>>>
>>
>>
>>
>> --
>> -Pav
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Google App Engine for Java" group.
>> To post to this group, send email to
>> google-appengine-j...@googlegroups.com.
>> To unsubscribe from this group, send email to
>> google-appengine-java+unsubscr...@googlegroups.com
>> .
>> For more options, visit this group at
>> http://groups.google.com/group/google-appengine-java?hl=en.
>>
>
>  --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine for Java" group.
> To post to this group, send email to
> google-appengine-j...@googlegroups.com.
> To unsubscribe from this group, send email to
> google-appengine-java+unsubscr...@googlegroups.com
> .
> For more options, visit this group at
> http://groups.google.com/group/google-appengine-java?hl=en.
>



-- 
-Pav

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] DataNucleus: java.lang.IllegalArgumentException: Invalid MTJ Project

2010-03-12 Thread haole
There have been numerous posts on the subject of a
NullPointerException popping up when the DataNucleus Enhancer runs. At
one point, I was able to make this problem go away, and now, it is
AGAIN preventing me from running my project.

I've seen about 5 different explanations for what's happening. While I
can't reproduce the conditions, I do know that I removed all
references to PersistenceCapable in my code and it got rid of the
problem the last time. Something else is causing the problem this
time.

Is this going to be fixed? Shouldn't there be more verbose exception
handling within the DataNucleus enhancer or plugin as to why the
enhancer is failing?

Message in Eclipse error log:
An internal error occurred during: "DataNucleus Enhancer"

Stack trace:
java.lang.NullPointerException
at
com.google.gdt.eclipse.core.ProcessUtilities.cleanupProcess(ProcessUtilities.java:
367)
at
com.google.gdt.eclipse.core.ProcessUtilities.launchProcessAndActivateOnError(ProcessUtilities.java:
271)
at
com.google.appengine.eclipse.core.orm.enhancement.EnhancerJob.runInWorkspace(EnhancerJob.java:
82)
at
org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:
38)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)

Session data:
eclipse.buildId=M20100211-1343
java.version=1.6.0_18
java.vendor=Sun Microsystems Inc.
BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_US
Command-line arguments:  -os win32 -ws win32 -arch x86

And the entries from workspace/.metadata/log:
!SESSION 2010-03-10 06:29:11.124
---
eclipse.buildId=M20100211-1343
java.version=1.6.0_18
java.vendor=Sun Microsystems Inc.
BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=en_US
Command-line arguments:  -os win32 -ws win32 -arch x86

This is a continuation of log file C:\Users\Joe\workspace\.metadata
\.bak_0.log
Created Time: 2010-03-10 08:56:31.234

!ENTRY org.eclipse.mtj.core 4 0 2010-03-10 08:56:31.234
!MESSAGE Invalid MTJ Project.
!STACK 0
java.lang.IllegalArgumentException: Invalid MTJ Project.
at
org.eclipse.mtj.internal.core.build.MTJBuildProperties.(Unknown
Source)
at
org.eclipse.mtj.internal.core.build.MTJBuildProperties.getBuildProperties(Unknown
Source)
at
org.eclipse.mtj.internal.core.util.MTJBuildPropertiesResourceListener.updateBuildProperties(Unknown
Source)
at
org.eclipse.mtj.internal.core.util.MTJBuildPropertiesResourceListener.resourceChanged(Unknown
Source)
at org.eclipse.core.internal.events.NotificationManager
$2.run(NotificationManager.java:291)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at
org.eclipse.core.internal.events.NotificationManager.notify(NotificationManager.java:
285)
at
org.eclipse.core.internal.events.NotificationManager.broadcastChanges(NotificationManager.java:
149)
at
org.eclipse.core.internal.resources.Workspace.broadcastPostChange(Workspace.java:
313)
at
org.eclipse.core.internal.resources.Workspace.checkpoint(Workspace.java:
367)
at org.eclipse.ltk.core.refactoring.PerformChangeOperation
$1.run(PerformChangeOperation.java:265)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:
1800)
at
org.eclipse.ltk.core.refactoring.PerformChangeOperation.executeChange(PerformChangeOperation.java:
308)
at
org.eclipse.ltk.internal.ui.refactoring.UIPerformChangeOperation.executeChange(UIPerformChangeOperation.java:
92)
at
org.eclipse.ltk.core.refactoring.PerformChangeOperation.run(PerformChangeOperation.java:
220)
at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:
1800)
at
org.eclipse.ltk.internal.ui.refactoring.WorkbenchRunnableAdapter.run(WorkbenchRunnableAdapter.java:
87)
at org.eclipse.jface.operation.ModalContext
$ModalContextThread.run(ModalContext.java:121)

!ENTRY org.eclipse.mtj.core 4 0 2010-03-10 09:02:39.870
!MESSAGE Invalid MTJ Project.
!STACK 0
java.lang.IllegalArgumentException: Invalid MTJ Project.
at
org.eclipse.mtj.internal.core.build.MTJBuildProperties.(Unknown
Source)
at
org.eclipse.mtj.internal.core.build.MTJBuildProperties.getBuildProperties(Unknown
Source)
at
org.eclipse.mtj.internal.core.util.MTJBuildPropertiesResourceListener.updateBuildProperties(Unknown
Source)
at
org.eclipse.mtj.internal.core.util.MTJBuildPropertiesResourceListener.resourceChanged(Unknown
Source)
at org.eclipse.core.internal.events.NotificationManager
$2.run(NotificationManager.java:291)
at org.eclipse.core.runtime.SafeRunner.run(SafeRunner.java:42)
at
org.eclipse.core.internal.events.NotificationManager.notify(NotificationManager.java:
285)
at
org.eclipse.core.internal.events.NotificationManager.broadcastChanges(NotificationManager.java:
149)
at
org.eclipse.core.internal.resources.Workspace.broadcastPostChange(Workspace.java:
313)
at
o

[appengine-java] Re: DataNucleus: java.lang.IllegalArgumentException: Invalid MTJ Project

2010-03-12 Thread haole
correction: I was able to solve the problem previously by removing
references to PersistenceAware (not PersistenceCapable)

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Re: DataNucleus: java.lang.IllegalArgumentException: Invalid MTJ Project

2010-03-12 Thread haole
Removing all of the jars from my user-defined library for gdata and
including only those that I need (core, client, calendar) seemed to
fix the problem.

I remember reading somewhere about how long classpaths can cause this
problem.

Is this going to be fixed? Is the complexity of my app going to be
limited by a limit on how long classpaths can be?!

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread Jeff Schnitzer
On Thu, Mar 11, 2010 at 8:56 PM, John Patterson  wrote:
>
>> Again, it is not my intention to say this feature is inherently wrong
>> or bad - but that it's not the great revolution that you make it out
>> to be.  It comes with a cost which the Objectify developers are not
>> currently willing to pay.
>
> I don't blame you.  It was a lot of effort to get right and it works very
> elegantly without the need for bytecode enhancement or dynamic proxies.

Incidentally, this feature is actually quite easy to implement in
Objectify.  If we did, I suspect we would avoid any "magic" - you
would always get an uninitialized entity and the user would need to
refresh() it (or batch refresh() it).  There would be no cascades.

It really is instructive to follow how this feature (in the form of
proxies) evolved in Hibernate (and JPA).  It started out, much like
your Activation annotations, being a static configuration on the
entity classes.  The problem is, sometimes you want more data to come
back and sometimes you want less data to come back.  So they added
FETCH to the HQL/EJBQL/JPAQL language.  I expect you will re-discover
all of these issues.

Jeff

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Cannot Install Plugin

2010-03-12 Thread the.cologne
Hi,

I'd like to try out the App Engine and set up a fresh Eclipse 3.5
install with a clean workspace.

But when I try to install the plugin I receive the following error
message:
eclipse.buildId=unknown
java.version=1.6.0_17
java.vendor=Sun Microsystems Inc.
BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=de_DE
Framework arguments:  -product org.eclipse.epp.package.jee.product
Command-line arguments:  -os win32 -ws win32 -arch x86 -product
org.eclipse.epp.package.jee.product


Error
Fri Mar 12 22:15:00 CET 2010
No repository found containing:
org.eclipse.update.feature,com.google.gwt.eclipse.sdkbundle.e35.feature.
2.0.3,2.0.3.v201002191036

Any thoughts or hints?

Thanks,
Thomas

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Re: Error when deleting entities - Id cannot be zero

2010-03-12 Thread thierry LE CONNIAT
Hi,
I have tried your function deleteAllMyType.
It works well; the differences are in my class of object:

@PersistenceCapable(identityType = IdentityType.APPLICATION)
public class Picture {
private static final Logger log =
Logger.getLogger(Picture.class.getName());

@PrimaryKey
private String fileName;

...
Picture doesn't implement Serializable and my key is not an id...



On 12 mar, 14:41, Pavel Byles  wrote:
> I'm trying to delete all entities in my datastore but I receive the
> following error:
>
> javax.jdo.JDOUserException: One or more instances could not be deleted...
> NestedThrowablesStackTrace:
> java.lang.IllegalArgumentException: id cannot be zero...
>
> Caused by:java.lang.IllegalArgumentException: id cannot be zero
>
> For the following code:
>
>   public void deleteAllMyType() {
>     PersistenceManager pm = PMF.get().getPersistenceManager();
>     Query query = pm.newQuery(MyType.class);
>     try {
>       query.deletePersistentAll();
>       //List clist = (List) query.execute();
>       //pm.deletePersistentAll(clist); // This doesn't work either
>     } finally {
>       query.closeAll();
>       pm.close();
>     }
>   }
>
> My entity class looks like this:
>
> @PersistenceCapable(identityType = IdentityType.APPLICATION)//, detachable =
> "false")
> public class MyType implements Serializable {
>   @PrimaryKey
>   @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
>   private Long id;
>
>   @Persistent
>   private String name;
>   .
>   .
>   .
>
> }
>
> --
> -Pav

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Re: JPA enhancement problem (DataNucleus)

2010-03-12 Thread thierry LE CONNIAT
Hi,
I had this problem too; try cleaning your project and rebuilding it.
Bye

On 12 mar, 19:34, Rajeev Dayal  wrote:
> Hi there,
>
> Can you provide the full error message that you're seeing when you get this
> error?
>
> Also, can you navigate to the Error Log (Window -> Show View -> Error Log)
> and see if there are any errors related to this problem listed there?
>
> Thanks,
> Rajeev
>
>
>
> On Thu, Mar 11, 2010 at 2:21 PM, Sekhar  wrote:
> > I'm using the Eclipse Google plugin, and every once in a while after a
> > build I get the dreaded "this class is not enhanced!" errors for all
> > my entities (even when I don't edit any of them). Any idea why this
> > is? If I touch the files, they get built/enhanced again fine, but this
> > is getting to be a real annoyance. I'd appreciate any pointers you can
> > give!
>
> > --
> > You received this message because you are subscribed to the Google Groups
> > "Google App Engine for Java" group.
> > To post to this group, send email to
> > google-appengine-j...@googlegroups.com.
> > To unsubscribe from this group, send email to
> > google-appengine-java+unsubscr...@googlegroups.com > unsubscr...@googlegroups.com>
> > .
> > For more options, visit this group at
> >http://groups.google.com/group/google-appengine-java?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread John Patterson


On 13 Mar 2010, at 05:52, Jeff Schnitzer wrote:

On Thu, Mar 11, 2010 at 8:56 PM, John Patterson  
 wrote:


I really don't see what you think is magical about an uninitialized
instance.  I repeat: by default this feature is off and all data is
loaded as expected.  No magic. No proxies.  Just simple plain POJOs.
Definitely some FUD in the air tonight.



It really is instructive to follow how this feature (in the form of
proxies) evolved in Hibernate (and JPA).


Proxies are a completely different beast.  If bytecode manipulation
were used, there would suddenly be serialization problems to worry
about.  That is why Twig uses pure plain POJOs with NO magic.  As
simple as possible.



It started out, much like
your Activation annotations, being a static configuration on the
entity classes.  The problem is, sometimes you want more data to come
back and sometimes you want less data to come back.


This is controllable in *very* fine detail by Activation settings.   
Any class can have a default activation depth, any field can have an  
activation depth and the datastore as a whole can set the depth for  
any individual operation.  This gives *complete* control over what is  
loaded and when.
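
To sketch the shape of it (the annotation below is invented purely to
illustrate the idea here - it is not Twig's documented API, so check the
docs for the real configuration):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;

// Hypothetical per-class / per-field activation depth.
@Target({ElementType.TYPE, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@interface ActivationDepth {
    int value();
}

@ActivationDepth(1)              // class default: load direct fields only
class BlogPost {
    String title;

    @ActivationDepth(0)          // leave this branch unactivated
    Author author;

    @ActivationDepth(2)          // follow comments -> commenter
    List<Comment> comments;
}

class Author { String name; }

class Comment {
    Author commenter;
    String text;
}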



So they added
FETCH to the HQL/EJBQL/JPAQL language.  I expect you will re-discover
all of these issues.


Your comparison to Hibernate is not really accurate.  You see, the
difference is that Hibernate was built to optimise working with RDBMS
systems that can do JOINs and therefore fetch data in bulk.  The
datastore cannot do JOINs and probably never will, so options like
FETCH are just not required.


Activation is actually a concept borrowed from Db4o which has been  
tried and tested over many years.


John

--
You received this message because you are subscribed to the Google Groups "Google 
App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



[appengine-java] Re: JPA enhancement problem (DataNucleus)

2010-03-12 Thread Sekhar
Sure, below is the trace for one of the entities (it throws similar
dumps for each of them, but I'm just giving one since they are all
similar).

E 03-12 12:08PM 38.366
org.datanucleus.metadata.MetaDataManager initialiseFileMetaDataForUse: Found Meta-Data for class com.allurefx.herdspot.server.data.Participant but this class is not enhanced!! Please enhance the class before running DataNucleus.
org.datanucleus.exceptions.NucleusUserException: Found Meta-Data for class com.allurefx.herdspot.server.data.Participant but this class is not enhanced!! Please enhance the class before running DataNucleus.
    at org.datanucleus.metadata.MetaDataManager.initialiseClassMetaData(MetaDataManager.java:2225)
    at org.datanucleus.metadata.MetaDataManager.initialiseFileMetaData(MetaDataManager.java:2176)
    at org.datanucleus.metadata.MetaDataManager.initialiseFileMetaDataForUse(MetaDataManager.java:881)
    at org.datanucleus.metadata.MetaDataManager.loadPersistenceUnit(MetaDataManager.java:794)
    at org.datanucleus.jpa.EntityManagerFactoryImpl.initialisePMF(EntityManagerFactoryImpl.java:488)
    at org.datanucleus.jpa.EntityManagerFactoryImpl.<init>(EntityManagerFactoryImpl.java:355)
    at org.datanucleus.store.appengine.jpa.DatastoreEntityManagerFactory.<init>(DatastoreEntityManagerFactory.java:63)
    at org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider.createEntityManagerFactory(DatastorePersistenceProvider.java:35)
    at javax.persistence.Persistence.createFactory(Persistence.java:172)
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:112)
    at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:66)
    at com.allurefx.herdspot.server.data.EMF.get(EMF.java:16)
    at com.allurefx.herdspot.server.Session.<init>(Session.java:76)
    at com.allurefx.herdspot.server.HerdspotServiceImpl.getSession(HerdspotServiceImpl.java:791)
    at com.allurefx.herdspot.server.HerdspotServiceImpl.getAccountDetails(HerdspotServiceImpl.java:747)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at com.google.apphosting.runtime.security.shared.intercept.java.lang.reflect.Method_$1.run(Method_.java:165)
    at java.security.AccessController.doPrivileged(Native Method)
    at com.google.apphosting.runtime.security.shared.intercept.java.lang.reflect.Method_.privilegedInvoke(Method_.java:163)
    at com.google.apphosting.runtime.security.shared.intercept.java.lang.reflect.Method_.invoke_(Method_.java:124)
    at com.google.apphosting.runtime.security.shared.intercept.java.lang.reflect.Method_.invoke(Method_.java:43)
    at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:562)
    at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:188)
    at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224)
    at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:713)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:806)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1093)
    at com.google.apphosting.utils.servlet.ParseBlobUploadFilter.doFilter(ParseBlobUploadFilter.java:97)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
    at com.google.apphosting.runtime.jetty.SaveSessionFilter.doFilter(SaveSessionFilter.java:35)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
    at com.google.apphosting.utils.servlet.TransactionCleanupFilter.doFilter(TransactionCleanupFilter.java:43)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1084)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:360)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
    at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.handle(AppVersionHandlerMap.java:238)
    at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
    at org.mortbay.jetty.Server.handle(Server.java:313)
    at org.mortbay.jetty.HttpConnecti

Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread Jeff Schnitzer
On Fri, Mar 12, 2010 at 4:26 PM, John Patterson  wrote:
>
> I really don't see what you think is magical about an uninitialized instance.
>  I repeat: by default this feature is off and all data is loaded as
> expected.  No magic. No proxies.  Just simple plain POJOs.  Definitely some
> FUD in the air tonight.

It's hardly FUD to point out that every extra query counts.  In GAE,
you can measure the price (in $) of every single request.  In the
applications I have developed, it *matters* that you don't do multiple
queries to fetch excess data.

For nontrivial applications, fetching a large object graph every time
you load an object just doesn't work.  This is why Hibernate added
proxies.  I'm glad you're sufficiently aware of this problem that
you're building in limits to activation, but I think you're nuts if
you think that these facilities will rarely be used.

>> It really is instructive to follow how this feature (in the form of
>> proxies) evolved in Hibernate (and JPA).
>
> Proxies are a completely different beast.  If bytecode manipulation were
> used there are suddenly serialization problems to worry about.  That is why
> Twig uses pure plain POJOs with NO magic.  As simple as possible.

Proxies serve the exact same purpose as your uninitialized entities.
They allow the fetch process to halt, because in nontrivial
applications you cannot afford to load large object graphs every time
you fetch a single entity.  As a solution, proxies have advantages and
disadvantages - just like your uninitialized entities have advantages
and disadvantages.
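
There is also a third way to halt the fetch, the one Objectify leans on:
model the relationship as a plain datastore Key and load it explicitly.
A minimal sketch with invented class names:

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;

// Class and field names invented for illustration only.
class Invoice {
    String number;
    Key customer;   // reference only - nothing is fetched until you ask for it
}

class CustomerLoader {
    static Entity load(Invoice invoice) throws EntityNotFoundException {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        return ds.get(invoice.customer);   // explicit single get, no cascade
    }
}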

FWIW, I myself prefer the uninitialized entity solution over proxies,
despite the - quite significant - danger.  Just remember that if
everyone was using Twig instead of Hibernate, all those
LazyInitializationExceptions - 64,500 google hits for "hibernate
lazyinitializationexception" - would in fact be method calls silently
returning invalid data.

Before you say "but the default is to fetch everything!" please
realize that Hibernate has the option to "fetch everything" as well,
and nobody uses it for the same reason they won't use it in Twig - you
normally can't get away with loading large object graphs when you
fetch a single entity.

This isn't FUD, it's being realistic based on the real-world
experiences of people using a popular framework with similar features.
 After you explained the concept of uninitialized entities (the brief
blurb in your docs really isn't enough), I actually rather like your
solution!  I might even implement something similar in Objectify.  But
I really think you need to document the hell out of the issues
surrounding them.  It is very very easy to corrupt data.

Sadly, this is where bytecode manipulation really would come in handy
- you can intercept data access on an uninitialized entity and throw
an exception.  It's too bad java dynamic proxies can't wrap concrete
classes.

> This is controllable in *very* fine detail by Activation settings.  Any
> class can have a default activation depth, any field can have an activation
> depth and the datastore as a whole can set the depth for any individual
> operation.  This gives *complete* control over what is loaded and when.

I wouldn't call it complete control.  It doesn't very gracefully
handle entities with multiple relationships - some queries you will
want to fetch some parts and not others.  However, the point is moot,
since you can (and IMNSHO probably should) always disable automatic
activation and refresh the graph manually.

Here's a bit of free advice:  You need a batch refresh() operation.
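
At its core it is just one low-level batch get; a rough sketch of the idea
(this helper is not part of Twig or Objectify):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;

import java.util.Collection;
import java.util.Map;

class BatchRefreshSketch {
    private final DatastoreService ds =
            DatastoreServiceFactory.getDatastoreService();

    // One round trip fetches the raw entities behind every unactivated key;
    // the framework would then copy these properties onto the POJO instances.
    Map<Key, Entity> refresh(Collection<Key> unactivatedKeys) {
        return ds.get(unactivatedKeys);
    }
}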

> Your comparison to Hibernate is not really accurate.  You see, the
> difference is that Hibernate was built to optimise working with RDBMS
> systems that can do JOINs and therefore fetch data in bulk.  The datastore
> cannot do JOINs and probably never will so options like FETCH are just not
> required.

Actually, this makes the situation even *more* dire in GAE than it is
in Hibernate.  In Hibernate, extra fetching means extra JOINs in the
query.  In GAE, extra fetching means doing whole additional roundtrips
to the datastore - essentially manual joins.  The fetch/activation
limits are very important.

Jeff

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine-java+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine-java?hl=en.



Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread John Patterson


On 12 Mar 2010, at 16:28, Jeff Schnitzer wrote:


Look at these graphs:

http://code.google.com/status/appengine/detail/datastore/2010/03/12#ae-trust-detail-datastore-get-latency
http://code.google.com/status/appengine/detail/datastore/2010/03/12#ae-trust-detail-datastore-query-latency

Notice that a get()'s average latency is 50ms and a query()'s average
latency is 500ms.  Last week the typical query was averaging
800-1000ms with frequent spikes into 1200ms or so.


"You are increasing my suspicion that you have never worked" with an  
application that queries large amounts of data.  If your queries are  
taking anywhere near 1000 ms then you must be doing something  
seriously wrong.


One of my apps has query times generally in the 200 ms range over 2
million records.  A keys-only query can return in 50 ms.


This is the time required to execute 9 parallel queries on geospatial  
data and OR merge them together.  Keep in mind that with Twig I could  
execute 90 parallel queries and expect the time to be about the same.




Deep down in the fiber of its being, BigTable is a key-value store.
It is very very efficient at doing batch gets.  It wants to do batch
gets all day long.  Queries require touching indexes maintained in
alternative tablets and comparatively, the performance sucks.


You are ignoring the fact that for many (most?) applications queries
are essential.  I completely understand that your Facebook app doesn't
depend on them, but assuming that other people's apps also do not is
just not helpful.



Why am I obsessed with batch gets?  Because they're essential for
making an application perform.  They're why there is such a thing as a
NoSQL movement in the first place.


Again, essential for your app.  Not for mine, and probably not for many
other apps in which querying their own data is more important.  Batch
gets are really only useful in apps that need to take a load of ids from
an external source and do something with them.  Social network
"extension" apps, for example.


Just to reiterate - batch gets of external ids are a trivial feature
that has always been planned to be part of the new "load command",
which will follow the pattern of the find and store commands.



* Fire off a batch job at your leisure to finish it off.


This "partial update" approach only works in cases where you are not  
adding a field that you will query on.  That needs to be an
all-or-nothing batch job.


What is with your obsession with batch gets?  I understand they are central
in Objectify because you are always loading keys.  As I said already - even
though this is not as essential in Twig it will be added to a new load
command.


Batch gets are *the* core feature of NoSQL databases, including the
GAE datastore.


Querying is important.  You are ignoring a whole class of applications
if you think it is not.  I understand that your application works with
Facebook and does a lot of "lookups" by external ids in a large dataset,
so to your mind batch get is the most important operation.  This is
really not as common a scenario as you social network developers might
think.


One of the applications I work on has about 2 million records on which
it needs to do geospatial queries, sorted and filtered.  I guarantee you
that there are many other applications that have different query needs,
so to focus only on batch gets is myopic.


It probably explains why you don't think that OR queries are so
important.  They were one of the first things I tried on App Engine
and one of the reasons Twig was written.  I would bet that most
developers could not imagine working with an RDBMS that did not
support OR and AND queries (on more than one property).  Twig's support
for these saves time and reduces the complexity of the developer's
app.  With Objectify they are left on their own to re-invent the wheel
every time.
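
To be concrete about the wheel in question, this is roughly the boilerplate
a developer ends up hand-rolling over the low-level API (kind and property
names invented; Twig's implementation streams the merge rather than
collecting it in memory):

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.Query;

import java.util.LinkedHashMap;
import java.util.Map;

class HandRolledOr {
    // OR over a single property: run one EQUAL query per value and
    // de-duplicate the merged results by key.
    static Map<Key, Entity> cityIn(String kind, String... cities) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Map<Key, Entity> merged = new LinkedHashMap<Key, Entity>();
        for (String city : cities) {
            Query q = new Query(kind)
                    .addFilter("city", Query.FilterOperator.EQUAL, city);
            for (Entity e : ds.prepare(q).asIterable()) {
                merged.put(e.getKey(), e);
            }
        }
        return merged;
    }
}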


The high-level design of Twig's commands means that ORs are supported
now in the query API.  Objectify's low-level design could only help
out by providing helper classes - hardly user friendly or intuitive.
The goal of Twig's design is to put these common solutions at the
developer's fingertips.  Yes, there are more methods in the API, but
they are well organised using the fluent-style commands.


The command pattern used by Twig also has the potential to add new "high
level" functionality for which Objectify's low-level query interface
would need to rely on helper functions.  For example, supporting AND
queries with more than one inequality filter is in development.  Just
like the OR queries, it will "stream" results, never keeping more than a
small number in memory.


These are the types of common problems that take a lot of time to
code.  I'm not saying that this is impossible to code with Objectify -
just that it is up to the developer to code these patterns again and
again.  Re-inventing the wheel is one of the biggest wa

Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread John Patterson

On 13 Mar 2010, at 11:00, Jeff Schnitzer wrote:

On Fri, Mar 12, 2010 at 4:26 PM, John Patterson  
 wrote:



It's hardly FUD to point out that every extra query counts.  In GAE,
you can measure the price (in $) of every single request.  In the
applications I have developed, it *matters* that you don't do multiple
queries to fetch excess data


The FUD I was referring to was your claim that uninitialised instances
are magical.  They are just normal instances with no values set.  But
I see below that you are coming to like the idea after all :)



For nontrivial applications, fetching a large object graph every time
you load an object just doesn't work.  This is why Hibernate added
proxies.  I'm glad you're sufficiently aware of this problem that
you're building in limits to activation, but I think you're nuts if
you think that these facilities will rarely be used.


I believe you claimed that Keys were not used all that often in
Objectify.  I guess that would also make you "nuts".  Activation would
be used less than Keys because it can be done as part of optimisation
when needed.  I consider Objectify's use of Keys and manual loading of
every relationship a "premature optimisation".



Proxies are a completely different beast.  If bytecode manipulation
were used, there would suddenly be serialization problems to worry
about.  That is why Twig uses pure plain POJOs with NO magic.  As
simple as possible.


Proxies serve the exact same purpose as your uninitialized entities.


They have the same purpose (with some advantages) but different
implications, and are more complex.



They allow the fetch process to halt, because in nontrivial
applications you cannot afford to load large object graphs every time
you fetch a single entity.  As a solution, proxies have advantages and
disadvantages - just like your uninitialized entities have advantages
and disadvantages.


That is why, as the docs say, a future release will include an option  
to use proxies for automatic activation and dirty detection.


For now the simple "no magic" approach works fine and has fewer gotchas.


FWIW, I myself prefer the uninitialized entity solution over proxies,
despite the - quite significant - danger.  Just remember that if
everyone was using Twig instead of Hibernate, all those



After you explained the concept of uninitialized entities (the brief
blurb in your docs really isn't enough), I actually rather like your
solution!  I might even implement something similar in Objectify.


Thanks for the recognition!  But I borrowed the concept from Db4o -  
just a shame that their implementation was completely useless for  
server apps (single threaded!).


Yes I admit that docs have so far come after features.  Twig contains  
a hell of a lot of features to save developers time and make their  
code cleaner.  Hopefully we'll get some other developers excited by  
these new features to join the effort to add documentation - and more  
features!!  Automatic activation and dirty detection are high on the  
list of non-trivial additions.



 But I really think you need to document the hell out of the issues
surrounding them.  It is very very easy to corrupt data.



I think that it is easier to corrupt data in Objectify because the
same entity can be loaded into memory more than once at the same
time.  It is very easy to overwrite one instance with the other without
realising it.  Twig guarantees that one entity will only ever have one
instance in memory.



Sadly, this is where bytecode manipulation really would come in handy
- you can intercept data access on an uninitialized entity and throw
an exception.  It's too bad java dynamic proxies can't wrap concrete
classes.


Yes and too bad Guice's bytecode AOP doesn't support field access  
interceptors.


This is controllable in *very* fine detail by Activation settings.  Any
class can have a default activation depth, any field can have an
activation depth and the datastore as a whole can set the depth for any
individual operation.  This gives *complete* control over what is loaded
and when.


I wouldn't call it complete control.  It doesn't very gracefully
handle entities with multiple relationships - some queries you will
want to fetch some parts and not others.


Actually you can set activation per Class, per Field or per datastore
command.  Also, the ActivationStrategy makes it possible to implement
*any* control you can think of in Java code.  But I think this will
hardly ever be necessary.



However, the point is moot,
since you can (and IMNSHO probably should) always disable automatic
activation and refresh the graph manually.


Completely disagree.  IMNSHO it is best to make the framework work out
of the box first and optimize later.  Although refreshing is handy, you
would end up in the same position as Objectify - all your code that
uses your data models would need to reference the data layer (i.e. be
tied to the platform)



Here's a bit of free advice:  You need 

Re: [appengine-java] Objectify - Twig - approaches to persistence

2010-03-12 Thread Jeff Schnitzer
On Fri, Mar 12, 2010 at 8:35 PM, John Patterson  wrote:
>
> On 12 Mar 2010, at 16:28, Jeff Schnitzer wrote:
>
> Look at these graphs:
>
> http://code.google.com/status/appengine/detail/datastore/2010/03/12#ae-trust-detail-datastore-get-latency
> http://code.google.com/status/appengine/detail/datastore/2010/03/12#ae-trust-detail-datastore-query-latency
>
> Notice that a get()'s average latency is 50ms and a query()'s average
> latency is 500ms.  Last week the typical query was averaging
> 800-1000ms with frequent spikes into 1200ms or so.
>
> "You are increasing my suspicion that you have never worked" with an
> application that queries large amounts of data.  If your queries are taking
> anywhere near 1000 ms then you must be doing something seriously wrong.
> One of my apps query times are generally in the 200 ms range over 2 million
> records.  A keys-only query can return in 50ms.

Are you debating the validity of google's statistics?  Or the loud
complaints posted to this mailing list last week?

Some queries will certainly return faster than others, and from what
I've read/watched, keys-only queries should have performance profiles
roughly similar to simple gets.  But there can be no doubt that real
queries are quite slow compared to simple gets.

But you're arguing with a straw man here.  I've never suggested that
queries are not useful.

However, you *have* suggested that batch gets aren't important.
"Batch gets are really only useful in apps that need to take a load of
ids from an external source and do something with them."  That's
absolute rubbish.  A very large (and growing) number of applications
are being built on NoSQL databases that are effectively key-value
stores.  Cassandra, Tokyo Cabinet, HBase, Voldemort, and *dozens* of
other tools are being developed because they can do something that
relational systems can't:  get() and put() vast quantities of data
quickly.

There are a growing number of applications (largely defined by
staggeringly large user bases) in which the cost of maintaining
traditional indexes is not practical.  You aren't going to implement
Twitter or Facebook with a bunch of appengine queries!  But apparently
Cassandra works great.

> This is the time required to execute 9 parallel queries on geospatial data
> and OR merge them together.  Keep in mind that with Twig I could execute 90
> parallel queries and expect the time to be about the same.

You have the luxury of relatively static data, which colors your view
of the world.  I work with data that has a high churn rate, which
colors mine.

I have to ask you something though - would you need to do 9 parallel
queries if you were working with a datastore that has proper spatial
indexes?  Not that doing parallel queries isn't cool, but is it
actually necessary for your app?

I'm not doing spatial queries right now, but it's on the horizon.
I've done the research.  For my application, it's much easier and more
efficient to push my spatial queries off to a cluster of PostGIS
instances running elsewhere in the cloud.  It's also much, much
cheaper.

> * Fire off a batch job at your leisure to finish it off.
>
> This "partial update" approach only works in cases where you are not adding
> a field that you will query on.  That needs to be an all-or-nothing batch
> job.

Nonsense, this is totally dependent on the specific logic of your application.

Simple example:  You're adding a loginCount to your User entity, and
you want to add a query that selects out users that have logged in
more than N times.  No reason you can't start running those queries
right away.
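
A minimal sketch of that query over the low-level API (the threshold is
arbitrary); users that haven't been migrated yet simply lack the property
and won't match:

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Query;

class FrequentUsers {
    // Returns the User entities whose new loginCount property exceeds n.
    static Iterable<Entity> loggedInMoreThan(int n) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Query q = new Query("User")
                .addFilter("loginCount", Query.FilterOperator.GREATER_THAN, n);
        return ds.prepare(q).asIterable();
    }
}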

You're trying to dismiss the utility of upgrading the dataset in-place
by saying that *some* application features require the dataset to be
completely transitioned before being enabled.  OK, some do, some don't.
Your claim is still absurd.

> It probably explains why you don't think that OR queries are so important.

The reason OR queries aren't high on our priority list is because
nobody has been asking for them.  There doesn't even seem to be an
issue for it in GAE's issue tracker - or if there is, it's *pages*
down the list of priorities.

> They were one of the first things I tried on App Engine and one of the
> reasons Twig was written.  I would bet that most developers could not
> imagine working with an RDBMS that did not support OR and AND queries (on
> more than one property).  Twigs support for these saves time and reduces the
> complexity of the developers app.  With Objectify they are left on their own
> to re-invent the wheel every time.

Our conceptual model of the datastore is not an RDBMS.  It's a
key-value store that also allows limited queryability.  If you really
want an RDBMS, I'm sure the Cloud2db guys will be happy to chime in
again.

Jeff

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine for Java" group.
To post to this group, send email to google-appengine-j...@googlegroups.com.
To unsubscribe from t