Hi Larry
> I am wondering about writing a Servlet that would form/multi-part
> upload large files and cache them in memcache then use the
> cron API to "trickle" persist them into the DS over time ...
I've been thinking about using something like this as well. I think
you could likely cache the
Hello Vince,
That is pretty cool. I will have a look.
Cheers,
Tobias
On Sep 13, 9:45 pm, Vince Bonfanti wrote:
> This is already being done:
>
> http://code.google.com/p/gaevfs/
>
> I recently added a CachingDatastoreService class that implements a
> write-behind cache using task queues (
This is already being done:
http://code.google.com/p/gaevfs/
I recently added a CachingDatastoreService class that implements a
write-behind cache using task queues (instead of cron), which will be
included in the next release. The code is available now in SVN:
http://code.google.com/p
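The gaevfs sources have the details, but the basic write-behind idea looks
roughly like the sketch below: a put goes to memcache immediately and a task
queue task persists it to the datastore later. The class name, cache-key
scheme and the "/tasks/flush" URL are invented for illustration, not taken
from gaevfs:

    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;
    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskOptions;

    public class WriteBehindCache {
        private final MemcacheService memcache =
                MemcacheServiceFactory.getMemcacheService();
        private final Queue queue = QueueFactory.getDefaultQueue();

        public void put(String cacheKey, Entity entity) {
            // Cheap write: park the serializable Entity in memcache.
            memcache.put(cacheKey, entity);
            // Queue a task that will later move it into the datastore;
            // "/tasks/flush" is a made-up URL handled by a flush servlet.
            queue.add(TaskOptions.Builder.withUrl("/tasks/flush")
                                         .param("key", cacheKey));
        }
    }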
Hi Larry,
not sure if this solution would be reliable.
In any case I am having the same trouble as you. I have small
datasets of a few tens of thousands of entities and I keep hitting the
30-second boundary all the time. I need to write Ajax queries to split
my requests when I try to process the data.
I am wondering about writing a Servlet that would form/multi-part
upload large files and cache them in memcache then use the
cron API to "trickle" persist them into the DS over time ...
could maybe even get adventurous and put a "filesystem"-like API
over the cache ...
lemme know if anyone would
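As a rough sketch of the cron-driven "trickle" persist (assuming a cron.xml
entry points at a servlet like the one below, and that uploaded entities have
already been parked in memcache under a made-up "pending-keys" list):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;
    import com.google.appengine.api.memcache.MemcacheService;
    import com.google.appengine.api.memcache.MemcacheServiceFactory;

    public class TrickleFlushServlet extends HttpServlet {
        private static final int BATCH = 100; // small enough to finish well under 30s

        @Override
        public void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();

            // "pending-keys" is a made-up memcache entry listing keys still to flush.
            @SuppressWarnings("unchecked")
            List<String> pending = (List<String>) cache.get("pending-keys");
            if (pending == null || pending.isEmpty()) {
                return;
            }

            List<Entity> batch = new ArrayList<Entity>();
            List<String> flushed = new ArrayList<String>();
            for (String key : pending) {
                Entity e = (Entity) cache.get(key);
                if (e != null) {
                    batch.add(e);
                }
                flushed.add(key);
                if (batch.size() >= BATCH) {
                    break;
                }
            }
            ds.put(batch);                  // one batch put per cron tick
            pending.removeAll(flushed);
            cache.put("pending-keys", pending);
        }
    }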
Exactly!
I was hoping this update (
http://code.google.com/p/datanucleus-appengine/issues/detail?id=7
) would seriously improve bulk inserts. In practice it seems you
can now do roughly 2-3 times as many inserts in the same amount of
real and CPU time.
However, this is still poor compared to
So now, I am hitting Datastore timeouts and Request timeouts ...
I really, really think you guys need to add a mechanism that allows
developers to simply do bulk uploads of data into their GAE
applications (from Java, thank you).
:)
On Sep 11, 9:06 am, Larry Cable wrote:
> I tried doing a "bulk"
I tried doing a "bulk" load with the JDO makePersistentAll(..) call
yesterday ...
By default what I did was create a List of size 2048, fill it to
capacity and then call makePersistentAll() ... I got an
IllegalArgumentException out of that call stating that you could
only persist at most 500
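In case it helps anyone, a simple workaround is to chunk the list before
calling makePersistentAll(); a minimal sketch, assuming the usual PMF
PersistenceManagerFactory holder from the GAE docs and the 500-object limit
reported by the exception:

    import java.util.List;
    import javax.jdo.PersistenceManager;

    public class BatchPersist {
        private static final int MAX_BATCH = 500; // limit reported by the exception

        public static <T> void persistAll(List<T> objects) {
            PersistenceManager pm = PMF.get().getPersistenceManager();
            try {
                for (int i = 0; i < objects.size(); i += MAX_BATCH) {
                    int end = Math.min(i + MAX_BATCH, objects.size());
                    // One makePersistentAll() call per chunk of at most 500 objects.
                    pm.makePersistentAll(objects.subList(i, end));
                }
            } finally {
                pm.close();
            }
        }
    }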
thanks Jason
On Sep 10, 2:00 pm, "Jason (Google)" wrote:
> All standalone entities are in their own entity group by default. To put an
> entity in another entity's group, you use an owned relationship, and we have a
> section in our docs for that:
>
> http://code.google.com/appengine/docs/java/datastore
All standalone entities are in their own entity group by default. To put an
entity in another entity's group, you use an owned relationship, and we have a
section in our docs for that:
http://code.google.com/appengine/docs/java/datastore/relationships.html#Relationships_Entity_Groups_and_Transactions
- J
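For what it's worth, in JDO that owned relationship looks roughly like the
sketch below (Invoice and LineItem are made-up names): the child objects get
keys whose parent is the Invoice's key, which is what places them in the same
entity group.

    import java.util.List;
    import javax.jdo.annotations.IdGeneratorStrategy;
    import javax.jdo.annotations.PersistenceCapable;
    import javax.jdo.annotations.Persistent;
    import javax.jdo.annotations.PrimaryKey;
    import com.google.appengine.api.datastore.Key;

    @PersistenceCapable
    public class Invoice {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        // Owned one-to-many: each LineItem's key has this Invoice's key as its
        // parent, so the whole object graph lives in one entity group.
        @Persistent(mappedBy = "invoice")
        private List<LineItem> items;
    }

    @PersistenceCapable
    class LineItem {
        @PrimaryKey
        @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
        private Key key;

        @Persistent
        private Invoice invoice; // back-reference to the owning Invoice
    }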
Any documentation or comments on how JPA/JDO map their entities and
identities onto entity groups?
On Sep 8, 2:16 pm, "Jason (Google)" wrote:
> If you're trying to achieve high write throughput, as it sounds like you are
> since you have 1,000,000 entities to write, you should be designing your
If you're trying to achieve high write throughput, as it sounds like you are
since you have 1,000,000 entities to write, you should be designing your
schema to minimize the number of entities in an entity group. These and
other general tips are listed here:
http://code.google.com/appengine/docs/py
Yes. If you need to be able to rollback in case one or more entities don't
get written, you'll need to use transactions. If you use transactions, your
entities must belong to the same entity group or else an exception will be
thrown. You'll get better performance if you do this outside of a
transac
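As a concrete (hypothetical) example, reusing the Invoice/LineItem classes
sketched above and the usual PMF holder, an all-or-nothing save of a parent
and its owned children could look like this:

    import javax.jdo.PersistenceManager;
    import javax.jdo.Transaction;

    public class TransactionalSave {
        // Persists an Invoice and its owned LineItems atomically. This works
        // because the children share the Invoice's entity group; adding an
        // unrelated root entity to the same transaction would throw.
        public static void save(Invoice invoice) {
            PersistenceManager pm = PMF.get().getPersistenceManager();
            Transaction tx = pm.currentTransaction();
            try {
                tx.begin();
                pm.makePersistent(invoice); // owned children are saved with the parent
                tx.commit();                // all-or-nothing
            } finally {
                if (tx.isActive()) {
                    tx.rollback();          // undo everything if commit never happened
                }
                pm.close();
            }
        }
    }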
On Sep 5, 10:24 am, "Jason (Google)" wrote:
> Batch puts are supported, yes, and as of yesterday's release, calling
> makePersistentAll (JDO) and the equivalent JPA call will take advantage of
> this support (previously, you had to use the low-level API).
>
> Two quick notes:
>
> 1) All of the en
Your two "quick notes" seem to be contradictory. In order to use
transactions, don't all of the entities have to be in the same entity
group?
Vince
On Fri, Sep 4, 2009 at 8:24 PM, Jason (Google) wrote:
> Batch puts are supported, yes, and as of yesterday's release, calling
> makePersistentAll (J
Batch puts are supported, yes, and as of yesterday's release, calling
makePersistentAll (JDO) and the equivalent JPA call will take advantage of
this support (previously, you had to use the low-level API).
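For comparison, a low-level batch put looks roughly like the sketch below;
the "Item" kind and the loop are made up for illustration, and each entity is
created without a parent key so it lands in its own entity group:

    import java.util.ArrayList;
    import java.util.List;
    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;

    public class LowLevelBatchPut {
        public static void putItems(int count) {
            DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
            List<Entity> batch = new ArrayList<Entity>();
            for (int i = 0; i < count; i++) {
                Entity e = new Entity("Item"); // no parent key => its own entity group
                e.setProperty("index", i);
                batch.add(e);
            }
            ds.put(batch); // one datastore round trip for the whole list
        }
    }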
Two quick notes:
1) All of the entities that you're persisting should be in separate entity