Batch puts are supported, yes, and as of yesterday's release, makePersistentAll (JDO) and the equivalent JPA call take advantage of this support (previously, you had to use the low-level API).
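For a large import like the one described below, a minimal sketch of feeding records to makePersistentAll in batches might look like this. The batch size of 500, the placeholder record type, and the PMF helper are illustrative assumptions, and the JDO calls themselves are shown as comments since they need the App Engine SDK on the classpath:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: splitting a large import into batches for makePersistentAll.
// The JDO calls appear as comments because they require the App Engine SDK;
// the chunking logic itself is plain Java.
public class BatchImport {

    // Split a list into consecutive sub-lists of at most batchSize elements.
    public static <T> List<List<T>> chunk(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<List<T>>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Stand-in for the ~1M CSV rows mentioned in the question.
        List<String> records = java.util.Collections.nCopies(1000000, "row");
        for (List<String> batch : chunk(records, 500)) {
            // PersistenceManager pm = PMF.get().getPersistenceManager();
            // try {
            //     pm.makePersistentAll(batch);  // one batch put per chunk
            // } finally {
            //     pm.close();
            // }
        }
        System.out.println(chunk(records, 500).size() + " batches");
        // prints "2000 batches"
    }
}
```

Keeping each batch well under the request-size limit is the main design concern; whether a batch put counts as one write or N writes against quota is a separate question from how fast the datastore will accept them.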
Two quick notes:

1) All of the entities you're persisting should be in separate entity groups, since two entities in the same entity group can't be written to concurrently, and you will see datastore timeout exceptions if many simultaneous write requests come in for the same entity or entity group.

2) Batch puts do not operate in a transaction. This means that some writes may succeed while others fail, so if you need the ability to roll back, you'll need transactions.

Let me know if you have any more questions on this.

- Jason

On Thu, Sep 3, 2009 at 7:24 PM, Nicholas Albion <nalb...@gmail.com> wrote:
>
> Is it possible to overcome the datastore's 10 writes/second limit by
> batching them?
>
> I've got a table containing just over one million records (in CSV
> format). Does a batched write (of around 1MB of data and, say, 1000
> records) count as one write, or 1000 writes?