Hi Stakka,

My suggestion would be to do something like this:
- Split the uploaded file into 'jobs'. One job per ~500 KB might be about
right; it depends on your processing overhead. In any case, each job
needs to be less than 1 MB, the datastore entity size limit.
- Insert the jobs into the datastore.
- Add a task queue task for each job.
- Have each task process its part of the total data (rough sketch below).
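
Something along these lines would work. This is only a sketch, assuming a
Python webapp app: ImportJob, the '/tasks/process_job' URL, CHUNK_SIZE and
parse_transactions() are placeholder names of my own, not part of the SDK.

    # Split the upload into <1 MB jobs, store them, and enqueue one task each.
    from google.appengine.ext import db, webapp
    from google.appengine.api.labs import taskqueue  # task queue API (labs)

    CHUNK_SIZE = 500 * 1024  # ~500 KB per job, safely under the 1 MB limit

    class ImportJob(db.Model):
        """One chunk of the uploaded file, waiting to be processed."""
        raw_data = db.BlobProperty()

    class UploadHandler(webapp.RequestHandler):
        def post(self):
            data = self.request.get('ledger_file')
            # Naive byte-offset split; a real splitter should cut on record
            # (line) boundaries so transactions aren't broken in half.
            for offset in range(0, len(data), CHUNK_SIZE):
                job = ImportJob(raw_data=db.Blob(data[offset:offset + CHUNK_SIZE]))
                job.put()
                taskqueue.Task(url='/tasks/process_job',
                               params={'job_key': str(job.key())}).add()

    class ProcessJobHandler(webapp.RequestHandler):
        def post(self):
            # Each task fetches its own job entity and processes it.
            job = ImportJob.get(db.Key(self.request.get('job_key')))
            transactions = parse_transactions(job.raw_data)  # your parser
            db.put(transactions)  # batch put of the parsed entities
            job.delete()          # clean up the interim job entity

    application = webapp.WSGIApplication([
        ('/upload', UploadHandler),
        ('/tasks/process_job', ProcessJobHandler),
    ])

Each task runs in its own request, so the 30-second limit applies per job
rather than to the whole upload.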

-Nick Johnson

On Wed, Aug 12, 2009 at 10:36 PM, Stakka <henrik.lindqv...@gmail.com> wrote:
>
> I'm working on a browser-based accounting app which has a feature to
> import ledger transactions through file uploads. I'm currently only
> running on the local dev server, but from what I've read, datastore
> puts -- even batch puts -- are very slow and CPU (quota) intensive when
> deployed live.
>
> How do I overcome this problem if the user uploads a large file with
> thousands of transactions?
>
> I've seen solutions where you batch put entities in chunks of 500.
> That only works if you run a custom upload tool on your computer, not
> from a browser, since the request is limited to 30 seconds. Am I forced
> to use the Task Queue? But where do I store the raw uploaded file, or
> preferably the parsed interim transaction entities, when the task isn't
> executing?
>
> Funny that App Engine has a 10 megabyte request (file upload) size limit
> when storing 10 megabytes' worth of entities seems to be so hard.
>



-- 
Nick Johnson, Developer Programs Engineer, App Engine

