We are using the "legacy" (non-Pipeline API) version of the mapreduce
library: http://code.google.com/p/appengine-mapreduce/

The issue is that we only ever get one shard processing, even for kinds
that have >150,000 entities. We have tried different shard_count
configurations, e.g., 4, 16, 128, but every time a single shard ends up
processing the entire dataset, which is very slow (a rough sketch of how
we launch the job is below).

I feel like I've missed a step (e.g., creating an index or something).

Crossing my fingers that someone knows the answer offhand.

Thanks,
j
