On 28 Sep, 10:15, "Nick Johnson (Google)" wrote:
>
> You could do this much more efficiently by loading the list of keys into a
> list (say, from a text file), and deleting them in batches, something like
> this:
>
> f = iter(open("todelete.txt", "r"))
> for batch in zip(*([f]*20)):
>     db.delete([db.Key(k.strip()) for k in batch])
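For reference, zip(*([f]*20)) works because zip advances the same file
iterator twenty times per tuple, grouping the file into 20-line batches;
any final partial batch is silently dropped. Below is a self-contained
sketch of the same idea that also deletes the remainder. The helper name
is hypothetical, and it assumes todelete.txt holds one str()-encoded
datastore key per line:

from google.appengine.ext import db

BATCH_SIZE = 20  # matches the batch size in the snippet above

def delete_keys_from_file(path="todelete.txt"):
    # One str()-encoded datastore key per line (assumption).
    keys = [db.Key(line.strip()) for line in open(path) if line.strip()]
    # Explicit slicing, unlike zip(*([f]*20)), also covers the
    # final partial batch instead of dropping it.
    for i in range(0, len(keys), BATCH_SIZE):
        # db.delete() accepts a list of keys and issues one
        # datastore call per batch.
        db.delete(keys[i:i + BATCH_SIZE])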
I generated a script with one block like this per entity:

...
if obj is not None:
    obj.delete()
print("Deleting entity: a3")
obj = MyModel.get_by_key_name("a3")
if obj is not None:
    obj.delete()
...
This creates a huge file, but it does work, and at a not unreasonable
speed. What do you think of this approach?
Thanks
G.
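For context, a small generator along these lines would produce the kind
of file described above; MyModel comes from the snippet, while the
function name, the key-name list, and the output path are stand-ins:

# Hypothetical sketch of generating the delete script; one
# four-line block per entity is what makes the file huge.
BLOCK = (
    'print("Deleting entity: %(name)s")\n'
    'obj = MyModel.get_by_key_name("%(name)s")\n'
    'if obj is not None:\n'
    '    obj.delete()\n'
)

def write_delete_script(key_names, path="delete_entities.py"):
    out = open(path, "w")
    for name in key_names:
        out.write(BLOCK % {"name": name})
    out.close()

Fetching and deleting one entity at a time is what makes this slower
than the batched db.delete() above, though it does give per-entity
progress output.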
On 17 Sep, 23:25, djerdo wrote:
Using the bulkloader with the dev appserver, the script adds rows
(Entities) at a progressively slower rate, to the point where it
becomes unusable when the csv file is large (20,000 rows). Why? Is
this a known issue? Are there any workarounds?
Thanks