On Mar 22, 3:48 pm, Nick Johnson (Google) <nick.john...@google.com> wrote:
> On Mon, Mar 22, 2010 at 8:45 PM, homunq <jameson.qu...@gmail.com> wrote:
> > OK, after hashing it out on IRC, I see that I have to erase my data
> > and start again.
>
> Why is that? Wouldn't updating the data be a better option?
Hi,

On Tue, Mar 23, 2010 at 10:25 AM, homunq <jameson.qu...@gmail.com> wrote:
> On Mar 22, 3:48 pm, Nick Johnson (Google) <nick.john...@google.com> wrote:
> > On Mon, Mar 22, 2010 at 8:45 PM, homunq <jameson.qu...@gmail.com> wrote:
> > > OK, after hashing it out on IRC, I see that I have to erase my data
>
> Watching my deletion process start to get trapped in molasses, as Eli
> Jones mentions above, I have to ask two things again:
>
> 1. Is there ANY ANY way to delete all indexes on a given property
> name? Without worrying about keeping indexes in order when I'm just
> paring them down to 0, I'd
OK, I guess I'm guilty on all counts.

Clearly, I can fix that moving forward, though it will cost me a lot
of CPU to fix the data I've already entered. But as a short-term
stopgap, is there any way to delete entire default indexes for a given
property? (I mean, anything besides setting
Hey Nick,

Just out of curiosity, how many properties would it take to get that amount
of wasted space in overhead? Are we talking about entities on the order of
tens, hundreds, or thousands?

On Mon, Mar 22, 2010 at 9:07 AM, homunq <jameson.qu...@gmail.com> wrote:
> OK, I guess I'm
Hi Patrick,

An overhead factor of 12 (as observed below) is high, but not outrageous.
With long model names and property names, this could happen with relatively
few indexed properties - on the order of tens, at most.

-Nick Johnson

On Mon, Mar 22, 2010 at 8:07 PM, Patrick Twohig
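Nick's point about name length can be made concrete with some rough arithmetic. The sketch below is a back-of-envelope model, not actual datastore internals: it assumes each indexed property lands in two built-in index rows (ascending and descending), and that each row repeats the kind (model) name, the property name, the value, and the entity key. All constants are illustrative assumptions.

```python
# Back-of-envelope estimate of built-in index overhead per entity.
# Assumption (not official datastore internals): each indexed property
# appears in two built-in index rows (asc + desc), and each row repeats
# the kind name, property name, value, and full entity key.

def estimate_overhead_factor(kind_len, prop_len, value_len, num_props, key_len):
    # Raw payload: just the property values themselves.
    raw = num_props * value_len
    # One assumed index row = kind name + property name + value + entity key.
    row = kind_len + prop_len + value_len + key_len
    # Two built-in index rows per indexed property (asc and desc).
    index_total = 2 * num_props * row
    return (raw + index_total) / raw

# With a long kind name, long property names, and short values, the
# factor climbs quickly even with only tens of properties.
factor = estimate_overhead_factor(
    kind_len=30, prop_len=25, value_len=8, num_props=20, key_len=40)
```

Under these (made-up) sizes the estimate already exceeds the factor of 12 observed in the thread, which is consistent with Nick's "tens of properties, at most" remark.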
OK, after hashing it out on IRC, I see that I have to erase my data
and start again. Since it took me 3 days of CPU quota to add the data,
I want to know if I can erase it quickly.
1. Is the overhead for erasing data (and thus whittling down indexes)
over half the overhead from adding it? Under
I'd use a cursor on the task queue. Do bulk deletes in blocks of 500 (I
think that's the most keys you can pass to delete on a single call) and it
shouldn't be that hard to wipe it out.

Cheers!

On Mon, Mar 22, 2010 at 1:45 PM, homunq <jameson.qu...@gmail.com> wrote:
> OK, after hashing it out on
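The cursor-plus-batches pattern suggested above can be sketched in plain Python. The App Engine `db` API isn't available here, so an in-memory dict stands in for the datastore; the comments note where a real keys-only query, cursor, and `db.delete()` call would go. A sketch under those assumptions, not a drop-in task handler:

```python
# Sketch of the cursor-plus-batches deletion pattern. In a real task
# queue handler this would be a keys-only query resumed from a stored
# cursor, with db.delete() on up to 500 keys per call; here a dict
# stands in for the datastore so the loop itself can be exercised.

BATCH_SIZE = 500  # reportedly the most keys one delete call accepts

def delete_all(store, cursor=0):
    """Delete every entity in `store`, BATCH_SIZE keys at a time.

    `cursor` plays the role of a query cursor: each task-queue
    iteration would resume from where the previous one stopped.
    """
    while True:
        # Keys-only "query": the next batch of keys after the cursor.
        batch = sorted(k for k in store if k > cursor)[:BATCH_SIZE]
        if not batch:
            break
        for key in batch:      # stands in for db.delete(batch)
            del store[key]
        cursor = batch[-1]     # advance the cursor past this batch

store = {i: "entity-%d" % i for i in range(1, 1201)}
delete_all(store)              # 1200 entities -> batches of 500/500/200
```

In production each loop iteration would instead re-enqueue a task carrying the serialized cursor, so no single request runs long enough to time out.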
oh man.. well, he's going to be wiping out 7GB of junk... :)

When I went through the process of deleting something like 400MB of junk.. it
was not fun.

First I started off deleting by __key__ in batches of 500, then I had to
limit down to 200.. then down to 100.. then down to 50.. then down to