If I take this one or two steps further, I'll probably end up handling
every scenario manually to guarantee results - probably the easiest option

What I'm doing: there is one User entity for everything - the user signs
up, sets a name, information etc.
There are also pingers from JS that set user.online_time to
datetime.now() every ~80 seconds. In almost every test, this online
setter overwrote some other data. The odds should be pretty low - a
pinger would have to fire right before a page load and overwrite the
page-load modifications, and the requests are all one-sweep async
routines that take <1 second in total - but however improbable, I've
inspected the timings and it does happen :)
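
To illustrate (a simplified sketch, not my actual code - field and
handler names are made up), the ping path is basically a whole-entity
read-modify-write, and the final put() is what clobbers concurrent
page-load changes:

from datetime import datetime
import webapp2
from google.appengine.ext import ndb

class User(ndb.Model):
    name = ndb.StringProperty()
    email_verified = ndb.BooleanProperty(default=False)
    online_time = ndb.DateTimeProperty()

class PingHandler(webapp2.RequestHandler):
    def post(self):
        # string ids assumed
        user = User.get_by_id(self.request.get('user_id'))
        user.online_time = datetime.utcnow()
        # writes back name/email_verified as they were at read time,
        # losing anything another request wrote in between
        user.put()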

My current solution to prevent important overwrites: deferred functions
that check whether the data was overwritten and re-write it if it was
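
Roughly like this (sketch only, assuming the User model above;
mark_email_verified / _ensure_email_verified are made-up names, the
deferred library is the standard google.appengine.ext.deferred):

from google.appengine.ext import deferred

def _ensure_email_verified(user_key):
    # re-read and repair the flag if a concurrent put() clobbered it
    user = user_key.get()
    if not user.email_verified:
        user.email_verified = True
        user.put()

def mark_email_verified(user):
    user.email_verified = True
    user.put()
    # check again a few seconds later, once any in-flight pinger
    # write has landed
    deferred.defer(_ensure_email_verified, user.key, _countdown=10)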

My contention solution: create a Job(ndb.Model) entity and put it; a
manual push queue then fires, processes all Jobs for the entity and
deletes them. I'm probably reimplementing something close to pull-queue
logic.
The contention solution doesn't help with overwrites, though - it only
fires if the entity is in contention mode (a Job was fired before) or
there was a write in the last 5 seconds, which switches a "need to
write" scenario into contention mode
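
Something along these lines (a sketch of the idea, not the real
implementation - the payload format and the task URL are assumptions):

from google.appengine.api import taskqueue
from google.appengine.ext import ndb

class Job(ndb.Model):
    # parented under the target User, so the ancestor query below is
    # strongly consistent
    payload = ndb.JsonProperty()   # {field_name: new_value}
    created = ndb.DateTimeProperty(auto_now_add=True)

def enqueue_job(user_key, changes):
    Job(parent=user_key, payload=changes).put()
    taskqueue.add(url='/tasks/process_jobs',            # hypothetical handler
                  params={'user_id': user_key.id()})

@ndb.transactional
def process_jobs(user_key):
    # apply all pending Jobs in order, in one atomic write
    user = user_key.get()
    jobs = Job.query(ancestor=user_key).order(Job.created).fetch()
    for job in jobs:
        for field, value in job.payload.items():
            setattr(user, field, value)
    user.put()
    ndb.delete_multi([job.key for job in jobs])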

I've thought about integrating transactions, however it seems
everything would have to become transactional for it to bear fruit; all
routines would have to be re-handled, so it's not worth it
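
For context, this is roughly what every writer (the pinger, signup,
verification, ...) would have to funnel through for transactions to
actually help - which is the re-handling I'd rather avoid (sketch only):

from google.appengine.ext import ndb

@ndb.transactional(retries=3)
def update_user(user_key, **changes):
    # read, modify and write inside one transaction so concurrent
    # writers retry instead of silently overwriting each other
    user = user_key.get()
    for field, value in changes.items():
        setattr(user, field, value)
    user.put()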

I've thought about separating some fields out of the User object into
something like a UserInfo entity, however since that data is used by
the routines, it would require constant fetching/updating
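
One variant of the split (which fields would move is just a guess):
keep the signup fields on User and move the frequently written presence
field out, so pinger puts never touch User at all - at the cost of an
extra fetch in every routine that needs it:

from google.appengine.ext import ndb

class UserInfo(ndb.Model):
    # same string id as the User it belongs to,
    # e.g. UserInfo(id=user.key.id())
    online_time = ndb.DateTimeProperty()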

Some data could be applied with a delay, for example an increased
value, however since these fields are signup-related, a delay causes
confusion/resets

The best idea I could come up with, after writing all this: handle the
need-to-exist data separately, in a persistent manner, and push every
modification as a Job instead of risking it by writing directly to the
entity. I didn't take this route initially to avoid the increased costs
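
In practice that would mean replacing direct puts with something like
the enqueue_job() sketch above (hypothetical usage):

# instead of setting fields and calling user.put(), every
# signup-related modification becomes a Job
enqueue_job(user.key, {'name': 'Alice', 'email_verified': True})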

----

I've also thought about memcache-related solutions - maybe short-term,
quickly updated caches (get/set in sequence, to prevent memcache
overwrites).
But I'm not sure about the reliability: say an overwrite occurs 1% of
the time; if memcache succeeds 95% of the time, that still leaves the
0.05% :) I guess it's acceptable
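
If I did go the memcache route, the compare-and-set support on
memcache.Client would probably be the safer form of "get/set in
sequence" - a generic sketch, not my actual plan (cas_update and mutate
are made-up names):

from google.appengine.api import memcache

def cas_update(key, mutate, retries=5):
    # cas support lives on memcache.Client, not the module-level API
    client = memcache.Client()
    for _ in range(retries):
        value = client.gets(key)    # also remembers the cas token
        if value is None:
            return client.add(key, mutate(None))
        if client.cas(key, mutate(value)):
            # cas() returns False if another writer got in between
            return True
    return False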

If the capacity demand ever exceeded 20GB - seems unlikely - it would
probably pull the memcache hit rate towards 0%, but that doesn't seem
possible here, since we are dealing with a small amount of data

Sorry for the unfocused blob-like writing :)




On Saturday, October 19, 2013 8:52:31 PM UTC+3, Vinny P wrote:
>
> On Saturday, October 19, 2013 9:15:15 AM UTC-5, Kaan Soral wrote:
>>
>> I've implemented routines to handle contention at a large level, a custom 
>> pipeline implementation; there are also basic checks to prevent 
>> overwriting an entity.
>> When these get overwritten by another update simultaneously, it creates a 
>> really disturbing experience
>>
>>
>
> Generally, the solution is use-case specific. Personally I will use 
> transactions or pull queues (take each new action, place them in a queue, 
> backend handles them sequentially) to solve these types of problems. There 
> are some cases where you can completely skip all of this: for example if 
> you use "Login With Google" or similar 3rd party login service, then you 
> don't need to validate emails since it's already done for you.
>
>
> On Saturday, October 19, 2013 9:15:15 AM UTC-5, Kaan Soral wrote:
>>
>> However there are some operations that require no overwrite to occur, for 
>> example:
>>    
>>    - Email verification
>>
>>
> Out of curiosity, why are you writing so much to an email verification 
> field that overwrite is a problem? I can understand if you do a lot of 
> reads (to check to see if an email has been verified) but the writing 
> should be fairly spaced out - are you rapidly verifying new addresses for 
> the same user, or combining multiple user email addresses into a single 
> entity?
>
>
> On Saturday, October 19, 2013 9:15:15 AM UTC-5, Kaan Soral wrote:
>>
>> (I'm also extremely unlucky, or lucky, tested my new app with ~10 users, 
>> all of them experienced unique problems, some issues that occurred probably 
>> have <1/10000 probability of occurrence, a user triggered a second signup 
>> request, probably by page refresh, it would only cause an issue if the last 
>> part of the routine would execute around the same millisecond, and it did)
>>
>
>
> I would certainly call that unusual. Perhaps try another test, and see how 
> it goes. 
>
>
> -----------------
> -Vinny P
> Technology & Media Advisor
> Chicago, IL
>
> App Engine Code Samples: http://www.learntogoogleit.com
>
>
