[google-appengine] Re: gaeutilities session not working in ie9

2011-11-16 Thread bowman.jos...@gmail.com
The state is saved via a cookie containing a token the library keys off of. If 
you haven't changed the default settings, you should have a cookie in your 
browser named something like gaeutilities-session.

If IE is configured to not accept cookies from the domain you are testing, 
then that would likely be the problem.
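A minimal debugging sketch for this situation, assuming the classic webapp framework and the default cookie name mentioned above (both assumptions): log whether the browser actually sent the session cookie back.

import logging
from google.appengine.ext import webapp

COOKIE_NAME = 'gaeutilities-session'   # assumed default; change if you overrode it

class CookieDebugHandler(webapp.RequestHandler):
    def get(self):
        token = self.request.cookies.get(COOKIE_NAME)
        if token is None:
            # The browser never returned the cookie, so no session can be matched.
            logging.warning('No %s cookie received from the browser.', COOKIE_NAME)
        else:
            logging.info('Session cookie received: %s', token)
        self.response.out.write('cookie present: %s' % (token is not None))

If the cookie never shows up, the browser is rejecting or not returning it, which matches the IE configuration issue described above.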




Re: [google-appengine] Re: gaeutilities sessions, plz to help a n00b

2011-09-21 Thread bowman.jos...@gmail.com
The security is the rotating session token. 

It works basically like this: a session token is valid for x seconds, and 
then a backlog of x tokens is considered valid. The reason multiple tokens 
remain valid is to support sites using AJAX requests, whose requests and 
responses may arrive out of sequence.

Setting aside multiple requests per page view and considering a single 
request/response scenario, here's how it happens. This model also assumes 
that each page view generates a new token; that's not required, as you can 
make the token expiration as long as you want for your application.

You request a page, which generates a session. Session token is set in a 
cookie.
Next page request, the token is still accepted but past its TTL, so a new 
cookie is set with a new session token.
Next page request, same thing.
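A minimal sketch of the rotation just described, with hypothetical names and values (this is not the actual gaeutilities code): tokens rotate after a TTL, and the last few tokens stay acceptable so out-of-order AJAX responses don't get rejected.

import time
import uuid

TOKEN_TTL = 5   # seconds a token stays fresh before it is rotated
BACKLOG = 3     # how many recent tokens are still accepted

class RotatingTokens(object):
    def __init__(self):
        self.tokens = [self._new_token()]   # newest first
        self.rotated_at = time.time()

    def _new_token(self):
        return uuid.uuid4().hex

    def validate(self, presented):
        """Return the token the client should use next, or None if invalid."""
        if presented not in self.tokens[:BACKLOG]:
            return None                                 # unknown token
        if time.time() - self.rotated_at > TOKEN_TTL:
            self.tokens.insert(0, self._new_token())    # rotate
            del self.tokens[BACKLOG:]                   # drop the oldest
            self.rotated_at = time.time()
        return self.tokens[0]                           # goes back in the cookie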

Now, if you lock a user profile to accepting only one session, then 
hijacking creates a scenario where either the hijacker or the user loses 
their session. So...

User creates a session, gets a token.
Hijacker sniffs the token, connects using it, and gets another token.
User makes a request, generating a new token.
Hijacker connects still using the token they had, which generates a new one.

...

Eventually either the user or hijacker has a token that's expired so a new 
session needs to be created. 

If it's the user, when they log in they invalidate the session the hijacker 
is using and reclaim their access.

Now, it's not foolproof. If the hijacker is using a complicated enough 
system they can keep sniffing and resetting their cookies with the victim's 
tokens, and at least get some access time on the user's account. They 
can also just sniff again to jump back onto the session when they get kicked 
off. There's no way to make it truly secure, just more difficult.

The biggest problem with gaeutilities, though, is that it's currently pretty much 
unsupported. I've stopped using App Engine, and with two kids now I don't 
have time to dedicate to a project I'm not using. I learned Python writing 
gaeutilities, and have since figured out ways to improve the performance 
- https://github.com/joerussbowman/gaeutilities/issues/2

I'm open to pull requests, or even to someone forking the project and 
continuing it. I'd be happy to act as an advisor or do whatever is required to 
assist, as long as the contributors can deal with my limited availability. If 
anyone wants to fork the entire project and carry it on, I'd even point 
everyone to it, as long as I'm comfortable with the approaches taken. The 
only requirement I have is that security remain a primary motivator of the 
design. Of course, if it's a fork of my code and ideas, I'd also like to 
continue to receive credit.




[google-appengine] Re: gaeutilities sessions, plz to help a n00b

2011-09-20 Thread bowman.jos...@gmail.com
The purpose of restricting logins to one session is to avoid session 
hijacking. gaeutilities has features that help your site resist session 
hijacking, which has been made even easier by tools like Firesheep - 
http://codebutler.com/firesheep

Since (as of the last time I checked) you can't use SSL with your own domain, 
cookie sniffing is simple for App Engine apps.

Sure, other libraries are faster, and if all you care about is performance, 
then I'd suggest using them. The only reason to choose gaeutilities is that it 
was written with security prioritized over performance, and is therefore more 
secure than the other libraries. That's not to say it's fully secure; without 
SSL it's not truly secure, but it's much more difficult to spoof a gaeutilities 
session if it's configured correctly.




[google-appengine] Discussion on google-app-engine-open-source-projects

2010-04-21 Thread bowman.jos...@gmail.com
Can the contacts for gaeutilities, a top project, be changed to
bowman.jos...@gmail.com? The link there is to the old Google Code
hosted project, which I believe is causing some confusion as I've
moved it to GitHub.




[google-appengine] Re: session user property not updated. weird bug?

2010-04-02 Thread bowman.jos...@gmail.com
Hi, I just found this thread...

I've gone ahead and filed an issue with a suggested fix for the
gaeutilities project on GitHub. I'm pretty behind on getting things in
for that project; I'm not using App Engine a lot at the moment. I'll
try to get to it when I can, or if someone is feeling feisty enough to
provide a patch I'd be happy to apply it.

http://github.com/joerussbowman/gaeutilities/issues/issue/13

On Mar 8, 11:25 am, Sümer Cip sum...@gmail.com wrote:
 Of course this approach is only valid if I can guarantee no subsequent put()
 operations will happen in a single request, which in my case is unlikely. Or is
 there a better generic approach you can suggest?





 On Mon, Mar 8, 2010 at 4:09 PM, Sümer Cip sum...@gmail.com wrote:
  Yes, exactly, I finally found the problem yesterday: the gaeutilities session
  behaves differently on each object. So how about this solution: I have a base
  handler class for all the handlers, and what I added is to refresh the user
  param from the datastore in every __init__ function. How does that sound?

  class scRequestHandler(webapp.RequestHandler):
      def __init__(self):
          webapp.RequestHandler.__init__(self)
          self.Session = scSession()
          # Reload the user object. The reference property in the Session
          # object is pickled, so put() does not update the real entity
          # even though they have the same key().
          # The solution is to get the user from the datastore on every request.
          if self.Session.user:
              self.Session.user = db.get(self.Session.user.key())

  On Mon, Mar 8, 2010 at 2:55 PM, Nick Johnson (Google) 
  nick.john...@google.com wrote:

  Hi Sumer,

  You're storing an entity in the session, and unless the gaeutilities
  session object handles this specially, this results in it pickling the
  'user' entity and storing it in the session. Calling .put() on the user
  object will update the datastore with the new value, but won't change the
  pickled value the session object is storing. Next request, it will
  deserialize the same data it did previously, with no update.

  -Nick Johnson

  On Sat, Mar 6, 2010 at 2:22 PM, Sumer Cip sum...@gmail.com wrote:

  Hi all,

  Below is the code I am using in my tests to update a reference
  tied to my session; the session object I use is from gaeutilities
  (wrapped a little). I put a reference object in the session and all
  is working fine.

         if not self.Session.user:
             user = scMember.all().filter('name =', 'testmember').get()
             self.Session.user = user
         else:
             self.Session.user.playcount += 1
             self.Session.user.put()

  The problem with this code is that the put() is not working unless I
  restart the App Engine SDK. In the error scenario, the user is set into the
  session object, and in the second request I can update the playcount
  only once. But the subsequent operations fail without any notice of
  error, and playcount stays 1 every time even though I have incremented it.
  This is the test handler and there is no other code running in parallel.
  When I restart the SDK, the problem goes away and it works fine.
  So, I am not sure if I am doing something wrong; it took me 6 hours to
  identify this thing.

  Any help will be appreciated.

  Thanks,


  --
  Nick Johnson, Developer Programs Engineer, App Engine
  Google Ireland Ltd. :: Registered in Dublin, Ireland, Registration Number:
  368047


  --
  Sumer Cip

 --
 Sumer Cip




[google-appengine] Does the datastore API try on timed out reads as well as write?

2009-08-12 Thread bowman.jos...@gmail.com

After creating the ROTModel (retry-on-timeout Model) for gaeutilities,
I learned the functionality I implemented was actually put into the
base datastore API. If a write fails, it retries the write a few
times before raising the Timeout exception if it continues to fail. In
the trunk for gaeutilities, nothing uses ROTModel now.

Knowing that the datastore API automatically does the retries, and that
timeouts still persisted, I made session writes more reliable by writing
to both memcache and the datastore and tracking the state of both. If the
datastore write fails, the value is pushed from memcache to the datastore
the next time the variable is read. This works great.
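A rough sketch of that dual-write idea, with hypothetical model and key names (this is not the actual gaeutilities code): memcache is written first, the datastore put is attempted, and a dirty flag records a failed put so the next read can push it through.

from google.appengine.api import memcache
from google.appengine.ext import db


class SessionData(db.Model):
    # Hypothetical model: one blob of serialized session data per session id.
    data = db.BlobProperty()


def save_session(sid, blob):
    memcache.set('session:' + sid, blob)
    try:
        SessionData(key_name=sid, data=blob).put()
        memcache.set('session-dirty:' + sid, False)
    except db.Timeout:
        # Datastore write failed; remember that the datastore copy is stale.
        memcache.set('session-dirty:' + sid, True)


def load_session(sid):
    blob = memcache.get('session:' + sid)
    if blob is not None and memcache.get('session-dirty:' + sid):
        SessionData(key_name=sid, data=blob).put()   # push the missed write
        memcache.set('session-dirty:' + sid, False)
    if blob is None:
        entity = SessionData.get_by_key_name(sid)    # memcache miss
        blob = entity and entity.data
    return blob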

However, I have seen instances where a Timeout on read causes issues.

Call session()[key]
Memcache miss
Timeout on Datastore read

No way around that. The datastore is the only persistent layer
available to App Engine applications (within the App Engine framework).
I've seen this exact sequence more than once as the sole user of my
app. It doesn't happen often, but a few times in a month for a single
user doesn't bode well for instances where an app may have many more
users. I'm curious whether reads also get the retries, or whether I should
add that in the session myself?
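A sketch of what adding read retries in the session might look like, with a hypothetical helper (not part of gaeutilities): retry a db.get() a few times before letting the Timeout surface.

import time
from google.appengine.ext import db


def get_with_retries(key, attempts=3, backoff=0.1):
    """Retry a datastore read a few times before giving up."""
    for attempt in range(attempts):
        try:
            return db.get(key)
        except db.Timeout:
            if attempt == attempts - 1:
                raise                        # still failing: surface it
            time.sleep(backoff * (attempt + 1))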

The only other option left is to declare the datastore not reliable
enough for session-level programming and look for alternatives. The
root of the problem is that the datastore has timeouts more often than
would be acceptable for any database solution hosted in house. It has been
repeatedly stated by Google that this will not be fixed. IMHO, it might be
time to revisit that statement and determine if more reliability can be
introduced to the only persistent data layer App Engine offers.



[google-appengine] Can we expect if the datastore is read only, memcache probably will be also?

2009-07-03 Thread bowman.jos...@gmail.com

Hello,

During the past several planned and unplanned outages where the datastore has
been read-only, memcache has also been unavailable for at least part of the
duration.

I'd been working on the Python session class, expanding it to allow
for periods where the datastore is read-only but memcache is
available. The point is that applications that can persist in a
read-only state would also be able to keep user state, or whatever else
they were using sessions for.

Except for gracefully getting past datastore timeout issues, though, this
extra complexity doesn't seem to solve much during these
downtimes. So I'm just curious how tightly the write
availability of the two is coupled?



[google-appengine] Question about google.appengine.runtime.apiproxy_errors.CapabilityDisabledError

2009-06-08 Thread bowman.jos...@gmail.com

Does this exception just mean the datastore is down, or that both the
datastore and memcache are down? The current svn for gaeutilities is
written so it can catch datastore write failures and fall back to
memcache, syncing back when it can write to the datastore again, for
both the session and cache classes. However, on 6/10, both the datastore
and memcache will be unavailable for writing, so I'm just trying to figure
out the best way to detect when this is the case.
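A hedged sketch of the fallback described above, with hypothetical helper names: catch the exception on the write and try memcache instead. Whether memcache is also unavailable still has to be detected separately, which is exactly the open question here.

from google.appengine.api import memcache
from google.appengine.ext import db
from google.appengine.runtime.apiproxy_errors import CapabilityDisabledError


def write_session(entity, cache_key):
    """Try the datastore first; fall back to memcache if writes are disabled."""
    try:
        entity.put()
        return 'datastore'
    except CapabilityDisabledError:
        # Datastore is in read-only maintenance; stash the entity in memcache.
        if memcache.set(cache_key, db.model_to_protobuf(entity).Encode()):
            return 'memcache'
        return 'unavailable'   # both writes failed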



[google-appengine] ANNOUCEMENT: gaeutilities version 1.2.1 is now available. session bugfix

2009-03-08 Thread bowman.jos...@gmail.com

Not much new in this release. There was an issue where entities with
empty sid values were getting created along with valid new sessions;
this bug has been fixed.
The cache class has also been expanded with a has_key() method.
Work has begun on a pagination library for gaeutilities; more
information can be found in this post:
http://groups.google.com/group/appengine-utilities/browse_thread/thread/4b40de81bc952ea2?hl=en

Users of version 1.2 (and any other version) are strongly encouraged
to upgrade if they are using sessions.



[google-appengine] Note for developers using gaeutilities

2009-02-20 Thread bowman.jos...@gmail.com

I'm hoping to get a release out tonight to fix a bug in the current
version, 1.1.3. Unfortunately I won't have a chance to sit down and work
out the fix and release until this evening, and even then I'm not sure
I'll have time. So, for anyone using the library and
encountering the problem where calling session.delete() breaks their
application, here is what I believe the fix is.

At line 172 in sessions.py:

# start a new session
+self.session = _AppEngineUtilities_Session()
+self.session.put()
self.sid = self.new_sid()

I've also put 1.1.2 up as a non-deprecated download at the site's
downloads page: http://code.google.com/p/gaeutilities/downloads/list

The bug was introduced in version 1.1.3, so 1.1.2 will be more stable
for most users. Version 1.2 will be available as soon as testing is
completed.



[google-appengine] ANNOUNCEMENT: gaeutilities 1.2 release (better sessions performance with cookie writer)

2009-02-20 Thread bowman.jos...@gmail.com

http://gaeutilities.appspot.com/

Version 1.2 of gaeutilities is now available. This release includes
an important bugfix for session deletion and is recommended for all
users. This release also introduces new functionality for session: the
ability to set the writer. The options are datastore (default) and
cookie. When you set the writer to cookie, nothing is saved server
side and there is no session token validation, increasing performance
at the cost of some security.

The django-middleware has been updated to take advantage of this
functionality. By default, Django will use the cookie writer, thus
decreasing the CPU overhead for page views on your system. However, on
login, you can call request.session.save(), which will move the session
to a datastore-based one. Further requests made by the user will
have their session data saved in the datastore, securing your user
data from sniffing attacks.

This is made possible by a new class method check_token
(Session.check_token()) which checks to see if a session token exists
for the user, and if so validates it.

For more information on how to use the new session functionality, see:
http://gaeutilities.appspot.com/

This change will impact users of the django-middleware, who will need
to call request.session.save() before auth.login(). All other
use cases of session are not impacted, as the default is
datastore-backed sessions. That said, you're encouraged to take advantage
of the cookie writer for anonymous requests in order to maximize
performance if you're loading the session on every request.
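A hedged usage sketch of that login path; the view itself is hypothetical, while request.session.save() and auth.login() are the calls named in this announcement.

from django.contrib import auth
from django.http import HttpResponseRedirect


def login_view(request):
    user = auth.authenticate(username=request.POST.get('username'),
                             password=request.POST.get('password'))
    if user is not None:
        # Promote the anonymous cookie-writer session to a datastore-backed
        # session *before* logging in, so authenticated data lives server side.
        request.session.save()
        auth.login(request, user)
        return HttpResponseRedirect('/')
    return HttpResponseRedirect('/login?error=1')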

Note: I'm interested in any sites currently using gaeutilities as well
as feedback as to why you are using the library, or why not. I've not
gotten much feedback during the course of the project but have seen a
lot of downloads. I'm curious as to how people are using the utilities.



[google-appengine] Re: Get Django's admin interface! app-engine-patch 1.0beta

2009-02-10 Thread bowman.jos...@gmail.com

Awesome, can't wait to try it out!

On Feb 10, 7:39 am, Waldemar Kornewald wkornew...@gmail.com wrote:
 Hi everyone,
 please grab our new app-engine-patch 1.0beta release. Major new
 features: support for Django's admin interface, the media generator
 (combines and compresses your JS/CSS files for faster site load
 times), and self-contained apps.http://code.google.com/p/app-engine-patch/

 We need testers. This is a two-week testing phase where we can
 stabilize everything, so we need your help to find and report bugs!

 Donations are back! Please don't miss the chance to donate and show
 your gratitude if you like 
 app-engine-patch:http://code.google.com/p/app-engine-patch/wiki/Donate

 If you don't believe me, see the sample project in action! 
 :)http://aep-sample.appspot.com/

 For those who upgrade from the repository, please change your
 settings_pre/_post imports to ragendja.settings_pre/_post. That
 feature is now part of ragendja instead of merely the sample project.

 Bye,
 Waldemar Kornewald



[google-appengine] ANNOUNCEMENT: gaeutilities 1.1.3 release

2009-02-07 Thread bowman.jos...@gmail.com

This release is primarily a bugfix release.

- session: bugfix to handle str() and len() methods when there is no
session data. Issue #12
- session: delete_all_sessions changed to a class method. Issue #14
(NOTE: delete_all_sessions is not complete)
- session: Modified session token to include the session key. Issue
#10
- session: Session token hashing changed from sha1 to md5, in order
to improve performance, as the salt is randomized with a time string.
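An illustrative sketch of a token built that way, with hypothetical names (not the actual gaeutilities code): md5 over the session key plus a salt randomized with a time string, with the session key included in the token as the notes above describe.

import hashlib
import random
import time


def new_session_token(session_key):
    salt = '%s%s' % (random.random(), time.time())
    digest = hashlib.md5(str(session_key) + salt).hexdigest()
    return '%s.%s' % (session_key, digest)   # token carries the session key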



[google-appengine] Re: ANNOUNCEMENT: gaeutilities 1.1.3 release

2009-02-07 Thread bowman.jos...@gmail.com

Sorry, forgot the url

http://gaeutilities.appspot.com

On Feb 8, 1:51 am, bowman.jos...@gmail.com bowman.jos...@gmail.com
wrote:
 This release is primarily a bugfix release.

 - session: bugfix to handle str() and len() methods when there is no
 session data. Issue #12
 - session: delete_all_sessions changed to a class method. Issue #14
 (NOTE: delete_all_sessions is not complete)
 - session: Modified session token to include the session key. Issue
 #10
 - session: Session token encryption changed to md5 from sha1, in order
 to improve performance as the salt is randomized with a time string.



[google-appengine] Re: Should I be learning Python?

2009-02-04 Thread bowman.jos...@gmail.com

I learned Python in order to use App Engine. Though if you're already
learning another language for professional reasons (a job requirement),
you may be better off waiting until you are comfortable with that
language, or at least until you know what the next release offers.

I've taught myself C, C++, Java, PHP, Perl, Bash shell scripting, and
Python for various reasons. However, as I often have to use at least a
few of these languages in the course of a month, I can attest to how
frustrating it can be to switch back and forth. Also, as someone who
is pretty much language agnostic, I can tell you that Python is just
another language, and I really don't see any reason to favor it over
any other, except for the fact that it's all App Engine supports right now.

On Feb 4, 12:56 pm, luismgz luis...@gmail.com wrote:
 You should set your priorities and take one step at a time.
 Learning Python shouldn't be seen as a painful task. Actually, if you
 leave all other concerns behind and concentrate on learning it,
 without pressure, you'll enjoy it very much.

 Believe me, none of the other languages you mentioned will give you
 more joy than Python. Its sheer simplicity, succinctness, and
 expressiveness will make you rediscover the pleasure of programming.

 Also, it is much more useful nowadays since it's cross-platform and
 runs virtually everywhere (Windows, Linux, Mac, .NET, Java, etc.).

 Just forget everything else and give Python one single day (or an
 evening), open its tutorial and fire up the interactive interpreter.
 Then, come back and let me know what you think...

 Luis

 On Feb 4, 5:13 am, tempy fay...@gmail.com wrote:

  I am starting on App Engine and learning how to use it the only way I
  know how, by starting a project.  That also means learning Python.  I
  don't totally mind learning Python, but I'm already trying to teach
  myself Objective-C and all the different syntaxes are starting to tax
  my poor head.

  So I see in the road map that the next release should be March 09, and
  also that that release should support a new language.  I know that no
  one knows (or can give away) what language will be supported, but, I
  am more or less fluent in java/c#/c++/php, so... =) should I be
  patient and wait until the next release?

  Or start trying to wrap my mind around python, while crying myself to
  sleep over the absence of dear, dear semicolons.



[google-appengine] Re: Custom Login System

2009-01-30 Thread bowman.jos...@gmail.com

As a suggestion, the app-engine-patch project is a very nice product.
I'm personally using it, substituting in gaeutilities sessions and
cache, with very good results on my primary project. Being able to
take advantage of Django's backends had me authenticating accounts
through Google (OpenID) and Yahoo (OAuth with their persistent id
field) literally in minutes. And I see nothing stopping me from adding
Myspace and Facebook when I'm ready.

On Jan 30, 10:05 am, varun varun...@gmail.com wrote:
 Real informative post for a beginner like me. The post title suits me
 completely. I need to make a basic user management system for my app.
 I have been googling the possibility of using the Django admin interface
 with GAE, but the django-appengine-helper and django-patch look hackish
 to me. I don't really want them to be. At the same time I wanted to
 find something readymade (being lazy, that I am). Both the session
 handlers above, gaeutils and gmemsess, provide me with a good start.
 Is there any simple user registration or login framework that I can
 use? Of course, now that I am here: I don't want my users to have a Google
 account.

 Thanks again for your post and work.

 Cheers
 Varun

 On Jan 25, 6:35 am, MajorProgamming sefira...@gmail.com wrote:

  Hmm. So basically this would sum it up:?

  1. If I use SSL, I can rely on the cookies?

  2. If I don't use SSL and use another provider for the password check
  (like OpenID, etc.), I need to take your advice on Sessions expiring?

  Am I correct?

  And, what are the real chances of a hacker intercepting traffic?

  On Jan 24, 6:41 pm, Greg g.fawc...@gmail.com wrote:

   On Jan 24, 10:10 am, Andrew Badera and...@badera.us wrote:

Typically, or at least in my experience, salting is
md5/sha1/whatever(password+salt) rather than md5(md5(password)+salt) ...

   If you just hash the password plus the salt, you need to store the
   password on the server. This is bad, both because servers are
   vulnerable and also because at some stage you have to transmit the
   password in clear. So you transmit (and store) the hash of the
   password, which means you need to hash it twice when you login.

But can't the attackers simply spoof a request with that session id in
the cookies?

   Yes, but only while the session is valid. At the very least make your
   sessions expire frequently, and make logging out enticing for users.
   And you could also make their IP address part of the salt, and have
   the server check it. This limits attacks to your internal network.

   Cheers!
   Greg.- Hide quoted text -

  - Show quoted text -



[google-appengine] Re: Proper way to use sessions

2009-01-28 Thread bowman.jos...@gmail.com

Sessions provide you with a way to store information about the current
site visitor across requests. They're the foundation for an
authentication system, but do not actually provide one.

What you would need to do is store within the session that the user is
logged in. In most cases it would work something like this.

User goes to site.
User fills in and submits authentication information.
Your site validates the information.
If valid, you'd store in the session the id for that user.
(session['user_id'] = id)
For future requests, you'd check whether that id is stored in the
session to validate that the user is logged in.
(if session.has_key('user_id'))
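A minimal sketch of that flow, with hypothetical handler names and a stand-in credential check; the Session import path is assumed, while the dict-style access and has_key() usage come straight from the description above.

from google.appengine.ext import webapp
from appengine_utilities.sessions import Session   # assumed import path


def check_credentials(username, password):
    # Stand-in for your real validation (datastore lookup, password hash, ...).
    return username == 'testmember' and password == 'secret'


class LoginHandler(webapp.RequestHandler):
    def post(self):
        session = Session()
        username = self.request.get('username')
        if check_credentials(username, self.request.get('password')):
            session['user_id'] = username        # mark the user as logged in
            self.redirect('/')
        else:
            self.redirect('/login?error=1')


class MembersHandler(webapp.RequestHandler):
    def get(self):
        session = Session()
        if not session.has_key('user_id'):       # not logged in
            self.redirect('/login')
            return
        self.response.out.write('Welcome back, %s' % session['user_id'])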

gaeutilities sessions do expire, and this is customizable on
initialization. gaeutilities also offers the ability for you to not
set an expiration on the cookie, meaning it will be cleared out when
the user closes their browser.

The session process, simplified, works like this.

User visits site.
Session is initialized
Session checks to see if there is a cookie in the browser with a
session token

If there is not one, it creates a new session object locally,
generates a token, and stores it in the local session object and as a
cookie in the browser.

If there is a session token cookie, it checks the local database to
see if there is a session with that token.
If there is not a session for that token, it creates a new session
object locally, generates a token, and stores it in the local session
object and as a cookie in the browser.

If there is a session with that token, it then checks the session
token to determine if it needs to be regenerated due to expiration. If
this is the case, then it creates a new token and stores it in the
local object as well as writing out a new cookie. This is not a new
session, just a new token to access the session.

If the token is found, session.last_activity is also checked to see if
the session itself has expired. The length a session is valid for is
also customizable. If the session has expired, then a new session is
created per what is described above.

I hope this helps.

On Jan 28, 9:51 am, solidus e.smolens...@gmail.com wrote:
 I tried looking online on how to define a descriptor and it looks
 fairly straightforward, but I don't see how to incorporate it with the
 @ syntax that google's @requier_login uses. Can you point me in the
 right direction or show me a a way to do this?

 Thanks again!

 On Jan 28, 9:09 am, Blixt andreasbl...@gmail.com wrote:

  Oh and regarding your second question:
  This behavior can be implied to the user's browser by not sending an
  expiry header for the cookie that holds the session id. I don't know
   if gaeutilities lets you specify this, but I would assume it does.

  If you want to truly enforce it on the server side, you can add a
  timestamp to your sessions in the datastore and not let the user
  continue using a session if the session hasn't been requested for x
  amount of time. The timestamp would be updated every time the session
  is requested so that the session only expires if the user stops using
  the page for a while (which would imply that the user has left the
  page.)

  On Jan 28, 3:02 pm, Blixt andreasbl...@gmail.com wrote:

   If you're familiar with Python:
   If you've got separate request handlers for the parts of the site that
   require login and the parts that don't, you can make a function
   descriptor that checks if the user is logged in before calling the
   actual function. If the user is not logged in, it redirects to a login
   page. Then you can use this descriptor on the get / post methods.

   Google provides this functionality with their @require_login
   descriptor that redirects to the Google Accounts login page if the
   user is not logged in, but this doesn't work when rolling your own
   authentication system, obviously.

   If you're not familiar with Python:
   The simplest way is probably to just make a function you can call that
   returns True if the user is logged in. If the user is not logged in,
   it redirects the user to your login page, then returns False. In your
   actual get / post method you check whether the result of this function
   is False, and if so, you leave the method:

   def logged_in(request):
       if [user is logged in]:
           return True
       request.redirect('/login')
       return False

   class UserSettings(webapp.RequestHandler):
       def get(self):
           if not logged_in(self): return
           # show page

   On Jan 28, 3:13 am, solidus e.smolens...@gmail.com wrote:

Hi all,

I'm new to appengine and somewhat new to web development. My question
is regarding proper ways to use sessions.

I'm currently messing around using the gaeutilities sessions module. I
have a basic login page and some content. The question is what is the
standard/best practice way to ensure that users aren't accessing parts
of your site (via direct URL) without first going through the login
 

[google-appengine] Re: two paging problems

2009-01-28 Thread bowman.jos...@gmail.com

Well, to answer your question, you'd need to regenerate your index, or
build it in such a way that items can be removed from it.

My approach to paging was this. I determined for my product I didn't
need to offer more than 100 items in a set.

When the system that needs paging is first hit, I pull 100 items, and
then cache that set using gaeutilities cache. For views after that, I
just pull the cache and page through it in blocks of 10. I found that
I could reliably pull 100 items, though I have seen one instance in my 
logs where the request deadline was exceeded.

When I add or delete items from that set, I delete the cache and rely
on the next view to reinstantiate it.
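A sketch of that approach, with hypothetical model and key names, using plain memcache purely for illustration rather than the gaeutilities cache class.

from google.appengine.api import memcache
from google.appengine.ext import db

PAGE_SIZE = 10
SET_SIZE = 100
CACHE_KEY = 'item-page-set'


class Item(db.Model):
    title = db.StringProperty()
    created = db.DateTimeProperty(auto_now_add=True)


def get_page(page_number):
    """Return one page of up to PAGE_SIZE items from a cached 100-item set."""
    keys = memcache.get(CACHE_KEY)
    if keys is None:
        items = Item.all().order('-created').fetch(SET_SIZE)
        keys = [str(item.key()) for item in items]
        memcache.set(CACHE_KEY, keys)
    start = page_number * PAGE_SIZE
    return db.get([db.Key(k) for k in keys[start:start + PAGE_SIZE]])


def invalidate_pages():
    """Call after adding or deleting an item; the next view rebuilds the set."""
    memcache.delete(CACHE_KEY)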

On Jan 28, 11:49 am, kang areyouloo...@gmail.com wrote:
 Then if I delete an object, the numerical index will be broken... How to do
 efficient paging?



 On Wed, Jan 28, 2009 at 8:08 AM, Marzia Niccolai ma...@google.com wrote:

  Hi,

  The efficient method for paging only generates a 'next' and a 'back'
  link when used as written.

  For numbered pagination, you will need to maintain some kind of global
  counter object that would compute the number of pages and the proper
  filtering value for each page you wanted to display.

  -Marzia

  On Wed, Jan 28, 2009 at 10:40 AM, kang areyouloo...@gmail.com wrote:
   I still do not know how to do paging...I can only get the next link, but
  I
   can not get the page 2,3,4,5,6,7,8 more link. Wish you can give me more
   hint. Thanks

   On Mon, Jan 26, 2009 at 7:32 AM, Marzia Niccolai ma...@google.com
  wrote:

   Hi,

   This may not work for your particular use case, but you can use an
   equality filter in conjunction with the key filter, so instead of
   something like:
       suggestions = Suggestion.all().filter('__key__ =',
           bookmark).order('__key__').fetch(PAGESIZE+1)
    you could do:
       suggestions = Suggestion.all().filter('title =',
           VALUE).filter('__key__ =',
           bookmark).order('__key__').fetch(PAGESIZE+1)

   Which would at least allow users to query on a small subset of values
   (those that specifically equal a property) and still allow paging on
   the results.

   -Marzia

   On Mon, Jan 26, 2009 at 1:02 PM, kang areyouloo...@gmail.com wrote:
Thanks for your reply.
I've just read the article today and I will give it a try. For search
results, I do not know what query users will give or how many results
there will be. So, I cannot build an index for them in advance.

On Mon, Jan 26, 2009 at 6:54 AM, Marzia Niccolai ma...@google.com
wrote:

Hi,

There are a couple of reliable ways to do paging with App Engine for
arbitrarily sized data sets, both of which are discussed in this
article:
   http://code.google.com/appengine/articles/paging.html
And the corresponding sample code:

 http://code.google.com/p/google-app-engine-samples/source/browse/#svn...

The easiest way is to just use key paging, if key ordering is
sufficient for your paging needs.

I'm not sure what you mean by not being able to index a search result,
but with the paging methods described in the article, deleting an
entity should not affect the ability to page.

-Marzia

On Sat, Jan 24, 2009 at 1:02 PM, lookon areyouloo...@gmail.com
  wrote:

 I've read the discussion about paging and have posted some questions, but
 I still have some problems.

 If I build the index by myself, I cannot index the search result. And if
 I have built an index for an object and delete one instance of that object,
 the index will be broken.

 If I use the GAE Paginator class (http://appengine-
 cookbook.appspot.com/recipe/gae-paginator-class/?

  id=ahJhcHBlbmdpbmUtY29va2Jvb2tyjgELEgtSZWNpcGVJbmRleCI4YWhKaGNIQmxibWRwYm1VdFkyOXZhMkp2YjJ0eUZBc1NDRU5oZEdWbmIzSjVJZ1pFYW1GdVoyOE0MCxIGUmVjaXBlIjlhaEpoY0hCbGJtZHBibVV0WTI5dmEySnZiMnR5RkFzU0NFTmhkR1ZuYjNKNUlnWkVhbUZ1WjI4TTcM)

 Then how do I deal with the 1000-result limit in GAE? I cannot page if I
 have more than 1000 results. I can write my own paging class, but I wish
 you could give me some advice. Thanks.

--
Stay hungry,Stay foolish.

   --
   Stay hungry,Stay foolish.

 --
 Stay hungry,Stay foolish.



[google-appengine] Suggestion for anyone using session classes that rely on the datastore

2009-01-27 Thread bowman.jos...@gmail.com

I've started work on version 1.1.3 of gaeutilities, which is going to
add a flag to not save new sessions by default, as well as a
save method. I've got the first bit of functionality live on a site
where every hour a script connects and adds anywhere from
80-150 items to my datastore. The script doesn't store cookies, so
every request was starting a new session.

Last night, since adding the new sessions code in, there is a
noticeable (though not dramatic) decrease in the CPU Seconds Used/
Second.

My suggestion for anyone using sessions is to avoid datastore-backed
sessions for anonymous users; gaeutilities version 1.1.3
will make this easier. Those of you using memcache or pure cookie
solutions don't have the overhead that datastore-backed
sessions bring to the table, so this may be less of an issue for you.



[google-appengine] Re: Permanent Unique User Identifier (part 2)

2009-01-27 Thread bowman.jos...@gmail.com

The OpenID solution is as easy to use as the Users API for App Engine.
In my implementation, I direct users to a "Login using Google"
link, which sends them to a Google page where they click a "Continue to
sign in" button, which sends them back to my site where I finish
handling the OpenID authentication process.

On Jan 27, 8:31 pm, Ryan Lamansky spam...@kardax.com wrote:
 I'm aware of the OpenID option, but I'm concerned about having to
 train users how to use it.  Ease of use is critical.

 The ability to migrate off App Engine isn't useful to me because I
 could never cost effectively reach the same level of scalability.

 -Ryan

 On Jan 26, 5:21 pm, bowman.jos...@gmail.com

 bowman.jos...@gmail.com wrote:
  As someone suggested in that thread, I'd suggest you look at Google
  Openid. I went this route because it also offers a way for you to
  persist your user identity if you move your domain off of appengine.

  On Jan 26, 5:22 pm, Ryan Lamansky spam...@kardax.com wrote:

   As a follow-up to this thread 
   here:http://groups.google.com/group/google-appengine/browse_thread/thread/...

   I've created a defect report 
   here:http://code.google.com/p/googleappengine/issues/detail?id=1019

   I'm really hoping Google can do something, as I'm hesitant to proceed
   with any of my App Engine ideas knowing that my users are going to get
   burned :|

   -Ryan



[google-appengine] Announcement: gaeutilities 1.1.2

2009-01-26 Thread bowman.jos...@gmail.com

gaeutilities - http://gaeutilities.appspot.com/ - release 1.1.2 is now
out. It is strongly recommended that anyone using the session class upgrade
to this version; one critical and one important bug are fixed in this
release. There is also a marginal performance increase from being able
to configure how often the last_activity field is updated, removing
the need to do a put() on every request.

 1.1.2
 - session: Added a last_activity_update variable to set the interval at
which last_activity needs to be updated. It defaults to 60 seconds.
This means a put() is not required on every request, which should increase
performance (a sketch of the idea follows at the end of this post).
 - session: Refactored based on a contribution from Jeremy Awon
optimizing the class initialization.
 - session: Bug found in the process to clear old sessions that was
deleting data it shouldn't be. This was the cause of the mysterious
missing session data bug. Thank you pant.lalit
 - session: Bug fixed where clean_check_percent was being ignored.
 - session: Some tweaks done to delete_all_session(), though this
feature shouldn't be considered complete yet. Needs more testing.

More performance improvements are planned for the 1.1.3 release which
will be coming soon.
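A hedged sketch of the last_activity_update idea from the changelog above (hypothetical code, not the actual gaeutilities implementation): the session entity is only put() when the recorded activity is older than the configured interval.

import datetime

LAST_ACTIVITY_UPDATE = 60   # seconds; the default mentioned above


def touch_session(session_entity):
    """Update last_activity only when the configured interval has elapsed."""
    now = datetime.datetime.now()
    interval = datetime.timedelta(seconds=LAST_ACTIVITY_UPDATE)
    if now - session_entity.last_activity >= interval:
        session_entity.last_activity = now
        session_entity.put()    # at most one put() per interval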



[google-appengine] Re: Permanent Unique User Identifier (part 2)

2009-01-26 Thread bowman.jos...@gmail.com

As someone suggested in that thread, I'd suggest you look at Google
Openid. I went this route because it also offers a way for you to
persist your user identity if you move your domain off of appengine.

On Jan 26, 5:22 pm, Ryan Lamansky spam...@kardax.com wrote:
 As a follow-up to this thread 
 here:http://groups.google.com/group/google-appengine/browse_thread/thread/...

 I've created a defect report 
 here:http://code.google.com/p/googleappengine/issues/detail?id=1019

 I'm really hoping Google can do something, as I'm hesitant to proceed
 with any of my App Engine ideas knowing that my users are going to get
 burned :|

 -Ryan



[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-24 Thread bowman.jos...@gmail.com

The problems I see with that approach are:

 - A one-time token can be sniffed. We have limited SSL support with
App Engine, which is why the client-side session token needs to change.
 - Relying on Gears, Flash, or even JavaScript creates client-side
dependencies. gaeutilities already has a dependency on cookies because
it's low-level enough that creating a way to append the session
token to all requests for all applications wasn't really possible.
Though I do have plans to expose the session token via some method to
give people an opportunity to do that themselves. Adding more dependencies
is something I want to avoid.

On Jan 24, 12:57 pm, yejun yej...@gmail.com wrote:
 Maybe store a secure token locally on gears or flash, then send one
 time token by javascript. But the initial token still need to be
 delivered by ssl.



[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-23 Thread bowman.jos...@gmail.com

Yeah, but R would be rotated every 15 seconds, which would decrease the
window in which a session is really valid by a large margin. That's why
the session token needs to be tied to every account.

On Jan 23, 1:04 am, jeremy jeremy.a...@gmail.com wrote:
 What I see as a concern with your approach is what happens when the
 server wide variable R gets out of sync with someone's version that
 was crypted based off of it? The original reason the 3 valid token set
 

 That's why I mention that you can store the last 3 values of R, as is
 done now for each session's sid - so all 3 would be tried, as is done
 now with the sid list on each session entity. You could also count how
 often R has been randomized and hand this iteration index to the
 client as part of the token.

 I'm not sure about going primarily with memcache - isn't memcache
 designed only to be a caching layer? Memcache isn't volatile in the
 sense of being either up or down; rather, it throws out stored data,
 randomly as far as the developer is concerned, as load increases.

 On Jan 23, 1:37 am, bowman.jos...@gmail.com

 bowman.jos...@gmail.com wrote:
  By the way, I really am not concerned with analysis attacks. It's
  sniffing/spoofing attacks that are most common for session hijacking.
  I simply sniff the network and find out what the name and value of the
  cookie are, and what user agent you are sending. I then duplicate
  those 2 things and if I'm behind the same NAT as you, I have your
  session loaded up in my browser. If I'm any good at social hacking, I
  set my page to auto refresh and then distract you by talking to you
  until I have your full session, by rotating the session tokens past the
  point where the one in your browser is invalid; more than
  likely the application will make you log back in, without logging me out.
  This is where you may want to consider tying the session directly to
  a user account, so a user can only be logged in once at any time, and
  logging in invalidates the current login if it exists, i.e. an
  active_session field on your user model.

  Just some late night thoughts when I really should be asleep.

  On Jan 22, 11:12 pm, jeremy jeremy.a...@gmail.com wrote:

   Hmm, I'm not sure what session timing is.

   I have an idea to reduce writes. Instead of updating the sid of every
   session individually, give each session a random value between 0 and
   C, and have one application-wide value R randomized every
   session_token_ttl seconds to an integer between 0 and C, then hand the
   client the value of this as a token:

   t = (session_id+R)%C

   then when a client hands the server a token, you can compute
   session_id = (t-R)%C

   (you can store the last 3 values of R as is done now for each sessions
   sid)

   I'm pretty sure there's no analysis attack that would allow a client
   to figure out either R at any moment or their own (constant)
   session_id. But, i could be wrong about that :\ ... The advantage
   would be you're only updating a single datastore entity every
   session_token_ttl.
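A tiny illustration of the modular scheme quoted above, with made-up values (this is an idea from the thread, not something gaeutilities implements).

C = 1000003          # modulus shared by the server and all tokens
R = 424242           # application-wide value, re-randomized periodically
session_id = 7781    # per-session constant, never sent to the client

t = (session_id + R) % C        # token handed to the client
recovered = (t - R) % C         # server recovers the session id from t
assert recovered == session_id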

   On Jan 22, 9:24 pm, bowman.jos...@gmail.com

   bowman.jos...@gmail.com wrote:
I've gone with a different approach that currently achieves similar
results, that's now available in the trunk. A new variable,
last_activity_update has been added. It's the amount of seconds that
needs to pass before that field needs to be updated by doing a put().
It defaults to 60 seconds, which of course is longer than the duration
before a put is required to update the session token with the default
settings.

This will allow developers who wish to lengthen their
session_token_ttl to a larger interval to still get their
last_activity update in, useful for session timing. It too is
customizable so for developers who have no use for this field can set
it to a large enough number to be irrelevant.

I'm trying to flesh out an idea I have to limit the amount of writes
for the token even further, but am still researching it. If I figure
it out I will get it in and do another release. Otherwise I will
release what's there now. Before any release I want to go over the
refactoring you did as it does look more efficient than what I
currently have, thanks.

On Jan 22, 6:31 pm, jeremy jeremy.a...@gmail.com wrote:

 Ok. I actually modified Session.__init__ locally to do the
 last_activity on sid rotation (i also refactored it a bit to reduce
 repeated code blocks). Regarding google.com's SID cookie - i'm not
 seeing the sid update within minutes. I'm not sure why yours rotates
 so quickly, but it's something entirely configurable in your code so
 it shouldn't matter. Anyway, here's my version of Session.__init__ :

     def __init__(self, cookie_path=DEFAULT_COOKIE_PATH,
             cookie_name=COOKIE_NAME,
 session_expire_time=SESSION_EXPIRE_TIME,
             clean_check_percent=CLEAN_CHECK_PERCENT

[google-appengine] Re: Custom Login System

2009-01-23 Thread bowman.jos...@gmail.com

By the way, relying on JavaScript to handle hashing passwords and such
isn't a reliable solution. If that's what's coming from the browser,
then anyone else can just sniff that hash and send it as the password
with the username. In the end you're relying on data from the client
being secure, which is bad.

I'd suggest, if you don't want to use the Google Users API, that you look
into using other ID providers, such as OpenID, OAuth, or
Facebook Connect. They will handle the login via SSL on their end, and
the account validation happens via urlfetch between your
application and the provider, leaving no traffic to be sniffed on the
user's network.

If you really need a unique user system, I suppose you could set up a
VPS server and have it act as an OpenID provider. One thought that
just hit me as I was writing this up is you could also use the built-in
application.appspot.com SSL that Google provides to handle the
login by making the app itself an OpenID provider. I believe there's a sample
application out there for making an OpenID provider on GAE. Then your
application, if you're using your own domain name, could urlfetch to
itself for that portion of the authentication, in order to get the
cookie domain set correctly for your sessions.

This is something that would make an interesting little project; I
wish I had time for it.

On Jan 23, 10:42 pm, bowman.jos...@gmail.com
bowman.jos...@gmail.com wrote:
 gaeutilities - http://gaeutilities.appspot.com/ - has a session class
 built specifically to work around that problem. The session id (used
 for matching data to the session) is never passed to the browser;
 rather, it uses a session token system. By default a session token is
 valid for only 5 seconds, after which a new session token is
 generated. The current token, plus the previous two, are stored and are
 valid on requests in order to not cause problems with sites whose
 pages may make multiple requests (AJAX-oriented sites). It also
 includes a middleware class so you can plug it in and use it directly
 with memcache.

 Version 1.1.1 is the current release, and the next release will
 include some new functionality to try and increase the performance by
 relying more on memcache (while still using the datastore in order to
 provide a completely reliable solution). It already uses both, but I'm
 working on cutting down the amount of writes.

 It's BSD-licensed, open source. There are no fees or attribution
 requirements for its use.

 This will not provide you with a login system. However, it does plug
 directly into Django using the middleware, so you can use Django's
 authentication system. I am in fact currently using it, Django, and
 the app-engine-patch project - http://code.google.com/p/app-engine-patch/
 - with some custom backends to handle OpenID and OAuth authentication
 for my user management system.

 On Jan 23, 4:16 pm, MajorProgamming sefira...@gmail.com wrote:

  Javascript on your login form should first hash the password, then
  hash the result with a salt - say the session id
  I assume that's only true if I opt out of SSL?

  That way the contents of the cookie are no use to anyone, all useful
  info
  is stored in memcache, where attackers can't get it.
  But can't the attackers simply spoof a request with that session id in
  the cookies?

  On Jan 23, 4:01 pm, Greg g.fawc...@gmail.com wrote:

   First, if you are not a security expert, consider using Django's
   authentication framework. Online security is not easy  - there are a
   lot of things you have to get right, and missing just one of them
   means you've failed.

   I have a reasonable amount of experience with online security, so I
   built my own authentication system on top of gmemsess, a memchache-
   backed session object. Unfortunately my code isn't modular enough to
   publish, but here are a few pointers...

   - SSL is always good, because it means anyone with access to your
   comms can't easily see what you are doing. However, it isn't crucial,
   as long as your customers can live with the unlikely event of someone
   sniffing their traffic - a good authentication scheme will prevent
   attackers sniffing passwords, although everything they do after
   logging in may be visible.

   - Cookies are far more convenient than trying to pass a session ID
   with every request. Your cookie should contain a single random ID,
   which your app then uses to find the session object in memcache. That
   way the contents of the cookie are no use to anyone, all useful info
   is stored in memcache, where attackers can't get it.

   - Store a hash of the password on appengine, not the password itself.
   This means admin cannot steal passwords, as well as allowing for safe
   transport of the password.

   - Javascript on your login form should first hash the password, then
   hash the result with a salt - say the session id. The extra salted
   hash prevents a sniffer from simply sending the hash to login, and
   also guards against using

[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-22 Thread bowman.jos...@gmail.com

I think it's a case of it having been that way for so long that I
haven't realized I need to change it. The put on every request to update
the last activity, I mean. I'll look into modifying it so that it only
writes when the session token changes.

As for Google though, every time I load up my iGoogle page, I have a
cookie called SID for www.google.com that changes each page view.
This is really how it should be for real security, as even the 15
second window I have configured by default is more than large enough
for someone to quickly write a script to sniff and grab the
cookie to be used to hijack the session.

Modifying the last_activity to update less frequently is a good idea,
I will try to get that in soon.

On Jan 22, 2:08 pm, jeremy jeremy.a...@gmail.com wrote:
 Bowman - I think my issue with gaeutilities may largely be that the
 default settings seem excessive. Looking at the cookies google.com
 hands its visitors, their SID cookie lasts a good decade and seems to
 rotate every few hours, not seconds. So at the moment I'm thinking
 I'll see if changing the default satisfies me.

 Also, at the moment there's a datastore put() on every session-using
 request to (at a minimum) update the last activity time stamp. But if
 my SESSION_TOKEN_TTL is something like 1 hour, then actually I only
 need to update the last activity time stamp if the last timestamp was
 more than an hour ago. That would significantly reduce the number of
 datastore writes, no?

 re: "I'm just trying to figure out the value in the signed cookie
 approach, because if I can figure out a way for it to make sense I
 would consider moving gaeutilities to that approach."

 Do you mean storing all session data in the client's cookie and using a
 signature to authenticate it? That approach makes me nervous, although
 I can imagine it's actually quite effective. You just store the
 session data in plain text but then also include a signature =
 SHA1(session_data + secret_key_only_the_server_knows). That way the server
 can confirm it was the one who committed the values (and not the
 client messing with their own cookies).
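
For what it's worth, a minimal sketch of that signed-cookie idea (purely
illustrative, not gaeutilities code; it uses HMAC-SHA1 rather than a bare
SHA1 of data plus secret, since HMAC is the safer construction, and the
names are made up):

import hashlib
import hmac

SECRET_KEY = 'server-side-secret'   # hypothetical; must never reach the client

def sign_cookie(session_data):
    # session_data is the serialized session payload as a string
    mac = hmac.new(SECRET_KEY.encode('utf-8'),
                   session_data.encode('utf-8'),
                   hashlib.sha1).hexdigest()
    return session_data + '|' + mac

def verify_cookie(cookie_value):
    # Returns the payload if the signature checks out, otherwise None.
    if '|' not in cookie_value:
        return None
    session_data, mac = cookie_value.rsplit('|', 1)
    expected = hmac.new(SECRET_KEY.encode('utf-8'),
                        session_data.encode('utf-8'),
                        hashlib.sha1).hexdigest()
    # a constant-time comparison would be preferable; == keeps the sketch short
    return session_data if mac == expected else None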

 On Jan 21, 8:33 pm, bowman.jos...@gmail.com

 bowman.jos...@gmail.com wrote:
  Does beaker store all session information as cookies?

  I'm just trying to figure out the value in the signed cookie approach,
  because if I can figure out a way for it to make sense I would
  consider moving gaeutilities to that approach.

  gaeutilities stores only a temporary session token in the browser
  cookie store, all information is stored server side for security.
  Since I only have the one cookie, which is basically an id to find the
  session server side with, I don't see a way for this approach to keep
  a session from being hijacked. As in the end the 'hash-value' string
  could be hijacked and reused by another browser.

  The performance issue with gaeutilities, and why Jeremy is looking
  for options, is the approach I've taken to securing the session id
  requires frequent put() operations. I've seen some other blogs who've
  mentioned this is a performance bottleneck for gae, but I haven't been
  able to come up with another approach that will not sacrifice
  reliability or performance. Basically I rotate the session id in the
  cookie every 5 seconds (this length is configurable). I store the
  current plus previous 2 tokens in a ListProperty in order to manage
  sites using AJAX type functionality. A session token is generally good
  for 15 seconds with this approach. Longer if there are not interim
  requests generating new tokens, as a new token is only generated when
  the 5 second TTL is hit. So you come back 2 minutes later, and the
  session token is still valid for that request, just regenerated on the
  request. As you can imagine, this is a lot of put operations for each
  session.

  I deemed just using memcache to not be a viable alternative because
  it's documented that it's a volatile storage, it can be cleared at any
  time. I do use memcache for speeding up reads to the session data,
  limiting read access to the datastore.

  I'm definitely open to suggestions for a method to do this without all
  the puts. The key being the reliability and security are more
  important than performance. In the end it's cheaper to pay for the CPU
  usage than the risks by allowing users to have their sessions easily
  hijacked, depending on the application, or having sessions randomly
  lost due to the storage medium not being permanent enough.

  On Jan 21, 6:27 pm, jeremy jeremy.a...@gmail.com wrote:

   thanks for the suggestions.

   does beaker really work out of the box with gae?

   On Jan 21, 1:06 am, Ian Bicking i...@colorstudy.com wrote:

On Tue, Jan 20, 2009 at 10:40 PM, jeremy jeremy.a...@gmail.com wrote:
 can anyone recommend / mention a session manager other than the one in
 gaeutilities?

Beaker works with GAE: http://beaker.groovie.org/

--
Ian Bicking  |  http

[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-22 Thread bowman.jos...@gmail.com
:
                 self.sid = self.new_sid()
                     if len(self.session.sid) > 2:
                     self.session.sid.remove(self.session.sid[0])
                 self.session.sid.append(self.sid)
                 dirty = True
             else:
                 self.sid = self.session.sid[-1]

         self.output_cookie[cookie_name] = self.sid
         self.output_cookie[cookie_name]['path'] = cookie_path
         if set_cookie_expires:
             self.output_cookie[cookie_name]['expires'] = self.session_expire_time

         self.cache['sid'] = pickle.dumps(self.sid)

         if dirty:
             self.session.put()

         #self.cookie.output()
         print self.output_cookie.output()

         # fire up a Flash object if integration is enabled
         if self.integrate_flash:
             import flash
             self.flash = flash.Flash(cookie=self.cookie)

         # randomly delete old stale sessions in the datastore (see
         # CLEAN_CHECK_PERCENT variable)
          if random.randint(1, 100) < CLEAN_CHECK_PERCENT:
             self._clean_old_sessions()

 On Jan 22, 4:38 pm, bowman.jos...@gmail.com

 bowman.jos...@gmail.com wrote:
  I think it's a case of it having been that way for so long that I
  haven't realized I need to change it. The put on every request to update
  the last activity, I mean. I'll look into modifying it so that it only
  writes when the session token changes.

  As for Google though, every time I load up my iGoogle page, I have a
  cookie called SID for www.google.com that changes each page view.
  This is really how it should be for real security, as even the 15
  second window I have configured by default is more than large enough
  for someone to quickly write a script to sniff and grab the
  cookie to be used to hijack the session.

  Modifying the last_activity to update less frequently is a good idea,
  I will try to get that in soon.

  On Jan 22, 2:08 pm, jeremy jeremy.a...@gmail.com wrote:

   Bowman - I think my issue with gaeutilities may largely be that the
   default settings seem excessive. Looking at the cookies google.com
   hands its visitors, their SID cookie lasts a good decade and seems to
   rotate every few hours, not seconds. So at the moment i'm thinking
   I'll see if changing the default satisfies me.

   Also, at the moment there's a datastore put() on every session-using
   request to (at a minimum) update the last activity time stamp. But if
   my SESSION_TOKEN_TTL is something like 1 hour, then actually i only
   need to update the last activity time stamp if the last timestamp was
   more than an hour ago. That would significantly reduce the amount of
   datastore writes, no?

   re: I'm just trying to figure out the value in the signed cookie
   approach, because if I can figure out a way for it to make sense I
   would consider moving gaeutilities to that approach. :

   Do you mean storing all session data in the clients cookie and using a
   signature to authenticate it? That approach makes me nervous, although
   I can imagine it's actually quite effective. You just store the
   session data in plain text but then also include a signature = SHA1
   (session_data + secret_key_only_the_server_knows). That way the server
   can confirm it was the one who committed the values (and not the
   client messing with their own cookies).

   On Jan 21, 8:33 pm, bowman.jos...@gmail.com

   bowman.jos...@gmail.com wrote:
Does beaker store all session information as cookies?

I'm just trying to figure out the value in the signed cookie approach,
because if I can figure out a way for it to make sense I would
consider moving gaeutilities to that approach.

gaeutilities stores only a temporary session token in the browser
cookie store, all information is stored server side for security.
Since I only have the one cookie, which is basically an id to find the
session server side with, I don't see a way for this approach to keep
a session from being hijacked. As in the end the 'hash-value' string
could be hijacked and reused by another browser.

The performance issue with gaeutilities, and why Jeremy is looking
for options, is the approach I've taken to securing the session id
requires frequent put() operations. I've seen some other blogs who've
mentioned this is a performance bottleneck for gae, but I haven't been
able to come up with another approach that will not sacrifice
reliability or performance. Basically I rotate the session id in the
cookie every 5 seconds (this length is configurable). I store the
current plus previous 2 tokens in a ListProperty in order to manage
sites using AJAX type functionality. A session token is generally good
for 15 seconds with this approach. Longer if there are not interim
requests generating new tokens, as a new token is only generated when
the 5 second TTL is hit. So you come back 2 minutes later, and the
session

[google-appengine] Is there some sort of cookie issued by google that apps can take advantage of for sessions?

2009-01-22 Thread bowman.jos...@gmail.com

There's been some ongoing discussion about the approach I and others
have been taking to session management in our appengine applications.
I always rank security over performance, but with how heavy datastore
writes are, this can be problematic and eventually expensive for
applications.

I've been thinking though, since users can log in with their Google
accounts using the User API Google offers, I was wondering if there
was a layer to this that could be tied into for all applications,
whether or not they choose to implement the full stack for user management.

Since we can host our own domains, I'm assuming that Google has
figured out a way to tie their own cookies into being readable through
the stack somehow. While I respect the fact you may not want to go
into detail about how the full process works, I was wondering if the User
API could be expanded to give applications access to whatever identifier
you're using client side to uniquely identify browser sessions. This
would then allow the various developers working on their own session
implementations to build off of that to maintain session state, and
gain the security of a real revolving session token that doesn't
require a put.

A full API for session data management doesn't need to be provided,
though it would be nice. Just access to a token that I'm assuming
somehow exists.



[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-22 Thread bowman.jos...@gmail.com

By session timing I was referring to how long a session is valid for.

For example, a session is valid for 2 hours. This means the session is
valid for last_activity + 2 hours, completely separate from the
session token process. So, if you leave the site and come back 90
minutes later, what happens is:

Session token is handed to server.
Session token is verified as a valid session token, because now -
last_activity is less than 120 minutes.
Your session token is regenerated because it's older than 5 seconds.
Because of that put operation your last_activity gets updated as well.
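
In pseudo-Python, that decision looks roughly like this (a sketch only; the
field names are illustrative, not the actual gaeutilities attributes):

import time

SESSION_LIFETIME = 2 * 60 * 60   # session valid for last_activity + 2 hours
SESSION_TOKEN_TTL = 5            # token regenerated once it is older than this

def classify_request(now, last_activity, token_issued_at):
    # Which of the steps above applies to an incoming request.
    if now - last_activity > SESSION_LIFETIME:
        return 'expired: a new session must be created'
    if now - token_issued_at >= SESSION_TOKEN_TTL:
        return 'valid: regenerate the token (that put also refreshes last_activity)'
    return 'valid: current token reused as-is'

# e.g. coming back 90 minutes later with a 10 second old token:
print classify_request(time.time(), time.time() - 90 * 60, time.time() - 10)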

What I see as a concern with your approach is what happens when the
server-wide variable R gets out of sync with someone's version that
was crypted based off of it? The original reason the 3 valid token
set

Now, one thought I've had is that really, sessions should be used for
logged-in users in most cases. So one thing I will add sooner rather
than later is a class that will help you determine whether you need to use
a session at all.

class session_manager(object):
    def session_exists(self):
        """
        Check and see if the browser has a session cookie. Note that,
        because of the expire time set on the cookie by session, this may be
        all you ever need.
        """

    def delete_session_cookie(self):
        """
        Clears the session cookie. Useful if you check to see whether a
        session cookie exists, but the user is not logged in, so you want to
        clear it.
        """

I'm using appenginepatch myself, with the django middleware bundled
with gaeutilities. I'll see what I can do about patching this to work
with this class in place when I do it.


Other than that, here are some approaches I'm considering:

1: I've made a request seeing if Google has some sort of session id
functionality they can give us. I think this should be possible based
off the fact they offer their user api. There has to be something in
the browser the application can access to tie that browser to the
logged in user session.

2: Going with memcache for the session token, with a datastore
fallback. Confirm memcache is working using the method to return the
memcache stats. If it is, then only use memcache. If it is down, use
the datastore. Of course when it goes down this would null out all
active sessions, which is undesirable. I have already seen a case
where memcache was down for applications, so this is a realistic
possibility.

3: A hybrid datastore/memcache solution with 2 cookies. Since there is
the 15 second window already, store every third token in the datastore
and as an alternate cookie, say session_id and ds_session_id. Include
in the value a count, value--count, and split on -- to get the token value
and the count. If the count is > 2, reset it to 1, set up the
ds_session_id cookie and store the value in the datastore. This would
be a write every 15 seconds through constant refreshes. Lengthening
the session token ttl will help reduce the number of writes, but will
increase the attack vector of spoofing.
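
A rough sketch of option 3 as I'm imagining it (illustrative only; the real
cookie and datastore handling is not shown and the names are made up):

def next_cookies(old_session_id_value, new_token):
    # old_session_id_value has the form 'token--count'; returns the next
    # session_id cookie value plus, every third rotation, a ds_session_id
    # value that should also be written to the datastore.
    _old_token, count = old_session_id_value.rsplit('--', 1)
    count = int(count) + 1
    if count > 2:
        # reset the counter; this is the one rotation that costs a put()
        return new_token + '--1', new_token
    return '%s--%d' % (new_token, count), None

# e.g. next_cookies('abc123--2', 'def456') -> ('def456--1', 'def456')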

Also, since so many people are behind NATs and/or using DHCP clients, I may
turn off IP verification by default, though I will leave the option in
to use it.

On Jan 22, 11:12 pm, jeremy jeremy.a...@gmail.com wrote:
 Hmm, I'm not sure what session timing is.

 I have an idea to reduce writes. Instead of updating the sid of every
 session individually, give each session a random value between 0 and
 C, and have one application-wide value R, randomized every
 session_token_ttl seconds to an integer between 0 and C, then hand the
 client this as a token:

 t = (session_id + R) % C

 then when a client hands the server a token, you can compute
 session_id = (t - R) % C

 (you can store the last 3 values of R, as is done now for each session's
 sid)

 I'm pretty sure there's no analysis attack that would allow a client
 to figure out either R at any moment or their own (constant)
 session_id. But I could be wrong about that :\ ... The advantage
 would be that you're only updating a single datastore entity every
 session_token_ttl.
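
As a sketch, the proposed scheme would look something like this (purely
illustrative; whether it is actually resistant to analysis is exactly the
open question):

import random

C = 2 ** 31              # shared modulus
R_HISTORY = []           # recent values of the application-wide rotating R

def rotate_r():
    # called once every session_token_ttl seconds, application-wide
    R_HISTORY.append(random.randrange(C))
    del R_HISTORY[:-3]   # keep only the last 3 values, like the sid backlog

def token_for(session_id):
    return (session_id + R_HISTORY[-1]) % C

def candidate_session_ids(token):
    # (t - R) % C undoes (session_id + R) % C for each recent R
    return [(token - r) % C for r in R_HISTORY]

rotate_r()               # seed the first value of R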

 On Jan 22, 9:24 pm, bowman.jos...@gmail.com

 bowman.jos...@gmail.com wrote:
  I've gone with a different approach that currently achieves similar
  results, and it's now available in the trunk. A new variable,
  last_activity_update, has been added. It's the number of seconds that
  needs to pass before that field gets updated by doing a put().
  It defaults to 60 seconds, which of course is longer than the duration
  before a put is required to update the session token with the default
  settings.

  This will allow developers who wish to lengthen their
  session_token_ttl to a larger interval to still get their
  last_activity update in, useful for session timing. It too is
  customizable, so developers who have no use for this field can set
  it to a large enough number to be irrelevant.
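
In effect the new setting gates the write something like this (a sketch, not
the actual gaeutilities code; the model and field names are stand-ins):

import time

from google.appengine.ext import db

LAST_ACTIVITY_UPDATE = 60     # seconds that must pass before last_activity is rewritten

class SessionEntity(db.Model):            # stand-in for the real session model
    last_activity = db.FloatProperty(default=0.0)

def touch(session):
    # Update last_activity only when it is stale enough to be worth a put().
    now = time.time()
    if now - session.last_activity >= LAST_ACTIVITY_UPDATE:
        session.last_activity = now
        session.put()                     # the datastore write this setting rations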

  I'm trying to flesh out an idea I have to limit the number of writes
  for the token even further, but am still researching it. If I figure
  it out I will get it in and do another

[google-appengine] Re: is gaeutilities sessions the only 3rd party session manager?

2009-01-21 Thread bowman.jos...@gmail.com

Does beaker store all session information as cookies?

I'm just trying to figure out the value in the signed cookie approach,
because if I can figure out a way for it to make sense I would
consider moving gaeutilities to that approach.

gaeutilities stores only a temporary session token in the browser
cookie store, all information is stored server side for security.
Since I only have the one cookie, which is basically an id to find the
session server side with, I don't see a way for this approach to keep
a session from being hijacked. As in the end the 'hash-value' string
could be hijacked and reused by another browser.

The performance issue with gaeutilities, and why Jeremy is looking
for options, is the approach I've taken to securing the session id
requires frequent put() operations. I've seen some other blogs who've
mentioned this is a performance bottleneck for gae, but I haven't been
able to come up with another approach that will not sacrifice
reliability or performance. Basically I rotate the session id in the
cookie every 5 seconds (this length is configurable). I store the
current plus previous 2 tokens in a ListProperty in order to manage
sites using AJAX type functionality. A session token is generally good
for 15 seconds with this approach. Longer if there are not interim
requests generating new tokens, as a new token is only generated when
the 5 second TTL is hit. So you come back 2 minutes later, and the
session token is still valid for that request, just regenerated on the
request. As you can imagine, this is a lot of put operations for each
session.
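
Schematically, the entity and rotation just described look something like
this (a sketch under the assumptions stated above, not the actual
gaeutilities source):

import time
import uuid

from google.appengine.ext import db

SESSION_TOKEN_TTL = 5      # seconds before a new token is issued
TOKEN_BACKLOG = 3          # current token plus the previous two stay valid

class SessionTokens(db.Model):               # illustrative model name
    sid_list = db.StringListProperty()       # rotating tokens, newest last
    issued_at = db.FloatProperty()

def validate_and_rotate(session, presented_token):
    if presented_token not in session.sid_list:
        return False                         # unknown token: reject the request
    if time.time() - session.issued_at >= SESSION_TOKEN_TTL:
        session.sid_list = (session.sid_list + [uuid.uuid4().hex])[-TOKEN_BACKLOG:]
        session.issued_at = time.time()
        session.put()                        # the frequent write under discussion
    return True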

I deemed just using memcache to not be a viable alternative because
it's documented as volatile storage that can be cleared at any
time. I do use memcache for speeding up reads to the session data,
limiting read access to the datastore.

I'm definitely open to suggestions for a method to do this without all
the puts, the key point being that reliability and security are more
important than performance. In the end it's cheaper to pay for the CPU
usage than to risk letting users have their sessions easily
hijacked, depending on the application, or having sessions randomly
lost because the storage medium is not permanent enough.

On Jan 21, 6:27 pm, jeremy jeremy.a...@gmail.com wrote:
 thanks for the suggestions.

 does beaker really work out of the box with gae?

 On Jan 21, 1:06 am, Ian Bicking i...@colorstudy.com wrote:

  On Tue, Jan 20, 2009 at 10:40 PM, jeremy jeremy.a...@gmail.com wrote:
   can anyone recommend / mention a session manager other than the one in
   gaeutilities?

  Beaker works with GAE: http://beaker.groovie.org/

  --
  Ian Bicking  |  http://blog.ianbicking.org



[google-appengine] Announcement: gaeutilities 1.1.1

2009-01-19 Thread bowman.jos...@gmail.com

http://gaeutilities.appspot.com/

gaeutilities has been upgraded to version 1.1.1. This is a suggested
upgrade for all developers using the libraries. This release includes
some performance optimizations, bugfixes, and the new
ROTModel (Retry On Timeout Model). Please note that pages that use
session will not be cached by the browser. If you are using session, please
make sure not to use it on pages where it's not necessary (static pages),
to better optimize your site and cut down on requests.

 1.1.1
 - session and flash: added no_cache_headers() method that is called
whenever either
class is loaded. This should stop any problems with browser
caching.
 - ROTModel - new Model class added that retries put operations when
db.Timeout is encountered.
 - sessions and cache: Re-tuned the cleanup routines' default settings.
Cleanup will happen more often, but will delete fewer instances, lowering
the amount of time required to run the operation, in an effort to avoid
deadline exceeded errors in applications.
 - session - uses the new ROTModel for both session and session data.
 - session - rewrote the Cookie handling process to work better with
other applications that use cookies, Google
Analytics for example.
 - cron - Fixed a bug in the form for deleting cron entries.
 - Project - The 1.1 branch is now the suggested branch to use, 1.0 is
being deprecated.



[google-appengine] Re: Looking for help to finish a new model to catch datastore timeouts

2009-01-16 Thread bowman.jos...@gmail.com

Thanks for both links. I think I need to go back over that site. That
page on exceptions was much better than the book I bought.

On Jan 15, 6:28 pm, Alexander Kojevnikov alexan...@kojevnikov.com
wrote:
  Oh ok.. so it is possible to catch that. It's datastore.Timeout and
  not db.Timeout, right? I was confused because when I see it in the
  logs I saw: raise _ToDatastoreError(err), so I wasn't sure how to
  catch it since that exception covers pretty much any issue with
  writing to the datastore, not just timeouts.

 Actually google.appengine.ext.db.Timeout and
 google.appengine.api.datastore_errors.Timeout are the same class; check
 google/appengine/ext/db/__init__.py, line 105.

 db.Timeout is documented here [1], the one from datastore_errors is
 not a part of documented public API and thus can change. It's safer to
 use db.Timeout in your code.

 [1] http://code.google.com/appengine/docs/datastore/exceptions.html
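
For illustration, catching the documented exception in a retrying put could
look like this (a sketch only; the retry count is arbitrary):

from google.appengine.ext import db

def put_with_retries(entity, retries=3):
    # Retry only on db.Timeout; any other datastore error propagates at once.
    for attempt in range(retries):
        try:
            return entity.put()
        except db.Timeout:
            if attempt == retries - 1:
                raise    # still timing out after the final attempt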

  (Sorry, I didn't start using python until I started using
  appengine, so still learning here)

 Neither did I :) You can skim through the chapter of the tutorial that
 covers exceptions, it's pretty short and very well
 written: http://www.python.org/doc/2.5.2/tut/node10.html



[google-appengine] Re: App Engine Gurus

2009-01-16 Thread bowman.jos...@gmail.com

I like it. Now let's hope people don't drive poor Alexander and Bill
crazy with questions. :)

On Jan 15, 7:36 pm, boson dan.kam...@gmail.com wrote:
 Yay! Both Alexander & Bill have been very helpful and knowledgeable.
 I'm glad to see the formal recognition and development of the GAE
 community.

 On Jan 15, 12:12 pm, Daniel O'Brien (Google) d...@google.com
 wrote:

  Hello everyone,

  Some of you may have noticed the recent addition of a guru listing
  at the top of our main groups page. We've selected a few top
  contributors to the App Engine community to act as App Engine Gurus.
  Gurus will continue to post normally in the group, but are now
  officially noted for their excellent contributions. The gurus were
  selected based on their eagerness to participate in the App Engine
  community as a whole and their advanced knowledge of the API.

  The newly appointed Gurus are:

    - Alexander Kojevnikov
    - Bill Katz

  Google will continue posting normally alongside the gurus. Our primary
  focus is to help developers using the API, and the gurus will help
  with that focus by continuing to help provide knowledge and answers to
  those who ask. Remember that this list isn't fixed. Gurus who stop
  participating may be removed, and others who demonstrate the level of
  expertise and willingness to participate may be added.

  If anyone has any questions, feel free to reply, contact me directly,
  or send a message to the group's owners list.

  Daniel O'Brien



[google-appengine] Re: Looking for help to finish a new model to catch datastore timeouts

2009-01-15 Thread bowman.jos...@gmail.com



On Jan 14, 11:11 pm, Alexander Kojevnikov alexan...@kojevnikov.com
wrote:
  Is there any way to catch the timeout error specifically? That way it
  doesn't bother to retry if something else is the problem.

 Sure, just replace except with except db.Timeout


Oh ok, so it is possible to catch that. It's datastore.Timeout and
not db.Timeout, right? I was confused because when I saw it in the
logs I saw "raise _ToDatastoreError(err)", so I wasn't sure how to
catch it, since that exception covers pretty much any issue with
writing to the datastore, not just timeouts. (Sorry, I didn't start
using python until I started using appengine, so still learning here)

example:

<class 'google.appengine.api.datastore_errors.Timeout'>: datastore
timeout: operation took too long.
Traceback (most recent call last):
  File /base/data/home/apps/fanatasticweb/1.330570312771074300/common/
appenginepatch/main.py, line 26, in real_main
util.run_wsgi_app(application)
  File /base/python_lib/versions/1/google/appengine/ext/webapp/
util.py, line 76, in run_wsgi_app
result = application(env, _start_response)
  File /base/data/home/apps/fanatasticweb/1.330570312771074300/common/
zip-packages/django.zip/django/core/handlers/wsgi.py, line 239, in
__call__
  File /base/data/home/apps/fanatasticweb/1.330570312771074300/common/
zip-packages/django.zip/django/core/handlers/base.py, line 67, in
get_response
  File /base/data/home/apps/fanatasticweb/1.330570312771074300/common/
appengine_utilities/django-middleware/middleware.py, line 12, in
process_request
request.session = sessions.Session()
  File /base/data/home/apps/fanatasticweb/1.330570312771074300/common/
appengine_utilities/sessions.py, line 207, in __init__
self.session.put()
  File /base/python_lib/versions/1/google/appengine/ext/db/
__init__.py, line 657, in put
return datastore.Put(self._entity)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 162, in Put
raise _ToDatastoreError(err)
  File /base/python_lib/versions/1/google/appengine/api/
datastore.py, line 1637, in _ToDatastoreError
raise errors[err.application_error](err.error_detail)



[google-appengine] Looking for help to finish a new model to catch datastore timeouts

2009-01-14 Thread bowman.jos...@gmail.com

I have the basics, which are posted below. However, this is where I'm
realizing I'm still very new at learning python. What I'd like to do
is only catch the timeout error for the retries, and at the end of the
retries go ahead and re-raise the timeout error, as if it's failed 3
times, I'm going to assume it will keep failing and it's up to the
application to handle that.

class ROTModel(db.Model):
    """
    Retry On Timeout Model. This model exists to override the put method of
    db.Model in order to retry the put operation when a timeout error
    is encountered.
    """
    def put(self):
        count = 0
        while count < 3:
            try:
                return db.Model.put(self)
            except:
                count += 1
        else:
            raise datastore._ToDatastoreError()



[google-appengine] Re: Looking for help to finish a new model to catch datastore timeouts

2009-01-14 Thread bowman.jos...@gmail.com

Is there any way to catch the timeout error specifically? That way it
doesn't bother to retry if something else is the problem.

On Jan 14, 5:53 pm, Alexander Kojevnikov alexan...@kojevnikov.com
wrote:
 Joseph,

 You can try this code:

     def put(self):
         count = 0
         while True:
             try:
                 return db.Model.put(self)
             except:
                 count += 1
                 if count == 3:
                     raise

 Cheers,
 Alex
 --www.muspy.com

 On Jan 15, 4:09 am, bowman.jos...@gmail.com

 bowman.jos...@gmail.com wrote:
  I have the basics, which are posted below. However, this is where I'm
  realizing I'm still very new at learning python. What I'd like to do
  is only catch the timeout error for the retries, and at the end of the
  retries go ahead and return the timeout error, as if it's failed 3
  times, I'm going to assume it will keep failing and it's up to the
  application to handle that.

  class ROTModel(db.Model):
      
      Retry On Timeout Model. This model exists to override the put
  method of
      db.Model in order to retry the put operation when a timeout error
  is encountered.
      
      def put(self):
          count = 0
          while count < 3:
              try:
                  return db.Model.put(self)
              except:
                  count += 1
          else:
              raise datastore._ToDatastoreError()



[google-appengine] Possible workaround for datastore timeouts

2009-01-12 Thread bowman.jos...@gmail.com

I've added issue 982: 
http://code.google.com/p/googleappengine/issues/detail?id=982
as an issue to include a possible workaround for the datastore
timeouts.

Here's the text of the issue:

From what I can tell, it's not a case of if you're going to get
datastore timeouts on puts, but when. My original thought was for input
actions on my application to use javascript to do the action, catching
timeouts and trying again. On a scripted input I have for my site this
has worked extremely well for making sure the data gets into my site.

However, when I started taking into account all the writes that happen
through the operation of my site, I realized that this methodology
makes more sense to integrate at a lower level than the front end.

When I have time, after I fix a few other things with my application, I
intend to look at creating a new Model for my site to use which will
handle this at the put() level.

Something to the effect of:

MAX_PUT_RETRIES = 3

count = 0
while count < MAX_PUT_RETRIES:
    ''' try to put the data.
    if the put fails due to a datastore timeout, increase
    the count and try to put again. Ideally throw another
    exception if it hits MAX_PUT_RETRIES without adding the
    data.'''

If something like this could be added to GAE directly, I think it would
help a lot of people. Though of course the extra CPU usage caused by
these timeouts would still be an issue.




[google-appengine] Re: Is the Datastore just too slow for 100+ returned records?

2009-01-09 Thread bowman.jos...@gmail.com

For the record, I regularly pull 100 entities, which are much smaller
than yours, for my application in order to page through them.
Basically, to meet a paging requirement I pull 100, cache that result,
then page within it. I do think the size of your entities is part of
the problem.

Though, I did find with the same models I pull 100 of for viewing, I
had to drop down to 75 for deletion. Even pulling 75 to delete I tend
to run into high CPU and datastore timeouts on many requests. Overall,
bulk management within the datastore is very difficult.

On Jan 9, 1:22 pm, Tzakie gwood...@comexton.com wrote:
  The datastore is
  designed to maintain its performance characteristics over very large data
  sets and heavy traffic, so performance with small data sets and low traffic
  may not always compare to a single-server SQL database under similar
  conditions. in the same sense that it won't compare to a RAM cache of the
  same data.

 I'm not expecting it to work the same as sql. But I do expect to build a
 list of a couple hundred records. I'm not crazy; this is a totally
 reasonable thing for an app to do. Sorry guys, the serial return speed
 of these entity-loaded queries is an issue. It's not me, it's you. And
 it's just a couple of changes away from working. Can't change it myself,
 so I'm at your mercy, google.

 Can we allow your libraries to spawn threads? SimpleDB solves this by
 allowing you to bang it with many threads at once. Even though the
 fetch is slow for the entities, since you can do so many in parallel it
 works out.

 That way you could:

 Keys=QueryObject.getKeys()
 NeededEntities=[]
 for Key in Keys:
   #if key is not in memcache add to NeededEntities

 DictonaryOfReturnedEntities = Model.fetchMassEntities(NeededEntities, Threads=30)

 Now I want .fetchMassEntities(NeededEntities, Threads=30) to kick off 30
 threads and get my entities, which I will cache. This should be totally
 workable for lists up to 1,000 I would think. It's not a sql vs BigTable
 thing, it's an architecture issue with the current setup.

  Part of scaling data is
  determining how quickly updates need to be available, so you can
  pre-calculate and cache results to optimize queries, i.e. make data that
  doesn't need to be instantly dynamic less dynamic.

 If I make a degenerate table model with just the info I want to query and
 the key stored, isn't it just the same number of entity fetches? Am
 I getting punished by the number of returned entities? Or the number of
 properties returned?

  Incidentally, returning just keys for a query has been on our radar for a
  while, we just haven't gotten to it.  I can't promise anything, but it's on
  our list.  Feel free to file a feature request in the issue tracker to
  promote it.

 Will do. Thanks for your time Dan. :)



[google-appengine] Re: Running multiple Dev Web Servers on same machine

2009-01-08 Thread bowman.jos...@gmail.com

When writing the Cron application for gaeutilities I ran into the need
to run two instances, as the dev_appserver can't make a request to
itself. All I did was run the second instance on a different port,
didn't have to mess with the address. Mind you, it didn't make a lot
of requests to the datastore for the purposes I was using it for, but
I had no problems with both instances using the same datastore.

On Jan 8, 3:12 pm, boson dan.kam...@gmail.com wrote:
 These dev_appserver.py params seem important for running multiple
 instances:

 --address=ADDRESS, -a ADDRESS
 Address to which this server should bind. (Default
 localhost).

 --port=PORT, -p PORT
 Port for the server to run on. (Default 8080)

 --datastore_path=PATH
 Path to use for storing Datastore file stub data.
 (Default /var/folders/PP/PP05HWgpEyadqSwuIRPRSU+++TI/-Tmp-/
 dev_appserver.datastore)

 --history_path=PATH
 Path to use for storing Datastore history.
 (Default /var/folders/PP/PP05HWgpEyadqSwuIRPRSU+++TI/-Tmp-/
 dev_appserver.datastore.history)

 I would expect multiple processes might butt heads when accessing the
 same Datastore.  And of course you need to put the servers on
 different address/port combinations.

 On Jan 8, 11:39 am, Alex Popescu the.mindstorm.mailingl...@gmail.com
 wrote:

  I am trying to run 2 dev web servers on my local machine in order to
  test some fetch functionality. Anyways, even if both apps are
  configured to use different ports and they are running in 2 different
  Python processes, this doesn't seem to work. Is this a known
  limitation? Is there any workaround?

  tia,

  ./alex



[google-appengine] Re: 1 application, multiple datastores

2009-01-07 Thread bowman.jos...@gmail.com

Guys, I think you need to take a step back and look at this from a
higher level.

Appengine supplies you with an instance in a cloud that includes a
customized python set, and a BigTable backend. It does not support
multiple BigTable backends and design wise I doubt it ever will. There
comes a time when you have to look at your application and determine
what is the right environment for it to be built in to meet your
business requirements. In this case, it does not sound like appengine
in and of itself is going to meet those requirements.

Generally business requirements dictate the speed at which your
product must become available for use. Google has a published roadmap
for appengine, and support for multiple BigTable instances per
application is not on it, and they have not even implied it's
something they have any interest in implementing.

So, at this point, I'd suggest you look at other alternatives in order
to meet your business requirements.

 - Separate it by table within BigTable as has been suggested.
 - Pull everything back in-house and build server(s) capable of
supporting your application with the requirements you have, such as
with MySQL running a different database for each of your users.
 - Examine other cloud db storage options to see if they can meet your
requirements, such as the offering from Amazon. Though, while you
could use appengine combined with that solution, I would question how
quickly you'd hit the urlfetch quota limits.
 - Examine all the offerings at Amazon and other cloud providers such
as Aptana to see if any of them are a better fit for your
requirements.

Sometimes you have to stop and realize that business/security
requirements will dictate the technology you need to use, rather than
personal preference/comfort with technology you know. Over a decade of
preaching Linux while supporting Exchange and Citrix on Windows has
pounded this into my head.



[google-appengine] Getting around the limits of fetching large result sets

2009-01-06 Thread bowman.jos...@gmail.com

I've got several blogs scattered around and use none of them, so I
thought I'd post this here and maybe some of the ideas and methods I'm
using might help other developers using appengine.

Let me preface this with, I've come to the conclusion that appengine
is not the right tool for every job, and when you start looking at
needing to do the things I mention below on a regular basis, you might
wish to consider moving back to a traditional server/database
solution, or look for other alternatives to BigTable.

I had two problems, I attacked them two different ways to reach
acceptable solutions.

There was a constraint for both problems that some of you may not be
aware of yet, so I'll explain that first. While the datastore has a 1000
limit on fetch(), if you're regularly fetching 1000 entities you are
going to get execution timeouts, especially if you're doing something
with each of those 1000 entities, like, say, deleting them. I've found
that when you're doing a write for every entity, 75 is for the most part a
safe number.

Finally, one other piece of information you need is what I'm
developing with:
The latest version of appenginepatch and the django supplied with it,
1.0.2 I believe.
The latest trunk version of gaeutilities (I need to get a new update
out to you all who are using it).


Ok... problems.

Paging.
I needed a nice paging system that would also be lightweight and user
friendly. I read the very interesting article on paging by key, and it
didn't really fit in this instance. Keys didn't make for friendly urls,
and my paging items aren't static; I page by a score that can change on
each entity.

My solution was to use the paging library that comes with django. I
run my query, and then page on top of it. Since the scores don't
change every request, I also cache those queries in order to offer
even better performance. I fetch 100 entities for the paging, which
gives me 10 pages of 10, when run through the django pagination
library. To increase the number of pages, I could fetch more entities.
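
Roughly, the cached-window-plus-Paginator idea looks like this (a sketch;
the 'score' ordering, cache key, and timeout are assumptions, and Story is
the model used elsewhere in this post):

from django.core.paginator import Paginator
from google.appengine.api import memcache
from fanatasticweb.models import Story

PAGE_SIZE = 10
FETCH_SIZE = 100                 # 100 entities -> 10 pages of 10

def story_page(page_number):
    stories = memcache.get('story_page_window')
    if stories is None:
        # scores change rarely, so a cached window is acceptable
        stories = Story.all().order('-score').fetch(FETCH_SIZE)
        memcache.set('story_page_window', stories, 300)   # cache for 5 minutes
    return Paginator(stories, PAGE_SIZE).page(page_number)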


Deleting more than 1000 records:
My first solution was to create a page that deleted 75 records at a
time, and refresh a whole lot. This was a bit frustrating given the
number of records I had. The next time I had to do it, I got a little bit
more creative.

First, the view in Django:
def deleteAllStories(request):
    ''' This method deletes all stories in the database.'''
    from google.appengine.ext import db
    from fanatasticweb.models import Story

    query = db.GqlQuery('SELECT * FROM Story')
    results = query.fetch(75)
    more = False                    # initialized so the later check can't fail
    if len(results) == 75:
        more = True

    db.delete(results)
    if more:
        return HttpResponse('More')
    else:
        return HttpResponse('done')

This view is actually called via an ajax request from this view:
def deleteAllStoriesV(request):
    return render_to_response('deleteall.html', {})

deleteall.html is a simple javascript function that calls the story
delete view and checks the content. If the content is not 'done' it
refreshes. That way, it will keep running even over the occasional
datastore timeout error you'll encounter.


The main thing I've learned is that when you need to start managing
lots of records, it's possible. Javascript is more than likely going
to be the answer. Javascript will also be useful for avoiding
datastore timeouts in some instances as well. However, when you need
to start creating multiple http requests to manage data on a regular
basis, it may be time to move back to a server with your favorite
database backend in order to be able to process those large
transactions.




[google-appengine] Re: Suggests based on user input

2009-01-06 Thread bowman.jos...@gmail.com

If it's going to make a request each time it needs to calculate the next
suggestions, you could quickly hit your requests quota. When doing
Ajax-style functionality on your appengine site, it's best to keep in
mind that you have a finite number of requests available to your
application per day.

On Jan 6, 9:58 am, Barry Hunter barrybhun...@googlemail.com wrote:
 You could use something like this: http://developer.yahoo.com/yui/autocomplete/

 The python script to output csv or xml based on the datastore query
 should be fairly trivial.
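
A minimal handler along those lines might look like this (a sketch; the
Suggestion model, its 'name' property, and the prefix-match filter are
all assumptions, not anything from the original post):

from google.appengine.ext import db
from google.appengine.ext import webapp

class Suggestion(db.Model):              # hypothetical kind holding suggestion text
    name = db.StringProperty()

class SuggestHandler(webapp.RequestHandler):
    def get(self):
        q = self.request.get('q')
        matches = (Suggestion.all()
                   .filter('name >=', q)
                   .filter('name <', q + u'\ufffd')   # common prefix-match trick
                   .fetch(10))
        self.response.headers['Content-Type'] = 'text/csv'
        self.response.out.write('\n'.join(m.name for m in matches))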

 2009/1/6 Shay Ben Dov shay.ben...@gmail.com:





  Hi Everybody,

  Has anyone tried to implement a suggestions based on user input like
  inGmail when you start to key in the To: field and you get a lists of
  suggested names read from your contacts.

  I wish to implement the same reading suggestions from a datastore
  table based on user keyed_in input.

  Every assistance is appreciated.

  Thanks,

  Shay Ben Dov

 --
 Barry

 -www.nearby.org.uk-www.geograph.org.uk-



[google-appengine] Re: When data is entered in the datastore, is it available at the same time to all instances?

2009-01-01 Thread bowman.jos...@gmail.com

Thanks for the explanation, I'm going to assume it's client/server
related then, meaning, my code.

On Dec 31 2008, 8:26 pm, djidjadji djidja...@gmail.com wrote:
 Yes there is only one instance of the object in BigTable.

 If a request calls put() on an object what happens is:

 1) the object is created/updated in the Datastore
 2) all the index tables are updated.
 3) the put() returns

 After the put call you know that a query for the object in the same
 request handler will be retrieved correctly.

 If another request handler does a query for the same kind it depends
 when the query is executed in relation to the put() call.

 before 1) you get the old version and the old index table
 between 1 and 2) you get the new version and the old index table
 (update), or not found (create)
 after 2) you get the new version and the new index table

 But if these requests come from a single user that does not make
 simultaneous requests, this index situation does not apply.
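
A small sketch of the distinction being described (MyModel is just an
illustrative kind, not anything from gaeutilities):

from google.appengine.ext import db

class MyModel(db.Model):                  # illustrative kind
    value = db.StringProperty()

def demo():
    key = MyModel(value='x').put()
    # A get by key in the same request handler reliably sees the new entity.
    entity = db.get(key)
    # A query by property from another request may still be served by the old
    # index for a moment, so it may or may not find the entity yet.
    maybe = MyModel.all().filter('value =', 'x').get()
    return entity, maybe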

 2009/1/1 bowman.jos...@gmail.com bowman.jos...@gmail.com:



  Trying to track down a very inconsistent issue with gaeutilities
  session. I don't know enough about the backend to make some guesses,
  so am asking.

  When I enter an entity into BigTable, is that the only instance that
  exists and all front ends that users hit will read that same entity,
  or is there some sort of data duplication/caching that could mean that
  the entity is not available to an instance that the user is hitting?

  Session works by sending a cookie with a session token, and then that
  token is stored in the datastore. Every request the cookie is compared
  against the datastore to determine if the user is logged in. There is
  some memcache usage, but if the data is not found in memcache, the
  datastore is checked.

  I believe I've seen the same issue with a couple other processes I
  have in place for oauth/open id logins, where I've gotten strange
  random results of data not being found, but it's too inconsistent to
  track down.

  I can bang on the gaeutilities session without any problems. I in fact
  set up some javascript request buttons on the session demo to try and
  break it, haven't been able to replicate the problem there.

  The application where I do see the problem is a django 1.0
  application, using appengine patch. Not sure why or how that would
  matter but there you go.

  I'm going to write some test scripts against that site to see if I can
  get any luck duplicating the issue in a reliable sense in the next few
  days.



[google-appengine] When data is entered in the datastore, is it available at the same time to all instances?

2008-12-31 Thread bowman.jos...@gmail.com

Trying to track down a very inconsistent issue with gaeutilities
session. I don't know enough about the backend to make some guesses,
so am asking.

When I enter an entity into BigTable, is that the only instance that
exists and all front ends that users hit will read that same entity,
or is there some sort of data duplication/caching that could mean that
the entity is not available to an instance that the user is hitting?

Session works by sending a cookie with a session token, and then that
token is stored in the datastore. Every request the cookie is compared
against the datastore to determine if the user is logged in. There is
some memcache usage, but if the data is not found in memcache, the
datastore is checked.

I believe I've seen the same issue with a couple other processes I
have in place for oauth/open id logins, where I've gotten strange
random results of data not being found, but it's too inconsistent to
track down.

I can bang on the gaeutilities session without any problems. In fact I
set up some javascript request buttons on the session demo to try and
break it, and haven't been able to replicate the problem there.

The application where I do see the problem is a django 1.0
application, using appengine patch. Not sure why or how that would
matter but there you go.

I'm going to write some test scripts against that site to see if I can
get any luck duplicating the issue in a reliable sense in the next few
days.



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-21 Thread bowman.jos...@gmail.com

I'm not sure if you all changed anything, but this just worked for
me.

On Dec 18, 2:44 pm, bowman.jos...@gmail.com
bowman.jos...@gmail.com wrote:
 I've been relying on the Kind drop down. The only other difference is
 that I'm also working within a Django structure.

 I did just do the test using a GQL query, and was able to pull up the
 original FavoriteTeamsRanking that way. Though it's still not showing
 up in the Kind dropdown.

 On Dec 18, 2:30 pm, Marzia Niccolai ma...@google.com wrote:

  Hi Joseph,

  I uploaded your example to an application that I own (yo.appspot.com), and
  it worked as expected.  So there must be something else going on here.  Have
  you tried to query for the model using GQL in the Admin console, or are you
  just relying on the kind drop down?  I'm curious to know if this might be
  where the difference is.

  -Marzia

  On Thu, Dec 18, 2008 at 11:26 AM, bowman.jos...@gmail.com 

  bowman.jos...@gmail.com wrote:

   I'd try taking my example on as a demo application. Some differences
   are

   1. You example below you to a put in TestModelB before the
   get_or_insert. I believe in my example I found that this would work.
   2. Everything in my example is happening in one request, whereas the
   shell I believe you're ending up with each put/get_or_insert happening
   per request. Not sure why that would matter, but I'm not fluent in the
   internals, and any testing and research I can do is on the SDK, where
   everything works correctly anyhow.

   I did do my own test using the appshell, and my results were that it
   worked correctly in that instance as well.

class JoeTest1(db.Model):
    testval = db.StringProperty()
class JoeTest2(db.Model):
    testval = db.StringProperty()
list(JoeTest1.all())
   []
list(JoeTest2.all())
   []
test1 = JoeTest1(testval = test1)
test1.put()
   datastore_types.Key.from_path('JoeTest1', 72664L, _app=u'shell')
test2 = JoeTest2.get_or_insert(x, testval = get_or_insert test)
list(JoeTest1.all())
   [__main__.JoeTest1 object at 0xdee1f6793dad2940]
list(JoeTest2.all())
   [__main__.JoeTest2 object at 0xb82607baa49b66c0]
JoeTest1.all().get().key().name()
JoeTest2.all().get().key().name()
   u'x'

    On Dec 18, 1:30 pm, ryan ryanb+appeng...@google.com wrote:
true. i was cheating because i know the datastore internals, so i know
what's relevant to problems like these and what isn't. Model vs.
Expando, for example, generally isn't relevant.

still, following your code much more closely, http://shell.appspot.com/
still doesn't reproduce this. odd.

 class TestModelA(db.Model):

  testval = db.StringProperty() class TestModelB(db.Model):

  testval = db.StringProperty() list(TestModelA.all())
[]
 list(TestModelB.all())
[]
 test1 = TestModelA(testval = Test put 1)
 test1.put()

datastore_types.Key.from_path('TestModelA', 72663L, _app=u'shell')
   test2 = TestModelB(testval = Test put 2)
 test2.put()

datastore_types.Key.from_path('TestModelB', 73462L, _app=u'shell')
   TestModelA().get_or_insert(goi_test, testval=get_or_insert test)

__main__.TestModelA object at 0x4a9988195796e710
   TestModelB().get_or_insert(goi_test2, testval=get_or_insert test 2)

__main__.TestModelB object at 0x386035de932ed718
   list(TestModelA.all())

[__main__.TestModelA object at 0xbad1185da0f83490,
__main__.TestModelA object at 0xbad1185da0f83550]
   list(TestModelB.all())

[__main__.TestModelB object at 0x4a9988195796e3d0,
__main__.TestModelB object at 0x4a9988195796e290]



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-18 Thread bowman.jos...@gmail.com

Not sure, the only things I can think of is that I am doing other puts
to another model before that one? And also, I'm using db.Model not
db.Expando.

On Dec 18, 12:37 am, ryan ryanb+appeng...@google.com wrote:
 sorry for the trouble, and thanks for the detailed info! we definitely
 do want to figure out what's going on here.

 it doesn't seem quite as reproducible as get_or_insert() not working.
 for example, i just ran this in http://shell.appspot.com/:

 >>> class NewKind(db.Expando):
 ...   pass
 >>> list(NewKind.all())
 []
 >>> NewKind.get_or_insert('x')
 >>> list(NewKind.all())
 [<__main__.NewKind object at 0xf42e8100f27606b0>]
 >>> NewKind.all().get().key().name()
 u'x'



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-18 Thread bowman.jos...@gmail.com

I'd suggest trying my example as a demo application. Some differences
are:

1. Your example below does a put in TestModelB before the
get_or_insert. I believe in my example I found that this would work.
2. Everything in my example is happening in one request, whereas in the
shell I believe each put/get_or_insert ends up happening in its own
request. Not sure why that would matter, but I'm not fluent in the
internals, and any testing and research I can do is on the SDK, where
everything works correctly anyhow.

I did do my own test using the appshell, and my results were that it
worked correctly in that instance as well.

 >>> class JoeTest1(db.Model):
 ...     testval = db.StringProperty()
 >>> class JoeTest2(db.Model):
 ...     testval = db.StringProperty()
 >>> list(JoeTest1.all())
 []
 >>> list(JoeTest2.all())
 []
 >>> test1 = JoeTest1(testval = "test1")
 >>> test1.put()
 datastore_types.Key.from_path('JoeTest1', 72664L, _app=u'shell')
 >>> test2 = JoeTest2.get_or_insert("x", testval = "get_or_insert test")
 >>> list(JoeTest1.all())
 [<__main__.JoeTest1 object at 0xdee1f6793dad2940>]
 >>> list(JoeTest2.all())
 [<__main__.JoeTest2 object at 0xb82607baa49b66c0>]
 >>> JoeTest1.all().get().key().name()
 >>> JoeTest2.all().get().key().name()
 u'x'

On Dec 18, 1:30 pm, ryan ryanb+appeng...@google.com wrote:
 true. i was cheating because i know the datastore internals, so i know
 what's relevant to problems like these and what isn't. Model vs.
 Expando, for example, generally isn't relevant.

 still, following your code much more closely, http://shell.appspot.com/
 still doesn't reproduce this. odd.

 >>> class TestModelA(db.Model):
 ...   testval = db.StringProperty()
 >>> class TestModelB(db.Model):
 ...   testval = db.StringProperty()
 >>> list(TestModelA.all())
 []
 >>> list(TestModelB.all())
 []
 >>> test1 = TestModelA(testval = "Test put 1")
 >>> test1.put()
 datastore_types.Key.from_path('TestModelA', 72663L, _app=u'shell')
 >>> test2 = TestModelB(testval = "Test put 2")
 >>> test2.put()
 datastore_types.Key.from_path('TestModelB', 73462L, _app=u'shell')
 >>> TestModelA().get_or_insert("goi_test", testval="get_or_insert test")
 <__main__.TestModelA object at 0x4a9988195796e710>
 >>> TestModelB().get_or_insert("goi_test2", testval="get_or_insert test 2")
 <__main__.TestModelB object at 0x386035de932ed718>
 >>> list(TestModelA.all())
 [<__main__.TestModelA object at 0xbad1185da0f83490>,
  <__main__.TestModelA object at 0xbad1185da0f83550>]
 >>> list(TestModelB.all())
 [<__main__.TestModelB object at 0x4a9988195796e3d0>,
  <__main__.TestModelB object at 0x4a9988195796e290>]



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-18 Thread bowman.jos...@gmail.com

Ah ha, Ryan, try this (unless I made a typo somewhere; for some reason
Chrome and that textbox don't like each other and I was too lazy to
change browsers). Note how I do the testa.put() and then the
JoeTestB.get_or_insert all in the same box, and then when I go to check
JoeTestB I can't. It looks like it really is the puts and
get_or_inserts all happening in the same execution.

 >>> class JoeTestA(db.Model):
 ...   testval = db.StringProperty()
 >>> class JoeTestB(db.Model):
 ...   testval = db.StringProperty()
 >>> testa = JoeTestA(testval = "test A")
 >>> testa.put()
 >>> testb = JoeTestB.get_or_insert("x", testval = "test B")
 >>> JoeTestB.all().get().key().name()
 Traceback (most recent call last):
   File "/base/data/home/apps/shell/1.30/shell.py", line 266, in get
     exec compiled in statement_module.__dict__
   File "<string>", line 1, in <module>
 AttributeError: 'NoneType' object has no attribute 'key'

On Dec 18, 1:30 pm, ryan ryanb+appeng...@google.com wrote:
 true. i was cheating because i know the datastore internals, so i know
 what's relevant to problems like these and what isn't. Model vs.
 Expando, for example, generally isn't relevant.

 still, following your code much more closely, http://shell.appspot.com/
 still doesn't reproduce this. odd.

 >>> class TestModelA(db.Model):
 ...   testval = db.StringProperty()
 >>> class TestModelB(db.Model):
 ...   testval = db.StringProperty()
 >>> list(TestModelA.all())
 []
 >>> list(TestModelB.all())
 []
 >>> test1 = TestModelA(testval = "Test put 1")
 >>> test1.put()
 datastore_types.Key.from_path('TestModelA', 72663L, _app=u'shell')
 >>> test2 = TestModelB(testval = "Test put 2")
 >>> test2.put()
 datastore_types.Key.from_path('TestModelB', 73462L, _app=u'shell')
 >>> TestModelA().get_or_insert("goi_test", testval="get_or_insert test")
 <__main__.TestModelA object at 0x4a9988195796e710>
 >>> TestModelB().get_or_insert("goi_test2", testval="get_or_insert test 2")
 <__main__.TestModelB object at 0x386035de932ed718>
 >>> list(TestModelA.all())
 [<__main__.TestModelA object at 0xbad1185da0f83490>,
  <__main__.TestModelA object at 0xbad1185da0f83550>]
 >>> list(TestModelB.all())
 [<__main__.TestModelB object at 0x4a9988195796e3d0>,
  <__main__.TestModelB object at 0x4a9988195796e290>]



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-18 Thread bowman.jos...@gmail.com

I've been relying on the Kind drop down. The only other difference is
that I'm also working within a Django structure.

I did just do the test using a GQL query, and was able to pull up the
original FavoriteTeamsRanking that way. Though it's still not showing
up in the Kind dropdown.
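
A rough sketch of the equivalent check from code, for anyone following along (the kind and property names are the ones used in this thread, the model import path and fetch limit are assumptions of mine):

    from google.appengine.ext import db
    from fanatasticweb.models import FavoriteTeamsRanking  # import registers the kind with db

    # Query the kind directly; entities show up here even while the Kind
    # drop-down in the Admin console has not picked the new kind up yet.
    rankings = db.GqlQuery("SELECT * FROM FavoriteTeamsRanking").fetch(10)
    for ranking in rankings:
        print ranking.key().name(), ranking.json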

On Dec 18, 2:30 pm, Marzia Niccolai ma...@google.com wrote:
 Hi Joseph,

 I uploaded your example to an application that I own (yo.appspot.com), and
 it worked as expected.  So there must be something else going on here.  Have
 you tried to query for the model using GQL in the Admin console, or are you
 just relying on the kind drop down?  I'm curious to know if this might be
 where the difference is.

 -Marzia

  On Thu, Dec 18, 2008 at 11:26 AM, bowman.jos...@gmail.com wrote:

  I'd suggest trying my example as a demo application. Some differences
  are:

  1. Your example below does a put in TestModelB before the
  get_or_insert. I believe in my example I found that this would work.
  2. Everything in my example is happening in one request, whereas in the
  shell I believe each put/get_or_insert ends up happening in its own
  request. Not sure why that would matter, but I'm not fluent in the
  internals, and any testing and research I can do is on the SDK, where
  everything works correctly anyhow.

  I did do my own test using the appshell, and my results were that it
  worked correctly in that instance as well.

  >>> class JoeTest1(db.Model):
  ...     testval = db.StringProperty()
  >>> class JoeTest2(db.Model):
  ...     testval = db.StringProperty()
  >>> list(JoeTest1.all())
  []
  >>> list(JoeTest2.all())
  []
  >>> test1 = JoeTest1(testval = "test1")
  >>> test1.put()
  datastore_types.Key.from_path('JoeTest1', 72664L, _app=u'shell')
  >>> test2 = JoeTest2.get_or_insert("x", testval = "get_or_insert test")
  >>> list(JoeTest1.all())
  [<__main__.JoeTest1 object at 0xdee1f6793dad2940>]
  >>> list(JoeTest2.all())
  [<__main__.JoeTest2 object at 0xb82607baa49b66c0>]
  >>> JoeTest1.all().get().key().name()
  >>> JoeTest2.all().get().key().name()
  u'x'

  On Dec 18, 1:30 pm, ryan ryanb+appeng...@google.com wrote:
   true. i was cheating because i know the datastore internals, so i know
   what's relevant to problems like these and what isn't. Model vs.
   Expando, for example, generally isn't relevant.

   still, following your code much more closely, http://shell.appspot.com/
   still doesn't reproduce this. odd.

   >>> class TestModelA(db.Model):
   ...   testval = db.StringProperty()
   >>> class TestModelB(db.Model):
   ...   testval = db.StringProperty()
   >>> list(TestModelA.all())
   []
   >>> list(TestModelB.all())
   []
   >>> test1 = TestModelA(testval = "Test put 1")
   >>> test1.put()
   datastore_types.Key.from_path('TestModelA', 72663L, _app=u'shell')
   >>> test2 = TestModelB(testval = "Test put 2")
   >>> test2.put()
   datastore_types.Key.from_path('TestModelB', 73462L, _app=u'shell')
   >>> TestModelA().get_or_insert("goi_test", testval="get_or_insert test")
   <__main__.TestModelA object at 0x4a9988195796e710>
   >>> TestModelB().get_or_insert("goi_test2", testval="get_or_insert test 2")
   <__main__.TestModelB object at 0x386035de932ed718>
   >>> list(TestModelA.all())
   [<__main__.TestModelA object at 0xbad1185da0f83490>,
    <__main__.TestModelA object at 0xbad1185da0f83550>]
   >>> list(TestModelB.all())
   [<__main__.TestModelB object at 0x4a9988195796e3d0>,
    <__main__.TestModelB object at 0x4a9988195796e290>]



[google-appengine] Re: Google Team: Please Make Django 1.0 A Higher Priority

2008-12-18 Thread bowman.jos...@gmail.com

I see nothing wrong with Marzia's response. The move from 0.96 to 1.0
does in fact break things. Having the change made underneath you,
without time to prepare and make migration changes, would break lots of
people's applications.

Check out app-engine-patch. I'm using it, and because of it I'm running
Django 1.0 with zero problems; Django runs well from a zip package with
it. They are doing the responsible thing here, and your accusations
about lack of professionalism are completely outlandish.
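
As a rough illustration of the zip-package approach mentioned above: putting a zipped Django on sys.path lets Python's zipimport machinery load it ahead of the bundled 0.96. A minimal sketch only; the archive name and its location next to the app's main module are assumptions, not details from app-engine-patch itself:

    import os
    import sys

    # Prepend the zipped Django to sys.path so "import django" resolves to the
    # newer copy instead of the 0.96 bundled with the runtime.
    django_zip = os.path.join(os.path.dirname(__file__), 'django.zip')
    if django_zip not in sys.path:
        sys.path.insert(0, django_zip)

    import django
    print django.get_version()  # should report 1.0.x if the zip was picked up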

On Dec 18, 3:06 pm, Michael Angerman zrol...@gmail.com wrote:
 Let me start out by saying that I think the App Engine is
 an incredible product and has amazing potential.

 I think the choice of Python and Django as the initial
 release of the product were brilliant and really got
 the job done.

 I would like to hear from Google as to where they are REALLY
 at with this product -- where they are going -- and how serious
 they are about putting the necessary resources in place to make
 this product a production reality.

 As a person who watches the industry very closely, I don't really
 see the effort in place that could be there to make this happen,
 and NOT porting to Django 1.0 by this time is just ONE example
 of kind of dropping the ball so to speak...

 Again, I am on Google's side and want to see this product be
 incredibly successful -- I am just extremely disappointed with the
 execution so far...

 For months, I have been anxiously awaiting Google to put
 in the hard work to make the switch to Django 1.0 a reality.

 Months ago, I had conversations with Paul McDonald regarding
 this issue, and still nothing...  This is extremely disappointing.

 In my mind, it shows Google's lack of serious commitment
 to the App Engine.  Actions really do speak louder than words.

 Another interesting note that people at Google should seriously
 take a look at is the amount of traffic on this mail list.  It has dropped
 off dramatically from the glory days and initial months of this product
 release.

 I am really baffled as to why Google didn't keep the momentum going,
 you had such hype around this product -- and all you had to do was follow
 through with execution...

 The drop off in participation in this mail list is a direct sign
 and correlation of the momentum slowing down...

 Google, take the bull by the horns and re-ignite the user community
 by delivering to us -- YOUR CUSTOMER -- what we want.

 Django 1.0 is just one example of something the user community has
 been anxiously asking for -- there are many other things...

 Thank you for your continued support,

 Sincerely,
 Michael I Angerman
 Albuquerque, New Mexico

 -

 Below are Marzia Niccolai's comments Dec 12 regarding this issue...

 Marzia -- couple of comments

 - this is an incredibly weak public response

 - this response COMPLETELY shows the lack of professionalism and commitment
 on Google's part to deliver what it takes to make the App Engine a TRUE
 reality that we as customers can count on to run a robust, professional
 business on your platform.

 - brake should be break
 - seems likely -- this is a technical issue that HAS TO be worked out by
 Google

 I would say more here -- but if you read the words of this response closely,
 it's a reflection of many things that could be better...

 Again, in summary -- I am not trying to be overly critical -- I am just
 trying to somehow get some results -- and again Django 1.0 would be it...

 --

 Hi Alex,

 We are definitely interested in offering Django 1.0 with App Engine in the
 future.  However, it seems likely that including Django 1.0 as the default
 Django version with App Engine would need to be part of an api version
 change, since such a change would likely brake existing apps.

 In terms of the high CPU warnings, we are generally working on a solution
 that will lesson the affect of such warnings on applications, so we hope we
 can address this soon not just for this case, but in general.

 As for the time concern, there isn't much right now that can be done.  But
 as your application increases in popularity, it's more likely people will
 see an already warm interpreter and thus not have to wait for a new
 initialization.

 -Marzia



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-17 Thread bowman.jos...@gmail.com

I'm noticing on my app that requests where I have multiple puts only
appear to be doing the initial put, and not the following ones. Not
sure if this is the same issue you are running into, but this is new,
as the functionality did exist before. Though it does seem to
possibly be working for other apps, so I'm not sure what's going on. I
verified this by checking a get_or_insert after another put had been
done, and it's not creating the data and is also returning None.

On Dec 17, 7:27 pm, Garrett Davis garrettdavis...@gmail.com wrote:
 Say, did someone make a change to the 'cloud' version of the
 ModelForms module?

 Several of my applications, which had been running on the 'cloud' for
 months, had a reference to a UserProperty class defined in
    google.appengine.ext.db.djangoforms
 and they started crashing today, with an error message that said,
  <type 'exceptions.AttributeError'>: 'module' object has no attribute
  'UserProperty'

 I changed the references in my applications,
 from the 'modelforms' module to another UserProperty class defined in
    google.appengine.ext.db
 and my applications are working again.

 But I worry about the concept that the GAE infrastructure is subject
 to un-announced changes that might again crash my applications.

 Or did I miss an announcement or something?
 Garrett Davis



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-17 Thread bowman.jos...@gmail.com

Here's the actual code block.

    def save(self, request):
        from django.contrib import auth
        userKey = request.session["_auth_user_id"]
        new_user = db.get(request.session["_auth_user_id"])
        new_user.username = self.cleaned_data['username']
        new_user.is_active = True
        new_user.put()

        self.teams = []
        for field in self.cleaned_data:
            if field is not 'username':
                if self.cleaned_data[field]:
                    myteam = FavoriteTeams(user = new_user, team = field)
                    myteam.put()
                    self.teams.append(field)
        result = FavoriteTeamsRanking.get_or_insert("teams",
            json=simplejson.dumps({'teams': {}}))

-
the new_user.put()
-

is going through

-
result = FavoriteTeamsRanking.get_or_insert("teams",
json=simplejson.dumps({'teams': {}}))
-

result is coming back as a None value

In the 1.1.7 SDK it is working correctly.  The FavoriteTeamsRanking is
not getting populated at all on the live site, and the only thing I
can think of at first is that it's the second instance of a put in the
request. However, I suppose get_or_insert could be broken? I've just run
into this within the past 10 minutes and was hitting the groups for
ideas of things to look at, and saw this new post, which seemed like it
could be similar.

I'm going to try and set up a test to see if I can track down what's
going on a bit better and will post the test and results in a few
minutes.

On Dec 17, 8:33 pm, Marzia Niccolai ma...@google.com wrote:
 Hi,

 Today we made some changes to the production system, but did not anticipate
 this having any effect on applications running with App Engine.  The
 solution for Gary's issue is the one he suggested, but Joseph, I was
 wondering if you could elaborate more on the scenario you are experiencing?

 Also, please know that we are working to reduce the impact that such changes
 will have on applications, and thanks for all your patience while we look in
 to the issues here.

 -Marzia

 On Wed, Dec 17, 2008 at 5:28 PM, bowman.jos...@gmail.com wrote:

  I'm noticing on my app, requests where I have multiple puts only
  appear to be doing the initial put, and not the following ones. Not
  sure if this is the same issue you are running into, but this is new
  as the functionality did exist before. Though, it does seem to
  possibly be working for other apps, so I'm not sure what's going on. I
  verified this by checking a get_or_insert after another put had been
  done, and it's not creating the data and also returning as a None
  value.

  On Dec 17, 7:27 pm, Garrett Davis garrettdavis...@gmail.com wrote:
   Say, did someone make a change to the 'cloud' version of the
   ModelForms module?

   Several of my applications, which had been running on the 'cloud' for
   months, had a reference to a UserProperty class defined in
      google.appengine.ext.db.djangoforms
   and they started crashing today, with an error message that said,
    <type 'exceptions.AttributeError'>: 'module' object has no attribute
    'UserProperty'

   I changed the references in my applications,
   from the 'modelforms' module to another UserProperty class defined in
      google.appengine.ext.db
   and my applications are working again.

   But I worry about the concept that the GAE infrastructure is subject
   to un-announced changes that might again crash my applications.

   Or did I miss an announcement or something?
   Garrett Davis



[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-17 Thread bowman.jos...@gmail.com

ok, I think I found the problem. Looks like get_or_insert is failing
when there are no entities existing for a model.

# Models
class TestModel(db.Model):
    testval = db.StringProperty()

class TestModel2(db.Model):
    testval = db.StringProperty()


# First test

def runtest(request):
    from fanatasticweb.models import TestModel
    from fanatasticweb.models import TestModel2

    test1 = TestModel()
    test1.testval = "Test put 1"
    test1.put()

    test2 = TestModel()
    test2.testval = "Test put 2"
    test2.put()

    TestModel().get_or_insert("goi_test", testval="get_or_insert test")

    TestModel2().get_or_insert("goi_test2", testval="get_or_insert test 2")

    return HttpResponse("Test run, check TestModel in data viewer.")

In the SDK, everything works as you would expect.

On the production site, I'm not getting the TestModel2 data.

I also tried a reload of my test, thinking it might be a case where if
the first model already had entities it would create the second model
entries, but it still did not.


# Second test

def runtest(request):
    from fanatasticweb.models import TestModel
    from fanatasticweb.models import TestModel2

    test1 = TestModel()
    test1.testval = "Test put 1"
    test1.put()

    test2 = TestModel()
    test2.testval = "Test put 2"
    test2.put()

    TestModel().get_or_insert("goi_test", testval="get_or_insert test")

    test4 = TestModel2()
    test4.testval = "TestModel2 with a put first"
    test4.put()

    TestModel2().get_or_insert("goi_test2", testval="get_or_insert test 2")

    return HttpResponse("Test run, check TestModel in data viewer.")

On this test (where I do a put() to get data instantiated in
TestModel2 first), everything works as expected, all data is added.

Hope this helps.
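
One extra experiment that might help narrow this down (a sketch; the helper name is mine, not from the SDK) is to hand-roll the same operation, since get_or_insert is essentially a transactional get_by_key_name followed by a put when the entity is missing:

    from google.appengine.ext import db

    def get_or_insert_explicit(model_class, key_name, **kwds):
        # Same shape as Model.get_or_insert: look the entity up by key_name
        # inside a transaction and create it right there if it does not exist.
        def txn():
            entity = model_class.get_by_key_name(key_name)
            if entity is None:
                entity = model_class(key_name=key_name, **kwds)
                entity.put()
            return entity
        return db.run_in_transaction(txn)

    # e.g. get_or_insert_explicit(TestModel2, "goi_test2",
    #                             testval="get_or_insert test 2")

If the explicit version creates the TestModel2 entity on production while the built-in call does not, that would point at the classmethod rather than the underlying write path.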


On Dec 17, 8:44 pm, bowman.jos...@gmail.com
bowman.jos...@gmail.com wrote:
 Here's the actual code block.

    def save(self, request):
         from django.contrib import auth
          userKey = request.session["_auth_user_id"]
          new_user = db.get(request.session["_auth_user_id"])
         new_user.username = self.cleaned_data['username']
         new_user.is_active = True
         new_user.put()

         self.teams = []
         for field in self.cleaned_data:
             if field is not 'username':
                 if self.cleaned_data[field]:
                     myteam = FavoriteTeams(user = new_user, team =
 field)
                     myteam.put()
                     self.teams.append(field)
          result = FavoriteTeamsRanking.get_or_insert("teams",
  json=simplejson.dumps({'teams': {}}))

 -
 the new_user.put()
 -

 is going through

 -
  result = FavoriteTeamsRanking.get_or_insert("teams",
  json=simplejson.dumps({'teams': {}}))
 -

 result is coming back as a None value

 In the 1.1.7 SDK it is working correctly.  The FavoriteTeamsRanking is
 not getting populated at all on the live site, and the only thing I
 can think of at first is that it's the second instance of a put in the
 request. However, I suppose get_or_insert can be broken? I've just run
 into this within the past 10 minutes and was hitting the groups for
 ideas of things to look at and saw this new post which seemed like it
 could be similar.

 I'm going to try and set up a test to see if I can track down what's
 going on a bit better and will post the test and results in a few
 minutes.

 On Dec 17, 8:33 pm, Marzia Niccolai ma...@google.com wrote:

  Hi,

  Today we made some changes to the production system, but did not anticipate
  this having any effect on applications running with App Engine.  The
  solution for Gary's issue is the one he suggested, but Joseph, I was
  wondering if you could elaborate more on the scenario you are experiencing?

  Also, please know that we are working to reduce the impact that such changes
  will have on applications, and thanks for all your patience while we look in
  to the issues here.

  -Marzia

  On Wed, Dec 17, 2008 at 5:28 PM, bowman.jos...@gmail.com wrote:

   I'm noticing on my app, requests where I have multiple puts only
   appear to be doing the initial put, and not the following ones. Not
   sure if this is the same issue you are running into, but this is new
   as the functionality did exist before. Though, it does seem to
   possibly be working for other apps, so I'm not sure what's going on. I
   verified this by checking a get_or_insert after another put had been
   done, and it's not creating the data and also returning as a None
   value.

   On Dec 17, 7:27 pm, Garrett Davis garrettdavis...@gmail.com wrote:
Say, did someone make a change to the 'cloud' version of the
ModelForms module?

Several of my applications, which had been running on the 'cloud' for
months, had a reference to a UserProperty class defined in
   google.appengine.ext.db.djangoforms
and they started crashing today

[google-appengine] Re: Mysterious change to ModelForms module?

2008-12-17 Thread bowman.jos...@gmail.com

Ok thanks Marzia. It's all good, good luck. And I know it's earlier
over on the west coast (I think that's where you all work) but
remember, once you hit about the 15 hour mark it's better to get some
sleep and attack the problem in the morning. Sorry, it's something my
team always tells each other when we find a problem late in the
afternoon. Seen too many instances of tired brains causing more
problems than anything else.

On Dec 17, 9:22 pm, Marzia Niccolai ma...@google.com wrote:
 Hi Joseph,

 Thanks for the info.  I don't have any answers right now why this could be
 happening, but I'm thinking about it (as are others) and I will let you
 know.

 -Marzia

 On Wed, Dec 17, 2008 at 6:12 PM, bowman.jos...@gmail.com wrote:

  ok, I think I found the problem. Looks like get_or_insert is failing
  when there are no entities existing for a model.

  # Models
  class TestModel(db.Model):
      testval = db.StringProperty()

  class TestModel2(db.Model):
      testval = db.StringProperty()

  # First test

  def runtest(request):
      from fanatasticweb.models import TestModel
      from fanatasticweb.models import TestModel2

      test1 = TestModel()
      test1.testval = "Test put 1"
      test1.put()

      test2 = TestModel()
      test2.testval = "Test put 2"
      test2.put()

      TestModel().get_or_insert("goi_test", testval="get_or_insert test")

      TestModel2().get_or_insert("goi_test2", testval="get_or_insert test 2")

      return HttpResponse("Test run, check TestModel in data viewer.")

  In the SDK, everything works as you would expect.

  On the production site, I'm not getting the TestModel2 data.

  I also tried a reload of my test, thinking it might be a case where if
  the first model already had entities it would create the second model
  entries, but it still did not.

  # Second test

  def runtest(request):
      from fanatasticweb.models import TestModel
      from fanatasticweb.models import TestModel2

      test1 = TestModel()
      test1.testval = "Test put 1"
      test1.put()

      test2 = TestModel()
      test2.testval = "Test put 2"
      test2.put()

      TestModel().get_or_insert("goi_test", testval="get_or_insert test")

      test4 = TestModel2()
      test4.testval = "TestModel2 with a put first"
      test4.put()

      TestModel2().get_or_insert("goi_test2", testval="get_or_insert test 2")

      return HttpResponse("Test run, check TestModel in data viewer.")

  On this test (where I do a put() to get data instantiated in
  TestModel2 first), everything works as expected, all data is added.

  Hope this helps.

  On Dec 17, 8:44 pm, bowman.jos...@gmail.com
  bowman.jos...@gmail.com wrote:
   Here's the actual code block.

      def save(self, request):
           from django.contrib import auth
            userKey = request.session["_auth_user_id"]
            new_user = db.get(request.session["_auth_user_id"])
           new_user.username = self.cleaned_data['username']
           new_user.is_active = True
           new_user.put()

           self.teams = []
           for field in self.cleaned_data:
               if field is not 'username':
                   if self.cleaned_data[field]:
                       myteam = FavoriteTeams(user = new_user, team =
   field)
                       myteam.put()
                       self.teams.append(field)
            result = FavoriteTeamsRanking.get_or_insert("teams",
    json=simplejson.dumps({'teams': {}}))

   -
   the new_user.put()
   -

   is going through

   -
    result = FavoriteTeamsRanking.get_or_insert("teams",
    json=simplejson.dumps({'teams': {}}))
   -

   result is coming back as a None value

   In the 1.1.7 SDK it is working correctly.  The FavoriteTeamsRanking is
   not getting populated at all on the live site, and the only thing I
   can think of at first is that it's the second instance of a put in the
   request. However, I suppose get_or_insert can be broken? I've just run
   into this within the past 10 minutes and was hitting the groups for
   ideas of things to look at and saw this new post which seemed like it
   could be similar.

   I'm going to try and set up a test to see if I can track down what's
   going on a bit better and will post the test and results in a few
   minutes.

   On Dec 17, 8:33 pm, Marzia Niccolai ma...@google.com wrote:

Hi,

 Today we made some changes to the production system, but did not
 anticipate this having any effect on applications running with App
 Engine.  The solution for Gary's issue is the one he suggested, but
 Joseph, I was wondering if you could elaborate more on the scenario
 you are experiencing?

 Also, please know that we are working to reduce the impact that such
 changes will have on applications, and thanks for all your patience
 while we look into the issues here.

-Marzia

On Wed, Dec 17, 2008 at 5:28 PM

[google-appengine] Re: I appear to be corrupting my datastore

2008-12-15 Thread bowman.jos...@gmail.com

This is actually happening on Linux, not sure if that matters or not.

Clearing the datastore is my only solution to getting it back up on
the SDK currently, but once I run the routine again, it's corrupted
again. I did try converting the entire property to a
db.DateTimeProperty and adding milliseconds, and had the same
corruption issues.

On Dec 15, 2:07 pm, Marzia Niccolai ma...@google.com wrote:
 Hi,

 Thanks for filing the issue.

 This is related to storing a floatproperty in the SDK datastore, on Macs the
 datastore file may get occasionally corrupted. Currently you will need to
 clear the datastore on your local machine to fix this issue.

 -Marzia

 On Sat, Dec 13, 2008 at 7:52 AM, bowman.jos...@gmail.com wrote:

  I didn't have much luck with switching to a datetimeproperty using
   milliseconds either. I also tried handling the value changing in the
   check method directly, since it looked like it was possible there.
   Same results, the datastore gets corrupted. The error I get when I try
   to start the datastore after stopping it is:

   <class 'struct.error'>: unpack requires a string argument of length 8

   def checkScoreValue(self, value):
       valid = False
       while valid == False:
           query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
           results = query.fetch(1)
           if len(results) > 0:
               value = value + 0.001
           else:
               valid = True
       return value

  I've filed an issue, #922 -
 http://code.google.com/p/googleappengine/issues/detail?id=922

  On Dec 13, 10:01 am, bowman.jos...@gmail.com
  bowman.jos...@gmail.com wrote:
   I'm trying to make sure a score field I set for articles on my site is
   unique; however, I'm running into an issue where my method is
   appearing to corrupt my datastore. After I input stories, I can't view
   pages, getting a return size too large error, and when I stop and
   start the SDK, it won't restart.

   Here's what I'm doing.

    I set up a new Score Property.

    class ScoreProperty(db.FloatProperty):

        def checkScoreValue(self, value):
            query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
            results = query.fetch(1)
            if len(results) > 0:
                raise db.BadValueError(
                    'Property %s must be unique' % self.name)
            return value

        def __init__(self, verbose_name=None, name=None):
            super(ScoreProperty, self).__init__(
                verbose_name, name, required=False,
                validator=self.checkScoreValue)

    Then when I go to add a story, I use this try statement:

    story_added = False
    while story_added == False:
        try:
            story.put()
            story_added = True
        except db.BadValueError:
            story.score = story.score + 0.001

    What should happen is that when a put is attempted, a check is done to
    see if a story with that score exists. If it does exist, add 0.001 and
    try the put again. After adding several stories in batch mode, I can
    see in the data view that it appears to be working, but strangely: I'll
    see a score of ###.0 and then the next would be something like ###.043,
    so I'm not sure what happened to .001-.042. Also, after running
    the process the datastore appears to be corrupted as well.

    I'm considering swapping to a DateTimeProperty and keying off of
    the milliseconds to see if that is handled better, but I'm confused as
    to why this current method is creating problems.

    One thing that might be an issue is that the development is happening on
    an eeepc, which is a single core processor, and its SSD isn't the fastest
    read/write storage device.



[google-appengine] Re: I appear to be corrupting my datastore

2008-12-15 Thread bowman.jos...@gmail.com

Ok great. I'll have to check on the python version, it's whatever
xandros makes available (I haven't gotten rid of the default OS on my
eeepc because I'm lazy). If worse comes to worse I'll try compiling
the latest python by hand and see if that works. I should have this
done by the weekend. I'll post a comment to the ticket if this
resolves the issue. Thanks for the quick knowledgeable response as
always.

On Dec 15, 3:06 pm, Marzia Niccolai ma...@google.com wrote:
 Hi,

 I just assumed it was Mac because it's an issue we've seen with Macs that
 have 2.5.0 installed, and I'm assuming this is your Python installation?
 Generally, float values in the datastore just don't work with Python 2.5.0.
 Support for buffer objects in struct.unpack was not added until version
 54695 (Apr 2007), this is the underlying cause of the error message.

 -Marzia
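
For anyone who wants to check their Python install directly, the version dependency described above can be reproduced outside the SDK. A minimal sketch (the packed value is arbitrary; the point is handing struct.unpack a buffer object rather than a plain str):

    import struct

    packed = struct.pack('>d', 123.456)   # an 8-byte big-endian double
    buf = buffer(packed)                  # a buffer object, not a plain str

    # On Python 2.5.0 the next line raises:
    #   struct.error: unpack requires a string argument of length 8
    # On 2.5.1 and later (where r54695 landed) it prints (123.456,).
    print struct.unpack('>d', buf)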

  On Mon, Dec 15, 2008 at 11:54 AM, bowman.jos...@gmail.com wrote:

  This is actually happening on Linux, not sure if that matters or not.

  Clearing the datastore is my only solution to getting it back up on
  the SDK currently, but once I run the routine again, it's corrupted
  again. I did try converting the entire property to a
  db.DateTimeProperty and adding milliseconds, and had the same
  corruption issues.

  On Dec 15, 2:07 pm, Marzia Niccolai ma...@google.com wrote:
   Hi,

   Thanks for filing the issue.

   This is related to storing a floatproperty in the SDK datastore, on Macs
  the
   datastore file may get occasionally corrupted. Currently you will need to
   clear the datastore on your local machine to fix this issue.

   -Marzia

   On Sat, Dec 13, 2008 at 7:52 AM, bowman.jos...@gmail.com 

   bowman.jos...@gmail.com wrote:

I didn't have much luck with switching to a datetimeproperty using
 milliseconds either. I also tried handling the value changing in the
 check method directly, since it looked like it was possible there.
 Same results, the datastore gets corrupted. The error I get when I try
 to start the datastore after stopping it is:

 <class 'struct.error'>: unpack requires a string argument of length 8

 def checkScoreValue(self, value):
     valid = False
     while valid == False:
         query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
         results = query.fetch(1)
         if len(results) > 0:
             value = value + 0.001
         else:
             valid = True
     return value

I've filed an issue, #922 -
   http://code.google.com/p/googleappengine/issues/detail?id=922

On Dec 13, 10:01 am, bowman.jos...@gmail.com
bowman.jos...@gmail.com wrote:
  I'm trying to make sure a score field I set for articles on my site is
  unique; however, I'm running into an issue where my method is
  appearing to corrupt my datastore. After I input stories, I can't view
  pages, getting a return size too large error, and when I stop and
  start the SDK, it won't restart.

 Here's what I'm doing.

  I set up a new Score Property.

  class ScoreProperty(db.FloatProperty):

      def checkScoreValue(self, value):
          query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
          results = query.fetch(1)
          if len(results) > 0:
              raise db.BadValueError(
                  'Property %s must be unique' % self.name)
          return value

      def __init__(self, verbose_name=None, name=None):
          super(ScoreProperty, self).__init__(
              verbose_name, name, required=False,
              validator=self.checkScoreValue)

  Then when I go to add a story, I use this try statement:

  story_added = False
  while story_added == False:
      try:
          story.put()
          story_added = True
      except db.BadValueError:
          story.score = story.score + 0.001

  What should happen is that when a put is attempted, a check is done
  to see if a story with that score exists. If it does exist, add 0.001
  and try the put again. After adding several stories in batch mode, I
  can see in the data view that it appears to be working, but strangely:
  I'll see a score of ###.0 and then the next would be something like
  ###.043, so I'm not sure what happened to .001-.042. Also, after
  running the process the datastore appears to be corrupted as well.

  I'm considering swapping to a DateTimeProperty and keying off of
  the milliseconds to see if that is handled better, but I'm confused as
  to why this current method is creating problems.

  One thing that might be an issue is that the development is happening
  on an eeepc, which is a single core processor

[google-appengine] Re: I appear to be corrupting my datastore

2008-12-15 Thread bowman.jos...@gmail.com

compiling 2.5.2 from source appears to have fixed the database
corruption, though I'm still getting the http response too large error
when I try to bring up the stories now that they're scored as floating
points. If it turns out to be an appengine problem, I'll reply to this
thread, otherwise if it is just a problem with my code, I'll let this
topic die. Thanks again.
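
As an aside for anyone else who hits the "response too large" symptom: it often just means the page is rendering an unbounded query, so capping the fetch is a cheap first check. A sketch only; the module path, ordering, and page size are assumptions of mine, not details from this thread:

    from fanatasticweb.models import Story  # import path assumed; Story has the score property

    # Render a bounded page of stories instead of iterating the whole kind.
    stories = Story.all().order('-score').fetch(20)
    for story in stories:
        print story.score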

On Dec 15, 3:33 pm, bowman.jos...@gmail.com
bowman.jos...@gmail.com wrote:
 Ok great. I'll have to check on the python version, it's whatever
 xandros makes available (I haven't gotten rid of the default OS on my
 eeepc because I'm lazy). If worse comes to worse I'll try compiling
 the latest python by hand and see if that works. I should have this
 done by the weekend. I'll post a comment to the ticket if this
 resolves the issue. Thanks for the quick knowledgeable response as
 always.

 On Dec 15, 3:06 pm, Marzia Niccolai ma...@google.com wrote:

  Hi,

  I just assumed it was Mac because it's an issue we've seen with Macs that
  have 2.5.0 installed, and I'm assuming this is your Python installation?
  Generally, float values in the datastore just don't work with Python 2.5.0.
  Support for buffer objects in struct.unpack was not added until version
  54695 (Apr 2007), this is the underlying cause of the error message.

  -Marzia

   On Mon, Dec 15, 2008 at 11:54 AM, bowman.jos...@gmail.com wrote:

   This is actually happening on Linux, not sure if that matters or not.

   Clearing the datastore is my only solution to getting it back up on
   the SDK currently, but once I run the routine again, it's corrupted
   again. I did try converting the entire property to a
   db.DateTimeProperty and adding milliseconds, and had the same
   corruption issues.

   On Dec 15, 2:07 pm, Marzia Niccolai ma...@google.com wrote:
Hi,

Thanks for filing the issue.

This is related to storing a floatproperty in the SDK datastore, on Macs
   the
datastore file may get occasionally corrupted. Currently you will need 
to
clear the datastore on your local machine to fix this issue.

-Marzia

 On Sat, Dec 13, 2008 at 7:52 AM, bowman.jos...@gmail.com wrote:

 I didn't have much luck with switching to a datetimeproperty using
  milliseconds either. I also tried handling the value changing in the
  check method directly, since it looked like it was possible there.
  Same results, the datastore gets corrupted. The error I get when I try
  to start the datastore after stopping it is:

  <class 'struct.error'>: unpack requires a string argument of length 8

  def checkScoreValue(self, value):
      valid = False
      while valid == False:
          query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
          results = query.fetch(1)
          if len(results) > 0:
              value = value + 0.001
          else:
              valid = True
      return value

 I've filed an issue, #922 -
http://code.google.com/p/googleappengine/issues/detail?id=922

 On Dec 13, 10:01 am, bowman.jos...@gmail.com
 bowman.jos...@gmail.com wrote:
  I'm trying to make sure a score field I set for articles on my site is
  unique; however, I'm running into an issue where my method is
  appearing to corrupt my datastore. After I input stories, I can't view
  pages, getting a return size too large error, and when I stop and
  start the SDK, it won't restart.

  Here's what I'm doing.

  I set up a new Score Property.

  class ScoreProperty(db.FloatProperty):

      def checkScoreValue(self, value):
          query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
          results = query.fetch(1)
          if len(results) > 0:
              raise db.BadValueError(
                  'Property %s must be unique' % self.name)
          return value

      def __init__(self, verbose_name=None, name=None):
          super(ScoreProperty, self).__init__(
              verbose_name, name, required=False,
              validator=self.checkScoreValue)

  Then when I go to add a story, I use this try statement:

  story_added = False
  while story_added == False:
      try:
          story.put()
          story_added = True
      except db.BadValueError:
          story.score = story.score + 0.001

  What should happen is that when a put is attempted, a check is done to
  see if a story with that score exists. If it does exist, add 0.001 and
  try the put again. After adding several stories in batch mode, I can
  see in the data view that it appears to be working, but strangely. I'll

[google-appengine] I appear to be corrupting my datastore

2008-12-13 Thread bowman.jos...@gmail.com

I'm trying to make sure a score field I set for articles on my site is
unique; however, I'm running into an issue where my method is
appearing to corrupt my datastore. After I input stories, I can't view
pages, getting a return size too large error, and when I stop and
start the SDK, it won't restart.

Here's what I'm doing.

I set up a new Score Property.

class ScoreProperty(db.FloatProperty):

    def checkScoreValue(self, value):
        query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
        results = query.fetch(1)
        if len(results) > 0:
            raise db.BadValueError(
                'Property %s must be unique' % self.name)
        return value

    def __init__(self, verbose_name=None, name=None):
        super(ScoreProperty, self).__init__(
            verbose_name, name, required=False,
            validator=self.checkScoreValue)


Then when I go to add a story, I use this try statement

story_added = False
while story_added == False:
    try:
        story.put()
        story_added = True
    except db.BadValueError:
        story.score = story.score + 0.001




What should happen is that when a put is attempted, a check is done to
see if a story with that score exists. If it does exist, add 0.001 and
try the put again. After adding several stories in batch mode, I can
see in the data view that it appears to be working, but strangely: I'll
see a score of ###.0 and then the next would be something like ###.043,
so I'm not sure what happened to .001-.042. Also, after running
the process the datastore appears to be corrupted as well.

I'm considering swapping to a DateTimeProperty and keying off of
the milliseconds to see if that is handled better, but I'm confused as
to why this current method is creating problems.

One thing that might be an issue is that the development is happening on
an eeepc, which is a single core processor, and its SSD isn't the fastest
read/write storage device.
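
For what it's worth, even leaving the SDK corruption aside, the validator approach has a race: the GqlQuery check and the subsequent put are not atomic, so two concurrent requests could claim the same score. One possible alternative, sketched below with names I made up for illustration (ScoreSlot, claim_unique_score), is to let the datastore enforce uniqueness by claiming each score as a key_name inside a transaction:

    from google.appengine.ext import db

    class ScoreSlot(db.Model):
        """Marker kind whose key_name encodes a claimed score (hypothetical)."""
        pass

    def claim_unique_score(score, step=0.001, max_tries=1000):
        # Walk forward in `step` increments until a score can be claimed; the
        # transaction makes the check-and-create atomic, unlike a query
        # followed by a separate put.
        for _ in xrange(max_tries):
            key_name = 'score-%.3f' % score
            def txn():
                if ScoreSlot.get_by_key_name(key_name) is None:
                    ScoreSlot(key_name=key_name).put()
                    return True
                return False
            if db.run_in_transaction(txn):
                return score
            score += step
        raise db.BadValueError('could not find a free score near %r' % score)

    # usage: story.score = claim_unique_score(story.score); story.put()

Scaling the score to an integer (thousandths, say) before building the key_name would also sidestep the float-formatting and float-storage surprises discussed later in this thread.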



[google-appengine] Re: I appear to be corrupting my datastore

2008-12-13 Thread bowman.jos...@gmail.com

I didn't have much luck with switching to a datetimeproperty using
milliseconds either. I also tried handling the value changing in the
check method directly, since it looked like it was possible there.
Same results, the datastore gets corrupted. The error I get when I try
to start the datastore after stopping it is:

<class 'struct.error'>: unpack requires a string argument of length 8

def checkScoreValue(self, value):
    valid = False
    while valid == False:
        query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
        results = query.fetch(1)
        if len(results) > 0:
            value = value + 0.001
        else:
            valid = True
    return value



I've filed an issue, #922 - 
http://code.google.com/p/googleappengine/issues/detail?id=922


On Dec 13, 10:01 am, bowman.jos...@gmail.com
bowman.jos...@gmail.com wrote:
 I'm trying to make sure a score field I set for articles on my site is
 unique; however, I'm running into an issue where my method is
 appearing to corrupt my datastore. After I input stories, I can't view
 pages, getting a return size too large error, and when I stop and
 start the SDK, it won't restart.

 Here's what I'm doing.

 I set up a new Score Property.

 class ScoreProperty(db.FloatProperty):

     def checkScoreValue(self, value):
         query = db.GqlQuery('SELECT * FROM Story WHERE score = :1', value)
         results = query.fetch(1)
         if len(results) > 0:
             raise db.BadValueError(
                 'Property %s must be unique' % self.name)
         return value

     def __init__(self, verbose_name=None, name=None):
         super(ScoreProperty, self).__init__(
             verbose_name, name, required=False,
             validator=self.checkScoreValue)

 Then when I go to add a story, I use this try statement:

 story_added = False
 while story_added == False:
     try:
         story.put()
         story_added = True
     except db.BadValueError:
         story.score = story.score + 0.001

 What should happen is that when a put is attempted, a check is done to
 see if a story with that score exists. If it does exist, add 0.001 and
 try the put again. After adding several stories in batch mode, I can
 see in the data view that it appears to be working, but strangely: I'll
 see a score of ###.0 and then the next would be something like ###.043,
 so I'm not sure what happened to .001-.042. Also, after running
 the process the datastore appears to be corrupted as well.

 I'm considering swapping to a DateTimeProperty and keying off of
 the milliseconds to see if that is handled better, but I'm confused as
 to why this current method is creating problems.

 One thing that might be an issue is that the development is happening on
 an eeepc, which is a single core processor, and its SSD isn't the fastest
 read/write storage device.



[google-appengine] gaeutilities 1.1 (developmental) release, introducing cron/scheduled tasks

2008-11-06 Thread bowman.jos...@gmail.com

I'm pleased to announce that gaeutilities 1.1 is available. 1.1 is the
development branch as work continues toward a 2.0 release. These
releases should not be considered stable, and are mainly being put out
there for people who wish to try out the new functionality.

1.1 introduces a new utility, Cron. Cron allows you to use a Unix
cron-style syntax to schedule tasks to run. Tasks are run off of http
requests to your application, so task run times are a best attempt,
not absolute. Higher-traffic sites will have better luck with tasks
running on time.
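
To illustrate that best-attempt behaviour, here is a hypothetical sketch of the general request-driven pattern (this is not the gaeutilities Cron API; see the documentation link below for the real interface, and note that a real utility persists its bookkeeping in the datastore rather than in memory):

    import time

    # Registered tasks; illustrative in-memory bookkeeping only.
    _TASKS = []

    def register_task(interval_seconds, func):
        _TASKS.append({'interval': interval_seconds, 'func': func, 'last_run': 0})

    def run_due_tasks():
        # Called at the start of each request handler: anything overdue runs
        # now, piggybacking on whatever traffic the site happens to get.
        now = time.time()
        for task in _TASKS:
            if now - task['last_run'] >= task['interval']:
                task['last_run'] = now
                task['func']()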

Full documentation can be found here: 
http://code.google.com/p/gaeutilities/wiki/Cron

More information on the gaeutilities project, which includes sessions and
cache utilities for Google App Engine applications, can be found at the
project page: http://gaeutilities.appspot.com/