[google-appengine] Can't update new version and can't get instance information

2013-01-16 Thread Julian
Hello,

We've been trying to deploy a new version of our app to App Engine for most 
of the day. The status page says that things are back to normal, but that 
doesn't seem to be the case for us.

Is the issue that caused the problems earlier today still being looked into?




[google-appengine] Data upload: differences between bulkload_client and bulkloader ?

2009-03-24 Thread Julian

Hi,

The following docs describe how to upload csv data to the datastore:
http://code.google.com/appengine/articles/bulkload.html
http://code.google.com/appengine/docs/python/tools/uploadingdata.html

But one uses the script bulkload_client.py and the other bulkloader.py.
Both files state that their purpose is "Imports CSV data over HTTP."

What is the difference? I cannot figure it out.


Julian




[google-appengine] Re: Data upload: differences between bulkload_client and bulkloader ?

2009-03-25 Thread Julian

Thanks for the clarification.

In the old version, the way to assign key names for uploaded data was
to override HandleEntity() and create a new entity there.
With the new version (bulkloader.py), it seems you should override
GenerateKey() instead.
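For example, something like the following seems to do it with
bulkloader.py (a sketch based on the documented Loader API; the model and
CSV columns here are made up, and the method is spelled generate_key in
the Loader class):

from google.appengine.ext import db
from google.appengine.tools import bulkloader

class Album(db.Model):
    title = db.StringProperty()
    artist = db.StringProperty()

class AlbumLoader(bulkloader.Loader):
    def __init__(self):
        # Column order must match the CSV file.
        bulkloader.Loader.__init__(self, 'Album',
                                   [('title', str),
                                    ('artist', str)])

    def generate_key(self, i, values):
        # Use the first CSV column as the key name instead of an
        # auto-assigned numeric id.
        return values[0]

loaders = [AlbumLoader]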

Maybe you can add it to the documentation.

Julian




On Mar 26, 7:27 am, Marzia Niccolai  wrote:
> Hi,
>
> The article is now obsolete and should be removed, it explains how to use
> the bulk upload tool that was available at launch.
>
> Recently, we released an improved version of the tool, which is what the
> second document explains.
>
> -Marzia
>
> On Tue, Mar 24, 2009 at 4:58 AM, Julian  wrote:
>
> > Hi,
>
> > The following docs describe how to upload csv data to the datastore:
> >http://code.google.com/appengine/articles/bulkload.html
> >http://code.google.com/appengine/docs/python/tools/uploadingdata.html
>
> > But one use the script bulkload_client.py and the other bulkloader.py.
> > In both files it is written that their purpose is to "Imports CSV data
> > over HTTP."
>
> > What is the difference?? I cannot figure out.
>
> > Julian



[google-appengine] Re: Simultaneous Requests

2009-04-02 Thread Julian

You might want to have a look at this App Engine video by Ken
Ashcraft:
http://www.youtube.com/watch?v=dP99fLhGwAU

App Engine scales, but not instantaneously, so if you want to make a
large number of requests in parallel, you need to increase the volume
gradually.
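For instance, a toy client-side ramp-up could look like this (URL, step
sizes and pauses are placeholders, not tested against your app):

import time
import urllib2
from threading import Thread

URL = "http://your-app.appspot.com/"  # placeholder

def hit(n):
    # Fire n requests in parallel and wait for all of them.
    threads = [Thread(target=lambda: urllib2.urlopen(URL).read())
               for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

for concurrency in (5, 10, 20, 40, 80):
    hit(concurrency)
    time.sleep(30)  # give the scheduler time to spin up more instances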

Julian



On Apr 3, 4:26 am, MajorProgamming  wrote:
> I believe I'm using python 2.5. The script I posted before is the
> entire script, so no timeouts.
>
> The server side is simply outputting 'a':
> self.response.out.write('a')
>




[google-appengine] Re: Still no full-text search? Mystified by the priorities.

2009-04-08 Thread Julian

Maybe I don't fully understand the problem, but what would prevent
anyone from coding full-text search themselves?

For example, Jonathan (very nice app by the way), you already have word
segmentation, so why not build your own index table that maps each word
to a list of wordles? If there is write contention, you could shard it
alphabetically, by language, etc.
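A minimal sketch of what I mean, on the datastore (model and function
names are made up, just to illustrate the idea):

from google.appengine.ext import db

class WordIndex(db.Model):
    # key_name is the word itself
    word = db.StringProperty(required=True)
    wordle_keys = db.ListProperty(db.Key)

def index_wordle(wordle_key, words):
    # Register the wordle's key under every word it contains.
    for word in set(words):
        def txn():
            entry = (WordIndex.get_by_key_name(word) or
                     WordIndex(key_name=word, word=word))
            if wordle_key not in entry.wordle_keys:
                entry.wordle_keys.append(wordle_key)
                entry.put()
        db.run_in_transaction(txn)

def search(word):
    entry = WordIndex.get_by_key_name(word)
    return entry.wordle_keys if entry else []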

Julian




On Apr 9, 1:15 am, Jonathan Feinberg  wrote:
> Long ago I attracted a flame-fest when I expressed my opinion that
> adding support for other programming languages should be given less
> priority than fixing bugs and adding infrastructural features. Here we
> are, months later, and the big announcements are
>
> 1) Java (my God, why?)
>
> and
>
> 2) Cron jobs (...but I could already write cron jobs to hit a URL)
>
> In the meantime, full-text search is not even on the roadmap.
>
> I'm torn. As the creator of Wordle, I'm truly grateful to Google and
> the GAE team for the use of an automatically-scaling app
> infrastructure. It has been a pleasure to use. On the other hand, the
> lack of search has been a huge problem for Wordle users, and I've got
> no good options.
>
> I acknowledge that search is my pet issue; I don't claim to represent
> a community or interest group with these comments. Then again, I can't
> think of a CRUD-style app that doesn't require or benefit from text
> search. So, while I'd consider using GAE in the future for some
> stateless utility micro-site, or maybe a static site, I won't use it
> again for anything with user-created data. While I've begun to regret
> having used it for Wordle, I admit that it's my own fault for not
> having thought through the implications of having no full-text search
> available.



[google-appengine] Re: I've reached my quota, only I have this great new name for my app

2009-05-04 Thread julian

Hi Jason,

I'm up to 9 apps now and I have a couple more in the pipeline, so I
was wondering if I could get my allocation increased. Thanks in
advance for any help you can provide with this.

Julian

On May 1, 1:57 pm, "Jason (Google)"  wrote:
> Hi Tim. I'm sorry, I misunderstood. Enabling billing does not allow you to
> create more applications, but this is something we can do one-off, and I've
> just done this for your account.
>
> - Jason
>
> On Wed, Apr 29, 2009 at 11:40 AM, Dag  wrote:
>
> > Jason,
> >  Thanks for the reply.
> > This is probably still undecided within GOOG but it seems you are
> > saying that if I enable billing for my applications, I will then get
> > additional slots for applications.
> > I would like to increase my max apps count to 20. Do I have to enable
> > billing for all the apps or just one?
> > How many new application slots do I get for each application I enable
> > billing for?
>
> > I think anyone who is committed to developing on this platform long-
> > term will appreciate knowing how they can manage / increase their
> > application number.
>
> > Thanks!
> > Tim
> > thedagdae   'circle-at-or-a-sign gmail
> > SanMateoWaveforms.com
>
> > On Apr 28, 2:18 pm, "Jason (Google)"  wrote:
> > > Hi Tim. Now that billing is an option, we are asking users to enable
> > billing
> > > for their applications. We granted selective quota increases in the past
> > > because there was no other way for developers to get additional quota for
> > > their apps. Billing provides this capability.
>
> > > - Jason
>
> > > On Sat, Apr 25, 2009 at 4:11 PM, Dag  wrote:
>
> > > > Hey Jeff,
> > > >  A couple of weeks ago I asked you to up my quota as well and I
> > > > haven't heard anything from you. I used the "Reply to author" link in
> > > > this group. I never used that link before. Did you even get my
> > > > request?
> > > > Could you increase my quota as well? I have sample python websites
> > > > that I want to duplicate in java.
>
> > > > Thanks!
> > > > Tim
> > > > SanMateoWaveforms.com
>
> > > > On Apr 25, 12:54 pm, Bennomatic  wrote:
> > > > > Hi Jeff,
>
> > > > > Can you up my account limit, too?  I've got a bunch of projects in
> > the
> > > > > pipeline and even though I don't have deployments for all of my apps
> > > > > yet, I could foresee needing 15 in the next few weeks.
>
> > > > > Thanks!
>
> > > > > On Apr 24, 11:04 am, "Jeff S (Google)"  wrote:
>
> > > > > > Hi Ted,
>
> > > > > > I've increased the max apps count to 20 for your account. Rather
> > than
> > > > just
> > > > > > creating a new account we'd prefer if people ask for an increase
> > :-)
>
> > > > > > Happy coding,
>
> > > > > > Jeff
>
> > > > > > On Fri, Apr 24, 2009 at 7:40 AM, Ted Gilchrist 
> > > > wrote:
> > > > > > > Ok, I starred it. And added the following plaintive plea:
>
> > > > > > > I could do the tricky thing, and start creating new apps under a
> > > > different email id.
> > > > > > > But I'm a straight up Google fan boy, and I don't want to do you
> > this
> > > > way. Sigh, but
>
> > > > > > > if I must  For me, it's not about deleting the data. It's
> > about
> > > > thinking of
> > > > > > > really cool new names for my apps, and not wanting to go off and
> > > > register the
> > > > > > > domains, etc. Hey, wait a minute. About allowing aliases for
> > apps? Is
> > > > that any
>
> > > > > > > easier? I just wanna use my new cool app name. Is that so wrong?
>
> > > > > > > On Fri, Apr 24, 2009 at 10:18 AM, T.J. Crowder <
> > > > t...@crowdersoftware.com>wrote:
>
> > > > > > >> And you have starred this issue[1], right?
>
> > > > > > >> [1]
> >http://code.google.com/p/googleappengine/issues/detail?id=335
>
> > > > > > >> I find it truly astonishing that there's no way to delete an
> > > > > > >> application.  That's a v0.1 feature, full stop.
> > > > > > >> --
> > > &

[google-appengine] Re: SDK 1.3.6 released!

2010-08-23 Thread Julian Namaro
You can add:

import sys
sys.path.insert(0, "/path/to/google_appengine/lib/fancy_urllib")

... in the script where you're calling remote_api.



On Aug 23, 5:15 am, Ryan Weber  wrote:
> Hitting the fancy_urllib issue with remote_api too. Has anyone found a
> fix/workaround for this (other than reverting back to 1.3.5)?
>




[google-appengine] Re: SSO with built-in OpenId is possible?

2010-08-30 Thread Julian Namaro
I thought SSO for the Marketplace was one of the goals of the built-in
OpenID login, but I haven't tried it yet.
Can somebody confirm it works?


On Aug 30, 9:26 pm, Rodrigo Moraes  wrote:
> On Aug 29, 11:33 pm, gops wrote:
>
> > I think pure openid is possible( just login ). but , openid with oauth
> > ( for docs and contact api etc ) is not possible with sso.
>
> What I am missing is how people achieve login coming from Google Apps
> without passing by the approval screen (user is prompted for
> authorization). I've set a manifest file with a whitelisted domain and
> so on, and after several tries I'm considering if SSO is even possible
> with built-in OpenId auth.
>
> -- rodrigo




[google-appengine] Hi group! My App Engine App is finally launching today.

2010-09-08 Thread Julian Namaro
After a lot of work, and quite a bit of testing, I am happy today to
present my app to you.
It's called memobuild, and it's basically an editor for large online
documents (e.g. documentation, reports, e-books).

You can see more information in the entry of the App Gallery:
http://appgallery.appspot.com/about_app?app_id=agphcHBnYWxsZXJ5chQLEgxBcHBsaWNhdGlvbnMY3aBJDA

It's built on top of webapp, and it uses a bit of everything:
transactions, task queues, blobstore, federated login, ...
Have a look; your feedback is welcome.

Let me take the occasion to congratulate the App Engine team for the
constant improvements of the platform. Thank you, and please, do keep
it up!

Julian Namaro
http://www.memobuild.com




[google-appengine] Re: Hi group! My App Engine App is finally launching today.

2010-09-09 Thread Julian Namaro
Many thanks all for the nice feedback!

The business model will be freemium-like, and what you see is the free
part, so don't be shy about using it.

I will be working on Google Apps integration (via the Marketplace) for
paying customers. It will include the ability to publish the documents
on your domain, and full customization via CSS. Sorry, the pricing is
not decided yet.

There has been some interest in the editor itself too, and leasing a
custom build for your own App Engine account and your own customers
might be possible in the longer term.

Jean-Luc, if you're willing to participate in the testing phase
(essentially communicate with me about problems and suggestions), I
can make a great offer and give you preview access to the above
features. My contact details are in my profile.

If you have other questions, I'd be happy to answer in the memobuild
forums:
http://support.memobuild.com/discussions/

Julian



On Sep 9, 4:40 am, colin  wrote:
> Holy Cow!
> Your app is simply awesome. I use gDocs at present to collaborate with
> others on my projects. Your app offers a far richer environment. Love
> the simple table approach and the professional look and feel of the UI
> - especially the home page! I assume it's commercial. What's the
> pricelist like?
> Colin
>
> On Sep 8, 12:17 pm, Julian Namaro  wrote:
>
> > After a lot of work, and quite a bit of testing, I am happy today to
> > present you my app.
> > It's called memobuild, and it's basically an editor for large online
> > documents (e.g. documentation, reports, e-books).




[google-appengine] Re: Hi group! My App Engine App is finally launching today.

2010-09-11 Thread Julian Namaro
Thanks! Long story short, I've been working on this alone for roughly
18 months so far.
I'm looking for the right people in Tokyo now btw.


On Sep 11, 4:23 am, "Sharp-Developer.Net"
 wrote:
> Hi Julian,
>
> Nice peace of work!
>
> Can I ask how many developers and how long (roughly) worked to make
> this happen?
>
> It's 1st time I was thinking to give online editing a try. Google Docs
> did not impress me that much.
> --
> Alex




[google-appengine] Re: Having trouble deploying new version - Error The request is invalid for an unspecified reason.

2010-09-14 Thread Julian Namaro
Thanks Tim, I had the same problem today and your solution worked.

Maybe the documentation
(http://code.google.com/appengine/docs/python/config/appconfig.html#Custom_Error_Responses)
should make it clear not to put the custom error page in a static folder.


On Aug 18, 12:53 pm, Tim Hoffman  wrote:
> Hi
>
> Looks like the problem was with the error_handlers directive (new in
> 1.3.6)
> I had used
>
> error_handlers:
>   - file: static/default_error.html
>
> Which also overlapped with a static file handler I had configured
>
> - url: /static
>   static_dir: static
>   expiration: "30d"
>
> Moving the static default handler to
>
> error_handlers:
>   - file: default_error.html
>
> Made the problem go away.
>
> Ideally the deployment failure error should at least say "mis-
> configured app.yaml" or something similar.
>
> Regards
>
> Tim
>
> On Aug 18, 11:23 am, Tim Hoffman  wrote:
>
> > Hi
>
> > Just trying to deploy a new version of my code base, and was able to
> > successfully deploy into a test instance I have.
> > When trying to deploy to my production instance I am repeatedly
> > getting the following error.
>
> > Deploying new version.
> > Rolling back the update.
> > Error 400: --- begin server output ---
>
> > Client Error (400)
> > The request is invalid for an unspecified reason.
> > --- end server output ---
>
> > Which unfortunately is not very informative. There is nothing in the
> > admin logs.
>
> > appid is q-tracker.
>
> > Thanks
>
> > Tim
>
>




[google-appengine] Re: CSS available but not executed

2010-09-21 Thread Julian Namaro
Hi,

What do you mean "it is not executed" ? Your CSS has only some rules
for body and tables and it works fine in my browser. Do you mean your
CSS file is not updated when you push a new version of your
application? In that case that's a static files cache issue, you can
search this forum for "cache busting" solutions.
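One common trick is to append the deployed version id to the static URLs
so browsers refetch them after every deploy; a sketch (the helper name is
made up):

import os

def static_url(path):
    # CURRENT_VERSION_ID changes on every deployment, so appending it
    # busts the browser cache for static files.
    version = os.environ.get('CURRENT_VERSION_ID', 'dev')
    return '%s?v=%s' % (path, version)

# In a template: <link rel="stylesheet" href="/stylesheets/main.css?v=...">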


Julian
http://www.memobuild.com


On Sep 21, 3:23 am, Tzach  wrote:
> I have a strange issue with a CSS file: although it is available, it
> is not executed.
> Does anybody encounter this problem?
>
> Here is the CSS:http://sudoku-solver-online.appspot.com/stylesheets/main.css
>
> Here is the HTML file calling ithttp://sudoku-solver-online.appspot.com/
>
> My appengine-web.xml does includes there right path, otherwise it
> would not have works with the local Jetty server (it does)
>
>   
>     
>   
>
> Thanks
> Tzach




[google-appengine] Re: Performing AND Queries w/List Properties

2010-09-24 Thread Julian Namaro
You can perform an AND query on a ListProperty without a custom index.
I think it's called merge-join.
Have you tried InventoryItem.all().filter("keywords =",
keyword1).filter("keywords =", keyword2).filter(...) ?
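For an arbitrary number of keywords you can build it in a loop (a sketch;
the kind and property names come from your index file, the keyword values
are just examples):

query = InventoryItem.all()
for kw in ['red', 'shirt', 'cotton']:  # the user's search keywords
    query.filter('keywords =', kw)
results = query.fetch(20)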


Julian
http://www.memobuild.com


On Sep 24, 3:56 am, jorge  wrote:
> It seems to me the only way to perform an AND query on a list property
> (StringListProperty, for example) to match EVERY item in a list is to
> define multiple indexes and use an equality filter (IN doesn't work in
> this case because it performs an OR operation).  For example, I'd like
> my users to be able to perform keyword searches where I return items
> that match all of their keywords.  This resulted in me having to do
> the following:
>
> - kind: InventoryItem
>   properties:
>   - name: in_stock
>   - name: keywords
>   - name: display_order
>
> - kind: InventoryItem
>   properties:
>   - name: in_stock
>   - name: keywords
>   - name: keywords
>   - name: display_order
>
> - kind: InventoryItem
>   properties:
>   - name: in_stock
>   - name: keywords
>   - name: keywords
>   - name: keywords
>   - name: display_order
>
> ...and so on.  This gets a little ugly, and I'm not sure if this is
> the recommended way to do this, but it's the only way I have been able
> to get to work.  Of course, I don't know up front how many keywords my
> users are going to use.  There is a generous index quota limit and
> from the documentation it seems I can do up to 30 filters?  I doubt
> anyone would use that many keywords in a search, I could probably
> safely stop at 10 indexes.
>
> Before I go down this path, I want to make sure there isn't another
> (better?) option available to me.
>
> TIA







[google-appengine] Re: Latency related follow up

2010-10-04 Thread Julian Namaro

Hi,

> as a GAE developer, is there anything that we can
> do to avoid "Request aborted error" scenarios?

You can run Appstats, and if your app has any request taking more than
~700ms, work on optimizing it. Long requests are the first to time out
during periods of high latency and are generally a source of problems
on App Engine.
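Enabling Appstats on the Python runtime only takes a couple of lines in
appengine_config.py (the standard recipe from the SDK, shown here for
reference):

# appengine_config.py
from google.appengine.ext.appstats import recording

def webapp_add_wsgi_middleware(app):
    # Wrap every WSGI request so Appstats records the RPC timings.
    return recording.appstats_wsgi_middleware(app)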




[google-appengine] Re: appcfg.py download_data [INFO] Authentication Failed

2010-10-04 Thread Julian Namaro
Are you guys using Google Login or Federated/OpenID Login ?
There is a know problem when using remote_api with Federated Login,
for which Nick Johnson described a fix (http://blog.notdot.net/2010/06/
Using-remote-api-with-OpenID-authentication).



On Oct 4, 7:13 pm, Hugo Rodger-Brown  wrote:
> Same here - I've tried creating admins from email addresses within the
> apps domain, and outside, but nothing works. I can upload new apps
> just fine. Using 1.3.7 of the SDK.
>
> On Sep 26, 11:20 pm, Vladimir Prudnikov  wrote:
>
> > The same problem for me. I'm trying to download data. "Authentication
> > failed" each time. I tried 10 times or more. I'm sure too that I enter
> > correct email and password (in another tab I deploy to the same app
> > with the same credentials at the same time).
>
> > Version 1.3.7 (1.3.7.891)
>
> > On Sep 1, 12:46 am, morphium_hidrochloricum 
> > wrote:
>
> > > Nothing changed after I upgraded to 1.3.7.
>
>




[google-appengine] Re: Latency related follow up

2010-10-06 Thread Julian Namaro
Sounds like a cold start problem. I don't have experience on this but
there are some advices in the forum. If your app grows to have
sustained traffic it will improve for sure, but when starting or for
low-traffic sites the problem remains. I see that there's a feature on
the App Engine Roadmap to help address this: "Ability to reserve
instances to reduce application loading overhead".
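The workaround people usually mention is a cron job that hits a trivial
handler every couple of minutes to keep an instance loaded; a minimal
sketch (handler path and schedule are just examples, I haven't measured
how well it works):

# keepwarm.py - pointed at by a cron entry such as "every 2 minutes, url /keepwarm"
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class KeepWarm(webapp.RequestHandler):
    def get(self):
        self.response.out.write('ok')

application = webapp.WSGIApplication([('/keepwarm', KeepWarm)])

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()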


On Oct 5, 12:47 pm, Kangesh Gunaseelan  wrote:
> Thanks for your suggestion.
>
> I will take a look at the appstats one more time. What I am noticing though
> is api_cpu under 500ms (and many much below that) but sometimes overall cpu
> overshoots 700 ms. More over, for request aborted error messages, it doesn't
> really look like the service is even called - overall cpu in those instances
> is over 1 ms but api_cpu is 0 ms.  Am I interpreting this right?
>
> I have seen old posts where users recommended a cron and I tried that out
> today with surprisingly positive results.  Other than this potentially
> causing few instances to stay alive, I can't think of any other
> explanations.  Any one why that is better?  In any case, I wonder if that is
> going to really help if concurrent requests increase and demand more active
> instances.
>
> Thanks.
>




[google-appengine] Re: appcfg.py download_data [INFO] Authentication Failed

2010-10-06 Thread Julian Namaro
Romage,

It's not straightforward, but if you create a config file using
Automatic Configuration, you'll be able to specify a format for the
downloaded data:
http://code.google.com/appengine/docs/python/tools/uploadingdata.html#Configuring_the_Bulk_Loader



On Oct 6, 11:26 pm, romage  wrote:

> I am able to get a sqllite dump from the database, using either
> appcfg.py or bulkloader.py, but I haven't yet been able to download in
> the format that I would like which is annoying and frustrating.
>
> A
>




[google-appengine] Re: Instances, cold start and pending_ms

2010-10-25 Thread Julian Namaro
The first number is what counts for average latency, not cpu_ms, and
the auto-scaling threshold is computed only from user-facing requests,
not tasks or crons.

Your numbers seem quite good, nothing to worry about I think. Are you
experiencing a specific problem?




On Oct 23, 12:22 am, Matija  wrote:
> 3. If my request has these performance data: 188ms 5269cpu_ms
> 5133api_cpu_ms.
>
> Does 188 ms counts for average latency (for new instances) or 5269 ms
> (cpu time) ?




[google-appengine] Re: App Gallery no longer available

2010-11-05 Thread Julian Namaro
An App Gallery is a great idea, but Google's looked like a weekend
project thrown out into the wild and was pretty useless.
There was a lot of spam, unhelpful reviews, and occasionally you could
see desktop software (running only on Windows, not even
cross-platform) with a whole bunch of 5-star ratings in this "App
Engine" gallery.

My guess is that they're deprecating the gallery in favor of the
Chrome Web Store. Hope this will be a more serious attempt :]



On Nov 5, 6:14 am, MLTrim  wrote:
> Do you know why App Gallery  is no longer available?
>
> http://appgallery.appspot.com
>
> thanks
> Michele




[google-appengine] Re: App Gallery no longer available

2010-11-07 Thread Julian Namaro
I think a specific "App Engine" tag in a general Web App Gallery would
make more sense than what we had before.
After all, what you can do with App Engine is pretty much the same
as what you can do with other technology stacks, and much of what
the user experiences is the common client-side part anyway.


On Nov 7, 6:21 am, nickmilon  wrote:

> Also the gallery was a kind of museum where you could see how GAE
> appls have been evolved in time from the early days circa April 2008
> till now, always advancing using new features of the platform or as
> the features were understood by the developers.
> Now if they even do substitute this with a new site I doubt early
> developers will go and resubmit their work.
> So I only hope Google reconsiders this, fixes the problems related to
> spam and classification of appls and restore the site.
>
> On Nov 6, 5:17 am, Julian Namaro  wrote:
>
> > An App Gallery is a great idea but Google's one looked like a weekend
> > project thrown out in the wild and was pretty useless.
> > There was a lot of spam, unhelpful reviews, and occasionally you could
> > see Desktop software (running only on Windows, not even cross-
> > plateform) with a whole bunch of 5-stars ratings in this "App Engine"
> > gallery.
>
> > My guess is that they're deprecating the gallery in favor of the
> > Chrome Web Store. Hope this will be a more serious attempt :]
>
> > On Nov 5, 6:14 am, MLTrim  wrote:
>
> > > Do you know why App Gallery  is no longer available?
>
> > >http://appgallery.appspot.com
>
> > > thanks
> > > Michele
>
>




[google-appengine] Re: Early Christmas Present from Google?

2010-11-08 Thread Julian Namaro
Back in July, and even after a maintenance before that, I noticed
significantly lower latency in the few days right after the maintenance.
It never lasted though, so we should probably wait a few more days
before assessing the results. I wonder what kind of changes in
BigTable could cause a temporary improvement...


On Nov 7, 7:21 am, Bart Thate  wrote:
> On Sat, Nov 6, 2010 at 11:17 PM, Greg  wrote:
> > Check out the datastore stats after today's maintenance...
>
> >http://code.google.com/status/appengine/detail/datastore/2010/11/06#a...
>
> I really really really hope it stays this way ;]
> Latency dropped from 300 msec to 100msec for me
>
> Bart
> --
> @jsonbot Heerhugowaard, Netherlands
> programming schizofrenic -http://jsonbot.appspot.com




[google-appengine] Re: Feature Request: Custom (sub)domain to a "version" subdomain mapping

2011-04-27 Thread Julian Namaro
The issue below is related:
http://code.google.com/p/googleappengine/issues/detail?id=2878

For apps using custom domains, having to test different versions on
appspot.com can cause problems with cookies and OpenID identities.



On Apr 26, 9:21 pm, Vladimir Prudnikov  wrote:
> It would be great to be able to map a custom domain or subdomain to a
> "version"  subdomain (version-name.app-id.appspot.com).
>
> In my situation I have a website running on Python SDK and API running on
> Java SDK. So,www.mydomain.compoints to the default version subdomain
> master.app-id.appspot.com and api.mydomain.com points to the another version
> api.app-id.appspot.com.
>
> The advantages are:
> + I can deploy a master version more often and it will not affect API
> + I can use both Python and Java SDK in one app and using the same database
> + Many subprojects can use the same database
> + I can limit access to the API's codebase for a developers who work on a
> website, so he can deploy a website without having an API code.
> + anything else?




[google-appengine] Re: Unicode Support for Python 2.5

2011-06-30 Thread Julian Namaro
Python 2.5 supports Unicode, and GAE uses it pretty much everywhere
already. I'd say you just have to tell the Python code opening your
CSV file to use UTF-8 encoding, or call decode('utf-8') on the raw data.


On Jun 30, 4:41 pm, prakhil samar  wrote:
> Hi All
>
> Please help me out with the UNICODE issue on Google App Engine
>
> As GAE is using Python 2.5 which has default encoding system as
> "ASCII" and python 3.0 has default encoding system as "UNICODE"
>
> I have created one CSV file which contains some unicode characters
> like  à á â these characters are not in ASCII range, so when i import
> the CSV file i get following error:
>
> " : 'ascii' codec can't decode
> byte 0xe0 in position 0: ordinal not in range(128) "
>
> I have set the default encoding as "UTF 8 " in the Lib folder of
> Python 2.5 and it works all fine at my local server but when i deploy
> the application on the Google App server then it gives me the above
> error.
>
> Is there any way to set the default encoding for my application on
> Google App Engine platform ??
>
> Is there any new version of Google Appengine supporting Python 3.0 ??
>
> Anyone out there please help me out to resolve this issue
>
> Thanks in Advance
> Prakhil :)




[google-appengine] Re: Unicode Support for Python 2.5

2011-07-04 Thread Julian Namaro
Prakhil,
Did you resolve this?
You need to do the conversion to unicode before any split or strip.
Try
csv_file = self.request.get('file1').decode('utf-8')
on your first line, or if it is urlencoded:
csv_file = urllib.unquote(self.request.get('file1')).decode('utf-8')


On Jul 1, 3:45 pm, prakhil samar  wrote:
> Hi Julian,
>
> Thanks for your information.
>
> I am trying to upload a CSV file which contains some special
> characters (Unicode characters which are not in the ASCII range). The
> following is the data which the CSV file contains:
>
> Name: à á â
> Address: 5334 Swenson Avenue
> City: New York
> Country: United States of America
> State: California
> Zip Code: 12345
> Region: North America
>
> I am using the following Code for reading the data from HTML and
> process the data in CSV file:
>
> HTML page: (form markup not preserved in the archive)
>
> fileReader = csv.reader(self.request.get('file1').split("\n"),
>                         skipinitialspace=True, quotechar='"',
>                         quoting=csv.QUOTE_MINIMAL)
> for reader in fileReader:
>     for read in reader:
>         r = read.strip()
>         r = unicode(r, 'utf-8')
>         ...
>
> Now, when I try to upload the file, I am getting the following error:”
>
> “ UnicodeDecodeError: 'ascii' codec can't decode byte 0xe0 in position
> 0: ordinal not in range(128) “
>
> What I understood is, After reading the data from CSV file, I am
> getting a 2 byte string in variable "read" in the above code. and as
> per my understanding we cannot convert 2 bytes string to unicode so it
> is giving the above error
>
> I also tried, decode() and encode() methods. I am getting the same
> error.
>
> Please help me out, to read the data from CSV file and convert that to
> Unicode and upload in the database?
>
> Looking forward to hear from you
>
> Regards
> Prakhil
>
>




[google-appengine] Hanging on last Backend (or Module) memcache writes

2013-08-06 Thread Julian Kent
I have a batch processing Backend (B4) which does a bunch of 
unpickle/pickle and Numpy/array stuff. Recently I noticed that I was 
getting much higher backend charges, which would hit quota almost every 
time, so I migrated to Modules (also B4), thinking that might solve it.
However, I still see the same issue:

2013-08-05 15:23:33.962 /BatchRankings 500 19413478ms 0kb instance=0 
AppEngine-Google; (+http://code.google.com/appengine)

I 2013-08-05 10:00:04.118
mem usage at start of meleerumble: 24.55078125MB

... lots more logs ...

I 2013-08-05 10:01:03.549
mem usage after zipping: 157.08984375MB

I 2013-08-05 *10:01:03.550*
split bots into 18 sections

I 2013-08-05 *15:23:03.086*
wrote 564 bots to memcache

E 2013-08-05 15:23:33.962
Process terminated because the backend took too long to shutdown.

Look at the timestamp between splitting and writing to memcache.

In addition, in my logs just below the actual request handler, I see this:
2013-08-05 *15:23:02.938* /_ah/stop 200 5ms 0kb instance=0 

So, from what I can tell, it looks like the backend hangs inside of the 
memcache writing, and the /_ah/stop wakes it up when I hit my quota.

Here is the relevant code between those two logging points:

client = memcache.Client()
if len(botsdict) > 0:
    splitlist = dict_split(botsdict, 32)
    logging.info("split bots into " + str(len(splitlist)) + " sections")

    for d in splitlist:
        rpcList.append(client.set_multi_async(d))

    logging.info("wrote " + str(len(botsdict)) + " bots to memcache")

I don't see how 18 set_multi_async calls can take 5h23m. Can the logs be 
trusted here? Could it be that the actual code is finished but somehow the 
exit never registered and the logging was the problem? I'm having to 
disable my backend processing because of this, since it just eats as much 
quota as I throw at it.

Any help regarding what on earth is happening here would be much 
appreciated.





Re: [google-appengine] Hanging on last Backend (or Module) memcache writes

2013-08-22 Thread Julian Kent


On Thursday, 8 August 2013 00:23:30 UTC+2, Vinny P wrote:

> What does the variable *botsdict* represent? Based on the naming and 
> function calls, I assume that it's a dictionary, but what type of objects 
> is it containing? Is it possible that the contained objects are doing some 
> kind of processing on their own?
>
 
It is a dictionary of zipped, pickled objects, with the key being the 
memcache key I'm using. dict_split just breaks the dictionary into a 
list of dictionaries, each of which is small enough to use with a single 
memcache.set_multi(), which has a 30MB limit.
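For reference, a possible implementation of that helper (a sketch; my
actual code may differ slightly):

def dict_split(d, n):
    # Break dict d into a list of dicts with at most n items each.
    items = d.items()
    return [dict(items[i:i + n]) for i in range(0, len(items), n)]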

>  
> Install appstats ( 
> https://developers.google.com/appengine/docs/python/tools/appstats ) into 
> your application, you'll be able to see how the memcache calls execute and 
> at what time. From there we can narrow down the list of problems.
>

Will do. Right now it is tough to catch these, though; they only happen ~20% 
of the time. It always seems to happen while I'm asleep :-/ Is there some 
way of getting appstats to save to db so I can see it later?


How much memory in memcache are you using? If you're only using ~200 MB, it 
> might be easier to simply store it within the backend's memory and 
> not memcache (if you're hungry for memory you can use the B4_1G backend 
> size, which gives you 1GB of memory and the same processor size as a 
> regular B4).
>
 
This is just a batch processing job for some ranking mechanisms that 
requires all my scores to be loaded into memory at once; most of my work is 
done in frontends. Once the batch job is done I dump all the changes I've 
made to memcache to overwrite any values the frontends might be using. The 
queue for frontend processing is paused while I'm doing this, so I'm not 
concerned with locking etc.

Thanks for the help, and sorry I was slow to respond. Somehow my email 
updates were turned off!
 



Re: [google-appengine] Hanging on last Backend (or Module) memcache writes

2013-08-24 Thread Julian Kent
OK, I managed to catch one. I saved the appstats page, as well as the logs, 
here:
https://dl.dropboxusercontent.com/u/4066735/LiteRumble_error/LiteRumble_error.zip
If you want to step through code the latest is available here:
https://bitbucket.org/jkflying/literumble/src/3542f1e636fe8675cf6e5b70126d271963386e71/BatchRankings.py?at=default

In the appstats, further down, I'm seeing memcache.Set() entries with times 
such as "real=8831941ms", which is clearly where it is hanging. Personally, 
I don't consider 8000 seconds (that is, over 2 hours) reasonable latency for 
a single memcache Set() call ;-)

Also, this handler only terminated because I did a manual shutdown of the 
instance. In the log I linked you see the /_ah/stop being called just 
before the memcache.set 'woke up' after being hung for 2 hours.

Thanks for the help
Julian

On Saturday, 24 August 2013 05:05:33 UTC+2, Vinny P wrote:
>
> On Fri, Aug 23, 2013 at 1:36 AM, Julian Kent wrote:
>
>> Will do. Right now it is tough to catch these though, they only happen 
>> ~20% of the time. It always seems to happen while I'm asleep :-/ Is there 
>> some way of getting appstats to save to db so I can see it later? 
>>
>  
>  
> 20% occurrence rate is more than enough for appstats to retain the 
> appropriate data. We don't need to see every single failure, just a few 
> good examples of what the memcache calls are doing and how long they take.
>
>  
> -
> -Vinny P
> Technology & Media Advisor
> Chicago, IL
>
> App Engine Code Samples: http://www.learntogoogleit.com
>   
>



Re: [google-appengine] Hanging on last Backend (or Module) memcache writes

2013-08-25 Thread Julian Kent
Thanks Vinny

I've put a delay of 5 seconds in, and reduced the number of items in each 
set_multi_async from 32 to 20. 
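So the dump loop now looks roughly like this (a sketch; dict_split and 
botsdict are my own helper/data from earlier in the thread, and the delay 
value is just what I picked):

import time
from google.appengine.api import memcache

client = memcache.Client()
rpcList = []
if botsdict:
    for d in dict_split(botsdict, 20):       # smaller batches of 20 keys
        rpcList.append(client.set_multi_async(d))
        time.sleep(5)                        # pause between async writes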

However, this is still a new issue, since I didn't have any problems like 
this until 1.8.x, even without any delay and dumping as fast as I could. 
Could you notify the dev guys that there is an edge-case bug here that 
slipped through their QA testing?

Thanks
Julian

>



Re: [google-appengine] Hanging on last Backend (or Module) memcache writes

2013-08-25 Thread Julian Kent
Ah, it seems you aren't affiliated with Google. I'll see about filing a bug 
report or something myself then.

Anyway, thanks for the help =)

Julian

On Sunday, 25 August 2013 13:33:47 UTC+2, Julian Kent wrote:
>
> Thanks Vinny
>
> I've put a delay of 5 seconds in, and reduced the number of items in each 
> set_multi_async from 32 to 20. 
>
> However, this is still a new issue, since I didn't have any problems like 
> this until 1.8.x, even without any delay and dumping as fast as I could. 
> Could you notify the dev guys that there is an edge-case bug here that 
> slipped through their QA testing?
>
> Thanks
> Julian
>
>>



[google-appengine] Re: Hanging on last Backend (or Module) memcache writes

2013-08-27 Thread Julian Kent
Wow. OK, if it was happening to you with only 2 memcache set_multi_async 
calls, I'm generally doing over 200 in the course of 8 minutes or so, each 
with ~32 elements. I've added delays between them now, so it is now 300 
over 15 minutes each with 20 elements.

Vinny, I haven't hit the error again since I changed to batches of 20 per 
set_multi_async and added a 5-second delay between each write. Of course, 
that isn't a viable solution for everybody, and even now I'm thinking that 
I should test at 2 seconds to see what happens. Also, I thought the whole 
point of the async calls was that you could just dump and forget while you 
keep on processing. If you have to wait before you can dump more, it really 
doesn't add much value to them. I'm thinking I might switch back to regular 
set_multi calls (and yes, I was hitting the error with them as well).

On Sunday, 25 August 2013 19:06:45 UTC+2, pdknsk wrote:
>
> PS. I should also add that I moved to a different app id. So this may or 
> may not also have helped fix it.
>



[google-appengine] Re: Hanging on last Backend (or Module) memcache writes

2013-08-30 Thread Julian Kent
2 seconds is confirmed as not hanging over 30+ runs. I'll be away for a 
week, but when I'm back I'm going to try 1 second.



[google-appengine] Re: Problems with my application (from AppEngine)

2013-11-28 Thread Julian Kent
Same problems.

   1. Dashboard works, but takes ~30s to load
   2. Requests are running at usual speeds, not an issue for me
   3. Deploys are timing out
   

On Thursday, 28 November 2013 10:55:58 UTC+2, Alexandru Farcaş wrote:
>
> Hi,
>
> I have problems with my application:
>
>1. The Dashboard is not working - I receive different errors (see in 
>attchm)
>2. My application is running very very slow (some requests take 
>50-60s, and some of them throws exceptions)
>3. App deploys are taking almost 1 hour - yesterday some of them failed
>
>
> (*) HR, Java SDK 1.8.8
>



[google-appengine] Re: Problems with my application (from AppEngine)

2013-11-28 Thread Julian Kent
Aaand it seems to have been fixed.

On Friday, 29 November 2013 09:04:03 UTC+2, Julian Kent wrote:
>
> Same problems.
>
>1. Dashboard works, but takes ~30s to load
>2. Requests are running at usual speeds, not an issue for me
>3. Deploys are timing out
>
>
> On Thursday, 28 November 2013 10:55:58 UTC+2, Alexandru Farcaş wrote:
>>
>> Hi,
>>
>> I have problems with my application:
>>
>>1. The Dashboard is not working - I receive different errors (see in 
>>attchm)
>>2. My application is running very very slow (some requests take 
>>50-60s, and some of them throws exceptions)
>>3. App deploys are taking almost 1 hour - yesterday some of them 
>>failed
>>
>>
>> (*) HR, Java SDK 1.8.8
>>
>



[google-appengine] Sending mail using account secondary address

2014-08-18 Thread Julian Wood
Hi all,

Our situation was that we had a working GAE admin account: 
nore...@company.com. This was great, and we could send email from this 
address, from GAE, without issue.

Then yesterday, we wanted to also use the SMTP server for this account, to 
send from another server. To do that, we needed to add Gmail capability to 
the account, which we did. Google also asked for a Gmail address 
(company.nore...@gmail.com) to use with the account, making the original 
address an alternate address. I then made nore...@company.com the default 
from address and moved on. This worked perfectly for SMTP relay purposes.

Now the problem - our GAE suddenly stopped sending mail. In addition, the 
account listed in permissions changed from nore...@company.com to 
company.nore...@gmail.com. I invited nore...@company.com again, which all 
went fine, except that in the end, Google thinks it is 
company.nore...@gmail.com.

Is there any way to use nore...@company.com to send mail from GAE again?
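For reference, this is the kind of call that used to work for us (a minimal 
sketch; the sender shown is a placeholder for our alternate address, and the 
recipient and body are made up):

from google.appengine.api import mail

# The sender must be an administrator of the app or an authorized
# sender for App Engine to accept the message.
mail.send_mail(sender="noreply@yourcompany.com",   # placeholder
               to="user@example.com",
               subject="Hello",
               body="Sent from App Engine")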

We are currently working on setting up remote SMTP from within our GAE 
instance, but it would be nice if there was a better way.

Thanks,

Julian



[google-appengine] Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-04 Thread Julian Bunn


I have a GAE application with an endpoint that requires authentication, 
which I need to call from an application (rather than from a browser). I 
was using ClientLogin, but that is now obsolete, so I have set up a Service 
Account in the Google Console, and stored its keypair .p12 file so that I 
can use the OAuth methods as described in the documentation.

Although the GoogleCredential builder successfully returns an authorization 
token, if I then use that token in an HTTP get call to the endpoint, the 
response is always the Google Login page.

Why, if I use the token, does GAE not take my application call as 
authorized? Am I doing this all wrong or missing a step? 

Here is the code:

String emailAddress = "x...@developer.gserviceaccount.com";
JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
String emailScope = "https://www.googleapis.com/auth/userinfo.email";
String keyFileName = "Y.p12";
String baseURL = "http://Z.appspot.com";
HttpTransport httpTransport;
try {
    httpTransport = GoogleNetHttpTransport.newTrustedTransport();

    File keyFile = new File(keyFileName);
    if (!keyFile.exists()) {
        System.err.println("Key file " + keyFileName + " missing");
        System.exit(0);
    }

    GoogleCredential credential = new GoogleCredential.Builder()
        .setTransport(httpTransport)
        .setJsonFactory(JSON_FACTORY)
        .setServiceAccountId(emailAddress)
        .setServiceAccountScopes(Collections.singleton(emailScope))
        .setServiceAccountPrivateKeyFromP12File(keyFile)
        .build();

    boolean success = credential.refreshToken();
    System.out.println("Access token refresh " + success);

    String token = credential.getAccessToken();
    System.out.println("Token " + token);

    String uri = "http://Z.appspot.com/gcm/home";
    System.out.println("uri: " + uri);

    HttpGet get = new HttpGet(uri);
    get.setHeader("Cookie", token);

    HttpClient client = new DefaultHttpClient();
    HttpResponse response = client.execute(get);
    response.getEntity().writeTo(System.out);
} catch (Exception e) {
    e.printStackTrace();
}

Typical output:

   Access token refresh true
   Token ya29.xQGG1kxxx
   uri: http://Z.appspot.com/gcm/home

   [HTML of the Google Accounts sign-in page]
   Sign in - Google Accounts
   ...
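
For reference, the conventional way to present an OAuth2 access token is in an
Authorization header rather than a Cookie. A minimal sketch of that request
(in Python purely to illustrate the header - not the client code above, and
the URL is the same placeholder):

import requests

def call_endpoint(token):
    # Sketch only: send the access token as a standard Bearer token.
    resp = requests.get("http://Z.appspot.com/gcm/home",
                        headers={"Authorization": "Bearer " + token})
    print resp.status_code
    print resp.text[:200]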

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/547a76b0-45b3-4c74-995a-b4c98d97160d%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[google-appengine] Re: Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-06 Thread Julian Bunn
Hi Nick,

Many thanks - I had already posted on stackoverflow with no luck, so came 
here :-) I do have one reply now over there, which suggests using client 
secrets, so that is a good lead. Also your comments on the use of service 
account are well taken - it looks like that may be inappropriate.

Thanks for the pointers to the documentation, which I'd already visited and 
read but ended up being confused - as is no doubt evident from my question 
:-)

Julian

On Wednesday, August 5, 2015 at 4:57:26 PM UTC-7, Nick (Cloud Platform 
Support) wrote:
>
> Hi Julian,
>
> You've produced an excellent post which would belong on stackoverflow.com. 
> Google Groups isn't the place to post specific technical issues, as this 
> forum is meant more for general discussion of the platform and services. 
>
> I'll give you the advice before you post there that it seems you've 
> combined examples from different kinds of OAuth flow and this might be the 
> cause of your issues. I see that there's a variable "emailScope" - this is 
> a scope which a user would actually grant to your application, not one 
> which a service account could grant. 
>
> The service account and its credentials are used to call APIs on behalf of 
> your application, although I don't think I've seen this pattern before, 
> where you want to call an endpoint on your own app using a service account. 
> As far as I know, service accounts have only been used to authenticate with 
> Google APIs, although I suppose it might be possible to write an endpoint 
> which correctly authenticates it.
>
> You could do some more reading on OAuth2 
> <https://developers.google.com/identity/protocols/OAuth2>, OpenID Connect 
> <https://developers.google.com/identity/protocols/OpenIDConnect?hl=en>, 
> Service 
> Accounts 
> <https://developers.google.com/identity/protocols/OAuth2ServiceAccount>, 
> and the Google Identity Platform <https://developers.google.com/identity/>, 
> and try to repost your question to stackoverflow.com. That would be the 
> best action as there are many more users there ready to help with a 
> technical question.
>
> If you would like to open a thread in this forum discussing the platform 
> or services in more broad terms, starting a discussion that would be useful 
> for other users to join in to, feel free to do so.
>
> Have a great day!
>
> [1] http://www.stackoverflow.com/
> [2] http://www.serverfault.com/
> [3] http://code.google.com/p/google-appengine/issues/list
>
> On Wednesday, August 5, 2015 at 1:32:41 AM UTC-4, Julian Bunn wrote:
>>
>> I have a GAE application with an endpoint that requires authentication, 
>> which I need to call from an application (rather than from in a browser). I 
>> was using ClientLogin, but that is now obsolete, so I have set up a Service 
>> Account in the Google Console, and stored its keypair .p12 file so that I 
>> can use the OAuth methods as described in the documentation.
>>
>> Although the GoogleCredential builder successfully returns an 
>> authorization token, if I then use that token in an HTTP get call to the 
>> endpoint, the response is always the Google Login page.
>>
>> Why, if I use the token, does GAE not take my application call as 
>> authorized? Am I doing this all wrong or missing a step? 
>>
>> Here is the code:
>>
>> String emailAddress = "...@developer.gserviceaccount.com 
>> ";
>> JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
>> String emailScope = "https://www.googleapis.com/auth/userinfo.email";;
>> String keyFileName = "Y.p12";
>> String baseURL = "http://Z.appspot.com";;
>> HttpTransport httpTransport;
>> try {
>> httpTransport = GoogleNetHttpTransport.newTrustedTransport();
>>
>> File keyFile = new File(keyFileName);
>> if(!keyFile.exists()) {
>> System.err.println("Key file "+keyFileName+" missing");
>> System.exit(0);
>> }
>>
>> GoogleCredential credential = new GoogleCredential.Builder()
>> .setTransport(httpTransport)
>> .setJsonFactory(JSON_FACTORY)
>> .setServiceAccountId(emailAddress)
>> .setServiceAccountScopes(Collections.singleton(emailScope))
>> .setServiceAccountPrivateKeyFromP12File(keyFile)
>> .build();
>>
>> boolean success = credential.refreshToken();
>> System.out.println("Access token refresh "+ success);
>>
>> String token = credential.getA

[google-appengine] Re: Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-08 Thread Julian Bunn
Hi Jason,

Yes: 
http://stackoverflow.com/questions/31816007/authentication-with-google-app-engine-service-using-googlecredential-with-a-serv

The suggestion there involves the Google Drive API, which is not really 
helping me, as my GAE application does not use that API.

Julian

On Saturday, August 8, 2015 at 9:38:00 AM UTC-7, Jason Collins wrote:
>
> Julian, can you post your link to your SO question?
>
>
> On Thursday, 6 August 2015 12:20:28 UTC-7, Julian Bunn wrote:
>>
>> Hi Nick,
>>
>> Many thanks - I had already posted on stackoverflow with no luck, so came 
>> here :-) I do have one reply now over there, which suggests using client 
>> secrets, so that is a good lead. Also your comments on the use of service 
>> account are well taken - it looks like that may be inappropriate.
>>
>> Thanks for the pointers to the documentation, which I'd already visited 
>> and read but ended up being confused - as is no doubt evident from my 
>> question :-)
>>
>> Julian
>>
>> On Wednesday, August 5, 2015 at 4:57:26 PM UTC-7, Nick (Cloud Platform 
>> Support) wrote:
>>>
>>> Hi Julian,
>>>
>>> You've produced an excellent post which would belong on 
>>> stackoverflow.com. Google Groups isn't the place to post specific 
>>> technical issues, as this forum is meant more for general discussion of the 
>>> platform and services. 
>>>
>>> I'll give you the advice before you post there that it seems you've 
>>> combined examples from different kinds of OAuth flow and this might be the 
>>> cause of your issues. I see that there's a variable "emailScope" - this is 
>>> a scope which a user would actually grant to your application, not one 
>>> which a service account could grant. 
>>>
>>> The service account and its credentials are used to call APIs on behalf 
>>> of your application, although I don't think I've seen this pattern before, 
>>> where you want to call an endpoint on your own app using a service account. 
>>> As far as I know, service accounts have only been used to authenticate with 
>>> Google APIs, although I suppose it might be possible to write an endpoint 
>>> which correctly authenticates it.
>>>
>>> You could do some more reading on OAuth2 
>>> <https://developers.google.com/identity/protocols/OAuth2>, OpenID 
>>> Connect 
>>> <https://developers.google.com/identity/protocols/OpenIDConnect?hl=en>, 
>>> Service 
>>> Accounts 
>>> <https://developers.google.com/identity/protocols/OAuth2ServiceAccount>, 
>>> and the Google Identity Platform 
>>> <https://developers.google.com/identity/>, and try to repost your 
>>> question to stackoverflow.com. That would be the best action as there 
>>> are many more users there ready to help with a technical question.
>>>
>>> If you would like to open a thread in this forum discussing the platform 
>>> or services in more broad terms, starting a discussion that would be useful 
>>> for other users to join in to, feel free to do so.
>>>
>>> Have a great day!
>>>
>>> [1] http://www.stackoverflow.com/
>>> [2] http://www.serverfault.com/
>>> [3] http://code.google.com/p/google-appengine/issues/list
>>>
>>> On Wednesday, August 5, 2015 at 1:32:41 AM UTC-4, Julian Bunn wrote:
>>>>
>>>> I have a GAE application with an endpoint that requires authentication, 
>>>> which I need to call from an application (rather than from in a browser). 
>>>> I 
>>>> was using ClientLogin, but that is now obsolete, so I have set up a 
>>>> Service 
>>>> Account in the Google Console, and stored its keypair .p12 file so that I 
>>>> can use the OAuth methods as described in the documentation.
>>>>
>>>> Although the GoogleCredential builder successfully returns an 
>>>> authorization token, if I then use that token in an HTTP get call to the 
>>>> endpoint, the response is always the Google Login page.
>>>>
>>>> Why, if I use the token, does GAE not take my application call as 
>>>> authorized? Am I doing this all wrong or missing a step? 
>>>>
>>>> Here is the code:
>>>>
>>>> String emailAddress = "...@developer.gserviceaccount.com";
>>>> JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
>>>> String emailScope = "https://

Re: [google-appengine] Re: Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-10 Thread Julian Bunn
Hi Nick,

Thanks ...

GAE is doing the authentication. My GAE app has endpoints (i.e. urls like
my.appspot.com/gcm/home) that can only be executed by an admin who is
logged in. There is nothing special I have implemented to support this; I
am just using Google's GAE infrastructure.

So, in the past, all I needed to do from a client application was to call
ClientLogin with a user/pass pair, which would return me a token which
could then be sent as a Cookie in calls to the GAE endpoints.

This worked very well!

Now that ClientLogin has been disabled, I am looking for an alternative to
it. I apparently need to use OAuth2, but there is no documentation that
seems to match my use case, unhappily. Use cases seem to assume the use of
various Google APIs, which I am not using.

Thanks anyway.

Julian



On Mon, Aug 10, 2015 at 4:13 PM, Nick (Cloud Platform Support) <
pay...@google.com> wrote:

> Hi Julian,
>
> The example code given there might be dealing with the Drive API, but APIs
> in this context are quite abstract, and you can easily substitute any
> Google API.
>
> Reading back over your question, I'm not sure you've supplied enough
> information for anybody to help answer. What exactly is doing the
> authenticating? Is your endpoint a Cloud Endpoints
> <https://cloud.google.com/appengine/docs/java/endpoints/> endpoint? It's
> not really clear to me what is doing the authentication at your "endpoint".
> Do you just mean that you've deployed with "login: admin"?
>
> At any rate, this forum, as mentioned, isn't meant for 1-on-1 technical
> support, so I don't think you should continue to follow-up in this thread,
> and should either improve the stackoverflow question to clarify exactly
> what you're expecting to happen in technical language and specifics, or
> else post a new question which does include that information. That will
> enable people to help you better.
>
> Best wishes,
>
> Nick
>
>
>
> On Saturday, August 8, 2015 at 1:51:24 PM UTC-4, Julian Bunn wrote:
>>
>> Hi Jason,
>>
>> Yes:
>> http://stackoverflow.com/questions/31816007/authentication-with-google-app-engine-service-using-googlecredential-with-a-serv
>>
>> The suggestion there involves the Google Drive API, which is not really
>> helping me, as my GAE application does not use that API.
>>
>> Julian
>>
>> On Saturday, August 8, 2015 at 9:38:00 AM UTC-7, Jason Collins wrote:
>>>
>>> Julian, can you post your link to your SO question?
>>>
>>>
>>> On Thursday, 6 August 2015 12:20:28 UTC-7, Julian Bunn wrote:
>>>>
>>>> Hi Nick,
>>>>
>>>> Many thanks - I had already posted on stackoverflow with no luck, so
>>>> came here :-) I do have one reply now over there, which suggests using
>>>> client secrets, so that is a good lead. Also your comments on the use of
>>>> service account are well taken - it looks like that may be inappropriate.
>>>>
>>>> Thanks for the pointers to the documentation, which I'd already visited
>>>> and read but ended up being confused - as is no doubt evident from my
>>>> question :-)
>>>>
>>>> Julian
>>>>
>>>> On Wednesday, August 5, 2015 at 4:57:26 PM UTC-7, Nick (Cloud Platform
>>>> Support) wrote:
>>>>>
>>>>> Hi Julian,
>>>>>
>>>>> You've produced an excellent post which would belong on
>>>>> stackoverflow.com. Google Groups isn't the place to post specific
>>>>> technical issues, as this forum is meant more for general discussion of 
>>>>> the
>>>>> platform and services.
>>>>>
>>>>> I'll give you the advice before you post there that it seems you've
>>>>> combined examples from different kinds of OAuth flow and this might be the
>>>>> cause of your issues. I see that there's a variable "emailScope" - this is
>>>>> a scope which a user would actually grant to your application, not one
>>>>> which a service account could grant.
>>>>>
>>>>> The service account and its credentials are used to call APIs on
>>>>> behalf of your application, although I don't think I've seen this pattern
>>>>> before, where you want to call an endpoint on your own app using a service
>>>>> account. As far as I know, service accounts have only been used to
>>>>> authenticate with Google APIs, although I suppose it might be possible to
>>

Re: [google-appengine] Re: Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-12 Thread Julian Bunn
Hi Nick,

Thanks so much for persisting with your help!

Yes. your understanding is correct: I'd like to use a service account to 
login so that I can make requests to an admin route of my app.

Under "Credentials" in the new Google Developers Console, I see the Service 
Account listed in the OAuth section, with its ID, email address and 
certificate fingerprints.

Under "Permissions" in the Console, I have my own account and a maintenance 
account listed as Owners. On the same page, under "Service Accounts" I have 
three listed, all having Edit permission. One of these is the same account 
listed in "Credentials". (The other two are @cloudservices and 
@developer.gservice accounts - I don't know where they came from, as I 
don't recall creating them).

On the old version of the Developers Console, I can see that the 
Authentication Type is set to Google Accounts API. On there I can also see 
the Service Account Name but it is different from the Service Account 
listed under Credentials (above) - which is confusing me.

The web xml for the deployment includes:

<security-constraint>
    <web-resource-collection>
        <url-pattern>/gcm/home</url-pattern>
        <url-pattern>/gcm/send</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>

These are the two endpoints I need to call from the client.

Thanks again!
Julian



On Wednesday, August 12, 2015 at 1:28:20 PM UTC-7, Nick (Cloud Platform 
Support) wrote:
>
> Hi Julian,
>
> OAuth2 is a complex topic and has many methods of application, being just 
> an authentication/authorization protocol, and having many possible uses / 
> forms of appearance (client-server, server-server, 3-legged, etc.)
>
> From your comments, I can now understand you're using login: admin on a 
> route of your app, and you'd like to know how to make requests to a route 
> on your app protected in such a manner, using a service account to login. 
> Is that accurate?
>
> Could you let me know whether the service account is added as an admin of 
> your application in the Developers Console under "Credentials" and whether 
> your app's authentication method is set to "Google Accounts"?
>
> On Monday, August 10, 2015 at 8:10:35 PM UTC-4, Julian Bunn wrote:
>>
>> Hi Nick,
>>
>> Thanks ... 
>>
>> GAE is doing the authentication. My GAE app has endpoints (i.e. urls like 
>> my.appspot.com/gcm/home) that can only be executed by an admin who is 
>> logged in. There is nothing special I have implemented to support this, I 
>> am just using Google's GAE infrastructure.
>>
>> So, in the past, all I needed to do from a client application was to call 
>> ClientLogin with a user/pass pair, which would return me a token which 
>> could then be sent as a Cookie in calls to the GAE endpoints.
>>
>> This worked very well! 
>>
>> Now that ClientLogin has been disabled, I am looking for an alternative 
>> to it. I apparently need to use OAuth2, but there is no documentation that 
>> seems to match my use case, unhappily. Use cases seem to assume the use of 
>> various Google APIs, which I am not using.
>>
>> Thanks anyway.
>>
>> Julian
>>
>>
>>
>> On Mon, Aug 10, 2015 at 4:13 PM, Nick (Cloud Platform Support) <
>> pay...@google.com > wrote:
>>
>>> Hi Julian,
>>>
>>> The example code given there might be dealing with the Drive API, but 
>>> APIs in this context are quite abstract, and you can easily substitute any 
>>> Google API. 
>>>
>>> Reading back over your question, I'm not sure you've supplied enough 
>>> information for anybody to help answer. What exactly is doing the 
>>> authenticating? Is your endpoint a Cloud Endpoints 
>>> <https://cloud.google.com/appengine/docs/java/endpoints/> endpoint? 
>>> It's not really clear to me what is doing the authentication at your 
>>> "endpoint". Do you just mean that you've deployed with "login: admin"?
>>>
>>> At any rate, this forum, as mentioned, isn't meant for 1-on-1 technical 
>>> support, so I don't think you should continue to follow-up in this thread, 
>>> and should either improve the stackoverflow question to clarify exactly 
>>> what you're expecting to happen in technical language and specifics, or 
>>> else post a new question which does include that information. That will 
>>> enable people to help you better.
>>>
>>> Best wishes,
>>>
>>> Nick
>>>
>>>
>>>
>>> On Saturday, August 8, 2015 at 1:51:24 PM UTC-4, Julian Bunn wrote:
>>>>
>>>> Hi Jason,
>>>>
>>>> Yes: 
>>>> http://stackoverflow.com/questions

Re: [google-appengine] Re: Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-14 Thread Julian Bunn
Hi Nick,

Yes ... here are the relevant details (extracted from my updated question
on stack overflow). First the secure URL specs:

<security-constraint>
    <web-resource-collection>
        <url-pattern>/gcm/home</url-pattern>
        <url-pattern>/gcm/send</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>admin</role-name>
    </auth-constraint>
</security-constraint>

Previously, in the client application, I was using ClientLogin to
authenticate with Google before calling the endpoint. This is the code I
was using; it extracts the "Auth" token and then sends it as a Cookie on
HTTP GETs to the above endpoints.

public static String loginToGoogle(String userid, String password,
        String appUrl) throws Exception {
    HttpClient client = new DefaultHttpClient();
    HttpPost post = new HttpPost(
            "https://www.google.com/accounts/ClientLogin");

    MultipartEntity reqEntity = new MultipartEntity();
    reqEntity.addPart("accountType", new StringBody("HOSTED_OR_GOOGLE",
            "text/plain", Charset.forName("UTF-8")));
    reqEntity.addPart("Email", new StringBody(userid));
    reqEntity.addPart("Passwd", new StringBody(password));
    reqEntity.addPart("service", new StringBody("ah"));
    reqEntity.addPart("source", new StringBody("myappname"));
    post.setEntity(reqEntity);
    HttpResponse response = client.execute(post);
    if (response.getStatusLine().getStatusCode() == 200) {
        InputStream input = response.getEntity().getContent();
        String result = IOUtils.toString(input);
        String authToken = getAuthToken(result);
        post = new HttpPost(appUrl + "/_ah/login?auth=" + authToken);
        response = client.execute(post);
        Header[] cookies = response.getHeaders("SET-COOKIE");
        for (Header cookie : cookies) {
            if (cookie.getValue().startsWith("ACSID=")) {
                return cookie.getValue();
            }
        }
        throw new Exception("ACSID cookie cannot be found");
    } else
        throw new Exception("Error obtaining ACSID");
}

private static String getAuthToken(String responseText) throws Exception {
    LineNumberReader reader = new LineNumberReader(new StringReader(
            responseText));
    String line = reader.readLine();
    while (line != null) {
        line = line.trim();
        if (line.startsWith("Auth=")) {
            return line.substring(5);
        }
        line = reader.readLine();
    }
    throw new Exception("Could not find Auth token");
}

Calling the gcm endpoint:

HttpGet get = new HttpGet(httpURL);
get.setHeader("Cookie", authCookie);

HttpResponse response = client.execute(get);
response.getEntity().writeTo(System.out);

where "authCookie" is the token obtained from loginToGoogle above.

Thanks so much for helping with this!

Julian

On Fri, Aug 14, 2015 at 2:03 PM, Nick (Cloud Platform Support) <
pay...@google.com> wrote:

> A quick question, is it possible you could provide the skeleton code for
> your client project? It appears to be a standalone java program, rather
> than a web app, yes?
>
>
> On Wednesday, August 12, 2015 at 5:00:53 PM UTC-4, Julian Bunn wrote:
>>
>> Hi Nick,
>>
>> Thanks for much for persisting with your help!
>>
>> Yes. your understanding is correct: I'd like to use a service account to
>> login so that I can make requests to an admin route of my app.
>>
>> Under "Credentials" in the new Google Developers Console, I see the
>> Service Account listed in the OAuth section, with its ID, email address and
>> certificate fingerprints.
>>
>> Under "Permissions" in the Console, I have my own account and a
>> maintenance account listed as Owners. On the same page, under "Service
>> Accounts" I have three listed, all having Edit permission. One of these is
>> the same account listed in "Credentials". (The other two are @cloudservices
>> and @developer.gservice accounts - I don't know where they came from, as I
>> don't recall creating them).
>>
>> On the old version of the Developers Console, I can see that the
>> Authentication Type is set to Google Accounts API. On there I can also see
>> the Service Account Name but it is different from the Service Account
>> listed under Credentials (above) - which is confusing me.
>>
>> The web xml for the deployment includes:
>>
>> <security-constraint>
>>     <web-resource-collection>
>>         <url-pattern>/gcm/home</url-pattern>
>>         <url-pattern>/gcm/send</url-pattern>
>>     </web-resource-collection>
>>     <auth-constraint>
>>         <role-name>admin</role-name>
>>     </auth-constraint>
>> </security-constraint>
>>
>> These are the two endpoints I need to call from the client.
>>
>> Thanks again!
>> Julian
>>
>>
>>
>> On Wednesday, August 12, 2015 at 1:28:20 PM UTC-7, Nick (Cloud Platform
>> Support) wrote:
>>>
>>> Hi Juli

Re: [google-appengine] Re: Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-18 Thread Julian Bunn
Hi Nick,

Thanks very much for investigating this so thoroughly and following up. I
will complete a feature request, as you suggest, probably sometime tomorrow.

It's reassuring that you too were unable to make it work - I'd tried so
many different things myself that I was beginning to doubt my competence :-)

Julian

On Tue, Aug 18, 2015 at 4:30 PM, Nick (Cloud Platform Support) <
pay...@google.com> wrote:

> Hi Julian,
>
> So, after some extensive testing and reading around online, it appears
> that this is no longer possible, and I encourage you to make a public
> issue tracker feature request
> <http://code.google.com/p/google-appengine/issues/list> explaining in
> simple terms, minus the code, your desired use-case, of making HTTP calls
> to your app on routes protected by login: admin.
>
> From what I can see from looking around online, the old method which used
> a certain ClientLogin endpoint to get the token passed to the server as the
> ACSID cookie is no longer active. The closest thing to signing-in with a
> service account that I could find was service account Apps domain user
> impersonation in the Server to Server OAuth2
> <https://developers.google.com/identity/protocols/OAuth2ServiceAccount>
> docs. After extensive testing, implementing code which built the
> credential, built an HttpTransport for it, it was not possible to get the
> app to recognize the calling java code as a "login: admin" user, even when
> impersonating a user which has admin status (standalone java code compiled
> with classpath by hand).
>
> So, feel free to post the public issue tracker feature request link in
> this thread once you've made it, and I'll be following it and helping to
> get it processed.
>
> Best wishes,
>
> Nick
>
>
> On Friday, August 14, 2015 at 5:44:08 PM UTC-4, Julian Bunn wrote:
>>
>> Hi Nick,
>>
>> Yes ... here are the relevant details (extracted from my updated question
>> on stack overflow). First the secure URL specs:
>>
>> <security-constraint>
>>     <web-resource-collection>
>>         <url-pattern>/gcm/home</url-pattern>
>>         <url-pattern>/gcm/send</url-pattern>
>>     </web-resource-collection>
>>     <auth-constraint>
>>         <role-name>admin</role-name>
>>     </auth-constraint>
>> </security-constraint>
>>
>> Previously, in the client application, I was using ClientLogin to
>> authenticate with Google before calling the endpoint. This is the code I
>> was using, that extracts the "Auth" token which it then uses as a Cookie on
>> HTTP GET to the above endpoints.
>>
>> public static String loginToGoogle(String userid, String password,
>> String appUrl) throws Exception {
>> HttpClient client = new DefaultHttpClient();
>> HttpPost post = new HttpPost(
>> "https://www.google.com/accounts/ClientLogin";);
>>
>> MultipartEntity reqEntity = new MultipartEntity();
>> reqEntity.addPart("accountType", new StringBody("HOSTED_OR_GOOGLE",
>> "text/plain", Charset.forName("UTF-8")));
>> reqEntity.addPart("Email", new StringBody(userid));
>> reqEntity.addPart("Passwd", new StringBody(password));
>> reqEntity.addPart("service", new StringBody("ah"));
>> reqEntity.addPart("source", new StringBody("myappname"));
>> post.setEntity(reqEntity);
>> HttpResponse response = client.execute(post);
>> if (response.getStatusLine().getStatusCode() == 200) {
>> InputStream input = response.getEntity().getContent();
>> String result = IOUtils.toString(input);
>> String authToken = getAuthToken(result);
>> post = new HttpPost(appUrl + "/_ah/login?auth=" + authToken);
>> response = client.execute(post);
>> Header[] cookies = response.getHeaders("SET-COOKIE");
>> for (Header cookie : cookies) {
>> if (cookie.getValue().startsWith("ACSID=")) {
>> return cookie.getValue();
>> }
>> }
>> throw new Exception("ACSID cookie cannot be found");
>> } else
>> throw new Exception("Error obtaining ACSID");
>> }
>>
>> private static String getAuthToken(String responseText) throws Exception {
>> LineNumberReader reader = new LineNumberReader(new StringReader(
>> responseText));
>> String line = reader.readLine();
>> while (line != null) {
>> line = line.trim();
>> if (line.startsWith("Auth=")) {
>> return line.substring(5);
>> }
>> line = reader.readLine();

Re: [google-appengine] Re: Problem authenticating to GAE app using GoogleCredential OAuth2

2015-08-19 Thread Julian Bunn
Issue 12272:
https://code.google.com/p/googleappengine/issues/detail?id=12272

Let me know if it needs any elaboration.

Thanks again,
Julian

On Tue, Aug 18, 2015 at 4:30 PM, Nick (Cloud Platform Support) <
pay...@google.com> wrote:

> Hi Julian,
>
> So, after some extensive testing and reading around online, it appears
> that this is no longer possible, and I encourage you to make a public
> issue tracker feature request
> <http://code.google.com/p/google-appengine/issues/list> explaining in
> simple terms, minus the code, your desired use-case, of making HTTP calls
> to your app on routes protected by login: admin.
>
> From what I can see from looking around online, the old method which used
> a certain ClientLogin endpoint to get the token passed to the server as the
> ACSID cookie is no longer active. The closest thing to signing-in with a
> service account that I could find was service account Apps domain user
> impersonation in the Server to Server OAuth2
> <https://developers.google.com/identity/protocols/OAuth2ServiceAccount>
> docs. After extensive testing, implementing code which built the
> credential, built an HttpTransport for it, it was not possible to get the
> app to recognize the calling java code as a "login: admin" user, even when
> impersonating a user which has admin status (standalone java code compiled
> with classpath by hand).
>
> So, feel free to post the public issue tracker feature request link in
> this thread once you've made it, and I'll be following it and helping to
> get it processed.
>
> Best wishes,
>
> Nick
>
>
> On Friday, August 14, 2015 at 5:44:08 PM UTC-4, Julian Bunn wrote:
>>
>> Hi Nick,
>>
>> Yes ... here are the relevant details (extracted from my updated question
>> on stack overflow). First the secure URL specs:
>>
>> <security-constraint>
>>     <web-resource-collection>
>>         <url-pattern>/gcm/home</url-pattern>
>>         <url-pattern>/gcm/send</url-pattern>
>>     </web-resource-collection>
>>     <auth-constraint>
>>         <role-name>admin</role-name>
>>     </auth-constraint>
>> </security-constraint>
>>
>> Previously, in the client application, I was using ClientLogin to
>> authenticate with Google before calling the endpoint. This is the code I
>> was using, that extracts the "Auth" token which it then uses as a Cookie on
>> HTTP GET to the above endpoints.
>>
>> public static String loginToGoogle(String userid, String password,
>> String appUrl) throws Exception {
>> HttpClient client = new DefaultHttpClient();
>> HttpPost post = new HttpPost(
>> "https://www.google.com/accounts/ClientLogin";);
>>
>> MultipartEntity reqEntity = new MultipartEntity();
>> reqEntity.addPart("accountType", new StringBody("HOSTED_OR_GOOGLE",
>> "text/plain", Charset.forName("UTF-8")));
>> reqEntity.addPart("Email", new StringBody(userid));
>> reqEntity.addPart("Passwd", new StringBody(password));
>> reqEntity.addPart("service", new StringBody("ah"));
>> reqEntity.addPart("source", new StringBody("myappname"));
>> post.setEntity(reqEntity);
>> HttpResponse response = client.execute(post);
>> if (response.getStatusLine().getStatusCode() == 200) {
>> InputStream input = response.getEntity().getContent();
>> String result = IOUtils.toString(input);
>> String authToken = getAuthToken(result);
>> post = new HttpPost(appUrl + "/_ah/login?auth=" + authToken);
>> response = client.execute(post);
>> Header[] cookies = response.getHeaders("SET-COOKIE");
>> for (Header cookie : cookies) {
>> if (cookie.getValue().startsWith("ACSID=")) {
>> return cookie.getValue();
>> }
>> }
>> throw new Exception("ACSID cookie cannot be found");
>> } else
>> throw new Exception("Error obtaining ACSID");
>> }
>>
>> private static String getAuthToken(String responseText) throws Exception {
>> LineNumberReader reader = new LineNumberReader(new StringReader(
>> responseText));
>> String line = reader.readLine();
>> while (line != null) {
>> line = line.trim();
>> if (line.startsWith("Auth=")) {
>> return line.substring(5);
>> }
>> line = reader.readLine();
>> }
>> throw new Exception("Could not find Auth token");
>> }
>>
>>
>> ​Calling the gcm endpoint:
>>
>> HttpGet get = new

[google-appengine] Some instances running unreleased version of the App Engine SDK?

2015-09-17 Thread Julian Bunn
We noticed on Sep 15 and 16 (a couple of days ago) that some of our GAE 
instances were running version 1.9.27 of the SDK, rather than the released 
version 1.9.26.

(We coincidentally had several serious issues with our application.)

Could someone please advise on how this can happen, and what we can do to 
prevent it happening in the future (i.e. force instances to only run the 
latest production release of the SDK)? Is there an application setting 
somewhere?

I apologize if this is the wrong Group for such questions.

Many thanks!
Julian

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/4ac6bcf5-9a32-40fe-ac5a-0886e4eceeb9%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [google-appengine] Some instances running unreleased version of the App Engine SDK?

2015-09-17 Thread Julian Bunn
Thanks, PK - I will take a look at the issue you filed and comment 
accordingly.

We suffered pretty major disruption to our application, and lost many hours 
of data, unfortunately. The symptoms were extra null fields in some 
Datastore objects, which caused storage of new uploaded data from clients 
to fail, and a greatly increased Error rate from our app. 

Julian



On Thursday, September 17, 2015 at 2:22:08 PM UTC-7, PK wrote:
>
> Hi Julian,
>
> Google routinely does this when they release a new runtime that they 
> believe is backwards compatible. It mostly works OK, but I did 
> experience downtime for several hours on July 24th because a new version 
> was rolled out that had an issue. It was a Friday; I was on an 
> intercontinental flight without internet when the rollout happened, and it 
> was a small disaster. Google did take the issue seriously when I finally 
> brought it to their attention, but it took many hours to get back to stable 
> ground. 
>
> I think Google can do some things to improve this. First, it needs to pass 
> some control of when we upgrade the runtime into developers' hands, so the 
> world does not change suddenly and unexpectedly under our feet. 
> Furthermore, as they roll out new versions they should *automatically* 
> revert applications back to the previous version if the error rate was 
> almost zero before and spikes as soon as instances in the new version come 
> onboard.
>
> I have filed this issue 
> <https://code.google.com/p/googleappengine/issues/detail?id=12355> in the 
> public tracker. I highly encourage others to star it and add your ideas 
> before you find yourselves in my or Julian’s shoes.
>
> Best 
>
>
> On Sep 17, 2015, at 12:34 PM, Julian Bunn > 
> wrote:
>
> We noticed on Sep 15 and 16 (a couple of days ago) that some of our GAE 
> instances were running version 1.9.27 of the SDK, rather than the released 
> version 1.9.26.
>
> (We coincidentally had several serious issues with our application.)
>
> Could someone please advise on how this can happen, and what we can do to 
> prevent it happening in the future (i.e. force instances to only run the 
> latest production release of the SDK)? Is there an application setting 
> somewhere?
>
> I apologize if this is the wrong Group for such questions.
>
> Many thanks!
> Julian
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to google-appengi...@googlegroups.com .
> To post to this group, send email to google-a...@googlegroups.com 
> .
> Visit this group at http://groups.google.com/group/google-appengine.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/google-appengine/4ac6bcf5-9a32-40fe-ac5a-0886e4eceeb9%40googlegroups.com
>  
> <https://groups.google.com/d/msgid/google-appengine/4ac6bcf5-9a32-40fe-ac5a-0886e4eceeb9%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
> For more options, visit https://groups.google.com/d/optout.
>
>
>
> PK
> p...@gae123.com 
>
>
>
>
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at http://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/47f9d861-41de-487d-bc29-355dfd70c253%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[google-appengine] request_logs GAE authentication stopped working

2017-03-24 Thread Julian Bunn
We have a process that uses the request_logs feature to download logs from 
our GAE app for archival. This has stopped working unexpectedly in the last 
couple of days, apparently due to an oauth2 authentication issue. I'm not 
sure if this is a problem at our end, or on the GAE side, and I'm 
struggling to debug it.

My understanding is that when the oauth2 token has expired, it is refreshed 
automatically, but that process appears to fail.

Here is what the logs downloader process log shows:

python2.7 request_logs.py
2017-03-24 08:55:05,739 INFO root: Recovered sentinel with 
timestamp: 2017-03-22 09:39:59-07:00
08:55 AM Host: appengine.google.com
08:55 AM Downloading request logs for app my-gae-service version 4.
2017-03-24 08:55:05,757 INFO root: Request with offset None.
2017-03-24 08:55:05,757 INFO oauth2client.client: access_token is 
expired. Now: 2017-03-24 15:55:05.757671, token_expiry: 2017-03-23 23:54:13
2017-03-24 08:55:05,757 DEBUGgoogle.appengine.tools.appengine_rpc: 
_Authenticate skipped auth; needs_auth=False
2017-03-24 08:55:05,757 DEBUGgoogle.appengine.tools.appengine_rpc: 
Sending request to 
https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
 
headers={'X-appcfg-api-version': '1'} body=
2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: Got 
http error 401.
2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
Attempting to auth. This is try 1 of 3.
2017-03-24 08:55:05,890 INFO oauth2client.client: access_token is 
expired. Now: 2017-03-24 15:55:05.890322, token_expiry: 2017-03-23 23:54:13
2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
_Authenticate configuring auth; needs_auth=True
2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
Sending request to 
https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
 
headers={'X-appcfg-api-version': '1'} body=
2017-03-24 08:55:05,974 INFO oauth2client.client: Refreshing due to a 
401 (attempt 1/2)
2017-03-24 08:55:05,975 INFO oauth2client.client: Refreshing 
access_token
2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: Got 
http error 500.
2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
Retrying. This is attempt 1 of 3.
2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
_Authenticate configuring auth; needs_auth=True
2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
Sending request to 
https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
 
headers={'X-appcfg-api-version': '1'} body=
2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: Got 
http error 500.
2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: 
Retrying. This is attempt 2 of 3.
2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: 
_Authenticate configuring auth; needs_auth=True
2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: 
Sending request to 
https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
 
headers={'X-appcfg-api-version': '1'} body=
2017-03-24 08:55:10,598 DEBUGgoogle.appengine.tools.appengine_rpc: Got 
http error 500.
2017-03-24 08:55:10,598 DEBUGgoogle.appengine.tools.appengine_rpc: 
Retrying. This is attempt 3 of 3.
2017-03-24 08:55:10,598 INFO root: Too many retries for url 
https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
Error 500: --- begin server output ---
 
 
Server Error (500)
 
A server error has occurred.
--- end server output ---
2017-03-24 08:55:10,599 INFO root: Downloaded 0 logs

Does anyone have any idea what may be wrong here? 

Thanks!


-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at https://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/162b53b1-0949-4b9c-8076-2aa982129894%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[google-appengine] Re: request_logs GAE authentication stopped working

2017-03-24 Thread Julian Bunn
Hi Nick!

It's still occurring as of one minute ago ... using release 1.9.50

Julian

On Friday, March 24, 2017 at 11:41:47 AM UTC-7, Nick (Cloud Platform 
Support) wrote:
>
> Hey Julian,
>
> What version of the SDK are you using? And is this still occurring?
>
> Cheers,
>
> Nick
> Cloud Platform Community Support
>
> On Friday, March 24, 2017 at 1:14:26 PM UTC-4, Julian Bunn wrote:
>>
>> We have a process that uses the request_logs feature to download logs 
>> from our GAE app for archival. This has stopped working unexpectedly in the 
>> last couple of days, apparently due to an oauth2 authentication issue. I'm 
>> not sure if this is a problem at our end, or on the GAE side, and I'm 
>> struggling to debug it.
>>
>> My understanding is that when the oauth2 token has expired, it is 
>> refreshed automatically, but that process appears to fail.
>>
>> Here is what the logs downloader process log shows:
>>
>> python2.7 request_logs.py
>> 2017-03-24 08:55:05,739 INFO root: Recovered sentinel with 
>> timestamp: 2017-03-22 09:39:59-07:00
>> 08:55 AM Host: appengine.google.com
>> 08:55 AM Downloading request logs for app my-gae-service version 4.
>> 2017-03-24 08:55:05,757 INFO root: Request with offset None.
>> 2017-03-24 08:55:05,757 INFO oauth2client.client: access_token is 
>> expired. Now: 2017-03-24 15:55:05.757671, token_expiry: 2017-03-23 23:54:13
>> 2017-03-24 08:55:05,757 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> _Authenticate skipped auth; needs_auth=False
>> 2017-03-24 08:55:05,757 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Sending request to 
>> https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
>>  
>> headers={'X-appcfg-api-version': '1'} body=
>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Got http error 401.
>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Attempting to auth. This is try 1 of 3.
>> 2017-03-24 08:55:05,890 INFO oauth2client.client: access_token is 
>> expired. Now: 2017-03-24 15:55:05.890322, token_expiry: 2017-03-23 23:54:13
>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> _Authenticate configuring auth; needs_auth=True
>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Sending request to 
>> https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
>>  
>> headers={'X-appcfg-api-version': '1'} body=
>> 2017-03-24 08:55:05,974 INFO oauth2client.client: Refreshing due to a 
>> 401 (attempt 1/2)
>> 2017-03-24 08:55:05,975 INFO oauth2client.client: Refreshing 
>> access_token
>> 2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Got http error 500.
>> 2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Retrying. This is attempt 1 of 3.
>> 2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> _Authenticate configuring auth; needs_auth=True
>> 2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Sending request to 
>> https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
>>  
>> headers={'X-appcfg-api-version': '1'} body=
>> 2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Got http error 500.
>> 2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Retrying. This is attempt 2 of 3.
>> 2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> _Authenticate configuring auth; needs_auth=True
>> 2017-03-24 08:55:09,644 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Sending request to 
>> https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
>>  
>> headers={'X-appcfg-api-version': '1'} body=
>> 2017-03-24 08:55:10,598 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Got http error 500.
>> 2017-03-24 08:55:10,598 DEBUGgoogle.appengine.tools.appengine_rpc: 
>> Retrying. This is attempt 3 of 3.
>> 2017-03-24 08:55:10,598 INFO root: To

[google-appengine] Re: request_logs GAE authentication stopped working

2017-03-26 Thread Julian Bunn
Hi Staffan,

Thanks for this suggestion - I tried it, but it didn't appear to make any 
difference. Can you tell me where you saw this issue of expired 
certificates talked about?

It looks to me as if there is something wrong with the oauth2 handling of 
the token refresh from request_logs on the Google servers? I'm wondering if 
it's related to the fact we don't use StackDriver logging, and the push 
seems to be towards that?

Julian

On Sunday, March 26, 2017 at 1:24:40 PM UTC-7, Staffan Rolfsson wrote:
>
> Hi, I ran into problems deploying on 1.9.51 (updated from 1.9.48),
> and after some googling I found that it was due to expired certificates.
> Could this be a similar problem?
> It was easy to fix on your own (while we wait for an official fix):
> - in the SDK choose menu >> File >> Open SDK in explorer
> - open the lib/cacerts folder
> - rename urlfetch_cacerts.txt to urlfetch_cacerts.old.txt
> - rename cacerts.txt to urlfetch_cacerts.txt
> - try again
> :)
> /Staffan
>
> On Friday, March 24, 2017 at 7:50:32 PM UTC+1, Julian Bunn wrote:
>>
>> Hi Nick!
>>
>> It's still occurring as of one minute ago ... using release 1.9.50
>>
>> Julian
>>
>> On Friday, March 24, 2017 at 11:41:47 AM UTC-7, Nick (Cloud Platform 
>> Support) wrote:
>>>
>>> Hey Julian,
>>>
>>> What version of the SDK are you using? And is this still occurring?
>>>
>>> Cheers,
>>>
>>> Nick
>>> Cloud Platform Community Support
>>>
>>> On Friday, March 24, 2017 at 1:14:26 PM UTC-4, Julian Bunn wrote:
>>>>
>>>> We have a process that uses the request_logs feature to download logs 
>>>> from our GAE app for archival. This has stopped working unexpectedly in 
>>>> the 
>>>> last couple of days, apparently due to an oauth2 authentication issue. I'm 
>>>> not sure if this is a problem at our end, or on the GAE side, and I'm 
>>>> struggling to debug it.
>>>>
>>>> My understanding is that when the oauth2 token has expired, it is 
>>>> refreshed automatically, but that process appears to fail.
>>>>
>>>> Here is what the logs downloader process log shows:
>>>>
>>>> python2.7 request_logs.py
>>>> 2017-03-24 08:55:05,739 INFO root: Recovered sentinel with 
>>>> timestamp: 2017-03-22 09:39:59-07:00
>>>> 08:55 AM Host: appengine.google.com
>>>> 08:55 AM Downloading request logs for app my-gae-service version 4.
>>>> 2017-03-24 08:55:05,757 INFO root: Request with offset None.
>>>> 2017-03-24 08:55:05,757 INFO oauth2client.client: access_token is 
>>>> expired. Now: 2017-03-24 15:55:05.757671, token_expiry: 2017-03-23 23:54:13
>>>> 2017-03-24 08:55:05,757 DEBUGgoogle.appengine.tools.appengine_rpc: 
>>>> _Authenticate skipped auth; needs_auth=False
>>>> 2017-03-24 08:55:05,757 DEBUGgoogle.appengine.tools.appengine_rpc: 
>>>> Sending request to 
>>>> https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
>>>>  
>>>> headers={'X-appcfg-api-version': '1'} body=
>>>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>>>> Got http error 401.
>>>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>>>> Attempting to auth. This is try 1 of 3.
>>>> 2017-03-24 08:55:05,890 INFO oauth2client.client: access_token is 
>>>> expired. Now: 2017-03-24 15:55:05.890322, token_expiry: 2017-03-23 23:54:13
>>>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>>>> _Authenticate configuring auth; needs_auth=True
>>>> 2017-03-24 08:55:05,890 DEBUGgoogle.appengine.tools.appengine_rpc: 
>>>> Sending request to 
>>>> https://appengine.google.com/api/request_logs?app_id=my-gae-service&include_all=True&include_vhost=False&limit=1000&no_header=1&severity=1&version=4
>>>>  
>>>> headers={'X-appcfg-api-version': '1'} body=
>>>> 2017-03-24 08:55:05,974 INFO oauth2client.client: Refreshing due to 
>>>> a 401 (attempt 1/2)
>>>> 2017-03-24 08:55:05,975 INFO oauth2client.client: Refreshing 
>>>> access_token
>>>> 2017-03-24 08:55:08,761 DEBUGgoogle.appengine.tools.appengine_rpc: 
>>>> Got http error 500.
>>>

[google-appengine] Error when using google.cloud.logging with logging.v1.RequestLog?

2017-03-27 Thread Julian Bunn
Right now we cannot use the request_logs feature for downloading our GAE 
logs (see my other thread on that problem!), so I thought I would try the 
newer google.cloud features, and get modern :-)

Here is the simple test I tried:

from google.cloud import logging
from google.cloud.logging import DESCENDING

def main():
    client = logging.Client()
    for entry in client.list_entries(order_by=DESCENDING):
        print entry

if __name__ == '__main__':
    main()

When executing this (after setting up the gcloud credentials), there seems 
to be a problem parsing the log entries:

python GetLogsCloud.py
Traceback (most recent call last):
  File "GetLogsCloud.py", line 25, in <module>
main()
  File "GetLogsCloud.py", line 18, in main
for entry in client.list_entries(order_by=DESCENDING):
  File "c:\python27\lib\site-packages\google\cloud\iterator.py", line 219, 
in _items_iter
for item in page:
  File "c:\python27\lib\site-packages\google\cloud\iterator.py", line 163, 
in next
result = self._item_to_value(self._parent, item)
  File "c:\python27\lib\site-packages\google\cloud\logging\_gax.py", line 
488, in _item_to_entry
resource = MessageToDict(entry_pb)
  File "c:\python27\lib\site-packages\google\protobuf\json_format.py", line 
133, in MessageToDict
return printer._MessageToJsonObject(message)
  File "c:\python27\lib\site-packages\google\protobuf\json_format.py", line 
164, in _MessageToJsonObject
return self._RegularMessageToJsonObject(message, js)
  File "c:\python27\lib\site-packages\google\protobuf\json_format.py", line 
196, in _RegularMessageToJsonObject
js[name] = self._FieldToJsonObject(field, value)
  File "c:\python27\lib\site-packages\google\protobuf\json_format.py", line 
230, in _FieldToJsonObject
return self._MessageToJsonObject(value)
  File "c:\python27\lib\site-packages\google\protobuf\json_format.py", line 
162, in _MessageToJsonObject
return methodcaller(_WKTJSONMETHODS[full_name][0], message)(self)
  File "c:\python27\lib\site-packages\google\protobuf\json_format.py", line 
266, in _AnyMessageToJsonObject
sub_message = _CreateMessageFromTypeUrl(type_url)
  File "c:\python27\lib\site-packages\google\protobuf\json_format.py", line 
341, in _CreateMessageFromTypeUrl
'Can not find message descriptor by type_url: {0}.'.format(type_url))
TypeError: Can not find message descriptor by type_url: 
type.googleapis.com/google.appengine.logging.v1.RequestLog.

I'm wondering what the issue is here ... is it because of the type of 
logging we use on GAE?

Thanks!

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at https://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/67b5422c-6e56-48e5-84d5-545df3341849%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [google-appengine] Re: Error when using google.cloud.logging with logging.v1.RequestLog?

2017-03-27 Thread Julian Bunn
Hi Nick,

Thanks ... I do try StackOverflow from time to time for these sorts of
technical issues, but have very limited success. Perhaps I'm using bad
tags. See for example a logging related question I posted there on Friday
about JSON credentials for logging
https://stackoverflow.com/questions/43004904/accessing-gae-log-files-using-google-cloud-logging-python

Regarding the v1.RequestLog error that's the topic of this thread: we do
not use StackDriver logging in our GAE deployment, so I wonder whether
cloud.logging only supports StackDriver logs?

Otherwise, I am at a loss as to why google.cloud.logging can't understand
the log format it fetches from our GAE deployment?

We've now been without access to our log data for almost a week, so I'm
getting quite desperate to find a solution either with request_logs or
cloud.logging :-)

Thanks for your help.

Julian




On Mon, Mar 27, 2017 at 11:58 AM, 'Nick (Cloud Platform Support)' via
Google App Engine  wrote:

> Hey Julian,
>
> I've been able just now to run the same code fine. What might you mean by
> "the type of logging we use on GAE"? Are you doing anything particularly
> odd there?
>
> It's worth mentioning here that this forum isn't meant for technical
> support but rather for general discussion of the platform, services, design
> patterns, etc. This question should have been posted to stackoverflow.com
> on a relevant Cloud Platform / App Engine tag. We monitor Stack Overflow
> regularly, so we'll be able to answer there, although we can perhaps help
> debug this a little here before you move to post there.
>
> Cheers,
>
> Nick
> Cloud Platform Community Support
>
>
> On Monday, March 27, 2017 at 12:25:45 PM UTC-4, Julian Bunn wrote:
>>
>> Right now we cannot use the request_logs feature for downloading our GAE
>> logs (see my other thread on that problem!), so I thought I would try the
>> newer google.cloud features, and get modern :-)
>>
>> Here is the simple test I tried:
>>
>> from google.cloud import logging
>> from google.cloud.logging import DESCENDING
>> def main():
>>     client = logging.Client()
>>     for entry in client.list_entries(order_by=DESCENDING):
>>         print entry
>> if __name__ == '__main__':
>>     main()
>>
>> When executing this (after setting up the gcloud credentials), there
>> seems to be a problem parsing the log entries:
>>
>> python GetLogsCloud.py
>> Traceback (most recent call last):
>>   File "GetLogsCloud.py", line 25, in 
>> main()
>>   File "GetLogsCloud.py", line 18, in main
>> for entry in client.list_entries(order_by=DESCENDING):
>>   File "c:\python27\lib\site-packages\google\cloud\iterator.py", line
>> 219, in _items_iter
>> for item in page:
>>   File "c:\python27\lib\site-packages\google\cloud\iterator.py", line
>> 163, in next
>> result = self._item_to_value(self._parent, item)
>>   File "c:\python27\lib\site-packages\google\cloud\logging\_gax.py",
>> line 488, in _item_to_entry
>> resource = MessageToDict(entry_pb)
>>   File "c:\python27\lib\site-packages\google\protobuf\json_format.py",
>> line 133, in MessageToDict
>> return printer._MessageToJsonObject(message)
>>   File "c:\python27\lib\site-packages\google\protobuf\json_format.py",
>> line 164, in _MessageToJsonObject
>> return self._RegularMessageToJsonObject(message, js)
>>   File "c:\python27\lib\site-packages\google\protobuf\json_format.py",
>> line 196, in _RegularMessageToJsonObject
>> js[name] = self._FieldToJsonObject(field, value)
>>   File "c:\python27\lib\site-packages\google\protobuf\json_format.py",
>> line 230, in _FieldToJsonObject
>> return self._MessageToJsonObject(value)
>>   File "c:\python27\lib\site-packages\google\protobuf\json_format.py",
>> line 162, in _MessageToJsonObject
>> return methodcaller(_WKTJSONMETHODS[full_name][0], message)(self)
>>   File "c:\python27\lib\site-packages\google\protobuf\json_format.py",
>> line 266, in _AnyMessageToJsonObject
>> sub_message = _CreateMessageFromTypeUrl(type_url)
>>   File "c:\python27\lib\site-packages\google\protobuf\json_format.py",
>> line 341, in _CreateMessageFromTypeUrl
>> 'Can not find message descriptor by type_url: {0}.'.format(type_url))
>> TypeError: Can not find message descriptor by type_url:
>> type.googleapis.com/google.appengine.logging.v1.RequestLog.
>>
>> I'm wondering what the issue is here ... i

Re: [google-appengine] Re: Error when using google.cloud.logging with logging.v1.RequestLog?

2017-03-28 Thread Julian Bunn
Thanks, Nick!

I confirm that the command "gcloud beta logging read" retrieves our logs!
So this looks very promising ...

Is there a way to call this interface similarly to:

from google.cloud import logging
from google.cloud.logging import DESCENDING

def main():
    client = logging.Client()
    for entry in client.list_entries(order_by=DESCENDING):
        print entry

if __name__ == '__main__':
    main()



On Tue, Mar 28, 2017 at 12:39 PM, 'Nick (Cloud Platform Support)' via
Google App Engine  wrote:

> Hey Julian,
>
> I've observed the identical error when using the filter
> 'resource.type="gae_app" AND resource.labels.module_id="default"', where
> I'd deployed a standard environment app on "default".
>
> It appears this is caused by the fact that the standard environment
> doesn't log to Stackdriver v2 logging, as you observed. The docs show all
> v2 endpoints are what the library interfaces with. Nonetheless, it should
> be able to read V1 type logs as well, or at least we can consider this a
> feature request.
>
> In the meantime, the following command pattern can be used, presenting no
> problem with V1 logs:
>
> gcloud beta logging read ${LOG_FILTER}
>
>
> I've created a Public Issue Tracker issue
> <https://b.corp.google.com/issues/36687054> which you can follow while we
> work on this.
>
> Cheers,
>
> Nick
> Cloud Platform Community Support
>
> On Monday, March 27, 2017 at 3:14:13 PM UTC-4, Julian Bunn wrote:
>>
>> Hi Nick,
>>
>> Thanks ... I do try StackOverflow from time to time for these sorts of
>> technical issues, but have very limited success. Perhaps I'm using bad
>> tags. See for example a logging related question I posted there on Friday
>> about JSON credentials for logging https://stackoverflow.
>> com/questions/43004904/accessing-gae-log-files-using-google-
>> cloud-logging-python
>>
>> Regarding the v1.RequestLog error that's the topic of this thread: we do
>> not use StackDriver logging in our GAE deployment, so I wonder whether
>> cloud.logging only supports StackDriver logs?
>>
>> Otherwise, I am at a loss as to why google.cloud.logging can't understand
>> the log format it fetches from our GAE deployment?
>>
>> We've now been without access to our log data for almost a week, so I'm
>> getting quite desperate to find a solution either with request_logs or
>> cloud.logging :-)
>>
>> Thanks for your help.
>>
>> Julian
>>
>>
>>
>>
>> On Mon, Mar 27, 2017 at 11:58 AM, 'Nick (Cloud Platform Support)' via
>> Google App Engine  wrote:
>>
>>> Hey Julian,
>>>
>>> I've been able just now to run the same code fine. What might you mean
>>> by "the type of logging we use on GAE"? Are you doing anything particularly
>>> odd there?
>>>
>>> It's worth mentioning here that this forum isn't meant for technical
>>> support but rather for general discussion of the platform, services, design
>>> patterns, etc. This question should have been posted to
>>> stackoverflow.com on a relevant Cloud Platform / App Engine tag. We
>>> monitor Stack Overflow regularly, so we'll be able to answer there,
>>> although we can perhaps help debug this a little here before you move to
>>> post there.
>>>
>>> Cheers,
>>>
>>> Nick
>>> Cloud Platform Community Support
>>>
>>>
>>> On Monday, March 27, 2017 at 12:25:45 PM UTC-4, Julian Bunn wrote:
>>>>
>>>> Right now we cannot use the request_logs feature for downloading our
>>>> GAE logs (see my other thread on that problem!), so I thought I would try
>>>> the newer google.cloud features, and get modern :-)
>>>>
>>>> Here is the simple test I tried:
>>>>
>>>> from google.cloud import logging
>>>> from google.cloud.logging import DESCENDING
>>>> def main():
>>>> client = logging.Client()
>>>> for entry in client.list_entries(order_by=DESCENDING):
>>>> print entry
>>>> if __name__ == '__main__':
>>>> main()
>>>>
>>>> When executing this (after setting up the gcloud credentials), there
>>>> seems to be a problem parsing the log entries:
>>>>
>>>> python GetLogsCloud.py
>>>> Traceback (most recent call last):
>>>>   File "GetLogsCloud.py", line 25, in 

Re: [google-appengine] Re: Error when using google.cloud.logging with logging.v1.RequestLog?

2017-03-29 Thread Julian Bunn
Hi Nick,

Thanks - I had just started doing exactly what you suggest! :-)
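
For anyone following along, this is roughly what I'm trying - a minimal,
unverified sketch that just shells out to gcloud and relies on --format=json
for parseable output (the filter string is only an example):

import json
import subprocess

def read_entries(log_filter, limit=100):
    # assumes the Cloud SDK's gcloud command is on the PATH
    out = subprocess.check_output(
        ['gcloud', 'beta', 'logging', 'read', log_filter,
         '--limit', str(limit), '--format', 'json'])
    return json.loads(out)

for entry in read_entries('timestamp>="2017-03-22T00:00:00Z"'):
    print entry.get('timestamp')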

I crave your indulgence on one other issue ... when I run a gcloud beta
logging command with a LOG_FILTER clause on a Unix system, it works fine.
For example:

 gcloud beta logging read 'timestamp<="2017-03-23T00:00:00Z" AND
timestamp>="2017-03-22T00:00:00Z"' --limit=2

However, on Windows, the exact same command doesn't work - I suspect the
single and double quotes are part of the problem:

 gcloud beta logging read 'timestamp<="2017-03-23T00:00:00Z" AND
timestamp>="2017-03-22T00:00:00Z"' --limit=2
The filename, directory name, or volume label syntax is incorrect.

(A simple "gcloud beta logging read" works just fine on Windows.)

I've tried various ways of escaping the quotes, but nothing appears to work.

Should I post this on StackOverflow?




On Wed, Mar 29, 2017 at 12:37 PM, 'Nick (Cloud Platform Support)' via
Google App Engine  wrote:

> Hey Julian,
>
> You have two options here: either to wait for the google.cloud library to
> get support for V1 logs, or adopt a slightly different approach to writing
> the python script. You could use the python subprocess
> <https://docs.python.org/2/library/subprocess.html> module to call the
> gcloud command (keeping in mind that this is part of the beta command
> group, which can change on updates without warning (the only warning being
> the release notes)), sending the output into a string, and then parsing the
> entries that way. This, while it would mean you'd have your logs in a
> python environment (with all the leverage that brings) however would not
> use the google.cloud library.
>
> Cheers,
>
> Nick
> Cloud Platform Community Support
>
> On Tuesday, March 28, 2017 at 5:51:54 PM UTC-4, Julian Bunn wrote:
>>
>> Thanks, Nick!
>>
>> I confirm that the command "gcloud beta logging read" retrieves our logs!
>> So this looks very promising ...
>>
>> Is there a way to call this interface similarly to:
>>
>> from google.cloud import logging
>> from google.cloud.logging import DESCENDING
>> def main():
>> client = logging.Client()
>> for entry in client.list_entries(order_by=DESCENDING):
>> print entry
>> if __name__ == '__main__':
>> main()
>>
>>
>>
>> On Tue, Mar 28, 2017 at 12:39 PM, 'Nick (Cloud Platform Support)' via
>> Google App Engine  wrote:
>>
>>> Hey Julian,
>>>
>>> I've observed the identical error when using the filter
>>> 'resource.type="gae_app" AND resource.labels.module_id="default"',
>>> where I'd deployed a standard environment app on "default".
>>>
>>> It appears this is caused by the fact that the standard environment
>>> doesn't log to Stackdriver v2 logging, as you observed. The docs show all
>>> v2 endpoints are what the library interfaces with. Nonetheless, it should
>>> be able to read V1 type logs as well, or at least we can consider this a
>>> feature request.
>>>
>>> In the meantime, the following command pattern can be used, presenting
>>> no problem with V1 logs:
>>>
>>> gcloud beta logging read ${LOG_FILTER}
>>>
>>>
>>> I've created a Public Issue Tracker issue
>>> <https://b.corp.google.com/issues/36687054> which you can follow while
>>> we work on this.
>>>
>>> Cheers,
>>>
>>> Nick
>>> Cloud Platform Community Support
>>>
>>> On Monday, March 27, 2017 at 3:14:13 PM UTC-4, Julian Bunn wrote:
>>>>
>>>> Hi Nick,
>>>>
>>>> Thanks ... I do try StackOverflow from time to time for these sorts of
>>>> technical issues, but have very limited success. Perhaps I'm using bad
>>>> tags. See for example a logging related question I posted there on Friday
>>>> about JSON credentials for logging https://stackoverflow.
>>>> com/questions/43004904/accessing-gae-log-files-using-google-
>>>> cloud-logging-python
>>>>
>>>> Regarding the v1.RequestLog error that's the topic of this thread: we
>>>> do not use StackDriver logging in our GAE deployment, so I wonder whether
>>>> cloud.logging only supports StackDriver logs?
>>>>
>>>> Otherwise, I am at a loss as to why google.cloud.logging can't
>>>> understand the log format it fetches from our GAE deployment?
>>>>
>>>> We've now been w

[google-appengine] gcloud beta logging read creates huge log files in .config

2017-04-05 Thread Julian Bunn
We are trying to migrate from request_logs to the "gcloud beta logging 
read" system.

When downloading logs using the beta command, we find that simultaneously 
there are large files being created in:

~/.config/gcloud/logs

For example, downloading logs for our GAE app from yesterday, we have the 
following file in ~/.config/gcloud/logs/2017.04.05

 12293554288 Apr  5 10:41 01.00.02.690250.log

The file contains entries like this:

2017-04-05 10:35:02,227 INFO ___FILE_ONLY___   
2017-04-05 10:35:02,228 INFO ___FILE_ONLY___ responseSize
2017-04-05 10:35:02,228 INFO ___FILE_ONLY___ :
2017-04-05 10:35:02,228 INFO ___FILE_ONLY___  '
2017-04-05 10:35:02,228 INFO ___FILE_ONLY___ 76
2017-04-05 10:35:02,228 INFO ___FILE_ONLY___ '
2017-04-05 10:35:02,228 INFO ___FILE_ONLY___ 


How can we prevent the creation of this file when using the beta logging 
command, or at least reduce its size?

Thanks!



 


-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at https://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/bf1a5843-c358-42ec-9f3d-b238a106368b%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [google-appengine] Re: gcloud beta logging read creates huge log files in .config

2017-04-06 Thread Julian Bunn
Hello,

The size of the files in the ~/.config subdir appears to scale with the
number of log entries downloaded by the gcloud tool.

We are downloading a day's worth of logs at a time, for a busy application
which produces around 750k log entries per day.

(Downloading this many log entries takes at least 10 hours with the gcloud
tool, which is another problem entirely!)

Right now I am simply deleting the log files created in the .config subdir,
after our download job finishes, but this is a rather crude fix!

Thanks,
Julian

On Thu, Apr 6, 2017 at 1:58 PM, 'George (Cloud Platform Support)' via
Google App Engine  wrote:

> Hello Julian,
>
> We could not reproduce the issue locally. The files in our linux machine’s
> ~/.config/gcloud/logs directory are reasonably small.
>
> You may attempt to reduce the size of your file by applying filtering more
> aggressively. Details on filter setup are to be found on the “Command Line
> Interface” documentation page
> <https://cloud.google.com/logging/docs/reference/tools/gcloud-logging>.
>
> Hoping this is of help to you, I remain at your disposal for related
> questions.
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "Google App Engine" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/
> topic/google-appengine/8jY242lvAHk/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> google-appengine+unsubscr...@googlegroups.com.
> To post to this group, send email to google-appengine@googlegroups.com.
> Visit this group at https://groups.google.com/group/google-appengine.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/google-appengine/ae255ffc-6ae9-47d0-afed-
> 428c65911c98%40googlegroups.com
> <https://groups.google.com/d/msgid/google-appengine/ae255ffc-6ae9-47d0-afed-428c65911c98%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at https://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAAxkZQtnhcvTU8c8-_6%3DZ1VD-6Mp%2BKKndF3vQ09Vqma7s%3Dn5ug%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [google-appengine] Re: gcloud beta logging read creates huge log files in .config

2017-04-07 Thread Julian Bunn
Thanks for the info. We have no use for the gcloud beta logging tool's log
files, and would like to disable them.

Right now we are simply deleting them after the command terminates, but we
would like to avoid such a heavyweight approach, not to mention the
considerable I/O load they put on our storage.

We are downloading the logs from our GAE application to our own archive
server: we do not use Google's storage buckets for that.
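
(One more thing on my list to try - I have not verified that this property
exists in our SDK version, so treat it as a guess - is telling gcloud not to
write its own log files at all:

gcloud config set disable_file_logging True

Failing that, pointing the config directory somewhere disposable via the
CLOUDSDK_CONFIG environment variable would at least move the I/O off our
archive storage.)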

On Fri, Apr 7, 2017 at 1:55 PM, 'George (Cloud Platform Support)' via
Google App Engine  wrote:

> This is intended behavior: "The gcloud tool creates and stores logs in a
> log file that you can query, located at $HOME/.config/gcloud/logs. ", as
> documented at "Tips, Troubleshooting, & Known Issues
> ".
>
> Do you use the logs in the ~/.config/gcloud/logs directory in any way?
>
> Do you export logs with the beta command to one of your storage buckets?
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "Google App Engine" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/
> topic/google-appengine/8jY242lvAHk/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> google-appengine+unsubscr...@googlegroups.com.
> To post to this group, send email to google-appengine@googlegroups.com.
> Visit this group at https://groups.google.com/group/google-appengine.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/google-appengine/4a6b5b95-afbd-4cbe-8fde-
> 86380ca70759%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at https://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAAxkZQu%3DW%3D%3DVO5CVjAT1CY_X8SDYfwOPiOVMq6ySFUEntKprGQ%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [google-appengine] Re: gcloud beta logging read creates huge log files in .config

2017-04-12 Thread Julian Bunn
Isn't the qualifier for the beta logging read command --verbosity (not
--severity)?

gcloud beta logging read --limit=1 --verbosity=none

However, when this is used it still produces a log in the ~/.config
directory that contains entries at all log levels, including "INFO".

Perhaps I have misunderstood what you are suggesting?

(We are already running cron jobs to process our logs.)
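
(For what it's worth, my current understanding - unverified - is that
--verbosity only controls gcloud's own console output, and that entry
severity has to go into the read filter itself, e.g.:

gcloud beta logging read "severity>=WARNING" --limit=10

That reduces how many entries come back, and presumably the size of the file
gcloud writes, but it doesn't help when we genuinely need every entry.)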









On Wed, Apr 12, 2017 at 1:53 PM, 'George (Cloud Platform Support)' via
Google App Engine  wrote:

> As a first approach, you may try using the --severity=SEVERITY flag, or
> filter the read entries by using --limit=LIMIT, all this to diminish from
> start the general volume of logs.
>
> You may also consider implementing a cron job in Linux, to avoid having to
> delete unwanted files manually.
>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "Google App Engine" group.
> To unsubscribe from this topic, visit https://groups.google.com/d/
> topic/google-appengine/8jY242lvAHk/unsubscribe.
> To unsubscribe from this group and all its topics, send an email to
> google-appengine+unsubscr...@googlegroups.com.
> To post to this group, send email to google-appengine@googlegroups.com.
> Visit this group at https://groups.google.com/group/google-appengine.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/google-appengine/7d05734a-b3d7-45bf-ab9d-
> 1ef9a6eb9231%40googlegroups.com
> 
> .
>
> For more options, visit https://groups.google.com/d/optout.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to google-appengine+unsubscr...@googlegroups.com.
To post to this group, send email to google-appengine@googlegroups.com.
Visit this group at https://groups.google.com/group/google-appengine.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/google-appengine/CAAxkZQuaFqcA6CKcmg7Y3aPE%2BRLs_c69jjfLhzFR_kmUtz5peg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


[google-appengine] Re: Mail Service: not all emails were sent, unclear quota denials

2009-12-23 Thread Julian Namaro
If you send all the mails at the same time, you may be hitting the 10
mails/second quota.
If that's the case, use a task queue to send emails and you'll be able
to control the rate of execution.
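
Something like this rough sketch, for example - the queue name, handler URL
and recipients list are all made up:

# queue.yaml
queue:
- name: mail-queue
  rate: 5/s

# in the request handler, enqueue one task per recipient
from google.appengine.api.labs import taskqueue   # taskqueue is still under labs

for address in recipients:
    taskqueue.add(queue_name='mail-queue', url='/tasks/send_mail',
                  params={'to': address})

# the /tasks/send_mail handler then calls mail.send_mail() for that single
# address, so the queue rate caps how fast mails actually go out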


On Dec 24, 2:04 am, Alexander Arendar 
wrote:
> Hi,
>
> My app supposed to send 37 mails.
> It sent only 11.
> I started to analyze why and on the Dashboard if I select to see
> "Number of quota denials/second" I see that it is not zero.
> But if I go to the Quota Details page I see that everything there is
> under 1-2%.
> Anyone knows why this happens?
>
> Alex

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Query the Index

2009-12-23 Thread Julian Namaro
I don't think an index can help you with that.
See Robert's solution, or, if the list of categories doesn't change
often, you can just hardcode the list in your Python/Java code.


On Dec 22, 9:01 pm, Diogo Terror  wrote:
> Is there a way I could query the index contents, such as:
>
> I have a data model called Contact which has a field called category.
> Now category is not sql-normalized but I still want to query all
> categories available for the Contact model (in which case I'd just
> query for all the keys of the category index, right?)
>
> Is there any way I could do this?
>
> Thanks!

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: How to send massive emails and avoid being suspended?

2009-12-29 Thread Julian Namaro
It is unlikely that someone would mark as spam an e-card from somebody they
know. Maybe some company is using your service to send spam?
Make sure users of your service need to create an account (with a
captcha), and set a daily limit on the number of e-cards sent.

On Dec 23, 5:30 pm, Krystox  wrote:
> We have a service to let user to send greeting cards to their friends.
> Soon we realize people might mark those emails as spam. As a result,
> the account used to do mail.send() is easily suspended for abuse.
>
> My question is GAE mail quota allows sending thousands of emails per
> day, even the free quota is 2000. How it is possible to not being
> marked as abuse?
>
> Again, we have no intention to spam. Just want to know what is the
> correct way to use this email service. Thank you!

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Templates not updated on versions other than default

2010-01-12 Thread Julian Namaro
This can happen if you have an absolute URL path somewhere.


On Jan 13, 2:52 am, Guri  wrote:
> Hi,
>   It happened that uploading a version other than default on app
> engine, somehow html changes in templates are not reflected after
> upload.
> As soon as I make the new version Default those changes appears.
>
> Is it a bug or I am missing something ?
>
> Thanks,
> Guri
-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Sending emails - taskqueue is too quick?

2010-01-14 Thread Julian Namaro
What do you get if you specify the rate per second?
Also, are you sure each task sends only one email?


On Jan 15, 5:19 am, Emanuel Berglund 
wrote:
> Thanks for your reply,
>
> Supposedly the bucket_size: should be 1 and not 8 according to this 
> posthttp://groups.google.co.uk/group/google-appengine/browse_thread/threa...
>
> But I tried it anyway and it had no effect, it's still sending out about 10
> emails per _second_ in my test, instead of the 8 per minute that I need.
>
> I don't think the second alternative would work either since there is a time
> limit of 30 seconds per request and it would probably trigger a deadline
> exceeded 
> exception.http://groups.google.co.uk/group/google-appengine/browse_thread/threa...
>
> Surely there must be some way to get App-Engine to send out emails at a
> certain rate without hitting the OverQuotaError? Anyone who successfully
> implemented it?
>
>
>
> On Thu, Jan 14, 2010 at 7:31 PM, Eli Jones  wrote:
> > Why don't you just configure it like so:
>
> > -name:mail-queue
> >   rate:8/m
> >   bucket_size:8
>
> > Now.. this is presuming that when you add a task to this mail-queue.. the
> > task is configured to send 1 e-mail to 1 recipient.
>
> > Though, you could create a recursive task queue that would work something
> > like this:
>
> > Let,
>
> >  N = number of recipients
> >  M = batch number
>
> > When ready to start emailing.. fire off task: MailTask(1)
>
> > Where,
>
> > MailTask(M) =
> >   1. Send emails to recipients M, M+1, M+2, ..., M+7
> >   2. If M+8 > N, Goto Step 4.  Else Goto Step 3.
> >   3. Schedule MailTask(M+8) to run in 61 Seconds
> >   4. Success
>
> > Or you could have it email one recipient at a time and reschedule itself to
> > run M+1 in 8 seconds.  That way.. if there is an error during the Task and
> > it automatically restarts.. you won't get a duplicated email sent out to
> > more than one person.
>
> > Though, you'd also need to give each task a unique name like
> > str(M)+"_NewsletterVersion" to ensure that you didn't double up any tasks on
> > the off chance that there was a timeout during the Task Add stage (I've had
> > this happen.. Task Add timed out.. but the Task had been added.. so the
> > previous task started over.. and added the "next" task again.  Thus, the
> > recursive task had forked into two task streams.)
>
> > On Thu, Jan 14, 2010 at 9:49 AM, Emanuel Berglund <
> > emanuel.bergl...@gmail.com> wrote:
>
> >> I posted a while back about getting an "OverQuotaError" when sending
> >> out 74 emails in quick succession.
> >> It turned out I was breaking the 8 emails per second cap, to solve
> >> this I was told to use a task queue.
>
> >> I've now set up a task queue and have created a specific mail queue in
> >> my queue.yml for sending emails out.
> >> I use the following configuration with the aim to send out 8 emails
> >> per minute, and I'm trying to not use the bucket by setting it to 0.
>
> >> - name: mail-queue
> >>  rate: 8/m
> >>  bucket_size: 0
>
> >> But, when I run a test and send out emails to 28 people, it seems it's
> >> still sending out emails at a very high rate. It sends them out at
> >> about 10 emails per second based on the information on the ETA of the
> >> tasks in the queue in the live App Engine panel.
>
> >> This isn't paced out to 8 emails per minute and I'm afraid I might go
> >> over my qota and get the OverQuotaError again. Since I am sending out
> >> a newsletter I really can't afford to have to restart it again and
> >> risk sending the same letter twice to a number of people.
>
> >> My account doesn't have billing activated but has had the free quota
> >> bumped up.
>
> >> --
>
> >> You received this message because you are subscribed to the Google Groups
> >> "Google App Engine" group.
> >> To post to this group, send email to google-appeng...@googlegroups.com.
> >> To unsubscribe from this group, send email to
> >> google-appengine+unsubscr...@googlegroups.com
> >> .
> >> For more options, visit this group at
> >>http://groups.google.com/group/google-appengine?hl=en.
>
> > --
> > You received this message because you are subscribed to the Google Groups
> > "Google App Engine" group.
> > To post to this group, send email to google-appeng...@googlegroups.com.
> > To unsubscribe from this group, send email to
> > google-appengine+unsubscr...@googlegroups.com
> > .
> > For more options, visit this group at
> >http://groups.google.com/group/google-appengine?hl=en.
>
> --
> ---
> Visit my personal 
> websites:http://www.musicpilgrimages.comhttp://www.countryplug.com
-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Email & Idempotence

2010-04-01 Thread Julian Namaro
What about:

# assuming "from google.appengine.api import mail" and "import logging"
# are already at the top of the handler module
try:
    mail.send_mail( ... )
except Exception, e:
    logging.error(str(e))
    return


On Apr 2, 2:44 am, GAEfan  wrote:
> OK, just received the 3rd bcc, 34 minutes later.
>
> So the question is, how can I put an email in the taskqueue, and have
> the task not re-executed if the mail.send() takes too long?

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: A group of students need your valuation on their app

2010-05-25 Thread Julian Namaro
Hi Duc,

Nice project.
If I may give one piece of advice, it would be to simplify the registration
process. Ideally the user should be able to start a problem right
after Google sign-in. Make them join a default team, etc...

How do you compile code on the GAE server? Are you using a mock compiler
written in Java?

Julian




On May 25, 5:41 am, Duc Anh Nguyen  wrote:
> Hi,
> We just have completed the demo version of our Java app on GAE.
> Generally, our idea is to make a website that helps you improve your
> debugging skills. And for our school project, it should be a game, so
> we called it a debugging game.
>
> You'd play as an employee working for a cruel employer, who gives you
> a trashy bunch of codes everyday and wants you to fix them. So you'd
> read the code, you'd make some changes, and you submit your solution
> to the employer - the GAE server, where it compiles your code and
> return a result.
>
> Enough said, please take a look:http://bugkillr.appspot.com
> And we have a valuation form ready for you to fill 
> in!http://spreadsheets.google.com/viewform?hl=en&pli=1&formkey=dGJQVnRuU...
>
> Thank you so much. We really appreciate your help.
>
> --
> You received this message because you are subscribed to the Google Groups 
> "Google App Engine" group.
> To post to this group, send email to google-appeng...@googlegroups.com.
> To unsubscribe from this group, send email to 
> google-appengine+unsubscr...@googlegroups.com.
> For more options, visit this group 
> athttp://groups.google.com/group/google-appengine?hl=en.

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.



[google-appengine] Re: Querying for N random records on Appengine datastore

2009-07-10 Thread Julian Namaro

You should try to generate a list of N random keys for your entity,
because you can fetch all of them with only one datastore call (and no index).
Aren't google-generated numeric IDs guaranteed to be contiguous?
If not, you can assign alphanumeric keys yourself that are contiguous
(or generated by a formula that you pass on to the random generator).
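
A minimal sketch of the idea, assuming numeric IDs that are mostly contiguous
between 1 and max_id ('MyKind', max_id and n are placeholders):

import random
from google.appengine.ext import db

# over-fetch a bit, because IDs that were never assigned (or whose
# entities were deleted) come back as None from db.get()
keys = [db.Key.from_path('MyKind', random.randint(1, max_id))
        for _ in xrange(2 * n)]
entities = [e for e in db.get(keys) if e is not None][:n]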

Julian


On Jul 10, 1:33 am, aloo  wrote:
> Hi all,
>
> I'm trying to write a GQL query that returns N random records of a
> specific kind. My current implementation works but requires N calls to
> the datastore. I'd like to make it 1 call to the datastore if
> possible.
>
> I currently assign a random number to every kind that I put into the
> datastore. When I query for a random record I generate another random
> number and query for records > rand ORDER BY asc LIMIT 1.
>
> This works, however, it only returns 1 record so I need to do N
> queries. Any ideas on how to make this one query? Thanks.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Re: Proposal: __index__ support to make certain datastore/query operations more efficient

2009-08-06 Thread Julian Namaro

That's an interesting idea, for that particular use case.

If I reformulate the "relation index entities" query as I understand
it:
1. [datastore] Traverse the index and return entity keys
2. [app engine] Transform the returned keys to retrieve the parent entity keys
3. [datastore] Batch get of entities

Your idea:
1. [datastore] Traverse the index to get entity keys
3. [datastore] Batch get of entities

So you would save step 2, a roundtrip between App Engine and the datastore;
not sure that's substantial, though.

Julian




On Aug 6, 1:05 pm, tav  wrote:
> As a developer we have various limitations that we have to work with
> on App Engine. And if you are performance conscious, then you have
> even more ;p
>
> Two limits that one has to often work around are the 1MB Datastore
> calls and ListProperty size limits.
>
> A common pattern that I've used to overcome these is to use "related
> entities". I use these related entities to store properties that I
> want to search on -- separate from the entities that I want to
> retrieve in order to display the results back to the user.
>
> I was pleased to see App Engine developer Brett Slatkin officially
> encouraging the use of this pattern in his I/O talk:
>
> *http://code.google.com/events/io/sessions/BuildingScalableComplexApps...
>
> He calls them "relation index entities" and explains it better than I
> can, so here be a slightly adapted excerpt from pages 23-25 of his
> presentation PDF:
>
> 
>
> Problem: Scalably "delivering" a Twitter-esque post/message
>
> Solution: Split the message into 2 entities:
> * Message model contains the info we care about
> * MessageIndex has only relationships for querying
>
> >>> class Message(db.Model):
>
> ...   sender = db.StringProperty()
> ...   body = db.TextProperty()
>
> >>> class MessageIndex(db.Model):
>
> ...   receivers = db.StringListProperty()
>
> When writing, put entities in the same entity group for transactions:
> * That is, make the Message entity be the parent for the MessageIndex entity
> * You can of course have multiple MessageIndex entities per Message to
> scale up...
>
> And for queries:
> * Do a key-only query to fetch the MessageIndexes
> * Transform returned keys to retrieve parent entity
> * Fetch Message entities in batch
>
> >>> indexes = db.GqlQuery(
>
> ... "SELECT __key__ FROM MessageIndex "
> ... "WHERE receivers = :1", me)
>
> >>> keys = [k.parent() for k in indexes]
> >>> messages = db.get(keys)
>
> 
>
> Now, if you start using this pattern heavily, you'll realise that it
> should be extremely trivial for the App Engine devs to turn the 3
> stepped process of:
>
> * Do a key-only query to fetch the MessageIndexes
> * Transform returned keys to retrieve parent entity
> * Fetch Message entities in batch
>
> Into just a single step:
>
> * Do a query on MessageIndexes which will actually return the related
> Messages entities
>
> That is, the query would be done on the MessageIndexes entities, but
> instead of returning those entities, the related Message entities
> would be returned.
>
> Saving both us and App Engine an additional Datastore request and key
> computation!! Given the 30 seconds limit, this also means that you
> could then do twice the amount of querying in the same time! So it
> helps even more!
>
> So how would this work?
>
> Well, let's say that we had the following Message:
>
> >>> msg = Message(send='tav', body='Hello World')
> >>> msg_key = msg.key()
>
> And the following related MessageIndex entities:
>
> >>> rcv1 = MessageIndex(parent=msg_key, receivers=['alice', 'bob'], 
> >>> __index__=msg_key)
> >>> rcv2 = MessageIndex(parent=msg_key, receivers=['ryan'], __index__=msg_key)
>
> The presence of the newly proposed __index__ property would alter the
> behaviour of how the Datastore indexes the rcv1/rcv2 entities. Instead
> of doing the current:
>
>   rcv1:receivers:alice
>   rcv1:receivers:bob  
>   rcv2:receivers:ryan 
>
> It would look like:
>
>   rcv1:receivers:alice
>   rcv1:receivers:bob  
>   rcv2:receivers:ryan 
>
> [Note for those not familiar with the App Engine Datastore -- the App
> Engine guys don't store the full entity in all of the index tables,
> instead the key is stored and used to load up the complete
> entity with all of its properties in order to respond to the completed
> query once the relevant entities have been identified.]
>
> That is, with __in

[google-appengine] Re: Transactions

2009-08-09 Thread Julian Namaro

I think this problem is discussed there:
http://code.google.com/p/googleappengine/issues/detail?id=313

Julian



On Aug 7, 4:30 am, Cornel  wrote:
> Hello. I'm using app engine to write a business application. I've read
> that during a transaction one can modify only entities within the same
> entity group. How would one approach the following scenario? :
>
> Consider the "Account object" with the "Owner" and "Credit" fields.
>
> If i want to make a credit transfer between accounts A and B
> (A.credit--; B.credit++), it must be done in a single transaction.
> That can only happen if A and B are in the same entity group (from
> what i understand)
>
> Since a credit transfer can be done between any two random accounts, i
> must put them all in the same entity group; but this way, two
> unrelated transfers (A to B and C to D let's say) cannot be done at
> the same time anymore. Which again is not desired (i understand that
> having a single big entity group is bad practice)
>
> I think this is a pretty general problem (not related only to this
> scenario), so how is it solved?
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Re: Adding the link to app on the the Google App top navigation bar

2009-10-30 Thread Julian Namaro

I believe the only way to do that would be to use something like a
Greasemonkey script (cf. 
http://en.wikipedia.org/wiki/Greasemonkey#Equivalents_for_other_browsers).


On Oct 29, 1:34 pm, saneef  wrote:
> Is there any way to add the link to newly made Google App to the
> Google App (where Mail, calendar,... comes) top navigation bar.
>
> Thanks,
> Saneef
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en
-~--~~~~--~~--~--~---



[google-appengine] Re: Increasing task queue quotas

2009-11-17 Thread Julian Namaro
I'm not sure exactly what you want to do, but just a thought: have you
considered Amazon Elastic MapReduce?
It's certainly doable with task queues, but you're likely to run into the
various limits you cite.


On Nov 18, 1:46 am, James Cooper  wrote:
> Hi,
>
> I'm evaluating GAE suitability for one of my clients.  It is a B2B app
> with very low organic web traffic.  However, users upload lists of
> data that need to be processed in the background.
>
> In the absence of native MapReduce support, my plan is to use task
> queues.  But to meet the performance needs of this application, I need
> to burst to relatively high levels of concurrency.
>
> My question is about the discrepancy in the GAE quotas for inbound web
> traffic vs. task queues.
>
> Inbound web traffic can burst to: 500 qps
> Task queues burst to: 20 qps
>
> In my mind, the total concurrency that GAE would provide an app is
> equal to:
> inbound qps + task queue qps + cron qps.
>
> Given that task queues are implemented as web request handlers, I see
> little infrastructural reason to distinguish between task queue and
> inbound traffic.
>
> Is it possible to request a task queue quota increase to 500 qps?  If
> not is there a technical reason for this?
>
> thanks
>
> -- James

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=.




[google-appengine] Re: Why not implement a Django database backend for App Engine?

2009-11-18 Thread Julian Namaro
Traditional relational databases are not scalable, and using such a
library would most likely result in a data organization that does not
scale well. That might be fine for a lot of apps, but it kind of defeats the
main purpose of using a cloud infrastructure.


On Nov 18, 9:54 pm, frankabel  wrote:
> Hi all,
>
> I'm newbie in this App Engine. As far I know, lot of Django app
> portability get solved if someone implement a  Django database backend
> for App Engine. Even I read "Once queryset-refactor lands in trunk, it
> might also be possible to write a database backend for App Engine that
> would allow any app to run properly. " 
> athttp://martyalchin.com/2008/apr/8/appengine/
>
> Somebody can explain me why isn't implemented yet such database
> backend? I, mean is a big deal? What are the main problems that ones
> must address to write such backend? If the backend exist, porting
> Django app will be more easy that using Google App Engine Helper for
> Django?
>
> Cheers
> Frank Abel

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=.




[google-appengine] Re: datastore glitch/corruption?

2009-11-19 Thread Julian Namaro
The datastore is 100% reliable even if the datastore Java/Python APIs
might not be.

What framework and libraries do you use? Is this live on appspot.com
or the local dev server? It looks like there is an unauthorized write
somewhere.

If you can reproduce the problem consistently, try to locate it by
gradually shutting down parts of your code.



On Nov 19, 12:12 pm, ussuri  wrote:
> I'm experiencing weird data corruption issues that seem to be caused
> by a glitch in the datastore. There is a chance that there is a bug in
> my code, of course, but I have spent hours looking for holes and I see
> none.
>
> Basically, I have an entity (table) with several content fields and
> two "tracking" fields. One is a simple timestamp that is auto updated
> (auto_now = True). The other tracking field stores the session ID of
> the last session that updated the entity. There are only four methods
> that update the entity, and all set the session ID properly.
>
> Situation: sometimes entities get one of their content fields (of the
> TextProperty type) populated with weird data, probably (this is a
> guess) with data from other records in the same 'table'. What's more,
> it appears that this happens on read-only operations. For example, I
> read today about 100 records, and most of them had this field
> corrupted. ALL OF THE CORRUPTED RECORDS  had the 'modified' field
> (auto_now timestamp) set to the time when I was reading them, while
> all of them had last_session_id set to a session that happened a month
> ago.
>
> I've read about a gmail glitch that resulted in some users seeing
> emails of other users, and this seems to be a similar kind of bug.
>
> Am I imagining things, and the datastore is 100% reliable, or is the
> scenario I am describing potentially possible?
>
> Thanks,
> MG

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=.




[google-appengine] Re: datastore glitch/corruption?

2009-11-20 Thread Julian Namaro
You can try adding some Asserts and overriding the validate() method
of your property classes to check for data corruption. This should
give you a better idea of what goes wrong and when.
http://googleappengine.blogspot.com/2009/07/writing-custom-property-classes.html
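
A minimal sketch of what I mean - the model and the size check are made up,
so use whatever invariant your real data is supposed to satisfy:

from google.appengine.ext import db

class CheckedText(db.TextProperty):
    def validate(self, value):
        value = super(CheckedText, self).validate(value)
        # called on assignment and on put(), so corruption shows up early
        if value is not None and len(value) > 500000:
            raise db.BadValueError('suspiciously large value (%d chars)'
                                   % len(value))
        return value

class Record(db.Model):
    content = CheckedText()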



On Nov 20, 11:30 pm, ussuri  wrote:
> On Nov 19, 9:57 pm, Julian Namaro  wrote:
>
> > The datastore is 100% reliable even if the datastore Java/Python APIs
> > might not be.
>
> > What framework and libraries do you use? Is this live on appspot.com
> > or the local dev server? It looks like there is an unauthorized write
> > somewhere.
>
> > If you can reproduce the problem consistently, try to locate it by
> > gradually shutting down parts of your code.
>
> this is live on appspot.com, using Python API. I cannot reproduce the
> problem. I just notice occasional data corruption - it was rare like
> 1-2 records out of thousands in a month, but two day ago I got a whole
> bunch, as described in the original post. Can it be that occasionally
> __key__ gets translated erroneously to point to a different record,
> and then somehow the link becomes persistent in the textproperty
> field?

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=.




[google-appengine] Re: Three questions about AppEngine datastore and benefits / cost of EntityGroups?

2009-11-23 Thread Julian Namaro


> Question 1: When you define a chain of ownership between an entity and
> child entity, is the entity group defined at the root entity level or
> the "kind" level.  For example of you have "kind" called "Book" and a
> child kind "Chapter" defined.  If you have a "Book" entity named
> "Intro to AppEngine", and it has three child entitles, "Ch1", "Ch2",
> and "Ch3", is there 1 entityGroup containing "Intro to AppEngine" with
> its 3 child entities or is the entityGroup at the 'book" level where
> it contains all entities of kind "book" along with all of the child
> entities?

The grouping is at the level of entities, not models (hence the name).
In your example the Book entities will be stored across various Datastore
nodes; only "Intro to AppEngine" with its 3 child entities "Ch1", "Ch2"
and "Ch3" is guaranteed to be grouped together in the same Datastore node.

> Question 2: Assuming you have an entityGroup with a chain of ownership
> of 3 different kinds. Does the write throughput vary based on whether
> we are using transactions or not. When Max Ross at Google I/O
> described write throughput on an entityGroup being from anywhere from
> 1 to 10 writes per second, were these numbers based on writes using
> appengine datastore transaction management or were these numbers based
> on writes without transactions.

I assume it does vary; transactions are inherently slower and can
fail.
I think the threshold is that you will get into trouble if you use
transactions on entity groups that get more than 1 write per second.

> Question 3: The second question involves queries against data in an
> entity group. In the relational database world, some databases allow
> you to partition data within a table. The advantage of this, if done
> properly, is that queries against those tables result in scans against
> specific partitions (given parameters in where clause) as oppossed to
> full table scans across all data in the table. The query can perform
> much better with these partitions in place.  With respect to the
> AppEngine datastore, do we get a performance boost with queries whose
> entities all reside within a single entity group?

The only difference for queries is that the entities are stored at the
same location. As the entities matching a query are fetched in
parallel, this will not result in a performance boost, and might
actually be slower.

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Can a bank system be transactional and low contentious?

2009-11-30 Thread Julian Namaro
How is this better than a read and then a write outside of a transaction?

Anyway, it looks like there will be specialized tasks for transactions
across entity groups in the future.


On Dec 1, 4:32 am, peterk  wrote:
> I'm not sure if this would meet your needs or not, but it might be
> something to look into. A book I was reading suggested transactional
> enqueuing of tasks to get around having to keep all entities in a
> single entity group, for certain kinds of transaction.
>
> So say in a transaction you need to read from one entity and write to
> another. Using this method you could in a transaction read from the
> first entity, and then enqueue a task to write to the second.
>
> The two entities can be in different entity groups, but the
> transaction will ensure that the write task doesn't get enqueued if
> the read on the first entity fails. So if the read fails, the write
> will fail.
>
> The book notes that at the time of writing transactional task
> enqueuing was not supported, but this may have changed by now (?)
>
> You'd also be limited by the quotas and limits currently applicable to
> the Task Queue. And it wouldn't be suitable for all kinds of
> transactions...for example, I'm not sure how you'd cast a problem of
> needing 3 writes in a single transaction to this technique. You could
> enqueue three tasks in a transaction, but you've no visibility beyond
> that of the writes' success.
>
> On Nov 27, 3:40 am, 风笑雪  wrote:
>
> > I just watched some Google I/O videos about GAE yesterday, and I have
> > 2 questions about transaction.
>
> > Assume I need to build a bank system, two clients (Alice and Bob) want
> > to transfer one's money to the other.
> > So when I create Alice and Bob, I must put them in the same entity group:
> > alice = User(name='Alice')
> > bob = User(parent=alice, name='Bob')
>
> > If I have millions of clients, and they all have a chance to make a
> > deal with each other, then they should be all in the same entity
> > group:
> > adam = User(name='Adam')
> > alice = User(parent=adam,name='Alice')
> > bob = User(parent=adam, name='Bob')
> > chris = User(parent=adam, name='Chris')
> > ...
>
> > As the presentation says, writes to the entity group is serialized,
> > and a write operation takes at least 10ms, a transaction needs at
> > least 1 reed and 3 writes, so I can't do more than 33 deals/sec, is it
> > scalable enough?
> > And having such a big entity group may easily cause high contention,
> > maybe most of the transactions will fail.
> > I wonder how could I break the huge group into small entity groups?
>
> > The second question is about the root entity.
> > The presentation says, root keeps a timestamp for the entire group.
> > But document also says, we can delete an ancestor, or just create a
> > Key for the ancestor that not exist to specify the parent of the new
> > entity.
> > So if the root has been deleted, or not exist at all, can this entity
> > group still transactional?

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: DB Model design, fan-out, lots of writes?

2009-12-01 Thread Julian Namaro
There is a better solution to this problem.
You can use a special entity MessageRead with no properties and a
key_name of message_id + recipient_id. The read path then becomes:

indexes = db.GqlQuery("SELECT __key__ FROM MessageIndex "
                      "WHERE recipient = :1", recipient_id)
keys = []
for k in indexes:
    keys.append(k.parent())
    keys.append(db.Key.from_path('MessageRead',
                                 str(k.parent().id()) + recipient_id))
# slots for messages that were never read come back as None
messages = db.get(keys)

You can put MessageRead in the same entity group as Message to do a
transactional "mark as read", but then as Peterk said there might be
contention if the message has thousands of recipients. The other
option is to leave it as a root entity and write it from a task queue after
a message is read.



On Nov 6, 1:35 am, Chris  wrote:
> Hi All
>
> I'm looking for some input on Db.Model design for the following
> scenario:
>
> 1) A User can send a message which will to anywhere between 1 to 5000
> receipients with 200-2000 receipients being by far the the most
> common.
> 2) Each recipient is expected to receive between 1 and 50 messages a
> day.
> 3) When a receipent has read a message it needs to be flagged as read.
>
> For the distribtion of messages to receipients I took inspiration from
> thishttp://www.youtube.com/watch?v=AgaL6NGpkB8
> (About 16 minutes into the video) he suggests a model like this:
>
> class Message(db.Model):
>   sender=db.StringProperty()
>   body=db.TextProperty()
>
> class MessageIndex(db.Model):
>   recipient = db.StringListProperty()
>
> with Message and MessageIndex being in the same entity group. In short
> the benefit to this design is supposedly that I can do a key only
> query on
> the MessageIndex for a particular user. From the MessageIndex keys
> returned for the recepient I can extract the actual Message entity
> keys and fetch those directly by key.
>
> That's all well and good...but then I get to 3)...recipients needing
> to flag messages as read. For that I'm contemplating something like
> this:
>
> class MessageReadIndex(db.Model):
>   recipient=db.StringProperty()
>   month=db.IntegerProperty()
>   messagesRead = db.StringListProperty(indexed=False)
>
> When a recipient asks for a list of messages it will be sorted by
> date, newest messages first, and paged (think gmail).
> In the same page request I can query the MessageReadIndex for the user
> and month(s) in question. From here I can loop through each message in
> memory and
> check to see if it has already been read.
>
> When the recipient clicks a message to read it I can also retrieve the
> MessageReadIndex entity and append the Message Id to the messagesRead
> property and put() the entity.
> This last bit is what has be a bit worried. It will be quite a few
> writes from every recipient every day...again think gmail ;-) Not
> indexing the messagesRead
> property should help minimize the number of index entries that need
> updating ...but still. Am I being overly paranoid and prematurely
> optimizing at an unreasonably
> level? Does anybody have any better ideas as for how to handle this?
>
> Thanks in advance for your CPU time!
>
> /Chris

--

You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.




[google-appengine] Re: Datastore is slow on queries involving many entities, but a smallish dataset

2009-12-01 Thread Julian Namaro
Hi Eric,

Of course this kind of performance is in no way normal.
Such a simple query typically takes under 50ms.
So either there is a problem in your code, or you have hit a bug in the
Java SDK.
I suggest you post a more complete code sample in the App Engine Java
group.

And no need to be condescending here. You'll see that the datastore is
a great piece of engineering :)
Query performance is completely independent of the number of entities
you have.
It depends on the size of the result set, but results are fetched in
parallel so the overhead is mainly the time needed to deserialize
entities. For the same reason locality is not a factor.



On Dec 2, 6:12 am, Eric Rannaud  wrote:
> On Tue, Dec 1, 2009 at 11:02 AM, Stephen  wrote:
> > On Dec 1, 9:55 am, Eric Rannaud  wrote:
> >> Calendar c = Calendar.getInstance();
> >> long t0 = c.getTimeInMillis();
> >> qmsgr = (List) qmsg.execute(lo, hi);
> >> System.err.println("getCMIdRange:qmsg: " + (c.getTimeInMillis() - t0));
>
> > Are you fetching all 128 entities in one batch? If you don't, the
> > result is fetched in batches of 20, incurring extra disk reads and rpc
> > overhead.
>
> > Not sure how you do that with the Java API, but with python you pass
> > '128' to the .fetch() method of a query object.
>
> As far as I can tell, there is no such equivalent in the Java API. The
> query.execute() statement returns a collection that is meant to
> contain all the results. I don't know how they implement the
> Collection object returned by query.execute(). Google may well manage
> that in batches internally, inside the object with interface
> List, but that would be nasty for performance.
>
> I should say that a query with 1 result takes about 30ms. 128*30 =
> 3840 ms. That's pretty close to what I'm seeing for 128, indicating a
> linear scaling in the number of entities. Which would be really bad,
> and unexpected.
>
> It's really hard to guess what's going on internally, without any
> visibility of the architecture.
>
> To see the impact of number of entities on response time, I did some
> systematic testing:
>
> Querying elements [0,10), [0,10), [0,10), [0,20), [0,20), [0,20),
> [0,30), [0,30), [0,30), ... [0, 260), [0, 260), [0, 260) by increments
> of 10, in a quick succession, three times each, actually shows a
> pretty good performance behavior, the largest query with 260 entities
> returned taking 300ms. So there is some kind of caching happening,
> maybe. I didn't see that caching behavior earlier, but I wasn't doing
> queries in such a quick succession.
>
> But if I hit randomly in the datastore, i.e., [X+0,X+10), [X+0,X+20),
> [X+0,X+30), ...  [X+0, X+260), where X is random and different for
> each request, 0 <= X < 50, then pretty much all the queries take
> between 1s and 4s, and we're back to more or less linear scaling in
> the number of entities fetched. (With a query returning a single
> entitiy taking 3s every so often.)
>
> It does make some sense for random queries to take longer than a bunch
> of queries in the same area of the datastore (except that there are no
> guarantees that the locality in the datastore is related to the
> ordering with respect to the field 'id'). But with the near linear
> scaling in response time with the number of entities, say 30 ms per
> entity, of average size 463 B, that's an implied bandwidth in the
> backend of 120Kb/s. Which is not very good.
>
> A last point, the field 'id' and the PrimaryKey of the entity MessageS
> are effectively uncorrelated (with respect to their ordering). The
> PrimaryKey is a String containing a MD5 hash of the content, the 'id'
> is a long set incrementally.
>
> Has anybody looked (publicly) at datastore performance depending on
> query size, locality, etc? If not, I might try to gather some
> extensive data, and write it up.
>
> Thanks,
> Eric.





[google-appengine] Re: DB Model design, fan-out, lots of writes?

2009-12-02 Thread Julian Namaro

> Is there value in putting the MessageRead entity in the same entity
> group as the message?
>

Just to do transactional "mark as read". I agree in most cases it's
not worth the trouble.
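
For what it's worth, a rough sketch of that transactional "mark as read",
assuming a MessageRead child entity keyed under its Message; the names are
illustrative:

from google.appengine.ext import db

class MessageRead(db.Model):
  # Child of a Message; the key_name identifies the recipient.
  pass

def mark_as_read(message_key, recipient_id):
  def txn():
    # Parent and child share an entity group, so this write (plus any
    # update to the Message itself) commits atomically.
    MessageRead(parent=message_key, key_name=recipient_id).put()
  db.run_in_transaction(txn)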





[google-appengine] Re: Datastore is slow on queries involving many entities, but a smallish dataset

2009-12-03 Thread Julian Namaro

Eric,

Sorry, my previous answer was wrong. Query time increases with the
number of entities fetched faster than I thought.
From a small benchmark in Python, the average time for your query is
150-200ms.
This is for fetching 130 entities of about 400 bytes, with 5
properties each, using an inequality filter.

But still, one second is definitely too long. Hope the Java gurus
can help you!
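
For reference, a rough Python sketch of that kind of measurement; the
MessageS model and its 'id' property just mirror the query from the thread
and are illustrative, not the exact benchmark code:

import logging
import time

from google.appengine.ext import db

class MessageS(db.Model):
  id = db.IntegerProperty()

def timed_range_query(lo, hi):
  # Fetch up to 128 entities in a single batch and log the elapsed
  # wall-clock time for the whole round trip.
  start = time.time()
  results = (MessageS.all()
             .filter('id >=', lo)
             .filter('id <', hi)
             .fetch(128))
  logging.info('fetched %d entities in %.0f ms',
               len(results), (time.time() - start) * 1000)
  return results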




On Dec 3, 5:03 am, Eric Rannaud  wrote:
> Crossposting to App Engine Java group: the original thread is 
> at http://groups.google.com/group/google-appengine/browse_thread/thread/...
>
> In a few words: I have a problem with reasonable queries taking a very
> long time (several seconds). These queries return 128 entities, from a
> total of 500,000 entities of that type in the datastore. Each entity
> is about 400 bytes.
>
>
>
> On Tue, Dec 1, 2009 at 6:49 PM, Stephen  wrote:
> > On Dec 1, 9:12 pm, Eric Rannaud  wrote:
> >> On Tue, Dec 1, 2009 at 11:02 AM, Stephen  wrote:
> >>> On Dec 1, 9:55 am, Eric Rannaud  wrote:
>  SELECT * FROM MessageS where id >= 0 && id < 128 order by id
>
>  Calendar c = Calendar.getInstance();
>  long t0 = c.getTimeInMillis();
>  qmsgr = (List) qmsg.execute(lo, hi);
>  System.err.println("getCMIdRange:qmsg: " + (c.getTimeInMillis() - 
>  t0));
>
> >>> Are you fetching all 128 entities in one batch? If you don't, the
> >>> result is fetched in batches of 20, incurring extra disk reads and rpc
> >>> overhead.
>
> >>> Not sure how you do that with the Java API, but with python you pass
> >>> '128' to the .fetch() method of a query object.
>
> >> As far as I can tell, there is no such equivalent in the Java API. The
>
> > Something like this..?
>
> > DatastoreService datastore =
> >DatastoreServiceFactory.getDatastoreService();
>
> > Query query = new Query("MessageS");
> > query.addFilter("id", Query.FilterOperator.GREATER_THAN_OR_EQUAL, 0);
>
> > List messages = datastore.prepare(query)
> >.asList(FetchOptions.Builder.withLimit(128));
>
> > You might also have to tweak chunkSize and/or prefetchSize, or ask on
> > the Java list.
>
> I did some tests with the code you proposed. The performance remains
> essentially the same as with the JDO API, i.e. between 1 and 4 second
> per "execute"/"prepare" statement (2.5s on average).
>
> DatastoreService datastore =
> DatastoreServiceFactory.getDatastoreService();
> Query query = new Query("MessageS");
> query.addFilter("id", Query.FilterOperator.GREATER_THAN_OR_EQUAL, lo);
> query.addFilter("id", Query.FilterOperator.LESS_THAN, hi);
>
> long t0 = Calendar.getInstance().getTimeInMillis();
>
> List r = datastore.prepare(query)
> .asList(FetchOptions.Builder
> .withLimit(128)
> .prefetchSize(128)
> .chunkSize(128));
>
> System.err.println("LOW:getCMIdRange:qmsg: "
>+ (Calendar.getInstance().getTimeInMillis() - t0)
>+ " " + r.size());
>
> Thanks.





[google-appengine] Re: Documenting Each App Having its Own Datastore?

2009-12-04 Thread Julian Namaro
Well, technically there is only one datastore for all apps.
Google's servers make sure you cannot access data from other apps, but
we can imagine that they will allow it for specific cases in the
future.
And different entities with the same kind name can co-exist because all
the keys of your app contain your application id.
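
A quick way to see this from the Python SDK (a sketch to run inside a
handler; the kind and name are made up):

from google.appengine.ext import db

def show_key_owner():
  # The application id is baked into every key, so the same kind/name
  # pair in another app still yields a different key.
  key = db.Key.from_path('Greeting', 'hello')
  return '%s / %s / %s' % (key.app(), key.kind(), key.name())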



On Dec 5, 1:04 am, Hans  wrote:
> I think this is obvious to a lot of us, but I wonder if it's worth
> explicitly documenting up front in the Java/Python Datastore overviews
> that each application has its own corresponding datastore?
>
> If so, one could add an additional paragraph in each "Introducing the
> Datastore" section 
> of http://code.google.com/appengine/docs/java/datastore/overview.html#In...
> and http://code.google.com/appengine/docs/python/datastore/overview.html#...
> like this:
>
> "Each App Engine application has its own associated datastore. From
> any given application, you may not reference the datastore (and its
> entities) associated with a different application. This does allow
> entities with the same name to exist independently across multiple
> applications. "
>
> Hans





[google-appengine] Re: Application ID

2009-12-17 Thread Julian Namaro
Hi Marco,

Maybe you got an error precisely because the id was unavailable.
A quick Google search returns 2 results for artezel - at - gmail.
The App Engine namespace is shared with other Google applications like
Gmail.



On Dec 16, 9:20 am, Marco Antonio da Silva Castanheira
 wrote:
> Hi,
>
> I'm new to App Engine. I developed my first application, called *artezel*,
> but when I tried to create the application on App Engine, an error occurred.
> The application was not created, but the id, artezel, is now unavailable.
> How can I make this id available again?





[google-appengine] Re: Prerelease SDK 1.4.0 is out!

2010-11-18 Thread Julian Namaro
Wow, a lot of work packed in there, congrats App Engine team!

I'm surprised about the new 10-minute deadline on Cron and Task
Queues. A short while back you were explaining that long-running
requests are bad for the App Engine ecosystem. Is that no longer the
case for Task Queues, or are you just confident that the system has
improved enough to handle it now?

10 minutes is a lot of time. If it works out well it will open the
door to a lot of new possibilities :)


On Nov 19, 7:27 am, "Ikai Lan (Google)" 
wrote:
> Hey everyone,
>
> I just wanted to let everyone know that prerelease SDK 1.4.0 is out! Get it
> from the Google Code project:
>
> http://code.google.com/p/googleappengine/downloads/list
>
> We're still working on the docs and will have them ready for the final
> release, so if there are any questions about how to use the new features,
> feel free to ask on this thread and I'll do my best to clarify them. The
> release notes are below. This is an EXCITING release:
>
> Python
> 
> - The Always On feature allows applications to pay and keep 3 instances of
> their
>   application always running, which can significantly reduce application
>   latency.
> - Developers can now enable Warmup Requests. By specifying a handler in an
>   app's app.yaml, App Engine will attempt to send a Warmup Request to
>   initialize new instances before a user interacts with it. This can reduce
> the
>   latency an end-user sees for initializing your application.
> - The Channel API is now available for all users.
> - Task Queue has been officially released, and is no longer an experimental
>   feature. The API import paths that use 'labs' have been deprecated. Task
> queue
>   storage will count towards an application's overall storage quota, and
> will
>   thus be charged for.
> - The deadline for Task Queue and Cron requests has been raised to 10
> minutes.
>   Datastore and API deadlines within those requests remain unchanged.
> - For the Task Queue, developers can specify task retry_parameters in their
>   queue.yaml.
> - Metadata Queries on the datastore for datastore kinds, namespaces, and
> entity
>   properties are available.
> - URLFetch allowed response size has been increased, up to 32 MB. Request
> size
>   is still limited to 1 MB.
> - The Admin Console Blacklist page lists the top blacklist rejected
> visitors.
> - The automatic image thumbnailing service supports arbitrary crop sizes up
> to
>   1600px.
> - Overall average instance latency in the Admin Console is now a weighted
>   average over QPS per instance.
> - The developer who uploaded an app version can download that version's code
>   using the appcfg.py download_app command. This feature can be disabled on
>   a per application basis in the admin console, under the 'Permissions' tab.
>   Once disabled, code download for the application CANNOT be re-enabled.
> - Fixed an issue where custom Admin Console pages did not work for Google
>   Apps for your Domain users.
> - Allow Django initialization to be moved to appengine_config.py to avoid
>   Django version conflicts when mixing webapp.template with pure Django.
>    http://code.google.com/p/googleappengine/issues/detail?id=1758
> - Fixed an issue in the dev_appserver where get_serving_url did not work
>   for transparent, cropped PNGs:
>    http://code.google.com/p/googleappengine/issues/detail?id=3887
> - Fixed an issue with the DatastoreFileStub.
>    http://code.google.com/p/googleappengine/issues/detail?id=3895
>
> Java
> -
> - The Always On feature allows applications to pay and keep 3 instances of
> their
>   application always running, which can significantly reduce application
>   latency.
> - Developers can now enable Warmup Requests. By specifying a handler in an
>   app's appengine-web.xml, App Engine will attempt to send a Warmup
> Request
>   to initialize new instances before a user interacts with it. This can
> reduce
>   the latency an end-user sees for initializing your application.
> - The Channel API is now available for all users.
> - Task Queue has been officially released, and is no longer an experimental
>   feature. The API import paths that use 'labs' have been deprecated. Task
> queue
>   storage will count towards an application's overall storage quota, and
> will
>   thus be charged for.
> - The deadline for Task Queue and Cron requests has been raised to 10
> minutes.
>   Datastore and API deadlines within those requests remain unchanged.
> - For the Task Queue, developers can specify task retry-parameters in their
>   queue.xml.
> - Metadata Queries on the datastore for datastore kinds, namespaces, and
> entity
>   properties are available.
> - URL Fetch allowed response size has been increased, up to 32 MB. Request
> size
>   is still limited to 1 MB.
> - The Admin Console Blacklist page lists the top blacklist rejected
> visitors.
> - The automatic image thumbnailing service supports arbitrary crop sizes up
> to
>   1600px.
> - Overall average ins

[google-appengine] Re: Suitability of App Engine for Database Display App

2010-11-30 Thread Julian Namaro
GAE is not designed for performing complex queries on a dataset.
It is possible to get it to work by anticipating what queries you will
need and building the corresponding indexes, but it is neither easy nor
flexible, and the complexity of the queries will be limited.

Why not start with a relational database? 10 million rows doesn't
sound like it would be slower than GAE.
Not sure about your use case, but BigQuery also looks like a
possibility: http://code.google.com/apis/bigquery/docs/overview.html
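
To make the limitation concrete, here is a sketch of the query from the
post below in the Python datastore API. Only one property per query may
carry inequality filters, and the equality-plus-inequality combination
generally needs a composite index in index.yaml; the remaining range
condition has to be applied in memory. Model and property names are made up:

from google.appengine.ext import db

class Row(db.Model):
  a = db.IntegerProperty()
  b = db.IntegerProperty()
  c = db.BooleanProperty()

# (column a < 50) && (10 < column b < 1000) && (column c == true)
candidates = (Row.all()
              .filter('c =', True)
              .filter('b >', 10)
              .filter('b <', 1000)
              .fetch(1000))
rows = [r for r in candidates if r.a < 50]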



On Nov 28, 11:56 am, smitts  wrote:
> I'm considering using GAE to host a relatively large (10 Million+
> rows, possibly larger) database.  Data from this database is then
> retrieved using filter criteria on multiple columns of the table and
> displayed to the user.  Typically the criteria is (column a < 50) &&
> (10 < column b < 1000) && (column c == true), which has been
> relatively database intensive on a traditional LAMP.
>
> Does anyone have experience running a similar setup?  Most of what I
> have seen GAE used for is much less database intensive.  Could it
> work?  Should I expect faster or slower responses from the LAMP?
>
> Thanks!




[google-appengine] Re: FederatedLogin: Is logout supported?

2010-12-07 Thread Julian Namaro
You can star the issue I filed a while ago about this:
http://code.google.com/p/googleappengine/issues/detail?id=3301



On Dec 8, 4:22 am, dflorey  wrote:
> Thanks for the suggestion, but I don't think this is a feasible solution
> for a real-world application.
> Is it possible to redirect to the proposed Google logout links and
> pass something like a next url so that the user will end up on the
> login page of my app?
>
> Right now I'm using PAPE=0 in my own OpenID implementation, but I'd
> like to switch to the GAE implementation in order to be able to send
> emails on the behalf of the current user.
>
> What a mess :-(
>




[google-appengine] Re: Enhancement/Concept for Foreign Property Synchronization within Models

2010-12-10 Thread Julian Namaro
Yes, I also thought about this when denormalizing. And this could be
generalized to a whole model by serializing it into a protobuf and
storing that in a StringProperty.

This is probably best for models with infrequent writes, like a user's
personal information.
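
A minimal sketch of that serialization trick, borrowing the Author/Story
models from the thread below; this uses the usual model_to_protobuf
pattern, and the BlobProperty name is made up:

from google.appengine.datastore import entity_pb
from google.appengine.ext import db

class Author(db.Model):
  name = db.StringProperty()
  age = db.IntegerProperty()

class Story(db.Model):
  title = db.StringProperty()
  # Denormalized, unindexed copy of the whole Author entity.
  author_blob = db.BlobProperty()

def embed_author(story, author):
  story.author_blob = db.model_to_protobuf(author).Encode()

def author_of(story):
  return db.model_from_protobuf(entity_pb.EntityProto(story.author_blob))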



On Dec 9, 7:06 am, Robert Kluin  wrote:
> Hi Zach,
>   I think this is a pretty useful idea.  It would probably help people
> new to app engine design more efficient apps since it would help get
> past the typical "you want me to DEnormalize my data?  That is a
> really really bad sin though, right?" feelings that are common when
> someone first suggests the idea.
>
>   I think both methods look super-clean, but I like the idea of
> SyncedProperty -- it implies the synced-property will use the type
> from the 'source' kind which is cool.  An entity might have multiple
> references to the same kind, so I guess we would need to specify which
> reference property to use and the property to sync from.  I am not
> sure this could be done within the class definition, since it is not
> yet defined.  Maybe it could look something like:
>
> class Story(db.Model):
>     author_key = db.ReferenceProperty(Author)
>     title = db.StringProperty()
>     date = db.DateTimeProperty()
>
> # Setup the Synced properties...
> Story.author_name = db.SyncedProperty(Story.author_key, Author.name)
> Story.author_age = db.SyncedProperty(Story.author_key, Author.age)
>
> What are some cleaner ways to accomplish that?
>
> I would think the Author kind needs a "revision number" property and
> that the Story kind would also then get an implicit
> "author_key_revision" property, so that we can ensure old updated
> never overwrite new updates.
>
> Then I guess the trick would be to adjust the Author kind so that when
> it is saved a (transactional) task is kicked off to process those
> updates.  Perhaps db hooks could be used?  To ensure consistency, I
> believe all writes to the Author entity will need to be done within a
> transaction.
>
> Robert
>
> On Wed, Dec 8, 2010 at 15:16, fission6  wrote:
> > I think it could be very powerful to consider the following concept as a way
> > to ease data / relational integrity while still maintaining a sense of
> > denormalization.
> > Often times certain fields for a model may be replicated to maintain a more
> > embedded view of properties. This leaves the developer responsible for its
> > upkeep often times through db.keys / ReferenceProperties between the model
> > and its counterpart (relation). Would it be possible to streamline this by
> > specifying that a given field should be synced to a model field; taking
> > advantage of the backwards/forward relations maintained by
> > ReferenceProperties.
> > For example:
> > class Author(db.Model):
> >     author_name = db.StringProperty()
> >     sex = db.StringProperty()
> >     birth_day = db.DateTimeProperty()
> >     .
> > class Story(db.Model):
> >     author_key = db.ReferenceProperty(Author)
> >     author_name = db.StringProperty()
> >     title = db.StringProperty()
> >     date = db.DateTimeProperty()
> >     .
>
> > Here the developer would need to maintain the author_name explicitly
> > whenever the referring author's name changes. Although this is not often,
> > and serves as a simple example, it's often routine to have to maintain some
> > sort of 'foreign property' in this manner.  It would be very powerful and
> > encouraging to offer something like the following descriptor for a field
> > which can be managed through handler somewhere in the background. Using the
> > models above:
> > class Author(db.Model):
> >     author_name = db.StringProperty()
> >     sex = db.StringProperty()
> >     birth_day = db.DateTimeProperty()
> >     .
> > class Story(db.Model):
> >     author_key = db.ReferenceProperty(Author)
> >     author_name = db.StringProperty(sync=author_key.author_name)
> >     title = db.StringProperty()
> >     date = db.DateTimeProperty()
> >     .
> > What happens here is that a manager handles the backwards relations for the
> > ReferenceProperty so that every time the related Author name changes / it's
> > synced to its counterpart "foreign fields". I am not sure that
> > the (sync=author_key.author_name) is the most ideal way to specify this /
> > perhaps just a new Property in general such as
> > SyncProperty(som

[google-appengine] Re: Help desk app

2010-12-20 Thread Julian Namaro
Hi Richard,

Check out Rietveld, it is open source and could be a good base for
building your project:
http://code.google.com/appengine/articles/rietveld.html


On Dec 21, 12:47 am, Richard  wrote:
> I am looking for something simple I can customize to integrate with our
> existing business processes. The business is essentially negotiating
> contracts with clients and then managing the projects with contractors.




[google-appengine] Re: How to implement 'Mark All Read' feature in appengine

2011-01-11 Thread Julian Namaro
This one-year-old thread discusses the same problem, I think:
http://groups.google.com/group/google-appengine/browse_thread/thread/b35a0387dfdf918b/55563f57306defbc?lnk=gst&q=mark+as+read#55563f57306defbc

A possible solution:

from google.appengine.ext import db

class Message(db.Model):
  sender = db.StringProperty()
  body = db.TextProperty()

class MessageIndex(db.Model):
  # parent = Message; one message can have several
  # MessageIndex children to spread the load of recipients
  recipient = db.StringListProperty()

class MessageRead(db.Model):
  # key_name = str(message_id) + recipient_id
  # The entity is written when a message is read
  pass

def getMessagesForUser(user, nb_msgs):
  # Keys-only query on the index, then one batch db.get() for the
  # messages and their MessageRead markers (None means unread).
  indexes = MessageIndex.all(keys_only=True).filter(
      'recipient =', user).fetch(nb_msgs)
  keys = []
  for k in indexes:
    keys.append(k.parent())
    keys.append(db.Key.from_path('MessageRead',
                                 str(k.parent().id()) + user))
  return db.get(keys)


I guess it's about the same as what Stephen is proposing.




On Jan 11, 1:21 pm, nischalshetty  wrote:
> You guys are all awesome. My friend and I are soon going to finalize a
> way and I'll post it here. Your feedback would be greatly appreciated.
>
> -N
>
> On Jan 10, 10:20 pm, master outside  wrote:
>
> > One option is to use a date time stamp when they mark all as read. If
> > you allow people to mark items as unread then you in addition need
> > have a way to detect that on old items. For example if you were  to
> > use '0' for unread and '1' for read you could use '2' to override the
> > the all read after time.
>
> > On Jan 9, 11:57 am, nischalshetty  wrote:
>
> > > Say a user selects 5000 unread messages (each message is an entity)
> > > and wants to mark all of them as read. It would be a massive update.
> > > Can anyone help me on what the best way to do such a large update is?
>
> > > Is there a different way to do this apart from updating each of the
> > > 5000 entities by marking their status as read? Even if there is no
> > > escaping the large update, how do you update so many entities on
> > > appengine?
>
>




[google-appengine] Re: Advice for database model

2011-01-11 Thread Julian Namaro
Your model won't work if one message can have several recipients. Is
this OK?
And is there a particular reason why you create an entity group for a
user and their messages?
You only need an entity group if you plan to update different entities
transactionally.

Also, you might be interested by this parallel thread:
http://groups.google.com/group/google-appengine/browse_thread/thread/51dd2e2fbfd76900



On Jan 10, 3:27 am, "sameer.mhatre"  wrote:
> I need suggestion for the database model designed by me for the app I
> am going to develop.
>
> Requirement:
> I am going to fetch messages (belongs to a particular user) and insert
> into the datastore and update the unread count for that user
> accordingly. Later on user can do operations like marking all/some
> read/unread to his/her own messages.
>
> Database Model:
> Entities: User, Message, Count
> Entity Group: list of messages (belongs to a single user) and count
> (to keep track of read/unread) entity. Entity group is per user
> specific (kept user's email id as parent for the entities to keep them
> in a single group).
>
> Operations:
> Insert: I want to add a list of messages by making a makePersistentAll
> method call and update count for a user accordingly. I think I have to
> do this in a single transaction. Messages to be added belongs to a
> entity group (means belongs to a single user).
> Update: Updating status for the list of messages (could be more than
> 1000 entities) and update Count accordingly. Again doing it in a
> single transaction. Messages to be updated belongs to a single entity
> group (means belongs to a single user).
>
> May be possible that more than one user's requests comes in and have
> to add/update entity groups per user simultaneously.
>
> Is there any problem with this design model? Anything you can suggest
> to me.
>
> Thanks
> Sameer




[google-appengine] Re: LIFO key name strategy?

2011-02-14 Thread Julian Namaro
I am not sure about the mathematics of it, but intuitively there is no
perfect algorithm for constructing timestamps in a reverse
lexicographical ordering, because appending a character to a string
always makes it lexicographically greater.

But I noticed the mapreduce library just picks a "ridiculously future
time" and counts down from there:
http://code.google.com/p/appengine-mapreduce/source/browse/trunk/python/src/mapreduce/model.py#190

The library also adds a random string to reduce the chance of
duplicates; maybe that can be replaced by a UUID if you're really
concerned about uniqueness.
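
A minimal sketch of that kind of descending key name (a far-future time
minus now, plus a random suffix); the constants and the format are only
illustrative:

import time
import uuid

# Far enough in the future that (FUTURE - now) stays positive for centuries.
FUTURE = 2 ** 34  # seconds since the epoch, roughly the year 2514

def descending_key_name(group):
  # Zero-padded so lexicographic order matches numeric order; newer
  # items get a smaller number and therefore sort first.
  remaining = FUTURE - int(time.time())
  return '%s-%011d-%s' % (group, remaining, uuid.uuid4().hex)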



On Feb 14, 5:57 am, Joseph Letness  wrote:
> Hi Calvin and Robert, thanks for your replies.  I should have been
> more clear about what I am doing, here is some more info:
>
> Calvin, thanks for the link to Ikai's blog post, I haven't seen that
> one and it was very interesting.
>
> Robert, here are specific answers to your questions:
>
> >>Why do you say: " I can't use a composite index since it would explode with 
> >>my use case"?
>
> I'm using Brett Slatkin's "Relation Index" method of building and
> querying set memberships (Google I/O 2009 - Building Scalable, Complex
> Apps on App Engine).  According to Brett, using a composite index on
> this kind would cause explosion, so any ordering of results will need
> to be done in-memory during the request. If the sort order is
> immutable, sorted key names can be used to order results based on the
> their lexicographical position.
>
> Since a creation timestamp is "immutable" data, I figured that using
> lexicographic key names would be the way to go.
>
> >>What would be fine if you could handle your entire result set in one 
> >>request?
>
> Ordering the result set in-memory.
>
> >>What are you trying to do?
>
> The app is a digital-asset manager.  Users need to be able to query a
> set (using the relation index method) and have the results return the
> most recent additions first.  The result set could easily be a few
> thousand, so I want to use cursor-pagination to display the results
> which would preclude any in-memory ordering.
>
> (I'm actually refactoring my existing app that I use to manage/deliver
> graphic assets to my clients so that they can add their own data.)
>
> >>Is there a single global LIFO stack, or are there multiple stacks?
>
> The entities are all of the same kind, however, LIFO behavior is
> localized to individual user groups.
>
> >>How are new items added to the stack(s)?,  What is the addition rate?
>
> Just one item per user request.  User groups would be just a few
> individual users probably less than twenty. The rate per group would
> be so low that chances of contention on any sort of accumulator would
> be almost nonexistent.
>
> >>Is there a requirement that the items are precisely ordered or are some (or 
> >>small) mis-orderings acceptable?
>
> Precision is NOT critical.  Close approximation of chronology is just
> fine.
>
> --The auto-generated ids are not strictly increasing
>
> I did not know that.  Thanks!
>
> --Using the current time may also be problematic since the machines
> will have slight variations, and in some cases significant variations.
>
> I was aware of that, but since absolute precision is not necessary I
> could still use the timestamp as an accumulator if there is such a thing
> as an "inverse-timestamp algorithm"!?!?
>
> So...
>
> After spending some more time thinking about this, here is what I plan
> to do:
>
> Create a counter model kind that is created with an IntegerProperty
> starting value of ten billion (I'd like to see somebody reach the
> bottom of that!). Give each user group its own counter and de-count
> the values in a transaction (or not, it might be simpler to dismiss
> contention and write a handler that ensures uniqueness of the key name
> but maintains approximate lexicographic position).  When the counter
> value is read, convert the value to a padded string and concatenate it
> with the user group name and a leading lowercase letter (k999836/
> usergroupname) and use that as the key name for the new asset.
>
> Furthermore, it occurred to me that as a result set is reduced to a
> manageable in-memory size, I could test for the length of results and
> offer the user the ability to custom order their results (asset name
> alphanumeric or asset kind, for example).  Just a thought.
>
> Thanks again for the replies, If anyone thinks there is a better
> approach please let me know, I kind of make this stuff up as I go
> along..
>
> --Joe
>
> On Feb 12, 10:52 pm, Robert Kluin  wrote:
>
> > Hi Joe,
> >   What are you actually trying to do?  Is there a single global LIFO
> > stack, or are there multiple stacks?  How are new items added to the
> > stack(s)?  In batches to one stack at a time, batches across stacks?
> > What is the addition rate?  How are items removed / processed from the
> > stack(s)?  Is there a requirement that the items are precisely ordered
> > or are some (or sma

[google-appengine] Re: Google Checkout + App engine stinks

2011-04-17 Thread Julian Namaro
Not ideal, but not the end of the world either. This delay might be
needed to combat fraud, which is a big problem for electronic money
services.


On Apr 17, 2:47 pm, Daniel  wrote:
> Every single time I try to up my quota, and pay a bit more for
> appengine, google checkout messes up and I have to wait 30 minutes to
> even attempt again.  Granted most of my problems lie in the fact that
> I'm using a google apps account.  My most recent attempt was thwarted
> when google checkout redirected me to combine a personal and google
> apps account, locking up appengine again and apparently deleting my
> google checkout account.  So now I have an application over quota
> linked to a non-existent google checkout account and I can't do
> anything for 30 minutes.
>
> Seriously, there has to be a better way to deal with billing.  I don't
> think I have ever seen an online application that had a 30 minute
> waiting period every time you made any attempt to change billing
> (especially the numerous failed attempts I'm riddled with).  I have
> ended up stuck in this ridiculous waiting period at least ten times
> and I don't think it's user error, rather flawed design on the app
> engine teams part.
