On Mon, Oct 23, 2017 at 10:17 AM, Joshua Fox wrote:
>
> But it seems strange to have to write any code. It seems that
> Splunk/Sumo/ELK should have some builtin log ingestors requiring nothing
> more than a configuration in GCP or a provided log4j Handler.
>
I agree! I don't actually know these v
This is a complicated question with many answers, most of which will be
based on opinions. :)
There are a number of companies that do build products on App Engine,
although not many of them make much noise about it. Snapchat and Khan
Academy have historically written about their experiences. I
My suggestion is to use the Stackdriver log export features to export to
PubSub, Google Cloud Storage, or BigQuery, then write some appropriate
piece of code to send that to the destination of your choice. This will
have the nice advantage that it will work for *all* logs for all Google
Cloud p
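For example, something along these lines (the sink, project, and topic names
are placeholders, and the exact flags may differ across gcloud versions)
creates a sink that forwards App Engine log entries to a Pub/Sub topic:

gcloud logging sinks create my-log-sink \
    pubsub.googleapis.com/projects/my-project/topics/exported-logs \
    --log-filter='resource.type="gae_app"'

You then grant the sink's writer identity permission to publish to that
topic, and a small subscriber can forward the entries wherever you like.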
Personal opinion:
* Task queues are probably better to use from App Engine Standard. They've
existed for a long time, and I suspect the RPC API that is used to call
App Engine services is pretty efficient. Using Pub/Sub is likely to be
slightly worse in terms of overhead per API call, since it
That is exactly why I posted here: I'm hoping that, on the small chance someone
runs into this issue, they will find my post and it might help them resolve
the problem in less time than it took us! Cheers!
Evan
On Monday, August 28, 2017 at 11:21:26 PM UTC-4, Attila-Mihaly Balazs wrote:
>
> Thank y
If you see rare "stuck" requests that either hit the overall request
deadline (on frontend instances), or seem to hang "forever" on backend
instances, there is a very small chance you are running into this bug we
found in App Engine Standard's logging library. The brief summary is if you
call l
I'll assume this is App Engine Standard, running on a "backend" instance. I
have a bunch of things that do something similar, and to get the logs to
look sensible in the logs viewer, I start a new background thread to do the
work about once a minute. The logs get attached to the "start time" of
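In case it helps, this is roughly what that looks like (a hedged sketch; the
handler URL and function names are my own, and it only works on a manually
scaled instance):

from google.appengine.api import background_thread
import webapp2

def do_periodic_work():
    # The actual work; its log lines are grouped under this background thread.
    pass

class StartWorkHandler(webapp2.RequestHandler):
    def get(self):
        # Cron or a task hits this URL; it must run on a manually scaled instance.
        background_thread.start_new_background_thread(do_periodic_work, [])
        self.response.write('started')

app = webapp2.WSGIApplication([('/internal/start-work', StartWorkHandler)])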
each user is logged in to the
> db with their own credentials (ERP type app), but it might be worth
> exploring if the pooling and driver can handle that. It would be great to
> know how much slower you found creating the connection is vs write to open
> connection...
>
>
>
One issue I've noticed in the past: If for some reason the *previous* cron
job is considered "still running", it appears the cron service will not
trigger a new request. We've occasionally seen issues where a request on a
backend service gets "stuck" and runs for an extremely long time (usually
My understanding is that App Engine Standard can only talk to things that
are accessible via a "public" Internet IP address, so I'm not sure I'm
going to be able to provide any magic suggestions. However, I will mention
that in our experience we can get "reasonable" latency. In particular, we
c
I would recommend using the BigQuery streaming API. We do a heck of a lot
of that at Bluecore and it works well. Depending on how your data arrives,
you may want to use a Task Queue or similar to collect lots of rows
together to be able to insert batches into BigQuery, which will be more
effici
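As a rough illustration of the streaming call (a hedged sketch; project,
dataset, and table names are placeholders):

from oauth2client.client import GoogleCredentials
from googleapiclient import discovery

credentials = GoogleCredentials.get_application_default()
bigquery = discovery.build('bigquery', 'v2', credentials=credentials)

rows = [
    {'insertId': 'row-1', 'json': {'user_id': 1, 'event': 'click'}},
    {'insertId': 'row-2', 'json': {'user_id': 2, 'event': 'open'}},
]
bigquery.tabledata().insertAll(
    projectId='my-project',
    datasetId='my_dataset',
    tableId='events',
    body={'rows': rows}).execute()

The insertId values let BigQuery de-duplicate retries, which matters if a
task gets re-executed.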
This is probably happening due to Python package conflicts. By default,
when you do "import google" it finds the first "google" package in your
PYTHONPATH. It does *not* combine all the separate "google" packages
together. There is a way to "opt in" to combining packages, but the App
Engine package
It sounds like you are following the "Using third-party libraries" document
below. If you follow these directions, everything will work if you use
dev_appserver.py or run in production. However, when you run things
locally, you will need to add the lib directory to your PYTHONPATH in some
way.
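To be concrete, for a standalone local script the quick fix is just to put
lib on sys.path yourself (a hedged sketch, assuming your vendored packages
live in a lib/ directory next to the script):

import os
import sys

# Make the vendored packages in lib/ importable when running outside dev_appserver.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), 'lib'))

import requests  # now resolves to lib/requests, if you vendored it there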
It definitely works. I haven't investigated the performance carefully,
since we are primarily an App Engine app with a small amount of stuff
outside of App Engine. Basically: Our non-app engine stuff accesses the
datastore sufficiently quickly and reliably that we haven't had to
investigate it.
curl request with the
> header set, and the project id is also a part of the URL, isn't it?
>
> W dniu środa, 4 stycznia 2017 13:06:01 UTC-8 użytkownik Evan Jones napisał:
>>
>> You can check the X-Appengine-Inbound-Appid header on requests coming in
>> to your service
You can check the X-Appengine-Inbound-Appid header on requests coming in to
your service. On App Engine, it will be set by Google, so you can trust it.
Check that it matches the project(s) you expect, and return some HTTP error
if it doesn't match. See:
https://cloud.google.com/appengine/docs/g
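For what it's worth, the check itself is only a few lines (a hedged sketch
using webapp2; the allowed project id and URL are placeholders):

import webapp2

ALLOWED_APP_IDS = {'my-other-project'}  # placeholder calling project

class InternalHandler(webapp2.RequestHandler):
    def get(self):
        # Google sets this header on app-to-app requests; it cannot be spoofed externally.
        app_id = self.request.headers.get('X-Appengine-Inbound-Appid')
        if app_id not in ALLOWED_APP_IDS:
            self.abort(403)
        self.response.write('ok')

app = webapp2.WSGIApplication([('/internal/ping', InternalHandler)])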
Datastore gets also cost us a lot. The thing we have discussed doing, but
have not done yet, is to put some instrumentation into the pre-call hook [1]
to count the entities involved in queries and gets. This would allow us to
"slice" the data by entity, App Engine service, and if you added stack
tra
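For reference, a hedged sketch of what that pre-call hook might look like
(the actual counting is left out; the hook name is my own):

from google.appengine.api import apiproxy_stub_map

def count_datastore_call(service, call, request, response):
    # call is e.g. 'RunQuery', 'Get', or 'Put'; record it however you like.
    pass

apiproxy_stub_map.apiproxy.GetPreCallHooks().Append(
    'count_datastore_call', count_datastore_call, 'datastore_v3')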
Not sure if this is what you are seeing, however: if your request logs a
large amount of data, you will get multiple entries. The client log library
buffers some number of entries, and usually writes them at the end of the
request. However, if there are a large number, it will write partial
res
Yes, I'm seeing some more of these, in slightly different forms:
InternalError: Server is not responding
However, it may very well be our fault: We are also getting an increased rate
of "contention" exceptions, on a time frame that could line up with us
deploying new code. I haven't tracked it
I figured out why: gcloud version 134 fixed this. As of version 133, when I
run gcloud auth application-default login the scopes are:
https://www.googleapis.com/auth/cloud-platform
As of version 134, they are now:
https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/
I'm assuming you eventually saw the status page notice about this
issue: https://status.cloud.google.com/incident/bigquery/18022
On Tuesday, November 8, 2016 at 7:49:42 PM UTC-5, Richard Druce wrote:
>
> We're currently receiving:
>
> HttpError: https://www.googleapis.com/bigquery/v2/projects/**
Maybe I should add: I'm doing this from my machine, using my personal
credentials, and not an instance service account. I'm not sure if that
might make a difference or not.
On Wednesday, November 9, 2016 at 4:42:30 PM UTC-5, Evan Jones wrote:
>
> Yes, you need to follow t
Yes, you need to follow the directions to connect to an App Engine
application via the remote shell:
1. Run `gcloud auth application-default login`
2. Follow the directions here:
https://cloud.google.com/appengine/docs/python/tools/remoteapi
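Once the application default credentials are in place, the remote API setup
is roughly this (a hedged sketch; the app id and key are placeholders):

from google.appengine.ext.remote_api import remote_api_stub
from google.appengine.ext import ndb

remote_api_stub.ConfigureRemoteApiForOAuth(
    'my-project.appspot.com', '/_ah/remote_api')

# Datastore calls now go to the remote application:
print ndb.Key('User', 'alice').get()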
Result: HTTP 401 Unauthorized errors
I'm guessing
To fix it, you need to run:
gcloud auth application-default login
--scopes=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/userinfo.email
It would be really great if this could be made the default, or at least
documented in the App Engine docs, since I had to re
You may already be aware of this, but the documentation has started
encouraging users to "vendor" third-party dependencies by including them
directly in your application. This will work for any pure Python library. From the
list above, the ones where this will NOT work are lxml, numpy, PIL and
pycrypto. F
questions about this! We're here
> to help.
>
> Cheers,
>
> Nick
> Cloud Platform Community Support
>
> On Friday, October 28, 2016 at 5:09:57 PM UTC-4, Evan Jones wrote:
>>
>> Sorry I should clarify: We have a slightly different symptom. Our
>> affected
On Sun, Oct 30, 2016 at 9:04 AM, Joshua Fox יהושע פוקס wrote:
>
> Worst case: We've discussed writing something ourselves multiple times,
>> that would scan our Datastore and writes entities out in a more useful way,
>> but we haven't prioritized it.
>>
>
> Evan, can you explain why this is not a
Sorry I should clarify: We have a slightly different symptom. Our affected
customer gets timeouts connecting to
https://(projectid).appspot.com/(static path), but it works fine if they
use HTTP.
On Friday, October 28, 2016 at 5:09:21 PM UTC-4, Evan Jones wrote:
>
> Funny, I've b
Funny, I've been dealing with the same issue this week. We have a customer
who is on their corporate network, and they cannot access our static
resources if they use https://(projectid).appspot.com. If they use HTTP, or
our domain alias (https://www.bluecore.com/) it works. I haven't been able
To chime in on this: I agree with you that backups are important to protect
against operator error. As a concrete example: we made a thankfully minor
error with BigQuery, so now we periodically back up all our BigQuery tables.
The datastore backup tool is not great, but we do it for the same reas
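As an illustration of the kind of BigQuery backup we run (a hedged sketch;
the dataset, table, and bucket names are placeholders):

bq extract --destination_format=AVRO my_dataset.my_table \
    gs://my-backup-bucket/my_table/backup-*.avro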
You are running a Flexible Environment virtual machine here. This
environment does not include the App Engine libraries (nor is using them
recommended, unless you need to port an existing app). You should just use
urllib2 or the requests library directly, since you are running on a VM.
You have no n
Stackdriver is great because it is built-in, so it's trivial to get started.
It is not (yet) a very sophisticated data visualization and alerting tool
though.
We are starting to look into using Datadog, since it permits you to create
graphs and alerts on queries over custom metrics, which we are
If you need strong consistency guarantees across network requests, you are
going to need to do a bit of extra work, since the datastore's transactions
only work within a single request. To make this concrete:
1A Client A request: Read Work key "foo", send it to the browser.
1B Client B request:
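One common way to handle this across requests (a hedged sketch; the model,
property, and function names are my own) is to carry a version number on the
entity and check it inside a transaction when the client writes back:

from google.appengine.ext import ndb

class Work(ndb.Model):
    payload = ndb.TextProperty()
    version = ndb.IntegerProperty(default=0)

@ndb.transactional
def save_if_unchanged(key, expected_version, new_payload):
    work = key.get()
    if work.version != expected_version:
        return False  # someone else wrote since this client read it
    work.payload = new_payload
    work.version += 1
    work.put()
    return True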
and try to
>> get some conclusive information this time.
>>
>> On Friday, October 7, 2016 at 5:18:15 PM UTC-4, Evan Jones wrote:
>>>
>>> I just did a clean install of gcloud 129.0.0 and this has not changed:
>>> the lib directory still does not include
'll do some investigation and will update the
>> thread when I have more info.
>>
>> On Friday, September 30, 2016 at 11:45:17 AM UTC-4, Evan Jones wrote:
>>>
>>> It appears that the documentation now recommends using the version of
>>> App Engine ship
As far as I am aware, the only documented reason to use an entity group is
because it can give you strong consistency, which does not necessarily
require a transaction. To take your example, let's imagine we are trying to
decide if a user's "favourite colors" should be a list property on the Use
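If you go the entity-group route, a hedged sketch of the strongly consistent
read (model and key names are made up):

from google.appengine.ext import ndb

class FavoriteColor(ndb.Model):
    name = ndb.StringProperty()

user_key = ndb.Key('User', 'alice')
FavoriteColor(parent=user_key, name='green').put()

# An ancestor query is strongly consistent, no transaction needed:
colors = FavoriteColor.query(ancestor=user_key).fetch()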
It appears that the documentation now recommends using the version of App
Engine shipped with gcloud (via gcloud components install
app-engine-python). However, this version does not include any of the
built-in versions of Django, which can be loaded by adding references to
the "libraries" sect
>
>
> On Tuesday, September 20, 2016 at 10:24:00 PM UTC+1, Evan Jones wrote:
>>
>> I'm not using Node JS, but we are using the Python flexible environment
>> with a service account. It needs to have the Editor role for the project.
>> We authenticate it w
I'm not using Node JS, but we are using the Python flexible environment
with a service account. It needs to have the Editor role for the project.
We authenticate it with:
gcloud auth activate-service-account (service account email) --key-file
(file.json)
gcloud config set project (projectid)
From my limited experience of doing ETL-type tasks on App Engine, I'd
suggest using a "backend instance" with basic scaling if you can (although
these instances are probably slower than the n1-standard-2, and it
certainly will have much less memory). This would avoid the complexity of
managing
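A hedged sketch of the kind of module configuration I mean (the module name,
instance class, and script path are placeholders):

# etl.yaml -- a separate basic-scaling service for batch work
module: etl-worker
runtime: python27
api_version: 1
threadsafe: true
instance_class: B4
basic_scaling:
  max_instances: 1
  idle_timeout: 10m
handlers:
- url: /.*
  script: etl.app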
I'm pretty sure that we set both login: admin and secure: always in a bunch
of our apps, and the Task Queue happily connects to those handlers.
Evan
On Wednesday, August 31, 2016 at 9:55:03 PM UTC-4, Josh Hunt wrote:
>
> Example:
>
> - url: /internal/.*
> script: index.php
> login: admin
>
I have no idea what the problem might be, but we use some manually scaled
modules, and I can confirm that we only see /_ah/start on rare occasions
(e.g. instance uptime is at least a few hours, and possibly more). I *have*
seen this when we run into memory limits, causing our instances to get
r
There are a ton of things that are different between your situation and
what I'm familiar with, and I'd want to see a ton more details to actually
have a suggestion. However, as one random thing that we've seen make a
difference: Adjusting the "max_idle_instances" seems to impact billing
prett
Our solution: in App Engine we "trust" the task queue header(s), since
there are guarantees that it cannot be inserted by anything outside of app
engine. We look for that header, and automatically consider the request
authenticated. See the docs for the header names and details.
If you are usin
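Concretely, the check is something like this (a hedged sketch using webapp2;
the URL is a placeholder):

import webapp2

class WorkerHandler(webapp2.RequestHandler):
    def post(self):
        # Only the Task Queue service can set this header; external requests cannot.
        if not self.request.headers.get('X-AppEngine-QueueName'):
            self.abort(403)
        # ... do the task work here

app = webapp2.WSGIApplication([('/tasks/work', WorkerHandler)])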
I noticed this as well, using Python and Go with the Flexible environment.
It's definitely a bug if you can't get the actual IP somehow. I think I can
get the actual IP in the REMOTE_ADDR "environment variable" in Python (also
as self.request.remote_addr), and I *thought* I was getting the actual
Are you using the Standard Environment? I think this happens if you are
trying to do work for longer than the App Engine request timeout (60
seconds), or after you have responded to the original web request. The key
words in this error are "context expired", which means this is some sort of
tim
I forgot to mention, the following Stack Overflow post is super useful on
the subject of initialization with Go on App
Engine:
http://stackoverflow.com/questions/36184701/initializing-go-appengine-app-with-datastore
On Friday, August 5, 2016 at 9:56:38 AM UTC-4, Evan Jones wrote:
>
>
I've actually only used Go to *query* the datastore, so I haven't tried
creating keys. However, a quick note that might be related:
If you are using the "Standard Environment": you can't use App Engine APIs
outside of handlers, because you need a "real" App Engine context. Make a
web request t
You cannot use anything that uses JNI on App Engine. You'll have to see if
you can configure this Snappy library to use a "pure Java" version, instead
of trying to load a native library. See:
https://cloud.google.com/appengine/docs/java/runtime
On Wednesday, August 3, 2016 at 9:42:23 AM UTC-4,
The documentation for datastore statistics states:
https://cloud.google.com/appengine/docs/python/datastore/stats
"When the statistics system creates new statistic entities, it does not
delete the old ones right away. The best way to get a consistent view of
the statistics is to query for the G
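In code, that query looks roughly like this (a hedged sketch using the db
stats API):

from google.appengine.ext.db import stats

global_stat = stats.GlobalStat.all().get()
if global_stat:
    # Use this timestamp to filter the other stat kinds for a consistent view.
    print global_stat.timestamp, global_stat.count, global_stat.bytes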
My understanding: Yes, you must have an App Engine application, and you
must run appcfg.py update_queues to create the App Engine Task Queues
before you can use them.
If you do not have an App Engine application, you should probably be using
Google Pub/Sub instead. It is many times faster and m
Yeah, that's probably the best one. :) Between bigger disks and enabling
health checking, that would probably eliminate this problem. Thanks for the
suggestion!
On Monday, May 2, 2016 at 3:58:38 PM UTC-4, pdknsk wrote:
>
> > Is there a workaround here? I'm tempted to write an agent which sshes
Strange! I have used manual scaling flexible environment services, and they
do what you would expect: They start that number of instances and nothing
else. Is there a chance that your deploy did not succeed? Or that it is
running multiple versions? If a previous version starts receiving traffic,
we write our own autoscaler, we might avoid this
bug.
On Sunday, April 24, 2016 at 12:44:40 PM UTC-4, Evan Jones wrote:
>
> After a day, the problem is still happening. If there are any workarounds,
> I'd love to hear it, because I think this is costing us real money. Might I
st>, so we
> can investigate further and get updates and feedback from the engineering
> team. There don't appear to be any existing reports of these issues
> publicly, but they may already be tracked internally (especially #1).
>
> On Sunday, April 24, 2016 at 4:11:15 PM UTC-4,
We are considering implementing our own automatic scaling policy by
periodically monitoring our service and calling the SetNumInstances() API
that App Engine provides. We want to work around the bug that when you
deploy to an automatically scaled Flexible Environment service, the current
policy s
I'm a fan of the idea of the flexible environment VMs. However, they still
have quite a few rough edges. I'd love to help get these fixed, since they
are affecting our production application, and costing us money.
1. If the application logs fill the disk, they stop the server. I'm using the
python
After a day, the problem is still happening. If there are any workarounds,
I'd love to hear it, because I think this is costing us real money. Might I
avoid this issue by using manual scaling, and calling the
modules.SetNumInstances() API myself?
I can see the problem very clearly on the chart
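For reference, the call I have in mind is roughly this (a hedged sketch; the
service name and instance count are placeholders, and it only applies to
manually scaled services):

from google.appengine.api import modules

modules.set_num_instances(4, module='worker')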
I just redeployed. It has been running for the last few hours, and so far I
haven't seen this happen again. I'll report back after it has run for a few
days. I did notice some slightly unusual and possibly buggy behaviour
though:
1. First: During the "set up" process, the vm boot code still tries t
entirely clear
> where this issue is arising. The information requested above may be helpful
> to learning more about this.
>
> On Wednesday, April 20, 2016 at 6:14:46 PM UTC-4, Evan Jones wrote:
>>
>> *TL;DR*: I'm seeing the flexible environment autoscaler start a machin
I'm just a user, but I think you should have access. When I SSH to my
flexible environment machine, I've run the following command to check that
this works:
$ printf "get mykey\r\n" |nc -v 172.17.0.3 11211
172.17.0.3: inverse host lookup failed: Unknown host
(UNKNOWN) [172.17.0.3] 11211 (?) open
*TL;DR*: I'm seeing the flexible environment autoscaler start a machine,
then stop it 2 minutes later, before it has even finished starting the
application. Is this a misconfiguration on my side or a bug? Could the
issue be that our application loads too slowly? Can I reconfigure this
somehow?