get_serving_url failed with the exception "The API call datastore_v3.Get() was
explicitly cancelled.", with deadline failures as a result.
--
You received this message because you are subscribed to the Google Groups
Google App Engine group.
To unsubscribe from this group and stop receiving emails from it,
GAE takes another API deadline dump. We had a few dozen failures related to get_serving_url.
On Thursday, November 21, 2013 2:16:24 PM UTC-8, James Gilliam wrote:
get_serving_url failed with the exception "The API call datastore_v3.Get() was
explicitly cancelled.", with deadline failures as a result.
I'm having a look at your project right now John.
This is quite interesting.
On Wednesday, November 20, 2013 12:42:20 PM UTC-5, John Belmonte wrote:
You may have missed my message behind Jeff's-- consistent data structures
can be done on memcache, directly in GAE. If you're using Python I
As limited as it could be, it does feel like a sweet trick.
On Tuesday, November 19, 2013 10:34:46 PM UTC-5, Vinny P wrote:
On Tue, Nov 19, 2013 at 3:28 PM, Stephen sdeasey...@gmail.com wrote:
As a refinement of the above:
- periodically parse the request log:
On Wed, Nov 20, 2013 at 4:08 AM, Rich r...@moozvine.com wrote:
Note that if I post _directly_ to this servlet, instead of the blobstore
upload URL, everything looks fine: I can read the input stream and it is a
nice, well-formed multipart; and I can access the parts perfectly using
Thanks Tom.
I went through pretty much the same servlet exercise a couple of hours before
seeing your post this afternoon.
The problem was that I missed the point in the docs that the code /must/ run
in a servlet.
(I'm not clear where it's stated, but I guessed from the comment about the test
I'm a bit confused by your statement.
If you want to run in the DevAppserver or in AppEngine then you don't need
to use the LocalServiceTestHelper at all. The LocalExample only does that
as a demo of how to run it as a local executable or if you want to use it
within Junit.
If you don't include
John,
Your solution has a lot of potential.
Especially since you distribute the load across the keyspace.
My only concern is the deque create/bind method.
Shouldn't it be a single method since multiple GAE instances could try to
create the deque concurrently?
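A minimal sketch of why a single get-or-create entry point is safe here, assuming the deque is rooted at a single memcache entry (all names below are illustrative, and an in-memory class stands in for google.appengine.api.memcache): memcache's add() stores a value only when the key is absent, so when several instances race, exactly one add() wins and the rest fall through to get().

```python
class FakeMemcache:
    """In-memory stand-in for App Engine memcache's add/get semantics."""
    def __init__(self):
        self._store = {}

    def add(self, key, value):
        # Atomic in real memcache: stores only if the key is absent.
        if key in self._store:
            return False
        self._store[key] = value
        return True

    def get(self, key):
        return self._store.get(key)

def get_or_create_deque(mc, name):
    """Single create-and-bind entry point; safe to call concurrently."""
    root_key = 'deque:%s:root' % name
    mc.add(root_key, {'head': 0, 'tail': 0})  # loser of the race is a no-op
    return mc.get(root_key)

mc = FakeMemcache()
a = get_or_create_deque(mc, 'jobs')
b = get_or_create_deque(mc, 'jobs')  # a "concurrent" caller binds the same root
```

Both callers end up bound to the same root entry, so create and bind collapse into one method with no separate existence check.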
On Thursday, November 21, 2013
I need the median value for multiple entities but only compared to
themselves.
In the future I will probably create the median across entities by doing a
median average.
On Tuesday, November 19, 2013 9:23:59 PM UTC-5, Jim wrote:
Are you doing a time-series type analysis where you need the
Missed your comment... this is what we're doing, except we avoid the 1MB
limitation by storing the data sets in blobs and storing the pointer to the
blob in the entity record.
On Wednesday, November 13, 2013 1:20:21 PM UTC-6, Kaan Soral wrote:
A single datastore entity can hold up to 1MB.
I'm already using that approach.
However, the distribution of my metrics requires a more precise solution for
my median.
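For the precise case, once the full series for an entity lives in a blob, an exact median is just a deserialize-and-sort away. A minimal sketch, assuming the samples are pickled into the blob (the helper name and Python 3's statistics module are illustrative, not from the thread):

```python
import pickle
from statistics import median

def median_from_blob(blob_bytes):
    """Exact median over an entity's full series, read back from its blob."""
    samples = pickle.loads(blob_bytes)
    return median(samples)

# An entity's series, serialized the way it would be written to blobstore/GCS.
blob = pickle.dumps([3.2, 9.9, 1.1, 4.7, 8.0])
median_from_blob(blob)  # exact median of the five samples: 4.7
```

This trades read cost for exactness: no sketch or approximation, but the whole blob must be fetched to answer the query.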
On Thursday, November 21, 2013 12:50:41 AM UTC-5, Luca de Alfaro wrote:
If you can weigh recent data more than older data, you might consider
instead of building a rolling
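The weighting idea above can be sketched as an exponentially weighted moving estimate, which keeps a single number per entity instead of a rolling window (the alpha value is illustrative; this tracks a weighted mean, not a true median):

```python
def ewma(samples, alpha=0.3):
    """Exponentially weighted moving average: recent samples count more."""
    estimate = samples[0]
    for x in samples[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate
```

The update is O(1) per data point and needs no blob rewrite, at the cost of approximating rather than exactly computing the central value.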
Correct. It does not have to be directly in the servlet. For example, App
Engine MapReduce (https://code.google.com/p/appengine-mapreduce/)
uses the GCS client to write out data, but it is many levels removed from a
servlet.
On Thu, Nov 21, 2013 at 4:44 PM, Ken Bowen k...@form-runner.com wrote:
On Wed, Nov 20, 2013 at 11:02 AM, MarcoCanali marco.can...@gmail.com
wrote:
What do I need to authenticate my simple PHP page on GAE to access
data on GS?
You need to tell the GCS bucket that your application is allowed access to
it. I wrote up a short demonstration at
The query volume doubled since I first started this thread and, at the
current rate, should double again by the end of next week.
I definitely need a solution that can handle thousands of QPS because
that's where we're heading.
I'm currently running this without consistency using a dedicated
Lots of memcache failures.
Then GAE flushed over 600 megs from premium memcache.
Before the cache could even restore, more failures.
Normal hit ratio is 99%, currently 87%.
Status continues green -- no issues, no failures, lol.
If data points for each entity are not coming too fast, you could use
blobstore/GCS to store your time series for each entity in a blob, then
store a pointer to that blob in your entity in the datastore. Updating is
expensive but can run off a task queue. Retrieval of the blobs is very
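A sketch of that update path, with an in-memory dict standing in for blobstore/GCS and illustrative names throughout: appending one sample means reading the whole blob, appending, and rewriting it, which is exactly why the writer belongs on a task queue while the entity itself only carries the pointer.

```python
import pickle

blobstore = {}  # blob_key -> bytes; stand-in for blobstore/GCS

def append_sample(blob_key, sample):
    """Read-modify-write of the whole series blob (the expensive step)."""
    series = pickle.loads(blobstore[blob_key]) if blob_key in blobstore else []
    series.append(sample)
    blobstore[blob_key] = pickle.dumps(series)  # full rewrite each time

# The datastore entity stores only the pointer, staying far under 1MB.
entity = {'series_blob': 'entity-42-series'}
append_sample(entity['series_blob'], 1.5)
append_sample(entity['series_blob'], 2.5)
```

In a real deployment the append_sample call would be the body of a task-queue task, so the slow rewrite never blocks a user-facing request.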
Using the 1.8.8 version of the SDK and depending on appengine-gcs-client
0.3.3, the following servlet based on your above example works:
public class GcsTest extends HttpServlet {
  private void writeToFile(GcsService gcsService, GcsFilename fullGcsFilename,
      byte[] content) throws IOException {
    GcsOutputChannel channel = gcsService.createOrReplace(
        fullGcsFilename, GcsFileOptions.getDefaultInstance());
    channel.write(ByteBuffer.wrap(content));
    channel.close();
  }
}
Indeed. FYI for anyone coming across this thread, below is running code (in
the DevServer), invoked from a test page via DWR, and returning the String
result to be displayed in an alert; the result is:
startBytesSize = 1160921
bytesReadSize = 1160921
Code:
public static String