As another longtime user, I have mixed feelings. Some things are better,
some things are worse. I certainly wouldn't give up the current GAE to go
back to the old one, but mostly because I still have enough of the old one
available.
I straddle the old world and the new world, using the "old"
I can offer one observation about GAE/Standard using Java with 20+s app
startup times: User-facing cold starts can be problematic when you have
low/intermittent traffic, but it smooths out when you have some traffic. I
don't typically see user-facing cold starts; my guess is that GAE spins up
There's one big disadvantage of the URLFetch service, which is that it's
limited to 32MB inbound and 10MB outbound. Those are pretty small numbers
in this day and age.
In Java8-land I've used both Apache HttpClient and OkHttp and both work
normally.
Jeff
On Thu, Jul 5, 2018 at 10:06 PM
I’ve done a lot of media work on GAE. Most of your work will be with the
Google Cloud Storage API, and you can do that work from pretty much
anywhere, including GAE. You won’t be doing video transcoding from GAE
Standard, but you probably shouldn’t be doing video transcoding at all.
Plenty of
I’m using it for a real (reasonably complicated, including guice with a lot
of AOP) app, although very light traffic. AFAICT it works as advertised.
Jeff
On Thu, Jul 6, 2017 at 12:45 PM, Patrick Jackson
wrote:
> Super excited to see Java 8 support. The removal of the
I’m using Cloud Postgres from Java8 using the vanilla JDBC driver and DBCP
for pooling. AFAICT it works fine.
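For illustration, a minimal pooling setup along those lines — a sketch assuming commons-dbcp2 and the Postgres JDBC driver; the host, database, and credentials are placeholders, not real values:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import org.apache.commons.dbcp2.BasicDataSource;

// Hypothetical pool configuration; swap in your real JDBC URL and credentials.
BasicDataSource pool = new BasicDataSource();
pool.setDriverClassName("org.postgresql.Driver");
pool.setUrl("jdbc:postgresql://<host>/<database>");
pool.setUsername("app");
pool.setPassword("secret");
pool.setMaxTotal(8);  // keep the pool small; GAE instances are modest

try (Connection conn = pool.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT 1")) {
    ps.executeQuery();
}
```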
Jeff
On Sun, Jul 2, 2017 at 2:57 AM, Attila-Mihaly Balazs
wrote:
> Thank you for the reply.
>
> I'm not all that familiar with Java on GAE (I've been using Python
>
> an entity before transactionless(), loads() in transactionless () will
> not
> include those changes?
> 3. Is it possible to define a retry behavior for failed transaction
> commits with ofy()? I'm looking to control the retry count.
>
> Thanks!
>
>
values etc., or
> you just sent documents to the Search API without much thought and it was
> still efficiently storing them?
>
> Nick
> On 10/06/2017 8:40 AM, "Jeff Schnitzer" <j...@infohazard.org> wrote:
The search index is incredibly efficient. I had some data I was indexing in
the datastore, and the index was consuming 100GB+. When I moved it to the
Search API, the index consumed a few GB. Afterwards I felt silly for asking
for the quota raise in advance.
YMMV, of course.
Not a direct answer
If all you wanted to do was make sure that all writes happen together and
you otherwise don’t care about data consistency, then sure. However, 9
times out of 10 when people ask this question, they’re making a terrible
mistake and what they really want to do is load the entities in the
transaction.
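The safe pattern is to do the load and the save inside the same transaction, so the commit fails (and retries) if anyone else touched the entity group. A sketch with Objectify — the entity and field names here are made up:

```java
// Hypothetical Account entity; the point is that the load happens
// inside transact(), enlisting the entity group in the transaction.
ofy().transact(() -> {
    Account acct = ofy().load().key(acctKey).now();
    acct.setBalance(acct.getBalance() + amount);
    ofy().save().entity(acct).now();
});
```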
I’m confused - this is “low-volume data” but you're worried about exceeding
4000 connections? Maybe I’m missing something in translation; how many QPS
do you expect?
4000 active connections is pretty crazy, even for a high-traffic system.
http://www.mysqlcalculator.com/ expects that to consume
One programmer’s opinion:
I did a migration from the Blobstore to GCS some years ago. The docs for
GCS were a little bit obtuse and took a little work to figure out, but the
end result was satisfying - GCS is a vastly better blobstore than the old
Blobstore. And the GCS API is much more powerful
I’ve been using both plugins (you can configure them both in your pom) for
a while now and can confirm: While the cloud tools plugin had a number of
issues up until a couple weeks ago, it works pretty well now. I dropped the
com.google.appengine plugin and now use the cloud tools one fulltime.
It’s unfortunate but: Always strip off ~ and anything before it when an
appid comes from an API. Then you don’t need to think about it.
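A one-liner does it; indexOf returns -1 when there's no '~', so the same call is a no-op for already-clean ids (the class and method names here are mine, not from any SDK):

```java
public class AppIds {
    /** Strips a partition prefix like "s~" from an app id, if present. */
    public static String normalize(String rawAppId) {
        // indexOf('~') is -1 when absent, so substring(0) returns the id unchanged
        return rawAppId.substring(rawAppId.indexOf('~') + 1);
    }
}
```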
Jeff
On Sat, Apr 15, 2017 at 10:51 PM, Joshua Fox wrote:
>
>
> On Mon, Apr 10, 2017 at 9:02 PM, 'Jordan (Cloud Platform Support)' via
>
For multiple years now I have been using CircleCI to deploy multiple
projects to GAE. My deploys are: “git checkout production; git merge
master; git push”. It’s super easy and pretty well documented at Circle’s
website. I’m happy to share my configs too.
“push to deploy” doesn’t seem like a CI
> (https://issuetracker.google.com/u/1/issues/36589995) which prompted this thread here.
>
> ÞG
>
>
> On Tuesday, April 4, 2017 at 1:24:52 PM UTC, Jeff Schnitzer wrote:
Your path of least resistance is to use Maven (as opposed to Gradle or
Ant/Ivy). If you’re working in Java, you definitely want an IDE - Eclipse
and IntelliJ both have their following. I think most people will be happier
with IntelliJ but you have to pay for it.
There are two viable maven plugins
Not a Googler, but I’ve been around a while:
* They moved support to stackoverflow in early 2012. It seems to be a
common practice. I have mixed feelings about it myself, but it’s a thing,
and definitely not a new thing.
* The Mail API and Channel API have both been zombie products since
Overflow), so it would be fine to post such threads in this
>>> forum.
>>>
>>> As an additional, final comment, I'll say that we do pay close attention
>>> to the requests for services / service improvements that users bring
>>> forward, whe
Thanks,
> -Louise
>
> Den torsdag den 30. marts 2017 kl. 20.30.45 UTC+2 skrev Jeff Schnitzer:
>>
>> There may be clock skew in the cluster; 15s is a lot but you can’t assume
>> that log entry timestamps are exact.
>>
>> You should send email by enqueueing a transactional task.
racker. We've taken the
>> feedback from many users on the Mail API and bulk sending and while we
>> don't promise any concrete action given this feedback, know that we have
>> taken it into account and we want to thank you for bringing up how you view
>> things, what you'd li
these webhook operations do not cause
> the send mail operation to get a CME?
> I can only think of very hacked and ugly solutions - e.g. to have a
> lock/switch which is on when the send mail operation is running, and only
> executing webhook operations when the lock/switch is off.
>
When you load a key in a datastore transaction, that EG is enlisted in the
transaction. Any change to that EG by any other process in the system will
cause your commit to rollback with CME. Even if your transaction is
"read-only”.
When I said “linear” I meant that if you have a quiet datastore (no
You can only transactionally enqueue tasks from GAE standard using the
ApiProxy-based interface. You cannot yet transactionally enqueue tasks
(named or otherwise) with the new REST-based APIs for the datastore and
task queue. Flex only supports the REST APIs.
It’s great to hear that this is on
a generalized way of doing 2pc enqueueing on the queue…
Jeff
On Sun, Mar 26, 2017 at 10:18 AM, Jason Collins <jason.a.coll...@gmail.com>
wrote:
> "ability to transactionally enqueue tasks" <-- probably my favourite
> feature.
>
> On Sat, 25 Mar 2017 at 11:06 Jeff
That does not sound correct.
If you have a linear order of operations in the same transaction (or
entirely without transactions), you should never see concurrent
modification exception. Timeouts are a different matter.
To the OP: Where are your transaction boundaries? Are you accidentally
On Sat, Mar 25, 2017 at 8:39 AM, Jason Collins
wrote:
> "Not only for some of the API's that are unique to standard"
>
> Wilfred, which APIs specifically?
>
The most notable are the Task Queue API (with the ability to
transactionally enqueue tasks) and the Search API.
I can tell you what I do (in the standard env):
* Dev environment credentials are stored in source code. Every developer
needs access to these, might as well make it easy.
* Production/staging environment credentials are stored in a standalone
git repo and merged by the build script.
I find
If you’re doing aggregations across <10k rows, you don’t need (or want)
BigQuery or map/reduce or any other “big data” solution. You want a basic
SQL database. Use Cloud SQL if you want something easy to integrate with
GAE.
You’re not going to get SQL aggregations out of the datastore; it’s just
I use Postgres. Replicate a subset of your entity data to a datastore that
supports aggregations. Assuming your dataset fits in a traditional RDBMS,
they’re awesome for aggregations and ad-hoc queries.
The datastore makes an awesome primary datastore because it is zero
maintenance and never
Hi. I don’t have an easy answer for you, but I’ve been watching this thread
and can give you some advice. GWT and Cloud Endpoints have changed over the
years and there are probably no tutorials currently relevant. That
shouldn’t really matter. Cloud Endpoints will generate some javascript for
you,
Thanks Jordan. I’m in the beta program, however, it’ll be a while before I
can use the new API for real work. I pretty much live on push-queues and
the most critical feature (to me) is transactional enqueueing. I honestly
have no idea how anyone could build any kind of real-world app without
Just to chime in - Objectify will eventually support the new HTTP-based
API to the datastore. However, until Google has a viable HTTP-based API to
the task queue that lets us transactionally enqueue tasks, I can’t use the
new datastore API. And I’ll be unlikely to make a successful migration of
Users are clever and insidious when it comes to breaking software. If you
aren’t using a transaction in a get/update/put cycle, there are all manner
of ways that updates could get screwed up or lost. Consider that requests
might be sitting for many seconds at a cold start and therefore come in out
There is no standard way of storing entities in memcache. Objectify uses
its own namespace and uses the string version of Keys as the cache key. I
don’t know what NDB does.
Cache invalidation is already a hard problem (that and naming things, as
they say). If you want to access data from both
I pretty much live without traditional backups. I use the cheesy old backup
tool to make a copy of everything meaningful once every few days but it’s
pretty much just a backstop against the worst-case-scenario. If we had to
rely on it, it would be a TON of work. And keep in mind that the backup
The GAE classloader does some security checking that isn’t present in the
dev container. Plus actual loading of classes from jars seems to be slower
(probably some sort of network filesystem is involved). 5-10s startup time
locally is quite long; a corresponding 30-60s server-side seems realistic,
Since this thread keeps coming up… I’ll make you all an offer: For $1k I’ll
migrate your GAE app to Sendgrid, Mailgun, or whatever other email service
you want. Assuming your code isn’t spaghetti, it will probably take me an
hour. It should take you about the same or less.
This is sooo much
I run a whitelabeled ecommerce app on GAE with hundreds of domains. It is
possible. It is a significant PITA.
We get away with it because we have a high-touch onboarding process.
There’s no easy way for this to become self-service. Your best bet would be
to build a proxy, but make sure you don’t
Geezus…inline…
On Mon, Jun 27, 2016 at 4:41 PM, Joshua Smith
wrote:
> There are so many examples:
>
> HRD is *not* a comparable alternative to the original data store. It lacks
> a bunch of consistency guarantees that require all sorts of hacky
> workarounds in apps
Just a few thousand constants? Even if each was 1k (!), you’re talking
about a few megabytes of RAM. Why not just load them from CSV into RAM and
keep them there?
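To make that concrete — a sketch of loading a two-column CSV into memory once at startup; the "key,value" format and the class name are hypothetical:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.HashMap;
import java.util.Map;

public class ConstantTable {
    private final Map<String, String> constants = new HashMap<>();

    /** Reads "key,value" lines into memory; a few thousand rows is a few MB at most. */
    public ConstantTable(Reader csv) throws IOException {
        BufferedReader reader = new BufferedReader(csv);
        String line;
        while ((line = reader.readLine()) != null) {
            int comma = line.indexOf(',');
            if (comma < 0) continue;  // skip blank or malformed lines
            constants.put(line.substring(0, comma), line.substring(comma + 1));
        }
    }

    public String get(String key) { return constants.get(key); }
    public int size() { return constants.size(); }
}
```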
Jeff
On Wed, Jun 8, 2016 at 8:20 PM, YuRen Lin wrote:
> Hi, all
>
> I am in the game industry and use Google
On Wed, May 25, 2016 at 4:04 PM, 'Alex Martelli' via Google App Engine <
google-appengine@googlegroups.com> wrote:
> [*] people on this group keep expressing doubts about that, but, facts are
> on my side -- e.g, classic-runtime App Engine modules just gained the
> ability to connect to Cloud SQL
Retrolambda works on the Standard Environment, and gives you the most
critical feature of Java8:
https://github.com/orfjackal/retrolambda
(just be sure to run your CI system on java7 to ensure no unsupported
8-isms creep into production)
Jeff
On Wed, May 11, 2016 at 12:19 PM, Chad Vincent
Geez, the blobstore? Old skool :-)
Jeff
On Tue, May 3, 2016 at 3:14 PM, Emanuele Ziglioli
wrote:
> yeah, one less reason to bet on App Engine: it's such a moving target.
> We can't afford to rewrite critical parts of the code every few years just
> because APIs get
> "Task Name 1" -> Datastore transaction (???) -> ????
>
>
> On Wednesday, April 27, 2016 at 12:43:43 PM UTC-7, Jeff Schnitzer wrote:
>>
>> My task queues sometimes have a lot of tasks sitting in them for various
>> reasons (usually failing/retrying). I’d _really_ love
im Lewandowski
> App Engine, Product Manager
>
>
> On Thursday, April 28, 2016 at 9:15:42 AM UTC-7, Jeff Schnitzer wrote:
>>
>> Hi Google…
Hi Google…
Yesterday I noticed that the “task queue” viewer in the cloud console was
changed to paginate the results every 10 entries. I can no longer see the
state of all my queues at a glance. Constantly clicking through four pages
is irksome, especially when there is a vast sea of blank space
My task queues sometimes have a lot of tasks sitting in them for various
reasons (usually failing/retrying). I’d _really_ love to be able to look at
a queue at a glance and see what’s in it. Instead it’s a wall of numbers.
Ah hah! I can name tasks, and this will show up in the interface, right?
“nearly exact” I meant to say. EB and Flexible runtimes are both mapped to
a whole hypervisor VM.
Jeff
On Sat, Apr 9, 2016 at 2:24 PM, Jeff Schnitzer <j...@infohazard.org> wrote:
If you want a nearly close comparison to ElasticBeanstalk, use a Managed VM
(or “Flexible Runtime” I think they’re being called now). They are
currently billed at the same rate as the underlying Compute Engine instance.
Jeff
On Fri, Apr 8, 2016 at 11:00 AM, Susan Lin
wrote:
>>>
>>> I want to state that I disagree with Jeff. Whoever maintains the mail
>>> service is doing a great job and something very useful. Just for the
>>> record...
>>>
>>> Thanks,
>>> PK
>>> p...@gae123.com
>>>
> which can be
> used to detect the most common email failures. I hope this helps ease your
> concerns.
>
> Best wishes,
>
> Nick
> Cloud Platform Community Support
>
> On Sunday, April 3, 2016 at 12:04:12 PM UTC-4, Jeff Schnitzer wrote:
>>
>> On Mon,
On Mon, Mar 28, 2016 at 8:24 AM, Rob Williams
wrote:
>
> The App Engine Mail API is fully featured and fully documented.
>
I hate hearing people say things like this. The _bare minimum_ expected of
a service that delivers email is some tracking of whether or not that
These are apples and oranges.
What you’re really asking for are the advantages and disadvantages of
relational databases vs the datastore (Hibernate and Objectify are merely
ways of accessing those two types of stores, respectively).
The datastore is great from a “fire and forget it” perspective
I’m *very* happy to see this announcement.
There’s no technical advantage to having mail builtin to GAE and it’s a
waste of developer resources that would be better spent building the things
that aren’t cheap commodity services. I want new features for the
datastore, a more responsive console,
Separate in your mind the Blobstore (place where data can be put) and the
Blobstore API (which is a programming API builtin to the GAE service layer
and can be used with GCS).
The Blobstore is a nonstarter. It is deprecated. It will go away.
The question you want to ask is: What is the best way
Nevermind the blobstore. The old Blobstore is deprecated (does it even
exist anymore?) and the Blobstore API was really designed around submitting
data to this service. Google adapted it to Google Cloud Storage but you’re
much better off just using the GCS API directly.
Jeff
On Thu, Feb 25, 2016
ere. It seems certainly possible that Ofy
>>> could be transforming the query. The only way is to test, and the methods
>>> to do so have been given. If this turns out to be a Datastore problem,
>>> rather than just Ofy, a post should be made to the Public Issue Tracker.
&
I already have my engineering team in an ‘eng’ group. Makes perfect sense
to me.
Jeff
On Mon, Jan 18, 2016 at 11:50 AM, Nick wrote:
> Thanks for mentioning this - my first impression was 'who the hell asked
> for this?' But now that you point it out, this is group
Srsly. I immediately went to the Permissions screen and rearranged it all.
This is awesome.
Jeff
On Sun, Jan 17, 2016 at 11:57 AM, Adam Sah wrote:
> Actually, this is a big deal, don't knock it.
>
> Adam
> GAE python user since 2008
>
> --
> You received this message
Please star this issue:
https://code.google.com/p/googleappengine/issues/detail?id=7415
I have to warn you, however, what you are doing (forwarding all traffic
through a proxy) is dangerous. Google has some sort of attack detection and
prevention system that recognizes malicious traffic patterns
Google didn't kill you. You ran some poorly-built opensource software that
you didn't understand and you misconfigured it in a way that produced a
ticking time bomb. I have a lot of sympathy for you - mistakes like this
are easy to make - but you can't blame the hosting platform for your
mistake.
The datastore now supports geospatial indexes directly:
https://cloud.google.com/appengine/docs/java/datastore/geosearch
It's alpha, you have to ask for an invite, and I have no idea what the PHP
bindings would look like, but it lets you do a "show records within a
circle (or box) of a point"
Run the equivalent query with the low-level API. If it produces the same
unexpected results, it's Google's issue. If it fixes the problem, it's
mine. Other than a potential hybridization issue I can't imagine what
Objectify could be doing wrong since it's just adding projections to an
underlying
NIO is not available on standard Google App Engine. The driver will also
have problems creating threads, if it tries to do so (most db drivers seem
to, though I haven't looked at Couchdb's). If you really need Couchbase,
consider using Managed VMs.
Jeff
On Sat, Dec 19, 2015 at 12:38 AM, Benjamin
A projection query should produce 1 read op for the query and 1 small op
per row fetched.
Possibly Objectify is trying to hybridize the query (convert to keys-only +
batch get) even though it's a projection. If this entity has the @Cache
annotation, try running the query like this:
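The message is cut off here; a hedged guess at the suggestion, based on Objectify's chainable query API (the entity and field names are made up):

```java
// Force a true projection query instead of Objectify's keys-only + batch-get hybrid.
List<Thing> rows = ofy().load().type(Thing.class)
        .project("name")
        .hybrid(false)
        .list();
```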
12:02:23 PM UTC-8, Yun Li wrote:
>>>
>>> And there is NO transaction at all. We don't use transaction. But I am
>>> not sure if objectify uses the transaction in put.
>>>
>>> On Thursday, December 3, 2015 at 9:59:00 PM UTC-8, Jeff Schnitzer wrote:
>>
I don't lose any sleep about Google losing my data. I do lose sleep over
accidentally mangling it during a migration. I currently run nightly
backups using the datastore admin (and cron), but it's just insurance - it
would be a catastrophe if I ever attempted to restore it.
Jeff
On Sat, Dec
The statistics are batch updated (IIRC, nightly or thereabouts). They are
not intended to be used for precise counts. If you need an exact count,
maintain a sharded counter.
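The shape of a sharded counter, sketched with in-memory longs standing in for the per-shard datastore entities. In a real implementation each shard is its own entity group (so concurrent increments don't contend) and count() is a batch get that sums them:

```java
import java.util.Random;
import java.util.concurrent.atomic.AtomicLong;

public class ShardedCounter {
    private final AtomicLong[] shards;
    private final Random random = new Random();

    public ShardedCounter(int numShards) {
        shards = new AtomicLong[numShards];
        for (int i = 0; i < numShards; i++) shards[i] = new AtomicLong();
    }

    /** Increment a randomly chosen shard; spreading writes avoids contention. */
    public void increment() {
        shards[random.nextInt(shards.length)].incrementAndGet();
    }

    /** Exact count = sum of all shards (a single batch get in the datastore). */
    public long count() {
        long total = 0;
        for (AtomicLong shard : shards) total += shard.get();
        return total;
    }
}
```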
Jeff
On Fri, Dec 11, 2015 at 11:47 PM, Richard Cheesmar
wrote:
>
> Getting the numbers from the
I run a whitelabeled ecommerce system on GAE with hundreds (and growing) of
custom domains. It can be done but there are a couple issues.
1) The onboarding process for each custom domain involves an extra "verify
your ownership of the domain to google" step. There does not appear to be
an API for
You'll need to post some more code details - including the structure of the
object you are trying to change. Also you don't mention whether this is an
exception or just a log message. It's easy to accidentally create
contention with the task queue, especially if you're in a transaction and
Just wanted to comment on one thing:
On Thu, Nov 26, 2015 at 7:56 AM, Robert Dyas
wrote:
>
>
>1. Another problem is google's way of naming things and having too
>many similar but overlapping services. A new user to the platform will find
>it very difficult to
First: I think the namespace feature is a horrible idea and should never be
used (and I say this as someone that runs a whitelabeled, multitenant
system).
That said, doing migrations with namespaces shouldn't be all that different
from doing migrations without namespaces, you just have to add one
Right, there's no addDeferredTask() method in the library... I
wrap the queue myself with a more convenient abstraction but you get the
idea.
Jeff
On Wed, Nov 18, 2015 at 2:29 AM, Trez Ertzzer wrote:
> Hello.
> *thank you very much for your answer. it's very
You are on the right track, but there are a couple tricks to it. I do
similar things all the time, often with millions of records/tasks. Since
you pointed at java documentation, I assume you're using Java.
The simplest way to do what you want is to perform a keys-only query for
your users and
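A sketch of that pattern, assuming Objectify for the keys-only query and the standard task queue API — the queue name and task URL are made up:

```java
// Iterate keys cheaply (keys-only reads are small ops), one task per user.
Queue queue = QueueFactory.getQueue("user-fanout");
for (Key<User> key : ofy().load().type(User.class).keys()) {
    queue.add(TaskOptions.Builder
            .withUrl("/tasks/process-user")
            .param("user", key.getString()));
}
```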
On Fri, Nov 6, 2015 at 10:52 AM, Minie Takalova wrote:
> Hope I'm not the only one in such a situation.
>
You appear to have some conceptual misunderstandings of how computers work.
Or possibly there is a language/miscommunication issue:
* A socket is not a thread and a
Due to the nature of classic google app engine, you can't leave threads
running. This doesn't mean you can't have sockets open between requests,
but the MongoDB driver wants to use threads to manage those sockets for you
and that's a no-no.
This leaves you pretty much two choices if you want to
I'm using MongoDB as an analytics platform (not a primary datastore) from
classic GAE/Java. I had to hack the driver pretty seriously with some help
from the author. I haven't looked at the Python driver code, but in very
general terms, what I had to do:
* Stop the monitoring thread from being
I'm calling a third-party HTTP service from a task that sometimes takes
more than 60s to respond. Since the URLFetchService has a hard limit of
60s, I thought I might be able to work around this with the Socket API -
there do not appear to be any documented time limits other than the 2m idle.
I
I've noticed that the team working on the console is uncharacteristically
responsive to issues filed against it in the issue tracker. So if there's
something you don't like about it, put in a request:
https://code.google.com/p/googleappengine/issues/list
Now that I've gotten used to it, I
Sounds like this problem:
https://groups.google.com/forum/#!searchin/google-appengine/whitelabel/google-appengine/PDcAUfWcjEM/ERQ__xTsiiQJ
There's a bit of bizarro behavior in the cloud console such that visible
domains are tied to the user account that verified them - NOT global to the
project.
I don't have any special information but I've noticed the Users API has
remained pretty much as-is since I arrived. I also don't think it's
particularly clever. Other than for prototyping in the early days, I have
avoided it.
Google Identity Toolkit is nice. Or, if you want simple and
where every
> non-default version has instances running. Perhaps this is the expected
> behavior since managed VMs is labeled as beta?
>
>
> On Tuesday, September 15, 2015 at 7:15:10 PM UTC-4, Jeff Schnitzer wrote:
I have a python app on a Managed VM which I deploy with:
gcloud preview app deploy app.yaml --remote --set-default
It's set to manual scaling, instances 1.
It appears that every time I deploy it, the old instance sticks around (and
gets billed for). Even deleting the old versions from the
Hurray!
Is this rollout expected to address the issue that each administrator login
sees a completely different set of domains when they look at this page? Or
is that an unrelated issue?
Thanks,
Jeff
On Tue, Sep 15, 2015 at 5:10 PM, Lorne Kligerman
wrote:
> At long
FWIW, I am happy to hear publicly announced timelines for major features,
even if the schedules slip. It's nice to know what's being worked on.
Jeff
On Sun, Sep 6, 2015 at 4:19 PM, Darshan-Josiah Barber <
dars...@darshancomputing.com> wrote:
> Thanks, both of you. That catches me up to speed,
>> Code defensively, just in case there is an intermittent issue, or a
>> network issue, or maintenance on your instance, or any number of things.
>>
>> I'm still trying to understand if what you're seeing is expected or
>> higher than the norm. As soon as I get anything I will le
errors) should take you out of this.
I will still investigate to try and see what I can gather as to the root
cause of this. As soon as I find anything, I'll update the thread here.
Cheers!
On Tuesday, August 25, 2015 at 3:52:28 PM UTC-4, Jeff Schnitzer wrote:
On Thursday, August 20, 2015 at 12:11:51 PM UTC-4, Jeff Schnitzer wrote:
gearlaunch-hub
I'll add some more to the gist.
Thanks,
Jeff
On Tue, Aug 25, 2015 at 11:11 AM, Ryan (Cloud Platform Support)
rbruy...@google.com wrote:
More the better. What is your app id?
On Tuesday, August 25, 2015 at 1:51:43 PM UTC-4, Jeff Schnitzer wrote:
Thanks for looking
I put the query stacktrace and code in there, plus a few more timestamps of
occurrences that happened today/yesterday.
Thanks,
Jeff
On Tue, Aug 25, 2015 at 12:21 PM, Jeff Schnitzer j...@infohazard.org
wrote:
OpenID != OpenID Connect (confusing, I know)
If you want a canned solution for multiple party federated auth, Google
Identity Toolkit seems to be the easiest path. But if you can pick one and
one only (most likely Facebook or Google), you'll save yourself a lot of
trouble. Most businessy apps can
I'm getting a lot of Could not fetch URL errors from BigQuery. I'm not
driving significant amounts of traffic to it (yet), but I get one of these
maybe every 50 or so queries:
Could not fetch URL:
https://www.googleapis.com/bigquery/v2/projects/gearlaunch-hub/queries
I'm using the standard
It sounds like the root problem is OOM errors fail to produce good log
messages.
What CSV parser are you using? If you're on Java, most of them are crap.
Jackson has a pretty good CSV plugin, but you probably still need to be
very careful about streaming - it's really easy to blow up the heap
I asked for more information on the objectify list and Rajesh said this:
low-level API is giving the same appstats. I guess, that is how datastore
calls are. I also noticed, the google is billing for one RPC only, which
is good.
Jeff
On Thu, Aug 13, 2015 at 12:48 PM, Nick naoku...@gmail.com
This is not quite accurate. Objectify's session cache spans only a single
request; there is no instance cache shared across requests (other than
memcache).
However, if that first request includes the one-time entity class
registration that Objectify requires, that could easily explain extra time
Ah, thanks! Works great.
If it helps anyone, adding PIL is simply this Dockerfile:
FROM gcr.io/google_appengine/python-compat
RUN apt-get update && apt-get install -y python-imaging
ADD . /app
On Fri, Aug 7, 2015 at 10:42 PM, pdknsk pdk...@gmail.com wrote:
However, when doing a dockerless
I'm a little puzzled how this is intended to work. In sandboxed env, I
specified 'libraries:' to get PIL. This appears to be ignored in the world
of Managed VMs, and the documentation suggests that I update the Dockerfile
which is created in my project dir. However, when doing a dockerless deploy
strongly
consistent for me, even across versions.
Thomas
On Tue, Aug 4, 2015 at 12:07 AM, Jeff Schnitzer j...@infohazard.org
wrote:
Again, JDO is not my area of expertise, but if so, this seems like a
shockingly obvious issue. Can you post a sample of the code you use to
demonstrate