Chiming in here.
* I think the term callback is a bit confusing; it collides with a
different definition in the JavaScript world
* I like the idea of a status that can only be altered externally (REST /
CLI / sqla / ...) and that the scheduler simply disregards (and probably it
handles the timeout
I like the idea. I already raised the issue so we could refactor all the
Google Cloud operators together and at that time make sure they are
consistent. So a different repo would be a good idea here. And you can
manage your own dependencies. It would be cool if the same thing happened to
the AWS oper
Hey,
here is my feedback, because I've been thinking about events as well. I
would call it 'WAITING_FOR_EVENT'. Here are the use cases I would use it
for:
Have a thread (or process) listen on the Google Audit Log. It contains a
lot of changes on the Google Project (Google DataProc finished, Fi
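The proposed state could be sketched roughly as below. All names here (WAITING_FOR_EVENT, TaskRegistry, the listener) are hypothetical illustrations for this thread's idea, not Airflow's actual API: an external listener thread (e.g. one tailing an audit log) is the only thing that moves a task out of the waiting state, while the scheduler simply disregards such tasks.

```python
# Conceptual sketch only; hypothetical names, not Airflow internals.
import queue
import threading

WAITING_FOR_EVENT = "waiting_for_event"
QUEUED = "queued"

class TaskRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        self.states = {}  # task_id -> state

    def set_state(self, task_id, state):
        with self._lock:
            self.states[task_id] = state

    def schedulable(self):
        # The scheduler simply skips tasks that are waiting on an event.
        with self._lock:
            return [t for t, s in self.states.items() if s != WAITING_FOR_EVENT]

def listener(registry, events):
    # External thread (e.g. tailing an audit log); a None sentinel stops it.
    while True:
        task_id = events.get()
        if task_id is None:
            break
        registry.set_state(task_id, QUEUED)

registry = TaskRegistry()
registry.set_state("dataproc_job", WAITING_FOR_EVENT)
registry.set_state("plain_task", QUEUED)

events = queue.Queue()
t = threading.Thread(target=listener, args=(registry, events))
t.start()

print(registry.schedulable())   # only plain_task; dataproc_job is waiting
events.put("dataproc_job")      # external event arrives (audit log match)
events.put(None)
t.join()
print(registry.schedulable())   # both tasks are now schedulable
```

The point of the sketch is the separation of concerns: the scheduler loop never touches the waiting state, and only the external channel (here a queue, in practice REST/CLI/sqla) can alter it.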
I'm still going over the code to see how such a small change can have such a
huge effect. Some things that are specific to the setup:
* worker/scheduler/webserver all run with no extra parameters
* built in Docker
* Python 2.7.13
* Celery with Redis
* runs on Kubernetes
When connecting to scheduler pod I see the s
I meant the API -- will check the wiki now. Thanks!
On Wed, Feb 8, 2017 at 8:33 AM Bolke de Bruin wrote:
> On this proposal? No, not yet. Just popped in my mind yesterday. API there
> is a bit on the wiki.
>
> > On 8 Feb 2017, at 14:31, Jeremiah Lowin wrote:
> >
> > Makes a lot of sense. At the
Now that's service!
I've merged the PR. Travis logs should be considerably smaller now, though
task logs will be printed in full even when the test is successful (we can
revisit whether to restrict those in Travis at a future time if we need to).
Authors of recent PRs: please rebase and push -f y
On this proposal? No, not yet. Just popped in my mind yesterday. API there is a
bit on the wiki.
> On 8 Feb 2017, at 14:31, Jeremiah Lowin wrote:
>
> Makes a lot of sense. At the NY meetup there was considerable interest in
> using the API (and quite a few hacks around exposing the CLI!) -- is
Makes a lot of sense. At the NY meetup there was considerable interest in
using the API (and quite a few hacks around exposing the CLI!) -- is there
more complete documentation anywhere?
Thanks Bolke
On Wed, Feb 8, 2017 at 1:36 AM Bolke de Bruin wrote:
> Hi All,
>
> Now that we have an API in p
Done.
> On 8 Feb 2017, at 14:27, Jeremiah Lowin wrote:
>
> I agree something is affecting many of the Py3 PR builds but we can't see
> the errors because the logs are too verbose. I have a PR to limit log
> verbosity on Travis (except for failed tests) but it needs a +1, once
> that's in we shou
I agree something is affecting many of the Py3 PR builds but we can't see
the errors because the logs are too verbose. I have a PR to limit log
verbosity on Travis (except for failed tests) but it needs a +1, once
that's in we should regain transparency into the errors:
https://github.com/apache/i
I have noticed this as well and I do actually think it is something that has
“run” away. The ones that fail go on for much longer and are tied to Python 3.
Can someone have a look at this please? I am a bit preoccupied with getting the
release out.
Cheers
Bolke
> On 8 Feb 2017, at 13:18, Miller
Alex,
Do you have anything more to go on? I don’t mind reverting the patch, however
the code part seems unrelated to what you described and the issue wasn’t
reproducible. I would really like to see more logging and maybe a test in a
clean environment plus debugging. Preferably I would like to ma
Hi All,
On a couple of my open PRs, I've been having trouble getting the unit tests to
run successfully, not because there's a test failure, but because Travis is
reporting that the unit test log has become too large:
"
The log length has exceeded the limit of 4 MB (this usually means that