rc + your patch (and a couple of our own custom ones)
On Mon, Feb 27, 2017 at 2:11 PM, Bolke de Bruin wrote:
> Dan
>
> Btw are you running with my patch for this? Or still plain rc?
>
> Cheers
> Bolke
>
On 27 Feb 2017, at 22:46, Bolke de Bruin wrote:
I'll have a look. I verified and the code is there to take care of this.
B.
On 27 Feb 2017, at 22:34, Dan Davydov wrote:
Repro steps:
- Create a DAG with a dummy task
- Let this DAG run for one dagrun
- Add a new subdag operator that contains a dummy operator to this DAG, with
  depends_on_past set to True
- Click on the white square for the new subdag operator in the DAG's first
  dagrun
- Click "Zoom into subdag" (ta
Dan
Can you elaborate on 2, because I thought I specifically took care of that.
Cheers
Bolke
On 27 Feb 2017, at 20:27, Dan Davydov wrote:
I created https://issues.apache.org/jira/browse/AIRFLOW-921 to track the
pending issues.
There are two more issues we found which I included there:
1. Task instances that have their state manually set to running make the UI
for their DAG unable to parse
2. Mark success doesn't work for non existen
(I am far from an expert in nose but) I tried running nose in parallel
simply by passing the --processes flag (
http://nose.readthedocs.io/en/latest/doc_tests/test_multiprocess/multiprocess.html
).
The SQLite envs ran about 2-3 minutes quicker than normal. All other envs
deadlocked and timed out.
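For reference, the same thing can be driven from Python through nose's own API
instead of the nosetests command line; this is only a sketch, and the test
path, worker count and timeout below are illustrative placeholders:

# Roughly equivalent to: nosetests tests/ --processes=4 --process-timeout=120
# (multiprocess plugin options from the page linked above).
import nose

nose.run(argv=[
    "nosetests",
    "tests/",                 # test package to run (placeholder path)
    "--processes=4",          # number of worker processes
    "--process-timeout=120",  # per-test timeout (seconds) for the workers
])
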
Hey Max
It is massive for sure. Sorry about that ;-). However, it is not as massive as
you might deduce at first glance:
0) run tasks concurrently across dag runs
1) ordering of the tasks was added to the loop
2) calculation of deadlocks, running tasks, and tasks to run was corrected
3) relying
This looks like a great effort to me at least in the short term (in the
long term I think most of the integration tests should be run together if
the infra allows this). Another thing we could start looking into is
parallelizing tests (though this may require beefier machines from Travis).
This PR is pretty massive and complex! It looks like solid work but let's
be really careful around testing and rolling this out.
This may be out of scope for this PR, but wanted to discuss the idea of
using the scheduler's logic to perform backfills. It'd be nice to have that
logic in one place, t
I have worked on the Backfill issue, also in collaboration with Jeremiah.
The refactor to use dag runs in backfills caused a regression
in task execution performance, as dag runs were executed
sequentially. Besides that, the backfills were non-deterministic
due to the random execution of tasks, caus