1. "list" is not even close to a valid execution date, so I don't see that as
a clash.
2. Respond with 400 Bad Request when the dag_ids query param is provided but
the dag id in the URL is not whatever wildcard char we pick. (Is ~ URL-safe?)
3. Docs are the answer to that. "This is the same as GET, but allows for …
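Point 2 is just a guard in the handler. A minimal sketch, assuming a Python
handler, `~` as the wildcard, and a list-valued `dag_ids` query parameter
(all of which are still open questions in this thread):

    WILDCARD_DAG_ID = "~"  # assumed wildcard; "-" is also discussed below


    def status_for_dag_runs_request(url_dag_id, dag_ids=None):
        """Return the HTTP status for GET /dags/{dag_id}/dagRuns?dag_ids=...

        url_dag_id comes from the path, dag_ids from the query string.
        """
        if dag_ids is not None and url_dag_id != WILDCARD_DAG_ID:
            # Point 2 above: dag_ids only makes sense with the wildcard dag id.
            return 400
        return 200


    assert status_for_dag_runs_request("~", ["dag_a", "dag_b"]) == 200
    assert status_for_dag_runs_request("my_dag", ["dag_a"]) == 400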
Hello,
It depends on the specific implementation of the Batch API. The
Microsoft API can be used with requests.
https://docs.microsoft.com/en-us/graph/json-batching
I would not like to add new endpoints to the API out of premature
optimization. Adding a new endpoint - /dags/{dag_id}/dagRuns/list …
Such a batch endpoint is much, much harder for API clients to build requests
for and consume (you can no longer just use cURL/requests/any plain HTTP
client), so I'm not a fan of that.
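For context on the json-batching link above: a Graph-style batch wraps many
sub-requests in a single POST body, roughly like this (the $batch URL and the
Airflow-ish paths inside it are purely illustrative, not a proposed endpoint):

    import requests

    # One POST carrying several sub-requests, each with its own id/method/url.
    batch_payload = {
        "requests": [
            {"id": "1", "method": "GET", "url": "/dags/dag_a/dagRuns"},
            {"id": "2", "method": "GET", "url": "/dags/dag_b/dagRuns"},
        ]
    }

    response = requests.post(
        "https://example.com/api/v1/$batch", json=batch_payload
    )

    # The caller then has to match sub-responses back by id and check a status
    # code per entry, instead of just issuing one plain GET per resource.

That per-entry unpacking is the extra client complexity being objected to here.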
On 12 May 2020 17:07:12 BST, "Kamil Breguła" wrote:
On Tue, May 12, 2020 at 3:49 PM Jarek Potiuk wrote:
>
> My 3 cents:
>
>
> > But on reading Google's https://aip.dev/159 that now makes more sense,
> > and that isn't what you were suggesting, but instead a single, literal
> > `-` to mean "any dag id". Is this correct?
> >
>
> That's also my understanding.
> > > > on different services etc. etc. It is very limiting to rely on this
> > > > feature.
> > > >
> > > > On Tue, Apr 14, 2020 at 11:36 AM Kamil Breguła
My 3 cents:
> But on reading Google's https://aip.dev/159 that now makes more sense,
> and that isn't what you were suggesting, but instead a single, literal
> `-` to mean "any dag id". Is this correct?
>
That's also my understanding.
> So I think then my only ask is that we have a `dag_ids` …
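For concreteness, the wildcard-plus-filter usage under discussion might look
like this from a client (the wildcard character and the dag_ids format are
assumptions, nothing here is settled):

    import requests

    BASE = "https://airflow.example.com/api/v1"  # placeholder host

    # Runs for a single DAG, as the spec already allows:
    single = requests.get(f"{BASE}/dags/my_dag/dagRuns")

    # Runs across DAGs: wildcard dag id in the path ("-" or "~", still being
    # debated above), narrowed by an explicit dag_ids query parameter.
    filtered = requests.get(
        f"{BASE}/dags/~/dagRuns",
        params={"dag_ids": "dag_a,dag_b"},
    )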
> According to RFC-7231, the DELETE method should be idempotent.
>
> For example:
> If you want to delete items with index from 1 to 4, you should send the
> requests …
> DELETE /connections/hdfs_default/4
>
> If you use asynchronous HTTP clients (popular in Node), the order of
> requests will not be kept. It will also be a big problem wi…
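A small runnable sketch of that ordering hazard, using asyncio and the
index-addressed DELETE idea (the endpoint itself is hypothetical):

    import asyncio
    import random


    async def delete_connection(index):
        """Stand-in for DELETE /connections/hdfs_default/{index}."""
        await asyncio.sleep(random.random())  # per-request latency varies
        print(f"DELETE /connections/hdfs_default/{index}")


    async def main():
        # Fired concurrently, the deletes can complete in any order, so the
        # indices stop pointing at the rows the client intended to remove,
        # and replaying the same sequence is no longer idempotent.
        await asyncio.gather(*(delete_connection(i) for i in range(1, 5)))


    asyncio.run(main())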
…solved on a different layer.
> > > > >
> > > > > Best regards,
> > > > > Kamil
> > > > >
> > > > >
> > > > > On Thu, Apr 9, 2020 at 6:47 PM Ash Berlin-Taylor
…to many when they stumble on it.
> > > >
> > > > It's not something we should do lightly, but it is a possibility.
> > > >
> > > > I think I'm probably leaning towards the "ordinal" concept:
> > > >
> > > > On Apr 9 2020, at 2:31 pm, Shaw, Damian P. wrote:
> > > >
> > > > > FYI if you look back at the thread "Re: [2.0 spring cleaning] Require
> > > > > unique conn_id" on 2019-04-14 …
the "ordinal" concept:
> > > >
> > > > /connections/hdfs_default -> list of connections with that ID
> > > > /connections/hdfs_default/0 first connection of that type
> > > >
> > > > Something like that.
> > > >
&g
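Purely to illustrate the shape of that "ordinal" idea from the client side
(these endpoints don't exist; the host is a placeholder):

    import requests

    BASE = "https://airflow.example.com/api/v1"  # placeholder host

    # Bare conn_id: every connection sharing that id.
    all_hdfs = requests.get(f"{BASE}/connections/hdfs_default").json()

    # conn_id plus ordinal: one specific connection, addressed by position.
    first_hdfs = requests.get(f"{BASE}/connections/hdfs_default/0").json()

Whether that position is stable enough to DELETE by is exactly the RFC-7231
idempotency concern raised earlier in the thread.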
Something like that.
> > >
> > > On Apr 9 2020, at 2:31 pm, Shaw, Damian P. wrote:
> > >
> > > > FYI if you look back at the thread "Re: [2.0 spring cleaning] Require
> > > > unique conn_id" on 2019-04-14 you can see a message from Kevin Yang
> > > > stating that this random choice of connections is a "feature" used to
> > > > load balance connections in AirBnB. So users are relying on this
> > > > behavior.
-----Original Message-----
From: Daniel (Daniel Lamblin) [Data Infrastructure]
Sent: Wednesday, April 8, 2020 20:01
To: dev@airflow.apache.org
Subject: Re: API spec questions
Having been bit by accidentally having two connections by the same name or
conn_id, I'd prefer if it were made unique. In my experience there's little
utility in having multiple connections by the same name. Tasks that use a
connection don't fairly randomly choose one; rather they seem pretty consi…
To expand on the "so I think we need to do one of":
- we need to update the name of "conn_id" somehow. I can't think of a
better option, and given all the operators have "x_conn_id" I don't
think that change should be made lightly.
- make conn_id unique (this "poor" HA has been a source of confusion …
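A sketch of what the second option could mean at the model layer, assuming a
SQLAlchemy model shaped roughly like Airflow's Connection (this is not the
actual model or migration):

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()


    class Connection(Base):
        """Simplified stand-in for Airflow's Connection model."""

        __tablename__ = "connection"

        id = Column(Integer, primary_key=True)
        conn_id = Column(String(250), unique=True)  # the proposed constraint
        conn_type = Column(String(500))
        host = Column(String(500))

With conn_id unique, the "ordinal" addressing discussed earlier in the thread
would no longer be needed.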