Thanks, I opened https://issues.apache.org/jira/browse/INFRA-18004
On Thu, Mar 14, 2019 at 8:35 AM, Marcelo Vanzin wrote:
Go for it. I would do it now, instead of waiting, since there's been
enough time for them to take action.
On Wed, Mar 13, 2019 at 4:32 PM Hyukjin Kwon wrote:
Looks like this bot keeps working. I am going to open an INFRA JIRA to block this
bot in a few days.
Please let me know if you guys have a different idea to prevent this.
On Wed, Mar 13, 2019 at 8:16 AM, Hyukjin Kwon wrote:
> Hi, to whom it may concern at Thincrs,
>
> I am still observing this bot misuses
btw, let's wait and see if the non-k8s PRB tests pass before merging
https://github.com/apache/spark/pull/23993 into 2.4.1
On Wed, Mar 13, 2019 at 3:42 PM shane knapp wrote:
2.4.1 k8s integration test passed:
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/8875/
thanks everyone! :)
On Wed, Mar 13, 2019 at 3:24 PM shane knapp wrote:
2.4.1 integration tests running:
https://amplab.cs.berkeley.edu/jenkins/job/testing-k8s-prb-make-spark-distribution-unified/8875/
On Wed, Mar 13, 2019 at 3:15 PM shane knapp wrote:
upgrade completed, jenkins building again... master PR merged, waiting for
the 2.4.1 PR to launch the k8s integration tests.
On Wed, Mar 13, 2019 at 2:55 PM shane knapp wrote:
okie dokie! the time approacheth!
i'll pause jenkins @ 3pm to not accept new jobs. i don't expect the
upgrade to take more than 15-20 mins, following which i will re-enable
builds.
Sounds good.
On Wed, Mar 13, 2019 at 12:17 PM shane knapp wrote:
ok awesome. let's shoot for 3pm PST.
I'm OK with this take. The problem with back-porting the client update
to 2.4.x at all is that it drops support for some old-but-not-that-old
K8S versions, which feels surprising in a maintenance release. That
said, maybe it's OK, and a little more OK for a 2.4.2 in several
months' time.
On Wed, Mar 13, 2019 at 11:53 AM shane knapp wrote:
> On Wed, Mar 13, 2019 at 11:49 AM Marcelo Vanzin wrote:
>>
>> Do the upgraded minikube/k8s versions break the current master client
>> version too?
>>
> yes.
Ah, so that part kinda sucks.
Let's do this: since the master PR is good to go
Do the upgraded minikube/k8s versions break the current master client
version too?
I'm not super concerned about 2.4 integration tests being broken for a
little bit. It's very uncommon for new PRs to be opened against
branch-2.4 that would affect k8s.
But I really don't want master to break. So if
hey everyone... i wanted to break this discussion out of the mega-threads
for the 2.4.1 RC candidates.
the TL;DR is that we've been trying to update the k8s client libs to
something much more modern. however, for us to do this, we need to update
our very old k8s and minikube versions.
the
AFAIK Completed can happen in case of failures as well; check here:
https://github.com/kubernetes/kubernetes/blob/7f23a743e8c23ac6489340bbb34fa6f1d392db9d/pkg/client/conditions/conditions.go#L61
The phase of the pod should be `Succeeded` to make a conclusion. This is
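As an illustration, here is a minimal sketch of that check, assuming the fabric8 Kubernetes client (the library Spark's k8s backend uses); the namespace and pod name below are placeholders:

```scala
import io.fabric8.kubernetes.client.DefaultKubernetesClient

// Sketch: read the driver pod and inspect status.phase. Only "Succeeded"
// means all containers exited 0; a pod that has merely finished running
// may instead be in phase "Failed".
object DriverPodStatus {
  def main(args: Array[String]): Unit = {
    val client = new DefaultKubernetesClient() // picks up local kubeconfig
    try {
      val pod = client.pods()
        .inNamespace("spark-jobs")             // placeholder namespace
        .withName("my-spark-driver-pod")       // placeholder pod name
        .get()
      val phase = Option(pod)                  // null if the pod is gone
        .flatMap(p => Option(p.getStatus))
        .map(_.getPhase)
        .getOrElse("Unknown")
      phase match {
        case "Succeeded" => println("driver finished successfully")
        case "Failed"    => println("driver finished with an error")
        case other       => println(s"driver not finished, phase: $other")
      }
    } finally {
      client.close()
    }
  }
}
```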
The reader necessarily knows the number of partitions, since it's
responsible for generating its output partitions in the first place. I
won't speak for everyone, but it would make sense to me to pass in a
Partitioning instance to the writer, since it's already part of the v2
interface through the
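For what it's worth, a purely hypothetical sketch of that idea follows; the trait and method are invented for illustration and are not part of the v2 API. Only `Partitioning.numPartitions()` is real, from Spark 2.4's `org.apache.spark.sql.sources.v2.reader.partitioning`:

```scala
import org.apache.spark.sql.sources.v2.reader.partitioning.Partitioning

// Hypothetical: surface the reader-reported Partitioning on the write path,
// so a writer knows the partition count up front instead of rediscovering it.
trait PartitioningAwareWriteSupport {
  def prepareWrite(partitioning: Partitioning): Unit = {
    println(s"writer expects ${partitioning.numPartitions()} output partitions")
  }
}
```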
Hi,
We are running Spark jobs on Kubernetes (using Spark 2.4.0 and cluster
mode). To get the status of the Spark job, we check the status of the driver
pod (using the Kubernetes REST API).
Is it okay to assume that the Spark job was successful if the status of the
driver pod is COMPLETED?
Thanks,
Chandu
Currently, a barrier TaskSet has a hard requirement that its tasks can only be
launched together in a single resourceOffers round with enough slots (or say,
sufficient resources), but launching cannot be guaranteed even with enough
slots, due to task locality delay scheduling (also see discussion
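A small, self-contained sketch of that all-or-nothing constraint (illustrative pseudologic only, not Spark's actual scheduler code; all names are made up):

```scala
// A barrier TaskSet launches only if one resourceOffers round can host every
// task at once; delay scheduling can reject otherwise-free slots, so a raw
// slot count alone does not guarantee a launch.
case class Offer(host: String, freeSlots: Int)

def canLaunchBarrierStage(
    numBarrierTasks: Int,
    offers: Seq[Offer],
    preferredHosts: Set[String],
    honorLocality: Boolean): Boolean = {
  val usable =
    if (honorLocality) offers.filter(o => preferredHosts.contains(o.host))
    else offers
  usable.map(_.freeSlots).sum >= numBarrierTasks
}

// 4 barrier tasks and 5 free slots overall, but locality restricts offers to
// host "a" with only 2 slots: the whole stage waits despite free capacity.
val offers = Seq(Offer("a", 2), Offer("b", 3))
println(canLaunchBarrierStage(4, offers, Set("a"), honorLocality = true))  // false
println(canLaunchBarrierStage(4, offers, Set("a"), honorLocality = false)) // true
```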