Github user ShubhamGupta29 commented on the issue:
https://github.com/apache/spark/pull/15974
Where can I get the executor data from? The /api/v1 endpoint for
/allexecutors is not updating; after the first fetch, the data stays the
same forever. Why is that?
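For context, the endpoint being discussed is part of Spark's monitoring REST API. A minimal sketch of re-fetching live executor data (the host/port `localhost:4040` and the helper names are illustrative assumptions, not from the PR):

```python
import json
from urllib.request import urlopen

BASE = "http://localhost:4040"  # assumed driver UI address; adjust as needed


def executors_url(base, app_id):
    """Build the /allexecutors URL for one application
    (path shape per the Spark monitoring REST API)."""
    return f"{base}/api/v1/applications/{app_id}/allexecutors"


def fetch_executors(base=BASE):
    """List the running applications, then fetch executor info for the
    first one; each entry carries fields such as 'id' and 'isActive'."""
    apps = json.load(urlopen(f"{base}/api/v1/applications"))
    return json.load(urlopen(executors_url(base, apps[0]["id"])))
```

Polling this URL in a loop is one way to check whether the data actually changes between fetches.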
Github user ChorPangChan commented on the issue:
https://github.com/apache/spark/pull/15974
@uncleGen Thank you
@vanzin OK
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/15974
@ChorPangChan we will not merge new features into a maintenance branch. You
need to submit a new PR against master. You can't change this one to be against
master; you have to open a new one. Please close this one.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15974
OK
Github user ChorPangChan commented on the issue:
https://github.com/apache/spark/pull/15974
hey @uncleGen
I think we need to settle this in order to make progress.
I had a quick look at your code; as I said above,
modifying spark-core is not a good idea to implement this.
Github user ChorPangChan commented on the issue:
https://github.com/apache/spark/pull/15974
I still say adding a new package to streaming is a better structure than
modifying spark-core.
Can we make the decision on which implementation to use first?
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15974
I think the base solutions are the same, except for some additional
information which I am working to add.
Github user uncleGen commented on the issue:
https://github.com/apache/spark/pull/15974
I think there is no need to open another duplicate PR. Do you mind
closing this PR, and let's work on #15904?
Github user ChorPangChan commented on the issue:
https://github.com/apache/spark/pull/15974
It requires a manual merge if I do it against master.
Should I just open the PR with conflicts, or rebase onto master before the PR?
By the way, those conflicts are just version numbers from the pom file.
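For what it's worth, the rebase-onto-master route can be sketched as below. This is a self-contained toy repository standing in for the real Spark checkout (branch name `my-feature`, file names, and commit messages are all made up); in the real workflow the branch would be rebased onto the Apache remote's master instead, with the pom.xml version conflicts resolved via `git add` plus `git rebase --continue` before opening the new PR:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo
default=$(git symbolic-ref --short HEAD)   # "master" or "main", per git version

# Base history, then a feature branch (stand-in for the PR branch).
echo base > pom.xml && git add pom.xml && git commit -qm "base"
git checkout -qb my-feature
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"

# Meanwhile the default branch moves on (stand-in for upstream master).
git checkout -q "$default"
echo moved > master.txt && git add master.txt && git commit -qm "master moves on"

# Replay the feature commits on top of the updated branch.
git checkout -q my-feature
git rebase -q "$default"
```

After the rebase the feature branch contains both the upstream changes and the feature commits, so the new PR applies cleanly against master.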
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/15974
This will never be merged into 1.6, so you'll have to send a new PR against
master. Please close this one.
Github user ChorPangChan commented on the issue:
https://github.com/apache/spark/pull/15974
I think what we are trying to do is duplicated. I do believe creating a new
package under streaming is better than modifying the spark-core package.
The reason for merging this into 1.6 is just because
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/15974
I haven't had a chance to look through your code, but how does this relate
to #15904? From your descriptions it looks like you both opened a JIRA and a
PR for the same task. Also, is there a reason
Github user ChorPangChan commented on the issue:
https://github.com/apache/spark/pull/15974
hi @tdas
Per our conversation in the spark-dev channel, I have done the implementation.
Could you please have a look at this?
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15974
Can one of the admins verify this patch?