[
https://issues.apache.org/jira/browse/SPARK-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-13447:
Summary: Fix AM failure situation for dynamic allocation disabled situation
(was: Fix AM failure
[
https://issues.apache.org/jira/browse/SPARK-13447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-13447:
Summary: Fix AM failure situation for dynamic allocation disabled situation
(was: Fix AM failure
Saisai Shao created SPARK-13447:
---
Summary: Fix AM failure situation for dynamic allocation disabled
situation
Key: SPARK-13447
URL: https://issues.apache.org/jira/browse/SPARK-13447
Project: Spark
You could set the configuration "auto.offset.reset" through the "kafkaParams"
parameter, which is provided in some of the other overloaded createStream
APIs.
By default Kafka will pick data from the latest offset unless you explicitly
set it; this is the behavior of Kafka, not Spark.
Thanks
Saisai
On Mon, Feb
[
https://issues.apache.org/jira/browse/SPARK-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-13426:
Description:
Currently there are not so many users who use SIMR to run Spark, especially
[
https://issues.apache.org/jira/browse/SPARK-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-13426:
Issue Type: Sub-task (was: Bug)
Parent: SPARK-11806
> Remove the support of SIMR clus
Saisai Shao created SPARK-13426:
---
Summary: Remove the support of SIMR cluster manager
Key: SPARK-13426
URL: https://issues.apache.org/jira/browse/SPARK-13426
Project: Spark
Issue Type: Bug
[
https://issues.apache.org/jira/browse/SPARK-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15153910#comment-15153910
]
Saisai Shao commented on SPARK-12343:
-
Hi guys,
Do we still want to support users who directly
IIUC, if for example you want to set the environment variable FOO=bar on the
executor side, you could use "spark.executorEnv.FOO=bar" in the conf file;
the AM will pick up this configuration and set it as an environment variable
when launching containers. Just list all the envs you want to set on the
executor side like
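As a minimal illustration in spark-defaults.conf (the variable names and values below are made up; the documented property form is `spark.executorEnv.[EnvironmentVariableName]`):

```
spark.executorEnv.FOO   bar
spark.executorEnv.BAR   baz
```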
[
https://issues.apache.org/jira/browse/SPARK-13275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15149939#comment-15149939
]
Saisai Shao commented on SPARK-13275:
-
would you please clarify the specific problem you mentioned
[
https://issues.apache.org/jira/browse/SPARK-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15147022#comment-15147022
]
Saisai Shao commented on SPARK-13220:
-
[~andrewor14] mind me taking a crack at this?
> Deprec
Hi Divya,
Would you please provide the full stack trace of the exception? From my
understanding --executor-cores should work; we could tell more if you provide
the full stack trace.
Performance depends on many different aspects; I'd recommend you check the
Spark web UI to understand the application
I think it is due to our recent change to override the external resolvers
in the sbt build profile; I just created a JIRA (
https://issues.apache.org/jira/browse/SPARK-13109) to track this.
On Mon, Feb 1, 2016 at 3:01 PM, Mike Hynes <91m...@gmail.com> wrote:
> Hi devs,
>
> I used to be able to
Saisai Shao created SPARK-13109:
---
Summary: SBT publishLocal failed to publish to local ivy repo
Key: SPARK-13109
URL: https://issues.apache.org/jira/browse/SPARK-13109
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125655#comment-15125655
]
Saisai Shao commented on SPARK-13104:
-
I think it should be Codahale metrics.
> Spark Metr
[
https://issues.apache.org/jira/browse/SPARK-13106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15125598#comment-15125598
]
Saisai Shao commented on SPARK-13106:
-
IIUC, creating direct stream also supports passing
[
https://issues.apache.org/jira/browse/SPARK-3374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15123043#comment-15123043
]
Saisai Shao commented on SPARK-3374:
+1 to address this in the upcoming 2.0 release.
1. Currently
saying creating sparkcontext manually in your application
> still works then I'll investigate more on my side. It just before I dig
> more I wanted to know if it was still supported.
>
> Nir
>
> On Thu, Jan 28, 2016 at 7:47 PM, Saisai Shao <sai.sai.s...@gmail.com>
> wrote:
I think I've met this problem before; it might be due to some race
conditions during the exit period. The way you mentioned is still valid; this
problem only occurs when stopping the application.
Thanks
Saisai
On Fri, Jan 29, 2016 at 10:22 AM, Nirav Patel wrote:
> Hi, we
Hi Todd,
There are two levels of locality-based scheduling when you run Spark on YARN
with dynamic allocation enabled:
1. Container allocation is based on the locality ratio of pending tasks;
this is YARN-specific and only works with dynamic allocation enabled.
2. Task scheduling is locality
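For reference, the configuration knobs touching these two levels can be sketched in spark-defaults.conf (the values are illustrative, not recommendations):

```
spark.dynamicAllocation.enabled   true
spark.shuffle.service.enabled     true   # external shuffle service, required by dynamic allocation on YARN
spark.locality.wait               3s     # how long task scheduling waits for a preferred-locality slot
```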
Is there any possibility that this file is still being written by another
application, so that what Spark Streaming processed was an incomplete file?
On Tue, Jan 26, 2016 at 5:30 AM, Shixiong(Ryan) Zhu wrote:
> Did you move the file into "hdfs://helmhdfs/user/patcharee/cerdata/", or
> write
[
https://issues.apache.org/jira/browse/SPARK-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12977:
Attachment: screenshot-1.png
> Factoring out StreamingListener and UI to support history
[
https://issues.apache.org/jira/browse/SPARK-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15116844#comment-15116844
]
Saisai Shao commented on SPARK-12977:
-
Attaching the current work in progress; still some problems
[
https://issues.apache.org/jira/browse/SPARK-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15114807#comment-15114807
]
Saisai Shao commented on SPARK-12973:
-
I think there's a similar JIRA SPARK-10879 about this issue
Saisai Shao created SPARK-12977:
---
Summary: Factoring out StreamingListener and UI to support history
UI
Key: SPARK-12977
URL: https://issues.apache.org/jira/browse/SPARK-12977
Project: Spark
[
https://issues.apache.org/jira/browse/SPARK-11045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15111243#comment-15111243
]
Saisai Shao commented on SPARK-11045:
-
Hi [~dibbhatt], I'm afraid I could not agree with your comment
[
https://issues.apache.org/jira/browse/SPARK-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15111656#comment-15111656
]
Saisai Shao commented on SPARK-12140:
-
Hi guys, I thought a bit about this feature; besides this one big
[
https://issues.apache.org/jira/browse/SPARK-12140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15111656#comment-15111656
]
Saisai Shao edited comment on SPARK-12140 at 1/22/16 1:29 AM:
--
Hi guys, I
You could try increasing the driver memory with "--driver-memory"; it looks
like the OOM came from the driver side, so the simple solution is to increase
the driver's memory.
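A minimal spark-submit invocation along these lines (the class name, jar, and 4g value are placeholders):

```
spark-submit \
  --master yarn \
  --driver-memory 4g \
  --class com.example.MyApp \
  my-app.jar
```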
On Tue, Jan 19, 2016 at 1:15 PM, Julio Antonio Soto wrote:
> Hi,
>
> I'm having trouble when uploadig spark
[
https://issues.apache.org/jira/browse/SPARK-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15107085#comment-15107085
]
Saisai Shao commented on SPARK-12883:
-
I get your point now. But I think these two descriptions
[
https://issues.apache.org/jira/browse/SPARK-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105671#comment-15105671
]
Saisai Shao commented on SPARK-12864:
-
What Spark version are you using? I remember I fixed
[
https://issues.apache.org/jira/browse/SPARK-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15105671#comment-15105671
]
Saisai Shao edited comment on SPARK-12864 at 1/18/16 7:03 PM:
--
What Spark
[
https://issues.apache.org/jira/browse/SPARK-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12893:
Attachment: Screen Shot 2016-01-18 at 3.47.24 PM.png
> RM redirects to incorrect URL in Sp
Saisai Shao created SPARK-12893:
---
Summary: RM redirects to incorrect URL in Spark HistoryServer for
yarn-cluster mode
Key: SPARK-12893
URL: https://issues.apache.org/jira/browse/SPARK-12893
Project
[
https://issues.apache.org/jira/browse/SPARK-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12893:
Description:
This will cause an application-not-found error; the screenshot is shown below:
!https
[
https://issues.apache.org/jira/browse/SPARK-12883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106005#comment-15106005
]
Saisai Shao commented on SPARK-12883:
-
I think this doc is still valid; the current way of setting
[
https://issues.apache.org/jira/browse/SPARK-12864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15106214#comment-15106214
]
Saisai Shao commented on SPARK-12864:
-
So the problem should be that: {{BlockManager}} should
[
https://issues.apache.org/jira/browse/SPARK-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12673:
Description:
The base URI of the job description is not prepended in the current code,
which makes
[
https://issues.apache.org/jira/browse/SPARK-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12673:
Attachment: screenshot-1.png
> Prepending base URI of job description is miss
Saisai Shao created SPARK-12673:
---
Summary: Prepending base URI of job description is missing
Key: SPARK-12673
URL: https://issues.apache.org/jira/browse/SPARK-12673
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/SPARK-12673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12673:
Description: The base URI of the job description is not prepended in the
current code, which makes
[
https://issues.apache.org/jira/browse/SPARK-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15084497#comment-15084497
]
Saisai Shao commented on SPARK-12650:
-
[~vines], what is your meaning of "SparkSubmit does no
[
https://issues.apache.org/jira/browse/SPARK-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082085#comment-15082085
]
Saisai Shao commented on SPARK-12516:
-
Thanks a lot [~vanzin] for your reply. Looks like work
[
https://issues.apache.org/jira/browse/SPARK-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15075693#comment-15075693
]
Saisai Shao commented on SPARK-12516:
-
Hi [~vanzin], what is your suggestion of this issue? I'm
[
https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15074588#comment-15074588
]
Saisai Shao commented on SPARK-12554:
-
For case 2, I think it is really a misconfiguration problem
Saisai Shao created SPARK-12552:
---
Summary: Recovered driver's resource is not counted in the Master
Key: SPARK-12552
URL: https://issues.apache.org/jira/browse/SPARK-12552
Project: Spark
Issue
[
https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073954#comment-15073954
]
Saisai Shao commented on SPARK-12554:
-
First from my understanding tasks can be scheduled after
Stdout will not be sent back to the driver, whether you use Scala or Java.
You must be doing something wrong that makes you think it is expected
behavior.
On Mon, Dec 28, 2015 at 5:33 PM, David John
wrote:
> I have used Spark *1.4* for 6 months. Thanks all the
[
https://issues.apache.org/jira/browse/SPARK-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15073430#comment-15073430
]
Saisai Shao commented on SPARK-11782:
-
I just verified with the latest master branch; there seems to be no such issue
ark-1.6.0 on one yarn
> cluster?
>
>
>
> *From:* Saisai Shao [mailto:sai.sai.s...@gmail.com]
> *Sent:* Monday, December 28, 2015 2:29 PM
> *To:* Jeff Zhang
> *Cc:* 顾亮亮; user@spark.apache.org; 刘骋昺
> *Subject:* Re: Opening Dynamic Scaling Executors on Yarn
>
>
>
Replacing all the shuffle jars and restarting the NodeManager is enough; no
need to restart the NN.
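Sketched as shell steps on a Hadoop 2.x layout (the jar name and NodeManager lib directory are placeholders for your distribution):

```
# on every NodeManager host
cp spark-<version>-yarn-shuffle.jar <nodemanager-classpath-dir>/
yarn-daemon.sh stop nodemanager
yarn-daemon.sh start nodemanager
```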
On Mon, Dec 28, 2015 at 2:05 PM, Jeff Zhang wrote:
> See
> http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation
>
>
>
> On Mon, Dec 28, 2015 at 2:00 PM,
I think SparkContext is thread-safe; you could concurrently submit jobs
from different threads, so the problem you hit might not be related to this.
Can you reproduce this issue each time you concurrently submit jobs, or does
it happen occasionally?
BTW, I guess you're using an old version of
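A hedged sketch of what concurrent submission can look like (assumes an existing SparkContext `sc`; the pool size and job bodies are arbitrary):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{ExecutionContext, Future}

implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(4))

// Each Future runs an independent action on the shared SparkContext;
// the scheduler interleaves the resulting jobs.
val results = (1 to 4).map { i =>
  Future { sc.parallelize(1 to 1000).map(_ * i).sum() }
}
```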
might be one potential cause; you'd better
increase the VM resources and try again, just to verify your assumption.
On Fri, Dec 25, 2015 at 4:28 PM, donhoff_h <165612...@qq.com> wrote:
> Hi, Saisai Shao
>
> Many thanks for your reply. I used spark v1.3. Unfortunately I can not
> chang
[
https://issues.apache.org/jira/browse/SPARK-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12516:
Description:
Failure of a NodeManager will make all the executors belonging to that NM exit
silently
[
https://issues.apache.org/jira/browse/SPARK-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Saisai Shao updated SPARK-12447:
Description:
Currently {{YarnAllocator}} will update its managed states like
[
https://issues.apache.org/jira/browse/SPARK-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15070717#comment-15070717
]
Saisai Shao commented on SPARK-12514:
-
Since we need to differentiate the metrics between
Saisai Shao created SPARK-12516:
---
Summary: Properly handle NM failure situation for Spark on Yarn
Key: SPARK-12516
URL: https://issues.apache.org/jira/browse/SPARK-12516
Project: Spark
Issue
Yes, basically from the current implementation it should be.
On Mon, Dec 21, 2015 at 6:39 PM, Arun Patel <arunp.bigd...@gmail.com> wrote:
> So, Does that mean only one RDD is created by all receivers?
>
>
>
> On Sun, Dec 20, 2015 at 10:23 PM, Saisai Shao <sai.sai
Saisai Shao created SPARK-12447:
---
Summary: Only update AM's internal state when executor is
successfully launched by NM
Key: SPARK-12447
URL: https://issues.apache.org/jira/browse/SPARK-12447
Project
Hi Siva,
How did you know that --executor-cores is ignored and where did you see
that only 1 Vcore is allocated?
Thanks
Saisai
On Tue, Dec 22, 2015 at 9:08 AM, Siva wrote:
> Hi Everyone,
>
> Observing a strange problem while submitting spark streaming job in
>
on web UI.
>
> Thanks,
> Sivakumar Bhavanari.
>
> On Mon, Dec 21, 2015 at 5:21 PM, Saisai Shao <sai.sai.s...@gmail.com>
> wrote:
>
>> Hi Siva,
>>
>> How did you know that --executor-cores is ignored and where did you see
>> that only 1 Vcore is alloc
Normally there will be one RDD in each batch.
You could refer to the implementation of DStream#getOrCompute.
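To make that concrete (a hedged sketch; `stream` is any existing DStream), foreachRDD hands you that per-batch RDD along with the batch time:

```scala
stream.foreachRDD { (rdd, time) =>
  // invoked once per batch interval, normally with exactly one RDD
  println(s"batch at $time: ${rdd.partitions.length} partitions")
}
```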
On Mon, Dec 21, 2015 at 11:04 AM, Arun Patel
wrote:
> It may be simple question...But, I am struggling to understand this
>
> DStream is a sequence of RDDs
[
https://issues.apache.org/jira/browse/SPARK-10500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061730#comment-15061730
]
Saisai Shao commented on SPARK-10500:
-
[~sunrui] It would be better to back port to 1.5 if possible
[
https://issues.apache.org/jira/browse/SPARK-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063359#comment-15063359
]
Saisai Shao commented on SPARK-12400:
-
So from my understanding this will only exist when
[
https://issues.apache.org/jira/browse/SPARK-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15063372#comment-15063372
]
Saisai Shao commented on SPARK-12400:
-
[~rxin], would you mind me taking a crack at this issue
Please check the YARN AM log to see why the AM failed to start. That's the
reason why using `sc` gets such a complaint.
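One way to pull those logs, assuming YARN log aggregation is enabled (the application id below is a placeholder):

```
yarn logs -applicationId application_1450000000000_0001
```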
On Fri, Dec 18, 2015 at 4:25 AM, Eran Witkon wrote:
> Hi,
> I am trying to install spark 1.5.2 on Apache hadoop 2.6 and Hive and yarn
>
> spark-env.sh
>
[
https://issues.apache.org/jira/browse/SPARK-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061639#comment-15061639
]
Saisai Shao commented on SPARK-12384:
-
IIUC, there's also another limitation in container level
[
https://issues.apache.org/jira/browse/SPARK-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059937#comment-15059937
]
Saisai Shao commented on SPARK-12345:
-
I think by default Spark Mesos implementation will ship all
[
https://issues.apache.org/jira/browse/SPARK-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059954#comment-15059954
]
Saisai Shao commented on SPARK-12345:
-
Having a quick test by not exporting {{SPARK_HOME
[
https://issues.apache.org/jira/browse/SPARK-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060035#comment-15060035
]
Saisai Shao commented on SPARK-12345:
-
Here is one solution
(https://github.com/apache/spark
[
https://issues.apache.org/jira/browse/SPARK-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059778#comment-15059778
]
Saisai Shao commented on SPARK-12345:
-
A simple solution is to change the scripts to not expose
SPARK-6470 only supports node label expressions for executors.
SPARK-7173 supports node label expressions for the AM (will be in 1.6).
If you want to schedule your whole application through label expressions,
you have to configure both the AM and executor label expressions. If you only
want to schedule
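In spark-defaults.conf terms that pairing looks like the following (the label name "spark" is an example; the AM-side property lands with SPARK-7173 in 1.6):

```
spark.yarn.am.nodeLabelExpression        spark
spark.yarn.executor.nodeLabelExpression  spark
```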
zzq98...@alibaba-inc.com]
> *Sent:* December 16, 2015 9:21
> *To:* 'Ted Yu'
> *Cc:* 'Saisai Shao'; 'dev'
> *Subject:* Re: spark with label nodes in yarn
>
>
>
> Oops...
>
>
>
> I do use spark 1.5.0 and apache hadoop 2.6.0 (spark 1.4.1 + apache hadoop
> 2.6.0 is
[
https://issues.apache.org/jira/browse/SPARK-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059569#comment-15059569
]
Saisai Shao commented on SPARK-12345:
-
It is OK in my local test when I followed the steps one by one
[
https://issues.apache.org/jira/browse/SPARK-12176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15055508#comment-15055508
]
Saisai Shao commented on SPARK-12176:
-
It is OK in my local test against latest master branch, seems
[
https://issues.apache.org/jira/browse/SPARK-9059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049804#comment-15049804
]
Saisai Shao commented on SPARK-9059:
HasOffsetRanges also has a Python version, which was added in SPARK
[
https://issues.apache.org/jira/browse/SPARK-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15049958#comment-15049958
]
Saisai Shao commented on SPARK-6735:
I've submitted a patch to continue this work
(https://github.com
I think this is the right JIRA to fix this issue (
https://issues.apache.org/jira/browse/SPARK-7111). It should be in Spark
1.4.
On Thu, Dec 10, 2015 at 12:32 AM, Cody Koeninger wrote:
> Looks like probably
>
> https://issues.apache.org/jira/browse/SPARK-8701
>
> so 1.5.0
>
[
https://issues.apache.org/jira/browse/SPARK-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046602#comment-15046602
]
Saisai Shao commented on SPARK-12178:
-
This is a good idea to make it generic if there's more direct
Please make sure the spark-shell script you're running points to
/bin/spark-shell.
Just following the instructions to correctly configure your Spark 1.4.1 and
executing the correct script should be enough.
On Wed, Dec 9, 2015 at 11:28 AM, Divya Gehlot
wrote:
> Hi,
> As per
[
https://issues.apache.org/jira/browse/SPARK-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041260#comment-15041260
]
Saisai Shao commented on SPARK-10123:
-
Hi [~vanzin], would you mind letting me take a crack
[
https://issues.apache.org/jira/browse/SPARK-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15041301#comment-15041301
]
Saisai Shao commented on SPARK-12103:
-
I think I had a proposal of message handler (receiver
[
https://issues.apache.org/jira/browse/SPARK-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15042598#comment-15042598
]
Saisai Shao commented on SPARK-10123:
-
Just confirming whether it is in your plan, in case of duplicated
[
https://issues.apache.org/jira/browse/SPARK-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034997#comment-15034997
]
Saisai Shao edited comment on SPARK-12059 at 12/2/15 12:47 AM:
---
A simple
[
https://issues.apache.org/jira/browse/SPARK-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15034997#comment-15034997
]
Saisai Shao commented on SPARK-12059:
-
A simple solution is to loosen the condition or remove
[
https://issues.apache.org/jira/browse/SPARK-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15035350#comment-15035350
]
Saisai Shao commented on SPARK-12059:
-
I see, so I will relax the condition to avoid exception from
[
https://issues.apache.org/jira/browse/SPARK-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15032806#comment-15032806
]
Saisai Shao commented on SPARK-12059:
-
Thanks a lot [~andrewor14], I will look into this issue
[
https://issues.apache.org/jira/browse/SPARK-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15033268#comment-15033268
]
Saisai Shao commented on SPARK-12059:
-
Hi [~andrewor14], when would this happen? I suppose the state
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15031308#comment-15031308
]
Saisai Shao commented on SPARK-12009:
-
OK, I see. So how about setting {{YarnAllocator
It might be related to this JIRA (
https://issues.apache.org/jira/browse/SPARK-11761); I'm not very sure about it.
On Fri, Nov 27, 2015 at 10:22 AM, Nan Zhu wrote:
> Hi, all
>
> Anyone noticed that some of the tests just blocked at the test case “don't
> call ssc.stop in
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029425#comment-15029425
]
Saisai Shao commented on SPARK-12009:
-
So I guess your problem is that after you call {{sc.stop
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029416#comment-15029416
]
Saisai Shao commented on SPARK-12009:
-
So what version of Spark are you actually running? 1.4.0
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029416#comment-15029416
]
Saisai Shao edited comment on SPARK-12009 at 11/27/15 3:30 AM:
---
So what
[
https://issues.apache.org/jira/browse/SPARK-12002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029421#comment-15029421
]
Saisai Shao commented on SPARK-12002:
-
Looks like because Python `KafkaTransformDStream` specific
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029431#comment-15029431
]
Saisai Shao edited comment on SPARK-12009 at 11/27/15 3:50 AM:
---
Alright, my
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15029431#comment-15029431
]
Saisai Shao commented on SPARK-12009:
-
Alright, my code is on the master branch. Anyway, I understood your
[
https://issues.apache.org/jira/browse/SPARK-12009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15028340#comment-15028340
]
Saisai Shao commented on SPARK-12009:
-
Looking at the code again, {{onDisconnected