Thanks both of you guys on this!
bit1...@163.com
From: Akhil Das
Date: 2015-02-24 12:58
To: Tathagata Das
CC: user; bit1129
Subject: Re: About FlumeUtils.createStream
I see, thanks for the clarification TD.
The behavior is exactly what I expected. Thanks Akhil and Tathagata!
bit1...@163.com
From: Akhil Das
Date: 2015-02-24 13:32
To: bit1129
CC: Tathagata Das; user
Subject: Re: Re: About FlumeUtils.createStream
That depends on how many machines you have in your cluster. Say you have 6
workers...

> ...will they stay on one cluster node, or will they be distributed
> among the cluster nodes?
bit1...@163.com
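To make the distribution question concrete, here is a minimal sketch of the usual pattern for spreading receivers across workers: create one stream per worker hostname and union them. The hostnames, port, app name, and batch interval below are all assumed values, not from the thread.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

val conf = new SparkConf().setAppName("FlumeReceivers")
val ssc = new StreamingContext(conf, Seconds(10))

// One receiver per worker hostname; a Flume Avro sink must be
// configured to push to each of these host:port pairs.
val hosts = Seq("worker-1", "worker-2", "worker-3") // assumed hostnames
val streams = hosts.map(h => FlumeUtils.createStream(ssc, h, 9999))

// Union the per-host streams into a single DStream for processing.
val events = ssc.union(streams)
events.count().print()

ssc.start()
ssc.awaitTermination()
```

Each `createStream` call launches its own receiver, which is why the answer depends on how many worker machines are available to host them.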
On 24 Feb 2015 09:56, Tathagata Das t...@databricks.com wrote:
Akhil, that is incorrect.
Spark will listen on the given port for Flume to push data into it.
When in local mode, it will listen on that port on localhost.
When in some kind of cluster, instead of localhost you have to give the
hostname of the cluster machine where the Flume receiver should run, and
that machine will listen on the given port.
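A minimal sketch of the difference TD describes (the hostname and port below are assumed placeholders, not from the thread):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

val ssc = new StreamingContext(
  new SparkConf().setAppName("FlumeDemo"), Seconds(10))

// Local mode: the receiver runs in the local JVM and binds the
// given port on localhost; Flume pushes to localhost:9999.
val localStream = FlumeUtils.createStream(ssc, "localhost", 9999)

// On a cluster you would instead pass the hostname of the worker
// node where the receiver should run; Flume's Avro sink must push
// to that same host:port. (Use one or the other, not both.)
val clusterStream =
  FlumeUtils.createStream(ssc, "worker-1.example.com", 9999)
```

The key point is that `createStream` is push-based: Spark opens the listening socket and Flume is the client that connects to it.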
Spark won't listen on that port, mate. It basically means you have a Flume source
running at that port on your localhost. And when you submit your
application in standalone mode, workers will consume data from that port.
Thanks
Best Regards
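For reference, the Flume side of this setup is an Avro sink pointed at the host and port where the Spark receiver listens. This is a hypothetical config fragment; the agent name, channel, hostname, and port are assumptions.

```properties
# Avro sink pushing events to the Spark Flume receiver's host:port
agent.sinks = sparkSink
agent.sinks.sparkSink.type = avro
agent.sinks.sparkSink.hostname = worker-1.example.com
agent.sinks.sparkSink.port = 9999
agent.sinks.sparkSink.channel = memoryChannel
```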
On Sat, Feb 21, 2015 at 9:22 AM, bit1...@163.com wrote: