things...
Thanks,
Ron
On Sep 8, 2014, at 7:41 PM, Tobias Pfeiffer t...@preferred.jp wrote:
Ron,
On Tue, Sep 9, 2014 at 11:27 AM, Ron's Yahoo! zlgonza...@yahoo.com.invalid
wrote:
I’m trying to figure out how I can run Spark Streaming like an API.
The goal is to have a synchronous REST API ...
Thanks,
Ron
On Sep 8, 2014, at 9:28 PM, Tobias Pfeiffer t...@preferred.jp wrote:
Hi,
On Tue, Sep 9, 2014 at 12:59 PM, Ron's Yahoo! zlgonza...@yahoo.com wrote:
I want to create a synchronous REST API that will process some data that is
passed in with the request.
I would imagine that the Spark ...
Not sure what your environment is, but this happened to me before because I had
a couple of servlet-api jars on the classpath which did not match.
I was building a system that programmatically submitted jobs, so I had my own
jars that conflicted with Spark’s. The solution is to run mvn ...
building only with -Phadoop-2.4? In the past users have reported issues
trying to build random spot versions... I think HW is supposed to be
compatible with the normal 2.4.0 build.
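For reference, a typical Maven invocation for a Hadoop 2.4 build of Spark 1.x looks like the following. This is a sketch based on the standard build profiles for that era of Spark, not necessarily the exact command under discussion; adjust the profiles and version flag to your release:

```shell
# Build Spark against Hadoop 2.4 with YARN support (Spark 1.x build profiles).
# -Dhadoop.version pins the exact Hadoop version so Maven does not fall back
# to the default (1.0.4 in older releases, which has no YARN artifacts).
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
```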
On Mon, Aug 4, 2014 at 8:35 AM, Ron's Yahoo! zlgonza...@yahoo.com.invalid
wrote:
Thanks, I ensured
I meant that yarn and hadoop defaulted to 1.0.4, so the yarn build fails since
1.0.4 doesn’t exist for yarn...
Thanks,
Ron
On Aug 4, 2014, at 10:01 AM, Ron's Yahoo! zlgonza...@yahoo.com wrote:
That failed since it defaulted the versions for yarn and hadoop.
I’ll give it a try with just 2.4.0.
I think you’re going to have to make it serializable by registering it with a
Kryo registrator. The workers run as separate JVMs, so Spark needs to be able
to serialize and deserialize broadcast variables on their way to the
different executors.
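A minimal sketch of that registration (Spark 1.x API), assuming a hypothetical class `MyRecord` standing in for the non-serializable broadcast value:

```scala
import com.esotericsoftware.kryo.Kryo
import org.apache.spark.SparkConf
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical stand-in for the class being broadcast.
case class MyRecord(id: Long, payload: String)

// Registers the class with Kryo so executors can deserialize it.
class MyRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo): Unit = {
    kryo.register(classOf[MyRecord])
  }
}

// Wire the registrator into the SparkConf before creating the context.
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryo.registrator", "MyRegistrator")
```

With this in place, `sc.broadcast(MyRecord(...))` can be shipped to executors even if `MyRecord` is not Java-serializable.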
Thanks,
Ron
On Aug 3, 2014, at 6:38