Hi,
Did you use our quickstart (= Maven archetype) scripts to set up your Maven
project?
We should integrate the "maven-assembly-plugin" configuration, with the
fat jar preconfigured, into the archetype so that users automatically get
their dependencies included. (
https://issues.apache.org/jira/brows
Hello Aljoscha and Robert,
Sorry for that stupid question. Building a fat JAR with Maven worked for
me. Thank you.
Actually, I tried to copy the json4s JARs to the lib folders of the
cluster, but that didn't work.
In YARN cluster mode, what is the right directory to put those JARs in? Is
it hadoop-V
If you are building the jar file using Maven, you can also use the
maven-assembly-plugin to build a fat jar (jar-with-dependencies).
Then, the dependencies will be packed into the job's jar file.
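For readers looking for a starting point, here is a minimal sketch of the plugin section one might add to a pom.xml for this. The `jar-with-dependencies` descriptor and `single` goal are standard maven-assembly-plugin features; the `mainClass` value is a placeholder you would replace with your job's entry class.

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptorRefs>
          <!-- predefined descriptor: bundle all dependencies into the jar -->
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
          <manifest>
            <!-- placeholder: your job's entry class -->
            <mainClass>com.example.MyJob</mainClass>
          </manifest>
        </archive>
      </configuration>
      <executions>
        <execution>
          <id>make-assembly</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place, `mvn package` produces an additional `*-jar-with-dependencies.jar` that already contains json4s and any other compile-scope dependencies.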
On Mon, Aug 11, 2014 at 7:38 AM, Aljoscha Krettek
wrote:
Hi,
it seems you have to put the json4s jar into the lib folder of your Flink
(Stratosphere) installation on every Slave Node. Are you using yarn or our
own cluster management?
Aljoscha
On Sun, Aug 10, 2014 at 10:36 PM, Norman Spangenberg <
wir12...@studserv.uni-leipzig.de> wrote:
Thank you Aljoscha,
Sorry, but now the next problem occurs.
The code I've posted works fine locally (in Eclipse), but in the cluster
environment there's a problem: NoClassDefFoundError: org/json4s/Formats
I'm not really sure whether this problem is because of Stratosphere/YARN
or json4s.
A litt
Hi,
I think it is a good way, yes. You could also handle the JSON parsing in a
custom input format but this would only shift the computation to a
different place. Performance should not be impacted by this. (I think
parsing JSON is slow no matter what you do and no matter what cluster
processing f
Hello Aljoscha,
Thanks for your reply. It was really helpful.
After some time to figure out the right syntax it worked perfectly.
val user_interest = lines.map( line => {
  val parsed = parse(line)
  implicit lazy val formats =
    org.
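For anyone else stuck on the same syntax, here is a self-contained sketch of the pattern: one map function that parses a line and emits a case class. To keep the snippet runnable without extra dependencies, `extractString` below is a hypothetical stand-in for a real JSON library call (with json4s you would instead call `parse(line)` and `.extract[...]` with an implicit `Formats` in scope); the case class and field names are illustrative.

```scala
// Sketch: parse one JSON line per map call and emit a plain case class.
// `extractString` is a hypothetical stand-in for json4s's parse/extract,
// so this compiles and runs with no external dependencies.
case class UserInterest(user: String, interest: String)

object JsonMapSketch {
  // Naive field extractor: finds "key":"value" in a flat JSON string.
  def extractString(json: String, key: String): String = {
    val pattern = ("\"" + key + "\"\\s*:\\s*\"([^\"]*)\"").r
    pattern.findFirstMatchIn(json).map(_.group(1)).getOrElse("")
  }

  def main(args: Array[String]): Unit = {
    val lines = Seq(
      """{"user":"norman","interest":"flink"}""",
      """{"user":"alice","interest":"spark"}"""
    )
    // In a Flink program, `lines` would be a DataSet[String] and the same
    // .map(...) would run in parallel on the cluster.
    val userInterest = lines.map { line =>
      UserInterest(extractString(line, "user"),
                   extractString(line, "interest"))
    }
    userInterest.foreach(println)
  }
}
```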
Hi Norman,
right now it is only possible to use Primitive Types and Case Classes (of
which tuples are a special case) as Scala Data Types. Your program could
work if you omit the second map function and instead put that code in your
first map function. This way you avoid having that custom JSON Typ
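To make that restructuring concrete, here is a hedged sketch: instead of a first map that emits a custom JSON value type (which the Scala API cannot serialize between operators) and a second map that extracts fields, do both steps inside a single map and emit a tuple. The `parseJson` helper is a deliberately trivial stand-in parser (semicolon-separated `key=value` pairs, not real JSON) so the example runs anywhere; all names are illustrative.

```scala
object SingleMapSketch {
  // Stand-in parser: "k=v;k=v" instead of real JSON, for brevity.
  def parseJson(line: String): Map[String, String] =
    line.split(";").map { kv =>
      val Array(k, v) = kv.split("=", 2)
      k -> v
    }.toMap

  def main(args: Array[String]): Unit = {
    val lines = Seq("user=norman;lang=de", "user=alice;lang=en")
    // One map does parsing AND extraction. The intermediate parsed value
    // stays local to the function; only a plain tuple, which the
    // framework can serialize, crosses the operator boundary.
    val result: Seq[(String, String)] = lines.map { line =>
      val parsed = parseJson(line)
      (parsed("user"), parsed("lang"))
    }
    result.foreach(println)
  }
}
```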
Hello,
I hope this is the right place for this question.
I'm currently experimenting with and comparing Flink/Stratosphere and
Apache Spark.
My goal is to analyse large JSON files of Twitter data, and now I'm
looking for a way to parse the JSON tuples in a map function and put them
in a DataSet.
For this