Hi,
I am getting a StackOverflowError when I run the FPGrowth algorithm on my
21 million transactions with a low support threshold, since I want almost every
product's association with every other product. I know the problem is caused
by the recursive lineage of the algorithm, but I don't know how to get
around this.
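Two things that sometimes help here; this is a hedged sketch, not the poster's code, and the minSupport value, partition count, and stack size below are illustrative assumptions:

```scala
import org.apache.spark.mllib.fpm.FPGrowth
import org.apache.spark.rdd.RDD

// Raising numPartitions spreads the conditional FP-trees over more, smaller
// tasks, which keeps each task's recursion shallower.
def mineFrequentItemsets(transactions: RDD[Array[String]]) = {
  val model = new FPGrowth()
    .setMinSupport(0.001)   // assumed value; "low" in the original post
    .setNumPartitions(500)  // assumed; the default is the input RDD's partition count
    .run(transactions)
  model.freqItemsets
}
```

If the overflow persists, a larger JVM thread stack raises the recursion limit, e.g. submitting with --conf "spark.driver.extraJavaOptions=-Xss16m" --conf "spark.executor.extraJavaOptions=-Xss16m".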
How do I take input from Apache Kafka into Apache Spark Streaming for
stream processing?
-sathya
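For the Kafka question, a minimal direct-stream sketch, assuming Spark 1.6/2.x with the spark-streaming-kafka-0-8 artifact on the classpath; broker addresses, topic name, and batch interval are placeholders:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("KafkaToSparkStreaming")
val ssc  = new StreamingContext(conf, Seconds(10))

val kafkaParams = Map("metadata.broker.list" -> "broker1:9092,broker2:9092")
val topics      = Set("myTopic")

// Direct (receiver-less) stream: one RDD partition per Kafka partition.
val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)

stream.map { case (_, value) => value }   // drop the Kafka message key
      .foreachRDD(rdd => println(s"batch size: ${rdd.count()}"))

ssc.start()
ssc.awaitTermination()
```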
Hi,
I have updated my question:
http://stackoverflow.com/questions/41345552/spark-streaming-with-yarn-executors-not-fully-utilized
On Wed, Dec 28, 2016 at 9:49 AM, Nishant Kumar wrote:
Hi,
I am running Spark Streaming on YARN with:
spark-submit --master yarn --deploy-mode cluster --num-executors 2
--executor-memory 8g --driver-memory 2g --executor-cores 8 ..
I am consuming Kafka through the DirectStream approach (no receiver). I have 2
topics (each with 3 partitions).
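One thing to check for the executor-utilization question: with --num-executors 2 --executor-cores 8 there are 16 cores, but a direct stream creates exactly one RDD partition per Kafka partition (here 2 topics × 3 partitions = 6), so each batch runs at most 6 tasks. A hedged sketch of one common fix; the `stream` variable and `process` function are illustrative, not from the original post:

```scala
// Assuming `stream` is the DStream returned by KafkaUtils.createDirectStream
// and `process` is your per-record work:
stream.repartition(16)            // spread 6 Kafka partitions over all 16 cores
      .foreachRDD { rdd =>
        rdd.foreach(record => process(record))
      }
```

Repartitioning adds a shuffle per batch, so it only pays off when the per-record work dominates; the alternative is simply adding more Kafka partitions.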
Hi Mich ,
Have you set SPARK_CLASSPATH in spark-env.sh?
Thanks,
Divya
On 27 December 2016 at 17:33, Mich Talebzadeh wrote:
Once you are in, there is no way out… :-)
> On Dec 27, 2016, at 7:37 PM, Kyle Kelley wrote:
>
> You are now in position 238 for unsubscription. If you wish for your
> unsubscription to occur immediately, please email
> dev-unsubscr...@spark.apache.org
>
> Best wishes.
Although the Spark task scheduler is aware of rack-level data locality, it
seems that only YARN implements support for it. Node-level locality, however,
still works in Standalone mode.
It is not necessary to copy the Hadoop config files into the Spark conf
directory. Set HADOOP_CONF_DIR to the directory containing them instead
(e.g. export HADOOP_CONF_DIR=/etc/hadoop/conf).
You should probably add --driver-class-path with the jar as well. In theory
--jars should add it to the driver too, but in my experience it does not (I
think there was a JIRA open on it). In any case you can find it on
Stack Overflow.
Hi all,
I have the same problem with Spark 2.0.2.
Best regards,
On Tue, Dec 27, 2016, 9:40 AM Mich Talebzadeh wrote:
Thanks Deepak,
but I get the same error unfortunately:
ADD_JARS="/home/hduser/jars/ojdbc6.jar" spark-shell
Spark context Web UI available at http://50.140.197.217:4041
Spark context available as 'sc' (master = local[*], app id =
local-1482842478988).
Spark session available as 'spark'.
Welcome to
How about this:
ADD_JARS="/home/hduser/jars/ojdbc6.jar" spark-shell
Thanks
Deepak
On Tue, Dec 27, 2016 at 5:04 PM, Mich Talebzadeh wrote:
Ok, I tried this but no luck:
spark-shell --jars /home/hduser/jars/ojdbc6.jar
Spark context Web UI available at http://50.140.197.217:4041
Spark context available as 'sc' (master = local[*], app id =
local-1482838526271).
Spark session available as 'spark'.
Welcome to
I meant ADD_JARS, since you said --jars is not working for you with spark-shell.
Thanks
Deepak
On Tue, Dec 27, 2016 at 4:51 PM, Mich Talebzadeh wrote:
Ok, just to be clear, do you mean
ADD_JARS="~/jars/ojdbc6.jar" spark-shell
or
spark-shell --jars $ADD_JARS
Thanks
Dr Mich Talebzadeh
LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
It works for me with Spark 1.6 (--jars).
Please try this:
ADD_JARS="<>" spark-shell
Thanks
Deepak
On Tue, Dec 27, 2016 at 3:49 PM, Mich Talebzadeh wrote:
Thanks.
The problem is that --jars does not work with spark-shell! This is Spark 2
accessing Oracle 12c:
spark-shell --jars /home/hduser/jars/ojdbc6.jar
It comes back with
java.sql.SQLException: No suitable driver
unfortunately, and spark-shell uses spark-submit under the bonnet if you look
at the spark-shell script.
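Once the jar actually reaches the driver classpath (for example spark-shell --jars /home/hduser/jars/ojdbc6.jar --driver-class-path /home/hduser/jars/ojdbc6.jar), naming the driver class explicitly usually clears the "No suitable driver" error. A sketch for inside the shell; the host, service name, table, and credentials below are placeholders, not from the thread:

```scala
// The URL, table and credentials are illustrative only.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:oracle:thin:@//oracle-host:1521/ORCL")
  .option("dbtable", "scott.emp")
  .option("user", "scott")
  .option("password", "tiger")
  .option("driver", "oracle.jdbc.OracleDriver") // explicit driver class
  .load()

df.printSchema()
```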
Hi Mich,
You can copy the jar to a shared location and use the --jars command-line
argument of spark-submit.
Whoever needs access to this jar can refer to the shared path and
access it using the --jars argument.
Thanks
Deepak
On Tue, Dec 27, 2016 at 3:03 PM, Mich Talebzadeh wrote:
I take it you don't want to use the --jars option, to avoid moving them every
time?
On Tue, 27 Dec 2016, 10:33 Mich Talebzadeh wrote:
When one runs in local mode (one JVM) on an edge host (the host users
access the cluster from), it is possible to put an additional jar file, say one
for accessing Oracle RDBMS tables, in $SPARK_CLASSPATH. This works:
export SPARK_CLASSPATH=~/user_jars/ojdbc6.jar
Normally a group of users can have read access to such a directory.