Why do you need to skip java tests? I build the distro just fine with Java
8.
On Dec 27, 2014 4:21 AM, Ted Yu yuzhih...@gmail.com wrote:
In case JDK 1.7 or higher is used to build, --skip-java-test needs to be
specified.
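For example, a minimal invocation might look like this (the --tgz flag and
the YARN/Hadoop profiles are illustrative; use whatever you already build
with):

  ./make-distribution.sh --skip-java-test --tgz -Pyarn -Phadoop-2.2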
FYI
On Thu, Dec 25, 2014 at 5:03 PM, guxiaobo1982 guxiaobo1...@qq.com
Hey,
Tried to get the new spark.dynamicAllocation.enabled feature working on
Yarn (Hadoop 2.2), but am unsuccessful so far. I've tested with the
following settings:
conf
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")
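For what it's worth, in 1.2 dynamic allocation also expects executor bounds
to be set; a minimal sketch (the min/max values here are illustrative
assumptions, not from this thread):

  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .set("spark.dynamicAllocation.enabled", "true")
    .set("spark.shuffle.service.enabled", "true")
    .set("spark.dynamicAllocation.minExecutors", "1")   // illustrative bound
    .set("spark.dynamicAllocation.maxExecutors", "10")  // illustrative bound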
Hi all,
Brand new to Spark and to big data technologies in general. Eventually I'd
like to contribute to the testing effort on Spark.
I have an ARM Chromebook at my disposal: that's it for the moment. I can
vouch that it's OK for sending Hive queries to an AWS EMR cluster via SQL
Workbench.
I
In make-distribution.sh, there is the following check of the Java version:
if [[ ! "$JAVA_VERSION" =~ "1.6" && -z "$SKIP_JAVA_TEST" ]]; then
  echo "***NOTE***: JAVA_HOME is not set to a JDK 6 installation. The
resulting"
FYI
On Sat, Dec 27, 2014 at 1:31 AM, Sean Owen so...@cloudera.com wrote:
Why do you need
Yes, it is just a warning, and it can be ignored unless you are also running
the old Java 6 at runtime.
On Dec 27, 2014 3:11 PM, Ted Yu yuzhih...@gmail.com wrote:
In make-distribution.sh, there is the following check of the Java version:
if [[ ! "$JAVA_VERSION" =~ "1.6" && -z "$SKIP_JAVA_TEST" ]]; then
echo
There are hardware recommendations at
http://spark.apache.org/docs/latest/hardware-provisioning.html but they're
overkill for just testing things out. You should be able to get meaningful
work done with two m3.large instances, for example.
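If you go the EC2 route, the bundled spark-ec2 script can bring up such a
cluster; a sketch (the key pair name, identity file, and cluster name are
placeholders):

  ./ec2/spark-ec2 -k my-keypair -i ~/my-keypair.pem -s 2 \
    --instance-type=m3.large launch test-cluster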
On Sat, Dec 27, 2014 at 8:27 AM, Amy Brown testingwithf...@gmail.com
This works for me:
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -DskipTests clean package
Hi Anders,
I faced the same issue as you mentioned. Yes, you need to install the
Spark shuffle plugin for YARN. Please check the following PRs, which add
docs on enabling dynamicAllocation:
https://github.com/apache/spark/pull/3731
https://github.com/apache/spark/pull/3757
I could run Spark on YARN with
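For reference, the NodeManager-side setup those docs describe amounts to
registering the shuffle service as a YARN auxiliary service in yarn-site.xml;
a sketch (values as given in the Spark docs, worth double-checking against
the merged PRs):

  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle,spark_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.spark_shuffle.class</name>
    <value>org.apache.spark.network.yarn.YarnShuffleService</value>
  </property>

The Spark YARN shuffle jar also has to be on the NodeManager classpath; see
the PRs above for the exact jar name.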
The problem is a conflict between the version of Jackson used in your cluster
and the one your application runs with. I would start by taking things like
the assembly jar off your classpath. Try the userClassPathFirst option as
well, to avoid using the Jackson that ships with your Hadoop distribution.
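A sketch of that option as it exists around 1.2 (the property has been
renamed across versions, so verify the exact name against your version's
configuration docs):

  // experimental in 1.2-era releases; prefers user jars over the cluster's
  conf.set("spark.files.userClassPathFirst", "true")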
Hi,
I build the 1.2.0
I have a job where I want to map over all the data in a Cassandra database.
I'm then selectively sending things to my own external system (ActiveMQ) if
the item matches criteria.
The problem is that I need to do some init and shutdown. Basically, on init
I need to create ActiveMQ connections, and on
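The usual pattern for that kind of init/shutdown is to do it once per
partition with foreachPartition; a minimal sketch (createConnection,
matchesCriteria, and send stand in for your ActiveMQ code):

  rdd.foreachPartition { items =>
    val conn = createConnection()  // init: once per partition, on the worker
    try {
      items.filter(matchesCriteria).foreach(item => send(conn, item))
    } finally {
      conn.close()                 // shutdown: runs even if sending fails
    }
  }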
Hi All,
I want to check whether an item is present in an RDD of Iterable[Int] using
Scala,
something like what we do in Java:
list.contains(item)
The statement returns true if the item is present, and false otherwise.
Please help me to find the solution.
Thanks
Amit
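Assuming you want to know whether any of the Iterable[Int] elements contains
the item, a minimal sketch:

  val item = 42  // the value to look for (illustrative)
  val found = rdd
    .map(_.exists(_ == item))  // true for each Iterable containing the item
    .fold(false)(_ || _)       // true if any element matched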
The console progress bars are implemented on top of a new stable status
API that was added in Spark 1.2. It's possible to query job progress
using this interface (in older versions of Spark, you could implement a
custom SparkListener and maintain the counts of completed / running /
failed tasks /
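In 1.2 that status API is exposed as sc.statusTracker; a minimal sketch of
polling it (assumes a live SparkContext named sc):

  val tracker = sc.statusTracker
  tracker.getActiveJobIds().foreach { jobId =>
    tracker.getJobInfo(jobId).foreach { job =>
      println(s"job $jobId: ${job.status}")
    }
  }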
Is the item you're looking up an Int? So you want to find which of the
Iterable[Int] elements in your RDD contains the Int you're looking for?
On Sat Dec 27 2014 at 3:26:41 PM Amit Behera amit.bd...@gmail.com wrote:
Hi All,
I want to check whether an item is present in an RDD of Iterable[Int]
Hi,
Doing:
val ssc = new StreamingContext(conf, Seconds(1))
and getting:
Only one SparkContext may be running in this JVM (see SPARK-2243). To
ignore this error, set spark.driver.allowMultipleContexts = true.
But I don't think that I have another SparkContext running. Is there any way
I
Are you trying to do this in the shell? The shell is instantiated with a
SparkContext named sc.
-Ilya Ganelin
On Sat, Dec 27, 2014 at 5:24 PM, tfrisk tfris...@gmail.com wrote:
Hi,
Doing:
val ssc = new StreamingContext(conf, Seconds(1))
and getting:
Only one SparkContext may be
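In the shell, the usual fix is to build the StreamingContext on top of the
existing context instead of a new conf; a minimal sketch:

  import org.apache.spark.streaming.{Seconds, StreamingContext}
  val ssc = new StreamingContext(sc, Seconds(1))  // reuse the shell's sc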
Currently only the standalone cluster mode is supported by the spark-ec2
script. You can use Cloudera/Ambari/SequenceIQ tooling to create a YARN
cluster.
Yes you are right - thanks for that :)
On 27 December 2014 at 23:18, Ilya Ganelin ilgan...@gmail.com wrote:
Are you trying to do this in the shell? The shell is instantiated with a
SparkContext named sc.
-Ilya Ganelin
On Sat, Dec 27, 2014 at 5:24 PM, tfrisk tfris...@gmail.com wrote:
Hi,
Compile error from Spark 1.2.0
Hello, I am zigen.
I am using Spark SQL 1.1.0 and I want to move to Spark SQL 1.2.0,
but my Spark application now gets a compile error.
Spark 1.1.0 had DataType.DecimalType,
but Spark 1.2.0 does not have DataType.DecimalType.
Why?
JavaDoc (Spark 1.1.0)
Hi All,
I am using Spark in a Grails app and have added the Maven coordinates below.
compile group: 'org.apache.spark', name: 'spark-core_2.10', version: '1.2.0'
It fails with the error below:
Resolve error obtaining dependencies: Failed to read artifact descriptor for
Please see:
[SPARK-3930] [SPARK-3933] Support fixed-precision decimal in SQL, and some
optimizations
Cheers
On Sat, Dec 27, 2014 at 7:20 PM, zigen dbviewer.zi...@gmail.com wrote:
Compile error from Spark 1.2.0
Hello , I am zigen.
I am using the Spark SQL 1.1.0.
I want to use the Spark
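In practice, the change from those JIRAs is that DecimalType now carries
optional precision and scale; a sketch of the 1.2-style usage (package and
apply signature worth verifying against the 1.2 API docs):

  import org.apache.spark.sql.catalyst.types.DecimalType

  val unlimited = DecimalType.Unlimited  // closest analogue of the old DataType.DecimalType
  val fixed = DecimalType(10, 2)         // fixed precision and scale, new in 1.2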
I encountered the following issue when enabling dynamicAllocation. You may
want to take a look at it.
https://issues.apache.org/jira/browse/SPARK-4951
Best Regards,
Shixiong Zhu
2014-12-28 2:07 GMT+08:00 Tsuyoshi OZAWA ozawa.tsuyo...@gmail.com:
Hi Anders,
I faced the same issue as you
What is the full error? That doesn't give much detail, but it could be due
to network problems, for example.
On Dec 28, 2014 4:16 AM, lalitagarw lalitag...@gmail.com wrote:
Hi All,
I am using Spark in a Grails app and have added the Maven coordinates below.
compile group: 'org.apache.spark', name: