RE: spark sql

2014-08-02 Thread N . Venkata Naga Ravi
Hi Rajesh,

Can you recheck the version and your code again?
I tried the similar code below and it works fine (compiles and executes)...

  // Apply a schema to an RDD of Java Beans and register it as a table.
  JavaSchemaRDD schemaPeople = sqlCtx.applySchema(people, Person.class);
  schemaPeople.registerAsTable("people");

  // SQL can be run over RDDs that have been registered as tables.
  JavaSchemaRDD teenagers = sqlCtx.sql("SELECT * FROM people WHERE age >= 13");

  // The results of SQL queries are SchemaRDDs and support all the normal RDD operations.
  // The columns of a row in the result can be accessed by ordinal.
  List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
    public String call(Row row) {
      return "Name: " + row.getString(1) + " - Age: " + row.getInt(0);
    }
  }).collect();
  for (String name : teenagerNames) {
    System.out.println(name);
  }


Can you try replacing Test.class with ANAInventory.class here and check?

   JavaRDD<ANAInventory> retrdd = jrdd.map(f);

   JavaSchemaRDD schemaPeople = sqlCtx.applySchema(retrdd, Test.class);

However, it should compile fine either way and may only show issues at execution time. Can you try the Spark SQL example and share the result?
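Something like the below is what I mean -- just a hand-written sketch against the 1.0 Java API (not compiled here), assuming ANAInventory is a Java Bean with getters for its fields so that applySchema can pick up the columns:

// Hedged sketch: apply the schema of the bean class you actually map to,
// and register the table under a plain string name before querying it.
// Make sure Row is org.apache.spark.sql.api.java.Row and Function is
// org.apache.spark.api.java.function.Function (the Java API classes).
JavaRDD<ANAInventory> retrdd = jrdd.map(f);

JavaSchemaRDD schemaInventory = sqlCtx.applySchema(retrdd, ANAInventory.class);
schemaInventory.registerAsTable("retrdd");

JavaSchemaRDD rows = sqlCtx.sql("SELECT * FROM retrdd");

List<String> names = rows.map(new Function<Row, String>() {
  public String call(Row row) {
    return row.getString(0);
  }
}).collect();

for (String name : names) {
  System.out.println(name);
}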

Thanks,
Ravi

Date: Sat, 2 Aug 2014 18:36:26 +0530
Subject: Re: spark sql
From: mrajaf...@gmail.com
To: user@spark.apache.org

Hi Team,

Could you please help me resolve the above compilation issue?

Regards,
Rajesh


On Sat, Aug 2, 2014 at 2:02 AM, Madabhattula Rajesh Kumar mrajaf...@gmail.com 
wrote:

Hi Team,

I'm not able to print the values from a Spark SQL JavaSchemaRDD. Please find my code below:


JavaSQLContext sqlCtx = new JavaSQLContext(sc);

NewHadoopRDD<ImmutableBytesWritable, Result> rdd =
    new NewHadoopRDD<ImmutableBytesWritable, Result>(
        JavaSparkContext.toSparkContext(sc),
        TableInputFormat.class, ImmutableBytesWritable.class,
        Result.class, conf);

JavaRDD<Tuple2<ImmutableBytesWritable, Result>> jrdd = rdd.toJavaRDD();

ForEachFunction f = new ForEachFunction();

JavaRDD<ANAInventory> retrdd = jrdd.map(f);

JavaSchemaRDD schemaPeople = sqlCtx.applySchema(retrdd, Test.class);
schemaPeople.registerAsTable("retrdd");

JavaSchemaRDD teenagers = sqlCtx.sql("SELECT * FROM retrdd");
   
When I add the below code, it gives a compilation issue. Could you please help me resolve this issue?




List<String> teenagerNames = teenagers.map(new Function<Row, String>() {
  public String call(Row row) {
    return null;
  }
}).collect();
for (String name : teenagerNames) {
  System.out.println(name);
}

Compilation issue:



The method map(Function<Row,R>) in the type JavaSchemaRDD is not applicable for the arguments (new Function<Row, String>(){})




Thank you for your help

Regards,
Rajesh
 



 
  

Spark SQL Query Plan optimization

2014-08-01 Thread N . Venkata Naga Ravi






Hi,

I am trying to understand the query plan and the number of tasks / execution time for a join query.

Consider the following example, which creates two tables, emp and sal, with 100 records each and a key for joining them.

EmpRDDRelation.scala

case class EmpRecord(key: Int, value: String)
case class SalRecord(key: Int, salary: Int)

object EmpRDDRelation {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setMaster("local[1]").setAppName("RDDRelation")
    val sc = new SparkContext(sparkConf)
    val sqlContext = new SQLContext(sc)

    // Importing the SQL context gives access to all the SQL functions and implicit conversions.
    import sqlContext._

    var rdd = sc.parallelize((1 to 100).map(i => EmpRecord(i, s"name_$i")))

    rdd.registerAsTable("emp")

    // Once tables have been registered, you can run SQL queries over them.
    println("Result of SELECT *:")
    sql("SELECT * FROM emp").collect().foreach(println)

    var salrdd = sc.parallelize((1 to 100).map(i => SalRecord(i, i * 100)))

    salrdd.registerAsTable("sal")
    sql("SELECT * FROM sal").collect().foreach(println)

    var salRRDFromSQL = sql(
      "SELECT emp.key, value, salary FROM emp, sal WHERE emp.key = 30 AND emp.key = sal.key")
    salRRDFromSQL.collect().foreach(println)
  }
}

Here are my observations:

Below is the query plan for the above join query, which creates 150 tasks. I can see that a Filter is added to the plan, but I am not sure whether it is applied in an optimized way. First of all, it is not clear why 150 tasks are required: I see the same 150 tasks when I execute the join query without the emp.key = 30 filter (i.e. SELECT emp.key, value, salary FROM emp, sal WHERE emp.key = sal.key), and both cases take the same time. My understanding is that the emp.key = 30 filter should be applied first, and only the filtered records from the emp table should then be joined with the sal table (from the Oracle RDBMS perspective). But here the query plan seems to join the tables first and apply the filter later. Is there any way we can improve this on the code side, or does it require an enhancement on the Spark SQL side?

Please review my observations and let me know your comments.


== Query Plan ==
Project [key#0:0,value#1:1,salary#3:3]
 HashJoin [key#0], [key#2], BuildRight
  Exchange (HashPartitioning [key#0:0], 150)
   Filter (key#0:0 = 30)
ExistingRdd [key#0,value#1], MapPartitionsRDD[1] at mapPartitions at 
basicOperators.scala:174
  Exchange (HashPartitioning [key#2:0], 150)
   ExistingRdd [key#2,salary#3], MapPartitionsRDD[5] at mapPartitions at 
basicOperators.scala:174), which is now runnable
14/08/01 22:20:02 INFO DAGScheduler: Submitting 150 missing tasks from Stage 2 
(SchemaRDD[8] at RDD at SchemaRDD.scala:98
== Query Plan ==
Project [key#0:0,value#1:1,salary#3:3]
 HashJoin [key#0], [key#2], BuildRight
  Exchange (HashPartitioning [key#0:0], 150)
   Filter (key#0:0 = 30)
ExistingRdd [key#0,value#1], MapPartitionsRDD[1] at mapPartitions at 
basicOperators.scala:174
  Exchange (HashPartitioning [key#2:0], 150)
   ExistingRdd [key#2,salary#3], MapPartitionsRDD[5] at mapPartitions at 
basicOperators.scala:174)
14/08/01 22:20:02 INFO TaskSchedulerImpl: Adding task set 2.0 with 150 tasks
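
One workaround I am thinking of trying (a sketch only, written against the same sqlContext and tables as above, not benchmarked yet): pre-filter emp into its own registered table before the join, so that only the filtered rows are shuffled into the join.

    // Sketch: filter emp first, register the result as a new table, then join
    // the (much smaller) filtered table with sal.
    val filteredEmp = sql("SELECT key, value FROM emp WHERE key = 30")
    filteredEmp.registerAsTable("empFiltered")

    val joined = sql("SELECT empFiltered.key, value, salary FROM empFiltered, sal " +
      "WHERE empFiltered.key = sal.key")
    joined.collect().foreach(println)

Even with this I am not sure it avoids the 150 shuffle tasks, so comments are welcome.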


  

RE: Spark with HBase

2014-07-04 Thread N . Venkata Naga Ravi
Hi,

Any update on the solution? We are still facing this issue.
We are able to connect to HBase with standalone code, but we get this issue with the Spark integration.
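
For reference, what we are trying to run through Spark is roughly the sketch below (a hand-written sketch, not our exact code; the table name and ZooKeeper quorum are placeholders, and it assumes the HBase 0.96 client jars matching the running cluster are on both the driver and executor classpaths):

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.spark.{SparkConf, SparkContext}

    object HBaseReadSketch {
      def main(args: Array[String]) {
        val sc = new SparkContext(new SparkConf().setAppName("HBaseReadSketch"))

        val conf = HBaseConfiguration.create()
        conf.set(TableInputFormat.INPUT_TABLE, "test")   // placeholder table name
        conf.set("hbase.zookeeper.quorum", "localhost")  // placeholder quorum

        // Same pattern as the bundled HBaseTest example: read the table as an
        // RDD of (row key, Result) pairs through the HBase TableInputFormat.
        val rdd = sc.newAPIHadoopRDD(conf, classOf[TableInputFormat],
          classOf[ImmutableBytesWritable], classOf[Result])

        println("Row count: " + rdd.count())
        sc.stop()
      }
    }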

Thx,
Ravi

From: nvn_r...@hotmail.com
To: u...@spark.incubator.apache.org; user@spark.apache.org
Subject: RE: Spark with HBase
Date: Sun, 29 Jun 2014 15:32:42 +0530




+user@spark.apache.org

From: nvn_r...@hotmail.com
To: u...@spark.incubator.apache.org
Subject: Spark with HBase
Date: Sun, 29 Jun 2014 15:28:43 +0530




I am using the following versions:

spark-1.0.0-bin-hadoop2
hbase-0.96.1.1-hadoop2


When executing the HBase test, I am facing the following exception. It looks like some version incompatibility; can you please help with it?

NERAVI-M-70HY:spark-1.0.0-bin-hadoop2 neravi$ ./bin/run-example 
org.apache.spark.examples.HBaseTest local localhost:4040 test



14/06/29 15:14:14 INFO RecoverableZooKeeper: The identifier of this process is 
69...@neravi-m-70hy.cisco.com
14/06/29 15:14:14 INFO ClientCnxn: Opening socket connection to server 
localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL 
(unknown error)
14/06/29 15:14:14 INFO ClientCnxn: Socket connection established to 
localhost/0:0:0:0:0:0:0:1:2181, initiating session
14/06/29 15:14:14 INFO ClientCnxn: Session establishment complete on server 
localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x146e6fa10750009, negotiated 
timeout = 4
Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF


192.168.1.6�(
at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
at org.apache.hadoop.hbase.ServerName.<init>(ServerName.java:101)
at 
org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
at 
org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
at 
org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:126)
at org.apache.spark.examples.HBaseTest$.main(HBaseTest.scala:37)
at org.apache.spark.examples.HBaseTest.main(HBaseTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


Thanks,
Ravi

  

Spark with HBase

2014-06-29 Thread N . Venkata Naga Ravi
I am using the following versions:

spark-1.0.0-bin-hadoop2
hbase-0.96.1.1-hadoop2


When executing the HBase test, I am facing the following exception. It looks like some version incompatibility; can you please help with it?

NERAVI-M-70HY:spark-1.0.0-bin-hadoop2 neravi$ ./bin/run-example 
org.apache.spark.examples.HBaseTest local localhost:4040 test



14/06/29 15:14:14 INFO RecoverableZooKeeper: The identifier of this process is 
69...@neravi-m-70hy.cisco.com
14/06/29 15:14:14 INFO ClientCnxn: Opening socket connection to server 
localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL 
(unknown error)
14/06/29 15:14:14 INFO ClientCnxn: Socket connection established to 
localhost/0:0:0:0:0:0:0:1:2181, initiating session
14/06/29 15:14:14 INFO ClientCnxn: Session establishment complete on server 
localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x146e6fa10750009, negotiated 
timeout = 4
Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF


192.168.1.6�(
at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
at org.apache.hadoop.hbase.ServerName.<init>(ServerName.java:101)
at 
org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
at 
org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
at 
org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:126)
at org.apache.spark.examples.HBaseTest$.main(HBaseTest.scala:37)
at org.apache.spark.examples.HBaseTest.main(HBaseTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


Thanks,
Ravi
  

RE: Spark with HBase

2014-06-29 Thread N . Venkata Naga Ravi
+user@spark.apache.org

From: nvn_r...@hotmail.com
To: u...@spark.incubator.apache.org
Subject: Spark with HBase
Date: Sun, 29 Jun 2014 15:28:43 +0530




I am using the following versions:

spark-1.0.0-bin-hadoop2
hbase-0.96.1.1-hadoop2


When executing the HBase test, I am facing the following exception. It looks like some version incompatibility; can you please help with it?

NERAVI-M-70HY:spark-1.0.0-bin-hadoop2 neravi$ ./bin/run-example 
org.apache.spark.examples.HBaseTest local localhost:4040 test



14/06/29 15:14:14 INFO RecoverableZooKeeper: The identifier of this process is 
69...@neravi-m-70hy.cisco.com
14/06/29 15:14:14 INFO ClientCnxn: Opening socket connection to server 
localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL 
(unknown error)
14/06/29 15:14:14 INFO ClientCnxn: Socket connection established to 
localhost/0:0:0:0:0:0:0:1:2181, initiating session
14/06/29 15:14:14 INFO ClientCnxn: Session establishment complete on server 
localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x146e6fa10750009, negotiated 
timeout = 4
Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF


192.168.1.6�(
at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
at org.apache.hadoop.hbase.ServerName.<init>(ServerName.java:101)
at 
org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
at 
org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
at 
org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
at 
org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:126)
at org.apache.spark.examples.HBaseTest$.main(HBaseTest.scala:37)
at org.apache.spark.examples.HBaseTest.main(HBaseTest.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:292)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:55)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)


Thanks,
Ravi

  

Spark Streaming with HBase

2014-06-29 Thread N . Venkata Naga Ravi
Hi,

Is there any example of Spark Streaming with input provided from HBase table content?

Thanks,
Ravi
  

Spark with Drill

2014-05-16 Thread N . Venkata Naga Ravi
Hi,

I am trying to understand Drill, which I see as one of the interesting upcoming tools.
Can somebody clarify where Drill will be positioned in the Hadoop ecosystem compared with Spark and Shark?
Is it going to be used as an alternative to Spark/Shark or Storm, or can Drill integrate with them in this stack layer?
I also see that MapR (a major contributor to Drill) is going to package Spark, per their recent announcement.


Thanks,
Ravi  

RE: Apache Spark is not building in Mac/Java 8

2014-05-02 Thread N . Venkata Naga Ravi
Thanks for your quick reply.

I tried with a fresh installation, and it downloads sbt 0.12.4 only (please check the logs below), so it is not working. Can you tell me where this 1.0 release candidate is located so that I can try it?

dhcp-173-39-68-28:spark-0.9.1 neravi$ ./sbt/sbt assembly
Attempting to fetch sbt
 100.0%
Launching sbt from sbt/sbt-launch-0.12.4.jar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=350m; 
support was removed in 8.0
[info] Loading project definition from /Applications/spark-0.9.1/project/project
[info] Updating 
{file:/Applications/spark-0.9.1/project/project/}default-f15d5a...
[info] Resolving org.scala-sbt#precompiled-2_10_1;0.12.4 ...
[info] Done updating.
[info] Compiling 1 Scala source to 
/Applications/spark-0.9.1/project/project/target/scala-2.9.2/sbt-0.12/classes...
[error] error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken
[error] (bad constant pool tag 18 at byte 10)
[error] error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken
[error] (bad constant pool tag 18 at byte 20)
[error] two errors found
[error] (compile:compile) Compilation failed
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? q
dhcp-173-39-68-28:spark-0.9.1 neravi$ ls
CHANGES.txt    assembly    core      docs        extras                  pom.xml    sbin         yarn
LICENSE        bagel       data      ec2         graphx                  project    sbt
NOTICE         bin         dev       examples    make-distribution.sh    python     streaming
README.md      conf        docker    external    mllib                   repl       tools
dhcp-173-39-68-28:spark-0.9.1 neravi$ cd sbt/
dhcp-173-39-68-28:sbt neravi$ ls
sbt    sbt-launch-0.12.4.jar


From: scrapco...@gmail.com
Date: Fri, 2 May 2014 16:02:48 +0530
Subject: Re: Apache Spark is not building in Mac/Java 8
To: user@spark.apache.org

You will need to change the sbt version to 0.13.2. I think Spark 0.9.1 was released with sbt 0.13? In case it was not, then it may not work with Java 8. Just wait for the 1.0 release or give the 1.0 release candidate a try!


http://mail-archives.apache.org/mod_mbox/spark-dev/201404.mbox/%3CCABPQxstL6nwTO2H9p8%3DGJh1g2zxOJd02Wt7L06mCLjo-vwwG9Q%40mail.gmail.com%3E


Prashant Sharma


On Fri, May 2, 2014 at 3:56 PM, N.Venkata Naga Ravi nvn_r...@hotmail.com 
wrote:

Hi,


I am trying to build Apache Spark with Java 8 on my Mac system (OS X 10.8.5), but I am getting the following exception.
Please help me resolve it.


dhcp-173-39-68-28:spark-0.9.1 neravi$ java -version


java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode)
dhcp-173-39-68-28:spark-0.9.1 neravi$ ./sbt/sbt assembly
Launching sbt from sbt/sbt-launch-0.12.4.jar


Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=350m; 
support was removed in 8.0
[info] Loading project definition from /Applications/spark-0.9.1/project/project
[info] Compiling 1 Scala source to 
/Applications/spark-0.9.1/project/project/target/scala-2.9.2/sbt-0.12/classes...


[error] error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken
[error] (bad constant pool tag 18 at byte 10)


[error] error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken
[error] (bad constant pool tag 18 at byte 20)


[error] two errors found
[error] (compile:compile) Compilation failed
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? r
[info] Loading project definition from /Applications/spark-0.9.1/project/project


[info] Compiling 1 Scala source to 
/Applications/spark-0.9.1/project/project/target/scala-2.9.2/sbt-0.12/classes...
[error] error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken


[error] (bad constant pool tag 18 at byte 10)
[error] error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken


[error] (bad constant pool tag 18 at byte 20)
[error] two errors found
[error] (compile:compile) Compilation failed
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? q


Thanks,
Ravi
  

  

  

RE: Apache Spark is not building in Mac/Java 8

2014-05-02 Thread N . Venkata Naga Ravi
Thanks Prashant. The 1.0 RC version is working fine on my system.
Let me explore further and get back to you.

Thanks Again,
Ravi

From: scrapco...@gmail.com
Date: Fri, 2 May 2014 16:22:40 +0530
Subject: Re: Apache Spark is not building in Mac/Java 8
To: user@spark.apache.org

I have pasted the link in my previous post.

Prashant Sharma


On Fri, May 2, 2014 at 4:15 PM, N.Venkata Naga Ravi nvn_r...@hotmail.com 
wrote:





Thanks for your quick reply.

I tried with a fresh installation, and it downloads sbt 0.12.4 only (please check the logs below), so it is not working. Can you tell me where this 1.0 release candidate is located so that I can try it?



dhcp-173-39-68-28:spark-0.9.1 neravi$ ./sbt/sbt assembly
Attempting to fetch sbt
 100.0%
Launching sbt from sbt/sbt-launch-0.12.4.jar


Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=350m; 
support was removed in 8.0
[info] Loading project definition from /Applications/spark-0.9.1/project/project
[info] Updating 
{file:/Applications/spark-0.9.1/project/project/}default-f15d5a...


[info] Resolving org.scala-sbt#precompiled-2_10_1;0.12.4 ...
[info] Done updating.
[info] Compiling 1 Scala source to 
/Applications/spark-0.9.1/project/project/target/scala-2.9.2/sbt-0.12/classes...


[error] error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken
[error] (bad constant pool tag 18 at byte 10)


[error] error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken
[error] (bad constant pool tag 18 at byte 20)


[error] two errors found
[error] (compile:compile) Compilation failed
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? q
dhcp-173-39-68-28:spark-0.9.1 neravi$ ls
CHANGES.txt    assembly    core      docs        extras                  pom.xml    sbin         yarn
LICENSE        bagel       data      ec2         graphx                  project    sbt
NOTICE         bin         dev       examples    make-distribution.sh    python     streaming
README.md      conf        docker    external    mllib                   repl       tools
dhcp-173-39-68-28:spark-0.9.1 neravi$ cd sbt/
dhcp-173-39-68-28:sbt neravi$ ls
sbt    sbt-launch-0.12.4.jar




From: scrapco...@gmail.com
Date: Fri, 2 May 2014 16:02:48 +0530
Subject: Re: Apache Spark is not building in Mac/Java 8
To: user@spark.apache.org



You will need to change the sbt version to 0.13.2. I think Spark 0.9.1 was released with sbt 0.13? In case it was not, then it may not work with Java 8. Just wait for the 1.0 release or give the 1.0 release candidate a try!




http://mail-archives.apache.org/mod_mbox/spark-dev/201404.mbox/%3CCABPQxstL6nwTO2H9p8%3DGJh1g2zxOJd02Wt7L06mCLjo-vwwG9Q%40mail.gmail.com%3E




Prashant Sharma


On Fri, May 2, 2014 at 3:56 PM, N.Venkata Naga Ravi nvn_r...@hotmail.com 
wrote:

Hi,


I am trying to build Apache Spark with Java 8 on my Mac system (OS X 10.8.5), but I am getting the following exception.
Please help me resolve it.


dhcp-173-39-68-28:spark-0.9.1 neravi$ java -version




java version "1.8.0"
Java(TM) SE Runtime Environment (build 1.8.0-b132)
Java HotSpot(TM) 64-Bit Server VM (build 25.0-b70, mixed mode)
dhcp-173-39-68-28:spark-0.9.1 neravi$ ./sbt/sbt assembly
Launching sbt from sbt/sbt-launch-0.12.4.jar




Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=350m; 
support was removed in 8.0
[info] Loading project definition from /Applications/spark-0.9.1/project/project
[info] Compiling 1 Scala source to 
/Applications/spark-0.9.1/project/project/target/scala-2.9.2/sbt-0.12/classes...




[error] error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken
[error] (bad constant pool tag 18 at byte 10)




[error] error while loading Comparator, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/util/Comparator.class)'
 is broken
[error] (bad constant pool tag 18 at byte 20)




[error] two errors found
[error] (compile:compile) Compilation failed
Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore? r
[info] Loading project definition from /Applications/spark-0.9.1/project/project




[info] Compiling 1 Scala source to 
/Applications/spark-0.9.1/project/project/target/scala-2.9.2/sbt-0.12/classes...
[error] error while loading CharSequence, class file 
'/Library/Java/JavaVirtualMachines/jdk1.8.0.jdk/Contents/Home/jre/lib/rt.jar(java/lang/CharSequence.class)'
 is broken




[error] (bad constant pool tag 18 at byte 10)
[error] error while loading Comparator, class file