Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-23 Thread Nicholas Chammas
Do we have a JIRA issue to track this? I think I've run into a similar
issue.





Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-23 Thread Yin Huai
Yes, https://issues.apache.org/jira/browse/SPARK-2576 is used to track it.






Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-22 Thread Victor Sheng
Hi Yin Huai,
I tested again with your snippet code.
It works well in spark-1.0.1.

Here is my code:
 
 val sqlContext = new org.apache.spark.sql.SQLContext(sc)
 case class Record(data_date: String, mobile: String, create_time: String)
 val mobile = Record("2014-07-20", "1234567", "2014-07-19")
 val lm = List(mobile)
 val mobileRDD = sc.makeRDD(lm)
 val mobileSchemaRDD = sqlContext.createSchemaRDD(mobileRDD)
 mobileSchemaRDD.registerAsTable("mobile")
 sqlContext.sql("select count(1) from mobile").collect()
 
The Result is like below:
14/07/22 15:49:53 INFO spark.SparkContext: Job finished: collect at
SparkPlan.scala:52, took 0.296864832 s
res9: Array[org.apache.spark.sql.Row] = Array([1])

   
But what is the main cause of this exception? And how did you find it out by
looking at cryptic names like $line11.$read$
$line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$ ?

Thanks,
Victor




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tp10135p10390.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-22 Thread Yin Huai
It is caused by a bug in the Spark REPL. I still do not know which part of the
REPL code causes it... I think the people working on the REPL may have a better
idea.

Regarding how I found it: based on the exception, it seemed we had pulled in
some irrelevant stuff, and that import was pretty suspicious.

Thanks,

Yin
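
For readers puzzled by those names: they come from how the Scala 2.10 REPL
(which spark-shell wraps) compiles every input line into its own synthetic
object. The sketch below is illustrative only — the names are simplified and
it is not the literal generated code:

```scala
// Rough, simplified sketch of what the REPL generates for input line 11,
// e.g. `case class Record(...)`:
object $line11 {
  object $read {          // really a class plus a companion; "$read$" names the companion
    class $iwC {          // "import wrapper class" -- one per enclosing scope,
      class $iwC {        // so earlier lines' values and imports stay visible
        case class Record(data_date: String, mobile: String, create_time: String)
      }
    }
  }
}
// A closure written on line 12 then compiles to something like
// $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2. When an executor
// deserializes that closure, it must initialize the enclosing wrapper
// objects; if initializing $line11.$read$ fails on the remote JVM, you get
// the ExceptionInInitializerError / NoClassDefFoundError seen in this thread.
```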





Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-21 Thread Victor Sheng
Hi Kevin,
I tried it on spark-1.0.0 and it works fine.
It's a bug in spark-1.0.1...
Thanks,
Victor



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tp10135p10288.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-21 Thread Yin Huai
Hi Victor,

Instead of importing sqlContext.createSchemaRDD, can you explicitly call
sqlContext.createSchemaRDD(rdd) to create a SchemaRDD?

For example,

You have a case class Record.

case class Record(data_date: String, mobile: String, create_time: String)

Then, you create an RDD[Record]; let's call it mobile.

Instead of using mobile.registerAsTable("mobile"), can you try the
following snippet and see if it works?

val mobileSchemaRDD = sqlContext.createSchemaRDD(mobile)
mobileSchemaRDD.registerAsTable("mobile")

Thanks,

Yin
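
To put the two patterns side by side, here is a sketch of a complete
spark-shell session using the explicit call (the Record fields and values
follow the snippets posted earlier in this thread):

```scala
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
case class Record(data_date: String, mobile: String, create_time: String)
val mobileRDD = sc.makeRDD(List(Record("2014-07-20", "1234567", "2014-07-19")))

// Pattern that triggers the bug on spark-1.0.1: relying on the implicit
// conversion brought in by `import sqlContext.createSchemaRDD`, i.e.
//   mobileRDD.registerAsTable("mobile")

// Workaround: call createSchemaRDD explicitly instead of importing it.
val mobileSchemaRDD = sqlContext.createSchemaRDD(mobileRDD)
mobileSchemaRDD.registerAsTable("mobile")
sqlContext.sql("select count(1) from mobile").collect()   // Array([1]) per Victor's report
```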





Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-20 Thread Victor Sheng
Hi Michael,
I only modified the default Hadoop version to 0.20.2-cdh3u5 and set
DEFAULT_HIVE=true in SparkBuild.scala, then ran sbt/sbt assembly.
I run in local standalone mode, started with sbin/start-all.sh, with one
machine acting as both master and slave. The Hadoop version is 0.20.2-cdh3u5,
the OS is CentOS 5, and Mesos is not used. I then use spark-shell to execute
the Spark SQL queries.
Thanks,
Victor
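
For anyone trying to reproduce this setup, a rough sketch of the equivalent
build and launch steps (on Spark 1.0.x the Hadoop version and Hive support
can also be passed via environment variables instead of editing
SparkBuild.scala; exact variable names may differ by checkout):

```sh
# Build the assembly against CDH3u5 Hadoop, with Hive support enabled
SPARK_HADOOP_VERSION=0.20.2-cdh3u5 SPARK_HIVE=true sbt/sbt assembly

# Start a single-node standalone cluster (master + worker on one machine)
sbin/start-all.sh

# Open the REPL against it
bin/spark-shell
```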



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tp10135p10266.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-20 Thread Kevin Jung
Hi Victor,

I ran into the same issue and posted about it. In my case, it only happens
when I run Spark SQL queries on spark-1.0.1; on spark-1.0.0 it works
properly. Have you run the same job on spark-1.0.0?

Sincerely,
Kevin



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tp10135p10274.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-18 Thread Victor Sheng
Hi Svend,
Your reply was very helpful to me. I'll keep an eye on that ticket.
And also... cheers :)
Best regards,
Victor



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-1-spark-sql-error-java-lang-NoClassDefFoundError-Could-not-initialize-class-line11-read-tp10135p10162.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.


Re: spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-18 Thread Michael Armbrust
Can you tell us more about your environment? Specifically, are you also
running on Mesos?
On Jul 18, 2014 12:39 AM, Victor Sheng victorsheng...@gmail.com wrote:


spark1.0.1 spark sql error java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$

2014-07-17 Thread Victor Sheng
When I run a query against a Hadoop file:

mobile.registerAsTable("mobile")
val count = sqlContext.sql("select count(1) from mobile")
res5: org.apache.spark.sql.SchemaRDD =
SchemaRDD[21] at RDD at SchemaRDD.scala:100
== Query Plan ==
ExistingRdd [data_date#0,mobile#1,create_time#2], MapPartitionsRDD[4] at
mapPartitions at basicOperators.scala:176

and then run collect:

count.collect()

it throws exceptions. Can anyone help me?

Job aborted due to stage failure: Task 3.0:22 failed 4 times, most recent
failure: Exception failure in TID 153 on host wh-8-210:
java.lang.NoClassDefFoundError: Could not initialize class $line11.$read$
$line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
$line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
scala.collection.Iterator$$anon$1.next(Iterator.scala:853)
scala.collection.Iterator$$anon$1.head(Iterator.scala:840)
org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:181)
org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:176)
org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
org.apache.spark.scheduler.Task.run(Task.scala:51)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
java.lang.Thread.run(Thread.java:722) Driver stacktrace:



java.lang.ExceptionInInitializerError
at $line11.$read$$iwC.<init>(<console>:6)
at $line11.$read.<init>(<console>:26)
at $line11.$read$.<init>(<console>:30)
at $line11.$read$.<clinit>(<console>)
at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
at $line12.$read$$iwC$$iwC$$iwC$$iwC$$anonfun$2.apply(<console>:19)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$1.next(Iterator.scala:853)
at scala.collection.Iterator$$anon$1.head(Iterator.scala:840)
at
org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:181)
at
org.apache.spark.sql.execution.ExistingRdd$$anonfun$productToRowRdd$1.apply(basicOperators.scala:176)
at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
at org.apache.spark.rdd.RDD$$anonfun$12.apply(RDD.scala:559)
at
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
at
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
at org.apache.spark.scheduler.Task.run(Task.scala:51)
at
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)


My classpath is :
/app/hadoop/spark-1.0.1/assembly/target/scala-2.10/spark-assembly-1.0.1-hadoop0.20.2-cdh3u5.jar
System Classpath
/app/hadoop/spark-1.0.1/confSystem Classpath
/app/hadoop/spark-1.0.1/lib_managed/jars/JavaEWAH-0.3.2.jar System Classpath
/app/hadoop/spark-1.0.1/lib_managed/jars/JavaEWAH-0.6.6.jar System Classpath
/app/hadoop/spark-1.0.1/lib_managed/jars/ST4-4.0.4.jar  System Classpath
/app/hadoop/spark-1.0.1/lib_managed/jars/activation-1.1.jar System Classpath
/app/hadoop/spark-1.0.1/lib_managed/jars/akka-actor_2.10-2.2.3-shaded-protobuf.jar
System Classpath
/app/hadoop/spark-1.0.1/lib_managed/jars/algebird-core_2.10-0.1.11.jar
System Classpath