Re: ISpark class not found
Sounds like an IPython notebook issue, not an ISpark one. You might want to reinstall with "pip install ipython[notebook]", which pulls in the components the notebook needs (like tornado). Also try spinning up the ISpark console instead of the notebook to see whether the ISpark kernel itself is functioning:

ipython console --profile spark

From: MEETHU MATHEW meethu2...@yahoo.co.in
Reply-To: MEETHU MATHEW meethu2...@yahoo.co.in
Date: Wednesday, November 12, 2014 at 2:26 AM
To: Capital One benjamin.la...@capitalone.com, user@spark.apache.org
Subject: Re: ISpark class not found

Hi,

I was also trying ISpark, but I couldn't even start the notebook. I am getting the following error:

ERROR:tornado.access:500 POST /api/sessions (127.0.0.1) 10.15ms referer=http://localhost:/notebooks/Scala/Untitled0.ipynb

How did you start the notebook?

Thanks & Regards,
Meethu M

On Wednesday, 12 November 2014 6:50 AM, Laird, Benjamin benjamin.la...@capitalone.com wrote:

I've been experimenting with the ISpark extension to IScala (https://github.com/tribbloid/ISpark). Objects created in the REPL are not being loaded correctly on the worker nodes, leading to a ClassNotFound exception. The same code works correctly in spark-shell. I was curious whether anyone has used ISpark and run into this issue. Thanks!

Simple example:

In [1]: case class Circle(rad: Float)
In [2]: val rdd = sc.parallelize(1 to 1).map(i => Circle(i.toFloat)).take(10)
14/11/11 13:03:35 ERROR TaskResultGetter: Exception while getting task result
com.esotericsoftware.kryo.KryoException: Unable to find class: [L$line5.$read$$iwC$$iwC$Circle;

Full trace in my gist: https://gist.github.com/benjaminlaird/3e543a9a89fb499a3a14
ISpark class not found
I've been experimenting with the ISpark extension to IScala (https://github.com/tribbloid/ISpark). Objects created in the REPL are not being loaded correctly on the worker nodes, leading to a ClassNotFound exception. The same code works correctly in spark-shell. I was curious whether anyone has used ISpark and run into this issue. Thanks!

Simple example:

In [1]: case class Circle(rad: Float)
In [2]: val rdd = sc.parallelize(1 to 1).map(i => Circle(i.toFloat)).take(10)
14/11/11 13:03:35 ERROR TaskResultGetter: Exception while getting task result
com.esotericsoftware.kryo.KryoException: Unable to find class: [L$line5.$read$$iwC$$iwC$Circle;

Full trace in my gist: https://gist.github.com/benjaminlaird/3e543a9a89fb499a3a14
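For comparison, here is a hedged sketch of the same snippet as it would be run in spark-shell, where the thread reports it working. The range passed to parallelize is a placeholder (the archive garbled the original, and take(10) suggests it was larger), and the restored "=>" is assumed from context:

// spark-shell session; sc is provided by the shell
case class Circle(rad: Float)

// take(10) brings the first ten Circle objects back to the driver;
// in spark-shell the REPL-defined class is shipped to workers correctly
val circles = sc.parallelize(1 to 100).map(i => Circle(i.toFloat)).take(10)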
Re: AVRO specific records
Something like this works and is how I create an RDD of specific records:

val avroRdd = sc.newAPIHadoopFile("twitter.avro",
  classOf[AvroKeyInputFormat[twitter_schema]],
  classOf[AvroKey[twitter_schema]],
  classOf[NullWritable],
  conf)

(From https://github.com/julianpeeters/avro-scala-macro-annotation-examples/blob/master/spark/src/main/scala/AvroSparkScala.scala)

Keep in mind you'll need to use the Kryo serializer as well.

From: Frank Austin Nothaft fnoth...@berkeley.edu
Date: Wednesday, November 5, 2014 at 5:06 PM
To: Simone Franzini captainfr...@gmail.com
Cc: user@spark.apache.org
Subject: Re: AVRO specific records

Hi Simone,

Matt Massie put together a good tutorial on his blog: http://zenfractal.com/2013/08/21/a-powerful-big-data-trio/. If you're looking for more code using Avro, we use it pretty extensively in our genomics project. Our Avro schemas are here: https://github.com/bigdatagenomics/bdg-formats/blob/master/src/main/resources/avro/bdg.avdl, and we have serialization code here: https://github.com/bigdatagenomics/adam/tree/master/adam-core/src/main/scala/org/bdgenomics/adam/serialization. We use Parquet for storing the Avro records, but there is also an Avro HadoopInputFormat.

Regards,
Frank Austin Nothaft
fnoth...@berkeley.edu
fnoth...@eecs.berkeley.edu
202-340-0466

On Nov 5, 2014, at 1:25 PM, Simone Franzini captainfr...@gmail.com wrote:

How can I read/write Avro specific records? I found several snippets using generic records, but nothing with specific records so far.

Thanks,
Simone Franzini, PhD
http://www.linkedin.com/in/simonefranzini
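A minimal sketch of the Kryo point above, assuming the generated specific-record class from the linked example (twitter_schema) and a local "twitter.avro" path; spark.serializer and KryoSerializer are the standard Spark settings, and the read mirrors the snippet in the reply:

import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.AvroKeyInputFormat   // from the avro-mapred artifact
import org.apache.hadoop.io.NullWritable
import org.apache.spark.{SparkConf, SparkContext}

// The reply notes Kryo is needed when shipping these records, since generated
// Avro classes are not Java-serializable in this era of Avro
val conf = new SparkConf()
  .setAppName("avro-specific-read")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sc = new SparkContext(conf)

val avroRdd = sc.newAPIHadoopFile("twitter.avro",
  classOf[AvroKeyInputFormat[twitter_schema]],
  classOf[AvroKey[twitter_schema]],
  classOf[NullWritable],
  sc.hadoopConfiguration)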
Executor Memory, Task hangs
Hi all,

I'm doing some testing on a small dataset (HadoopRDD, 2GB, ~10M records) with a cluster of 3 nodes. Simple calculations like count take approximately 5s when using the default value of spark.executor.memory (512MB). When I scale this up to 2GB, several tasks take 1m or more (while most still take ~1s), and tasks hang indefinitely if I set it to 4GB or higher.

While these worker nodes aren't very powerful, they seem to have enough RAM to handle this: running 'free -m' shows I have 7GB free on each worker. Any tips on why these jobs would hang when given more available RAM?

Thanks,
Ben
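For reference, a hedged sketch of how the setting being varied here is applied; spark.executor.memory is the real Spark property, while the app name, input path, and the 2g value are placeholders mirroring the thread:

import org.apache.spark.{SparkConf, SparkContext}

// Default is 512m in this Spark version; the thread also tries 2g and 4g
val conf = new SparkConf()
  .setAppName("count-test")
  .set("spark.executor.memory", "2g")
val sc = new SparkContext(conf)

// The simple calculation in question (path is a placeholder)
val count = sc.textFile("hdfs:///path/to/2gb-dataset").count()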
Re: Executor Memory, Task hangs
Thanks Akhil and Sean. All three workers are doing the work, and tasks stall simultaneously on all three.

I think Sean hit on my issue. I've been under the impression that each application has one executor process per worker machine (not one per core per machine). Is that incorrect? If an executor is running on each core, that would explain why things are stalling.

Akhil, I'm running 8 cores per machine, and tasks are stalling on all three machines simultaneously. Also, no other Spark contexts are running, so I didn't think this was an issue of spark.executor.memory vs. SPARK_WORKER_MEMORY (which is currently left at the default).

App UI:
ID: app-20140819101355-0001 (http://tc1-master:8080/app?appId=app-20140819101355-0001)
Name: Spark shell (http://tc1-master:4040/)
Cores: 24
Memory per Node: 2.0 GB

Worker UI:
ExecutorID: 2, Cores: 8, State: RUNNING, Memory: 2.0 GB

Tasks when it stalls:
129  SUCCESS  NODE_LOCAL  worker01  8/19/14 10:16  0.1 s  1 ms
130  RUNNING  NODE_LOCAL  worker03  8/19/14 10:16  5 s
131  RUNNING  NODE_LOCAL  worker02  8/19/14 10:16  5 s
132  SUCCESS  NODE_LOCAL  worker02  8/19/14 10:16  0.1 s  1 ms
133  RUNNING  NODE_LOCAL  worker01  8/19/14 10:16  5 s
134  RUNNING  NODE_LOCAL  worker02  8/19/14 10:16  5 s
135  RUNNING  NODE_LOCAL  worker03  8/19/14 10:16  5 s
136  RUNNING  NODE_LOCAL  worker01  8/19/14 10:16  5 s
137  RUNNING  NODE_LOCAL  worker01  8/19/14 10:16  5 s
138  RUNNING  NODE_LOCAL  worker03  8/19/14 10:16  5 s
139  RUNNING  NODE_LOCAL  worker02  8/19/14 10:16  5 s
140  RUNNING  NODE_LOCAL  worker01  8/19/14 10:16  5 s
141  RUNNING  NODE_LOCAL  worker02  8/19/14 10:16  5 s
142  RUNNING  NODE_LOCAL  worker01  8/19/14 10:16  5 s
143  RUNNING  NODE_LOCAL  worker01  8/19/14 10:16  5 s
144  RUNNING  NODE_LOCAL  worker03  8/19/14 10:16  5 s
145  RUNNING  NODE_LOCAL  worker02  8/19/14 10:16  5 s

From: Sean Owen so...@cloudera.com
Date: Tuesday, August 19, 2014 at 9:23 AM
To: Capital One benjamin.la...@capitalone.com
Cc: user@spark.apache.org
Subject: Re: Executor Memory, Task hangs

Given a fixed amount of memory allocated to your workers, more memory per executor means fewer executors can run in parallel. That means it takes longer to finish all of the tasks. Set it high enough and your executors can find no worker with enough memory, so they all end up waiting for resources. The reason the tasks seem to take longer is really that they spend time waiting for an executor rather than more time running. That's my first guess.

If you want Spark to use more memory on your machines, give the workers more memory. It sounds like there is no value in increasing executor memory: it only means you are underutilizing the CPU of your cluster by not running as many tasks in parallel as would be optimal.

Hi all,

I'm doing some testing on a small dataset (HadoopRDD, 2GB, ~10M records) with a cluster of 3 nodes. Simple calculations like count take approximately 5s when using the default value of spark.executor.memory (512MB). When I scale this up to 2GB, several tasks take 1m or more (while most still take ~1s), and tasks hang indefinitely if I set it to 4GB or higher. While these worker nodes aren't very powerful, they seem to have enough RAM to handle this: running 'free -m' shows I have 7GB free on each worker. Any tips on why these jobs would hang when given more available RAM?
Thanks,
Ben
Re: Avro Schema + GenericRecord to HadoopRDD
That makes sense, thanks Chris. I'm currently reworking my code to use newAPIHadoopRDD with an AvroSequenceFileInputFormat (see below), but I think I'll run into the same issue. I'll give your suggestion a try.

val avroRdd = sc.newAPIHadoopFile(fp,
  classOf[AvroSequenceFileInputFormat[AvroKey[GenericRecord], NullWritable]],
  classOf[AvroKey[GenericRecord]],
  classOf[NullWritable])

On 7/29/14, 7:13 PM, Severs, Chris csev...@ebay.com wrote:

Hi Benjamin,

I think the best bet would be to use the Avro code generation stuff to generate a SpecificRecord for your schema and then change the reader to use your specific type rather than GenericRecord. Trying to read the generic record and then do type inference and spit out a tuple is way more headache than it's worth if you already have the schema in hand (I've done it for Cascading/Scalding).

- Chris

From: Laird, Benjamin [benjamin.la...@capitalone.com]
Sent: Tuesday, July 29, 2014 8:00 AM
To: user@spark.apache.org; u...@spark.incubator.apache.org
Subject: Avro Schema + GenericRecord to HadoopRDD

Hi all,

I can read Avro files into Spark with hadoopRDD and submit the schema in the jobConf, but with the guidance I've seen so far, I'm left with an Avro GenericRecord of Java objects without type. How do I actually use the schema to have the types inferred?

Example:

scala> AvroJob.setInputSchema(jobConf, schema)

scala> val rdd = sc.hadoopRDD(jobConf,
  classOf[org.apache.avro.mapred.AvroInputFormat[GenericRecord]],
  classOf[org.apache.avro.mapred.AvroWrapper[GenericRecord]],
  classOf[org.apache.hadoop.io.NullWritable],
  10)
14/07/29 09:27:49 INFO storage.MemoryStore: ensureFreeSpace(134254) called with curMem=0, maxMem=308713881
14/07/29 09:27:49 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 131.1 KB, free 294.3 MB)
rdd: org.apache.spark.rdd.RDD[(org.apache.avro.mapred.AvroWrapper[org.apache.avro.generic.GenericRecord], org.apache.hadoop.io.NullWritable)] = HadoopRDD[0] at hadoopRDD at <console>:50

scala> rdd.first._1.datum.get("amt")
14/07/29 09:31:34 INFO spark.SparkContext: Starting job: first at <console>:53
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Got job 3 (first at <console>:53) with 1 output partitions (allowLocal=true)
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Final stage: Stage 3 (first at <console>:53)
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Missing parents: List()
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Computing the requested partition locally
14/07/29 09:31:34 INFO rdd.HadoopRDD: Input split: hdfs://nameservice1:8020/user/nylab/prod/persistent_tables/creditsetl_ref_etxns/201201/part-0.avro:0+34279385
14/07/29 09:31:34 INFO spark.SparkContext: Job finished: first at <console>:53, took 0.061220615 s
res11: Object = 24.0

Thanks!
Ben
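A hedged sketch of Chris's suggestion: run Avro code generation on the schema, then point the same hadoopRDD call at the generated class instead of GenericRecord. "Txn" and its getAmt accessor are hypothetical names standing in for whatever avro-tools generates from the actual schema:

import org.apache.avro.mapred.{AvroInputFormat, AvroJob, AvroWrapper}
import org.apache.hadoop.io.NullWritable

// Txn is the class generated from the .avsc/.avdl schema (hypothetical name)
AvroJob.setInputSchema(jobConf, Txn.getClassSchema)

val rdd = sc.hadoopRDD(jobConf,
  classOf[AvroInputFormat[Txn]],
  classOf[AvroWrapper[Txn]],
  classOf[NullWritable],
  10)

// datum is now a Txn, so the field comes back typed rather than as Object
val amounts = rdd.map { case (wrapper, _) => wrapper.datum.getAmt }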
Avro Schema + GenericRecord to HadoopRDD
Hi all,

I can read Avro files into Spark with hadoopRDD and submit the schema in the jobConf, but with the guidance I've seen so far, I'm left with an Avro GenericRecord of Java objects without type. How do I actually use the schema to have the types inferred?

Example:

scala> AvroJob.setInputSchema(jobConf, schema)

scala> val rdd = sc.hadoopRDD(jobConf,
  classOf[org.apache.avro.mapred.AvroInputFormat[GenericRecord]],
  classOf[org.apache.avro.mapred.AvroWrapper[GenericRecord]],
  classOf[org.apache.hadoop.io.NullWritable],
  10)
14/07/29 09:27:49 INFO storage.MemoryStore: ensureFreeSpace(134254) called with curMem=0, maxMem=308713881
14/07/29 09:27:49 INFO storage.MemoryStore: Block broadcast_0 stored as values to memory (estimated size 131.1 KB, free 294.3 MB)
rdd: org.apache.spark.rdd.RDD[(org.apache.avro.mapred.AvroWrapper[org.apache.avro.generic.GenericRecord], org.apache.hadoop.io.NullWritable)] = HadoopRDD[0] at hadoopRDD at <console>:50

scala> rdd.first._1.datum.get("amt")
14/07/29 09:31:34 INFO spark.SparkContext: Starting job: first at <console>:53
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Got job 3 (first at <console>:53) with 1 output partitions (allowLocal=true)
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Final stage: Stage 3 (first at <console>:53)
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Missing parents: List()
14/07/29 09:31:34 INFO scheduler.DAGScheduler: Computing the requested partition locally
14/07/29 09:31:34 INFO rdd.HadoopRDD: Input split: hdfs://nameservice1:8020/user/nylab/prod/persistent_tables/creditsetl_ref_etxns/201201/part-0.avro:0+34279385
14/07/29 09:31:34 INFO spark.SparkContext: Job finished: first at <console>:53, took 0.061220615 s
res11: Object = 24.0

Thanks!
Ben
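Until the schema is compiled into a specific class, the datum stays a GenericRecord, so each field has to be pulled out and converted explicitly. A minimal sketch, assuming the "amt" field seen above holds a numeric value (that conversion is an assumption about the schema):

import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroWrapper
import org.apache.hadoop.io.NullWritable
import org.apache.spark.rdd.RDD

// rdd is the (AvroWrapper[GenericRecord], NullWritable) pairs from hadoopRDD above
def amounts(rdd: RDD[(AvroWrapper[GenericRecord], NullWritable)]): RDD[Double] =
  rdd.map { case (wrapper, _) =>
    // get() returns Object, so the caller supplies the type, e.g. 24.0 as in res11 above
    wrapper.datum.get("amt").toString.toDouble
  }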
RE: help
Joe,

Do you have your SPARK_HOME variable set correctly in the spark-env.sh script? I was getting that error when I was first setting up my cluster; it turned out I had to make some changes in the spark-env script to get things working correctly.

Ben

-----Original Message-----
From: Joe L [mailto:selme...@yahoo.com]
Sent: Sunday, April 27, 2014 1:17 PM
To: u...@spark.incubator.apache.org
Subject: help

I am getting this error, please help me to fix it:

14/04/28 02:16:20 INFO SparkDeploySchedulerBackend: Executor app-20140428021620-0007/10 removed: class java.io.IOException: Cannot run program "/home/exobrain/install/spark-0.9.1/bin/compute-classpath.sh" (in directory "."): error=13,

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/help-tp4901.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
Running large join in ALS example through PySpark
Hello all,

I'm running the ALS/Collaborative Filtering code through PySpark on Spark 0.9.0 (http://spark.apache.org/docs/0.9.0/mllib-guide.html#using-mllib-in-python). My data file has about 27M tuples (User, Item, Rating). ALS.train(ratings, 1, 30) runs on my 3-node cluster (24 cores, 60GB RAM) in about 5 minutes. However, the following seems to hang:

testdata = ratings.map(lambda p: (int(p[0]), int(p[1])))
predictions = model.predictAll(testdata).map(lambda r: ((r[0], r[1]), r[2]))
ratesAndPreds = ratings.map(lambda r: ((r[0], r[1]), r[2])).join(predictions)

When the join in ratesAndPreds is calculated, 38 tasks are created. 32 complete with locality level PROCESS_LOCAL in about 5 minutes. However, 6 tasks sit at locality NODE_LOCAL and run for over 45 minutes without completing.

I was receiving a "no heartbeat" message from the scheduler, so I changed my Java args in spark-env.sh. I don't receive that now, but I have a suspicion that there are still some GC issues. Does anyone have any suggestions? I read that I can get GC problems or other memory issues if I have too few partitions. Should I investigate that?

Thanks!
Ben

Ben Laird
Data Scientist
(202) 695-6205
benjamin.la...@capitalone.com
http://www.capitalonelabs.com
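Since the question at the end is about partition counts, here is a hedged sketch of the same predict-and-join step written against the Scala MLlib API (which the PySpark wrapper calls into), with an explicit partition count passed to join; the value 200 is only illustrative, chosen to spread the shuffle over more tasks:

import org.apache.spark.mllib.recommendation.{ALS, Rating}
import org.apache.spark.rdd.RDD

// Placeholder: load the ~27M (user, item, rating) tuples from the thread here
val ratings: RDD[Rating] = ???

// Mirrors ALS.train(ratings, 1, 30) above: rank 1, 30 iterations
val model = ALS.train(ratings, 1, 30)

val testdata = ratings.map(r => (r.user, r.product))
val predictions = model.predict(testdata).map(r => ((r.user, r.product), r.rating))

// Passing a partition count to join controls how many tasks the shuffle produces,
// which is the "too few partitions" knob mentioned above
val ratesAndPreds = ratings.map(r => ((r.user, r.product), r.rating))
  .join(predictions, 200)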