-- Forwarded message --
From: rapelly kartheek kartheek.m...@gmail.com
Date: Thu, Jan 1, 2015 at 12:05 PM
Subject: Re: NullPointerException
To: Josh Rosen rosenvi...@gmail.com, user@spark.apache.org
spark-1.0.0
On Thu, Jan 1, 2015 at 12:04 PM, Josh Rosen rosenvi...@gmail.com wrote:
It looks like 'null' might be selected as a block replication peer?
https://github.com/apache/spark/blob/v1.0.0/core/src/main/scala/org/apache/spark/storage/BlockManager.scala#L786
I know that we fixed some replication bugs in newer versions of Spark.
Ok. Let me try it out on a newer version.
Thank you!!
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Process finished with exit code 1
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/pyspark-1-1-1-on-windows-saveAsTextFile-NullPointerException-tp20764.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
dbActorUpdater ! updateDBMessage(r)))
There is no problem with the code itself; I think something is misconfigured.
Thanks for the help.
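For context, the `dbActorUpdater ! updateDBMessage(r)` call above is the tail of a `foreachPartition` body. The usual pattern is to create the expensive, non-serializable resource inside the partition closure rather than capture it from the driver. A minimal plain-Java sketch of that pattern, without Spark (`Connection`, `deliver`, and the partition lists are hypothetical stand-ins):

```java
import java.util.Arrays;
import java.util.List;

public class PerPartitionDemo {
    // Stand-in for a non-serializable resource (an actor ref, a DB
    // connection) that must be created inside the closure, on the worker,
    // rather than captured from the driver.
    static class Connection {
        int sent = 0;
        void send(String record) { sent++; }  // dbActorUpdater ! updateDBMessage(r)
        void close() { /* release the resource */ }
    }

    // Emulates rdd.foreachPartition: one Connection per partition,
    // every record in the partition pushed through it, then closed.
    static int deliver(List<List<String>> partitions) {
        int delivered = 0;
        for (List<String> partition : partitions) {
            Connection conn = new Connection(); // created once per partition
            for (String record : partition) {
                conn.send(record);
            }
            conn.close();
            delivered += conn.sent;
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<List<String>> parts = Arrays.asList(
                Arrays.asList("r1", "r2"), Arrays.asList("r3"));
        System.out.println(deliver(parts)); // prints 3
    }
}
```

If the actor or connection were created outside the partition loop (i.e. on the driver), it would either fail serialization or arrive null on the workers, which matches the misconfiguration symptom described above.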
Cristóvão José Domingues Cordeiro
IT Department - 28/R-018
CERN
--
*From:* Simone Franzini [captainfr...@gmail.com]
*Sent:* 15 December 2014 16:52
*To:* Cristovao Jose Domingues Cordeiro
*Subject:* Re: NullPointerException When Reading Avro Sequence Files
Ok, I
--
*From:* Simone Franzini [captainfr...@gmail.com]
*Sent:* 06 December 2014 15:48
*To:* Cristovao Jose Domingues Cordeiro
*Subject:* Re: NullPointerException When Reading Avro Sequence Files
java.lang.IncompatibleClassChangeError: Found interface
Hi Cristovao,
I have seen a very similar issue that I have posted about in this thread:
http://apache-spark-user-list.1001560.n3.nabble.com/Kryo-NPE-with-Array-td19797.html
I think your main issue here is somewhat
);
The program fails with the following NullPointerException:
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, 10.21.6.68): java.lang.NullPointerException
),
new DummyClass(4)
);
JavaRDD<DummyClass> rdd = sparkContext.parallelize(dummyClasses);
for (DummyClass dummyClass : rdd.collect()) {
    LOGGER.info("driver collected {}", dummyClass);
}
NullPointerException while reading
You might consider using the native parquet support built into Spark SQL
instead of using the raw library:
http://spark.apache.org/docs/latest/sql-programming-guide.html#parquet-files
On Mon, Nov 3, 2014 at 7:33 PM, Michael Albert m_albert...@yahoo.com.invalid wrote:
Greetings!
I'm trying to use Avro and Parquet with the following schema:
{
    "name": "TestStruct",
    "namespace": "bughunt",
    "type": "record",
    "fields": [
        {
            "name": "string_array",
            "type": { "type": "array", "items": "string" }
        }
    ]
}
The writing
NullPointerException
// compute something on the input
}
If I pass myCounter as a parameter to foo(), then I get a NotSerializableException for SparkContext. If I keep myCounter global (in a Scala singleton object), I get a NullPointerException.
What is the proper way to send an accumulator distributed to all
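Both failure modes above can be seen without Spark. A closure that drags in a non-serializable object (such as anything holding the SparkContext) fails Java serialization outright, while a "global" in a singleton is simply re-initialized empty on each worker, so the accumulator field there is null. A hedged plain-Java sketch of the serialization half (the `Task` and `Driver` names are invented for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class ClosureDemo {
    // A serializable function type, like the closures Spark ships to workers.
    interface Task extends Serializable { int apply(int x); }

    // Stand-in for driver-side state that is not serializable
    // (e.g. anything holding a SparkContext).
    static class Driver { Object notSerializable = new Object(); }

    // Returns true if the object survives Java serialization.
    static boolean serializes(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Driver driver = new Driver();
        Task bad = x -> driver.notSerializable.hashCode() + x; // drags Driver into the closure
        Task good = x -> x + 1;                                // captures nothing
        System.out.println(serializes(bad));  // prints false
        System.out.println(serializes(good)); // prints true
    }
}
```

The usual answer on the list is to create the accumulator from the SparkContext on the driver and reference only the accumulator (which is itself serializable) inside the closure, never the enclosing context.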
Hi, everybody!
I'm trying to deploy a simple app on a Spark standalone cluster with a single node (the localhost).
Unfortunately, something goes wrong while processing the JAR file and a NullPointerException is thrown.
I'm running everything on a single machine with Windows 8.
Check below
http://apache-spark-user-list.1001560.n3.nabble.com/RDD-data-checkpoint-cleaning-td14847.html
Thanks,
Rod
this.map(_ => (null, 1L))
    .transform(_.union(context.sparkContext.makeRDD(Seq((null, 0L)), 1)))
    .reduceByKey(_ + _)
    .map(_._2)
}
transform is the line throwing the NullPointerException. Can anyone give some hints as to what would cause _ to be null (it is indeed null)? This only happens when there is no data to process.
When there's data
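The fragment above is, in substance, DStream.count's implementation, and the `(null, 0L)` record unioned in is what guarantees the reduce step has something to emit even on an empty batch. The same trick, sketched with plain Java collections (no Spark; `countRecords` is a made-up name):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CountDemo {
    // Mimics map(_ => (null, 1L)), union with a (null, 0L) seed record,
    // then reduceByKey(_ + _): the seed makes the sum well-defined even
    // when the batch is empty.
    static long countRecords(List<String> records) {
        long sum = 0L;
        for (String r : records) {
            sum += 1L; // each record becomes (null, 1L)
        }
        sum += 0L;     // the unioned (null, 0L) seed record
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(countRecords(Arrays.asList("a", "b", "c"))); // prints 3
        System.out.println(countRecords(Collections.emptyList()));      // prints 0
    }
}
```

Without the seed record, an empty batch would give the aggregation nothing to reduce, which is exactly the no-data case the poster describes.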
-----Original Message-----
From: anoldbrain [mailto:anoldbr...@gmail.com]
Sent: Wednesday, August 20, 2014 4:13 PM
To: u...@spark.incubator.apache.org
Subject: Re: NullPointerException from '.count.foreachRDD'
Looking at the source code of DStream.scala:
/**
* Return a new DStream in which each RDD has
an indication that I had my InputDStream implementation wrong. On
the other hand, why use return type Option if None should not be used at
all?
Thanks for the help solving my problem.
Hello:
I am trying to set up Spark to connect to a Hive table which is backed by HBase, but I am running into the following NullPointerException:
scala> val hiveCount = hiveContext.sql("select count(*) from dataset_records").collect().head.getLong(0)
14/08/18 06:34:29 INFO ParseDriver: Parsing command: select count(*) from dataset_records
Looks like hbaseTableName is null, probably caused by incorrect configuration.
String hbaseTableName = jobConf.get(HBaseSerDe.HBASE_TABLE_NAME);
setHTable(new HTable(HBaseConfiguration.create(jobConf), Bytes.toBytes(hbaseTableName)));
Here is the definition.
public static final
Thanks, Zhan, for the follow-up.
But do you know how I am supposed to set that table name on the jobConf? I don't have access to that object from my client driver.
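When the table is created through Hive's HBase storage handler, the table name normally reaches the jobConf from the table's own properties rather than from anything set in the Spark driver. A hedged sketch of what such a DDL looks like (the column names and mapping here are invented for illustration):

```sql
CREATE EXTERNAL TABLE dataset_records (key STRING, value STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:value")
TBLPROPERTIES ("hbase.table.name" = "dataset_records");
```

If `hbase.table.name` is missing from the table's properties (it defaults to the Hive table name only when the handler fills it in at creation time), the jobConf lookup quoted above returns null, which matches the NPE in setHTable.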
for long without success. What caused this problem: bugs in my code, or some other issue? Thanks.
Denis RP qq378789...@gmail.com writes:
[error] (run-main-0) org.apache.spark.SparkException: Job aborted due to
stage failure: Task 6.0:4 failed 4 times, most recent failure: Exception
failure in TID 598 on host worker6.local: java.lang.NullPointerException
[error]
with the serializable bug fixed; Java was installed with openjdk-7-jdk.
BTW, is there a chance that Bagel can work fine?
Thanks!
Running a simple collect method on a group of Avro objects causes a plain NullPointerException. Does anyone know what may be wrong?
files.collect()
Press ENTER or type command to continue
Exception in thread "Executor task launch worker-0" java.lang.NullPointerException
For those curious: I was using a KryoRegistrator and it was causing a null pointer exception. I removed the code and the problem went away.
with the records having same values.
2014-07-22 15:01 GMT+02:00 Sparky gullo_tho...@bah.com:
around sequence file).
I'll put together an example of the problem so others can better understand what I'm talking about.
)
at org.apache.spark.scheduler.DAGScheduler.runLocallyWithinThread(DAGScheduler.scala:578)
... 1 more
Hi all,
I've hit a very confusing exception running Spark 1.0 on HDP 2.1.
While saving an RDD as a text file I got:
14/07/02 10:11:12 WARN TaskSetManager: Loss was due to java.lang.NullPointerException
java.lang.NullPointerException
at
Hi Konstantin,
Thanks for reporting this. This happens because there are null keys in your data. In general, Spark should not throw null pointer exceptions, so this is a bug. I have fixed it here: https://github.com/apache/spark/pull/1288.
For now, you can work around this by special-handling
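Until that fix lands, the workaround can be as simple as dropping (or substituting) null keys before the save. A plain-Java sketch of the idea, with no Spark involved (in Spark it would be the equivalent of filtering the pair RDD on a non-null key first):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class NullKeyFilter {
    // Drop pairs whose key is null before writing them out, mirroring a
    // filter on the pair RDD ahead of saveAsTextFile.
    static List<Map.Entry<String, Integer>> dropNullKeys(List<Map.Entry<String, Integer>> pairs) {
        return pairs.stream()
                .filter(e -> e.getKey() != null)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> pairs = Arrays.asList(
                new SimpleEntry<>("a", 1),
                new SimpleEntry<>(null, 2),
                new SimpleEntry<>("b", 3));
        System.out.println(dropNullKeys(pairs).size()); // prints 2
    }
}
```

Replacing null keys with a sentinel string instead of dropping them is the other common variant, if the null-keyed records must survive the save.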
I am also seeing a similar problem when trying to continue a job using a saved checkpoint. Can somebody help in solving this problem?
)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)