> > at $iwC.<init>(<console>:76)
> > at <init>(<console>:78)
> > at .<init>(<console>:82)
> > at .<clinit>(<console>)
> > at .<init>(<console>:7)
> > at .<clinit>(<console>)
> > at $print(<console>)
> >
> > Thanks!
> > - Terry
> >
> > On Sun, Sep 6, 2015 at 4:53 PM, Sea
Hi,
Am I doing something off base when executing tests for the core module
using sbt as follows?
[spark]> core/test
...
[info] KryoSerializerAutoResetDisabledSuite:
[info] - sort-shuffle with bypassMergeSort (SPARK-7873) (53 milliseconds)
[info] - calling deserialize() after deserializeStream()
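(For reference, module-scoped test tasks in sbt look like this; the suite
pattern below is only an example:
[spark]> core/test
[spark]> core/testOnly *KryoSerializerSuite
The first runs every test in the core module, the second a single suite.)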
Hi!
I have enabled checkpointing in spark streaming with kafka. I can see that
spark streaming is checkpointing to the mentioned directory on hdfs. How can
I test that it works fine and recovers with no data loss?
Thanks
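(For illustration, a minimal sketch of how such a recovery check is usually
structured, using StreamingContext.getOrCreate; the checkpoint path and
batch interval here are invented for the example:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val checkpointDir = "hdfs:///tmp/checkpoint-test"  // hypothetical path

def createContext(): StreamingContext = {
  val conf = new SparkConf().setAppName("CheckpointRecoveryTest")
  val ssc = new StreamingContext(conf, Seconds(5))
  ssc.checkpoint(checkpointDir)
  // set up the Kafka stream and stateful operations here
  ssc
}

// On restart this rebuilds the context from the checkpoint instead of
// calling createContext(), so state and unfinished batches are recovered;
// killing the job mid-stream and restarting it exercises the recovery path.
val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
ssc.start()
ssc.awaitTermination())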
Thanks for your response Yana,
I can increase the MaxPermSize parameter and it will allow me to run the
unit test a few more times before I run out of memory.
However, the primary issue is that running the same unit test in the same
JVM (multiple times) results in increased memory (each run
I'd suggest setting sbt to fork when running tests.
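For example, in build.sbt (sbt 0.13-era syntax, matching this thread):

// run tests in a forked JVM so memory (including PermGen) is reclaimed
// when the test JVM exits, instead of accumulating inside sbt's own JVM
fork in Test := true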
On Wed, Aug 26, 2015 at 10:51 AM, Mike Trienis mike.trie...@orcsol.com
wrote:
Thanks for your response Yana,
I can increase the MaxPermSize parameter and it will allow me to run the
unit test a few more times before I run out of memory
Hello,
I am using sbt and created a unit test where I create a `HiveContext` and
execute some query and then return. Each time I run the unit test the JVM
will increase its memory usage until I get the error:
Internal error when running tests: java.lang.OutOfMemoryError: PermGen space
Exception
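(As a stopgap, the PermGen limit can be raised for forked test JVMs in
build.sbt; the sizes below are arbitrary examples, and forking also gives
each run a fresh JVM:

fork in Test := true
javaOptions in Test ++= Seq("-Xmx2g", "-XX:MaxPermSize=512m"))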
Thanks Chenghao!
At 2015-08-25 13:06:40, Cheng, Hao hao.ch...@intel.com wrote:
Yes, check the source code
under:https://github.com/apache/spark/tree/master/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst
From: Todd [mailto:bit1...@163.com]
Sent: Tuesday, August 25, 2015 1:01
Yes, check the source code under:
https://github.com/apache/spark/tree/master/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst
From: Todd [mailto:bit1...@163.com]
Sent: Tuesday, August 25, 2015 1:01 PM
To: user@spark.apache.org
Subject: Test case for the spark sql catalyst
Hi,
Are there test cases for the spark sql catalyst, such as testing the rules
of transforming an unresolved query plan?
Thanks!
didn't happen to other localized files, such
as country_codes
FYI
On Wed, Jul 15, 2015 at 8:53 PM, luohui20...@sina.com wrote:
Hi all,
When I am running HiBench on my spark/Hadoop/Hive cluster, I
found there is always a failure in my aggregation test. I suspect this
problem may be some issue
:31:58 WARN hdfs.DFSClient: DFSInputStream has been closed already
Hi Mike: I am new to HiBench, so I just set up a test environment with a
1-node spark/hadoop cluster, with no data actually, because HiBench will
autogenerate the test data itself.
Thanks
Hi all,
There are two examples: one throws Task not serializable when executed in the
spark shell, the other one is OK. I am very puzzled. Can anyone explain what's
different about these two pieces of code and why the other one is OK?
1. The one which throws Task not serializable:
import org.apache.spark._
import
I can't tell immediately, but you might be able to get more info with the
hint provided here:
http://stackoverflow.com/questions/27980781/spark-task-not-serializable-with-simple-accumulator
(short version, set -Dsun.io.serialization.extendedDebugInfo=true)
Also, unless you're simplifying your
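(For context, a minimal sketch of the usual cause: referencing a field of a
non-serializable enclosing class captures the whole instance in the closure.
Class and method names here are invented for the example:

import org.apache.spark.rdd.RDD

class Job {                        // not Serializable
  val factor = 2
  def bad(rdd: RDD[Int]): RDD[Int] =
    rdd.map(_ * factor)            // captures `this` => Task not serializable
  def good(rdd: RDD[Int]): RDD[Int] = {
    val f = factor                 // copy the field to a local val
    rdd.map(_ * f)                 // only `f` is captured; serializes fine
  }
})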
Dear List,
I'm trying to reference a lonely message to this list from March 25th,(
http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Maven-Test-error-td22216.html
), but I'm unsure this will thread properly. Sorry if it didn't work out.
Anyway, using Spark 1.4.0-RC4 I run into the same
)
at
org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
15/03/06 17:39:32 INFO yarn.ApplicationMaster: AppMaster received a signal.
Hi all, I got the following error when I ran the unit tests of spark via
dev/run-tests on the latest branch-1.4 branch.
the latest commit id:
commit d518c0369fa412567855980c3f0f426cde5c190d
Author: zsxwing zsxw...@gmail.com
Date: Wed May 13 17:58:29 2015 -0700
error
[info] Test
Do you get this failure repeatedly?
On Thu, May 14, 2015 at 12:55 AM, kf wangf...@huawei.com wrote:
Hi all, I got the following error when I ran the unit tests of spark via
dev/run-tests
on the latest branch-1.4 branch.
the latest commit id:
commit d518c0369fa412567855980c3f0f426cde5c190d
Author
I'm also getting the same error.
Any ideas?
Hi,
I selected a starter task in JIRA, and made changes to my github fork of
the current code.
I assumed I would be able to build and test.
% mvn clean compile was fine
but
% mvn package failed
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.18:test (default-test
The standard incantation -- which is a little different from standard
Maven practice -- is:
mvn -DskipTests [your options] clean package
mvn [your options] test
Some tests require the assembly, so you have to do it this way.
I don't know what the test failures were; you didn't post them
data, to contingency
tables of frequency counts, to Pearson Chi-Square correlation statistics,
and perform a Chi-Squared hypothesis test. The user response data represents
a multiple choice question-answer (MCQ) format. The goal is to compute all
choose-two combinations of question answers (precondition, question X
question) contingency tables. Each cell
It's because your tests are running in parallel and you can only have one
context running at a time.
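The usual sbt fix for this (the same setting also appears later in this
digest):

// run test suites sequentially so only one SparkContext exists at a time
parallelExecution in Test := false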
Hi experts,
I am trying to write unit tests for my spark application, which fail with a
javax.servlet.FilterRegistration error.
I am using CDH5.3.2 Spark and below is my dependencies list.
val spark = 1.2.0-cdh5.3.2
val esriGeometryAPI = 1.2
val csvWriter = 1.0.0
spark version is 1.3.0 with tachyon-0.6.1
QUESTION DESCRIPTION: rdd.saveAsObjectFile(tachyon://host:19998/test) and
rdd.saveAsTextFile(tachyon://host:19998/test) succeed, but
rdd.toDF().saveAsParquetFile(tachyon://host:19998/test) fails.
ERROR MESSAGE
I use the following commands to run the unit tests:
./make-distribution.sh --tgz --skip-java-test -Pscala-2.10 -Phadoop-2.3
-Phive -Phive-thriftserver -Pyarn -Dyarn.version=2.3.0-cdh5.1.2
-Dhadoop.version=2.3.0-cdh5.1.2
mvn -Pscala-2.10 -Phadoop-2.3 -Phive -Phive-thriftserver -Pyarn
-Dyarn.version=2.3.0
On Fri, Mar 6, 2015 at 2:47 PM, nitinkak001 nitinkak...@gmail.com wrote:
I am trying to run a Hive query from Spark using HiveContext. Here is the
code
val conf = new SparkConf().setAppName("HiveSparkIntegrationTest")
conf.set("spark.executor.extraClassPath",
Sent from my iPhone
I am using Spark-1.1.1. When I used sbt test, I ran into the
following exceptions. Any idea how to solve it? Thanks! I think
somebody posted this question before, but no one seemed to have
answered it. Could it be the version of io.netty I put in my
build.sbt? I included a dependency
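(One common way to pin a single version of a conflicting library in sbt is
dependencyOverrides; the version below is only a placeholder:

// force one netty version regardless of what transitive dependencies ask for
dependencyOverrides += "io.netty" % "netty" % "3.6.6.Final")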
Hi,
I extended org.apache.spark.streaming.TestSuiteBase for some testing, and I
was able to run this test fine:
test("Sliding window join with 3 second window duration") {
  val input1 =
    Seq(
      Seq("req1"),
      Seq("req2", "req3"),
      Seq(),
      Seq("req4", "req5", "req6"),
      Seq("req7
().accept(MediaType.APPLICATION_JSON_TYPE).get(String.class);
logger.warn("!!! DEBUG !!! Spotlight response: {}", response);
When run inside a unit test as follows:
mvn clean test -Dtest=SpotlightTest#testCountWords
it contacts the RESTful web service and retrieves some data as expected. But
when the same code is run as part
] | \- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
[INFO] +- junit:junit:jar:4.11:test
[INFO] | \- org.hamcrest:hamcrest-core:jar:1.3:test
[INFO] +- org.apache.avro:avro:jar:1.7.7:compile
[INFO] | +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO
.get(String.class);
logger.warn("!!! DEBUG !!! Spotlight response: {}", response);
It seems to work when I use spark-submit to submit the application that
includes this code.
Funny thing is, now my relevant unit test does not run, complaining about
not having enough memory:
Java HotSpot(TM) 64-Bit Server VM warning: INFO:
os::commit_memory(0xc490, 25165824, 0) failed; error='Cannot
allocate memory' (errno=12)
#
# There is insufficient memory for the Java
)
at scala.Option.foreach(Option.scala:236)
On 15.12.2014, at 22:36, Marius Soutier mps@gmail.com wrote:
Ok, maybe these test versions will help me then. I’ll check it out.
On 15.12.2014, at 22:33, Michael Armbrust mich...@databricks.com wrote:
Using a single SparkContext should not cause this problem
at org.xerial.snappy.Snappy.uncompressedLength(Snappy.java:545)
I can only prevent this from happening by using isolated Specs tests that
always create a new SparkContext that is not shared between tests (but
there can also be only a single SparkContext per test), and also by using
the standard SQLContext instead of HiveContext. It does
Ok, maybe these test versions will help me then. I’ll check it out.
On 15.12.2014, at 22:33, Michael Armbrust mich...@databricks.com wrote:
Using a single SparkContext should not cause this problem. In the SQL tests
we use TestSQLContext and TestHive which are global singletons for all
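(A minimal sketch of the same singleton idea for one's own suites; the
object name is invented:

import org.apache.spark.{SparkConf, SparkContext}

// One JVM-wide SparkContext shared by every suite, mirroring how
// TestSQLContext/TestHive hand out a single global context.
object SharedSpark {
  lazy val sc = new SparkContext(
    new SparkConf().setMaster("local[2]").setAppName("shared-test-context"))
})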
to my local Spark, it waits for a file to be
written to a given directory, and when I create that file it successfully
prints the number of words. I terminate the application by pressing Ctrl+C.
Now I've tried to create a very basic unit test for this functionality, but
in the test I was not able
Hi,
https://github.com/databricks/spark-perf/tree/master/streaming-tests/src/main/scala/streaming/perf
contains some performance tests for streaming. There are examples of how to
generate synthetic files during the test in that repo, maybe you
can find some code snippets that you can use
Hello,
I'm currently developing a Spark Streaming application and trying to write
my first unit test. I've used Java for this application, and I also need to
use Java (and JUnit) for writing unit tests.
I could not find any documentation that focuses on Spark Streaming unit
testing, all I could
Please specify '-DskipTests' on commandline.
Cheers
On Dec 5, 2014, at 3:52 AM, Emre Sevinc emre.sev...@gmail.com wrote:
Hello,
I'm currently developing a Spark Streaming application and trying to write my
first unit test. I've used Java for this application, and I also need to use
Java
Hello,
Specifying '-DskipTests' on commandline worked, though I can't be sure
whether first running 'sbt assembly' also contributed to the solution.
(I've tried 'sbt assembly' because branch-1.1's README says to use sbt).
Thanks for the answer.
Kind regards,
Emre Sevinç
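(For anyone hitting the same question: one self-contained way to drive a
streaming unit test without external inputs is queueStream. A minimal Scala
sketch follows; the JUnit/Java translation is mechanical, and the data and
timeout are arbitrary:

import scala.collection.mutable
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setMaster("local[2]").setAppName("streaming-test")
val ssc = new StreamingContext(conf, Seconds(1))

// each queued RDD becomes one input batch
val queue = mutable.Queue(ssc.sparkContext.parallelize(Seq("a", "b", "a")))
val counts = ssc.queueStream(queue).countByValue()
counts.print() // in a real test, collect into a buffer and assert on it

ssc.start()
ssc.awaitTerminationOrTimeout(3000)
ssc.stop())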
I am trying to look at problems reading a data file over 4G. In my testing
I am trying to create such a file.
My plan is to create a FASTA file (a simple format used in biology)
looking like:
1
TCCTTACGGAGTTCGGGTGTTTATCTTACTTATCGCGGTTCGCTGCCGCTCCGGGAGCCCGGATAGGCTGCGTTAATACCTAAGGAGCGCGTATTG
2
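(A minimal sketch of generating such a file with plain Scala; record count,
line length, and alphabet are arbitrary, and the count can be raised until
the file passes 4 GB:

import java.io.PrintWriter
import scala.util.Random

val out = new PrintWriter("test.fasta")
val bases = "ACGT"
for (i <- 1 to 1000000) {
  out.println(">" + i) // FASTA header line for record i
  out.println(Seq.fill(100)(bases(Random.nextInt(bases.length))).mkString)
}
out.close())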
Test message
Dear all,
We encountered problems of failed jobs with huge amounts of data.
A simple local test was prepared for this question at
https://gist.github.com/copy-of-rezo/6a137e13a1e4f841e7eb
It generates 2 sets of key-value pairs, joins them, selects distinct values,
and finally counts the data.
object
Hi, please can someone advise on this?
On Wed, Sep 17, 2014 at 6:59 PM, VJ Shalish vjshal...@gmail.com wrote:
I am trying to benchmark spark in a hadoop cluster.
I need to design a sample spark job to test the CPU utilization, RAM
usage, Input throughput, Output throughput and Duration
I am trying to benchmark spark in a hadoop cluster.
I need to design a sample spark job to test the CPU utilization, RAM usage,
Input throughput, Output throughput and Duration of execution in the
cluster.
I need to test the state of the cluster for:
- A spark job which uses high CPU
- A spark
When I run `sbt test-only SparkTest` or `sbt test-only SparkTest1`, it
passes. But running `sbt test` to test SparkTest and SparkTest1 together
fails.
If I merge all the cases into one file, the tests pass.
I use
https://github.com/apache/spark/blob/master/streaming/src/test/scala/org/apache/spark/streaming/TestSuiteBase.scala
to help me with testing.
In spark 0.9.1 my tests depending on TestSuiteBase worked fine. As soon as I
switched to the latest (1.0.1), all tests fail. My sbt import
Hi Xiangrui,
You can refer to An Introduction to Statistical Learning with Applications in
R; there are many standard hypothesis tests to do regarding linear
regression and logistic regression. They should be implemented in the first
order, then we will list some other tests, which are also
Hi,
From the documentation I think only the model fitting part is implemented;
what about the various hypothesis tests and performance indexes used to
evaluate the model fit?
Regards,
Xiaobo Gu
Hi TD,
I tried some different setups on maven these days, and now I can at least get
something when running mvn test. However, it seems like scalatest cannot
find the test cases specified in the test suite.
Here is the output I get:
http://apache-spark-user-list.1001560.n3.nabble.com/file/n11825
Thank you TD,
I have worked around that problem and now the test compiles.
However, I don't actually see that test running: when I do mvn test, it
just says BUILD SUCCESS, without any TEST section on stdout.
Are we supposed to use mvn test to run the tests? Are there any other
methods can
Does it not show the name of the testsuite on stdout, showing that it has
passed? Can you try writing a small unit test, in the same way as
your kafka unit test, and with print statements on stdout ... to see
whether it works? I believe it is some configuration issue in maven, which
is hard
when trying to run the KafkaStreamSuite.scala unit
test.
I added scalatest-maven-plugin to my pom.xml, then ran mvn test, and got
the follow error message:
error: object Utils in package util cannot be accessed in package
org.apache.spark.util
[INFO
Hello Spark Users,
I have a spark streaming program that stream data from kafka topics and
output as parquet file on HDFS.
Now I want to write a unit test for this program to make sure the output
data is correct (i.e. not missing any data from kafka).
However, I have no idea about how to do
Appropriately timed question! Here is the PR that adds a real unit
test for Kafka stream in Spark Streaming. Maybe this will help!
https://github.com/apache/spark/pull/1751/files
On Mon, Aug 4, 2014 at 6:30 PM, JiajiaJing jj.jing0...@gmail.com wrote:
Hello Spark Users,
I have a spark
This helps a lot!!
Thank you very much!
Jiajia
Hi guys,
I want to use Elasticsearch and HBase in my spark project, I want to create
a test. I pulled up ES and Zookeeper, but if I put val htest = new
HBaseTestingUtility() into my app, I get a strange exception (at compile
time, not runtime).
https://gist.github.com/b0c1/4a4b3f6350816090c3b5
1) download winutils.exe from
http://social.msdn.microsoft.com/Forums/windowsazure/en-US/28a57efb-082b-424b-8d9e-731b1fe135de/please-read-if-experiencing-job-failures?forum=hdinsight
2) put this file into d:\winutil\bin
3) add in my test: System.setProperty("hadoop.home.dir", "d:\\winutil\\")
after that the test runs
Thank you,
Konstantin Kudryavtsev
On Wed, Jul 2, 2014 at 10:24 PM, Denny Lee denny.g@gmail.com wrote:
You don't actually need it per se - it's just that some
Hi all,
I'm trying to run some transformation on *Spark*; it works fine on cluster
(YARN, linux machines). However, when I'm trying to run it on a local machine
(*Windows 7*) under a unit test, I got errors:
java.io.IOException: Could not locate executable null\bin\winutils.exe in
the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333
Hello, I am new to scala spark. Yesterday I compiled spark from the 1.0.0
source code and ran the tests; one testcase failed.
For example, run this command in the shell: sbt/sbt testOnly
org.apache.spark.streaming.InputStreamsSuite
The testcase test("socket input stream") would
tests.
This can be done by adding the following line in your build.sbt:
parallelExecution in Test := false
Cheers,
Anselme
2014-06-17 23:01 GMT+02:00 SK skrishna...@gmail.com:
Hi,
I have 3 unit tests (independent of each other) in the /src/test/scala
folder. When I run each of them
will run faster. Start-up and shutdown is
time consuming (can add a few seconds per test).
- The downside is that your tests are using the same SparkContext so they are
less independent of each other. I haven’t seen issues with this yet but there
are likely some things that might crop up.
Best
the previous test's tearDown
spark = new SparkContext("local", "test spark")
}
@After
def tearDown() {
  spark.stop()
  spark = null // not sure why this helps but it does!
  System.clearProperty("spark.master.port")
}
It's been since last fall (i.e. version 0.8.x
Hi,
I have 3 unit tests (independent of each other) in the /src/test/scala
folder. When I run each of them individually using sbt test-only with the
test name, all 3 pass. But when I run them all using sbt test, they fail
with the warning below. I am wondering if the binding exception
Hi,
My unit test is failing (the output is not matching the expected output). I
would like to print out the value of the output. But
rdd.foreach(r => println(r)) does not work from the unit test. How can I
print or write out the output to a file/screen?
thanks.
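(The usual workaround: foreach runs on the executors, so its println output
never reaches the test's stdout. Collect to the driver first, which is fine
for small test outputs; the output path below is just an example:

rdd.collect().foreach(println) // prints on the driver
// or write it out for later inspection:
// rdd.saveAsTextFile("target/test-output"))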
Hi!
I have two questions:
1.
I want to test my application. My app will write the result to elasticsearch
(stage 1) with saveAsHadoopFile. How can I write a test case for it? Do I
need to pull up a MiniDFSCluster? Or is there another way?
My application flow plan:
Kafka = Spark Streaming (enrich
Subject: Re: Re: java.io.FileNotFoundException:
/test/spark-0.9.1/work/app-20140505053550-/2/stdout (No such file or
directory)
I looked into the log again; all exceptions are about FileNotFoundException.
In the WebUI, there is no more info I can check except for the basic
description of the job
Got it.
But it doesn't indicate that everyone can receive this test.
The mailing list has been unstable recently.
Sent from my iPhone5s
On May 10, 2014, at 13:31, Matei Zaharia matei.zaha...@gmail.com wrote:
This message has no content.
I didn't get the original message, only the reply. Ruh-roh.
On Sun, May 11, 2014 at 8:09 AM, Azuryy azury...@gmail.com wrote:
Got it.
But it doesn't indicate that everyone can receive this test.
The mailing list has been unstable recently.
Sent from my iPhone5s
On May 10, 2014, at 13:31, Matei Zaharia matei.zaha
AbstractHttpConnection:
/logPage/?appId=app-20140505053550-&executorId=2&logType=stdout
java.io.FileNotFoundException:
/test/spark-0.9.1/work/app-20140505053550-/2/stdout (No such file or
directory)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>
In fact the file does not exist, and there is no permission issue.
francis@ubuntu-4:/test/spark-0.9.1$ ll work/app-20140505053550-/
total 24
drwxrwxr-x 6 francis francis 4096 May 5 05:35 ./
drwxrwxr-x 11 francis francis 4096 May 5 06:18 ../
drwxrwxr-x 2 francis francis 4096 May 5 05:35 2