Hello,
I would appreciate it if you could share an XML configuration file for the following code style:
https://cwiki.apache.org/confluence/display/SPARK/Spark+Code+Style+Guide
Thanks.
Hello,
I have been trying to find suitable tools, without success. So, as the title describes: are
there any open-source tools that implement draggable widgets and let an app
run as a DAG-like workflow?
Thanks,
Minglei.
Hello,
I am looking for test files in Spark that provide the same functionality
as the Hadoop MiniCluster test environment, but I cannot find them.
Does anyone know about this?
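For reference, Spark's own test suites mostly run against a local-mode SparkContext inside one JVM rather than a MiniCluster. A minimal sketch of that pattern (the object name and numbers here are hypothetical, not from any Spark test file):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical sketch of a local-mode test harness.
// "local[2]" runs the driver plus two executor threads in one JVM,
// which is how Spark's unit tests simulate a small cluster.
object LocalModeTestSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("local-mode-test")
    val sc = new SparkContext(conf)
    try {
      // Sum of (1..10) doubled: 2 + 4 + ... + 20 = 110
      val result = sc.parallelize(1 to 10).map(_ * 2).sum()
      assert(result == 110.0)
    } finally {
      sc.stop()
    }
  }
}
```

This needs the Spark jars on the classpath but no running cluster, which is usually what MiniCluster-style tests are after.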
I'm sorry; the error does not occur when I build Spark. It happens when running
the example LogisticRegressionWithElasticNetExample.scala.
From: zml张明磊 [mailto:mingleizh...@ctrip.com]
Sent: December 31, 2015, 15:01
To: user@spark.apache.org
Subject: Error:scalac: Error: assertion failed: List(object
Hello,
Recently, I built Spark from apache/master and got the following error.
Following the Stack Overflow answer at
http://stackoverflow.com/questions/24165184/scalac-assertion-failed-while-run-scalatest-in-idea,
I cannot find the Preferences > Scala setting it mentions in IntelliJ IDEA, and SBT did not
work for me in
Hi,
I am new to Scala and Spark and am trying to find the relevant DataFrame
API to solve the problem described in the title. However, I have only found
the API DataFrame.col(colName: String): Column, which returns a Column
object, not the content. If only DataFrame supported such an API which
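For what it's worth, `col` only builds a column expression; the usual way to get the actual values is to select the column and collect it. A hedged sketch (the helper name is hypothetical; `df` is assumed to be any DataFrame):

```scala
import org.apache.spark.sql.{DataFrame, Row}

// Hypothetical helper: materialize a column's values on the driver.
// df.col(colName) is only an expression; select + collect fetches the data.
def columnValues(df: DataFrame, colName: String): Array[Any] =
  df.select(colName).collect().map((row: Row) => row.get(0))
```

Note that `collect()` pulls everything to the driver, so this only makes sense for small results.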
Hi,
I am trying to figure out how Maven works. When I add a dependency to my
existing pom.xml and rebuild my Spark application project, the console reports
BUILD SUCCESS. However, when I run the Spark application,
spark-shell is not happy and directly gives me a message
Hi,
Spark version: 1.4.1
Running the code below, I get the following error. How can I fix the code so it runs
correctly? I don't know why the schema doesn't support this type. If I
use callUDF instead of udf, everything works.
Thanks,
Minglei.
val index:(String => (String => Int)) =
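A likely cause, for what it's worth: `udf` cannot encode a function-valued result. A `String => (String => Int)` returns another function, and Catalyst has no schema type for functions, which would explain the assertion failure while `callUDF` with a different shape succeeds. A hedged sketch of the usual fix is to uncurry it into a two-argument UDF (the column names below are hypothetical):

```scala
import org.apache.spark.sql.functions.udf

// One plausible curried form (hypothetical) that Catalyst cannot type:
//   val index: String => (String => Int) = a => b => if (a == b) 1 else 0
// Uncurried, the result type Int maps cleanly to IntegerType:
val indexUdf = udf((a: String, b: String) => if (a == b) 1 else 0)

// Hypothetical usage, assuming df has string columns "a" and "b":
// df.withColumn("index", indexUdf(df("a"), df("b")))
```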
Hi ,
I am new to Scala and Spark. Recently, I needed to write a tool that
transforms categorical variables into dummy/indicator variables. I want to know whether
there are tools in Scala and Spark that support this transformation,
like pandas.get_dummies in Python. Any example or study
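For comparison, the closest analogue in Spark ML (available from around Spark 1.4) is a StringIndexer followed by a OneHotEncoder. A hedged sketch, with a hypothetical input column named "category":

```scala
import org.apache.spark.ml.feature.{OneHotEncoder, StringIndexer}
import org.apache.spark.sql.DataFrame

// Sketch of a pandas.get_dummies-style transformation.
// Column names are hypothetical; adapt to the actual schema.
def withDummies(df: DataFrame): DataFrame = {
  // Map each distinct string value to a numeric index.
  val indexer = new StringIndexer()
    .setInputCol("category")
    .setOutputCol("categoryIndex")
  // Expand the index into a sparse 0/1 indicator vector.
  val encoder = new OneHotEncoder()
    .setInputCol("categoryIndex")
    .setOutputCol("categoryVec")
  val indexed = indexer.fit(df).transform(df)
  encoder.transform(indexed)
}
```

Unlike get_dummies, this produces a single vector column rather than one column per category, which is what Spark ML estimators expect.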
Last night, I ran the jar in pseudo-distributed mode without any WARN or
ERROR. Today, however, I get the WARN below, which leads directly to the ERROR.
My computer has 8 GB of memory, so I don't think that is the issue the WARN log
describes. What's wrong? The code hasn't changed yet. And the
Hi,
My Spark version is spark-1.4.1-bin-hadoop2.6. When I submit a Spark
job and read data from a Hive table, I get the following error. Although it's
just a WARN, it leads to job failure.
Maybe the following JIRA has solved it, so I am confused.