Databricks provides sample code on its website... but I can't find it right now.
At 2015-02-12 00:43:07, captainfranz captainfr...@gmail.com wrote:
I am confused as to whether Avro support was merged into Spark 1.2 or whether it
is still an independent library.
I see some people writing
yuzhih...@gmail.com wrote:
Spark depends on slf4j 1.7.5
Please check your classpath and make sure slf4j is included.
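If your build needs it spelled out, a hedged sbt-style example (version taken
from the 1.7.5 mentioned above):

    // Make slf4j-api explicit on the classpath (sbt syntax).
    libraryDependencies += "org.slf4j" % "slf4j-api" % "1.7.5"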
Cheers
On Wed, Feb 11, 2015 at 6:20 AM, Todd bit1...@163.com wrote:
After compiling the Spark 1.2.0 codebase in IntelliJ IDEA and running the LocalPi
example, I got the following slf4j-related issue. Does anyone know how to fix
this? Thanks
Error: scalac: bad symbolic reference. A signature in Logging.class refers to
type Logger
in package org.slf4j which is not
Sorry for the noise. I have found it.
At 2015-02-18 23:34:40, Todd bit1...@163.com wrote:
Looks like the log analysis reference app provided by Databricks at
https://github.com/databricks/reference-apps only has a Java API?
I'd like to see the Scala version.
I am a bit new to Spark; I have only tried simple things like word count and
the examples given in the Spark SQL programming guide.
Now I am investigating the internals of Spark, but I am almost lost,
because I cannot grasp the whole picture of what Spark does when it executes the
Thanks Sean.
I followed the guide and imported the codebase into IntelliJ IDEA as a Maven
project, with the profiles hadoop2.4 and yarn.
In the Maven project view, I ran Maven Install against the module Spark
Project Parent POM (root). After a pretty long time, all the modules were built
successfully.
Hi,
I have imported the Spark source code into IntelliJ IDEA as an SBT project. I
tried to do a Maven install in IntelliJ IDEA by clicking Install on the Spark
Project Parent POM (root), but it failed.
I would ask which profiles should be checked. What I want to achieve is starting
Spark in the IDE and Hadoop
Hi,
Following is copied from the Spark event timeline UI. I don't understand why
there is overlap between tasks.
I think they should run sequentially, one by one, in one executor (there is
one core per executor).
The blue part of each task is the scheduler delay time. Does it mean it is the
Hi,
I can't access
http://people.csail.mit.edu/matei/papers/2015/sigmod_spark_sql.pdf.
Could someone check whether it is available and reply with it? Thanks!
Hi,
With the following code snippet, I cached the raw RDD (which is already in
memory, but just for illustration) and its DataFrame.
I thought that the DataFrame cache would take less space than the RDD cache,
which is wrong: from the UI I see that the RDD cache takes 168 B, while the
DataFrame cache takes
expecting the footprint of the DataFrame to be lower when it contains more
information (RDD + schema)
On Sat, Aug 15, 2015 at 6:35 PM, Todd bit1...@163.com wrote:
Hi,
With the following code snippet, I cached the raw RDD (which is already in
memory, but just for illustration) and its DataFrame.
I thought
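For reference, a minimal sketch of the comparison being discussed (Spark 1.x
APIs; the data and names here are illustrative, not the original snippet):

    // Cache the raw RDD and its DataFrame, then compare the two entries
    // on the Storage tab of the UI.
    val rdd = sc.parallelize(Seq(("Ann", 32), ("Bob", 25)))
    rdd.cache()
    rdd.count()        // an action materializes the RDD cache

    val df = sqlContext.createDataFrame(rdd).toDF("name", "age")
    df.cache()
    df.count()         // materializes the DataFrame cache

The DataFrame cache uses an in-memory columnar format, whose per-column
overhead can dominate for a tiny dataset like this.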
One Spark application can have many jobs, e.g., first call rdd.count, then call
rdd.collect.
At 2015-08-18 15:37:14, Hemant Bhanawat hemant9...@gmail.com wrote:
It is still in memory for future rdd transformations and actions.
This is interesting. You mean Spark holds the data in memory
Take a look at the doc for the method:
/**
* Applies a schema to an RDD of Java Beans.
*
* WARNING: Since there is no guaranteed ordering for fields in a Java Bean,
* SELECT * queries will return the columns in an undefined order.
* @group dataframes
* @since
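A minimal sketch of the API that comment documents (Spark 1.3-style; the
Person bean here is hypothetical):

    import scala.beans.BeanProperty

    // Hypothetical Java-style bean: getters/setters plus a no-arg constructor.
    class Person(@BeanProperty var name: String, @BeanProperty var age: Int) {
      def this() = this(null, 0)
    }

    val people = sc.parallelize(Seq(new Person("Ann", 32)))
    val df = sqlContext.createDataFrame(people, classOf[Person])
    df.registerTempTable("people")
    // Bean field order is undefined, so list columns explicitly
    // rather than relying on SELECT *.
    sqlContext.sql("SELECT name, age FROM people").show()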
Hi,
I would ask if there are blogs/articles/videos on how to analyse Spark
performance at runtime, e.g., tools that can be used or anything related.
Is there a way to auto-relaunch if the
driver runs as a Hadoop YARN application?
On Wednesday, 19 August 2015 12:49 PM, Todd bit1...@163.com wrote:
There is an option for the spark-submit (Spark standalone or Mesos with cluster
deploy mode only)
--supervise If given, restarts
I think I have found the answer.
On the UI, the recorded time of each task is when it is put into the thread
pool; then the UI makes sense.
At 2015-08-18 17:40:07, Todd bit1...@163.com wrote:
Hi,
Following is copied from the Spark event timeline UI. I don't understand why
there is overlap
Please try DataFrame.toJSON; it will give you an RDD of JSON strings.
At 2015-08-21 15:59:43, smagadi sudhindramag...@fico.com wrote:
val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND
age <= 19")
I need teenagers to be a JSON object rather than a simple Row. How can we get
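A minimal sketch of that suggestion (Spark 1.x; the table and predicate follow
the example above):

    // toJSON converts each Row to a JSON string, giving an RDD[String].
    val teenagers = sqlContext.sql(
      "SELECT name FROM people WHERE age >= 13 AND age <= 19")
    val json = teenagers.toJSON              // RDD[String], one object per row
    json.collect().foreach(println)          // e.g. {"name":"Justin"}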
There is an option for the spark-submit (Spark standalone or Mesos with cluster
deploy mode only)
--supervise If given, restarts the driver on failure.
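As a sketch, the flag goes on the submit command like this (master URL, class
and jar are placeholders):

    ./bin/spark-submit \
      --master spark://master:7077 \
      --deploy-mode cluster \
      --supervise \
      --class com.example.MyApp \
      myapp.jar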
At 2015-08-19 14:55:39, Spark Enthusiast sparkenthusi...@yahoo.in wrote:
Folks,
As I see, the Driver program is a
I can't find any discussion of whether Spark SQL supports column indexing. If it
does, is there a guide on how to do it? Thanks.
Hi, I have a basic Spark SQL join run in local mode. I checked the UI and
see that two jobs are run. Their DAG graphs are pasted at the end.
I have several questions here:
1. It looks like Job 0 and Job 1 have the same DAG stages, but stage 3 and
stage 4 are skipped. I would ask
requires
that you have already created the data/tables. I'll work on updating the README
as the QA period moves forward.
On Thu, Aug 13, 2015 at 6:49 AM, Todd bit1...@163.com wrote:
Hi,
I got a question about the spark-sql-perf project by Databricks at
https://github.com/databricks/spark-sql
Hi,
I would ask whether there are slides, blogs or videos on how Spark SQL is
implemented: the process, or the whole picture of what happens when Spark SQL
executes code. Thanks!
Hi,
I got a question about the spark-sql-perf project by Databricks at
https://github.com/databricks/spark-sql-perf/
The Tables.scala
(https://github.com/databricks/spark-sql-perf/blob/master/src/main/scala/com/databricks/spark/sql/perf/bigdata/Tables.scala)
and BigData
/a/databricks.com/document/d/1Hc_Ehtr0G8SQUg69cmViZsMi55_Kf3tISD9GPGU5M1Y/edit
FYI
On Thu, Aug 13, 2015 at 8:54 PM, Todd bit1...@163.com wrote:
Hi,
I would ask whether there are slides, blogs or videos on how Spark SQL is
implemented: the process, or the whole picture of what happens when Spark SQL
There are many case classes and concepts such as
Attribute/AttributeReference/Expression in Spark SQL.
I would ask what Attribute/AttributeReference/Expression mean: given a SQL
query like select a, b from c, are a and b two Attributes? Is a + b an
Expression?
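One way to see this is to print the analyzed plan for the query in question; a
hedged sketch (output shape is approximate):

    // a and b show up as AttributeReference nodes in the analyzed plan;
    // a term like a + b would appear as an Expression tree (Add).
    val df = sqlContext.sql("select a, b from c")
    println(df.queryExecution.analyzed)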
Looks like I misunderstood it.
Hi,
I am starting the Spark Thrift Server with the following script:
./start-thriftserver.sh --master yarn-client --driver-memory 1G
--executor-memory 2G --driver-cores 2 --executor-cores 2 --num-executors 4
--hiveconf hive.server2.thrift.port=10001 --hiveconf
Hi,
When I cache the DataFrame and run the query,
val df = sqlContext.sql("select name,age from TBL_STUDENT where age = 37")
df.cache()
df.show
println(df.queryExecution)
I got the following execution plan; from the optimized logical plan, I can see
the whole analyzed logical
Hi,
I am trying to build Spark 1.5.1 in my environment, but I encounter the
following error complaining "Required file not found: sbt-interface.jar".
The error message is below, and I am building with:
./make-distribution.sh --name spark-1.5.1-bin-2.6.0 --tgz --with-tachyon
-Phadoop-2.6
I am launching SparkR with the following script:
./sparkR --driver-memory 12G
and I try to load a local 3 GB CSV file with the following code:
> a=read.transactions("/home/admin/datamining/data.csv",sep="\t",format="single",cols=c(1,2))
but I encounter an error: could not allocate memory (2048 Mb) in
I am using Tachyon in the Spark program below, but I encounter a
BlockNotFoundException.
Does someone know what's wrong? Also, is there a guide on how to configure
Spark to work with Tachyon? Thanks!
conf.set("spark.externalBlockStore.url", "tachyon://10.18.19.33:19998")
Are you able to get a more detailed error message?
Thanks
On Aug 25, 2015, at 6:57 PM, Todd bit1...@163.com wrote:
Thanks Ted Yu.
Following are the error message:
1. The exception that is shown on the UI is:
Exception in thread "Thread-113" Exception in thread "Thread-126" Exception
Sorry for the noise, it's my bad... I have worked it out now.
At 2015-08-26 13:20:57, Todd bit1...@163.com wrote:
I think the answer is no. I only see such messages on the console, and #2 is
the thread stack trace.
My thinking is that Spark SQL Perf forks many dsdgen processes to generate
to understand more about
scope of modules:
https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html
On Tue, Aug 25, 2015 at 12:18 PM, Todd bit1...@163.com wrote:
I cloned the code from https://github.com/apache/spark to my machine. It can
compile successfully
Thanks Chenghao!
At 2015-08-25 13:06:40, Cheng, Hao hao.ch...@intel.com wrote:
Yes, check the source code under:
https://github.com/apache/spark/tree/master/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst
From: Todd [mailto:bit1...@163.com]
Sent: Tuesday, August 25, 2015 1:01
I cloned the code from https://github.com/apache/spark to my machine. It can
compile successfully,
but when I run SparkPi, it throws the exception below complaining that
scala.collection.Seq is not found.
I have installed Scala 2.10.4 on my machine, and use the default profiles:
Hi, are there test cases for the Spark SQL Catalyst, such as tests for the
rules that transform an unresolved query plan?
Thanks!
:13 PM, Todd bit1...@163.com wrote:
There are many case classes and concepts such as
Attribute/AttributeReference/Expression in Spark SQL.
I would ask what Attribute/AttributeReference/Expression mean: given a SQL
query like select a, b from c, are a and b two Attributes? Is a + b
Increase the number of executors, :-)
At 2015-08-26 16:57:48, Ted Yu yuzhih...@gmail.com wrote:
Mind sharing how you fixed the issue?
Cheers
On Aug 26, 2015, at 1:50 AM, Todd bit1...@163.com wrote:
Sorry for the noise, it's my bad... I have worked it out now.
At 2015-08-26 13:20:57
Hi,
The Spark SQL Perf suite itself contains benchmark data generation. I am using
the Spark shell to run Spark SQL Perf to generate the data, with 10 GB of
memory for both driver and executor.
When I increase the scale factor to 30 and run the job, I get the
following error:
When I jstack it to
- or paste error in text.
Cheers
On Tue, Aug 25, 2015 at 4:22 AM, Todd bit1...@163.com wrote:
Hi,
The Spark SQL Perf suite itself contains benchmark data generation. I am using
the Spark shell to run Spark SQL Perf to generate the data, with 10 GB of
memory for both driver and executor.
When I increase
Hi,
I am using data generated with
spark-sql-perf (https://github.com/databricks/spark-sql-perf) to test Spark
SQL performance (Spark on YARN, with 10 nodes) with the following code (the
table store_sales is about 90 million records, 6 GB in size):
val
>> code generation could introduce slowness
>>
>> On 2015-09-11 15:58, Cheng, Hao wrote:
>>
>> Can you confirm if the query really runs in cluster mode, not local
>> mode? Can you print the call stack of the executor when the query is running?
..." in Spark 1.5, and run the query again?
In our previous testing, it's about 20% slower for sort merge join. I am not
sure if there is anything else slowing down the performance.
Hao
From: Jesse F Chen [mailto:jfc...@us.ibm.com]
Sent: Friday, September 11, 2015 1:18 PM
To: Michael Armbrus
oth runs would be helpful whenever
reporting performance changes.
On Thu, Sep 10, 2015 at 1:24 AM, Todd <bit1...@163.com> wrote:
Hi,
I am using data generated with
spark-sql-perf (https://github.com/databricks/spark-sql-perf) to test Spark
SQL performance (Spark on YARN, with 10 nodes
5, and it's true by
default, but we found it probably causes the performance to drop dramatically.
From: Todd [mailto:bit1...@163.com]
Sent: Friday, September 11, 2015 2:17 PM
To: Cheng, Hao
Cc: Jesse F Chen; Michael Armbrust; user@spark.apache.org
Subject: Re:RE: spark 1.5 SQL slows down dr
', there is no table showing queries and execution
plan information.
At 2015-09-11 14:39:06, "Todd" <bit1...@163.com> wrote:
Thanks Hao.
Yes, it is still slow as SMJ. Let me try the option you suggested.
At 2015-09-11 14:34:46, "Cheng, Hao" <hao.ch...@intel.com> w
Hi,
I am able to mvn install the whole Spark project (from GitHub) in my IDEA.
But when I run the SparkPi example, IDEA compiles the code again and the
following exception is thrown.
Has anyone met this problem? Thanks a lot.
Error: scalac:
while compiling:
Did you run Hive on Spark with Spark 1.5 and Hive 1.1?
I think Hive on Spark doesn't support Spark 1.5; there are compatibility issues.
At 2016-01-28 01:51:43, "Ruslan Dautkhanov" wrote:
https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
Hi,
I am kind of confused about how data locality is honored when Spark is
running on YARN (client or cluster mode). Can someone please elaborate on this?
Thanks!
Hi,
I have a long computing chain; I get the last RDD after a series of
transformations. I have two choices for what to do with this last RDD (see the
sketch below):
1. Call checkpoint on the RDD to materialize it to disk
2. Call RDD.saveXXX to save it to HDFS, and read it back for further processing
I would ask which choice
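A hedged sketch of the two choices (paths and the input are illustrative):

    // Stand-in for the end of the long transformation chain.
    val lastRdd = sc.textFile("hdfs:///tmp/input").map(_.toUpperCase)

    // Choice 1: checkpoint, which truncates lineage; Spark manages the files.
    sc.setCheckpointDir("hdfs:///tmp/checkpoints")
    lastRdd.cache()        // avoid computing the chain twice
    lastRdd.checkpoint()
    lastRdd.count()        // an action triggers the actual checkpoint

    // Choice 2: save explicitly and read back; you control the files.
    lastRdd.saveAsObjectFile("hdfs:///tmp/last-rdd")
    val reloaded = sc.objectFile[String]("hdfs:///tmp/last-rdd")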
From the official site http://arrow.apache.org/, Apache Arrow is used for
Columnar In-Memory storage. I have two quick questions:
1. Does Spark support Apache Arrow?
2. When a DataFrame is cached in memory, the data is saved in a columnar
in-memory format. What is the relationship between this
Hi,
I briefly went through the Spark code, and it looks like structured streaming
doesn't support Kafka as a data source yet?
Hi,
In the Spark code, the Guava Maven dependency scope is provided; my question
is, how does Spark depend on Guava at runtime? I looked into
spark-assembly-1.6.1-hadoop2.6.1.jar and didn't find class entries like
com.google.common.base.Preconditions, etc.
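For what a provided scope means, a hedged sbt-style illustration (coordinates
and version are illustrative):

    // A "provided" dependency is on the compile classpath but is expected
    // to be supplied by the runtime environment rather than bundled.
    libraryDependencies += "com.google.guava" % "guava" % "14.0.1" % "provided"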
r" <m...@schaffer.me> wrote:
I got curious so I tried sbt dependencyTree. Looks like Guava comes into spark
core from a couple places.
-Mat
matschaffer.com
On Mon, May 23, 2016 at 2:32 PM, Todd <bit1...@163.com> wrote:
Can someone please take a look at my question? I am
Can someone please take a look at my question? I am using spark-shell in local
mode and yarn-client mode. My Spark code uses the Guava library, so Spark
should have Guava in place at runtime.
Thanks.
At 2016-05-23 11:48:58, "Todd" <bit1...@163.com> wrote:
Hi,
In the spark code, guava
As far as I know, there would be an Akka version conflict issue when using
Akka as a Spark Streaming source.
At 2016-05-23 21:19:08, "Chaoqiang" wrote:
>I want to know why Spark 1.6 uses Netty instead of Akka. Are there some
>difficult problems which Akka can not
There is a JIRA that works on Spark Thrift Server HA; the patch works, but it
still hasn't been merged into the master branch.
At 2016-05-23 20:10:26, "qmzhang" <578967...@qq.com> wrote:
>Dear guys, please help...
>
>In Hive, we can enable HiveServer2 high availability by using dynamic service
Thanks Ted!
At 2016-05-17 16:16:09, "Ted Yu" <yuzhih...@gmail.com> wrote:
Please take a look at:
[SPARK-13146][SQL] Management API for continuous queries
[SPARK-14555] Second cut of Python API for Structured Streaming
On Mon, May 16, 2016 at 11:46 PM, Todd <bit1...@
Hi,
We have a requirement to do count(distinct) in a processing batch against all
the streaming data (e.g., the last 24 hours' data); that is, when we do
count(distinct), we actually want to compute the distinct count over the last
24 hours' data.
Does structured streaming support this scenario? Thanks!
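For what it's worth, later releases (Spark 2.1+ structured streaming) can
approximate this with a sliding event-time window; a hedged sketch in which
every name (paths, schema, columns) is illustrative:

    import org.apache.spark.sql.functions._

    val events = spark.readStream.format("json")
      .schema(eventSchema)                 // assumed to be defined elsewhere
      .load("/data/events")

    // Sliding 24-hour window, advancing hourly, with an approximate
    // distinct count; exact count(distinct) is not supported on streams.
    val distinctUsers = events
      .withWatermark("eventTime", "1 hour")
      .groupBy(window(col("eventTime"), "24 hours", "1 hour"))
      .agg(approx_count_distinct("userId").as("users"))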
Hi,
Are there code examples about how to use the structured streaming feature?
Thanks.
ByValueAndWindow(Seconds(windowLength),
Seconds(slidingInterval))
HTH
Dr Mich Talebzadeh
LinkedIn
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
On 17 May 2016 at 20:02, Michael Armbrust <mich...@databricks.
Hi,
I am wondering whether structured streaming supports Kafka as a data source. I
briefly went through the source code (mainly the DataSourceRegister trait),
and didn't find Kafka data source things
If
Thanks.
scala> records.groupBy("name").count().write.trigger(ProcessingTime("30
seconds")).option("checkpointLocation",
"file:///home/hadoop/jsoncheckpoint").startStream("file:///home/hadoop/jsonresult")
org.apache.spark.sql.AnalysisException: Aggregations are not supported on
streaming
s queries
// outputMode() is used for continuous queries
assertNotStreaming("mode() can only be called on non-continuous queries")
this.mode = saveMode
this
}
On Wed, May 18, 2016 at 12:25 PM, Todd <bit1...@163.com> wrote:
Thanks Ted.
I didn't try, but I think SaveMode and OuputM
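Going by that source comment, a hedged guess at the intended usage for the
aggregation query above (assuming outputMode() is exposed on the writer in
that build):

    // For continuous queries, outputMode() replaces mode(); an aggregation
    // would need the complete mode rather than the default append. Sketch only.
    records.groupBy("name").count()
      .write
      .outputMode("complete")
      .trigger(ProcessingTime("30 seconds"))
      .option("checkpointLocation", "file:///home/hadoop/jsoncheckpoint")
      .startStream("file:///home/hadoop/jsonresult")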
,
Todd
From: Anselme Vignon [mailto:anselme.vig...@flaminem.com]
Sent: Wednesday, June 18, 2014 12:33 AM
To: user@spark.apache.org
Subject: Re: Unit test failure: Address already in use
Hi,
Could your problem come from the fact that you run your tests in parallel?
If you are running Spark in local mode
-on-reduceByKey-td2462.html
Thanks,
Todd
-Original Message-
From: SK [mailto:skrishna...@gmail.com]
Sent: Tuesday, October 7, 2014 2:12 PM
To: u...@spark.incubator.apache.org
Subject: Re: Shuffle files
- We set ulimit to 50. But I still get the same "too many open files"
warning.
- I tried
points to the Hive
Metastore URI in your cluster:
<value>thrift://HostNameHere:9083</value>
<description>URI for client to contact metastore server</description>
</property>
</configuration>
HTH.
-Todd
On Fri, Feb 6, 2015 at 4:12 AM, ashu ashutosh.triv...@iiitb.org wrote:
Hi,
I have Hive
on this one. Anything I may be missing here?
Thanks for the help, it is much appreciated. I will give Arush suggestion
a try tomorrow.
-Todd
On Tue, Feb 10, 2015 at 7:24 PM, Silvio Fiorito
silvio.fior...@granturing.com wrote:
Todd,
I just tried it in bin/spark-sql shell. I created a folder
for the assistance.
-Todd
On Wed, Feb 11, 2015 at 3:20 PM, Andrew Lee alee...@hotmail.com wrote:
Sorry folks, it is executing Spark jobs instead of Hive jobs. I misread
the logs since there were other activities going on on the cluster.
--
From: alee
.html
On Thu, Feb 12, 2015 at 7:24 AM, Todd Nist tsind...@gmail.com wrote:
I have a question with regards to accessing SchemaRDD’s and Spark SQL
temp tables via the thrift server. It appears that a SchemaRDD when
created is only available in the local namespace / context and are
unavailable
Hi Dhimant,
I believe it will work if you change your spark-shell invocation to pass
--driver-class-path /usr/local/spark/lib/mysql-connector-java-5.1.34-bin.jar
instead of putting it in --jars.
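For example, the full invocation might look like this (path as given above):

    ./bin/spark-shell --driver-class-path /usr/local/spark/lib/mysql-connector-java-5.1.34-bin.jar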
-Todd
On Wed, Feb 18, 2015 at 10:41 PM, Dhimant dhimant84.jays...@gmail.com
wrote:
Found solution from one of the post found
(path '/user/data/json/test/*');
cache table test;
1. Refresh connection.
Then select “New Custom SQL” and issue something like:
select * from test;
You will see your table appear.
HTH.
-Todd
On Thu, Feb 19, 2015 at 5:41 AM, ashu ashutosh.triv...@iiitb.org wrote:
Hi,
I would like
Hi Emre,
Have you tried adjusting these:
.set("spark.akka.frameSize", "500").set("spark.akka.askTimeout",
"30").set("spark.core.connection.ack.wait.timeout", "600")
-Todd
On Fri, Feb 20, 2015 at 8:14 AM, Emre Sevinc emre.sev...@gmail.com wrote:
Hello,
We are building a Spark Streaming application
, but does work for now.
I also have it working by doing a SaveAsTable on the ingested data which
stores the reference into the metastore for access via the thrift server.
Thanks for the help.
-Todd
On Wed, Feb 11, 2015 at 8:41 PM, Silvio Fiorito
silvio.fior...@granturing.com wrote:
Hey Todd,
I
your executors:
spark.executor.extraClassPath=.
HTH.
-Todd
On Tue, Jan 6, 2015 at 10:00 AM, bchazalet bchaza...@companywatch.net
wrote:
It does not look like you're supposed to fiddle with the SparkConf and even
SparkContext in a 'job' (again, I don't know much about jobserver), as
you're
for the assistance.
-Todd
On Wed, Feb 11, 2015 at 2:34 AM, Arush Kharbanda ar...@sigmoidanalytics.com
wrote:
BTW what tableau connector are you using?
On Wed, Feb 11, 2015 at 12:55 PM, Arush Kharbanda
ar...@sigmoidanalytics.com wrote:
I am a little confused here, why do you want to create the tables
something to expose these via hive / metastore other
than creating a table in hive?
3. Does the thriftserver need to be configured to expose these in some
fashion, sort of related to question 2.
TIA for the assistance.
-Todd
like
this:
create temporary table test
using org.apache.spark.sql.json
options (path '/data/json/*');
cache table test;
I am using Spark 1.2.1. If it is not available now, will it be in 1.3.x? Or is
the only way to achieve this to store into the metastore, and does that imply
Hive?
-Todd
HTH.
-Todd
On Thu, Feb 12, 2015 at 1:16 AM, kundan kumar iitr.kun...@gmail.com wrote:
I want to create/access Hive tables from Spark.
I have placed hive-site.xml inside the spark/conf directory. Even
though it creates a local metastore in the directory where I run the Spark
shell
/resources/kv1.txt' INTO TABLE src")
// Queries are expressed in HiveQL
sqlContext.sql("FROM src SELECT key, value").collect().foreach(println)
Or did you have something else in mind?
-Todd
On Tue, Feb 10, 2015 at 6:35 PM, Todd Nist tsind...@gmail.com wrote:
Arush,
Thank you will take a look
Arush,
Thank you, I will take a look at that approach in the morning. I sort of
figured the answer to #1 was no and that I would need to do 2 and 3; thanks
for clarifying it for me.
-Todd
On Tue, Feb 10, 2015 at 5:24 PM, Arush Kharbanda ar...@sigmoidanalytics.com
wrote:
1. Can the connector
to the requested database.
Thanks again for the suggestion; I will work with it a bit more
tomorrow.
-Todd
On Tue, Feb 10, 2015 at 5:48 PM, Silvio Fiorito
silvio.fior...@granturing.com wrote:
Hi Todd,
What you could do is run some SparkSQL commands immediately after the
Thrift
to be working fine with HDP as well and steps 2a
and 2b are not required.
HTH
-Todd
On Mon, Mar 16, 2015 at 10:13 AM, Bharath Ravi Kumar reachb...@gmail.com
wrote:
Hi,
Trying to run Spark (1.2.1 built for HDP 2.2) against a YARN cluster results
in the AM failing to start with the following
no luck running purpose-built 1.3 against HDP 2.2 after following
all the instructions. Has anyone else faced this issue?
On Mon, Mar 16, 2015 at 8:53 PM, Bharath Ravi Kumar reachb...@gmail.com
wrote:
Hi Todd,
Thanks for the help. I'll try again after building a distribution with
the 1.3 sources
version set to:
sparkVersion = 1.2.1
Other than possibly missing an exclude that is bringing in an older version of
Spark from somewhere, I do see that I am referencing
"org.apache.hadoop" % "hadoop-client" % "2.6.0" % "provided", but I don't
think that is the issue.
Any other thoughts?
-Todd
On Wed, Mar
ElasticSearch 1.4.4, spark-1.2.1-bin-hadoop2.4, and the
elasticsearch-hadoop:
"org.elasticsearch" % "elasticsearch-hadoop" % "2.1.0.BUILD-SNAPSHOT"
Any insight on what I am doing wrong?
TIA for the assistance.
-Todd
/com/cloudera/spark/hbase/example/JavaHBaseMapGetPutExample.java
https://github.com/tmalaska/SparkOnHBase/blob/master/src/main/scala/com/cloudera/spark/hbase/JavaHBaseContext.scala
On Thu, Mar 12, 2015 at 8:34 AM, Udbhav Agarwal udbhav.agar...@syncoms.com
wrote:
Thanks Todd,
But this link
with flushing remote transports.
15/03/06 12:35:40 INFO remote.RemoteActorRefProvider$RemotingTerminator:
Remoting shut down.
Thanks again for the help.
-Todd
On Thu, Mar 5, 2015 at 7:06 PM, Zhan Zhang zzh...@hortonworks.com wrote:
In addition, you may need the following patch if it is not in 1.2.1
=2.2.0.0-2041
spark.yarn.am.extraJavaOptions -Dhdp.version=2.2.0.0-2041
Without the patch, ${hdp.version} was not being substituted.
Thanks for pointing me to that patch, appreciate it.
-Todd
On Fri, Mar 6, 2015 at 1:12 PM, Zhan Zhang zzh...@hortonworks.com wrote:
Hi Todd,
Looks like
, at 11:40 AM, Zhan Zhang zzh...@hortonworks.com wrote:
You are using 1.2.1, right? If so, please add a java-opts file in the conf
directory and give it a try.
[root@c6401 conf]# more java-opts
-Dhdp.version=2.2.2.0-2041
Thanks.
Zhan Zhang
On Mar 6, 2015, at 11:35 AM, Todd Nist tsind
There is the PR https://github.com/apache/spark/pull/2077 for doing this.
On Fri, Mar 13, 2015 at 6:42 AM, t1ny wbr...@gmail.com wrote:
Hi all,
We are looking for a tool that would let us visualize the DAG generated by a
Spark application as a simple graph.
This graph would represent the
Have you considered using the spark-hbase-connector for this:
https://github.com/nerdammer/spark-hbase-connector
On Thu, Mar 12, 2015 at 5:19 AM, Udbhav Agarwal udbhav.agar...@syncoms.com
wrote:
Thanks Akhil.
Additionally, if we want to do a SQL query, we need to create a JavaPairRDD, then
Perhaps this project, https://github.com/calrissian/spark-jetty-server,
could help with your requirements.
On Tue, Mar 24, 2015 at 7:12 AM, Jeffrey Jedele jeffrey.jed...@gmail.com
wrote:
I don't think there's a general approach to that - the use cases are just
too different. If you really need
be collected to the master so further transformations can be applied. As
DataFrame has “richer optimizations under the hood” and is the convention for
an R/Julia user, I really hope this error can be tackled, and that DataFrame
is robust enough to depend on.
Thanks in advance!
REGARDS,
Todd
--
View
into where I am off? I'm sure it is probably something small; I'm
just not seeing it yet.
TIA for the assistance.
-Todd
looking for.
-Todd
On Tue, Mar 31, 2015 at 5:06 PM, Burak Yavuz brk...@gmail.com wrote:
Hi,
If I recall correctly, I've read about people integrating REST calls into Spark
Streaming jobs on the user list. I can't imagine any reason why it
shouldn't be possible.
Best,
Burak
On Tue, Mar 31, 2015
)
Is this the right approach? Is this syntax available in 1.2.1:
SELECT
v1.name, v2.city, v2.state
FROM people
LATERAL VIEW json_tuple(people.jsonObject, 'name', 'address') v1
as name, address
LATERAL VIEW json_tuple(v1.address, 'city', 'state') v2
as city, state;
-Todd
On Tue, Mar 31, 2015