Hi all,
I started Hive Thrift Server with command,
./sbin/start-thriftserver.sh --master yarn --hiveconf hive.server2.thrift.port=10003
The Thrift server started at the particular node without any error.
When I do the same but point to a different node to start the server,
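One quick way to check whether the Thrift server actually came up on the configured port is a plain TCP probe. This is a generic sketch using only Python's standard library; the host and port below are placeholders for your own node and configured port:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("node-hostname", 10003) after starting the Thrift server
```

If the probe returns False shortly after startup, check the thriftserver logs for bind errors on that node.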
Check the "inputFile" variable name.
On Fri, Jul 15, 2016 at 12:12 PM, RK Spark wrote:
> I am using Spark version 1.5.1, and I am getting errors in the first
> program of Spark, i.e., word count. Please help me to solve this.
>
> scala> val inputfile =
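For reference, the logic of that first word-count program (split each line into words, then count per word) can be sketched in plain Python. This mirrors the usual flatMap/map/reduceByKey example rather than the poster's exact code, which is truncated above:

```python
from collections import Counter

def word_count(lines):
    """Count words across lines: the flatMap(split) -> reduceByKey(+) shape."""
    counts = Counter()
    for line in lines:
        # flatMap step: each line contributes zero or more words
        counts.update(line.split())
    return dict(counts)

# word_count(["to be or", "not to be"]) -> {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```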
Hi Puneet,
Have you tried appending
--jars $SPARK_HOME/lib/spark-examples-*.jar
to the execution command?
Ram
On Thu, Jul 7, 2016 at 5:19 PM, Puneet Tripathi <
puneet.tripa...@dunnhumby.com> wrote:
> Guys, Please can anyone help on the issue below?
>
>
>
> Puneet
>
>
>
> From: Puneet
, Divya Gehlot <divya.htco...@gmail.com>
wrote:
> Can you try var df_join = df1.join(df2, df1("Id") === df2("Id"),
> "fullouter").drop(df1("Id"))
> On May 18, 2016 2:16 PM, "ram kumar" <ramkumarro...@gmail.
> +----+----+----+----+
> |  id|   A|  id|   B|
> +----+----+----+----+
> |   1|   0|null|null|
> |   2|   0|   2|   0|
> |null|null|   3|   0|
> +----+----+----+----+
>
>
> df1: org.apache.spark.sql.DataFrame = [id: int, A: int]
>
> df2: org.apache.spark.sql.DataFrame =
0)).toDF("id", "A")
> val df2 = Seq((1, 0), (2, 0), (3, 0)).toDF("id", "B")
> df1.join(df2, df1("id") === df2("id"), "outer").show
>
> // maropu
>
>
> On Wed, May 18, 2016 at 3:29 PM, ram kumar <ramkumarro..
> On 17 May 2016 at 21:52, Bijay Kumar Pathak <bkpat...@mtu.edu> wrote:
>
>> Hi,
>>
>> Try this one:
>>
>>
>> df_join = df1.join(df2, 'Id', "fullouter")
>>
>> Thanks,
>> Bijay
>>
>>
>> On Tue, May 17, 2016 at
> On Tue, May 17, 2016 at 9:39 AM, ram kumar <ramkumarro...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I tried to join two dat
Hi,
I tried to join two dataframe
df_join = df1.join(df2, df1("Id") === df2("Id"), "fullouter")
df_join.registerTempTable("join_test")
When querying "Id" from "join_test":
0: jdbc:hive2://> select Id from join_test;
Error: org.apache.spark.sql.AnalysisException: Reference 'Id' is ambiguous
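The AnalysisException arises because both inputs contribute a column named Id, so the joined result carries two Id columns and an unqualified `select Id` cannot pick one; dropping one side's Id after the fullouter join (as suggested earlier in the thread) removes the duplicate. Here is a toy sketch of that idea in plain Python, not Spark's implementation, and it assumes keys are unique within each side:

```python
def full_outer_join(left, right, key):
    """Full outer join of two lists of dicts on `key`, emitting a single
    key column so the result has no ambiguous duplicate (analogous to
    join(..., "fullouter").drop(df1(key)))."""
    left_by_key = {row[key]: row for row in left}
    right_by_key = {row[key]: row for row in right}
    out = []
    for k in sorted(set(left_by_key) | set(right_by_key)):
        row = {key: k}
        # copy non-key columns from whichever side has this key
        for col, v in left_by_key.get(k, {}).items():
            if col != key:
                row[col] = v
        for col, v in right_by_key.get(k, {}).items():
            if col != key:
                row[col] = v
        out.append(row)
    return out

df1 = [{"id": 1, "A": 0}, {"id": 2, "A": 0}]
df2 = [{"id": 2, "B": 0}, {"id": 3, "B": 0}]
# full_outer_join(df1, df2, "id") yields rows for ids 1, 2, 3;
# only id 2 carries both A and B, matching the fullouter behaviour above
```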
Hi,
I wrote a spark job which registers a temp table
and when I expose it via beeline (JDBC client)
$ ./bin/beeline
beeline> !connect jdbc:hive2://IP:10003 -n ram -p
0: jdbc:hive2://IP> show
Hi,
I started a HiveContext as
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
and I want to stop this SQL context.
Thanks
Hi,
In spark-shell (Scala), we import
import org.apache.spark.sql.hive.thriftserver._
and start the Hive Thrift server programmatically for a particular hive context with
HiveThriftServer2.startWithContext(hiveContext)
to expose the registered temp tables for that particular session.
We used pyspark for
Hi,
In spark-shell, we start hive thrift server by importing,
import org.apache.spark.sql.hive.thriftserver._
Is there a package for importing it from pyspark?
Thanks
I am facing the same issue.
Can anyone help me with this?
Thanks
On Mon, Dec 7, 2015 at 9:14 AM, Shige Song wrote:
> Hard to tell.
>
> On Mon, Dec 7, 2015 at 11:35 AM, zhangjp <592426...@qq.com> wrote:
>
>> Hi all,
>>
>> I'm using Spark prebuilt version 1.5.2+hadoop2.6 and
> <https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw>
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 5 April 2016 at 05:52, ram kumar <ramkumarro...@gmail.com> wrote:
>
>> Hi,
>>
>> I started a hi
Hi,
I started thrift server
cd $SPARK_HOME
./sbin/start-thriftserver.sh
Then, jdbc client
$ ./bin/beeline
Beeline version 1.5.2 by Apache Hive
beeline> !connect jdbc:hive2://ip:1
show tables;
+------------+--------------+--+
| tableName  | isTemporary  |
+------------+--------------+--+
|
Hi,
I get the following error when running a job as pyspark,
{{{
An error occurred while calling
z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0
in stage 0.0 failed 4 times, most recent failure: Lost task 0.3
Cheers
>
> On Fri, Mar 11, 2016 at 5:02 AM, ram kumar <ramkumarro...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I registered a dataframe as a table using registerTempTable
>> and I didn't close the Spark context.
>>
>> Will the table be available for a longer time?
>>
>> Thanks
>>
>
>
Hi,
I registered a dataframe as a table using registerTempTable
and I didn't close the Spark context.
Will the table be available for a longer time?
Thanks
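As a rough mental model: a table registered with registerTempTable is scoped to the owning context, so it stays visible as long as that context is alive and disappears when the context stops. Here is a toy Python model of that lifecycle, an illustration only and not Spark code:

```python
class FakeContext:
    """Toy model of a context-scoped temp-table registry (not Spark):
    tables persist until the context is stopped, and no longer."""

    def __init__(self):
        self._temp_tables = {}
        self.stopped = False

    def register_temp_table(self, name, df):
        if self.stopped:
            raise RuntimeError("context already stopped")
        self._temp_tables[name] = df

    def table(self, name):
        if self.stopped:
            raise RuntimeError("context already stopped")
        return self._temp_tables[name]

    def stop(self):
        # stopping the context tears down everything registered in it
        self._temp_tables.clear()
        self.stopped = True

ctx = FakeContext()
ctx.register_temp_table("join_test", [("row1",)])
# the table is reachable while ctx lives; gone once ctx.stop() runs
```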
On using yarn-cluster, it works fine.
On Mon, Jun 29, 2015 at 12:07 PM, ram kumar ramkumarro...@gmail.com wrote:
SPARK_CLASSPATH=$CLASSPATH:/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/*
in spark-env.sh
I think I am facing the same issue:
https://issues.apache.org/jira/browse/SPARK-6203
On Mon
SPARK_CLASSPATH=$CLASSPATH:/usr/hdp/2.2.0.0-2041/hadoop-mapreduce/*
in spark-env.sh
I think I am facing the same issue:
https://issues.apache.org/jira/browse/SPARK-6203
On Mon, Jun 29, 2015 at 11:38 AM, ram kumar ramkumarro...@gmail.com wrote:
I am using Spark 1.2.0.2.2.0.0-82 (git revision
Hi,
-
JavaStreamingContext ssc = new JavaStreamingContext(conf, new
Duration(1));
ssc.checkpoint(checkPointDir);
JavaStreamingContextFactory factory = new JavaStreamingContextFactory() {
  public JavaStreamingContext create() {
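The snippet above is the usual checkpoint-recovery setup: the factory builds a fresh streaming context, and a get-or-create call either restores a context from the checkpoint directory or invokes the factory. A minimal pure-Python sketch of that pattern follows; it is an analogy with a hypothetical marker file, not Spark's implementation:

```python
import os

def get_or_create(checkpoint_dir, factory):
    """Toy get-or-create: restore state from the checkpoint directory if
    present, otherwise build a fresh context via `factory` and record it."""
    marker = os.path.join(checkpoint_dir, "context.ckpt")
    if os.path.exists(marker):
        # recovery path: a previous run left a checkpoint behind
        with open(marker) as f:
            return {"restored": True, "state": f.read()}
    # first run: construct the context and persist its state
    ctx = factory()
    os.makedirs(checkpoint_dir, exist_ok=True)
    with open(marker, "w") as f:
        f.write(ctx["state"])
    return ctx
```

On a first run the factory fires; on a restart with the same directory the checkpointed state wins and the factory is ignored, which is why the factory must contain all the setup logic.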