Well, I figured out a way to use explode. But it returns two rows if there are two
matches in the nested array of objects.
select id from department LATERAL VIEW explode(employee) dummy_table as emp
where emp.name = 'employee0'
I was looking for an operator that loops through the array and returns true.
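For what it's worth, the duplication is easy to reason about outside Hive. Below is a minimal plain-Python sketch (not Spark; sample data is made up) contrasting the two semantics: explode emits one output row per array element, while an existence test emits at most one row per department. In HiveQL, adding DISTINCT to the lateral-view query should collapse the duplicates the same way.

```python
# Made-up sample data mirroring the department/employee nesting.
departments = [
    {"id": 1, "employee": [{"name": "employee0"}, {"name": "employee0"}]},
    {"id": 2, "employee": [{"name": "employee1"}]},
]

# explode-then-filter semantics: one row per matching element,
# so department 1 appears twice.
exploded = [d["id"] for d in departments
            for emp in d["employee"] if emp["name"] == "employee0"]
assert exploded == [1, 1]

# existence-test semantics: at most one row per department.
matched = [d["id"] for d in departments
           if any(emp["name"] == "employee0" for emp in d["employee"])]
assert matched == [1]
```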
Thanks for your response, Yong! The array syntax works fine, but I am not sure how
to use explode. Should I use it as follows?
select id from department where explode(employee).name = 'employee0'
This query gives me java.lang.UnsupportedOperationException. I am using
HiveContext.
From:
Thanks all for your replies. I was evaluating which one fits best for me. I
picked epahomov/docker-spark from the Docker registry and it suffices for my needs.
Thanks
Tridib
Date: Fri, 22 May 2015 14:15:42 +0530
Subject: Re: Official Docker container for Spark
From: riteshoneinamill...@gmail.com
To:
Does the HBase release you're using have the following fix?
HBASE-8 non environment variable solution for IllegalAccessError
Cheers
On Tue, Apr 28, 2015 at 10:47 PM, Tridib Samanta tridib.sama...@live.com
wrote:
I turned on TRACE logging and I see a lot of the following exception.
That way the client will only hang for a few mins max and
return a helpful error message:
hbaseConf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2)
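The point of capping hbase.client.retries.number is to turn an indefinite hang into a bounded, fast failure. A hypothetical plain-Python sketch of that pattern (connect/flaky here are stand-ins, not HBase APIs):

```python
import time

def with_retries(op, retries=2, backoff_s=0.01):
    # Try op() at most `retries` times, then fail loudly instead of hanging.
    last = None
    for attempt in range(retries):
        try:
            return op()
        except ConnectionError as e:
            last = e
            time.sleep(backoff_s * (attempt + 1))
    raise RuntimeError(f"gave up after {retries} attempts: {last}")

calls = []
def flaky():
    # Stand-in for a connection attempt that never succeeds.
    calls.append(1)
    raise ConnectionError("hbase:meta not found")

try:
    with_retries(flaky, retries=2)
except RuntimeError as e:
    assert "gave up after 2 attempts" in str(e)
assert len(calls) == 2  # bounded: exactly 2 attempts, no infinite hang
```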
--
Dean Chen
On Tue, Apr 28, 2015 at 10:18 PM, Tridib Samanta tridib.sama...@live.com
wrote:
Nope, my hbase is unsecured.
From: d...@ocirs.com
Date: Tue, 28 Apr 2015
I am using Spark 1.2.0 and HBase 0.98.1-cdh5.1.0.
Here is the jstack trace. Complete stack trace attached.
"Executor task launch worker-1" #58 daemon prio=5 os_prio=0
tid=0x7fd3d0445000 nid=0x488 waiting on condition [0x7fd4507d9000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
To: tridib.sama...@live.com
CC: user@spark.apache.org
How did you distribute hbase-site.xml to the nodes ?
Looks like HConnectionManager couldn't find the hbase:meta server.
Cheers
On Tue, Apr 28, 2015 at 9:19 PM, Tridib Samanta tridib.sama...@live.com wrote:
I am using Spark 1.2.0 and HBase
Give an alias to the count in the SELECT clause and use that alias in the
ORDER BY clause.
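The suggestion above can be checked with any SQL engine; a minimal runnable illustration using SQLite rather than Spark SQL (table and data invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claim (patient TEXT);
INSERT INTO claim VALUES ('a'), ('a'), ('b'), ('b'), ('b'), ('c');
""")

# Alias the aggregate in SELECT, then reference the alias in ORDER BY.
rows = conn.execute("""
SELECT patient, COUNT(*) AS cnt
FROM claim
GROUP BY patient
ORDER BY cnt DESC
""").fetchall()
assert rows == [('b', 3), ('a', 2), ('c', 1)]
```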
On Wed, Feb 25, 2015 at 11:17 PM, Tridib Samanta tridib.sama...@live.com
wrote:
Actually, I just realized I am using 1.2.0.
Thanks
Tridib
Date: Thu, 26 Feb 2015 12:37:06 +0530
Subject: Re: group by order by fails
From: ak...@sigmoidanalytics.com
To: tridib.sama...@live.com
CC: user@spark.apache.org
Which version of Spark are you using? It seems there was a similar JIRA issue.
I am trying to group by on a calculated field. Is it supported in Spark SQL? I
am running it on a nested JSON structure.
Query: SELECT YEAR(c.Patient.DOB), SUM(c.ClaimPay.TotalPayAmnt) FROM claim c
GROUP BY YEAR(c.Patient.DOB)
Spark version: spark-1.2.0-SNAPSHOT with Hive and Hadoop 2.4.
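Grouping by a calculated field of this shape works in standard SQL; the same query pattern run on SQLite for a quick check (strftime('%Y', ...) stands in for Hive's YEAR(), and the schema/data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE claim (dob TEXT, total_pay REAL);
INSERT INTO claim VALUES
  ('1980-03-01', 10.0), ('1980-11-20', 5.0), ('1991-07-04', 7.5);
""")

# Group by the calculated year and sum the payments, as in the
# YEAR(c.Patient.DOB) query above.
rows = conn.execute("""
SELECT strftime('%Y', dob) AS yr, SUM(total_pay)
FROM claim
GROUP BY strftime('%Y', dob)
ORDER BY yr
""").fetchall()
assert rows == [('1980', 15.0), ('1991', 7.5)]
```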
I am getting an exception in the spark shell at the following line:
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
error: bad symbolic reference. A signature in HiveContext.class refers to term hive
in package org.apache.hadoop which is not available.
It may be completely missing from
I am using Spark 1.1.0.
I built it using:
./make-distribution.sh -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive
-DskipTests
My ultimate goal is to execute a query on a Parquet file with a nested structure
and cast a date string to a Date. This is required to calculate the age of a person.