Try restarting HiveServer2 with the setting
"hive.server2.enable.doAs=true" in hive-site.xml. This will make
HiveServer2 perform actions as the connecting user.
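For reference, a minimal hive-site.xml entry for this might look like the
following (a sketch; only the property name and value come from the
suggestion above):

```xml
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
  <description>Run HiveServer2 operations as the connecting user
  rather than as the HiveServer2 service user.</description>
</property>
```

Remember to restart HiveServer2 afterwards so the change takes effect.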
On Tue, Aug 5, 2014 at 5:16 PM, sai chaitanya tirumerla wrote:
> Hi,
>
> I am trying to connect using my username and password fro
This is likely because the wildcard in

export HIVE_AUX_JARS_PATH=/usr/lib/hive-hcatalog/share/hcatalog/hive-hcatalog-core-*.jar

in /etc/hive/conf/hive-env.sh is not being resolved. You should be able to
fix this by modifying /etc/hive/conf/hive-env.sh to set:

export HIVE_AUX_JARS_PATH=/usr/lib/hive-hcatalog/
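As a quick illustration of why the glob is the problem (paths below are
hypothetical): the shell stores a wildcard in a variable assignment
literally, so whatever later reads HIVE_AUX_JARS_PATH has to resolve the
'*' itself and may fail to:

```shell
# Variable assignments do not undergo pathname expansion, so the
# literal '*' is stored in the variable rather than a real jar path.
export DEMO_AUX_JARS_PATH=/tmp/no_such_dir/hive-hcatalog-core-*.jar
echo "$DEMO_AUX_JARS_PATH"
# The echoed value still contains the unexpanded '*'.
```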
I am wondering if the presence of the _SUCCESS file is causing the empty
result. Can you try setting the property
"mapreduce.fileoutputcommitter.marksuccessfuljobs" to false to disable the
generation of the _SUCCESS file? It's a long shot, but it might not hurt
while debugging this.
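If it helps, the property can be toggled for a single session from the
Hive CLI before re-running the query (property name as above; whether it
takes effect per-job depends on your Hadoop version):

```sql
SET mapreduce.fileoutputcommitter.marksuccessfuljobs=false;
```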
On Mon, Jul 21, 2014 at 4:47 AM, Ti
URL: jdbc:hive2://hiveservice:11000
>
>
>
>
> On Thu, Jul 10, 2014 at 2:51 PM, D K wrote:
>
>> Oh, somewhere in the email thread I thought http transport mode was being
>> used. If that's not the case then you should be able to login using:
>> hive --serv
hiveclient.48940 > hiveservice.11000: P 10:35(25) ack 1
> win 46
> E..M].@.@.4.
> GJs
> GJ..,*..I
> I...e.anonymous.anonymous
> 12:44:38.164592 IP hiveservice.11000 > hiveclient.48940: P 1:6(5) ack 35
> win 46
> E..9..@.@..B
> GJ.
>
>>
>>
>> When I run the above command I am getting the error below : -
>>
>>
>>
>> FAILED: Execution Error, return code 1 from
>> org.apache.hadoop.hive.ql.exec.DDLTask. str_date not found in table's
>> partition spec: {pcol1=str_hour, pcol2
The partition column names in the RENAME must match the columns in the
table's partition spec. Here is an example:
ALTER TABLE alter_rename_partition PARTITION (pCol1='old_part1',
pcol2='old_part2') RENAME TO PARTITION (pCol1='new_part1',
pcol2='new_part2');
On Wed, Jul 9, 2014 at 9:20 AM, Manish Kothari
wrote:
> Hi,
>
>
>
> I have a table name siplogs_partitioned which is partitioned
Under your HIVE_HOME/conf create a file hive-site.xml and override the
following property:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/new_db_location/metastore_db;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

Substitute new_db_location with the new location you
Did you start the Hive Metastore? You can start that by running
hive --service metastore
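Once the metastore service is up, Hive clients find it through
hive.metastore.uris in hive-site.xml. A sketch (hostname and port are
assumptions; 9083 is the conventional default metastore port):

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
  <description>Thrift URI of the remote Hive metastore.</description>
</property>
```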
On Tue, Jul 8, 2014 at 5:27 AM, Sarath Chandra <
sarathchandra.jos...@algofusiontech.com> wrote:
> Thanks Santhosh.
> So before going to launch hive shell, we need to start hive server is what
> I understan
How about trying this?

hive --service beeline -u "jdbc:hive2://hiveservice:10001/default?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice"

Note the quotes around the URL, so the shell does not treat the semicolon
as a command separator.
In your previous response I see that you have
"hive.server2.thrift.http.port=10001"
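On the server side, HTTP transport is typically configured in
HiveServer2's hive-site.xml along these lines (the port and path values
come from this thread; the exact property set for your Hive version is an
assumption):

```xml
<property>
  <name>hive.server2.transport.mode</name>
  <value>http</value>
</property>
<property>
  <name>hive.server2.thrift.http.port</name>
  <value>10001</value>
</property>
<property>
  <name>hive.server2.thrift.http.path</name>
  <value>cliservice</value>
</property>
```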
On Thu, Jul 3, 2014 at 5:15 AM, Hang Chan wrote:
>
What versions of Pig and Hive are you using? Boolean support was added in
Pig v0.10 and in the HCatStorer in Hive v0.12.
On Wed, Jul 2, 2014 at 7:07 AM, Carlotta Hicks
wrote:
> Yes. Here is a row of data:
>
>
>
>
> cust_1,M,56,D,Hopatcong,NJ,7843,15-May-74,15-May-77,88.00,43,688.00,458.00,N
>
Probably you meant from_unixtime(timestamp in bigint, "dd-MMM-"). "dd"
vs "DD" does make a difference in the output.
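In Java's SimpleDateFormat, which backs Hive's from_unixtime, "dd" is the
day of the month while "DD" is the day of the year. GNU date can
illustrate the same distinction with its %d and %j conversions (a sketch
with an arbitrary sample timestamp, assuming GNU coreutils date):

```shell
# 1403730000 is 2014-06-25 21:00:00 UTC.
date -u -d @1403730000 +%d   # day of month, analogous to "dd"
date -u -d @1403730000 +%j   # day of year,  analogous to "DD"
```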
-Deepesh
On Wed, Jun 25, 2014 at 2:30 PM, Matouk IFTISSEN wrote:
> sorry use this : from_unixtime(field_date,'DD-MMM-')
>
>
> 2014-06-25 23:27 GMT+02:00 Matouk IFTISSEN
Can you change the storage handler classname in your query from
"org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler" to
"org.apache.hcatalog.hbase.HBaseHCatStorageHandler" and try?
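For context, that classname appears in the table's STORED BY clause. A
hypothetical sketch (table name, columns, and TBLPROPERTIES are
illustrative only, not from the original query):

```sql
CREATE TABLE hbase_hcat_demo (key STRING, value STRING)
STORED BY 'org.apache.hcatalog.hbase.HBaseHCatStorageHandler'
TBLPROPERTIES ('hbase.table.name' = 'hbase_hcat_demo');
```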
-Deepesh
On Tue, Jun 24, 2014 at 12:40 PM, Carlotta Hicks
wrote:
> I am submitting the following with HCatal
If the MR job is failing, can you try the following on the Hive CLI before
running the query?

add jar $HBASE_HOME/lib/hbase-client--hadoop2.jar;
add jar $HBASE_HOME/lib/hbase-protocol--hadoop2.jar;
add jar $HBASE_HOME/lib/hbase-server--hadoop2.jar;
add jar $HBASE_HOME/lib/htrace-core-2.01.jar;
replace