Done. Welcome to the Hive wiki team, Shannon!
-- Lefty
On Tue, Aug 4, 2015 at 1:25 AM, Shannon Ladymon slady...@gmail.com wrote:
Hi!
I'd like to request write access to the Hive wiki. I'd like to work on
documentation.
My Confluence user name is sladymon.
Thanks!
-Shannon
I posted a question a few days ago about how to capture current-user
information in a UDF, and the answer was basically to
use SessionState.getUserFromAuthenticator().
I am wondering if there is a similar mechanism to obtain query metadata
such as the SQL statement or the current
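For reference, a minimal sketch of a UDF built around that earlier answer, assuming Hive's hive-exec artifact is on the classpath (the package and class names here are illustrative, not from the thread):

```java
package com.example.udf; // illustrative package name

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.hive.ql.session.SessionState;
import org.apache.hadoop.io.Text;

// Sketch: returns the authenticated user for the current Hive session.
public class CurrentUserUDF extends UDF {
    public Text evaluate() {
        String user = SessionState.getUserFromAuthenticator();
        return user == null ? null : new Text(user);
    }
}
```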
Looks like the user running HiveServer2 didn't have permission (on the local
file system) to write to the directory specified by
HiveConf.ConfVars.LOCALSCRATCHDIR. The default scratch directory is
"${system:java.io.tmpdir}" + File.separator + "${system:user.name}", so it's
unusual to see this
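If that is the cause, either grant the HiveServer2 OS user write access to that directory, or point the scratch directory somewhere writable in hive-site.xml; the path below is an illustrative example, not a recommendation:

```xml
<property>
  <name>hive.exec.local.scratchdir</name>
  <!-- Must be writable by the OS user running HiveServer2; path is an example. -->
  <value>/var/tmp/hive-scratch</value>
</property>
```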
I want to create a Hive table using Java code. My code is:
package com.inndata.services;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.Statement;
import java.sql.DriverManager;
public class HiveCreateTable {
private static String driverName =
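The snippet above cuts off at the driver name. As a point of comparison, here is a minimal sketch of the same idea, assuming a HiveServer2 listening at localhost:10000 and the hive-jdbc driver on the classpath (the URL, credentials, table name, and schema are all illustrative):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveCreateTableSketch {
    // Standard HiveServer2 JDBC driver class; requires hive-jdbc on the classpath.
    private static final String DRIVER = "org.apache.hive.jdbc.HiveDriver";

    public static void main(String[] args) throws Exception {
        Class.forName(DRIVER); // register the driver
        // Connection URL and DDL below are examples; adjust for your cluster.
        try (Connection con = DriverManager.getConnection(
                 "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = con.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS demo (id INT, name STRING)");
        }
    }
}
```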
Hi, please unsubscribe me from this forum.
Regards
Ajeet Ojha
Good question. I can't find it in any Hive releases.
There's hive.auto.convert.join.noconditionaltask (starting in 0.11.0) but
not hive.auto.convert.sortmerge.join.noconditionaltask.
Several JIRA issues mention it, including the 0.13.0 release note for
HIVE-6098.
Ok, the next step is to look at the logs from your Hive metastore server
and see exactly what's happening. The error you're seeing is from the
client. On your metastore server there should be logs with the same
timestamp giving details on why the transaction operation failed.
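As a sketch of that kind of check, with an invented log path and invented log lines standing in for the real metastore log:

```shell
# Sample lines standing in for a real hivemetastore.log (invented for illustration).
cat > /tmp/hivemetastore.log <<'EOF'
2015-08-04 10:15:22,101 ERROR metastore.RetryingHMSHandler: TxnHandler lock request failed
2015-08-04 10:17:05,440 INFO  metastore.HiveMetaStore: 1: get_table : db=default
EOF
# Pull the server-side entries matching the timestamp of the client-side error.
grep '2015-08-04 10:15' /tmp/hivemetastore.log
```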
Alan.
Sarath
Hi all,
I've had some questions from users about setting
`hive.auto.convert.sortmerge.join.noconditionaltask`. I see, in some
documentation from users and vendors, that setting this parameter is
recommended. But in neither Hive 0.12 nor 0.14 can I find where it is
actually defined in HiveConf.
Yes, the explain plan definitely only has Move Operators (no Copy
Operators). Given that, this definitely looks like a Hive bug. Does
anyone know of a corresponding HIVE JIRA ticket or a workaround for the
issue? Thanks!
Stage: Stage-3
  Move Operator
    files:
      hdfs
Moving data to: s3n://access_key:secret_key@my_bucket/a/b/2015-07-30/.hive-staging_hive_2015-08-04_18-38-47_649_1476668515119011800-1/-ext-1
Failed with exception Wrong FS: s3n://access_key:secret_key@my_bucket/a/b/2015-07-30/.hive-staging_hiv
CREATE EXTERNAL TABLE pick (
  ud STRING,
  pi ARRAY<STRUCT<...>>,
  count ARRAY<STRUCT<...>>
)
ROW FORMAT SERDE 'com.proofpoint.hive.serde.JsonSerde'
LOCATION 's3n://';
Hello,
I have a query that worked fine previously. I am testing on Hive 1.1
now and it is failing. The AWS access and secret keys have permission to
read and write data to this directory, and the directory exists.
hive -e insert overwrite directory