HS2: Permission denied for my own table?

2019-04-16 Thread Kaidi Zhao
Hello!

Did I miss anything here, or is it a known issue? Hive 1.2.1, Hadoop 2.7.x,
Kerberos, impersonation.

Using the hive client, create a hive db and hive table. I can select from
this table correctly.
In hdfs, change the table folder's permission to 711. In the hive client, I
can still select from the table.
However, when using the beeline client (which talks to HS2, I believe), it
complains that it can't read the table folder in hdfs, something like:

Error: Error while compiling statement: FAILED: SemanticException Unable to
fetch table fact_app_logs. java.security.AccessControlException: Permission
denied: user=hive, access=READ,
inode="/data/mydb.db/my_table":myuser:mygroup:drwxr-x--x
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:307)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:220)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1736)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPathAccess(FSDirectory.java:1710)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAccess(FSNamesystem.java:8220)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkAccess(NameNodeRpcServer.java:1932)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.checkAccess(ClientNamenodeProtocolServerSideTranslatorPB.java:1455)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2218)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2214)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1760)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2212)
(state=42000,code=4)

Note, from the log, it tries to use the user "hive" (instead of my own
user "myuser") to read the table's folder (the folder is only readable by
its owner, myuser).
Again, using the hive client I can read the table, but using beeline I can't.
If I change the folder's permission to 755, then it works.

Why does beeline / HS2 need to use "hive" to read the table's folder?
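For reference, the behaviour described above is what HS2 does when impersonation is off: it accesses HDFS as the "hive" service user. A minimal sketch of the standard settings that control this (property names are the stock Hive/Hadoop ones; host and group values below are placeholders, not from this thread):

```xml
<!-- hive-site.xml: run queries as the connected end user, not as "hive" -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>true</value>
</property>

<!-- core-site.xml on the NameNode: allow the "hive" service user
     to impersonate end users (placeholder values) -->
<property>
  <name>hadoop.proxyuser.hive.hosts</name>
  <value>hs2-host.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.hive.groups</name>
  <value>*</value>
</property>
```

With doAs enabled and the proxyuser rules in place, HS2 should read the table folder as "myuser" rather than "hive", matching the Hive CLI behaviour.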

Thanks in advance.

Kaidi


RE: Hive/ODBC, Tableau Server, and Kerberos auth

2019-04-16 Thread Shawn Weeks
In my company the Windows servers aren't part of the same domain as the Hadoop
servers, so we've been using Apache Knox to enable username/password auth to a
Kerberos-enabled Hive instance. This has been tested with the Hortonworks HDP
2.6.5 distribution of Hive and Tableau.
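A connection-string sketch of that setup, with a hypothetical Knox host and topology path (Knox exposes HS2 over HTTP transport and verifies the username/password itself, e.g. against LDAP, before forwarding with its own Kerberos credentials):

```shell
# Hypothetical gateway host and topology name; adjust to your Knox deployment.
KNOX_URL='jdbc:hive2://knox.example.com:8443/;ssl=true;transportMode=http;httpPath=gateway/default/hive'

# Username/password are checked by Knox, not by Kerberos, so the client
# needs no ticket:
#   beeline -u "$KNOX_URL" -n myuser -p mypassword
echo "$KNOX_URL"
```

The same URL shape works from the Simba ODBC driver by setting HTTP transport mode and the gateway's HTTP path in the DSN.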

Thanks
Shawn

From: Mithun RK 
Sent: Tuesday, April 16, 2019 1:28 PM
To: user@hive.apache.org
Subject: Hive/ODBC, Tableau Server, and Kerberos auth

Hello, chaps.

I have a silly question regarding using Tableau Server with Hive/ODBC:

As far as I know, the Hive ODBC (Simba) drivers provide no way to use a keytab 
to automatically kinit before attempting an HS2 connection. What is the 
recommended way for a user running Tableau Server to automatically refresh a 
report running on HS2? (I mean, short of running kinit off the keytab first, or 
running kinit in a cronjob.)

How does the user ensure that a valid Kerberos ticket is presented on the 
refresh?

Mithun


Re: Hive metastore service

2019-04-16 Thread Mich Talebzadeh
Try this, assuming that you are talking about the HiveServer2 Thrift service:

beeline -u jdbc:hive2://rhes75:10099/default -d org.apache.hive.jdbc.HiveDriver -n USERNAME -p PASSWORD -i /home/hduser/dba/bin/add_jars.hql

HTH

Dr Mich Talebzadeh



LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Tue, 16 Apr 2019 at 16:11, Odon Copon  wrote:

> Hi,
> would it be possible to add authentication to the Thrift Hive metastore
> service, like a user and password?
> I cannot find any documentation on how to protect this endpoint.
> Thanks.
>


Hive/ODBC, Tableau Server, and Kerberos auth

2019-04-16 Thread Mithun RK
Hello, chaps.

I have a silly question regarding using Tableau Server with Hive/ODBC:

As far as I know, the Hive ODBC (Simba) drivers provide no way to use a
keytab to automatically kinit before attempting an HS2 connection. What is
the recommended way for a user running Tableau Server to automatically
refresh a report running on HS2? (I mean, short of running kinit off the
keytab first, or running kinit in a cronjob.)
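The cron-based workaround mentioned above could be sketched as follows (principal, keytab path, and schedule are hypothetical; the job keeps the ticket cache fresh so the ODBC driver finds a valid TGT at refresh time):

```shell
# Hypothetical crontab entry: renew the Kerberos ticket every 8 hours,
# well inside a typical 24h ticket lifetime, in the account that runs
# Tableau Server.
CRON_LINE='0 */8 * * * /usr/bin/kinit -kt /etc/security/keytabs/tableau.keytab tableau@EXAMPLE.COM'
echo "$CRON_LINE"
```

Check afterwards with `klist` that the ticket cache the driver reads (KRB5CCNAME) actually holds the renewed ticket.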

How does the user ensure that a valid Kerberos ticket is presented on the
refresh?

Mithun


Hive metastore service

2019-04-16 Thread Odon Copon
Hi,
would it be possible to add authentication to the Thrift Hive metastore
service, like a user and password?
I cannot find any documentation on how to protect this endpoint.
Thanks.
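For reference, as far as I know the Thrift metastore has no built-in username/password mechanism; the documented way to protect the endpoint is SASL/Kerberos. A minimal hive-site.xml sketch (keytab path and principal are placeholders):

```xml
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/etc/security/keytabs/hive.service.keytab</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
```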


Re: Hive on Tez vs Impala

2019-04-16 Thread Edward Capriolo
I have changed jobs three times since Tez was introduced. It is a true waste
of compute resources and time that it was never patched in. So I either have
to waste my time patching it in, waste my time running a side deployment, or
skip installing it and waste money having queries run longer on the MR/Spark
engines.

Imagine how many compute hours have been lost worldwide.
On Tuesday, April 16, 2019, Manoj Murumkar  wrote:

> If we install our own build of Hive, we'll be out of support from CDH.
>
> Tez is not supported anyway and we're not touching any CDH bits, so it's
> not a big issue to have our own build of Tez engine.
>
> > On Apr 15, 2019, at 9:20 PM, Gopal Vijayaraghavan 
> wrote:
> >
> >
> > Hi,
> >
> >>> However, we have built Tez on CDH and it runs just fine.
> >
> > Down that path you'll also need to deploy a slightly newer version of
> Hive as well, because Hive 1.1 is a bit ancient & has known bugs with the
> tez planner code.
> >
> > You effectively end up building the hortonworks/hive-release builds, by
> undoing the non-htrace tracing impl & applying the htrace one back etc.
> >
> >> Lol. I was hoping that the merger would unblock the "saltyness".
> >
> > Historically, I've unofficially supported folks using Tez on CDH in prod
> (assuming they buy me enough coffee), though I might have to discontinue that.
> >
> > https://github.com/t3rmin4t0r/tez-autobuild/blob/llap/vendor-repos.xml#L11
> >
> > Cheers,
> > Gopal
> >
> >
>


-- 
Sorry this was sent from mobile. Will do less grammar and spell check than
usual.