Hi,
I use the user "spark" to create tables and to run a Spark Streaming
application.
I'm confused about why the user "yarn" needs HDFS write access. If that is
required, I can't run the app as the "spark" user, but only as the "yarn" user.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=yarn, access=WRITE, inode="/carbondata/carbonstore/default/sale/Metadata/schema":spark:hdfs:drwxr-xr-x
INFO 21-12 11:07:52,389 - ********starting clean up**********
WARN 21-12 11:07:52,442 - Exception while invoking ClientNamenodeProtocolTranslatorPB.delete over dpnode02/192.168.9.2:8020. Not retrying because try once and fail.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=yarn, access=WRITE, inode="/carbondata/carbonstore/sale/sale/Fact/Part0/Segment_0":spark:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
Thanks
--
View this message in context:
http://apache-carbondata-mailing-list-archive.1130556.n5.nabble.com/user-yarn-needs-the-hdfs-access-when-loading-data-tp5082.html
Sent from the Apache CarbonData Mailing List archive mailing list archive at
Nabble.com.