Hi, we use SQL standards based authorization for authorization in Hive
0.14, but it does not support column level privileges. So I want to know:
is there any way to set column level privileges? Thanks!
Column level security in Hive was added in HIVE-5837:
https://issues.apache.org/jira/browse/HIVE-5837
The JIRA has a PDF attachment worth reading.
https://cwiki.apache.org/confluence/display/Hive/AuthDev talks about
setting column level permissions
On Thu, Mar 26, 2015 at 4:39 PM, Allen
Create a view with the permitted columns and handle the privileges for it
Daniel
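To sketch that suggestion concretely (the table, view, and role names below are hypothetical, and the GRANT syntax assumes SQL standards based authorization is enabled):

```sql
-- Hypothetical base table with a column we want to hide:
CREATE TABLE employees (id BIGINT, name STRING, salary DOUBLE);

-- Expose only the permitted columns through a view:
CREATE VIEW employees_public AS
SELECT id, name FROM employees;

-- Grant access to the view only; the base table stays restricted:
GRANT SELECT ON TABLE employees_public TO ROLE analysts;
```

Users in the `analysts` role can then query `employees_public` without ever being granted access to `salary` on the base table.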
On 26 March 2015, at 12:40, Allen bjallenw...@sina.com wrote:
hi,
We use SQL standards based authorization for authorization in Hive
0.14, but it does not support column level privileges.
I do not know for sure but you can always create a view on the table
excluding columns that you don't want this particular application to see.
Rather tedious.
HTH
Mich Talebzadeh
http://talebzadehmich.wordpress.com
Publications due shortly:
Creating in-memory Data Grid for Trading Systems with Oracle TimesTen
Thanks for your reply. If we handle the privileges by creating views, it will
lead to lots of views in our database. I found a table named
TBL_COL_PRIV in the Hive metastore database; maybe this table is related to column
privileges, but it never seems to be used in Hive. Does anybody know why?
Adding my thoughts here...
Neither Apache Sentry nor Apache Ranger will offer you column level fine-grained
security. Perhaps, as suggested by one member, creating views is
more practical.
--
Thanks,
Raunak Jhawar
On Thu, Mar 26, 2015 at 8:53 PM, Alan Gates alanfga...@gmail.com wrote:
Hi Nitin, thanks for your reply. This is very useful for us. We'll consider
your suggestions.
----- Original Message -----
From: Alan Gates alanfga...@gmail.com
To: user@hive.apache.org
Subject: Re: how to set column level privileges
Date: 2015-03-26 23:23
Column level
Hi,
Can anyone direct me to a good explanation on understanding Hive's execution
plan?
Thanks,
Daniel
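A starting point (the table and column names below are hypothetical): Hive's `EXPLAIN` statement prints the execution plan of a query as a tree of stages, showing the operators and map/reduce tasks it will run.

```sql
-- Prints the plan: stage dependencies, then the operator tree
-- (TableScan, GroupBy, ReduceSink, FileSink, ...) for each stage:
EXPLAIN
SELECT dept, count(*) FROM employees GROUP BY dept;

-- EXPLAIN EXTENDED adds input paths and extra operator detail:
EXPLAIN EXTENDED
SELECT dept, count(*) FROM employees GROUP BY dept;
```

Reading the output top-down: the stage dependency list tells you the order of jobs, and each stage's operator tree tells you what that job does to the rows.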
Hi,
What is wrong with this query? I am reading the docs and it appears that
this should work, no?
INSERT OVERWRITE DIRECTORY “/user/admin/mydirectory”
ROW FORMAT DELIMITED FIELDS TERMINATED BY ‘\t’
select * from my_table_that_exists;
Error occurred executing hive query: Error while compiling
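The compile error is truncated above, so this is only a guess: the query as pasted uses curly "smart" quotes (“ ” and ‘ ’), which Hive's parser does not accept. Rewritten with plain ASCII single quotes it would look like:

```sql
INSERT OVERWRITE DIRECTORY '/user/admin/mydirectory'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
SELECT * FROM my_table_that_exists;
```

Note also that `ROW FORMAT` on `INSERT OVERWRITE DIRECTORY` only works on reasonably recent Hive releases, so if fixing the quotes alone doesn't help, check your Hive version.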
Hi, thanks for your quick reply.
I see your point, but in my case would I not have the required
RecordIdentifiers available as I'm already reading the entire partition to
determine which records have changed? Admittedly Hive will not reveal
the ROW__IDs to me but I assume (incorrectly perhaps)
The missing piece for adding update and delete to the streaming API is a
primary key. Updates and deletes in SQL work by scanning the table or
partition where the record resides. This is assumed to be ok since we
are not supporting transactional workloads and thus update/deletes are
assumed
Hi,
The primary key is required for updates/deletes to uniquely identify the record
that needs to be updated; otherwise you will get unpredictable results,
as you may update or delete too many rows.
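For context, the update/delete path being discussed looks like this in HiveQL (Hive 0.14+). The table below is a hypothetical sketch; ACID tables must be stored as ORC, bucketed, and marked transactional:

```sql
-- ACID requirements: ORC storage, bucketing, transactional=true
CREATE TABLE events (id BIGINT, payload STRING)
CLUSTERED BY (id) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

-- Hive finds the affected rows by scanning the table/partition;
-- the WHERE predicate acts as the logical "primary key":
UPDATE events SET payload = 'revised' WHERE id = 42;
DELETE FROM events WHERE id = 43;
```

If the predicate matches more than one row, all matching rows are updated or deleted, which is exactly why a uniquely identifying key matters here.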
This is indeed a requirement for real time delivery of data to data
Hi Elliot,
How do you determine a record in a partition has changed? Are you relying on
timestamp or something like that?
Thanks
Mich Talebzadeh
Hi,
I'd like to ascertain if it might be possible to add 'update' and 'delete'
operations to the hive-hcatalog-streaming API. I've been looking at the API
with interest for the last week as it appears to have the potential to help
with some general data processing patterns that are prevalent
Thanks for that Elliot.
As a matter of interest what is the source of data in this case. Is the data
delivered periodically including new rows and deltas?
Cheers,
Mich Talebzadeh
Are you saying that when the records arrive you don't know updates from
inserts and you're already doing processing to determine that? If so,
this is exactly the case we'd like to hit with the merge functionality.
If you're already scanning the existing ORC file and obtaining the
unique
I am very new to the Hive optimiser.
Here I have a table with 4 million rows imported from Oracle via Sqoop/Hive. In
this table the object_id column is unique. The Oracle table has a primary key
constraint on the object_id column, which is basically a unique B-tree index.
I do a very simple query to see how
Have you seen this article? It looks a bit dated, though:
Adding ACID to Apache Hive
http://hortonworks.com/blog/adding-acid-to-apache-hive/
HTH
Mich Talebzadeh
Hi Mich,
Yes, we have a timestamp on each record. Our processes effectively group by
a key and order by time stamp.
Cheers - Elliot.
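One way to express that "group by a key and order by timestamp" step in HiveQL (names below are hypothetical; windowing functions require Hive 0.11+) is to keep only the newest record per key:

```sql
-- Keep the latest version of each record, ranked by timestamp:
SELECT id, payload
FROM (
  SELECT id, payload,
         ROW_NUMBER() OVER (PARTITION BY id ORDER BY ts DESC) AS rn
  FROM staged_records
) latest
WHERE rn = 1;
```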
Elliot West mailto:tea...@gmail.com
March 26, 2015 at 15:58
Hi Alan,
Yes, this is precisely our situation. The issues I'm having with the
current API are that I cannot intercept the creation of the
OrcRecordUpdater to set the recordIdColumn in
the AcidOutputFormat.Options instance.
Column level permissions were added to Hive's default authorization in
HIVE-5837. That is why the TBL_COL_PRIV table exists in the
metastore. The problem with default auth is that it isn't really secure, as
anyone can grant anybody (including themselves) any privilege.
But Allen is correct that