Hi,
Should the GRANT statement be the one below?
GRANT INSERT ON TABLE table_priv1 TO user2;
and not
GRANT INSERT ON table_priv1 TO USER user2;
Hope this helps.
Regards,
Bill
On Wed, Jun 10, 2020 at 11:47 PM Nasrulla Khan Haris wrote:
I did enable the auth-related configs in hive-site.xml per the document below.
I tried this on Spark 2.4.4. Is it supported?
https://cwiki.apache.org/confluence/display/Hive/Storage+Based+Authorization+in+the+Metastore+Server
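For reference, the storage-based-authorization setup on that wiki page comes down to a few metastore-side properties in hive-site.xml (property names as given on the linked page; worth double-checking against your Hive version):

```xml
<!-- hive-site.xml, metastore side, per the Hive wiki page linked above -->
<property>
  <name>hive.metastore.pre.event.listeners</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.AuthorizationPreEventListener</value>
</property>
<property>
  <name>hive.security.metastore.authorization.manager</name>
  <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
</property>
<property>
  <name>hive.security.metastore.authenticator.manager</name>
  <value>org.apache.hadoop.hive.ql.security.HadoopDefaultMetastoreAuthenticator</value>
</property>
```

Note that storage-based authorization enforces permissions at the metastore/HDFS level; it does not make Spark's SQL parser accept GRANT/REVOKE statements.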
From: Nasrulla Khan Haris
Sent: Wednesday, June 10, 2020 5:55 PM
To:
Congrats and thanks, Holden!
Bests,
Takeshi
On Thu, Jun 11, 2020 at 11:16 AM Dongjoon Hyun wrote:
> Thank you so much, Holden! :)
Thank you so much, Holden! :)
On Wed, Jun 10, 2020 at 6:59 PM Hyukjin Kwon wrote:
> Yay!
Yay!
On Thu, Jun 11, 2020 at 10:38 AM, Holden Karau wrote:
> We are happy to announce the availability of Spark 2.4.6!
We are happy to announce the availability of Spark 2.4.6!
Spark 2.4.6 is a maintenance release containing stability, correctness, and
security fixes.
This release is based on the branch-2.4 maintenance branch of Spark. We
strongly recommend all 2.4 users to upgrade to this stable release.
To
Hi Spark users,
I see REVOKE/GRANT operations in the list of supported operations, but when I
run them on a table I see:
Error: org.apache.spark.sql.catalyst.parser.ParseException:
Operation not allowed: GRANT(line 1, pos 0)
== SQL ==
GRANT INSERT ON table_priv1 TO USER user2
^^^
at
We have a case where the data is small enough to be broadcast and is joined
with multiple tables in a single plan. Looking at the physical plan, I do
not see anything that indicates whether the broadcast is done only once,
i.e., whether the BroadcastExchange is being reused and the data is not
Much like accessing Oracle data, one can utilise the power of Spark on
Teradata via JDBC drivers.
I have seen connections described in some articles, which indicates this
process is pretty mature.
My question is whether anyone has done this work, and how performance in Spark
compares vis-a-vis
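For what it's worth, reading Teradata goes through Spark's generic JDBC data source; only the URL and driver class are Teradata-specific. A sketch with hypothetical host, database, table, and credentials (assumes the terajdbc4 driver jar is on the Spark classpath, e.g. via --jars):

```python
# Sketch: the options Spark's generic JDBC source needs for a Teradata read.
# Host, database, table, and credentials below are hypothetical placeholders.
jdbc_options = {
    "url": "jdbc:teradata://td-host/DATABASE=mydb",  # hypothetical host/db
    "driver": "com.teradata.jdbc.TeraDriver",        # class in terajdbc4.jar
    "dbtable": "mydb.some_table",                    # hypothetical table
    "user": "spark_user",
    "password": "secret",
}

# With a live cluster you would then run (not executed here):
# df = spark.read.format("jdbc").options(**jdbc_options).load()
print(jdbc_options["driver"])
```

Pushing down a query instead of a whole table works the same way via the `query` option (available since Spark 2.4), which matters for performance comparisons.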
Hi,
This is a general question regarding moving a Spark SQL query to PySpark; if
needed I will add more detail from the error logs and the query syntax.
I'm trying to move a Spark SQL query to run through PySpark.
The query syntax and Spark configuration are the same.
For some reason the query failed to