[jira] [Created] (RANGER-4766) Add validation criterion that username cannot be part of password in Ranger
Himanshu Maurya created RANGER-4766: --- Summary: Add validation criterion that username cannot be part of password in Ranger Key: RANGER-4766 URL: https://issues.apache.org/jira/browse/RANGER-4766 Project: Ranger Issue Type: Improvement Components: Ranger Reporter: Himanshu Maurya Assignee: Himanshu Maurya In Ranger, add a password validation criterion that the username cannot be part of the password. -- This message was sent by Atlassian Jira (v8.20.10#820010)
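A minimal sketch of the proposed rule, written in Python purely for illustration (Ranger's actual password validation lives in the Java admin code, and the function name here is hypothetical):

def is_password_valid(user_name: str, password: str) -> bool:
    """Reject the password when it contains the username, case-insensitively."""
    if not user_name or not password:
        return False
    return user_name.lower() not in password.lower()

# Example: "Admin@12345" would be rejected for user "admin"; "Str0ng!Passwd" would pass.
print(is_password_valid("admin", "Admin@12345"))    # False
print(is_password_valid("admin", "Str0ng!Passwd"))  # True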
Re: Review Request 74949: RANGER-4763: Send user-friendly message if Test connection is not implemented for a service definition
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/74949/ --- (Updated April 3, 2024, 4:40 a.m.) Review request for ranger, Asit Vadhavkar, Madhan Neethiraj, Monika Kachhadiya, Siddhesh Phatak, and Subhrat Chaudhary. Bugs: RANGER-4763 https://issues.apache.org/jira/browse/RANGER-4763 Repository: ranger Description --- Send user-friendly message if Test connection is not implemented for a service definition Diffs - agents-common/src/main/java/org/apache/ranger/plugin/service/RangerDefaultService.java c89b55757 security-admin/src/main/java/org/apache/ranger/biz/ServiceMgr.java 7e071ba0e security-admin/src/main/java/org/apache/ranger/common/TimedExecutor.java d6fc01176 Diff: https://reviews.apache.org/r/74949/diff/2/ Testing (updated) --- Validated "Test Connection" for a service definition that does not have an implClass; it returns the response below in this case. { "statusCode": 1, "msgDesc": "Configuration validation is not implemented for hbase", "messageList": [ { "message": "Configuration validation is not implemented for hbase" } ] } Thanks, Anand Nadar
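To illustrate how a caller might consume the response shape shown in the testing notes above, a small sketch (illustration only, not part of the patch; it simply parses the quoted JSON):

import json

response_text = '''
{ "statusCode": 1,
  "msgDesc": "Configuration validation is not implemented for hbase",
  "messageList": [ { "message": "Configuration validation is not implemented for hbase" } ] }
'''

resp = json.loads(response_text)
# A non-zero statusCode marks the failure; msgDesc now carries the user-friendly explanation.
if resp.get("statusCode") != 0:
    print("Test connection failed:", resp.get("msgDesc"))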
Tagsync not working, documentation didn't help
Hi All, I'm trying to set up Apache Ranger tagsync with Atlas. In several demos on YouTube, Atlas tags are synced to Ranger automatically, but that is not happening for me. For my setup I have created a Hive instance and added the Atlas hook and the Ranger plugin to it. In the tagsync install.properties, I'm not able to work out what the value of TAGSYNC_ATLAS_TO_RANGER_SERVICE_MAPPING should be. My setup is below: Ranger: * Hive service name: hive * Atlas service name: atlas dev. I want to implement a tag-based service for Trino, and further I want to use DataHub as a source for tags instead of Atlas. Any help here would be highly appreciated. Thanks
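For reference, this mapping is usually written as a comma-separated triple of Atlas cluster name, component type, and Ranger service name; the exact syntax below is an assumption and should be verified against the comments in tagsync's install.properties:

# Assumed format: <atlasClusterName>,<componentType>,<rangerServiceName>
# e.g. if the Atlas Hive hook reports cluster name "cl1" and the Ranger Hive service is named "hive":
TAGSYNC_ATLAS_TO_RANGER_SERVICE_MAPPING=cl1,hive,hive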
Re: Review Request 74949: RANGER-4763: Send user-friendly message if Test connection is not implemented for a service definition
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/74949/ --- (Updated April 3, 2024, 4:29 a.m.) Review request for ranger, Asit Vadhavkar, Madhan Neethiraj, Monika Kachhadiya, Siddhesh Phatak, and Subhrat Chaudhary. Bugs: RANGER-4763 https://issues.apache.org/jira/browse/RANGER-4763 Repository: ranger Description --- Send user-friendly message if Test connection is not implemented for a service definition Diffs (updated) - agents-common/src/main/java/org/apache/ranger/plugin/service/RangerDefaultService.java c89b55757 security-admin/src/main/java/org/apache/ranger/biz/ServiceMgr.java 7e071ba0e security-admin/src/main/java/org/apache/ranger/common/TimedExecutor.java d6fc01176 Diff: https://reviews.apache.org/r/74949/diff/2/ Changes: https://reviews.apache.org/r/74949/diff/1-2/ Testing --- Validated "Test Connection" for a service definition that does not have an implClass; it returns the response below in this case. { "statusCode": 1, "msgDesc": "Configuration validation is not implemented for hive-service-1", "messageList": [ { "message": "Configuration validation is not implemented for hive-service-1" } ] } Thanks, Anand Nadar
Re: [PR] RANGER-4761: make lazy memory allocation for family map instead … [ranger]
kumaab commented on code in PR #307:
URL: https://github.com/apache/ranger/pull/307#discussion_r1548609407

## ranger-tools/src/main/python/stress/stress-hbase-loadgenerator.py:
@@ -0,0 +1,106 @@
+import subprocess
+import time
+import argparse
+import os
+from datetime import datetime
+
+def increase_memory_for_loadgenerator():
+    try:
+        cmd = "export HBASE_OPTS='-Xmx10g'"
+        print(cmd)
+        op = subprocess.call(cmd, shell=True)
+        print("Output:", op)
+    except subprocess.CalledProcessError as e:
+        print("Error in setting HBASE_HEAPSIZE:", e)
+        exit(1)
+def login():
+    try:
+        cmd = "kinit -kt systest"

Review Comment:
Instead of hardcoding the value it is better to provide the keytab as an argument. Alternatively, doing the kinit externally and documenting the steps in readme also works.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@ranger.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
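One way the suggestion could look — a sketch only (the flag names and the separate principal argument are assumptions, not the actual PR change):

import argparse
import subprocess

parser = argparse.ArgumentParser("hbase load generator")
parser.add_argument("--keytab", required=True, help="Path to the keytab to use for kinit")
parser.add_argument("--principal", required=True, help="Kerberos principal, e.g. systest@EXAMPLE.COM")
args = parser.parse_args()

# check=True raises CalledProcessError if kinit fails, instead of continuing silently.
subprocess.run(["kinit", "-kt", args.keytab, args.principal], check=True)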
Re: [PR] RANGER-4761: make lazy memory allocation for family map instead … [ranger]
kumaab commented on code in PR #307:
URL: https://github.com/apache/ranger/pull/307#discussion_r1548604094

## ranger-tools/src/main/python/stress/stress-hbase-loadgenerator.py:
@@ -0,0 +1,106 @@
+import subprocess
+import time
+import argparse
+import os
+from datetime import datetime
+
+def increase_memory_for_loadgenerator():
+    try:
+        cmd = "export HBASE_OPTS='-Xmx10g'"
+        print(cmd)
+        op = subprocess.call(cmd, shell=True)
+        print("Output:", op)
+    except subprocess.CalledProcessError as e:
+        print("Error in setting HBASE_HEAPSIZE:", e)
+        exit(1)
+def login():
+    try:
+        cmd = "kinit -kt systest"
+        print(cmd)
+        login_op = subprocess.call(cmd, shell=True)
+        print("Login output:", login_op)
+    except subprocess.CalledProcessError as e:
+        print("Error in login:", e)
+        exit(1)
+
+def create_ltt_command_multiput(num_cols_per_cf=1000, num_threads=10, num_keys=100, table_name="multitest", avg_data_size=2, num_col_families=3, col_family_pattern="cf%d", num_regions_per_server=1):
+    def get_column_families():
+        col_families = []
+        for i in range(num_col_families):
+            col_families.append(col_family_pattern % i)
+        return ','.join(col_families)
+    #Sample: hbase ltt -tn multitest -families f1,f2,f3 -write 2:2:20 -multiput -num_keys 1000 -num_regions_per_server 1
+    cmd = f"hbase ltt -tn {table_name} -families {get_column_families()} -write {num_cols_per_cf}:{avg_data_size}:{num_threads}" \
+          f" -multiput -num_keys {num_keys} -num_regions_per_server {num_regions_per_server}"
+    return cmd
+
+
+def create_pe_command_multiget(multiget_batchsize=500, num_threads=10, num_keys=100, table_name="multitest", num_col_families=3):
+    #Sample: hbase pe --table=multitest --families=3 --columns=1 --multiGet=10 --rows=1000 --nomapred randomRead 5
+
+    cmd = f"hbase pe --table={table_name} --families={num_col_families} --columns={num_cols_per_cf} " \
+          f"--multiGet={multiget_batchsize} --rows={num_keys} --nomapred randomRead {num_threads}"
+    return cmd
+
+
+
+def generate_hbase_load(op_type, multiget_batchsize, num_cf, num_rows_list, num_cols_per_cf, num_threads_list, metadata, csv_outfile="/root/ltt_output.csv", ):
+    #if output file does not exist only then write the header
+    if(not os.path.exists(csv_outfile)):
+        with open(csv_outfile, "w") as f:
+            f.write("op,num_cf,num_keys,num_cols_per_cf,num_threads,time_taken,command,metadata,date_start,time_start,date_end,time_end\n")
+    assert type(num_threads_list) == list
+    assert type(num_rows_list) == list
+    for num_keys in num_rows_list:
+        for num_threads in num_threads_list:
+            if op_type == "multiput":
+                cmd = create_ltt_command_multiput(num_cols_per_cf=num_cols_per_cf,
+                                                  num_threads=num_threads,
+                                                  num_keys=num_keys,
+                                                  num_col_families=num_cf)
+            elif op_type == "multiget":
+                cmd = create_pe_command_multiget(multiget_batchsize=multiget_batchsize,
+                                                 num_threads=num_threads,
+                                                 num_keys=num_keys,
+                                                 num_col_families=num_cf)
+            else:
+                print("Invalid op_type")
+                exit(1)
+
+            datetime_start = datetime.now()
+            date_start_str = datetime_start.date()
+            time_start_str = str(datetime_start.time()).split(".")[0]
+            time_start = time.time()
+            ltt_out = subprocess.call(cmd, shell=True)

Review Comment:
Consider adding error handling with hbase commands.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@ranger.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
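A sketch of what the suggested error handling could look like around the generated hbase command (illustrative only; whether to abort or just record the failure is a design choice for the PR):

import subprocess

cmd = "hbase ltt -tn multitest -families cf0,cf1,cf2 -write 1000:2:10 -multiput -num_keys 100 -num_regions_per_server 1"
ltt_out = subprocess.call(cmd, shell=True)
# A non-zero exit code means the hbase command failed; surface it instead of writing a normal-looking result row.
if ltt_out != 0:
    print(f"hbase command failed with exit code {ltt_out}: {cmd}")
    raise SystemExit(ltt_out)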
Re: [PR] RANGER-4761: make lazy memory allocation for family map instead … [ranger]
kumaab commented on code in PR #307:
URL: https://github.com/apache/ranger/pull/307#discussion_r1548582891

## ranger-tools/src/main/python/stress/stress-hbase-loadgenerator.py:
@@ -0,0 +1,106 @@
+import subprocess
+import time
+import argparse
+import os
+from datetime import datetime
+
+def increase_memory_for_loadgenerator():
+    try:
+        cmd = "export HBASE_OPTS='-Xmx10g'"
+        print(cmd)
+        op = subprocess.call(cmd, shell=True)
+        print("Output:", op)
+    except subprocess.CalledProcessError as e:
+        print("Error in setting HBASE_HEAPSIZE:", e)
+        exit(1)
+def login():
+    try:
+        cmd = "kinit -kt systest"
+        print(cmd)
+        login_op = subprocess.call(cmd, shell=True)
+        print("Login output:", login_op)
+    except subprocess.CalledProcessError as e:
+        print("Error in login:", e)
+        exit(1)
+
+def create_ltt_command_multiput(num_cols_per_cf=1000, num_threads=10, num_keys=100, table_name="multitest", avg_data_size=2, num_col_families=3, col_family_pattern="cf%d", num_regions_per_server=1):
+    def get_column_families():
+        col_families = []
+        for i in range(num_col_families):
+            col_families.append(col_family_pattern % i)
+        return ','.join(col_families)
+    #Sample: hbase ltt -tn multitest -families f1,f2,f3 -write 2:2:20 -multiput -num_keys 1000 -num_regions_per_server 1
+    cmd = f"hbase ltt -tn {table_name} -families {get_column_families()} -write {num_cols_per_cf}:{avg_data_size}:{num_threads}" \
+          f" -multiput -num_keys {num_keys} -num_regions_per_server {num_regions_per_server}"
+    return cmd
+
+
+def create_pe_command_multiget(multiget_batchsize=500, num_threads=10, num_keys=100, table_name="multitest", num_col_families=3):
+    #Sample: hbase pe --table=multitest --families=3 --columns=1 --multiGet=10 --rows=1000 --nomapred randomRead 5
+
+    cmd = f"hbase pe --table={table_name} --families={num_col_families} --columns={num_cols_per_cf} " \
+          f"--multiGet={multiget_batchsize} --rows={num_keys} --nomapred randomRead {num_threads}"
+    return cmd
+
+
+
+def generate_hbase_load(op_type, multiget_batchsize, num_cf, num_rows_list, num_cols_per_cf, num_threads_list, metadata, csv_outfile="/root/ltt_output.csv", ):
+    #if output file does not exist only then write the header
+    if(not os.path.exists(csv_outfile)):
+        with open(csv_outfile, "w") as f:
+            f.write("op,num_cf,num_keys,num_cols_per_cf,num_threads,time_taken,command,metadata,date_start,time_start,date_end,time_end\n")
+    assert type(num_threads_list) == list
+    assert type(num_rows_list) == list
+    for num_keys in num_rows_list:
+        for num_threads in num_threads_list:
+            if op_type == "multiput":
+                cmd = create_ltt_command_multiput(num_cols_per_cf=num_cols_per_cf,
+                                                  num_threads=num_threads,
+                                                  num_keys=num_keys,
+                                                  num_col_families=num_cf)
+            elif op_type == "multiget":
+                cmd = create_pe_command_multiget(multiget_batchsize=multiget_batchsize,
+                                                 num_threads=num_threads,
+                                                 num_keys=num_keys,
+                                                 num_col_families=num_cf)
+            else:
+                print("Invalid op_type")
+                exit(1)
+
+            datetime_start = datetime.now()
+            date_start_str = datetime_start.date()
+            time_start_str = str(datetime_start.time()).split(".")[0]
+            time_start = time.time()
+            ltt_out = subprocess.call(cmd, shell=True)

Review Comment:
subprocess.run() is a recommended approach, please see: https://docs.python.org/3/library/subprocess.html#using-the-subprocess-module

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@ranger.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
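For reference, the subprocess.run() equivalent of the subprocess.call() line reviewed above might look like this (a sketch; check=False keeps the script's current record-the-exit-code behaviour):

import subprocess

cmd = "hbase ltt -tn multitest -families cf0,cf1,cf2 -write 1000:2:10 -multiput -num_keys 100"  # placeholder for the generated ltt/pe command
result = subprocess.run(cmd, shell=True, capture_output=True, text=True, check=False)
ltt_out = result.returncode
# result.stdout / result.stderr are available for logging, unlike with subprocess.call().
print("LTT output:", ltt_out)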
Re: [PR] RANGER-4761: make lazy memory allocation for family map instead … [ranger]
kumaab commented on code in PR #307:
URL: https://github.com/apache/ranger/pull/307#discussion_r1548578831

## ranger-tools/src/main/python/stress/stress-hbase-loadgenerator.py:
@@ -0,0 +1,106 @@
+import subprocess
+import time
+import argparse
+import os
+from datetime import datetime
+
+def increase_memory_for_loadgenerator():
+    try:
+        cmd = "export HBASE_OPTS='-Xmx10g'"
+        print(cmd)
+        op = subprocess.call(cmd, shell=True)
+        print("Output:", op)
+    except subprocess.CalledProcessError as e:
+        print("Error in setting HBASE_HEAPSIZE:", e)
+        exit(1)
+def login():
+    try:
+        cmd = "kinit -kt systest"
+        print(cmd)
+        login_op = subprocess.call(cmd, shell=True)
+        print("Login output:", login_op)
+    except subprocess.CalledProcessError as e:
+        print("Error in login:", e)
+        exit(1)
+
+def create_ltt_command_multiput(num_cols_per_cf=1000, num_threads=10, num_keys=100, table_name="multitest", avg_data_size=2, num_col_families=3, col_family_pattern="cf%d", num_regions_per_server=1):
+    def get_column_families():
+        col_families = []
+        for i in range(num_col_families):
+            col_families.append(col_family_pattern % i)
+        return ','.join(col_families)
+    #Sample: hbase ltt -tn multitest -families f1,f2,f3 -write 2:2:20 -multiput -num_keys 1000 -num_regions_per_server 1
+    cmd = f"hbase ltt -tn {table_name} -families {get_column_families()} -write {num_cols_per_cf}:{avg_data_size}:{num_threads}" \
+          f" -multiput -num_keys {num_keys} -num_regions_per_server {num_regions_per_server}"
+    return cmd
+
+
+def create_pe_command_multiget(multiget_batchsize=500, num_threads=10, num_keys=100, table_name="multitest", num_col_families=3):
+    #Sample: hbase pe --table=multitest --families=3 --columns=1 --multiGet=10 --rows=1000 --nomapred randomRead 5
+
+    cmd = f"hbase pe --table={table_name} --families={num_col_families} --columns={num_cols_per_cf} " \
+          f"--multiGet={multiget_batchsize} --rows={num_keys} --nomapred randomRead {num_threads}"
+    return cmd
+
+
+
+def generate_hbase_load(op_type, multiget_batchsize, num_cf, num_rows_list, num_cols_per_cf, num_threads_list, metadata, csv_outfile="/root/ltt_output.csv", ):
+    #if output file does not exist only then write the header
+    if(not os.path.exists(csv_outfile)):
+        with open(csv_outfile, "w") as f:
+            f.write("op,num_cf,num_keys,num_cols_per_cf,num_threads,time_taken,command,metadata,date_start,time_start,date_end,time_end\n")
+    assert type(num_threads_list) == list
+    assert type(num_rows_list) == list
+    for num_keys in num_rows_list:
+        for num_threads in num_threads_list:
+            if op_type == "multiput":
+                cmd = create_ltt_command_multiput(num_cols_per_cf=num_cols_per_cf,
+                                                  num_threads=num_threads,
+                                                  num_keys=num_keys,
+                                                  num_col_families=num_cf)
+            elif op_type == "multiget":
+                cmd = create_pe_command_multiget(multiget_batchsize=multiget_batchsize,
+                                                 num_threads=num_threads,
+                                                 num_keys=num_keys,
+                                                 num_col_families=num_cf)
+            else:
+                print("Invalid op_type")
+                exit(1)
+
+            datetime_start = datetime.now()
+            date_start_str = datetime_start.date()
+            time_start_str = str(datetime_start.time()).split(".")[0]
+            time_start = time.time()
+            ltt_out = subprocess.call(cmd, shell=True)
+            time_end = time.time()
+            datetime_end = datetime.now()
+            date_end_str = datetime_end.date()
+            time_end_str = str(datetime_end.time()).split(".")[0]
+            time_taken = time_end - time_start
+
+            print("cmd:", cmd)
+            print("LTT output:", ltt_out)
+            print("Time taken:", time_taken)
+            with open(csv_outfile, "a") as f:
+                if ltt_out != 0:
+                    time_taken = "non_zero_exit_code"
+                f.write(f'{op_type},{num_cf},{num_keys},{num_cols_per_cf},{num_threads},{time_taken},"{cmd}",{metadata},{date_start_str},{time_start_str},{date_end_str},{time_end_str}\n')
+            print(f"Written to file: {csv_outfile}")
+            # Sleep added so that the next command does not start immediately and any metric measurement such as heap useage can be captured more accurately
+            time.sleep(90)
+
+if __name__ == '__main__':
+    argparser = argparse.ArgumentParser("Generate LTT load and create report")
+    argparser.add_argument('-csv_output', '--csv_output', help='Full path to the csv output file', default="/root/ltt_outp
Re: [PR] RANGER-4761: make lazy memory allocation for family map instead … [ranger]
fateh288 commented on PR #307: URL: https://github.com/apache/ranger/pull/307#issuecomment-2032639005 @mneethiraj @rameeshm Following up for review -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@ranger.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Assigned] (RANGER-4765) GDS UI: Need a date filter to filter the records in history tab for dataset/datashare
[ https://issues.apache.org/jira/browse/RANGER-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mugdha Varadkar reassigned RANGER-4765: --- Assignee: Mugdha Varadkar > GDS UI: Need a date filter to filter the records in history tab for > dataset/datashare > - > > Key: RANGER-4765 > URL: https://issues.apache.org/jira/browse/RANGER-4765 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Assignee: Mugdha Varadkar >Priority: Major > Attachments: image-2024-04-02-18-39-01-410.png, > image-2024-04-02-18-42-51-611.png > > > Need date filter to filter the records the history tab in dataset and > datashare tabs. > !image-2024-04-02-18-39-01-410.png|width=412,height=217! > !image-2024-04-02-18-42-51-611.png|width=470,height=206! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (RANGER-4765) GDS UI: Need a date filter to filter the records in history tab for dataset/datashare
[ https://issues.apache.org/jira/browse/RANGER-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anand Nadar updated RANGER-4765: Description: Need date filter to filter the records the history tab in dataset and datashare tabs. !image-2024-04-02-18-39-01-410.png|width=412,height=217! !image-2024-04-02-18-42-51-611.png|width=470,height=206! was: Need date filter to filter the records the history tab in dataset and datashare tabs. !image-2024-04-02-18-39-01-410.png|width=412,height=217! > GDS UI: Need a date filter to filter the records in history tab for > dataset/datashare > - > > Key: RANGER-4765 > URL: https://issues.apache.org/jira/browse/RANGER-4765 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Priority: Major > Attachments: image-2024-04-02-18-39-01-410.png, > image-2024-04-02-18-42-51-611.png > > > Need date filter to filter the records the history tab in dataset and > datashare tabs. > !image-2024-04-02-18-39-01-410.png|width=412,height=217! > !image-2024-04-02-18-42-51-611.png|width=470,height=206! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (RANGER-4765) GDS UI: Need a date filter to filter the records in history tab for dataset/datashare
[ https://issues.apache.org/jira/browse/RANGER-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anand Nadar updated RANGER-4765: Attachment: image-2024-04-02-18-42-51-611.png > GDS UI: Need a date filter to filter the records in history tab for > dataset/datashare > - > > Key: RANGER-4765 > URL: https://issues.apache.org/jira/browse/RANGER-4765 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Priority: Major > Attachments: image-2024-04-02-18-39-01-410.png, > image-2024-04-02-18-42-51-611.png > > > Need date filter to filter the records the history tab in dataset and > datashare tabs. > !image-2024-04-02-18-39-01-410.png|width=412,height=217! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (RANGER-4765) GDS UI: Need a date filter to filter the records in history tab for dataset/datashare
[ https://issues.apache.org/jira/browse/RANGER-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anand Nadar updated RANGER-4765: Attachment: image-2024-04-02-18-39-01-410.png > GDS UI: Need a date filter to filter the records in history tab for > dataset/datashare > - > > Key: RANGER-4765 > URL: https://issues.apache.org/jira/browse/RANGER-4765 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Priority: Major > Attachments: image-2024-04-02-18-39-01-410.png > > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (RANGER-4765) GDS UI: Need a date filter to filter the records in history tab for dataset/datashare
[ https://issues.apache.org/jira/browse/RANGER-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anand Nadar updated RANGER-4765: Description: Need date filter to filter the records the history tab in dataset and datashare tabs. !image-2024-04-02-18-39-01-410.png|width=412,height=217! > GDS UI: Need a date filter to filter the records in history tab for > dataset/datashare > - > > Key: RANGER-4765 > URL: https://issues.apache.org/jira/browse/RANGER-4765 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Priority: Major > Attachments: image-2024-04-02-18-39-01-410.png > > > Need date filter to filter the records the history tab in dataset and > datashare tabs. > !image-2024-04-02-18-39-01-410.png|width=412,height=217! -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (RANGER-4765) GDS UI: Need a date filter to filter the records in history tab for dataset/datashare
[ https://issues.apache.org/jira/browse/RANGER-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anand Nadar updated RANGER-4765: Summary: GDS UI: Need a date filter to filter the records in history tab for dataset/datashare (was: GDS UI: Need a date filter to filter the records in history tab) > GDS UI: Need a date filter to filter the records in history tab for > dataset/datashare > - > > Key: RANGER-4765 > URL: https://issues.apache.org/jira/browse/RANGER-4765 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (RANGER-4765) GDS UI: Need a date filter to filter the records in history tab
Anand Nadar created RANGER-4765: --- Summary: GDS UI: Need a date filter to filter the records in history tab Key: RANGER-4765 URL: https://issues.apache.org/jira/browse/RANGER-4765 Project: Ranger Issue Type: Task Components: admin Reporter: Anand Nadar -- This message was sent by Atlassian Jira (v8.20.10#820010)
Review Request 74950: RANGER-4764: Update the policyName of associated policies when dataset/project name is modified
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/74950/ --- Review request for ranger, Asit Vadhavkar, Madhan Neethiraj, Monika Kachhadiya, Siddhesh Phatak, and Subhrat Chaudhary. Bugs: RANGER-4764 https://issues.apache.org/jira/browse/RANGER-4764 Repository: ranger Description --- When a dataset/project name is modified, the policyName of all its associated policies should be updated as well. Diffs - security-admin/src/main/java/org/apache/ranger/biz/GdsDBStore.java a1a2f9920 Diff: https://reviews.apache.org/r/74950/diff/1/ Testing --- Validated that the policy names of associated policies are updated when the dataset/project name is modified. Thanks, Anand Nadar
[jira] [Resolved] (RANGER-4713) Alter view needs additional select permission on db which is not required for create view
[ https://issues.apache.org/jira/browse/RANGER-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahesh Hanumant Bandal resolved RANGER-4713. Resolution: Not A Bug > Alter view needs additional select permission on db which is not required for > create view > - > > Key: RANGER-4713 > URL: https://issues.apache.org/jira/browse/RANGER-4713 > Project: Ranger > Issue Type: Bug > Components: Ranger >Reporter: suja s >Assignee: Mahesh Hanumant Bandal >Priority: Major > > STEPS TO REPRODUCE: > Create db dbfortest > Create table tablefortest under dbfortest (create table > dbfortest.tablefortest(id int, name1 string, name2 string)) > Insert rows into dbfortest.tablefortest > User u1 has select access on tablefortest via ranger policy[db=dbfortest, > table=tablefortest, column=*] - policy P1 > User u1 has create and alter permissions via ranger policy [db=dbfortest, > table=viewfortest, column=*] - policy P2 > Connect to beeline as user u1 and execute command 'create view > dbfortest.viewfortest as select id,name1 from dbfortest.tablefortest' > View creation is successful, Ranger access audits show that policy P1 granted > select on tablefortest and policy P2 granted create on viewfortest > Execute command 'alter view dbfortest.viewfortest as select id,name2 from > dbfortest.tablefortest'. > CURRENT BEHAVIOUR: > Alter view command fails with access denied error for user not having select > permissions on database dbfortest -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (RANGER-4713) Alter view needs additional select permission on db which is not required for create view
[ https://issues.apache.org/jira/browse/RANGER-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17833137#comment-17833137 ] Mahesh Hanumant Bandal commented on RANGER-4713: [~suja] - This behaviour was added in RANGER-4001. Removing check for [select] access on database during 'ALTERVIEW_AS' operation may lead to security issues. This is working as expected. Also for the 'CREATEVIEW' operation, adding check for [select] access on database will cause change in behaviour. Resolving this JIRA. > Alter view needs additional select permission on db which is not required for > create view > - > > Key: RANGER-4713 > URL: https://issues.apache.org/jira/browse/RANGER-4713 > Project: Ranger > Issue Type: Bug > Components: Ranger >Reporter: suja s >Assignee: Mahesh Hanumant Bandal >Priority: Major > > STEPS TO REPRODUCE: > Create db dbfortest > Create table tablefortest under dbfortest (create table > dbfortest.tablefortest(id int, name1 string, name2 string)) > Insert rows into dbfortest.tablefortest > User u1 has select access on tablefortest via ranger policy[db=dbfortest, > table=tablefortest, column=*] - policy P1 > User u1 has create and alter permissions via ranger policy [db=dbfortest, > table=viewfortest, column=*] - policy P2 > Connect to beeline as user u1 and execute command 'create view > dbfortest.viewfortest as select id,name1 from dbfortest.tablefortest' > View creation is successful, Ranger access audits show that policy P1 granted > select on tablefortest and policy P2 granted create on viewfortest > Execute command 'alter view dbfortest.viewfortest as select id,name2 from > dbfortest.tablefortest'. > CURRENT BEHAVIOUR: > Alter view command fails with access denied error for user not having select > permissions on database dbfortest -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Assigned] (RANGER-4713) Alter view needs additional select permission on db which is not required for create view
[ https://issues.apache.org/jira/browse/RANGER-4713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mahesh Hanumant Bandal reassigned RANGER-4713: -- Assignee: Mahesh Hanumant Bandal > Alter view needs additional select permission on db which is not required for > create view > - > > Key: RANGER-4713 > URL: https://issues.apache.org/jira/browse/RANGER-4713 > Project: Ranger > Issue Type: Bug > Components: Ranger >Reporter: suja s >Assignee: Mahesh Hanumant Bandal >Priority: Major > > STEPS TO REPRODUCE: > Create db dbfortest > Create table tablefortest under dbfortest (create table > dbfortest.tablefortest(id int, name1 string, name2 string)) > Insert rows into dbfortest.tablefortest > User u1 has select access on tablefortest via ranger policy[db=dbfortest, > table=tablefortest, column=*] - policy P1 > User u1 has create and alter permissions via ranger policy [db=dbfortest, > table=viewfortest, column=*] - policy P2 > Connect to beeline as user u1 and execute command 'create view > dbfortest.viewfortest as select id,name1 from dbfortest.tablefortest' > View creation is successful, Ranger access audits show that policy P1 granted > select on tablefortest and policy P2 granted create on viewfortest > Execute command 'alter view dbfortest.viewfortest as select id,name2 from > dbfortest.tablefortest'. > CURRENT BEHAVIOUR: > Alter view command fails with access denied error for user not having select > permissions on database dbfortest -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (RANGER-4764) GDS: When a dataset name is being modified, then the policy name of all the policies which is associated with dataset should also be updated.
[ https://issues.apache.org/jira/browse/RANGER-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anand Nadar updated RANGER-4764: Description: Suppose there exists a Dataset with name "Sales 2021". And it has a policy giving access to some user. Then the policy which is being created will have its name as {code:java} "DATASET: " + dataset.getName() + "@" + System.currentTimeMillis() {code} So the dataset name will be "DATASET: Sales 2021@1699530693847". This policy name will be seen in the access audits when enforcement is done using this policy. But now when the Dataset name is modified to "Sales US 2021", the policy name remains "DATASET: Sales 2021@1699530693847" and whenever policy enforcement is done, it will still show the policy name with old dataset name. Solution: To resolve this, we need to update all the policies associated with the dataset when the dataset name is being modified. Similarly this needs to be done for project policy as well. When the project name is modified, all policies associated with that project should be updated with the new project name. was: Suppose there exists a Dataset with name "Sales 2021". And it has a policy giving access to some user. Then the policy which is being created will have its name as {code:java} "DATASET: " + dataset.getName() + "@" + System.currentTimeMillis() {code} So the dataset name will be "DATASET: Sales 2021@1699530693847". This policy name will be seen in the access audits when enforcement is done using this policy. But now when the Dataset name is modified to "Sales US 2021", the policy name remains "DATASET: Sales 2021@1699530693847" and whenever policy enforcement is done, it will still show the policy name with old dataset name. Solution: To resolve this, we need to update all the policies associated with the dataset when the dataset name is being modified. > GDS: When a dataset name is being modified, then the policy name of all the > policies which is associated with dataset should also be updated. > -- > > Key: RANGER-4764 > URL: https://issues.apache.org/jira/browse/RANGER-4764 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Assignee: Anand Nadar >Priority: Major > > Suppose there exists a Dataset with name "Sales 2021". And it has a policy > giving access to some user. Then the policy which is being created will have > its name as > {code:java} > "DATASET: " + dataset.getName() + "@" + System.currentTimeMillis() {code} > So the dataset name will be "DATASET: Sales 2021@1699530693847". > This policy name will be seen in the access audits when enforcement is done > using this policy. > But now when the Dataset name is modified to "Sales US 2021", the policy name > remains "DATASET: Sales 2021@1699530693847" and whenever policy enforcement > is done, it will still show the policy name with old dataset name. > Solution: > To resolve this, we need to update all the policies associated with the > dataset when the dataset name is being modified. > Similarly this needs to be done for project policy as well. When the project > name is modified, all policies associated with that project should be updated > with the new project name. -- This message was sent by Atlassian Jira (v8.20.10#820010)
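To make the naming scheme above concrete, a small sketch in Python purely for illustration (the actual code is Java in the Ranger admin, e.g. GdsDBStore):

import time

def dataset_policy_name(dataset_name: str) -> str:
    # Same scheme as quoted above: "DATASET: " + name + "@" + current time in milliseconds
    return f"DATASET: {dataset_name}@{int(time.time() * 1000)}"

name_at_creation = dataset_policy_name("Sales 2021")
# e.g. "DATASET: Sales 2021@1699530693847"

# After the dataset is renamed to "Sales US 2021", the stored policy name still embeds
# the old dataset name, so access audits keep showing it until the associated policies
# are updated as part of the rename - which is what this task proposes.
print(name_at_creation)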
[jira] [Assigned] (RANGER-4764) GDS: When a dataset name is being modified, then the policy name of all the policies which is associated with dataset should also be updated.
[ https://issues.apache.org/jira/browse/RANGER-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anand Nadar reassigned RANGER-4764: --- Assignee: Anand Nadar > GDS: When a dataset name is being modified, then the policy name of all the > policies which is associated with dataset should also be updated. > -- > > Key: RANGER-4764 > URL: https://issues.apache.org/jira/browse/RANGER-4764 > Project: Ranger > Issue Type: Task > Components: admin >Reporter: Anand Nadar >Assignee: Anand Nadar >Priority: Major > > Suppose there exists a Dataset with name "Sales 2021". And it has a policy > giving access to some user. Then the policy which is being created will have > its name as > {code:java} > "DATASET: " + dataset.getName() + "@" + System.currentTimeMillis() {code} > So the dataset name will be "DATASET: Sales 2021@1699530693847". > This policy name will be seen in the access audits when enforcement is done > using this policy. > But now when the Dataset name is modified to "Sales US 2021", the policy name > remains "DATASET: Sales 2021@1699530693847" and whenever policy enforcement > is done, it will still show the policy name with old dataset name. > Solution: > To resolve this, we need to update all the policies associated with the > dataset when the dataset name is being modified. -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Created] (RANGER-4764) GDS: When a dataset name is being modified, then the policy name of all the policies which is associated with dataset should also be updated.
Anand Nadar created RANGER-4764: --- Summary: GDS: When a dataset name is being modified, then the policy name of all the policies which is associated with dataset should also be updated. Key: RANGER-4764 URL: https://issues.apache.org/jira/browse/RANGER-4764 Project: Ranger Issue Type: Task Components: admin Reporter: Anand Nadar Suppose there exists a Dataset with name "Sales 2021". And it has a policy giving access to some user. Then the policy which is being created will have its name as {code:java} "DATASET: " + dataset.getName() + "@" + System.currentTimeMillis() {code} So the dataset name will be "DATASET: Sales 2021@1699530693847". This policy name will be seen in the access audits when enforcement is done using this policy. But now when the Dataset name is modified to "Sales US 2021", the policy name remains "DATASET: Sales 2021@1699530693847" and whenever policy enforcement is done, it will still show the policy name with old dataset name. Solution: To resolve this, we need to update all the policies associated with the dataset when the dataset name is being modified. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Review Request 74949: RANGER-4763: Send user-friendly message if Test connection is not implemented for a service definition
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/74949/ --- Review request for ranger, Asit Vadhavkar, Madhan Neethiraj, Monika Kachhadiya, Siddhesh Phatak, and Subhrat Chaudhary. Bugs: RANGER-4763 https://issues.apache.org/jira/browse/RANGER-4763 Repository: ranger Description --- Send user-friendly message if Test connection is not implemented for a service definition Diffs - agents-common/src/main/java/org/apache/ranger/plugin/service/RangerDefaultService.java c89b55757 security-admin/src/main/java/org/apache/ranger/biz/ServiceMgr.java 7e071ba0e security-admin/src/main/java/org/apache/ranger/common/TimedExecutor.java d6fc01176 Diff: https://reviews.apache.org/r/74949/diff/1/ Testing --- Validated "Test Connection" for a service definition that does not have an implClass; it returns the response below in this case. { "statusCode": 1, "msgDesc": "Configuration validation is not implemented for hive-service-1", "messageList": [ { "message": "Configuration validation is not implemented for hive-service-1" } ] } Thanks, Anand Nadar