A-little-bit-of-data opened a new issue, #6650:
URL: https://github.com/apache/kyuubi/issues/6650

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the 
[issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Describe the bug
   
   The Ranger documentation no longer describes the following audit-log configuration:

   ```xml
   <property>
       <name>xasecure.audit.destination.db.jdbc.driver</name>
       <value>com.mysql.jdbc.Driver</value>
   </property>

   <property>
       <name>xasecure.audit.destination.db.jdbc.url</name>
       <value>jdbc:mysql://10.171.161.78/ranger</value>
   </property>
   ```

   Only very old Ranger versions could write audit logs to a database. I compiled version 2.4.0, which only offers Elasticsearch or Solr as audit destinations. I chose Elasticsearch and added some related jars, but the plugin still reports an error. The added jars are:
   elasticsearch-7.10.2.jar
   elasticsearch-cli-7.10.2.jar
   elasticsearch-core-7.10.2.jar
   elasticsearch-geo-7.10.2.jar
   elasticsearch-rest-client-7.15.2.jar
   elasticsearch-rest-high-level-client-7.15.2.jar
   elasticsearch-secure-sm-7.10.2.jar
   elasticsearch-x-content-7.10.2.jar
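   A quick way to check which of these jars (if any) actually contains the class named in the `NoClassDefFoundError` from the log below is to scan their entries. This is a minimal sketch; the `/opt/spark/jars` path in the example call is an assumption about the image layout:

   ```python
   # Sketch: scan a directory of jars for the class the JVM failed to load.
   import zipfile
   from pathlib import Path

   def find_class_jar(jar_dir: str, class_entry: str) -> list[str]:
       """Return the names of jars under jar_dir that contain class_entry."""
       hits = []
       for jar in sorted(Path(jar_dir).glob("*.jar")):
           with zipfile.ZipFile(jar) as zf:
               # Class files are stored as zip entries, e.g.
               # org/elasticsearch/action/search/OpenPointInTimeRequest.class
               if class_entry in zf.namelist():
                   hits.append(jar.name)
       return hits

   # Example (path is illustrative):
   # find_class_jar("/opt/spark/jars",
   #                "org/elasticsearch/action/search/OpenPointInTimeRequest.class")
   ```

   If no jar in the list contains the entry, the class genuinely isn't on the classpath; note also that the list above mixes 7.10.2 and 7.15.2 artifacts.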
   
   ### Affects Version(s)
   
   1.9.1
   
   ### Kyuubi Server Log Output
   
    ```
   24/08/30 19:44:28 INFO SparkContext: Added JAR 
file:/tmp/spark-dc96f564-c20c-46e4-bdc4-aba523c06acc/kyuubi-spark-sql-engine_2.12-1.9.1.jar
 at spark://sparksql-XXXX.svc:7078/jars/kyuubi-spark-sql-engine_2.12-1.9.1.jar 
with timestamp 1725018267113
   
    24/08/30 19:44:28 INFO SparkKubernetesClientFactory: Auto-configuring K8S 
client using current context from users K8S config file
   
    24/08/30 19:44:30 INFO ExecutorPodsAllocator: Going to request 1 executors 
from Kubernetes for ResourceProfile Id: 0, target: 1, known: 0, 
sharedSlotFromPendingPods: 2147483647.
   
    24/08/30 19:44:30 INFO ExecutorPodsAllocator: Found 0 reusable PVCs from 0 
PVCs
   
    24/08/30 19:44:30 INFO KubernetesClientUtils: Spark configuration files 
loaded from Some(/opt/spark/conf) : 
ranger-spark-security.xml,ranger-spark-audit.xml
   
    24/08/30 19:44:30 INFO KubernetesClientUtils: Spark configuration files 
loaded from Some(/opt/spark/conf) : 
ranger-spark-security.xml,ranger-spark-audit.xml
   
    24/08/30 19:44:30 INFO BasicExecutorFeatureStep: Decommissioning not 
enabled, skipping shutdown script
   
    24/08/30 19:44:30 INFO Utils: Successfully started service 
'org.apache.spark.network.netty.NettyBlockTransferService' on port 7079.
   
    24/08/30 19:44:30 INFO NettyBlockTransferService: Server created on 
sparksql-XXXX.svc 172.20.255.3:7079
   
    24/08/30 19:44:30 INFO BlockManager: Using 
org.apache.spark.storage.RandomBlockReplicationPolicy for block replication 
policy
   
    24/08/30 19:44:30 INFO BlockManagerMaster: Registering BlockManager 
BlockManagerId(driver, sparksql-cc3c4591a319ac82-driver-svc.xxxx.svc, 7079, 
None)
   
    24/08/30 19:44:30 INFO BlockManagerMasterEndpoint: Registering block 
manager sparksql-cc3c4591a319ac82-driver-svc.xxxx.svc:7079 with 413.9 MiB RAM, 
BlockManagerId(driver, sparksql-cc3c4591a319ac82-driver-svc.xxxx.svc, 7079, 
None)
   
    24/08/30 19:44:30 INFO BlockManagerMaster: Registered BlockManager 
BlockManagerId(driver, sparksql-cc3c4591a319ac82-driver-svc.xxxx.svc, 7079, 
None)
   
    24/08/30 19:44:30 INFO BlockManager: Initialized BlockManager: 
BlockManagerId(driver, sparksql-cc3c4591a319ac82-driver-svc.xxxx.svc, 7079, 
None)
   
    24/08/30 19:44:34 INFO 
KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: No executor found 
for 172.20.255.95:59794
   
    24/08/30 19:44:34 INFO 
KubernetesClusterSchedulerBackend$KubernetesDriverEndpoint: Registered executor 
NettyRpcEndpointRef(spark-client://Executor) (172.20.255.95:59802) with ID 1,  
ResourceProfileId 0
   
    24/08/30 19:44:34 INFO KubernetesClusterSchedulerBackend: SchedulerBackend 
is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
   
    24/08/30 19:44:34 INFO BlockManagerMasterEndpoint: Registering block 
manager 172.20.255.95:39933 with 413.9 MiB RAM, BlockManagerId(1, 
172.20.255.95, 39933, None)
   
    24/08/30 19:44:34 INFO RangerConfiguration: 
addResourceIfReadable(ranger-spark-audit.xml): resource file is 
file:/opt/spark/conf/ranger-spark-audit.xml
   
    24/08/30 19:44:34 INFO RangerConfiguration: 
addResourceIfReadable(ranger-spark-security.xml): resource file is 
file:/opt/spark/conf/ranger-spark-security.xml
   
    24/08/30 19:44:34 ERROR RangerConfiguration: 
addResourceIfReadable(ranger-spark-policymgr-ssl.xml): couldn't find resource 
file location
   
    24/08/30 19:44:34 ERROR RangerConfiguration: 
addResourceIfReadable(ranger-spark-sparksql-audit.xml): couldn't find resource 
file location
   
    24/08/30 19:44:34 ERROR RangerConfiguration: 
addResourceIfReadable(ranger-spark-sparksql-security.xml): couldn't find 
resource file location
   
    24/08/30 19:44:34 ERROR RangerConfiguration: 
addResourceIfReadable(ranger-spark-sparksql-policymgr-ssl.xml): couldn't find 
resource file location
   
    24/08/30 19:44:34 INFO RangerPluginConfig: 
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AuditProviderFactory: 
creating..
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AuditProviderFactory: 
initializing..
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.is.enabled=true
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
ranger.plugin.spark.policy.cache.dir=/opt/spark/policycache
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.destination.elasticsearch.password=elasticsearch
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.destination.elasticsearch.port=9200
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.destination.elasticsearch.urls=es01.xxxx
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
ranger.plugin.spark.policy.rest.url=http://ranger-admin.xxxx:6080
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.destination.elasticsearch.index=ranger_audits
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.destination.elasticsearch.user=elastic
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
ranger.plugin.spark.policy.pollIntervalMs=5000
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
ranger.plugin.spark.service.name=sparksql
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
ranger.plugin.spark.policy.source.impl=org.apache.ranger.admin.client.RangerAdminRESTClient
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.destination.elasticsearch=true
   
    24/08/30 19:44:34 INFO AuditProviderFactory: AUDIT PROPERTY: 
xasecure.audit.destination.elasticsearch.protocol=http
   
    24/08/30 19:44:34 INFO AuditProviderFactory: Audit destination 
xasecure.audit.destination.elasticsearch is set to true
   
    24/08/30 19:44:34 INFO AuditDestination: AuditDestination() enter
   
    24/08/30 19:44:34 INFO BaseAuditHandler: BaseAuditProvider.init()
   
    24/08/30 19:44:34 INFO BaseAuditHandler: 
propPrefix=xasecure.audit.destination.elasticsearch
   
    24/08/30 19:44:34 INFO BaseAuditHandler: Using providerName from property 
prefix. providerName=elasticsearch
   
    24/08/30 19:44:34 INFO BaseAuditHandler: providerName=elasticsearch
   
    24/08/30 19:44:34 INFO ElasticSearchAuditDestination: Connecting to 
ElasticSearch: User:elastic, http://es01.xxxx:9200/ranger_audits
   
    24/08/30 19:44:34 ERROR ElasticSearchAuditDestination: Can't connect to 
ElasticSearch server: User:elastic, http://es01.xxxx:9200/ranger_audits
   
    java.lang.NoClassDefFoundError: 
org/elasticsearch/action/search/OpenPointInTimeRequest
   
        at 
org.apache.ranger.audit.destination.ElasticSearchAuditDestination.newClient(ElasticSearchAuditDestination.java:261)
   
        at 
org.apache.ranger.audit.destination.ElasticSearchAuditDestination.getClient(ElasticSearchAuditDestination.java:187)
   
        at 
org.apache.ranger.audit.destination.ElasticSearchAuditDestination.init(ElasticSearchAuditDestination.java:101)
   
        at 
org.apache.ranger.audit.provider.AuditProviderFactory.init(AuditProviderFactory.java:183)
   
        at 
org.apache.ranger.plugin.service.RangerBasePlugin.init(RangerBasePlugin.java:234)
   
        at 
org.apache.kyuubi.plugin.spark.authz.ranger.SparkRangerAdminPlugin$.initialize(SparkRangerAdminPlugin.scala:68)
   
        at 
org.apache.kyuubi.plugin.spark.authz.ranger.RangerSparkExtension.<init>(RangerSparkExtension.scala:44)
   
        at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native
 Method)
   
        at 
java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(Unknown
 Source)
   
        at 
java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown
 Source)
   
        at java.base/java.lang.reflect.Constructor.newInstance(Unknown Source)
   
        at 
org.apache.spark.sql.SparkSession$.$anonfun$applyExtensions$2(SparkSession.scala:1368)
   
        at 
org.apache.spark.sql.SparkSession$.$anonfun$applyExtensions$2$adapted(SparkSession.scala:1365)
   
        at 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
   
        at 
scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
   
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
   
        at 
org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$applyExtensions(SparkSession.scala:1365)
   
        at 
org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:1104)
   
        at 
org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:329)
   
        at 
org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:407)
   
        at 
org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
   
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   
        at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
   
        at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown 
Source)
   
        at java.base/java.lang.reflect.Method.invoke(Unknown Source)
   
        at 
org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
   
        at 
org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1029)
   
        at 
org.apache.spark.deploy.SparkSubmit.$anonfun$submit$2(SparkSubmit.scala:169)
   
        at 
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62)
   
        at 
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61)
   
        at java.base/java.security.AccessController.doPrivileged(Native Method)
   
        at java.base/javax.security.auth.Subject.doAs(Unknown Source)
   
        at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
   
        at 
org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61)
   
        at 
org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:169)
   
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:217)
   
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
   
        at 
org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1120)
   
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1129)
   
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
   
    Caused by: java.lang.ClassNotFoundException: 
org.elasticsearch.action.search.OpenPointInTimeRequest
   
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(Unknown 
Source)
   
        at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(Unknown 
Source)
   
        at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
        ... 40 more
    24/08/30 19:44:34 INFO AuditProviderFactory: 
xasecure.audit.destination.elasticsearch.queue is not set. Setting queue to 
batch for elasticsearch
   
    24/08/30 19:44:34 INFO AuditProviderFactory: queue for elasticsearch is 
batch
   
    24/08/30 19:44:34 INFO AuditQueue: BaseAuditProvider.init()
   
    24/08/30 19:44:34 INFO BaseAuditHandler: BaseAuditProvider.init()
   
    24/08/30 19:44:34 INFO BaseAuditHandler: 
propPrefix=xasecure.audit.destination.elasticsearch.batch
   
    24/08/30 19:44:34 INFO BaseAuditHandler: providerName=batch
   ```
   
   
   ### Kyuubi Engine Log Output
   
   _No response_
   
   ### Kyuubi Server Configurations
   
   _No response_
   
   ### Kyuubi Engine Configurations
   
   _No response_
   
   ### Additional context
   
   ranger-spark-audit.xml:

   ```xml
   <configuration>

       <property>
           <name>xasecure.audit.is.enabled</name>
           <value>true</value>
       </property>

       <property>
           <name>xasecure.audit.destination.elasticsearch</name>
           <value>true</value>
       </property>

       <property>
           <name>xasecure.audit.destination.elasticsearch.urls</name>
           <value>es01.xxxx</value>
       </property>

       <property>
           <name>xasecure.audit.destination.elasticsearch.user</name>
           <value>elastic</value>
       </property>

       <property>
           <name>xasecure.audit.destination.elasticsearch.password</name>
           <value>elasticsearch</value>
       </property>

       <property>
           <name>xasecure.audit.destination.elasticsearch.index</name>
           <value>ranger_audits</value>
       </property>

       <property>
           <name>xasecure.audit.destination.elasticsearch.port</name>
           <value>9200</value>
       </property>

       <property>
           <name>xasecure.audit.destination.elasticsearch.protocol</name>
           <value>http</value>
       </property>

   </configuration>
   ```
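   The `ElasticSearchAuditDestination` log line above ("Connecting to ElasticSearch: ... http://es01.xxxx:9200/ranger_audits") shows how these properties are combined into the connection URL. A small sketch of that assembly, using the placeholder values from the config; the fallback defaults here are assumptions, not confirmed Ranger behavior:

   ```python
   # Assemble the Elasticsearch audit URL from the xasecure.audit.* properties,
   # matching the URL format that ElasticSearchAuditDestination logs.
   def audit_es_url(props: dict) -> str:
       prefix = "xasecure.audit.destination.elasticsearch"
       return "{}://{}:{}/{}".format(
           props.get(f"{prefix}.protocol", "http"),   # default is a guess
           props[f"{prefix}.urls"],
           props.get(f"{prefix}.port", "9200"),       # default is a guess
           props.get(f"{prefix}.index", "ranger_audits"),
       )

   # Placeholder values from ranger-spark-audit.xml above:
   props = {
       "xasecure.audit.destination.elasticsearch.urls": "es01.xxxx",
       "xasecure.audit.destination.elasticsearch.port": "9200",
       "xasecure.audit.destination.elasticsearch.protocol": "http",
       "xasecure.audit.destination.elasticsearch.index": "ranger_audits",
   }
   print(audit_es_url(props))  # http://es01.xxxx:9200/ranger_audits
   ```

   The assembled URL matches the one in the error, so the configuration itself is being read correctly; the failure is in loading the client classes, not in the settings.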
   
   
   Does the Kyuubi Spark AuthZ (Ranger) plugin support writing audit logs to Elasticsearch? If so, what else do I need to configure, and which Elasticsearch-related jars do I need to add?
   
   ### Are you willing to submit PR?
   
   - [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi 
community to fix.
   - [X] No. I cannot submit a PR at this time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

