[
https://issues.apache.org/jira/browse/HDDS-1333?focusedWorklogId=221339&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-221339
]
ASF GitHub Bot logged work on HDDS-1333:
----------------------------------------
Author: ASF GitHub Bot
Created on: 01/Apr/19 15:56
Start Date: 01/Apr/19 15:56
Worklog Time Spent: 10m
Work Description: elek commented on pull request #653: HDDS-1333.
OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security
classes
URL: https://github.com/apache/hadoop/pull/653#discussion_r270937426
##########
File path: hadoop-ozone/dist/dev-support/bin/dist-layout-stitching
##########
@@ -114,6 +114,7 @@ run cp
"${ROOT}/hadoop-ozone/objectstore-service/target/hadoop-ozone-objectstore
cp -r "${ROOT}/hadoop-hdds/docs/target/classes/docs" ./
#Copy docker compose files
-run cp -p -R "${ROOT}/hadoop-ozone/dist/src/main/compose" .
+#compose files are preprocessed: properties (eg. project.version) are replaced
first by maven.
+run cp -p -R "${ROOT}/hadoop-ozone/dist/target/compose" .
Review comment:
Sure, it's a good question.
Until now we used a wildcard in the docker-compose.yaml to define the
classpath, independently of the current version:
```
HADOOP_CLASSPATH:
/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current*.jar
```
Unfortunately this didn't always work: when I executed a command from
inside the container (after `docker-compose exec scm`), the jar file was not
added to the classpath.
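A minimal illustration of one way such a wildcard can silently fail (paths are the ones from the compose snippet above, but the demo itself is illustrative, not taken from the Ozone scripts): a glob stored in an environment variable is kept as a literal string and is not expanded at assignment time or when the variable is quoted, so a consumer that does not glob-expand the value sees the literal `*`.

```shell
# A glob in an environment-variable value stays a literal string;
# the shell only expands it in an unquoted command position.
HADOOP_CLASSPATH='/opt/ozone/share/ozone/lib/hadoop-ozone-filesystem-lib-current*.jar'
echo "$HADOOP_CLASSPATH"
# prints the literal pattern, asterisk included
```

Whether the pattern is eventually expanded therefore depends on how each launcher script consumes `HADOOP_CLASSPATH`, which is why an exact jar name is more robust.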
I modified the docker-compose file to reference the exact jar file. To keep
it version independent, instead of a simple copy I now replace the current
version with Maven (resource filtering) and copy the preprocessed version to
the final destination with the dist-layout-stitching script.
Old version: src/main/compose ---[copy with dist-layout-stitching]--->
destination dir
New version: src/main/compose ---[copy and replace version with Maven]--->
target/compose ---[dist-layout-stitching]---> destination dir
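The two-step flow above can be sketched in shell. This is a hedged stand-in, not the real build: the sample file, directory names, and version value are illustrative, and in the actual build Maven's resource filtering performs the substitution (here mimicked with `sed`), while dist-layout-stitching does the final copy.

```shell
# Sample compose file containing a Maven-style placeholder (illustrative).
mkdir -p src/main/compose target/compose dist/compose
printf 'image: ozone:${project.version}\n' > src/main/compose/docker-compose.yaml

VERSION="0.4.0-SNAPSHOT"   # illustrative; Maven injects the real value

# Step 1 (stand-in for Maven resource filtering): replace the placeholder,
# writing the preprocessed file to target/compose.
sed "s/\${project.version}/$VERSION/g" \
  src/main/compose/docker-compose.yaml > target/compose/docker-compose.yaml

# Step 2 (dist-layout-stitching): plain copy of the preprocessed files
# into the final distribution layout.
cp -p -R target/compose/. dist/compose/
```

After both steps, dist/compose/docker-compose.yaml references the concrete version instead of a placeholder or wildcard.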
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 221339)
Time Spent: 4h (was: 3h 50m)
> OzoneFileSystem can't work with spark/hadoop2.7 because incompatible security
> classes
> -------------------------------------------------------------------------------------
>
> Key: HDDS-1333
> URL: https://issues.apache.org/jira/browse/HDDS-1333
> Project: Hadoop Distributed Data Store
> Issue Type: Bug
> Reporter: Elek, Marton
> Assignee: Elek, Marton
> Priority: Major
> Labels: pull-request-available
> Time Spent: 4h
> Remaining Estimate: 0h
>
> The current ozonefs compatibility layer is broken by: HDDS-1299.
> The spark jobs (including hadoop 2.7) can't be executed any more:
> {code}
> 2019-03-25 09:50:08 INFO StateStoreCoordinatorRef:54 - Registered
> StateStoreCoordinator endpoint
> Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/hadoop/crypto/key/KeyProviderTokenIssuer
> at java.lang.ClassLoader.defineClass1(Native Method)
> at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
> at
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
> at java.net.URLClassLoader.defineClass(URLClassLoader.java:468)
> at java.net.URLClassLoader.access$100(URLClassLoader.java:74)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:369)
> at java.net.URLClassLoader$1.run(URLClassLoader.java:363)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:362)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at java.lang.Class.forName0(Native Method)
> at java.lang.Class.forName(Class.java:348)
> at
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
> at
> org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
> at
> org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
> at
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
> at
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
> at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
> at
> org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:45)
> at
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:332)
> at
> org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
> at
> org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
> at
> org.apache.spark.sql.DataFrameReader.text(DataFrameReader.scala:715)
> at
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:757)
> at
> org.apache.spark.sql.DataFrameReader.textFile(DataFrameReader.scala:724)
> at org.apache.spark.examples.JavaWordCount.main(JavaWordCount.java:45)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
> at
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
> at
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.crypto.key.KeyProviderTokenIssuer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> ... 43 more
> {code}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]