apurtell commented on a change in pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#discussion_r728434301



##########
File path: hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##########
@@ -123,4 +137,42 @@ static int getBufferSize(Configuration conf) {
     return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static LoadingCache<Configuration,byte[]> CACHE = CacheBuilder.newBuilder()
+    .maximumSize(100)
+    .expireAfterAccess(1, TimeUnit.HOURS)
+    .build(
+      new CacheLoader<Configuration,byte[]>() {
+        public byte[] load(Configuration conf) throws Exception {
+          final String s = conf.get(ZSTD_DICTIONARY_FILE_KEY);
+          if (s == null) {
+            throw new IllegalArgumentException(ZSTD_DICTIONARY_FILE_KEY + " is not set");
+          }
+          final Path p = new Path(s);
+          final ByteArrayOutputStream baos = new ByteArrayOutputStream();
+          final byte[] buffer = new byte[8192];
+          try (final FSDataInputStream in = FileSystem.get(p.toUri(), conf).open(p)) {

Review comment:
      Yes. If there is a size limit and it is exceeded, the codec load should be rejected, probably by throwing a RuntimeException.
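
      A minimal sketch of how such a limit check could look inside the dictionary read loop. The `hbase.io.compress.zstd.dictionary.max.size` key, its 10 MB default, and the `loadDictionary` helper are illustrative assumptions, not code from this PR:

      ```java
      import java.io.ByteArrayOutputStream;
      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FSDataInputStream;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;

      final class DictionaryLimitSketch {

        // Hypothetical key and default for illustration only; not defined in this PR.
        static final String ZSTD_DICTIONARY_MAX_SIZE_KEY =
            "hbase.io.compress.zstd.dictionary.max.size";
        static final int DEFAULT_MAX_DICTIONARY_SIZE = 10 * 1024 * 1024; // 10 MB

        static byte[] loadDictionary(final Configuration conf, final Path p) throws IOException {
          final int maxSize = conf.getInt(ZSTD_DICTIONARY_MAX_SIZE_KEY,
              DEFAULT_MAX_DICTIONARY_SIZE);
          final ByteArrayOutputStream baos = new ByteArrayOutputStream();
          final byte[] buffer = new byte[8192];
          try (final FSDataInputStream in = FileSystem.get(p.toUri(), conf).open(p)) {
            int n;
            int total = 0;
            while ((n = in.read(buffer)) > 0) {
              total += n;
              if (total > maxSize) {
                // Reject the codec load instead of buffering an arbitrarily large file.
                throw new RuntimeException("Dictionary " + p + " exceeds the configured limit of "
                    + maxSize + " bytes");
              }
              baos.write(buffer, 0, n);
            }
          }
          return baos.toByteArray();
        }
      }
      ```

      Thrown from inside the CacheLoader, the exception would surface when the codec first requests the dictionary, so a misconfigured or oversized dictionary file fails fast rather than being silently buffered in memory.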



