[ https://issues.apache.org/jira/browse/STORM-876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15007408#comment-15007408 ]
ASF GitHub Bot commented on STORM-876:
--------------------------------------

Github user knusbaum commented on a diff in the pull request:

    https://github.com/apache/storm/pull/845#discussion_r44989185

    --- Diff: external/storm-hdfs/src/main/java/org/apache/storm/hdfs/blobstore/HdfsBlobStore.java ---
    @@ -0,0 +1,381 @@
    +/**
    + * Licensed to the Apache Software Foundation (ASF) under one
    + * or more contributor license agreements. See the NOTICE file
    + * distributed with this work for additional information
    + * regarding copyright ownership. The ASF licenses this file
    + * to you under the Apache License, Version 2.0 (the
    + * "License"); you may not use this file except in compliance
    + * with the License. You may obtain a copy of the License at
    + *
    + *   http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.storm.hdfs.blobstore;
    +
    +import backtype.storm.Config;
    +import backtype.storm.blobstore.AtomicOutputStream;
    +import backtype.storm.blobstore.BlobStore;
    +import backtype.storm.blobstore.BlobStoreAclHandler;
    +import backtype.storm.blobstore.BlobStoreFile;
    +import backtype.storm.blobstore.InputStreamWithMeta;
    +import backtype.storm.generated.AuthorizationException;
    +import backtype.storm.generated.KeyNotFoundException;
    +import backtype.storm.generated.KeyAlreadyExistsException;
    +import backtype.storm.generated.ReadableBlobMeta;
    +import backtype.storm.generated.SettableBlobMeta;
    +import backtype.storm.nimbus.NimbusInfo;
    +import backtype.storm.utils.NimbusClient;
    +import backtype.storm.utils.Utils;
    +import org.apache.hadoop.conf.Configuration;
    +import org.apache.hadoop.fs.Path;
    +import org.apache.hadoop.security.UserGroupInformation;
    +import org.apache.thrift7.TBase;
    +import org.slf4j.Logger;
    +import org.slf4j.LoggerFactory;
    +
    +import javax.security.auth.Subject;
    +import java.io.ByteArrayOutputStream;
    +import java.io.FileNotFoundException;
    +import java.io.IOException;
    +import java.io.InputStream;
    +import java.security.AccessController;
    +import java.security.PrivilegedAction;
    +import java.util.Iterator;
    +import java.util.Map;
    +
    +import static backtype.storm.blobstore.BlobStoreAclHandler.ADMIN;
    +import static backtype.storm.blobstore.BlobStoreAclHandler.READ;
    +import static backtype.storm.blobstore.BlobStoreAclHandler.WRITE;
    +
    +/**
    + * Provides a HDFS file system backed blob store implementation.
    + * Note that this provides an api for having HDFS be the backing store for the blobstore,
    + * it is not a service/daemon.
    + */
    +public class HdfsBlobStore extends BlobStore {
    +  public static final Logger LOG = LoggerFactory.getLogger(HdfsBlobStore.class);
    +  private static final String DATA_PREFIX = "data_";
    +  private static final String META_PREFIX = "meta_";
    +  private BlobStoreAclHandler _aclHandler;
    +  private HdfsBlobStoreImpl _hbs;
    +  private Subject _localSubject;
    +  private Map conf;
    +
    +  /*
    +   * Get the subject from Hadoop so we can use it to validate the acls. There is no direct
    +   * interface from UserGroupInformation to get the subject, so do a doAs and get the context.
    +   * We could probably run everything in the doAs but for now just grab the subject.
    +   */
    --- End diff --
    
    Can we turn this into an actual javadoc-style comment?

> Dist Cache: Basic Functionality
> -------------------------------
>
>                 Key: STORM-876
>                 URL: https://issues.apache.org/jira/browse/STORM-876
>             Project: Apache Storm
>          Issue Type: Improvement
>          Components: storm-core
>            Reporter: Robert Joseph Evans
>            Assignee: Robert Joseph Evans
>        Attachments: DISTCACHE.md, DistributedCacheDesignDocument.pdf
>
>
> Basic functionality for the Dist Cache feature.
> As part of this a new API should be added to support uploading and downloading dist cache items. storm-core.ser, storm-conf.ser and storm.jar should be written into the blob store instead of residing locally. We need a default implementation of the blob store that does essentially what nimbus currently does and does not need anything extra. But having an HDFS backend too would be great for scalability and HA.
> The supervisor should provide a way to download and manage these blobs and provide a working directory for the worker process with symlinks to the blobs. It should also allow the blobs to be updated and switch the symlink atomically to point to the new blob once it is downloaded.
> All of this is already done by code internal to Yahoo!; we are in the process of getting it ready to push back to open source shortly.
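For reference, the javadoc-style version the reviewer asks for could be as small as changing the comment opener from `/*` to `/**` (a sketch of the suggested change; the wording actually committed to the PR may differ):

```java
/**
 * Get the subject from Hadoop so we can use it to validate the acls. There is no direct
 * interface from UserGroupInformation to get the subject, so do a doAs and get the context.
 * We could probably run everything in the doAs but for now just grab the subject.
 */
```

The extra asterisk lets javadoc and IDE tooling pick the comment up as documentation for the member that follows it.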
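The atomic symlink switch described in the issue (point workers at a new blob without readers ever seeing a missing link) can be done with standard Java NIO: create a fresh symlink at a temporary path, then rename it over the existing one. A minimal sketch, not the Storm supervisor's actual code; all names here are illustrative, and POSIX rename semantics are assumed:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicSymlinkSwap {
    /**
     * Atomically repoints 'link' at 'newTarget'. Readers see either the old
     * or the new target, never a missing link, because the new symlink is
     * built at a temporary path and then renamed over the existing one.
     */
    public static void swap(Path link, Path newTarget) throws IOException {
        Path tmp = link.resolveSibling(link.getFileName() + ".tmp");
        Files.deleteIfExists(tmp);               // clear any stale temp link
        Files.createSymbolicLink(tmp, newTarget);
        // On POSIX filesystems rename(2) atomically replaces the destination.
        Files.move(tmp, link, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("blobs");
        Path v1 = Files.createFile(dir.resolve("blob_v1"));
        Path v2 = Files.createFile(dir.resolve("blob_v2"));
        Path link = dir.resolve("current");
        Files.createSymbolicLink(link, v1);      // workers follow 'current'
        swap(link, v2);                          // download finished: switch
        System.out.println(Files.readSymbolicLink(link).equals(v2));
    }
}
```

`Files.move` on a symlink moves the link itself rather than its target, which is exactly what the swap needs; the downloaded blob files are never touched.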
-- This message was sent by Atlassian JIRA (v6.3.4#6332)