Hi all, I would appreciate some insight into the error below, which occurs when attempting to stream data out to HDFS. The output operator is pasted after the stack trace.

2016-03-17 10:48:02,197 [8/DurabilityOut_HDHT:HdfsFileOutputOperator] ERROR engine.StreamingContainer run - Operator set [OperatorDeployInfo[id=8,name=DurabilityOut_HDHT,type=GENERIC,checkpoint={ffffffffffffffff, 0, 0},inputs=[OperatorDeployInfo.InputDeployInfo[portName=input,streamId=Records -:- Durability_HDHT,sourceNodeId=2,sourcePortName=recordHash,locality=<null>,partitionMask=0,partitionKeys=<null>]],outputs=[]]] stopped running due to an exception.
com.google.common.util.concurrent.UncheckedExecutionException: java.lang.NullPointerException
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2234)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.processTuple(AbstractFileOutputOperator.java:796)
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator$1.process(AbstractFileOutputOperator.java:270)
    at com.datatorrent.api.DefaultInputPort.put(DefaultInputPort.java:70)
    at com.datatorrent.stram.stream.BufferServerSubscriber$BufferReservoir.sweep(BufferServerSubscriber.java:265)
    at com.datatorrent.stram.engine.GenericNode.run(GenericNode.java:229)
    at com.datatorrent.stram.engine.StreamingContainer$2.run(StreamingContainer.java:1380)
Caused by: java.lang.NullPointerException
    at org.apache.hadoop.hdfs.DFSOutputStream.isLazyPersist(DFSOutputStream.java:1709)
    at org.apache.hadoop.hdfs.DFSOutputStream.getChecksum4Compute(DFSOutputStream.java:1550)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1560)
    at org.apache.hadoop.hdfs.DFSOutputStream.<init>(DFSOutputStream.java:1667)
    at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForAppend(DFSOutputStream.java:1694)
    at org.apache.hadoop.hdfs.DFSClient.callAppend(DFSClient.java:1824)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1885)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:1855)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:340)
    at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:336)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:348)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:318)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:1164)
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator.openStream(AbstractFileOutputOperator.java:641)
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator$2.load(AbstractFileOutputOperator.java:550)
    at com.datatorrent.lib.io.fs.AbstractFileOutputOperator$2.load(AbstractFileOutputOperator.java:504)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
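
From the trace, the NPE is thrown inside DFSClient.append() while the DFSOutputStream is being constructed, i.e. while AbstractFileOutputOperator.openStream() tries to reopen an existing file for append. In case it helps, here is a minimal standalone sketch of the kind of check I can run to confirm that the same site files resolve and that fs.defaultFS actually points at HDFS; the /etc/hadoop/conf paths below are placeholders for whatever coreSite and hdfsSite are set to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical standalone check, not part of the operator: confirm the site
// files resolve and that fs.defaultFS points at HDFS rather than file:///.
public class FsConfigCheck {
  public static void main(String[] args) throws Exception {
    Configuration config = new Configuration();
    config.addResource(new Path("/etc/hadoop/conf/core-site.xml")); // placeholder path
    config.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml")); // placeholder path
    System.out.println("fs.defaultFS = " + config.get("fs.defaultFS"));
    try (FileSystem fs = FileSystem.newInstance(config)) {
      System.out.println("FileSystem impl = " + fs.getClass().getName());
    }
  }
}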

package com.capitalone.vault8.citadel.operators.impl;

import com.datatorrent.api.Context;
import com.datatorrent.lib.io.fs.AbstractFileOutputOperator;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

/**
 * Writes each incoming String tuple to a per-partition output file on HDFS.
 * <p/>
 * <a href="mailto:[email protected]">Vault 8</a>
 *
 * @author Vault 8
 */
public class HdfsFileOutputOperator extends AbstractFileOutputOperator<String> {
  private static final Logger LOG = LoggerFactory.getLogger(HdfsFileOutputOperator.class);

  private static final String idDelim = "@";
  private String coreSite;
  private String hdfsSite;
  private String outputFileName;

  private transient int operatorId;

  // Because every operator may have multiple physical partitions, we organize
  // the data by partition.
  private transient String operatorUniquePath;

  public String getHdfsSite() {
    return hdfsSite;
  }

  public void setHdfsSite(String hdfsSite) {
    this.hdfsSite = hdfsSite;
  }

  public String getCoreSite() {
    return coreSite;
  }

  public void setCoreSite(String coreSite) {
    this.coreSite = coreSite;
  }

  // Set the actual name of the file on HDFS
  public void setOutputFileName(String outputFileName) {
    this.outputFileName = outputFileName;
  }

  // Get the name of the file on HDFS
  public String getOutputFileName() {
    return outputFileName;
  }

  // Override method to get the absolute path for the output
  @Override
  protected String getFileName(String tuple) {
    return operatorUniquePath;
  }

  @Override
  public void setup(Context.OperatorContext context) {
    super.setup(context);
    operatorId = context.getId();
    operatorUniquePath = new Path(getOutputFileName() + idDelim + operatorId + ".txt").toString();
  }

  /**
   * Provides the FileSystem instance for this operator, which lets us write to HDFS.
   */
  @Override
  protected FileSystem getFSInstance() throws IOException {
    Configuration config = new Configuration();
    // Load the cluster configuration from the supplied site files
    try {
      config.addResource(new Path(getCoreSite()));
      config.addResource(new Path(getHdfsSite()));
    } catch (IllegalArgumentException e) {
      LOG.warn("Tried to create an empty path ", e);
    }

    FileSystem tempFS = FileSystem.newInstance(new Path(filePath).toUri(), config);

    if (tempFS instanceof LocalFileSystem) {
      tempFS = ((LocalFileSystem) tempFS).getRaw();
    }

    return tempFS;
  }


  @Override
  protected byte[] getBytesForTuple(String tuple) {
    // Use an explicit charset rather than the platform default encoding.
    return tuple.getBytes(StandardCharsets.UTF_8);
  }
}
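
For context, the operator is wired into the DAG roughly as follows. This is only a sketch: the upstream operator, paths, and values are illustrative placeholders rather than the exact application code (the real stream is Records -:- Durability_HDHT per the deploy info above):

// Hypothetical wiring sketch inside StreamingApplication.populateDAG():
HdfsFileOutputOperator out = dag.addOperator("DurabilityOut_HDHT", new HdfsFileOutputOperator());
out.setFilePath("hdfs://namenode:8020/user/vault8/output"); // base directory (assumed)
out.setOutputFileName("records");                           // written as records@<operatorId>.txt
out.setCoreSite("/etc/hadoop/conf/core-site.xml");          // placeholder path
out.setHdfsSite("/etc/hadoop/conf/hdfs-site.xml");          // placeholder path
dag.addStream("Records", upstream.recordHash, out.input);   // recordHash = upstream output port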