Couldn't you set topology.classpath to include the location of those files? They would have to be installed in the same location on every worker node.
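For example, a minimal sketch of what that might look like in storm.yaml on each worker (the /etc paths below are assumptions; substitute wherever your cluster keeps its Hadoop and HBase config directories):

```yaml
# storm.yaml on each worker node (paths are hypothetical)
# Directories containing hadoop-site.xml / hbase-site.xml are prepended
# to the worker's classpath, so the topology picks up the local configs.
topology.classpath: "/etc/hadoop/conf:/etc/hbase/conf"
```

It can also be set per topology at submit time (e.g. via Config.TOPOLOGY_CLASSPATH), if you'd rather not change the cluster-wide storm.yaml.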
From: Noam Cohen <noam.co...@acuityads.com>
Reply-To: user@storm.apache.org
Date: Monday, October 20, 2014 at 9:07 AM
To: user@storm.apache.org
Subject: Hadoop and HBase configuration files location (inside the topology jar?)

Hey guys,

Our topology writes data into both HBase and HDFS. Because of that, it has to find the location of the Hadoop/HBase masters in the "hadoop-site.xml" and "hbase-site.xml" configuration files.

For the non-Storm applications we run, we simply make sure to load those files from the application's classpath. That way we can run the same program in both our lab and our production environment and have it locate the right master servers based on the configuration files.

However, we couldn't find a way to do the same for Storm. We are forced to place the files inside the topology's JAR file, and hence build two different JARs: one for production and one for lab.

Is there a way to make Storm load those files from the worker servers' local disks instead of packing them inside the topology?

Thanks!
Noam