You need to add them to your classpath in some way.  There are several ways to 
do this.
1) Include it in your jar (this is the simplest because it should work 
everywhere, and on all versions of storm).  I realize you don't want to do 
this.  We do it by having a stripped-down, generic version of the config that 
works in most places.
2) Update your classpath to point to it.  If the files are already on all of 
the nodes that storm will run on, you can add them to your classpath by 
setting the config `topology.classpath` to point to them.  This can be either 
a string or a list of strings.  It will come at the end of the classpath, 
after the storm classpath and your own topology jar.  (There is a short 
sketch of this after the list.)
3) Ship it through the distributed cache, combined with option 2.  If you 
don't have the files already installed on all of the nodes, you can ship them 
to the nodes using the storm distributed cache (versions 1.x+).  You will 
still need to update the classpath to point to them.  Typically you can just 
add ./ to the classpath, since symlinks to the files are created in the 
current working directory of the worker.  (See the second sketch below.)
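
For example, here is a minimal sketch of option 2.  The path 
/etc/hadoop/conf and the class and topology names are assumptions; use 
wherever your hdfs-site.xml and core-site.xml actually live on the worker 
nodes.

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.topology.TopologyBuilder;

    public class SubmitWithHadoopConfDir {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // ... register your spouts and bolts here ...

            Config conf = new Config();
            // topology.classpath takes a string or a list of strings and is
            // appended after the storm classpath and your topology jar.
            conf.put(Config.TOPOLOGY_CLASSPATH, "/etc/hadoop/conf");

            StormSubmitter.submitTopology("my-topology", conf,
                builder.createTopology());
        }
    }

You can also do this at submit time without touching code, e.g. 
`storm jar mytopology.jar com.example.Main -c topology.classpath=/etc/hadoop/conf`.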
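And a sketch of option 3, combining the distributed cache with the classpath 
trick.  The blob keys and topology name are made up, and the `storm 
blobstore` flags in the comments follow the 1.x distributed cache docs, so 
double-check them against your version.

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.topology.TopologyBuilder;

    public class SubmitWithCachedHadoopConf {
        public static void main(String[] args) throws Exception {
            // Upload the files once from a machine with the storm client, e.g.:
            //   storm blobstore create --file hdfs-site.xml --acl o::rwa hdfs-site.xml
            //   storm blobstore create --file core-site.xml --acl o::rwa core-site.xml

            Config conf = new Config();

            // Map each blob key to a symlink in the worker's working directory.
            Map<String, Map<String, Object>> blobs = new HashMap<>();
            for (String f : new String[] {"hdfs-site.xml", "core-site.xml"}) {
                Map<String, Object> opts = new HashMap<>();
                opts.put("localname", f);  // name of the symlink the worker sees
                opts.put("uncompress", false);
                blobs.put(f, opts);        // blob key == file name here
            }
            conf.put(Config.TOPOLOGY_BLOBSTORE_MAP, blobs);

            // "./" puts the worker's working directory, where the symlinks
            // are created, at the end of the classpath.
            conf.put(Config.TOPOLOGY_CLASSPATH, "./");

            TopologyBuilder builder = new TopologyBuilder();
            // ... register your spouts and bolts here ...
            StormSubmitter.submitTopology("my-topology", conf,
                builder.createTopology());
        }
    }
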
Or you could play games with downloading the files yourself and adding them 
to the classpath at runtime.  This is hard, and I do not recommend it.



- Bobby


On Wednesday, June 28, 2017, 8:32:03 AM CDT, Шатаев Илья Михайлович 
<[email protected]> wrote:

Hello, we are writing topologies that work with non-HA HDFS, but when we run 
them on a Hadoop cluster with HA HDFS, we get the following ERROR:
 
java.lang.IllegalArgumentException: java.net.UnknownHostException: sorm3-dev
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:411)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:311)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:688)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:629)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:159)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2761)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:99)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2795)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2777)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:386)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.hive.ql.io.orc.OrcRecordUpdater.<init>(OrcRecordUpdater.java:234)
    at org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat.getRecordUpdater(OrcOutputFormat.java:289)
    at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdater(AbstractRecordWriter.java:253)
    at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.createRecordUpdaters(AbstractRecordWriter.java:245)
    at org.apache.hive.hcatalog.streaming.AbstractRecordWriter.newBatch(AbstractRecordWriter.java:189)
    at org.apache.hive.hcatalog.streaming.StrictJsonWriter.newBatch(StrictJsonWriter.java:41)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:607)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$TransactionBatchImpl.<init>(HiveEndPoint.java:555)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatchImpl(HiveEndPoint.java:441)
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.fetchTransactionBatch(HiveEndPoint.java:421)
    at ru.mts.sorm.storm.hive.common.HiveWriter.lambda$nextTxnBatch$5(HiveWriter.java:250)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.UnknownHostException: sorm3-dev

But if we put hdfs-site.xml and core-site.xml into the topology jar, it works 
fine!

How can we read hdfs-site.xml and core-site.xml from the file system instead 
of including them in the jar?

Storm version: 1.0.1

Best regards,
I. Shataev
