[ https://issues.apache.org/jira/browse/STORM-561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14533580#comment-14533580 ]

ASF GitHub Bot commented on STORM-561:
--------------------------------------

Github user HeartSaVioR commented on the pull request:

    https://github.com/apache/storm/pull/546#issuecomment-100045226
  
    I followed the simple_hbase example using the properties filter, and it failed to run.
    
    Running with --local, it prints:
    ```
    Received topology submission for hbase-persistent-wordcount with conf {"topology.max.task.parallelism" nil, "topology.acker.executors" nil, "topology.kryo.register" nil, "topology.kryo.decorators" (), "topology.name" "hbase-persistent-wordcount", "storm.id" "hbase-persistent-wordcount-1-1431039734", "hbase.conf" {"hbase.rootdir" "hdfs://<hdfs>:8020/hbase", "hbase.zookeeper.quorum" "<zk>:2181"}, "topology.workers" 1}
    ```
    But HBaseBolt throws a TableNotFoundException:
    ```
    7124 [Thread-11-hbase-bolt] WARN  org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - Encountered problems when prefetch hbase:meta table:
    org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in hbase:meta for table: WordCount, row=WordCount,nathan,99999999999999
        at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:146) ~[flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1159) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1223) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1111) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1068) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.AsyncProcess.findDestLocation(AsyncProcess.java:365) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:507) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.AsyncProcess.submitAll(AsyncProcess.java:476) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.processBatchCallback(HConnectionManager.java:2355) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.HTable.batchCallback(HTable.java:835) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:814) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.storm.hbase.common.HBaseClient.batchMutate(HBaseClient.java:100) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at org.apache.storm.hbase.bolt.HBaseBolt.execute(HBaseBolt.java:63) [flux-examples-0.11.0-SNAPSHOT.jar:0.11.0-SNAPSHOT]
        at backtype.storm.daemon.executor$fn__4722$tuple_action_fn__4724.invoke(executor.clj:633) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.daemon.executor$mk_task_receiver$fn__4645.invoke(executor.clj:401) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.disruptor$clojure_handler$reify__1446.onEvent(disruptor.clj:58) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:120) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.daemon.executor$fn__4722$fn__4734$fn__4781.invoke(executor.clj:748) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.util$async_loop$fn__458.invoke(util.clj:463) [storm-core-0.9.4.jar:0.9.4]
        at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.7.0_55]
    ```
    
    The table is enabled and has one region:
    WordCount   1       'WordCount', {NAME => 'cf'}
    
    I haven't yet determined whether this is a storm-hbase issue or a flux issue.


> Add ability to create topologies dynamically
> --------------------------------------------
>
>                 Key: STORM-561
>                 URL: https://issues.apache.org/jira/browse/STORM-561
>             Project: Apache Storm
>          Issue Type: Improvement
>            Reporter: Nathan Leung
>            Assignee: P. Taylor Goetz
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> It would be nice if a storm topology could be built dynamically, instead of 
> requiring a recompile to change parameters (e.g. number of workers, number of 
> tasks, layout, etc).
> I would propose the following data structures for building core storm 
> topologies.  I haven't done a design for trident yet but the intention would 
> be to add trident support when core storm support is complete (or in parallel 
> if there are other people working on it):
> {code}
> // fields value and arguments are mutually exclusive
> class Argument {
>     String argumentType;  // Class used to look up arguments in method/constructor
>     String implementationType; // Class used to create this argument
>     String value; // String used to construct this argument
>     List<Argument> arguments; // arguments used to build this argument
> }
> class Dependency {
>     String upstreamComponent; // name of upstream component
>     String grouping;
>     List<Argument> arguments; // arguments for the grouping
> }
> class StormSpout {
>     String name;
>     String klazz;  // Class of this spout
>     List<Argument> arguments;
>     int numTasks;
>     int numExecutors;
> }
> class StormBolt {
>     String name;
>     String klazz; // Class of this bolt
>     List<Argument> arguments;
>     int numTasks;
>     int numExecutors;
>     List<Dependency> dependencies;
> }
> class StormTopologyRepresentation {
>     String name;
>     List<StormSpout> spouts;
>     List<StormBolt> bolts;
>     Map config;
>     int numWorkers;
> }
> {code}
> Topology creation will be built on top of the data structures above.  The 
> benefits:
> * Dependency free.  Code to unmarshal from JSON, XML, etc. can be kept in 
> extensions or as examples, and users can write a different unmarshaller if 
> they want to use a different text representation.
> * support for arbitrary spout and bolt types
> * support for all groupings and streams, via reflection
> * ability to specify configuration map via config file
> * reification of spout / bolt / dependency arguments
> ** recursive argument reification for complex objects
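The recursive argument reification proposed above could be sketched with plain reflection. This is a hypothetical illustration, not existing Storm or flux code: the `Argument` class mirrors the proposed data structure, and `buildArgument` is an assumed helper name.

```java
import java.lang.reflect.Constructor;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of recursive Argument reification via reflection.
// Argument mirrors the proposed data structure; buildArgument is an assumed
// helper, not part of any existing Storm or flux API.
public class ArgumentReifier {

    public static class Argument {
        public String argumentType;        // class used to look up the parameter type
        public String implementationType;  // class used to create this argument
        public String value;               // string used to construct this argument
        public List<Argument> arguments;   // nested arguments for complex objects
    }

    // Recursively build the object an Argument describes.
    public static Object buildArgument(Argument arg) throws Exception {
        Class<?> impl = Class.forName(
                arg.implementationType != null ? arg.implementationType : arg.argumentType);
        if (arg.value != null) {
            // Leaf: construct from the single String value, e.g. new Integer("5").
            Constructor<?> ctor = impl.getConstructor(String.class);
            return ctor.newInstance(arg.value);
        }
        // Composite: reify nested arguments first, then match a constructor
        // whose parameter types are the declared argumentTypes.
        List<Object> built = new ArrayList<>();
        List<Class<?>> types = new ArrayList<>();
        for (Argument nested : arg.arguments) {
            built.add(buildArgument(nested));
            types.add(Class.forName(nested.argumentType));
        }
        Constructor<?> ctor = impl.getConstructor(types.toArray(new Class<?>[0]));
        return ctor.newInstance(built.toArray());
    }

    public static void main(String[] args) throws Exception {
        // Leaf: an Integer built from the string "5".
        Argument leaf = new Argument();
        leaf.argumentType = "java.lang.Integer";
        leaf.value = "5";
        System.out.println(buildArgument(leaf));  // prints 5

        // Composite: a java.io.File built from two nested String arguments.
        Argument parent = new Argument();
        parent.argumentType = "java.lang.String";
        parent.value = "data";
        Argument child = new Argument();
        child.argumentType = "java.lang.String";
        child.value = "wordcount";
        Argument file = new Argument();
        file.implementationType = "java.io.File";
        file.arguments = Arrays.asList(parent, child);
        System.out.println(buildArgument(file));
    }
}
```

A real implementation would also need to handle primitive parameter types (`Class.forName("int")` fails) and constructor overload resolution; this sketch only matches exact reference types.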



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)