Github user HeartSaVioR commented on a diff in the pull request:

    https://github.com/apache/storm/pull/1608#discussion_r74367408
  
    --- Diff: bin/storm.py ---
    @@ -232,44 +271,91 @@ def jar(jarfile, klass, *args):
         The process is configured so that StormSubmitter
         (http://storm.apache.org/releases/current/javadocs/org/apache/storm/StormSubmitter.html)
         will upload the jar at topology-jar-path when the topology is submitted.
    +
    +    When you want to ship other jars that are not included in the application jar, you can pass them to the --jars option as a comma-separated string.
    +    For example, --jars "your-local-jar.jar,your-local-jar2.jar" will load your-local-jar.jar and your-local-jar2.jar.
    +    When you want to ship Maven artifacts and their transitive dependencies, you can pass them to the --artifacts option as a comma-separated string.
    +    You can also exclude some dependencies, just as you would in a Maven pom.
    +    Add exclusions after the artifact as a '^'-separated string.
    +    For example, --artifacts "redis.clients:jedis:2.9.0,org.apache.kafka:kafka_2.10:0.8.2.2^org.slf4j:slf4j-log4j12" will load the jedis and kafka artifacts with all of their transitive dependencies, but exclude slf4j-log4j12 from kafka.
    +
    +    A complete example using both options: `./bin/storm jar example/storm-starter/storm-starter-topologies-*.jar org.apache.storm.starter.RollingTopWords blobstore-remote2 remote --jars "./external/storm-redis/storm-redis-1.1.0.jar,./external/storm-kafka/storm-kafka-1.1.0.jar" --artifacts "redis.clients:jedis:2.9.0,org.apache.kafka:kafka_2.10:0.8.2.2^org.slf4j:slf4j-log4j12"`
    +
    +    When you pass the --jars and/or --artifacts options, StormSubmitter will upload them when the topology is submitted, and they will be included in the classpath of both the process that runs the class and the workers for that topology.
         """
    +    global DEP_JARS_OPTS, DEP_ARTIFACTS_OPTS
    +
    +    local_jars = DEP_JARS_OPTS
    +    artifact_to_file_jars = resolve_dependencies(DEP_ARTIFACTS_OPTS)
    +
         transform_class = confvalue("client.jartransformer.class", [CLUSTER_CONF_DIR])
         if (transform_class != None and transform_class != "nil"):
             tmpjar = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex+".jar")
             exec_storm_class("org.apache.storm.daemon.ClientJarTransformerRunner", args=[transform_class, jarfile, tmpjar], fork=True, daemon=False)
    +        extra_jars = [tmpjar, USER_CONF_DIR, STORM_BIN_DIR]
    +        extra_jars.extend(local_jars)
    +        extra_jars.extend(artifact_to_file_jars.values())
             topology_runner_exit_code = exec_storm_class(
                     klass,
                     jvmtype="-client",
    -                extrajars=[tmpjar, USER_CONF_DIR, STORM_BIN_DIR],
    +                extrajars=extra_jars,
                     args=args,
                     daemon=False,
                     fork=True,
    -                jvmopts=JAR_JVM_OPTS + ["-Dstorm.jar=" + tmpjar])
    +                jvmopts=JAR_JVM_OPTS + ["-Dstorm.jar=" + tmpjar] +
    +                        ["-Dstorm.dependency.jars=" + ",".join(local_jars)] +
    +                        ["-Dstorm.dependency.artifacts=" + json.dumps(artifact_to_file_jars)])
             os.remove(tmpjar)
             sys.exit(topology_runner_exit_code)
         else:
    +        extra_jars=[jarfile, USER_CONF_DIR, STORM_BIN_DIR]
    +        extra_jars.extend(local_jars)
    +        extra_jars.extend(artifact_to_file_jars.values())
             exec_storm_class(
                 klass,
                 jvmtype="-client",
    -            extrajars=[jarfile, USER_CONF_DIR, STORM_BIN_DIR],
    +            extrajars=extra_jars,
                 args=args,
                 daemon=False,
    -            jvmopts=JAR_JVM_OPTS + ["-Dstorm.jar=" + jarfile])
    +            jvmopts=JAR_JVM_OPTS + ["-Dstorm.jar=" + jarfile] +
    +                    ["-Dstorm.dependency.jars=" + ",".join(local_jars)] +
    +                    ["-Dstorm.dependency.artifacts=" + json.dumps(artifact_to_file_jars)])
     
     def sql(sql_file, topology_name):
         """Syntax: [storm sql sql-file topology-name]
     
         Compiles the SQL statements into a Trident topology and submits it to Storm.
    +
    +    The --jars and --artifacts options available for the jar command also apply to the sql command.
    +    Please refer to "help jar" to see how to use the --jars and --artifacts options.
    +    You will normally want to pass these options, since in many cases the data source for your SQL is an external storage system.
         """
    +    global DEP_JARS_OPTS, DEP_ARTIFACTS_OPTS
    +
    +    local_jars = DEP_JARS_OPTS
    +    artifact_to_file_jars = resolve_dependencies(DEP_ARTIFACTS_OPTS)
    +
    +    sql_core_jars = get_jars_full(STORM_DIR + "/external/sql/storm-sql-core")
    --- End diff --
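
    To make the '^' exclusion syntax described in the docstring above concrete, here is a minimal, illustrative sketch of how an --artifacts value could be split into artifacts and their exclusions. The helper name parse_artifacts_opt is hypothetical and is not part of bin/storm.py or of this pull request; actual resolution is done by resolve_dependencies.

        # Illustrative sketch only (not code from this PR): split an --artifacts
        # value into (artifact, exclusions) pairs per the format described above.
        def parse_artifacts_opt(artifacts_opt):
            parsed = []
            for entry in artifacts_opt.split(","):
                # The first '^'-separated token is the artifact itself
                # (group:artifact:version); any remaining tokens are exclusions.
                tokens = entry.split("^")
                parsed.append((tokens[0], tokens[1:]))
            return parsed

        print(parse_artifacts_opt(
            "redis.clients:jedis:2.9.0,"
            "org.apache.kafka:kafka_2.10:0.8.2.2^org.slf4j:slf4j-log4j12"))
        # [('redis.clients:jedis:2.9.0', []),
        #  ('org.apache.kafka:kafka_2.10:0.8.2.2', ['org.slf4j:slf4j-log4j12'])]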
    
    Definitely. I've also mentioned this in some other pull requests.
    `external` contains various kinds of modules, and we need to reorganize it.
    
    Maybe we can leave the connectors where they are (we could even rename that directory), and move flux, sql, kafka-monitor (this module is also referred to from the script in bin), and others out of external.
    
    Since this issue is not only about storm-sql, I would like to start the discussion before making the change, and handle it in a separate issue. What do you think?
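
    As a rough illustration of what the two new JVM options in the diff end up carrying, here is a short sketch. The shape of artifact_to_file_jars (artifact coordinates mapped to downloaded jar paths) and the sample values are assumptions for illustration; the real mapping comes from resolve_dependencies(DEP_ARTIFACTS_OPTS).

        import json

        # Assumed, illustrative values -- not produced by this PR's code.
        local_jars = ["./external/storm-redis/storm-redis-1.1.0.jar"]
        artifact_to_file_jars = {
            "redis.clients:jedis:2.9.0": "/tmp/dep-jars/jedis-2.9.0.jar",
        }

        # Mirrors how the diff composes jvmopts for exec_storm_class.
        jvmopts = (["-Dstorm.jar=" + "topology.jar"] +
                   ["-Dstorm.dependency.jars=" + ",".join(local_jars)] +
                   ["-Dstorm.dependency.artifacts=" + json.dumps(artifact_to_file_jars)])
        print(jvmopts[-1])
        # -Dstorm.dependency.artifacts={"redis.clients:jedis:2.9.0": "/tmp/dep-jars/jedis-2.9.0.jar"}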

