[jira] [Created] (STORM-2089) Replace Consumer of ISqlTridentDataSource with StateFactory and StateUpdater
Jungtaek Lim created STORM-2089: --- Summary: Replace Consumer of ISqlTridentDataSource with StateFactory and StateUpdater Key: STORM-2089 URL: https://issues.apache.org/jira/browse/STORM-2089 Project: Apache Storm Issue Type: Improvement Components: storm-sql Reporter: Jungtaek Lim Currently ISqlTridentDataSource exposes the consumer as a Function, which provides only single-row updates. To maximize performance, it should be changed to a StateFactory (or StateSpec) and a StateUpdater. This also includes changes to storm-sql-kafka. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
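For context, a rough sketch of the consumer-side shape such a change could take, assuming a hypothetical nested interface and accessor names (the producer side stays as it is today; only the per-row Function consumer is swapped for Trident state primitives):

{code}
import org.apache.storm.trident.spout.ITridentDataSource;
import org.apache.storm.trident.state.StateFactory;
import org.apache.storm.trident.state.StateUpdater;

// Hypothetical sketch of the proposed shape, not the final interface.
public interface ISqlTridentDataSource {
    ITridentDataSource getProducer();       // unchanged producer side

    // Proposed: expose batch-capable state primitives instead of a per-row Function.
    interface SqlTridentConsumer {
        StateFactory getStateFactory();
        StateUpdater getStateUpdater();
    }

    SqlTridentConsumer getConsumer();
}
{code}

A data source implementation such as storm-sql-kafka would then return its StateFactory and StateUpdater pair, and the generated topology could attach them with Stream.partitionPersist so writes happen per batch rather than per row.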
[jira] [Commented] (STORM-2075) Storm SQL Phase III
[ https://issues.apache.org/jira/browse/STORM-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15484071#comment-15484071 ] Jungtaek Lim commented on STORM-2075: - Below is Trident support for external modules.
|| module || producer || consumer ||
| Cassandra | N/A | State |
| Druid | N/A | State |
| ElasticSearch | N/A | State |
| EventHubs | Opaque/Transactional Spout | N/A |
| HBase | N/A | State |
| HDFS | N/A | State |
| Hive | N/A | State |
| JDBC | N/A | State |
| Kafka | Opaque/Transactional Spout | State |
| Kafka-Client (New) | N/A | N/A |
| Kinesis | N/A | N/A |
| Mongo | N/A | State |
| MQTT | N/A | BaseFunction |
| OpenTSDB | N/A | State |
| Redis | N/A | State |
| Solr | N/A | State |
Btw, I found that storm-sql-kafka provides BaseFunction via State. To maximize performance we should support State directly. > Storm SQL Phase III > --- > > Key: STORM-2075 > URL: https://issues.apache.org/jira/browse/STORM-2075 > Project: Apache Storm > Issue Type: Epic > Components: storm-sql >Reporter: Jungtaek Lim > > This epic tracks the effort of the phase III development of StormSQL. > For now Storm SQL only supports Kafka as a data source, which is limited for > normal use cases. We would need to support others as well. Candidates are > external modules. > And also consider supporting State since Trident provides a way to set / get > in batch manner to only State. Current way (each) does insert a row 1 by 1. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
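The performance point above comes down to batch size: a Trident StateUpdater receives the whole batch at once, while a BaseFunction is invoked once per row. A minimal sketch, assuming a hypothetical bulk-capable sink state (BulkSinkState and writeBatch are illustrative names, not Storm APIs):

{code}
import java.util.List;
import org.apache.storm.trident.operation.TridentCollector;
import org.apache.storm.trident.state.BaseStateUpdater;
import org.apache.storm.trident.state.State;
import org.apache.storm.trident.tuple.TridentTuple;

/** Hypothetical sink state that can persist a whole batch in one call. */
interface BulkSinkState extends State {
    void writeBatch(List<TridentTuple> tuples);
}

/** Receives the whole Trident batch at once, unlike a per-row BaseFunction. */
class BulkSinkUpdater extends BaseStateUpdater<BulkSinkState> {
    @Override
    public void updateState(BulkSinkState state, List<TridentTuple> tuples, TridentCollector collector) {
        state.writeBatch(tuples);   // one bulk write per batch instead of one write per row
    }
}
{code}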
[jira] [Resolved] (STORM-1970) external project examples refactor
[ https://issues.apache.org/jira/browse/STORM-1970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1970. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Thanks [~vesense], I merged into master and 1.x branches. > external project examples refactor > - > > Key: STORM-1970 > URL: https://issues.apache.org/jira/browse/STORM-1970 > Project: Apache Storm > Issue Type: Improvement >Reporter: Xin Wang >Assignee: Xin Wang > Fix For: 2.0.0, 1.1.0 > > Time Spent: 2h 10m > Remaining Estimate: 0h > > refactor example projects: > *storm-elasticsearch > *storm-hbase > *storm-hdfs > *storm-hive > *storm-jdbc > *storm-kafka > *storm-mongodb > *storm-mqtt > *storm-opentsdb > *storm-redis > *storm-solr -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2086) use DefaultTopicSelector instead of creating a new one
[ https://issues.apache.org/jira/browse/STORM-2086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2086. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~vesense], I merged into master, 1.x, 1.0.x branches. > use DefaultTopicSelector instead of creating a new one > -- > > Key: STORM-2086 > URL: https://issues.apache.org/jira/browse/STORM-2086 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Xin Wang >Assignee: Xin Wang >Priority: Minor > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > _KafkaDataSourcesProvider_ should use _DefaultTopicSelector_ instead of > creating a new one. That makes the code much clearer. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1444) Support EXPLAIN statement in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482854#comment-15482854 ] Jungtaek Lim commented on STORM-1444: - Yes, I lost my draft comment which describes how Calcite supports EXPLAIN. https://calcite.apache.org/apidocs/org/apache/calcite/plan/RelOptUtil.html#toString-org.apache.calcite.rel.RelNode-org.apache.calcite.sql.SqlExplainLevel- This returns the plan produced by the Planner. For now Storm SQL relies completely on Calcite (it has no Storm-specific algebra model), so we could just show this output. I already added this to the pull request for the JOIN feature, for its unit tests, so the remaining question is mostly how we expose this functionality. > Support EXPLAIN statement in StormSQL > - > > Key: STORM-1444 > URL: https://issues.apache.org/jira/browse/STORM-1444 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai >Assignee: Jungtaek Lim > > It is useful to support the `EXPLAIN` statement in StormSQL to allow > debugging and customizing the topology generated by StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
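For reference, a minimal sketch of that Calcite call once the statement has been parsed and converted to a RelNode (the wrapper class is illustrative, not storm-sql code):

{code}
import org.apache.calcite.plan.RelOptUtil;
import org.apache.calcite.rel.RelNode;
import org.apache.calcite.sql.SqlExplainLevel;

class ExplainPrinter {
    /** Renders the logical plan Calcite produced for a statement. */
    static String explain(RelNode plan) {
        return RelOptUtil.toString(plan, SqlExplainLevel.ALL_ATTRIBUTES);
    }
}
{code}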
[jira] [Commented] (STORM-1444) Support EXPLAIN statement in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482777#comment-15482777 ] Jungtaek Lim commented on STORM-1444: - No opinion received for this. I'll just add 'explain mode' and change it if needed. > Support EXPLAIN statement in StormSQL > - > > Key: STORM-1444 > URL: https://issues.apache.org/jira/browse/STORM-1444 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai >Assignee: Jungtaek Lim > > It is useful to support the `EXPLAIN` statement in StormSQL to allow > debugging and customizing the topology generated by StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-2088) Typos in documentation "Guaranteeing Message Processing"
[ https://issues.apache.org/jira/browse/STORM-2088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15482719#comment-15482719 ] Jungtaek Lim commented on STORM-2088: - [~kinow] You can create a pull request on the GitHub mirror; that's how Storm accepts contributions. https://github.com/apache/storm The file you would like to fix is here: https://github.com/apache/storm/blob/master/docs/Guaranteeing-message-processing.md > Typos in documentation "Guaranteeing Message Processing" > > > Key: STORM-2088 > URL: https://issues.apache.org/jira/browse/STORM-2088 > Project: Apache Storm > Issue Type: Bug > Components: documentation >Affects Versions: 2.0.0 >Reporter: Bruno P. Kinoshita >Priority: Trivial > Labels: documentation, patch, site, trivial > Attachments: STORM-2088-1.patch > > Original Estimate: 10m > Remaining Estimate: 10m > > Minor typos in "Guaranteeing Message Processing" page. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2079) Unnecessary readStormConfig operation
[ https://issues.apache.org/jira/browse/STORM-2079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2079. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~jerrypeng], I merged into master, 1.x, 1.0.x branches. > Unnecessary readStormConfig operation > -- > > Key: STORM-2079 > URL: https://issues.apache.org/jira/browse/STORM-2079 > Project: Apache Storm > Issue Type: Bug >Reporter: Boyang Jerry Peng >Assignee: Boyang Jerry Peng >Priority: Trivial > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 40m > Remaining Estimate: 0h > > https://github.com/apache/storm/blob/master/storm-core/src/jvm/org/apache/storm/topology/BaseConfigurationDeclarer.java#L26 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2074) NPE bug in storm-kafka-monitor
[ https://issues.apache.org/jira/browse/STORM-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2074: Component/s: storm-kafka-monitor > NPE bug in storm-kafka-monitor > -- > > Key: STORM-2074 > URL: https://issues.apache.org/jira/browse/STORM-2074 > Project: Apache Storm > Issue Type: Bug > Components: storm-kafka-monitor >Reporter: Xin Wang >Assignee: Xin Wang >Priority: Minor > Fix For: 2.0.0, 1.1.0 > > Time Spent: 3.5h > Remaining Estimate: 0h > > storm-kafka-monitor will throw an NPE when the __--zk-node__ value does not > exist. This is not user friendly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2074) NPE bug in storm-kafka-monitor
[ https://issues.apache.org/jira/browse/STORM-2074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2074. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Thanks [~vesense], I merged into master and 1.x branches. > NPE bug in storm-kafka-monitor > -- > > Key: STORM-2074 > URL: https://issues.apache.org/jira/browse/STORM-2074 > Project: Apache Storm > Issue Type: Bug > Components: storm-kafka-monitor >Reporter: Xin Wang >Assignee: Xin Wang >Priority: Minor > Fix For: 2.0.0, 1.1.0 > > Time Spent: 3.5h > Remaining Estimate: 0h > > storm-kafka-monitor will throw an NPE when the __--zk-node__ value does not > exist. This is not user friendly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2081) create external directory for storm-sql various data sources and move storm-sql-kafka to it
[ https://issues.apache.org/jira/browse/STORM-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2081. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~vesense], I merged into master, 1.x, 1.0.x branches. > create external directory for storm-sql various data sources and move > storm-sql-kafka to it > --- > > Key: STORM-2081 > URL: https://issues.apache.org/jira/browse/STORM-2081 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Xin Wang >Assignee: Xin Wang > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 40m > Remaining Estimate: 0h > > this is a sub task that is part of storm-sql phase III > http://mail-archives.apache.org/mod_mbox/storm-dev/201609.mbox/%3CJIRA.13001879.1472699329000.470211.1472745081393%40Atlassian.JIRA%3E > {quote} > For now Storm SQL only supports Kafka as a data source, which is limited for > normal use cases. > We would need to support others as well. Candidates are external > modules.{quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2081) create external directory for storm-sql various data sources and move storm-sql-kafka to it
[ https://issues.apache.org/jira/browse/STORM-2081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2081: Component/s: storm-sql > create external directory for storm-sql various data sources and move > storm-sql-kafka to it > --- > > Key: STORM-2081 > URL: https://issues.apache.org/jira/browse/STORM-2081 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Xin Wang >Assignee: Xin Wang > Time Spent: 0.5h > Remaining Estimate: 0h > > this is a sub task that is part of storm-sql phase III > http://mail-archives.apache.org/mod_mbox/storm-dev/201609.mbox/%3CJIRA.13001879.1472699329000.470211.1472745081393%40Atlassian.JIRA%3E > {quote} > For now Storm SQL only supports Kafka as a data source, which is limited for > normal use cases. > We would need to support others as well. Candidates are external > modules.{quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2059) storm-submit-tools is getting rat failures.
[ https://issues.apache.org/jira/browse/STORM-2059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2059. - Resolution: Fixed Fix Version/s: 1.1.0 Fixed via STORM-2054 > storm-submit-tools is getting rat failures. > --- > > Key: STORM-2059 > URL: https://issues.apache.org/jira/browse/STORM-2059 > Project: Apache Storm > Issue Type: Bug > Components: storm-submit-tools >Reporter: Robert Joseph Evans >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.1.0 > > > https://travis-ci.org/revans2/incubator-storm/jobs/155187695 > {code} > [INFO] Rat check: Summary of files. Unapproved: 17 unknown: 17 generated: 0 > approved: 14 licence. > ... > [INFO] storm-submit-tools . FAILURE [ 2.296 > s] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2054) DependencyResolver should be aware of relative path and absolute path
[ https://issues.apache.org/jira/browse/STORM-2054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2054. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Merged into master (by [~revans2]) and 1.1.0 (by me) > DependencyResolver should be aware of relative path and absolute path > - > > Key: STORM-2054 > URL: https://issues.apache.org/jira/browse/STORM-2054 > Project: Apache Storm > Issue Type: Bug > Components: storm-submit-tools >Affects Versions: 1.1.0 >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim >Priority: Critical > Fix For: 2.0.0, 1.1.0 > > Time Spent: 1h 40m > Remaining Estimate: 0h > > DependencyResolver always creates its directory based on storm.home or the current > working directory, which is intended for relative paths but not for > absolute paths. > Furthermore, DependencyResolverTest doesn't remove its temporary directory after > testing. The test creates a new temporary absolute path, but due to this bug the > temporary directory is created in the working directory, which prevents cleaning > up and finally causes RAT errors on all builds. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
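A minimal sketch of the resolution rule described above, using a hypothetical helper rather than the actual DependencyResolver patch:

{code}
import java.io.File;

class LocalRepoPathResolver {
    /** baseDir would typically be storm.home or the current working directory. */
    static File resolve(String configuredPath, String baseDir) {
        File candidate = new File(configuredPath);
        // Absolute paths are used as-is; relative paths are anchored at baseDir.
        return candidate.isAbsolute() ? candidate : new File(baseDir, configuredPath);
    }
}
{code}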
[jira] [Resolved] (STORM-1344) storm-jdbc build error "object name already exists: USER_DETAILS in statement"
[ https://issues.apache.org/jira/browse/STORM-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1344. - Resolution: Fixed Assignee: Paul Poulosky Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~ppoulosk], I merged into master, 1.x, 1.0.x branches. > storm-jdbc build error "object name already exists: USER_DETAILS in statement" > -- > > Key: STORM-1344 > URL: https://issues.apache.org/jira/browse/STORM-1344 > Project: Apache Storm > Issue Type: Bug > Components: storm-jdbc >Affects Versions: 1.0.0 > Environment: os X, jdk7 >Reporter: Longda Feng >Assignee: Paul Poulosky >Priority: Critical > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 2h 20m > Remaining Estimate: 0h > > ``` > [ERROR] Failed to execute goal org.codehaus.mojo:sql-maven-plugin:1.5:execute > (create-db) on project storm-jdbc: object name already exists: USER_DETAILS > in statement [ /** * Licensed to the Apache Software Foundation (ASF) under > one * or more contributor license agreements. See the NOTICE file * > distributed with this work for additional information * regarding copyright > ownership. The ASF licenses this file * to you under the Apache License, > Version 2.0 (the * "License"); you may not use this file except in compliance > * with the License. You may obtain a copy of the License at * * > http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable > law or agreed to in writing, software * distributed under the License is > distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY > KIND, either express or implied. * See the License for the specific language > governing permissions and * limitations under the License. */ > [ERROR] create table user_details (id integer, user_name varchar(100), > create_date date)] > [ERROR] -> [Help 1] > org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute > goal org.codehaus.mojo:sql-maven-plugin:1.5:execute (create-db) on project > storm-jdbc: object name already exists: USER_DETAILS in statement [ /** * > Licensed to the Apache Software Foundation (ASF) under one * or more > contributor license agreements. See the NOTICE file * distributed with this > work for additional information * regarding copyright ownership. The ASF > licenses this file * to you under the Apache License, Version 2.0 (the * > "License"); you may not use this file except in compliance * with the > License. You may obtain a copy of the License at * * > http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable > law or agreed to in writing, software * distributed under the License is > distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY > KIND, either express or implied. * See the License for the specific language > governing permissions and * limitations under the License. 
*/ > create table user_details (id integer, user_name varchar(100), create_date > date)] > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153) > at > org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145) > at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116) > at > org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80) > at > org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51) > at > org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128) > at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307) > at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193) > at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106) > at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862) > at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286) > at org.apache.maven.cli.MavenCli.main(MavenCli.java:197) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289) > at > org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229) > at > org.codehaus.plexus.
[jira] [Resolved] (STORM-1459) Allow not specifying producer properties in read-only Kafka table in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1459. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Great work [~mauzhang], I merged into master and 1.x branches. Keep up the good work. > Allow not specifying producer properties in read-only Kafka table in StormSQL > - > > Key: STORM-1459 > URL: https://issues.apache.org/jira/browse/STORM-1459 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Haohui Mai >Assignee: Manu Zhang > Fix For: 2.0.0, 1.1.0 > > Time Spent: 1h > Remaining Estimate: 0h > > Currently users need to specify the properties of Kafka producer in StormSQL > Kafka table even if the table is read-only. It is preferable to allow users > to omit it for read-only tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (STORM-2080) storm-submit-tools license check failure
[ https://issues.apache.org/jira/browse/STORM-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim closed STORM-2080. --- Resolution: Duplicate > storm-submit-tools license check failure > > > Key: STORM-2080 > URL: https://issues.apache.org/jira/browse/STORM-2080 > Project: Apache Storm > Issue Type: Bug > Components: storm-submit-tools >Affects Versions: 2.0.0 >Reporter: Manu Zhang >Assignee: Manu Zhang >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2075) Storm SQL Phase III
[ https://issues.apache.org/jira/browse/STORM-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2075: Description: This epic tracks the effort of the phase III development of StormSQL. For now Storm SQL only supports Kafka as a data source, which is limited for normal use cases. We would need to support others as well. Candidates are external modules. And also consider supporting State since Trident provides a way to set / get in batch manner to only State. Current way (each) does insert a row 1 by 1. was: This epic tracks the effort of the phase III development of StormSQL. For now Storm SQL only supports Kafka as a data source, which is limited for normal use cases. We would need to support others as well. Candidates are external modules. And also consider supporting State since Trident only provides a way to set / get via batch. Current way (each) does insert a row 1 by 1. > Storm SQL Phase III > --- > > Key: STORM-2075 > URL: https://issues.apache.org/jira/browse/STORM-2075 > Project: Apache Storm > Issue Type: Epic > Components: storm-sql >Reporter: Jungtaek Lim > > This epic tracks the effort of the phase III development of StormSQL. > For now Storm SQL only supports Kafka as a data source, which is limited for > normal use cases. We would need to support others as well. Candidates are > external modules. > And also consider supporting State since Trident provides a way to set / get > in batch manner to only State. Current way (each) does insert a row 1 by 1. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (STORM-2046) Errors when using TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE in local mode.
[ https://issues.apache.org/jira/browse/STORM-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim closed STORM-2046. --- Resolution: Duplicate Duplicate of STORM-2040 > Errors when using TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE in local mode. > -- > > Key: STORM-2046 > URL: https://issues.apache.org/jira/browse/STORM-2046 > Project: Apache Storm > Issue Type: Bug >Affects Versions: 1.0.2 > Environment: Ubuntu 16.04 linux, 4.4.0-34-generic > Java 1.8.0_92 >Reporter: Cory Kolbeck >Priority: Minor > Labels: test > > When using a LocalCluster during tests, if > {{TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE}} is specified, > {{assert-can-serialize}} attempts to destructure a Java model object and > throws, killing the worker. A minimal-ish case and the full logs are here: > https://gist.github.com/ckolbeck/557734429e62b097efa9382a714122b0 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2070) Sigar native binary download link went 404
[ https://issues.apache.org/jira/browse/STORM-2070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2070. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Merged into master, 1.x, 1.0.x branches. > Sigar native binary download link went 404 > -- > > Key: STORM-2070 > URL: https://issues.apache.org/jira/browse/STORM-2070 > Project: Apache Storm > Issue Type: Bug > Components: storm-metrics >Affects Versions: 2.0.0, 1.0.2 >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 40m > Remaining Estimate: 0h > > {code} > > > 1.6.4 > > https://magelan.googlecode.com/files/hyperic-sigar-${sigar.version}.zip > 8f79d4039ca3ec6c88039d5897a80a268213e6b7 > > > ${settings.localRepository}/org/fusesource/sigar/${sigar.version} > > {code} > The Sigar download URL is set to > https://magelan.googlecode.com/files/hyperic-sigar-1.6.4.zip which is no longer > working. > Google Code seems to have changed its download links. The current link for the Sigar > 1.6.4 binary is > https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/magelan/hyperic-sigar-1.6.4.zip -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2075) Storm SQL Phase III
[ https://issues.apache.org/jira/browse/STORM-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2075: Description: This epic tracks the effort of the phase III development of StormSQL. For now Storm SQL only supports Kafka as a data source, which is limited for normal use cases. We would need to support others as well. Candidates are external modules. And also consider supporting State since Trident only provides a way to set / get via batch. Current way (each) does insert a row 1 by 1. was: This epic tracks the effort of the phase III development of StormSQL. For now Storm SQL only supports Kafka as a data source, which is limited for normal use cases. We would need to support others as well. Candidates are external modules. > Storm SQL Phase III > --- > > Key: STORM-2075 > URL: https://issues.apache.org/jira/browse/STORM-2075 > Project: Apache Storm > Issue Type: Epic > Components: storm-sql >Reporter: Jungtaek Lim > > This epic tracks the effort of the phase III development of StormSQL. > For now Storm SQL only supports Kafka as a data source, which is limited for > normal use cases. We would need to support others as well. Candidates are > external modules. > And also consider supporting State since Trident only provides a way to set / > get via batch. Current way (each) does insert a row 1 by 1. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1444) Support EXPLAIN statement in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15454308#comment-15454308 ] Jungtaek Lim commented on STORM-1444: - It might need a different approach. Storm SQL creates a Trident topology and either executes it in a local cluster or submits it to a remote cluster. The usage of EXPLAIN is the opposite: users don't expect the topology to be executed or submitted when they ask for "EXPLAIN". So while we can still support the "EXPLAIN" statement (by not executing or submitting the topology when the query has EXPLAIN), we can also add an explain mode at the same level as standalone and trident. Since a SQL file can contain multiple queries, which option we select affects the UX of the Storm SQL Runner. I'd rather hear voices from the dev@ list on which is better, and then go ahead. > Support EXPLAIN statement in StormSQL > - > > Key: STORM-1444 > URL: https://issues.apache.org/jira/browse/STORM-1444 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai >Assignee: Jungtaek Lim > > It is useful to support the `EXPLAIN` statement in StormSQL to allow > debugging and customizing the topology generated by StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2075) Storm SQL Phase III
Jungtaek Lim created STORM-2075: --- Summary: Storm SQL Phase III Key: STORM-2075 URL: https://issues.apache.org/jira/browse/STORM-2075 Project: Apache Storm Issue Type: Epic Components: storm-sql Reporter: Jungtaek Lim This epic tracks the effort of the phase III development of StormSQL. For now Storm SQL only supports Kafka as a data source, which is limited for normal use cases. We would need to support others as well. Candidates are external modules. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2056) Bugs in logviewer
[ https://issues.apache.org/jira/browse/STORM-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2056. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~jerrypeng], I merged into master, 1.x, 1.0.x branches. > Bugs in logviewer > - > > Key: STORM-2056 > URL: https://issues.apache.org/jira/browse/STORM-2056 > Project: Apache Storm > Issue Type: Bug >Reporter: Boyang Jerry Peng >Assignee: Boyang Jerry Peng > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 40m > Remaining Estimate: 0h > > 1. Incorrect url for prev,first,last,next buttons when viewing daemon logs > via logviewer > Example: > http://storm.cluster.com:8000/log?file=nimbus.log&start=0&length=51200 > should be: > http://storm.cluster.com:8000/daemonlog?file=nimbus.log&start=0&length=51200 > Function with bug: > https://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/daemon/logviewer.clj#L374 > 2. Downloading daemon files causes exception to be thrown because of function > download-log-file checks authorization via worker.yaml. Obviously daemon log > root will not have this file. > java.io.FileNotFoundException: > /home/y/var/storm/workers-artifacts/supervisor.log/worker.yaml (No such file > or directory) > at java.io.FileInputStream.open0(Native Method) > at java.io.FileInputStream.open(FileInputStream.java:195) > at java.io.FileInputStream.(FileInputStream.java:138) > at java.io.FileReader.(FileReader.java:72) > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:422) > at clojure.lang.Reflector.invokeConstructor(Reflector.java:180) > at backtype.storm.util$clojure_from_yaml_file.invoke(util.clj:1066) > at > backtype.storm.daemon.logviewer$get_log_user_group_whitelist.invoke(logviewer.clj:310) > at > backtype.storm.daemon.logviewer$authorized_log_user_QMARK_.invoke(logviewer.clj:326) > at > backtype.storm.daemon.logviewer$download_log_file.invoke(logviewer.clj:497) > at backtype.storm.daemon.logviewer$fn__11528.invoke(logviewer.clj:1024) > at > org.apache.storm.shade.compojure.core$make_route$fn__6445.invoke(core.clj:93) > at > org.apache.storm.shade.compojure.core$if_route$fn__6433.invoke(core.clj:39) > at > org.apache.storm.shade.compojure.core$if_method$fn__6426.invoke(core.clj:24) > at > org.apache.storm.shade.compojure.core$routing$fn__6451.invoke(core.clj:106) > at clojure.core$some.invoke(core.clj:2515) > at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106) > at clojure.lang.RestFn.applyTo(RestFn.java:139) > at clojure.core$apply.invoke(core.clj:626) > at > org.apache.storm.shade.compojure.core$routes$fn__6455.invoke(core.clj:111) > 3. search-log-file should not check for authorized users in worker.yaml > https://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/daemon/logviewer.clj#L833 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1646) Intermittent test failures in storm-kafka unit tests
[ https://issues.apache.org/jira/browse/STORM-1646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1646. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~ppoulosk], I merged into master, 1.x, 1.0.x branches. > Intermittent test failures in storm-kafka unit tests > > > Key: STORM-1646 > URL: https://issues.apache.org/jira/browse/STORM-1646 > Project: Apache Storm > Issue Type: Bug >Reporter: Paul Poulosky >Assignee: Paul Poulosky > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 1h 10m > Remaining Estimate: 0h > > We have been seeing intermittent test failures in KafkaUtilsTest on slow > hardware and lightly resourced VMs, as well as an intermittent race condition > when running the storm-kafka ExponentialBackoffManager unit test. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2039) Backpressure refactoring in worker and executor
[ https://issues.apache.org/jira/browse/STORM-2039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2039. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~abellina], I merged this into master, 1.x, 1.0.x branches. NOTE: This is not a bug, but it's also not exactly a new feature or improvement, so I just merged this into 1.0.x as well. It also reduces divergence between the branches. > Backpressure refactoring in worker and executor > --- > > Key: STORM-2039 > URL: https://issues.apache.org/jira/browse/STORM-2039 > Project: Apache Storm > Issue Type: Story >Reporter: Alessandro Bellina >Assignee: Alessandro Bellina >Priority: Minor > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 1h > Remaining Estimate: 0h > > * Use backpressure flags directly from disruptor queue instead of in the > disruptor-backpressure-handlers in worker and executor > * Other minor refactoring (eliminate redundant function) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2042) Nimbus client connections not closed properly causing connection leaks
[ https://issues.apache.org/jira/browse/STORM-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2042: Fix Version/s: 0.10.3 > Nimbus client connections not closed properly causing connection leaks > -- > > Key: STORM-2042 > URL: https://issues.apache.org/jira/browse/STORM-2042 > Project: Apache Storm > Issue Type: Bug >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.1.0, 1.0.3, 0.10.3 > > Time Spent: 1h 20m > Remaining Estimate: 0h > > The nimbus client connections are not closed properly causing connection > leaks. After the number of connections exceed nimbus.thrift.threads, a > RejectedExecutionException is thrown. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2064) Add storm name and function, access result and function to log-thrift-access
[ https://issues.apache.org/jira/browse/STORM-2064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2064. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~abellina], I merged this into master, 1.x, 1.0.x branches. > Add storm name and function, access result and function to log-thrift-access > > > Key: STORM-2064 > URL: https://issues.apache.org/jira/browse/STORM-2064 > Project: Apache Storm > Issue Type: Bug >Reporter: Alessandro Bellina >Assignee: Alessandro Bellina >Priority: Minor > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 1h > Remaining Estimate: 0h > > Improves overall usefulness of the thrift access log. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2063) Add thread name in worker logs
[ https://issues.apache.org/jira/browse/STORM-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2063. - Resolution: Fixed Fix Version/s: 1.0.3 1.1.0 2.0.0 Thanks [~abellina], I merged into master, 1.x, 1.0.x branches. > Add thread name in worker logs > -- > > Key: STORM-2063 > URL: https://issues.apache.org/jira/browse/STORM-2063 > Project: Apache Storm > Issue Type: Improvement >Reporter: Alessandro Bellina >Assignee: Alessandro Bellina >Priority: Minor > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Closed] (STORM-1703) In local mode, process is not shutting down cleanly
[ https://issues.apache.org/jira/browse/STORM-1703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim closed STORM-1703. --- Resolution: Duplicate > In local mode, process is not shutting down cleanly > --- > > Key: STORM-1703 > URL: https://issues.apache.org/jira/browse/STORM-1703 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.0, 2.0.0 >Reporter: Jungtaek Lim > Attachments: RollingTopWords-2.0.0-SNAPSHOT.jstack, > RollingTopWords-2.0.0-SNAPSHOT.log, RollingTopWords-thread-dump.jstack, > RollingTopWords.log > > > The process is not shutting down cleanly in local mode, but ‘Ctrl + C’ can > terminate the process. > Will attach log file and jstack dump file. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2073) Reduce multi-steps on visitProject into one
Jungtaek Lim created STORM-2073: --- Summary: Reduce multi-steps on visitProject into one Key: STORM-2073 URL: https://issues.apache.org/jira/browse/STORM-2073 Project: Apache Storm Issue Type: Improvement Components: storm-sql Reporter: Jungtaek Lim Assignee: Jungtaek Lim In STORM-1434 we revamped the way the Trident topology is built for Storm SQL. While revamping visitProject(), we found it has to handle multiple things (expression evaluation and projection), and moreover Trident doesn't allow duplicated field names. So we end up with multiple steps - each -> project -> each -> project - which doesn't look good. STORM-2072 introduces a way to map / flatMap while specifying different output fields. With this, we can reduce visitProject() to just 1 step instead of 4. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
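An illustrative sketch of the intended collapse, assuming the map(MapFunction, Fields) overload from STORM-2072 and a hypothetical expression-evaluating MapFunction (the real visitProject() code differs):

{code}
import org.apache.storm.trident.Stream;
import org.apache.storm.trident.operation.MapFunction;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

// Hypothetical: evaluates the projected expressions for one input row.
class EvalExpressions implements MapFunction {
    @Override
    public Values execute(TridentTuple input) {
        // A pass-through stands in for the real expression evaluation.
        return new Values(input.getValues().toArray());
    }
}

class ProjectVisitor {
    static Stream visitProject(Stream input, Fields projectedFields) {
        // Was: input.each(...).project(...).each(...).project(...) to work around duplicated field names.
        // Now: a single step that evaluates the expressions and declares the new output fields.
        return input.map(new EvalExpressions(), projectedFields);
    }
}
{code}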
[jira] [Commented] (STORM-2071) nimbus-test test-leadership failing with Exception
[ https://issues.apache.org/jira/browse/STORM-2071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15451044#comment-15451044 ] Jungtaek Lim commented on STORM-2071: - Just found that building master branch (2.0.0) also fails with this. Seems like this is an intermittent failure. > nimbus-test test-leadership failing with Exception > -- > > Key: STORM-2071 > URL: https://issues.apache.org/jira/browse/STORM-2071 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 2.0.0, 1.0.1 > Environment: Mac os X >Reporter: Paul Poulosky > Time Spent: 0.5h > Remaining Estimate: 0h > > When running unit tests on my Mac, I get repeated failures in test-leadership. > > 73752 [main] INFO o.a.s.l.ThriftAccessLogger - Request ID: 1 access from: > null principal: null operation: deactivate > ]]> > Uncaught > exception, not in assertion. > expected: nil > actual: java.lang.RuntimeException: No transition for event: :inactivate, > status: {:type :rebalancing} storm-id: t1-1-1472598899 > at org.apache.storm.daemon.nimbus$transition_BANG_$get_event__4879.invoke > (nimbus.clj:365) > org.apache.storm.daemon.nimbus$transition_BANG_.invoke (nimbus.clj:373) > clojure.lang.AFn.applyToHelper (AFn.java:165) > clojure.lang.AFn.applyTo (AFn.java:144) > clojure.core$apply.invoke (core.clj:636) > org.apache.storm.daemon.nimbus$transition_name_BANG_.doInvoke > (nimbus.clj:391) > clojure.lang.RestFn.invoke (RestFn.java:467) > org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__5850.deactivate > (nimbus.clj:1773) > sun.reflect.NativeMethodAccessorImpl.invoke0 > (NativeMethodAccessorImpl.java:-2) > sun.reflect.NativeMethodAccessorImpl.invoke > (NativeMethodAccessorImpl.java:62) > sun.reflect.DelegatingMethodAccessorImpl.invoke > (DelegatingMethodAccessorImpl.java:43) > java.lang.reflect.Method.invoke (Method.java:497) > clojure.lang.Reflector.invokeMatchingMethod (Reflector.java:93) > clojure.lang.Reflector.invokeInstanceMethod (Reflector.java:28) > org.apache.storm.nimbus_test$fn__1203$fn__1209.invoke > (nimbus_test.clj:1222) > org.apache.storm.nimbus_test/fn (nimbus_test.clj:1210) > clojure.test$test_var$fn__7670.invoke (test.clj:704) > clojure.test$test_var.invoke (test.clj:704) > clojure.test$test_vars$fn__7692$fn__7697.invoke (test.clj:722) > clojure.test$default_fixture.invoke (test.clj:674) > clojure.test$test_vars$fn__7692.invoke (test.clj:722) > clojure.test$default_fixture.invoke (test.clj:674) > clojure.test$test_vars.invoke (test.clj:718) > clojure.test$test_all_vars.invoke (test.clj:728) > (test.clj:747) > clojure.core$map$fn__4553.invoke (core.clj:2624) > clojure.lang.LazySeq.sval (LazySeq.java:40) > clojure.lang.LazySeq.seq (LazySeq.java:49) > clojure.lang.Cons.next (Cons.java:39) > clojure.lang.RT.boundedLength (RT.java:1735) > clojure.lang.RestFn.applyTo (RestFn.java:130) > clojure.core$apply.invoke (core.clj:632) > clojure.test$run_tests.doInvoke (test.clj:762) > clojure.lang.RestFn.invoke (RestFn.java:408) > > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364$fn__8365$fn__8366.invoke > (test_runner.clj:107) > > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364$fn__8365.invoke > (test_runner.clj:53) > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364.invoke > (test_runner.clj:52) > clojure.lang.LazySeq.sval (LazySeq.java:40) > clojure.lang.LazySeq.seq (LazySeq.java:49) > clojure.lang.RT.seq (RT.java:507) > clojure.core/seq (core.clj:137) > clojure.core$dorun.invoke (core.clj:3009) > 
org.apache.storm.testrunner$eval8358.invoke (test_runner.clj:52) > clojure.lang.Compiler.eval (Compiler.java:6782) > clojure.lang.Compiler.load (Compiler.java:7227) > clojure.lang.Compiler.loadFile (Compiler.java:7165) > clojure.main$load_script.invoke (main.clj:275) > clojure.main$script_opt.invoke (main.clj:337) > clojure.main$main.doInvoke (main.clj:421) > clojure.lang.RestFn.invoke (RestFn.java:421) > clojure.lang.Var.invoke (Var.java:383) > clojure.lang.AFn.applyToHelper (AFn.java:156) > clojure.lang.Var.applyTo (Var.java:700) > clojure.main.main (main.java:37) > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2071) nimbus-test test-leadership failing with Exception
[ https://issues.apache.org/jira/browse/STORM-2071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2071: Affects Version/s: 2.0.0 > nimbus-test test-leadership failing with Exception > -- > > Key: STORM-2071 > URL: https://issues.apache.org/jira/browse/STORM-2071 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 2.0.0, 1.0.1 > Environment: Mac os X >Reporter: Paul Poulosky > Time Spent: 0.5h > Remaining Estimate: 0h > > When running unit tests on my Mac, I get repeated failures in test-leadership. > > 73752 [main] INFO o.a.s.l.ThriftAccessLogger - Request ID: 1 access from: > null principal: null operation: deactivate > ]]> > Uncaught > exception, not in assertion. > expected: nil > actual: java.lang.RuntimeException: No transition for event: :inactivate, > status: {:type :rebalancing} storm-id: t1-1-1472598899 > at org.apache.storm.daemon.nimbus$transition_BANG_$get_event__4879.invoke > (nimbus.clj:365) > org.apache.storm.daemon.nimbus$transition_BANG_.invoke (nimbus.clj:373) > clojure.lang.AFn.applyToHelper (AFn.java:165) > clojure.lang.AFn.applyTo (AFn.java:144) > clojure.core$apply.invoke (core.clj:636) > org.apache.storm.daemon.nimbus$transition_name_BANG_.doInvoke > (nimbus.clj:391) > clojure.lang.RestFn.invoke (RestFn.java:467) > org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__5850.deactivate > (nimbus.clj:1773) > sun.reflect.NativeMethodAccessorImpl.invoke0 > (NativeMethodAccessorImpl.java:-2) > sun.reflect.NativeMethodAccessorImpl.invoke > (NativeMethodAccessorImpl.java:62) > sun.reflect.DelegatingMethodAccessorImpl.invoke > (DelegatingMethodAccessorImpl.java:43) > java.lang.reflect.Method.invoke (Method.java:497) > clojure.lang.Reflector.invokeMatchingMethod (Reflector.java:93) > clojure.lang.Reflector.invokeInstanceMethod (Reflector.java:28) > org.apache.storm.nimbus_test$fn__1203$fn__1209.invoke > (nimbus_test.clj:1222) > org.apache.storm.nimbus_test/fn (nimbus_test.clj:1210) > clojure.test$test_var$fn__7670.invoke (test.clj:704) > clojure.test$test_var.invoke (test.clj:704) > clojure.test$test_vars$fn__7692$fn__7697.invoke (test.clj:722) > clojure.test$default_fixture.invoke (test.clj:674) > clojure.test$test_vars$fn__7692.invoke (test.clj:722) > clojure.test$default_fixture.invoke (test.clj:674) > clojure.test$test_vars.invoke (test.clj:718) > clojure.test$test_all_vars.invoke (test.clj:728) > (test.clj:747) > clojure.core$map$fn__4553.invoke (core.clj:2624) > clojure.lang.LazySeq.sval (LazySeq.java:40) > clojure.lang.LazySeq.seq (LazySeq.java:49) > clojure.lang.Cons.next (Cons.java:39) > clojure.lang.RT.boundedLength (RT.java:1735) > clojure.lang.RestFn.applyTo (RestFn.java:130) > clojure.core$apply.invoke (core.clj:632) > clojure.test$run_tests.doInvoke (test.clj:762) > clojure.lang.RestFn.invoke (RestFn.java:408) > > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364$fn__8365$fn__8366.invoke > (test_runner.clj:107) > > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364$fn__8365.invoke > (test_runner.clj:53) > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364.invoke > (test_runner.clj:52) > clojure.lang.LazySeq.sval (LazySeq.java:40) > clojure.lang.LazySeq.seq (LazySeq.java:49) > clojure.lang.RT.seq (RT.java:507) > clojure.core/seq (core.clj:137) > clojure.core$dorun.invoke (core.clj:3009) > org.apache.storm.testrunner$eval8358.invoke (test_runner.clj:52) > clojure.lang.Compiler.eval (Compiler.java:6782) > clojure.lang.Compiler.load 
(Compiler.java:7227) > clojure.lang.Compiler.loadFile (Compiler.java:7165) > clojure.main$load_script.invoke (main.clj:275) > clojure.main$script_opt.invoke (main.clj:337) > clojure.main$main.doInvoke (main.clj:421) > clojure.lang.RestFn.invoke (RestFn.java:421) > clojure.lang.Var.invoke (Var.java:383) > clojure.lang.AFn.applyToHelper (AFn.java:156) > clojure.lang.Var.applyTo (Var.java:700) > clojure.main.main (main.java:37) > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-2072) Add map, flatMap with different outputs (T->V) in Trident
[ https://issues.apache.org/jira/browse/STORM-2072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15451030#comment-15451030 ] Jungtaek Lim commented on STORM-2072: - And I have a use case for a MapFunction which needs to be initialized. Unfortunately, MapFunction and FlatMapFunction extend neither Function nor Operation. In order to address my use case without a breaking API change, I'd like to suggest 'OperationAware'MapFunction and 'OperationAware'FlatMapFunction, which just extend Operation. When we call map() with an OperationAwareMapFunction or flatMap() with an OperationAwareFlatMapFunction, Trident will call prepare() and also cleanup(). > Add map, flatMap with different outputs (T->V) in Trident > - > > Key: STORM-2072 > URL: https://issues.apache.org/jira/browse/STORM-2072 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > > In STORM-1505 we introduced map, flatMap, filter to Trident, which are more > familiar than what Trident originally had (users had to use each and project > to do similar things). > The current version of map and flatMap assumes the next output fields are the same as > the current output fields, in other words, supporting only T -> T conversion. But > in many situations we need T -> V conversion. > This issue adds T -> V conversion (via method overloading) to both map and > flatMap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
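A sketch of what such interfaces could look like, assuming they simply combine Operation's lifecycle with the existing execute signatures (the names and shape are the proposal's, not an existing API):

{code}
import java.util.Map;
import org.apache.storm.trident.operation.FlatMapFunction;
import org.apache.storm.trident.operation.MapFunction;
import org.apache.storm.trident.operation.Operation;
import org.apache.storm.trident.operation.TridentOperationContext;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Values;

// Proposed: a MapFunction that Trident also prepares and cleans up, because it extends Operation.
interface OperationAwareMapFunction extends MapFunction, Operation {
}

// Proposed: the flatMap counterpart.
interface OperationAwareFlatMapFunction extends FlatMapFunction, Operation {
}

// Example: a map function that needs initialization before it can process tuples.
class InitializedMapFunction implements OperationAwareMapFunction {
    private transient StringBuilder buffer;     // stands in for a real resource (connection, cache, ...)

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
        buffer = new StringBuilder();           // called once per task before any tuple is processed
    }

    @Override
    public Values execute(TridentTuple input) {
        buffer.setLength(0);
        buffer.append(input.getString(0)).append('!');
        return new Values(buffer.toString());
    }

    @Override
    public void cleanup() {
        buffer = null;                          // called on shutdown
    }
}
{code}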
[jira] [Created] (STORM-2072) Add map, flatMap with different outputs (T->V) in Trident
Jungtaek Lim created STORM-2072: --- Summary: Add map, flatMap with different outputs (T->V) in Trident Key: STORM-2072 URL: https://issues.apache.org/jira/browse/STORM-2072 Project: Apache Storm Issue Type: Improvement Components: storm-core Reporter: Jungtaek Lim Assignee: Jungtaek Lim In STORM-1505 we introduced map, flatMap, filter to Trident, which are more familiar than what Trident originally had (users had to use each and project to do similar things). The current version of map and flatMap assumes the next output fields are the same as the current output fields, in other words, supporting only T -> T conversion. But in many situations we need T -> V conversion. This issue adds T -> V conversion (via method overloading) to both map and flatMap. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
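To make the T -> V part concrete, a sketch under the assumption that the overloads look like flatMap(FlatMapFunction, Fields) and map(MapFunction, Fields); SplitSentence and the field names are just an example:

{code}
import java.util.ArrayList;
import java.util.List;
import org.apache.storm.trident.operation.FlatMapFunction;
import org.apache.storm.trident.tuple.TridentTuple;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

// T -> V: one input tuple ("sentence") becomes many output tuples ("word").
public class SplitSentence implements FlatMapFunction {
    @Override
    public Iterable<Values> execute(TridentTuple input) {
        List<Values> words = new ArrayList<>();
        for (String word : input.getString(0).split(" ")) {
            words.add(new Values(word));    // one output row per word
        }
        return words;
    }
}

// Usage with the proposed overload: output fields change from ["sentence"] to ["word"].
// Stream words = sentences.flatMap(new SplitSentence(), new Fields("word"));
{code}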
[jira] [Commented] (STORM-2071) nimbus-test test-leadership failing with Exception
[ https://issues.apache.org/jira/browse/STORM-2071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15450483#comment-15450483 ] Jungtaek Lim commented on STORM-2071: - Does this occur from 1.0.1, or 1.0.x-branch (I mean 1.0.2)? If it only occurs from 1.0.1 this is not a problem. And could you retry build with stopping ZK if you run Zookeeper from your dev. machine? > nimbus-test test-leadership failing with Exception > -- > > Key: STORM-2071 > URL: https://issues.apache.org/jira/browse/STORM-2071 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.1 > Environment: Mac os X >Reporter: Paul Poulosky > > When running unit tests on my Mac, I get repeated failures in test-leadership. > > 73752 [main] INFO o.a.s.l.ThriftAccessLogger - Request ID: 1 access from: > null principal: null operation: deactivate > ]]> > Uncaught > exception, not in assertion. > expected: nil > actual: java.lang.RuntimeException: No transition for event: :inactivate, > status: {:type :rebalancing} storm-id: t1-1-1472598899 > at org.apache.storm.daemon.nimbus$transition_BANG_$get_event__4879.invoke > (nimbus.clj:365) > org.apache.storm.daemon.nimbus$transition_BANG_.invoke (nimbus.clj:373) > clojure.lang.AFn.applyToHelper (AFn.java:165) > clojure.lang.AFn.applyTo (AFn.java:144) > clojure.core$apply.invoke (core.clj:636) > org.apache.storm.daemon.nimbus$transition_name_BANG_.doInvoke > (nimbus.clj:391) > clojure.lang.RestFn.invoke (RestFn.java:467) > org.apache.storm.daemon.nimbus$mk_reified_nimbus$reify__5850.deactivate > (nimbus.clj:1773) > sun.reflect.NativeMethodAccessorImpl.invoke0 > (NativeMethodAccessorImpl.java:-2) > sun.reflect.NativeMethodAccessorImpl.invoke > (NativeMethodAccessorImpl.java:62) > sun.reflect.DelegatingMethodAccessorImpl.invoke > (DelegatingMethodAccessorImpl.java:43) > java.lang.reflect.Method.invoke (Method.java:497) > clojure.lang.Reflector.invokeMatchingMethod (Reflector.java:93) > clojure.lang.Reflector.invokeInstanceMethod (Reflector.java:28) > org.apache.storm.nimbus_test$fn__1203$fn__1209.invoke > (nimbus_test.clj:1222) > org.apache.storm.nimbus_test/fn (nimbus_test.clj:1210) > clojure.test$test_var$fn__7670.invoke (test.clj:704) > clojure.test$test_var.invoke (test.clj:704) > clojure.test$test_vars$fn__7692$fn__7697.invoke (test.clj:722) > clojure.test$default_fixture.invoke (test.clj:674) > clojure.test$test_vars$fn__7692.invoke (test.clj:722) > clojure.test$default_fixture.invoke (test.clj:674) > clojure.test$test_vars.invoke (test.clj:718) > clojure.test$test_all_vars.invoke (test.clj:728) > (test.clj:747) > clojure.core$map$fn__4553.invoke (core.clj:2624) > clojure.lang.LazySeq.sval (LazySeq.java:40) > clojure.lang.LazySeq.seq (LazySeq.java:49) > clojure.lang.Cons.next (Cons.java:39) > clojure.lang.RT.boundedLength (RT.java:1735) > clojure.lang.RestFn.applyTo (RestFn.java:130) > clojure.core$apply.invoke (core.clj:632) > clojure.test$run_tests.doInvoke (test.clj:762) > clojure.lang.RestFn.invoke (RestFn.java:408) > > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364$fn__8365$fn__8366.invoke > (test_runner.clj:107) > > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364$fn__8365.invoke > (test_runner.clj:53) > org.apache.storm.testrunner$eval8358$iter__8359__8363$fn__8364.invoke > (test_runner.clj:52) > clojure.lang.LazySeq.sval (LazySeq.java:40) > clojure.lang.LazySeq.seq (LazySeq.java:49) > clojure.lang.RT.seq (RT.java:507) > clojure.core/seq (core.clj:137) > clojure.core$dorun.invoke 
(core.clj:3009) > org.apache.storm.testrunner$eval8358.invoke (test_runner.clj:52) > clojure.lang.Compiler.eval (Compiler.java:6782) > clojure.lang.Compiler.load (Compiler.java:7227) > clojure.lang.Compiler.loadFile (Compiler.java:7165) > clojure.main$load_script.invoke (main.clj:275) > clojure.main$script_opt.invoke (main.clj:337) > clojure.main$main.doInvoke (main.clj:421) > clojure.lang.RestFn.invoke (RestFn.java:421) > clojure.lang.Var.invoke (Var.java:383) > clojure.lang.AFn.applyToHelper (AFn.java:156) > clojure.lang.Var.applyTo (Var.java:700) > clojure.main.main (main.java:37) > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2070) Sigar native binary download link went 404
Jungtaek Lim created STORM-2070: --- Summary: Sigar native binary download link went 404 Key: STORM-2070 URL: https://issues.apache.org/jira/browse/STORM-2070 Project: Apache Storm Issue Type: Bug Components: storm-metrics Affects Versions: 2.0.0, 1.0.2 Reporter: Jungtaek Lim Assignee: Jungtaek Lim {code} 1.6.4 https://magelan.googlecode.com/files/hyperic-sigar-${sigar.version}.zip 8f79d4039ca3ec6c88039d5897a80a268213e6b7 ${settings.localRepository}/org/fusesource/sigar/${sigar.version} {code} The Sigar download URL is set to https://magelan.googlecode.com/files/hyperic-sigar-1.6.4.zip which is no longer working. Google Code seems to have changed its download links. The current link for the Sigar 1.6.4 binary is https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/magelan/hyperic-sigar-1.6.4.zip -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1881) storm-redis is missing dependent libraries in distribution
[ https://issues.apache.org/jira/browse/STORM-1881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1881. - Resolution: Fixed Assignee: Jungtaek Lim Fix Version/s: 1.1.0 2.0.0 [~sakanaou] With STORM-2016, you can just run storm jar with {noformat}--artifacts org.apache.storm:storm-redis:{noformat} and it handles transitive dependencies. You can also exclude some dependencies from storm-redis if the topology has conflicting dependencies. Please refer to the link below to see how to use it: https://github.com/apache/storm/blob/1.x-branch/docs/Command-line-client.md#jar NOTE: Storm 1.1.0 is not released yet, so you still need to wait a bit more. Sorry for the inconvenience. > storm-redis is missing dependent libraries in distribution > -- > > Key: STORM-1881 > URL: https://issues.apache.org/jira/browse/STORM-1881 > Project: Apache Storm > Issue Type: Bug > Components: storm-redis >Affects Versions: 1.0.1 >Reporter: Daniel Klessing >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.1.0 > > > Despite the documentation on > http://storm.apache.org/releases/1.0.1/State-checkpointing.html it is not > enough to simply copy {{storm-redis-*.jar}} to {{extlib}} to get the > {{RedisKeyValueStateProvider}} working. The jedis and > apache-commons-pool2 jars it depends on are missing and must be copied by hand to get it > working. Else one is greeted with exception stack traces like: > {code} > Caused by: java.lang.ClassNotFoundException: > org.apache.commons.pool2.impl.GenericObjectPoolConfig > {code} > or > {code} > Caused by: java.lang.ClassNotFoundException: > redis.clients.jedis.JedisPoolConfig > {code} > Copying {{commons-pool2-2.4.2.jar}} and {{jedis-2.8.1.jar}} by hand to > {{extlib}} solves the issue. > It might be better to create a "fat" jar of {{storm-redis-*.jar}} or provide > documentation on which libraries have to be made available. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-2068) org.apache.storm.utils.NimbusLeaderNotFoundException
[ https://issues.apache.org/jira/browse/STORM-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15448439#comment-15448439 ] Jungtaek Lim commented on STORM-2068: - Please post this question to user@ mailing list so that one of users / devs can answer this. http://storm.apache.org/getting-help.html If it turns out to a bug we can file an issue with more details information. I'll close this. Thanks. > org.apache.storm.utils.NimbusLeaderNotFoundException > > > Key: STORM-2068 > URL: https://issues.apache.org/jira/browse/STORM-2068 > Project: Apache Storm > Issue Type: Question >Reporter: seokwoo yang > > my storm version is apache-storm-2.0.0-SNAPSHOT > from git clone "storm github", mvn clean install command > and, setting storm.yaml like bellow: > storm.zookeeper.servers: > - "NN" > - "DN01" > - "DN02" > - "DN03" > - "DN04" > nimbus.seeds: ["NN"] > supervisor.slots.ports: > - 6700 > - 6701 > - 6702 > - 6703 > ui.port: 8989 > storm cluster is woring well. but, UI, storm jar command rise ERROR > org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader > nimbus from seed hosts ["NN"]. Did you specify a valid list of nimbus hosts > for config nimbus.seeds? > at > org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:106) > at org.apache.storm.ui.core$all_topologies_summary.invoke(core.clj:508) > at org.apache.storm.ui.core$fn__3916.invoke(core.clj:1141) > at > org.apache.storm.shade.compojure.core$make_route$fn__989.invoke(core.clj:100) > at > org.apache.storm.shade.compojure.core$if_route$fn__977.invoke(core.clj:46) > at > org.apache.storm.shade.compojure.core$if_method$fn__970.invoke(core.clj:31) > at > org.apache.storm.shade.compojure.core$routing$fn__995.invoke(core.clj:113) > at clojure.core$some.invoke(core.clj:2570) > at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:113) > at clojure.lang.RestFn.applyTo(RestFn.java:139) > at clojure.core$apply.invoke(core.clj:632) > at > org.apache.storm.shade.compojure.core$routes$fn__999.invoke(core.clj:118) > at > org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__1956.invoke(json.clj:56) > at > org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__1491.invoke(multipart_params.clj:118) > at > org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__1412.invoke(reload.clj:22) > at > org.apache.storm.ui.helpers$requests_middleware$fn__3226.invoke(helpers.clj:54) > at org.apache.storm.ui.core$catch_errors$fn__4103.invoke(core.clj:1425) > at > org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__2955.invoke(keyword_params.clj:35) > at > org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__2998.invoke(nested_params.clj:84) > at > org.apache.storm.shade.ring.middleware.params$wrap_params$fn__2927.invoke(params.clj:64) > at > org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__1491.invoke(multipart_params.clj:118) > at > org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__3213.invoke(flash.clj:35) > at > org.apache.storm.shade.ring.middleware.session$wrap_session$fn__3199.invoke(session.clj:98) > at > org.apache.storm.shade.ring.util.servlet$make_service_method$fn__2821.invoke(servlet.clj:127) > at > org.apache.storm.shade.ring.util.servlet$servlet$fn__2825.invoke(servlet.clj:136) > at > org.apache.storm.shade.ring.util.servlet.proxy$javax.servlet.http.HttpServlet$ff19274a.service(Unknown > Source) > at > 
org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1320) > at > org.apache.storm.logging.filters.AccessLoggingFilter.handle(AccessLoggingFilter.java:47) > at > org.apache.storm.logging.filters.AccessLoggingFilter.doFilter(AccessLoggingFilter.java:39) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291) > at > org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:247) > at > org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:210) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443) > at > org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandl
[jira] [Closed] (STORM-2068) org.apache.storm.utils.NimbusLeaderNotFoundException
[ https://issues.apache.org/jira/browse/STORM-2068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim closed STORM-2068. --- Resolution: Won't Fix > org.apache.storm.utils.NimbusLeaderNotFoundException > > > Key: STORM-2068 > URL: https://issues.apache.org/jira/browse/STORM-2068 > Project: Apache Storm > Issue Type: Question >Reporter: seokwoo yang > > my storm version is apache-storm-2.0.0-SNAPSHOT > from git clone "storm github", mvn clean install command > and, setting storm.yaml like bellow: > storm.zookeeper.servers: > - "NN" > - "DN01" > - "DN02" > - "DN03" > - "DN04" > nimbus.seeds: ["NN"] > supervisor.slots.ports: > - 6700 > - 6701 > - 6702 > - 6703 > ui.port: 8989 > storm cluster is woring well. but, UI, storm jar command rise ERROR > org.apache.storm.utils.NimbusLeaderNotFoundException: Could not find leader > nimbus from seed hosts ["NN"]. Did you specify a valid list of nimbus hosts > for config nimbus.seeds? > at > org.apache.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:106) > at org.apache.storm.ui.core$all_topologies_summary.invoke(core.clj:508) > at org.apache.storm.ui.core$fn__3916.invoke(core.clj:1141) > at > org.apache.storm.shade.compojure.core$make_route$fn__989.invoke(core.clj:100) > at > org.apache.storm.shade.compojure.core$if_route$fn__977.invoke(core.clj:46) > at > org.apache.storm.shade.compojure.core$if_method$fn__970.invoke(core.clj:31) > at > org.apache.storm.shade.compojure.core$routing$fn__995.invoke(core.clj:113) > at clojure.core$some.invoke(core.clj:2570) > at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:113) > at clojure.lang.RestFn.applyTo(RestFn.java:139) > at clojure.core$apply.invoke(core.clj:632) > at > org.apache.storm.shade.compojure.core$routes$fn__999.invoke(core.clj:118) > at > org.apache.storm.shade.ring.middleware.json$wrap_json_params$fn__1956.invoke(json.clj:56) > at > org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__1491.invoke(multipart_params.clj:118) > at > org.apache.storm.shade.ring.middleware.reload$wrap_reload$fn__1412.invoke(reload.clj:22) > at > org.apache.storm.ui.helpers$requests_middleware$fn__3226.invoke(helpers.clj:54) > at org.apache.storm.ui.core$catch_errors$fn__4103.invoke(core.clj:1425) > at > org.apache.storm.shade.ring.middleware.keyword_params$wrap_keyword_params$fn__2955.invoke(keyword_params.clj:35) > at > org.apache.storm.shade.ring.middleware.nested_params$wrap_nested_params$fn__2998.invoke(nested_params.clj:84) > at > org.apache.storm.shade.ring.middleware.params$wrap_params$fn__2927.invoke(params.clj:64) > at > org.apache.storm.shade.ring.middleware.multipart_params$wrap_multipart_params$fn__1491.invoke(multipart_params.clj:118) > at > org.apache.storm.shade.ring.middleware.flash$wrap_flash$fn__3213.invoke(flash.clj:35) > at > org.apache.storm.shade.ring.middleware.session$wrap_session$fn__3199.invoke(session.clj:98) > at > org.apache.storm.shade.ring.util.servlet$make_service_method$fn__2821.invoke(servlet.clj:127) > at > org.apache.storm.shade.ring.util.servlet$servlet$fn__2825.invoke(servlet.clj:136) > at > org.apache.storm.shade.ring.util.servlet.proxy$javax.servlet.http.HttpServlet$ff19274a.service(Unknown > Source) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:654) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1320) > at > 
org.apache.storm.logging.filters.AccessLoggingFilter.handle(AccessLoggingFilter.java:47) > at > org.apache.storm.logging.filters.AccessLoggingFilter.doFilter(AccessLoggingFilter.java:39) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291) > at > org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:247) > at > org.apache.storm.shade.org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:210) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1291) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:443) > at > org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044) > at > org.apache.storm.shade.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372) > at > org.apache.storm.shade.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978) >
[jira] [Updated] (STORM-2067) "array element type mismatch" from compute-executors in nimbus.clj
[ https://issues.apache.org/jira/browse/STORM-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2067: Affects Version/s: 2.0.0 > "array element type mismatch" from compute-executors in nimbus.clj > -- > > Key: STORM-2067 > URL: https://issues.apache.org/jira/browse/STORM-2067 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 2.0.0 >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > Time Spent: 10m > Remaining Estimate: 0h > > In some scenarios, Nimbus throws "java.lang.IllegalArgumentException: array > element type mismatch". > {noformat} > 08:49:35.321 [timer] ERROR o.a.s.d.nimbus - Error when processing event > java.lang.IllegalArgumentException: array element type mismatch > at java.lang.reflect.Array.set(Native Method) ~[?:1.8.0_66] > at clojure.lang.RT.seqToTypedArray(RT.java:1719) ~[clojure-1.7.0.jar:?] > at clojure.lang.RT.seqToTypedArray(RT.java:1692) ~[clojure-1.7.0.jar:?] > at clojure.core$into_array.invoke(core.clj:3319) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$compute_executors$fn__4307.doInvoke(nimbus.clj:645) > ~[classes/:?] > at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$compute_executors.invoke(nimbus.clj:645) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$compute_executor__GT_component.invoke(nimbus.clj:655) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$read_topology_details.invoke(nimbus.clj:565) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$mk_assignments$iter__4668__4672$fn__4673.invoke(nimbus.clj:967) > ~[classes/:?] > at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] > at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] > at clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?] > at clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?] > at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) > ~[clojure-1.7.0.jar:?] > at clojure.core.protocols$fn__6506.invoke(protocols.clj:101) > ~[clojure-1.7.0.jar:?] > at > clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) > ~[clojure-1.7.0.jar:?] > at clojure.core$reduce.invoke(core.clj:6519) ~[clojure-1.7.0.jar:?] > at clojure.core$into.invoke(core.clj:6600) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$mk_assignments.doInvoke(nimbus.clj:966) > ~[classes/:?] > at clojure.lang.RestFn.invoke(RestFn.java:410) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$fn__5354$exec_fn__579__auto5355$fn__5366$fn__5367.invoke(nimbus.clj:2409) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$fn__5354$exec_fn__579__auto5355$fn__5366.invoke(nimbus.clj:2408) > ~[classes/:?] > at clojure.lang.AFn.run(AFn.java:22) ~[clojure-1.7.0.jar:?] > at org.apache.storm.StormTimer$1.run(StormTimer.java:190) ~[classes/:?] > at org.apache.storm.StormTimer$StormTimerTask.run(StormTimer.java:83) > [classes/:?] > {noformat} > The exception is thrown from into-array, which is called from below line: > {code} > ((fn [ & maps ] (Utils/joinMaps (into-array (into [component->executors] > maps) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-2067) "array element type mismatch" from compute-executors in nimbus.clj
[ https://issues.apache.org/jira/browse/STORM-2067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15447500#comment-15447500 ] Jungtaek Lim commented on STORM-2067: - This is due to Clojure's optimization. Clojure has two implicit types of persistent map: PersistentArrayMap and PersistentHashMap. Which type Clojure uses for persisting map is determined by specific threshold, so in fact Clojure should make sure that users don't need to care about this. But this is not true for into-array with no type information. If into-array is called with no type information, into-array uses the class which is type of first element. Unfortunately, from below line, component->executors and maps were different types which one is PersistentArrayMap and another one is PersistentHashMap. {code} (into-array (into [component->executors] maps)) {code} We should pass common type of two maps. Since parameter type of Utils/joinMaps is Map[], we can pass Map to into-array. > "array element type mismatch" from compute-executors in nimbus.clj > -- > > Key: STORM-2067 > URL: https://issues.apache.org/jira/browse/STORM-2067 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > > In some scenarios, Nimbus throws "java.lang.IllegalArgumentException: array > element type mismatch". > {noformat} > 08:49:35.321 [timer] ERROR o.a.s.d.nimbus - Error when processing event > java.lang.IllegalArgumentException: array element type mismatch > at java.lang.reflect.Array.set(Native Method) ~[?:1.8.0_66] > at clojure.lang.RT.seqToTypedArray(RT.java:1719) ~[clojure-1.7.0.jar:?] > at clojure.lang.RT.seqToTypedArray(RT.java:1692) ~[clojure-1.7.0.jar:?] > at clojure.core$into_array.invoke(core.clj:3319) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$compute_executors$fn__4307.doInvoke(nimbus.clj:645) > ~[classes/:?] > at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$compute_executors.invoke(nimbus.clj:645) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$compute_executor__GT_component.invoke(nimbus.clj:655) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$read_topology_details.invoke(nimbus.clj:565) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$mk_assignments$iter__4668__4672$fn__4673.invoke(nimbus.clj:967) > ~[classes/:?] > at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] > at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] > at clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?] > at clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?] > at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) > ~[clojure-1.7.0.jar:?] > at clojure.core.protocols$fn__6506.invoke(protocols.clj:101) > ~[clojure-1.7.0.jar:?] > at > clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) > ~[clojure-1.7.0.jar:?] > at clojure.core$reduce.invoke(core.clj:6519) ~[clojure-1.7.0.jar:?] > at clojure.core$into.invoke(core.clj:6600) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$mk_assignments.doInvoke(nimbus.clj:966) > ~[classes/:?] > at clojure.lang.RestFn.invoke(RestFn.java:410) ~[clojure-1.7.0.jar:?] > at > org.apache.storm.daemon.nimbus$fn__5354$exec_fn__579__auto5355$fn__5366$fn__5367.invoke(nimbus.clj:2409) > ~[classes/:?] > at > org.apache.storm.daemon.nimbus$fn__5354$exec_fn__579__auto5355$fn__5366.invoke(nimbus.clj:2408) > ~[classes/:?] 
> at clojure.lang.AFn.run(AFn.java:22) ~[clojure-1.7.0.jar:?] > at org.apache.storm.StormTimer$1.run(StormTimer.java:190) ~[classes/:?] > at org.apache.storm.StormTimer$StormTimerTask.run(StormTimer.java:83) > [classes/:?] > {noformat} > The exception is thrown from into-array, which is called from below line: > {code} > ((fn [ & maps ] (Utils/joinMaps (into-array (into [component->executors] > maps) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
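To make the failure mode above concrete: the IllegalArgumentException in the stack trace comes from java.lang.reflect.Array.set, which rejects any element whose class does not fit the array's component type. The standalone Java snippet below is a sketch of that JVM behavior only, not the actual patch; HashMap and TreeMap stand in for Clojure's PersistentArrayMap and PersistentHashMap. The real fix is on the Clojure side, roughly (into-array java.util.Map (into [component->executors] maps)), so that the array component type is the common java.util.Map expected by Utils.joinMaps.
{code:title=ArrayElementTypeMismatchDemo.java|borderStyle=solid}
import java.lang.reflect.Array;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class ArrayElementTypeMismatchDemo {
    public static void main(String[] args) {
        Map<String, Integer> first = new HashMap<>();   // stands in for PersistentArrayMap
        Map<String, Integer> second = new TreeMap<>();  // stands in for PersistentHashMap

        // What (into-array coll) does without a type hint: type the array after the
        // first element's concrete class, here effectively HashMap[].
        Object typedByFirstElement = Array.newInstance(first.getClass(), 2);
        Array.set(typedByFirstElement, 0, first);
        try {
            Array.set(typedByFirstElement, 1, second); // TreeMap is not a HashMap
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());        // "array element type mismatch"
        }

        // Typing the array with the common supertype java.util.Map, which also matches
        // the Map[] parameter of Utils.joinMaps, avoids the mismatch.
        Map<?, ?>[] typedAsMap = (Map<?, ?>[]) Array.newInstance(Map.class, 2);
        typedAsMap[0] = first;
        typedAsMap[1] = second;
    }
}
{code}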
[jira] [Created] (STORM-2067) "array element type mismatch" from compute-executors in nimbus.clj
Jungtaek Lim created STORM-2067: --- Summary: "array element type mismatch" from compute-executors in nimbus.clj Key: STORM-2067 URL: https://issues.apache.org/jira/browse/STORM-2067 Project: Apache Storm Issue Type: Bug Components: storm-core Reporter: Jungtaek Lim Assignee: Jungtaek Lim In some scenarios, Nimbus throws "java.lang.IllegalArgumentException: array element type mismatch". {noformat} 08:49:35.321 [timer] ERROR o.a.s.d.nimbus - Error when processing event java.lang.IllegalArgumentException: array element type mismatch at java.lang.reflect.Array.set(Native Method) ~[?:1.8.0_66] at clojure.lang.RT.seqToTypedArray(RT.java:1719) ~[clojure-1.7.0.jar:?] at clojure.lang.RT.seqToTypedArray(RT.java:1692) ~[clojure-1.7.0.jar:?] at clojure.core$into_array.invoke(core.clj:3319) ~[clojure-1.7.0.jar:?] at org.apache.storm.daemon.nimbus$compute_executors$fn__4307.doInvoke(nimbus.clj:645) ~[classes/:?] at clojure.lang.RestFn.invoke(RestFn.java:408) ~[clojure-1.7.0.jar:?] at org.apache.storm.daemon.nimbus$compute_executors.invoke(nimbus.clj:645) ~[classes/:?] at org.apache.storm.daemon.nimbus$compute_executor__GT_component.invoke(nimbus.clj:655) ~[classes/:?] at org.apache.storm.daemon.nimbus$read_topology_details.invoke(nimbus.clj:565) ~[classes/:?] at org.apache.storm.daemon.nimbus$mk_assignments$iter__4668__4672$fn__4673.invoke(nimbus.clj:967) ~[classes/:?] at clojure.lang.LazySeq.sval(LazySeq.java:40) ~[clojure-1.7.0.jar:?] at clojure.lang.LazySeq.seq(LazySeq.java:49) ~[clojure-1.7.0.jar:?] at clojure.lang.RT.seq(RT.java:507) ~[clojure-1.7.0.jar:?] at clojure.core$seq__4128.invoke(core.clj:137) ~[clojure-1.7.0.jar:?] at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30) ~[clojure-1.7.0.jar:?] at clojure.core.protocols$fn__6506.invoke(protocols.clj:101) ~[clojure-1.7.0.jar:?] at clojure.core.protocols$fn__6452$G__6447__6465.invoke(protocols.clj:13) ~[clojure-1.7.0.jar:?] at clojure.core$reduce.invoke(core.clj:6519) ~[clojure-1.7.0.jar:?] at clojure.core$into.invoke(core.clj:6600) ~[clojure-1.7.0.jar:?] at org.apache.storm.daemon.nimbus$mk_assignments.doInvoke(nimbus.clj:966) ~[classes/:?] at clojure.lang.RestFn.invoke(RestFn.java:410) ~[clojure-1.7.0.jar:?] at org.apache.storm.daemon.nimbus$fn__5354$exec_fn__579__auto5355$fn__5366$fn__5367.invoke(nimbus.clj:2409) ~[classes/:?] at org.apache.storm.daemon.nimbus$fn__5354$exec_fn__579__auto5355$fn__5366.invoke(nimbus.clj:2408) ~[classes/:?] at clojure.lang.AFn.run(AFn.java:22) ~[clojure-1.7.0.jar:?] at org.apache.storm.StormTimer$1.run(StormTimer.java:190) ~[classes/:?] at org.apache.storm.StormTimer$StormTimerTask.run(StormTimer.java:83) [classes/:?] {noformat} The exception is thrown from into-array, which is called from below line: {code} ((fn [ & maps ] (Utils/joinMaps (into-array (into [component->executors] maps) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-2059) storm-submit-tools is getting rat failures.
[ https://issues.apache.org/jira/browse/STORM-2059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15439173#comment-15439173 ] Jungtaek Lim commented on STORM-2059: - STORM-2054 is for resolving this. Could you review pull request for STORM-2054 and also verify that STORM-2054 resolves STORM-2059 too? Thanks in advance! > storm-submit-tools is getting rat failures. > --- > > Key: STORM-2059 > URL: https://issues.apache.org/jira/browse/STORM-2059 > Project: Apache Storm > Issue Type: Bug > Components: storm-submit-tools >Reporter: Robert Joseph Evans > Fix For: 2.0.0 > > > https://travis-ci.org/revans2/incubator-storm/jobs/155187695 > {code} > [INFO] Rat check: Summary of files. Unapproved: 17 unknown: 17 generated: 0 > approved: 14 licence. > ... > [INFO] storm-submit-tools . FAILURE [ 2.296 > s] > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2057) Support JOIN statement in Storm SQL
Jungtaek Lim created STORM-2057: --- Summary: Support JOIN statement in Storm SQL Key: STORM-2057 URL: https://issues.apache.org/jira/browse/STORM-2057 Project: Apache Storm Issue Type: New Feature Components: storm-sql Reporter: Jungtaek Lim It would be great to support the JOIN statement in Storm SQL. http://storm.apache.org/releases/1.0.1/Trident-API-Overview.html According to this page, Trident supports 'join' across multiple spouts, which is done by synchronizing the spouts. This might be a good starting point for the Storm SQL join feature. It restricts the boundary of the join to a single batch, but that is acceptable for now since aggregation is implemented under the same restriction. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
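For reference, a minimal sketch of the batch-scoped Trident join referred to above, following the example on the linked Trident API Overview page; the spouts and field names are assumptions for illustration and are not part of any existing Storm SQL code.
{code:title=TridentJoinSketch.java|borderStyle=solid}
import org.apache.storm.trident.Stream;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.spout.IBatchSpout;
import org.apache.storm.tuple.Fields;

public class TridentJoinSketch {
    // Assumed spouts: orderSpout emits ["key", "a", "b"], shipmentSpout emits ["x", "c"].
    public static TridentTopology buildJoinTopology(IBatchSpout orderSpout, IBatchSpout shipmentSpout) {
        TridentTopology topology = new TridentTopology();
        Stream orders = topology.newStream("orders", orderSpout);
        Stream shipments = topology.newStream("shipments", shipmentSpout);

        // Tuples whose "key" equals "x" are joined within a batch, emitting
        // ["key", "a", "b", "c"]; the join shares the per-batch boundary that the
        // existing GROUP BY support already relies on.
        topology.join(orders, new Fields("key"),
                      shipments, new Fields("x"),
                      new Fields("key", "a", "b", "c"));
        return topology;
    }
}
{code}
A Storm SQL JOIN could then be planned down to this kind of Trident join, with the same per-batch semantics as the current aggregation support.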
[jira] [Commented] (STORM-1446) Compile the Calcite logical plan to Storm physical plan
[ https://issues.apache.org/jira/browse/STORM-1446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15436979#comment-15436979 ] Jungtaek Lim commented on STORM-1446: - Just posted a question to the Calcite dev@ list to get an explanation of the meaning of the comments. http://mail-archives.apache.org/mod_mbox/calcite-dev/201608.mbox/%3CCAF5108iN77REQqJ=1xse+se_8mv1fauo9_byzewc3vvaawm...@mail.gmail.com%3E > Compile the Calcite logical plan to Storm physical plan > --- > > Key: STORM-1446 > URL: https://issues.apache.org/jira/browse/STORM-1446 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Haohui Mai > > As suggested in > https://issues.apache.org/jira/browse/STORM-1040?focusedCommentId=15036651&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15036651, > compiling the logical plan from Calcite down to Storm physical plan will > clarify the implementation of StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2050) [storm-sql] Support User Defined Aggregate Function for Trident mode
[ https://issues.apache.org/jira/browse/STORM-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2050. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Merged into master and 1.x-branch. > [storm-sql] Support User Defined Aggregate Function for Trident mode > > > Key: STORM-2050 > URL: https://issues.apache.org/jira/browse/STORM-2050 > Project: Apache Storm > Issue Type: Task > Components: storm-sql >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.1.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > Currently UDAF (User defined aggregated function) is supported for standalone > mode. (STORM-1709) > Now GROUP BY for Trident mode is in progress (STORM-1434), so we would need > to support UDAF for Trident mode. > Note: DDL for UDAF is supported so we don't need to address it again for > Trident mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (STORM-1444) Support EXPLAIN statement in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15436500#comment-15436500 ] Jungtaek Lim edited comment on STORM-1444 at 8/25/16 8:39 AM: -- Since we construct TridentTopology to submit to Nimbus, for now we can print the internal graph of TridentTopology to show physical plan for that. (Currently printing graph is really verbose and not showing which operation is in node. It should be addressed first.) Btw, it might be changed to show Storm physical algebra once STORM-1446 will be resolved. Explain of Spark SQL for example, {code} scala> spark.sql("SELECT GRPID, COUNT(*) AS CNT, MAX(AGE) AS MAX_AGE, MIN(AGE) AS MIN_AGE, AVG(AGE) AS AVG_AGE, MAX(AGE) - MIN(AGE) AS DIFF FROM FOO WHERE ID > 2 GROUP BY GRPID").explain == Physical Plan == *HashAggregate(keys=[GRPID#64], functions=[count(1), max(AGE#67), min(AGE#67), avg(cast(AGE#67 as bigint)), max(AGE#67), min(AGE#67)]) +- Exchange hashpartitioning(GRPID#64, 200) +- *HashAggregate(keys=[GRPID#64], functions=[partial_count(1), partial_max(AGE#67), partial_min(AGE#67), partial_avg(cast(AGE#67 as bigint)), partial_max(AGE#67), partial_min(AGE#67)]) +- *Project [grpid#64, age#67] +- *Filter (isnotnull(ID#63) && (ID#63 > 2)) +- LocalTableScan [id#63, grpid#64, name#65, addr#66, age#67] {code} It shows only SQL related operations. We also need to do that. was (Author: kabhwan): Since we construct TridentTopology to submit to Nimbus, for now we can print the internal graph of TridentTopology to show physical plan for that. (Currently printing graph is really verbose and not showing which operation is in node. It should be addressed first.) Btw, it might be changed to show Storm physical algebra once STORM-1446 will be resolved. Explain of Spark SQL for example, ``` scala> spark.sql("SELECT GRPID, COUNT(*) AS CNT, MAX(AGE) AS MAX_AGE, MIN(AGE) AS MIN_AGE, AVG(AGE) AS AVG_AGE, MAX(AGE) - MIN(AGE) AS DIFF FROM FOO WHERE ID > 2 GROUP BY GRPID").explain == Physical Plan == *HashAggregate(keys=[GRPID#64], functions=[count(1), max(AGE#67), min(AGE#67), avg(cast(AGE#67 as bigint)), max(AGE#67), min(AGE#67)]) +- Exchange hashpartitioning(GRPID#64, 200) +- *HashAggregate(keys=[GRPID#64], functions=[partial_count(1), partial_max(AGE#67), partial_min(AGE#67), partial_avg(cast(AGE#67 as bigint)), partial_max(AGE#67), partial_min(AGE#67)]) +- *Project [grpid#64, age#67] +- *Filter (isnotnull(ID#63) && (ID#63 > 2)) +- LocalTableScan [id#63, grpid#64, name#65, addr#66, age#67] ``` It shows only SQL related operations. We also need to do that. > Support EXPLAIN statement in StormSQL > - > > Key: STORM-1444 > URL: https://issues.apache.org/jira/browse/STORM-1444 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > It is useful to support the `EXPLAIN` statement in StormSQL to allow > debugging and customizing the topology generated by StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1444) Support EXPLAIN statement in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15436500#comment-15436500 ] Jungtaek Lim commented on STORM-1444: - Since we construct TridentTopology to submit to Nimbus, for now we can print the internal graph of TridentTopology to show physical plan for that. (Currently printing graph is really verbose and not showing which operation is in node. It should be addressed first.) Btw, it might be changed to show Storm physical algebra once STORM-1446 will be resolved. Explain of Spark SQL for example, ``` scala> spark.sql("SELECT GRPID, COUNT(*) AS CNT, MAX(AGE) AS MAX_AGE, MIN(AGE) AS MIN_AGE, AVG(AGE) AS AVG_AGE, MAX(AGE) - MIN(AGE) AS DIFF FROM FOO WHERE ID > 2 GROUP BY GRPID").explain == Physical Plan == *HashAggregate(keys=[GRPID#64], functions=[count(1), max(AGE#67), min(AGE#67), avg(cast(AGE#67 as bigint)), max(AGE#67), min(AGE#67)]) +- Exchange hashpartitioning(GRPID#64, 200) +- *HashAggregate(keys=[GRPID#64], functions=[partial_count(1), partial_max(AGE#67), partial_min(AGE#67), partial_avg(cast(AGE#67 as bigint)), partial_max(AGE#67), partial_min(AGE#67)]) +- *Project [grpid#64, age#67] +- *Filter (isnotnull(ID#63) && (ID#63 > 2)) +- LocalTableScan [id#63, grpid#64, name#65, addr#66, age#67] ``` It shows only SQL related operations. We also need to do that. > Support EXPLAIN statement in StormSQL > - > > Key: STORM-1444 > URL: https://issues.apache.org/jira/browse/STORM-1444 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > It is useful to support the `EXPLAIN` statement in StormSQL to allow > debugging and customizing the topology generated by StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1433) StormSQL Phase II
[ https://issues.apache.org/jira/browse/STORM-1433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15436252#comment-15436252 ] Jungtaek Lim commented on STORM-1433: - Added you to the contributor list. You can now assign the issue to yourself. Please let me know if it's not working. > StormSQL Phase II > - > > Key: STORM-1433 > URL: https://issues.apache.org/jira/browse/STORM-1433 > Project: Apache Storm > Issue Type: Epic > Components: storm-sql >Reporter: Haohui Mai > > This epic tracks the effort of the phase II development of StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1433) StormSQL Phase II
[ https://issues.apache.org/jira/browse/STORM-1433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15436173#comment-15436173 ] Jungtaek Lim commented on STORM-1433: - If you look at the older codebase, you can see that Storm SQL generates the code for constructing the Trident topology and then compiles it. I changed it to construct the Trident topology directly, so understanding the code should be easier for you now. If you don't mind starting your contributions somewhere other than the core itself, I guess you can take STORM-1459 as a first task. Though I'm still experimenting with and learning this area, please let me know if you have any questions regarding Storm SQL via dev@ or an issue comment. > StormSQL Phase II > - > > Key: STORM-1433 > URL: https://issues.apache.org/jira/browse/STORM-1433 > Project: Apache Storm > Issue Type: Epic > Components: storm-sql >Reporter: Haohui Mai > > This epic tracks the effort of the phase II development of StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1433) StormSQL Phase II
[ https://issues.apache.org/jira/browse/STORM-1433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15435879#comment-15435879 ] Jungtaek Lim commented on STORM-1433: - [~mauzhang] Manu, are you still interested in contributing to storm-sql? If you're familiar with Calcite and Trident then you should get familiar with storm-sql easily. As you may have noticed, I'm addressing the issues sequentially as best I can. So I would really appreciate it if you came in, had a discussion with me, and picked up some issues. Thanks in advance! > StormSQL Phase II > - > > Key: STORM-1433 > URL: https://issues.apache.org/jira/browse/STORM-1433 > Project: Apache Storm > Issue Type: Epic > Components: storm-sql >Reporter: Haohui Mai > > This epic tracks the effort of the phase II development of StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1434) Support the GROUP BY clause in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1434. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Merged into master and 1.x-branch. > Support the GROUP BY clause in StormSQL > --- > > Key: STORM-1434 > URL: https://issues.apache.org/jira/browse/STORM-1434 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.1.0 > > Time Spent: 5h > Remaining Estimate: 0h > > This jira tracks the effort of implement the support `GROUP BY` clause in > StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2054) DependencyResolver should be aware of relative path and absolute path
Jungtaek Lim created STORM-2054: --- Summary: DependencyResolver should be aware of relative path and absolute path Key: STORM-2054 URL: https://issues.apache.org/jira/browse/STORM-2054 Project: Apache Storm Issue Type: Bug Components: storm-submit-tools Affects Versions: 1.1.0 Reporter: Jungtaek Lim Assignee: Jungtaek Lim Priority: Critical DependencyResolver always creates its directory relative to storm.home or the current working directory, which is the intended behavior for relative paths but not for absolute paths. Furthermore, DependencyResolverTest doesn't remove its temporary directory after testing. The test creates a new temporary absolute path, but due to this bug the temporary directory is created under the working directory, which prevents it from being cleaned up and ultimately causes a RAT error on all builds. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
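A minimal sketch of the path handling described above, assuming a hypothetical helper method; the actual DependencyResolver API, method names, and configuration keys may differ.
{code:title=PathAwareResolverSketch.java|borderStyle=solid}
import java.io.File;

public class PathAwareResolverSketch {
    // Hypothetical helper (names are not from the actual DependencyResolver):
    // only prefix storm.home (or, failing that, the current working directory)
    // when the configured path is relative; leave absolute paths untouched.
    public static File resolveLocalRepoDir(String configuredPath, String stormHome) {
        File candidate = new File(configuredPath);
        if (candidate.isAbsolute()) {
            return candidate;
        }
        String base = (stormHome != null) ? stormHome : System.getProperty("user.dir");
        return new File(base, configuredPath);
    }
}
{code}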
[jira] [Commented] (STORM-1870) Allow FluxShellBolt/Spout set custom "componentConfig" via yaml.
[ https://issues.apache.org/jira/browse/STORM-1870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434124#comment-15434124 ] Jungtaek Lim commented on STORM-1870: - [~darkless] Sorry it seems a bit late to visit. I found this while taking a look at 'waiting on upstream' label on streamparse. Patch is ready for reviewing. > Allow FluxShellBolt/Spout set custom "componentConfig" via yaml. > > > Key: STORM-1870 > URL: https://issues.apache.org/jira/browse/STORM-1870 > Project: Apache Storm > Issue Type: Improvement > Components: Flux >Reporter: Pavel Grochal >Assignee: Jungtaek Lim > Time Spent: 10m > Remaining Estimate: 0h > > FluxShellBolt/Spout should have option to provide custom config option when > importing topology from YAML file. > We use this to provide custom configuration for example to our python > RabbitMQ bolts. (Passing strings and list of strings). > We are using this code: > {code:title=FluxShellBolt.java|borderStyle=solid} > public class FluxShellBolt extends ShellBolt implements IRichBolt { > > //... (rest of the class) > public void addComponentConfig(String key, Object value) { > this.componentConfig.put(key, value); > } > public void addComponentConfig(String key, Object[] value) { > this.componentConfig.put(key, value); > } > } > {code} > And our YAML file: > {code:title=topology.yaml|borderStyle=solid} > bolts: > - className: org.apache.storm.flux.wrappers.bolts.FluxShellBolt > configMethods: > - name: addComponentConfig > args: [rabbitmq.configfile, etc/rabbit.yml] > - name: addComponentConfig > args: > - publisher.data_paths > - [actions] > > ... (rest of yaml file) > {code} > It works fine, however it produces this type of warning: > {code} > WARN o.a.s.f.FluxBuilder - Found multiple invokable methods for class class > org.apache.storm.flux.wrappers.bolts.FluxShellBolt, method > addComponentConfig, given arguments [publisher.data_paths, [actions]]. Using > the last one found. > {code} > Which fortunately happens to be correct method, but it is not correct > solution. > Any ideas? > It is quite needed to provide custom config to ShellSpout/Bolt, since we run > all spouts/bolts in python via this option. It would be nice to have this > option in official release. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (STORM-1870) Allow FluxShellBolt/Spout set custom "componentConfig" via yaml.
[ https://issues.apache.org/jira/browse/STORM-1870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim reassigned STORM-1870: --- Assignee: Jungtaek Lim > Allow FluxShellBolt/Spout set custom "componentConfig" via yaml. > > > Key: STORM-1870 > URL: https://issues.apache.org/jira/browse/STORM-1870 > Project: Apache Storm > Issue Type: Improvement > Components: Flux >Reporter: Pavel Grochal >Assignee: Jungtaek Lim > Time Spent: 10m > Remaining Estimate: 0h > > FluxShellBolt/Spout should have option to provide custom config option when > importing topology from YAML file. > We use this to provide custom configuration for example to our python > RabbitMQ bolts. (Passing strings and list of strings). > We are using this code: > {code:title=FluxShellBolt.java|borderStyle=solid} > public class FluxShellBolt extends ShellBolt implements IRichBolt { > > //... (rest of the class) > public void addComponentConfig(String key, Object value) { > this.componentConfig.put(key, value); > } > public void addComponentConfig(String key, Object[] value) { > this.componentConfig.put(key, value); > } > } > {code} > And our YAML file: > {code:title=topology.yaml|borderStyle=solid} > bolts: > - className: org.apache.storm.flux.wrappers.bolts.FluxShellBolt > configMethods: > - name: addComponentConfig > args: [rabbitmq.configfile, etc/rabbit.yml] > - name: addComponentConfig > args: > - publisher.data_paths > - [actions] > > ... (rest of yaml file) > {code} > It works fine, however it produces this type of warning: > {code} > WARN o.a.s.f.FluxBuilder - Found multiple invokable methods for class class > org.apache.storm.flux.wrappers.bolts.FluxShellBolt, method > addComponentConfig, given arguments [publisher.data_paths, [actions]]. Using > the last one found. > {code} > Which fortunately happens to be correct method, but it is not correct > solution. > Any ideas? > It is quite needed to provide custom config to ShellSpout/Bolt, since we run > all spouts/bolts in python via this option. It would be nice to have this > option in official release. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2053) Support connection between existing Trident Stream and Storm SQL (Input / Output)
Jungtaek Lim created STORM-2053: --- Summary: Support connection between existing Trident Stream and Storm SQL (Input / Output) Key: STORM-2053 URL: https://issues.apache.org/jira/browse/STORM-2053 Project: Apache Storm Issue Type: Improvement Components: storm-sql Reporter: Jungtaek Lim Currently Storm SQL requires users to set input / output data sources via DDL, which restricts the whole topology to being constructed by SQL statements only. It might be good to support hybrid construction of a Trident topology by providing an API that connects the Trident API and Storm SQL. (It would be a concatenation of Streams.) Things to check: Trident doesn't preserve type information, which can be a blocker for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1435) Build a single jar with dependency for StormSQL dependency
[ https://issues.apache.org/jira/browse/STORM-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1435. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 STORM-2016 covers this with a whole new approach. Now we don't need to build an uber jar for storm-sql. Closing. > Build a single jar with dependency for StormSQL dependency > -- > > Key: STORM-1435 > URL: https://issues.apache.org/jira/browse/STORM-1435 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.1.0 > > Currently StormSQL requires all dependency of the topology to reside in > either the `lib` or the `extlib` directory. It will greatly improve the > usability if StormSQL can provide a mechanism to pack all dependency with the > jar compiled from the topology. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1993) Update storm-sql README to have actual dependencies
[ https://issues.apache.org/jira/browse/STORM-1993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1993. - Resolution: Fixed Fix Version/s: 1.0.3 1.0.2 1.0.1 1.0.0 Only merged into 1.0.x branch. Also update website to reflect this change. Fix version/s are for that. > Update storm-sql README to have actual dependencies > --- > > Key: STORM-1993 > URL: https://issues.apache.org/jira/browse/STORM-1993 > Project: Apache Storm > Issue Type: Documentation > Components: storm-sql >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > Fix For: 1.0.0, 1.0.1, 1.0.2, 1.0.3 > > Time Spent: 0.5h > Remaining Estimate: 0h > > http://storm.apache.org/releases/1.0.1/storm-sql.html > In order to run storm-sql-kafka example, the document states that users need > to copy these jar files to extlib: > curator-client-2.5.0.jar, curator-framework-2.5.0.jar, zookeeper-3.4.6.jar, > scala-library-2.10.4.jar, kafka-clients-0.8.2.1.jar, kafka_2.10-0.8.2.1.jar, > metrics-core-2.2.0.jar, json-simple-1.1.1.jar, > jackson-annotations-2.6.0.jar,storm-kafka-\*.jar > storm-sql-kafka-\*.jar,storm-sql-runtime-\*.jar > But in fact this is not enough to run the example from Storm 1.0.2 RC3. > I need to copy below things to extlib to make workers running properly. > {code} > calcite-avatica-1.4.0-incubating.jar > calcite-core-1.4.0-incubating.jar > calcite-linq4j-1.4.0-incubating.jar > commons-lang-2.6.jar > curator-client-2.5.0.jar > curator-framework-2.5.0.jar > guava-16.0.1.jar > jackson-annotations-2.6.0.jar > jackson-core-2.6.3.jar > jackson-databind-2.6.3.jar > json-simple-1.1.1.jar > kafka-clients-0.8.2.1.jar > kafka_2.10-0.8.2.1.jar > metrics-core-2.2.0.jar > scala-library-2.10.4.jar > storm-kafka-1.0.2.jar > storm-sql-kafka-1.0.2.jar > storm-sql-runtime-1.0.2.jar > zookeeper-3.4.6.jar > {code} > While I feel storm-sql also needs to provide uber jar with shaded > dependencies (since copying them to extlib affects worker classpath which > breaks user topologies) guide document should be updated to run the example > properly. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2016) Topology submission improvement: support adding local jars and maven artifacts on submission
[ https://issues.apache.org/jira/browse/STORM-2016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2016. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Merged into master and 1.x-branch. > Topology submission improvement: support adding local jars and maven > artifacts on submission > > > Key: STORM-2016 > URL: https://issues.apache.org/jira/browse/STORM-2016 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core >Reporter: Jungtaek Lim >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.1.0 > > Time Spent: 3h > Remaining Estimate: 0h > > This JIRA tracks actual work on below proposal / design document. > https://cwiki.apache.org/confluence/display/STORM/A.+Design+doc%3A+adding+jars+and+maven+artifacts+at+submission > Proposal discussion thread is here: > http://mail-archives.apache.org/mod_mbox/storm-dev/201608.mbox/%3ccaf5108i9+tjanz0lgrktmkvqel7f+53k9uyzxct6zhsu6oh...@mail.gmail.com%3E > Let's post on discussion thread if we have any opinions / ideas on this > instead of leaving comments on this issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2047) In secure setup the log page can't be viewed
[ https://issues.apache.org/jira/browse/STORM-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2047. - Resolution: Fixed Published. Marking as fixed. > In secure setup the log page can't be viewed > > > Key: STORM-2047 > URL: https://issues.apache.org/jira/browse/STORM-2047 > Project: Apache Storm > Issue Type: Bug > Components: documentation >Reporter: Raghav Kumar Gautam >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Attachments: screenshot-1.png > > Time Spent: 1h > Remaining Estimate: 0h > > This is about the topology inspector feature. When we click events button on > the bolt page, we expect that we will get to a log page which will show > tuples. Instead we get authentication required error, see attached image. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-2047) In secure setup the log page can't be viewed
[ https://issues.apache.org/jira/browse/STORM-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15431884#comment-15431884 ] Jungtaek Lim commented on STORM-2047: - Thanks [~arunmahadevan], I merged the change to the master, 1.x, and 1.0.x branches. I haven't applied it to the website yet, so I'll take care of that and then close this issue. > In secure setup the log page can't be viewed > > > Key: STORM-2047 > URL: https://issues.apache.org/jira/browse/STORM-2047 > Project: Apache Storm > Issue Type: Bug > Components: documentation >Reporter: Raghav Kumar Gautam >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Attachments: screenshot-1.png > > Time Spent: 1h > Remaining Estimate: 0h > > This is about the topology inspector feature. When we click events button on > the bolt page, we expect that we will get to a log page which will show > tuples. Instead we get authentication required error, see attached image. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2047) In secure setup the log page can't be viewed
[ https://issues.apache.org/jira/browse/STORM-2047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2047: Fix Version/s: 1.0.3 1.1.0 2.0.0 > In secure setup the log page can't be viewed > > > Key: STORM-2047 > URL: https://issues.apache.org/jira/browse/STORM-2047 > Project: Apache Storm > Issue Type: Bug > Components: documentation >Reporter: Raghav Kumar Gautam >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Attachments: screenshot-1.png > > Time Spent: 1h > Remaining Estimate: 0h > > This is about the topology inspector feature. When we click events button on > the bolt page, we expect that we will get to a log page which will show > tuples. Instead we get authentication required error, see attached image. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2045) NPE in SpoutExecutor in 2.0 branch
[ https://issues.apache.org/jira/browse/STORM-2045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2045. - Resolution: Fixed Thanks [~Cody], I merged into master. > NPE in SpoutExecutor in 2.0 branch > -- > > Key: STORM-2045 > URL: https://issues.apache.org/jira/browse/STORM-2045 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 2.0.0 >Reporter: Cody >Assignee: Cody > Fix For: 2.0.0 > > Time Spent: 50m > Remaining Estimate: 0h > > This issue was raised in [STORM-1949], but since the original issue mainly > discusses about whether to disable ABP by default, I'd like to pick this NPE > as another issue. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1985) Provide a tool for showing and killing corrupted topology
[ https://issues.apache.org/jira/browse/STORM-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15430399#comment-15430399 ] Jungtaek Lim commented on STORM-1985: - [~bkamal] Thanks for working on this. Since Storm already has several daemons and also a REST API server, I'd like to see the admin tool implemented without adding more daemons. My intention was just a command-line tool which communicates directly with ZK, Nimbus, or the like (authentication still needs to be addressed), but this was originally an idea from [~revans2] so I would like to hear his opinion on this. > Provide a tool for showing and killing corrupted topology > - > > Key: STORM-1985 > URL: https://issues.apache.org/jira/browse/STORM-1985 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Reporter: Jungtaek Lim > Labels: newbie > Attachments: proposal_admin_tool_design.docx > > > After STORM-1976, Nimbus doesn't clean up corrupted topologies. > (corrupted topology means the topology whose codes are not available on > blobstore.) > Also after STORM-1977, no Nimbus is gaining leadership if one or more > topologies are corrupted, which means all nimbuses will be no-op. > So we should provide a tool to kill specific topology without accessing > leader nimbus (because there's no leader nimbus at that time). The tool > should also determine which topologies are corrupted, and show its list or > clean up automatically. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1994) Add table with per-topology & worker resource usage and components in (new) supervisor and topology pages
[ https://issues.apache.org/jira/browse/STORM-1994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1994. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 Thanks [~abellina] for the great work. I merged into master and 1.x branch. > Add table with per-topology & worker resource usage and components in (new) > supervisor and topology pages > - > > Key: STORM-1994 > URL: https://issues.apache.org/jira/browse/STORM-1994 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core, storm-ui >Reporter: Alessandro Bellina >Assignee: Alessandro Bellina >Priority: Minor > Fix For: 2.0.0, 1.1.0 > > Attachments: supervisor_page_worker_table.png, > topology_page_worker_table.png > > Time Spent: 1h 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2050) [storm-sql] Support User Defined Aggregate Function for Trident mode
Jungtaek Lim created STORM-2050: --- Summary: [storm-sql] Support User Defined Aggregate Function for Trident mode Key: STORM-2050 URL: https://issues.apache.org/jira/browse/STORM-2050 Project: Apache Storm Issue Type: Task Components: storm-sql Reporter: Jungtaek Lim Currently UDAF (User defined aggregated function) is supported for standalone mode. (STORM-1709) Now GROUP BY for Trident mode is in progress (STORM-1434), so we would need to support UDAF for Trident mode. Note: DDL for UDAF is supported so we don't need to address it again for Trident mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-2042) Nimbus client connections not closed properly causing connection leaks
[ https://issues.apache.org/jira/browse/STORM-2042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2042. - Resolution: Fixed Fix Version/s: (was: 1.x) 1.0.3 1.1.0 Thanks [~arunmahadevan], I also merged into master and 1.0.x branch. (Harsha merged this into 1.x-branch) > Nimbus client connections not closed properly causing connection leaks > -- > > Key: STORM-2042 > URL: https://issues.apache.org/jira/browse/STORM-2042 > Project: Apache Storm > Issue Type: Bug >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.1.0, 1.0.3 > > Time Spent: 40m > Remaining Estimate: 0h > > The nimbus client connections are not closed properly causing connection > leaks. After the number of connections exceed nimbus.thrift.threads, a > RejectedExecutionException is thrown. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-2048) Refactor code blocks which are ported to for-loop to Java Stream API
[ https://issues.apache.org/jira/browse/STORM-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15427487#comment-15427487 ] Jungtaek Lim commented on STORM-2048: - We might have multiple pull requests for this issue, so please hold off on marking it as resolved when a pull request is merged to master. > Refactor code blocks which are ported to for-loop to Java Stream API > > > Key: STORM-2048 > URL: https://issues.apache.org/jira/browse/STORM-2048 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core >Affects Versions: 2.0.0 >Reporter: Jungtaek Lim > > We just changed the minimum requirement for the master branch to Java 1.8 > in STORM-2041. > Thanks to that change, we can convert code blocks that were ported to > for-loops back to a similar functional style using the Java Stream API. > We could even broaden the scope of this issue to apply other benefits of > Java 8, or file separate issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (STORM-2048) Refactor code blocks which are ported to for-loop to Java Stream API
Jungtaek Lim created STORM-2048: --- Summary: Refactor code blocks which are ported to for-loop to Java Stream API Key: STORM-2048 URL: https://issues.apache.org/jira/browse/STORM-2048 Project: Apache Storm Issue Type: Improvement Components: storm-core Affects Versions: 2.0.0 Reporter: Jungtaek Lim We just changed the minimum requirement for the master branch to Java 1.8 in STORM-2041. Thanks to that change, we can convert code blocks that were ported to for-loops back to a similar functional style using the Java Stream API. We could even broaden the scope of this issue to apply other benefits of Java 8, or file separate issues. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
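As an illustration of the kind of conversion this issue is about; the method and data names below are hypothetical and not taken from the Storm codebase.
{code:title=StreamRefactorSketch.java|borderStyle=solid}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamRefactorSketch {
    // Loop style, similar to how the ported Clojure code currently reads.
    public static List<String> aliveTopologyIdsLoop(Map<String, Boolean> topologyIdToAlive) {
        List<String> alive = new ArrayList<>();
        for (Map.Entry<String, Boolean> entry : topologyIdToAlive.entrySet()) {
            if (entry.getValue()) {
                alive.add(entry.getKey());
            }
        }
        return alive;
    }

    // The same logic expressed with the Java 8 Stream API.
    public static List<String> aliveTopologyIdsStream(Map<String, Boolean> topologyIdToAlive) {
        return topologyIdToAlive.entrySet().stream()
                .filter(Map.Entry::getValue)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
{code}
The stream version tends to read closer to the original Clojure than the ported for-loops do, which is the main motivation here.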
[jira] [Resolved] (STORM-2041) Make Java 8 as minimum requirement for 2.0 release
[ https://issues.apache.org/jira/browse/STORM-2041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-2041. - Resolution: Fixed Thanks [~sriharsha] for taking care of this. I merged into master. > Make Java 8 as minimum requirement for 2.0 release > -- > > Key: STORM-2041 > URL: https://issues.apache.org/jira/browse/STORM-2041 > Project: Apache Storm > Issue Type: Task >Reporter: Sriharsha Chintalapani >Assignee: Sriharsha Chintalapani > Fix For: 2.0.0 > > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1446) Compile the Calcite logical plan to Storm physical plan
[ https://issues.apache.org/jira/browse/STORM-1446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15426095#comment-15426095 ] Jungtaek Lim commented on STORM-1446: - [~wheat9] STORM-1433 is introducing the change, so please let me know if you think the change covers this issue. https://github.com/apache/storm/pull/1633 Thanks in advance! > Compile the Calcite logical plan to Storm physical plan > --- > > Key: STORM-1446 > URL: https://issues.apache.org/jira/browse/STORM-1446 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Haohui Mai > > As suggested in > https://issues.apache.org/jira/browse/STORM-1040?focusedCommentId=15036651&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15036651, > compiling the logical plan from Calcite down to Storm physical plan will > clarify the implementation of StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-2043) Nimbus should not make assignments crazy when Pacemaker down
[ https://issues.apache.org/jira/browse/STORM-2043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-2043: Fix Version/s: (was: 1.0.2) > Nimbus should not make assignments crazy when Pacemaker down > > > Key: STORM-2043 > URL: https://issues.apache.org/jira/browse/STORM-2043 > Project: Apache Storm > Issue Type: Improvement > Components: storm-core >Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0 > Environment: CentOS 6.5 >Reporter: Yuzhao Chen > Labels: patch, performance > Original Estimate: 672h > Remaining Estimate: 672h > > When Pacemaker goes down, all worker heartbeats are lost. These heartbeats > will need a long time to recover even if Pacemaker comes back up immediately, > since it can hold dozens of GB of heartbeat data in memory. While the worker > heartbeats are incomplete, Nimbus will think the workers have died (heartbeat > timeout) and keep reassigning those workers. But the workers are actually > healthy, so the reassignment cycles until the Pacemaker heartbeats recover. > During this time, the throughput of all topologies goes down. We should > avoid this, because Pacemaker has no HA. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1446) Compile the Calcite logical plan to Storm physical plan
[ https://issues.apache.org/jira/browse/STORM-1446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15425530#comment-15425530 ] Jungtaek Lim commented on STORM-1446: - [~wheat9] Could you clarify more on this issue: what's Storm physical plan? I can see the possibility to convert Calcite logical plan to Trident logical plan (not creating the code but using Trident features) so that Trident can optimize the topology, so I would like to know if you mean this, or you mean handling the assignment (scheduler). > Compile the Calcite logical plan to Storm physical plan > --- > > Key: STORM-1446 > URL: https://issues.apache.org/jira/browse/STORM-1446 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Haohui Mai > > As suggested in > https://issues.apache.org/jira/browse/STORM-1040?focusedCommentId=15036651&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15036651, > compiling the logical plan from Calcite down to Storm physical plan will > clarify the implementation of StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1915) Supervisor keeps restarting forever
[ https://issues.apache.org/jira/browse/STORM-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1915. - Resolution: Fixed Assignee: Jungtaek Lim Fix Version/s: 1.1.0 1.0.2 2.0.0 Resolving this since STORM-1934 was merged. > Supervisor keeps restarting forever > --- > > Key: STORM-1915 > URL: https://issues.apache.org/jira/browse/STORM-1915 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.1 > Environment: Linode 4GB running on KVM - Ubuntu 14.04 LTS >Reporter: Gergely Nagy >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.0.2, 1.1.0 > > > While submitting a topology to a 20 node 40 worker strong cluster, the > supervisor keeps throwing errors and keeps restarting the workers it is > supervising. > For this reason the topology never starts, instead it keeps dancing by > reassigning the bolts and spouts forever. > I'd love to attach the logs here but I can't find any upload button in the > JIRA form. > The error basically says: > {code} > 2016-06-18 12:04:26.589 o.a.s.config [WARN] Failed to get worker user for . > #error { > :cause /home/fogetti/downloads/apache-storm-1.0.1/storm-local/workers-users > (Is a directory) > :via > [{:type java.io.FileNotFoundException >:message > /home/fogetti/downloads/apache-storm-1.0.1/storm-local/workers-users (Is a > directory) >:at [java.io.FileInputStream open0 FileInputStream.java -2]}] > :trace > [[java.io.FileInputStream open0 FileInputStream.java -2] > [java.io.FileInputStream open FileInputStream.java 195] > [java.io.FileInputStream FileInputStream.java 138] > [clojure.java.io$fn__9189 invoke io.clj 229] > [clojure.java.io$fn__9102$G__9095__9109 invoke io.clj 69] > [clojure.java.io$fn__9201 invoke io.clj 258] > [clojure.java.io$fn__9102$G__9095__9109 invoke io.clj 69] > [clojure.java.io$fn__9163 invoke io.clj 165] > [clojure.java.io$fn__9115$G__9091__9122 invoke io.clj 69] > [clojure.java.io$reader doInvoke io.clj 102] > [clojure.lang.RestFn invoke RestFn.java 410] > [clojure.lang.AFn applyToHelper AFn.java 154] > [clojure.lang.RestFn applyTo RestFn.java 132] > [clojure.core$apply invoke core.clj 632] > [clojure.core$slurp doInvoke core.clj 6653] > [clojure.lang.RestFn invoke RestFn.java 410] > [org.apache.storm.config$get_worker_user invoke config.clj 239] > [org.apache.storm.daemon.supervisor$shutdown_worker invoke supervisor.clj > 281] > > [org.apache.storm.daemon.supervisor$kill_existing_workers_with_change_in_components > invoke supervisor.clj 536] > [org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9078 > invoke supervisor.clj 595] > [org.apache.storm.event$event_manager$fn__8630 invoke event.clj 40] > [clojure.lang.AFn run AFn.java 22] > [java.lang.Thread run Thread.java 745]]} > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1915) Supervisor keeps restarting forever
[ https://issues.apache.org/jira/browse/STORM-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15425463#comment-15425463 ] Jungtaek Lim commented on STORM-1915: - STORM-1934 is applied to 1.0.2 but I just forgot to close this. [~fogetti] Please try out 1.0.2 and see it works without the symptom, and reopen if it's not. Thanks! > Supervisor keeps restarting forever > --- > > Key: STORM-1915 > URL: https://issues.apache.org/jira/browse/STORM-1915 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.1 > Environment: Linode 4GB running on KVM - Ubuntu 14.04 LTS >Reporter: Gergely Nagy > > While submitting a topology to a 20 node 40 worker strong cluster, the > supervisor keeps throwing errors and keeps restarting the workers it is > supervising. > For this reason the topology never starts, instead it keeps dancing by > reassigning the bolts and spouts forever. > I'd love to attach the logs here but I can't find any upload button in the > JIRA form. > The error basically says: > {code} > 2016-06-18 12:04:26.589 o.a.s.config [WARN] Failed to get worker user for . > #error { > :cause /home/fogetti/downloads/apache-storm-1.0.1/storm-local/workers-users > (Is a directory) > :via > [{:type java.io.FileNotFoundException >:message > /home/fogetti/downloads/apache-storm-1.0.1/storm-local/workers-users (Is a > directory) >:at [java.io.FileInputStream open0 FileInputStream.java -2]}] > :trace > [[java.io.FileInputStream open0 FileInputStream.java -2] > [java.io.FileInputStream open FileInputStream.java 195] > [java.io.FileInputStream FileInputStream.java 138] > [clojure.java.io$fn__9189 invoke io.clj 229] > [clojure.java.io$fn__9102$G__9095__9109 invoke io.clj 69] > [clojure.java.io$fn__9201 invoke io.clj 258] > [clojure.java.io$fn__9102$G__9095__9109 invoke io.clj 69] > [clojure.java.io$fn__9163 invoke io.clj 165] > [clojure.java.io$fn__9115$G__9091__9122 invoke io.clj 69] > [clojure.java.io$reader doInvoke io.clj 102] > [clojure.lang.RestFn invoke RestFn.java 410] > [clojure.lang.AFn applyToHelper AFn.java 154] > [clojure.lang.RestFn applyTo RestFn.java 132] > [clojure.core$apply invoke core.clj 632] > [clojure.core$slurp doInvoke core.clj 6653] > [clojure.lang.RestFn invoke RestFn.java 410] > [org.apache.storm.config$get_worker_user invoke config.clj 239] > [org.apache.storm.daemon.supervisor$shutdown_worker invoke supervisor.clj > 281] > > [org.apache.storm.daemon.supervisor$kill_existing_workers_with_change_in_components > invoke supervisor.clj 536] > [org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9078 > invoke supervisor.clj 595] > [org.apache.storm.event$event_manager$fn__8630 invoke event.clj 40] > [clojure.lang.AFn run AFn.java 22] > [java.lang.Thread run Thread.java 745]]} > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1879) Supervisor may not shut down workers cleanly
[ https://issues.apache.org/jira/browse/STORM-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1879. - Resolution: Fixed Assignee: Jungtaek Lim Fix Version/s: 1.1.0 1.0.2 2.0.0 Resolving this since STORM-1934 was merged. > Supervisor may not shut down workers cleanly > > > Key: STORM-1879 > URL: https://issues.apache.org/jira/browse/STORM-1879 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.1 >Reporter: Stig Rohde Døssing >Assignee: Jungtaek Lim > Fix For: 2.0.0, 1.0.2, 1.1.0 > > Attachments: fix_missing_worker_pid.patch, nimbus-supervisor.zip, > supervisor.log > > > We've run into a strange issue with a zombie worker process. It looks like > the worker pid file somehow got deleted without the worker process shutting > down. This causes the supervisor to try repeatedly to kill the worker > unsuccessfully, and means multiple workers may be assigned to the same port. > The worker root folder sticks around because the worker is still heartbeating > to it. > It may or may not be related that we've seen Nimbus occasionally enter an > infinite loop of printing logs similar to the below. > {code} > 2016-05-19 14:55:14.196 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormconf.ser > 2016-05-19 14:55:14.210 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormcode.ser > 2016-05-19 14:55:14.218 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormconf.ser > 2016-05-19 14:55:14.256 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormcode.ser > 2016-05-19 14:55:14.273 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormcode.ser > 2016-05-19 14:55:14.316 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormconf.ser > {code} > Which continues until Nimbus is rebooted. We also see repeating blocks > similar to the logs below. > {code} > 2016-06-02 07:45:03.656 o.a.s.d.nimbus [INFO] Cleaning up > ZendeskTicketTopology-127-1464780171 > 2016-06-02 07:45:04.132 o.a.s.d.nimbus [INFO] > ExceptionKeyNotFoundException(msg:ZendeskTicketTopology-127-1464780171-stormjar.jar) > 2016-06-02 07:45:04.144 o.a.s.d.nimbus [INFO] > ExceptionKeyNotFoundException(msg:ZendeskTicketTopology-127-1464780171-stormconf.ser) > 2016-06-02 07:45:04.155 o.a.s.d.nimbus [INFO] > ExceptionKeyNotFoundException(msg:ZendeskTicketTopology-127-1464780171-stormcode.ser) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1879) Supervisor may not shut down workers cleanly
[ https://issues.apache.org/jira/browse/STORM-1879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15425457#comment-15425457 ] Jungtaek Lim commented on STORM-1879: - [~kevinconaway] I guess yes, since STORM-1934 is applied to 1.0.2; I just forgot to close the related issues. Please try out 1.0.2 and see whether it works without the symptom; reopen this if it doesn't. Thanks! > Supervisor may not shut down workers cleanly > > > Key: STORM-1879 > URL: https://issues.apache.org/jira/browse/STORM-1879 > Project: Apache Storm > Issue Type: Bug > Components: storm-core >Affects Versions: 1.0.1 >Reporter: Stig Rohde Døssing > Attachments: fix_missing_worker_pid.patch, nimbus-supervisor.zip, > supervisor.log > > > We've run into a strange issue with a zombie worker process. It looks like > the worker pid file somehow got deleted without the worker process shutting > down. This causes the supervisor to try repeatedly to kill the worker > unsuccessfully, and means multiple workers may be assigned to the same port. > The worker root folder sticks around because the worker is still heartbeating > to it. > It may or may not be related that we've seen Nimbus occasionally enter an > infinite loop of printing logs similar to the below. > {code} > 2016-05-19 14:55:14.196 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormconf.ser > 2016-05-19 14:55:14.210 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormcode.ser > 2016-05-19 14:55:14.218 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormconf.ser > 2016-05-19 14:55:14.256 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormcode.ser > 2016-05-19 14:55:14.273 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormcode.ser > 2016-05-19 14:55:14.316 o.a.s.b.BlobStoreUtils [ERROR] Could not update the > blob with keyZendeskTicketTopology-5-1463647641-stormconf.ser > {code} > Which continues until Nimbus is rebooted. We also see repeating blocks > similar to the logs below. > {code} > 2016-06-02 07:45:03.656 o.a.s.d.nimbus [INFO] Cleaning up > ZendeskTicketTopology-127-1464780171 > 2016-06-02 07:45:04.132 o.a.s.d.nimbus [INFO] > ExceptionKeyNotFoundException(msg:ZendeskTicketTopology-127-1464780171-stormjar.jar) > 2016-06-02 07:45:04.144 o.a.s.d.nimbus [INFO] > ExceptionKeyNotFoundException(msg:ZendeskTicketTopology-127-1464780171-stormconf.ser) > 2016-06-02 07:45:04.155 o.a.s.d.nimbus [INFO] > ExceptionKeyNotFoundException(msg:ZendeskTicketTopology-127-1464780171-stormcode.ser) > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1434) Support the GROUP BY clause in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422771#comment-15422771 ] Jungtaek Lim commented on STORM-1434: - Yeah, while I keep the possibility of persistentAggregate open, we would want join & aggregation with windowing first. I don't want to introduce custom syntax / semantics to the SQL statement, so I won't consider persistentAggregate for now and will go with per-batch aggregation first. > Support the GROUP BY clause in StormSQL > --- > > Key: STORM-1434 > URL: https://issues.apache.org/jira/browse/STORM-1434 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > This jira tracks the effort of implement the support `GROUP BY` clause in > StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1234) port backtype.storm.security.auth.DefaultHttpCredentialsPlugin-test to java
[ https://issues.apache.org/jira/browse/STORM-1234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1234. - Resolution: Fixed Fix Version/s: 2.0.0 Thanks [~abhishek.agarwal], I merged into master. > port backtype.storm.security.auth.DefaultHttpCredentialsPlugin-test to java > > > Key: STORM-1234 > URL: https://issues.apache.org/jira/browse/STORM-1234 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: Abhishek Agarwal > Labels: java-migration, jstorm-merger > Fix For: 2.0.0 > > > to junit test conversion -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1256) port backtype.storm.utils.ZookeeperServerCnxnFactory-test to java
[ https://issues.apache.org/jira/browse/STORM-1256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1256. - Resolution: Fixed Fix Version/s: 2.0.0 Thanks [~abhishek.agarwal], I merged into master. > port backtype.storm.utils.ZookeeperServerCnxnFactory-test to java > - > > Key: STORM-1256 > URL: https://issues.apache.org/jira/browse/STORM-1256 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: Abhishek Agarwal > Labels: java-migration, jstorm-merger > Fix For: 2.0.0 > > > junit migration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1240) port backtype.storm.security.auth.authorizer.DRPCSimpleACLAuthorizer-test to java
[ https://issues.apache.org/jira/browse/STORM-1240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1240. - Resolution: Fixed Fix Version/s: 2.0.0 Thanks [~abhishek.agarwal], I merged into master. > port backtype.storm.security.auth.authorizer.DRPCSimpleACLAuthorizer-test to > java > -- > > Key: STORM-1240 > URL: https://issues.apache.org/jira/browse/STORM-1240 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: Abhishek Agarwal > Labels: java-migration, jstorm-merger > Fix For: 2.0.0 > > > junit migration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1251) port backtype.storm.serialization.SerializationFactory-test to java
[ https://issues.apache.org/jira/browse/STORM-1251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1251. - Resolution: Fixed Fix Version/s: 2.0.0 Thanks [~abhishek.agarwal], I merged into master. > port backtype.storm.serialization.SerializationFactory-test to java > > > Key: STORM-1251 > URL: https://issues.apache.org/jira/browse/STORM-1251 > Project: Apache Storm > Issue Type: New Feature > Components: storm-core >Reporter: Robert Joseph Evans >Assignee: Abhishek Agarwal > Labels: java-migration, jstorm-merger > Fix For: 2.0.0 > > > junit migration -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (STORM-1434) Support the GROUP BY clause in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422731#comment-15422731 ] Jungtaek Lim commented on STORM-1434: - Currently Storm SQL relies on Trident, so I'm thinking about doing it within a micro-batch for now, but I would like to check whether we want global aggregation with state. I'm not familiar with Trident optimization, but the Trident API doc explains this: "The benefits of CombinerAggregators are seen when you use them with the aggregate method instead of partitionAggregate. In that case, Trident automatically optimizes the computation by doing partial aggregations before transferring tuples over the network." Based on this explanation, we could implement the aggregate functions as CombinerAggregators and call aggregate with them, and Trident would do a two-pass (partition -> global) aggregation. > Support the GROUP BY clause in StormSQL > --- > > Key: STORM-1434 > URL: https://issues.apache.org/jira/browse/STORM-1434 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > This jira tracks the effort of implement the support `GROUP BY` clause in > StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
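To make the two-phase aggregation above concrete, below is a minimal sketch of a SUM written as a Trident CombinerAggregator. The class and field names are illustrative and not part of storm-sql; the point is that calling {{aggregate}} with a CombinerAggregator lets Trident do the partial (per-partition) aggregation before the global combine.
{code}
import org.apache.storm.trident.operation.CombinerAggregator;
import org.apache.storm.trident.tuple.TridentTuple;

// Illustrative SUM over the first field of each tuple.
public class SumCombiner implements CombinerAggregator<Long> {
    @Override
    public Long init(TridentTuple tuple) {
        // Called per tuple; partial sums are built within each partition.
        return tuple.getLong(0);
    }

    @Override
    public Long combine(Long val1, Long val2) {
        // Merges partial sums, including the final cross-partition combine.
        return val1 + val2;
    }

    @Override
    public Long zero() {
        return 0L;
    }
}

// Hypothetical usage on a Trident stream (field names are made up):
// stream.groupBy(new Fields("grpKey"))
//       .aggregate(new Fields("amount"), new SumCombiner(), new Fields("sum"));
{code}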
[jira] [Commented] (STORM-1434) Support the GROUP BY clause in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422281#comment-15422281 ] Jungtaek Lim commented on STORM-1434: - This feature depends on the boundary of aggregation. Do we want to apply aggregation for each batch? Or do we want to hold the state of the global aggregation? > Support the GROUP BY clause in StormSQL > --- > > Key: STORM-1434 > URL: https://issues.apache.org/jira/browse/STORM-1434 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > This jira tracks the effort of implement the support `GROUP BY` clause in > StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1833) Add simple equi-join support in storm-sql standalone mode
[ https://issues.apache.org/jira/browse/STORM-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1833: Component/s: storm-sql > Add simple equi-join support in storm-sql standalone mode > - > > Key: STORM-1833 > URL: https://issues.apache.org/jira/browse/STORM-1833 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > > Provide simple equi join support in storm sql standalone mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1570) Support nested field lookup in Storm sql
[ https://issues.apache.org/jira/browse/STORM-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1570. - Resolution: Fixed Fix Version/s: 2.0.0 1.0.0 > Support nested field lookup in Storm sql > > > Key: STORM-1570 > URL: https://issues.apache.org/jira/browse/STORM-1570 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > Fix For: 1.0.0, 2.0.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1570) Support nested field lookup in Storm sql
[ https://issues.apache.org/jira/browse/STORM-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1570: Component/s: storm-sql > Support nested field lookup in Storm sql > > > Key: STORM-1570 > URL: https://issues.apache.org/jira/browse/STORM-1570 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1586) ExprCompiler support for UDFs in Storm-sql
[ https://issues.apache.org/jira/browse/STORM-1586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1586: Component/s: storm-sql > ExprCompiler support for UDFs in Storm-sql > -- > > Key: STORM-1586 > URL: https://issues.apache.org/jira/browse/STORM-1586 > Project: Apache Storm > Issue Type: Sub-task > Components: storm-sql >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.0.1 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (STORM-1833) Add simple equi-join support in storm-sql standalone mode
[ https://issues.apache.org/jira/browse/STORM-1833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim resolved STORM-1833. - Resolution: Fixed Fix Version/s: 1.1.0 2.0.0 > Add simple equi-join support in storm-sql standalone mode > - > > Key: STORM-1833 > URL: https://issues.apache.org/jira/browse/STORM-1833 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.1.0 > > > Provide simple equi join support in storm sql standalone mode. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1585) Add DDL support for UDFs in Storm-sql
[ https://issues.apache.org/jira/browse/STORM-1585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1585: Component/s: storm-sql > Add DDL support for UDFs in Storm-sql > - > > Key: STORM-1585 > URL: https://issues.apache.org/jira/browse/STORM-1585 > Project: Apache Storm > Issue Type: Sub-task > Components: storm-sql >Affects Versions: 1.0.0, 2.0.0 >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.0.1 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1709) Add group by support in storm-sql standalone mode
[ https://issues.apache.org/jira/browse/STORM-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1709: Component/s: storm-sql > Add group by support in storm-sql standalone mode > - > > Key: STORM-1709 > URL: https://issues.apache.org/jira/browse/STORM-1709 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Arun Mahadevan >Assignee: Arun Mahadevan > Fix For: 2.0.0, 1.1.0 > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1443) Support customizing parallelism in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1443: Component/s: storm-sql > Support customizing parallelism in StormSQL > --- > > Key: STORM-1443 > URL: https://issues.apache.org/jira/browse/STORM-1443 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > Currently all processors in StormSQL have a default parallelism of 1. It is > desirable to have the ability to set parallelism for each processor. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1446) Compile the Calcite logical plan to Storm physical plan
[ https://issues.apache.org/jira/browse/STORM-1446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1446: Component/s: storm-sql > Compile the Calcite logical plan to Storm physical plan > --- > > Key: STORM-1446 > URL: https://issues.apache.org/jira/browse/STORM-1446 > Project: Apache Storm > Issue Type: Improvement > Components: storm-sql >Reporter: Haohui Mai > > As suggested in > https://issues.apache.org/jira/browse/STORM-1040?focusedCommentId=15036651&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15036651, > compiling the logical plan from Calcite down to Storm physical plan will > clarify the implementation of StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1435) Build a single jar with dependency for StormSQL dependency
[ https://issues.apache.org/jira/browse/STORM-1435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1435: Component/s: storm-sql > Build a single jar with dependency for StormSQL dependency > -- > > Key: STORM-1435 > URL: https://issues.apache.org/jira/browse/STORM-1435 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai >Assignee: Jungtaek Lim > > Currently StormSQL requires all dependency of the topology to reside in > either the `lib` or the `extlib` directory. It will greatly improve the > usability if StormSQL can provide a mechanism to pack all dependency with the > jar compiled from the topology. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1444) Support EXPLAIN statement in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1444: Component/s: storm-sql > Support EXPLAIN statement in StormSQL > - > > Key: STORM-1444 > URL: https://issues.apache.org/jira/browse/STORM-1444 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > It is useful to support the `EXPLAIN` statement in StormSQL to allow > debugging and customizing the topology generated by StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1434) Support the GROUP BY clause in StormSQL
[ https://issues.apache.org/jira/browse/STORM-1434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1434: Component/s: storm-sql > Support the GROUP BY clause in StormSQL > --- > > Key: STORM-1434 > URL: https://issues.apache.org/jira/browse/STORM-1434 > Project: Apache Storm > Issue Type: New Feature > Components: storm-sql >Reporter: Haohui Mai > > This jira tracks the effort of implement the support `GROUP BY` clause in > StormSQL. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (STORM-1766) A better algorithm server rack selection for RAS
[ https://issues.apache.org/jira/browse/STORM-1766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1766: Fix Version/s: 1.1.0 > A better algorithm server rack selection for RAS > > > Key: STORM-1766 > URL: https://issues.apache.org/jira/browse/STORM-1766 > Project: Apache Storm > Issue Type: Improvement >Reporter: Boyang Jerry Peng >Assignee: Boyang Jerry Peng > Fix For: 2.0.0, 1.1.0 > > > Currently the getBestClustering algorithm for RAS finds the "best" > cluster/rack based on which rack has the most available resources. This may be > insufficient and may cause topologies to fail to be scheduled even though > there are enough resources in the cluster to schedule them. We attempt to find > the rack with the most resources by finding the rack with the biggest sum of > available memory + available cpu. This method is not effective since it does > not consider the number of slots available. It also fails to identify racks > that are not schedulable due to the exhaustion of one of the resources > (memory, cpu, or slots). The current implementation also tries the initial > scheduling on one rack and does not try to schedule on all the racks before > giving up, which may cause topologies to fail to be scheduled due to the > above-mentioned shortcomings. The current method also does not consider worker > failures. When executors of a topology get unassigned and need to be scheduled > again, the current logic in getBestClustering may be inadequate, if not > completely wrong: when executors need to be rescheduled due to a fault, > getBestClustering will likely return a rack that is different from the one > where the majority of the topology's executors were originally scheduled. > Thus, I propose a different strategy/algorithm to find the "best" cluster. I > have come up with an ordering strategy I dub subordinate resource availability > ordering (inspired by Dominant Resource Fairness) that sorts racks by the > subordinate (not dominant) resource availability. > For example, given 5 racks with the following resource availabilities > {code} > //generate some that has alot of memory but little of cpu > rack-3 Avail [ CPU 100.0 MEM 20.0 Slots 40 ] Total [ CPU 100.0 MEM > 20.0 Slots 40 ] > //generate some supervisors that are depleted of one resource > rack-2 Avail [ CPU 0.0 MEM 8.0 Slots 40 ] Total [ CPU 0.0 MEM 8.0 > Slots 40 ] > //generate some that has a lot of cpu but little of memory > rack-4 Avail [ CPU 6100.0 MEM 1.0 Slots 40 ] Total [ CPU 6100.0 MEM > 1.0 Slots 40 ] > //generate another rack of supervisors with less resources than rack-0 > rack-1 Avail [ CPU 2000.0 MEM 4.0 Slots 40 ] Total [ CPU 2000.0 MEM > 4.0 Slots 40 ] > rack-0 Avail [ CPU 4000.0 MEM 8.0 Slots 40 ] Total [ CPU 4000.0 MEM > 8.0 Slots 40 ] > Cluster Overall Avail [ CPU 12200.0 MEM 41.0 Slots 200 ] Total [ CPU > 12200.0 MEM 41.0 Slots 200 ] > {code} > It is clear that rack-0 is the best rack since it is the most balanced and > can potentially schedule the most executors, while rack-2 is the worst rack > since it is depleted of the cpu resource, rendering it unschedulable even > though other resources are available. > We first calculate the resource availability percentage of all the racks for > each resource by computing: > {code} > (resource available on rack) / (resource available in cluster) > {code} > We do this calculation to normalize the values; otherwise the resource values > would not be comparable.
> So for our example: > {code} > rack-3 Avail [ CPU 0.819672131147541% MEM 48.78048780487805% Slots 20.0% ] > effective resources: 0.00819672131147541 > rack-2 Avail [ 0.0% MEM 19.51219512195122% Slots 20.0% ] effective resources: > 0.0 > rack-4 Avail [ CPU 50.0% MEM 2.4390243902439024% Slots 20.0% ] effective > resources: 0.024390243902439025 > rack-1 Avail [ CPU 16.39344262295082% MEM 9.75609756097561% Slots 20.0% ] > effective resources: 0.0975609756097561 > rack-0 Avail [ CPU 32.78688524590164% MEM 19.51219512195122% Slots 20.0% ] > effective resources: 0.1951219512195122 > {code} > The effective resource of a rack, which is also the subordinate resource, is > computed by: > {code} > MIN(resource availability percentage of {CPU, Memory, # of free Slots}). > {code} > Then we order the racks by the effective resource. > Thus for our example: > {code} > Sorted rack: [rack-0, rack-1, rack-4, rack-3, rack-2] > {code} > Also to deal with the presence of failures, if a topology is partially > scheduled, we find the rack with the most scheduled executors for the > topology and we try to schedule on that rack first. > Thus for the sorting for racks. We first sort b
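As a worked illustration of the ordering described above (not the actual RAS scheduler code), the sketch below normalizes each rack's availability against the cluster totals, takes the minimum across CPU, memory, and slots as the effective (subordinate) resource, and sorts racks in descending order; it reproduces the [rack-0, rack-1, rack-4, rack-3, rack-2] ordering from the example.
{code}
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SubordinateResourceOrdering {

    static class Rack {
        final String name;
        final double cpu, mem, slots; // available resources on this rack

        Rack(String name, double cpu, double mem, double slots) {
            this.name = name; this.cpu = cpu; this.mem = mem; this.slots = slots;
        }
    }

    // effective resource = MIN(normalized CPU, normalized memory, normalized slots)
    static double effectiveResource(Rack r, double totalCpu, double totalMem, double totalSlots) {
        return Math.min(r.cpu / totalCpu, Math.min(r.mem / totalMem, r.slots / totalSlots));
    }

    public static void main(String[] args) {
        // Availabilities taken from the example in the issue description.
        List<Rack> racks = Arrays.asList(
                new Rack("rack-0", 4000.0, 8.0, 40),
                new Rack("rack-1", 2000.0, 4.0, 40),
                new Rack("rack-2", 0.0, 8.0, 40),
                new Rack("rack-3", 100.0, 20.0, 40),
                new Rack("rack-4", 6100.0, 1.0, 40));

        double totalCpu = racks.stream().mapToDouble(r -> r.cpu).sum();     // 12200.0
        double totalMem = racks.stream().mapToDouble(r -> r.mem).sum();     // 41.0
        double totalSlots = racks.stream().mapToDouble(r -> r.slots).sum(); // 200

        // Sort racks by effective resource, best (largest) first.
        racks.stream()
             .sorted(Comparator.comparingDouble(
                     (Rack r) -> effectiveResource(r, totalCpu, totalMem, totalSlots)).reversed())
             .forEach(r -> System.out.printf("%s effective=%.4f%n",
                     r.name, effectiveResource(r, totalCpu, totalMem, totalSlots)));
        // Prints rack-0, rack-1, rack-4, rack-3, rack-2 in that order.
    }
}
{code}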
[jira] [Commented] (STORM-1913) Additions and Improvements for Trident RAS API
[ https://issues.apache.org/jira/browse/STORM-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15420346#comment-15420346 ] Jungtaek Lim commented on STORM-1913: - Also merged into 1.x-branch. > Additions and Improvements for Trident RAS API > -- > > Key: STORM-1913 > URL: https://issues.apache.org/jira/browse/STORM-1913 > Project: Apache Storm > Issue Type: Improvement >Reporter: Kyle Nusbaum >Assignee: Kyle Nusbaum > Fix For: 2.0.0, 1.1.0 > > > Trident's RAS API does not honor the following config values: > {code} > topology.component.resources.onheap.memory.mb > topology.component.resources.offheap.memory.mb > topology.component.cpu.pcore.percent > {code} > Trident does not receive the user's config as part of its builder API, so it > does not know the value of these. Instead of altering the existing API (we > want to remain backwards-compatible), add some new methods for dealing with > this. > There is also currently no way to set the master coord spouts' resources. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
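For readers wondering what the new Trident builder API might look like in practice: the sketch below is an assumption based on this ticket's description, not verified method signatures. The {{setCPULoad}} / {{setMemoryLoad}} calls on the Trident stream mirror the existing RAS declarer pattern; check the merged change for the actual API.
{code}
import org.apache.storm.trident.Stream;
import org.apache.storm.trident.TridentTopology;
import org.apache.storm.trident.testing.FixedBatchSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;

public class TridentRasSketch {
    public static void main(String[] args) {
        // Tiny in-memory spout just to have something to attach resources to.
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("word"), 3,
                new Values("a"), new Values("b"), new Values("c"));

        TridentTopology topology = new TridentTopology();
        Stream stream = topology.newStream("words", spout)
                // ASSUMED API from this ticket: per-operation resource hints instead of
                // relying on topology.component.resources.* config values, which Trident
                // does not see. Method names and signatures are not verified here.
                .setCPULoad(20.0)
                .setMemoryLoad(256.0, 64.0);

        // stream.each(...) / groupBy(...) etc. would follow as usual; omitted for brevity.
    }
}
{code}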
[jira] [Updated] (STORM-1913) Additions and Improvements for Trident RAS API
[ https://issues.apache.org/jira/browse/STORM-1913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jungtaek Lim updated STORM-1913: Fix Version/s: 1.1.0 > Additions and Improvements for Trident RAS API > -- > > Key: STORM-1913 > URL: https://issues.apache.org/jira/browse/STORM-1913 > Project: Apache Storm > Issue Type: Improvement >Reporter: Kyle Nusbaum >Assignee: Kyle Nusbaum > Fix For: 2.0.0, 1.1.0 > > > Trident's RAS API does not honor the following config values: > {code} > topology.component.resources.onheap.memory.mb > topology.component.resources.offheap.memory.mb > topology.component.cpu.pcore.percent > {code} > Trident does not receive the user's config as part of its builder API, so it > does not know the value of these. Instead of altering the existing API (we > want to remain backwards-compatible), add some new methods for dealing with > this. > There is also currently no way to set the master coord spouts' resources. -- This message was sent by Atlassian JIRA (v6.3.4#6332)