[GitHub] storm pull request: STORM-919 Gathering worker and supervisor proc...
Github user bourneagain commented on the pull request: https://github.com/apache/storm/pull/608#issuecomment-127623505 I will revert the changes to Thrift generated files and update the pull request. Also I have updated details on how we plan to use these metrics @ https://issues.apache.org/jira/browse/STORM-919. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. ---
[jira] [Commented] (STORM-919) Gathering worker and supervisor process information (CPU/Memory)
[ https://issues.apache.org/jira/browse/STORM-919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653684#comment-14653684 ]

ASF GitHub Bot commented on STORM-919:
--------------------------------------

Github user bourneagain commented on the pull request:
https://github.com/apache/storm/pull/608#issuecomment-127623505

I will revert the changes to the Thrift-generated files and update the pull request. I have also added details on how we plan to use these metrics at https://issues.apache.org/jira/browse/STORM-919.

Gathering worker and supervisor process information (CPU/Memory)
----------------------------------------------------------------

Key: STORM-919
URL: https://issues.apache.org/jira/browse/STORM-919
Project: Apache Storm
Issue Type: New Feature
Reporter: Shyam Rajendran
Assignee: Shyam Rajendran
Priority: Minor

It would be useful to have supervisor and worker process information, such as CPU utilization, JVM memory, and network bandwidth, available to Nimbus; this would be useful for a resource-aware scheduler implementation later on. To begin with, the information can be piggybacked on the existing heartbeats into ZooKeeper or to the pacemaker as required.

Related JIRAs: STORM-177, STORM-891, STORM-899

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
Re: [DISCUSS] Backport bugfixes (to 0.10.x / 0.9.x)
I've just finished backporting the 4 issues to the 0.10.x branch.

- Jungtaek Lim (HeartSaVioR)

2015-08-04 10:29 GMT+09:00 P. Taylor Goetz ptgo...@gmail.com:

Good catch on STORM-903. I'll take a closer look.

-Taylor

On Aug 3, 2015, at 7:25 PM, 임정택 kabh...@gmail.com wrote:

Thanks all. I also think that backporting is really painful. That's why I asked in the other thread which version lines we'll support. It seems we agree on releasing an official 0.10.0 and phasing out the 0.9.x line. I'll backport bugfixes only to the 0.10.x branch and let you know when I'm finished. Before we start releasing 0.10.0, we may want to take a look at STORM-903 https://issues.apache.org/jira/browse/STORM-903, which does not appear to be finished.

Thanks,
Jungtaek Lim (HeartSaVioR)

2015-08-04 5:36 GMT+09:00 P. Taylor Goetz ptgo...@gmail.com:

Thanks for putting together this list, Jungtaek.

Back-porting is a pain, and the more the 0.9.x, 0.10.x, and master lines diverge, the harder it gets. I propose we back-port the 4 fixes you identified to the 0.10 branch and start discussing releasing 0.10.0 (final, not beta). Once 0.10.0 is out, I think we can start phasing out the 0.9.x line. The idea was to continue supporting 0.9.x while 0.10.0 stabilized, so early upgraders had a chance to kick the tires and report any glaring issues. IMO more than enough time has passed, and we should move forward with a 0.10.0 release.

In terms of the who and when of back-porting, the general principle I've followed is that once a patch has been merged it is a candidate for back-porting, and any committer can do that, since the patch has already been reviewed and accepted. I don't think a separate pull request is necessary. In fact, I think extra pull requests for back-porting make JIRA/GitHub issues a little messy and confusing. IMO the only times we need back-port pull requests are:

a) A non-committer contributor is requesting a patch be applied to an earlier version.
b) A committer back-ported a patch with a lot of conflicts and feels it warrants further review before committing. Basically a way of saying "This merge was messy. Could others check my work?"

If things go wrong at any time, there's always "git revert".

I don't think we need to codify any of this in our BYLAWS unless there is some sort of conflict, which for now there isn't. If we feel the need to document the process, a README/wiki entry should suffice. I'm more in favor of mutual trust among committers than hard and fast rules. Once a particular practice gets formalized in our bylaws, it can be very difficult to change.

-Taylor

On Aug 3, 2015, at 12:56 PM, Derek Dagit der...@yahoo-inc.com.INVALID wrote:

Dealing with branches is a pain, and it is good we are paying attention to back-porting. It is good to bring it up for discussion, and I agree checking with those who do releases is a reasonable thing to do.

I do not think there are special restrictions on back-porting fixes to previous branches. I would be comfortable with the normal rules for a pull request. Effort is one cost, and we could eventually run into some more challenging merge conflicts as well. There are multiple things to consider, and I think it is a judgment call.

On the other hand, if it does become clear that clarifying principles in our BYLAWS would be helpful, then I am all for it. If we commit to supporting specific branches with certain kinds of fixes, then we need to stick to that commitment.

--
Derek

----- Original Message -----
From: Parth Brahmbhatt pbrahmbh...@hortonworks.com
To: dev@storm.apache.org
Sent: Monday, August 3, 2015 11:26 AM
Subject: Re: [DISCUSS] Backport bugfixes (to 0.10.x / 0.9.x)

Given how huge the 0.10 release was, I feel trying to back-port all bug fixes and testing that they do not break something else might turn out to be a huge PITA. I think going with a stable 0.10 release might be the best solution for now.

I don't think back-porting requires confirmation; however, given that we will probably have to do a release for each version where back-porting was done, it is probably best to notify the release manager and discuss options. I agree that having a rule/bylaw would help clarify things in the future.

Thanks
Parth

On 8/2/15, 4:30 PM, 임정택 kabh...@gmail.com wrote:

Bump. Does anyone have opinions about this?

I already back-ported some bugfixes (not in the list) to the 0.10.x and 0.9.x lines, but I'm not 100% sure that is the preferred way. It seems we don't have explicit rules about back-porting. The only thing I know is that Taylor was (or has been) the gatekeeper. So now I'd like to know whether back-ports still need to be confirmed by Taylor.

Thanks,
Jungtaek Lim (HeartSaVioR)
[jira] [Updated] (STORM-857) Supervisor process fails to write log metadata to YAML file when supervisor.run.worker.as.user is enabled
[ https://issues.apache.org/jira/browse/STORM-857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jungtaek Lim updated STORM-857:
-------------------------------
Fix Version/s: (was: 0.11.0)
               0.10.0

Supervisor process fails to write log metadata to YAML file when supervisor.run.worker.as.user is enabled
---------------------------------------------------------------------------------------------------------

Key: STORM-857
URL: https://issues.apache.org/jira/browse/STORM-857
Project: Apache Storm
Issue Type: Bug
Environment: CentOS 6.6, Hortonworks HDP 2.2.4, Storm 0.9.3.2.2.4.2-2
Reporter: Gunnar Schulze
Assignee: Derek Dagit
Fix For: 0.10.0

When supervisor.run.worker.as.user is set to true in a kerberized cluster, the supervisor process fails to write log metadata to a YAML file, causing the supervisor to shut down. /var/log/storm/supervisor.log shows the following exception:

2015-06-09 16:59:10 b.s.event [ERROR] Error when processing event
java.io.FileNotFoundException: /var/log/storm/metadata/test-1-1433861936-worker-6701.yaml (No such file or directory)
	at java.io.FileOutputStream.open0(Native Method) ~[na:1.8.0_40]
	at java.io.FileOutputStream.open(FileOutputStream.java:270) ~[na:1.8.0_40]
	at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_40]
	at java.io.FileOutputStream.<init>(FileOutputStream.java:162) ~[na:1.8.0_40]
	at java.io.FileWriter.<init>(FileWriter.java:90) ~[na:1.8.0_40]
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_40]
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_40]
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_40]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:422) ~[na:1.8.0_40]
	at clojure.lang.Reflector.invokeConstructor(Reflector.java:180) ~[clojure-1.5.1.jar:na]
	at backtype.storm.daemon.supervisor$write_log_metadata_to_yaml_file_BANG_.invoke(supervisor.clj:583) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at backtype.storm.daemon.supervisor$write_log_metadata_BANG_.invoke(supervisor.clj:598) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at backtype.storm.daemon.supervisor$fn__5912.invoke(supervisor.clj:679) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at clojure.lang.MultiFn.invoke(MultiFn.java:241) ~[clojure-1.5.1.jar:na]
	at backtype.storm.daemon.supervisor$sync_processes$iter__5762__5766$fn__5767.invoke(supervisor.clj:386) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at clojure.lang.LazySeq.sval(LazySeq.java:42) ~[clojure-1.5.1.jar:na]
	at clojure.lang.LazySeq.seq(LazySeq.java:60) ~[clojure-1.5.1.jar:na]
	at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.5.1.jar:na]
	at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.5.1.jar:na]
	at clojure.core$dorun.invoke(core.clj:2780) ~[clojure-1.5.1.jar:na]
	at clojure.core$doall.invoke(core.clj:2796) ~[clojure-1.5.1.jar:na]
	at backtype.storm.daemon.supervisor$sync_processes.invoke(supervisor.clj:374) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at clojure.lang.AFn.applyToHelper(AFn.java:161) [clojure-1.5.1.jar:na]
	at clojure.lang.AFn.applyTo(AFn.java:151) [clojure-1.5.1.jar:na]
	at clojure.core$apply.invoke(core.clj:619) ~[clojure-1.5.1.jar:na]
	at clojure.core$partial$fn__4190.doInvoke(core.clj:2396) ~[clojure-1.5.1.jar:na]
	at clojure.lang.RestFn.invoke(RestFn.java:397) ~[clojure-1.5.1.jar:na]
	at backtype.storm.event$event_manager$fn__4027.invoke(event.clj:40) ~[storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
2015-06-09 16:59:10 b.s.util [ERROR] Halting process: (Error when processing an event)
java.lang.RuntimeException: (Error when processing an event)
	at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:322) [storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
	at backtype.storm.event$event_manager$fn__4027.invoke(event.clj:48) [storm-core-0.9.3.2.2.4.2-2.jar:0.9.3.2.2.4.2-2]
	at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]

When creating the /var/log/storm/metadata directory manually, everything works fine. Apparently, lines 599-601 in supervisor.clj are the culprit, which create the metadata directory only if the
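The fix the report points toward, creating the metadata directory before opening the FileWriter, can be sketched in plain Java. The actual code lives in Clojure in supervisor.clj; the class and method names below are hypothetical stand-ins, not Storm code.

```java
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class LogMetadataWriter {
    /** Writes content to yamlFile, creating any missing parent directories first. */
    public static void writeMetadata(File yamlFile, String content) throws IOException {
        File parent = yamlFile.getParentFile();
        // Without this mkdirs() call, new FileWriter(...) throws the
        // FileNotFoundException seen in the supervisor log when the
        // metadata directory does not yet exist.
        if (parent != null && !parent.exists() && !parent.mkdirs()) {
            throw new IOException("could not create directory " + parent);
        }
        try (FileWriter w = new FileWriter(yamlFile)) {
            w.write(content);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "storm-metadata-demo");
        File yaml = new File(dir, "test-1-worker-6701.yaml");
        writeMetadata(yaml, "worker-id: demo\n");
        System.out.println(yaml + " exists: " + yaml.exists());
    }
}
```

The same two-line guard in the Clojure code (an unconditional mkdirs on the parent) avoids the race between directory creation and the first metadata write.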
[jira] [Updated] (STORM-139) hashCode does not work for byte[]
[ https://issues.apache.org/jira/browse/STORM-139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jungtaek Lim updated STORM-139:
-------------------------------
Fix Version/s: (was: 0.11.0)
               0.10.0

hashCode does not work for byte[]
---------------------------------

Key: STORM-139
URL: https://issues.apache.org/jira/browse/STORM-139
Project: Apache Storm
Issue Type: Bug
Reporter: James Xu
Assignee: Derek Dagit
Priority: Minor
Fix For: 0.10.0

https://github.com/nathanmarz/storm/issues/245

Storm should use a different hashCode method when getting the hash for a byte[] array, since the default one uses the object identity. We should check the behavior on other arrays as well.

--
xiaokang: I tested byte[] and other arrays. The hashCode of an array is the array's object identity. I also tested that java.util.Arrays.hashCode(xx[]) is based on the array elements' hash codes. It may be OK to change the list-hash-code function of tuple.clj to fix the problem.

--
Sirwellington: you may want to read this: http://martin.kleppmann.com/2012/06/18/java-hashcode-unsafe-for-distributed-systems.html
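The behavior xiaokang describes can be shown in a few lines of plain Java (a standalone illustration, not Storm code): the inherited `hashCode()` on an array is identity-based, while `java.util.Arrays.hashCode` is computed from the elements.

```java
import java.util.Arrays;

public class ArrayHashDemo {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        // Object.hashCode() on an array is identity-based, so two arrays
        // with identical contents almost always hash differently.
        System.out.println("identity hashes equal: " + (a.hashCode() == b.hashCode()));

        // Arrays.hashCode is computed from the elements: equal contents
        // always yield equal hashes, which is what a tuple-hashing
        // function needs for byte[] fields.
        System.out.println("content hashes equal:  " + (Arrays.hashCode(a) == Arrays.hashCode(b))); // true
    }
}
```

Note also the linked Kleppmann article's caveat: even element-based `hashCode` values are only stable within one JVM/class-library version, which matters when hashes decide tuple routing across machines.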
[jira] [Updated] (STORM-860) UI: while topology is transitioned to killed, Activate button is enabled but not functioning
[ https://issues.apache.org/jira/browse/STORM-860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jungtaek Lim updated STORM-860:
-------------------------------
Fix Version/s: (was: 0.11.0)
               0.10.0

UI: while topology is transitioned to killed, Activate button is enabled but not functioning
--------------------------------------------------------------------------------------------

Key: STORM-860
URL: https://issues.apache.org/jira/browse/STORM-860
Project: Apache Storm
Issue Type: Bug
Reporter: Jungtaek Lim
Assignee: Jungtaek Lim
Priority: Minor
Fix For: 0.10.0

When I kill a topology from the UI, its state is transitioned to 'killed', but the 'Activate' button is still enabled. When I push the button, it shows an "Error while communicating to nimbus" popup. It would be better to disable the 'Activate' button while the topology is being killed.
[jira] [Updated] (STORM-793) Logviewer 500 response when metadata has not yet been written (with auth enabled)
[ https://issues.apache.org/jira/browse/STORM-793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jungtaek Lim updated STORM-793:
-------------------------------
Fix Version/s: (was: 0.11.0)
               0.10.0

Logviewer 500 response when metadata has not yet been written (with auth enabled)
---------------------------------------------------------------------------------

Key: STORM-793
URL: https://issues.apache.org/jira/browse/STORM-793
Project: Apache Storm
Issue Type: Bug
Affects Versions: 0.10.0
Reporter: Derek Dagit
Assignee: Sanket Reddy
Priority: Minor
Labels: Newbie
Fix For: 0.10.0

When ui.filter is defined and used, and a user navigates to a logviewer link for which the logging metadata has not yet been initialized, we [throw an NPE|https://github.com/apache/storm/blob/84e8bc6d28b54056dd75375be7d316ab03125fb6/storm-core/src/clj/backtype/storm/daemon/logviewer.clj#L184] that results in a 500 response.
[GitHub] storm pull request: Storm-Kafka trident topology example
GitHub user arunmahadevan opened a pull request:

https://github.com/apache/storm/pull/666

Storm-Kafka trident topology example

A sample word-count trident topology to illustrate the use of the transactional Kafka spout and the Kafka bolt. It has the following components:

1. A KafkaBolt that receives random sentences from RandomSentenceSpout and publishes them to a Kafka test topic.
2. A TransactionalTridentKafkaSpout that consumes sentences from the test topic, splits them into words, aggregates, and stores the word counts in a MemoryMapState.
3. A DRPC query that returns the word counts by querying the trident state (MemoryMapState).

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/arunmahadevan/storm storm-kafka-example

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/666.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

This closes #666

commit 287718c8bc0f8b51ca89de4f62fcb6710d525992
Author: Arun Mahadevan ai...@hortonworks.com
Date: 2015-08-04T08:36:45Z

Storm-Kafka trident topology example
[GitHub] storm pull request: Storm-Kafka trident topology example
Github user caofangkun commented on a diff in the pull request:

https://github.com/apache/storm/pull/666#discussion_r36169516

--- Diff: examples/storm-starter/pom.xml ---
@@ -96,6 +96,12 @@
       <groupId>com.google.guava</groupId>
       <artifactId>guava</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.apache.storm</groupId>
+      <artifactId>storm-kafka</artifactId>
--- End diff --

should move ```<module>examples/storm-starter</module>``` in [pom.xml](https://github.com/apache/storm/blob/master/pom.xml#L164) after ```<module>external/flux</module>```
[GitHub] storm pull request: Storm-Kafka trident topology example
Github user arunmahadevan commented on a diff in the pull request:

https://github.com/apache/storm/pull/666#discussion_r36173014

--- Diff: examples/storm-starter/pom.xml ---
@@ -96,6 +96,12 @@
       <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
     </dependency>
+    <dependency>
+      <groupId>org.apache.storm</groupId>
+      <artifactId>storm-kafka</artifactId>
--- End diff --

@caofangkun thanks. Fixed the dependencies and changed the order.
[jira] [Commented] (STORM-966) ConfigValidation.DoubleValidator doesn't really validate whether the type of the object is a double
[ https://issues.apache.org/jira/browse/STORM-966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653726#comment-14653726 ]

ASF GitHub Bot commented on STORM-966:
--------------------------------------

Github user jerrypeng commented on the pull request:
https://github.com/apache/storm/pull/658#issuecomment-127633390

You are right @caofangkun, not sure how that line got deleted, but it shouldn't have been removed. I added it back, and I think the tests will pass now.

ConfigValidation.DoubleValidator doesn't really validate whether the type of the object is a double
---------------------------------------------------------------------------------------------------

Key: STORM-966
URL: https://issues.apache.org/jira/browse/STORM-966
Project: Apache Storm
Issue Type: Improvement
Reporter: Boyang Jerry Peng
Assignee: Boyang Jerry Peng
Priority: Minor

The ConfigValidation.DoubleValidator code only checks whether the object is null or an instance of Number, which is a parent class of Double. DoubleValidator is used only once in Config.java, and in that instance:

public static final Object TOPOLOGY_STATS_SAMPLE_RATE_SCHEMA = ConfigValidation.DoubleValidator;

can simply be changed to:

public static final Object TOPOLOGY_STATS_SAMPLE_RATE_SCHEMA = Number.class;

Then we can get rid of the misleading ConfigValidation.DoubleValidator: since it doesn't actually check whether an object is of double type, the validator doesn't really do anything, and its name is misleading. The previous commit https://github.com/apache/storm/commit/214ee7454548b884c591991b1faea770d1478cec used Number.class anyway.
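The check the issue describes, null allowed, otherwise the value must be a Number, amounts to a one-line validator. Below is a hypothetical standalone sketch (the class and method names are mine, not Storm's ConfigValidation API):

```java
public class NumberValidatorSketch {
    /** Throws IllegalArgumentException unless o is null or a Number. */
    public static void validateNumber(String name, Object o) {
        if (o != null && !(o instanceof Number)) {
            throw new IllegalArgumentException(
                "Field " + name + " must be a Number, got " + o.getClass().getName());
        }
    }

    public static void main(String[] args) {
        validateNumber("topology.stats.sample.rate", 0.05); // Double: OK
        validateNumber("topology.stats.sample.rate", 1);    // Integer is also a Number: OK
        validateNumber("topology.stats.sample.rate", null); // null: OK
        try {
            validateNumber("topology.stats.sample.rate", "0.05");
            throw new AssertionError("expected a validation failure");
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Because the check accepts any Number rather than Double specifically, the issue's point stands: a schema of `Number.class` expresses the same constraint without a misleadingly named validator.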
[GitHub] storm pull request: STORM-966 ConfigValidation.DoubleValidator is ...
Github user jerrypeng commented on the pull request:

https://github.com/apache/storm/pull/658#issuecomment-127633390

You are right @caofangkun, not sure how that line got deleted, but it shouldn't have been removed. I added it back, and I think the tests will pass now.
[GitHub] storm pull request: STORM-966 ConfigValidation.DoubleValidator is ...
Github user HeartSaVioR commented on the pull request:

https://github.com/apache/storm/pull/658#issuecomment-127604553

@jerrypeng There's a compile failure. Could you check it?
[GitHub] storm pull request: STORM-845 Storm ElasticSearch connector
Github user HeartSaVioR commented on the pull request:

https://github.com/apache/storm/pull/573#issuecomment-127606460

+1. @harshach Could you take a look? Or do you want me to merge it without your review?
[jira] [Commented] (STORM-845) Storm ElasticSearch connector
[ https://issues.apache.org/jira/browse/STORM-845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653619#comment-14653619 ]

ASF GitHub Bot commented on STORM-845:
--------------------------------------

Github user HeartSaVioR commented on the pull request:
https://github.com/apache/storm/pull/573#issuecomment-127606460

+1. @harshach Could you take a look? Or do you want me to merge it without your review?

Storm ElasticSearch connector
-----------------------------

Key: STORM-845
URL: https://issues.apache.org/jira/browse/STORM-845
Project: Apache Storm
Issue Type: New Feature
Reporter: Adrian Seungjin Lee
Assignee: Adrian Seungjin Lee

It would be nice to provide a Storm driver for Elasticsearch, just like we do for Hive, Redis, and so on.
[jira] [Assigned] (STORM-944) storm-hive pom.xml has a dependency conflict with calcite
[ https://issues.apache.org/jira/browse/STORM-944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron Dossett reassigned STORM-944:
-----------------------------------
Assignee: Aaron Dossett (was: Aaron Dossett)

storm-hive pom.xml has a dependency conflict with calcite
---------------------------------------------------------

Key: STORM-944
URL: https://issues.apache.org/jira/browse/STORM-944
Project: Apache Storm
Issue Type: Bug
Components: external
Reporter: Aaron Dossett
Assignee: Aaron Dossett
Priority: Trivial

Hive 0.14.0 has a dependency on calcite-0.9.2-incubating-SNAPSHOT, which can't be resolved in Maven Central. See HIVE-8906 for details. This gives a harmless compile warning for storm-hive, but it does prevent some IDEs (IntelliJ for certain, probably others) from correctly resolving the project dependencies. storm-hive already has a dependency on calcite-0.9.2-incubating, so calcite should be excluded from the hive dependency.

Compile warning:
[WARNING] Missing POM for org.apache.calcite:calcite-core:jar:0.9.2-incubating-SNAPSHOT
[WARNING] Missing POM for org.apache.calcite:calcite-avatica:jar:0.9.2-incubating-SNAPSHOT
[jira] [Assigned] (STORM-969) HDFS Bolt can end up in an unrecoverable state
[ https://issues.apache.org/jira/browse/STORM-969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron Dossett reassigned STORM-969:
-----------------------------------
Assignee: Aaron Dossett (was: Aaron Dossett)

HDFS Bolt can end up in an unrecoverable state
----------------------------------------------

Key: STORM-969
URL: https://issues.apache.org/jira/browse/STORM-969
Project: Apache Storm
Issue Type: Improvement
Components: storm-hdfs
Reporter: Aaron Dossett
Assignee: Aaron Dossett

The body of the HDFSBolt.execute() method is essentially one try-catch block. The catch block reports the error and fails the current tuple. In some cases the bolt's FSDataOutputStream object (named 'out') is in an unrecoverable state, and no subsequent calls to execute() can succeed. To reproduce this scenario:

- process some tuples through the HDFS bolt
- put the underlying HDFS system into safemode
- process some more tuples and receive a correct ClosedChannelException
- take the underlying HDFS system out of safemode
- subsequent tuples continue to fail with the same exception

The three fundamental operations that execute() performs (writing, syncing, rotating) need to be isolated so that errors from each are specifically handled.
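One way to read the suggested fix is: when a write or sync fails, treat the stream itself as suspect and reopen it, rather than retrying on a possibly dead channel. The sketch below is a hypothetical, dependency-free illustration of that pattern; the Stream/StreamFactory interfaces are stand-ins, not the real FSDataOutputStream or HDFSBolt API.

```java
import java.io.IOException;

public class RotatingWriterSketch {
    /** Minimal stand-in for an HDFS-style output stream. */
    interface Stream {
        void write(byte[] data) throws IOException;
        void sync() throws IOException;
        void close() throws IOException;
    }

    /** Opens a fresh stream, e.g. after rotation or an unrecoverable error. */
    interface StreamFactory {
        Stream open() throws IOException;
    }

    private final StreamFactory factory;
    private Stream out;

    RotatingWriterSketch(StreamFactory factory) throws IOException {
        this.factory = factory;
        this.out = factory.open();
    }

    /** Returns true if the record was durably written (the tuple could be acked). */
    boolean process(byte[] record) {
        try {
            out.write(record);
            out.sync();
            return true;
        } catch (IOException e) {
            // Isolate the failure: the stream may now be unusable (the
            // "unrecoverable state" in the report), so discard it and
            // open a new one instead of retrying on a dead channel.
            try { out.close(); } catch (IOException ignored) { }
            try { out = factory.open(); } catch (IOException reopenFailed) { }
            return false; // the tuple would be failed and replayed
        }
    }
}
```

With this structure, a transient condition like HDFS safemode fails some tuples but does not poison all subsequent calls, because the next process() runs against a freshly opened stream.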
[jira] [Assigned] (STORM-943) In TestHiveBolt make the collector mock slightly more precise
[ https://issues.apache.org/jira/browse/STORM-943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron Dossett reassigned STORM-943:
-----------------------------------
Assignee: Aaron Dossett (was: Aaron Dossett)

In TestHiveBolt make the collector mock slightly more precise
-------------------------------------------------------------

Key: STORM-943
URL: https://issues.apache.org/jira/browse/STORM-943
Project: Apache Storm
Issue Type: Improvement
Components: external
Reporter: Aaron Dossett
Assignee: Aaron Dossett
Priority: Trivial

The OutputCollector passed to bolt.prepare() can be mocked directly instead of mocking an IOutputCollector used to construct the collector each time. This makes the tests slightly more readable, and it will be easier to verify more complex interactions with the collector in the future if needed.

Resolved by this PR: https://github.com/apache/storm/pull/583
[jira] [Assigned] (STORM-960) Hive-Bolt can lose tuples when flushing data
[ https://issues.apache.org/jira/browse/STORM-960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aaron Dossett reassigned STORM-960:
-----------------------------------
Assignee: Aaron Dossett (was: Aaron Dossett)

Hive-Bolt can lose tuples when flushing data
--------------------------------------------

Key: STORM-960
URL: https://issues.apache.org/jira/browse/STORM-960
Project: Apache Storm
Issue Type: Improvement
Components: external
Reporter: Aaron Dossett
Assignee: Aaron Dossett
Priority: Minor

In HiveBolt's execute method, tuples are acked as they are received. When a batch-size worth of tuples has been received, the writers are flushed. However, if the flush fails, only the most recent tuple will be marked as failed; all prior tuples will already have been acked. This creates a window for data loss.
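The usual fix for this pattern is to buffer tuples and defer every ack until the flush succeeds, failing the whole batch otherwise. Here is a hypothetical, generic sketch of that idea (not the real HiveBolt; the flush/ack/fail callbacks stand in for the Hive writers and Storm's OutputCollector):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

/** Hypothetical batching sketch: ack only after a successful flush. */
public class AckAfterFlushSketch<T> {
    private final int batchSize;
    private final List<T> pending = new ArrayList<>();
    private final Consumer<List<T>> flush;  // throws RuntimeException on failure
    private final Consumer<T> ack;
    private final Consumer<T> fail;

    public AckAfterFlushSketch(int batchSize, Consumer<List<T>> flush,
                               Consumer<T> ack, Consumer<T> fail) {
        this.batchSize = batchSize;
        this.flush = flush;
        this.ack = ack;
        this.fail = fail;
    }

    public void execute(T tuple) {
        // Buffer the tuple instead of acking it on receipt; acking here
        // is exactly the data-loss window the issue describes.
        pending.add(tuple);
        if (pending.size() >= batchSize) {
            try {
                flush.accept(pending);
                pending.forEach(ack);   // safe: the batch is durable
            } catch (RuntimeException e) {
                pending.forEach(fail);  // every buffered tuple is replayed
            }
            pending.clear();
        }
    }
}
```

The trade-off is that failed tuples are replayed by the spout, so the sink must tolerate duplicates; that is the standard at-least-once contract rather than silent loss.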
[GitHub] storm pull request: STORM-966 ConfigValidation.DoubleValidator is ...
Github user caofangkun commented on the pull request:

https://github.com/apache/storm/pull/658#issuecomment-127811188

I checked the CI build log and found https://travis-ci.org/apache/storm/jobs/74073153#L4132, and there's a PR for this. @HeartSaVioR Could you please review #652?
[jira] [Commented] (STORM-966) ConfigValidation.DoubleValidator doesn't really validate whether the type of the object is a double
[ https://issues.apache.org/jira/browse/STORM-966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14653861#comment-14653861 ]

ASF GitHub Bot commented on STORM-966:
--------------------------------------

Github user jerrypeng commented on the pull request:
https://github.com/apache/storm/pull/658#issuecomment-127659910

I don't know why the build is still failing. I just built and ran all the tests fine on my local machine. I heard that since we only recently started using Travis CI, there are still some bugs with it.

ConfigValidation.DoubleValidator doesn't really validate whether the type of the object is a double
---------------------------------------------------------------------------------------------------

Key: STORM-966
URL: https://issues.apache.org/jira/browse/STORM-966
Project: Apache Storm
Issue Type: Improvement
Reporter: Boyang Jerry Peng
Assignee: Boyang Jerry Peng
Priority: Minor

The ConfigValidation.DoubleValidator code only checks whether the object is null or an instance of Number, which is a parent class of Double. DoubleValidator is used only once in Config.java, and in that instance:

public static final Object TOPOLOGY_STATS_SAMPLE_RATE_SCHEMA = ConfigValidation.DoubleValidator;

can simply be changed to:

public static final Object TOPOLOGY_STATS_SAMPLE_RATE_SCHEMA = Number.class;

Then we can get rid of the misleading ConfigValidation.DoubleValidator: since it doesn't actually check whether an object is of double type, the validator doesn't really do anything, and its name is misleading. The previous commit https://github.com/apache/storm/commit/214ee7454548b884c591991b1faea770d1478cec used Number.class anyway.
[GitHub] storm pull request: STORM-966 ConfigValidation.DoubleValidator is ...
Github user jerrypeng commented on the pull request:

https://github.com/apache/storm/pull/658#issuecomment-127659910

I don't know why the build is still failing. I just built and ran all the tests fine on my local machine. I heard that since we only recently started using Travis CI, there are still some bugs with it.
[jira] [Commented] (STORM-966) ConfigValidation.DoubleValidator doesn't really validate whether the type of the object is a double
[ https://issues.apache.org/jira/browse/STORM-966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14654629#comment-14654629 ]

ASF GitHub Bot commented on STORM-966:
--------------------------------------

Github user caofangkun commented on the pull request:
https://github.com/apache/storm/pull/658#issuecomment-127811188

I checked the CI build log and found https://travis-ci.org/apache/storm/jobs/74073153#L4132, and there's a PR for this. @HeartSaVioR Could you please review #652?

ConfigValidation.DoubleValidator doesn't really validate whether the type of the object is a double
---------------------------------------------------------------------------------------------------

Key: STORM-966
URL: https://issues.apache.org/jira/browse/STORM-966
Project: Apache Storm
Issue Type: Improvement
Reporter: Boyang Jerry Peng
Assignee: Boyang Jerry Peng
Priority: Minor

The ConfigValidation.DoubleValidator code only checks whether the object is null or an instance of Number, which is a parent class of Double. DoubleValidator is used only once in Config.java, and in that instance:

public static final Object TOPOLOGY_STATS_SAMPLE_RATE_SCHEMA = ConfigValidation.DoubleValidator;

can simply be changed to:

public static final Object TOPOLOGY_STATS_SAMPLE_RATE_SCHEMA = Number.class;

Then we can get rid of the misleading ConfigValidation.DoubleValidator: since it doesn't actually check whether an object is of double type, the validator doesn't really do anything, and its name is misleading. The previous commit https://github.com/apache/storm/commit/214ee7454548b884c591991b1faea770d1478cec used Number.class anyway.