I found the cause. Filed https://issues.apache.org/jira/browse/STORM-2912. Will craft patches for branches shortly.
FYI, this issue affects all the RCs being opened now. I'll comment on the other RCs as well.

- Jungtaek Lim (HeartSaVioR)

On Thu, Jan 25, 2018 at 12:09 PM, Jungtaek Lim <[email protected]> wrote:

> Alexandre,
>
> As Priyank stated, we're seeing odd numbers in storm-starter. I also found
> an odd value, but it was in the RC for another version, not in 1.2.0. Maybe
> I missed checking 1.2.0 here.
> I'm planning to test all the open release candidates with various
> topologies, but it would be really helpful if you could provide
> information on your test topology so we can reproduce the issue, e.g.
> whether it uses multilang, an asynchronous bolt, etc.
>
> Apart from the metrics, everything else still looks OK.
>
> Anyway, I'm changing my vote to -1 (binding).
>
> Thanks,
> Jungtaek Lim
>
> On Thu, Jan 25, 2018 at 12:02 PM, Priyank Shah <[email protected]> wrote:
>
>> The release looks good overall except for complete latency and capacity.
>>
>> Verified the md5, sha and asc.
>> Unzipped and untarred the source and binary releases.
>> Built from source and ran daemons from the storm-dist binary release
>> after packaging it from the same.
>> Submitted FastWordCount and RollingTopWords.
>> Played around with the UI and tried different operations like Deactivate,
>> Activate, Kill, LogViewer, Search log, etc. Everything works OK.
>>
>> However, I noticed the complete latency for FastWordCount to be more
>> than 4 seconds, which seems high. Also, the capacity for RollingTopWords
>> (I was verifying it for Jungtaek) seemed to be very high.
>> I'm running 1.1.1 to compare the same numbers for both topologies. I
>> will update with my results.
>>
>> On 1/24/18, 12:03 PM, "Alexandre Vermeerbergen" <[email protected]> wrote:
>>
>> Hello Storm developers,
>>
>> One of my team members (Noureddine Chatti) found a regression in the
>> "Assigned Memory (MB)" column for our topologies with Storm 1.2.0 RC1:
>> the value 65 MB is displayed regardless of our topologies' actual
>> memory settings.
>>
>> We define our workers' max heap size using this option (as documented
>> in the Hortonworks documentation here:
>> https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_storm-component-guide/content/config-storm-settings.html
>> ):
>>
>> worker.childopts: "-Xmx2048m"
>>
>> Up to Storm 1.1.0, this value was displayed in the "Assigned Memory (MB)"
>> column of the Nimbus UI.
>> In Storm 1.2.0 RC1, the value 65 MB is displayed regardless of this
>> setting.
>>
>> He also found that with the following setting:
>>
>> topology.worker.max.heap.size.mb: 2048
>>
>> the "Assigned Memory (MB)" values displayed by the Nimbus UI are in
>> line with the setting (i.e. they show 2048 MB).
>>
>> Questions:
>> 1. Is this change of behavior from Storm 1.1.0 to Storm 1.2.0 RC1
>> intentional? Is there a related JIRA that would help in understanding
>> the rationale?
>> 2. Is the maximum heap size defined via worker.childopts actually
>> used to set up the workers' max heap size?
>> 3. What's the best practice for setting a worker's memory consumption:
>> is topology.worker.max.heap.size.mb now mandatory, or is the use of
>> -Xmx in worker.childopts still supported?
>> 4. Could there be other differences between Storm 1.1.0 and Storm
>> 1.2.0 RC1 which could explain why we get very weird statistics in the
>> Nimbus UI for Capacity & Latency for our topologies?
>>
>> Best regards,
>> Alexandre Vermeerbergen
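[Editor's note] The two settings from Alexandre's report, side by side as a minimal storm.yaml sketch. This is an illustration only: the 2048 MB value is the example from the thread, and whether both settings are still needed on 1.2.0 is exactly the open question he raises.

```yaml
# Sketch of the two worker-memory settings discussed in the thread.

# JVM options passed to each worker process; -Xmx caps the actual heap.
worker.childopts: "-Xmx2048m"

# Per-worker heap size as seen by Storm's scheduling/metrics layer; on
# 1.2.0 RC1 this appears to drive the Nimbus UI "Assigned Memory (MB)"
# column, while the -Xmx value above no longer does.
topology.worker.max.heap.size.mb: 2048
```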

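[Editor's note] The checksum portion of the verification steps Priyank lists earlier in the thread can be sketched as below. This is a hedged example: the file name is a stand-in created locally, not the actual RC artifact, and the GPG signature check is only indicated in a comment since it requires the release manager's public key.

```shell
# Stand-in for the real apache-storm tarball (exact RC file names vary).
printf 'release bits\n' > apache-storm-example.tar.gz

# In a real check you would download the .sha512 file from the dist
# area; here one is generated locally just to show the -c (check) step.
sha512sum apache-storm-example.tar.gz > apache-storm-example.tar.gz.sha512
sha512sum -c apache-storm-example.tar.gz.sha512   # prints "...: OK" on match

# Signature check (requires the signer's key in your keyring):
#   gpg --verify apache-storm-example.tar.gz.asc apache-storm-example.tar.gz
```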