[GitHub] storm issue #2930: STORM-3274: Migrates storm CLI to using argparse making d...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2930 @govind-menon The CLI is not recognizing `-c` options as configuration parameters. ---
[GitHub] storm issue #2930: STORM-3274: Migrates storm CLI to using argparse making d...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2930 @govind-menon The `node-health-check` command is missing from `storm.py` after this change. ---
[GitHub] storm issue #2931: STORM-3307: Fixes error time on UI for time of last error...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2931 @govind-menon Thank you for the patch. ---
[GitHub] storm issue #2925: STORM-3302: Ensures we close streams to HDFS
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2925 @d2r Thank you for the fix and squashing the commits. ---
[GitHub] storm pull request #2925: STORM-3302: Ensures we close streams to HDFS
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2925#discussion_r240759867 --- Diff: storm-client/src/jvm/org/apache/storm/dependency/DependencyUploader.java --- @@ -154,11 +154,23 @@ private boolean uploadDependencyToBlobStore(String key, File dependency) acls.add(new AccessControl(AccessControlType.OTHER, BlobStoreAclHandler.READ)); -AtomicOutputStream blob = getBlobStore().createBlob(key, new SettableBlobMeta(acls)); -Files.copy(dependency.toPath(), blob); -blob.close(); +AtomicOutputStream blob = null; +try { +blob = getBlobStore().createBlob(key, new SettableBlobMeta(acls)); +Files.copy(dependency.toPath(), blob); +blob.close(); +blob = null; -uploadNew = true; +uploadNew = true; +} finally { +try { +if (blob != null) { +blob.cancel(); +} +} catch (IOException throwaway) { +// Ignore. --- End diff -- Ignore that. This exception will appear only while attempting to cancel the blob connection. We should ignore it. ---
[GitHub] storm pull request #2925: STORM-3302: Ensures we close streams to HDFS
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2925#discussion_r240759159 --- Diff: storm-client/src/jvm/org/apache/storm/dependency/DependencyUploader.java --- @@ -154,11 +154,23 @@ private boolean uploadDependencyToBlobStore(String key, File dependency) acls.add(new AccessControl(AccessControlType.OTHER, BlobStoreAclHandler.READ)); -AtomicOutputStream blob = getBlobStore().createBlob(key, new SettableBlobMeta(acls)); -Files.copy(dependency.toPath(), blob); -blob.close(); +AtomicOutputStream blob = null; +try { +blob = getBlobStore().createBlob(key, new SettableBlobMeta(acls)); +Files.copy(dependency.toPath(), blob); +blob.close(); +blob = null; -uploadNew = true; +uploadNew = true; +} finally { +try { +if (blob != null) { +blob.cancel(); +} +} catch (IOException throwaway) { +// Ignore. --- End diff -- It would be useful to log Error/warning in case there is `IOException`. ---
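The close-then-cancel idiom in the diff above generalizes beyond Storm's `AtomicOutputStream`: commit via `close()`, null out the reference, and cancel in `finally` only if the commit never happened. A minimal sketch of the pattern, using a hypothetical `AtomicSink` interface (not the actual Storm class) and logging the cancel-time `IOException` as the review suggests rather than silently swallowing it:

```java
import java.io.IOException;

public class UploadSketch {
    /** Hypothetical stand-in for AtomicOutputStream: an all-or-nothing sink. */
    interface AtomicSink {
        void write(byte[] data) throws IOException;
        void close() throws IOException;   // commit the blob
        void cancel() throws IOException;  // abort a partial write
    }

    /** Returns true only if the write committed; cancels the sink on any failure. */
    static boolean upload(AtomicSink sink, byte[] data) throws IOException {
        AtomicSink open = sink;
        try {
            open.write(data);
            open.close();   // commit
            open = null;    // mark committed so finally skips cancel
            return true;
        } finally {
            if (open != null) {
                try {
                    open.cancel();
                } catch (IOException e) {
                    // Per the review: log the cancel failure instead of ignoring it.
                    System.err.println("cancel failed: " + e);
                }
            }
        }
    }
}
```
The `open = null` after a successful `close()` is what keeps `finally` from cancelling a blob that was already committed.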
[GitHub] storm pull request #2920: STORM-3297 prevent supervisor restart when no nimb...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2920#discussion_r239231045 --- Diff: storm-server/src/main/java/org/apache/storm/metricstore/NimbusMetricProcessor.java --- @@ -24,7 +24,7 @@ public void processWorkerMetrics(Map conf, WorkerMetrics metrics) throws MetricException { try (NimbusClient client = NimbusClient.getConfiguredClient(conf)) { client.getClient().processWorkerMetrics(metrics); -} catch (TException e) { --- End diff -- Could we instead of handling all exceptions, be more specific on capturing `NimbusLeaderNotFoundException` ? `catch (TException | NimbusLeaderNotFoundException e)` ---
[GitHub] storm issue #2917: [STORM-3294] Upgrade jetty version to latest stable 9.4.1...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2917 @srdo There was no explicit issue; I just wanted to upgrade to the latest Jetty minor version. ---
[GitHub] storm pull request #2917: [STORM-3294] Upgrade jetty version to latest stabl...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2917 [STORM-3294] Upgrade jetty version to latest stable 9.4.14.v20181114 version You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm upgrade-jetty-version Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2917.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2917 commit d460bfe18b3cf5c6bbeb8231fe018d15ce5d62c2 Author: Kishor Patil Date: 2018-12-03T19:22:25Z Upgrade jetty version to latest stable ---
[GitHub] storm pull request #2908: STORM-3276: Updated Flux to deal with storm local ...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2908#discussion_r234019467 --- Diff: storm-server/src/main/java/org/apache/storm/LocalCluster.java --- @@ -1197,8 +1238,9 @@ public IBolt makeAckerBoltImpl() { * When running a topology locally, for tests etc. It is helpful to be sure that the topology is dead before the test exits. This is * an AutoCloseable topology that not only gives you access to the compiled StormTopology but also will kill the topology when it * closes. - * + * ``` --- End diff -- Minor one. Not sure if ``` turns it into preformatted code. Do we need to use ` tag? ---
[GitHub] storm pull request #2903: [STORM-3284]: Add inheritance to cgroups
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2903 [STORM-3284]: Add inheritance to cgroups You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm add-cgroup-inheritance Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2903.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2903 commit 9178c10211874f8928ebc5ce4fdbf04ae698bcb5 Author: Kishor Patil Date: 2018-11-08T17:40:37Z Add inheritance to cgroups ---
[GitHub] storm pull request #2902: [STORM-3282] Fix RAS worker count estimation
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2902#discussion_r231608903 --- Diff: storm-server/src/main/java/org/apache/storm/utils/ServerUtils.java --- @@ -691,8 +691,10 @@ public static boolean isRAS(Map conf) { public static int getEstimatedWorkerCountForRASTopo(Map topoConf, StormTopology topology) throws InvalidTopologyException { -return (int) Math.ceil(getEstimatedTotalHeapMemoryRequiredByTopo(topoConf, topology) / - ObjectReader.getDouble(topoConf.get(Config.WORKER_HEAP_MEMORY_MB))); +Double defaultWorkerMaxHeap = ObjectReader.getDouble(topoConf.get(Config.WORKER_HEAP_MEMORY_MB)); +Double topologyWorkerMaxHeap = ObjectReader.getDouble(topoConf.get(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB)); --- End diff -- The default in the code is a last resort; it will only be picked up if we ever remove the defaults from the yaml. ---
[GitHub] storm pull request #2902: [STORM-3282] Fix RAS worker count estimation
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2902#discussion_r231580524 --- Diff: storm-server/src/main/java/org/apache/storm/utils/ServerUtils.java --- @@ -691,8 +691,10 @@ public static boolean isRAS(Map conf) { public static int getEstimatedWorkerCountForRASTopo(Map topoConf, StormTopology topology) throws InvalidTopologyException { -return (int) Math.ceil(getEstimatedTotalHeapMemoryRequiredByTopo(topoConf, topology) / - ObjectReader.getDouble(topoConf.get(Config.WORKER_HEAP_MEMORY_MB))); +Double defaultWorkerMaxHeap = ObjectReader.getDouble(topoConf.get(Config.WORKER_HEAP_MEMORY_MB)); +Double topologyWorkerMaxHeap = ObjectReader.getDouble(topoConf.get(Config.TOPOLOGY_WORKER_MAX_HEAP_SIZE_MB)); --- End diff -- There is a default for this value - https://github.com/apache/storm/blob/master/conf/defaults.yaml#L332 - so I don't anticipate that happening, but I am adding defaults in the code anyway. ---
[GitHub] storm issue #2902: [STORM-3282] Fix RAS worker count estimation
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2902 The Jenkins failure seems unrelated. ---
[GitHub] storm pull request #2902: [STORM-3282] Fix RAS worker count estimation
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2902 [STORM-3282] Fix RAS worker count estimation You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3282 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2902.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2902 commit 8288d41b313126ce7a9da7b548801a58da98b7ae Author: Kishor Patil Date: 2018-11-06T21:58:31Z Fix RAS worker count estimation ---
[GitHub] storm pull request #2895: [STORM-3275] Fix UIHelpers timeout while starting ...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2895 [STORM-3275] Fix UIHelpers timeout while starting profiler The JProfiler start action requires calculating the timeout in milliseconds, while the input is in minutes. Also, the logic says STOP at the timeout instead of start; I think this was a Clojure-to-Java translation error. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3275 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2895.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2895 commit 6b42587dfb1f99c8dfb6066a2bbc83e29552cff8 Author: Kishor Patil Date: 2018-10-24T21:50:59Z Fix UIHelpers timeout while starting profiler ---
[GitHub] storm issue #2882: STORM-3260: Add in support to print some state
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2882 The Travis-CI build failures seem unrelated to the changes. ---
[GitHub] storm issue #2882: STORM-3260: Add in support to print some state
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2882 @revans2, ok - looking at the output, I thought it was trying to output JSON data. Thanks for the explanation. ---
[GitHub] storm pull request #2882: STORM-3260: Add in support to print some state
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2882#discussion_r227061728 --- Diff: storm-core/src/jvm/org/apache/storm/command/AdminCommands.java --- @@ -104,6 +115,164 @@ public void printCliHelp(String command, PrintStream out) { } } +/** + * Print value in a human readable format. + * @param value what to print. + * @return a human readable string + */ +public static String prettyPrint(TBase value) { +StringBuilder builder = new StringBuilder(); +prettyPrint(value, 0, builder); +return builder.toString(); +} + +private static void println(StringBuilder out, int depth, Object value) { +for (int i = 0; i < depth; i++) { +out.append("\t"); +} +out.append(value); +out.append("\n"); +} + +private static void prettyPrint(TBase value, int depth, StringBuilder out) { +if (value == null) { +println(out, depth,"null"); +return; +} +println(out, depth, "{"); +prettyPrintFields(value, depth + 1, out); +println(out, depth, "}"); +} + +private static void prettyPrintFields(TBase value, int depth, StringBuilder out) { +for (Map.Entry entry : FieldMetaData.getStructMetaDataMap(value.getClass()).entrySet()) { +TFieldIdEnum key = entry.getKey(); +if (!value.isSet(key)) { +println(out, depth, key.getFieldName() + ": not set"); +} else { +Object o = value.getFieldValue(key); +prettyPrintKeyValue(key.getFieldName(), o, depth, out); +} +} +} + +private static String keyStr(String key) { +return key == null ? "" : (key + ": "); --- End diff -- should be probably "key" I guess ---
[GitHub] storm pull request #2882: STORM-3260: Add in support to print some state
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2882#discussion_r227061218 --- Diff: storm-core/src/jvm/org/apache/storm/command/AdminCommands.java --- @@ -104,6 +115,164 @@ public void printCliHelp(String command, PrintStream out) { } } +/** + * Print value in a human readable format. + * @param value what to print. + * @return a human readable string + */ +public static String prettyPrint(TBase value) { +StringBuilder builder = new StringBuilder(); +prettyPrint(value, 0, builder); +return builder.toString(); +} + +private static void println(StringBuilder out, int depth, Object value) { +for (int i = 0; i < depth; i++) { +out.append("\t"); +} +out.append(value); +out.append("\n"); +} + +private static void prettyPrint(TBase value, int depth, StringBuilder out) { +if (value == null) { +println(out, depth,"null"); +return; +} +println(out, depth, "{"); +prettyPrintFields(value, depth + 1, out); +println(out, depth, "}"); +} + +private static void prettyPrintFields(TBase value, int depth, StringBuilder out) { +for (Map.Entry entry : FieldMetaData.getStructMetaDataMap(value.getClass()).entrySet()) { +TFieldIdEnum key = entry.getKey(); +if (!value.isSet(key)) { +println(out, depth, key.getFieldName() + ": not set"); +} else { +Object o = value.getFieldValue(key); +prettyPrintKeyValue(key.getFieldName(), o, depth, out); +} +} +} + +private static String keyStr(String key) { +return key == null ? "" : (key + ": "); +} + +private static void prettyPrintKeyValue(String key, Object o, int depth, StringBuilder out) { +//Special cases for storm... 
+if ("json_conf".equals(key) && o instanceof String) { +try { +o = Utils.parseJson((String)o); +} catch (Exception e) { +LOG.error("Could not parse json_conf as JSON", e); +} +} +if (o instanceof TBase) { +println(out, depth, keyStr(key) + "{"); +prettyPrintFields((TBase) o, depth + 1, out); +println(out, depth, "}"); +} else if (o instanceof Map) { +println(out, depth, keyStr(key) + "{"); +for (Map.Entry entry : ((Map) o).entrySet()) { +prettyPrintKeyValue(entry.getKey().toString(), entry.getValue(), depth + 1, out); +} +println(out, depth, "}"); +} else if (o instanceof Collection) { +println(out, depth, keyStr(key) + "["); +for (Object sub: (Collection)o) { +prettyPrintKeyValue(null, sub, depth + 1, out); +} +println(out, depth, "]"); +} else if (o instanceof String) { +println(out, depth, keyStr(key) + "\"" + o + "\""); +} else { +println(out, depth, keyStr(key) + o); +} +} + +private static class PrintTopo implements AdminCommand { + +@Override +public void run(String[] args, Map conf, String command) throws Exception { +for (String arg: args) { +System.out.println(arg + ":"); --- End diff -- We should probably print quotes around `arg` to make it more compatible json like output ---
[GitHub] storm pull request #2882: STORM-3260: Add in support to print some state
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2882#discussion_r227061504 --- Diff: storm-core/src/jvm/org/apache/storm/command/AdminCommands.java --- @@ -104,6 +115,164 @@ public void printCliHelp(String command, PrintStream out) { } } +/** + * Print value in a human readable format. + * @param value what to print. + * @return a human readable string + */ +public static String prettyPrint(TBase value) { +StringBuilder builder = new StringBuilder(); +prettyPrint(value, 0, builder); +return builder.toString(); +} + +private static void println(StringBuilder out, int depth, Object value) { +for (int i = 0; i < depth; i++) { +out.append("\t"); +} +out.append(value); +out.append("\n"); +} + +private static void prettyPrint(TBase value, int depth, StringBuilder out) { +if (value == null) { +println(out, depth,"null"); +return; +} +println(out, depth, "{"); +prettyPrintFields(value, depth + 1, out); +println(out, depth, "}"); +} + +private static void prettyPrintFields(TBase value, int depth, StringBuilder out) { +for (Map.Entry entry : FieldMetaData.getStructMetaDataMap(value.getClass()).entrySet()) { +TFieldIdEnum key = entry.getKey(); +if (!value.isSet(key)) { +println(out, depth, key.getFieldName() + ": not set"); --- End diff -- Simply make this empty "" String result or `{}`. not set is not exactly parsable json ---
[GitHub] storm pull request #2744: [STORM-3132] Avoid NPE in the Values Constructor
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2744#discussion_r225509252 --- Diff: storm-client/test/jvm/org/apache/storm/tuple/ValuesTest.java --- @@ -0,0 +1,49 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.storm.tuple; + +import org.junit.Assert; +import org.junit.Test; + +public class ValuesTest { + +@Test +public void testNoArgsToValues() { +Values vals = new Values(); +Assert.assertTrue("Failed to add null to Values", vals.size() == 0); +} + +@Test +public void testNullArgsToValues() { +Values vals = new Values(null); +Assert.assertTrue("Failed to add null to Values", vals.size() == 1); --- End diff -- @HeartSaVioR, Make changes to unit tests to explicitly check elements in values. ---
[GitHub] storm pull request #2744: [STORM-3132] Avoid NPE in the Values Constructor
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2744#discussion_r225509106 --- Diff: storm-client/src/jvm/org/apache/storm/tuple/Values.java --- @@ -23,9 +23,13 @@ public Values() { } public Values(Object... vals) { -super(vals.length); -for (Object o : vals) { -add(o); +super(vals != null ? vals.length : 0); --- End diff -- Making change ---
[GitHub] storm pull request #2878: [STORM-3257] 'storm kill' command line should be a...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2878#discussion_r225502712 --- Diff: storm-core/src/jvm/org/apache/storm/command/KillTopology.java --- @@ -25,21 +25,49 @@ public static void main(String[] args) throws Exception { Map cl = CLI.opt("w", "wait", null, CLI.AS_INT) +.boolOpt("i", "ignore-errors") .arg("TOPO", CLI.INTO_LIST) .parse(args); + +@SuppressWarnings("unchecked") final List names = (List) cl.get("TOPO"); + +// wait seconds for topology to shut down Integer wait = (Integer) cl.get("w"); +// if '-i' set, we'll try to kill every topology listed, even if an error occurs +Boolean continueOnError = (Boolean) cl.get("i"); + final KillOptions opts = new KillOptions(); if (wait != null) { opts.set_wait_secs(wait); } + NimbusClient.withConfiguredClient(new NimbusClient.WithNimbus() { @Override public void run(Nimbus.Iface nimbus) throws Exception { +int errorCount = 0; for (String name : names) { -nimbus.killTopologyWithOpts(name, opts); -LOG.info("Killed topology: {}", name); +try { +nimbus.killTopologyWithOpts(name, opts); +LOG.info("Killed topology: {}", name); +} catch (Exception e) { +errorCount += 1; +if (!continueOnError) { +throw e; +} else { +LOG.info( +"Caught error killing topology '{}'; continuing as -i was passed. Exception: {}", --- End diff -- I would use LOG.error and let all error details/stack traces be printed instead of providing just the Exception class name. ---
[GitHub] storm pull request #2878: [STORM-3257] 'storm kill' command line should be a...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2878#discussion_r225358947 --- Diff: storm-core/src/jvm/org/apache/storm/command/KillTopology.java --- @@ -25,21 +25,49 @@ public static void main(String[] args) throws Exception { Map cl = CLI.opt("w", "wait", null, CLI.AS_INT) +.boolOpt("c", "continue-on-error") --- End diff -- I suspect this conflicts with the existing `-c` usage on https://github.com/apache/storm/blob/master/bin/storm.py#L1027 to specify general configuration parameters. I would use `-i`/`--ignore-errors` or something like that. ---
[GitHub] storm issue #2871: [STORM-3252] Bug fix for blobstore sync
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2871 > @kishorvpatil I am not sure if we want to swallow IOException, since it may be due to some serious problem which we want to propagate and nimbus cannot continue in that case. Agreed. We don't want to swallow any other `IOException`. ---
[GitHub] storm pull request #2876: [STORM-3254] Don't wait for localization of blobs ...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2876 [STORM-3254] Don't wait for localization of blobs if assignments change. Delay in blobstore can cause slot to become unusable. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3254 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2876.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2876 commit 23dab727ed23ed739e066d414c546652cb8ac4d1 Author: Kishor Patil Date: 2018-10-11T22:04:23Z Don't wait for localization of blobs if assignments change. ---
[GitHub] storm pull request #2871: [STORM-3252] Bug fix for blobstore sync
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2871#discussion_r224548982 --- Diff: storm-server/src/main/java/org/apache/storm/blobstore/BlobStoreUtils.java --- @@ -191,6 +192,8 @@ public static boolean downloadUpdatedBlob(Map conf, BlobStore bl out.close(); } isSuccess = true; +} catch(FileNotFoundException fnf) { +LOG.warn("FileNotFoundException", fnf); --- End diff -- I see, but then any other `IOException`, such as a failure to reach the Blobstore, could cause a nimbus restart. I would treat all IOExceptions in the same manner, i.e. log them and return `false`. ---
[GitHub] storm pull request #2871: [STORM-3252] Bug fix for blobstore sync
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2871#discussion_r224538909 --- Diff: storm-server/src/main/java/org/apache/storm/blobstore/BlobStoreUtils.java --- @@ -191,6 +192,8 @@ public static boolean downloadUpdatedBlob(Map conf, BlobStore bl out.close(); } isSuccess = true; +} catch(FileNotFoundException fnf) { +LOG.warn("FileNotFoundException", fnf); --- End diff -- rethrow fnf? ---
[GitHub] storm pull request #2864: [STORM-3246]Use utf-8 charset to write log files
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2864 [STORM-3246]Use utf-8 charset to write log files You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm fix-logfile-charset Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2864.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2864 commit c8ce783f2fe5e444a97ea385a33ad300f99f67ff Author: Kishor Patil Date: 2018-10-04T20:18:40Z Use utf-8 charset to write log files ---
[GitHub] storm pull request #2855: STORM-3240 health checks should succeed on exit co...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2855#discussion_r222100110 --- Diff: storm-server/src/main/java/org/apache/storm/healthcheck/HealthChecker.java --- @@ -106,13 +106,13 @@ public void run() { BufferedReader reader = new BufferedReader(new InputStreamReader(stdin)); while ((str = reader.readLine()) != null) { if (str.startsWith("ERROR")) { +LOG.warn("The healthcheck process {} exited with code {}", script, process.exitValue()); return FAILED; } } return SUCCESS; --- End diff -- @agresch, the loop should always return FAILED since the exit code is != 0. Currently, there is a possibility of falling out of the loop and returning SUCCESS. We need something similar to:
```
if (process.exitValue() != 0) {
    String str;
    InputStream stdin = process.getInputStream();
    BufferedReader reader = new BufferedReader(new InputStreamReader(stdin));
    while ((str = reader.readLine()) != null) {
        if (str.startsWith("ERROR")) {
            return FAILED;
        }
    }
    LOG.warn("The healthcheck process {} exited with code {}", script, process.exitValue());
    return FAILED_WITH_EXIT_CODE;
}
return SUCCESS;
```
---
[GitHub] storm pull request #2855: STORM-3240 health checks should succeed on exit co...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2855#discussion_r222091724 --- Diff: storm-server/src/main/java/org/apache/storm/healthcheck/HealthChecker.java --- @@ -106,13 +106,13 @@ public void run() { BufferedReader reader = new BufferedReader(new InputStreamReader(stdin)); while ((str = reader.readLine()) != null) { if (str.startsWith("ERROR")) { +LOG.warn("The healthcheck process {} exited with code {}", script, process.exitValue()); return FAILED; } } return SUCCESS; --- End diff -- Should this not fail with `return FAILED_WITH_EXIT_CODE;` since the exit code is not 0? ---
[GitHub] storm pull request #2852: STORM-3237 track Nimbus mkAssignment failures
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2852#discussion_r221288492 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java --- @@ -2850,6 +2853,7 @@ public void launchServer() throws Exception { } doCleanup(); } catch (Exception e) { + this.mkAssignmentsErrors.mark(); --- End diff -- Yes, let's apply this narrowly to just `mkAssignments` ---
[GitHub] storm pull request #2852: STORM-3237 track Nimbus mkAssignment failures
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2852#discussion_r221257606 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java --- @@ -2850,6 +2853,7 @@ public void launchServer() throws Exception { } doCleanup(); } catch (Exception e) { + this.mkAssignmentsErrors.mark(); --- End diff -- should we have separate tracker for cleanup failures and `doCleanup()` ? ---
[GitHub] storm pull request #2850: [STORM-3235] Fix WorkerToken renewal criteria and ...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2850 [STORM-3235] Fix WorkerToken renewal criteria and refactor - Fix broken condition to validate if new `WorkerToken` should be added. - Refactor code out of `Nimbus` into `WorkerTokenManager` - Updated `WorkerTokenTest` to ensure `WorkerTokenManager` recognizes token ready for renewal. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3235 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2850.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2850 commit 24a28eaadd17d2594eb88e098455d68aa74380a7 Author: Kishor Patil Date: 2018-09-26T11:26:54Z Fix WorkerToken renewal criteria and refactor ---
[GitHub] storm issue #2831: STORM-3224: Fix FLUX YAML Viewer icon location/position o...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2831 @revans2 I fixed the position and removed the duplicate span entries showing the FLUX image icon. (Screenshot: https://user-images.githubusercontent.com/6090397/45646222-afa9d500-ba90-11e8-9651-56b153e7ce42.png) ---
[GitHub] storm issue #2831: STORM-3224: Fix FLUX YAML Viewer icon location/position o...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2831 OMG, my bad - looks like the span tag is added twice. Let me fix it. ---
[GitHub] storm issue #2805: STORM-3197: Make StormMetricsRegistry non-static
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2805 @revans2 Looks like we have a few conflicts on this PR? ---
[GitHub] storm pull request #2836: STORM-3162: Cleanup heartbeats cache and make it t...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2836#discussion_r218086663 --- Diff: storm-client/src/jvm/org/apache/storm/stats/ClientStatsUtil.java --- @@ -0,0 +1,202 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.storm.stats; + +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import org.apache.storm.generated.ClusterWorkerHeartbeat; +import org.apache.storm.generated.ExecutorInfo; +import org.apache.storm.generated.ExecutorStats; +import org.apache.storm.generated.GlobalStreamId; +import org.apache.storm.shade.com.google.common.collect.Lists; +import org.apache.storm.utils.Time; + +/** + * Stats calculations needed by storm client code. 
+ */ +public class ClientStatsUtil { +public static final String SPOUT = "spout"; +public static final String BOLT = "bolt"; +static final String EXECUTOR_STATS = "executor-stats"; +static final String UPTIME = "uptime"; +public static final String TIME_SECS = "time-secs"; +public static final ToGlobalStreamIdTransformer TO_GSID = new ToGlobalStreamIdTransformer(); +public static final IdentityTransformer IDENTITY = new IdentityTransformer(); + +/** + * Convert a List executor to java List. + */ +public static List convertExecutor(List executor) { +return Lists.newArrayList(executor.get(0).intValue(), executor.get(1).intValue()); +} + +/** + * Make and map of executors to empty stats. + * @param executors the executors as keys of the map. + * @return and empty map of executors to stats. + */ +public static Map, ExecutorStats> mkEmptyExecutorZkHbs(Set> executors) { +Map, ExecutorStats> ret = new HashMap<>(); +for (Object executor : executors) { +List startEnd = (List) executor; +ret.put(convertExecutor(startEnd), null); +} +return ret; +} + +/** + * Convert Long Executor Ids in ZkHbs to Integer ones structure to java maps. + */ +public static Map, ExecutorStats> convertExecutorZkHbs(Map, ExecutorStats> executorBeats) { +Map, ExecutorStats> ret = new HashMap<>(); +for (Map.Entry, ExecutorStats> entry : executorBeats.entrySet()) { +ret.put(convertExecutor(entry.getKey()), entry.getValue()); +} +return ret; +} + +/** + * Create a new worker heartbeat for zookeeper. + * @param topoId the topology id + * @param executorStats the stats for the executors + * @param uptime the uptime for the worker. + * @return the heartbeat map. 
+ */ +public static Map<String, Object> mkZkWorkerHb(String topoId, Map<List<Integer>, ExecutorStats> executorStats, Integer uptime) { +Map<String, Object> ret = new HashMap<>(); +ret.put("storm-id", topoId); +ret.put(EXECUTOR_STATS, executorStats); +ret.put(UPTIME, uptime); +ret.put(TIME_SECS, Time.currentTimeSecs()); + +return ret; +} + +private static Number getByKeyOr0(Map<String, Object> m, String k) { +if (m == null) { +return 0; +} + +Number n = (Number) m.get(k); +if (n == null) { +return 0; +} +return n; +} + +/** + * Get a sub-map by a given key. + * @param map the original map + * @param key the key to get it from. + * @return the map stored under key. + */ +public static <K, V> Map<K, V> getMapByKey(Map map, String key) { +if (map == null) { +return null; +} +return (Map<K, V>) map.get(key); +} + +
[GitHub] storm pull request #2831: STORM-3224: Fix FLUX YAML Viewer icon location/pos...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2831 STORM-3224: Fix FLUX YAML Viewer icon location/position on UI page You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm fix-flux-icon-formatting Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2831.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2831 commit 66ce42a5620bda5a26b510cf8bde131ecb4c811e Author: Kishor Patil Date: 2018-09-13T19:19:54Z fix flux icon formatting ---
[GitHub] storm pull request #2825: [STORM-3220] Allow ability to enable/disable http ...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2825 [STORM-3220] Allow ability to enable/disable http binding You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3220 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2825.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2825 commit da554572000b802681876fd6ec5e9d12359675c8 Author: Kishor Patil Date: 2018-09-11T15:47:14Z Allow ability to enable/disable http binding ---
[GitHub] storm pull request #2814: [STORM-3207] Fix Sasl Plugin to use WorkerToken
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2814 [STORM-3207] Fix Sasl Plugin to use WorkerToken The `doAsUser` is null for DRPCClient; if a WorkerToken is found, it should be used. Also, setting the `addServerDefinition` host to the literal string `localhost` is incorrect on the server side. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3207 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2814.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2814 commit 9f815caafe82164e1b22ddfacdc227f28cb4afad Author: Kishor Patil Date: 2018-08-28T16:34:52Z Fix Sasl Plugin to use WorkerToken ---
[GitHub] storm issue #2744: [STORM-3132] Avoid NPE in the Values Constructor
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2744 @HeartSaVioR Removed unwanted condition. ---
[GitHub] storm issue #2744: [STORM-3132] Avoid NPE in the Values Constructor
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2744 @revans2 @HeartSaVioR Now constructor allows `null` values. ---
[GitHub] storm pull request #2744: [STORM-3132] Avoid NPE in the Values Constructor
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2744 [STORM-3132] Avoid NPE in the Values Constructor `Values` construction could end up throwing NPE. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3132 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2744.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2744 commit e6f95fb92597216aae50a884d62d0ead6a16bbcd Author: Kishor Patil Date: 2018-06-29T05:48:43Z Avoid NPE in the Values Constructor ---
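A minimal sketch of the kind of guard such a fix implies (hypothetical class name, not the actual Storm patch): a varargs constructor will NPE if the caller passes a null array, so check before iterating.

```java
import java.util.ArrayList;

// Hypothetical simplified analogue of a Values-style tuple class.
class SafeValues extends ArrayList<Object> {
    SafeValues(Object... vals) {
        super(vals == null ? 0 : vals.length);
        if (vals != null) {        // guard: new SafeValues((Object[]) null) must not NPE
            for (Object o : vals) {
                add(o);            // individual null elements are still allowed
            }
        }
    }
}
```

With the guard, `new SafeValues((Object[]) null)` yields an empty list, while null elements such as `new SafeValues("a", null)` still pass through.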
[GitHub] storm issue #2660: STORM-3056: Add a test for quickly rebinding to a port
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2660 @raghavgautam Can you please address the merge conflicts? ---
[GitHub] storm issue #2718: STORM-3103 allow nimbus to shutdown properly
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2718 Good catch @agresch ---
[GitHub] storm pull request #2700: [STORM-3093] Cache the storm id to executors mappi...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2700#discussion_r192811084 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java --- @@ -1333,6 +1336,21 @@ private AssignmentDistributionService getAssignmentsDistributer() { return heartbeatsCache; } +public AtomicReference<Map<String, Set<List<Integer>>>> getIdToExecutors() { +return idToExecutors; +} + +private Set<List<Integer>> getOrUpdateExecutors(String topoId, StormBase base, Map<String, Object> topoConf, +StormTopology topology) +throws IOException, AuthorizationException, InvalidTopologyException, KeyNotFoundException { +Set<List<Integer>> executors = idToExecutors.get().get(topoId); +if (null == executors) { +executors = new HashSet<>(computeExecutors(topoId, base, topoConf, topology)); +idToExecutors.getAndUpdate(new Assoc<>(topoId, executors)); --- End diff -- Do we intend to create a new HashMap every time? If not, wouldn't `idToExecutors.get().put(topoId, executors);` suffice? ---
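The trade-off the review comment raises can be sketched as follows (hypothetical names; this assumes, as the diff suggests, that `Assoc` is a copy-on-write updater applied through `AtomicReference.getAndUpdate`):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

class IdCacheDemo {
    final AtomicReference<Map<String, Integer>> cache =
            new AtomicReference<>(new HashMap<>());

    // Copy-on-write in the spirit of Assoc: every update allocates a new map,
    // so readers holding the old snapshot never observe a partial update.
    void assoc(String key, int value) {
        cache.getAndUpdate(old -> {
            Map<String, Integer> next = new HashMap<>(old); // new HashMap every time
            next.put(key, value);
            return next;
        });
    }

    // The alternative from the comment: mutate the map in place. Cheaper,
    // but only safe if no other thread iterates the map concurrently.
    void putInPlace(String key, int value) {
        cache.get().put(key, value);
    }
}
```

So the choice is deliberate: the per-update allocation buys lock-free consistent snapshots for concurrent readers.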
[GitHub] storm issue #2651: [STORM-3054] Add Topology level configuration socket time...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2651 @HeartSaVioR @revans2 Sorry for the delay in addressing the review comments. ---
[GitHub] storm pull request #2651: [STORM-3054] Add Topology level configuration sock...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2651#discussion_r188107350 --- Diff: storm-client/src/jvm/org/apache/storm/drpc/ReturnResults.java --- @@ -103,21 +85,32 @@ public void execute(Tuple input) { LOG.error("Failed to return results to DRPC server", tex); _collector.fail(input); } -reconnectClient((DRPCInvocationsClient) client); +client = getDRPCClient(host, port); --- End diff -- This is the same as before: we reach this point if we could not make the connection, or if the reconnect failed in the previous case. ---
[GitHub] storm pull request #2651: [STORM-3054] Add Topology level configuration sock...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2651#discussion_r188107411 --- Diff: storm-client/src/jvm/org/apache/storm/drpc/ReturnResults.java --- @@ -103,21 +85,32 @@ public void execute(Tuple input) { LOG.error("Failed to return results to DRPC server", tex); _collector.fail(input); } -reconnectClient((DRPCInvocationsClient) client); +client = getDRPCClient(host, port); } } } } -private void reconnectClient(DRPCInvocationsClient client) { -if (client instanceof DRPCInvocationsClient) { -try { -LOG.info("reconnecting... "); -client.reconnectClient(); //Blocking call -} catch (TException e2) { -LOG.error("Failed to connect to DRPC server", e2); +private DistributedRPCInvocations.Iface getDRPCClient(String host, int port) { +DistributedRPCInvocations.Iface client; +if (local) { +client = (DistributedRPCInvocations.Iface) ServiceRegistry.getService(host); +} else { +String server = getServer(host, port); +if (!_clients.containsKey(server)) { +try { +_clients.put(server, new DRPCInvocationsClient(_conf, host, port)); +} catch (org.apache.thrift.transport.TTransportException ex) { +throw new RuntimeException(ex); +} } +client = _clients.get(server); } +return client; +} + +private String getServer(String host, int port) { +return host + port; --- End diff -- Reverting to List. ---
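The `host + port` key being questioned in the diff is easy to break: distinct host/port pairs can concatenate to the same string, which is why reverting to a structured key such as a List is safer. A small hypothetical demonstration (not Storm code):

```java
import java.util.Arrays;
import java.util.List;

class DrpcClientKeyDemo {
    // Buggy: "node1" + 23 and "node" + 123 both become "node123".
    static String buggyKey(String host, int port) {
        return host + port;
    }

    // Unambiguous alternative: a List key, whose equals() compares elements.
    static List<Object> listKey(String host, int port) {
        return Arrays.asList(host, port);
    }
}
```

A `host + ":" + port` string would also disambiguate, since a port cannot contain the separator.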
[GitHub] storm pull request #2638: [STORM-3034] Adding exception stacktrace for execu...
Github user kishorvpatil closed the pull request at: https://github.com/apache/storm/pull/2638 ---
[GitHub] storm issue #2638: [STORM-3034] Adding exception stacktrace for executor fai...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2638 @srdo, You are right. This was an exception I noticed on an old internal version of the code. It is clearly not necessary in the latest 2.x code. Let me close this patch. ---
[GitHub] storm pull request #2651: [STORM-3054] Add Topology level configuration sock...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2651 [STORM-3054] Add Topology level configuration socket timeout for DRPC Invocation Client This patch fixes the following: - Add Topology level configuration socket timeout for DRPC Invocation Client - Fix the `_clients` map key in `ReturnResults` - Add `ReturnResults` debug log entries. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3054 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2651.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2651 commit a96581f36e6cae12943c29c42a843e46f5b6ad7e Author: Kishor Patil Date: 2018-04-30T22:39:04Z Add Topology level configuration socket timeout for DRPC Invocation Client commit da1cb49f3bcef0df84330422e9e091a65bdc541b Author: Kishor Patil Date: 2018-04-30T22:46:07Z Fix ReturnResults reconnection logic commit 0ce231e10e0c49a89f3c3a286b9aadf493a4bc7f Author: Kishor Patil Date: 2018-04-30T22:48:06Z Add debug statements to ReturnResults ---
[GitHub] storm issue #2638: [STORM-3034] Adding exception stacktrace for executor fai...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2638 @srdo, What I mean is that there are many instances where Kafka wraps actual exceptions in `InterruptedException`. I am not sure what the objective is, but the only way to understand the origin is to log the stack trace here. There are many Kafka classes (`KafkaConsumer`, `KafkaConsumerCoordinator`, etc.) that wrap other exceptions in `InterruptedException`, including [InterruptException.java](https://github.com/apache/kafka/blob/e31c0c9bdbad432bc21b583bd3c084f05323f642/clients/src/main/java/org/apache/kafka/common/errors/InterruptException.java#L39), where the `InterruptedException` is created. ---
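When a library wraps the real failure like this, one practical remedy is to walk the cause chain before logging, so the log shows where the wrapped exception originated. A hedged sketch (neither Storm nor Kafka code):

```java
class RootCauseDemo {
    // Follow getCause() to the innermost throwable, guarding against
    // self-referential causes so the loop always terminates.
    static Throwable rootCause(Throwable t) {
        Throwable cur = t;
        while (cur.getCause() != null && cur.getCause() != cur) {
            cur = cur.getCause();
        }
        return cur;
    }
}
```

Logging `rootCause(t)` (or the full stack trace, as the patch does) surfaces the original failure instead of the wrapping `InterruptException`.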
[GitHub] storm pull request #2641: [STORM-3037] Lowering CheckStyle Violations across...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2641#discussion_r184525490 --- Diff: examples/storm-starter/src/jvm/org/apache/storm/starter/BlobStoreAPIWordCountTopology.java --- @@ -38,176 +47,66 @@ import org.apache.storm.topology.TopologyBuilder; import org.apache.storm.topology.base.BaseBasicBolt; import org.apache.storm.topology.base.BaseRichSpout; -import org.apache.storm.blobstore.BlobStoreAclHandler; import org.apache.storm.tuple.Fields; import org.apache.storm.tuple.Tuple; import org.apache.storm.tuple.Values; import org.apache.storm.utils.Utils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import java.io.BufferedReader; -import java.io.BufferedWriter; -import java.io.File; -import java.io.FileReader; -import java.io.FileWriter; -import java.io.IOException; -import java.util.HashSet; -import java.util.Iterator; -import java.util.LinkedList; -import java.util.List; -import java.util.Map; -import java.util.Random; -import java.util.Set; -import java.util.StringTokenizer; - public class BlobStoreAPIWordCountTopology { +private static final Logger LOG = LoggerFactory.getLogger(BlobStoreAPIWordCountTopology.class); private static ClientBlobStore store; // Client API to invoke blob store API functionality private static String key = "key"; private static String fileName = "blacklist.txt"; -private static final Logger LOG = LoggerFactory.getLogger(BlobStoreAPIWordCountTopology.class); public static void prepare() { Config conf = new Config(); conf.putAll(Utils.readStormConfig()); store = Utils.getClientBlobStore(conf); } -// Spout implementation -public static class RandomSentenceSpout extends BaseRichSpout { -SpoutOutputCollector _collector; - -@Override -public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) { -_collector = collector; -} - -@Override -public void nextTuple() { -Utils.sleep(100); -_collector.emit(new Values(getRandomSentence())); -} - -@Override 
-public void ack(Object id) { -} - -@Override -public void fail(Object id) { -} - -@Override -public void declareOutputFields(OutputFieldsDeclarer declarer) { -declarer.declare(new Fields("sentence")); -} - -} - -// Bolt implementation -public static class SplitSentence extends ShellBolt implements IRichBolt { - -public SplitSentence() { -super("python", "splitsentence.py"); -} - -@Override -public void declareOutputFields(OutputFieldsDeclarer declarer) { -declarer.declare(new Fields("word")); -} - -@Override -public Map getComponentConfiguration() { -return null; -} -} - -public static class FilterWords extends BaseBasicBolt { -boolean poll = false; -long pollTime; -Set wordSet; -@Override -public void execute(Tuple tuple, BasicOutputCollector collector) { -String word = tuple.getString(0); -// Thread Polling every 5 seconds to update the wordSet seconds which is -// used in FilterWords bolt to filter the words -try { -if (!poll) { -wordSet = parseFile(fileName); -pollTime = System.currentTimeMillis(); -poll = true; -} else { -if ((System.currentTimeMillis() - pollTime) > 5000) { -wordSet = parseFile(fileName); -pollTime = System.currentTimeMillis(); -} -} -} catch (IOException exp) { -throw new RuntimeException(exp); -} -if (wordSet !=null && !wordSet.contains(word)) { -collector.emit(new Values(word)); -} -} - -@Override -public void declareOutputFields(OutputFieldsDeclarer declarer) { -declarer.declare(new Fields("word")); -} -} - -public void buildAndLaunchWordCountTopology(String[] args) { -TopologyBuilder builder = new TopologyBuilder(); -builder.setSpout(
[GitHub] storm pull request #2641: [STORM-3037] Lowering CheckStyle Violations across...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2641#discussion_r184525138 --- Diff: examples/storm-perf/src/main/java/org/apache/storm/perf/utils/MetricsSample.java --- @@ -160,7 +159,7 @@ private static MetricsSample getMetricsSample(TopologyInfo topInfo) { ret.spoutEmitted = spoutEmitted; ret.spoutTransferred = spoutTransferred; ret.sampleTime = System.currentTimeMillis(); -//ret.numSupervisors = clusterSummary.get_supervisors_size(); +//ret.numSupervisors = clusterSummary.get_supervisors_size(); --- End diff -- Addressed ---
[GitHub] storm pull request #2641: [STORM-3037] Lowering CheckStyle Violations across...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2641#discussion_r183772336 --- Diff: examples/storm-hbase-examples/src/main/java/org/apache/storm/hbase/topology/LookupWordCount.java --- @@ -1,25 +1,19 @@ /** - * Licensed to the Apache Software Foundation (ASF) under one - * or more contributor license agreements. See the NOTICE file - * distributed with this work for additional information - * regarding copyright ownership. The ASF licenses this file - * to you under the Apache License, Version 2.0 (the - * "License"); you may not use this file except in compliance - * with the License. You may obtain a copy of the License at + * Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version + * 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at --- End diff -- The line length allowed is 140 characters. So the paragraphs are reformatted ---
[GitHub] storm pull request #2641: [STORM-3037] Lowering CheckStyle Violations across...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2641#discussion_r183771851 --- Diff: examples/storm-hdfs-examples/src/main/java/org/apache/storm/hdfs/bolt/SequenceFileTopology.java --- @@ -151,15 +145,15 @@ public void nextTuple() { } count++; total++; -if(count > 2){ +if (count > 2) { count = 0; System.out.println("Pending count: " + this.pending.size() + ", total: " + this.total); } Thread.yield(); } public void ack(Object msgId) { -//System.out.println("ACK"); +//System.out.println("ACK"); --- End diff -- This is an attempt to fix only the formatting so that checkstyle passes; I did not evaluate each comment for deletion. ---
[GitHub] storm issue #2638: [STORM-3034] Adding exception stacktrace for executor fai...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2638 @HeartSaVioR @srdo, The issue is in assuming that `InterruptedException` only comes from external interruptions. Below is an actual example we noticed, where a Kafka spout `TimeoutException` was wrapped in `InterruptedException`; users could not identify where the exception was raised. ``` KafkaSpoutI_[39 39] [WARN] Expecting exception of class: class java.lang.InterruptedException, but exception chain only contains: (#)``` ---
[GitHub] storm pull request #2641: [STORM-3037] Lowering CheckStyle Violations across...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2641 [STORM-3037] Lowering CheckStyle Violations across all modules Below is the list of modules affected by this patch. The max allowed violations are going down by 17000+.

| Module Name | Max Allowed Exceptions Before | Max Allowed Exceptions After |
| --- | --- | --- |
| storm-client | 10079 | 3298 |
| storm-server | 2585 | 783 |
| storm-eventhubs | 1765 | 45 |
| storm-starter | 1538 | 263 |
| storm-hdfs | 1406 | 189 |
| storm-sql-core | 1286 | 59 |
| storm-redis | 602 | 64 |
| storm-cassandra | 578 | 159 |
| storm-kafka | 557 | 180 |
| storm-hbase | 371 | 100 |
| storm-maven-plugins | 269 | 11 |
| storm-hive | 259 | 58 |
| storm-core | 254 | 73 |
| storm-jms | 235 | 63 |
| storm-hdfs-examples | 224 | 29 |
| storm-kafka-monitor | 178 | 87 |
| storm-mqtt | 158 | 39 |
| storm-jdbc | 149 | 36 |
| storm-solr | 108 | 47 |
| storm-perf | 100 | 65 |
| storm-hbase-examples | 55 | 16 |
| maven-shade-clojure-transformer | 16 | 0 |

You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm fix-style Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2641.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2641 commit 8b34c6e7d15e7f79bb9e939ae2cb4a6d5f893eb7 Author: Kishor Patil Date: 2018-04-23T00:16:49Z Fixing stylecheck problems with storm-client commit 1d8a9b6243494b11f140343819e67fc84e701200 Author: Kishor Patil Date: 2018-04-23T00:46:33Z Fixing stylecheck problemswith storm-sql-core commit f1c0bcbed16085699f5fa809afb7c58f9925ce68 Author: Kishor Patil Date: 2018-04-23T00:53:42Z Fixing stylecheck problemswith storm-server commit fc1cf09b05c86dc5d1802049e7b650ceb52e54d4 Author: Kishor Patil Date: 2018-04-23T01:12:29Z Fixing stylecheck problems with storm-core commit 
18723171612bdfe818929297378433a3c069e4e7 Author: Kishor Patil Date: 2018-04-23T01:35:57Z Fixing stylecheck problems with storm-eventhubs commit 7da98cf0c5d3e23fac42871974ff8017924673c5 Author: Kishor Patil Date: 2018-04-23T01:45:33Z Fixing stylecheck problems with storm-hdfs commit 81ec15d1096cd526b94313661e7b5de7ed1791d0 Author: Kishor Patil Date: 2018-04-23T02:36:19Z Fixing stylecheck problems with storm-starter commit e3f5b138eb33e0b67c98a3ba3d8b26d70c988537 Author: Kishor Patil Date: 2018-04-23T02:41:44Z Fixing stylecheck problems with storm-redis commit 1a2d131f99e62823bf15fa958cc676a51efda10c Author: Kishor Patil Date: 2018-04-23T02:44:56Z Fixing stylecheck problems with storm-cassandra commit 4fe4f04bb9750301b96f5c20142acb9a9a6a6000 Author: Kishor Patil Date: 2018-04-23T02:59:46Z Fixing stylecheck problems with storm-kafka commit 84084ab0ac16390f24e7f24a1d9d1693062cb023 Author: Kishor Patil Date: 2018-04-23T03:03:52Z Fixing stylecheck problems with storm-maven-plugins commit 6ccf6a0a0954590e3db4c95a3f22b504a5a72757 Author: Kishor Patil Date: 2018-04-23T03:05:45Z Fixing stylecheck problems with maven-shade-clojure-transformer commit 880d14f1e7c6d450375b195f3aa5bc4045151fab Author: Kishor Patil Date: 2018-04-23T03:08:31Z Fixing stylecheck problems with storm-hbase commit f4ba7c952825ef7b0ee10bd1861dd0f288fe7761 Author: Kishor Patil Date: 2018-04-23T03:11:24Z Fixing stylecheck problems with storm-hbase-examples commit 0e409ecd8b2d6956ed38f78bb068ce8fef67e83b Author: Kishor Patil Date: 2018-04-23T03:14:26Z Fixing stylecheck problems with storm-perf commit 5fc4e9f0bdf7a58852f7c27a1f8049e2bb3776a5 Author: Kishor Patil Date: 2018-04-23T03:19:45Z Fixing stylecheck problems with storm-hive commit 224633d3ccbaa843c7f94c709d5cc573b1c59845 Author: Kishor Patil Date: 2018-04-23T03:21:46Z Fixing stylecheck problems with storm-kafka-monitor commit 95602b1be7493c33b7dc8c3a8cb6e406b59e907d Author: Kishor Patil Date: 2018-04-23T03:23:30Z Fixing stylecheck problems with 
storm-jms commit 53adebcfec5dec5e7a59f0a9108da734d3a3f01a Author: Kishor Patil Date: 2018-04-23T03:24:58Z Fixing stylecheck problems with storm-solr commit 6d20c4af585611c6d317ad817b0a0b4b172a40ce Author: Kishor Patil Date: 2018-04-23T03:26:15Z Fixing stylecheck problems with storm-jdbc commit f7f3524d1401d76e1779c823391dce46e58acf48 Author: Kishor Patil Date: 2018-04-23T03:32:01Z Fixing stylecheck problems with storm-mqtt commit e8ffac458baf6cc7a90dd0ced78cc01f0ffb4229 Author: Kishor Patil Date: 2018-04-23T04:44:09Z Fixing stylecheck problems with storm-hdfs
[GitHub] storm issue #2638: [STORM-3034] Adding exception stacktrace for executor fai...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2638 @srdo Moved the `LOG.error` before changing exception Cause. ---
[GitHub] storm pull request #2638: [STORM-3034] Adding exception stacktrace for execu...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2638#discussion_r182847223 --- Diff: storm-client/src/jvm/org/apache/storm/utils/Utils.java --- @@ -109,6 +109,8 @@ import javax.security.auth.Subject; +import static org.apache.commons.lang.exception.ExceptionUtils.*; --- End diff -- Updated. ---
[GitHub] storm pull request #2638: [STORM-3034] Adding exception stacktrace for execu...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2638#discussion_r182843808 --- Diff: storm-client/src/jvm/org/apache/storm/utils/Utils.java --- @@ -361,6 +363,7 @@ public void run() { Time.sleep(s); } } catch (Throwable t) { +LOG.info("Async loop Exception Stacktrace is: {} ", getStackTrace(t)); --- End diff -- The method can return at line 370 without reaching 372. We had instances where users could not understand the root cause of the interrupt exception. ---
[GitHub] storm pull request #2638: [STORM-3034] Adding exception stacktrace for execu...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2638 [STORM-3034] Adding exception stacktrace for executor failures in worker You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3034 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2638.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2638 commit c289c47c98307e20bdacbfe2f26d4655963c392f Author: Kishor Patil Date: 2018-04-19T14:01:27Z Adding exception stacktrace for executor failures in worker ---
[GitHub] storm issue #2631: [STORM-3025] Optimize Cluster methods with Caching to avo...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2631 @agresch I have addressed the code review comments. ---
[GitHub] storm pull request #2631: [STORM-3025] Optimize Cluster methods with Caching...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2631#discussion_r180811674 --- Diff: storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceRequest.java --- @@ -187,4 +187,29 @@ public double getTotalCpu() { public NormalizedResources getNormalizedResources() { return this.normalizedResources; } + +public void removeOffHeap(final double offHeap) { +this.offHeap += offHeap; +} + +public void remove(WorkerResources value) { --- End diff -- Not used ---
[GitHub] storm pull request #2631: [STORM-3025] Optimize Cluster methods with Caching...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2631#discussion_r180800090 --- Diff: storm-server/src/main/java/org/apache/storm/scheduler/resource/normalization/NormalizedResourceRequest.java --- @@ -187,4 +187,29 @@ public double getTotalCpu() { public NormalizedResources getNormalizedResources() { return this.normalizedResources; } + +public void removeOffHeap(final double offHeap) { --- End diff -- This is an unused method; I was trying something else. Let me delete this method. ---
[GitHub] storm pull request #2631: [STORM-3025] Optimize Cluster methods with Caching...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2631 [STORM-3025] Optimize Cluster methods with Caching to avoid loopy loops Optimizing the `Cluster` methods that are called with high frequency, to speed up scheduling time for new topologies on large clusters. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3025 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2631.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2631 commit 46486d87fc8a26262d75abc05c1398afceee687c Author: Kishor Patil Date: 2018-04-10T21:42:03Z Remove loopy loops in scheduler cluster state commit caf7f885d6c10eaed440b02ec345e863af700762 Author: Kishor Patil Date: 2018-04-10T22:10:10Z Clean up caching vars from Cluster commit b57654c8a90705029ba3555ccd75d7056051c31c Author: Kishor Patil Date: 2018-04-10T22:42:54Z Cache supervisor to Used Ports in Cluster state ---
[GitHub] storm issue #2603: [STORM-3003] Adding Assignment caching to Nimbus
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2603 @HeartSaVioR, I think the changes in #2433 take care of caching assignments within `InMemoryAssignmentBackend`, so the changes proposed in #2603 are no longer needed. I am closing this PR. ---
[GitHub] storm pull request #2603: [STORM-3003] Adding Assignment caching to Nimbus
Github user kishorvpatil closed the pull request at: https://github.com/apache/storm/pull/2603 ---
[GitHub] storm issue #2603: [STORM-3003] Adding Assignment caching to Nimbus
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2603 @HeartSaVioR Sure. Let me review #2433. In the meantime, I will create patch for 1.x-branch. ---
[GitHub] storm issue #2603: [STORM-3003] Adding Assignment caching to Nimbus
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2603 @d2r Thank you for the review. I have addressed the comments. The failure seems unrelated to my changes - the test passes in my local environment. ---
[GitHub] storm pull request #2603: [STORM-3003] Adding Assignment caching to Nimbus
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2603 [STORM-3003] Adding Assignment caching to Nimbus Since nimbus (the scheduler) generates assignments, it can cache them instead of polling for them from ZK or another state manager. This improves scheduling iteration time, as well as all UI pages that require assignment information. The need for this improvement was felt on larger clusters, where ZK continues to be a bottleneck. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm3003 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2603.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2603 commit c0e460ccc74872a7b716e72445f0a10247b6f450 Author: Kishor Patil Date: 2018-03-22T20:43:00Z Adding Assignment caching to Nimbus ---
[GitHub] storm pull request #2576: [STORM-2976] Fix Supervisor HealthCheck validation...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2576 [STORM-2976] Fix Supervisor HealthCheck validations This fixes a couple of issues with the Supervisor HealthCheck functionality: 1. `ClassCastException` while reading configuration. 2. The supervisor should die if healthchecks fail. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm2976 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2576.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2576 commit a15f78f9c7755a009338c348bf14b1f6b6892ef9 Author: Kishor Patil Date: 2018-02-26T19:14:39Z Fix Supervisor HealthCheck validations ---
[GitHub] storm issue #2563: [STORM-2961] Refactor addResource and addResources Method...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2563 @HeartSaVioR Thank you, I fixed the indentation. ---
[GitHub] storm pull request #2563: [STORM-2961] Refactor addResource and addResources...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2563 [STORM-2961] Refactor addResource and addResources Methods in BaseConfigurationDeclarer You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm refactorDeclarers Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2563.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2563 commit 0267d549a96c1e7aefa3cb5d23e1cca559dd8e74 Author: Kishor Patil Date: 2018-02-17T00:23:17Z Refactor addResource and addResources Method and avoid NPEs ---
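A hedged sketch of the kind of null-guarding such a refactor implies (hypothetical class, not the actual `BaseConfigurationDeclarer` patch): `addResources` tolerates a null map instead of throwing an NPE, and both methods stay fluent.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical declarer-style builder with null-safe resource methods.
class ResourceDeclarerDemo {
    private final Map<String, Number> resources = new HashMap<>();

    ResourceDeclarerDemo addResource(String name, Number value) {
        resources.put(name, value);
        return this;                // fluent, like the declarer API
    }

    ResourceDeclarerDemo addResources(Map<String, ? extends Number> toAdd) {
        if (toAdd != null) {        // guard avoids an NPE on a null map
            resources.putAll(toAdd);
        }
        return this;
    }

    int size() {
        return resources.size();
    }
}
```

The null guard lets callers chain `addResources` unconditionally, even when no resource map was configured.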
[GitHub] storm pull request #2487: [STORM-2873] Do not delete backpressure ephemeral ...
Github user kishorvpatil closed the pull request at: https://github.com/apache/storm/pull/2487 ---
[GitHub] storm issue #2487: [STORM-2873] Do not delete backpressure ephemeral node fr...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2487 Closing the PR as it is no longer needed for the master branch. ---
[GitHub] storm pull request #2531: STORM-2898: Support for WorkerToken authentication
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2531#discussion_r164154647 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/nimbus/Nimbus.java --- @@ -2638,6 +2647,34 @@ private int getNumOfAckerExecs(Map totalConf, StormTopology topo } } +private void upsertWorkerTokensInCreds(Map creds, String user, String topologyId) { +if (workerTokenManager != null) { +final long renewIfExpirationBefore = workerTokenManager.getMaxExpirationTimeForRenewal(); +for (WorkerTokenServiceType type : WorkerTokenServiceType.values()) { +boolean shouldAdd = true; +WorkerToken oldToken = AuthUtils.readWorkerToken(creds, type); +if (oldToken != null) { +try { +WorkerTokenInfo info = AuthUtils.getWorkerTokenInfo(oldToken); +if (info.is_set_expirationTimeMillis() || info.get_expirationTimeMillis() > renewIfExpirationBefore) { +//Found an existing token and it is not going to expire any time soon, so don't bother adding in a new +// token. +shouldAdd = false; +} +} catch (Exception e) { +//The old token could not be deserialized. This is bad, but we are going to replace it anyways so just keep going. +LOG.error("Could not deserialize token info", e); +} +} +if (shouldAdd) { +AuthUtils.setWorkerToken(creds, workerTokenManager.createOrUpdateTokenFor(type, user, topologyId)); --- End diff -- Not necessary, but some info level logs mentioning update to the worker tokens might help. ---
[GitHub] storm pull request #2531: STORM-2898: Support for WorkerToken authentication
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2531#discussion_r164134542 --- Diff: storm-core/test/clj/org/apache/storm/nimbus_test.clj --- @@ -46,7 +46,7 @@ (:import [org.apache.commons.io FileUtils]) (:import [org.json.simple JSONValue]) (:import [org.apache.storm.daemon StormCommon]) - (:import [org.apache.storm.cluster IStormClusterState StormClusterStateImpl ClusterStateContext ClusterUtils]) + (:import [org.apache.storm.cluster DaemonType IStormClusterState StormClusterStateImpl ClusterStateContext ClusterUtils]) --- End diff -- Harmless, but I don't see tests using `DaemonType` in this file. ---
[GitHub] storm pull request #2531: STORM-2898: Support for WorkerToken authentication
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2531#discussion_r163729225 --- Diff: storm-client/src/jvm/org/apache/storm/security/auth/kerberos/ServerCallbackHandler.java --- @@ -18,79 +18,86 @@ package org.apache.storm.security.auth.kerberos; +import javax.security.sasl.RealmCallback; import org.apache.storm.security.auth.AuthUtils; import org.apache.storm.security.auth.ReqContext; -import org.apache.storm.security.auth.SaslTransportPlugin; +import org.apache.storm.security.auth.sasl.SaslTransportPlugin; import org.slf4j.Logger; import org.slf4j.LoggerFactory; -import javax.security.auth.Subject; import javax.security.auth.callback.*; import javax.security.auth.login.AppConfigurationEntry; import javax.security.auth.login.Configuration; import javax.security.sasl.AuthorizeCallback; import java.io.IOException; -import java.util.Map; /** - * SASL server side callback handler + * SASL server side callback handler for kerberos auth. */ public class ServerCallbackHandler implements CallbackHandler { private static final Logger LOG = LoggerFactory.getLogger(ServerCallbackHandler.class); -private String userName; - -public ServerCallbackHandler(Configuration configuration, Map topoConf) throws IOException { -if (configuration==null) return; +public ServerCallbackHandler(Configuration configuration) throws IOException { +if (configuration == null) { +return; +} AppConfigurationEntry configurationEntries[] = configuration.getAppConfigurationEntry(AuthUtils.LOGIN_CONTEXT_SERVER); if (configurationEntries == null) { String errorMessage = "Could not find a '"+AuthUtils.LOGIN_CONTEXT_SERVER+"' entry in this configuration: Server cannot start."; --- End diff -- spacing. ---
[GitHub] storm pull request #2531: STORM-2898: Support for WorkerToken authentication
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2531#discussion_r163728603 --- Diff: storm-client/src/jvm/org/apache/storm/security/auth/kerberos/KerberosSaslTransportPlugin.java --- @@ -114,22 +120,26 @@ public TTransportFactory getServerTransportFactory() throws IOException { //check the credential of our principal if (subject.getPrivateCredentials(KerberosTicket.class).isEmpty()) { throw new RuntimeException("Fail to verify user principal with section \"" -+AuthUtils.LOGIN_CONTEXT_SERVER+"\" in login configuration file "+ login_conf); ++AuthUtils.LOGIN_CONTEXT_SERVER+"\" in login configuration file "+ loginConf); --- End diff -- spacing. ---
[GitHub] storm issue #2487: [STORM-2873] Do not delete backpressure ephemeral node fr...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2487 @HeartSaVioR, Please find patch for 1.x-branch - https://github.com/apache/storm/pull/2529 ---
[GitHub] storm pull request #2529: [STORM-2873] Do not delete backpressure ephemeral ...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2529 [STORM-2873] Do not delete backpressure ephemeral node frequently - Do not delete backpressure ephemeral node frequently - Add backpressure timeout. - Clean up backpressure znodes. - Add in "restart timeout" for backpressure. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm2873-1.x-branch Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2529.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2529 commit dd04a5563317fa6f57d3d7ec32190940b98454d7 Author: Kishor Patil Date: 2018-01-22T20:47:42Z Adding backpressure timeout, backpressure znodes cleanup, Do not delete backpressure ephemeral node frequently commit 14cb3a94a65136d016da25973d82e7177b2538ce Author: Kishor Patil Date: 2018-01-23T17:39:46Z Use 1.7 compatible Long size ---
[GitHub] storm issue #2487: [STORM-2873] Do not delete backpressure ephemeral node fr...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2487 Ok. I will create the PR for 1.x-branch. ---
[GitHub] storm issue #2487: [STORM-2873] Do not delete backpressure ephemeral node fr...
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2487 @srdo , Please review changes addressing the comments. ---
[GitHub] storm pull request #2487: [STORM-2873] Do not delete backpressure ephemeral ...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2487 [STORM-2873] Do not delete backpressure ephemeral node frequently Once the ephemeral znode is created, we can leave it as is, since other workers look at its timestamp to ensure it is not stale. This avoids deletion/creation of the same ephemeral znode path at very high frequency. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm STORM-2873 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2487.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2487 commit 7f891fd6eaed1be5a9e238ff4d246ca195d37b21 Author: Kishor Patil Date: 2017-12-29T18:20:32Z Do not delete backpressure ephemeral node frequently ---
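The scheme described above, where the ephemeral znode is left in place and readers judge freshness from a timestamp in its payload, can be sketched as follows. This is a simplified illustration under assumed names (`encodePayload`, `isStale`, `staleTimeoutMs`), not the actual Storm implementation:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the timestamp-based staleness check described above.
// Instead of deleting and recreating the ephemeral znode on every change, the
// writer updates only the payload; readers compare the embedded timestamp
// against a timeout to decide whether the backpressure flag is still valid.
class BackpressureNode {
    private final long staleTimeoutMs;

    BackpressureNode(long staleTimeoutMs) {
        this.staleTimeoutMs = staleTimeoutMs;
    }

    /** Encode the current time as the znode payload (written in place, no delete). */
    byte[] encodePayload(long nowMs) {
        return ByteBuffer.allocate(Long.BYTES).putLong(nowMs).array();
    }

    /** A reader treats the node as stale if its timestamp is older than the timeout. */
    boolean isStale(byte[] payload, long nowMs) {
        long lastUpdated = ByteBuffer.wrap(payload).getLong();
        return nowMs - lastUpdated > staleTimeoutMs;
    }
}
```

With this approach the znode path is created once per worker lifetime; only its data changes, which avoids the churn of repeated delete/create calls against ZooKeeper.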
[GitHub] storm pull request #2456: YSTORM-4457: Fix for wouldFit
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2456#discussion_r157072777 --- Diff: storm-server/src/test/java/org/apache/storm/scheduler/resource/TestResourceAwareScheduler.java --- @@ -640,6 +641,150 @@ public void testHeterogeneousCluster() { //end of Test3 } +@Test +public void testHeterogeneousClusterwithGras() { --- End diff -- Please refactor to avoid duplicate code. ---
[GitHub] storm pull request #2400: STORM-2792: Remove RAS EvictionPolicy and cleanup
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2400#discussion_r148882086 --- Diff: storm-server/src/main/java/org/apache/storm/scheduler/resource/ResourceAwareScheduler.java --- @@ -62,123 +70,144 @@ public void prepare(Map conf) { @Override public void schedule(Topologies topologies, Cluster cluster) { -//initialize data structures -for (TopologyDetails td : cluster.getTopologies()) { +Map userMap = getUsers(cluster); +List orderedTopologies = new ArrayList<>(schedulingPriorityStrategy.getOrderedTopologies(cluster, userMap)); +LOG.info("Ordered list of topologies is: {}", orderedTopologies.stream().map((t) -> t.getId()).collect(Collectors.toList())); --- End diff -- I would leave this at debug. ---
[GitHub] storm pull request #2390: [STORM-2790] Add nimbus admins groups
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2390#discussion_r147774596 --- Diff: storm-server/src/main/java/org/apache/storm/DaemonConfig.java --- @@ -159,14 +159,6 @@ public static final String SCHEDULER_DISPLAY_RESOURCE = "scheduler.display.resource"; /** - * Initialization parameters for the group mapping service plugin. - * Provides a way for a @link{STORM_GROUP_MAPPING_SERVICE_PROVIDER_PLUGIN} - * implementation to access optional settings. - */ -@isType(type = Map.class) -public static final String STORM_GROUP_MAPPING_SERVICE_PARAMS = "storm.group.mapping.service.params"; --- End diff -- It was not used anywhere until this PR - where `FixedGroupsMapping` uses it. ---
[GitHub] storm pull request #2390: [STORM-2790] Add nimbus admins groups
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2390 [STORM-2790] Add nimbus admins groups You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm add-admin-groups Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2390.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2390 commit e7d9881c1876494b179e06c7c3ee64c606590343 Author: Kishor Patil Date: 2017-10-26T21:59:35Z Add nimbus admins groups ---
[GitHub] storm pull request #2363: STORM-2759: Let users indicate if a blob should re...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2363#discussion_r143844543 --- Diff: storm-server/src/test/java/org/apache/storm/localizer/AsyncLocalizerTest.java --- @@ -286,29 +297,80 @@ public void testRequestDownloadTopologyBlobs() throws Exception { class TestLocalizer extends AsyncLocalizer { TestLocalizer(Map conf, String baseDir) throws IOException { -super(conf, AdvancedFSOps.make(conf), baseDir, new AtomicReference<>(new HashMap<>()), null); +super(conf, AdvancedFSOps.make(conf), baseDir); } @Override protected ClientBlobStore getClientBlobStore() { return mockblobstore; } + +synchronized void addReferences(List localresource, PortAndAssignment pna, BlobChangingCallback cb) { +String user = pna.getOwner(); +for (LocalResource blob : localresource) { +ConcurrentMap lrsrcSet = blob.shouldUncompress() ? userArchives.get(user) : userFiles.get(user); +if (lrsrcSet != null) { +LocalizedResource lrsrc = lrsrcSet.get(blob.getBlobName()); +if (lrsrc != null) { +lrsrc.addReference(pna, blob.needsCallback() ? cb : null); +lrsrc.addReference(pna, blob.needsCallback() ? cb : null); --- End diff -- remove duplicate call? ---
[GitHub] storm issue #2345: STORM-2438: added in rebalance changes to support RAS
Github user kishorvpatil commented on the issue: https://github.com/apache/storm/pull/2345 @revans2 LGTM. ---
[GitHub] storm pull request #2345: STORM-2438: added in rebalance changes to support ...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2345#discussion_r143559023 --- Diff: storm-server/src/main/java/org/apache/storm/localizer/LocallyCachedBlob.java --- @@ -0,0 +1,220 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.storm.localizer; + +import java.io.IOException; +import java.nio.file.Files; +import java.nio.file.LinkOption; +import java.nio.file.Path; +import java.util.HashMap; +import java.util.Map; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CountDownLatch; +import java.util.concurrent.TimeUnit; +import org.apache.storm.blobstore.BlobStore; +import org.apache.storm.blobstore.ClientBlobStore; +import org.apache.storm.generated.AuthorizationException; +import org.apache.storm.generated.KeyNotFoundException; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +/** + * Represents a blob that is cached locally on disk by the supervisor. + */ +public abstract class LocallyCachedBlob { +private static final Logger LOG = LoggerFactory.getLogger(LocallyCachedBlob.class); +public static final long NOT_DOWNLOADED_VERSION = -1; +// A callback that does nothing. 
+private static final BlobChangingCallback NOOP_CB = (assignment, port, resource, go) -> {}; + +private long lastUsed = System.currentTimeMillis(); +private final Map references = new HashMap<>(); +private final String blobDescription; +private final String blobKey; +private CompletableFuture doneUpdating = null; + +/** + * Create a new LocallyCachedBlob. + * @param blobDescription a description of the blob this represents. Typically it should at least be the blob key, but ideally also + * include if it is an archive or not, what user or topology it is for, or if it is a storm.jar etc. + */ +protected LocallyCachedBlob(String blobDescription, String blobKey) { +this.blobDescription = blobDescription; +this.blobKey = blobKey; +} + +/** + * Get the version of the blob cached locally. If the version is unknown or it has not been downloaded NOT_DOWNLOADED_VERSION + * should be returned. + * PRECONDITION: this can only be called with a lock on this instance held. + */ +public abstract long getLocalVersion(); + +/** + * Get the version of the blob in the blob store. + * PRECONDITION: this can only be called with a lock on this instance held. + */ +public abstract long getRemoteVersion(ClientBlobStore store) throws KeyNotFoundException, AuthorizationException; + +/** + * Download the latest version to a temp location. This may also include unzipping some or all of the data to a temp location. + * PRECONDITION: this can only be called with a lock on this instance held. + * @param store the store to us to download the data. + * @return the version that was downloaded. + */ +public abstract long downloadToTempLocation(ClientBlobStore store) throws IOException, KeyNotFoundException, AuthorizationException; + +/** + * Commit the new version and make it available for the end user. + * PRECONDITION: uncompressToTempLocationIfNeeded will have been called. + * PRECONDITION: this can only be called with a lock on this instance held. 
+ * @param version the version of the blob to commit. + */ +public abstract void commitNewVersion(long version) throws IOException; + +/** + * Clean up any temporary files. This will be called after updating a blob, either successfully or if an error has occured. + * The goal is to find any files that may be left over and remove them so space is not leaked. + * PRECONDITION: this can only be called with
[GitHub] storm pull request #2345: STORM-2438: added in rebalance changes to support ...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2345#discussion_r143510721 --- Diff: storm-server/src/main/java/org/apache/storm/localizer/AsyncLocalizer.java --- @@ -156,17 +146,141 @@ public AsyncLocalizer(Map conf, AtomicReference(); blobPending = new HashMap<>(); this.currAssignment = currAssignment; recoverBlobReferences(portToAssignments); } +public AsyncLocalizer(Map conf, AtomicReference> currAssignment, + Map portToAssignments) throws IOException { +this(conf, AdvancedFSOps.make(conf), ConfigUtils.supervisorLocalDir(conf), currAssignment, portToAssignments); +} + +@VisibleForTesting +LocallyCachedBlob getTopoJar(final String topologyId) throws IOException { +String topoJarKey = ConfigUtils.masterStormJarKey(topologyId); +LocallyCachedBlob topoJar = topologyBlobs.get(topoJarKey); +if (topoJar == null) { +topoJar = new LocallyCachedTopologyBlob(topologyId, isLocalMode, conf, fsOps, +LocallyCachedTopologyBlob.TopologyBlobType.TOPO_JAR); +topologyBlobs.put(topoJarKey, topoJar); +} +return topoJar; +} + +@VisibleForTesting +LocallyCachedBlob getTopoCode(final String topologyId) throws IOException { +String topoCodeKey = ConfigUtils.masterStormCodeKey(topologyId); +LocallyCachedBlob topoCode = topologyBlobs.get(topoCodeKey); +if (topoCode == null) { +topoCode = new LocallyCachedTopologyBlob(topologyId, isLocalMode, conf, fsOps, +LocallyCachedTopologyBlob.TopologyBlobType.TOPO_CODE); +topologyBlobs.put(topoCodeKey, topoCode); +} +return topoCode; +} + +@VisibleForTesting +LocallyCachedBlob getTopoConf(final String topologyId) throws IOException { +String topoConfKey = ConfigUtils.masterStormConfKey(topologyId); +LocallyCachedBlob topoConf = topologyBlobs.get(topoConfKey); +if (topoConf == null) { +topoConf = new LocallyCachedTopologyBlob(topologyId, isLocalMode, conf, fsOps, +LocallyCachedTopologyBlob.TopologyBlobType.TOPO_CONF); +topologyBlobs.put(topoConfKey, topoConf); +} +return topoConf; +} + +public 
synchronized CompletableFuture requestDownloadTopologyBlobs(final LocalAssignment assignment, final int port, + final BlobChangingCallback cb) throws IOException { +final String topologyId = assignment.get_topology_id(); + +CompletableFuture baseBlobs = requestDownloadBaseTopologyBlobs(assignment, port, cb); +return baseBlobs.thenComposeAsync((v) -> { +LocalDownloadedResource localResource = blobPending.get(topologyId); +if (localResource == null) { +Supplier supplier = new DownloadBlobs(topologyId, assignment.get_owner()); +localResource = new LocalDownloadedResource(CompletableFuture.supplyAsync(supplier, execService)); +blobPending.put(topologyId, localResource); +} +CompletableFuture r = localResource.reserve(port, assignment); +LOG.debug("Reserved blobs {} {}", topologyId, localResource); +return r; +}); +} + +@VisibleForTesting +synchronized CompletableFuture requestDownloadBaseTopologyBlobs(final LocalAssignment assignment, final int port, + BlobChangingCallback cb) throws IOException { +PortAndAssignment pna = new PortAndAssignment(port, assignment); +final String topologyId = assignment.get_topology_id(); + +LocallyCachedBlob topoJar = getTopoJar(topologyId); +topoJar.addReference(pna, cb); + +LocallyCachedBlob topoCode = getTopoCode(topologyId); +topoCode.addReference(pna, cb); + +LocallyCachedBlob topoConf = getTopoConf(topologyId); +topoConf.addReference(pna, cb); + +CompletableFuture ret = topologyBasicDownloaded.get(topologyId); +if (ret == null) { +ret = downloadOrUpdate(topoJar, topoCode, topoConf); +} +return ret; +} + +private static final int ATTEMPTS_INTERVAL_TIME = 100; + +private CompletableFuture downloadOrUpdate(LocallyCachedBlob ... blobs) { +CompletableFuture [] all = new CompletableFuture[blobs.length]; +
[GitHub] storm pull request #2345: STORM-2438: added in rebalance changes to support ...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2345#discussion_r143501314 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java --- @@ -263,7 +314,41 @@ public String toString() { return "{ " + topoId + ": " + request + " }"; } } - + +/** + * Holds the information about a blob that is changing. + */ +static class BlobChangeing { --- End diff -- Does this need to be `BlobChanging`? ---
[GitHub] storm pull request #2345: STORM-2438: added in rebalance changes to support ...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2345#discussion_r143499778 --- Diff: storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java --- @@ -104,9 +111,12 @@ this.port = port; this.iSupervisor = iSupervisor; this.localState = localState; +this.changingCallback = changingCallback; } } - + +//TODO go through all of the state transitions and make sure we handle changingBlobs +//TODO make sure to add in transition helpers that clean changingBlobs && pendingChangeingBlobs for not the current topology --- End diff -- ?? ---
[GitHub] storm pull request #2358: [STORM-2770] Add fragmentation metrics for CPU and...
GitHub user kishorvpatil opened a pull request: https://github.com/apache/storm/pull/2358 [STORM-2770] Add fragmentation metrics for CPU and Memory The patch addresses the following: - Calculate fragmentation of CPU/memory - Display it on the UI under Cluster summary - Add the following gauges: -- fragmented_memory -- fragmented_cpu -- available_memory -- available_cpu -- total_memory -- total_cpu - Fix num_supervisors calculations. https://user-images.githubusercontent.com/6090397/31197647-bd7f147a-a91f-11e7-9566-355eff7f1349.png You can merge this pull request into a Git repository by running: $ git pull https://github.com/kishorvpatil/incubator-storm storm2770 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/storm/pull/2358.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #2358 commit 22427d0e793dd67c16498e59f4da70ed174e6cc9 Author: Kishor Patil Date: 2017-10-04T15:25:51Z Adding cluster level fragmented CPU/Memory metrics to nimbus commit e0e9bcb30c5d6dff8f2fe08322312374fb7d63f2 Author: Kishor Patil Date: 2017-10-04T17:21:16Z Add fragmented cpu/memory to SupervisorSummary commit 2269e2028db6999c19c887116d0fcc95448a9a6d Author: Kishor Patil Date: 2017-10-04T17:59:51Z Show fragmented cpu/memory on cluster summary Fix Configuration porting ---
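One plausible reading of "fragmented" resources in the summary above is free capacity that cannot be scheduled because the paired resource on the same supervisor is exhausted (free memory on a node with no CPU left, and vice versa). The sketch below illustrates that idea; the class and method names are assumptions for illustration, not Storm's actual implementation:

```java
// Hypothetical sketch of a per-supervisor fragmentation computation: free
// memory counts as fragmented when the node has no CPU left to pair with it,
// and free CPU counts as fragmented when the node has no memory left.
class FragmentationCalc {
    /** Memory (MB) that is free but unusable because the node's CPU is exhausted. */
    static double fragmentedMemory(double freeMemMb, double freeCpu) {
        return freeCpu <= 0.0 ? freeMemMb : 0.0;
    }

    /** CPU (%) that is free but unusable because the node's memory is exhausted. */
    static double fragmentedCpu(double freeCpu, double freeMemMb) {
        return freeMemMb <= 0.0 ? freeCpu : 0.0;
    }
}
```

Summing these values across supervisors would yield cluster-level `fragmented_memory` and `fragmented_cpu` gauges of the kind listed in the PR description.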
[GitHub] storm pull request #2337: [STORM-2751] Removing AsyncLoggerContext from Super...
Github user kishorvpatil commented on a diff in the pull request: https://github.com/apache/storm/pull/2337#discussion_r140328729 --- Diff: bin/storm.py --- @@ -739,7 +739,6 @@ def supervisor(klass="org.apache.storm.daemon.supervisor.Supervisor"): cppaths = [CLUSTER_CONF_DIR] jvmopts = parse_args(confvalue("supervisor.childopts", cppaths)) + [ "-Dlogfile.name=" + STORM_SUPERVISOR_LOG_FILE, - "-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector", --- End diff -- Synchronized logging might slow down UI response time. Maybe, if we notice this in the future, we can remove it. ---