[GitHub] storm pull request #2518: STORM-2902: Some improvements for storm-rocketmq m...
Github user vesense commented on a diff in the pull request: https://github.com/apache/storm/pull/2518#discussion_r178704108

--- Diff: external/storm-rocketmq/src/main/java/org/apache/storm/rocketmq/RocketMqConfig.java ---
@@ -23,28 +23,20 @@
 import java.util.Properties;
 import java.util.UUID;
+import org.apache.commons.lang.StringUtils;
 import org.apache.commons.lang.Validate;
 import org.apache.rocketmq.client.ClientConfig;
 import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
 import org.apache.rocketmq.client.exception.MQClientException;
 import org.apache.rocketmq.client.producer.DefaultMQProducer;
 import org.apache.rocketmq.common.consumer.ConsumeFromWhere;
-import org.apache.rocketmq.remoting.common.RemotingUtil;

 /**
  * RocketMqConfig for Consumer/Producer.
  */
--- End diff --

@vongosling This is limited by the storm checkstyle rule `AbbreviationAsWordInName`. Refer to https://github.com/apache/storm/blob/master/storm-checkstyle/src/main/resources/storm/storm_checkstyle.xml#L213 ---
[GitHub] storm issue #2518: STORM-2902: Some improvements for storm-rocketmq module
Github user vesense commented on the issue: https://github.com/apache/storm/pull/2518 @vongosling Added unit tests and rebased PR. ---
[GitHub] storm pull request #2618: [STORM-2905] Fix KeyNotFoundException when kill a ...
Github user danny0405 commented on a diff in the pull request: https://github.com/apache/storm/pull/2618#discussion_r178693115

--- Diff: storm-server/src/main/java/org/apache/storm/localizer/LocalizedResourceRetentionSet.java ---
@@ -94,17 +93,17 @@ public void cleanup(ClientBlobStore store) {
 Map.Entry> rsrc = i.next();
 LocallyCachedBlob resource = rsrc.getKey();
 try {
-    resource.getRemoteVersion(store);
+    if (!store.isRemoteBlobExists(resource.getKey())) {
--- End diff --

Ok, I will split it out into a separate JIRA. ---
[GitHub] storm issue #2618: [STORM-2905] Fix KeyNotFoundException when kill a storm a...
Github user danny0405 commented on the issue: https://github.com/apache/storm/pull/2618 @revans2 I made this patch for the concurrent race condition on `AsyncLocalizer#topologyBlobs` between `updateBlobs` and `releaseSlotFor`: overdue keys in `AsyncLocalizer#topologyBlobs` are only cleaned by the `AsyncLocalizer#clean()` timer task, and `updateBlobs` is also a timer task but is not guarded by a lock. I added an RPC API `isRemoteBlobExists` only to make the log less confusing, which is unrelated to this patch. ---
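The race described here — two periodic tasks mutating the same map while only one of them is guarded — can be sketched generically. This is a hypothetical illustration of the pattern under discussion, not Storm's actual AsyncLocalizer code; all names are made up:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Two timer tasks share one blob map. If cleanup and update do not hold a
// common lock, update can re-touch a key that cleanup has already decided
// to remove. Guarding both with the same monitor removes the race.
public class BlobMapRaceSketch {
    private final Map<String, Long> topologyBlobs = new ConcurrentHashMap<>();
    private final Object lock = new Object();

    void updateBlobs() {
        synchronized (lock) {                       // guard shared iteration/mutation
            topologyBlobs.replaceAll((k, v) -> v + 1);
        }
    }

    void clean() {
        synchronized (lock) {                       // same lock as updateBlobs
            topologyBlobs.entrySet().removeIf(e -> e.getValue() > 10);
        }
    }

    // Run the scenario: one blob, N update ticks, then one cleanup tick.
    static int remainingAfter(int updates) {
        BlobMapRaceSketch s = new BlobMapRaceSketch();
        s.topologyBlobs.put("topo-1-blob", 0L);
        for (int i = 0; i < updates; i++) {
            s.updateBlobs();
        }
        s.clean();
        return s.topologyBlobs.size();
    }

    public static void main(String[] args) {
        // After 12 ticks the entry is overdue and cleanup removes it.
        System.out.println(remainingAfter(12));
    }
}
```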
[GitHub] storm pull request #2618: [STORM-2905] Fix KeyNotFoundException when kill a ...
Github user danny0405 commented on a diff in the pull request: https://github.com/apache/storm/pull/2618#discussion_r178692521

--- Diff: storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java ---
@@ -497,7 +497,6 @@ static DynamicState cleanupCurrentContainer(DynamicState dynamicState, StaticSta
 assert(dynamicState.container.areAllProcessesDead());
 dynamicState.container.cleanUp();
-staticState.localizer.releaseSlotFor(dynamicState.currentAssignment, staticState.port);
--- End diff --

Yeah, but `staticState.localizer.releaseSlotFor` only removes the topology base blobs [jar/code/conf], which are already taken care of by `releaseSlotFor` itself. ---
[GitHub] storm issue #2613: .NET Core 2.0 multi-lang adapter
Github user HeartSaVioR commented on the issue: https://github.com/apache/storm/pull/2613 @oizu @MaurGi Hello Azure team, please follow up on the Storm developer mailing list, since such a discussion is taking place there. If you're not subscribed to the dev mailing list, here's a link for tracking the thread. https://mail-archives.apache.org/mod_mbox/storm-dev/201804.mbox/%3c59aff170-49de-4f83-a7b9-23616da21...@gmail.com%3E ---
Re: Storm multilang - .net core
+1 I think that’s a good idea. It would also help to have some wording regarding how one requests a project be added to that page. -Taylor

> On Apr 2, 2018, at 10:03 AM, Stig Rohde Døssing wrote:
>
> Sorry about necro'ing this thread, but I think this is relevant again, due to https://github.com/apache/storm/pull/2613.
>
> I think it makes more sense to have adapters like this in separate repositories from Storm, as they can then release fixes when they need to, without being coupled to the Storm release schedule. For example, https://github.com/Parsely/streamparse is released this way. Releasing separately also makes it easier for users to tell whether the adapter is still being actively developed.
>
> Maybe we should add an "Ecosystem" section to the web page, similar to what Kafka has here https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem. It would help users find projects related to Storm, which is one of the few advantages I can see to adding the adapter to the Storm repo (visibility).
[GitHub] storm issue #2593: STORM-2994 KafkaSpout doesn't commit null tuples offsets
Github user srdo commented on the issue: https://github.com/apache/storm/pull/2593 +1, thanks for your patience. Please squash to one commit, and we can merge. ---
[GitHub] storm issue #2583: STORM-2649 More detailed check of config serialization
Github user ghajos commented on the issue: https://github.com/apache/storm/pull/2583 @srdo @HeartSaVioR @revans2 Thank you for the review and the patience! ---
[GitHub] storm issue #2613: .NET Core 2.0 multi-lang adapter
Github user MaurGi commented on the issue: https://github.com/apache/storm/pull/2613 Hello - we are proposing a new multilang adapter for .NET Core. It lives in the storm-multilang/dotnet folder, which contains a dotnet project for the adapter and another dotnet project for the sample. We also use Flux to define the topology, so there is a Flux YAML config file. Do we need to open a JIRA issue to have a reviewer assigned to this? Thanks - ---
[GitHub] storm pull request #2593: STORM-2994 KafkaSpout commit offsets for null tupl...
Github user reiabreu commented on a diff in the pull request: https://github.com/apache/storm/pull/2593#discussion_r178586564

--- Diff: external/storm-kafka-client/src/test/java/org/apache/storm/kafka/spout/KafkaSpoutNullTupleTest.java ---
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.kafka.spout;
+
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.storm.kafka.spout.config.builder.SingleTopicKafkaSpoutConfiguration;
+import org.apache.storm.utils.Time;
+import org.junit.Test;
+
+import java.util.regex.Pattern;
+
+import static org.mockito.ArgumentMatchers.*;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+
+public class KafkaSpoutNullTupleTest extends KafkaSpoutAbstractTest {
+    private final int maxPollRecords = 10;
+    private final int maxRetries = 3;
+
+    public KafkaSpoutNullTupleTest() {
+        super(2_000);
+    }
+
+    @Override
+    KafkaSpoutConfig createSpoutConfig() {
+        return SingleTopicKafkaSpoutConfiguration.setCommonSpoutConfigNullTuples(
--- End diff --

Absolutely, will simplify it ---
[GitHub] storm issue #2618: [STORM-2905] Fix KeyNotFoundException when kill a storm a...
Github user revans2 commented on the issue: https://github.com/apache/storm/pull/2618 I am trying to understand the reasons behind this change. Is this jira just to remove an exception that shows up in the logs? Or is that exception actually causing a problem? The reason I ask is a risk vs reward situation. The code in AsyncLocalizer is really very complicated and because it is asynchronous there are lots of races and corner cases. This makes me a bit nervous to start changing fundamental things just because of some extra logs. Additionally this is a distributed system and this particular race is inherent in the system. It is possible for someone to delete a blob at any point in time and the code in the supervisor needs to handle it. ---
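revans2's point — that in a distributed system a blob can be deleted at any moment, so a supervisor-side fetch must treat "key not found" as a normal outcome rather than a failure — can be sketched as follows. The interface and exception here are hypothetical stand-ins, not Storm's actual blobstore API:

```java
import java.util.Optional;

// A fetch routine that tolerates the inherent race: the blob may be deleted
// remotely between the moment we learn its key and the moment we fetch it.
public class BlobFetchSketch {
    static class KeyNotFoundException extends Exception {}

    interface BlobStore {
        byte[] fetch(String key) throws KeyNotFoundException;
    }

    static Optional<byte[]> tryFetch(BlobStore store, String key) {
        try {
            return Optional.of(store.fetch(key));
        } catch (KeyNotFoundException e) {
            // Expected race, not an error: someone deleted the blob concurrently.
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // Simulate a store whose blob was just deleted.
        BlobStore deletedStore = key -> { throw new KeyNotFoundException(); };
        System.out.println(tryFetch(deletedStore, "topo-1-jar").isPresent());
    }
}
```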
[GitHub] storm pull request #2593: STORM-2994 KafkaSpout commit offsets for null tupl...
Github user srdo commented on a diff in the pull request: https://github.com/apache/storm/pull/2593#discussion_r178571304

--- Diff: external/storm-kafka-client/src/test/java/org/apache/storm/kafka/spout/KafkaSpoutNullTupleTest.java ---
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.kafka.spout;
+
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.storm.kafka.spout.config.builder.SingleTopicKafkaSpoutConfiguration;
+import org.apache.storm.utils.Time;
+import org.junit.Test;
+
+import java.util.regex.Pattern;
+
+import static org.mockito.ArgumentMatchers.*;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+
+public class KafkaSpoutNullTupleTest extends KafkaSpoutAbstractTest {
+    private final int maxPollRecords = 10;
+    private final int maxRetries = 3;
+
+    public KafkaSpoutNullTupleTest() {
+        super(2_000);
+    }
+
+    @Override
+    KafkaSpoutConfig createSpoutConfig() {
+        return SingleTopicKafkaSpoutConfiguration.setCommonSpoutConfigNullTuples(
+            KafkaSpoutConfig.builder("127.0.0.1:" + kafkaUnitRule.getKafkaUnit().getKafkaPort(),
+                Pattern.compile(SingleTopicKafkaSpoutConfiguration.TOPIC)))
+            .setOffsetCommitPeriodMs(commitOffsetPeriodMs)
+            .setRetry(new KafkaSpoutRetryExponentialBackoff(KafkaSpoutRetryExponentialBackoff.TimeInterval.seconds(0), KafkaSpoutRetryExponentialBackoff.TimeInterval.seconds(0),
--- End diff --

Nit: Unless it's important to the test, I'd leave this config out or use the SingleTopicKafkaSpoutConfiguration utility to set common configuration. Same for maxPollRecords. ---
[GitHub] storm pull request #2593: STORM-2994 KafkaSpout commit offsets for null tupl...
Github user srdo commented on a diff in the pull request: https://github.com/apache/storm/pull/2593#discussion_r178571949

--- Diff: external/storm-kafka-client/src/test/java/org/apache/storm/kafka/spout/KafkaSpoutNullTupleTest.java ---
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.kafka.spout;
+
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.storm.kafka.spout.config.builder.SingleTopicKafkaSpoutConfiguration;
+import org.apache.storm.utils.Time;
+import org.junit.Test;
+
+import java.util.regex.Pattern;
+
+import static org.mockito.ArgumentMatchers.*;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.verify;
+
+public class KafkaSpoutNullTupleTest extends KafkaSpoutAbstractTest {
+    private final int maxPollRecords = 10;
+    private final int maxRetries = 3;
+
+    public KafkaSpoutNullTupleTest() {
+        super(2_000);
+    }
+
+    @Override
+    KafkaSpoutConfig createSpoutConfig() {
+        return SingleTopicKafkaSpoutConfiguration.setCommonSpoutConfigNullTuples(
--- End diff --

I think you can simplify this by using SingleTopicKafkaSpoutConfiguration.createKafkaSpoutConfigBuilder, and simply setting the record translator. I don't think there's a reason to put a whole new configuration in SingleTopicKafkaSpoutConfiguration. I'd also rather keep the record translator local to this class, since we're not using it in other tests. ---
[GitHub] storm issue #2593: STORM-2994 KafkaSpout commit offsets for null tuples
Github user reiabreu commented on the issue: https://github.com/apache/storm/pull/2593 KafkaSpoutAbstractTest is tightly coupled to KafkaSpoutConfig through createSpoutConfig, meaning that to test a single configuration change, we need to create a new test class. This is something that we can redesign in the future. ---
[GitHub] storm pull request #2618: [STORM-2905] Fix KeyNotFoundException when kill a ...
Github user revans2 commented on a diff in the pull request: https://github.com/apache/storm/pull/2618#discussion_r178568732

--- Diff: storm-server/src/main/java/org/apache/storm/localizer/LocalizedResourceRetentionSet.java ---
@@ -94,17 +93,17 @@ public void cleanup(ClientBlobStore store) {
 Map.Entry> rsrc = i.next();
 LocallyCachedBlob resource = rsrc.getKey();
 try {
-    resource.getRemoteVersion(store);
+    if (!store.isRemoteBlobExists(resource.getKey())) {
--- End diff --

Admittedly the code is cleaner with this, but the change is totally unneeded. It behaves exactly the same as it did before. I think this is a good change; it would just be nice to have it be a separate pull request and a separate JIRA, as it is not really a part of the needed fix. ---
[GitHub] storm pull request #2618: [STORM-2905] Fix KeyNotFoundException when kill a ...
Github user revans2 commented on a diff in the pull request: https://github.com/apache/storm/pull/2618#discussion_r178567096

--- Diff: storm-server/src/main/java/org/apache/storm/daemon/supervisor/Slot.java ---
@@ -497,7 +497,6 @@ static DynamicState cleanupCurrentContainer(DynamicState dynamicState, StaticSta
 assert(dynamicState.container.areAllProcessesDead());
 dynamicState.container.cleanUp();
-staticState.localizer.releaseSlotFor(dynamicState.currentAssignment, staticState.port);
--- End diff --

cleanupCurrentContainer gets used in many different locations, not just during the blob update. We need to release the slot when the container is killed or reference counting will be off. ---
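The reference-counting concern revans2 raises can be illustrated with a generic sketch. All names here are hypothetical (this is not Storm's localizer API): each slot that uses a blob increments a count, releasing the slot decrements it, and skipping a release on container kill would leave the count permanently too high, so the blob could never be cleaned up.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal reference-counting sketch: a blob is only cleanable once every
// holder has released it, so every kill/cleanup path must call release().
public class BlobRefCountSketch {
    private final Map<String, Integer> refCounts = new HashMap<>();

    void acquire(String blob) {
        refCounts.merge(blob, 1, Integer::sum);         // count += 1 (or start at 1)
    }

    void release(String blob) {
        // Returning null from computeIfPresent removes the entry at zero.
        refCounts.computeIfPresent(blob, (k, v) -> v <= 1 ? null : v - 1);
    }

    boolean canCleanup(String blob) {
        return !refCounts.containsKey(blob);            // cleanable only at zero references
    }

    public static void main(String[] args) {
        BlobRefCountSketch rc = new BlobRefCountSketch();
        rc.acquire("topo-1-jar");
        rc.acquire("topo-1-jar");
        rc.release("topo-1-jar");
        System.out.println(rc.canCleanup("topo-1-jar")); // one slot still holds it
        rc.release("topo-1-jar");
        System.out.println(rc.canCleanup("topo-1-jar")); // all references released
    }
}
```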
[GitHub] storm pull request #2593: STORM-2994 KafkaSpout commit offsets for null tupl...
Github user srdo commented on a diff in the pull request: https://github.com/apache/storm/pull/2593#discussion_r178562679

--- Diff: external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java ---
@@ -570,20 +572,25 @@ public void ack(Object messageId) {
 // Only need to keep track of acked tuples if commits to Kafka are controlled by
 // tuple acks, which happens only for at-least-once processing semantics
 final KafkaSpoutMessageId msgId = (KafkaSpoutMessageId) messageId;
-if (!emitted.contains(msgId)) {
-    if (msgId.isEmitted()) {
+if (!msgId.isNullTuple()) {
--- End diff --

Great, that sounds good. ---
[GitHub] storm pull request #2593: STORM-2994 KafkaSpout commit offsets for null tupl...
Github user reiabreu commented on a diff in the pull request: https://github.com/apache/storm/pull/2593#discussion_r178559146

--- Diff: external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java ---
@@ -570,20 +572,25 @@ public void ack(Object messageId) {
 // Only need to keep track of acked tuples if commits to Kafka are controlled by
 // tuple acks, which happens only for at-least-once processing semantics
 final KafkaSpoutMessageId msgId = (KafkaSpoutMessageId) messageId;
-if (!emitted.contains(msgId)) {
-    if (msgId.isEmitted()) {
+if (!msgId.isNullTuple()) {
--- End diff --

Sounds good, will update it. Regarding tests, we need to add at least one: test that all messages are committed for all null tuples when the Spout is not set to emit null tuples.

```
@Test
public void testShouldCommitAllMessagesIfNotSetToEmitNullTuples() throws Exception {
    final int messageCount = 10;
    prepareSpout(messageCount);

    //All null tuples should be committed, meaning they were considered to be emitted and acked
    for (int i = 0; i < messageCount; i++) {
        spout.nextTuple();
    }

    verify(collectorMock, never()).emit(anyString(), anyList(), any());

    Time.advanceTime(commitOffsetPeriodMs + KafkaSpout.TIMER_DELAY_MS);

    //Commit offsets
    spout.nextTuple();

    verifyAllMessagesCommitted(messageCount);
}
```

We would need a new SingleTopicKafkaSpoutConfiguration with something like:

```
private static class NullRecordExtractor implements RecordTranslator {

    @Override
    public List apply(ConsumerRecord record) {
        return null;
    }

    @Override
    public Fields getFieldsFor(String stream) {
        return new Fields("topic", "key", "value");
    }

    @Override
    public Object apply(Object record) {
        return null;
    }
}
```

I was planning to extend KafkaSpoutAbstractTest on something similar to KafkaSpoutSingleTopicTest. ---
[GitHub] storm pull request #2583: STORM-2649 More detailed check of config serializa...
Github user asfgit closed the pull request at: https://github.com/apache/storm/pull/2583 ---
[GitHub] storm pull request #2593: STORM-2994 KafkaSpout commit offsets for null tupl...
Github user srdo commented on a diff in the pull request: https://github.com/apache/storm/pull/2593#discussion_r178551488

--- Diff: external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java ---
@@ -570,20 +572,25 @@ public void ack(Object messageId) {
 // Only need to keep track of acked tuples if commits to Kafka are controlled by
 // tuple acks, which happens only for at-least-once processing semantics
 final KafkaSpoutMessageId msgId = (KafkaSpoutMessageId) messageId;
-if (!emitted.contains(msgId)) {
-    if (msgId.isEmitted()) {
+if (!msgId.isNullTuple()) {
--- End diff --

Nit: You can reduce the nesting by a bit here by switching this to a guard clause, e.g.

```
if (is null tuple) {
    ...
    return
}

if (emitted contains) {
    ...
} else {
    ...
}
```

Up to you whether you want to change it ---
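The guard-clause refactoring srdo suggests can be shown concretely on a toy example. The method names and return values below are made up for illustration and are not KafkaSpout's real logic; both versions behave identically, but the early return removes one level of nesting:

```java
// Nested vs. guard-clause form of the same branch logic.
public class GuardClauseSketch {
    // Nested version: the interesting logic sits one if-level deep.
    static String nested(boolean isNullTuple, boolean emittedContains) {
        if (!isNullTuple) {
            if (emittedContains) {
                return "ack";
            } else {
                return "ignore";
            }
        }
        return "ack-null";
    }

    // Guard-clause version: handle the special case first and return early.
    static String guarded(boolean isNullTuple, boolean emittedContains) {
        if (isNullTuple) {
            return "ack-null";
        }
        if (emittedContains) {
            return "ack";
        }
        return "ignore";
    }

    public static void main(String[] args) {
        System.out.println(guarded(true, false));   // special case handled up front
        System.out.println(guarded(false, true));   // main path, no extra nesting
    }
}
```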
Re: Storm multilang - .net core
Sorry about necro'ing this thread, but I think this is relevant again, due to https://github.com/apache/storm/pull/2613.

I think it makes more sense to have adapters like this in separate repositories from Storm, as they can then release fixes when they need to, without being coupled to the Storm release schedule. For example, https://github.com/Parsely/streamparse is released this way. Releasing separately also makes it easier for users to tell whether the adapter is still being actively developed.

Maybe we should add an "Ecosystem" section to the web page, similar to what Kafka has here https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem. It would help users find projects related to Storm, which is one of the few advantages I can see to adding the adapter to the Storm repo (visibility).

2018-02-02 5:23 GMT+01:00 Jungtaek Lim:

> CORRECTION: I don't work on multilang for now. "had some works" on multilang and introduced small changes on multilang protocol.
>
> 2018년 2월 2일 (금) 오전 9:12, Jungtaek Lim 님이 작성:
>
> > Please remove or bcc. for company mail/mailing lists which don't allow sending mail outside of MS. My previous mail bounced. I just removed CCC from the recipients in the current mail.
> >
> > 2018년 2월 2일 (금) 오전 9:08, Jungtaek Lim 님이 작성:
> >
> >> Mauro,
> >>
> >> First of all, thanks for proposing contribution of valuable effort on Apache Storm! Really appreciated.
> >>
> >> I don't know about C#/.net, but I have been working on changes to multilang, so I'm adding my voice here.
> >>
> >> 1. My major concern for a code donation is sustainability. According to my experience, most Storm PMC members and contributors don't appear to use Windows in their dev environments. That doesn't strictly mean they don't have experience with C#, but the chance is relatively higher. I'm sorry to the Azure team, but I recently found a forgotten major pull request on storm-eventhubs, which it looks like no one could review and maintain. It might become a real pain if we receive code which we can't/don't maintain, hence I'd rather not vote +1 unless at least two PMC members know C# well, understand the code completely, and are willing to volunteer to maintain it.
> >>
> >> 2. As you may know, all of the multilang adapters in the Storm repo are actually close to "examples" of implementations. They're just implementations of the protocol, and don't provide any language-specific optimization or language-standard code style. Most Python users in the Storm community would rather use StreamParse, and it is not uncommon to see StreamParse questions in the user group (whereas they have their own GitHub repo and issue tracker as well). I would like to see adapter (and more) projects that really look attractive for other languages as well.
> >>
> >> 3. How does it affect releasing Storm? We don't publish the adapters as packages in their languages' package ecosystems. If NuGet is one of them, we would need to add a NuGet phase to the release sequence, which has not happened yet and would become another pain. Moreover, if that's the case, I don't feel the need for strict coupling between Storm and a .net adapter package. For the user's side it's not "battery-included", and there's no difference between maintaining it inside or outside. You could freely use the user/dev lists to announce new .net adapter releases, and such announcements are happening from other projects as well.
> >>
> >> 4. It's completely a thought of my own, but I feel more need for language-native ways of supporting languages instead of continuing to improve multilang. (Not meant to discontinue it.) We will introduce the streams API in Storm 2.0.0, a higher-level API like Trident but typed and record-to-record processing. We haven't supported Trident in multilang, but I'd like to see the streams API supported in non-JVM languages, not only defining the protocol, but also having it as first-class support (users should be able to construct their own topology using only the language; there's thrift, but it's not that convenient). IMHO, according to Spark's "lessons learned" (Databricks held a poll and published it), I think it's really clear that Python should be first (and only, though R might be another good one to have).
> >>
> >> Thanks for reading a wall of text. Regardless of whether we could include the .net adapter into Storm core or not, thanks again for crafting the .net adapter and proposing it for donation.
> >>
> >> Jungtaek Lim (HeartSaVioR)
> >>
> >> 2018년 2월 2일 (금) 오전 2:03, Mauro Giusti 님이 작성:
> >>
> >>> Hello Storm devs -
> >>>
> >>> We are working on a project that uses Storm and C# / .net core components.
> >>>
> >>> As part of that, we would love to contribute to the
[GitHub] storm pull request #2612: STORM-3015: Remove kafka_2.10 dependency from stor...
Github user asfgit closed the pull request at: https://github.com/apache/storm/pull/2612 ---
[GitHub] storm pull request #2619: [STORM-3018] Fix integration test DemoTest#testExc...
Github user asfgit closed the pull request at: https://github.com/apache/storm/pull/2619 ---
[GitHub] storm pull request #2583: STORM-2649 More detailed check of config serializa...
Github user ghajos commented on a diff in the pull request: https://github.com/apache/storm/pull/2583#discussion_r178527233

--- Diff: storm-client/test/jvm/org/apache/storm/utils/UtilsTest.java ---
@@ -173,4 +176,49 @@ public void isZkAuthenticationConfiguredStormServerWithPropertyTest() {
         }
     }
 }
+
+@Test
--- End diff --

Found a positive test case in TestConfigValidate and added negative test cases. Should I collect some more test cases? ---
[GitHub] storm issue #2583: STORM-2649 More detailed check of config serialization
Github user ghajos commented on the issue: https://github.com/apache/storm/pull/2583 Sorry for the late reply! ---
[GitHub] storm pull request #2593: STORM-2994 KafkaSpout commit offsets for null tupl...
Github user reiabreu commented on a diff in the pull request: https://github.com/apache/storm/pull/2593#discussion_r178510737

--- Diff: external/storm-kafka-client/src/main/java/org/apache/storm/kafka/spout/KafkaSpout.java ---
@@ -484,8 +484,11 @@ private boolean emitOrRetryTuple(ConsumerRecord record) {
         return true;
     }
 } else {
+    /* If a null tuple is not configured to be emitted, it should be marked as emitted and acked immediately
+     * to allow its offset to be committed to Kafka. */
     LOG.debug("Not emitting null tuple for record [{}] as defined in configuration.", record);
     msgId.setEmitted(false);
--- End diff --

I concur. I'll refactor it. ---
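The behavior discussed in this review thread — that a record whose translator returns null produces no tuple, but must still be acked immediately so its offset can be committed — can be sketched in a simplified form. This is a hedged illustration of the idea, not KafkaSpout's real implementation; all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// When the translator yields null there is nothing to emit, so the offset is
// acked at once; otherwise the offset is acked later by the downstream bolt.
public class NullTupleSketch {
    static List<Long> immediatelyAckedOffsets(List<String> records,
                                              Function<String, List<Object>> translator) {
        List<Long> acked = new ArrayList<>();
        for (long offset = 0; offset < records.size(); offset++) {
            List<Object> tuple = translator.apply(records.get((int) offset));
            if (tuple == null) {
                acked.add(offset);  // no emit: ack now so the offset can be committed
            }
            // non-null tuples are emitted and acked asynchronously (not modeled here)
        }
        return acked;
    }

    public static void main(String[] args) {
        // A translator that drops every record: every offset is acked immediately.
        List<Long> acked = immediatelyAckedOffsets(List.of("a", "b"), r -> null);
        System.out.println(acked);
    }
}
```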
[GitHub] storm issue #2591: STORM-2979: WorkerHooks EOFException during run_worker_sh...
Github user hummelm commented on the issue: https://github.com/apache/storm/pull/2591 Yes, I will add a list of deserialized hooks into the worker and use it to start/stop. ---