[jira] [Commented] (STORM-2077) KafkaSpout doesn't retry failed tuples

2016-09-01 Thread Manu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457702#comment-15457702
 ] 

Manu Zhang commented on STORM-2077:
---

[~tobiasmaier], have you turned on the acker?
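For context, the acker is enabled through the topology configuration; without at least one acker executor, fail() is never invoked on the spout, so failed tuples are never handed to the retry service. A minimal sketch using the standard Storm `Config` setters (the specific values are illustrative, not a recommendation):

```java
import org.apache.storm.Config;

public class AckerConfigSketch {
    public static Config buildConfig() {
        Config conf = new Config();
        // Zero acker executors means ack/fail tracking is disabled entirely,
        // so the KafkaSpout never sees failed tuples to replay.
        conf.setNumAckers(1);
        // Tuples neither acked nor failed within this timeout are also
        // treated as failed and replayed.
        conf.setMessageTimeoutSecs(30);
        return conf;
    }
}
```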

> KafkaSpout doesn't retry failed tuples
> --
>
> Key: STORM-2077
> URL: https://issues.apache.org/jira/browse/STORM-2077
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka
>Affects Versions: 1.0.2
>Reporter: Tobias Maier
>
> KafkaSpout does not retry all failed tuples.
> We used the following configuration:
> Map props = new HashMap<>();
> props.put(KafkaSpoutConfig.Consumer.GROUP_ID, "c1");
> props.put(KafkaSpoutConfig.Consumer.KEY_DESERIALIZER, ByteArrayDeserializer.class.getName());
> props.put(KafkaSpoutConfig.Consumer.VALUE_DESERIALIZER, ByteArrayDeserializer.class.getName());
> props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, broker.bootstrapServer());
> KafkaSpoutStreams kafkaSpoutStreams = new KafkaSpoutStreams.Builder(FIELDS_KAFKA_EVENT, new String[]{"test-topic"}).build();
> KafkaSpoutTuplesBuilder kafkaSpoutTuplesBuilder = new KafkaSpoutTuplesBuilder.Builder<>(new KeyValueKafkaSpoutTupleBuilder("test-topic")).build();
> KafkaSpoutRetryService retryService = new KafkaSpoutLoggedRetryExponentialBackoff(
>         KafkaSpoutLoggedRetryExponentialBackoff.TimeInterval.milliSeconds(1),
>         KafkaSpoutLoggedRetryExponentialBackoff.TimeInterval.milliSeconds(1), 3,
>         KafkaSpoutLoggedRetryExponentialBackoff.TimeInterval.seconds(1));
> KafkaSpoutConfig config = new KafkaSpoutConfig.Builder<>(props, kafkaSpoutStreams, kafkaSpoutTuplesBuilder, retryService)
>         .setFirstPollOffsetStrategy(UNCOMMITTED_LATEST)
>         .setMaxUncommittedOffsets(30)
>         .setOffsetCommitPeriodMs(10)
>         .setMaxRetries(3)
>         .build();
> kafkaSpout = new org.apache.storm.kafka.spout.KafkaSpout<>(config);
> The downstream bolt fails every tuple, and we expect that all of those
> tuples will be replayed. But that is not the case for every tuple.
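A bolt that fails every tuple, as described in the report, would look roughly like this (the class name is illustrative; note that ack/fail only propagates when ackers are enabled):

```java
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

// Illustrative bolt that fails every tuple it receives, which should
// cause the spout to schedule each one for retry.
public class AlwaysFailBolt extends BaseRichBolt {
    private OutputCollector collector;

    @Override
    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        // fail() notifies the acker, which in turn calls fail() on the
        // originating spout task for this tuple.
        collector.fail(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams
    }
}
```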



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77299756
  
--- Diff: storm-core/test/jvm/org/apache/storm/daemon/supervisor/BasicContainerTest.java ---
@@ -0,0 +1,459 @@
+package org.apache.storm.daemon.supervisor;
+
+import static org.junit.Assert.*;
+import static org.mockito.Mockito.*;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.storm.Config;
+import org.apache.storm.container.ResourceIsolationInterface;
+import org.apache.storm.generated.LocalAssignment;
+import org.apache.storm.generated.ProfileAction;
+import org.apache.storm.generated.ProfileRequest;
+import org.apache.storm.generated.StormTopology;
+import org.apache.storm.utils.LocalState;
+import org.apache.storm.utils.Utils;
+import org.junit.Test;
+
+public class BasicContainerTest {
+    public static class CommandRun {
+        final List cmd;
+        final Map env;
+        final File pwd;
+
+        public CommandRun(List cmd, Map env, File pwd) {
+            this.cmd = cmd;
+            this.env = env;
+            this.pwd = pwd;
+        }
+    }
+
+    public static class MockBasicContainer extends BasicContainer {
+        public final List profileCmds = new ArrayList<>();
+        public final List workerCmds = new ArrayList<>();
+
+        public MockBasicContainer(int port, LocalAssignment assignment, Map conf,
+                String supervisorId, LocalState localState,
+                ResourceIsolationInterface resourceIsolationManager,
+                boolean recover) throws IOException {
+            super(port, assignment, conf, supervisorId, localState,
+                    resourceIsolationManager, recover);
+        }
+
+        public MockBasicContainer(AdvancedFSOps ops, int port, LocalAssignment assignment,
+                Map conf, Map topoConf, String supervisorId,
+                ResourceIsolationInterface resourceIsolationManager, LocalState localState,
+                String profileCmd) throws IOException {
+            super(ops, port, assignment, conf, topoConf, supervisorId,
+                    resourceIsolationManager, localState, profileCmd);
+        }
+
+        @Override
+        protected Map readTopoConf() throws IOException {
+            return new HashMap<>();
+        }
+
+        @Override
+        public void createNewWorkerId() {
+            super.createNewWorkerId();
+        }
+
+        @Override
+        public List substituteChildopts(Object value, int memOnheap) {
+            return super.substituteChildopts(value, memOnheap);
+        }
+
+        @Override
+        protected boolean runProfilingCommand(List command, Map env, String logPrefix,
+                File targetDir) throws IOException, InterruptedException {
+            profileCmds.add(new CommandRun(command, env, targetDir));
+            return true;
+        }
+
+        @Override
+        protected void launchWorkerProcess(List command, Map env, String logPrefix,
+                ExitCodeCallback processExitCallback, File targetDir) throws IOException {
+            workerCmds.add(new CommandRun(command, env, targetDir));
+        }
+
+        @Override
+        protected String javaCmd(String cmd) {
+            //avoid system dependent things
+            return cmd;
+        }
+
+        @Override
+        protected List frameworkClasspath() {
+            //We are not really running anything so make this
+            // simple to check for
+            return Arrays.asList("FRAMEWORK_CP");
+        }
+
+        @Override
+        protected String javaLibraryPath(String stormRoot, Map conf) {
+            return "JLP";
+        }
+    }
+
+    @Test
+    public void testCreateNewWorkerId() throws Exception {
+        final String topoId = "test_topology";
+        final int port = 8080;
+        LocalAssignment la = new LocalAssignment();
+        la.set_topology_id(topoId);
+
+        AdvancedFSOps ops = mock(AdvancedFSOps.class);
+
+        LocalState ls = mock(LocalState.class);
+
+        MockBasicContainer mc = new MockBasicContainer(ops, port, la, new HashMap(),
+                new HashMap(), "SUPERVISOR", null, ls, "profile");
+
+        mc.createNewWorkerId();
+
+        ass

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77296684
  
--- Diff: storm-core/test/jvm/org/apache/storm/daemon/supervisor/BasicContainerTest.java ---
@@ -0,0 +1,459 @@

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77294585
  
--- Diff: storm-core/src/jvm/org/apache/storm/utils/ConfigUtils.java ---
@@ -353,25 +350,21 @@ public LocalState nimbusTopoHistoryStateImpl(Map 
conf) throws IOException {
 }
 
 // we use this "weird" wrapper pattern temporarily for mocking in 
clojure test
-public static Map readSupervisorStormConf(Map conf, String stormId) 
throws IOException {
+public static Map readSupervisorStormConf(Map conf, String stormId) throws IOException {
 return _instance.readSupervisorStormConfImpl(conf, stormId);
 }
 
-public Map readSupervisorStormConfImpl(Map conf, String stormId) 
throws IOException {
+public Map readSupervisorStormConfImpl(Map conf, String stormId) throws IOException {
 String stormRoot = supervisorStormDistRoot(conf, stormId);
 String confPath = supervisorStormConfPath(stormRoot);
 return readSupervisorStormConfGivenPath(conf, confPath);
 }
 
 // we use this "weird" wrapper pattern temporarily for mocking in 
clojure test
-public static StormTopology readSupervisorTopology(Map conf, String 
stormId) throws IOException {
-return _instance.readSupervisorTopologyImpl(conf, stormId);
-}
-
-public StormTopology readSupervisorTopologyImpl(Map conf, String 
stormId) throws IOException {
+public static StormTopology readSupervisorTopology(Map conf, String 
stormId, AdvancedFSOps ops) throws IOException {
--- End diff --

Don't we need to keep the mock wrapper here, as with `readSupervisorStormConf()`? If we don't need it, let's remove the comment above to keep things consistent.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77294478
  
--- Diff: storm-core/src/jvm/org/apache/storm/utils/ConfigUtils.java ---
@@ -512,6 +463,10 @@ public static String workerTmpRoot(Map conf, String id) {
 public static String workerPidPath(Map conf, String id, String pid) {
     return (workerPidsRoot(conf, id) + FILE_SEPARATOR + pid);
 }
+
+public static String workerPidPath(Map conf, String id, long pid) {
+    return (workerPidsRoot(conf, id) + FILE_SEPARATOR + pid);
--- End diff --

Nit: it might be better to call workerPidPath(conf, id, String.valueOf(pid)) here, or vice versa.
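The delegation the nit suggests can be sketched standalone; the `workerPidsRoot` helper and the path layout below are stand-ins for illustration, not Storm's actual ones:

```java
public class PidPathSketch {
    // Stand-in for ConfigUtils.workerPidsRoot(conf, id); the real method
    // derives this from the supervisor's local directory layout.
    public static String workerPidsRoot(String id) {
        return "workers/" + id + "/pids";
    }

    // The String overload holds the actual path logic.
    public static String workerPidPath(String id, String pid) {
        return workerPidsRoot(id) + "/" + pid;
    }

    // The long overload just delegates, so the two can never drift apart.
    public static String workerPidPath(String id, long pid) {
        return workerPidPath(id, String.valueOf(pid));
    }

    public static void main(String[] args) {
        System.out.println(workerPidPath("worker-1", 4242L)); // workers/worker-1/pids/4242
    }
}
```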


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Storm SQL Phase 3 created

2016-09-01 Thread Jungtaek Lim
Hi devs,

I just created an epic issue for Storm SQL phase 3, which tracks the effort of
adding available data sources to Storm SQL.
https://issues.apache.org/jira/browse/STORM-2075

Currently Storm SQL only supports Apache Kafka as a producer (input table)
and consumer (output table), which is not sufficient for industry use cases.
Since we support various external modules as spouts and bolts, and also as
Trident state, we can expand them into data sources for Storm SQL.
The implementation of storm-sql-kafka is simple enough, so adding other
modules as data sources should be simple, too.

There is still some work remaining in Phase II to make Storm SQL
production-ready, so any contributions to Storm SQL would be really
appreciated.

Please let me know if you have any questions or suggestions.

Thanks,
Jungtaek Lim (HeartSaVioR)


[GitHub] storm issue #1648: STORM-2054 DependencyResolver should be aware of relative...

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on the issue:

https://github.com/apache/storm/pull/1648
  
Also fixed the license form, as @manuzhang found.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1648: STORM-2054 DependencyResolver should be aware of r...

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/storm/pull/1648#discussion_r77280313
  
--- Diff: external/storm-submit-tools/src/main/java/org/apache/storm/submit/dependency/Booter.java ---
@@ -37,16 +37,19 @@ public static RepositorySystemSession newRepositorySystemSession(
         RepositorySystem system, String localRepoPath) {
     MavenRepositorySystemSession session = new MavenRepositorySystemSession();
 
-    // find homedir
-    String home = System.getProperty("storm.home");
-    if (home == null) {
-        home = ".";
-    }
+    File repoDir = new File(localRepoPath);
--- End diff --

Addressed.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1671: STORM-2080: storm-submit-tools license check failu...

2016-09-01 Thread manuzhang
Github user manuzhang closed the pull request at:

https://github.com/apache/storm/pull/1671


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1648: STORM-2054 DependencyResolver should be aware of r...

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/storm/pull/1648#discussion_r77279840
  
--- Diff: external/storm-submit-tools/src/main/java/org/apache/storm/submit/dependency/Booter.java ---
@@ -37,16 +37,19 @@ public static RepositorySystemSession newRepositorySystemSession(
         RepositorySystem system, String localRepoPath) {
     MavenRepositorySystemSession session = new MavenRepositorySystemSession();
 
-    // find homedir
-    String home = System.getProperty("storm.home");
-    if (home == null) {
-        home = ".";
-    }
+    File repoDir = new File(localRepoPath);
--- End diff --

This code is from Zeppelin, and I just fixed the path handling bug, but I think your suggestion would be better. I'll apply it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (STORM-1344) storm-jdbc build error "object name already exists: USER_DETAILS in statement"

2016-09-01 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-1344.
-
   Resolution: Fixed
 Assignee: Paul Poulosky
Fix Version/s: 1.0.3
   1.1.0
   2.0.0

Thanks [~ppoulosk], I merged this into the master, 1.x, and 1.0.x branches.

> storm-jdbc build error "object name already exists: USER_DETAILS in statement"
> --
>
> Key: STORM-1344
> URL: https://issues.apache.org/jira/browse/STORM-1344
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-jdbc
>Affects Versions: 1.0.0
> Environment: os X, jdk7
>Reporter: Longda Feng
>Assignee: Paul Poulosky
>Priority: Critical
> Fix For: 2.0.0, 1.1.0, 1.0.3
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> ```
> [ERROR] Failed to execute goal org.codehaus.mojo:sql-maven-plugin:1.5:execute 
> (create-db) on project storm-jdbc: object name already exists: USER_DETAILS 
> in statement [ /** * Licensed to the Apache Software Foundation (ASF) under 
> one * or more contributor license agreements.  See the NOTICE file * 
> distributed with this work for additional information * regarding copyright 
> ownership.  The ASF licenses this file * to you under the Apache License, 
> Version 2.0 (the * "License"); you may not use this file except in compliance 
> * with the License.  You may obtain a copy of the License at * * 
> http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable 
> law or agreed to in writing, software * distributed under the License is 
> distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY 
> KIND, either express or implied. * See the License for the specific language 
> governing permissions and * limitations under the License. */
> [ERROR] create table user_details (id integer, user_name varchar(100), 
> create_date date)]
> [ERROR] -> [Help 1]
> org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
> goal org.codehaus.mojo:sql-maven-plugin:1.5:execute (create-db) on project 
> storm-jdbc: object name already exists: USER_DETAILS in statement [ /** * 
> Licensed to the Apache Software Foundation (ASF) under one * or more 
> contributor license agreements.  See the NOTICE file * distributed with this 
> work for additional information * regarding copyright ownership.  The ASF 
> licenses this file * to you under the Apache License, Version 2.0 (the * 
> "License"); you may not use this file except in compliance * with the 
> License.  You may obtain a copy of the License at * * 
> http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable 
> law or agreed to in writing, software * distributed under the License is 
> distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY 
> KIND, either express or implied. * See the License for the specific language 
> governing permissions and * limitations under the License. */
>  create table user_details (id integer, user_name varchar(100), create_date 
> date)]
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
>   at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
>   at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
>   at 
> org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
>   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
>   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
>   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
>   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
>   at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
>   at 
> org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
>   at 
> org.codehaus.plexus.

[GitHub] storm pull request #1648: STORM-2054 DependencyResolver should be aware of r...

2016-09-01 Thread manuzhang
Github user manuzhang commented on a diff in the pull request:

https://github.com/apache/storm/pull/1648#discussion_r77279412
  
--- Diff: external/storm-submit-tools/src/main/java/org/apache/storm/submit/dependency/Booter.java ---
@@ -37,16 +37,19 @@ public static RepositorySystemSession newRepositorySystemSession(
         RepositorySystem system, String localRepoPath) {
     MavenRepositorySystemSession session = new MavenRepositorySystemSession();
 
-    // find homedir
-    String home = System.getProperty("storm.home");
-    if (home == null) {
-        home = ".";
-    }
+    File repoDir = new File(localRepoPath);
--- End diff --

Why not leave the handling of the path to the caller of `Booter`?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1662: [STORM-1344] Remove sql command from storm-jdbc bu...

2016-09-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/1662


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm issue #1671: STORM-2080: storm-submit-tools license check failure

2016-09-01 Thread manuzhang
Github user manuzhang commented on the issue:

https://github.com/apache/storm/pull/1671
  
Oh, I missed that. I will comment there. FYI, some license headers are
not in the right form.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm issue #1662: [STORM-1344] Remove sql command from storm-jdbc build.

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on the issue:

https://github.com/apache/storm/pull/1662
  
still +1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (STORM-1459) Allow not specifying producer properties in read-only Kafka table in StormSQL

2016-09-01 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-1459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-1459.
-
   Resolution: Fixed
Fix Version/s: 1.1.0
   2.0.0

Great work [~mauzhang], I merged into master and 1.x branches.
Keep up the good work.

> Allow not specifying producer properties in read-only Kafka table in StormSQL
> -
>
> Key: STORM-1459
> URL: https://issues.apache.org/jira/browse/STORM-1459
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: storm-sql
>Reporter: Haohui Mai
>Assignee: Manu Zhang
> Fix For: 2.0.0, 1.1.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently users need to specify the properties of Kafka producer in StormSQL 
> Kafka table even if the table is read-only. It is preferable to allow users 
> to omit it for read-only tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (STORM-2080) storm-submit-tools license check failure

2016-09-01 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim closed STORM-2080.
---
Resolution: Duplicate

> storm-submit-tools license check failure
> 
>
> Key: STORM-2080
> URL: https://issues.apache.org/jira/browse/STORM-2080
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-submit-tools
>Affects Versions: 2.0.0
>Reporter: Manu Zhang
>Assignee: Manu Zhang
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm issue #1671: STORM-2080: storm-submit-tools license check failure

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on the issue:

https://github.com/apache/storm/pull/1671
  
Duplicate of #1648 (STORM-2054).
Actually, there is also a bug there: it doesn't handle relative vs.
absolute paths.
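The relative-vs-absolute handling being discussed can be sketched as follows; the resolution base directory is an assumption for illustration (Storm would presumably resolve against `storm.home`):

```java
import java.io.File;

public class RepoPathSketch {
    // Resolve a possibly-relative local repository path against a base
    // directory; absolute paths are kept as-is.
    public static File resolveRepoDir(String localRepoPath, String baseDir) {
        File repoDir = new File(localRepoPath);
        if (!repoDir.isAbsolute()) {
            repoDir = new File(baseDir, localRepoPath);
        }
        return repoDir;
    }

    public static void main(String[] args) {
        System.out.println(resolveRepoDir("/tmp/repo", "/opt/storm"));  // /tmp/repo
        System.out.println(resolveRepoDir("local-repo", "/opt/storm")); // /opt/storm/local-repo
    }
}
```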


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1666: STORM-1459: allow not specifying producer properti...

2016-09-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/storm/pull/1666


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1671: STORM-2080: storm-submit-tools license check failu...

2016-09-01 Thread manuzhang
GitHub user manuzhang opened a pull request:

https://github.com/apache/storm/pull/1671

STORM-2080: storm-submit-tools license check failure



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/manuzhang/storm STORM-2080

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/1671.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1671


commit 9034f293981f569fc8e84f1942fefb2b8bc60070
Author: manuzhang 
Date:   2016-09-02T00:28:37Z

STORM-2080: storm-submit-tools license check failure




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (STORM-2080) storm-submit-tools license check failure

2016-09-01 Thread Manu Zhang (JIRA)
Manu Zhang created STORM-2080:
-

 Summary: storm-submit-tools license check failure
 Key: STORM-2080
 URL: https://issues.apache.org/jira/browse/STORM-2080
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-submit-tools
Affects Versions: 2.0.0
Reporter: Manu Zhang
Assignee: Manu Zhang
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm issue #1670: [STORM-2079] - Unneccessary readStormConfig operation

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on the issue:

https://github.com/apache/storm/pull/1670
  
+1


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm issue #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread revans2
Github user revans2 commented on the issue:

https://github.com/apache/storm/pull/1642
  
I think I have addressed most of the issues so far. I have been running
some manual tests and have run a cluster with run-as-user and cgroup
enforcement enabled. I plan on doing some more tests, but I think the current
pull request should be ready for anyone who wants to take a look at it or
test it.

It should be a drop-in replacement for the current supervisor, and I have
even tested it doing a rolling upgrade/downgrade.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77260851
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/BasicContainer.java ---
@@ -0,0 +1,644 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.File;
+import java.io.FilenameFilter;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.stream.Collectors;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.storm.Config;
+import org.apache.storm.container.ResourceIsolationInterface;
+import org.apache.storm.generated.LocalAssignment;
+import org.apache.storm.generated.ProfileAction;
+import org.apache.storm.generated.ProfileRequest;
+import org.apache.storm.generated.StormTopology;
+import org.apache.storm.generated.WorkerResources;
+import org.apache.storm.utils.ConfigUtils;
+import org.apache.storm.utils.LocalState;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+
+/**
+ * A container that runs processes on the local box.
+ */
+public class BasicContainer extends Container {
+private static final Logger LOG = LoggerFactory.getLogger(BasicContainer.class);
+private static final FilenameFilter jarFilter = new FilenameFilter() {
+@Override
+public boolean accept(File dir, String name) {
+return name.endsWith(".jar");
+}
+};
+private static final Joiner CPJ = Joiner.on(Utils.CLASS_PATH_SEPARATOR).skipNulls();
+
+protected final LocalState _localState;
+protected final String _profileCmd;
+protected volatile boolean _exitedEarly = false;
+
+private class ProcessExitCallback implements ExitCodeCallback {
+private final String _logPrefix;
+
+public ProcessExitCallback(String logPrefix) {
+_logPrefix = logPrefix;
+}
+
+@Override
+public void call(int exitCode) {
+LOG.info("{} exited with code: {}", _logPrefix, exitCode);
+_exitedEarly = true;
+}
+}
+
+//For testing purposes
+public BasicContainer(AdvancedFSOps ops, int port, LocalAssignment assignment,
+Map<String, Object> conf, Map<String, Object> topoConf, String supervisorId,
+ResourceIsolationInterface resourceIsolationManager, LocalState localState,
+String profileCmd) throws IOException {
+super(ops, port, assignment, conf, topoConf, supervisorId, resourceIsolationManager);
+_localState = localState;
+_profileCmd = profileCmd;
+}
+
+public BasicContainer(int port, LocalAssignment assignment, Map<String, Object> conf, String supervisorId,
+LocalState localState, ResourceIsolationInterface resourceIsolationManager, boolean recover)
+throws IOException {
+super(port, assignment, conf, supervisorId, resourceIsolationManager);
+_localState = localState;
+
+if (recover) {
+synchronized (localState) {
+String wid = null;
+Map<String, Integer> workerToPort = localState.getApprovedWorkers();
+for (Map.Entry<String, Integer> entry : workerToPort.entrySet()) {
+if (port == entry.getValue().intValue()) {
+wid = entry.getKey();
+}
+}
+if (wid == null) {
+throw new ContainerRecoveryException("Could n

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77253867
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/BasicContainer.java ---
@@ -0,0 +1,629 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.File;
+import java.io.FilenameFilter;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.storm.Config;
+import org.apache.storm.container.ResourceIsolationInterface;
+import org.apache.storm.generated.LocalAssignment;
+import org.apache.storm.generated.ProfileAction;
+import org.apache.storm.generated.ProfileRequest;
+import org.apache.storm.generated.StormTopology;
+import org.apache.storm.generated.WorkerResources;
+import org.apache.storm.utils.ConfigUtils;
+import org.apache.storm.utils.LocalState;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+
+/**
+ * A container that runs processes on the local box.
+ */
+public class BasicContainer extends Container {
+private static final Logger LOG = LoggerFactory.getLogger(BasicContainer.class);
+private static final FilenameFilter jarFilter = new FilenameFilter() {
+@Override
+public boolean accept(File dir, String name) {
+return name.endsWith(".jar");
+}
+};
+private static final Joiner CPJ = Joiner.on(Utils.CLASS_PATH_SEPARATOR).skipNulls();
+
+protected final LocalState _localState;
+protected final String _profileCmd;
+protected volatile boolean _exitedEarly = false;
+
+private class ProcessExitCallback implements ExitCodeCallback {
+private final String _logPrefix;
+
+public ProcessExitCallback(String logPrefix) {
+_logPrefix = logPrefix;
+}
+
+@Override
+public void call(int exitCode) {
+LOG.info("{} exited with code: {}", _logPrefix, exitCode);
+_exitedEarly = true;
+}
+}
+
+//For testing purposes
+public BasicContainer(AdvancedFSOps ops, int port, LocalAssignment assignment,
+Map<String, Object> conf, Map<String, Object> topoConf, String supervisorId,
+ResourceIsolationInterface resourceIsolationManager, LocalState localState,
+String profileCmd) throws IOException {
+super(ops, port, assignment, conf, topoConf, supervisorId, resourceIsolationManager);
+_localState = localState;
+_profileCmd = profileCmd;
+}
+
+public BasicContainer(int port, LocalAssignment assignment, Map<String, Object> conf, String supervisorId,
+LocalState localState, ResourceIsolationInterface resourceIsolationManager, boolean recover)
+throws IOException {
+super(port, assignment, conf, supervisorId, resourceIsolationManager);
+_localState = localState;
+
+if (recover) {
+synchronized (localState) {
+String wid = null;
+Map<String, Integer> workerToPort = localState.getApprovedWorkers();
+for (Map.Entry<String, Integer> entry : workerToPort.entrySet()) {
+if (port == entry.getValue().intValue()) {
+wid = entry.getKey();
+}
+}
+if (wid == null) {
+throw new ContainerRecoveryException("Could not find worker id for " + port + " " + assignment);
+   

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread revans2
Github user revans2 commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77245653
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/AdvancedFSOps.java ---
@@ -0,0 +1,314 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.Writer;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardCopyOption;
+import java.nio.file.attribute.PosixFilePermission;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.storm.Config;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class AdvancedFSOps {
--- End diff --

AdvancedFSOps is the default; a subclass is used only if you are on Windows 
or if you are doing run-as-user.
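That selection pattern — a base implementation by default, with a subclass picked only for Windows or run-as-user — can be sketched as follows. This is an illustration only; the class names and the make() helper below are stand-ins, not Storm's actual code:

```java
// Illustration of choosing a filesystem-ops implementation by environment,
// defaulting to the base class. Stand-in names, not the real Storm classes.
public class FsOpsSketch {
    static class AdvancedFSOps {
        public String flavor() { return "default"; }
    }
    static class WindowsFSOps extends AdvancedFSOps {
        @Override public String flavor() { return "windows"; }
    }
    static class RunAsUserFSOps extends AdvancedFSOps {
        @Override public String flavor() { return "run-as-user"; }
    }

    // In this sketch run-as-user takes precedence over the Windows check;
    // the real ordering is an implementation detail of the supervisor.
    public static AdvancedFSOps make(boolean isWindows, boolean runAsUser) {
        if (runAsUser) {
            return new RunAsUserFSOps();
        }
        if (isWindows) {
            return new WindowsFSOps();
        }
        return new AdvancedFSOps();
    }

    public static void main(String[] args) {
        System.out.println(make(false, false).flavor());
    }
}
```

Callers get back the base type either way, so the rest of the supervisor code stays agnostic to which flavor is in use.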




[GitHub] storm issue #1670: [STORM-2079] - Unnecessary readStormConfig operation

2016-09-01 Thread knusbaum
Github user knusbaum commented on the issue:

https://github.com/apache/storm/pull/1670
  
+1 pending SD




[GitHub] storm pull request #1670: [STORM-2079] - Unnecessary readStormConfig operat...

2016-09-01 Thread jerrypeng
GitHub user jerrypeng opened a pull request:

https://github.com/apache/storm/pull/1670

[STORM-2079] - Unnecessary readStormConfig operation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jerrypeng/storm STORM-2079

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/1670.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1670








[jira] [Created] (STORM-2079) Unnecessary readStormConfig operation

2016-09-01 Thread Boyang Jerry Peng (JIRA)
Boyang Jerry Peng created STORM-2079:


 Summary: Unnecessary readStormConfig operation
 Key: STORM-2079
 URL: https://issues.apache.org/jira/browse/STORM-2079
 Project: Apache Storm
  Issue Type: Bug
Reporter: Boyang Jerry Peng
Assignee: Boyang Jerry Peng
Priority: Trivial


https://github.com/apache/storm/blob/master/storm-core/src/jvm/org/apache/storm/topology/BaseConfigurationDeclarer.java#L26



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request #1665: STORM-2074: fix storm-kafka-monitor NPE bug

2016-09-01 Thread priyank5485
Github user priyank5485 commented on a diff in the pull request:

https://github.com/apache/storm/pull/1665#discussion_r77241785
  
--- Diff: 
external/storm-kafka-monitor/src/main/java/org/apache/storm/kafka/monitor/KafkaOffsetLagUtil.java
 ---
@@ -373,16 +377,20 @@ private static Options buildOptions () {
 curatorFramework.start();
 String partitionPrefix = "partition_";
 String zkPath = oldKafkaSpoutOffsetQuery.getZkPath();
-if (!zkPath.endsWith("/")) {
-zkPath += "/";
+ if (zkPath.endsWith("/")) {
--- End diff --

This is called from TopologySpoutLag via ShellUtils to display the results 
in the UI. KafkaOffsetLagUtil is expected to always exit with code 0 unless 
there is a command-line options error, and TopologySpoutLag guarantees that 
it will never call it with wrong command-line arguments. For any other 
runtime error it still exits with code 0, printing the error condition on 
System.out, because we want the error message to be displayed in the UI. 
That is why there is a catch-all block in the main method: if the exit code 
is not 0, ShellUtils throws an exception and no meaningful message is 
printed in the Storm UI. Instead of exiting here, can you throw an exception 
with a message like "zk node does not exist" and let the main method catch 
it? If you want to test what I mentioned, you can build a quick topology 
with a Kafka spout and look at the Storm UI.
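The contract described here — exit 0 even on runtime errors, with the error text on System.out so the UI can render it — can be sketched in isolation. This is a hypothetical simplification; LagToolSketch and its methods are invented names, not the actual KafkaOffsetLagUtil code:

```java
// Hedged sketch of the exit-code convention described above: runtime errors
// are caught in one place, printed to System.out for the UI to display, and
// the process still exits 0. Invented names, not the real tool.
public class LagToolSketch {

    // Helpers signal problems by throwing, instead of calling System.exit()
    // deep inside the call stack (the change requested in the review).
    static String fetchLag(boolean zkNodeMissing) {
        if (zkNodeMissing) {
            throw new IllegalStateException("zk node does not exist");
        }
        return "{\"lag\": 0}";
    }

    // The catch-all lives here: any runtime error becomes a printed message
    // plus exit code 0, so a ShellUtils-style caller can still read the
    // output rather than seeing a non-zero-exit exception.
    public static int run(boolean zkNodeMissing) {
        try {
            System.out.println(fetchLag(zkNodeMissing));
        } catch (Exception e) {
            System.out.println("Error: " + e.getMessage());
        }
        return 0; // non-zero is reserved for command-line option errors
    }

    public static void main(String[] args) {
        System.exit(run(args.length > 0));
    }
}
```

The design choice is that the process's stdout, not its exit status, is the error channel, because the caller shows stdout verbatim in the UI.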




[GitHub] storm pull request #1665: STORM-2074: fix storm-kafka-monitor NPE bug

2016-09-01 Thread priyank5485
Github user priyank5485 commented on a diff in the pull request:

https://github.com/apache/storm/pull/1665#discussion_r77237088
  
--- Diff: 
external/storm-kafka-monitor/src/main/java/org/apache/storm/kafka/monitor/KafkaOffsetLagUtil.java
 ---
@@ -89,6 +89,10 @@ public static void main (String args[]) {
 printUsageAndExit(options, OPTION_ZK_SERVERS_LONG + " 
and " + OPTION_ZK_COMMITTED_NODE_LONG + " are required  with " +
 OPTION_OLD_CONSUMER_LONG);
 }
+String zkNode = 
commandLine.getOptionValue(OPTION_ZK_COMMITTED_NODE_LONG);
--- End diff --

I am not sure that just checking the length <= 1 is really that useful. 
Since, as you mentioned, it should start with /, maybe we should add a check 
satisfying all the conditions for a valid ZooKeeper node name? Or just 
remove the check? We already have a check later to see whether the node exists.
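For illustration, a check "satisfying all the conditions for a valid ZooKeeper node name" might look like the helper below. The method name and the exact rule set are assumptions for the sketch, not the actual patch:

```java
// Illustrative syntactic checks for a ZooKeeper node path: it must start
// with "/", must not end with "/" (except the root itself), and must not
// contain empty segments such as "/a//b". Sketch only, not the real patch.
public class ZkPathCheck {
    public static boolean isValidZkPath(String path) {
        if (path == null || path.isEmpty() || path.charAt(0) != '/') {
            return false;
        }
        if (path.length() > 1 && path.endsWith("/")) {
            return false;
        }
        // Reject empty path segments like "/a//b".
        return !path.contains("//");
    }
}
```

A check like this still only covers syntax; whether the node actually exists is the separate runtime check mentioned above.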




[jira] [Issue Comment Deleted] (STORM-2056) Bugs in logviewer

2016-09-01 Thread Boyang Jerry Peng (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Jerry Peng updated STORM-2056:
-
Comment: was deleted

(was: [~kabhwan],

Thanks for merging in the fix! What about 2.x branch?  Seems like you merged 
the fix into that branch as well)

> Bugs in logviewer
> -
>
> Key: STORM-2056
> URL: https://issues.apache.org/jira/browse/STORM-2056
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Boyang Jerry Peng
>Assignee: Boyang Jerry Peng
> Fix For: 2.0.0, 1.1.0, 1.0.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> 1. Incorrect url for prev,first,last,next buttons when viewing daemon logs 
> via logviewer
> Example:
> http://storm.cluster.com:8000/log?file=nimbus.log&start=0&length=51200
> should be:
> http://storm.cluster.com:8000/daemonlog?file=nimbus.log&start=0&length=51200
> Function with bug:
> https://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/daemon/logviewer.clj#L374
> 2. Downloading daemon files causes exception to be thrown because of function 
> download-log-file checks authorization via worker.yaml.  Obviously daemon log 
> root will not have this file.
> java.io.FileNotFoundException: 
> /home/y/var/storm/workers-artifacts/supervisor.log/worker.yaml (No such file 
> or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at java.io.FileReader.(FileReader.java:72)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at clojure.lang.Reflector.invokeConstructor(Reflector.java:180)
>   at backtype.storm.util$clojure_from_yaml_file.invoke(util.clj:1066)
>   at 
> backtype.storm.daemon.logviewer$get_log_user_group_whitelist.invoke(logviewer.clj:310)
>   at 
> backtype.storm.daemon.logviewer$authorized_log_user_QMARK_.invoke(logviewer.clj:326)
>   at 
> backtype.storm.daemon.logviewer$download_log_file.invoke(logviewer.clj:497)
>   at backtype.storm.daemon.logviewer$fn__11528.invoke(logviewer.clj:1024)
>   at 
> org.apache.storm.shade.compojure.core$make_route$fn__6445.invoke(core.clj:93)
>   at 
> org.apache.storm.shade.compojure.core$if_route$fn__6433.invoke(core.clj:39)
>   at 
> org.apache.storm.shade.compojure.core$if_method$fn__6426.invoke(core.clj:24)
>   at 
> org.apache.storm.shade.compojure.core$routing$fn__6451.invoke(core.clj:106)
>   at clojure.core$some.invoke(core.clj:2515)
>   at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106)
>   at clojure.lang.RestFn.applyTo(RestFn.java:139)
>   at clojure.core$apply.invoke(core.clj:626)
>   at 
> org.apache.storm.shade.compojure.core$routes$fn__6455.invoke(core.clj:111)
> 3. search-log-file should not check for authorized users in worker.yaml
> https://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/daemon/logviewer.clj#L833





[jira] [Commented] (STORM-2056) Bugs in logviewer

2016-09-01 Thread Boyang Jerry Peng (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15455843#comment-15455843
 ] 

Boyang Jerry Peng commented on STORM-2056:
--

[~kabhwan],

Thanks for merging in the fix! What about 2.x branch?  Seems like you merged 
the fix into that branch as well

> Bugs in logviewer
> -
>
> Key: STORM-2056
> URL: https://issues.apache.org/jira/browse/STORM-2056
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Boyang Jerry Peng
>Assignee: Boyang Jerry Peng
> Fix For: 2.0.0, 1.1.0, 1.0.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> 1. Incorrect url for prev,first,last,next buttons when viewing daemon logs 
> via logviewer
> Example:
> http://storm.cluster.com:8000/log?file=nimbus.log&start=0&length=51200
> should be:
> http://storm.cluster.com:8000/daemonlog?file=nimbus.log&start=0&length=51200
> Function with bug:
> https://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/daemon/logviewer.clj#L374
> 2. Downloading daemon files causes exception to be thrown because of function 
> download-log-file checks authorization via worker.yaml.  Obviously daemon log 
> root will not have this file.
> java.io.FileNotFoundException: 
> /home/y/var/storm/workers-artifacts/supervisor.log/worker.yaml (No such file 
> or directory)
>   at java.io.FileInputStream.open0(Native Method)
>   at java.io.FileInputStream.open(FileInputStream.java:195)
>   at java.io.FileInputStream.(FileInputStream.java:138)
>   at java.io.FileReader.(FileReader.java:72)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at clojure.lang.Reflector.invokeConstructor(Reflector.java:180)
>   at backtype.storm.util$clojure_from_yaml_file.invoke(util.clj:1066)
>   at 
> backtype.storm.daemon.logviewer$get_log_user_group_whitelist.invoke(logviewer.clj:310)
>   at 
> backtype.storm.daemon.logviewer$authorized_log_user_QMARK_.invoke(logviewer.clj:326)
>   at 
> backtype.storm.daemon.logviewer$download_log_file.invoke(logviewer.clj:497)
>   at backtype.storm.daemon.logviewer$fn__11528.invoke(logviewer.clj:1024)
>   at 
> org.apache.storm.shade.compojure.core$make_route$fn__6445.invoke(core.clj:93)
>   at 
> org.apache.storm.shade.compojure.core$if_route$fn__6433.invoke(core.clj:39)
>   at 
> org.apache.storm.shade.compojure.core$if_method$fn__6426.invoke(core.clj:24)
>   at 
> org.apache.storm.shade.compojure.core$routing$fn__6451.invoke(core.clj:106)
>   at clojure.core$some.invoke(core.clj:2515)
>   at org.apache.storm.shade.compojure.core$routing.doInvoke(core.clj:106)
>   at clojure.lang.RestFn.applyTo(RestFn.java:139)
>   at clojure.core$apply.invoke(core.clj:626)
>   at 
> org.apache.storm.shade.compojure.core$routes$fn__6455.invoke(core.clj:111)
> 3. search-log-file should not check for authorized users in worker.yaml
> https://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/daemon/logviewer.clj#L833





[GitHub] storm issue #1662: [STORM-1344] Remove sql command from storm-jdbc build.

2016-09-01 Thread abellina
Github user abellina commented on the issue:

https://github.com/apache/storm/pull/1662
  
+1 Thanks




[GitHub] storm issue #1662: [STORM-1344] Remove sql command from storm-jdbc build.

2016-09-01 Thread ppoulosk
Github user ppoulosk commented on the issue:

https://github.com/apache/storm/pull/1662
  
@abellina Yes.  I've now pushed the change without the plugin.




[jira] [Updated] (STORM-2075) Storm SQL Phase III

2016-09-01 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim updated STORM-2075:

Description: 
This epic tracks the effort of the phase III development of StormSQL.

For now Storm SQL only supports Kafka as a data source, which is limited for 
normal use cases. We would need to support others as well. Candidates are 
external modules.

And also consider supporting State since Trident provides a way to set / get in 
batch manner to only State. Current way (each) does insert a row 1 by 1.

  was:
This epic tracks the effort of the phase III development of StormSQL.

For now Storm SQL only supports Kafka as a data source, which is limited for 
normal use cases. We would need to support others as well. Candidates are 
external modules.

And also consider supporting State since Trident only provides a way to set / 
get via batch. Current way (each) does insert a row 1 by 1.


> Storm SQL Phase III
> ---
>
> Key: STORM-2075
> URL: https://issues.apache.org/jira/browse/STORM-2075
> Project: Apache Storm
>  Issue Type: Epic
>  Components: storm-sql
>Reporter: Jungtaek Lim
>
> This epic tracks the effort of the phase III development of StormSQL.
> For now Storm SQL only supports Kafka as a data source, which is limited for 
> normal use cases. We would need to support others as well. Candidates are 
> external modules.
> And also consider supporting State since Trident provides a way to set / get 
> in batch manner to only State. Current way (each) does insert a row 1 by 1.





[GitHub] storm issue #1662: [STORM-1344] Remove sql command from storm-jdbc build.

2016-09-01 Thread abellina
Github user abellina commented on the issue:

https://github.com/apache/storm/pull/1662
  
@ppoulosk were you able to test without the plugin altogether?




[GitHub] storm issue #1662: [STORM-1344] Remove sql command from storm-jdbc build.

2016-09-01 Thread ppoulosk
Github user ppoulosk commented on the issue:

https://github.com/apache/storm/pull/1662
  
@HeartSaVioR Thanks for the review!!  I've squashed the commits and added 
the JIRA to the commit comment.




Re: please help: maven compile error

2016-09-01 Thread Jungtaek Lim
Hi,

First, you would want to check whether that machine can reach
http://repo1.maven.org/maven2/. If it can't, you need to set up a mirror for
the Maven repo.
This failure also happens occasionally (I mean intermittently), so you might
want to retry the build.

Hope this helps. Please let me know if the problem still persists.

Thanks,
Jungtaek Lim (HeartSaVioR)

On Fri, Sep 2, 2016 at 12:14 AM, 양석우 wrote:

> 1. git clone https://github.com/apache/storm.git storm-master
> 2. in storm-master -> mvn clean install -DskipTests=true -e
> this command raises errors like below

please help: maven compile error

2016-09-01 Thread 양석우
1. git clone https://github.com/apache/storm.git storm-master
2. in storm-master -> mvn clean install -DskipTests=true -e
this command raises errors like below


[INFO] Building storm-hdfs 2.0.0-SNAPSHOT
[INFO]

[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ storm-hdfs ---
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (cleanup) @ storm-hdfs ---
[INFO]
[INFO] --- maven-remote-resources-plugin:1.2.1:process (default) @
storm-hdfs ---
Downloading:
http://repo1.maven.org/maven2/org/apache/httpcomponents/httpcore/4.1.2/httpcore-4.1.2.jar
[INFO]

[INFO] Reactor Summary:
[INFO]
[INFO] Storm .. SUCCESS [
 0.814 s]
[INFO] multilang-javascript ... SUCCESS [
 0.321 s]
[INFO] multilang-python ... SUCCESS [
 0.031 s]
[INFO] multilang-ruby . SUCCESS [
 0.027 s]
[INFO] maven-shade-clojure-transformer  SUCCESS [
 0.914 s]
[INFO] storm-maven-plugins  SUCCESS [
 1.142 s]
[INFO] Storm Core . SUCCESS [01:17
min]
[INFO] storm-rename-hack .. SUCCESS [
 1.276 s]
[INFO] storm-kafka  SUCCESS [
 0.561 s]
[INFO] storm-hdfs . FAILURE [
 2.040 s]
[INFO] storm-hbase  SKIPPED
[INFO] storm-hive . SKIPPED
[INFO] storm-jdbc . SKIPPED
[INFO] storm-redis  SKIPPED
[INFO] storm-eventhubs  SKIPPED
[INFO] flux ... SKIPPED
[INFO] flux-wrappers .. SKIPPED
[INFO] flux-core .. SKIPPED
[INFO] flux-examples .. SKIPPED
[INFO] storm-sql-runtime .. SKIPPED
[INFO] storm-sql-core . SKIPPED
[INFO] storm-sql-kafka  SKIPPED
[INFO] sql  SKIPPED
[INFO] storm-elasticsearch  SKIPPED
[INFO] storm-solr . SKIPPED
[INFO] storm-metrics .. SKIPPED
[INFO] storm-cassandra  SKIPPED
[INFO] storm-mqtt-parent .. SKIPPED
[INFO] storm-mqtt . SKIPPED
[INFO] storm-mqtt-examples  SKIPPED
[INFO] storm-mongodb .. SKIPPED
[INFO] storm-clojure .. SKIPPED
[INFO] storm-starter .. SKIPPED
[INFO] storm-kafka-client . SKIPPED
[INFO] storm-opentsdb . SKIPPED
[INFO] storm-kafka-monitor  SKIPPED
[INFO] storm-kinesis .. SKIPPED
[INFO] storm-druid  SKIPPED
[INFO] storm-submit-tools . SKIPPED
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 01:25 min
[INFO] Finished at: 2016-09-01T14:31:17+09:00
[INFO] Final Memory: 121M/2612M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-remote-resources-plugin:1.2.1:process
(default) on project storm-hdfs: Failed to resolve dependencies for one or
more projects in the reactor. Reason: Could not transfer artifact
org.apache.httpcomponents:httpcore:jar:4.1.2 from/to central (
http://repo1.maven.org/maven2/): TransferFailedException
[ERROR] org.apache.httpcomponents:httpcore:jar:4.1.2
[ERROR]
[ERROR] from the specified remote repositories:
[ERROR] confluent (http://packages.confluent.io/maven, releases=true,
snapshots=true),
[ERROR] central (http://repo1.maven.org/maven2/, releases=true,
snapshots=false),
[ERROR] clojars (https://clojars.org/repo/, releases=true, snapshots=true),
[ERROR] apache.snapshots (http://repository.apache.org/snapshots,
releases=false, snapshots=true),
[ERROR] apache.snapshots.https (
https://repository.apache.org/content/repositories/snapshots,
releases=true, snapshots=true),
[ERROR] repository.jboss.org (
http://repository.jboss.org/nexus/content/groups/public/, releases=true,
snapshots=false)
[ERROR] Path to dependency:
[ERROR] 1) org.apache.storm:s

[GitHub] storm issue #1565: STORM-1970: external project examples refactor

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on the issue:

https://github.com/apache/storm/pull/1565
  
@vesense You can refer to the pom.xml of storm-starter to see the trick. It 
determines whether the 'provided' or 'compile' scope is applied via a property and a profile.


https://github.com/apache/storm/blob/master/examples/storm-starter/pom.xml#L35 
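
A minimal sketch of the pattern being referred to (the property and profile names here are illustrative assumptions; see the linked storm-starter pom.xml for the real one):

<!-- Sketch of the profile trick, assuming a property named 'provided.scope'.
     By default storm-core is 'provided' (for cluster submission); activating
     the profile switches it to 'compile' so examples can run inside an IDE. -->
<properties>
  <provided.scope>provided</provided.scope>
</properties>

<profiles>
  <profile>
    <id>intellij</id>
    <properties>
      <provided.scope>compile</provided.scope>
    </properties>
  </profile>
</profiles>

<dependencies>
  <dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-core</artifactId>
    <version>${project.version}</version>
    <scope>${provided.scope}</scope>
  </dependency>
</dependencies>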


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] storm issue #1565: STORM-1970: external project examples refactor

2016-09-01 Thread vesense
Github user vesense commented on the issue:

https://github.com/apache/storm/pull/1565
  
hi @HeartSaVioR 
Of course, we should make sure that all examples are 'runnable'.

>profile trick to change scope of 'storm-core' : other modules use 'intellij' profile

Actually, I don't quite see what you mean.

>create fat jar for each project : change scope of external module to 
'compile'

Yes, I will change the scope to `compile` and add `maven-shade-plugin` for 
building a fat jar.
Thanks.
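
A minimal maven-shade-plugin sketch of what such a fat-jar build section could look like (the plugin configuration details are assumptions for illustration, not quoted from the PR):

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <!-- bind the shade goal to the package phase to produce a fat jar -->
          <phase>package</phase>
          <goals>
            <goal>shade</goal>
          </goals>
          <configuration>
            <transformers>
              <!-- merge META-INF/services entries from all dependencies -->
              <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
            </transformers>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>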


---


[GitHub] storm pull request #1565: STORM-1970: external project examples refactor

2016-09-01 Thread vesense
Github user vesense commented on a diff in the pull request:

https://github.com/apache/storm/pull/1565#discussion_r77186650
  
--- Diff: examples/storm-mqtt-examples/pom.xml ---
@@ -24,18 +24,24 @@
 
   <artifactId>storm-mqtt-examples</artifactId>
 
-  <parent>
-    <groupId>org.apache.storm</groupId>
-    <artifactId>storm-mqtt-parent</artifactId>
-    <version>2.0.0-SNAPSHOT</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
+
--- End diff --

Will fix.


---


[GitHub] storm issue #1669: STORM-2078: enable paging in worker datatable

2016-09-01 Thread revans2
Github user revans2 commented on the issue:

https://github.com/apache/storm/pull/1669
  
+1


---


[GitHub] storm pull request #1669: STORM-2078: enable paging in worker datatable

2016-09-01 Thread abellina
GitHub user abellina opened a pull request:

https://github.com/apache/storm/pull/1669

STORM-2078: enable paging in worker datatable



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/abellina/storm 
STORM-2078_enable_pagination_in_worker_tables

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/1669.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1669






---


[jira] [Created] (STORM-2078) Enable paging in worker data tables

2016-09-01 Thread Alessandro Bellina (JIRA)
Alessandro Bellina created STORM-2078:
-

 Summary: Enable paging in worker data tables
 Key: STORM-2078
 URL: https://issues.apache.org/jira/browse/STORM-2078
 Project: Apache Storm
  Issue Type: Improvement
Reporter: Alessandro Bellina
Assignee: Alessandro Bellina
Priority: Minor


Currently, worker data tables can get very long when there are many workers per 
topology or many workers per supervisor. Enabling paging will make them more 
manageable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm issue #1661: [STORM-2071] Add in retry after rebalance in unit test

2016-09-01 Thread revans2
Github user revans2 commented on the issue:

https://github.com/apache/storm/pull/1661
  
I would be happy to pull that fix back into a separate patch. @ppoulosk, 
this patch looks fine to me; my only concern is that it should be documented a little 
more, explaining that the state change might not have completed yet, and that 
we cannot rebalance until it does. Alternatively, if you want to pull in my changes 
by replacing the sleep with the following, I would be OK with that too.

```
(defn wait-for-status [nimbus name status]
  (while-timeout 5000
    (let [topo-summary (first (filter (fn [topo] (= name (.get_name topo)))
                                      (.get_topologies (.getClusterInfo nimbus))))
          topo-status (if topo-summary (.get_status topo-summary) "NOT-RUNNING")]
      (log-message "WAITING FOR " name " TO BE " status " CURRENT " topo-status)
      (not= topo-status status))
    (Thread/sleep 100)))
...
(wait-for-status nimbus "t1" "ACTIVE")
```

Either solution is fine with me.


---


[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77154990
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/Container.java ---
@@ -0,0 +1,493 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.Reader;
+import java.io.Writer;
+import java.lang.ProcessBuilder.Redirect;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.storm.Config;
+import org.apache.storm.container.ResourceIsolationInterface;
+import org.apache.storm.generated.LSWorkerHeartbeat;
+import org.apache.storm.generated.LocalAssignment;
+import org.apache.storm.generated.ProfileRequest;
+import org.apache.storm.utils.ConfigUtils;
+import org.apache.storm.utils.LocalState;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import org.yaml.snakeyaml.Yaml;
+
+/**
+ * Represents a container that a worker will run in.
+ */
+public abstract class Container implements Killable {
+private static final Logger LOG = 
LoggerFactory.getLogger(BasicContainer.class);
+protected final Map _conf;
+protected final Map _topoConf;
+protected String _workerId;
+protected final String _topologyId;
+protected final String _supervisorId;
+protected final int _port;
+protected final LocalAssignment _assignment;
+protected final AdvancedFSOps _ops;
+protected final ResourceIsolationInterface _resourceIsolationManager;
+
+//Exposed for testing
+protected Container(AdvancedFSOps ops, int port, LocalAssignment 
assignment,
+Map conf, Map topoConf, String 
supervisorId, 
+ResourceIsolationInterface resourceIsolationManager) throws 
IOException {
+assert((assignment == null && port <= 0) ||
+(assignment != null && port > 0));
+assert(conf != null);
+assert(ops != null);
+assert(supervisorId != null);
+
+_port = port;
+_ops = ops;
+_assignment = assignment;
+if (assignment != null) {
+_topologyId = assignment.get_topology_id();
+} else {
+_topologyId = null;
+}
+_conf = conf;
+_supervisorId = supervisorId;
+_resourceIsolationManager = resourceIsolationManager;
+if (topoConf == null) {
+_topoConf = readTopoConf();
+} else {
+_topoConf = topoConf;
+}
+}
+
+@Override
+public String toString() {
+return this.getClass().getSimpleName() + " topo:" + _topologyId + 
" worker:" + _workerId;
+}
+
+protected Map readTopoConf() throws IOException {
+assert(_topologyId != null);
+return ConfigUtils.readSupervisorStormConf(_conf, _topologyId);
+}
+
+protected Container(int port, LocalAssignment assignment, Map conf, 
+String supervisorId, ResourceIsolationInterface 
resourceIsolationManager) throws IOException {
+this(AdvancedFSOps.make(conf), port, assignment, conf, null, 
supervisorId, resourceIsolationManager);
+}
+
+/**
+ * Constructor to use when trying to recover a container from just the 
worker ID.
+ * @param workerId the id of the worker
+ * @param conf the config of the supervisor
+ * @param supervisorId the id of the supervisor
+ * @param resourceIsolationManager the isolation manager.
+ * @throws IOEx

[jira] [Created] (STORM-2077) KafkaSpout doesn't retry failed tuples

2016-09-01 Thread Tobias Maier (JIRA)
Tobias Maier created STORM-2077:
---

 Summary: KafkaSpout doesn't retry failed tuples
 Key: STORM-2077
 URL: https://issues.apache.org/jira/browse/STORM-2077
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-kafka
Affects Versions: 1.0.2
Reporter: Tobias Maier


KafkaSpout does not retry all failed tuples.

We used the following configuration:

Map props = new HashMap<>();
props.put(KafkaSpoutConfig.Consumer.GROUP_ID, "c1");
props.put(KafkaSpoutConfig.Consumer.KEY_DESERIALIZER, ByteArrayDeserializer.class.getName());
props.put(KafkaSpoutConfig.Consumer.VALUE_DESERIALIZER, ByteArrayDeserializer.class.getName());
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, broker.bootstrapServer());

KafkaSpoutStreams kafkaSpoutStreams =
    new KafkaSpoutStreams.Builder(FIELDS_KAFKA_EVENT, new String[]{"test-topic"}).build();

KafkaSpoutTuplesBuilder kafkaSpoutTuplesBuilder =
    new KafkaSpoutTuplesBuilder.Builder<>(new KeyValueKafkaSpoutTupleBuilder("test-topic")).build();

KafkaSpoutRetryService retryService = new KafkaSpoutLoggedRetryExponentialBackoff(
    KafkaSpoutLoggedRetryExponentialBackoff.TimeInterval.milliSeconds(1),
    KafkaSpoutLoggedRetryExponentialBackoff.TimeInterval.milliSeconds(1),
    3,
    KafkaSpoutLoggedRetryExponentialBackoff.TimeInterval.seconds(1));

KafkaSpoutConfig config =
    new KafkaSpoutConfig.Builder<>(props, kafkaSpoutStreams, kafkaSpoutTuplesBuilder, retryService)
        .setFirstPollOffsetStrategy(UNCOMMITTED_LATEST)
        .setMaxUncommittedOffsets(30)
        .setOffsetCommitPeriodMs(10)
        .setMaxRetries(3)
        .build();

kafkaSpout = new org.apache.storm.kafka.spout.KafkaSpout<>(config);


The downstream bolt fails every tuple, and we expect all of those tuples to be 
replayed. But that is not the case for every tuple.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77154288
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/Container.java ---

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77151542
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/BasicContainer.java ---
@@ -0,0 +1,644 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.File;
+import java.io.FilenameFilter;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.HashMap;
+import java.util.LinkedList;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.stream.Collectors;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.storm.Config;
+import org.apache.storm.container.ResourceIsolationInterface;
+import org.apache.storm.generated.LocalAssignment;
+import org.apache.storm.generated.ProfileAction;
+import org.apache.storm.generated.ProfileRequest;
+import org.apache.storm.generated.StormTopology;
+import org.apache.storm.generated.WorkerResources;
+import org.apache.storm.utils.ConfigUtils;
+import org.apache.storm.utils.LocalState;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+
+/**
+ * A container that runs processes on the local box.
+ */
+public class BasicContainer extends Container {
+private static final Logger LOG = 
LoggerFactory.getLogger(BasicContainer.class);
+private static final FilenameFilter jarFilter = new FilenameFilter() {
+@Override
+public boolean accept(File dir, String name) {
+return name.endsWith(".jar");
+}
+};
+private static final Joiner CPJ = 
+Joiner.on(Utils.CLASS_PATH_SEPARATOR).skipNulls();
+
+protected final LocalState _localState;
+protected final String _profileCmd;
+protected volatile boolean _exitedEarly = false;
+
+private class ProcessExitCallback implements ExitCodeCallback {
+private final String _logPrefix;
+
+public ProcessExitCallback(String logPrefix) {
+_logPrefix = logPrefix;
+}
+
+@Override
+public void call(int exitCode) {
+LOG.info("{} exited with code: {}", _logPrefix, exitCode);
+_exitedEarly = true;
+}
+}
+
+//For testing purposes
+public BasicContainer(AdvancedFSOps ops, int port, LocalAssignment 
assignment,
+Map conf, Map topoConf, String 
supervisorId, 
+ResourceIsolationInterface resourceIsolationManager, 
LocalState localState,
+String profileCmd) throws IOException {
+super(ops, port, assignment, conf, topoConf, supervisorId, 
resourceIsolationManager);
+_localState = localState;
+_profileCmd = profileCmd;
+}
+
+public BasicContainer(int port, LocalAssignment assignment, 
Map conf, String supervisorId,
+LocalState localState, ResourceIsolationInterface 
resourceIsolationManager, boolean recover)
+throws IOException {
+super(port, assignment, conf, supervisorId, 
resourceIsolationManager);
+_localState = localState;
+
+if (recover) {
+synchronized (localState) {
+String wid = null;
+Map workerToPort = 
localState.getApprovedWorkers();
+for (Map.Entry entry : 
workerToPort.entrySet()) {
+if (port == entry.getValue().intValue()) {
+wid = entry.getKey();
+}
+}
+if (wid == null) {
+throw new ContainerRecoveryException("Could not 

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77150790
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/BasicContainer.java ---

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77150344
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/BasicContainer.java ---

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77144475
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/LocalContainer.java ---
@@ -0,0 +1,85 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.IOException;
+import java.util.Map;
+
+import org.apache.storm.ProcessSimulator;
+import org.apache.storm.daemon.Shutdownable;
+import org.apache.storm.generated.LocalAssignment;
+import org.apache.storm.generated.ProfileRequest;
+import org.apache.storm.messaging.IContext;
+import org.apache.storm.utils.ConfigUtils;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import clojure.java.api.Clojure;
+import clojure.lang.IFn;
+
+public class LocalContainer extends Container {
+private static final Logger LOG = 
LoggerFactory.getLogger(LocalContainer.class);
+private volatile boolean _isAlive = false;
+private final IContext _sharedContext;
+
+public LocalContainer(int port, LocalAssignment assignment, 
Map conf, String supervisorId, IContext sharedContext) throws 
IOException {
+super(port, assignment, conf, supervisorId, null);
+_sharedContext = sharedContext;
+_workerId = Utils.uuid();
+}
+
+@Override
+public void launch() throws IOException {
+IFn mkWorker = Clojure.var("org.apache.storm.daemon.worker", 
"mk-worker");
--- End diff --

Let's leave a comment that this should be modified when the worker is 
ported.


---


[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77144436
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/AdvancedFSOps.java ---
@@ -0,0 +1,314 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.Writer;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardCopyOption;
+import java.nio.file.attribute.PosixFilePermission;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.storm.Config;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class AdvancedFSOps {
--- End diff --

If this class is supposed to be subclassed before use, why not make it 
abstract?
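For illustration, a minimal sketch of the pattern being suggested: an abstract base class whose static factory picks the concrete subclass, so callers can never instantiate the base directly. All class and method names here are hypothetical, not the actual Storm code.

```java
public class FsOpsSketch {
    // Abstract base: cannot be instantiated directly; the factory
    // chooses the right implementation for the environment.
    static abstract class FsOps {
        static FsOps make(boolean onWindows) {
            return onWindows ? new WindowsFsOps() : new PosixFsOps();
        }
        abstract String describe();
    }

    static class PosixFsOps extends FsOps {
        @Override String describe() { return "posix"; }
    }

    static class WindowsFsOps extends FsOps {
        @Override String describe() { return "windows"; }
    }

    public static void main(String[] args) {
        assert FsOps.make(false).describe().equals("posix");
        assert FsOps.make(true).describe().equals("windows");
        System.out.println("ok");
    }
}
```

Making the base abstract costs nothing here, since every caller already goes through `make`, and it documents the intent that the base is a template for subclasses.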




[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77143604
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/daemon/supervisor/AdvancedFSOps.java ---
@@ -0,0 +1,314 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.storm.daemon.supervisor;
+
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.io.Writer;
+import java.nio.file.FileSystems;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.nio.file.StandardCopyOption;
+import java.nio.file.attribute.PosixFilePermission;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.storm.Config;
+import org.apache.storm.utils.Utils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class AdvancedFSOps {
+private static final Logger LOG = LoggerFactory.getLogger(AdvancedFSOps.class);
+
+/**
+ * Factory to create a new AdvancedFSOps
+ * @param conf the configuration of the process
+ * @return the appropriate instance of the class for this config and environment.
+ */
+public static AdvancedFSOps make(Map conf) {
+if (Utils.isOnWindows()) {
+return new AdvancedWindowsFSOps(conf);
+}
+if (Utils.getBoolean(conf.get(Config.SUPERVISOR_RUN_WORKER_AS_USER), false)) {
+return new AdvancedRunAsUserFSOps(conf);
+}
+return new AdvancedFSOps();
+}
+
+private static class AdvancedRunAsUserFSOps extends AdvancedFSOps {
+private final Map _conf;
+
+public AdvancedRunAsUserFSOps(Map conf) {
+if (Utils.isOnWindows()) {
+throw new UnsupportedOperationException("ERROR: Windows doesn't support running workers as different users yet");
+}
+_conf = conf;
+}
+
+@Override
+public void setupBlobPermissions(File path, String user) throws IOException {
+String logPrefix = "setup blob permissions for " + path;
+SupervisorUtils.processLauncherAndWait(_conf, user, Arrays.asList("blob", path.toString()), null, logPrefix);
+}
+
+@Override
+public void deleteIfExists(File path, String user, String logPrefix) throws IOException {
+String absolutePath = path.getAbsolutePath();
+LOG.info("Deleting path {}", absolutePath);
+if (user == null) {
+user = Files.getOwner(path.toPath()).getName();
+}
+List commands = new ArrayList<>();
+commands.add("rmr");
+commands.add(absolutePath);
+SupervisorUtils.processLauncherAndWait(_conf, user, commands, null, logPrefix);
+if (Utils.checkFileExists(absolutePath)) {
+throw new RuntimeException(path + " was not deleted.");
+}
+}
+
+@Override
+public void setupStormCodeDir(Map topologyConf, File path) throws IOException {
+SupervisorUtils.setupStormCodeDir(_conf, topologyConf, path.getCanonicalPath());
+}
+}
+
+/**
+ * Operations that need to override the default ones when running on Windows
+ *
+ */
+private static class AdvancedWindowsFSOps extends AdvancedFSOps {
+
+public AdvancedWindowsFSOps(Map conf) {
+if (Utils.getBoolean(conf.get(Config.SUPERVISOR_RUN_WORKER_AS_USER), false)) {
+throw new RuntimeException("ERROR: Windows doesn't support running workers as different users yet");

[GitHub] storm pull request #1642: STORM-2018: Supervisor V2.

2016-09-01 Thread srdo
Github user srdo commented on a diff in the pull request:

https://github.com/apache/storm/pull/1642#discussion_r77141710
  
--- Diff: 
storm-core/src/jvm/org/apache/storm/container/ResourceIsolationInterface.java 
---
@@ -56,4 +64,13 @@
  */
 List getLaunchCommandPrefix(String workerId);
 
+/**
+ * Get the list of PIDs currently in an isolated container
+ * @param workerId the id of the worker to get these for
+ * @return the set of PIDs, this will be combined with
+ * other ways of getting PIDs. An Empty set or null if
+ * no PIDs are found.
+ * @throws IOException on any error
+ */
--- End diff --

I agree with @d2r, it's just as easy for the implementation to return 
Collections.emptySet() as null, and null checking can be easy to forget when 
the code is modified later.
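The point about preferring `Collections.emptySet()` over null can be sketched like this; the method name and PID values are illustrative, not the actual interface:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

public class PidsSketch {
    // Returning an empty set instead of null lets callers iterate,
    // union, or count results without a null check anywhere.
    static Set<Long> getIsolatedPids(boolean anyFound) {
        if (!anyFound) {
            return Collections.emptySet();   // never null
        }
        Set<Long> pids = new HashSet<>();
        pids.add(1234L);
        return pids;
    }

    public static void main(String[] args) {
        // Caller combines results from several sources without guards.
        Set<Long> all = new HashSet<>(getIsolatedPids(false));
        all.addAll(getIsolatedPids(true));
        assert all.size() == 1;
        System.out.println("ok");
    }
}
```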




[GitHub] storm issue #1667: STORM-2076: Add new atom to prevent sync-processes from d...

2016-09-01 Thread srdo
Github user srdo commented on the issue:

https://github.com/apache/storm/pull/1667
  
@HeartSaVioR Sure, the new design looks much nicer. I agree that backporting the new design, if it works well, makes more sense than continuing to patch these race conditions.




[GitHub] storm pull request #1668: STORM-2040 Fix bug on assert-can-serialize

2016-09-01 Thread HeartSaVioR
GitHub user HeartSaVioR opened a pull request:

https://github.com/apache/storm/pull/1668

STORM-2040 Fix bug on assert-can-serialize

* the element type of tuple-batch was changed to AddressedTuple, but assert-can-serialize was not updated to match
* the issue only surfaces when topology.testing.always.try.serialize is true

This should also be ported back to 1.x and 1.0.x branches.
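The shape of the check that broke here can be sketched in plain Java. Storm's actual `assert-can-serialize` is Clojure and serializes tuple values with Kryo, so this `java.io` version is only an analogy: attempt a serialization round and fail fast with a clear error rather than letting an unserializable value surface later.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

public class SerializeCheckSketch {
    // Analogy only: try to serialize the value and turn any failure
    // into an immediate, descriptive error.
    static void assertCanSerialize(Object value) {
        try (ObjectOutputStream oos =
                 new ObjectOutputStream(new ByteArrayOutputStream())) {
            oos.writeObject(value);
        } catch (IOException e) {
            throw new RuntimeException("Cannot serialize: " + value, e);
        }
    }

    public static void main(String[] args) {
        assertCanSerialize("a serializable string");  // passes quietly
        boolean threw = false;
        try {
            assertCanSerialize(new Object());  // not Serializable
        } catch (RuntimeException expected) {
            threw = true;
        }
        assert threw;
        System.out.println("ok");
    }
}
```

The bug in STORM-2040 was not in the serialization itself but in the code that unpacked each batch element before handing it to a check like this, after the element type changed.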

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HeartSaVioR/storm STORM-2040

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/1668.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1668


commit d7de92ced449737034d4451891c17721b408411e
Author: Jungtaek Lim 
Date:   2016-09-01T08:08:25Z

STORM-2040 Fix bug on assert-can-serialize

* type of element of tuple-batch is changed but not reflected to 
assert-can-serialize
* it only raises issue when topology.testing.always.try.serialize is true






[GitHub] storm issue #1667: STORM-2076: Add new atom to prevent sync-processes from d...

2016-09-01 Thread HeartSaVioR
Github user HeartSaVioR commented on the issue:

https://github.com/apache/storm/pull/1667
  
Hi @srdo,

Great catch and thanks for the contribution.
Btw, as we're continuously suffering from race conditions in the supervisor, @revans2 proposed a new supervisor design and submitted a pull request: [STORM-2018](https://issues.apache.org/jira/browse/STORM-2018) #1642

I'm not sure when we can finish reviewing and decide to merge, but the review comments are positive, so I guess we can switch the supervisor to V2 in the near future. For now #1642 targets master, but since we have found a race in 1.0.2 again, I think we should port it back to the 1.x and 1.0.x branches and release.

Could you help review #1642 if you don't mind? Thanks!




[GitHub] storm pull request #1667: STORM-2076: Add new atom to prevent sync-processes...

2016-09-01 Thread srdo
GitHub user srdo opened a pull request:

https://github.com/apache/storm/pull/1667

STORM-2076: Add new atom to prevent sync-processes from deleting new topology code

@HeartSaVioR Could you take a look at this? I'll make a similar change on 
master if you think this is the right solution.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/srdo/storm STORM-2076

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/storm/pull/1667.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1667


commit 57b10b258c2ebe969ed025a0852fda68e3621ba4
Author: Stig Rohde Døssing 
Date:   2016-08-30T08:28:41Z

STORM-2076: Add new atom to prevent sync-processes from deleting new 
topology code






[jira] [Created] (STORM-2076) Supervisor sync-processes and sync-supervisor race when downloading new topology code.

2016-09-01 Thread JIRA
Stig Rohde Døssing created STORM-2076:
-

 Summary: Supervisor sync-processes and sync-supervisor race when 
downloading new topology code.
 Key: STORM-2076
 URL: https://issues.apache.org/jira/browse/STORM-2076
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-core
Affects Versions: 1.0.2
Reporter: Stig Rohde Døssing
Assignee: Stig Rohde Døssing


The fix for https://issues.apache.org/jira/browse/STORM-1934 moved the cleanup 
of topology code to sync-processes. The cleanup is based on 
ls-local-assignment, but this is not called from sync-supervisor until all the 
new topology code has been downloaded. As a result, sync-processes may delete 
new topology code before sync-supervisor has had a chance to update 
ls-local-assignment.
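The fix in #1667 guards against this with a new Clojure atom. As an analogy in Java (names and structure here are illustrative, not the actual patch): the download path sets a shared flag while new topology code is being fetched, and the cleanup path declines to delete anything while the flag is set.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class DownloadGuardSketch {
    // Java stand-in for the Clojure atom: set while sync-supervisor
    // is downloading and registering new topology code.
    static final AtomicBoolean downloadInProgress = new AtomicBoolean(false);

    static void syncSupervisor(Runnable downloadCode) {
        downloadInProgress.set(true);
        try {
            downloadCode.run();  // fetch code, then update assignments
        } finally {
            downloadInProgress.set(false);
        }
    }

    static boolean syncProcessesMayDelete() {
        // sync-processes consults the flag before removing code dirs,
        // so it cannot race ahead of the assignment update.
        return !downloadInProgress.get();
    }

    public static void main(String[] args) {
        assert syncProcessesMayDelete();
        syncSupervisor(() -> {
            assert !syncProcessesMayDelete();
        });
        assert syncProcessesMayDelete();
        System.out.println("ok");
    }
}
```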



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Closed] (STORM-2046) Errors when using TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE in local mode.

2016-09-01 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim closed STORM-2046.
---
Resolution: Duplicate

Duplicated via STORM-2040

> Errors when using TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE in local mode.
> --
>
> Key: STORM-2046
> URL: https://issues.apache.org/jira/browse/STORM-2046
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 1.0.2
> Environment: Ubuntu 16.04 linux, 4.4.0-34-generic
> Java 1.8.0_92
>Reporter: Cory Kolbeck
>Priority: Minor
>  Labels: test
>
> When using a LocalCluster during tests, if 
> {{TOPOLOGY_TESTING_ALWAYS_TRY_SERIALIZE}} is specified, 
> {{assert-can-serialize}} attempts to destructure a Java model object and 
> throws, killing the worker. A minimal-ish case and the full logs are here: 
> https://gist.github.com/ckolbeck/557734429e62b097efa9382a714122b0


