[jira] [Created] (STORM-2427) Event logger enable/disable UI is not working as expected in master branch

2017-03-20 Thread Arun Mahadevan (JIRA)
Arun Mahadevan created STORM-2427:
-

 Summary: Event logger enable/disable UI is not working as expected 
in master branch
 Key: STORM-2427
 URL: https://issues.apache.org/jira/browse/STORM-2427
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Arun Mahadevan
Assignee: Arun Mahadevan


Need to pull the missing commits from the 1.x branch



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (STORM-2423) Join Bolt : Use explicit instead of default window anchoring of emitted tuples

2017-03-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim updated STORM-2423:

Fix Version/s: (was: 1.x)
   1.1.0

> Join Bolt : Use explicit instead of default window anchoring of emitted tuples
> --
>
> Key: STORM-2423
> URL: https://issues.apache.org/jira/browse/STORM-2423
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.x
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>Priority: Critical
> Fix For: 2.0.0, 1.1.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Default anchoring will anchor each emitted tuple to every tuple in the 
> current window. This requires a very large number of ACKs from any downstream 
> bolt. If topology.debug is enabled, it also worsens the load on the system 
> significantly.
> Letting the topology run in this mode (in particular with max.spout.pending 
> disabled) could lead to the worker running out of memory and crashing.
> Fix: the Join Bolt should avoid using default window anchoring and explicitly 
> anchor each emitted tuple to the exact matching tuples from each input 
> stream. This reduces the complexity of the tuple trees and consequently 
> reduces the burden on the ACKing and messaging subsystems.
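For a sense of scale, the difference between the two anchoring schemes can be sketched with toy arithmetic (illustrative counts only, not Storm code; a 2-way join emitting one tuple per match is assumed):

```java
// Toy model of ack-tree cost: compares the number of anchor edges (and
// hence the ack work downstream) produced by anchoring each emitted tuple
// to the whole window versus only to the tuples that actually joined.
public class AnchoringCost {

    // Default window anchoring: every emitted tuple is anchored to every
    // tuple currently in the window.
    static long defaultAnchoring(long windowSize, long emitted) {
        return windowSize * emitted;
    }

    // Explicit anchoring: each emitted tuple is anchored only to its
    // matching input tuples (2 for a 2-way join).
    static long explicitAnchoring(long emitted, long anchorsPerEmit) {
        return emitted * anchorsPerEmit;
    }

    public static void main(String[] args) {
        long window = 10_000, emitted = 5_000;
        System.out.println(defaultAnchoring(window, emitted));  // 50000000
        System.out.println(explicitAnchoring(emitted, 2));      // 10000
    }
}
```

With a 10,000-tuple window, explicit anchoring cuts the anchor edges from 50 million to 10 thousand in this toy model, which is why the tuple trees, and the ack load, shrink so dramatically.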





[jira] [Assigned] (STORM-2423) Join Bolt : Use explicit instead of default window anchoring of emitted tuples

2017-03-20 Thread Satish Duggana (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana reassigned STORM-2423:
-

Assignee: Satish Duggana  (was: Roshan Naik)

> Join Bolt : Use explicit instead of default window anchoring of emitted tuples
> --
>
> Key: STORM-2423
> URL: https://issues.apache.org/jira/browse/STORM-2423
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.x
>Reporter: Roshan Naik
>Assignee: Satish Duggana
>Priority: Critical
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Default anchoring will anchor each emitted tuple to every tuple in the 
> current window. This requires a very large number of ACKs from any downstream 
> bolt. If topology.debug is enabled, it also worsens the load on the system 
> significantly.
> Letting the topology run in this mode (in particular with max.spout.pending 
> disabled) could lead to the worker running out of memory and crashing.
> Fix: the Join Bolt should avoid using default window anchoring and explicitly 
> anchor each emitted tuple to the exact matching tuples from each input 
> stream. This reduces the complexity of the tuple trees and consequently 
> reduces the burden on the ACKing and messaging subsystems.





[jira] [Resolved] (STORM-2423) Join Bolt : Use explicit instead of default window anchoring of emitted tuples

2017-03-20 Thread Satish Duggana (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana resolved STORM-2423.
---
Resolution: Fixed

> Join Bolt : Use explicit instead of default window anchoring of emitted tuples
> --
>
> Key: STORM-2423
> URL: https://issues.apache.org/jira/browse/STORM-2423
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.x
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>Priority: Critical
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Default anchoring will anchor each emitted tuple to every tuple in the 
> current window. This requires a very large number of ACKs from any downstream 
> bolt. If topology.debug is enabled, it also worsens the load on the system 
> significantly.
> Letting the topology run in this mode (in particular with max.spout.pending 
> disabled) could lead to the worker running out of memory and crashing.
> Fix: the Join Bolt should avoid using default window anchoring and explicitly 
> anchor each emitted tuple to the exact matching tuples from each input 
> stream. This reduces the complexity of the tuple trees and consequently 
> reduces the burden on the ACKing and messaging subsystems.





[jira] [Assigned] (STORM-2423) Join Bolt : Use explicit instead of default window anchoring of emitted tuples

2017-03-20 Thread Satish Duggana (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana reassigned STORM-2423:
-

Assignee: Roshan Naik  (was: Satish Duggana)

> Join Bolt : Use explicit instead of default window anchoring of emitted tuples
> --
>
> Key: STORM-2423
> URL: https://issues.apache.org/jira/browse/STORM-2423
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.x
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>Priority: Critical
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Default anchoring will anchor each emitted tuple to every tuple in the 
> current window. This requires a very large number of ACKs from any downstream 
> bolt. If topology.debug is enabled, it also worsens the load on the system 
> significantly.
> Letting the topology run in this mode (in particular with max.spout.pending 
> disabled) could lead to the worker running out of memory and crashing.
> Fix: the Join Bolt should avoid using default window anchoring and explicitly 
> anchor each emitted tuple to the exact matching tuples from each input 
> stream. This reduces the complexity of the tuple trees and consequently 
> reduces the burden on the ACKing and messaging subsystems.





[jira] [Updated] (STORM-2423) Join Bolt : Use explicit instead of default window anchoring of emitted tuples

2017-03-20 Thread Satish Duggana (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated STORM-2423:
--
Component/s: storm-core

> Join Bolt : Use explicit instead of default window anchoring of emitted tuples
> --
>
> Key: STORM-2423
> URL: https://issues.apache.org/jira/browse/STORM-2423
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.x
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>Priority: Critical
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Default anchoring will anchor each emitted tuple to every tuple in the 
> current window. This requires a very large number of ACKs from any downstream 
> bolt. If topology.debug is enabled, it also worsens the load on the system 
> significantly.
> Letting the topology run in this mode (in particular with max.spout.pending 
> disabled) could lead to the worker running out of memory and crashing.
> Fix: the Join Bolt should avoid using default window anchoring and explicitly 
> anchor each emitted tuple to the exact matching tuples from each input 
> stream. This reduces the complexity of the tuple trees and consequently 
> reduces the burden on the ACKing and messaging subsystems.





[jira] [Updated] (STORM-2423) Join Bolt : Use explicit instead of default window anchoring of emitted tuples

2017-03-20 Thread Satish Duggana (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated STORM-2423:
--
Fix Version/s: 1.x
   2.0.0

> Join Bolt : Use explicit instead of default window anchoring of emitted tuples
> --
>
> Key: STORM-2423
> URL: https://issues.apache.org/jira/browse/STORM-2423
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.x
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>Priority: Critical
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Default anchoring will anchor each emitted tuple to every tuple in the 
> current window. This requires a very large number of ACKs from any downstream 
> bolt. If topology.debug is enabled, it also worsens the load on the system 
> significantly.
> Letting the topology run in this mode (in particular with max.spout.pending 
> disabled) could lead to the worker running out of memory and crashing.
> Fix: the Join Bolt should avoid using default window anchoring and explicitly 
> anchor each emitted tuple to the exact matching tuples from each input 
> stream. This reduces the complexity of the tuple trees and consequently 
> reduces the burden on the ACKing and messaging subsystems.





[jira] [Updated] (STORM-2423) Join Bolt : Use explicit instead of default window anchoring of emitted tuples

2017-03-20 Thread Satish Duggana (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Satish Duggana updated STORM-2423:
--
Affects Version/s: 1.x
   2.0.0

> Join Bolt : Use explicit instead of default window anchoring of emitted tuples
> --
>
> Key: STORM-2423
> URL: https://issues.apache.org/jira/browse/STORM-2423
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.x
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>Priority: Critical
> Fix For: 2.0.0, 1.x
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Default anchoring will anchor each emitted tuple to every tuple in the 
> current window. This requires a very large number of ACKs from any downstream 
> bolt. If topology.debug is enabled, it also worsens the load on the system 
> significantly.
> Letting the topology run in this mode (in particular with max.spout.pending 
> disabled) could lead to the worker running out of memory and crashing.
> Fix: the Join Bolt should avoid using default window anchoring and explicitly 
> anchor each emitted tuple to the exact matching tuples from each input 
> stream. This reduces the complexity of the tuple trees and consequently 
> reduces the burden on the ACKing and messaging subsystems.





[jira] [Commented] (STORM-2416) Release Packaging Improvements

2017-03-20 Thread Jungtaek Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15933764#comment-15933764
 ] 

Jungtaek Lim commented on STORM-2416:
-

The patch for the 1.x branch is merged. I'm keeping this open for the master 
branch, but feel free to mark it as resolved if that's easier or clearer.

> Release Packaging Improvements
> --
>
> Key: STORM-2416
> URL: https://issues.apache.org/jira/browse/STORM-2416
> Project: Apache Storm
>  Issue Type: Improvement
>  Components: build
>Reporter: P. Taylor Goetz
>Assignee: P. Taylor Goetz
> Fix For: 1.1.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> This issue is to address distribution packaging improvements discussed on the 
> dev@ list:
> 1. Move remaining examples to "examples" directory.
> 2. Package examples as source-only, to be compiled by users
> 3. Remove connector jars from binary distribution (since they are available 
> in Maven, and we want to discourage users from hand-crafting topology jars)





[jira] [Resolved] (STORM-2407) KafkaTridentSpoutOpaque Doesn't Poll Data From All Topic-Partitions When Parallelism Hint Is Not a Multiple of Total Topic-Partitions

2017-03-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-2407.
-
   Resolution: Fixed
Fix Version/s: 1.1.0
   2.0.0

PRs are merged by [~harsha_ch]

> KafkaTridentSpoutOpaque Doesn't Poll Data From All Topic-Partitions When 
> Parallelism Hint Is Not a Multiple of Total Topic-Partitions
> ---
>
> Key: STORM-2407
> URL: https://issues.apache.org/jira/browse/STORM-2407
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka, storm-kafka-client
>Affects Versions: 2.0.0, 1.x
>Reporter: Hugo Louro
>Assignee: Hugo Louro
>Priority: Blocker
> Fix For: 2.0.0, 1.1.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>






[jira] [Commented] (STORM-2038) Provide an alternative to using symlinks

2017-03-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932723#comment-15932723
 ] 

Robert Joseph Evans commented on STORM-2038:


[~kabhwan],

Thanks for merging it in.

> Provide an alternative to using symlinks
> 
>
> Key: STORM-2038
> URL: https://issues.apache.org/jira/browse/STORM-2038
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Affects Versions: 1.0.1
> Environment: Any windows
>Reporter: Paul Milliken
>Assignee: Robert Joseph Evans
>  Labels: symlink, windows
> Fix For: 2.0.0, 1.1.0, 1.0.4
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> As of Storm 1.0 and above, some functionality (such as the worker-artifacts 
> directory) requires the use of symlinks. On Windows platforms, this requires 
> that Storm either be run as an administrator or that certain group policy 
> settings be changed.
> In locked-down environments, neither of these options is suitable.
> Where possible, an alternative to the use of symlinks should be provided. For 
> example, it may be possible to create additional copies of the worker 
> artifacts directory for each worker (possibly inefficient) or to provide the 
> workers with the canonical path to the real directory.
> See the [brief 
> discussion|http://mail-archives.apache.org/mod_mbox/storm-dev/201608.mbox/%3C1293850887.13165119.1471022901569.JavaMail.yahoo%40mail.yahoo.com%3E]
>  on the mailing list.
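One way the fallback could look, sketched with plain java.nio.file (illustrative only, not Storm's actual implementation): try the symlink first, and fall back to a copy when symlink creation is not permitted:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of a symlink-with-copy-fallback helper. On platforms or accounts
// where symlink creation fails (e.g. unprivileged Windows users), the
// target is copied instead, at the cost of extra disk space.
public class LinkOrCopy {

    static Path linkOrCopy(Path link, Path target) {
        try {
            return Files.createSymbolicLink(link, target);
        } catch (UnsupportedOperationException | IOException e) {
            try {
                // Fallback: a plain copy behaves like the symlink for readers,
                // though later changes to the target are not reflected.
                return Files.copy(target, link, StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException io) {
                throw new UncheckedIOException(io);
            }
        }
    }
}
```

The copy branch corresponds to the "additional copies" option above; returning the canonical path of the real directory would be the other, cheaper alternative.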





[jira] [Commented] (STORM-2422) Serialized Trident topology size does not grow linearly

2017-03-20 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932614#comment-15932614
 ] 

Robert Joseph Evans commented on STORM-2422:


[~kabhwan],

The only real thing to be careful about when pulling it into another branch is 
making sure to set the serialVersionUID correctly. I tested it for 2.x and I 
know that it works. I think that 1.x will have the same ID as 2.x, but because 
we were not specifying it before, I would really want to check before 
back-porting.

> Serialized Trident topology size does not grow linearly
> 
>
> Key: STORM-2422
> URL: https://issues.apache.org/jira/browse/STORM-2422
> Project: Apache Storm
>  Issue Type: Bug
>  Components: trident
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 2.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Many of the Trident bolts contain a serialized version of the entire Trident 
> graph. This means that as the size of the graph grows, and the number of 
> bolts grows with it, the serialized topology grows quadratically.
> Each bolt only uses a small portion of that graph, so we can filter out most 
> of it.
> As an extreme example, a topology with 1000 bolts (4000 nodes in the graph) 
> can grow to be over 1GB in size. If we trim out the parts of the graph that 
> are not needed, the serialized topology is < 40MB in size.
> This is a backwards-compatible change, because we are only stripping out data 
> that is never touched by the bolt anyway.
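The quadratic growth is easy to reproduce with a toy model using plain Java serialization (this mimics the shape of the problem only; it is not Trident code). Serialize n "bolts" independently, once with each bolt carrying a copy of the whole graph and once with each carrying only the node it uses:

```java
import java.io.*;
import java.util.*;

public class SerializedGrowth {

    // Serialized size in bytes of a single object graph.
    static int sizeOf(Serializable s) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(s);
            }
            return bos.size();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Total bytes to ship n bolts, each serialized independently.
    // full (trimmed = false): every bolt carries a copy of the whole
    // graph, so the total grows roughly quadratically in n.
    // trimmed = true: every bolt carries only its own node, linear in n.
    static long totalSize(int n, boolean trimmed) {
        ArrayList<String> graph = new ArrayList<>();
        for (int i = 0; i < n; i++) graph.add("node-" + i);
        long total = 0;
        for (int i = 0; i < n; i++) {
            ArrayList<String> carried = trimmed
                    ? new ArrayList<>(Collections.singletonList(graph.get(i)))
                    : new ArrayList<>(graph);
            total += sizeOf(carried);
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println("full:    " + totalSize(200, false) + " bytes");
        System.out.println("trimmed: " + totalSize(200, true) + " bytes");
    }
}
```

In this model the "full" total grows roughly as n squared while the "trimmed" total grows linearly, matching the 1GB vs < 40MB observation in spirit.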





[jira] [Updated] (STORM-2426) First tuples fail after worker is respawned

2017-03-20 Thread Antti Järvinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antti Järvinen updated STORM-2426:
--
Description: 
Topology with two Kafka spouts (org.apache.storm.kafka.spout.KafkaSpout) 
reading from two different topics with the same consumer group ID. 

1. Kill the only worker process for topology
2. Storm creates new worker
3. Kafka starts rebalancing (log line 15-16)
4. Kafka rebalancing done (log line 18-19)
5. Kafka topics read and tuples emitted (log line 28-29)
6. Tuples immediately fail (log line 30-33)

The delay between tuples emitted and tuples failing is just some 10 ms. No 
bolts in topology received the tuples.

What could cause this? The assumption is that there are uncommitted messages in 
Spout when it is killed and those are retried.


  was:
Topology with two Kafka spouts (org.apache.storm.kafka.spout.KafkaSpout) 
reading from two different topics. 

1. Kill the only worker process for topology
2. Storm creates new worker
3. Kafka starts rebalancing (log line 15-16)
4. Kafka rebalancing done (log line 18-19)
5. Kafka topics read and tuples emitted (log line 28-29)
6. Tuples immediately fail (log line 30-33)

The delay between tuples emitted and tuples failing is just some 10 ms. No 
bolts in topology received the tuples.

What could cause this? The assumption is that there are uncommitted messages in 
Spout when it is killed and those are retried.



> First tuples fail after worker is respawned
> -
>
> Key: STORM-2426
> URL: https://issues.apache.org/jira/browse/STORM-2426
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.0.2
>Reporter: Antti Järvinen
> Attachments: 2017-03-20-Kafka-spout-issue.txt
>
>
> Topology with two Kafka spouts (org.apache.storm.kafka.spout.KafkaSpout) 
> reading from two different topics with the same consumer group ID. 
> 1. Kill the only worker process for topology
> 2. Storm creates new worker
> 3. Kafka starts rebalancing (log line 15-16)
> 4. Kafka rebalancing done (log line 18-19)
> 5. Kafka topics read and tuples emitted (log line 28-29)
> 6. Tuples immediately fail (log line 30-33)
> The delay between tuples emitted and tuples failing is just some 10 ms. No 
> bolts in topology received the tuples.
> What could cause this? The assumption is that there are uncommitted messages 
> in Spout when it is killed and those are retried.





[jira] [Updated] (STORM-2426) First tuples fail after worker is respawned

2017-03-20 Thread Antti Järvinen (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antti Järvinen updated STORM-2426:
--
Description: 
Topology with two Kafka spouts (org.apache.storm.kafka.spout.KafkaSpout) 
reading from two different topics. 

1. Kill the only worker process for topology
2. Storm creates new worker
3. Kafka starts rebalancing (log line 15-16)
4. Kafka rebalancing done (log line 18-19)
5. Kafka topics read and tuples emitted (log line 28-29)
6. Tuples immediately fail (log line 30-33)

The delay between tuples emitted and tuples failing is just some 10 ms. No 
bolts in topology received the tuples.

What could cause this? The assumption is that there are uncommitted messages in 
Spout when it is killed and those are retried.


  was:
Topology with two Kafka spouts (org.apache.storm.kafka.spout.KafkaSpout) 
reading from two different topics. 

1. Kill the only worker process for topology
2. Storm creates new worker
3. Kafka starts rebalancing (log line 15-16)
4. Kafka rebalancing done (log line 18-19)
5. Kafka topics read and tuples emitted (log line 28-29)
6. Tuples immediately fail (log line 30-33)

The delay between tuples emitted and tuples failing is just some 10 ms. No 
bolts in topology received the tuples.

What could cause this?



> First tuples fail after worker is respawned
> -
>
> Key: STORM-2426
> URL: https://issues.apache.org/jira/browse/STORM-2426
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka-client
>Affects Versions: 1.0.2
>Reporter: Antti Järvinen
> Attachments: 2017-03-20-Kafka-spout-issue.txt
>
>
> Topology with two Kafka spouts (org.apache.storm.kafka.spout.KafkaSpout) 
> reading from two different topics. 
> 1. Kill the only worker process for topology
> 2. Storm creates new worker
> 3. Kafka starts rebalancing (log line 15-16)
> 4. Kafka rebalancing done (log line 18-19)
> 5. Kafka topics read and tuples emitted (log line 28-29)
> 6. Tuples immediately fail (log line 30-33)
> The delay between tuples emitted and tuples failing is just some 10 ms. No 
> bolts in topology received the tuples.
> What could cause this? The assumption is that there are uncommitted messages 
> in Spout when it is killed and those are retried.





[jira] [Created] (STORM-2426) First tuples fail after worker is respawned

2017-03-20 Thread Antti Järvinen (JIRA)
Antti Järvinen created STORM-2426:
-

 Summary: First tuples fail after worker is respawned
 Key: STORM-2426
 URL: https://issues.apache.org/jira/browse/STORM-2426
 Project: Apache Storm
  Issue Type: Bug
  Components: storm-kafka-client
Affects Versions: 1.0.2
Reporter: Antti Järvinen
 Attachments: 2017-03-20-Kafka-spout-issue.txt

Topology with two Kafka spouts (org.apache.storm.kafka.spout.KafkaSpout) 
reading from two different topics. 

1. Kill the only worker process for topology
2. Storm creates new worker
3. Kafka starts rebalancing (log line 15-16)
4. Kafka rebalancing done (log line 18-19)
5. Kafka topics read and tuples emitted (log line 28-29)
6. Tuples immediately fail (log line 30-33)

The delay between tuples emitted and tuples failing is just some 10 ms. No 
bolts in topology received the tuples.

What could cause this?






[jira] [Updated] (STORM-2407) KafkaTridentSpoutOpaque Doesn't Poll Data From All Topic-Partitions When Parallelism Hint Is Not a Multiple of Total Topic-Partitions

2017-03-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim updated STORM-2407:


Just added the epic link given that this is marked as a Blocker.
It's not effectively a blocker for 1.1.0, though, since the patch for the 1.x 
branch is already merged.

> KafkaTridentSpoutOpaque Doesn't Poll Data From All Topic-Partitions When 
> Parallelism Hint Is Not a Multiple of Total Topic-Partitions
> ---
>
> Key: STORM-2407
> URL: https://issues.apache.org/jira/browse/STORM-2407
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-kafka, storm-kafka-client
>Affects Versions: 2.0.0, 1.x
>Reporter: Hugo Louro
>Assignee: Hugo Louro
>Priority: Blocker
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>






[jira] [Resolved] (STORM-2411) Event Logger count in defaults.yaml needs to be 0

2017-03-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-2411.
-
Resolution: Fixed

Thanks [~roshan_naik], I merged into master.

> Event Logger count in defaults.yaml needs to be 0 
> --
>
> Key: STORM-2411
> URL: https://issues.apache.org/jira/browse/STORM-2411
> Project: Apache Storm
>  Issue Type: Bug
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Fix For: 2.0.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Seems like a regression in the master branch where the default setting for 
> this has changed to null.





[jira] [Resolved] (STORM-2422) Serialized Trident topology size does not grow linearly

2017-03-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-2422.
-
   Resolution: Fixed
Fix Version/s: 2.0.0

Thanks [~revans2], I merged into master.

Following your comment on the PR, I merged this only into master, but we can 
back-port it anytime.

> Serialized Trident topology size does not grow linearly
> 
>
> Key: STORM-2422
> URL: https://issues.apache.org/jira/browse/STORM-2422
> Project: Apache Storm
>  Issue Type: Bug
>  Components: trident
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Fix For: 2.0.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Many of the Trident bolts contain a serialized version of the entire Trident 
> graph. This means that as the size of the graph grows, and the number of 
> bolts grows with it, the serialized topology grows quadratically.
> Each bolt only uses a small portion of that graph, so we can filter out most 
> of it.
> As an extreme example, a topology with 1000 bolts (4000 nodes in the graph) 
> can grow to be over 1GB in size. If we trim out the parts of the graph that 
> are not needed, the serialized topology is < 40MB in size.
> This is a backwards-compatible change, because we are only stripping out data 
> that is never touched by the bolt anyway.





[jira] [Created] (STORM-2425) Storm Hive Bolt not closing open transactions

2017-03-20 Thread Arun Mahadevan (JIRA)
Arun Mahadevan created STORM-2425:
-

 Summary: Storm Hive Bolt not closing open transactions
 Key: STORM-2425
 URL: https://issues.apache.org/jira/browse/STORM-2425
 Project: Apache Storm
  Issue Type: Bug
Reporter: Arun Mahadevan
Assignee: Arun Mahadevan


The Hive bolt closes a connection only if the "max_connections" parameter is 
exceeded or the bolt dies. So if we open a connection to Hive via the Hive 
bolt and some time later stop producing messages to it, the connection is kept 
and the corresponding transactions remain open. This can be a problem if we 
launch two topologies and one of them holds open transactions while doing 
nothing, while the other keeps writing messages to Hive. At some point Hive 
will launch compactions to collapse the small delta files generated by the 
Hive bolt into one base file, but compaction won't launch while there are open 
transactions.
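One possible mitigation, sketched below with hypothetical names (this is not the Hive bolt's actual API), is to record the last use of each writer and close writers that have been idle past a timeout, for example from a periodic tick:

```java
import java.util.*;

// Hypothetical sketch: track per-writer last-use timestamps and close
// writers that have been idle longer than a timeout, so their
// transactions are not left open indefinitely.
public class IdleWriterReaper {
    final Map<String, Long> lastUsed = new HashMap<>();
    final List<String> closed = new ArrayList<>();
    final long idleTimeoutMs;

    IdleWriterReaper(long idleTimeoutMs) {
        this.idleTimeoutMs = idleTimeoutMs;
    }

    // Record that a writer was just used (e.g. a batch was written).
    void touch(String writer, long nowMs) {
        lastUsed.put(writer, nowMs);
    }

    // Called periodically (e.g. from a tick handler): close idle writers.
    void reap(long nowMs) {
        Iterator<Map.Entry<String, Long>> it = lastUsed.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (nowMs - e.getValue() >= idleTimeoutMs) {
                closed.add(e.getKey()); // stand-in for writer.close()
                it.remove();
            }
        }
    }
}
```

Closing an idle writer ends its transaction, so the other topology's compactions would no longer be blocked by it.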





[jira] [Resolved] (STORM-2414) Skip checking meta's ACL when subject has write privileges for any blobs

2017-03-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-2414.
-
   Resolution: Fixed
Fix Version/s: 1.0.4
   1.1.0
   2.0.0

Merged into master, 1.x, 1.0.x branches.

> Skip checking meta's ACL when subject has write privileges for any blobs
> 
>
> Key: STORM-2414
> URL: https://issues.apache.org/jira/browse/STORM-2414
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.1.0, 1.0.4
>Reporter: Jungtaek Lim
>Assignee: Jungtaek Lim
> Fix For: 2.0.0, 1.1.0, 1.0.4
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> When BlobStore.deleteBlob is called, it always tries to fetch the blob if it 
> does not exist locally, because the logic needs to check the ACL with the 
> given subject. That is not necessary when syncing up blobs in follower 
> Nimbuses.
> More generally, some subjects have write privileges for all blobs (say, 
> superuser or admin), and for them the BlobStore doesn't need to check (or 
> even download) the meta's ACL and can just delete the blobs from storage.
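The proposed short-circuit can be sketched as follows (the helper and the role names are hypothetical, not the actual BlobStore API): subjects that may write any blob skip the per-blob meta ACL check, and therefore the download, entirely:

```java
import java.util.*;

public class BlobAclCheck {
    // Illustrative only: roles that may write any blob. Real deployments
    // would derive this from configuration, not a hard-coded set.
    static final Set<String> WRITE_ANY_BLOB =
            new HashSet<>(Arrays.asList("superuser", "admin"));

    // Returns true when the per-blob meta ACL still has to be consulted.
    // Subjects holding a write-any-blob role skip the check (and the
    // blob download it would otherwise force).
    static boolean needsMetaAclCheck(Set<String> subjectRoles) {
        for (String role : subjectRoles) {
            if (WRITE_ANY_BLOB.contains(role)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(needsMetaAclCheck(Collections.singleton("admin"))); // false
        System.out.println(needsMetaAclCheck(Collections.singleton("alice"))); // true
    }
}
```

The payoff is in deleteBlob: for a privileged subject the blob never needs to be pulled to the local Nimbus just to read its ACL before deletion.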





[jira] [Commented] (STORM-1560) Topology stops processing after Netty catches/swallows Throwable

2017-03-20 Thread sun (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932302#comment-15932302
 ] 

sun commented on STORM-1560:


Hi Nico Meyer, the same problem appears in version 0.10.0. The situation is 
similar to what you describe. Could you share how you solved this problem? I 
am looking forward to your reply. Thanks a lot.

> Topology stops processing after Netty catches/swallows Throwable
> 
>
> Key: STORM-1560
> URL: https://issues.apache.org/jira/browse/STORM-1560
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 1.0.0
>Reporter: P. Taylor Goetz
>
> In some scenarios, netty connection problems can leave a topology in an 
> unrecoverable state. The likely culprit is the Netty {{HashedWheelTimer}} 
> class that contains the following code:
> {code}
> public void expire() {
>     if (this.compareAndSetState(0, 2)) {
>         try {
>             this.task.run(this);
>         } catch (Throwable var2) {
>             if (HashedWheelTimer.logger.isWarnEnabled()) {
>                 HashedWheelTimer.logger.warn("An exception was thrown by "
>                         + TimerTask.class.getSimpleName() + '.', var2);
>             }
>         }
>     }
> }
> {code}
> The exception being swallowed can be seen below:
> {code}
> 2016-02-18 08:46:59.116 o.a.s.m.n.Client [INFO] closing Netty Client 
> Netty-Client-/192.168.202.6:6701
> 2016-02-18 08:46:59.173 o.a.s.m.n.Client [INFO] waiting up to 60 ms to 
> send 0 pending messages to Netty-Client-/192.168.202.6:6701
> 2016-02-18 08:46:59.271 STDIO [ERROR] Feb 18, 2016 8:46:59 AM 
> org.apache.storm.shade.org.jboss.netty.util.HashedWheelTimer
> WARNING: An exception was thrown by TimerTask.
> java.lang.RuntimeException: Giving up to scheduleConnect to 
> Netty-Client-/192.168.202.6:6701 after 44 failed attempts. 3 messages were 
> lost
>   at org.apache.storm.messaging.netty.Client$Connect.run(Client.java:573)
>   at 
> org.apache.storm.shade.org.jboss.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:546)
>   at 
> org.apache.storm.shade.org.jboss.netty.util.HashedWheelTimer$Worker.notifyExpiredTimeouts(HashedWheelTimer.java:446)
>   at 
> org.apache.storm.shade.org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:395)
>   at 
> org.apache.storm.shade.org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The netty client then never recovers, and the follows messages repeat forever:
> {code}
> 2016-02-18 09:42:56.251 o.a.s.m.n.Client [ERROR] discarding 1 messages 
> because the Netty client to Netty-Client-/192.168.202.6:6701 is being closed
> 2016-02-18 09:43:25.248 o.a.s.m.n.Client [ERROR] discarding 1 messages 
> because the Netty client to Netty-Client-/192.168.202.6:6701 is being closed
> 2016-02-18 09:43:55.248 o.a.s.m.n.Client [ERROR] discarding 1 messages 
> because the Netty client to Netty-Client-/192.168.202.6:6701 is being closed
> 2016-02-18 09:43:55.752 o.a.s.m.n.Client [ERROR] discarding 2 messages 
> because the Netty client to Netty-Client-/192.168.202.6:6701 is being closed
> 2016-02-18 09:43:56.252 o.a.s.m.n.Client [ERROR] discarding 1 messages 
> because the Netty client to Netty-Client-/192.168.202.6:6701 is being closed
> 2016-02-18 09:44:25.249 o.a.s.m.n.Client [ERROR] discarding 1 messages 
> because the Netty client to Netty-Client-/192.168.202.6:6701 is being closed
> {code}





[jira] [Commented] (STORM-2038) Provide an alternative to using symlinks

2017-03-20 Thread Jungtaek Lim (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15932270#comment-15932270
 ] 

Jungtaek Lim commented on STORM-2038:
-

[~JanecekPetr]
I've just merged the patch for all 1.x / 2.x version lines, so you can pull 
the change, build it, and give it a try.
Setting `storm.disable.symlinks: true` in storm.yaml on every node should make 
Storm work on Windows with an account that cannot create symlinks.

> Provide an alternative to using symlinks
> 
>
> Key: STORM-2038
> URL: https://issues.apache.org/jira/browse/STORM-2038
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Affects Versions: 1.0.1
> Environment: Any windows
>Reporter: Paul Milliken
>Assignee: Robert Joseph Evans
>  Labels: symlink, windows
> Fix For: 2.0.0, 1.1.0, 1.0.4
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> As of Storm 1.0 and above, some functionality (such as the worker-artifacts 
> directory) requires the use of symlinks. On Windows platforms, this requires 
> that Storm either be run as an administrator or that certain group policy 
> settings be changed.
> In locked-down environments, neither of these options is suitable.
> Where possible, an alternative to the use of symlinks should be provided. For 
> example, it may be possible to create additional copies of the worker 
> artifacts directory for each worker (possibly inefficient) or to provide the 
> workers with the canonical path to the real directory.
> See the [brief 
> discussion|http://mail-archives.apache.org/mod_mbox/storm-dev/201608.mbox/%3C1293850887.13165119.1471022901569.JavaMail.yahoo%40mail.yahoo.com%3E]
>  on the mailing list.





[jira] [Resolved] (STORM-2038) Provide an alternative to using symlinks

2017-03-20 Thread Jungtaek Lim (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jungtaek Lim resolved STORM-2038.
-
   Resolution: Fixed
Fix Version/s: 1.0.4
   1.1.0
   2.0.0

Thanks [~revans2], I merged into master, 1.x, 1.0.x branches respectively.

> Provide an alternative to using symlinks
> 
>
> Key: STORM-2038
> URL: https://issues.apache.org/jira/browse/STORM-2038
> Project: Apache Storm
>  Issue Type: New Feature
>  Components: storm-core
>Affects Versions: 1.0.1
> Environment: Any windows
>Reporter: Paul Milliken
>Assignee: Robert Joseph Evans
>  Labels: symlink, windows
> Fix For: 2.0.0, 1.1.0, 1.0.4
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> As of Storm 1.0 and above, some functionality (such as the worker-artifacts 
> directory) requires the use of symlinks. On Windows platforms, this requires 
> that Storm either be run as an administrator or that certain group policy 
> settings be changed.
> In locked-down environments, neither of these options is suitable.
> Where possible, an alternative to the use of symlinks should be provided. For 
> example, it may be possible to create additional copies of the worker 
> artifacts directory for each worker (possibly inefficient) or to provide the 
> workers with the canonical path to the real directory.
> See the [brief 
> discussion|http://mail-archives.apache.org/mod_mbox/storm-dev/201608.mbox/%3C1293850887.13165119.1471022901569.JavaMail.yahoo%40mail.yahoo.com%3E]
>  on the mailing list.


