[jira] [Created] (STORM-2330) Fix storm sql code generation for UDAF with non standard sql types

2017-01-26 Thread Arun Mahadevan (JIRA)
Arun Mahadevan created STORM-2330:
-

 Summary: Fix storm sql code generation for UDAF with non standard 
sql types
 Key: STORM-2330
 URL: https://issues.apache.org/jira/browse/STORM-2330
 Project: Apache Storm
  Issue Type: Bug
Reporter: Arun Mahadevan
Assignee: Arun Mahadevan
 Fix For: 2.0.0, 1.1.0


Storm SQL uses Calcite's code generator. For UDAFs with non-standard return 
types, the generated code is incorrect.

E.g., for a UDAF that returns a List, Calcite converts the return type to the 
SQL type "OTHER" and inserts a cast to Object[] into the generated code, which 
then fails at runtime with "java.lang.ClassCastException: java.util.ArrayList 
cannot be cast to [Ljava.lang.Object;".
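A minimal, self-contained Java illustration of the failing cast described 
above (the UDAF and the generated code are paraphrased; only the cast 
behaviour is shown):

```java
import java.util.ArrayList;

public class UdafCastDemo {
    public static void main(String[] args) {
        // A UDAF returning a List comes back typed as SQL "OTHER", i.e. a
        // plain Object as far as the generated code is concerned.
        Object udafResult = new ArrayList<String>();

        try {
            // The generated code then casts the value to Object[], which a
            // java.util.ArrayList does not satisfy at runtime.
            Object[] row = (Object[]) udafResult;
            System.out.println("cast succeeded: " + row.length);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: " + e.getMessage());
        }
    }
}
```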




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (STORM-2329) Topology halts when getting HDFS writer in a secure environment

2017-01-26 Thread Kristopher Kane (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15841335#comment-15841335
 ] 

Kristopher Kane commented on STORM-2329:


Perhaps call HdfsSecurityUtil.login(conf, hdfsConfig); again?  Having worked 
with this bolt for a while, I realize I don't actually know what renews the 
security context periodically. 
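For reference, one common pattern is to re-login on a timer. The sketch below 
is only an assumption about how that could be wired up: the actual renewal 
call is replaced by a placeholder Runnable, since nothing here confirms what 
storm-hdfs really does (in a real bolt it might wrap 
HdfsSecurityUtil.login(conf, hdfsConfig) or Hadoop's UGI re-login).

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RenewalSketch {
    /**
     * Schedule a periodic renewal task. The Runnable stands in for the
     * actual Kerberos re-login call -- an assumption, not confirmed
     * behaviour of storm-hdfs.
     */
    public static ScheduledExecutorService scheduleRenewal(Runnable renew,
                                                           long periodMillis) {
        ScheduledExecutorService ses =
                Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(renew, periodMillis, periodMillis,
                TimeUnit.MILLISECONDS);
        return ses;
    }
}
```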

> Topology halts when getting HDFS writer in a secure environment
> ---
>
> Key: STORM-2329
> URL: https://issues.apache.org/jira/browse/STORM-2329
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 0.10.1
> Environment: Kerberos
>Reporter: Kristopher Kane
>
> Simple topologies writing to Kerberized HDFS will sometimes stop while 
> getting a new writer (storm-hdfs) in a Kerberized environment: 
> java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
> setup connection for principal@realm to nn1/nn1IP; Host Details : local host 
> is: "hostname/ip"; destination host is: "nn hostname":8020; at 
> org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1473) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1400) at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
>  at com.sun.proxy.$Proxy26.create(Unknown Source) at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
>  at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:497) at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy27.create(Unknown Source) at 
> org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1726)
>  at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1668) at 
> org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1593) at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
>  at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
>  at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
>  at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908) at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889) at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786) at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775) at 
> org.apache.storm.hdfs.bolt.AvroGenericRecordBolt.makeNewWriter(AvroGenericRecordBolt.java:115)
>  at 
> org.apache.storm.hdfs.bolt.AbstractHdfsBolt.getOrCreateWriter(AbstractHdfsBolt.java:222)
>  at 
> org.apache.storm.hdfs.bolt.AbstractHdfsBolt.execute(AbstractHdfsBolt.java:154)
>  at 
> backtype.storm.daemon.executor$fn_3697$tuple_action_fn3699.invoke(executor.clj:670)
>  at 
> backtype.storm.daemon.executor$mk_task_receiver$fn3620.invoke(executor.clj:426)
>  at 
> backtype.storm.disruptor$clojure_handler$reify3196.onEvent(disruptor.clj:58) 
> at 
> backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
>  at 
> backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
>  at 
> backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
>  at 
> backtype.storm.daemon.executor$fn3697$fn3710$fn3761.invoke(executor.clj:808) 
> at backtype.storm.util$async_loop$fn_544.invoke(util.clj:475) at 
> clojure.lang.AFn.run(AFn.java:22) at java.lang.Thread.run(Thread.java:745) 
> Caused by: java.io.IOException: Couldn't setup connection for principal@realm 
> to nn1/nn1IP:8020 at 
> org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:673) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
>  at 
> org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:644)
>  at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:731) 
> at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:369) at 
> org.apache.hadoop.ipc.Client.getConnection(Client.java:1522) at 
> org.apache.hadoop.ipc.Client.call(Client.java:1439) ... 35 more Caused by: 
> javax.security.sasl.SaslException: GSS initiate failed [Caused by

[jira] [Updated] (STORM-2329) Topology halts when getting HDFS writer in a secure environment

2017-01-26 Thread Kristopher Kane (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kristopher Kane updated STORM-2329:
---
Description: 
Simple topologies writing to Kerberized HDFS will sometimes stop while getting 
a new writer (storm-hdfs) in a Kerberized environment: 

java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
setup connection for principal@realm to nn1/nn1IP; Host Details : local host 
is: "hostname/ip"; destination host is: "nn hostname":8020; at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
org.apache.hadoop.ipc.Client.call(Client.java:1473) at 
org.apache.hadoop.ipc.Client.call(Client.java:1400) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
 at com.sun.proxy.$Proxy26.create(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
 at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy27.create(Unknown Source) at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1726)
 at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1668) at 
org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1593) at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
 at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908) at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889) at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786) at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775) at 
org.apache.storm.hdfs.bolt.AvroGenericRecordBolt.makeNewWriter(AvroGenericRecordBolt.java:115)
 at 
org.apache.storm.hdfs.bolt.AbstractHdfsBolt.getOrCreateWriter(AbstractHdfsBolt.java:222)
 at 
org.apache.storm.hdfs.bolt.AbstractHdfsBolt.execute(AbstractHdfsBolt.java:154) 
at 
backtype.storm.daemon.executor$fn_3697$tuple_action_fn3699.invoke(executor.clj:670)
 at 
backtype.storm.daemon.executor$mk_task_receiver$fn3620.invoke(executor.clj:426) 
at backtype.storm.disruptor$clojure_handler$reify3196.onEvent(disruptor.clj:58) 
at 
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
 at 
backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
 at 
backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) 
at backtype.storm.daemon.executor$fn3697$fn3710$fn3761.invoke(executor.clj:808) 
at backtype.storm.util$async_loop$fn_544.invoke(util.clj:475) at 
clojure.lang.AFn.run(AFn.java:22) at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.IOException: Couldn't setup connection for principal@realm 
to nn1/nn1IP:8020 at 
org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:673) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:644)
 at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:731) at 
org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:369) at 
org.apache.hadoop.ipc.Client.getConnection(Client.java:1522) at 
org.apache.hadoop.ipc.Client.call(Client.java:1439) ... 35 more Caused by: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)] at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
 at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:413) at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:554) at 
org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:369) at 
org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:723) at 
org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:719) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at o

[jira] [Created] (STORM-2329) Topology halts when getting HDFS writer in a secure environment

2017-01-26 Thread Kristopher Kane (JIRA)
Kristopher Kane created STORM-2329:
--

 Summary: Topology halts when getting HDFS writer in a secure 
environment
 Key: STORM-2329
 URL: https://issues.apache.org/jira/browse/STORM-2329
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 0.10.1
 Environment: Kerberos
Reporter: Kristopher Kane


Simple topologies writing to Kerberized HDFS will sometimes stop while getting 
a new writer (storm-hdfs) in a Kerberized environment: 

java.io.IOException: Failed on local exception: java.io.IOException: Couldn't 
setup connection for principal@realm to nn1/nn1IP; Host Details : local host 
is: "hostname/ip"; destination host is: "nn hostname":8020; at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) at 
org.apache.hadoop.ipc.Client.call(Client.java:1473) at 
org.apache.hadoop.ipc.Client.call(Client.java:1400) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
 at com.sun.proxy.$Proxy26.create(Unknown Source) at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
 at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:497) at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
 at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
 at com.sun.proxy.$Proxy27.create(Unknown Source) at 
org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1726)
 at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1668) at 
org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1593) at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)
 at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)
 at 
org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)
 at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908) at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889) at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786) at 
org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775) at 
org.apache.storm.hdfs.bolt.AvroGenericRecordBolt.makeNewWriter(AvroGenericRecordBolt.java:115)
 at 
org.apache.storm.hdfs.bolt.AbstractHdfsBolt.getOrCreateWriter(AbstractHdfsBolt.java:222)
 at 
org.apache.storm.hdfs.bolt.AbstractHdfsBolt.execute(AbstractHdfsBolt.java:154) 
at 
backtype.storm.daemon.executor$fn_3697$tuple_action_fn3699.invoke(executor.clj:670)
 at 
backtype.storm.daemon.executor$mk_task_receiver$fn3620.invoke(executor.clj:426) 
at backtype.storm.disruptor$clojure_handler$reify3196.onEvent(disruptor.clj:58) 
at 
backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125)
 at 
backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99)
 at 
backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) 
at backtype.storm.daemon.executor$fn3697$fn3710$fn3761.invoke(executor.clj:808) 
at backtype.storm.util$async_loop$fn_544.invoke(util.clj:475) at 
clojure.lang.AFn.run(AFn.java:22) at java.lang.Thread.run(Thread.java:745) 
Caused by: java.io.IOException: Couldn't setup connection for principal@realm 
to nn1/nn1IP:8020 at 
org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:673) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
 at 
org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:644)
 at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:731) at 
org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:369) at 
org.apache.hadoop.ipc.Client.getConnection(Client.java:1522) at 
org.apache.hadoop.ipc.Client.call(Client.java:1439) ... 35 more Caused by: 
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: 
No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)] at 
com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
 at 
org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:413) at 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:554) at 
org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:369) at 
org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:723) at 
org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:719) at

[jira] [Created] (STORM-2328) Batching And Vector Operations

2017-01-26 Thread Roshan Naik (JIRA)
Roshan Naik created STORM-2328:
--

 Summary: Batching And Vector Operations
 Key: STORM-2328
 URL: https://issues.apache.org/jira/browse/STORM-2328
 Project: Apache Storm
  Issue Type: Sub-task
  Components: storm-core
Affects Versions: 2.0.0
Reporter: Roshan Naik


Sub-topic of Storm Worker redesign. Design doc to come.





[jira] [Created] (STORM-2327) Abstract class ConfigurableTopology

2017-01-26 Thread Julien Nioche (JIRA)
Julien Nioche created STORM-2327:


 Summary: Abstract class ConfigurableTopology
 Key: STORM-2327
 URL: https://issues.apache.org/jira/browse/STORM-2327
 Project: Apache Storm
  Issue Type: New Feature
  Components: storm-core
Affects Versions: 2.0.0
Reporter: Julien Nioche
Priority: Minor


Classes which run topologies often repeat the same code and pattern to:
* populate the configuration from a file instead of ~/.storm
* determine whether to run locally or remotely
* set a TTL for a topology

Flux provides an elegant way of dealing with these but sometimes it is simpler 
to define a topology in Java code. 

In [StormCrawler|http://stormcrawler.net], we implemented an abstract class 
named ConfigurableTopology that can be extended to save users the hassle of 
writing code for the items above. I will open a PR containing this class so 
that we can discuss whether it would be of any use here.
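To make the discussion concrete, here is a rough, Storm-free sketch of the 
shape such a class could take. Class and flag names are illustrative guesses, 
not the actual StormCrawler implementation, which builds on Storm's 
Config/StormSubmitter APIs (omitted here).

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only -- not the real ConfigurableTopology.
abstract class ConfigurableTopologySketch {
    protected Map<String, Object> conf = new HashMap<>();
    protected boolean local = false;  // run in local mode vs submit remotely
    protected int ttlSecs = -1;       // optional topology time-to-live

    /** Consume the common flags, returning whatever args are left over. */
    protected String[] parseArgs(String[] args) {
        java.util.List<String> rest = new java.util.ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if ("-local".equals(args[i])) {
                local = true;
            } else if ("-ttl".equals(args[i]) && i + 1 < args.length) {
                ttlSecs = Integer.parseInt(args[++i]);
            } else {
                rest.add(args[i]);
            }
        }
        return rest.toArray(new String[0]);
    }

    /** Subclasses build and run the actual topology here. */
    protected abstract int run(String[] remainingArgs);
}

// Example subclass: counts the leftover args as a stand-in for real work.
class DemoTopology extends ConfigurableTopologySketch {
    @Override
    protected int run(String[] remainingArgs) {
        return remainingArgs.length;
    }

    int runWithFlags(String[] args) { return run(parseArgs(args)); }
    boolean isLocal() { return local; }
    int ttl() { return ttlSecs; }
}
```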





[jira] [Issue Comment Deleted] (STORM-2055) Exception when running topology from Maven exec with Flux

2017-01-26 Thread Julien Nioche (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Nioche updated STORM-2055:
-
Comment: was deleted

(was: Related to that one?)

> Exception when running topology from Maven exec with Flux
> -
>
> Key: STORM-2055
> URL: https://issues.apache.org/jira/browse/STORM-2055
> Project: Apache Storm
>  Issue Type: Bug
>  Components: Flux
>Affects Versions: 1.0.2
>Reporter: Julien Nioche
>
> When running a topology from Maven with Flux as a dependency, we get
> {code}
> 11335 [Thread-8] ERROR o.a.s.event - Error when processing event
> java.io.FileNotFoundException: Source 
> 'file:/home/julien/.m2/repository/org/apache/storm/flux-core/1.0.1/flux-core-1.0.1.jar!/resources'
>  does not exist
> at 
> org.apache.storm.shade.org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1368)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.shade.org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1261)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.shade.org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1230)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.daemon.supervisor$fn__9359.invoke(supervisor.clj:1194) 
> ~[storm-core-1.0.1.jar:1.0.1]
> at clojure.lang.MultiFn.invoke(MultiFn.java:243) ~[clojure-1.7.0.jar:?]
> at 
> org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9078$fn__9096.invoke(supervisor.clj:582)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9078.invoke(supervisor.clj:581)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at org.apache.storm.event$event_manager$fn__8630.invoke(event.clj:40) 
> [storm-core-1.0.1.jar:1.0.1]
> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
> {code}
> The same topology runs fine when executed with Eclipse or via the storm 
> command.
> See [https://github.com/DigitalPebble/storm-crawler/issues/324]





[jira] [Commented] (STORM-2055) Exception when running topology from Maven exec with Flux

2017-01-26 Thread Julien Nioche (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15839751#comment-15839751
 ] 

Julien Nioche commented on STORM-2055:
--

Related to that one?

> Exception when running topology from Maven exec with Flux
> -
>
> Key: STORM-2055
> URL: https://issues.apache.org/jira/browse/STORM-2055
> Project: Apache Storm
>  Issue Type: Bug
>  Components: Flux
>Affects Versions: 1.0.2
>Reporter: Julien Nioche
>
> When running a topology from Maven with Flux as a dependency, we get
> {code}
> 11335 [Thread-8] ERROR o.a.s.event - Error when processing event
> java.io.FileNotFoundException: Source 
> 'file:/home/julien/.m2/repository/org/apache/storm/flux-core/1.0.1/flux-core-1.0.1.jar!/resources'
>  does not exist
> at 
> org.apache.storm.shade.org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1368)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.shade.org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1261)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.shade.org.apache.commons.io.FileUtils.copyDirectory(FileUtils.java:1230)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.daemon.supervisor$fn__9359.invoke(supervisor.clj:1194) 
> ~[storm-core-1.0.1.jar:1.0.1]
> at clojure.lang.MultiFn.invoke(MultiFn.java:243) ~[clojure-1.7.0.jar:?]
> at 
> org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9078$fn__9096.invoke(supervisor.clj:582)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at 
> org.apache.storm.daemon.supervisor$mk_synchronize_supervisor$this__9078.invoke(supervisor.clj:581)
>  ~[storm-core-1.0.1.jar:1.0.1]
> at org.apache.storm.event$event_manager$fn__8630.invoke(event.clj:40) 
> [storm-core-1.0.1.jar:1.0.1]
> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_101]
> {code}
> The same topology runs fine when executed with Eclipse or via the storm 
> command.
> See [https://github.com/DigitalPebble/storm-crawler/issues/324]





[jira] [Updated] (STORM-2326) Upgrade log4j and slf4j

2017-01-26 Thread Julien Nioche (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Julien Nioche updated STORM-2326:
-
External issue URL: https://github.com/apache/storm/pull/1896

> Upgrade log4j and slf4j
> ---
>
> Key: STORM-2326
> URL: https://issues.apache.org/jira/browse/STORM-2326
> Project: Apache Storm
>  Issue Type: Dependency upgrade
>Affects Versions: 2.0.0, 1.x
>Reporter: Julien Nioche
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The log4j dependencies could be upgraded from 2.1 to 2.7, and slf4j to 
> 1.7.21.
> This would help fix [STORM-1386]
> BTW any idea why we need log4j-over-slf4j?





[jira] [Created] (STORM-2326) Upgrade log4j and slf4j

2017-01-26 Thread Julien Nioche (JIRA)
Julien Nioche created STORM-2326:


 Summary: Upgrade log4j and slf4j
 Key: STORM-2326
 URL: https://issues.apache.org/jira/browse/STORM-2326
 Project: Apache Storm
  Issue Type: Dependency upgrade
Affects Versions: 2.0.0, 1.x
Reporter: Julien Nioche
Priority: Minor


The log4j dependencies could be upgraded from 2.1 to 2.7, and slf4j to 
1.7.21.

This would help fix [STORM-1386]

BTW any idea why we need log4j-over-slf4j?






[jira] [Updated] (STORM-2322) Could not find or load main class blobstore

2017-01-26 Thread Bogdan Rudka (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bogdan Rudka updated STORM-2322:

Priority: Critical  (was: Major)

> Could not find or load main class blobstore 
> 
>
> Key: STORM-2322
> URL: https://issues.apache.org/jira/browse/STORM-2322
> Project: Apache Storm
>  Issue Type: Bug
>Affects Versions: 1.0.1, 1.0.2
>Reporter: Bogdan Rudka
>Priority: Critical
>
> I get this error (Could not find or load main class blobstore) on a Windows 
> machine when running the command: storm blobstore create --file 
> README.txt --acl o::rwa --replication-factor 4 key1. The same error occurs 
> for other commands and on different machines.


