Re: Why tuples fail in spout

2016-05-02 Thread John Fang
Some tuples failed in the bolts. Review the bolts' code: it may be calling 
fail() for some reason, or the bolt operations may be taking longer than the 
tuple timeout. 
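John's point can be illustrated with a plain-Python sketch (hypothetical names, not the Storm API) of the bookkeeping a reliable spout does: an emitted tuple stays pending until a downstream bolt acks it, and a bolt calling fail() — or the tuple timing out — sends it back for replay:

```python
class ReplayTracker:
    """Tracks in-flight tuples the way a reliable spout must:
    emitted tuples stay pending until acked or failed."""
    def __init__(self):
        self.pending = {}   # msg_id -> tuple payload
        self.failed = []    # payloads queued for re-emit

    def emit(self, msg_id, payload):
        self.pending[msg_id] = payload

    def ack(self, msg_id):
        self.pending.pop(msg_id, None)

    def fail(self, msg_id):
        # A bolt's fail() (or a tuple timeout) lands here: re-queue for replay
        payload = self.pending.pop(msg_id, None)
        if payload is not None:
            self.failed.append(payload)

tracker = ReplayTracker()
tracker.emit(1, "a")
tracker.emit(2, "b")
tracker.ack(1)
tracker.fail(2)          # e.g. a bolt hit an exception and failed the tuple
print(tracker.failed)    # ['b']
```

Anything that shows up in `failed` counts as a spout-side failure in the UI, even though the decision was made in a bolt.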
------------------------------------------------------------------
From: Sai Dilip Reddy Kiralam
Sent: Tuesday, May 3, 2016 12:06
To: user
Subject: Why tuples fail in spout
Hi all,


I'm running a Storm topology on my local machine, and the Storm UI shows that 
some tuples failed in the spout. As far as I know, spout tuples are transferred 
to the bolts without any failure. Can any of you help me find the reason for 
the tuple failures in the spout?  




 
Best regards,
K. Sai Dilip Reddy.




Re: [DISCUSS] Would like to make collective intelligence about Metrics on Storm

2016-05-02 Thread Harsha
Jungtaek,
I think filters that support a regex give more flexibility.
Thanks,
Harsha
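A regex-based whitelist/blacklist filter like the one Harsha suggests could look roughly like this (a sketch only — `make_metric_filter` and its semantics are hypothetical, not an existing Storm config):

```python
import re

def make_metric_filter(whitelist=None, blacklist=None):
    """Keep a metric name if it matches any whitelist pattern and no
    blacklist pattern. whitelist=None means allow everything."""
    wl = [re.compile(p) for p in (whitelist or [])]
    bl = [re.compile(p) for p in (blacklist or [])]

    def keep(name):
        if wl and not any(p.search(name) for p in wl):
            return False
        return not any(p.search(name) for p in bl)

    return keep

keep = make_metric_filter(whitelist=[r"^__execute-"], blacklist=[r"latency"])
print(keep("__execute-count"))    # True
print(keep("__execute-latency"))  # False
```

A MetricsConsumer could apply such a predicate before forwarding data points, cutting the volume sent to Graphite/Grafana.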
 
 
On Mon, May 2, 2016, at 07:48 PM, Jungtaek Lim wrote:
> Kevin,
>
> For a specific task, you can register your own metrics, which reside
> per task.
> But the metrics doc for Storm is not clear enough for users to follow, so I
> addressed this and submitted a pull request.
> https://github.com/HeartSaVioR/storm/blob/STORM-1724-1.x/docs/Metrics.md
>
> There are no custom worker-level metrics for users, since Storm abstracts
> the user's logic into 'tasks', so normally users don't need to measure JVM-
> level metrics (beyond the JMX metrics). But it would be possible to add
> them if there's a strong case.
>
> Does current Storm work for you now? Or is there something we should
> address or improve?
>
> Stephen and Harsha,
>
> 'Improve MetricsConsumer' is an umbrella issue, and it contains the
> feature 'blacklist & whitelist of metrics'[1].
> For now it filters metrics only by metric name, but adding filter
> targets for component name, host, etc. is easy, so I would like to see
> the need before making the change.
> Do you think adding filter targets makes sense? Or should we just start
> with the metric-name filter?
>
> Thanks,
> Jungtaek Lim (HeartSaVioR)
>
>
> On Tue, May 3, 2016 at 2:18 AM, Harsha wrote:
>>
>> Jungtaek,
>> Probably a filter config to whitelist and blacklist certain metrics,
>> so that it will scale if there are too many workers, and users can
>> turn off certain metrics.
>>
>> Thanks,
>> Harsha
>>
>>
>> On Mon, May 2, 2016, at 06:19 AM, Stephen Powis wrote:
>>> Oooh I'd love this as well!  I really dig the ease of the metric
>>> framework in storm and have all the metrics go thru one centralized
>>> config.  But as the number of storm hosts and number of tasks grow,
>>> I've found that Graphite/Grafana has a hard time collecting up all
>>> the relevant metrics across a lot of wildcarded keys for things like
>>> hostnames and taskIds to properly display my graphs.
>>>
>>> On Sun, May 1, 2016 at 8:17 AM, Kevin Conaway
>>>  wrote:
 One thing I would like to see added (if not already present) is the
 ability to register metrics that are not tied to a component.

 As of now, the only non-component metrics are reported by the
 SystemBolt pseudo-component which feels like a work-around.  It
 reports JVM level metrics like GC time, heap size and other things
 that aren't associated with a given component.

 It would be great if application developers could expose similar
 metrics like this for things like connection pools and other JVM
 wide objects that aren't unique to a specific component.

 I don't think this is possible now, is it?

 On Wed, Apr 20, 2016 at 12:29 AM, Jungtaek Lim 
 wrote:
> Let me start by sharing my thoughts. :)
>
> 1. Need to enrich docs about metrics / stats.
>
> In fact, I couldn't find the fact that topology stats are sampled by
> default with a sample rate of 0.05 in the docs when I was a newbie to
> Apache Storm. It misled me into asking "why is there a difference
> between the counts?". I also saw some mails on user@ asking the same
> question. It would be better if we included this in the guide docs.
>
> Also, the Metrics document page[2] doesn't seem well written. I think it
> has appropriate headings but lacks content under each heading.
> That should be addressed, and introducing some external metrics
> consumer plugins (like storm-graphite[3] from Verisign) would be
> great, too.
>
> 2. Need to increase sample rate or (ideally) no sampling at all.
>
> Let's set aside the performance hit for now.
> Ideally, we expect metric precision to improve as we increase the
> sample rate. This affects non-gauge kinds of metrics, such as
> counters, latencies, and so on.
>
> By the way, I would like to hear opinions on latency, since I'm not
> an expert.
> Storm provides only average latency, and it's indeed based on the
> sample rate. Are we OK with this? If not, how much would also having
> percentiles help us?
>
> Thanks,
> Jungtaek Lim (HeartSaVioR)
>
> On Wed, Apr 20, 2016 at 10:55 AM, Jungtaek Lim wrote:
>> Hi Storm users,
>>
>> I'm Jungtaek Lim, committer and PMC member of Apache Storm.
>>
>> If you subscribe to the dev@ mailing list, you may have seen that
>> recently we're addressing the metrics feature in Apache Storm.
>>
>> For now, improvements are going forward based on current metrics
>> feature.
>>
>> - Improve (Topology) MetricsConsumer[4]
>> - Provide topology metrics in detail (metrics per each stream)[5]
>> - (WIP) Introduce Cluster Metrics Consumer
>>
>> As I don't maintain a large cluster myself, I really want to
>> collect any ideas for improvement, any inconveniences, use
>> 
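On the latency question raised in this thread: an average hides tail behavior that percentiles expose. A minimal nearest-rank sketch (illustrative only, not Storm's implementation):

```python
def percentile(latencies, p):
    """Nearest-rank percentile: sort the samples and pick the value at
    rank ceil-ish p% (a simple sketch, not Storm's implementation)."""
    xs = sorted(latencies)
    k = max(0, min(len(xs) - 1, int(round(p / 100.0 * len(xs))) - 1))
    return xs[k]

lat = [5] * 98 + [500, 900]          # mostly fast, two slow outliers
avg = sum(lat) / float(len(lat))
print(avg)                   # 18.9 -- the average hides the outliers
print(percentile(lat, 99))   # 500  -- the tail shows up
print(percentile(lat, 100))  # 900
```

With only an average, the two pathological tuples above are invisible; a p99/p100 report surfaces them immediately, which is the case for adding percentiles.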

Why tuples fail in spout

2016-05-02 Thread Sai Dilip Reddy Kiralam
Hi all,


I'm running a Storm topology on my local machine, and the Storm UI shows that
some tuples failed in the spout. As far as I know, spout tuples are
transferred to the bolts without any failure. Can any of you help me find the
reason for the tuple failures in the spout?






Best regards,

K. Sai Dilip Reddy.


Re: [DISCUSS] Would like to make collective intelligence about Metrics on Storm

2016-05-02 Thread Jungtaek Lim
Kevin,

For a specific task, you can register your own metrics, which reside per
task.
But the metrics doc for Storm is not clear enough for users to follow, so I
addressed this and submitted a pull request.
https://github.com/HeartSaVioR/storm/blob/STORM-1724-1.x/docs/Metrics.md

There are no custom worker-level metrics for users, since Storm abstracts the
user's logic into 'tasks', so normally users don't need to measure JVM-level
metrics (beyond the JMX metrics). But it would be possible to add them if
there's a strong case.

Does current Storm work for you now? Or is there something we should address
or improve?

Stephen and Harsha,

'Improve MetricsConsumer' is an umbrella issue, and it contains the
feature 'blacklist & whitelist of metrics'.
For now it filters metrics only by metric name, but adding filter targets
for component name, host, etc. is easy, so I would like to see the need
before making the change.
Do you think adding filter targets makes sense? Or should we just start with
the metric-name filter?

Thanks,
Jungtaek Lim (HeartSaVioR)
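For reference, the per-task registration mentioned above is done in Java via `TopologyContext.registerMetric(name, metric, timeBucketSizeInSecs)` from a component's prepare()/open(). The metric object's contract can be sketched in plain Python (hypothetical names, mirroring the shape of Storm's IMetric):

```python
class CountMetric:
    """Sketch of the IMetric contract: the framework polls
    get_value_and_reset() once per configured time bucket and ships the
    returned value to the registered metrics consumer."""
    def __init__(self):
        self._value = 0

    def incr(self, amount=1):
        self._value += amount

    def get_value_and_reset(self):
        value, self._value = self._value, 0
        return value

m = CountMetric()
m.incr()
m.incr(3)
print(m.get_value_and_reset())  # 4
print(m.get_value_and_reset())  # 0 -- counts reset each bucket
```

Because the reset happens on every poll, each reported value covers exactly one time bucket rather than a running total.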


On Tue, May 3, 2016 at 2:18 AM, Harsha wrote:

> Jungtaek,
> Probably a filter config to whitelist and blacklist certain metrics,
> so that it will scale if there are too many workers, and users can
> turn off certain metrics.
>
> Thanks,
> Harsha
>
>
> On Mon, May 2, 2016, at 06:19 AM, Stephen Powis wrote:
>
> Oooh I'd love this as well!  I really dig the ease of the metric framework
> in storm and have all the metrics go thru one centralized config.  But as
> the number of storm hosts and number of tasks grow, I've found that
> Graphite/Grafana has a hard time collecting up all the relevant metrics
> across a lot of wildcarded keys for things like hostnames and taskIds to
> properly display my graphs.
>
> On Sun, May 1, 2016 at 8:17 AM, Kevin Conaway 
> wrote:
>
> One thing I would like to see added (if not already present) is the
> ability to register metrics that are not tied to a component.
>
> As of now, the only non-component metrics are reported by the SystemBolt
> pseudo-component which feels like a work-around.  It reports JVM level
> metrics like GC time, heap size and other things that aren't associated
> with a given component.
>
> It would be great if application developers could expose similar metrics
> like this for things like connection pools and other JVM wide objects that
> aren't unique to a specific component.
>
> I don't think this is possible now, is it?
>
> On Wed, Apr 20, 2016 at 12:29 AM, Jungtaek Lim  wrote:
>
> Let me start by sharing my thoughts. :)
>
> 1. Need to enrich docs about metrics / stats.
>
> In fact, I couldn't find the fact that topology stats are sampled by default
> with a sample rate of 0.05 in the docs when I was a newbie to Apache
> Storm. It misled me into asking "why is there a difference
> between the counts?". I also saw some mails on user@ asking the same question.
> It would be better if we included this in the guide docs.
>
> Also, the Metrics document page doesn't seem well
> written. I think it has appropriate headings but lacks content under each
> heading.
> That should be addressed, and introducing some external metrics consumer
> plugins (like storm-graphite from Verisign) would be great, too.
>
> 2. Need to increase sample rate or (ideally) no sampling at all.
>
> Let's set aside the performance hit for now.
> Ideally, we expect metric precision to improve as we increase the sample
> rate. This affects non-gauge kinds of metrics, such as counters,
> latencies, and so on.
>
> By the way, I would like to hear opinions on latency, since I'm not an
> expert.
> Storm provides only average latency, and it's indeed based on the sample rate.
> Are we OK with this? If not, how much would also having percentiles help
> us?
>
> Thanks,
> Jungtaek Lim (HeartSaVioR)
>
On Wed, Apr 20, 2016 at 10:55 AM, Jungtaek Lim wrote:
>
> Hi Storm users,
>
> I'm Jungtaek Lim, committer and PMC member of Apache Storm.
>
> If you subscribe to the dev@ mailing list, you may have seen that recently
> we're addressing the metrics feature in Apache Storm.
>
> For now, improvements are going forward based on current metrics feature.
>
> - Improve (Topology) MetricsConsumer
> 
> - Provide topology metrics in detail (metrics per each stream)
> 
> - (WIP) Introduce Cluster Metrics Consumer
>
> As I don't maintain a large cluster myself, I really want to collect
> any ideas for improvement, any inconveniences, and use cases of metrics
> from community members, so we move forward in the right direction.
>
> Let's talk!
>
> Thanks in advance,
> Jungtaek Lim (HeartSaVioR)
>
>
>
>
> --
> Kevin Conaway
> http://www.linkedin.com/pub/kevin-conaway/7/107/580/
> 

Re: Configuring Python environment

2016-05-02 Thread Jiaming Lin
You can try streamparse. I haven't used it, but it looks promising.

On Tuesday, May 3, 2016, Alec Swan  wrote:

> Thanks, Joaquin, but I've seen all these articles since I spent the entire
> weekend trying to get my topology to run on CentOS/Python2.7.
> Getting it to work on Mac OSX was a breeze because I can easily install
> Python 2.7 on it.
>
> It'd be great if somebody with actual CentOS/Python2.7 experience
> could share their thoughts.
>
> Thanks,
>
> Alec
>
> On Mon, May 2, 2016 at 7:59 PM, Joaquin Menchaca wrote:
>
>> virtualenvwrappers
>>   - http://virtualenvwrapper.readthedocs.io/en/latest/
>>   - https://pypi.python.org/pypi/virtualenvwrapper/
>> Pyenv
>>   - https://amaral.northwestern.edu/resources/guides/pyenv-tutorial
>>   - https://github.com/yyuu/pyenv
>> Docker
>>
>> You can have a Python in another directory, then put that directory in
>> the path.  Maybe linuxbrew can help with that.
>>
>> Linuxbrew - http://linuxbrew.sh
>>
>> Disclaimer, I haven't tested any of these with Storm.  I personally don't
>> use CentOS (or RHEL) any more, because they are dinosaurs when it comes to
>> scripting languages and web development.  But many love the slow churn and
>> stability.
>>
>> I also found these:
>> -
>> https://github.com/h2oai/h2o-2/wiki/Installing-python-2.7-on-centos-6.3.-Follow-this-sequence-exactly-for-centos-machine-only
>> -
>> https://www.digitalocean.com/community/tutorials/how-to-set-up-python-2-7-6-and-3-3-3-on-centos-6-4
>> - http://toomuchdata.com/2014/02/16/how-to-install-python-on-centos/
>> On May 2, 2016 3:35 PM, "Alec Swan" wrote:
>>
>>> Hello,
>>>
>>> I am having a real hard time configuring Storm to use python2.7 virtual
>>> environment that I installed on my CentOS 6.5 host.
>>>
>>> CentOS 6.5 ships with python2.6 but storm requires at least python2.7.
>>> Switching to python2.7 breaks some of CentOS functionality, so I had to
>>> install python2.7 as a separate environment.
>>>
>>> I also read that if I set BASH_ENV env var to point to the script that
>>> activates the virtual environment, then it will be loaded before scripts
>>> are executed. I also changed the first line in storm.py to:
>>> #!/usr/bin/env python
>>> so that the right python executable is being used. However, I still have
>>> no luck and keep getting the error shown below after topology is
>>> successfully submitted.
>>>
>>> Has anyone figured out how to use Storm with python virtual environments?
>>>
>>> 2016-05-02 21:57:43.503 b.s.util [ERROR] Async loop died!
>>> java.lang.RuntimeException: Error when launching multilang subprocess
>>> python: error while loading shared libraries: libpython2.7.so.1.0:
>>> cannot open shared object file: No such file or directory
>>>
>>> at
>>> backtype.storm.utils.ShellProcess.launch(ShellProcess.java:68)
>>> ~[storm-core-0.10.0.jar:0.10.0]
>>> at backtype.storm.task.ShellBolt.prepare(ShellBolt.java:117)
>>> ~[storm-core-0.10.0.jar:0.10.0]
>>> at
>>> backtype.storm.daemon.executor$fn__5694$fn__5707.invoke(executor.clj:757)
>>> ~[storm-core-0.10.0.jar:0.10.0]
>>> at backtype.storm.util$async_loop$fn__545.invoke(util.clj:477)
>>> [storm-core-0.10.0.jar:0.10.0]
>>> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
>>> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
>>> Caused by: java.io.IOException: Broken pipe
>>> at java.io.FileOutputStream.writeBytes(Native Method)
>>> ~[?:1.8.0_73]
>>> at java.io.FileOutputStream.write(FileOutputStream.java:326)
>>> ~[?:1.8.0_73]
>>> at
>>> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>>> ~[?:1.8.0_73]
>>> at
>>> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>>> ~[?:1.8.0_73]
>>> at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>>> ~[?:1.8.0_73]
>>> at
>>> backtype.storm.multilang.JsonSerializer.writeString(JsonSerializer.java:96)
>>> ~[storm-core-0.10.0.jar:0.10.0]
>>> at
>>> backtype.storm.multilang.JsonSerializer.writeMessage(JsonSerializer.java:89)
>>> ~[storm-core-0.10.0.jar:0.10.0]
>>> at
>>> backtype.storm.multilang.JsonSerializer.connect(JsonSerializer.java:61)
>>> ~[storm-core-0.10.0.jar:0.10.0]
>>> at
>>> backtype.storm.utils.ShellProcess.launch(ShellProcess.java:64)
>>> ~[storm-core-0.10.0.jar:0.10.0]
>>> ... 5 more
>>>
>>>
>>> Thanks,
>>>
>>> Alec
>>>
>>
>


Re: Configuring Python environment

2016-05-02 Thread Alec Swan
Thanks, Joaquin, but I've seen all these articles since I spent the entire
weekend trying to get my topology to run on CentOS/Python2.7.
Getting it to work on Mac OSX was a breeze because I can easily install
Python 2.7 on it.

It'd be great if somebody with actual CentOS/Python2.7 experience
could share their thoughts.

Thanks,

Alec

On Mon, May 2, 2016 at 7:59 PM, Joaquin Menchaca 
wrote:

> virtualenvwrappers
>   - http://virtualenvwrapper.readthedocs.io/en/latest/
>   - https://pypi.python.org/pypi/virtualenvwrapper/
> Pyenv
>   - https://amaral.northwestern.edu/resources/guides/pyenv-tutorial
>   - https://github.com/yyuu/pyenv
> Docker
>
> You can have a Python in another directory, then put that directory in the
> path.  Maybe linuxbrew can help with that.
>
> Linuxbrew - http://linuxbrew.sh
>
> Disclaimer, I haven't tested any of these with Storm.  I personally don't
> use CentOS (or RHEL) any more, because they are dinosaurs when it comes to
> scripting languages and web development.  But many love the slow churn and
> stability.
>
> I also found these:
> -
> https://github.com/h2oai/h2o-2/wiki/Installing-python-2.7-on-centos-6.3.-Follow-this-sequence-exactly-for-centos-machine-only
> -
> https://www.digitalocean.com/community/tutorials/how-to-set-up-python-2-7-6-and-3-3-3-on-centos-6-4
> - http://toomuchdata.com/2014/02/16/how-to-install-python-on-centos/
> On May 2, 2016 3:35 PM, "Alec Swan"  wrote:
>
>> Hello,
>>
>> I am having a real hard time configuring Storm to use python2.7 virtual
>> environment that I installed on my CentOS 6.5 host.
>>
>> CentOS 6.5 ships with python2.6 but storm requires at least python2.7.
>> Switching to python2.7 breaks some of CentOS functionality, so I had to
>> install python2.7 as a separate environment.
>>
>> I also read that if I set BASH_ENV env var to point to the script that
>> activates the virtual environment, then it will be loaded before scripts
>> are executed. I also changed the first line in storm.py to:
>> #!/usr/bin/env python
>> so that the right python executable is being used. However, I still have
>> no luck and keep getting the error shown below after topology is
>> successfully submitted.
>>
>> Has anyone figured out how to use Storm with python virtual environments?
>>
>> 2016-05-02 21:57:43.503 b.s.util [ERROR] Async loop died!
>> java.lang.RuntimeException: Error when launching multilang subprocess
>> python: error while loading shared libraries: libpython2.7.so.1.0: cannot
>> open shared object file: No such file or directory
>>
>> at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:68)
>> ~[storm-core-0.10.0.jar:0.10.0]
>> at backtype.storm.task.ShellBolt.prepare(ShellBolt.java:117)
>> ~[storm-core-0.10.0.jar:0.10.0]
>> at
>> backtype.storm.daemon.executor$fn__5694$fn__5707.invoke(executor.clj:757)
>> ~[storm-core-0.10.0.jar:0.10.0]
>> at backtype.storm.util$async_loop$fn__545.invoke(util.clj:477)
>> [storm-core-0.10.0.jar:0.10.0]
>> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
>> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
>> Caused by: java.io.IOException: Broken pipe
>> at java.io.FileOutputStream.writeBytes(Native Method)
>> ~[?:1.8.0_73]
>> at java.io.FileOutputStream.write(FileOutputStream.java:326)
>> ~[?:1.8.0_73]
>> at
>> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>> ~[?:1.8.0_73]
>> at
>> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>> ~[?:1.8.0_73]
>> at java.io.DataOutputStream.flush(DataOutputStream.java:123)
>> ~[?:1.8.0_73]
>> at
>> backtype.storm.multilang.JsonSerializer.writeString(JsonSerializer.java:96)
>> ~[storm-core-0.10.0.jar:0.10.0]
>> at
>> backtype.storm.multilang.JsonSerializer.writeMessage(JsonSerializer.java:89)
>> ~[storm-core-0.10.0.jar:0.10.0]
>> at
>> backtype.storm.multilang.JsonSerializer.connect(JsonSerializer.java:61)
>> ~[storm-core-0.10.0.jar:0.10.0]
>> at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:64)
>> ~[storm-core-0.10.0.jar:0.10.0]
>> ... 5 more
>>
>>
>> Thanks,
>>
>> Alec
>>
>


Re: Configuring Python environment

2016-05-02 Thread Joaquin Menchaca
virtualenvwrappers
  - http://virtualenvwrapper.readthedocs.io/en/latest/
  - https://pypi.python.org/pypi/virtualenvwrapper/
Pyenv
  - https://amaral.northwestern.edu/resources/guides/pyenv-tutorial
  - https://github.com/yyuu/pyenv
Docker

You can have a Python in another directory, then put that directory in the
path.  Maybe linuxbrew can help with that.

Linuxbrew - http://linuxbrew.sh

Disclaimer, I haven't tested any of these with Storm.  I personally don't
use CentOS (or RHEL) any more, because they are dinosaurs when it comes to
scripting languages and web development.  But many love the slow churn and
stability.

I also found these:
-
https://github.com/h2oai/h2o-2/wiki/Installing-python-2.7-on-centos-6.3.-Follow-this-sequence-exactly-for-centos-machine-only
-
https://www.digitalocean.com/community/tutorials/how-to-set-up-python-2-7-6-and-3-3-3-on-centos-6-4
- http://toomuchdata.com/2014/02/16/how-to-install-python-on-centos/
On May 2, 2016 3:35 PM, "Alec Swan"  wrote:

> Hello,
>
> I am having a real hard time configuring Storm to use python2.7 virtual
> environment that I installed on my CentOS 6.5 host.
>
> CentOS 6.5 ships with python2.6 but storm requires at least python2.7.
> Switching to python2.7 breaks some of CentOS functionality, so I had to
> install python2.7 as a separate environment.
>
> I also read that if I set BASH_ENV env var to point to the script that
> activates the virtual environment, then it will be loaded before scripts
> are executed. I also changed the first line in storm.py to:
> #!/usr/bin/env python
> so that the right python executable is being used. However, I still have
> no luck and keep getting the error shown below after topology is
> successfully submitted.
>
> Has anyone figured out how to use Storm with python virtual environments?
>
> 2016-05-02 21:57:43.503 b.s.util [ERROR] Async loop died!
> java.lang.RuntimeException: Error when launching multilang subprocess
> python: error while loading shared libraries: libpython2.7.so.1.0: cannot
> open shared object file: No such file or directory
>
> at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:68)
> ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.task.ShellBolt.prepare(ShellBolt.java:117)
> ~[storm-core-0.10.0.jar:0.10.0]
> at
> backtype.storm.daemon.executor$fn__5694$fn__5707.invoke(executor.clj:757)
> ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.util$async_loop$fn__545.invoke(util.clj:477)
> [storm-core-0.10.0.jar:0.10.0]
> at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
> Caused by: java.io.IOException: Broken pipe
> at java.io.FileOutputStream.writeBytes(Native Method) ~[?:1.8.0_73]
> at java.io.FileOutputStream.write(FileOutputStream.java:326)
> ~[?:1.8.0_73]
> at
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
> ~[?:1.8.0_73]
> at
> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
> ~[?:1.8.0_73]
> at java.io.DataOutputStream.flush(DataOutputStream.java:123)
> ~[?:1.8.0_73]
> at
> backtype.storm.multilang.JsonSerializer.writeString(JsonSerializer.java:96)
> ~[storm-core-0.10.0.jar:0.10.0]
> at
> backtype.storm.multilang.JsonSerializer.writeMessage(JsonSerializer.java:89)
> ~[storm-core-0.10.0.jar:0.10.0]
> at
> backtype.storm.multilang.JsonSerializer.connect(JsonSerializer.java:61)
> ~[storm-core-0.10.0.jar:0.10.0]
> at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:64)
> ~[storm-core-0.10.0.jar:0.10.0]
> ... 5 more
>
>
> Thanks,
>
> Alec
>


Configuring Python environment

2016-05-02 Thread Alec Swan
Hello,

I am having a really hard time configuring Storm to use the python2.7 virtual
environment that I installed on my CentOS 6.5 host.

CentOS 6.5 ships with python2.6 but storm requires at least python2.7.
Switching to python2.7 breaks some of CentOS functionality, so I had to
install python2.7 as a separate environment.

I also read that if I set BASH_ENV env var to point to the script that
activates the virtual environment, then it will be loaded before scripts
are executed. I also changed the first line in storm.py to:
#!/usr/bin/env python
so that the right python executable is used. However, I still have no luck
and keep getting the error shown below after the topology is successfully
submitted.

Has anyone figured out how to use Storm with python virtual environments?

2016-05-02 21:57:43.503 b.s.util [ERROR] Async loop died!
java.lang.RuntimeException: Error when launching multilang subprocess
python: error while loading shared libraries: libpython2.7.so.1.0: cannot
open shared object file: No such file or directory

at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:68)
~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.task.ShellBolt.prepare(ShellBolt.java:117)
~[storm-core-0.10.0.jar:0.10.0]
at
backtype.storm.daemon.executor$fn__5694$fn__5707.invoke(executor.clj:757)
~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:477)
[storm-core-0.10.0.jar:0.10.0]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
Caused by: java.io.IOException: Broken pipe
at java.io.FileOutputStream.writeBytes(Native Method) ~[?:1.8.0_73]
at java.io.FileOutputStream.write(FileOutputStream.java:326)
~[?:1.8.0_73]
at
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
~[?:1.8.0_73]
at
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
~[?:1.8.0_73]
at java.io.DataOutputStream.flush(DataOutputStream.java:123)
~[?:1.8.0_73]
at
backtype.storm.multilang.JsonSerializer.writeString(JsonSerializer.java:96)
~[storm-core-0.10.0.jar:0.10.0]
at
backtype.storm.multilang.JsonSerializer.writeMessage(JsonSerializer.java:89)
~[storm-core-0.10.0.jar:0.10.0]
at
backtype.storm.multilang.JsonSerializer.connect(JsonSerializer.java:61)
~[storm-core-0.10.0.jar:0.10.0]
at backtype.storm.utils.ShellProcess.launch(ShellProcess.java:64)
~[storm-core-0.10.0.jar:0.10.0]
... 5 more


Thanks,

Alec
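For an environment problem like this, it can help to confirm what the multilang subprocess would actually see. The `libpython2.7.so.1.0` error usually means the dynamic loader can't find the custom Python's shared library (e.g. its lib directory is absent from LD_LIBRARY_PATH in the supervisor's environment). A small diagnostic sketch:

```python
import os
import sys

# Which interpreter "#!/usr/bin/env python" resolves to in this environment
print(sys.executable)
print("python %d.%d" % sys.version_info[:2])

# Paths the dynamic loader searches for libpython*.so; if a custom-built
# Python's lib directory is missing here, launching fails with the
# "cannot open shared object file" error shown in the stack trace above.
print(os.environ.get("LD_LIBRARY_PATH", "(unset)"))
```

Running this both from an interactive shell and from inside a trivial test bolt would show whether the supervisor-launched process inherits the same PATH/LD_LIBRARY_PATH as your shell.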


Re: Spout Questions

2016-05-02 Thread Adrien Carreira
Thank you for the feedback.

I've just switched back to 0.10 and I don't see the issue.

nextTuple is quite fast; I'm handling a buffer of tuples, and when the buffer
is empty, I load data from Redis.

The ack method deletes data from Redis using a pipeline to be fast.

It's strange that in 0.10 I reach the buffer size but not in 1.0.

I'm developing a web crawler, and to be fast I need to emit a constant
quantity of tuples.

Adrien

On Monday, May 2, 2016, P. Taylor Goetz wrote:

> nextTuple(), ack(), and fail() are all called by the same thread.
> nextTuple() should be fast, so you probably only want to emit one or a
> handful of tuples. Emitting a huge number of tuples in the nextTuple()
> method is what’s causing your problem.
>
> -Taylor
>
> > On May 2, 2016, at 9:08 AM, Adrien Carreira wrote:
> >
> > Hi there,
> >
> > Don't know if I'm in the right place, but let's try.
> >
> > I'm building a topology, and I have a spout plugged into Redis.
> >
> > My question is: when the topology is active, why isn't the nextTuple() method
> called when the ack() method is called?
> >
> > Meaning, I see about 10k messages acked without a nextTuple() call...
> >
> > So what's going on is: nextTuple is called to emit 3k messages, then stops;
> acking happens for all the messages without nextTuple being called to refeed the
> topology
> >
> > What could be the problem?
> >
> >
> > Thanks for your feedbacks and sorry for my bad english.
> >
>
>


Re: Spout Questions

2016-05-02 Thread P. Taylor Goetz
nextTuple(), ack(), and fail() are all called by the same thread. nextTuple() 
should be fast, so you probably only want to emit one or a handful of tuples. 
Emitting a huge number of tuples in the nextTuple() method is what’s causing 
your problem.

-Taylor
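Taylor's advice amounts to keeping each nextTuple() call small so the single event-loop thread gets back to servicing ack()/fail() promptly. A plain-Python sketch of that batching (hypothetical names, not the Storm API):

```python
class BatchingSpout:
    """Emits at most `batch` tuples per next_tuple() call, so the single
    spout thread can interleave ack handling between calls."""
    def __init__(self, source, batch=10):
        self.source = list(source)
        self.batch = batch
        self.emitted = []

    def next_tuple(self):
        # Emit a small slice of the backlog, never the whole thing at once
        for _ in range(min(self.batch, len(self.source))):
            self.emitted.append(self.source.pop(0))

spout = BatchingSpout(range(25), batch=10)
spout.next_tuple()
print(len(spout.emitted))  # 10 -- not the whole backlog in one call
```

With a huge emit per call, the thread is stuck inside nextTuple() while acks pile up; with small batches, nextTuple() and ack() alternate naturally.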

> On May 2, 2016, at 9:08 AM, Adrien Carreira  wrote:
> 
> Hi there,
> 
> Don't know if I'm in the right place, but let's try.
> 
> I'm building a topology, and I have a spout plugged into Redis.
> 
> My question is: when the topology is active, why isn't the nextTuple() method 
> called when the ack() method is called?
> 
> Meaning, I see about 10k messages acked without a nextTuple() call...
> 
> So what's going on is: nextTuple is called to emit 3k messages, then stops; 
> acking happens for all the messages without nextTuple being called to refeed 
> the topology
> 
> What could be the problem?
> 
> 
> Thanks for your feedbacks and sorry for my bad english.
> 





Storm-Vagrant for 0.10.0

2016-05-02 Thread Joaquin Menchaca
I was able to get a successful setup for Apache Storm 0.10.0 using
Vagrant.  Thanks for the help.  I used some of the same scripts to build
out one in AWS as well.

   - https://github.com/darkn3rd/storm-vagrant

-- 

Thus a victorious army first wins and then seeks battle; a defeated army first fights and then seeks victory.


Re: [DISCUSS] Would like to make collective intelligence about Metrics on Storm

2016-05-02 Thread Harsha
Jungtaek,
Probably a filter config to whitelist and blacklist certain metrics, so
that it will scale if there are too many workers, and users can turn off
certain metrics.
 
Thanks,
Harsha
 
 
On Mon, May 2, 2016, at 06:19 AM, Stephen Powis wrote:
> Oooh I'd love this as well!  I really dig the ease of the metric
> framework in storm and have all the metrics go thru one centralized
> config.  But as the number of storm hosts and number of tasks grow,
> I've found that Graphite/Grafana has a hard time collecting up all the
> relevant metrics across a lot of wildcarded keys for things like
> hostnames and taskIds to properly display my graphs.
>
> On Sun, May 1, 2016 at 8:17 AM, Kevin Conaway
>  wrote:
>> One thing I would like to see added (if not already present) is the
>> ability to register metrics that are not tied to a component.
>>
>> As of now, the only non-component metrics are reported by the
>> SystemBolt pseudo-component which feels like a work-around.  It
>> reports JVM level metrics like GC time, heap size and other things
>> that aren't associated with a given component.
>>
>> It would be great if application developers could expose similar
>> metrics like this for things like connection pools and other JVM wide
>> objects that aren't unique to a specific component.
>>
>> I don't think this is possible now, is it?
>>
>> On Wed, Apr 20, 2016 at 12:29 AM, Jungtaek Lim
>>  wrote:
>>> Let me start by sharing my thoughts. :)
>>>
>>> 1. Need to enrich docs about metrics / stats.
>>>
>>> In fact, I couldn't find the fact that topology stats are sampled by
>>> default with a sample rate of 0.05 in the docs when I was a newbie to
>>> Apache Storm. It misled me into asking "why is there a difference
>>> between the counts?". I also saw some mails on user@
>>> asking the same question. It would be better if we included this in
>>> the guide docs.
>>>
>>> Also, the Metrics document page[1] doesn't seem well written. I think it
>>> has appropriate headings but lacks content under each heading.
>>> That should be addressed, and introducing some external metrics
>>> consumer plugins (like storm-graphite[2] from Verisign) would be
>>> great, too.
>>>
>>> 2. Need to increase sample rate or (ideally) no sampling at all.
>>>
>>> Let's set aside the performance hit for now.
>>> Ideally, we expect metric precision to improve as we increase the
>>> sample rate. This affects non-gauge kinds of metrics, such as
>>> counters, latencies, and so on.
>>>
>>> By the way, I would like to hear opinions on latency, since I'm not an
>>> expert.
>>> Storm provides only average latency, and it's indeed based on the sample
>>> rate. Are we OK with this? If not, how much would also having
>>> percentiles help us?
>>>
>>> Thanks,
>>> Jungtaek Lim (HeartSaVioR)
>>>
>>> On Wed, Apr 20, 2016 at 10:55 AM, Jungtaek Lim wrote:
 Hi Storm users,

 I'm Jungtaek Lim, committer and PMC member of Apache Storm.

 If you subscribe to the dev@ mailing list, you may have seen that
 recently we're addressing the metrics feature in Apache Storm.

 For now, improvements are going forward based on current metrics
 feature.

 - Improve (Topology) MetricsConsumer[3]
 - Provide topology metrics in detail (metrics per each stream)[4]
 - (WIP) Introduce Cluster Metrics Consumer

 As I don't maintain a large cluster myself, I really want to
 collect any ideas for improvement, any inconveniences, and use cases
 of metrics from community members, so we move forward in the right
 direction.

 Let's talk!

 Thanks in advance,
 Jungtaek Lim (HeartSaVioR)
>>
>>
>>
>> --
>> Kevin Conaway http://www.linkedin.com/pub/kevin-conaway/7/107/580/
>> https://github.com/kevinconaway
>>
 

Links:

  1. http://storm.apache.org/releases/1.0.0/Metrics.html
  2. https://github.com/verisign/storm-graphite
  3. https://issues.apache.org/jira/browse/STORM-1699
  4. https://issues.apache.org/jira/browse/STORM-1719
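The count discrepancy from sampling described in this thread can be illustrated: with a 0.05 sample rate, roughly one tuple in twenty is counted and the total is scaled by 20, so the displayed count is an estimate that moves in steps of 20 (a sketch only; Storm's actual sampler differs in detail):

```python
import random

def sampled_count(events, rate=0.05, seed=7):
    """Estimate a counter by sampling: count each event with probability
    `rate`, then scale the hit count by 1/rate."""
    rng = random.Random(seed)
    scale = int(round(1.0 / rate))   # 20 for rate 0.05
    hits = sum(1 for _ in range(events) if rng.random() < rate)
    return hits * scale

estimate = sampled_count(10000)
print(estimate)       # close to 10000, but usually not exactly 10000
print(estimate % 20)  # 0 -- estimates only move in steps of 1/rate
```

This is why spout and bolt counts in the UI rarely agree exactly: each component's counter is an independent sampled estimate of the same stream.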


Re: How Does Nimbus Decide to Restart Topology?

2016-05-02 Thread Kevin Conaway
Unfortunately we're not capturing disk I/O in our metrics; I can look into
doing that for next time.

We're not capturing GC logs; we are using the Graphite storm metrics
consumer to push metrics to Graphite, one of which is the GC time from the
default GC MXBean.

> I'm assuming you're saying that multiple workers had state :timed-out at
once?

We have 6 workers.  Only one had state :timed-out, the others had state
:disallowed.  Looking at one of the supervisors as an example, it looks
like it received multiple requests to reschedule the worker, which caused
the worker to be launched multiple times.  Is this normal?

2016-04-30 01:34:00.872 b.s.d.supervisor [INFO] Shutting down and clearing
state for id 589b0ed5-c4e9-422f-a6c5-5d65145915f7. Current supervisor time:
1461980040. State: :disallowed, Heartbeat: {:time-secs 1461980040,
:storm-id "", :executors [[124 124] [64 64] [196 196] [40 40] [28
28] [184 184] [100 100] [-1 -1] [172 172] [16 16] [52 52] [148 148] [136
136] [112 112] [76 76] [88 88] [160 160] [4 4]], :port 6700}

2016-04-30 01:34:00.874 b.s.d.supervisor [INFO] Shutting down
3c373bc0-e8d5-4c3c-960d-53dfeb27fc86:589b0ed5-c4e9-422f-a6c5-5d65145915f7

2016-04-30 01:34:02.013 b.s.d.supervisor [INFO] Shutting down and clearing
state for id 460a7d66-06de-4ca5-9140-7d46dcdea841. Current supervisor time:
1461980040. State: :disallowed, Heartbeat: {:time-secs 1461980040,
:storm-id "", :executors [[178 178] [58 58] [190 190] [118 118]
[22 22] [142 142] [-1 -1] [166 166] [106 106] [70 70] [10 10] [46 46] [82
82] [154 154] [94 94] [34 34] [130 130]], :port 6701}

2016-04-30 01:34:02.014 b.s.d.supervisor [INFO] Shutting down
3c373bc0-e8d5-4c3c-960d-53dfeb27fc86:460a7d66-06de-4ca5-9140-7d46dcdea841

2016-04-30 01:34:03.095 b.s.d.supervisor [INFO] Launching worker with
assignment {:storm-id "", :executors [[3 3] [33 33] [103 103]
[163 163] [53 53] [73 73] [123 123] [43 43] [63 63] [23 23] [93 93] [153
153] [13 13] [193 193] [143 143] [83 83] [173 173] [133 133] [183 183] [113
113]]} for this supervisor 3c373bc0-e8d5-4c3c-960d-53dfeb27fc86 on port
6700 with id 567a18fd-33d1-49b6-a3f4-ace65641bd67

2016-04-30 01:34:03.122 b.s.d.supervisor [INFO] Launching worker with
assignment {:storm-id "", :executors [[8 8] [188 188] [68 68]
[198 198] [178 178] [58 58] [118 118] [18 18] [28 28] [38 38] [98 98] [48
48] [148 148] [158 158] [128 128] [88 88] [138 138] [108 108] [168 168] [78
78]]} for this supervisor 3c373bc0-e8d5-4c3c-960d-53dfeb27fc86 on port 6701
with id 9fcc869d-08d7-44ec-bde2-bf9ed86403e6


2016-04-30 01:34:41.322 b.s.d.supervisor [INFO] Shutting down and clearing
state for id 567a18fd-33d1-49b6-a3f4-ace65641bd67. Current supervisor time:
1461980081. State: :disallowed, Heartbeat: {:time-secs 1461980080,
:storm-id "", :executors [[3 3] [33 33] [103 103] [163 163] [53
53] [73 73] [123 123] [43 43] [63 63] [23 23] [93 93] [-1 -1] [153 153] [13
13] [193 193] [143 143] [83 83] [173 173] [133 133] [183 183] [113 113]],
:port 6700}

2016-04-30 01:34:41.323 b.s.d.supervisor [INFO] Shutting down
3c373bc0-e8d5-4c3c-960d-53dfeb27fc86:567a18fd-33d1-49b6-a3f4-ace65641bd67

2016-04-30 01:34:42.353 b.s.d.supervisor [INFO] Shutting down and clearing
state for id 9fcc869d-08d7-44ec-bde2-bf9ed86403e6. Current supervisor time:
1461980081. State: :disallowed, Heartbeat: {:time-secs 1461980080,
:storm-id "", :executors [[8 8] [188 188] [68 68] [198 198] [178
178] [58 58] [118 118] [18 18] [28 28] [38 38] [98 98] [48 48] [-1 -1] [148
148] [158 158] [128 128] [88 88] [138 138] [108 108] [168 168] [78 78]],
:port 6701}

2016-04-30 01:34:42.354 b.s.d.supervisor [INFO] Shutting down
3c373bc0-e8d5-4c3c-960d-53dfeb27fc86:9fcc869d-08d7-44ec-bde2-bf9ed86403e6

On Sun, May 1, 2016 at 5:52 PM, Erik Weathers  wrote:

> Maybe disk I/O was high?  Are you capturing GC logs to disk in unique
> files (you can sub in the PID and timestamp into the GC log filename)?  I
> know you believe it's not responsible, but it's the only thing I've ever
> found to be responsible thus far.  (Except for a problem in storm 0.9.3
> with netty that has since been fixed -- we worked around that by
> downgrading to ZeroMQ.)  You might try monitoring the heartbeat files
> written by the workers to watch for the file creation to be happening less
> frequently than once per second.
>
> > all of the worker sessions expired at the same time
>
> I'm assuming you're saying that multiple workers had state :timed-out at
> once?  Was that on the same host?  If the state is :disallowed, that is
> perfectly normal when the reassignment happens, as I described earlier.
>
> - Erik
>
> On Sunday, May 1, 2016, Kevin Conaway  wrote:
>
>> Any tips on where to continue investigating or other metrics to capture?
>>
>> As I mentioned before, the topology was mostly idle: low CPU usage, low
>> GC time (CMS/ParNew), stable heap, no eth errors. It's hard to see why all
>> of the worker sessions expired at the same time.
>>
>> On 
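Erik's suggestion above, watching whether each worker's heartbeat file is still being touched roughly once per second, can be sketched as a simple last-modified-time check. The helper below is illustrative only; the actual heartbeat file path and cadence come from your storm.yaml (storm.local.dir and the worker heartbeat settings), not from this snippet.

```java
public class HeartbeatCheck {
    // A heartbeat is stale when the file's last-modified timestamp is
    // older than the allowed age. All timestamps are epoch milliseconds.
    public static boolean isStale(long lastModifiedMs, long maxAgeMs, long nowMs) {
        return nowMs - lastModifiedMs > maxAgeMs;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        // A heartbeat file last touched 10s ago fails a 5s freshness check.
        System.out.println(isStale(now - 10_000, 5_000, now));
    }
}
```

In practice you would feed `File.lastModified()` of each worker's heartbeat file into a check like this from a cron job or monitoring agent.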

Re: [DISCUSS] Would like to make collective intelligence about Metrics on Storm

2016-05-02 Thread Stephen Powis
Oooh I'd love this as well!  I really dig the ease of the metrics framework
in Storm and having all the metrics go through one centralized config.  But
as the number of Storm hosts and tasks grows, I've found that
Graphite/Grafana has a hard time collecting all the relevant metrics
across a lot of wildcarded keys for things like hostnames and taskIds to
properly display my graphs.

On Sun, May 1, 2016 at 8:17 AM, Kevin Conaway 
wrote:

> One thing I would like to see added (if not already present) is the
> ability to register metrics that are not tied to a component.
>
> As of now, the only non-component metrics are reported by the SystemBolt
> pseudo-component, which feels like a workaround.  It reports JVM-level
> metrics like GC time, heap size, and other things that aren't associated
> with a given component.
>
> It would be great if application developers could expose similar metrics
> like this for things like connection pools and other JVM wide objects that
> aren't unique to a specific component.
>
> I don't think this is possible now, is it?
>
> On Wed, Apr 20, 2016 at 12:29 AM, Jungtaek Lim  wrote:
>
>> Let me start sharing my thought. :)
>>
>> 1. Need to enrich docs about metrics / stats.
>>
>> In fact, I couldn't find the fact that topology stats are sampled by
>> default with a sample rate of 0.05 in the docs when I was a newbie to
>> Apache Storm. It misled me into asking "Why is there a difference between
>> the counts?". I also saw some mails on user@ about the same question.
>> Including this in the guide doc would be better.
>>
>> And the Metrics document page doesn't seem well written. I think it has
>> appropriate headings but lacks content under each heading.
>> It should be addressed, and introducing some external metrics consumer
>> plugins (like storm-graphite from Verisign) would be great, too.
>>
>> 2. Need to increase sample rate or (ideally) no sampling at all.
>>
>> Let's set aside the performance hit for now.
>> Ideally, we expect the precision of metrics to get better when we increase
>> the sample rate. It affects non-gauge kinds of metrics, such as counters
>> and latencies.
>>
>> Btw, I would like to hear opinions on latency since I'm not an expert.
>> Storm provides only average latency, and it's indeed based on the sample
>> rate. Do we feel OK with this? If not, how much would also having
>> percentiles help us?
>>
>> Thanks,
>> Jungtaek Lim (HeartSaVioR)
>>
>> On Wed, Apr 20, 2016 at 10:55 AM, Jungtaek Lim wrote:
>>
>>> Hi Storm users,
>>>
>>> I'm Jungtaek Lim, committer and PMC member of Apache Storm.
>>>
>>> If you subscribed dev@ mailing list, you may have seen that recently
>>> we're addressing the metrics feature on Apache Storm.
>>>
>>> For now, improvements are moving forward based on the current metrics feature.
>>>
>>> - Improve (Topology) MetricsConsumer
>>> 
>>> - Provide topology metrics in detail (metrics per each stream)
>>> 
>>> - (WIP) Introduce Cluster Metrics Consumer
>>>
>>> As I don't maintain a large cluster myself, I really want to collect
>>> any ideas for improvement, any inconveniences, and use cases of Metrics
>>> with community members, so that we move forward in the right direction.
>>>
>>> Let's talk!
>>>
>>> Thanks in advance,
>>> Jungtaek Lim (HeartSaVioR)
>>>
>>
>
>
> --
> Kevin Conaway
> http://www.linkedin.com/pub/kevin-conaway/7/107/580/
> https://github.com/kevinconaway
>

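The count discrepancies discussed in this thread, caused by the default 0.05 sample rate, can be illustrated with a tiny simulation. The class below is a deterministic sketch of the idea (every 20th event adds 20 to the counter, so the expected value matches the true count but any single reading can be off by up to 19); Storm's actual sampler is randomized, so treat this as illustrative, not as Storm's implementation.

```java
public class SampledCounter {
    private final int interval;  // 1/rate, e.g. 20 for the default rate 0.05
    private long seen = 0;
    private long count = 0;

    public SampledCounter(double rate) {
        this.interval = (int) Math.round(1.0 / rate);
    }

    // Every interval-th event bumps the counter by the whole interval,
    // which is why sampled counts move in steps of 20 by default.
    public void mark() {
        if (++seen % interval == 0) count += interval;
    }

    public long get() { return count; }

    public static void main(String[] args) {
        SampledCounter c = new SampledCounter(0.05);
        for (int i = 0; i < 1010; i++) c.mark();
        // True count is 1010; the sampled reading lags at 1000.
        System.out.println(c.get());
    }
}
```

This is exactly the kind of "why don't the counts match?" gap the docs could call out explicitly.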

Spout Questions

2016-05-02 Thread Adrien Carreira
Hi there,

Don't know if I'm in the right place, but let's try.

I'm building a topology, and I have a spout plugged into Redis.

My question is: when the topology is active, why isn't the nextTuple()
method called while the ack() method is being called?

Meaning, I see about 10k messages being acked without a single nextTuple() call...

So what's going on is: nextTuple() is called to emit 3k messages, then
stops; ack() is called to ack all the messages without nextTuple() being
called to re-feed the topology.

What could the problem be?


Thanks for your feedback, and sorry for my bad English.
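For what it's worth, the pattern described (roughly 3k emits, then a pause until acks arrive) is what you would expect if topology.max.spout.pending is set near 3000: the executor stops calling nextTuple() while the number of pending (unacked) tuples is at the cap, and resumes once acks drain it. The single-threaded sketch below is a toy model of that gating, not Storm's executor code; all class and method names are illustrative.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

public class SpoutLoop {
    private final int maxPending;
    private final Queue<Integer> source = new ArrayDeque<>();
    private final Set<Integer> pending = new HashSet<>();

    public SpoutLoop(int maxPending, int messages) {
        this.maxPending = maxPending;
        for (int i = 0; i < messages; i++) source.add(i);
    }

    // Called by the executor loop; emits only while under the pending cap.
    public boolean nextTuple() {
        if (pending.size() >= maxPending || source.isEmpty()) return false;
        pending.add(source.poll());
        return true;
    }

    public void ack(int id) { pending.remove(id); }

    public int pendingCount() { return pending.size(); }

    public static void main(String[] args) {
        SpoutLoop s = new SpoutLoop(3000, 10_000);
        while (s.nextTuple()) { }              // emits until the cap is hit
        System.out.println(s.pendingCount());  // stuck at the cap: 3000
        for (int i = 0; i < 3000; i++) s.ack(i);
        System.out.println(s.nextTuple());     // emission resumes: true
    }
}
```

Also note that in Storm the same executor thread services nextTuple() and ack(), so a nextTuple() that blocks for a long time will itself delay ack processing.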


Re: Is Storm 1.0.0 compatible with Kafka 0.8.2.x?

2016-05-02 Thread Abhishek Agarwal
The good news is that the storm-kafka artifact itself is compatible with
Kafka 0.8.2.1. So in your Maven project, you can simply exclude the
org.apache.kafka dependencies pulled in by storm-kafka and use the
version you need.
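A hypothetical pom.xml fragment following this advice might look like the one below. The version numbers, the Scala suffix (_2.10), and the extra exclusions on the kafka artifact are assumptions; adapt them to your own build.

```xml
<!-- Assumed sketch: pin the Kafka client to the broker's 0.8.2.1 version
     by excluding the transitive org.apache.kafka dependency from
     storm-kafka and declaring the matching version explicitly. -->
<dependency>
  <groupId>org.apache.storm</groupId>
  <artifactId>storm-kafka</artifactId>
  <version>1.0.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.10</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.10</artifactId>
  <version>0.8.2.1</version>
  <exclusions>
    <!-- Commonly excluded to avoid clashing with Storm's own logging/ZK -->
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Run `mvn dependency:tree` afterwards to confirm only the intended Kafka version remains on the classpath.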

On Mon, May 2, 2016 at 3:03 PM, Abhishek Agarwal 
wrote:

> John, I think you have hit it right. I started using storm-kafka 1.0 with
> kafka server 0.8.2.1 and started running into all sorts of issues including
> the one you pointed out. Also the 0.9 documentation clearly states that
> upgraded clients will not be compatible with older Kafka versions.
> http://kafka.apache.org/090/documentation.html#upgrade
>
> On Wed, Apr 20, 2016 at 5:42 PM, John Yost  wrote:
>
>> Argh, fat fingers... I am attempting to write to Kafka 0.8.2.1 from Storm
>> 1.0.0, which has a dependency upon Kafka 0.9.0.1.
>>
>> @Abhishek -> interesting you are seeing the same exception for Storm
>> 0.10.0 because that has a dependency upon Kafka 0.8.1.1.
>>
>> On Wed, Apr 20, 2016 at 8:06 AM, John Yost  wrote:
>>
>>> Oh, gotcha, okay, will do. BTW, here's the link I failed to provide the
>>> first time: https://github.com/confluentinc/examples/issues/15
>>>
>>> --John
>>>
>>> On Wed, Apr 20, 2016 at 7:44 AM, Abhishek Agarwal 
>>> wrote:
>>>
 @John -
 can you file a JIRA for this? I doubt it is related to the 1.0.0 version in
 particular. I have run into "IllegalArgumentExceptions" in KafkaSpout
 (0.10.0).

 On Wed, Apr 20, 2016 at 4:44 PM, John Yost 
 wrote:

> Also, I found this link that indicates the exception I reported
> yesterday can be symptomatic of a mismatch between the client and broker
> where the client is one version newer.  I am not saying that's the case
> here with Storm 1.0.0, but wanted to provide this info 
> troubleshooting-wise.
>
> Thanks
>
> --John
>
> On Tue, Apr 19, 2016 at 3:26 PM, John Yost 
> wrote:
>
>> Hi Harsha,
>>
>> When the Storm 1.0.0 KafkaSpout (from the storm-kafka jar) attempts
>> to read from the Kafka 0.8.2.1 partition an IllegalArgumentException is
>> thrown, the root exception of which is as follows:
>>
>> at java.nio.Buffer.limit(Buffer.java:267)
>> at
>> kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:37)
>> at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:99)
>> at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:97)
>> at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>> at
>> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
>> at scala.collection.immutable.Range.foreach(Range.scala:141)
>> at
>> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
>> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
>> at kafka.api.TopicData$.readFrom(FetchResponse.scala:97)
>> at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
>> at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:168)
>> at
>> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
>> at
>> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
>> at scala.collection.immutable.Range.foreach(Range.scala:141)
>> at
>> scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
>> at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
>>
>> The corresponding source code in Kafka where the root exception is
>> thrown is bolded:
>>
>> object FetchResponsePartitionData {
>>   def readFrom(buffer: ByteBuffer): FetchResponsePartitionData = {
>> val error = buffer.getShort
>> val hw = buffer.getLong
>> val messageSetSize = buffer.getInt
>> val messageSetBuffer = buffer.slice()
>>  *   messageSetBuffer.limit(messageSetSize)*
>> buffer.position(buffer.position + messageSetSize)
>> new FetchResponsePartitionData(error, hw, new
>> ByteBufferMessageSet(messageSetBuffer))
>>   }
>>
>> I am using all the default KafkaConfig settings for the KafkaSpout
>> with the exception of startOffsetTime, so I don't *think* I have a
>> misconfiguration, but I may be wrong.
>>
>> Please confirm if there is anything I need to do config-wise to make
>> this work.
>>
>> Thanks
>>
>> --John
>>
>> On Sat, Apr 16, 2016 at 10:49 PM,  wrote:
>>
>>> Awesome, thanks Harsha!
>>>
>>> --John
>>>
>>> Sent from my iPhone
>>>
>>> > On Apr 16, 2016, at 1:28 PM, Harsha  wrote:
>>> >
>>> > John,
>>> > I think you are asking if you will be able to run
>>> 
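The root failure in the stack traces above can be reproduced in isolation: FetchResponsePartitionData.readFrom slices the response buffer and sets its limit to a size field read off the wire, and when client and broker disagree on the wire format that size field is effectively garbage, so ByteBuffer.limit() rejects any value larger than the slice's capacity. A minimal sketch (the helper name is made up, not Kafka code):

```java
import java.nio.ByteBuffer;

public class FetchSizeMismatch {
    // Mimics the failing step: slice the response buffer and apply a
    // messageSetSize read off the wire. limit() throws
    // IllegalArgumentException when the size exceeds the slice's capacity.
    public static String applySize(ByteBuffer response, int messageSetSize) {
        try {
            response.slice().limit(messageSetSize);
            return "ok";
        } catch (IllegalArgumentException e) {
            return "IllegalArgumentException";
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);
        System.out.println(applySize(buf, 32));       // ok
        System.out.println(applySize(buf, 1 << 20));  // IllegalArgumentException
    }
}
```

This is consistent with the version-mismatch diagnosis: a 0.9.x client parsing a 0.8.2.x broker response reads fields at the wrong offsets and ends up with a nonsensical size.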

Re: Firewall - what ports should be open?

2016-05-02 Thread Abhishek Agarwal
You may want to open up the logviewer port as well (on each supervisor
machine). If you are using DRPC (optional), the DRPC server port should be
opened up as well.

On Mon, May 2, 2016 at 2:11 PM, Joaquin Menchaca 
wrote:

> In AWS land, everything is locked off unless opened via a security group
> (SG).  I enabled the standard supervisor worker ports (6700-6703), the
> default Storm Nimbus port (tcp/6627), and the default UI port (tcp/8080),
> and of course have a healthy ZooKeeper (tcp/2181).  Does anything else
> need to be opened?
>
> --
>
> 是故勝兵先勝而後求戰,敗兵先戰而後求勝。
>



-- 
Regards,
Abhishek Agarwal
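Pulling the ports mentioned in this thread into one place, a storm.yaml-style checklist might look like the following. The values shown are the usual Storm defaults, but verify them against your own config before writing firewall rules.

```yaml
# Assumed storm.yaml excerpt: ports a locked-down environment typically
# needs open (defaults shown; check your actual config).
nimbus.thrift.port: 6627        # CLI / workers -> Nimbus
ui.port: 8080                   # browser -> Storm UI
logviewer.port: 8000            # UI log links -> each supervisor
drpc.port: 3772                 # external DRPC clients (only if DRPC is used)
drpc.invocations.port: 3773     # workers -> DRPC server (only if DRPC is used)
supervisor.slots.ports:         # inter-worker traffic on every supervisor
    - 6700
    - 6701
    - 6702
    - 6703
storm.zookeeper.port: 2181      # all daemons -> ZooKeeper
```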


Re: UI showing no topologies

2016-05-02 Thread Joaquin Menchaca
Ignore the last part.  There was a permission problem on local.dir, no
idea how ownership changed on that... weird.



On Mon, May 2, 2016 at 2:33 AM, Joaquin Menchaca  wrote:
> I was doing originally:
>
> $ storm jar storm-starter-0.10.0.jar storm.starter.ExclamationTopology
>
> But after finding some docs from Hortonworks, I ran it with a parameter,
> which it uses to register a topology name, I guess:
>
> $ storm jar /vagrant/topologies/storm-starter-0.10.0.jar
> storm.starter.ExclamationTopology Exclamation
>
> This worked on one of my clusters (using Vagrant), but when I ran the
> same command on AWS (both Ubuntu 14), the results were different:
>
> $ storm jar storm-starter-0.10.0.jar storm.starter.ExclamationTopology
> Exclamation
> Running: /usr/lib/jvm/java-8-oracle/bin/java -client -Ddaemon.name=
> -Dstorm.options= -Dstorm.home=/usr/lib/apache/storm/0.10.0
> -Dstorm.log.dir=/usr/lib/apache/storm/0.10.0/logs
> -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
> -Dstorm.conf.file= -cp
> /usr/lib/apache/storm/0.10.0/lib/reflectasm-1.07-shaded.jar:/usr/lib/apache/storm/0.10.0/lib/kryo-2.21.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-api-2.1.jar:/usr/lib/apache/storm/0.10.0/lib/clojure-1.6.0.jar:/usr/lib/apache/storm/0.10.0/lib/slf4j-api-1.7.7.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-slf4j-impl-2.1.jar:/usr/lib/apache/storm/0.10.0/lib/minlog-1.2.jar:/usr/lib/apache/storm/0.10.0/lib/asm-4.0.jar:/usr/lib/apache/storm/0.10.0/lib/hadoop-auth-2.4.0.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-core-2.1.jar:/usr/lib/apache/storm/0.10.0/lib/servlet-api-2.5.jar:/usr/lib/apache/storm/0.10.0/lib/storm-core-0.10.0.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/usr/lib/apache/storm/0.10.0/lib/disruptor-2.10.4.jar:storm-starter-0.10.0.jar:/usr/lib/apache/storm/0.10.0/conf:/usr/lib/apache/storm/0.10.0/bin
> -Dstorm.jar=storm-starter-0.10.0.jar storm.starter.ExclamationTopology
> Exclamation
> 726  [main] INFO  b.s.u.Utils - Using defaults.yaml from resources
> 848  [main] INFO  b.s.u.Utils - Using storm.yaml from resources
> 919  [main] INFO  b.s.u.Utils - Using defaults.yaml from resources
> 940  [main] INFO  b.s.u.Utils - Using storm.yaml from resources
> 948  [main] INFO  b.s.StormSubmitter - Generated ZooKeeper secret
> payload for MD5-digest: -6434592996563726871:-7362939470144077130
> 950  [main] INFO  b.s.s.a.AuthUtils - Got AutoCreds []
> 965  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The
> baseSleepTimeMs [2000] the maxSleepTimeMs [6] the maxRetries [5]
> 994  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The
> baseSleepTimeMs [2000] the maxSleepTimeMs [6] the maxRetries [5]
> 1017 [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The
> baseSleepTimeMs [2000] the maxSleepTimeMs [6] the maxRetries [5]
> Exception in thread "main" java.lang.RuntimeException:
> org.apache.thrift7.transport.TTransportException
> at backtype.storm.StormSubmitter.submitJarAs(StormSubmitter.java:399)
> at backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:229)
> at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:271)
> at 
> backtype.storm.StormSubmitter.submitTopologyWithProgressBar(StormSubmitter.java:307)
> at 
> backtype.storm.StormSubmitter.submitTopologyWithProgressBar(StormSubmitter.java:288)
> at storm.starter.ExclamationTopology.main(ExclamationTopology.java:76)
> Caused by: org.apache.thrift7.transport.TTransportException
> at 
> org.apache.thrift7.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
> at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86)
> at 
> org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
> at 
> org.apache.thrift7.transport.TFramedTransport.read(TFramedTransport.java:101)
> at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86)
> at 
> org.apache.thrift7.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
> at 
> org.apache.thrift7.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
> at 
> org.apache.thrift7.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
> at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:69)
> at 
> backtype.storm.generated.Nimbus$Client.recv_beginFileUpload(Nimbus.java:420)
> at backtype.storm.generated.Nimbus$Client.beginFileUpload(Nimbus.java:408)
> at backtype.storm.StormSubmitter.submitJarAs(StormSubmitter.java:370)
> ... 5 more
>
> On Mon, May 2, 2016 at 2:25 AM, Matthew Lowe  wrote:
>> In your topology code, what call are you using to deploy to storm?
>>
>> Best Regards
>> Matthew Lowe
>>
>>> On 02 May 2016, at 10:43, Joaquin Menchaca  wrote:
>>>
>>> But I seem to be running topologies only on nimbus.
>>>
>>> The ui is not showing anything under Topology Summary.   It says "No
>>> data available in 

Re: Is Storm 1.0.0 compatible with Kafka 0.8.2.x?

2016-05-02 Thread Abhishek Agarwal
John, I think you have hit it right. I started using storm-kafka 1.0 with
kafka server 0.8.2.1 and started running into all sorts of issues including
the one you pointed out. Also the 0.9 documentation clearly states that
upgraded clients will not be compatible with older Kafka versions.
http://kafka.apache.org/090/documentation.html#upgrade

On Wed, Apr 20, 2016 at 5:42 PM, John Yost  wrote:

> Argh, fat fingers... I am attempting to write to Kafka 0.8.2.1 from Storm
> 1.0.0, which has a dependency upon Kafka 0.9.0.1.
>
> @Abhishek -> interesting you are seeing the same exception for Storm
> 0.10.0 because that has a dependency upon Kafka 0.8.1.1.
>
> On Wed, Apr 20, 2016 at 8:06 AM, John Yost  wrote:
>
>> Oh, gotcha, okay, will do. BTW, here's the link I failed to provide the
>> first time: https://github.com/confluentinc/examples/issues/15
>>
>> --John
>>
>> On Wed, Apr 20, 2016 at 7:44 AM, Abhishek Agarwal 
>> wrote:
>>
>>> @John -
>>> can you file a JIRA for this? I doubt it is related to the 1.0.0 version in
>>> particular. I have run into "IllegalArgumentExceptions" in KafkaSpout
>>> (0.10.0).
>>>
>>> On Wed, Apr 20, 2016 at 4:44 PM, John Yost  wrote:
>>>
 Also, I found this link that indicates the exception I reported
 yesterday can be symptomatic of a mismatch between the client and broker
 where the client is one version newer.  I am not saying that's the case
 here with Storm 1.0.0, but wanted to provide this info 
 troubleshooting-wise.

 Thanks

 --John

 On Tue, Apr 19, 2016 at 3:26 PM, John Yost 
 wrote:

> Hi Harsha,
>
> When the Storm 1.0.0 KafkaSpout (from the storm-kafka jar) attempts to
> read from the Kafka 0.8.2.1 partition an IllegalArgumentException is
> thrown,
> the root exception of which is as follows:
>
> at java.nio.Buffer.limit(Buffer.java:267)
> at
> kafka.api.FetchResponsePartitionData$.readFrom(FetchResponse.scala:37)
> at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:99)
> at kafka.api.TopicData$$anonfun$1.apply(FetchResponse.scala:97)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Range.foreach(Range.scala:141)
> at
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at kafka.api.TopicData$.readFrom(FetchResponse.scala:97)
> at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:169)
> at kafka.api.FetchResponse$$anonfun$4.apply(FetchResponse.scala:168)
> at
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
> at
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:251)
> at scala.collection.immutable.Range.foreach(Range.scala:141)
> at
> scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:251)
> at scala.collection.AbstractTraversable.flatMap(Traversable.scala:105)
>
> The corresponding source code in Kafka where the root exception is
> thrown is bolded:
>
> object FetchResponsePartitionData {
>   def readFrom(buffer: ByteBuffer): FetchResponsePartitionData = {
> val error = buffer.getShort
> val hw = buffer.getLong
> val messageSetSize = buffer.getInt
> val messageSetBuffer = buffer.slice()
>  *   messageSetBuffer.limit(messageSetSize)*
> buffer.position(buffer.position + messageSetSize)
> new FetchResponsePartitionData(error, hw, new
> ByteBufferMessageSet(messageSetBuffer))
>   }
>
> I am using all the default KafkaConfig settings for the KafkaSpout
> with the exception of startOffsetTime, so I don't *think* I have a
> misconfiguration, but I may be wrong.
>
> Please confirm if there is anything I need to do config-wise to make
> this work.
>
> Thanks
>
> --John
>
> On Sat, Apr 16, 2016 at 10:49 PM,  wrote:
>
>> Awesome, thanks Harsha!
>>
>> --John
>>
>> Sent from my iPhone
>>
>> > On Apr 16, 2016, at 1:28 PM, Harsha  wrote:
>> >
>> > John,
>> > I think you are asking if you will be able to run 0.8.2
>> kafka consumer in Storm 1.0.0. Yes, we are shipping storm-kafka-client
>> which uses the new consumer api in kafka 0.9.0.1 but storm 1.0.0 still
>> ships with storm-kafka which uses older consumer api which can work with
>> all versions of kafka including 0.9.0.1.
>> >
>> > "I checked out the v1.0.0 tag, changed the kafka version to
>> 0.8.2.1, and I am getting compile errors in 

Re: UI showing no topologies

2016-05-02 Thread Joaquin Menchaca
I was doing originally:

$ storm jar storm-starter-0.10.0.jar storm.starter.ExclamationTopology

But after finding some docs from Hortonworks, I ran it with a parameter,
which it uses to register a topology name, I guess:

$ storm jar /vagrant/topologies/storm-starter-0.10.0.jar
storm.starter.ExclamationTopology Exclamation

This worked on one of my clusters (using Vagrant), but when I ran the
same command on AWS (both Ubuntu 14), the results were different:

$ storm jar storm-starter-0.10.0.jar storm.starter.ExclamationTopology
Exclamation
Running: /usr/lib/jvm/java-8-oracle/bin/java -client -Ddaemon.name=
-Dstorm.options= -Dstorm.home=/usr/lib/apache/storm/0.10.0
-Dstorm.log.dir=/usr/lib/apache/storm/0.10.0/logs
-Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
-Dstorm.conf.file= -cp
/usr/lib/apache/storm/0.10.0/lib/reflectasm-1.07-shaded.jar:/usr/lib/apache/storm/0.10.0/lib/kryo-2.21.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-api-2.1.jar:/usr/lib/apache/storm/0.10.0/lib/clojure-1.6.0.jar:/usr/lib/apache/storm/0.10.0/lib/slf4j-api-1.7.7.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-slf4j-impl-2.1.jar:/usr/lib/apache/storm/0.10.0/lib/minlog-1.2.jar:/usr/lib/apache/storm/0.10.0/lib/asm-4.0.jar:/usr/lib/apache/storm/0.10.0/lib/hadoop-auth-2.4.0.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-core-2.1.jar:/usr/lib/apache/storm/0.10.0/lib/servlet-api-2.5.jar:/usr/lib/apache/storm/0.10.0/lib/storm-core-0.10.0.jar:/usr/lib/apache/storm/0.10.0/lib/log4j-over-slf4j-1.6.6.jar:/usr/lib/apache/storm/0.10.0/lib/disruptor-2.10.4.jar:storm-starter-0.10.0.jar:/usr/lib/apache/storm/0.10.0/conf:/usr/lib/apache/storm/0.10.0/bin
-Dstorm.jar=storm-starter-0.10.0.jar storm.starter.ExclamationTopology
Exclamation
726  [main] INFO  b.s.u.Utils - Using defaults.yaml from resources
848  [main] INFO  b.s.u.Utils - Using storm.yaml from resources
919  [main] INFO  b.s.u.Utils - Using defaults.yaml from resources
940  [main] INFO  b.s.u.Utils - Using storm.yaml from resources
948  [main] INFO  b.s.StormSubmitter - Generated ZooKeeper secret
payload for MD5-digest: -6434592996563726871:-7362939470144077130
950  [main] INFO  b.s.s.a.AuthUtils - Got AutoCreds []
965  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The
baseSleepTimeMs [2000] the maxSleepTimeMs [6] the maxRetries [5]
994  [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The
baseSleepTimeMs [2000] the maxSleepTimeMs [6] the maxRetries [5]
1017 [main] INFO  b.s.u.StormBoundedExponentialBackoffRetry - The
baseSleepTimeMs [2000] the maxSleepTimeMs [6] the maxRetries [5]
Exception in thread "main" java.lang.RuntimeException:
org.apache.thrift7.transport.TTransportException
at backtype.storm.StormSubmitter.submitJarAs(StormSubmitter.java:399)
at backtype.storm.StormSubmitter.submitTopologyAs(StormSubmitter.java:229)
at backtype.storm.StormSubmitter.submitTopology(StormSubmitter.java:271)
at 
backtype.storm.StormSubmitter.submitTopologyWithProgressBar(StormSubmitter.java:307)
at 
backtype.storm.StormSubmitter.submitTopologyWithProgressBar(StormSubmitter.java:288)
at storm.starter.ExclamationTopology.main(ExclamationTopology.java:76)
Caused by: org.apache.thrift7.transport.TTransportException
at 
org.apache.thrift7.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86)
at 
org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
at 
org.apache.thrift7.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86)
at 
org.apache.thrift7.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
at 
org.apache.thrift7.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
at 
org.apache.thrift7.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:69)
at 
backtype.storm.generated.Nimbus$Client.recv_beginFileUpload(Nimbus.java:420)
at backtype.storm.generated.Nimbus$Client.beginFileUpload(Nimbus.java:408)
at backtype.storm.StormSubmitter.submitJarAs(StormSubmitter.java:370)
... 5 more

On Mon, May 2, 2016 at 2:25 AM, Matthew Lowe  wrote:
> In your topology code, what call are you using to deploy to storm?
>
> Best Regards
> Matthew Lowe
>
>> On 02 May 2016, at 10:43, Joaquin Menchaca  wrote:
>>
>> But I seem to be running topologies only on nimbus.
>>
>> The ui is not showing anything under Topology Summary.   It says "No
>> data available in table".
>>
>> It does show three systems in Supervisor Summary.
>>
>> I ran using storm starter topologies: "RollingTopWords",
>> "WordCountTopology", and "ExclamationTopology".
>>
>> Still says no data.
>>
>> --
>>
>> 是故勝兵先勝而後求戰,敗兵先戰而後求勝。



-- 

是故勝兵先勝而後求戰,敗兵先戰而後求勝。


Re: UI showing no topologies

2016-05-02 Thread Matthew Lowe
In your topology code, what call are you using to deploy to storm?

Best Regards
Matthew Lowe

> On 02 May 2016, at 10:43, Joaquin Menchaca  wrote:
> 
> But I seem to be running topologies only on nimbus.
> 
> The ui is not showing anything under Topology Summary.   It says "No
> data available in table".
> 
> It does show three systems in Supervisor Summary.
> 
> I ran using storm starter topologies: "RollingTopWords",
> "WordCountTopology", and "ExclamationTopology".
> 
> Still says no data.
> 
> -- 
> 
> 是故勝兵先勝而後求戰,敗兵先戰而後求勝。


Any Docs on Topologies (more than Readme)

2016-05-02 Thread Joaquin Menchaca
The readme mentions options, for things like the "production-topology remote"
parameters of RollingTopWords.

But the information is sparse; there's little to no documentation on
these topologies.  Are there any docs anywhere on them, some of their
options, etc., other than going through the code?  (Ops guy here)

Starter Topologies* need lovin'... :)

https://github.com/apache/storm/tree/master/examples/storm-starter/src/jvm/org/apache/storm/starter

-- 

是故勝兵先勝而後求戰,敗兵先戰而後求勝。


Re: id for bolt tasks in Storm UI

2016-05-02 Thread Serega Sheypak
Hi Yury! Thanks for your reply, now it's clear to me.

2016-04-29 14:46 GMT+02:00 Yury Ruchin :

> Hi there Serega,
>
> What you see in the UI are probably compound executor "ID"s. They are
> actually ranges of task IDs assigned to respective executors. For example,
> [26-27] means executor with tasks 26 and 27 assigned to it. Task can
> determine its ID via TopologyContext.getThisTaskId() inside the component
> code. On the MetricsConsumer side, each DataPoint contains srcTaskId field.
> Those can be used further to match task-provided data against executor IDs.
> To avoid parsing executor ID strings, you may want to use Nimbus Thrift API
> to obtain ExecutorInfo structures that already have task_start and task_end
> as separate fields.
>
> Regards,
> Yury
>
> 2016-04-26 22:55 GMT+03:00 Serega Sheypak :
>
>> Hi, there is an id for each task displayed in UI. Id values are: [26-27]
>> or [44-45]. I want to publish application-specific metrics to Influx, I
>> want to publish the same id in metric name, so I can match basic Storm
>> metrics with my app metrics and find bottlenecks/skews/e.t.c
>>
>> What API shoud I use to get the same combnation of ids?
>>
>
>
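As a fallback when the Nimbus Thrift API isn't convenient, the compound executor IDs shown in the UI can be parsed directly. This is an illustrative sketch (the class name is made up); as Yury notes, ExecutorInfo.task_start/task_end from the Thrift API is the cleaner source.

```java
public class ExecutorIdParser {
    // Parses a UI executor id like "[26-27]" into {startTaskId, endTaskId}.
    public static int[] parse(String executorId) {
        String[] parts = executorId.replaceAll("[\\[\\]\\s]", "").split("-");
        return new int[] { Integer.parseInt(parts[0]), Integer.parseInt(parts[1]) };
    }

    // True if the given task id falls inside the executor's task range,
    // which is how app metrics (keyed by srcTaskId) can be matched back
    // to the executor ids shown in the UI.
    public static boolean contains(String executorId, int taskId) {
        int[] range = parse(executorId);
        return taskId >= range[0] && taskId <= range[1];
    }

    public static void main(String[] args) {
        int[] r = parse("[26-27]");
        System.out.println(r[0] + ".." + r[1]);       // 26..27
        System.out.println(contains("[44-45]", 44));  // true
    }
}
```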


Re: Cannot launch Supervisor, missing unknown file

2016-05-02 Thread Joaquin Menchaca
I'll bring up another scratch system to see if I can reproduce it.
I'm unfamiliar with Python; what would I need to do?  I installed Java
1.8, then ran 'bin/storm supervisor &' from the Storm home directory.

On Sun, May 1, 2016 at 11:47 PM, Erik Weathers  wrote:
> Do you know what precise package version of Python you were running before
> the upgrade?  It would be nice to be able to look up the exception
> backtrace lines of code.
>
> - Erik
>
>
> On Sunday, May 1, 2016, Joaquin Menchaca  wrote:
>>
>> I have no idea why this was happening.  It was on an Ubuntu 14.04 image,
>> but after I did an apt-get upgrade, everything worked.  I was curious if
>> anyone has come across this, but if not, I'll leave it as some bizarre anomaly.
>>
>> On Fri, Apr 29, 2016 at 4:59 PM, Joaquin Menchaca 
>> wrote:
>>>
>>> And it says missing file. This is from the 0.10.0 tarball.
>>>
>>> I tried a  bin/storm supervisor &
>>>
>>> $ Traceback (most recent call last):
>>>   File "bin/storm.py", line 568, in <module>
>>> main()
>>>   File "bin/storm.py", line 565, in main
>>> (COMMANDS.get(COMMAND, unknown_command))(*ARGS)
>>>   File "bin/storm.py", line 377, in supervisor
>>> jvmopts = parse_args(confvalue("supervisor.childopts", cppaths)) + [
>>>   File "bin/storm.py", line 137, in confvalue
>>> p = sub.Popen(command, stdout=sub.PIPE)
>>>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
>>> errread, errwrite)
>>>   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
>>> raise child_exception
>>> OSError: [Errno 2] No such file or directory
>>>
>>> [1]+  Exit 1  bin/storm supervisor
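>>>
>>> [Editor's note] For context on the traceback above: subprocess.Popen raises
>>> OSError errno 2 (ENOENT) when the program it is asked to execute cannot be
>>> found on the PATH; here that is most likely the java command that
>>> storm.py's confvalue launches. A minimal sketch reproducing that failure
>>> mode (the binary name is made up):

```python
import subprocess

try:
    # Deliberately run a binary that does not exist on the PATH.
    subprocess.Popen(["no-such-binary-for-demo"], stdout=subprocess.PIPE)
except OSError as e:
    # Errno 2 (ENOENT): the executable itself was not found, which is
    # the same error as in the supervisor launch above.
    print(e.errno)  # 2
```

>>> So verifying that java is on the PATH (e.g. with 'which java') would be a
>>> reasonable first check on such a box.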
>>>
>>>
>>> --
>>>
>>> 是故勝兵先勝而後求戰,敗兵先戰而後求勝。
>>
>>
>>
>>
>> --
>>
>> 是故勝兵先勝而後求戰,敗兵先戰而後求勝。



-- 

是故勝兵先勝而後求戰,敗兵先戰而後求勝。


Re: Cannot launch Supervisor, missing unknown file

2016-05-02 Thread Erik Weathers
Do you know what precise package version of Python you were running before
the upgrade?  It would be nice to be able to look up the lines of code in
the exception backtrace.

- Erik

On Sunday, May 1, 2016, Joaquin Menchaca  wrote:

> I have no idea why this was happening.  It was on an Ubuntu 14.04 image,
> but after I did an apt-get upgrade, everything worked.  I was curious whether
> anyone had come across this; if not, leave it as some bizarre anomaly.
>
> On Fri, Apr 29, 2016 at 4:59 PM, Joaquin Menchaca wrote:
>
>> And it says a file is missing. This is from the 0.10.0 tarball.
>>
>> I tried a
>> bin/storm supervisor &
>> $ Traceback (most recent call last):
>>   File "bin/storm.py", line 568, in <module>
>> main()
>>   File "bin/storm.py", line 565, in main
>> (COMMANDS.get(COMMAND, unknown_command))(*ARGS)
>>   File "bin/storm.py", line 377, in supervisor
>> jvmopts = parse_args(confvalue("supervisor.childopts", cppaths)) + [
>>   File "bin/storm.py", line 137, in confvalue
>> p = sub.Popen(command, stdout=sub.PIPE)
>>   File "/usr/lib/python2.7/subprocess.py", line 710, in __init__
>> errread, errwrite)
>>   File "/usr/lib/python2.7/subprocess.py", line 1327, in _execute_child
>> raise child_exception
>> OSError: [Errno 2] No such file or directory
>>
>> [1]+  Exit 1  bin/storm supervisor
>>
>>
>> --
>>
>> 是故勝兵先勝而後求戰,敗兵先戰而後求勝。
>>
>
>
>
> --
>
> 是故勝兵先勝而後求戰,敗兵先戰而後求勝。
>


Re: UI problem

2016-05-02 Thread Abhishek Agarwal
You might have to uninstall the Chrome extension that is hindering the
display.

On Mon, May 2, 2016 at 10:04 AM, Sai Dilip Reddy Kiralam <
dkira...@aadhya-analytics.com> wrote:

> Yes! The UI is working in private mode, but when I use a public IP, nothing
> is shown, or it shows only the UI with headers.
>
>
>
> *Best regards,*
>
> *K.Sai Dilip Reddy.*
>
> On Sat, Apr 30, 2016 at 1:34 AM, Abhishek Agarwal 
> wrote:
>
>> Can you try opening the UI in your browser's incognito mode?
>>
>> On Fri, Apr 29, 2016 at 12:24 PM, Sai Dilip Reddy Kiralam <
>> dkira...@aadhya-analytics.com> wrote:
>>
>>> Hi,
>>>
>>> Looks like this error is due to the browser. I will check.
>>>
>>> Thank you
>>>
>>>
>>>
>>>
>>> *Best regards,*
>>>
>>> *K.Sai Dilip Reddy.*
>>>
>>> On Fri, Apr 29, 2016 at 11:26 AM, Jungtaek Lim 
>>> wrote:
>>>
 Hi,

 Could you open the developer tools in your browser and check whether any
 API calls are failing when the UI page loads?

 Jungtaek Lim (HeartSaVioR)

 2016년 4월 29일 (금) 오후 2:47, Sai Dilip Reddy Kiralam <
 dkira...@aadhya-analytics.com>님이 작성:

>
> Hello,
>
> I installed Storm 0.10.0 on an AWS (Ubuntu) instance. The Nimbus,
> supervisor, and UI services are running fine. I'm able to run topologies as
> well, but the UI is not showing any summary. I don't know where I'm going
> wrong.
>
> Below I have attached screenshots of my UI and of the Nimbus and UI logs.
>
>
>
>
> *Best regards,*
>
> *K.Sai Dilip Reddy.*
>

>>>
>>
>>
>> --
>> Regards,
>> Abhishek Agarwal
>>
>>
>


-- 
Regards,
Abhishek Agarwal