Please check the details of each spout (click the spout); it shows more data
for the executors, and you may find something useful there. Please post a screenshot.
From: Navin Ipe [mailto:navin@searchlighthealth.com]
Sent: June 1, 2016 13:27
To: user@storm.apache.org
Subject: [EXTERNAL] Re: Common for
@Huo: Yes, so the fact that 20 tuples were acked in spouts 1, 2 and 3 should
mean that the emits would also be shown as 20. But the emits are being shown
as 0.
One person says the UI shows only 5% of the data:
http://stackoverflow.com/a/36204192/453673
But still, it is a bit odd that the
Thank you. Is there any other optimization method, such as modifying the Storm
config? I set TOPOLOGY_DISRUPTOR_BATCH_SIZE and
TOPOLOGY_DISRUPTOR_BATCH_TIMEOUT_MILLIS to 1; it seems better only when I also
set the worker count to 1.
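For reference, a minimal sketch of setting those values when submitting a
topology, assuming Storm 1.x (org.apache.storm); the topology name and builder
contents are placeholders:

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class LowLatencySubmit {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... set spouts and bolts here ...

        Config conf = new Config();
        // A batch size/timeout of 1 effectively disables disruptor batching,
        // trading throughput for lower per-tuple latency.
        conf.put(Config.TOPOLOGY_DISRUPTOR_BATCH_SIZE, 1);
        conf.put(Config.TOPOLOGY_DISRUPTOR_BATCH_TIMEOUT_MILLIS, 1);
        conf.setNumWorkers(1); // the improvement was reported only with one worker

        StormSubmitter.submitTopology("latency-test", conf, builder.createTopology());
    }
}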
From: Kevin Conaway
It only displays counts once they reach at least 20, and they increase in
increments of 20.
[inline screenshot omitted]
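The steps of 20 match Storm's default metrics sampling rate of 5% (each
sampled tuple counts as 20). A minimal sketch of making the UI counts exact,
assuming Storm 1.x:

import org.apache.storm.Config;

// Sample every tuple so UI counts are exact rather than extrapolated
// in steps of 20. Note: full sampling adds overhead on hot code paths.
Config conf = new Config();
conf.put(Config.TOPOLOGY_STATS_SAMPLE_RATE, 1.0);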
Hao Wang | Head, Software R&D Dept, 3M Cogent Beijing
Traffic Safety and Security Division
Suite 708, Ideal Plaza, No.58, West Road North 4th Ring rd, Haidian District |
Beijing, 100080
Thanks a lot for your response, lujinhong.
I have built a Storm topology using the storm-kafka integration, but the bolt
immediately downstream of the Kafka spout is not processing the tuples in
order, which causes many records to be ignored without processing. Any help on
what configuration of the Storm bolt I
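One common approach (not necessarily the fix here) is to route tuples by key
so that per-key order is preserved. A sketch, assuming Storm 1.x; the
component names, the "key" field, and ProcessBolt are placeholders:

import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

// Kafka only guarantees order within a partition, and shuffle grouping
// spreads tuples across bolt tasks nondeterministically. A fields grouping
// sends all tuples with the same key to the same bolt task, preserving
// per-key order.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafka-spout", kafkaSpout, 4);       // kafkaSpout built elsewhere
builder.setBolt("process-bolt", new ProcessBolt(), 4) // ProcessBolt is hypothetical
       .fieldsGrouping("kafka-spout", new Fields("key"));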
unsubscribe
Hi Leon,
This isn't an advocacy piece per se, but this analysis by several members of
the Storm community may be helpful. For a particular use case you can compare
performance and then assess whether the features, user-friendliness, or API of
a particular framework make it worth switching.
Hi,
We are getting the following error on each supervisor where the topology is
scheduled to run:
java.lang.StackOverflowError: null
at
java.io.ObjectInputStream$BlockDataInputStream.readByte(ObjectInputStream.java:2774)
~[na:1.8.0_65]
at
Thought you all might be interested.
I have now got capacity monitoring working with AWS CloudWatch. These are 2
bolts within the same topology:
#! /bin/bash
API_KEY='XX'
# get all the topology IDs via the Storm UI REST API
# (the UI host below is an assumption; adjust to your cluster; requires jq)
TOPOLOGY_IDS=`curl -s http://localhost:8080/api/v1/topology/summary | jq -r '.topologies[].id'`
Try using localOrShuffle grouping. Storm will attempt to pass messages
directly to the next component within the same JVM when possible.
On Tuesday, May 31, 2016, 林海涛 (IT Department, Trading Cloud Technology R&D Group) wrote:
> Hello.
> I ran a test with a simple topology to measure the communication latency of
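A sketch of that wiring, assuming Storm 1.x; TimestampSpout and LatencyBolt
are hypothetical classes matching the probe described in the quoted message
(sketched after it below):

import org.apache.storm.topology.TopologyBuilder;

// localOrShuffleGrouping prefers tasks in the same worker JVM, avoiding
// serialization and a network hop when a local target is available.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("ts-spout", new TimestampSpout(), 2);
builder.setBolt("latency-bolt", new LatencyBolt(), 2)
       .localOrShuffleGrouping("ts-spout");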
Hello.
I ran a test with a simple topology to measure the communication latency
between a spout and a bolt. It simply emits the current nano timestamp from a
spout, and a bolt prints the time difference when it receives the tuple.
I deployed my Storm cluster on my own machine using Docker containers (one
nimbus, one
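A minimal sketch of the probe described above, assuming Storm 1.x; class and
field names are illustrative. Note that System.nanoTime() is only comparable
within the same machine, which fits the single-host Docker setup:

import java.util.Map;
import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Spout: emits the current nano timestamp each time it is polled.
public class TimestampSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;
    public void open(Map conf, TopologyContext ctx, SpoutOutputCollector collector) {
        this.collector = collector;
    }
    public void nextTuple() {
        collector.emit(new Values(System.nanoTime()));
    }
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("ts"));
    }
}

// Bolt: prints how long the tuple took to arrive.
class LatencyBolt extends BaseRichBolt {
    public void prepare(Map conf, TopologyContext ctx, OutputCollector collector) {}
    public void execute(Tuple tuple) {
        long sentNs = tuple.getLong(0);
        System.out.printf("latency: %.3f ms%n", (System.nanoTime() - sentNs) / 1e6);
    }
    public void declareOutputFields(OutputFieldsDeclarer declarer) {}
}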
Hello!
I would like to ask the community the following:
1. Are you using the G1 garbage collector for your workers/supervisors in
production?
2. Have you observed any improvement from switching to this GC?
3. What JVM options are you using, and which have been a good fit for you? (A
sketch of where such options go follows below.)
Thank you in
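For question 3, a minimal sketch of where such options go, assuming Storm 1.x;
the specific flags and heap size are illustrative assumptions, not
recommendations from this thread:

import org.apache.storm.Config;

// Worker JVMs are launched with topology.worker.childopts appended,
// so GC flags for workers can be set per topology like this.
Config conf = new Config();
conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS,
        "-Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=100");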
Yes, definitely. If I change the URL a little bit, the server returns 'page
not found', which shows that the logviewer service is running.
Hao Wang | Head, Software R&D Dept, 3M Cogent Beijing
Traffic Safety and Security Division
Suite 708, Ideal Plaza, No.58, West Road North 4th Ring rd,
Unfortunately you cannot unsubscribe by emailing the actual list:
- https://storm.apache.org/community.html
==> Just send an email to:
- user-unsubscr...@storm.apache.org
On Tue, May 31, 2016 at 12:15 AM, Fang Chen wrote:
> unsubscribe
>
Thank you so much for your help!
The "profile" is a kind of step function, so I would store the values in an
array. My key would be the program ID, and I would also store the start time
and the array with the per-minute values in Redis. Is it not possible to
remove the key from Redis as
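A minimal sketch of that layout using the Jedis client; the key prefix, field
names, and TTL are assumptions. Keys can be deleted explicitly with DEL or
left to expire via EXPIRE:

import redis.clients.jedis.Jedis;

public class ProfileStore {
    // Stores the start time plus one value per minute under the program's key.
    public static void store(String programId, long startMillis, double[] perMinute) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            String key = "profile:" + programId;
            jedis.hset(key, "startTime", String.valueOf(startMillis));
            for (int m = 0; m < perMinute.length; m++) {
                jedis.hset(key, "m" + m, String.valueOf(perMinute[m]));
            }
            jedis.expire(key, 24 * 60 * 60); // auto-expire after a day
        }
    }

    // Removes a profile's key as soon as it is no longer needed.
    public static void remove(String programId) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.del("profile:" + programId);
        }
    }
}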
unsubscribe
Hi All,
I have successfully run our topology on Storm 1.0.1, and everything works
fine, but I cannot view the worker logs; the server always returns an error,
like the one below:
http://ant61:8000/log?file=BioLive-3-1464656958%5C6703%5Cworker.log
HTTP ERROR: 500
Problem accessing /log. Reason: