Hi,
I wonder if anyone who has used streamparse (for writing Storm topologies in Python) has ever run into the issue I am facing right now.
Unfortunately, I don't see any error in the streamparse log, but I do see the exception below in the Storm worker log.
The serializer exception section seems to be the culprit.
I think on 1.x you can work around it by directly accessing the Storm
metrics registry. That's what TopologyContext would do when you use it
https://github.com/apache/storm/blob/2ade13055315b69980f228ed786c6a76efb695a7/storm-core/src/jvm/org/apache/storm/task/TopologyContext.java#L397
Is there any way to use TopologyContext in the Trident StateFactory?
Also, I tried using the Dropwizard metrics JMX reporter directly, which worked in a single-node setup, but when I deployed to the cluster the metrics from my own reporter were not visible.
On Thu, Apr 4, 2019 at 12:39 PM Stig
There might be an issue with the API for StateFactory here. You need a
TopologyContext to use the new metrics API, but makeState doesn't take one.
Others can correct me if I'm wrong, but otherwise feel free to file an issue at https://issues.apache.org/jira.
On Thu, Apr 4, 2019 at
As far as I can tell, the JMX reporting is only hooked up to the metrics v2
API. You're using metrics v1. Could you try to register your metric with
the new metrics system? You can find documentation at
https://storm.apache.org/releases/2.0.0-SNAPSHOT/metrics_v2.html.
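For what it's worth, a minimal sketch of registering a bolt metric through the v2 API; the bolt class and metric name here are invented for illustration, and this assumes a Storm 2.x TopologyContext:

```java
import java.util.Map;

import com.codahale.metrics.Counter;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class CountingBolt extends BaseRichBolt {
    private transient Counter processed;
    private OutputCollector collector;

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
        // metrics v2: the context registers the metric in Storm's Dropwizard
        // registry, so the configured reporters (e.g. JMX) pick it up.
        this.processed = context.registerCounter("processed-tuples");
    }

    @Override
    public void execute(Tuple tuple) {
        processed.inc();
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output stream
    }
}
```

The key point versus the v1 ReducedMetric approach is that nothing is registered by hand with JMX; the reporters configured in storm.yaml handle publication.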
On Wed, Apr 3, 2019 at
*Storm version*: 1.0.3
I'm registering a custom metric in the makeState of the StateFactory implementation:

@Override
public State makeState(final Map conf, final IMetricsContext metricsContext,
                       final int partitionIndex, final int numPartitions) {
    ReducedMetric reducedMetric = new
-- Forwarded message --
From: 张博 <jyzhan...@gmail.com>
Date: 2017-09-12 13:29 GMT+08:00
Subject: I need help to run a topology
To: user@storm.apache.org
Hi,
I used Spring Boot to develop a topology, but when I package it into a jar and run the jar, I get an error.
The details:
-- Forwarded message --
From: Himabindu Koppula <himabindu.hadoopdevelo...@gmail.com>
Date: Fri, Jul 28, 2017 at 5:15 PM
Subject: need help on running storm examples
To: user@storm.apache.org
Hi Team,
I am new to Storm. I am using storm-core and storm-hbase jar files of version 1.0.1. I am using Maven 3.5.
Your supervisor's local host name is not getting resolved. You can
override this by configuring storm.local.hostname with a valid hostname.
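For example, a minimal storm.yaml override on the affected supervisor might look like this (the hostname below is a placeholder):

```yaml
# Advertise an explicitly resolvable name instead of the
# local host name the OS reports, which fails DNS resolution here.
storm.local.hostname: "supervisor-01.example.com"
```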
Thanks,
Satish.
On Tue, Jul 19, 2016 at 12:22 AM, Joaquin Menchaca
wrote:
> Hi.
>
> Anyone have any suggestions how to debug this
Hi.
Does anyone have suggestions on how to debug this and find out what is happening? Are there any troubleshooting tools I can use to test the functionality of the Storm cluster?
3899 [main-EventThread] INFO o.a.s.s.o.a.c.f.s.ConnectionStateManager - State change: CONNECTED
3908 [main] INFO
Hi!
In the Storm UI, please have a look at the value you get for Capacity (last 10m) for your bolt. If this value is close to 1, then the bolt is “at capacity” and is a bottleneck in your topology. Address such bottlenecks by increasing the parallelism of the “at-capacity” bolts.
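As a back-of-the-envelope check, the capacity number is roughly the executed tuple count times the average execute latency, divided by the length of the window; a small sketch (the traffic figures below are invented):

```python
def bolt_capacity(executed, execute_latency_ms, window_ms=600_000):
    """Approximate the Storm UI 'Capacity (last 10m)' value: the
    fraction of the window the bolt spent executing tuples."""
    return (executed * execute_latency_ms) / window_ms

# e.g. 500,000 tuples at 1.1 ms average latency over a 10-minute window
print(round(bolt_capacity(500_000, 1.1), 2))  # prints 0.92 -> near saturation
```

Anything much above ~0.8 is usually worth more parallelism.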
Hi All,
I'm running a topology on my local machine. My bolts parse JSON data and insert the parsed fields into a DB. One of the bolts is turning red. What does that mean?
Does it indicate that the flow of tuples is too high for that insert bolt? If so, what should I do? Do I need to increase the
If your workload does not saturate a single machine, it will of course be more efficient to run within a single worker, as you avoid inter-JVM communication.
As long as the CPU, memory, and network of a single machine are not completely utilized, you will not benefit from multiple workers from a
Thank you.
I need this because I am developing a custom scheduler, and for that I need topologies that perform well with two worker processes instead of one. It is becoming tough to saturate a single worker.
On Saturday, 13 February 2016, Matthias J. Sax wrote:
Thanks for the reply. After some thinking, I figured that if I shorten the worker's incoming queue, I can create a scenario where using more than one worker might result in better performance. Any thoughts on how to do that?
The property topology.receiver.buffer.size should be the
Thanks. Actually, I have to create a scenario where two workers perform better than one worker. But in reality, the topology with a single worker performs considerably better.
I am sending CSV lines to Kafka (5 partitions) and reading them from a topology with a Kafka spout (parallelism hint 5).
Any
I meant the topology parameter numWorkers.
On Thursday, 11 February 2016, Rudraneel chakraborty <
rudraneel.chakrabo...@gmail.com> wrote:
> More specifically, I have seen that a topology performs better if it is
> assigned a single worker compared to more than one worker.
>
> I want a situation where a
I am not sure what you mean:
- number of worker slots per supervisor
or
- topology parameter "number of workers"
Can you clarify?
-Matthias
On 02/11/2016 05:14 AM, anshu shukla wrote:
> Not like that, but I have used workers equal to the number of cores, each
> VM with 8 cores.
>
> On 11
More specifically, I have seen that a topology performs better if it is assigned a single worker compared to more than one worker.
I want a situation where a topology performs better with more than one worker.
And it doesn't matter whether both workers are on the same supervisor or on different supervisors.
Any situation where you require more CPU than one server can provide. There are also tuning options (e.g. localOrShuffleGrouping) that you can use to reduce the amount of data sent over the network.
Any situation where you need to have tolerance in case of machine failure.
On Thu, Feb
Not like that, but I have used workers equal to the number of cores, each VM with 8 cores.
On 11 Feb 2016 9:07 am, "Rudraneel chakraborty" <
rudraneel.chakrabo...@gmail.com> wrote:
> More than one worker on the same node? Did you use a custom scheduler? Because
> by default, the workers would be spread
Hello Good People,
I desperately need an example of a topology which performs better with more
than one worker process compared to a single worker. Could anyone help
Since every worker process has only one thread transferring messages from the network to the executor queues, it often becomes a bottleneck when the input rate is high. That forces us to have more than one worker on the same node.
Other than that, I don't think there is any topology-logic-dependent case.
More than one worker on the same node? Did you use a custom scheduler? Because by default, the workers would be spread throughout the cluster.
Hi,
I'm quite new to the world of IBM WebSphere MQ and need advice on an integration scenario.
In this scenario, we need to integrate MQ with Apache Storm: we need to read from MQ in an Apache Storm spout so that messages can be processed in bolts.
Has anybody worked on this integration and is
Hi Sameer,
I worked with MQSeries and storm using the MQSeries JMS API implementation
and https://github.com/ptgoetz/storm-jms components
Best regards!
On Thu, Jan 21, 2016 at 13:26, Sameer Kirange ()
wrote:
> Hi,
>
> I'm quite new to world of IBM Websphere MQ
We do have both MQ and Storm in-house. However, we actually have another Java process running to read from MQ (polling, and stopping every x seconds when the queue is empty) and push messages to Kafka. We thought it was easier to use KafkaSpout than to create another spout. Not saying this is the
I upgraded to Storm 0.9.5 and saw in the supervisor log file that it still hasn't started, but I executed the command that the supervisor uses to launch the worker and got this:
[ERROR] Halting process: ("Error on initialization")
java.lang.RuntimeException: ("Error on initialization") at
I tried the command ps -ef|grep 6703 and got this:
st 2991 2595 41 14:51 pts/3 00:05:19 java -server -Xmx768m
-Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib
-Dlogfile.name=worker-6703.log -Dstorm.home=/home/st/storm-0.8.2
-Dlog4j.configuration=storm.log.properties -cp
I have resolved this problem several times.
There are two root causes:
(1) local ephemeral network ports conflict with Storm's ports;
(2) the old worker failed to be killed when the topology was killed.
First, please run "ps -ef|grep 67xx" to check whether it is due to the second problem. If there is
After submitting the topology, the supervisor log file shows:
2015-11-01 09:59:48 executor [INFO] Loading executor b-1:[3 3]
2015-11-01 09:59:50 executor [INFO] Loaded executor tasks b-1:[3 3]
[INFO] Launching worker with assignment
#backtype.storm.daemon.supervisor.LocalAssignment{:storm-id "df-1-1446364738",
These are all the configuration options you can set in the storm.yaml file.
Of interest to you are the drpc.* keys:
https://storm.apache.org/javadoc/apidocs/constant-values.html#backtype.storm.Config
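For instance, a storm.yaml combining the settings below with a DRPC server list might look like this (localhost addresses only, as a sketch):

```yaml
storm.zookeeper.servers:
  - "127.0.0.1"
nimbus.host: "127.0.0.1"
storm.local.dir: "/tmp/storm"
# DRPC: list the hosts running the drpc daemon under the drpc.servers key.
drpc.servers:
  - "127.0.0.1"
```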
Regards,
Javier
On Mon, Sep 21, 2015 at 7:42 PM, researcher cs
wrote:
I'm new to Storm. How can I fill storm.yaml with an appropriate configuration? Here is what I wrote:
storm.zookeeper.servers:
  - "127.0.0.1"
nimbus.host: "127.0.0.1"
storm.local.dir: /tmp/storm
But if I'm going to use DRPC, should I write it here or not, and how?
See the 0.9.4 release codebase @ https://github.com/apache/storm/tree/v0.9.4
There is a project called Storm Starter @
https://github.com/apache/storm/tree/v0.9.4/examples/storm-starter
Thank you for your time!
Jeff Maass maas...@gmail.com
linkedin.com/in/jeffmaass
Subject: Re: Need help
Totally my bad. I did not actually go look at the spouts to see if they were
implemented as reliable spouts or not.
If you haven't already read these, I would read them now:
https://storm.apache.org/documentation/Concepts.html
https://storm.apache.org/documentation
That project doesn't do anything about message delivery. I have to ensure guaranteed processing of the messages sent by the spout to the bolt.
On Fri, May 15, 2015 at 5:33 PM, Jeffery Maass maas...@gmail.com wrote:
See the 0.9.4 release codebase @
https://github.com/apache/storm/tree/v0.9.4
Try this?
https://github.com/wurstmeister/storm-kafka-0.8-plus-test/blob/master/src/main/java/storm/kafka/trident/SentenceAggregationTopology.java
On Fri, May 15, 2015 at 3:51 PM, Asif Ihsan asifihsan.ih...@gmail.com
wrote:
That project doesn't do anything about message delivery. I have to make
*From:* Jeffery Maass [mailto:maas...@gmail.com]
*Sent:* Friday, May 15, 2015 10:18 AM
*To:* user@storm.apache.org
*Subject:* Re: Need help
Totally my bad. I did not actually go look at the spouts to see if they
were implemented as reliable spouts or not.
If you haven't already read these, I would
Thanks, Taylor, for your response.
In my case, I have seen that 4 of my 15 Kafka executors do not process any data; I will check what the Kafka partition count is, but it looks like it may be just 11, in which case I should reduce the number of Kafka executors.
Around 50 of the 550 mapperBoltExecutors
I am trying to troubleshoot an issue with our Storm cluster where a worker process on one of the machines in the cluster does not perform any work. All the counts (emitted/transferred/executed) for all executors in that worker are 0, as shown below. Even if I restart the worker, the Storm supervisor
More information about your topology would help, but..
I’ll assume you’re using a core API topology (spouts/bolts).
On the kafka spout side, does the spout parallelism == the # of kafka
partitions? (It should.)
On the bolt side, are you using fields groupings at all, and if so, what does
the
Hello,
Could anyone help me with the above mail query?
Regards,
Rajesh
On Sat, Nov 29, 2014 at 10:30 PM, Madabhattula Rajesh Kumar
mrajaf...@gmail.com wrote:
Hello,
I'm new to Storm and Kafka. I have tried the Storm-Kafka integration example program. Now I'm able to send messages from Kafka and
Does your printer bolt ack the messages it receives from the KafkaSpout?
On Mon, Dec 1, 2014, at 06:38 PM, Madabhattula Rajesh Kumar wrote:
Hello,
Could any one help me on above mail query?
Regards, Rajesh
On Sat, Nov 29, 2014 at 10:30 PM, Madabhattula Rajesh Kumar
mrajaf...@gmail.com
Thank you, Harsha, for your response.
I'm just printing the messages in the printer bolt.
Please find the printer bolt code below:

public class PrintBolt extends BaseRichBolt {
    private static final long serialVersionUID = 1L;
    public void execute(Tuple tuple) {
Thank you very much Harsha
Regards,
Rajesh
On Tue, Dec 2, 2014 at 8:50 AM, Harsha st...@harsha.io wrote:
OK, from the earlier logs it looks like your tuples are being timed out and getting replayed.
In your PrintBolt.execute, do collector.ack(tuple):

public class PrintBolt extends
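Presumably the completed version looks something like the sketch below, using the backtype.storm packages of that era; the println body is assumed, since the original code is truncated, and the ack call is the point:

```java
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

public class PrintBolt extends BaseRichBolt {
    private static final long serialVersionUID = 1L;
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple tuple) {
        System.out.println(tuple); // assumed: the original bolt just prints
        // Ack every tuple; without this the KafkaSpout times the tuple out
        // and replays it, which is why the same messages keep reappearing.
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output stream: this bolt only prints
    }
}
```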
Hello,
I'm new to Storm and Kafka. I have tried the Storm-Kafka integration example program. Now I'm able to send messages from Kafka and receive those messages in a Storm topology.
I have observed one thing in the Storm topology: the same messages are processed continuously.
I have sent three messages