Thanks for replying: I'm using Eclipse Java EE IDE for Web Developers.
Version: Kepler Release
On Wed, Sep 3, 2014 at 11:32 PM, P. Taylor Goetz ptgo...@gmail.com wrote:
What IDE are you using?
On Sep 3, 2014, at 5:26 PM, researcher cs prog.researc...@gmail.com
wrote:
any help .. ?
Hi,
I have a topology where I am emitting JSON objects onto a Kafka topic
using a Kafka producer, to a Storm bolt.
I have been replaying the same objects on the same topic over and over again
as I am figuring out a few things.
After a while I can see that if I use kafkaConfig.forceFromStart=true;
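For context, forceFromStart is a field on the storm-kafka KafkaConfig, which SpoutConfig extends. A minimal sketch of wiring it up, assuming storm-kafka 0.9.x on the classpath; the ZooKeeper address, topic, zkRoot, and spout id below are placeholders:

```java
// Sketch only: assumes storm-kafka 0.9.x; all names are placeholders.
import backtype.storm.spout.SchemeAsMultiScheme;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class SpoutConfigSketch {
    public static KafkaSpout buildSpout() {
        SpoutConfig cfg = new SpoutConfig(
                new ZkHosts("localhost:2181"), // ZK quorum used by Kafka
                "json-topic",                  // topic the producer writes to
                "/kafkastorm",                 // zkRoot where the spout stores offsets
                "json-spout");                 // spout id (part of the offset path)
        cfg.scheme = new SchemeAsMultiScheme(new StringScheme());
        // Replay from the beginning of the topic instead of the stored offset:
        cfg.forceFromStart = true;
        return new KafkaSpout(cfg);
    }
}
```

Note that with forceFromStart left at true, the spout re-reads the topic from the start on every redeploy, which matches the repeated-replay behavior described above.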
Hi Siddharth,
Please check if your kafka producer is still running.
Regards,
Dhaval Modi
dhavalmod...@gmail.com
On 4 September 2014 19:39, siddharth ubale siddharth.ub...@gmail.com
wrote:
Hi Dhaval,
I am using a producer class to relay the data.
So how do I check if it is running?
Thanks,
Siddharth
On Thu, Sep 4, 2014 at 7:45 PM, Dhaval Modi dhavalmod...@gmail.com wrote:
Run a consumer without the --from-beginning option and check if you are
receiving data.
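A sketch of that check, assuming the Kafka 0.8.x distribution scripts and placeholder host/topic names. Without --from-beginning the console consumer only prints messages that arrive after it starts, so silence here while the producer is supposedly running suggests the producer is not actually publishing:

```shell
# Tail only *new* messages on the topic (no --from-beginning):
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic my-topic
```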
On 4 September 2014 20:00, siddharth ubale siddharth.ub...@gmail.com
wrote:
I am running a topology on a Storm cluster that uses the Kafka spout. One of
my supervisors disconnects after a certain amount of time and is then
restarted. My supervisor logs for that node report the following -
Does anybody have some clue about this?
2014-09-04 19:10:11 b.s.d.supervisor [INFO]
I've found this post describing the same problem. Unfortunately no answers:
https://www.mail-archive.com/user@storm.incubator.apache.org/msg03623.html
On 3 September 2014 18:58, Alberto Cordioli cordioli.albe...@gmail.com wrote:
Hi all,
I searched for similar problems without any luck.
Check what the worker log has to say. It may be a Storm user permission issue. Try
running the command that's in the supervisor log and validate the output.
-Original Message-
From: Palak Shah [mailto:spala...@gmail.com]
Sent: Thu 9/4/2014 8:54 PM
To: user@storm.incubator.apache.org
I had to manually flush all the data from Kafka. I did that by stopping the
server and deleting the entire contents of the kafka-logs directory. Then I
restarted Kafka, and then started my Storm topology. I get the following error
message, because the offset KafkaSpout is looking for is at 81573 and Kafka
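One way to see which offsets the broker actually has after wiping kafka-logs is the GetOffsetShell tool; a sketch assuming Kafka 0.8.x and placeholder broker/topic names (--time -2 prints the earliest available offset, -1 the latest):

```shell
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list localhost:9092 --topic my-topic --time -2
```

If the spout's stored offset (81573 here) falls outside the printed range, it will keep failing until that stored offset is cleared.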
I guess you need to empty the ZooKeeper data directory too.
On Thu, Sep 4, 2014 at 9:22 PM, Kushan Maskey
kushan.mas...@mmillerassociates.com wrote:
Is it the full error log? I mean, we can look into the source code where the worker
is trying to make some connection, and maybe we can guess what is wrong
with it.
On Thu, Sep 4, 2014 at 9:09 PM, Alberto Cordioli cordioli.albe...@gmail.com
wrote:
Which ZooKeeper data do I need to delete? Storm's or Kafka's?
--
Kushan Maskey
817.403.7500
On Thu, Sep 4, 2014 at 11:01 AM, Vikas Agarwal vi...@infoobjects.com
wrote:
That one is the full error log for the worker. No errors in the
supervisors or Nimbus.
That worker is associated with a spout that tries to connect
to HDFS to read Avro files. Could the problem be related to this?
On 4 September 2014 18:07, Vikas Agarwal vi...@infoobjects.com wrote:
Under ZooKeeper you should be able to find a /consumers path. I believe this is
where the Kafka consumers write their offsets, but I am not 100% sure.
This might be the place where all consumers (Storm and non-Storm) write
their offsets, so if you have non-Storm consumers, I would be super
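To have a look, the ZooKeeper CLI can list and remove those nodes; a sketch with a hypothetical consumer group name (zkCli 3.4.x syntax):

```shell
bin/zkCli.sh -server localhost:2181
# inside the zkCli shell:
ls /consumers
ls /consumers/my-group/offsets/my-topic
rmr /consumers/my-group    # drops that group's stored offsets
```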
I had this same issue yesterday; the only solution was to shut everything down,
clean out ZooKeeper, and start it all up again.
On Thu, Sep 4, 2014 at 12:39 PM, Parth Brahmbhatt
pbrahmbh...@hortonworks.com wrote:
Thanks guys.
I was able to reset Kafka successfully, but Storm is still looking for the
older offsets. How do I reset the Storm offsets now?
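The storm-kafka spout does not use /consumers; it stores its offsets under the zkRoot and id that were passed to its SpoutConfig. A sketch, assuming a zkRoot of /kafkastorm and a spout id of my-spout (both hypothetical; use whatever your topology configured):

```shell
bin/zkCli.sh -server localhost:2181
# inside the zkCli shell:
ls /kafkastorm/my-spout
rmr /kafkastorm/my-spout   # spout restarts from its configured start offset
```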
On Thu, Sep 4, 2014 at 11:42 AM, Nick Beenham nick.been...@gmail.com
wrote:
I don't believe so. We have switched to using Logback for Storm behind the
scenes, and some of the code is actually quite tied to it at the moment.
From: Xueming Li james.xueming...@gmail.com
Delete (better: keep a backup of) the version-2 directory from the ZooKeeper
data directory (see zoo.cfg for the data dir location). You can reference
http://stackoverflow.com/questions/22982919/wiping-out-the-zookeeper-data-directory
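A sketch of the full sequence, assuming a dataDir of /var/lib/zookeeper (check zoo.cfg for the real location) and a stopped cluster:

```shell
bin/zkServer.sh stop
cp -r /var/lib/zookeeper/version-2 /var/lib/zookeeper/version-2.bak  # backup first
rm -rf /var/lib/zookeeper/version-2
bin/zkServer.sh start
```

Note this wipes everything stored in that ZooKeeper ensemble, not just the Kafka offsets, so anything else using it will also start from scratch.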
On Thu, Sep 4, 2014 at 9:44 PM, Kushan Maskey
I am not sure about it; however, looking at all the possible config options for
Storm would help. I did the same for one of my issues and found one config
option that was causing tuple failures.
On Thu, Sep 4, 2014 at 9:47 PM, Alberto Cordioli cordioli.albe...@gmail.com
wrote:
Just out of curiosity, using log4j 1.x is not an issue with Storm, right?
On Thu, Sep 4, 2014 at 11:23 PM, Bobby Evans ev...@yahoo-inc.com wrote:
Thank you. I actually cleared all the zookeeper/version-2/* folders and now
it's all working fine. Thanks a lot.
On Thu, Sep 4, 2014 at 1:05 PM, Vikas Agarwal vi...@infoobjects.com wrote:
Glad to know that. :)
On Thu, Sep 4, 2014 at 11:43 PM, Kushan Maskey
kushan.mas...@mmillerassociates.com wrote:
Do you mean the config in the YAML file? I increased the worker memory and
the spout is able to emit more tuples; the error is delayed but still
there! The weird thing is that there are no tuple failures...
On 4 Sep 2014 at 20:08, Vikas Agarwal vi...@infoobjects.com wrote:
We provide a log4j 1.x compatible API: log4j-over-slf4j. It gets translated
into slf4j calls, which get translated into Logback calls, which are written out
to the logs. slf4j gets really confused if you have both log4j and
log4j-over-slf4j on the classpath. So when upgrading to 0.9.0 from 0.8.x you need to
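One common fix is to exclude the plain log4j artifact wherever it sneaks in, so only log4j-over-slf4j remains on the classpath; a hypothetical Maven fragment (the dependency shown is a placeholder for whichever library pulls log4j in transitively):

```xml
<dependency>
  <groupId>some.group</groupId>
  <artifactId>lib-that-pulls-in-log4j</artifactId>
  <version>1.0</version>
  <exclusions>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Running `mvn dependency:tree` is a quick way to find which dependency is bringing log4j in.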
If there is no tuple failure, it might be the intended behavior of the
worker. :) Maybe someone with command of Storm's internal details can
comment here.
One more thing that comes to mind, considering the delay in the connection
reset after increasing the worker memory, is to check for
Hi everyone!
I am running a windowed aggregation topology, but the topology restarts for
no reason. Has anyone run into this problem? I am using Netty for message
transfer. I have tried versions 0.9.0-rc3 and 0.9.2 and cannot find the
reason in the logs. Can anyone help me?
Netty configuration in storm.yaml:
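For reference, the 0.9.x Netty transport settings in storm.yaml usually look like this; the values below are the commonly cited defaults and are illustrative, not a tuning recommendation:

```yaml
storm.messaging.transport: "backtype.storm.messaging.netty.Context"
storm.messaging.netty.server_worker_threads: 1
storm.messaging.netty.client_worker_threads: 1
storm.messaging.netty.buffer_size: 5242880
storm.messaging.netty.max_retries: 30
storm.messaging.netty.max_wait_ms: 1000
storm.messaging.netty.min_wait_ms: 100
```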
Thanks for the clarification!
We have log4j 1.2.17, log4j-over-slf4j 1.6.6, and slf4j-api 1.6.4 on the
classpath with Storm 0.9.1, and it seems that is the reason why Storm is
not honoring the log4j.xml. :)
One thing to note is that log4j-over-slf4j comes from storm-core
itself; we haven't added it