Hi,
Can you please confirm whether the below bug is fixed in Storm version 1.1.0?
https://issues.apache.org/jira/browse/STORM-1455
We are seeing that the consumer offset is getting reset to the earliest
offset for a few topics in the group.
This is observed in the prod environment, and there were only INFO logs.
> being fixed, but please leave maxUncommittedOffsets at the default if
> you're setting it to a custom value.
>
> What is your retry service configuration?
>
> 2017-09-02 0:11 GMT+02:00 pradeep s :
>
>> Yes Stig. Code posted is for DataBaseInsertBolt. Emit from last bolt is
>
so if an earlier tuple failed and is waiting for retry
> when you restart, that could also cause this.
>
> 2017-09-01 7:04 GMT+02:00 pradeep s :
>
>> Hi,
>> I am using Storm 1.1.0 ,storm kafka client version 1.1.1 and Kafka server
>> is 0.10.1.1.
Hi,
I am using Storm 1.1.0, storm-kafka-client version 1.1.1, and the Kafka
server is 0.10.1.1.
The Kafka spout polling strategy used is UNCOMMITTED_EARLIEST.
The message flow is as below, and it's a normal topology:
KafkaSpout --> AvroDeserializerBolt --> DataBaseInsertBolt.
If the message fails avro deserial
> * Use kafka-consumer-groups.sh to reset the offsets for your consumer
> group. Reference the KIP link to see how to do this, or just run
> kafka-consumer-groups.sh to get it to print usage.
> * Redeploy your topology
>
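If you prefer doing the reset from code rather than the CLI, the plain consumer API can do the same thing. This is a sketch, assuming kafka-clients 0.10.1+; the bootstrap server, group id, and topic name are placeholders, and the topology must be stopped first so the group has no live members:

```java
import java.util.*;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ResetGroupToLatest {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-storm-group");          // placeholder
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            // Manually assign every partition of the topic (no group join).
            List<TopicPartition> parts = new ArrayList<>();
            consumer.partitionsFor("my-topic").forEach(p ->  // placeholder topic
                    parts.add(new TopicPartition(p.topic(), p.partition())));
            consumer.assign(parts);
            consumer.seekToEnd(parts);
            // Commit the end offsets as the group's new positions.
            Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
            for (TopicPartition tp : parts) {
                offsets.put(tp, new OffsetAndMetadata(consumer.position(tp)));
            }
            consumer.commitSync(offsets);
        }
    }
}
```

This avoids the retention-period trick entirely; it needs a running broker to execute, so treat it as an operational sketch rather than a tested program.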
> 2017-08-29 10:05 GMT+02:00 pradeep s :
>
>> Hi Sti
> 2017-08-29 7:33 GMT+02:00 pradeep s :
>
>> Hi,
>> I want to reset the storm consumer to the latest offset to skip a few
>> messages.
>>
>> I followed below steps
>>
>>
>>1. Stop storm consumer
>>2. Reset the retention period for the topic
Hi,
I want to reset the storm consumer to the latest offset to skip a few messages.
I followed the below steps:
1. Stop the storm consumer
2. Reset the retention period for the topic to 1 ms
3. Wait for a few mins
4. Reset the retention period back to the original value of 5 days
5. Start the storm con
dependency and see
> if the issue goes away. If not we can look at debugging it.
>
>
> 2017-08-15 20:28 GMT+02:00 pradeep s :
>
>> Hi,
>> I have a storm kafka spout which listens to a group of topics. Max number
>> of partitions in a topic is 50. There are 50 spout thre
Hi,
I have a storm kafka spout which listens to a group of topics. Max number
of partitions in a topic is 50. There are 50 spout threads and 7 worker
nodes.
The issue is observed when there is a bigger load. All the partitions except
two or three will be processed.
When i check kafka manager , same c
you can get the
> source metrics via the metrics API if you don't want to go via Storm UI,
> take a look at https://github.com/revans2/inc
> ubator-storm/blob/88de24a5afd99df28c4fe304eafa5d53473a46c2/docs/Metrics.md
> .
>
> 2017-07-21 4:22 GMT+02:00 pradeep s :
>
>> Hi
Hi
How can I get consumer group related metrics for my Kafka spout? I want to
expose consumer lag and messages-per-second metrics via JMX for reporting
and alerting.
KafkaSpoutConfig spoutConfig = builder
        .setGroupId(kafkaGroupId)
        .setProp(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false")
        .setFirstPollOffsetStrategy(FirstPollOffsetStrategy.UNCOMMITTED_EARLIEST)
        .build();
Regards
Pradeep S
erialization to retrieve the payload bytes to a java
POJO.
Can I use Java deserialization in the bolt to achieve this? There is only
one bolt.
In the docs, it's mentioned that Java deserialization is not good in terms
of performance.
What's the other option to deserialize the payload bytes?
Regards
Pradeep S
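On the plain-Java option: it works, but built-in serialization is slower and produces bulkier payloads than Avro, Kryo, or Protocol Buffers, which is why the docs discourage it. A minimal self-contained sketch of the round trip a bolt would do, where `Event` is a hypothetical POJO standing in for the real payload class:

```java
import java.io.*;

public class PayloadCodec {
    // Hypothetical POJO; the real payload class is not shown in the thread.
    public static class Event implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String id;
        public Event(String id) { this.id = id; }
    }

    // Serialize a POJO to the byte[] form a Kafka message body might carry.
    public static byte[] toBytes(Event e) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(e);
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        }
        return bos.toByteArray();
    }

    // What a bolt's execute() could do with the tuple's binary payload.
    public static Event fromBytes(byte[] payload) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(payload))) {
            return (Event) ois.readObject();
        } catch (IOException ex) {
            throw new UncheckedIOException(ex);
        } catch (ClassNotFoundException ex) {
            throw new IllegalStateException(ex);
        }
    }
}
```

For a single bolt at moderate throughput this is often fine; the usual alternative is to keep the Avro schema and deserialize with Avro's `SpecificDatumReader`, or to register the POJO with Kryo.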
there is in addition to the amount of Kafka
> partitions the executors have become idle.
> I have seen recommendations in hortonworks <http://hortonworks.com/blog/>
> that the number of executors does not exceed the number of cores.
>
>
> --
> Thomas Cristanis
>
>
= 40 cores in total, is 40 the max parallelism I can give
for the spout and bolt?
Also, if I assign a parallelism hint of 40 for the spout, can the bolt's
parallelism value be the same?
Regards
Pradeep S
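On the second question: parallelism hints are declared per component, so the bolt's hint is independent of the spout's and can also be 40. A sketch with hypothetical component names (not runnable without the Storm jars); executors beyond the core count simply time-share cores rather than being rejected:

```java
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("kafkaSpout", kafkaSpout, 40);          // 40 spout executors
builder.setBolt("dbBolt", new DataBaseInsertBolt(), 40)  // bolt may also use 40
       .shuffleGrouping("kafkaSpout");
```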
2017-02-25 21:20:50.381 o.a.k.c.u.AppInfoParser
Thread-65-merchMariaBolt-executor[32 33] [INFO] Kafka version : 0.10.1.1
2017-02-25 21:20:50.382 o.a.k.c.u.AppInfoParser
Thread-65-merchMariaBolt-executor[32 33] [INFO] Kafka commitId :
f10ef2720b03b247
Regards
Pradeep S
On Sat, Feb 25,
.invoke(util.clj:484)
[storm-core-1.0.3.jar:1.0.3]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Regards
Pradeep S
/KafkaSpoutConfig.Builder.html
But implementing it this way, I was getting a commit failed exception due to
a rebalance.
Can you point out the proper way to implement the Kafka spout?
In the Storm 1.0.3 docs I have seen the way using zookeeper broker hosts.
Regards
Pradeep S
On Tue, Feb 21, 2017 at 2:35 PM Priyank
Hi Priyank
Thanks for your reply. I was not able to find the 1.1.0 version of Storm.
Can you please point to that? Also, can you please confirm what specific
spout changes to make.
Regards
Pradeep S
On Tue, Feb 21, 2017 at 10:54 AM Priyank Shah wrote:
> Hi Pradeep,
>
>
>
> If you upg
Hi,
I am using Storm 1.0.2 and Kafka 0.10.1.1. The Storm spout is configured
using zookeeper broker hosts. Is there a monitoring UI which I can use to
track the consumer offsets and lag?
I was using Yahoo kafka-manager, but it's showing the storm spout as a
consumer. Any help?
Regards
Pradeep S
).shuffleGrouping(s3BoltId);
StormSubmitter.submitTopology(topologyName, config,
topologyBuilder.createTopology());
Regards
Pradeep S
Hi,
After running our storm cluster in AWS for a few days, we are getting a
NullPointerException in the worker logs. Do you have any suggestions on
this issue?
2016-09-05 02:53:25.120 o.a.s.m.n.StormServerHandler [ERROR] server errors
in handling the request java.lang.NullPointerException at com.
esot
.cache.negative.ttl" ,
"0");
Can you please suggest the best way to set these properties?
Can we set them from storm.yaml?
Regards
Pradeep S
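If these are plain JVM system properties, one common way is to append them to the worker JVM options via worker.childopts in storm.yaml. This is a sketch; the `-D` flag shown is a placeholder, since the actual property name is truncated above:

```yaml
# storm.yaml on the supervisor nodes; -Dsome.dns.property=0 is a placeholder.
# Workers must be restarted to pick this up.
worker.childopts: "-Xmx1g -Dsome.dns.property=0"
```

One caveat: Java *security* properties such as the networkaddress.cache.* family are read from the java.security file (or set via Security.setProperty), not from -D system properties, so check which kind yours are.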
orm cluster with EC2.
> You may want to check each Zookeeper instance is accessible from your Storm
> cluster.
>
> Thanks,
> Jungtaek Lim (HeartSaVioR)
>
> 2016년 8월 28일 (일) 오후 5:27, pradeep s 님이 작성:
>
>> Hi Juntaek,
>> I am running Storm 1.0 on AWS .Figured
:07:40.713 o.a.s.s.o.a.z.ClientCnxn [INFO] Opening socket
connection to server 21.106.227/21.106.227:2181. Will not attempt
to authenticate using SASL (unknown error)
Regards
Pradeep S
On Sat, Aug 27, 2016 at 2:37 AM, Jungtaek Lim wrote:
> We need to see Nimbus log to find out why Nimbus is dow
Hi,
While restarting nimbus and the UI, I am getting a NimbusLeaderNotFoundException.
This error also came while setting up the cluster. At that time I pointed the
zookeeper data directory to a new directory and the issue was resolved. Any
idea on the below exception?
org.apache.storm.utils.NimbusLeaderNotFoundException
outputCollector.emit(tuple, new Values(fullMessage));
outputCollector.ack(tuple);

S3 Bolt:
outputCollector.emit(tuple, new Values(fullMessage));
outputCollector.ack(tuple);

SQS Delete Bolt:
outputCollector.ack(tuple);
On Sun, Aug 7, 2016 at 1:06 PM, pradeep s
wrote:
> Time taken in bo
ay want to double-check
> your tuple anchor'ing -- this can sometimes be caused by improperly
> anchored or unanchored tuples, as well.
>
> --
> Andrew Montalenti | CTO, Parse.ly
>
> On Sat, Aug 6, 2016 at 1:22 PM, pradeep s
> wrote:
>
>> Hi ,
>> I am h
seeing spout failures in Storm UI.
But there are no failures in any of the bolts, and no failures in the log
files.
Any suggestion on the reason for the spout failures and how to debug this?
The topology timeout is set at the default 30 secs.
Thanks
Pradeep S
Is there any way to automate Storm cluster auto-scaling in AWS? I have read
that simply adding a new worker won't work and we need to do a storm
rebalance.
Any pointers on implementing Storm cluster autoscaling in AWS?
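The rebalance step can be scripted once new supervisor nodes have registered. A sketch; the topology name, worker count, and component name are placeholders:

```shell
# Redistribute work across the enlarged cluster.
# -n sets the new number of workers, -e sets executors for a named component.
storm rebalance my-topology -n 10 -e kafkaSpout=50
```

By default the rebalance waits for the topology's message timeout before taking effect; `-w <secs>` overrides the wait. An autoscaling hook (e.g. triggered by an AWS scaling event) would run this after the new supervisor joins.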
Cassandra is async on the
> writes anyway, if you are going into a RDMS system that would be different.
>
> On Thu, Apr 14, 2016 at 6:09 PM, pradeep s
> wrote:
>
>> Thanks Jon. Ours is an exactly similar use case. We thought of using redis
>> cache as the storage layer, but s
to
> > do so, is if you can create your own Bolt to periodically push messages
> in
> > the database.
> >
> > I hope I helped.
> >
> > Cheers,
> > Nikos
> >
> > On Thu, Apr 14, 2016 at 12:54 AM, pradeep s >
> > wrote:
> >>
> &
re can be
scenarios like 1 million updates happening in one transaction in the source
oracle system.
Can you please suggest the best approach for holding the messages and then
pushing to the target db only when all messages for a tran id are available
in storm.
Regards
Pradeep S
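One way to sketch the buffering (hypothetical names; it assumes each message carries its transaction id and the total message count for that transaction, and it omits eviction of stale or incomplete transactions, which a real bolt would need, e.g. via tick tuples):

```java
import java.util.*;

// Buffers messages per transaction id; releases the whole batch only when
// the expected number of messages for that transaction has arrived.
public class TxnBuffer {
    private final Map<String, List<String>> pending = new HashMap<>();

    // Returns the complete batch when the last message arrives, else null.
    public List<String> add(String txnId, String msg, int expectedCount) {
        List<String> batch = pending.computeIfAbsent(txnId, k -> new ArrayList<>());
        batch.add(msg);
        return (batch.size() >= expectedCount) ? pending.remove(txnId) : null;
    }
}
```

Inside a bolt, a null return would just hold the tuple, and a non-null batch would be written to the target db in one transaction; for million-row transactions an external store (the redis idea mentioned above) is safer than worker heap.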
t(queueUrl), SPOUT_PARALLELISM);
topologyBuilder.setBolt("mdpS3Bolt", new S3WriteBolt(), BOLT_PARALLELISM)
        .shuffleGrouping("mdpSpout");
topologyBuilder.setBolt("dbBolt", new DbBolt(), BOLT_PARALLELISM)
        .shuffleGrouping("mdpS3Bolt");
Regards
Pradeep S
In my storm topology I am reading messages from a queue and sending them to
S3 and MySQL.
Even though I send one message, the topology shows the emit count and
transferred count as 20. Any idea why 20 is coming?
Spout
spoutOutputCollector.emit(new Values(message), message.getReceiptHandle());
Bolt
outputColl
Pradeep S
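One likely explanation (an assumption about your config defaults, not a confirmed diagnosis): Storm samples tuple counts at topology.stats.sample.rate = 0.05 by default, so every sampled tuple increments the counters by 1/0.05 = 20. A single emitted tuple that happens to be sampled therefore shows as 20 emitted/transferred. To count every tuple instead:

```yaml
# topology config or storm.yaml: exact counts at some bookkeeping cost
topology.stats.sample.rate: 1.0
```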