Hi
I am observing that my worker processes take a little over 15 seconds to
start up (a single worker process with six executors and no ackers). Is
that normal?
Can I speed things up?
--
Rudraneel Chakraborty
Carleton University Real Time and Distributed Systems Research
Hi Alberto,
What is the use case for changing window duration/count at runtime?
Thanks,
Satish.
On Thu, Jun 23, 2016 at 11:56 PM, Alberto São Marcos
wrote:
Thks Satish.
On Thu, Jun 23, 2016 at 7:22 PM, Satish Duggana
wrote:
No, you can not change windowing configuration at runtime.
Thanks,
Satish.
On Thu, Jun 23, 2016 at 11:36 PM, Alberto São Marcos
wrote:
Like the title states, can one change the window bolt length/count at
runtime?
I already tried to do it using the BaseWindowedBolt API, but an NPE is
thrown. The property Map windowConfiguration is transient and not
available at runtime.
Is there any way to update the window
FYI - Filed issue from myself here:
https://issues.apache.org/jira/browse/STORM-1928
On Thu, Jun 23, 2016 at 11:33 PM, Jungtaek Lim wrote:
> Does get_simple_consumer() block while receiving messages? If so, can we
> set a timeout on it?
>
> Btw, I found edge-case from ShellSpout (not
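On the timeout question above: if I recall pykafka's API correctly, get_simple_consumer() accepts a consumer_timeout_ms argument that makes consume() return None after the timeout instead of blocking forever, which keeps next_tuple() from starving the multi-lang heartbeat. A minimal stand-in sketch of that pattern, using queue.Queue in place of a real consumer (the function name and timeout default are illustrative, not pykafka's API):

```python
import queue

def consume_with_timeout(q, timeout_ms=500):
    """Return the next message, or None if nothing arrives in timeout_ms.

    Stand-in for a Kafka consumer's consume() call with a timeout set:
    it returns promptly instead of blocking forever on an empty source.
    """
    try:
        return q.get(timeout=timeout_ms / 1000.0)
    except queue.Empty:
        return None

# A spout's next_tuple() can call this and simply emit nothing when the
# result is None, so heartbeats keep flowing even when Kafka is empty.
msgs = queue.Queue()
msgs.put(b"hello")
print(consume_with_timeout(msgs))                 # b'hello'
print(consume_with_timeout(msgs, timeout_ms=50))  # None
```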
Python Petrel Spout sample:

    from pykafka import KafkaClient
    from petrel import storm
    from petrel.emitter import Spout

    class MPDataSpoutInd(Spout):
        """Topology Spout."""

        def __init__(self):
            self.kafclient = KafkaClient(hosts="x.x.x.x:9092")
            super(MPDataSpoutInd, self).__init__()
I changed the Python Kafka library version from 1.0.1 to 0.9.5 and it
works fine now.
Thanks
On Thu, Jun 23, 2016 at 6:04 PM, Jungtaek Lim wrote:
> Could you share implementation of spout?
> In multi-lang, user-level functions shouldn't block, or a heartbeat timeout
> will occur if your spout
You need a supervisor node running; the node you currently have running
is just the nimbus node.
Jacob Johansen
On Thu, Jun 23, 2016 at 5:05 AM, Walid Aljoby wrote:
Good question. At least one thing I know is that when a worker dies, it
won't be able to ack the tuples it received, so those tuples will fail and
you need to write code in your Spout's fail() method to re-emit them.
My best guess is that your tuple with key 1 will continue going to task 1
when it
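That re-emit-on-fail idea can be sketched without any Storm dependency: cache each tuple by message id when emitting, drop it on ack, and push it back onto the pending queue on fail. The class below only mirrors the spout lifecycle (next_tuple/ack/fail); the names are illustrative, not Storm's actual API:

```python
from collections import deque

class ReplayingSpout:
    """Illustrative spout skeleton that replays tuples whose ack never came."""

    def __init__(self, source):
        self.pending = deque(source)  # tuples waiting to be emitted
        self.in_flight = {}           # msg_id -> tuple, awaiting ack or fail
        self.next_id = 0

    def next_tuple(self):
        if not self.pending:
            return None               # nothing to emit right now
        msg_id = self.next_id
        self.next_id += 1
        tup = self.pending.popleft()
        self.in_flight[msg_id] = tup  # remember it until acked
        return msg_id, tup            # a real spout would emit(tup, id=msg_id)

    def ack(self, msg_id):
        self.in_flight.pop(msg_id, None)   # fully processed, forget it

    def fail(self, msg_id):
        tup = self.in_flight.pop(msg_id, None)
        if tup is not None:
            self.pending.append(tup)  # re-queue so it is emitted again
```

When a worker dies, the tuples it never acked eventually time out and fail() fires; the in-flight cache is what makes replay possible, and without it that data is simply lost.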
Hi all,
I have submitted ExclamationTopology into a single-node Storm 1.0.1 cluster, as
follows: storm jar target/storm-starter-*.jar
org.apache.storm.starter.ExclamationTopology
I have three basic questions:
1- In Storm UI, http://localhost:8080/index.html ,
the running topology and the number
Yes, the same topology works when I run it in a single-node cluster. And
when there is no data to consume from Kafka, it shows a heartbeat timeout
error.
Here, I am testing in a multi-node cluster and the Kafka server is on a
different node
Hi!
Please check your classpath (Maven or Gradle dependencies). It seems
that you are using two versions of the Thrift library.
Regards,
Florin
On Wed, Jun 22, 2016 at 10:40 AM, Venkatesh Bodapati <
venkatesh.bodap...@inndata.in> wrote:
> I am working on Storm with Hive, SQL, and Kafka. I
Hi,
It happens to me too, but only when my Kafka is empty.
I'm using Petrel because the generated jar file is really small, about
300 KB for Petrel vs 16 MB for Streamparse.
On 23-06-2016 08:05, ram kumar wrote:
Hi,
Version:
Storm : 0.10.0
Streamparse : 2.1.4
I am running a Storm topology with Python streamparse ("sparse run").
The topology stops executing midway and throws an exception:
158031 [pool-37-thread-1] ERROR b.s.s.ShellSpout - Halting process:
> ShellSpout died.
>
Stack Overflow link here:
http://stackoverflow.com/questions/37967787/storm-trident-continuous-emits-from-aggregator-even-when-there-is-no-data-in-k
Regards,
Amber Kulkarni
How do we specify the ZooKeeper config for OpaqueTridentKafkaSpout?
I saw the answer below earlier; can someone elaborate on it?
Use the transactional.zookeeper.* configs to configure this.
What is happening is that
messages pushed into Kafka while the topology was down are not read by the
spout when the topology comes
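For reference, a sketch of how the transactional.zookeeper.* keys look in a topology config. The key names are Storm's; the host names, port, and root path below are placeholder assumptions, and I believe these keys fall back to the storm.zookeeper.* values when left unset:

```python
# Hypothetical topology configuration fragment (values are assumptions).
# Trident transactional spouts store their consumed-offset state under
# these ZooKeeper settings, which is why they matter for resuming where
# the topology left off after a restart.
conf = {
    "transactional.zookeeper.servers": ["zk1.example.com", "zk2.example.com"],
    "transactional.zookeeper.port": 2181,
    "transactional.zookeeper.root": "/transactional",
}
print(conf["transactional.zookeeper.root"])  # /transactional
```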