You can join like this in the main function:
Stream joinStreamInner =
    topology.join(streams, joinFields,
            new Fields("RequestId", "ColumnMapId", "FFValue", "SFValue",
                    "TFValue"),
            //JoinType.mixed(JoinType.INNER, JoinType.OUTER))
            JoinType.INNER)
        .each(new
#9 - 3pts
#10 - 2pts
Thanks
Milinda
On Mon, May 19, 2014 at 9:48 PM, Neville Li wrote:
> #11 - 3 pts.
> #1 - 2 pts.
>
>
> On Mon, May 19, 2014 at 3:17 PM, Binh Nguyen Van wrote:
>>
>> #9 - 4 pts.
>> #8 - 2 pts.
>>
>>
>> On Mon, May 19, 2014 at 9:57 AM, wrote:
>>>
>>> #11 - 3 pts
>>> #6- 2
Hi all,
In my topology I observe that one of the supervisor machines gets
repeatedly disconnected from ZooKeeper, and it prints the following
error:
EndOfStreamException: Unable to read additional data from client sessionid
0x146193a4b70073d, likely client has closed socket
at
org.apache.
You can do something like this: -Xloggc:/logs/gc-worker-%ID%.log
On Wed, May 14, 2014 at 2:01 PM, Sean Allen wrote:
> is anyone logging gc events for workers in their cluster?
>
> outside of Storm, the following JVM options are pretty standard for us:
>
> -XX:+PrintGCTimeStamps -XX:+PrintGCD
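A concrete way to wire this up is through worker.childopts in storm.yaml; Storm substitutes %ID% with the worker's port, so each worker gets its own log file. A sketch (the paths, heap size, and exact flag set here are my assumptions, not from this thread):

```
# storm.yaml fragment -- hypothetical paths and sizes
worker.childopts: "-Xmx768m -Xloggc:/logs/gc-worker-%ID%.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
```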
Hi Jing,
Was message.max.bytes changed in your Kafka server config to be higher than
the default value (about 1 MB)?
-Nathan
On Mon, May 19, 2014 at 5:54 PM, Tao, Jing wrote:
> I finally found the root cause. Turns out the spout was reading a
> message exceeded the max message size. Aft
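For reference, the knobs involved (names from the Kafka 0.8 broker config and the storm-kafka KafkaConfig; the sizes below are made-up examples) look roughly like:

```
# Kafka broker, server.properties: largest message the broker will accept
message.max.bytes=2000000

# storm-kafka spout (Java side): the fetch buffer must be at least as
# large as the biggest message, or the spout can stall on it
spoutConfig.fetchSizeBytes = 2 * 1024 * 1024;
```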
Hi Susheel,
Will tick tuples work for this? Do you know anything about using tick
tuples with a Python topology?
On Sat, May 17, 2014 at 10:02 AM, Susheel Kumar Gadalay wrote:
> Use tick tuple
> On 5/16/14, yogesh panchal wrote:
> > Hi, is it possible to emit top 5 word count every 5 minute in
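Storm wiring aside, the periodic "top 5" flush that a tick-tuple handler would perform is just a selection over the running counts. A minimal, self-contained sketch (the class and method names are mine, not part of Storm's API):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TopWords {
    // Return the n words with the highest counts, highest first.
    // A bolt could emit this list each time a tick tuple arrives.
    public static List<String> topN(Map<String, Integer> counts, int n) {
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(n)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }
}
```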
Got the issue resolved.
1. I was not anchoring to the incoming tuple, so effectively all the bolts
after impactBolt were not transactional. The ack of the impact bolt was
causing the spout's ack to be called, so a proper DAG was not created. The
number I was seeing in WIP was not the true number of tuples t
#9 - 5pts
On Tue, May 20, 2014 at 7:20 AM, Milinda Pathirage wrote:
> #9 - 3pts
> #10 - 2pts
>
> Thanks
> Milinda
>
> On Mon, May 19, 2014 at 9:48 PM, Neville Li wrote:
> > #11 - 3 pts.
> > #1 - 2 pts.
> >
> >
> > On Mon, May 19, 2014 at 3:17 PM, Binh Nguyen Van wrote:
> >>
> >> #9 - 4 pts
#9 - 5pts
2014-05-20 18:43 GMT+02:00 Tom Brown :
> #9 - 5pts
>
>
>
>
> On Tue, May 20, 2014 at 7:20 AM, Milinda Pathirage
> wrote:
>
>> #9 - 3pts
>> #10 - 2pts
>>
>> Thanks
>> Milinda
>>
>> On Mon, May 19, 2014 at 9:48 PM, Neville Li
>> wrote:
>> > #11 - 3 pts.
>> > #1 - 2 pts.
>> >
>> >
>> >
#1 - 5pts
On Tue, May 20, 2014 at 11:46 AM, Gaspar Muñoz wrote:
> #9 - 5pts
>
>
> 2014-05-20 18:43 GMT+02:00 Tom Brown :
>
> #9 - 5pts
>>
>>
>>
>>
>> On Tue, May 20, 2014 at 7:20 AM, Milinda Pathirage wrote:
>>
>>> #9 - 3pts
>>> #10 - 2pts
>>>
>>> Thanks
>>> Milinda
>>>
>>> On Mon, May 19,
Yes, and I did not set the max message size properly on the Spout.
From: Nathan Leung [mailto:ncle...@gmail.com]
Sent: Tuesday, May 20, 2014 10:43 AM
To: user
Subject: Re: Kafka Spout 0.8-plus stops consuming messages after a while
Hi Jing,
Was message.max.bytes changed in your Kafka server conf
Hi!
I've been thinking about Nathan Marz's lambda architecture with the
components:
1. Kafka as message bus, the entry point of raw data.
2. Camus to dump data into HDFS (the batch layer).
3. And Storm to dump data into HBase (the speed layer).
I guess this is the "classical architecture" (the
Hi, I'm confused as to what each field in the Storm UI represents and how
to use the information.
[image: Inline image 1]
The bolts I have above are formed from Trident. These are the operations I
believe each bolt represents:
b-0 : .each(function) -> .each(filter)
b-1 : .aggregate
--split--
b-2 : .pe
The two bolts that emit and transfer 0 are most likely your
persistentAggregates. They're sinks, so they don't emit or transfer
anything.
I forget the exact definition of capacity, but it indicates when a bolt
is taking too long to process. If it's greater than one, it's a
bottleneck. It's some
I'm a researcher and need help from you to make a simple project on
Twitter using Storm, as I'm new to open source generally.
I searched and found "Storm-Election". As I'm new, is it simple enough for
me? I want to know what algorithm it uses, so that I can edit that
algorithm or use another algorithm
Executed refers to the number of incoming tuples processed.
Capacity is determined by (executed * latency) / window (time duration).
The UI should give you a description of those stats if you hover over the
table headers.
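As a worked instance of that formula (the numbers are made up for illustration):

```java
public class CapacityCalc {
    // capacity = (executed * average execute latency in ms) / window in ms;
    // values near or above 1.0 mean the bolt is busy for the whole window.
    public static double capacity(long executed, double latencyMs, double windowMs) {
        return executed * latencyMs / windowMs;
    }

    public static void main(String[] args) {
        // 1000 tuples at 8 ms each over a 10-second window -> 0.8
        System.out.println(capacity(1000, 8.0, 10_000));
    }
}
```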
On Tue, May 20, 2014, at 03:36 PM, Raphael Hsieh wrote:
I reattached the previous
The two bolts which emit/transfer 0 are likely your persistentAggregate
bolts. These are *sinks*, so they don't logically emit/transfer tuples any
further.
You can add a name, which will show up in the UI, to help you see how
Trident compiles into your Storm topology:
.name("Aggregator 1")