Re: Nimbus UI fields

2014-05-20 Thread Cody A. Ray
The two bolts which emit/transfer 0 are likely your persistentAggregate
bolts. These are *sinks*, so they don't logically emit or transfer tuples any
further.

You can add a name, which will show up in the UI, to help you see how
Trident compiles into your Storm topology:

.name("Aggregator 1")
.persistentAggregate(...)

.name("Aggregator 2")
.persistentAggregate(...)
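
For instance, a minimal sketch of where those calls sit (the spout, the
Split function, and the in-memory state here are placeholders, not taken
from your topology):

import backtype.storm.tuple.Fields;
import storm.trident.TridentTopology;
import storm.trident.operation.builtin.Count;
import storm.trident.testing.MemoryMapState;

TridentTopology topology = new TridentTopology();
topology.newStream("spout", spout)
        .each(new Fields("sentence"), new Split(), new Fields("word"))
        .groupBy(new Fields("word"))
        .name("Aggregator 1") // this label shows up on the compiled bolt in the UI
        .persistentAggregate(new MemoryMapState.Factory(),
                             new Count(), new Fields("count"));

The name makes it much easier to map b-0, b-1, ... in the UI back to your
Trident operations.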

-Cody


On Tue, May 20, 2014 at 7:38 PM, Harsha  wrote:

>  Executed refers to the number of incoming tuples processed.
>
> Capacity is determined by (executed * latency) / window (the time duration).
>
> The UI should give you a description of those stats if you hover over the
> table headers.
>
>
>
>
> On Tue, May 20, 2014, at 03:36 PM, Raphael Hsieh wrote:
>
> I reattached the previous image in case it was too difficult to read before
>
>
> On Tue, May 20, 2014 at 3:31 PM, Raphael Hsieh wrote:
>
> Hi, I'm confused as to what each field in the Storm UI represents and how to
> use the information.
> [image: Inline image 1]
>
> The bolts I have above are formed from Trident. These are the operations I
> believe each bolt represents:
> b-0 : .each(function) -> .each(filter)
> b-1 : .aggregate
> --split--
> b-2 : .persistentAggregate
> b-3 : .persistentAggregate
>
> What does it mean for the first two bolts to emit and transfer 0?
> What is the Capacity field? What does it represent?
> Does Execute refer to the tuples acked and successfully processed?
>
> Thanks
> --
> Raphael Hsieh
>
>
>
>
>
>
>
> --
> Raphael Hsieh
>
>
>
>
>


-- 
Cody A. Ray, LEED AP
cody.a@gmail.com
215.501.7891


Re: Nimbus UI fields

2014-05-20 Thread Harsha
Executed refers to the number of incoming tuples processed.

Capacity is determined by (executed * latency) / window (the time duration).

The UI should give you a description of those stats if you hover over the
table headers.
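
For example, a bolt that executed 12,000 tuples in a 10-minute (600-second)
window with an average execute latency of 25 ms has a capacity of
(12000 * 0.025) / 600 = 0.5, i.e. it was busy roughly half the time. Values
approaching 1.0 mean the bolt is a bottleneck and likely needs more
executors. (These numbers are illustrative, not taken from the screenshot.)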

On Tue, May 20, 2014, at 03:36 PM, Raphael Hsieh wrote:

I reattached the previous image in case it was too difficult to read
before


On Tue, May 20, 2014 at 3:31 PM, Raphael Hsieh
<raffihs...@gmail.com> wrote:

Hi, I'm confused as to what each field in the Storm UI represents and how
to use the information.
[Inline image 1]

The bolts I have above are formed from Trident. These are the operations
I believe each bolt represents:
b-0 : .each(function) -> .each(filter)
b-1 : .aggregate
--split--
b-2 : .persistentAggregate
b-3 : .persistentAggregate

What does it mean for the first two bolts to emit and transfer 0?
What is the Capacity field? What does it represent?
Does Execute refer to the tuples acked and successfully processed?

Thanks
--
Raphael Hsieh






--
Raphael Hsieh





Need ideas about twitter projects using storm

2014-05-20 Thread researcher cs
I'm a researcher and need your help to build a simple project on Twitter
using Storm, as I'm new to open source in general.

I searched and found "Storm-Election". Since I'm new, is it simple enough
for me? I'd like to know which algorithm it uses, so that I can either
modify that algorithm or apply another algorithm that hasn't been used on
it before.

Or, what other projects could I do with Twitter?

I hope you get my point.


Re: Nimbus UI field

2014-05-20 Thread Cody A. Ray
The two bolts that emit and transfer 0 are most likely your
persistentAggregates. They're sinks, so they don't emit or transfer
anything.

I forget the exact definition of capacity, but it indicates when a bolt is
taking too long to process. If it's greater than one, that bolt is a
bottleneck. It's something like the percentage of time a worker spends
processing that bolt. (It can be greater than one, so it might be the
percentage of time over all workers... unclear.)

-Cody

On May 20, 2014 4:32 PM, "Raphael Hsieh"  wrote:
>
> Hi, I'm confused as to what each field in the Storm UI represents and how
> to use the information.
>
> The bolts I have above are formed from Trident. These are the operations I
> believe each bolt represents:
> b-0 : .each(function) -> .each(filter)
> b-1 : .aggregate
> --split--
> b-2 : .persistentAggregate
> b-3 : .persistentAggregate
>
> What does it mean for the first two bolts to emit and transfer 0?
> What is the Capacity field? What does it represent?
> Does Execute refer to the tuples acked and successfully processed?
>
> Thanks
> --
> Raphael Hsieh
>
>
>


Nimbus UI fields

2014-05-20 Thread Raphael Hsieh
Hi, I'm confused as to what each field in the Storm UI represents and how to
use the information.
[Inline image 1]

The bolts I have above are formed from Trident. These are the operations I
believe each bolt represents:
b-0 : .each(function) -> .each(filter)
b-1 : .aggregate
--split--
b-2 : .persistentAggregate
b-3 : .persistentAggregate

What does it mean for the first two bolts to emit and transfer 0?
What is the Capacity field? What does it represent?
Does Execute refer to the tuples acked and successfully processed?

Thanks
-- 
Raphael Hsieh


storm to HDFS and lambda architecture

2014-05-20 Thread Javi Roman
Hi!

I've been thinking about Nathan Marz lambda architecture with the
components:

1. Kafka as message bus, the entry point of raw data.
2. Camus to dump data into HDFS (the batch layer).
3. And Storm to dump data into HBase (the speed layer).

I guess this is the "classical" architecture (the theory). However, thinking
about the Storm-to-HDFS connector from P. Taylor: is dumping processed data
from Storm into HDFS a good idea, taking the lambda architecture into
account? Do you think this could slow down the speed layer? Or is the
Storm-to-HDFS connector better suited to other use cases?

Many thanks

--
Javi Roman


RE: Kafka Spout 0.8-plus stops consuming messages after a while

2014-05-20 Thread Tao, Jing
Yes, and I did not set the max message size properly on the Spout.
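
For anyone hitting the same thing: in storm-kafka-0.8-plus the relevant
knobs are public fields on the config object. A minimal sketch (the broker,
topic, paths, and sizes are illustrative):

SpoutConfig spoutConfig = new SpoutConfig(
        new ZkHosts("zkhost:2181"), // ZooKeeper ensemble used by Kafka
        "my-topic",                 // topic to consume
        "/kafka-spout",             // ZK root where offsets are stored
        "my-consumer-id");          // consumer id
// Both must be at least as large as the largest message the broker may deliver:
spoutConfig.fetchSizeBytes = 4 * 1024 * 1024;
spoutConfig.bufferSizeBytes = 4 * 1024 * 1024;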

From: Nathan Leung [mailto:ncle...@gmail.com]
Sent: Tuesday, May 20, 2014 10:43 AM
To: user
Subject: Re: Kafka Spout 0.8-plus stops consuming messages after a while

Hi Jing,

Was message.max.bytes changed in your Kafka server config to be higher than the
default value (1,000,000 bytes)?

-Nathan

On Mon, May 19, 2014 at 5:54 PM, Tao, Jing <j...@webmd.net> wrote:
I finally found the root cause.  Turns out the spout was reading a message
that exceeded the max message size.  After increasing the message size in
SpoutConfig, it worked.  It would have been nice if the Spout threw an error
or exception in such cases.

Jing

On May 16, 2014, at 12:09 PM, "Tao, Jing" <j...@webmd.net> wrote:
Hi,

I am using storm 0.9.0.1, kafka 0.8, and storm-kafka-0.8-plus.  In 
LocalCluster, after fetching and processing about 18,000 messages, the spout 
stops fetching from Kafka even though there are new messages in the queue.
There are no errors in the storm or kafka logs.  Any idea why this is happening?

Thanks,
Jing




Re: [VOTE] Storm Logo Contest - Round 1

2014-05-20 Thread Megan Kearl
#1 - 5pts


On Tue, May 20, 2014 at 11:46 AM, Gaspar Muñoz  wrote:

> #9 - 5pts
>
>
> 2014-05-20 18:43 GMT+02:00 Tom Brown :
>
> #9 - 5pts
>>
>>
>>
>>
>> On Tue, May 20, 2014 at 7:20 AM, Milinda Pathirage > > wrote:
>>
>>> #9 - 3pts
>>> #10 - 2pts
>>>
>>> Thanks
>>> Milinda
>>>
>>> On Mon, May 19, 2014 at 9:48 PM, Neville Li 
>>> wrote:
>>> > #11 - 3 pts.
>>> > #1 - 2 pts.
>>> >
>>> >
>>> > On Mon, May 19, 2014 at 3:17 PM, Binh Nguyen Van 
>>> wrote:
>>> >>
>>> >> #9 - 4 pts.
>>> >> #8 - 2 pts.
>>> >>
>>> >>
>>> >> On Mon, May 19, 2014 at 9:57 AM,  wrote:
>>> >>>
>>> >>> #11 - 3 pts
>>> >>> #6 - 2 pts
>>> >>>
>>> >>> Sent from Windows Mail
>>> >>>
>>> >>> From: Uditha Bandara Wijerathna
>>> >>> Sent: ‎Sunday‎, ‎May‎ ‎18‎, ‎2014 ‎4‎:‎28‎ ‎AM
>>> >>> To: user@storm.incubator.apache.org
>>> >>> Cc: d...@storm.incubator.apache.org
>>> >>>
>>> >>> On Thu, May 15, 2014 at 9:58 PM, P. Taylor Goetz 
>>> >>> wrote:
>>> 
>>>  This is a call to vote on selecting the top 3 Storm logos from the
>>> 11
>>>  entries received. This is the first of two rounds of voting. In the
>>> first
>>>  round the top 3 entries will be selected to move onto the second
>>> round where
>>>  the winner will be selected.
>>> 
>>>  The entries can be viewed on the storm website here:
>>> 
>>>  http://storm.incubator.apache.org/blog.html
>>> 
>>>  VOTING
>>> 
>>>  Each person can cast a single vote. A vote consists of 5 points
>>> that can
>>>  be divided among multiple entries. To vote, list the entry number,
>>> followed
>>>  by the number of points assigned. For example:
>>> 
>>>  #1 - 2 pts.
>>>  #2 - 1 pt.
>>>  #3 - 2 pts.
>>> 
>>>  Votes cast by PPMC members are considered binding, but voting is
>>> open to
>>>  anyone.
>>> 
>>>  This vote will be open until Thursday, May 22 11:59 PM UTC.
>>> 
>>>  - Taylor
>>> >>>
>>> >>>
>>> >>>
>>> >>> #9 - 2 Points
>>> >>> #1 - 3 Points
>>> >>> --
>>> >>> - Uditha Bandara Wijerathna -
>>> >>
>>> >>
>>> >
>>>
>>>
>>>
>>> --
>>> Milinda Pathirage
>>>
>>> PhD Student | Research Assistant
>>> School of Informatics and Computing | Data to Insight Center
>>> Indiana University
>>>
>>> twitter: milindalakmal
>>> skype: milinda.pathirage
>>> blog: http://milinda.pathirage.org
>>>
>>
>>
>


Re: [VOTE] Storm Logo Contest - Round 1

2014-05-20 Thread Gaspar Muñoz
#9 - 5pts


2014-05-20 18:43 GMT+02:00 Tom Brown :

> #9 - 5pts
>
>
>
>
> On Tue, May 20, 2014 at 7:20 AM, Milinda Pathirage 
> wrote:
>
>> #9 - 3pts
>> #10 - 2pts
>>
>> Thanks
>> Milinda
>>
>> On Mon, May 19, 2014 at 9:48 PM, Neville Li 
>> wrote:
>> > #11 - 3 pts.
>> > #1 - 2 pts.
>> >
>> >
>> > On Mon, May 19, 2014 at 3:17 PM, Binh Nguyen Van 
>> wrote:
>> >>
>> >> #9 - 4 pts.
>> >> #8 - 2 pts.
>> >>
>> >>
>> >> On Mon, May 19, 2014 at 9:57 AM,  wrote:
>> >>>
>> >>> #11 - 3 pts
>> >>> #6- 2 pts
>> >>>
>> >>> Sent from Windows Mail
>> >>>
>> >>> From: Uditha Bandara Wijerathna
>> >>> Sent: ‎Sunday‎, ‎May‎ ‎18‎, ‎2014 ‎4‎:‎28‎ ‎AM
>> >>> To: user@storm.incubator.apache.org
>> >>> Cc: d...@storm.incubator.apache.org
>> >>>
>> >>> On Thu, May 15, 2014 at 9:58 PM, P. Taylor Goetz 
>> >>> wrote:
>> 
>>  This is a call to vote on selecting the top 3 Storm logos from the 11
>>  entries received. This is the first of two rounds of voting. In the
>> first
>>  round the top 3 entries will be selected to move onto the second
>> round where
>>  the winner will be selected.
>> 
>>  The entries can be viewed on the storm website here:
>> 
>>  http://storm.incubator.apache.org/blog.html
>> 
>>  VOTING
>> 
>>  Each person can cast a single vote. A vote consists of 5 points that
>> can
>>  be divided among multiple entries. To vote, list the entry number,
>> followed
>>  by the number of points assigned. For example:
>> 
>>  #1 - 2 pts.
>>  #2 - 1 pt.
>>  #3 - 2 pts.
>> 
>>  Votes cast by PPMC members are considered binding, but voting is
>> open to
>>  anyone.
>> 
>>  This vote will be open until Thursday, May 22 11:59 PM UTC.
>> 
>>  - Taylor
>> >>>
>> >>>
>> >>>
>> >>> #9 - 2 Points
>> >>> #1 - 3 Points
>> >>> --
>> >>> - Uditha Bandara Wijerathna -
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Milinda Pathirage
>>
>> PhD Student | Research Assistant
>> School of Informatics and Computing | Data to Insight Center
>> Indiana University
>>
>> twitter: milindalakmal
>> skype: milinda.pathirage
>> blog: http://milinda.pathirage.org
>>
>
>


Re: [VOTE] Storm Logo Contest - Round 1

2014-05-20 Thread Tom Brown
#9 - 5pts




On Tue, May 20, 2014 at 7:20 AM, Milinda Pathirage wrote:

> #9 - 3pts
> #10 - 2pts
>
> Thanks
> Milinda
>
> On Mon, May 19, 2014 at 9:48 PM, Neville Li  wrote:
> > #11 - 3 pts.
> > #1 - 2 pts.
> >
> >
> > On Mon, May 19, 2014 at 3:17 PM, Binh Nguyen Van 
> wrote:
> >>
> >> #9 - 4 pts.
> >> #8 - 2 pts.
> >>
> >>
> >> On Mon, May 19, 2014 at 9:57 AM,  wrote:
> >>>
> >>> #11 - 3 pts
> >>> #6 - 2 pts
> >>>
> >>> Sent from Windows Mail
> >>>
> >>> From: Uditha Bandara Wijerathna
> >>> Sent: ‎Sunday‎, ‎May‎ ‎18‎, ‎2014 ‎4‎:‎28‎ ‎AM
> >>> To: user@storm.incubator.apache.org
> >>> Cc: d...@storm.incubator.apache.org
> >>>
> >>> On Thu, May 15, 2014 at 9:58 PM, P. Taylor Goetz 
> >>> wrote:
> 
>  This is a call to vote on selecting the top 3 Storm logos from the 11
>  entries received. This is the first of two rounds of voting. In the
> first
>  round the top 3 entries will be selected to move onto the second
> round where
>  the winner will be selected.
> 
>  The entries can be viewed on the storm website here:
> 
>  http://storm.incubator.apache.org/blog.html
> 
>  VOTING
> 
>  Each person can cast a single vote. A vote consists of 5 points that
> can
>  be divided among multiple entries. To vote, list the entry number,
> followed
>  by the number of points assigned. For example:
> 
>  #1 - 2 pts.
>  #2 - 1 pt.
>  #3 - 2 pts.
> 
>  Votes cast by PPMC members are considered binding, but voting is open
> to
>  anyone.
> 
>  This vote will be open until Thursday, May 22 11:59 PM UTC.
> 
>  - Taylor
> >>>
> >>>
> >>>
> >>> #9 - 2 Points
> >>> #1 - 3 Points
> >>> --
> >>> - Uditha Bandara Wijerathna -
> >>
> >>
> >
>
>
>
> --
> Milinda Pathirage
>
> PhD Student | Research Assistant
> School of Informatics and Computing | Data to Insight Center
> Indiana University
>
> twitter: milindalakmal
> skype: milinda.pathirage
> blog: http://milinda.pathirage.org
>


Re: Understanding ACKing mechanism

2014-05-20 Thread P Ghosh
Got the issue resolved.
1. I was not anchoring to the incoming tuple, so effectively all the bolts
after impactBolt were not tied into the tuple tree; the ack of impactBolt was
causing the spout's ack to be called, and a proper DAG was not created. So
the number I was seeing in the WIP was not the true number of pending tuples,
which made the whole thing confusing. After I put in proper anchoring and the
fix below, I can see pendingTuples becoming zero occasionally, which means it
is working as expected.
2. I was not doing a *parts.put(...)* after *parts = new
HashMap<GlobalStreamId, Tuple>();* in the code quoted below (a slip-up). This
was resulting in a leak.
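
For reference, the anchored form of an emit in a BaseRichBolt looks like this
(a minimal sketch; the stream id matches the topology below, but the
processing and helper methods are illustrative):

public void execute(Tuple tuple) {
    String json = process(tuple); // hypothetical per-bolt work
    // Passing the input tuple as the anchor ties the emitted tuple into the
    // tuple tree, so the spout's ack/fail fires only after the whole DAG
    // (including combinerBolt) finishes:
    getOutputCollector().emit("impactBolt_stream", tuple,
            new Values(getId(tuple), json));
    getOutputCollector().ack(tuple);
}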

Thanks,
Prasun


On Mon, May 19, 2014 at 1:12 AM, P Ghosh  wrote:

> I have a topology that looks like this:
> *All bolts emit the fields "id" and "json"*
> *The spout emits only "id"*
> *All bolts and the spout emit on a stream named <componentName>_stream*
>
> *Topology Definition*
>
> *==*
>
> builder.setSpout("citySpout", citySpout, 10);
>
> builder.setBolt("impactBolt", impactBolt, 5)
>     .fieldsGrouping("citySpout", "citySpout_stream", new Fields("id"));
>
> builder.setBolt("restaurantBolt", restaurantBolt, 5)
>     .fieldsGrouping("impactBolt", "impactBolt_stream", new Fields("id"));
>
> builder.setBolt("theatresBolt", theatresBolt, 5)
>     .fieldsGrouping("impactBolt", "impactBolt_stream", new Fields("id"));
>
> builder.setBolt("libraryBolt", libraryBolt, 5)
>     .fieldsGrouping("impactBolt", "impactBolt_stream", new Fields("id"));
>
> builder.setBolt("transportBolt", transportBolt, 5)
>     .fieldsGrouping("impactBolt", "impactBolt_stream", new Fields("id"));
>
> builder.setBolt("crimeBolt", crimeBolt, 5)
>     .fieldsGrouping("impactBolt", "impactBolt_stream", new Fields("id"));
>
> builder.setBolt("combinerBolt", combinerBolt, 5)
>     .fieldsGrouping("restaurantBolt", "restaurantBolt_stream", new Fields("id"))
>     .fieldsGrouping("theatresBolt", "theatresBolt_stream", new Fields("id"))
>     .fieldsGrouping("libraryBolt", "libraryBolt_stream", new Fields("id"))
>     .fieldsGrouping("transportBolt", "transportBolt_stream", new Fields("id"))
>     .fieldsGrouping("crimeBolt", "crimeBolt_stream", new Fields("id"));
>
>
> *CombinerBolt*
>
> *== *
>
> public class CombinerBolt extends BaseRichBolt {
>
>     ...
>
>     public void execute(Tuple tuple) {
>         String id = getId(tuple); // gets the value of the "id" field from the tuple
>         List<Object> idList = Arrays.asList(new Object[] { id });
>         GlobalStreamId streamId = new GlobalStreamId(tuple.getSourceComponent(),
>                 tuple.getSourceStreamId());
>
>         Map<GlobalStreamId, Tuple> parts;
>         if (!pendingTuples.containsKey(idList)) {
>             parts = new HashMap<GlobalStreamId, Tuple>();
>             pendingTuples.put(idList, parts);
>             // NOTE: this branch never does parts.put(streamId, tuple) --
>             // the slip-up described in the fix above, which leaked entries.
>         } else {
>             parts = pendingTuples.get(idList);
>             logger.debug("Pending Tuples [Count:" + pendingTuples.size() + "]");
>             if (parts.containsKey(streamId)) {
>                 logger.warn("Received same side of single join twice, overriding");
>             }
>             parts.put(streamId, tuple);
>             if (parts.size() == _numSources) { // _numSources is computed in
>                 // prepare(..) using context.getThisSources().size()
>                 pendingTuples.remove(idList);
>                 List<Tuple> tuples = new ArrayList<Tuple>(parts.values());
>                 try {
>                     processTuple(tuples); // this is where the actual document merging is done
>                 } catch (Exception exc) {
>                     logger.error("There was an exception processing Tuples [" + tuples + "]",
>                             exc);
>                 }
>             }
>         }
>
>         getOutputCollector().ack(tuple);
>     }
>
>     ...
>
> }
>
> In my citySpout I have a work-in-progress (WIP) set (in Redis) [having a
> "set" ensures that we don't run multiple transactions for the same city at
> the same time], where every id (city) that is emitted is put in, and it is
> removed when the corresponding ack or fail is invoked on the spout.
>
> *1.* I'm seeing a lot of "Received same side of single join twice,
> overriding". My expectation was otherwise: since I'm acking without waiting
> for the join, there shouldn't be a lot of retrying happening.
>
> *2.* I deactivated the topology and could see the WIP going down to 0 in a
> few seconds; however, I continued to see my bolts working for another few
> seconds even when the WIP had nothing in it. Items are removed from the WIP
> only when an ACK/FAIL is received at the spout. Based on the details in
> https://github.com/nathanmarz/storm/wiki/Acking-framework-implementation, my
> expectation was that processing would stop the moment the WIP reaches 0.
>
> I'm curious why my bolts are still getting data.
>
> I will continue to do more investigation. In the meantime, if you see any
> glaring issue with this approach, please let me know.
>
> Thanks,
> Prasun
>


Re: Storm: How to emit top 5 word count every 5 minutes ?

2014-05-20 Thread yogesh panchal
Hi Susheel,

Will a tick tuple work for this? Do you know anything about using tick tuples
with a Python topology?

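For reference, the Java pattern is to request tick tuples in the bolt's
component configuration and branch on them in execute(). A minimal sketch
(the emitTopFive and updateCounts helpers are hypothetical):

import java.util.HashMap;
import java.util.Map;
import backtype.storm.Config;
import backtype.storm.Constants;
import backtype.storm.tuple.Tuple;

@Override
public Map<String, Object> getComponentConfiguration() {
    Map<String, Object> conf = new HashMap<String, Object>();
    conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 300); // a tick every 5 minutes
    return conf;
}

private static boolean isTickTuple(Tuple tuple) {
    return Constants.SYSTEM_COMPONENT_ID.equals(tuple.getSourceComponent())
            && Constants.SYSTEM_TICK_STREAM_ID.equals(tuple.getSourceStreamId());
}

@Override
public void execute(Tuple tuple) {
    if (isTickTuple(tuple)) {
        emitTopFive();        // flush the current top-5 word counts
    } else {
        updateCounts(tuple);  // accumulate word counts
    }
    collector.ack(tuple);
}

(A multilang/Python bolt should see the same thing: with the frequency set,
the tick arrives as an ordinary tuple from the __system component on the
__tick stream.)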

On Sat, May 17, 2014 at 10:02 AM, Susheel Kumar Gadalay  wrote:

> Use tick tuple
> On 5/16/14, yogesh panchal  wrote:
> > Hi, is it possible to emit the top 5 word counts every 5 minutes in the
> > Storm word count example?
> >
> > --
> > Thanks & Regards
> >
> > Yogesh Panchal
> >
>



-- 
Thanks & Regards

Yogesh Panchal


>>> If You Go Black There is No Other Way to Come Back !


Re: Kafka Spout 0.8-plus stops consuming messages after a while

2014-05-20 Thread Nathan Leung
Hi Jing,

Was message.max.bytes changed in your Kafka server config to be higher than
the default value (1,000,000 bytes)?

-Nathan


On Mon, May 19, 2014 at 5:54 PM, Tao, Jing  wrote:

>  I finally found the root cause.  Turns out the spout was reading a
> message that exceeded the max message size.  After increasing the message
> size in SpoutConfig, it worked.  It would have been nice if the Spout threw
> an error or exception in such cases.
>
> Jing
>
> On May 16, 2014, at 12:09 PM, "Tao, Jing"  wrote:
>
>   Hi,
>
>
>
> I am using storm 0.9.0.1, kafka 0.8, and storm-kafka-0.8-plus.  In
> LocalCluster, after fetching and processing about 18,000 messages, the
> spout stops fetching from Kafka even though there are new messages in the
> queue.
>
> There are no errors in the storm or kafka logs.  Any idea why this is
> happening?
>
>
>
> Thanks,
>
> Jing
>
>
>
>


Re: logging gc event

2014-05-20 Thread Nathan Leung
You can do something like this: -Xloggc:/logs/gc-worker-%ID%.log
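
In storm.yaml that could look like the following (path and sizes are
illustrative; Storm substitutes %ID% with the worker's port when it launches
each worker JVM, which gives you one GC log per worker):

worker.childopts: "-XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1M -Xloggc:/var/log/storm/gc-worker-%ID%.log"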


On Wed, May 14, 2014 at 2:01 PM, Sean Allen wrote:

> is anyone logging gc events for workers in their cluster?
>
> outside of the storm, the following jvm options are pretty standard for us:
>
> -XX:+PrintGCTimeStamps -XX:+PrintGCDetails  -XX:+UseGCLogFileRotation
> -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=1M -XX:+PrintGCDateStamps
> -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime
> -XX:+PrintSafepointStatistics -Xloggc:SOME_FILE
>
> I'm not sure how we could use these in storm and be able to create a per
> worker log file for gc events.
>
> --
>
> Ce n'est pas une signature
>


Supervisor getting repeatedly disconnected from the ZooKeeper

2014-05-20 Thread Sajith
Hi all,

In my topology I observe that one of the supervisor machines gets
repeatedly disconnected from ZooKeeper, and it prints the following
error:

EndOfStreamException: Unable to read additional data from client sessionid
0x146193a4b70073d, likely client has closed socket
at
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:662)
2014-05-20 06:51:20,631 [myid:] - INFO  [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
client /204.13.85.2:37938 which had sessionid 0x146193a4b70073d
2014-05-20 06:51:20,631 [myid:] - WARN  [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid
0x146193a4b700741, likely client has closed socket
at
org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:662)
2014-05-20 06:51:20,632 [myid:] - INFO  [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1007] - Closed socket connection for
client /204.13.85.2:37942 which had sessionid 0x146193a4b700741
2014-05-20 06:51:20,634 [myid:] - WARN  [NIOServerCxn.Factory:
0.0.0.0/0.0.0.0:2181:NIOServerCnxn@357] - caught end of stream exception

In the supervisor log, the following errors are printed along with the
above ZooKeeper errors:

2014-05-20 06:59:33 b.s.d.supervisor [INFO]
dfa06019-0c29-4782-94da-c37fcc75243d still hasn't started
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Worker
dfa06019-0c29-4782-94da-c37fcc75243d failed to start
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Worker
4677c74c-8239-4cd3-8ff7-c95c3724e40e failed to start
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Worker
39c70558-c144-4da6-b685-841d7a531ec0 failed to start
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Worker
983c05ff-107e-483c-97e6-bb5c309606ec failed to start
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down and clearing
state for id 39c70558-c144-4da6-b685-841d7a531ec0. Current supervisor time:
1400594374. State: :not-started, Heartbeat: nil
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:39c70558-c144-4da6-b685-841d7a531ec0
2014-05-20 06:59:34 b.s.util [INFO] Error when trying to kill 25682.
Process is probably already dead.
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shut down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:39c70558-c144-4da6-b685-841d7a531ec0
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down and clearing
state for id 983c05ff-107e-483c-97e6-bb5c309606ec. Current supervisor time:
1400594374. State: :not-started, Heartbeat: nil
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:983c05ff-107e-483c-97e6-bb5c309606ec
2014-05-20 06:59:34 b.s.util [INFO] Error when trying to kill 25684.
Process is probably already dead.
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shut down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:983c05ff-107e-483c-97e6-bb5c309606ec
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down and clearing
state for id 4677c74c-8239-4cd3-8ff7-c95c3724e40e. Current supervisor time:
1400594374. State: :not-started, Heartbeat: nil
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:4677c74c-8239-4cd3-8ff7-c95c3724e40e
2014-05-20 06:59:34 b.s.util [INFO] Error when trying to kill 25680.
Process is probably already dead.
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shut down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:4677c74c-8239-4cd3-8ff7-c95c3724e40e
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down and clearing
state for id dfa06019-0c29-4782-94da-c37fcc75243d. Current supervisor time:
1400594374. State: :not-started, Heartbeat: nil
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shutting down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:dfa06019-0c29-4782-94da-c37fcc75243d
2014-05-20 06:59:34 b.s.util [INFO] Error when trying to kill 25679.
Process is probably already dead.
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Shut down
5dd6583a-a5a4-4d76-8797-e885eacdf18f:dfa06019-0c29-4782-94da-c37fcc75243d
2014-05-20 06:59:34 b.s.d.supervisor [INFO] Launching worker with
assignment #backtype.storm.daemon.supervisor.LocalAssignment{:storm-id
"LatencyMeasureTopology-17-1400594223", :executors [[4 4]]} for this
supervisor 5dd6583a-a5a4-4d76-8797-e885eacdf18f on port 6700 with id
05c9e509-c29f-4310-b959-02f083224518

What's going wrong here? I feel like this is a heartbeat expiry issue. If
so, what are the parameters I should tweak to avoid it?
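
For context, the timeout settings usually involved here, shown with their
stock defaults (storm.yaml form; how far to raise them depends on the
cluster):

storm.zookeeper.session.timeout: 20000      # ms; raise if ZK sessions expire under load
storm.zookeeper.connection.timeout: 15000   # ms
supervisor.worker.start.timeout.secs: 120   # grace period for a worker to start
supervisor.worker.timeout.secs: 30          # missed-heartbeat window for running workers
nimbus.task.timeout.secs: 30                # nimbus-side executor heartbeat window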

Thanks,
Sajith.


Re: [VOTE] Storm Logo Contest - Round 1

2014-05-20 Thread Milinda Pathirage
#9 - 3pts
#10 - 2pts

Thanks
Milinda

On Mon, May 19, 2014 at 9:48 PM, Neville Li  wrote:
> #11 - 3 pts.
> #1 - 2 pts.
>
>
> On Mon, May 19, 2014 at 3:17 PM, Binh Nguyen Van  wrote:
>>
>> #9 - 4 pts.
>> #8 - 2 pts.
>>
>>
>> On Mon, May 19, 2014 at 9:57 AM,  wrote:
>>>
>>> #11 - 3 pts
>>> #6 - 2 pts
>>>
>>> Sent from Windows Mail
>>>
>>> From: Uditha Bandara Wijerathna
>>> Sent: ‎Sunday‎, ‎May‎ ‎18‎, ‎2014 ‎4‎:‎28‎ ‎AM
>>> To: user@storm.incubator.apache.org
>>> Cc: d...@storm.incubator.apache.org
>>>
>>> On Thu, May 15, 2014 at 9:58 PM, P. Taylor Goetz 
>>> wrote:

 This is a call to vote on selecting the top 3 Storm logos from the 11
 entries received. This is the first of two rounds of voting. In the first
 round the top 3 entries will be selected to move onto the second round 
 where
 the winner will be selected.

 The entries can be viewed on the storm website here:

 http://storm.incubator.apache.org/blog.html

 VOTING

 Each person can cast a single vote. A vote consists of 5 points that can
 be divided among multiple entries. To vote, list the entry number, followed
 by the number of points assigned. For example:

 #1 - 2 pts.
 #2 - 1 pt.
 #3 - 2 pts.

 Votes cast by PPMC members are considered binding, but voting is open to
 anyone.

 This vote will be open until Thursday, May 22 11:59 PM UTC.

 - Taylor
>>>
>>>
>>>
>>> #9 - 2 Points
>>> #1 - 3 Points
>>> --
>>> - Uditha Bandara Wijerathna -
>>
>>
>



-- 
Milinda Pathirage

PhD Student | Research Assistant
School of Informatics and Computing | Data to Insight Center
Indiana University

twitter: milindalakmal
skype: milinda.pathirage
blog: http://milinda.pathirage.org


Re: Help Required: Exception doing LeftOuterJoining on Multiple Streams

2014-05-20 Thread Susheel Kumar Gadalay
You can join like this in the main function:

  Stream joinStreamInner =
  topology.join(streams, joinFields,
 new Fields("RequestId", "ColumnMapId", "FFValue", "SFValue",
"TFValue"),
 //JoinType.mixed(JoinType.INNER, JoinType.OUTER))
 JoinType.INNER)
 .each(new Fields("RequestId", "ColumnMapId", "FFValue",
"SFValue", "TFValue"), new PrintFilter("JoinedOutputInner"));

  Stream joinStreamOuter =
  topology.join(streams, joinFields,
 new Fields("RequestId", "ColumnMapId", "FFValue", "SFValue",
"TFValue"),
 //JoinType.mixed(JoinType.INNER, JoinType.OUTER))
 JoinType.OUTER)
 .each(new Fields("RequestId", "ColumnMapId", "FFValue",
"SFValue", "TFValue"), new PrintFilter("JoinedOutputOuter"));

  topology.join(joinStreamInner, new Fields("RequestId",
"ColumnMapId", "FFValue", "SFValue", "TFValue"),
 joinStreamOuter, new Fields("RequestId", "ColumnMapId",
"FFValue", "SFValue", "TFValue"),
 new Fields("RequestId", "ColumnMapId", "FFValue", "SFValue",
"TFValue"),
 //JoinType.mixed(JoinType.INNER, JoinType.OUTER))
 JoinType.INNER)
 .each(new Fields("RequestId", "ColumnMapId", "FFValue",
"SFValue", "TFValue"), new PrintFilter("JoinedOutput"));

But the outer join is not giving the expected result.
It is still doing the inner join only.

I am getting this output.

## PrintFilter [SpoutOutput]: [RequestId_1]
## PrintFilter [FirstFunctionOutput]: [RequestId_1, ColumnMapId_1, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_1, ColumnMapId_2, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_1, ColumnMapId_3, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_1, ColumnMapId_4, Value1]
## PrintFilter [SecondFunctionOutput]: [RequestId_1, ColumnMapId_1, Value2]
## PrintFilter [SecondFunctionOutput]: [RequestId_1, ColumnMapId_2, Value2]
## PrintFilter [ThirdFunctionOutput]: [RequestId_1, ColumnMapId_1, Value3]
## PrintFilter [ThirdFunctionOutput]: [RequestId_1, ColumnMapId_3, Value3]
## PrintFilter [JoinedOutputOuter]: [RequestId_1, ColumnMapId_1,
Value1, Value2, Value3]
## PrintFilter [JoinedOutputInner]: [RequestId_1, ColumnMapId_1,
Value1, Value2, Value3]
## PrintFilter [JoinedOutput]: [RequestId_1, ColumnMapId_1, Value1,
Value2, Value3]
## PrintFilter [SpoutOutput]: [RequestId_2]
## PrintFilter [FirstFunctionOutput]: [RequestId_2, ColumnMapId_1, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_2, ColumnMapId_2, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_2, ColumnMapId_3, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_2, ColumnMapId_4, Value1]
## PrintFilter [SecondFunctionOutput]: [RequestId_2, ColumnMapId_1, Value2]
## PrintFilter [SecondFunctionOutput]: [RequestId_2, ColumnMapId_2, Value2]
## PrintFilter [ThirdFunctionOutput]: [RequestId_2, ColumnMapId_1, Value3]
## PrintFilter [ThirdFunctionOutput]: [RequestId_2, ColumnMapId_3, Value3]
## PrintFilter [JoinedOutputOuter]: [RequestId_2, ColumnMapId_1,
Value1, Value2, Value3]
## PrintFilter [JoinedOutputInner]: [RequestId_2, ColumnMapId_1,
Value1, Value2, Value3]
## PrintFilter [JoinedOutput]: [RequestId_2, ColumnMapId_1, Value1,
Value2, Value3]
## PrintFilter [SpoutOutput]: [RequestId_3]
## PrintFilter [FirstFunctionOutput]: [RequestId_3, ColumnMapId_1, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_3, ColumnMapId_2, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_3, ColumnMapId_3, Value1]
## PrintFilter [FirstFunctionOutput]: [RequestId_3, ColumnMapId_4, Value1]
## PrintFilter [SecondFunctionOutput]: [RequestId_3, ColumnMapId_1, Value2]
## PrintFilter [SecondFunctionOutput]: [RequestId_3, ColumnMapId_2, Value2]
## PrintFilter [ThirdFunctionOutput]: [RequestId_3, ColumnMapId_1, Value3]
## PrintFilter [ThirdFunctionOutput]: [RequestId_3, ColumnMapId_3, Value3]
## PrintFilter [JoinedOutputOuter]: [RequestId_3, ColumnMapId_1,
Value1, Value2, Value3]
## PrintFilter [JoinedOutputInner]: [RequestId_3, ColumnMapId_1,
Value1, Value2, Value3]
## PrintFilter [JoinedOutput]: [RequestId_3, ColumnMapId_1, Value1,
Value2, Value3]

On 5/12/14, Kiran Kumar  wrote:
> Below is the test code where I am trying to do a left outer join on multiple
> streams.
>
> The issue I am getting is something like:
> RuntimeException: Expecting 4 lists instead getting 3 lists.
>
> FYI: This works fine for an inner join, but fails with the above exception
> when I am trying a left outer join.
>
> ==
>
> package com.trident.fork.joins.test;
> /**
>  * @author dkirankumar
>  */
> import java.util.ArrayList;
> import java.util.List;
>
> import storm.trident.JoinType;
> import storm.trident.Stream;
> import storm.trident.TridentTopology;
> import backtype.storm.Config;
> import backtype.storm.LocalCluster;
> import backtype.storm.tuple.Fields;
>
> public class Topology {
>
> public static void main(String[] args) {
> TridentTopology topology = new TridentTopology();
>
> Stream stream = topology.newStre