Re: Storm 0.10.x defining log file for topology.

2016-02-29 Thread Justin Hopper
0.10.0 uses log4j2. Look under the log4j2 directory and you will find a worker.xml
file, or something to that effect. There you can define the output. Keep in
mind that this will affect all topologies on that supervisor.
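As a sketch (appender names and properties below are illustrative; check the worker.xml actually shipped with your Storm install), the worker appender in log4j2/worker.xml looks something like this, and the fileName pattern is what controls where workers log:

```xml
<!-- Hedged sketch of a log4j2 worker.xml; names and properties are
     illustrative, not copied from a specific Storm release. -->
<configuration monitorInterval="60">
  <appenders>
    <RollingFile name="A1"
                 fileName="${sys:storm.log.dir}/${sys:logfile.name}"
                 filePattern="${sys:storm.log.dir}/${sys:logfile.name}.%i.gz">
      <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %c{1} [%p] %m%n"/>
      <SizeBasedTriggeringPolicy size="100 MB"/>
      <DefaultRolloverStrategy max="9"/>
    </RollingFile>
  </appenders>
  <loggers>
    <root level="info">
      <appender-ref ref="A1"/>
    </root>
  </loggers>
</configuration>
```

The `${sys:logfile.name}` lookup resolves a JVM system property that the supervisor passes when it launches each worker, which is why overriding it from topology config alone does not take effect.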

On Feb 29, 2016, at 13:52, Stephen Powis wrote:


How to configure broker ids in storm kafka?

2016-02-29 Thread Jason Kania
Hello,
I am trying to run storm-kafka with multiple ZooKeeper and Kafka nodes, but I
am getting "Node /brokers/ids/0 does not exist". In my case, /brokers/ids/8 and
/brokers/ids/9 exist, as per my Kafka configuration. I cannot seem to find out
how to configure storm-kafka to look for these broker ids instead of
/brokers/ids/0, which I also have no idea as to its source.
Can someone point me to documentation, a code example or a configuration 
parameter for doing this?
Thanks,
Jason
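For reference, a sketch of where storm-kafka is told where to look in ZooKeeper (hostnames, topic name, and paths below are placeholders, and this assumes the 0.10.x-era storm.kafka API): the second argument to ZkHosts is the ZK path under which Kafka registers its brokers. It must match Kafka's ZooKeeper chroot, e.g. "/kafka/brokers" if Kafka was started under a "/kafka" chroot; storm-kafka reads the partition leader id from the topic's partition state under that path and then looks up /ids/&lt;leader&gt; beneath it.

```java
import storm.kafka.BrokerHosts;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

public class KafkaSpoutSketch {
    public static KafkaSpout build() {
        // Second argument: where Kafka registers brokers in ZooKeeper.
        // "/brokers" is the Kafka default; adjust if Kafka uses a chroot.
        BrokerHosts hosts = new ZkHosts("zk1:2181,zk2:2181,zk3:2181", "/brokers");
        // zkRoot ("/storm-kafka") and id ("my-spout-id") are where the spout
        // stores its own offsets, unrelated to the broker path above.
        SpoutConfig cfg = new SpoutConfig(hosts, "my-topic", "/storm-kafka", "my-spout-id");
        return new KafkaSpout(cfg);
    }
}
```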


Storm 0.10.x defining log file for topology.

2016-02-29 Thread Stephen Powis
Hey!

I recently upgraded to Storm 0.10.0 (and log4j as a result).  I've noticed
that topologies no longer log into worker-.log and instead into a
file named -worker-.log

I was curious as to whether there's a way to control the name of this log
file.  I've tried adding a topology config item named "logfile.name" but
no luck.  I must be missing something.  Is there a way to hardcode the
topology ID, or set this log file to a static value on a per-topology basis?

Thanks!


Re: unsubscribe

2016-02-29 Thread Erik Weathers
Unfortunately you cannot unsubscribe by emailing the actual list:

   - https://storm.apache.org/community.html

==> Just send an email to:

   - user-unsubscr...@storm.apache.org


On Mon, Feb 29, 2016 at 11:53 AM, manish belsare <
manishbelsare2...@gmail.com> wrote:

> Unsubscribe
>
> On Mon, Feb 29, 2016 at 11:42 AM Brunner, Bill 
> wrote:
>
>> unsubscribe
>> --
>> This message, and any attachments, is for the intended recipient(s) only,
>> may contain information that is privileged, confidential and/or proprietary
>> and subject to important terms and conditions available at
>> http://www.bankofamerica.com/emaildisclaimer. If you are not the
>> intended recipient, please delete this message.
>>
>


Re: unsubscribe

2016-02-29 Thread manish belsare
Unsubscribe
On Mon, Feb 29, 2016 at 11:42 AM Brunner, Bill 
wrote:

> unsubscribe


order of execution topologies

2016-02-29 Thread Spico Florin
Hello!
When all the free slots are occupied and you are still submitting
topologies, what will be the order of these held topologies when the
existing one


Re: Storm replay duplicate handling

2016-02-29 Thread Lakshmanan Muthuraman
Most likely it is not possible. You would need to deploy two different
topologies for it, or implement de-duplication logic
downstream in your database layer and S3 layer.
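One downstream de-duplication sketch (a hypothetical helper, not part of the Storm API): key each S3 write by the tuple's message id and skip ids already written, so a replayed tuple does not produce a second write. In a real topology the "seen" set would live in a durable store (a database table, or an existence check on the S3 key itself) rather than in memory:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper: tracks which tuple ids the S3 bolt has already
// written, so a replay triggered by a downstream failure is skipped.
class DedupTracker {
    private final Set<String> seen = new HashSet<>();

    /** Returns true the first time an id is offered, false on replays. */
    boolean firstTime(String tupleId) {
        return seen.add(tupleId);
    }
}
```

The S3 bolt would call `firstTime(messageId)` before writing and simply ack when it returns false, making the write idempotent from the topology's point of view.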

On Fri, Feb 26, 2016 at 12:28 PM, pradeep s 
wrote:

> Hi,
> I am processing CDC messages using Storm. My topology has 2 bolts.
> The first is a bolt to write data to S3, and the second is a bolt to write
> to a database.
> I am using anchored tuples. Now I am facing the issue of handling
> duplicate writes.
> When a message succeeds in the S3 bolt and a failure happens in the DB
> bolt, tuple replay happens. During replay the S3 bolt is invoked again and
> data is written to S3 again.
> Is there any way I can have the tuple replayed only for the failed bolt?
>
> Topology
> ---
>
> topologyBuilder.setSpout("mdpSpout", new SQSMessageReaderSpout(queueUrl),
> SPOUT_PARALLELISM);
>
> topologyBuilder.setBolt("mdpS3Bolt", new S3WriteBolt(),
> BOLT_PARALLELISM).shuffleGrouping("mdpSpout");
>
> topologyBuilder.setBolt("dbBolt", new DbBolt(), BOLT_PARALLELISM
> ).shuffleGrouping("mdpS3Bolt");
>
>
>
>
> Regards
> Pradeep S
>


How to get/see the thrift counterpart of a topology

2016-02-29 Thread Spico Florin
Hello!
 I would like to know how I can get/see how a topology structure is packed
 for the Thrift protocol.
More specifically, I would like to see the content of ComponentObject and
ComponentCommon, and whatever other information is sent to Nimbus.

As far as I know (please correct me if I'm wrong), there are two
parts of the topology that are sent to Nimbus:
 - the fat jar that contains the classes and their dependencies (sent via
Thrift??)
 - the topology structure as Thrift structures (also sent via Thrift).

As I said, I'm interested in the second point.
I look forward to your answers.
  Regards,
  Florin
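One way to look at the Thrift side locally (a sketch; assumes storm-core 0.10.x with the backtype.storm packages on the classpath): TopologyBuilder.createTopology() returns the Thrift-generated StormTopology struct, the same object that is submitted to Nimbus, and its getters expose the ComponentCommon and ComponentObject data directly:

```java
import backtype.storm.generated.Bolt;
import backtype.storm.generated.StormTopology;
import backtype.storm.topology.TopologyBuilder;

import java.util.Map;

public class TopologyThriftDump {
    public static void dump(TopologyBuilder builder) {
        // createTopology() builds the Thrift struct that would be sent to
        // Nimbus; Thrift-generated classes print all their fields via toString().
        StormTopology topology = builder.createTopology();
        for (Map.Entry<String, Bolt> e : topology.get_bolts().entrySet()) {
            // get_common() is the ComponentCommon: declared streams, inputs
            // (groupings), and parallelism hint for this component.
            System.out.println(e.getKey() + " -> " + e.getValue().get_common());
        }
        System.out.println("spouts: " + topology.get_spouts().keySet());
    }
}
```

The fat jar itself is uploaded separately from this struct, so dumping the StormTopology covers the second point above without touching the jar.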