Hi All,
We are organizing a Storm Meetup at Hortonworks HQ in Santa
Clara, CA. If you are interested in attending, please RSVP here:
https://www.meetup.com/Apache-Storm-Apache-Kafka/events/238975416/
Thanks,
Harsha
Hi All,
We are planning to schedule a Storm Meetup in the first week of April.
Here is the meetup link: https://www.meetup.com/Apache-Storm-Apache-Kafka/.
If you are interested in talking about your Storm use cases, there is one
more slot available; please reach out to me.
Thanks,
Harsha
Hi Davorin,
I recommend using a Hive 1.x or higher release. There were quite a
few issues in the Hive streaming API, and we also had issues with
heartbeating on the storm-hive side. These are all
fixed in the Storm 1.0 bolt, and Hive 1.0 includes streaming API fixes as well.
Thanks,
Harsha
On Tue, Oct 25
Abhishek,
Are you looking to do a rolling upgrade of the Kafka cluster or of Storm?
Harsha
On Fri, Aug 26, 2016 at 6:18 AM Abhishek Agarwal <abhishc...@gmail.com>
wrote:
>
> On Aug 26, 2016 2:50 PM, "Abhishek Agarwal" <abhishc...@gmail.com> wrote:
>
> >
>
> >
JIRA and patch available here:
https://issues.apache.org/jira/browse/STORM-2041.
On Fri, Aug 12, 2016 at 8:05 AM Harsha Chintalapani <st...@harsha.io> wrote:
> sorry for the multiple emails. Something went wrong on my email provider
> side.
>
> Instead of splitting into tw
sorry for the multiple emails. Something went wrong on my email provider
side.
Instead of splitting into two separate threads, as it's hard to keep track
of the discussion, we can continue this discussion on the users list, as it
will keep the Storm users in the discussion as well.
Thanks,
Harsha
On Fri
test mail ignore.
Hi All,
Dropping Java 7 support on master will allow us to use the new APIs
in Java 8, and since master is being used for the Java migration,
it's good to make the decision now. Let me know your thoughts.
Thanks,
Harsha
UnknownHost generally means the entries in your nimbus.seeds configuration
cannot be resolved to a hostname. Make sure the entries you added in
nimbus.seeds are pingable from the host you are submitting the topology from.
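As a sketch, the entry in storm.yaml on the client host would look something
like this (the hostnames are placeholders, not real ones):

```yaml
# storm.yaml on the machine you submit from; these hostnames
# are examples -- use names that resolve from the client host
nimbus.seeds: ["nimbus1.example.com", "nimbus2.example.com"]
```

A quick ping of each listed hostname from the client host verifies they resolve.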
On Mon, Jul 4, 2016 at 2:27 PM Walid Aljoby wrote:
> Hi all,
ppens through the disruptor queue.
If one needs to increase the size of the buffers for Netty, take a look at
the Netty configs in storm.yaml. We recommend going with the defaults.
Thanks,
Harsha
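If you do decide to tune them, the Netty settings live in storm.yaml; the
values below are illustrative only, not recommendations:

```yaml
# illustrative values only; the defaults are usually fine
storm.messaging.netty.buffer_size: 5242880   # bytes
storm.messaging.netty.max_retries: 30
```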
On Mon, Jul 4, 2016 at 9:59 AM Nathan Leung <ncle...@gmail.com> wrote:
> Double check how you are pushing d
Did you try setting your topology package name as another logger
(https://github.com/apache/storm/blob/master/log4j2/worker.xml#L80)? You
can control the level and other details there.
-Harsha
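A sketch of that, assuming your topology code lives under a package such as
com.example.mytopology (a placeholder), inside the Loggers section of
log4j2/worker.xml:

```xml
<!-- goes inside the <Loggers> section of log4j2/worker.xml;
     the package name is a placeholder for your own, and "A1"
     is the worker log appender defined in that file -->
<Logger name="com.example.mytopology" level="DEBUG" additivity="false">
    <AppenderRef ref="A1"/>
</Logger>
```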
On Sun, Jun 5, 2016, at 01:21 PM, anshu shukla wrote:
> +1 any update ??
>
> On S
HI Stephen,
Can you try setting ui.header.buffer.bytes to a higher value in
storm.yaml?
-Harsha
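For example, in storm.yaml on the UI host (the value here is just an
illustration):

```yaml
# raise the UI's HTTP header buffer; value is in bytes and
# illustrative -- pick what your headers actually need
ui.header.buffer.bytes: 65536
```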
On Thu, May 5, 2016, at 10:08 AM, Stephen Powis wrote:
> Hey!
> We've started getting this error frequently when trying to view our
> topology details via the webUI. Does anyone have any
Jungtaek,
I think filters that can support a regex give more flexibility.
Thanks,
Harsha
On Mon, May 2, 2016, at 07:48 PM, Jungtaek Lim wrote:
> Kevin,
>
> For specific task, you can register your own metrics which resides
> per task.
> But metrics doc on Storm is not kind enou
Jungtaek,
Probably a filter config to whitelist and blacklist certain metrics, so
that it scales when there are many workers and users can turn off
certain metrics.
Thanks,
Harsha
On Mon, May 2, 2016, at 06:19 AM, Stephen Powis wrote:
> Oooh I'd love this as well! I really
said above, Kafka
0.9.0.1 contains two Kafka APIs: the new one, which only works with a
0.9.0.1 Kafka cluster, and the old consumer API, which can work with 0.8.2.
Even though you compile with the 0.9.0.1 version, it will work with a
0.8.2.1 Kafka cluster.
Let me know if you have any questions.
Thanks,
Harsha
O
Did you try using kinit with the keytab? Make sure it's the same Unix
user who is running the Storm UI.
On Fri, Mar 18, 2016, at 02:58 PM, Andrey Dudin wrote:
> Hi guys.
>
> I try to configure Kerberos for Storm.
> I use storm 0.10.
> Now I try configure only UI, without other components.
>
>
> I
It's for Nimbus HA:
http://hortonworks.com/blog/fault-tolerant-nimbus-in-apache-storm/
On Sun, Mar 13, 2016, at 11:27 PM, Sai Dilip Reddy Kiralam wrote:
>
> Hi Harsha,
> can you explain why nimbus seeds are used - [nimbus.seeds: ["host1", "host2",
> "host3
l.example
make sure you copy the same storm.yaml to all the nodes in the
storm cluster.
-Harsha
On Sun, Mar 13, 2016, at 05:07 PM, Xiang Wang wrote:
> Hi,
>
> I guess you are in the wrong directory. Do "mvn package" under
> "$STORM_HOME/examples/storm-
> start
Rajashekar, The current storm-kafka connector uses Kafka's
SimpleConsumer API. Only Kafka's new consumer API has security
enabled. There is work being done to port the Kafka connector to use the new
consumer API.
Thanks, Harsha
On Thu, Jan 28, 2016, at 02:04 PM, Rajasekhar wrote:
>
&
to avoid this issue. This feature
will be part of the upcoming 1.0 release.
Thanks, Harsha
On Fri, Jan 8, 2016, at 02:20 PM, Ganesh Chandrasekaran wrote:
> When Nimbus went down, other topologies were still processing messages
> correctly. It’s only because when 1 half of my topology wen
@domain
-Harsha
On Sat, Dec 26, 2015, at 04:46 AM, Raja.Aravapalli wrote:
>
> Hi
>
> I am getting below exception when i am trying to write tuples into
> HDFS which is in a secured Hadoop cluster. Can someone pls share your
> thoughts and help me fix the issue
>
> java.lang
you need to package hdfs-site.xml and core-site.xml from your hadoop
cluster as part of your topology jar.
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_storm-user-guide/content/storm-connectors-secure.html
-Harsha
On Thu, Dec 24, 2015, at 08:48 PM, Raja.Aravapalli wrote:
>
>
Mark, Here is the JIRA
https://issues.apache.org/jira/browse/STORM-650 to make the Kafka
connector use 0.9. It's in the works. Thanks, Harsha
On Tue, Dec 8, 2015, at 03:25 AM, Davis, Mark (TS R Galway) wrote:
> Hi,
>
> Per Michael Noll’s article:
http://www.michael
Florin, You already have 2 & 3 working, including doAs in
secure mode. For 1, as Bobby pointed out, we removed it due to a
security issue. -Harsha
On Thu, Nov 26, 2015, at 07:47 AM, Spico Florin wrote:
> Hello! I would like to ask you what is the status of the REST API in
> 0.10.
What are the numbers you are seeing? I believe there is some
optimization that needs to be done on Phoenix.
On Fri, Nov 20, 2015, at 09:15 AM, Youzha wrote:
> i try to run trident topology with hbase writer and phoenix. but i
> feels the data upsert to phoenix seems too slow. is there any
It's a supervisor config, not a Nimbus one.
https://github.com/apache/storm/blob/master/storm-core/src/clj/backtype/storm/daemon/supervisor.clj#L146
-Harsha
On Wed, Oct 28, 2015, at 08:08 AM, Dillian Murphey wrote:
> Is this a nimbus only config, or can my other supervsior codes have
> this
ng. Apart from that,
depending on which version you are using, set forceFromStart or
ignoreZkOffsets to false.
-Harsha
On Sun, Oct 25, 2015, at 06:18 AM, Craig Charleton wrote:
> Keep in mind that Zookeeper stores the Kafka offsets as they relate to
> the consumer group, not ove
What's your Kafka spout parallelism, and how many partitions do you have
in your Kafka topic? Also, did you try to tune
topology.max.spout.pending? -Harsha
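That setting can go in storm.yaml or per topology; the value below is only
an example starting point, not a recommendation:

```yaml
# cap on un-acked tuples in flight per spout task; tune against
# your tuple timeout and bolt throughput
topology.max.spout.pending: 1000
```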
On Wed, Oct 7, 2015, at 10:54 AM, Rohit Kelkar wrote:
> I have a kafka spout and single bolt topology running on a cluster in
> debug mode.
Tousif, As per the fix version on
https://issues.apache.org/jira/browse/STORM-130 it looks like it's in
0.9.5. Thanks, Harsha
On Thu, Jul 23, 2015, at 06:33 AM, Tousif wrote:
2015-07-23T17:06:06.908+0530 b.s.d.worker [ERROR] Error on
initialization of server mk-worker
of throwing it back to the
worker JVM
-Harsha
On Wed, Jul 22, 2015, at 07:43 AM, Eric Ruel wrote:
Hello
The workers in my topology die after 1-2 minutes.
I tried changing the heartbeat config, in both cluster and local
mode, but they always die.
Any idea?
10:38:38.019 ERROR
By default it runs on 8080; if not, you can look at storm.yaml for the
configured “ui.port
--
Harsha
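To run the UI on a non-default port such as the 8772 mentioned below, the
storm.yaml entry would be:

```yaml
# storm.yaml on the UI host; restart the UI daemon afterwards
ui.port: 8772
```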
On July 21, 2015 at 7:01:02 AM, Vamsikrishna Vinjam
(vamsikrishna.vin...@inndata.in) wrote:
Storm web UI port number:
I'm trying 8772 but it is not working.
Soumi, if your downstream bolt doesn't ack before the tuple timeout
(by default it's 30 secs), Storm will consider it a failed tuple and
the Kafka spout will replay those. Since your last bolt is slower in acking,
maybe you shouldn't anchor the tuple to the last bolt.
-harsha On Wed, Jul 15, 2015
Storm does support multi-node setups on Windows. Our customers use it
in multi-node setups. We haven't tested the security features recently
released in 0.10, but a non-secure setup will work. -Harsha
On Wed, Jul 15, 2015, at 06:09 AM, Bobby Evans wrote:
Storm does support multi-node on windows
Hi Sergio,
I would recommend storm-kafka, part of Apache Storm's external modules. It is
actively maintained by the Storm community.
There is good documentation in the README about the config options. Do let
us know if it's hard to configure and use.
Thanks,
Harsha
On May 19, 2015 at 7
Are you using separate ZK clusters for Storm and Kafka? If so, which ZooKeepers
did you configure for the Kafka spout?
--
Harsha
Sent with Airmail
On May 12, 2015 at 8:40:26 PM, rajesh_kall...@dellteam.com
(rajesh_kall...@dellteam.com) wrote:
Dell - Internal Use - Confidential
Storm Kafka
kind of topology you are
using and what are the spouts you are using.
Thanks,
Harsha
On May 12, 2015 at 4:22:35 PM, 임정택 (kabh...@gmail.com) wrote:
Hi!
First of all, you want to compare Spark Streaming with Storm Trident, not a Storm
spout-bolt topology. They are not the same.
Generally batching makes more
Hi, your Nimbus is listening on localhost (nimbus.host: 127.0.0.1).
Storm UI makes calls to Nimbus to get storm cluster and topology info.
Make sure your Nimbus host and Nimbus thrift port are reachable from the
Storm UI host. -Harsha
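A sketch of the fix in storm.yaml, with a placeholder hostname:

```yaml
# point at a name other hosts can resolve; the hostname is an
# example, and 6627 is the usual default thrift port
nimbus.host: "nimbus.example.com"
nimbus.thrift.port: 6627
```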
On Sun, May 3, 2015, at 03:19 PM, Chun Yuen Lim wrote:
I'm
I haven’t added two-way SSL in the current PR. I can add that as part of this
PR.
--
Harsha
On April 1, 2015 at 10:36:44 AM, Mike Thomsen (mikerthom...@gmail.com) wrote:
That's what I was afraid of. Any idea when that PR is going to be merged? Also,
will it support two-way SSL? Some of our
There is a JIRA open on this feature
https://issues.apache.org/jira/browse/STORM-167 .
--
Harsha
On March 23, 2015 at 9:28:37 PM, Andrew Xor (andreas.gramme...@gmail.com) wrote:
I think that's the only way of actually updating the code, since besides
rebalancing Storm does not (yet
Check your ulimit and increase it if it's too low, and see if this happens again.
--
Harsha
On March 18, 2015 at 8:16:38 AM, hjh (apply...@163.com) wrote:
Local mode is more of a development and debugging mode. It has an in-process
ZooKeeper, and I am not sure how well that can handle a few hundred MB per minute.
--
Harsha
On March 18, 2015 at 8:30:00 PM, clay teahouse (clayteaho...@gmail.com) wrote:
Hi All,
What could be the reasons for a topology hanging
://github.com/apache/storm/blob/master/external/storm-hive/README.md
--
Harsha
On March 17, 2015 at 9:39:44 PM, Sunit Swain (sunitsw...@gmail.com) wrote:
I am using storm 0.9.3 and trying to make use of the HiveBolt to stream the
data directly into hive tables.
I am following this example:
https
Hi Srividhya,
Yes, your understanding is right. A worker is dedicated to a single
topology, so if you have 4 worker slots and you want to allocate 2
workers per topology, then you can only deploy two topologies on that cluster.
-Harsha
On March 17, 2015 at 6:33:23 PM
Hi Pranesh,
Can you share your spoutconfig for kafka spout.
-Harsha
On March 12, 2015 at 11:58:29 AM, Pranesh Radhakrishnan
(praneshscri...@gmail.com) wrote:
I am new to Kafka and Storm. I have done simple program to post some messages
to Kafka broker and able to read the messages
Yes, that's a bad approach. Mostly users keep a static string for the “id” part in
the SpoutConfig. What's the need to use randomUUID?
--
Harsha
On March 9, 2015 at 11:09:46 PM, Tousif (tousif.pa...@gmail.com) wrote:
Thanks Harsha,
Does zkRoot in the spoutconfig is used along with random string
with
spoutConfig.forceFromStart=true for the first time if you want to read
from the beginning of the queue. For subsequent times, when you
redeploy the topology, make sure you set spoutConfig.forceFromStart=false
so that your topology picks up the Kafka offset from ZooKeeper and
starts where it left off.
-Harsha On Mon
name
as KafkaSpout uses topology name to store and retrieve the offsets from
zookeeper.
--
Harsha
On March 9, 2015 at 7:30:38 AM, Tousif (tousif.pa...@gmail.com) wrote:
If your topology has saved Kafka offset in your zookeeper it will start
processing from that otherwise It checks
. Having all of them on
the same machines is risky and performance will suffer.
-Harsha
On March 8, 2015 at 11:26:05 PM, Adaryl Bob Wakefield, MBA
(adaryl.wakefi...@hotmail.com) wrote:
Let’s say you put together a real time streaming solution using Storm, Kafka,
and the necessary Zookeeper
Srividhya, Storm topologies require at least one worker to be available
to run. Hence topology.workers defaults to 1. Can you
explain in more detail what you are trying to
achieve? Thanks, Harsha
On Thu, Feb 26, 2015, at 12:12 PM, Srividhya Shanmugam wrote
Are you setting numWorkers in your topology config like here:
https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java#L92
On Thu, Feb 26, 2015, at 12:40 PM, Srividhya Shanmugam wrote:
Thanks for the reply Harsha. We have distributed
with storm.conf file. Since the storm.conf file does
not have this property, the default is used.(hardcoded in
default.yaml)
Isn’t this a bug?
Thanks,
Srividhya
*From:* Harsha [mailto:st...@harsha.io]
*Sent:* Thursday, February 26, 2015 3:44 PM *To:*
user@storm.apache.org
-396096b37509\heartbeats\1417082031858'
you might be running into
https://issues.apache.org/jira/browse/STORM-682. Is your ZooKeeper
cluster on a different set of nodes, and can you check that you are able to
connect to it without any issues? -Harsha
On Wed, Feb 25, 2015, at 03:49 AM, Martin Illecker wrote:
Hi
You might be losing the ZooKeeper connection. Try increasing these two
values: storm.zookeeper.session.timeout: 2
storm.zookeeper.connection.timeout: 15000
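The session timeout value above is truncated in the archive; as an
illustration only (both values are in milliseconds), the storm.yaml entries
look like:

```yaml
# illustrative values, in milliseconds
storm.zookeeper.session.timeout: 20000
storm.zookeeper.connection.timeout: 15000
```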
On Tue, Feb 17, 2015, at 06:03 AM, Tousif wrote:
Hello, I have a bolt which uses a pool of large objects. When pool
reinitialises(once
Vineet, How are you measuring the number of events in Kafka? Did you
check the Storm worker logs for any errors? And what do you mean by the
acknowledgement of 190 million events in Storm; are you looking at the
number of acked messages? -Harsha
On Sun, Feb 15, 2015, at 04:40 AM, Vineet Mishra wrote:
Hi All
Are you running Nimbus and the supervisors in the background? It looks like you are
SSHing into machines and running ./bin/storm nimbus in the foreground, which
will get killed when you exit the SSH session. Make sure you use
supervisord (http://supervisord.org/) to run Nimbus and the supervisors.
On Sat, Feb 7, 2015, at
Hi Clay, I don't think there is a JIRA open for this. Can you please
open one and include steps to reproduce. Thanks, Harsha
On Sat, Feb 7, 2015, at 04:13 AM, clay teahouse wrote:
Hi All,
I emit my tuples in batches. Do I need to put the emit in a
synchronized block? The reason I am asking
My fetch and buffer are set to a couple of hundred meg and the max
spout pending is 1024. Your fetch.size is probably too large, as it is
trying to fetch 200 MB of data at a time, and your topic might not have
sufficient data.
On Fri, Feb 6, 2015, at 06:03 AM, clay teahouse wrote:
Hi all,
My
the
logs to see if your supervisors might be losing their connection to ZooKeeper or
crashing.
Which version of Storm are you using? It might help if you can attach
screenshots of the Storm UI. Thanks, Harsha
On Thu, Feb 5, 2015, at 11:05 AM, David Shepherd wrote:
I have set up a Storm cluster on 3 vms running
LocalCluster should be used for debugging a topology. There is another
constructor you can use:
LocalCluster cluster = new LocalCluster("localhost", new Long(2182));
The first param is the ZooKeeper host and the second is the port.
-Harsha
On Tue, Feb 3, 2015, at 07:48 PM, Shivendra Singh wrote:
Hi Clay
Vineet, Which kafka spout are you using? -Harsha
On Mon, Feb 2, 2015, at 05:25 AM, Vineet Mishra wrote:
Hi,
I am running Kafka Storm Engine to process real time data generated on
a 3 node distributed cluster.
Currently I have set 10 Executors for Storm Spout, which I don't think
Tousif, You might be running into
https://issues.apache.org/jira/browse/STORM-130. -Harsha
On Mon, Feb 2, 2015, at 12:28 AM, Tousif wrote:
Thank you. Can you tell me why it might happen? I recently tried ZK
3.3.3 with 0.9.2 and found it incompatible, then moved back again to 3.4.6.
I
Hmm.. this is strange. It looks like the supervisor is unable to find the
kill command. Can you check if it's in the path; run "which kill". -Harsha
On Wed, Jan 28, 2015, at 11:08 AM, Faisal Waris wrote:
Hello,
I have single node cluster with the default config. This kind of setup
works fine on windows
Denis,
I suggest it's better to have your HTTP requests go to Kafka
and then use Storm's KafkaSpout to process them. This allows you to
not lose any events, as KafkaSpout can replay the messages
in case there is a failure in your topology.
-Harsha
On Mon, Jan
the same bolt, but different tasks, or
are they different bolts entirely? As Harsha pointed out, it may
help if you give more details of how your topology is constructed.
On Tue, Jan 20, 2015 at 4:42 PM, Kushan Maskey
kushan.mas...@mmillerassociates.com wrote:
I am only fieldGrouping on X
Kushan, That's strange; if you are using fieldsGrouping then this
shouldn't be a problem, as there is one instance of your bolt updating
one (x,y) pair of values. It would probably help if you can paste the
TopologyBuilder part of your code. -Harsha
On Tue, Jan 20, 2015, at 01:11 PM, Kushan Maskey wrote
Kushan, My question was about this: B1 and B2 are the same bolt but
running on 2 separate tasks. Are they both the same code, i.e. updating
the Cassandra table? If so, don't you need to do fieldsGrouping on B1
too? -Harsha
On Tue, Jan 20, 2015, at 05:35 PM, Kushan Maskey wrote:
Bolts is pretty simple
Armando, slots means worker slots. In this case it looks like you
assigned 3 workers to your topology. -Harsha
On Mon, Jan 19, 2015, at 09:38 AM, Armando Martinez Briones wrote:
Hi.
I'm rebalancing a topology, on the log of nimbus I can see the line:
b.s.d.nimbus [INFO] Reassigning
Are you trying to increase the parallelism of a bolt in a running
topology? If so, you can use the storm rebalance command; run bin/storm
help rebalance for more info.
On Thu, Jan 15, 2015, at 03:14 PM, Armando Martinez Briones wrote:
Thanks Kosala
Hi.
I have a completed system with 3
You might be hitting
https://issues.apache.org/jira/browse/STORM-598. Do you have free worker
slots available for the new topology? -Harsha
On Sat, Jan 3, 2015, at 12:09 PM, Itai Frenkel wrote:
Anything in the worker log files?
*From:* Kushan Maskey kushan.mas...@mmillerassociates.com
It does read from the stored offsets. The first time you deploy
the topology, if you intend to read from the beginning of the topic,
then set forceFromStart=true. If you kill and redeploy the topology and
you want to read from the last saved position, then make sure you set
Xiaoyong, It looks like a bug. Please file a JIRA here:
https://issues.apache.org/jira/secure/Dashboard.jspa. Use the Create
button and make sure you select Apache Storm as the project. Example:
https://issues.apache.org/jira/browse/STORM-187
-Harsha
On Wed, Dec 24, 2014, at 11:04 PM, Xiaoyong Zhu wrote
/StormKafkaConsumer.java#L71
This is handled properly in HolmesNL kafka-spout.
-Harsha.
On Tue, Dec 23, 2014, at 10:22 AM, Nilesh Chhapru wrote:
Hi Harsha,
PFB the link for git.
https://github.com/nchhapru/storm-kafka-consumer/tree/master/src/main/java/com/ugam/crawler/core/consumer
-token:aB5nEmd7TsQOeluQpRXqKo6rLfFDw3h+L4RwKGe7zVbhzMV9tJeX3bHu+Sh0vLa+vkbo71Rq2VoXfj4c'
http://localhost:8080/api/v1/topology/wordcount-1-1419399960/deactivate
The second curl request will succeed and will give you a 302, which is a
bug in the UI REST API, but the above request will work.
-Harsha
On Tue, Dec 23
https://github.com/apache/storm/blob/master/storm-core/src/jvm/storm/trident/topology/state/TransactionalState.java#L62
Thanks,
Harsha
On Fri, Dec 19, 2014, at 12:34 PM, Josh Bronson wrote:
I'm looking at the secure storm branch. Specifically, I'm working off
of the v0.9.2-incubating-security branch
Nikhil, you can look at the code here:
https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/WordCountTopology.java#L82
All of the components will appear under the topology in the Storm UI with
the given names.
-Harsha
On Fri, Dec 19, 2014, at 12:59 PM, Nikhil Singh
Hi Nilesh, Did you check whether your Kafka topic partitions are all being
written to? -Harsha
On Thu, Dec 18, 2014, at 07:22 AM, Nilesh Chhapru wrote:
Hi All,
Please give some inputs as this is pending since long and need to meet
the deadlines
*Regards*,
*Nilesh Chhapru.*
*From
)
This seems suspicious to me. Have you given the Kafka ZooKeeper hosts to the
KafkaConfig?
-Harsha
On Thu, Dec 18, 2014, at 01:52 AM, Nilesh Chhapru wrote:
Hi Kobi,
In standard implementation we have the offsets maintained by kafka
which causes issues, need to implement high level consumer
Hi Mark, This looks cool and much cleaner in its presentation of the
UI. Thanks for sharing; I will give it a try. Thanks, Harsha
On Thu, Dec 18, 2014, at 02:09 AM, Mark Zang wrote:
https://github.com/deepnighttwo/yet-another-storm-ui
yet another storm ui based on the restful api provided
It's going to be difficult without looking at your code. If it's on GitHub,
do give us a link to it. -Harsha
On Thu, Dec 18, 2014, at 07:47 AM, Nilesh Chhapru wrote:
Hi Harsha,
The partitions and topics are having the data and the zookeeper also
shows the group id of all the consumer in its
Hi Luiz, Sorry for taking so long to put this up on GitHub. Here is
the PR for storm-hive: https://github.com/apache/storm/pull/350 Please go
ahead and try it out. Any feedback on this will be greatly appreciated.
Thanks, Harsha
On Wed, Oct 22, 2014, at 06:12 AM, Luiz Geovani Vier wrote
Sheldon, You can also make a call to /api/v1/topology/summary to get a
list of topologies
https://github.com/apache/storm/blob/master/STORM-UI-REST-API.md#apiv1topologysummary-get
-Harsha
On Sun, Dec 14, 2014, at 12:50 PM, Manoj Jaiswal wrote:
Why don't you use the storm list command and use
Associates[2] kushan.mas...@mmillerassociates.com
On Thu, Dec 4, 2014 at 11:30 AM, Harsha st...@harsha.io wrote:
__
Kushan, Did you try modifying logback/worker.xml? Since you have
multiple topologies, if they have distinct Java packages you can add a
per-package log4j config to worker.xml. You
Yep, my bad, it's in 0.9.3 and not in 0.9.2. In 0.9.2 workers use
cluster.xml, so you can add your log4j changes there.
On Fri, Dec 5, 2014, at 04:39 PM, John Reilly wrote:
Hi Harsha, It looks like worker.xml
Kushan, You won't be able to modify supervisor.slots.ports from the
topology config. Any reason for doing this in the topology, since you can
define it in storm.yaml? -Harsha
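In storm.yaml on each supervisor node it looks like this (the ports shown
are the stock defaults; each listed port is one worker slot):

```yaml
# one worker slot per port; 6700-6703 are the defaults
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
```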
On Thu, Dec 4, 2014, at 08:58 AM, Kushan Maskey wrote:
Is there a way I can set supervisor_slots_port in the config
would try separating out topology logging . -Harsha
On Thu, Dec 4, 2014, at 09:23 AM, Parth Brahmbhatt wrote:
Harsha is right, if you are trying to do this as part of topology
submission, even though the param will get accepted it will have no
effect. On Dec 4, 2014, at 9:01 AM, Parth Brahmbhatt
Can you post your Storm UI executors page image? If there are 16
executors but only 1 seems to be fetching data, can you please check
whether your Kafka producer is distributing your data among all of your
partitions.
On Thu, Dec 4, 2014, at 12:32 PM, Huy Le Van wrote:
Could someone help me
to send messages to your partition.
On Thu, Dec 4, 2014, at 03:12 PM, Huy Le Van wrote:
Hi Harsha, I’ve attached 2 images below. You can see that I assigned
16 executors, only one seemed to work. The other screenshot is the
partition table.
Hi Andrew, That’s interesting. I’m quite new to Kafka
directly to kafka producer using
bin/kafka-console-producer.sh so I guess the keys were all null. I’ll
write a producer to see. By the way, what is the command to show the
distribution of my data in kafka?
Best regards, Huy, Le Van
On Thursday, Dec 4, 2014 at 11:23 p.m., Harsha
st
information found, using
configuration to determine offset -Harsha
On Mon, Dec 1, 2014, at 01:48 AM, 이승진 wrote:
Hi all,
Kafkaspout periodically write each partition offset to zookeeper.
and spoutConfig.startOffsetTime=-2 means from the beginning, -1 from
the latest offset
Which version of Storm are you using? In Storm 0.9.3 we added JSONP
support (https://issues.apache.org/jira/browse/STORM-361); you can pass a
callback query param to the REST API.
-Harsha
On Mon, Dec 1, 2014, at 07:51 AM, Jose Juan Martinez wrote:
Hello,
I need to get data from Storm Rest UI
offset -Harsha
On Mon, Dec 1, 2014, at 01:48 AM, 이승진 wrote:
Hi all,
Kafkaspout periodically write each partition offset to zookeeper.
and spoutConfig.startOffsetTime=-2 means from the beginning, -1 from
the latest offset.
Is there a option to read from last committed consumer
Does your printer bolt ack the messages it receives from the KafkaSpout?
On Mon, Dec 1, 2014, at 06:38 PM, Madabhattula Rajesh Kumar wrote:
Hello,
Could any one help me on above mail query?
Regards, Rajesh
On Sat, Nov 29, 2014 at 10:30 PM, Madabhattula Rajesh Kumar
mrajaf...@gmail.com
Daniel, which version of Storm are you using? It might be that the UI server
is binding to a private IP. Can you check the ui.log? -Harsha
On Sun, Nov 30, 2014, at 04:17 AM, Daniel Chan wrote:
Hi everyone,
I have use storm deploy to create a storm cluster on AWS EC2 machines.
The cluster
Do you see any errors in logs/supervisor.log? -Harsha
On Wed, Nov 26, 2014, at 01:23 PM, Sa Li wrote:
Seems I never be able to make supervisors started properly.
On Wed, Nov 26, 2014 at 12:52 PM, Sa Li sa.in.v...@gmail.com wrote:
I am using storm-0.9.0.1.
thanks
On Wed, Nov 26, 2014
might help, and also check your workers' JVM size, whether the
-Xmx value is right for your workload. -Harsha
On Tue, Nov 25, 2014, at 06:42 PM, 이승진 wrote:
Hi all,
currently I'm using kafkaspout 0.9.2 as a topology spout(not trident).
and all the execute methods are wrapped with try/catch
Hi Ravi, Which version of Storm are you using? -Harsha
On Sat, Nov 15, 2014, at 03:23 AM, Ravi Kiran wrote:
Hi , I have been trying to run Storm on Centos 7 with no luck. Has
anyone out there met any success ?
The exception I get is
Traceback (most recent call last): File ./storm, line
Ravi, It looks like you don't have Java installed, or at least the java
command is not in the path. I tested storm-0.9.3-rc1 on CentOS 7 and
everything looks good to me. I was able to reproduce the error by removing
java. So please check that Java is installed on your machine and is in the
path. -Harsha
On Sat
Kushan, Which KafkaSpout are you using? Do you see any errors in the worker
logs? -Harsha
On Sun, Nov 9, 2014, at 07:22 AM, Kushan Maskey wrote:
THanks Harsha, here they are,
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE
-ui-host:8080/api/v1/topology/:id it will be
part of the JSON response; a bit more detail on the API:
https://github.com/apache/storm/blob/master/STORM-UI-REST-API.md
Thanks, Harsha
On Thu, Nov 6, 2014, at 09:40 AM, Itai Frenkel wrote:
Harsha - Is the visualization based on a JSON model exposed