Hi,
You can look at the example code at
https://raw.githubusercontent.com/apache/storm/master/examples/storm-starter/src/jvm/org/apache/storm/starter/SlidingWindowTopology.java
and for trident at
https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/org/apache/storm/starter/tr
You should give the datasource class name, e.g. `org.postgresql.ds.PGSimpleDataSource`, instead of the Driver class.
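In case it helps, here is a minimal sketch of the kind of configuration map a HikariCP-based connection provider expects (the URL, user, and password are made-up placeholders; the map would typically be handed to storm-jdbc's HikariCPConnectionProvider):

```java
import java.util.HashMap;
import java.util.Map;

public class JdbcConfigSketch {
    public static Map<String, Object> buildHikariConfig() {
        Map<String, Object> hikariConfigMap = new HashMap<>();
        // Pass the DataSource class, not the Driver class (org.postgresql.Driver).
        hikariConfigMap.put("dataSourceClassName", "org.postgresql.ds.PGSimpleDataSource");
        hikariConfigMap.put("dataSource.url", "jdbc:postgresql://localhost:5432/test"); // placeholder
        hikariConfigMap.put("dataSource.user", "storm");     // placeholder
        hikariConfigMap.put("dataSource.password", "storm"); // placeholder
        return hikariConfigMap;
    }

    public static void main(String[] args) {
        System.out.println(buildHikariConfig().get("dataSourceClassName"));
    }
}
```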
Thanks,
Satish.
On Thu, Sep 8, 2016 at 3:45 PM, fanxi...@travelsky.com <
fanxi...@travelsky.com> wrote:
> Hi user,
>
> Recently I am doing a job using the storm-jdbc to insert into EDB.
Hi Junfeng,
It seems the scheduler is configured as `ResourceAwareScheduler`. You may have
given the respective resource configuration as part of the topology. The
scheduler may have found that only that many workers are sufficient for the
given resource configuration of the topology's spouts/bolts.
You can loo
+1 on moving to Java 8 for 2.x release.
As Jungtaek has already mentioned, Oracle stopped supporting JDK/JRE 7 in
April 2015. Java 8 brings better APIs; it will make our code look better,
and future user APIs can be designed a lot better.
Thanks,
Satish.
On Fri, Aug 12, 2016 at 3:11 PM, Jungt
Hi Abhishek,
Did you check whether the spout is really emitting messages?
On Thu, Aug 4, 2016 at 5:42 PM, Abhishek Raj wrote:
> Thanks for the quick response. According to storm documentation, if a
> worker/node dies it's automatically restarted. Also, the bolts still show
> up in storm ui. They jus
Which branch are you using?
On Wed, Aug 3, 2016 at 11:06 AM, Ascot Moss wrote:
> Hi,
>
> Please help!
> I am trying to build Storm, below is my command:
>
> mvn clean install package -DskipTests=true
>
> It returned error at "[INFO] storm-jdbc
> . FAILURE
You can try setting different txnId values for the spouts; the txnId is used
to maintain the opaque transactional spout's state in ZK. I guess you may have
the same txnId for both of those topologies.
TridentTopology#newStream(txnId, spout);
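To illustrate why the txnId matters, here is a small sketch. The exact ZK path layout below is an assumption for illustration (Storm keeps this state under a configurable `transactional.zookeeper.root`); the point is that the txnId is part of the key:

```java
public class TxnIdSketch {
    // Trident keys the opaque spout's coordinator state in ZooKeeper under the
    // txnId passed to newStream(txnId, spout); this path layout is illustrative.
    static String statePath(String txnId) {
        return "/transactional/" + txnId + "/coordinator";
    }

    public static void main(String[] args) {
        // Distinct txnIds give each topology its own state node; a shared txnId
        // would make both topologies read and write the same node.
        System.out.println(statePath("topologyA-orders"));
        System.out.println(statePath("topologyB-orders"));
    }
}
```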
On Wed, Jul 27, 2016 at 2:20 PM, Satish Duggana
wrote:
>
oth
> are not getting same data from kafka.
> I think 3rd param is client id and not consumer group id. any ideas ?
>
> On Wed, Jul 27, 2016 at 12:21 PM, Satish Duggana wrote:
>
>> You can use below constructor and use respective clientId.
>>
>> TridentKafk
You can use the constructor below with the respective clientId.
TridentKafkaConfig(BrokerHosts hosts, String topic, String clientId)
Can you not see that constructor in the version you are using? I guess
you should have it.
Thanks,
Satish.
On Wed, Jul 27, 2016 at 11:42 AM, Amber Kulkarni wr
Why would each bolt need to wait for the others to complete? Are you using
table locks while inserting the data into MySQL? Is that really intended?
Thanks,
Satish.
On Wed, Jul 27, 2016 at 11:54 AM, Erik Weathers
wrote:
> The heartbeating is done in separate threads from the work execution
> threads,
Your supervisor's local hostname is not getting resolved. You can
override this by configuring `storm.local.hostname` with a valid hostname.
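For example, in storm.yaml on that supervisor (the hostname below is a placeholder):

```yaml
# storm.yaml on the supervisor whose hostname fails to resolve
storm.local.hostname: "supervisor-1.example.com"
```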
Thanks,
Satish.
On Tue, Jul 19, 2016 at 12:22 AM, Joaquin Menchaca
wrote:
> Hi.
>
> Anyone have any suggestions how to debug this and find out what is
> h
Hi,
Can you try storm-core and storm-cassandra with the same versions?
Thanks,
Satish.
On Thu, Jul 14, 2016 at 12:58 PM, Anirudh Chhangani
wrote:
> Hi,
>
> I am relatively new to storm and wanted to ask a very trivial question, I
> am facing issues while setting CassandraWriterBolt in my topol
Config.TOPOLOGY_WORKER_CHILDOPTS: Options which can override
WORKER_CHILDOPTS for a topology. You can configure any Java options, like
memory, GC, etc.
In your case it can be:
config.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx1g");
Thanks,
Satish.
On Wed, Jul 13, 2016 at 1:45 PM, Navin Ipe
wrote:
AFAIK, there can be only one master-batch-coordinator for each batch
group. You should look at increasing the parallelism of Trident operations
to get better latencies.
Thanks,
Satish.
On Wed, Jul 13, 2016 at 12:26 PM, hong mao wrote:
> I get the resaon, there is only one MasterBatchCoordinator fo
You should see defaults.yaml in the $storm-dir/conf directory.
Thanks,
Satish.
On Tue, Jul 12, 2016 at 9:38 AM, Walid Aljoby <
walid_alj...@yahoo.com.invalid> wrote:
> Hi all,
> In the previous releases of storm, I see that default configuration file
> is called defaults.yaml.
> For new release, like Stor
It may not be because the class is not available on those workers; there
may be some race condition in a static block of that class which throws an
error. We need more context/info and logs, as others suggested.
Thanks,
Satish.
On Thu, Jul 7, 2016 at 8:14 AM, Jungtaek Lim wrote:
> I guess we need
Hi,
You can follow the link below for instructions on using the HDFS bolt.
http://storm.apache.org/releases/1.0.0/storm-hdfs.html
Thanks,
Satish.
On Tue, Jul 5, 2016 at 3:02 AM, praveen reddy
wrote:
> thanks for response, can you please help me on how can i emit csv data
> using bolt. i was able to rea
Hi Alberto,
What is the use case for changing window duration/count at runtime?
Thanks,
Satish.
On Thu, Jun 23, 2016 at 11:56 PM, Alberto São Marcos
wrote:
> Thks Satish.
>
> On Thu, Jun 23, 2016 at 7:22 PM, Satish Duggana
> wrote:
>
>> No, you can not change windo
No, you cannot change the windowing configuration at runtime.
Thanks,
Satish.
On Thu, Jun 23, 2016 at 11:36 PM, Alberto São Marcos
wrote:
> Like the title states, can one change the window bolt length/count in
> runtime?
> Already tried to do it using BaseWindowedBolt API but a NPE is thrown. The
In partial key grouping, tuples with the same field value ("name") sometimes
go to the first node and sometimes to the second, even though I used "name"
as the partial key grouping field.
Is that the right behavior?
Partial key grouping does not always send tuples with the same field values
to the same task. This grouping computes two hash values for the key and
picks, of the two candidate tasks, the one with the lower current load.
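The two-choice idea can be sketched outside Storm like this. This is a simplified illustration, not Storm's actual PartialKeyGrouping implementation; the number of tasks and the second-hash salt are arbitrary:

```java
import java.util.Arrays;

public class PartialKeyGroupingSketch {
    // Minimal sketch of partial key grouping: each key hashes to two candidate
    // tasks, and each tuple goes to the less-loaded of the two.
    final long[] load;

    PartialKeyGroupingSketch(int numTasks) {
        this.load = new long[numTasks];
    }

    int chooseTask(String key) {
        // Two independent hash values -> two candidate tasks (salt is arbitrary).
        int first = Math.floorMod(key.hashCode(), load.length);
        int second = Math.floorMod((key + "#salt").hashCode(), load.length);
        int chosen = load[first] <= load[second] ? first : second;
        load[chosen]++;
        return chosen;
    }

    public static void main(String[] args) {
        PartialKeyGroupingSketch g = new PartialKeyGroupingSketch(4);
        // The same key can legitimately land on either of its two candidate
        // tasks over time, which is the behavior asked about above.
        System.out.println(g.chooseTask("name"));
        System.out.println(g.chooseTask("name"));
        System.out.println(Arrays.toString(g.load));
    }
}
```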
of the scheduling
> information to be shared globally in the custom scheduler. And I am not
> sure if this is possible to do that. Thank you!
>
> Best regards!
>
>
> On 06/17/2016 06:10 AM, Satish Duggana wrote:
>
> Hi,
> Why do you need supervisor-id in a bolt/spout
Hi,
Why do you need the supervisor-id in a bolt/spout task? What are you using it for?
Thanks,
Satish.
On Fri, Jun 17, 2016 at 2:13 AM, applyhhj wrote:
> Hello,everyone!
> Is there anyway to get the id of the supervisor that the Bolt or the
> Spout is running on? And the also the id of the supervi
You need to rebalance the topology with the desired number of executors for
the respective bolts.
Hope it helps,
Satish.
On Wed, Jun 15, 2016 at 1:04 PM, Adrien Carreira
wrote:
> I think I understood that.
>
> But, In my example :
>
> 1 machine on cluster with this basic topology and with 1 worke
Hi Kanagha,
You may want to look at the link below, which demonstrates with an example
how to store state in a windowed bolt.
https://raw.githubusercontent.com/apache/storm/master/examples/storm-starter/src/jvm/storm/starter/StatefulWindowingTopology.java
Thanks,
Satish.
On Mon, Jun 13, 2016 a
Hi,
Your message says it is throwing an OutOfMemoryError, so you should look
into what is causing that. It may not really be because of Storm; it may
also be because of application code. You may want to use
`-XX:+HeapDumpOnOutOfMemoryError` and `-XX:HeapDumpPath=/worker/dumps` to
dump the heap on OutOfMemoryError.
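For example, those flags can be added to the worker JVM options (the heap size and dump path here are placeholders):

```yaml
# storm.yaml; the same string can also be set per topology via
# Config.TOPOLOGY_WORKER_CHILDOPTS
worker.childopts: "-Xmx768m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/worker/dumps"
```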
Hi,
It seems you want to send tuples back to the bolt task from which those tuples
were received in the current bolt.
The initial bolt can send its current
task-id (org.apache.storm.task.TopologyContext#getThisTaskId()) as part of
the tuple fields, and this can be used by the subsequent bolt to emit tuples
directly to that task.
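A self-contained sketch of that pattern, with plain Java standing in for the Storm API (the task id and payload are made up; in Storm the routing step maps to directGrouping plus OutputCollector#emitDirect):

```java
import java.util.HashMap;
import java.util.Map;

public class ReplyToSourceSketch {
    // Simulates the pattern: the upstream bolt puts its own task id into the
    // tuple, and the downstream bolt uses that id to route the reply back.
    static class Tuple {
        final int sourceTaskId; // would come from TopologyContext#getThisTaskId()
        final String payload;

        Tuple(int sourceTaskId, String payload) {
            this.sourceTaskId = sourceTaskId;
            this.payload = payload;
        }
    }

    static Map<Integer, String> process(Tuple request) {
        Map<Integer, String> repliesByTask = new HashMap<>();
        // Downstream bolt emits the reply directly to the task id carried in the tuple:
        repliesByTask.put(request.sourceTaskId, "ack:" + request.payload);
        return repliesByTask;
    }

    public static void main(String[] args) {
        System.out.println(process(new Tuple(7, "work-item")).get(7)); // prints ack:work-item
    }
}
```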
Trident stores internal txids in ZooKeeper.
Thanks,
Satish.
On Fri, Jun 10, 2016 at 3:48 AM, Francisco Lopes
wrote:
> Alberto,
>
> But isn't that exactly what Trident is supposed to do?
>
> http://storm.apache.org/releases/current/Trident-state.html
>
> After reading more about this subject, it
The link below may be helpful in understanding parallelism in Storm:
http://storm.apache.org/releases/1.0.1/Understanding-the-parallelism-of-a-Storm-topology.html
On Thu, Jun 9, 2016 at 11:24 PM, Satish Duggana
wrote:
> Hi Michael,
> Kafka spout is given parallelism-hint as 1 and the topic ha
Hi Michael,
The Kafka spout is given a parallelism hint of 1 and the topic has only one
partition, so only one Kafka spout task runs in one of the topology's
workers. Are you asking whether the bolts connected directly/indirectly to
that Kafka spout are all executed in the same worker as the Kafka spout?
The link below describes the REST API for those kinds of operations:
http://storm.apache.org/releases/1.0.1/STORM-UI-REST-API.html
Thanks,
Satish.
On Thu, Jun 9, 2016 at 10:26 PM, Girish Reddy
wrote:
> Hello Storm Community,
>
> We are trying to build Streaming based Kappa Architecture in our sys