Hi, all:
I was trying to build a Storm topology with Hadoop and HBase
dependencies, and I want to run this topology with Storm-on-YARN. The
version of Storm in it is 0.9.0-wip21. I created the jar file with Maven,
and the pom.xml file is attached.
I submitted the topology (with-dependenci
Silly me.
I did not provide the "storm jar" command with a topology name, so this ran
in local mode. When I gave the starter program a topology name, it ran in
remote mode, and the log files showed up in
/logs/worker-.log as expected.
And the format of the logs did follow what was in
/logback/c
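The local-vs-remote behavior described above follows the usual storm-starter convention: the main method submits to the cluster only when a topology name argument is given, and otherwise runs a LocalCluster. A minimal sketch of that dispatch logic (the class and method names here are hypothetical; the real Storm API calls appear only in the comments):

```java
// Hypothetical sketch of the storm-starter main-method convention.
// With a topology name argument, the topology is submitted to the cluster
// (StormSubmitter.submitTopology(args[0], conf, topology)); without one,
// it runs in local mode (new LocalCluster().submitTopology(...)).
public class TopologyModeChooser {
    static String chooseMode(String[] args) {
        if (args != null && args.length > 0) {
            return "remote"; // a name was given: submit to the cluster
        }
        return "local"; // no name: run in-process, logs go to the console
    }

    public static void main(String[] args) {
        System.out.println(chooseMode(args));
    }
}
```

In remote mode the worker output lands in the supervisor's log directory rather than on the console, which matches the observation above.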
It may be that the hashcode or the index is less than zero. You can add a
debug log that prints the two variables to verify.
--
Best Regards!
肖康(Kang Xiao)
On Thursday, March 6, 2014, at 22:51, James Xu wrote:
> What if _targetTasks is empty?
>
> On March 6, 2014, at 10:28 PM, 李家宏 (mailto:jh.li...@gmail.com) wrote:
> > hi, all
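The debugging suggestion above can be sketched in plain Java. This is a hypothetical illustration (not the poster's grouping code): Java's `%` operator keeps the sign of the dividend, so a negative `hashCode()` produces a negative index, which is exactly what an IndexOutOfBoundsException on a target-task list would look like.

```java
// Hypothetical debug sketch: print the hashcode and the computed index
// inside a mod-based task selection. Java's % keeps the sign of the
// dividend, so a negative hashCode yields a negative index.
public class ModDebug {
    static int modIndex(int hash, int numTasks) {
        int index = hash % numTasks;
        // the two variables to verify, as suggested above
        System.out.printf("hash=%d index=%d%n", hash, index);
        return index;
    }

    public static void main(String[] args) {
        modIndex(42, 5);   // positive hash -> valid index 2
        modIndex(-42, 5);  // negative hash -> index -2, out of bounds
    }
}
```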
Hi there -
I have a problem with my println and logger output not appearing anywhere
I can find it when I deploy a topology to my one-node
cluster. I don't have this issue when I debug the topology in local mode.
In local mode I see all the output.
As a simple example I have modified the Ex
Found this piece of information:
http://storm.incubator.apache.org/documentation/Running-topologies-on-a-production-cluster.html
Updating a running topology
To update a running topology, the only option currently is to kill the current
topology and resubmit a new one. A planned feature is to
Hi,
In our product, we use Trident to do real-time aggregations at 5-minute
intervals with persistentAggregate and a State implementation that does
the multi-puts into the RDBMS.
The system has the RDBMS doing higher-level rollup aggregations like 15
mins, 1 hour, 1 day etc o
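The 5-minute interval aggregation described above typically rests on flooring each event timestamp to the start of its window; the RDBMS can then roll those buckets up into 15-minute, 1-hour, or 1-day windows the same way. A minimal sketch of that bucketing (not the product's code; names are illustrative):

```java
// A sketch of time-window bucketing: floor a millisecond timestamp to
// the start of its bucket. The same function serves 5-minute buckets in
// the streaming layer and coarser rollups (15 min, 1 h, 1 d) in the RDBMS.
public class TimeBuckets {
    static final long FIVE_MIN_MS = 5 * 60 * 1000L;

    static long bucketStart(long epochMs, long bucketMs) {
        return epochMs - (epochMs % bucketMs);
    }

    public static void main(String[] args) {
        long ts = 1394107893000L; // some event timestamp
        System.out.println(bucketStart(ts, FIVE_MIN_MS));
    }
}
```

Grouping Trident tuples by this bucket value is what makes persistentAggregate accumulate one row per interval.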
Hi,
I was running a Storm topology in cluster mode, and I caught the
following error (collected from worker log files). Apparently it is related
to Log4j settings, but I really have no idea how I should solve this.
By the way, I am also using Kafka in one of my bolts, so it might be that
log4j s
hello,
I'm having trouble getting the right idea of how to parallelize my topology.
I have a bolt subscribing to 3 streams A, B, C. All of the streams have a
field1 which is between 0 and n, so it makes sense to use fieldsGrouping
on field1 with a parallelism hint of n.
Stream A and B are hashmaps which are emitted
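The reason a fieldsGrouping on field1 works across all three streams is that a fields grouping routes by a hash of the grouping fields, so the same field1 value from streams A, B, and C lands on the same bolt task. A hedged model of that idea (Storm's actual hashing differs in detail; this only illustrates the invariant):

```java
// Model of fields-grouping routing: the task index depends only on the
// grouping field's value, not on which stream the tuple arrived on.
// (Illustrative only; not Storm's real partitioning code.)
public class FieldRouting {
    static int taskFor(int field1, int parallelism) {
        return Math.floorMod(Integer.hashCode(field1), parallelism);
    }

    public static void main(String[] args) {
        int n = 4;
        // Same field1 from stream A, B, or C -> same task index.
        System.out.println(taskFor(7, n) == taskFor(7, n));
    }
}
```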
What if _targetTasks is empty?
On March 6, 2014, at 10:28 PM, 李家宏 wrote:
> hi, all
> I'm using a CustomStreamGrouping with a very simple mod selection. It threw
> an IndexOutOfBoundsException. From the trace below:
>
> Caused by: java.lang.IndexOutOfBoundsException: null
> at clojure.lang.Persiste
hi, all
I'm using a CustomStreamGrouping with a very simple mod selection. It threw
an IndexOutOfBoundsException. From the trace below:
Caused by: java.lang.IndexOutOfBoundsException: null
at clojure.lang.PersistentVector.arrayFor(PersistentVector.java:106)
~[clojure-1.4.0.jar:na]
at clojure.lang.Persi
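A defensive version of such a mod selection can rule out both failure modes raised in this thread: an empty target-task list, and a negative hashCode producing a negative index. This is a sketch, not the poster's code; method and class names are hypothetical:

```java
import java.util.Collections;
import java.util.List;

// Defensive mod selection: guard an empty task list and use floorMod so
// a negative hashCode cannot produce a negative index.
public class SafeModGrouping {
    static Integer chooseTask(List<Integer> targetTasks, Object key) {
        if (targetTasks == null || targetTasks.isEmpty()) {
            return null; // nothing to route to; plain indexing would throw here
        }
        int index = Math.floorMod(key.hashCode(), targetTasks.size());
        return targetTasks.get(index);
    }

    public static void main(String[] args) {
        System.out.println(chooseTask(List.of(3, 4, 5), "some-key"));
        System.out.println(chooseTask(Collections.emptyList(), "some-key")); // null, no exception
    }
}
```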
Determined by these ifs, in ./daemon/executor.clj:
(if (and (.isEmpty overflow-buffer)
         (or (not max-spout-pending)
             (< (.size pending) max-spout-pending)))
  (if activ
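Read plainly, that executor.clj condition gates nextTuple() on two things: the overflow buffer must be empty, and either max.spout.pending is unset or the number of pending (not-yet-acked) tuples is below it. A plain-Java transcription of my reading of that predicate (an interpretation, not authoritative):

```java
// Transcription of the executor.clj gate: nextTuple() is invoked only
// while the overflow buffer is empty AND either no max.spout.pending is
// configured or the pending (un-acked) tuple count is below the limit.
public class NextTupleGate {
    static boolean shouldCallNextTuple(boolean overflowEmpty,
                                       Integer maxSpoutPending,
                                       int pendingSize) {
        return overflowEmpty
            && (maxSpoutPending == null || pendingSize < maxSpoutPending);
    }

    public static void main(String[] args) {
        System.out.println(shouldCallNextTuple(true, 100, 99));  // true
        System.out.println(shouldCallNextTuple(true, 100, 100)); // false: backpressure
        System.out.println(shouldCallNextTuple(false, null, 0)); // false: overflow not drained
    }
}
```

So acks matter only indirectly: acking shrinks the pending set, which can re-open the gate.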
But what factors determine when the nextTuple() should be invoked?
Thx!
2014-03-06 13:36 GMT+01:00 James Xu :
> The acking of the previous tuple has nothing to do with the invocation of
> nextTuple().
>
> On March 6, 2014, at 8:19 PM, Tian Guo wrote:
>
>
> Thanks for your advice!
>
> But my doubt stil
The acking of the previous tuple has nothing to do with the invocation of
nextTuple().
On March 6, 2014, at 8:19 PM, Tian Guo wrote:
>
> Thanks for your advice!
>
> But my doubt still remains. Is the nextTuple method called only when the
> previous tuples are acked in the ack method? Anyone knows th
Thanks for your advice!
But my doubt still remains. Is the nextTuple method called only when the
previous tuples are acked in the ack method? Does anyone know the internal
strategy?
Thx!
Best,
2014-03-06 8:14 GMT+01:00 James Xu :
> Use a tick tuple.
>
> On March 6, 2014, at 4:28 AM, Tian Guo wrote:
>
>
Just adding on to my observation: forceStartOffsetTime accepts a
timestamp value in milliseconds, but it seems to be working just like it
would have if I had passed -1 as the parameter. It is reading only from
the current offset and not from the input timestamp!
I guess the kafka spout I'm using onl
Hi
Sorry for the late reply; I just got time to experiment today and
realized forceStartOffsetTime
is not accepting a timestamp (milliseconds) value as a parameter.
This doesn't seem to work. I'm using the Kafka spout from storm-contrib,
and it is a normal Storm topology, not a Trident topology!
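For context on the observation above: the Kafka offset-request API uses sentinel values, -1 for "latest" and -2 for "earliest", and any other value is meant to be treated as a millisecond timestamp to resolve an offset for. If the spout ignores a real timestamp, it behaves exactly as if -1 had been passed. A hedged model of those semantics (this is my reading of the Kafka sentinels, not the storm-contrib spout code):

```java
// Model of the Kafka offset-time sentinels that forceStartOffsetTime
// relies on: -1 = latest offset, -2 = earliest offset, anything else is
// interpreted as a millisecond timestamp. (Sketch of the semantics only.)
public class OffsetTimeSemantics {
    static String interpret(long forceStartOffsetTime) {
        if (forceStartOffsetTime == -1L) return "latest";
        if (forceStartOffsetTime == -2L) return "earliest";
        return "offset at timestamp " + forceStartOffsetTime;
    }

    public static void main(String[] args) {
        System.out.println(interpret(-1L));
        System.out.println(interpret(1394107893000L));
    }
}
```

The symptom described, reading only from the current offset, matches the "latest" branch regardless of the timestamp supplied.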