Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/631#issuecomment-121403303
+1
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/275#issuecomment-94068063
I didn't get a notification that you were pinging me, but fwiw I'm +1
on it
---
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/508#issuecomment-91773263
+1
---
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/349#issuecomment-87767343
Taking a look at it again, it looks like the introduction of
TransferDrainer is a regression. The way that class works doesn't make a whole
lot of sense... it has
Github user nathanmarz commented on a diff in the pull request:
https://github.com/apache/storm/pull/349#discussion_r27413411
--- Diff: storm-core/src/clj/backtype/storm/daemon/worker.clj ---
@@ -335,16 +333,14 @@
drainer (TransferDrainer.)
node+port
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/349#issuecomment-87778672
Looking inside task-node+port needs to be done inside the read lock (as it
was done before this TransferDrainer class got inserted here). If that's done,
there's
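The locking discipline these comments describe — resolving the task→node+port mapping and acting on the result inside the same read-lock section — can be sketched in plain Java. This is a hypothetical illustration (`TaskRouter`, `reassign`, `resolveAndFormat` are invented names), not Storm's actual Clojure worker code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: the task-to-endpoint map is mutated under the
// write lock (e.g. on reassignment). A reader must do the lookup AND
// use the result inside one read-lock critical section; a lookup done
// outside the lock can go stale before the value is used.
class TaskRouter {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<Integer, String> taskToEndpoint = new HashMap<>();

    void reassign(int task, String endpoint) {
        lock.writeLock().lock();
        try {
            taskToEndpoint.put(task, endpoint);
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Correct pattern: lookup and use share one read-lock section.
    String resolveAndFormat(int task) {
        lock.readLock().lock();
        try {
            String endpoint = taskToEndpoint.get(task);
            return endpoint == null ? null : "send->" + endpoint;
        } finally {
            lock.readLock().unlock();
        }
    }
}
```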
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/492#issuecomment-87942556
+1
---
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/437#issuecomment-77919851
+1
---
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/437#issuecomment-75356387
-1
Please change the catch block to explicitly return nil. Don't depend on
log-warn to do that for you.
---
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/429#issuecomment-74914806
Nimbus only knows a worker is having trouble when it stops sending
heartbeats. If a worker gets into a bad state, the worst thing to do is have it
continue trying
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/429#issuecomment-74036235
I retract my earlier -1. It was mentioned that this enables backpressure
for unacked topologies. Is this the case? If so, this is a great new feature of
Storm
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/339#issuecomment-67615349
Can you elaborate? It should just keep trying that last offset repeatedly.
This is a case I tested repeatedly when I originally wrote this, so if this is
the case it's
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/339#issuecomment-67581379
-1
Why are offset out of range exceptions happening when re-emitting a batch?
That shouldn't be possible so I'd like to know why this is happening
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/339#issuecomment-67581552
Once this code is no longer applied to the transactional spout, I'll be +1.
I'd still like to know why that exception is happening though.
---
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/339#issuecomment-67589509
Yea, most of this can be addressed by allocating more resources so Kafka
has enough retention to not truncate the data before the topology can be fixed.
+1
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/349#issuecomment-66975846
-1. That r/w lock ensures that send is never called on a closed connection.
---
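The invariant stated here — a read/write lock guaranteeing that send is never called on a closed connection — can be sketched in plain Java. This is a hypothetical `GuardedConnection` wrapper for illustration, not Storm's actual messaging classes:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: many sender threads take the read lock; close()
// takes the write lock, so it can only proceed once no send is in
// flight, and the closed flag is visible to every later sender.
class GuardedConnection {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private boolean closed = false;

    boolean send(String msg) {
        lock.readLock().lock();
        try {
            if (closed) {
                return false; // connection torn down; caller drops or requeues
            }
            // actual I/O would happen here, safely excluded from close()
            return true;
        } finally {
            lock.readLock().unlock();
        }
    }

    void close() {
        lock.writeLock().lock();
        try {
            closed = true; // no send() can sit between its check and its I/O now
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```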
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/275#issuecomment-66858418
I view tick tuples as being built on top of the core ideas of streams and
tuples, not as fundamentally intertwined with them. So let's keep them separate
and have
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/167#issuecomment-63232706
You're right, that needs to be a reset!
---
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/309#issuecomment-63094441
+1
---
Github user nathanmarz commented on a diff in the pull request:
https://github.com/apache/storm/pull/268#discussion_r18687511
--- Diff: storm-core/src/clj/backtype/storm/messaging/local.clj ---
@@ -45,10 +46,10 @@
(let [send-queue (add-queue! queues-map lock storm-id port
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/268#issuecomment-58609408
-1
You need to explain these changes more, especially the changes to worker.clj
---
Github user nathanmarz commented on a diff in the pull request:
https://github.com/apache/storm/pull/268#discussion_r18687520
--- Diff: storm-core/src/jvm/backtype/storm/messaging/IConnection.java ---
@@ -40,7 +41,7 @@
* @param msgs
*/
-public void
Github user nathanmarz commented on the pull request:
https://github.com/apache/storm/pull/281#issuecomment-57974956
I see. I'd be fine with a different patch which does not use any
environment vars and instead sets -Dstorm.service= to be one of nimbus,
supervisor, or worker, passed
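Selecting the daemon role via a JVM system property, as the comment proposes, could look like the following. The `storm.service` key comes from the comment; the reading-and-validating code is a hypothetical sketch, not the patch under review:

```java
// Hypothetical sketch: a daemon launched as
//   java -Dstorm.service=nimbus ...
// reads and validates the property at startup instead of
// consulting environment variables.
public class ServiceName {
    static String resolve(String raw) {
        if (raw == null) {
            throw new IllegalArgumentException("missing -Dstorm.service");
        }
        switch (raw) {
            case "nimbus":
            case "supervisor":
            case "worker":
                return raw;
            default:
                throw new IllegalArgumentException("unknown storm.service: " + raw);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolve(System.getProperty("storm.service")));
    }
}
```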