I forgot to mention that I also tried increasing
topology.message.timeout.secs to 180, but that didn't work either.
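
For reference, this is the kind of change I mean (a rough sketch using the
backtype.storm.Config constant, not my exact code):

import backtype.storm.Config;

Config conf = new Config();
// same effect as setting topology.message.timeout.secs: 180
conf.put(Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, 180);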

On Thu, Jul 27, 2017 at 9:52 PM, sam mohel <[email protected]> wrote:

> I tried to use debug and got this in worker.log.err:
> 2017-07-27 21:47:48,868 FATAL Unable to register shutdown hook because JVM
> is shutting down.
>
> and these lines from worker.log:
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-1:27, stream: __ack_ack, id: {},
> [3247365064986003851 -431522470795602124]
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-1:27, stream: __ack_ack, id: {}, [3247365064986003851
> -431522470795602124]
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Execute done TUPLE source:
> b-1:27, stream: __ack_ack, id: {}, [3247365064986003851
> -431522470795602124] TASK: 1 DELTA: 0
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-1:29, stream: __ack_ack, id: {},
> [3247365064986003851 -6442207219333745818]
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-1:29, stream: __ack_ack, id: {}, [3247365064986003851
> -6442207219333745818]
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Execute done TUPLE source:
> b-1:29, stream: __ack_ack, id: {}, [3247365064986003851
> -6442207219333745818] TASK: 1 DELTA: 0
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 5263752373603294688]
> 2017-07-27 21:47:48.811 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 5263752373603294688]
> 2017-07-27 21:47:48.868 b.s.d.worker [INFO] Shutting down worker
> top-1-1501184820 9adf5f4c-dc5b-47b5-a458-40defe84fe9e 6703
> 2017-07-27 21:47:48.868 b.s.d.worker [INFO] Shutting down receive thread
> 2017-07-27 21:47:48.869 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-1:31, stream: __ack_ack, id: {}, [3247365064986003851
> 4288963968930353157]
> 2017-07-27 21:47:48.872 b.s.d.executor [INFO] Execute done TUPLE source:
> b-1:31, stream: __ack_ack, id: {}, [3247365064986003851
> 4288963968930353157] TASK: 1 DELTA: 60
> 2017-07-27 21:47:48.872 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 5240959063117469257]
> 2017-07-27 21:47:48.872 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 5240959063117469257]
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 5240959063117469257] TASK: 1 DELTA: 1
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 7583382518734849127]
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 7583382518734849127]
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 7583382518734849127] TASK: 1 DELTA: 0
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 6840644970823833210]
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 6840644970823833210]
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 6840644970823833210] TASK: 1 DELTA: 0
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 -6463368911496394080]
> 2017-07-27 21:47:48.873 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> -6463368911496394080]
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> -6463368911496394080] TASK: 1 DELTA: 1
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-3:33, stream: __ack_ack, id: {},
> [3247365064986003851 764549587969230513]
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-3:33, stream: __ack_ack, id: {}, [3247365064986003851
> 764549587969230513]
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Execute done TUPLE source:
> b-3:33, stream: __ack_ack, id: {}, [3247365064986003851 764549587969230513]
> TASK: 1 DELTA: 0
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-5:35, stream: __ack_ack, id: {},
> [3247365064986003851 -4632707886455738545]
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> -4632707886455738545]
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Execute done TUPLE source:
> b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> -4632707886455738545] TASK: 1 DELTA: 0
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] Processing received message
> FOR 1 TUPLE: source: b-5:35, stream: __ack_ack, id: {},
> [3247365064986003851 2993206175355277727]
> 2017-07-27 21:47:48.874 b.s.d.executor [INFO] BOLT ack TASK: 1 TIME: 0
> TUPLE: source: b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> 2993206175355277727]
> 2017-07-27 21:47:48.875 b.s.d.executor [INFO] Execute done TUPLE source:
> b-5:35, stream: __ack_ack, id: {}, [3247365064986003851
> 2993206175355277727] TASK: 1 DELTA: 1
> 2017-07-27 21:47:48.898 b.s.m.n.Client [INFO] creating Netty Client,
> connecting to lenovo:6703, bufferSize: 5242880
> 2017-07-27 21:47:48.902 b.s.m.loader [INFO] Shutting down
> receiving-thread: [top-1-1501184820, 6703]
> 2017-07-27 21:47:48.902 b.s.m.n.Client [INFO] closing Netty Client
> Netty-Client-lenovo/192.168.1.5:6703
> 2017-07-27 21:47:48.902 b.s.m.n.Client [INFO] waiting up to 600000 ms to
> send 0 pending messages to Netty-Client-lenovo/192.168.1.5:6703
> 2017-07-27 21:47:48.902 b.s.m.loader [INFO] Waiting for
> receiving-thread:[top-1-1501184820, 6703] to die
> 2017-07-27 21:47:48.903 b.s.m.loader [INFO] Shutdown receiving-thread:
> [top-1-1501184820, 6703]
> 2017-07-27 21:47:48.904 b.s.d.worker [INFO] Shut down receive thread
> 2017-07-27 21:47:48.904 b.s.d.worker [INFO] Terminating messaging context
> 2017-07-27 21:47:48.904 b.s.d.worker [INFO] Shutting down executors
> 2017-07-27 21:47:48.904 b.s.d.executor [INFO] Shutting down executor
> b-0:[8 8]
> 2017-07-27 21:47:48.905 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.905 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.906 b.s.d.executor [INFO] Shut down executor b-0:[8 8]
> 2017-07-27 21:47:48.906 b.s.d.executor [INFO] Shutting down executor
> b-8:[47 47]
> 2017-07-27 21:47:48.907 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.907 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shut down executor b-8:[47
> 47]
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shutting down executor
> b-0:[12 12]
> 2017-07-27 21:47:48.908 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.908 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shut down executor b-0:[12
> 12]
> 2017-07-27 21:47:48.908 b.s.d.executor [INFO] Shutting down executor
> b-8:[54 54]
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shut down executor b-8:[54
> 54]
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shutting down executor
> b-0:[2 2]
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shut down executor b-0:[2 2]
> 2017-07-27 21:47:48.909 b.s.d.executor [INFO] Shutting down executor
> b-2:[32 32]
> 2017-07-27 21:47:48.909 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.910 b.s.util [INFO] Async loop interrupted!
> 2017-07-27 21:47:48.910 b.s.d.executor [INFO] Shut down executor b-2:[32
> 32]
> 2017-07-27 21:47:48.910 b.s.d.executor [INFO] Shutting down executor
> b-8:[41 41]
> 2017-07-27 21:47:48.910 b.s.util [INFO] Asy
>
> On Thu, Jul 27, 2017 at 3:11 PM, Stig Rohde Døssing <[email protected]>
> wrote:
>
>> Yes, there is topology.message.timeout.secs for setting how long the
>> topology has to process a message after it is emitted from the spout, and
>> topology.enable.message.timeouts if you want to disable timeouts
>> entirely. I'm assuming that's what you're asking?
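>>
>> If you want to try the latter, here is a minimal sketch (assuming you build
>> the topology conf yourself with backtype.storm.Config):
>>
>> Config conf = new Config();
>> // turn tuple-tree timeouts off entirely while debugging
>> conf.put(Config.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS, false);
>> // the timeout itself can also be set with conf.setMessageTimeoutSecs(...)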
>>
>> 2017-07-27 15:03 GMT+02:00 sam mohel <[email protected]>:
>>
>>> Thanks for your patience and time. I will use debug now. But are there
>>> any settings or configurations for the spout timeout? How can I increase
>>> it to try?
>>>
>>> On Thursday, July 27, 2017, Stig Rohde Døssing <[email protected]> wrote:
>>> > Last message accidentally went to you directly instead of the mailing
>>> list.
>>> >
>>> > Never mind what I wrote about worker slots. I think you should check
>>> that all tuples are being acked first. Then you might want to try enabling
>>> debug logging. You should also verify that your spout is emitting all the
>>> expected tuples. Since you're talking about a result file, I'm assuming
>>> your spout output is limited.
>>> >
>>> > 2017-07-27 10:36 GMT+02:00 Stig Rohde Døssing <[email protected]>:
>>> >>
>>> >> Okay. Unless you're seeing out of memory errors or know that your
>>> garbage collector is thrashing, I don't know why changing your xmx would
>>> help. Without knowing more about your topology it's hard to say what's
>>> going wrong. I think your best bet is to enable debug logging and try to
>>> figure out what happens when the topology stops writing to your result
>>> file. When you run your topology on a distributed cluster, you can use
>>> Storm UI to verify that all your tuples are being acked; maybe your tuple
>>> trees are not being acked correctly?
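>>> >>
>>> >> A common cause is a bolt that emits without anchoring or forgets to ack.
>>> >> Purely as a sketch of the pattern (a made-up PassThroughBolt, not your
>>> >> code; Trident normally handles acking for you, so this mainly matters
>>> >> for plain bolts):
>>> >>
>>> >> import backtype.storm.task.OutputCollector;
>>> >> import backtype.storm.task.TopologyContext;
>>> >> import backtype.storm.topology.OutputFieldsDeclarer;
>>> >> import backtype.storm.topology.base.BaseRichBolt;
>>> >> import backtype.storm.tuple.*;
>>> >> import java.util.Map;
>>> >>
>>> >> public class PassThroughBolt extends BaseRichBolt {
>>> >>     private OutputCollector collector;
>>> >>
>>> >>     public void prepare(Map conf, TopologyContext ctx, OutputCollector collector) {
>>> >>         this.collector = collector;
>>> >>     }
>>> >>
>>> >>     public void execute(Tuple input) {
>>> >>         // anchor the emitted tuple to the input so the tuple tree is tracked
>>> >>         collector.emit(input, new Values(input.getValue(0)));
>>> >>         // ack the input; missing acks show up as failed/timed-out tuples in Storm UI
>>> >>         collector.ack(input);
>>> >>     }
>>> >>
>>> >>     public void declareOutputFields(OutputFieldsDeclarer declarer) {
>>> >>         declarer.declare(new Fields("value"));
>>> >>     }
>>> >> }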
>>> >>
>>> >> Multiple topologies shouldn't be interfering with each other; the
>>> only thing I can think of is if you have too few worker slots and some of
>>> your topology's components are not being assigned to a worker. You can see
>>> this as well in Storm UI.
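>>> >>
>>> >> Roughly, as a sketch assuming default settings: the number of workers a
>>> >> topology asks for is set in the topology conf and has to fit into the
>>> >> free slots listed under supervisor.slots.ports in storm.yaml:
>>> >>
>>> >> Config conf = new Config();
>>> >> // must fit within the free supervisor slots on the cluster
>>> >> conf.setNumWorkers(1);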
>>> >>
>>> >> 2017-07-27 <20%2017%2007%2027> 8:11 GMT+02:00 sam mohel <
>>> [email protected]>:
>>> >>>
>>> >>> Yes, I tried 2048 and 4096 to give the worker more memory, but same
>>> problem.
>>> >>>
>>> >>> I have a result file. It should contain the result of my processing.
>>> The size of this file should be 7 MB, but what I got after submitting the
>>> topology is only 50 KB.
>>> >>>
>>> >>> I submitted this topology before, about 4 months ago. But when I
>>> submitted it now I got this problem.
>>> >>>
>>> >>> How was the topology working well before but not now?
>>> >>>
>>> >>> Silly question, and sorry for that:
>>> >>> I submitted three other topologies besides this one. Does that use up
>>> memory? Or should I clean something up after them?
>>> >>>
>>> >>> On Thursday, July 27, 2017, Stig Rohde Døssing <[email protected]>
>>> wrote:
>>> >>> > As far as I can tell the default xmx for workers in 0.10.2 is 768
>>> megs (https://github.com/apache/storm/blob/v0.10.2/conf/defaults.yaml#L134),
>>> and your supervisor log shows the following:
>>> >>> > "Launching worker with command: <snip> -Xmx2048m". Is this the
>>> right configuration?
>>> >>> >
>>> >>> > Regarding the worker log, it looks like the components are
>>> initialized correctly; all the bolts report that they're done running
>>> prepare(). Could you explain what you expect the logs to look like and what
>>> you expect to happen when you run the topology?
>>> >>> >
>>> >>> > It's sometimes helpful to enable debug logging if your topology
>>> acts strange; consider trying that by setting:
>>> >>> > Config conf = new Config();
>>> >>> > conf.setDebug(true);
>>> >>> >
>>> >>> > 2017-07-27 1:43 GMT+02:00 sam mohel <
>>> [email protected]>:
>>> >>> >>
>>> >>> >> Same problem with distributed mode. I tried to submit the topology in
>>> distributed mode with localhost and attached the log files of the worker and
>>> supervisor.
>>> >>> >>
>>> >>> >>
>>> >>> >>
>>> >>> >> On Thursday, July 27, 2017, sam mohel <[email protected]>
>>> wrote:
>>> >>> >> > I submit my topology with these commands:
>>> >>> >> > mvn package
>>> >>> >> > mvn compile exec:java -Dexec.classpathScope=compile
>>> -Dexec.mainClass=trident.Topology
>>> >>> >> > and I copied these lines:
>>> >>> >> > 11915 [Thread-47-b-4] INFO  b.s.d.executor - Prepared bolt
>>> b-4:(40)
>>> >>> >> > 11912 [Thread-111-b-2] INFO  b.s.d.executor - Prepared bolt
>>> b-2:(14)
>>> >>> >> > 11934 [Thread-103-b-5] INFO  b.s.d.executor - Prepared bolt
>>> b-5:(45)
>>> >>> >> > sam@lenovo:~/first-topology$
>>> >>> >> > from what I saw in the terminal. I checked the size of the result
>>> file and found it's 50 KB each time I submit it.
>>> >>> >> > What should I check?
>>> >>> >> > On Wed, Jul 26, 2017 at 9:05 PM, Bobby Evans <
>>> [email protected]> wrote:
>>> >>> >> >>
>>> >>> >> >> Local mode is totally separate and there are no processes
>>> launched except the original one.  Those values are ignored in local mode.
>>> >>> >> >>
>>> >>> >> >>
>>> >>> >> >> - Bobby
>>> >>> >> >>
>>> >>> >> >>
>>> >>> >> >> On Wednesday, July 26, 2017, 2:01:52 PM CDT, sam mohel <
>>> [email protected]> wrote:
>>> >>> >> >>
>>> >>> >> >> Thanks so much for replying. I tried to submit the topology in
>>> local mode and increased the worker size like this:
>>> >>> >> >> conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx4096m");
>>> >>> >> >>
>>> >>> >> >> but got this in the terminal:
>>> >>> >> >> 11920 [Thread-121-b-4] INFO  b.s.d.executor - Preparing bolt
>>> b-4:(25)
>>> >>> >> >> 11935 [Thread-121-b-4] INFO  b.s.d.executor - Prepared bolt
>>> b-4:(25)
>>> >>> >> >> 11920 [Thread-67-b-5] INFO  b.s.d.executor - Preparing bolt
>>> b-5:(48)
>>> >>> >> >> 11936 [Thread-67-b-5] INFO  b.s.d.executor - Prepared bolt
>>> b-5:(48)
>>> >>> >> >> 11919 [Thread-105-b-2] INFO  b.s.d.executor - Prepared bolt
>>> b-2:(10)
>>> >>> >> >> 11915 [Thread-47-b-4] INFO  b.s.d.executor - Prepared bolt
>>> b-4:(40)
>>> >>> >> >> 11912 [Thread-111-b-2] INFO  b.s.d.executor - Prepared bolt
>>> b-2:(14)
>>> >>> >> >> 11934 [Thread-103-b-5] INFO  b.s.d.executor - Prepared bolt
>>> b-5:(45)
>>> >>> >> >> sam@lenovo:~/first-topology$
>>> >>> >> >> and it didn't complete processing. The size of the result is 50
>>> KB. This topology was working well without any problems, but when I tried
>>> to submit it now I didn't get the full result.
>>> >>> >> >>
>>> >>> >> >> On Wed, Jul 26, 2017 at 8:35 PM, Bobby Evans <
>>> [email protected]> wrote:
>>> >>> >> >>
>>> >>> >> >> worker.childopts is the default value that is set by the system
>>> administrator in storm.yaml on each of the supervisor nodes.
>>> topology.worker.childopts is what you set in your topology conf if you want
>>> to add something more to the command line.
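>>> >>> >> >>
>>> >>> >> >> For example (just a sketch), in the topology code that would look
>>> >>> >> >> something like:
>>> >>> >> >>
>>> >>> >> >> Config conf = new Config();
>>> >>> >> >> // appended to the worker JVM options on top of worker.childopts from storm.yaml
>>> >>> >> >> conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Xmx2048m");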
>>> >>> >> >>
>>> >>> >> >>
>>> >>> >> >> - Bobby
>>> >>> >> >>
>>> >>> >> >>
>>> >>> >> >> On Tuesday, July 25, 2017, 11:50:04 PM CDT, sam mohel <
>>> [email protected]> wrote:
>>> >>> >> >>
>>> >>> >> >> I'm using version 0.10.2. I tried to write in the code
>>> >>> >> >> conf.put(Config.WORKER_CHILDOPTS, "-Xmx4g");
>>> >>> >> >> conf.put(Config.SUPERVISOR_CHILDOPTS, "-Xmx4g");
>>> >>> >> >>
>>> >>> >> >> but I didn't notice any effect. Did I write the right
>>> configurations?
>>> >>> >> >> Is this value the largest?
>>> >>> >> >>
>>> >>> >> >
>>> >>> >> >
>>> >>> >
>>> >
>>> >
>>>
>>
>>
>
