[nodejs] Re: Large process getting slower and slower

2015-06-04 Thread Tom Boutell
I believe this statement is not accurate. setImmediate *does* wait for the 
next turn of the event loop, while nextTick callbacks run before control 
returns to the event loop at all; if you keep queueing nextTick callbacks, 
no I/O can ever take place.

http://stackoverflow.com/questions/15349733/setimmediate-vs-nexttick

As many commenters point out there, the names are pretty horrendously 
backwards. In the case of setImmediate that's because that name is used in 
the browser already and has those semantics, so node is just following the 
pattern. In the case of nextTick it's because the node developers, in their 
infinite wisdom, changed the behavior starting in node 0.9. Once upon a 
time it worked as you describe.

TL;DR: use setImmediate unless you've thought through why nextTick is safe 
in this case.
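
To see the difference concretely, here is a minimal sketch (illustrative, 
not from the original post): with setImmediate, the pending readFile 
callback gets a chance to run between recursion steps; swap in 
process.nextTick and it is starved until the recursion finishes.

    const fs = require('fs');

    // Pending I/O: its callback can only run when the event loop gets a turn.
    fs.readFile(__filename, () => console.log('I/O callback ran'));

    let n = 0;
    function step() {
      if (n++ < 1e5) {
        setImmediate(step);   // yields to the event loop each iteration;
                              // replace with process.nextTick(step) and the
                              // readFile callback waits for all 1e5 steps
      } else {
        console.log('done recursing');
      }
    }
    step();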

On Thursday, June 4, 2015 at 9:42:55 AM UTC-4, zladuric wrote:

 Well, for one, IIRC setImmediate will not put this step on the next loop - 
 it will just append the function to the end of the current loop - thus 
 never giving Node.js a chance to go off.

 Another thing - did you try to use a debugger or something? Even a simple 
 strace/dtrace would tell you what the node process is doing. Also, did the 
 first program exit? Maybe it didn't and you're leaking more and more 
 memory, thus swapping more and more?

 On Thursday, June 4, 2015 at 5:06:38 AM UTC+2, Blake McBride wrote:

 Greetings,

 I have a large, very recursive process that (at best) takes 6 minutes to 
 run.  This process does a lot of IO, and a lot of directory recursion.  In 
 order to play nicely in a single-threaded, event-oriented system, I:

 1.  use non-blocking functions whenever available

 2.  rather than perform recursion directly, I call the next step with 
 nextTick or setImmediate and exit

 This has worked well in my smaller databases, and the first time through 
 the large one (6 minutes).  But when trying to run the large one a second 
 time, it gets slower and slower until it kind of never ends.  I don't know 
 what is causing the slowdown.

 One thing I thought of is that perhaps I am filling the event queue in 
 such a way that the IO never gets a chance, and it has to go through 
 many, many events before any real IO can occur.  Don't know.

 Sure appreciate any ideas.

 Thanks.

 Blake McBride
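
For anyone following along, the deferred-recursion pattern described above 
looks roughly like this (a hedged sketch with illustrative names, not 
Blake's actual code):

    const fs = require('fs');
    const path = require('path');

    function walk(dir, done) {
      fs.readdir(dir, function (err, names) {
        if (err) return done(err);
        let i = 0;
        (function next() {
          if (i >= names.length) return done(null);
          const p = path.join(dir, names[i++]);
          fs.stat(p, function (err, st) {
            if (err) return done(err);
            if (st.isDirectory()) {
              // recurse, but defer the continuation instead of looping inline
              walk(p, function (err) {
                if (err) return done(err);
                setImmediate(next);
              });
            } else {
              setImmediate(next);
            }
          });
        })();
      });
    }

    walk('.', function (err) { console.log(err || 'walk done'); });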





[nodejs] Re: Large process getting slower and slower

2015-06-04 Thread Peter Rust
If the time is exponentially increasing, it could be an indication that 
each iteration is adding event handlers and that old event handlers from 
previous iterations are being inadvertently fired. As zladuric mentions, 
you'll want to use a debugger or logging to get some insight into what your 
program is doing that you don't expect.
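
A minimal illustration of that failure mode (hypothetical, not Blake's 
code): each "run" attaches a new listener to a long-lived emitter without 
removing the old one, so every run fires all the handlers from previous 
runs as well.

    const EventEmitter = require('events').EventEmitter;
    const bus = new EventEmitter();

    function run(id) {
      bus.on('work', () => console.log('handler from run', id));
      bus.emit('work'); // run 1 logs 1 line, run 2 logs 2, run 3 logs 3...
    }
    run(1); run(2); run(3);
    // Fix: remove the old listener first (bus.removeListener) or use
    // bus.once() if a handler should only ever fire one time.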

Personally, I like https://github.com/node-inspector/node-inspector. I 
would put a breakpoint or some logging at key points in the program 
(especially expensive event handlers) and make sure they fire when I expect 
and the number of times I expect, especially in subsequent runs of the 
program.

If you find that your program's execution is running as expected, then it 
could be a memory leak: objects pile up, and the garbage collection sweeps 
take an inordinate amount of time. Unfortunately, node-inspector doesn't 
have profiling built in yet, but you could use 
https://github.com/c4milo/node-webkit-agent, memwatch or heapdump 
(https://github.com/bnoordhuis/node-heapdump), StrongLoop's profiling tools 
(https://strongloop.com/node-js/devops-tools/), or the other solutions 
linked from Will Villanueva's Node.js profiling guide: 
http://www.willvillanueva.com/the-node-js-profiling-guide-that-hasnt-existed-profiling-node-js-applications-part-1/
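
For example, with the heapdump module linked above (assuming it is 
installed via npm install heapdump), you can take a snapshot before and 
after a run and diff the two in Chrome DevTools to see which objects are 
accumulating:

    const heapdump = require('heapdump');

    // Write a snapshot to the current directory; load the resulting
    // .heapsnapshot files in Chrome DevTools and compare them.
    function snapshot(label) {
      heapdump.writeSnapshot(Date.now() + '-' + label + '.heapsnapshot',
        function (err, filename) {
          if (err) console.error(err);
          else console.log('wrote', filename);
        });
    }

    snapshot('before');
    // ... run the expensive workload, then:
    snapshot('after');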



[nodejs] Re: Large process getting slower and slower

2015-06-04 Thread Blake McBride
Memory is stable.  Also, I am running on a 16 GB machine with 10 GB free.







[nodejs] Re: Large process getting slower and slower

2015-06-04 Thread zladuric
Well, for one, IIRC setImmediate will not put this step on the next loop - 
it will just append the function to the end of the current loop - thus 
never giving Node.js a chance to go off.

Another thing - did you try to use a debugger or something? Even a simple 
strace/dtrace would tell you what the node process is doing. Also, did the 
first program exit? Maybe it didn't and you're leaking more and more 
memory, thus swapping more and more?
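
One cheap way to test the leak/swap theory from inside the process (a 
sketch, not specific to Blake's code): log memory usage on an interval and 
watch whether rss and heapUsed keep climbing from run to run.

    // Log memory every 5 seconds; unref() so this timer alone doesn't
    // keep the process alive once the real work is done.
    setInterval(function () {
      const m = process.memoryUsage();
      console.log('rss ' + (m.rss / 1048576).toFixed(1) + ' MB, ' +
                  'heapUsed ' + (m.heapUsed / 1048576).toFixed(1) + ' MB');
    }, 5000).unref();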



