goes well, too.
>>
>> testing, testing, testing. Hope I'm not too boring with it.
>>
>>
>> On Fri, May 20, 2016 at 11:33 PM, Ω Alisson <thelinuxl...@gmail.com>
>> wrote:
>>
Interesting. I wonder if it is better to use phantomjs for the PNG
rendering. Although it is slower, I don't need child_process.
On Fri, May 20, 2016 at 5:42 PM Zlatko wrote:
> You have multiple things there. Thousands of requests (per second, I assume)
> is a lot even without
I need to develop a service where the user sends an array of values, I
generate a chart with d3.js and convert it to png with a child_process
calling imagemagick to send it back to the user. Are there any caveats for
child_process when scaling for thousands of requests?
Interesting. I wonder how it would compare to Koa 2.0 Alpha.
On Fri, Jan 22, 2016 at 1:26 AM Welefen Li wrote:
> ThinkJS 2.0 is the first Node.js framework that fully supports all new
> ES2015/ES2016 syntax, and it was released on Oct 30 2015. By using the new
> syntax
I raised the limit to 6 million characters on a complex system. No memory
increase, huge performance boosts.
On Mon, Oct 19, 2015 at 1:31 AM Michael Mathy
wrote:
> Maybe. It might also hurt in a bad way. The compiler will try to inline
> almost all functions (assuming that
Say I want to increase it to 5000. Is it worth it?
--
Job board: http://jobs.nodejs.org/
New group rules:
https://gist.github.com/othiym23/9886289#file-moderation-policy-md
Old group rules:
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
I'm using Node.js behind an nginx load balancer. Everything runs fine, but a
large query string returns status 502, and I can see this in the nginx error
logs:
upstream prematurely closed connection while reading response header from
upstream
no live upstreams while connecting to upstream
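A likely cause (an assumption; the thread itself doesn't resolve it) is a header/URL size limit on one side of the proxy: Node's HTTP parser aborts the socket when the request line plus headers exceed its built-in cap, which nginx then logs as a prematurely closed upstream connection. On the nginx side, the buffer for long request lines can be raised (sizes illustrative):

```nginx
# Allow long query strings through the proxy: 4 buffers of 64k each
# (the default is 4 x 8k). Goes in the http or server block.
large_client_header_buffers 4 64k;
```

On newer Node versions the corresponding limit on the Node side is adjustable with the `--max-http-header-size` flag.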
On node.js
It's just that Node chokes on big JSON payloads, and all async JSON parsers
are cumbersome to use.
On Thu, May 28, 2015 at 4:21 AM, Dave Horton d...@dchorton.com wrote:
I had some luck using jansson (https://github.com/akheron/jansson) in my
C/C++ program for parsing / formatting JSON. Not
https://github.com/kostya/benchmarks/tree/master/json
is generally written in JS it can easily become a bottleneck while
your other cores that aren't running your JS are mostly idle.
~Ryan
On Mon, 18 May 2015 at 21:04 Ω Alisson thelinuxl...@gmail.com wrote:
I see many articles saying to spawn workers equal to the number of CPU
cores, but what about the context switching cost?
Does anyone know if the V8 engine in the latest io.js still deoptimizes
generators, const, and let?
Looking at this example:
https://github.com/alexcrichton/rust-ffi-examples/tree/master/node-to-rust
Are there any caveats to using node-ffi's async support to delegate heavy
CPU work to Rust code?
You mean sprinkling one-off process.nextTick() calls throughout
your code so that a check is queued at least once per event loop tick?
~Ryan
On Tue, Dec 16, 2014 at 7:39 PM, Ω Alisson thelinuxl...@gmail.com wrote:
Well I did it and the CPU is running at 0.2%
On Tue, Dec 16, 2014 at 11:22 PM
So I'm setting up my cluster exactly like this gist:
https://gist.github.com/dickeyxxx/0f535be1ada0ea964cae but whenever the
service is stopped, the worker doesn't receive any message. Inside its
file I'm using process.on(message), but nothing is coming from the master.
Well I did it and the CPU is running at 0.2%
On Tue, Dec 16, 2014 at 11:22 PM, Sam Roberts s...@strongloop.com wrote:
On Wed, Dec 10, 2014 at 8:15 PM, Ω Alisson thelinuxl...@gmail.com wrote:
Interesting Sam, but what if I set a flag that is checked every
process.nextTick
You will eat 100% CPU.
Interesting Sam, but what if I set a flag that is checked on every
process.nextTick and suspends Redis updates, so all that's left is to
finish iterating over the in-memory items?
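Sam's warning holds because `process.nextTick` callbacks run before the event loop ever returns to I/O: a recursive nextTick check spins at full CPU and starves everything else. A sketch of the same flag check on `setImmediate`, which yields to I/O between checks (names are illustrative):

```javascript
// Poll a shutdown flag without starving the event loop: setImmediate
// callbacks run after pending I/O each turn of the loop, unlike a
// recursive process.nextTick, which never lets I/O run at all.
let finished = false;

function waitUntilFinished(done) {
  if (finished) return done();
  setImmediate(() => waitUntilFinished(done));
}
```

Even better than polling is flipping the flag and calling `done` directly from whatever finishes the in-memory iteration, so there's no busy check at all.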
On Wed, Dec 10, 2014 at 8:47 PM, Sam Roberts s...@strongloop.com wrote:
On Monday, December 8, 2014 at 3:47 PM, Ω Alisson wrote:
I'm running a Node app that fetches Redis data into another database. It's
using the cluster module. Everything works fine, but I need to ensure that
workers finish their jobs properly (they BRPOP, accumulate into an object,
then insert in batches, clearing the objects). How do I do this?
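A sketch of a drain loop under those constraints (all names hypothetical; `popItem` stands in for a BRPOP with a short timeout): keep a stop flag the master can flip over IPC, and flush whatever batch is left before exiting:

```javascript
// Stop flag flipped by the master over the cluster IPC channel.
let stopping = false;
process.on('message', (msg) => { if (msg === 'shutdown') stopping = true; });

// Keep popping until asked to stop, then flush the partial batch so no
// in-memory items are lost on shutdown.
async function drain(popItem, flushBatch, batchSize) {
  let batch = [];
  while (!stopping) {
    const item = await popItem();      // e.g. BRPOP with a short timeout
    if (item !== null) batch.push(item);
    if (batch.length >= batchSize) {
      await flushBatch(batch);
      batch = [];
    }
  }
  if (batch.length) await flushBatch(batch); // flush the remainder
}
```

The short BRPOP timeout matters: with an infinite block, the loop never gets back to the `stopping` check.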
Does anyone know of a good job queue that has atomicity and scheduled jobs?
It is very nice, but it doesn't seem to delegate the job processing to
other servers, or am I wrong?
On Tue, Nov 11, 2014 at 1:45 AM, Sameer Joshi sameer.joshi7...@gmail.com
wrote:
I have used Agenda, which is backed by MongoDB. That has served my
purpose.
There is also an AgendaUI package
Andrew, is your implementation atomic?
On Tue, Nov 11, 2014 at 2:35 AM, Andrew Kelley superjo...@gmail.com wrote:
On Monday, November 10, 2014 5:13:09 PM UTC-7, Alisson Cavalcante Agiani
wrote:
Does anyone know of a good job queue that has atomicity and scheduled jobs?
Here's one:
How do people stumble upon this kind of error?
On Thu, Aug 21, 2014 at 3:24 PM, Alexey Petrushin
alexey.petrus...@gmail.com wrote:
In my understanding, the problem is not that it's impossible to prevent a
callback from being called twice; with underscore it's as simple as `cb =
_(cb).once()`.
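The underscore trick can be had in a few lines of plain JS (a sketch, not the library's actual implementation):

```javascript
// Equivalent of `cb = _(cb).once()`: the wrapped function runs at most
// once; any later calls are silently dropped.
function once(fn) {
  let called = false;
  return function (...args) {
    if (called) return undefined;
    called = true;
    return fn.apply(this, args);
  };
}
```

Wrapping the callback at the boundary where it's handed out means a buggy caller that fires it twice can no longer corrupt downstream state.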
Found this: http://nginx.org/en/docs/http/ngx_http_limit_req_module.html
Thanks!
On Thu, Jul 3, 2014 at 6:48 PM, mscdex msc...@gmail.com wrote:
There's also: https://github.com/diosney/node-netfilter
So that a malicious user gets blocked for some time (maybe with HTTP 429)
when they hit a request limit within a predefined duration.
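The limit_req module found above covers exactly this; a minimal sketch of its config (zone name, rates, and upstream address are illustrative):

```nginx
# 10 requests/second per client IP; bursts of up to 20 are queued,
# anything beyond that is rejected with 429 instead of the default 503.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:3000;
    }
}
```

Doing this in nginx keeps abusive traffic off the Node process entirely, which is cheaper than rate limiting inside the app.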
This is awesome!
On Mon, Jun 30, 2014 at 11:19 AM, Rebecca Turner m...@re-becca.org wrote:
Abraxas is an end-to-end streaming Gearman client and worker library.
(Server implementation coming soon.)
https://www.npmjs.org/package/abraxas
Standout features:
* Support for workers handling