[nodejs] net.setTimeout

2012-06-29 Thread Joran Greef
I came across an issue today using net.setTimeout in a POP3 server written 
with Node.

The server was set to timeout connections after one minute.

A client would start downloading an email, and the server would write the 
email in one go to the socket.

The download would take some time for the client; after a minute it would 
still be downloading, but the server would time out the connection and 
drop the client.

In this case there is activity on the client side, in terms of reading 
from the kernel buffer, but from Node's point of view there is no 
activity, so it times out the connection.

Is that correct? Would it be helpful to flesh out the documentation for 
setTimeout to highlight issues like this?

-- 
Job Board: http://jobs.nodejs.org/
Posting guidelines: 
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nodejs@googlegroups.com
To unsubscribe from this group, send email to
nodejs+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en


[nodejs] Re: Node.js memory, GC and performance

2012-07-13 Thread Joran Greef
To get a very rough idea of how much time your program spends in gc, you 
can expose the gc by running node with the --nouse_idle_notification 
--expose_gc flags and then call gc manually with "global.gc()" and time how 
long that takes.
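A very rough sketch of that measurement (the helper name is mine; a forced full collection is only a coarse proxy for what the GC does during a normal run):

```javascript
// Rough measurement of a full, non-incremental collection. Run with:
//   node --nouse_idle_notification --expose_gc time-gc.js
function timeGC() {
  if (typeof global.gc !== 'function') return null; // needs --expose_gc
  const start = process.hrtime();
  global.gc();
  const [seconds, nanoseconds] = process.hrtime(start);
  return seconds * 1000 + nanoseconds / 1e6; // milliseconds
}

const ms = timeGC();
console.log(ms === null
  ? 'Run node with --expose_gc to measure.'
  : 'gc took ' + ms.toFixed(1) + ' ms');
```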

I have a program where a single JS object was being used as a hash table 
with a few million entries. When the GC ran, as far as I am aware, it 
would iterate over every one of those entries, and it would do this every 
few seconds, each time pausing for about 500ms. I switched to a 
Buffer-backed, open-addressed hash table, which is also more efficient in 
other respects.

On Friday, July 13, 2012 3:10:50 AM UTC+2, Alexey Petrushin wrote:
>
> There are rumors that current Node.js (or, more exactly, the V8 GC) performs 
> badly when there are lots of JS objects and memory in use.
>
> Could you please explain what exactly the problem is: lots of objects, or 
> lots of properties on one object (or array)?
>
> Maybe there are some benchmarks; it would be interesting to see actual code 
> and numbers.
>
> As far as I know the main problem is lots of properties on one object, not 
> lots of objects themselves (although I'm not sure). If so, would an 
> in-memory graph database (a couple of hundred properties on each 
> node at most) be a good case?
>
> Also I heard that the latest versions of V8 have an improved GC that solves 
> some of these problems. Is this true, and when will it be available 
> in Node.js?
>



Re: [nodejs] Re: Node.js memory, GC and performance

2012-07-13 Thread Joran Greef
ty, it's just an implementation of the dense hash map described 
at http://sparsehash.googlecode.com/svn/trunk/doc/implementation.html

It stores the keys and values side by side, in a single buffer, using open 
addressing and triangular probing (which requires the number of buckets to 
be a power of two in order for the triangular probing to reach every 
bucket), and a tabulation hash 
(http://en.wikipedia.org/wiki/Tabulation_hashing) with pre-generated random 
tables (the key length is fixed, so this kind of hash is ideal for that and 
very fast, with good distribution). The buffer is doubled in size whenever 
(number of entries + number of deleted entries) / (number of buckets) 
exceeds 0.6, to avoid too much clustering.
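A much-simplified sketch of that layout, just to illustrate the idea (fixed-size uint32 keys and values, a placeholder multiplicative hash instead of the tabulation hash, and no resizing or deletion):

```javascript
// Buffer-backed, open-addressed table with triangular probing: the probe
// sequence 1, 3, 6, 10, ... visits every bucket when the bucket count is
// a power of two. This is an illustration, not the implementation from
// the thread.
const KEY = 4;                   // bytes per key (uint32)
const VALUE = 4;                 // bytes per value (uint32)
const BUCKET = 1 + KEY + VALUE;  // 1 leading byte marks the bucket occupied

class BufferHash {
  constructor(buckets) {
    if (buckets & (buckets - 1)) {
      throw new Error('buckets must be a power of two');
    }
    this.buckets = buckets;
    this.buffer = Buffer.alloc(buckets * BUCKET);
  }
  hash(key) {
    return (key * 2654435761) >>> 0; // placeholder, not tabulation hashing
  }
  probe(key, forInsert) {
    let index = this.hash(key) & (this.buckets - 1);
    for (let step = 1; step <= this.buckets; step++) {
      const offset = index * BUCKET;
      if (this.buffer[offset] === 0) return forInsert ? offset : -1; // empty
      if (this.buffer.readUInt32BE(offset + 1) === key) return offset;
      index = (index + step) & (this.buckets - 1); // triangular probing
    }
    return -1; // table full
  }
  set(key, value) {
    const offset = this.probe(key, true);
    if (offset < 0) throw new Error('table full');
    this.buffer[offset] = 1;
    this.buffer.writeUInt32BE(key, offset + 1);
    this.buffer.writeUInt32BE(value, offset + 1 + KEY);
  }
  get(key) {
    const offset = this.probe(key, false);
    return offset < 0 ? undefined : this.buffer.readUInt32BE(offset + 1 + KEY);
  }
}
```

The doubling described above (at a 0.6 load factor counting deleted entries) would allocate a new buffer and re-insert every occupied bucket.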

Here's the original 
discussion: https://groups.google.com/d/topic/nodejs/yLbuS7YONTI/discussion

Here's a discussion on the hash function in 
v8-users: 
https://groups.google.com/forum/#!msg/v8-users/zGCS_wEMawU/6mConTiBUyMJ

On Friday, July 13, 2012 9:37:46 AM UTC+2, Yi wrote:
>
> Hi Joran,
>
> We are facing a similar performance problem with your million-entry object. 
>
> May I ask you to explain in a bit more detail the implementation of the 
> "Buffer-backed open-addressed hash table"?
>
> Many thanks,
>
> ty
>
>
> 2012/7/13 Joran Greef 
>
>> To get a very rough idea of how much time your program spends in gc, you 
>> can expose the gc by running node with the --nouse_idle_notification 
>> --expose_gc flags and then call gc manually with "global.gc()" and time how 
>> long that takes.
>>
>> I have a program where a single JS object was being used as a hash table 
>> with a few million entries. When the gc would run, as far as I am aware, it 
>> would iterate over every one of those entries and it would do this every 
>> few seconds, each time pausing for about 500ms. I switched to a Buffer 
>> backed open addressed hash table which is also more efficient in other 
>> respects.
>>
>>
>> On Friday, July 13, 2012 3:10:50 AM UTC+2, Alexey Petrushin wrote:
>>>
>>> There are rumors that current Node.js (or, more exactly, the V8 GC) performs 
>>> badly when there are lots of JS objects and memory in use.
>>>
>>> Could you please explain what exactly the problem is: lots of objects, or 
>>> lots of properties on one object (or array)?
>>>
>>> Maybe there are some benchmarks; it would be interesting to see actual code 
>>> and numbers.
>>>
>>> As far as I know the main problem is lots of properties on one object, 
>>> not lots of objects themselves (although I'm not sure). If so, would an 
>>> in-memory graph database (a couple of hundred properties on each 
>>> node at most) be a good case?
>>>
>>> Also I heard that the latest versions of V8 have an improved GC that 
>>> solves some of these problems. Is this true, and when will it be 
>>> available in Node.js?
>>>



Re: [nodejs] Re: Node.js memory, GC and performance

2012-07-13 Thread Joran Greef
Vyacheslav, to clarify, the suggestion of timing gc() was made in the 
context of a program with one large object containing a few million 
entries, in which case that would give a "very rough idea". In that case, 
there would be little difference between full collection and incremental 
collection, since the majority of work would be indivisible.

On Friday, July 13, 2012 1:37:17 PM UTC+2, Vyacheslav Egorov wrote:
>
> > To get a very rough idea of how much time your program spends in gc, you 
> can expose the gc by running node with the --nouse_idle_notification 
> --expose_gc flags and then call gc manually with "global.gc()" and time how 
> long that takes. 
>
> This will give you something that is _far_ from a realistic estimate. 
> First of all, contrary to what you might think, 
> --nouse-idle-notification does not disable automatic GC in V8. What it 
> does is tell V8 not to perform GC actions (be it advancing the sweeper 
> or the incremental marker, or doing a full 
> GC) in response to the IdleNotifications that the embedder (node.js in this 
> case) sends to V8. If V8 sees fit (e.g. on allocation failure) it 
> _will_ perform a GC, and you can't disable that. Calling gc() forces a 
> full non-incremental collection, which is much more expensive than (and 
> quite different from) the incremental collections that V8 tries to 
> use during the normal run of your program. 
>
> Now about the large object with a few million entries. The biggest problem 
> is that you stash all your entries into a single object: the GC will 
> spend time scanning that object 
> (V8 splits marking into steps, but it does not split scanning of a 
> single object into steps). If you allocate many objects instead, it should 
> not be a problem for V8's incremental GC: 
>
> here is an excerpt from the --trace-gc output for a small toy 
> test I wrote (https://gist.github.com/3104414): 
>
> 10 keys created (total keys: 2080) 
>   88 ms: Mark-sweep 4126.8 (4174.9) -> 3978.6 (4042.0) MB, 89 ms 
> (+ 5605 ms in 2232 steps since start of marking, biggest step 
> 22.033936 ms) [StackGuard GC request] [GC in old space requested]. 
>
> As you can see there is no one huge pause. The work is done in increments. 
>
> Of course if you can avoid pauses altogether by allocating your huge 
> chunks of memory outside of GC managed heap it's even better. My 
> personal position here is that anybody who allocates 4GBs of GC 
> managed objects is doing something wrong. 
>
> [of course I am not claiming that there is nothing to improve here on 
> V8 side: marking, sweeping and evacuation steps can be optimized 
> further and even parallelized internally; some portions can be even 
> trivially made concurrent]. 
>
> [@Ben: it is not as sophisticated as a concurrent soft-real time 
> collector would be, hard to dispute that. It is however much more 
> sophisticated than a straightforward mark-sweep would be :-) I still 
> think that the best fit for node.js might be a combination of GC with 
> a region based memory management] 
>
> -- 
> Vyacheslav Egorov 
>
>
> On Fri, Jul 13, 2012 at 9:02 AM, Joran Greef  wrote: 
> > To get a very rough idea of how much time your program spends in gc, you 
> can 
> > expose the gc by running node with the --nouse_idle_notification 
> --expose_gc 
> > flags and then call gc manually with "global.gc()" and time how long 
> that 
> > takes. 
> > 
> > I have a program where a single JS object was being used as a hash table 
> > with a few million entries. When the gc would run, as far as I am aware, 
> it 
> > would iterate over every one of those entries and it would do this every 
> few 
> > seconds, each time pausing for about 500ms. I switched to a Buffer 
> backed 
> > open addressed hash table which is also more efficient in other 
> respects. 
> > 
> > 
> > On Friday, July 13, 2012 3:10:50 AM UTC+2, Alexey Petrushin wrote: 
> >> 
> >> There are rumors that current Node.js (or, more exactly, the V8 GC) performs 
> >> badly when there are lots of JS objects and memory in use. 
> >> 
> >> Could you please explain what exactly the problem is: lots of objects or 
> >> lots of properties on one object (or array)? 
> >> 
> >> Maybe there are some benchmarks; it would be interesting to see actual 
> >> code and numbers. 
> >> 
> >> As far as I know the main problem is lots of properties on one object, 
> >> not lots of objects themselves (although I'm not sure). If so, would an 
> >> in-memory graph database (a couple of hundred properties on 
> >> each node at most) be a good case?

Re: [nodejs] Version 0.8.10 (Stable)

2012-09-26 Thread Joran Greef
Thanks Ben, could you provide more detail as to the regression in fs.stat() 
file size reporting?

I use it as part of an append-only storage system, and any problems with it 
would be critical. The system would think it needs to repair and truncate 
the append-only files.

On Wednesday, September 26, 2012 1:26:36 AM UTC+2, Ben Noordhuis wrote:
>
> On Wed, Sep 26, 2012 at 12:39 AM, Isaac Schlueter > 
> wrote: 
> > 2012.09.25, Version 0.8.10 (Stable) 
> > 
> > * npm: Upgrade to 1.1.62 
> > 
> > * repl: make invalid RegExps throw in the REPL (Nathan Rajlich) 
> > 
> > * v8: loosen artificial mmap constraint (Bryan Cantrill) 
> > 
> > * process: fix setuid() and setgid() error reporting (Ben Noordhuis) 
> > 
> > * domain: Properly exit() on domain disposal (isaacs) 
> > 
> > * fs: fix watchFile() missing deletion events (Ben Noordhuis) 
> > 
> > * fs: fix assert in fs.watch() (Ben Noordhuis) 
> > 
> > * fs: don't segfault on deeply recursive stat() (Ben Noordhuis) 
> > 
> > * http: Remove timeout handler when data arrives (Frédéric Germain) 
> > 
> > * http: make the client "res" object gets the same domain as "req" 
> > (Nathan Rajlich) 
> > 
> > * windows: don't blow up when an invalid FD is used (Bert Belder) 
> > 
> > * unix: map EDQUOT to UV_ENOSPC (Charlie McConnell) 
> > 
> > * linux: improve /proc/cpuinfo parser (Ben Noordhuis) 
> > 
> > * win/tty: reset background brightness when color is set to default 
> > (Bert Belder) 
> > 
> > * unix: put child process stdio fds in blocking mode (Ben Noordhuis) 
> > 
> > * unix: fix EMFILE busy loop (Ben Noordhuis) 
> > 
> > * sunos: don't set TCP_KEEPALIVE (Ben Noordhuis) 
> > 
> > * tls: Use slab allocator for memory management (Fedor Indutny) 
> > 
> > * openssl: Use optimized assembly code for x86 and x64 (Bert Belder) 
> > 
> > 
> > Source Code: http://nodejs.org/dist/v0.8.10/node-v0.8.10.tar.gz 
> > 
> > Macintosh Installer (Universal): 
> http://nodejs.org/dist/v0.8.10/node-v0.8.10.pkg 
> > 
> > Windows Installer: http://nodejs.org/dist/v0.8.10/node-v0.8.10-x86.msi 
> > 
> > Windows x64 Installer: 
> http://nodejs.org/dist/v0.8.10/x64/node-v0.8.10-x64.msi 
> > 
> > Windows x64 Files: http://nodejs.org/dist/v0.8.10/x64/ 
> > 
> > Linux 32-bit Binary: 
> > http://nodejs.org/dist/v0.8.10/node-v0.8.10-linux-x86.tar.gz 
> > 
> > Linux 64-bit Binary: 
> > http://nodejs.org/dist/v0.8.10/node-v0.8.10-linux-x64.tar.gz 
> > 
> > Solaris 32-bit Binary: 
> > http://nodejs.org/dist/v0.8.10/node-v0.8.10-sunos-x86.tar.gz 
> > 
> > Solaris 64-bit Binary: 
> > http://nodejs.org/dist/v0.8.10/node-v0.8.10-sunos-x64.tar.gz 
> > 
> > Other release files: http://nodejs.org/dist/v0.8.10/ 
> > 
> > Website: http://nodejs.org/docs/v0.8.10/ 
> > 
> > Documentation: http://nodejs.org/docs/v0.8.10/api/ 
> > 
> > Shasums: 
> > 
> > ``` 
> > ae327ce5cc9f46e7d1bdd04f06ea299e44f9a0fc  node-v0.8.10-darwin-x64.tar.gz 
> > 812405695e3522bfd998d67b6de2daff55ff0a7b  node-v0.8.10-darwin-x86.tar.gz 
> > 8ef4e489817a79aaea75cffd09cc4b072c38fe2e  node-v0.8.10-linux-x64.tar.gz 
> > 76f289b12ba41730c43b59d286de5cee571c9064  node-v0.8.10-linux-x86.tar.gz 
> > 61e40e8a5e911b26889ad33856029d783388778a  node-v0.8.10-sunos-x64.tar.gz 
> > 2fd06dc2d145fdba6b2800186ffcaebb0fe3b109  node-v0.8.10-sunos-x86.tar.gz 
> > 3e49e1c958815d0144cafe5e43c2fa83e775dd1c  node-v0.8.10-x86.msi 
> > 9605340dca27725110eebcb15fdf61599622e308  node-v0.8.10.pkg 
> > c596cce77726724441cf7fc98f42df3a5335ab8e  node-v0.8.10.tar.gz 
> > f6d172e3452e2bf429dc75836a385eff22407c83  node.exe 
> > ccb8c0e0fa052d8da48ea421cc7a220bb89835f9  node.exp 
> > 7ea077e6ca1216d5e8f42e445f75f0542f7d85c4  node.lib 
> > d86ad55c37ac1a0975731ab444fc58c93a7bca47  node.pdb 
> > 90fc70b09f92f788fc68a116f589a86ca3309fdb  x64/node-v0.8.10-x64.msi 
> > 4b62cedf4bbbff1ffb8df6d52c9a57255f383ad3  x64/node.exe 
> > 6b4b813c3b065154ce437b7f36a15beceeb76af0  x64/node.exp 
> > 9e7dd3ddca3ba37483ac2125c39a52e72035e1d0  x64/node.lib 
> > 28de8dd05af5e390f5bd63508bd27b7c683177de  x64/node.pdb 
>
> It's probably better to skip this release, there's a small regression 
> in fs.stat() file size reporting. We'll release 0.8.11 later this 
> week, probably tomorrow. 
>



Re: [nodejs] Version 0.8.10 (Stable)

2012-10-15 Thread Joran Greef
Thank you Ben.

On Wednesday, September 26, 2012 2:12:38 PM UTC+2, Ben Noordhuis wrote:
>
> On Wed, Sep 26, 2012 at 12:46 PM, Joran Greef 
> > 
> wrote: 
> > Thanks Ben, could you provide more detail as to the regression in 
> fs.stat() 
> > file size reporting? 
>
> The stat() and fstat() functions report wrong file sizes for files > 1 
> GB on 32-bit systems and > 2 GB on 64-bit systems. 
>



Re: [nodejs] TLS socket write + drain

2013-07-19 Thread Joran Greef
Thanks Ben, does a return code of true and the callback being called mean
exactly the same thing? Do both give the same guarantee?

The reason I ask is because I am pre-allocating a buffer for a websocket
server, and overwriting that buffer as soon as write returns true or
the callback fires. I noticed that for large writes this overwrites
data before it reaches the client.
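Until that guarantee is clear, a defensive pattern (illustrative names) is to hand write() a private copy, so the scratch buffer can be reused immediately regardless of when the TLS layer actually consumes the data:

```javascript
// Write a private copy of the live region of a reusable scratch buffer,
// so the caller may overwrite `scratch` as soon as write() returns.
// This trades one copy per write for certainty.
function writeCopy(stream, scratch, length, callback) {
  const copy = Buffer.allocUnsafe(length);
  scratch.copy(copy, 0, 0, length);
  return stream.write(copy, callback);
}
```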

Joran Greef

RONOMON


On Wed, Jul 17, 2013 at 11:02 PM, Ben Noordhuis  wrote:

> On Wed, Jul 17, 2013 at 4:01 PM, Joran Dirk Greef 
> wrote:
> > I have a question regarding how TLS socket write works and am going
> through
> > the source now too.
> >
> > The documentation details this behavior as:
> >
> > "Returns true if the entire data was flushed successfully to the kernel
> > buffer. Returns false if all or part of the data was queued in user
> memory.
> > 'drain' will be emitted when the buffer is again free. The optional
> callback
> > parameter will be executed when the data is finally written out - this
> may
> > not be immediately."
> >
> > If I pass a 1mb-2mb buffer to write(), does a true return code or
> executed
> > callback guarantee that I can overwrite parts of that buffer, without
> > changing any data that was passed to the write?
>
> Yes.
>





[nodejs] Safe to modify buffer after flushed to kernel buffer?

2012-04-19 Thread Joran Greef
I need to write a few thousand 24-byte values to a TCP socket which 
represents a WebSocket connection.

I must retrieve them individually from disk, but I want to group a thousand 
of them into each write, since each write represents a WebSocket message and 
each message has additional header overhead.

At first I was instantiating a 24-byte buffer for each value, reading into 
this from disk, iterating over the array of values, calculating the size, 
creating a large buffer, copying these into it, and then writing this 
aggregate buffer out to the socket. There were a few abstractions involved 
and this was the simplest code flow.

I was concerned about the needless copying, so I ran a rough benchmark: 
creating a large buffer up front and writing directly into it is three times 
faster than creating many small buffers and then copying them into a large 
buffer.

I would like to change tactics and create a buffer upfront, read the small 
values directly into this from disk, and then write this to the socket. 
After receiving the callback from the write (flushed to kernel buffers) I 
would like to start writing into the same buffer at index 0.

Is it safe to modify a buffer after it has been flushed to the kernel 
buffer?
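The pattern I have in mind, sketched with illustrative names (a constant fill stands in for the 24-byte values read from disk):

```javascript
const VALUE_SIZE = 24;
const BATCH = 1000;
// Pre-allocate once; refill from index 0 only after the write callback
// signals that the previous batch was flushed to the kernel buffer.
const scratch = Buffer.allocUnsafe(VALUE_SIZE * BATCH);

function writeBatch(socket, fill, done) {
  for (let offset = 0; offset < scratch.length; offset += VALUE_SIZE) {
    // In the real system each 24-byte value comes from disk; a constant
    // fill stands in for that here.
    scratch.fill(fill, offset, offset + VALUE_SIZE);
  }
  socket.write(scratch, done); // only reuse scratch inside done()
}
```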



Re: [nodejs] Safe to modify buffer after flushed to kernel buffer?

2012-04-19 Thread Joran Greef
Thanks Tim, Ben. Would it be possible to add this to the docs?

On Thursday, April 19, 2012 9:49:30 PM UTC+2, Ben Noordhuis wrote:
>
> On Thu, Apr 19, 2012 at 18:39, Tim Caswell  wrote:
> > As far as I know, the buffer won't be used by libuv/node after the write
> > callback is called, that's one of the purposes of this callback.  Someone
> > please correct me if I'm wrong.
>
> That's correct. Don't touch the buffer after the call to .write() but
> when the write callback fires, it's yours again.
>



Re: [nodejs]will kilos setTimeout spend too much resourece?

2012-04-22 Thread Joran Greef
If you pass setTimeout a reference to an existing function, rather than 
creating a new closure for each setTimeout call, you will lower your memory 
and CPU overhead substantially.
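A sketch of the difference, using an illustrative in-memory cache (Node's setTimeout passes extra arguments through to the callback, so no per-call closure is needed):

```javascript
// One shared function reference for every timer, instead of a fresh
// closure per entry.
const cache = new Map();

function expire(key) {
  cache.delete(key);
}

function setWithTTL(key, value, ttl) {
  cache.set(key, value);
  // Instead of: setTimeout(function() { expire(key); }, ttl)
  setTimeout(expire, ttl, key); // same reference every call; no closure
}
```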

On Monday, April 23, 2012 7:49:08 AM UTC+2, Jason.桂林 wrote:
>
> I need to write a service using something like an in-memory cache, and I 
> want to do the `expire` logic in setTimeout. Will it be very slow if I 
> expire many things, from thousands to millions?
>
> -- 
> Best regards,
>
> Jason Green
> 桂林
>
>
>



[nodejs] Reverse DNS of ip from same ip address

2012-05-10 Thread Joran Greef
Testing reverse lookups and noticed the following:

var dns = require('dns');

dns.reverse('176.9.113.76', function(error, domains) {
  console.log(JSON.stringify(domains));
});

Calling this from the machine itself returns [].

Calling this from any other machine returns ["ronomon.com"].

Is this by design?



[nodejs] Re: Reverse DNS of ip from same ip address

2012-05-10 Thread Joran Greef
Thanks, that was it.

On Thursday, May 10, 2012 9:15:09 AM UTC+2, mscdex wrote:
>
> On May 10, 3:00 am, Joran Greef  wrote: 
> > Is this by design? 
>
> It's because you have that IP explicitly mapped to something in your 
> /etc/hosts. Remove that entry and you should see the resolved hostname 
> as expected.



[nodejs] Object as hash with 7 million entries slows V8

2012-05-11 Thread Joran Greef
I have posted this in v8-users but perhaps someone else here will also be 
familiar with this:

I am using V8 as part of Node and have written a JavaScript implementation 
of Bitcask, using a JavaScript object as a hash to keep file offsets in 
memory.

This object has 7 million entries and I'm noticing that while the JS code 
is resting, doing nothing, V8 is hitting 100% CPU every few seconds and 
doing this continually.

Attached is the full result of running V8 with --prof.

And of particular interest:

[C++]:
   ticks  total  nonlib   name
   73615   43.1%   43.1%  v8::internal::StaticMarkingVisitor::VisitUnmarkedObjects
   68436   40.1%   40.1%  _accept$NOCANCEL
    4796    2.8%    2.8%  v8::internal::FlexibleBodyVisitor::VisitSpecialized<40>

Should I be using many smaller hashes to keep this overhead down, i.e. some 
sort of sparse hash implementation? Or using key mod 1000 to determine which 
hash a key should go in?

Does V8 have limits on hash table sizes?

Thanks.
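For concreteness, the "key mod 1000" sharding idea above could look like this (shard count is arbitrary; whether it actually helps depends on how the GC scans objects):

```javascript
// Spread entries across many smaller plain objects so no single object
// dominates a GC scan. Purely illustrative.
const SHARD_COUNT = 1024;
const shards = [];
for (let i = 0; i < SHARD_COUNT; i++) shards.push({});

function shardFor(key) {
  return shards[key % SHARD_COUNT];
}

function set(key, value) { shardFor(key)[key] = value; }
function get(key) { return shardFor(key)[key]; }
```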

Code move event for unknown code: 0x2c89a1006000
Statistical profiling result from v8.log, (723501 ticks, 2492 unaccounted, 0 excluded).

 [Unknown]:
   ticks  total  nonlib   name
    2492    0.3%

 [Shared libraries]:
   ticks  total  nonlib   name

 [JavaScript]:
   ticks  total  nonlib   name
     967    0.1%    0.1%  Stub: StringAddStub
     865    0.1%    0.1%  LazyCompile: *ByteArray.readString
     622    0.1%    0.1%  Function: Node.fs.readdir.queue.process.Store3.started
     319    0.0%    0.0%  LazyCompile: ~Store3.Pointer
     196    0.0%    0.0%  Builtin: A builtin from the snapshot
     188    0.0%    0.0%  LazyCompile: *ByteArray.setReadOffset
     134    0.0%    0.0%  Builtin: A builtin from the snapshot {1}
      57    0.0%    0.0%  Stub: CEntryStub
      48    0.0%    0.0%  LazyCompile: *Store3.Entry
      20    0.0%    0.0%  KeyedLoadIC: A keyed load IC from the snapshot
      19    0.0%    0.0%  Stub: CompareStub
      18    0.0%    0.0%  Stub: CompareStub_EQ
      14    0.0%    0.0%  Stub: CallConstructStub_Recording
      12    0.0%    0.0%  KeyedStoreIC: A keyed store IC from the snapshot
      11    0.0%    0.0%  Stub: BinaryOpStub
      10    0.0%    0.0%  Stub: CompareICStub {1}
       9    0.0%    0.0%  Stub: CompareICStub
       8    0.0%    0.0%  Stub: KeyedLoadElementStub
       7    0.0%    0.0%  KeyedLoadIC: args_count: 0
       4    0.0%    0.0%  LazyCompile: ~ByteArray.readInteger
       4    0.0%    0.0%  Builtin: A builtin from the snapshot {2}
       2    0.0%    0.0%  LazyCompile: ~ByteArray.readString
       2    0.0%    0.0%  LazyCompile: DefaultNumber native runtime.js:627
       1    0.0%    0.0%  Stub: StringAddStub {1}
       1    0.0%    0.0%  Stub: RecordWriteStub {1}
       1    0.0%    0.0%  Stub: RecordWriteStub
       1    0.0%    0.0%  Stub: NumberToStringStub
       1    0.0%    0.0%  Stub: BinaryOpStub_MOD_Alloc_SMI
       1    0.0%    0.0%  LazyCompile: ~global.Log.dateTime /Users/Joran/Ronomon/run:56
       1    0.0%    0.0%  LazyCompile: ~global.Log /Users/Joran/Ronomon/run:32
       1    0.0%    0.0%  LazyCompile: ~MessageQueue.get
       1    0.0%    0.0%  LazyCompile: IN native runtime.js:354
       1    0.0%    0.0%  LazyCompile: *exports.setInterval.timer.ontimeout timers.js:231
       1    0.0%    0.0%  Function: ~Buffer buffer.js:210
       1    0.0%    0.0%  Eval:
       1    0.0%    0.0%  CallPreMonomorphic: args_count: 0 {1}
       1    0.0%    0.0%  Builtin: A builtin from the snapshot {3}

 [C++]:
   ticks  total  nonlib   name
  335199   46.3%   46.3%  v8::internal::StaticMarkingVisitor::VisitUnmarkedObjects
  331471   45.8%   45.8%  _accept$NOCANCEL
   21382    3.0%    3.0%  v8::internal::FlexibleBodyVisitor::VisitSpecialized<40>
    4373    0.6%    0.6%  v8::internal::FixedBodyVisitor, void>::Visit
    3484    0.5%    0.5%  v8::internal::MarkCompactCollector::MarkObject
    3353    0.5%    0.5%  v8::internal::Heap::AllocateHashTable
    2293    0.3%    0.3%  v8::internal::DiscoverGreyObjectsInSpace
    1557    0.2%    0.2%  v8::internal::IncrementalMarkingMarkingVisitor::VisitPointers
    1453    0.2%    0.2%  v8::internal::HashTable::FindEntry
    1297    0.2%    0.2%  v8::internal::StoreBuffer::IteratePointersInStoreBuffer
    1275    0.2%    0.2%  ___gettimeofday
    1100    0.2%    0.2%  v8::internal::HashTable::Rehash
     599    0.1%    0.1%  v8::internal::MarkCompactCollector::EmptyMarkingDeque
     568    0.1%    0.1%  v8::internal::StoreBuffer::FindPointersToNewSpaceInRegion
     540    0.1%    0.1%  v8::internal::MarkCompactCollector::SweepConservatively

Re: [nodejs] Object as hash with 7 million entries slows V8

2012-05-11 Thread Joran Greef
The thing is the JS is doing nothing, the huge hash is just sitting there.

On Friday, May 11, 2012 9:57:47 AM UTC+2, kapouer wrote:
>
> Isn't that gc doing its work ? 
> As a workaround, you can turn it off and run it manually 
> node --nouse_idle_notification --expose_gc 
> > global.gc(); 
>
> Regards, 
> J�r�my. 
>
> On 11/05/2012 09:51, Joran Greef wrote: 
> > I have posted this in v8-users but perhaps someone else here will also 
> be familiar with this: 
> > 
> > I am using V8 as part of Node and have written a Javascript 
> implementation of Bitcask, using a Javascript object as a hash to keep file 
> offsets in memory. 
> > 
> > This object has 7 million entries and I'm noticing that while the JS 
> code is resting, doing nothing, V8 is hitting 100% CPU every few seconds 
> and doing this continually. 
> > 
> > Attached is the full result of running V8 with --prof. 
> > 
> > And of particular interest: 
> > 
> > [C++]: 
> >    ticks  total  nonlib   name 
> >   73615   43.1%   43.1%  v8::internal::StaticMarkingVisitor::VisitUnmarkedObjects 
> >   68436   40.1%   40.1%  _accept$NOCANCEL 
> >    4796    2.8%    2.8%  v8::internal::FlexibleBodyVisitor v8::internal::JSObject::BodyDescriptor, void>::VisitSpecialized<40> 
> > 
> > Should I be using many smaller hashes to keep this overhead down? i.e. 
> some sort of sparse hash implementation? Or using key mod 1000 to determine 
> the hash it should be in? 
> > 
> > Does V8 have limits on hash table sizes? 
> > 
> > Thanks. 
> > 
>
>

-- 
Job Board: http://jobs.nodejs.org/
Posting guidelines: 
https://github.com/joyent/node/wiki/Mailing-List-Posting-Guidelines
You received this message because you are subscribed to the Google
Groups "nodejs" group.
To post to this group, send email to nodejs@googlegroups.com
To unsubscribe from this group, send email to
nodejs+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/nodejs?hl=en?hl=en


Re: [nodejs] Object as hash with 7 million entries slows V8

2012-05-11 Thread Joran Greef
Jeremy, I was trying to understand why GC was spending time when there 
should be no work to do.

Marcel, I came across your blog post on the subject an hour ago and spotted 
the v8 issue as well just before your post here.

Thanks for your suggestions.

On Friday, May 11, 2012 10:57:20 AM UTC+2, Marcel wrote:
>
> There is an issue in v8 where idle tick GC does not pick up where the old 
> GC left off and leads to lots of time wasted. See v8 issue #1458. This is 
> fixed in bleeding_edge, but hasn't landed in node yet, not even 0.7.x. Try 
> Jeremy's suggestion or you could also try using bleeding_edge v8 on node 
> 0.7.x. I imagine both would lead to improvements.
>
> On Fri, May 11, 2012 at 3:43 AM, Jérémy Lal  wrote:
>
>> Idle -> GC -> visiting objects (?)
>>
>> Hence my suggestion : control gc() calls yourself.
>>
>> On 11/05/2012 10:20, Joran Greef wrote:
>> > The thing is the JS is doing nothing, the huge hash is just sitting 
>> there.
>> >
>> > On Friday, May 11, 2012 9:57:47 AM UTC+2, kapouer wrote:
>> >
>> > Isn't that gc doing its work ?
>> > As a workaround, you can turn it off and run it manually
>> > node --nouse_idle_notification --expose_gc
>> > > global.gc();
>> >
>> > Regards,
>> > Jérémy.
>> >
>> > On 11/05/2012 09:51, Joran Greef wrote:
>> > > I have posted this in v8-users but perhaps someone else here will 
>> also be familiar with this:
>> > >
>> > > I am using V8 as part of Node and have written a Javascript 
>> implementation of Bitcask, using a Javascript object as a hash to keep file 
>> offsets in memory.
>> > >
>> > > This object has 7 million entries and I'm noticing that while the 
>> JS code is resting, doing nothing, V8 is hitting 100% CPU every few seconds 
>> and doing this continually.
>> > >
>> > > Attached is the full result of running V8 with --prof.
>> > >
>> > > And of particular interest:
>> > >
>> > > [C++]:
>> > >ticks  total  nonlib   name
>> > >   73615   43.1%   43.1%  v8::internal::StaticMarkingVisitor::VisitUnmarkedObjects
>> > >   68436   40.1%   40.1%  _accept$NOCANCEL
>> > >    4796    2.8%    2.8%  v8::internal::FlexibleBodyVisitor<v8::internal::JSObject::BodyDescriptor, void>::VisitSpecialized<40>
>> > >
>> > > Should I be using many smaller hashes to keep this overhead down? 
>> i.e. some sort of sparse hash implementation? Or using key mod 1000 to 
>> determine the hash it should be in?
>> > >
>> > > Does V8 have limits on hash table sizes?
>> > >
>> > > Thanks.
>> > >
>> >
>>
>
>



[nodejs] Pass v8 options as part of bash script in Ubuntu

2012-05-11 Thread Joran Greef
I have a bash script which uses Node as the interpreter and I am trying to 
pass V8 options like this:

#!/usr/local/bin/node --nouse_idle_notification --expose_gc 
--max_old_space_size=32768
console.log('gc exposed: ' + !!global.gc);

This works on Mac but not on Ubuntu, where the flags seem to have no 
effect.

Anyone have any idea what to change?
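The usual explanation is that on Linux the kernel passes everything after 
the interpreter in a shebang line as a SINGLE argument, so the whole flag 
list reaches node as one unparseable option, while on Mac the arguments are 
split. One hedged workaround (assuming `node` is on `$PATH`; only 
`--expose_gc` is used in this demo, but the full flag list goes the same 
way) is to make the file a sh/JS polyglot whose first real line re-execs 
node with the flags split properly:

```shell
# Write the polyglot wrapper: sh sees a no-op (":") then an exec;
# node strips the shebang and treats '":" //...' as a string + comment.
cat > /tmp/gc-wrapper.js <<'EOF'
#!/bin/sh
":" //; exec node --expose_gc "$0" "$@"
console.log('gc exposed: ' + !!global.gc);
EOF
chmod +x /tmp/gc-wrapper.js
/tmp/gc-wrapper.js
```

The wrapper prints "gc exposed: true" regardless of how the kernel handles 
shebang arguments, because the re-exec always passes the flags as separate 
argv entries.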



Re: [nodejs] Object as hash with 7 million entries slows V8

2012-05-11 Thread Joran Greef
Ja it was becoming a problem around 3 million entries.

Tried multiple smaller hashes, but no help: the GC was still visiting every 
key in every hash.

With nouse_idle_notification, there's no problem, works great now.

Tim, I was looking at nStore's file implementation and saw you were 
serializing reads. Were you doing this only to use a shared read buffer, or 
does this also work with the disk better? I was thinking allowing multiple 
concurrent readers MVCC style would work better together with the 
filesystem cache.

On Friday, May 11, 2012 4:26:00 PM UTC+2, Tim Caswell wrote:
>
> So this will finally get fixed!  This exact case was one of the blockers 
> that made me abandon nStore.  I was storing offsets in a js object and the 
> GC spun out of control around 1 million entries. (The other blocker was 
> node's fs interface was way too slow to implement a disk-based database, I 
> clocked 90% CPU time in mutex locks)
>
> On Fri, May 11, 2012 at 4:35 AM, Joran Greef  wrote:
>
>> Jeremy, I was trying to understand why GC was spending time when there 
>> should be no work to do.
>>
>> Marcel, I came across your blog post on the subject an hour ago and 
>> spotted the v8 issue as well just before your post here.
>>
>> Thanks for your suggestions.
>>
>>
>> On Friday, May 11, 2012 10:57:20 AM UTC+2, Marcel wrote:
>>>
>>> There is an issue in v8 where idle tick GC does not pick up where the 
>>> old GC left off and leads to lots of time wasted. See v8 issue #1458. This 
>>> is fixed in bleeding_edge, but hasn't landed in node yet, not even 0.7.x. 
>>> Try Jeremy's suggestion or you could also try using bleeding_edge v8 on 
>>> node 0.7.x. I imagine both would lead to improvements.
>>>
>>> On Fri, May 11, 2012 at 3:43 AM, Jérémy Lal  wrote:
>>>
>>>> Idle -> GC -> visiting objects (?)
>>>>
>>>> Hence my suggestion : control gc() calls yourself.
>>>>

Re: [nodejs] Object as hash with 7 million entries slows V8

2012-05-19 Thread Joran Greef
Thanks Jorge, I turned off idle notifications and exposed the gc, which now 
runs every 30 minutes or so. That's had a good impact on CPU across the 
whole application.

Then for the hash I switched to a Buffer-backed hash implementation. I can 
keep the per-key memory overhead much lower that way, and the access times 
are on par with the native JS hash. It allows for shorter keys: 16 bytes of 
binary is enough for 128-bit keys, whereas with the JS hash you would need 
about 24 bytes of Base62 to get the same. Also, since it's just a buffer, 
and the keys are coming from disk on startup, you can copy the keys from 
disk directly into the hash without having to do thousands of mem copies, 
so it's a few times faster than the original implementation. Another 
advantage is that you can predict the size of the hash in advance and avoid 
expensive hash resizes (tens of seconds for a few million keys using the JS 
hash) for a much faster startup time.

Have tested up to 60 million keys: it handles fine and starts up in about 
60 seconds, parsing and loading the keys from disk. Memory overhead is much 
more predictable (exactly 25 bytes per key, all included), whereas JS 
strings alone on 64-bit are 24 bytes, excluding JS hash and value overhead. 
Just from this small exercise I think TypedArrays are really going to start 
making a huge difference for JS as a dynamic language.
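A minimal sketch of such a Buffer-backed, open-addressed hash with linear 
probing follows. The record layout and sizes here are illustrative, not the 
poster's exact 25-byte layout; keys are assumed to be uniformly distributed 
16-byte binary values (e.g. hash digests), so a key prefix can double as 
the bucket index. Buffer.alloc is the modern API; 2012-era code would have 
used `new Buffer(n)` and zeroed it by hand:

```javascript
// Slot layout: 16-byte key | 6-byte (48-bit) file offset | 1-byte flag.
var KEY = 16, VALUE = 6, FLAG = 1;
var RECORD = KEY + VALUE + FLAG;             // 23 bytes per slot
var capacity = 1 << 16;                      // sized in advance: no resizes
var table = Buffer.alloc(capacity * RECORD); // zero-filled: flag 0 = empty

function slotFor(key) {                      // key: 16-byte Buffer
  return key.readUInt32BE(0) % capacity;     // key prefix as bucket index
}

function set(key, offset) {
  var slot = slotFor(key);
  for (var i = 0; i < capacity; i++) {       // linear probing
    var pos = ((slot + i) % capacity) * RECORD;
    var empty = table[pos + KEY + VALUE] === 0;
    if (empty || key.compare(table, pos, pos + KEY, 0, KEY) === 0) {
      key.copy(table, pos);                        // key bytes in place
      table.writeUIntBE(offset, pos + KEY, VALUE); // 48-bit file offset
      table[pos + KEY + VALUE] = 1;                // mark slot occupied
      return;
    }
  }
  throw new Error('hash table full');
}

function get(key) {
  var slot = slotFor(key);
  for (var i = 0; i < capacity; i++) {
    var pos = ((slot + i) % capacity) * RECORD;
    if (table[pos + KEY + VALUE] === 0) return undefined; // probe ended
    if (key.compare(table, pos, pos + KEY, 0, KEY) === 0)
      return table.readUIntBE(pos + KEY, VALUE);
  }
  return undefined;
}
```

Because the whole table is one flat Buffer, the GC sees a single object 
regardless of how many keys it holds, which is the point of the exercise.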

On Saturday, May 19, 2012 1:08:40 PM UTC+2, Jorge wrote:
>
> If this is not fixed yet, you could move the hash to a thread_a_gogo so 
> that the hiccups won't happen in node's event loop thread, but in the 
> thread_a_gogo. You will still see delays when accessing the hash keys, but 
> as the access to the hash will be asynchronous they won't be blocking node. 
>
> <https://gist.github.com/2730481> 
>
> Supply a 2nd argument to run with threads_a_gogo, let it run for a while 
> and see that you still get a sane figure in "event loop ticks per second": 
>
> $ node keys.js yes 
> Multi thread 
> ... 
> ... 
> * Event loop ticks per second -> 366547, keys per second -> 1360 
> ... 
>
> Unlike when you run it single-threaded because the GC hiccups are blocking 
> it: 
>
> $ node keys.js 
> Single thread 
> ... 
> ... 
> * Event loop ticks per second -> 4138, keys per second -> 4138 
> ... 
>
> Threads_a_gogo calls the GC every 2000 or so turns IIRC, that's perhaps 
> too often, and that's why the "keys per second" figure is much lower. But 
> over a thousand per second on average might be plenty depending on your 
> application. 
>
> https://github.com/xk/node-threads-a-gogo 
> -- 
> Jorge. 
>
> On May 11, 2012, at 6:07 PM, Joran Greef wrote: 
>

[nodejs] Re: Node.js bindings for HyperDex

2012-05-23 Thread Joran Greef
Hi Robert, spotted the HyperDex project when it was released, the 
hyperspace hashing is really cool.

On Tuesday, May 22, 2012 6:40:34 PM UTC+2, Robert Escriva wrote:
>
> Hello, 
>
> I'm a core developer for the HyperDex[1] project.  One of the people 
> working on the project has added complete Node.js bindings[2]. 
>
> The HyperDex team is committed to keeping these bindings updated, and 
> would love feedback from the community to make them better.  With the 
> Node.js community's help, we'll make HyperDex integrate with Node.js as 
> seamlessly as possible. 
>
> Right now, the API is a direct port of the Python API.  This means that 
> HyperDex's event loop is not integrated with Node's event loop, and that 
> applications may not follow all of Node.js' idioms as well as they 
> could. 
>
> I have a few questions for the Node.js community that I was hoping the 
> folks on this mailing list can help answer: 
>
>  - I see one prior post to this list mentioning HyperDex[3].  Is there 
>other interest in official HyperDex support for the Node.js bindings? 
>
>  - Are there developers close to the Node.js project who are willing 
>to collaborate to make the bindings better?  We're specifically 
>thinking that the Node.js community will know of some areas in which 
>the HyperDex API can be expanded or altered to provide new and unique 
>features to Node.js applications. 
>
> I've CC'ed the HyperDex team.  Please feel free to contact us if you 
> have any questions or suggestions about how we can make HyperDex work 
> better for your Node.js app. 
>
> Thanks, 
> Robert 
>
> 1. http://hyperdex.org/ 
> 2. 
> https://github.com/rescrv/HyperDex/commit/b41377ccd539d2736134a7247aef335d6ef60d7e
>  
> 3.  
> http://groups.google.com/group/nodejs/browse_thread/thread/1ec2908cd5fafa28/367ef183923601e4?lnk=gst&q=hyperdex#367ef183923601e4
>  
>



[nodejs] Faster 32-bit hashes

2012-05-30 Thread Joran Greef
If you're doing 32-bit hashes in Javascript and are willing to trade a bit, 
then a 31-bit hash may be at least an order of magnitude faster: 
https://groups.google.com/d/msg/v8-users/zGCS_wEMawU/6mConTiBUyMJ
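For context, the gain comes from V8's small-integer (Smi) representation: a 
result that fits in 31 bits can stay unboxed on 32-bit builds, while a full 
32-bit value may need a heap-number allocation per hash. A hedged sketch; 
the mixing function here is illustrative, not the one benchmarked in the 
linked thread:

```javascript
// 31-bit string hash: masking with 0x7fffffff keeps the running value in
// V8's Smi range, avoiding per-call heap-number allocation. The multiplier
// 31 is the classic Java-style mixer, chosen only for illustration.
function hash31(s) {
  var h = 0;
  for (var i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) & 0x7fffffff; // mask to 31 bits
  }
  return h; // always a non-negative 31-bit integer
}
```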



Re: [nodejs] Faster 32-bit hashes

2012-05-30 Thread Joran Greef
Hash implementation quality being equal, C++ may be faster for hash inputs 
larger than some threshold N bytes, where the cost of crossing between JS 
and C++ no longer outweighs the benefit. For keys smaller than N bytes, 
hashing in JS may be faster.

On Wednesday, May 30, 2012 6:17:17 PM UTC+2, Mark Hahn wrote:
>
> Are you talking about calculating the hash in javascript?  If so then a 
> third way, using a C++ extension, would be much faster than either 31 or 32.
>
> On Wed, May 30, 2012 at 8:05 AM, Joran Greef  wrote:
>
>> If you're doing 32-bit hashes in Javascript and are willing to trade a 
>> bit, then a 31-bit hash may be at least an order of magnitude faster: 
>> https://groups.google.com/d/msg/v8-users/zGCS_wEMawU/6mConTiBUyMJ
>>
>>
>
>



Re: [nodejs] HTTPS, NPN, freeParser

2013-12-17 Thread Joran Greef
Thank you Fedor,

I had looked through your code already and had tried just 
"removeAllListeners()" without passing an argument. I will try passing 
"secureConnection" in as an argument instead.

Joran


--- 
You received this message because you are subscribed to the Google Groups 
"nodejs" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to nodejs+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.