Re: [rust-dev] net::tcp::TcpSocket slow?

2012-12-22 Thread Michael Neumann

On 22.12.2012 17:35, Patrick Walton wrote:

On 12/22/12 9:15 AM, Michael Neumann wrote:

The best thing I can do is to use blocking I/O here anyway as it's
better to have just one connection to Redis and multiplex that, so I can
easily use one native thread for that.
I am just very new to Rust, and the only thing I found was tcp_net. So I
think I should define my own FFI socket calls, right?


Yes, that's what I would do. I think there may be some bindings to BSD 
sockets in cargo -- Brian would know better here.


The scheduler modes here might be useful to ensure that your task gets 
its own OS thread: 
http://dl.rust-lang.org/doc/0.4/core/task.html#enum-schedmode


Thanks!

I reran the same benchmark on a faster box with many cores. Now there is 
almost no difference in the single-thread case compared against Ruby. I 
get 15k requests per second.
Once I use multiple threads, Rust is the clear winner. It maxes out at 
30k requests per second, which I think is somehow the limit of what can 
be obtained (I've seen this number in several other HTTP benchmarks). 
Each thread uses a separate connection, of course.


I also tried to create a separate iotask for each socket connection, but 
when I do so:


  let iotask = uv::iotask::spawn_iotask(task::task());

performance drops to 50% compared with:

  let iotask = uv::global_loop::get();

I wonder what the reason for this is...

Anyway, I am happy to see that Rust's network performance is quite 
competitive!


Best,

  Michael
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] REPL is broken in 0.5

2012-12-22 Thread Patrick Walton

On 12/21/12 11:51 PM, Patrick Walton wrote:

On 12/21/12 10:42 PM, Patrick Walton wrote:

On 12/21/12 10:31 PM, Patrick Walton wrote:

All inputs fail with:

 rust: task failed at 'no mode for lval',
/Users/pwalton/Source/rust/master/src/librustc/middle/liveness.rs:1573

I'll look into this; I feel we should probably do a 0.5.1 to fix this.
Thoughts?


Fixed in incoming, although I'm still having trouble with fmt symbols
in the REPL. Trying a clobber build.


Would anyone mind testing the REPL in incoming?


OK, I have fixed the REPL. I will look into respinning a 0.5.1 release 
for this, if there are no objections.


Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Question about lifetime analysis (a 0.5 transition question)

2012-12-22 Thread Lucian Branescu
I think the problem is that the compiler can't guarantee the managed box
will live long enough, so it won't allow a borrowed pointer into it.

I think there are problems in general with @ and borrowing.
I've converted the red-black tree I wrote to use iter::BaseIter but am now
fighting with lifetime analysis after the switch to 0.5.

https://github.com/stevej/rustled/blob/master/red_black_tree.rs#L91

And the error I'm getting with 0.5 is:

http://pastebin.com/YK8v7EdA

I've read the docs on lifetimes several times now but it's not quite enough
to get me over this hurdle.


Thanks!
Steve

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] how to call closures stored in struct slots (a 0.5 question)

2012-12-22 Thread Tim Chevalier
On Sat, Dec 22, 2012 at 10:32 AM, Steve Jenson  wrote:
> Yes, that's it!
>
> Are these migration-related questions suited for this list or should I use
> github issues?
>

github issues should generally be for situations where you're pretty
sure that rustc/libraries are wrong or need improvement. The list or
IRC is great for asking questions where you don't think there's a
compiler bug. So you're doing it right :-)

Cheers,
Tim



-- 
Tim Chevalier * http://catamorphism.org/ * Often in error, never in doubt
"We know there'd hardly be no one in prison / If rights to food,
clothes, and shelter were given." -- Boots Riley
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] how to call closures stored in struct slots (a 0.5 question)

2012-12-22 Thread Steve Jenson
Yes, that's it!

Are these migration-related questions suited for this list or should I use
github issues?

thanks again,
steve


On Sat, Dec 22, 2012 at 10:28 AM, Tim Chevalier wrote:

> On Sat, Dec 22, 2012 at 10:27 AM, Steve Jenson 
> wrote:
> > In 0.4, I had a struct that stored a fn that I later called as if it were a
> > method. In 0.5, this has ceased to work. What is the new syntax for calling
> > functions stored in slots?
> >
> > Here's the small code example (please excuse how naive it is):
> >
> > https://github.com/stevej/rustled/blob/master/lazy.rs#L28
> >
> > and here is the 0.5 compiler error I receive:
> >
> > lazy.rs:28:21: 28:33 error: type `lazy::Lazy<'a>` does not implement any
> > method in scope named `code`
> > lazy.rs:28 let result = self.code();
>
> (Warning: not tested.) I believe the way to do this is to write:
>
> let result = (self.code)();
>
> Cheers,
> Tim
>
>
>
> --
> Tim Chevalier * http://catamorphism.org/ * Often in error, never in doubt
> "We know there'd hardly be no one in prison / If rights to food,
> clothes, and shelter were given." -- Boots Riley
>
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Question about lifetime analysis (a 0.5 transition question)

2012-12-22 Thread Steve Jenson
I've converted the red-black tree I wrote to use iter::BaseIter but am now
fighting with lifetime analysis after the switch to 0.5.

https://github.com/stevej/rustled/blob/master/red_black_tree.rs#L91

And the error I'm getting with 0.5 is:

http://pastebin.com/YK8v7EdA

I've read the docs on lifetimes several times now but it's not quite enough
to get me over this hurdle.


Thanks!
Steve
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] how to call closures stored in struct slots (a 0.5 question)

2012-12-22 Thread Tim Chevalier
On Sat, Dec 22, 2012 at 10:27 AM, Steve Jenson  wrote:
> In 0.4, I had a struct that stored a fn that I later called as if it were a
> method. In 0.5, this has ceased to work. What is the new syntax for calling
> functions stored in slots?
>
> Here's the small code example (please excuse how naive it is):
>
> https://github.com/stevej/rustled/blob/master/lazy.rs#L28
>
> and here is the 0.5 compiler error I receive:
>
> lazy.rs:28:21: 28:33 error: type `lazy::Lazy<'a>` does not implement any
> method in scope named `code`
> lazy.rs:28 let result = self.code();

(Warning: not tested.) I believe the way to do this is to write:

let result = (self.code)();
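
As a self-contained sketch (again untested, and assuming the boxed
closure type is still written fn@ in 0.5 -- this is a simplified
stand-in for the Lazy in lazy.rs, not the real thing), the distinction
looks like this:

  struct Lazy {
      code: fn@() -> int
  }

  fn force(l: &Lazy) -> int {
      // l.code() would be parsed as a method call, and Lazy has no
      // method named `code`; parenthesizing the field access first
      // turns it into an ordinary call of the stored closure.
      (l.code)()
  }

The parentheses tell the compiler to evaluate the field access before
applying the call.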

Cheers,
Tim



-- 
Tim Chevalier * http://catamorphism.org/ * Often in error, never in doubt
"We know there'd hardly be no one in prison / If rights to food,
clothes, and shelter were given." -- Boots Riley
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] how to call closures stored in struct slots (a 0.5 question)

2012-12-22 Thread Steve Jenson
In 0.4, I had a struct that stored a fn that I later called as if it were a
method. In 0.5, this has ceased to work. What is the new syntax for calling
functions stored in slots?

Here's the small code example (please excuse how naive it is):

https://github.com/stevej/rustled/blob/master/lazy.rs#L28

and here is the 0.5 compiler error I receive:

lazy.rs:28:21: 28:33 error: type `lazy::Lazy<'a>` does not implement any
method in scope named `code`
lazy.rs:28 let result = self.code();

Thanks,
Steve
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] net::tcp::TcpSocket slow?

2012-12-22 Thread Patrick Walton

On 12/22/12 9:15 AM, Michael Neumann wrote:

The best thing I can do is to use blocking I/O here anyway as it's
better to have just one connection to Redis and multiplex that, so I can
easily use one native thread for that.
I am just very new to Rust, and the only thing I found was tcp_net. So I
think I should define my own FFI socket calls, right?


Yes, that's what I would do. I think there may be some bindings to BSD 
sockets in cargo -- Brian would know better here.
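
If you go the FFI route, the declarations can be fairly small. A rough,
untested sketch in 0.4/0.5-era foreign-function syntax (the #[nolink]
attribute and the exact pointer/integer types are assumptions and may
need adjusting):

  use core::libc::{c_int, size_t, ssize_t};

  #[nolink]
  extern mod bsd_sockets {
      // plain BSD socket calls from libc, which is already linked
      fn socket(domain: c_int, ty: c_int, protocol: c_int) -> c_int;
      fn send(fd: c_int, buf: *u8, len: size_t, flags: c_int) -> ssize_t;
      fn recv(fd: c_int, buf: *u8, len: size_t, flags: c_int) -> ssize_t;
      fn close(fd: c_int) -> c_int;
  }

You would still need connect() and the sockaddr handling, but the
pattern is the same for each call.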


The scheduler modes here might be useful to ensure that your task gets 
its own OS thread: 
http://dl.rust-lang.org/doc/0.4/core/task.html#enum-schedmode
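
For example (untested, and assuming task::spawn_sched and the
SingleThreaded mode from the 0.4 docs are unchanged in 0.5), something
like this keeps the blocking calls off the shared scheduler:

  // Run this task 1:1 on its own OS thread, so blocking socket calls
  // only stall this task and not the other green tasks.
  do task::spawn_sched(task::SingleThreaded) {
      // open the Redis connection and do blocking send()/recv() here
  }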


Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] net::tcp::TcpSocket slow?

2012-12-22 Thread Michael Neumann

On 21.12.2012 05:17, Patrick Walton wrote:

I just profiled this. Some thoughts:

On 12/20/12 9:12 PM, Brian Anderson wrote:

First, stack switching. Switching between Rust and C code has bad
performance due to bad branch prediction. Some workloads can spend
10% of their time stalling in the stack switch.


This didn't seem too high, actually. It should only be ~20,000 stack 
switches (read and write) if we solve the following issue:



Second, working with uv involves sending a bunch of little work units to
a dedicated uv task. This is because callbacks from uv into Rust *must
not fail* or the runtime will crash. Where typical uv code runs directly
in the event callbacks, Rust dispatches most or all of that work to
other tasks. This imposes significant context switching and locking
overhead.


This is actually the problem. If you're using a nonblocking I/O 
library (libuv) for a fundamentally blocking workload (sending lots of 
requests to redis and blocking on the response for each one), *and* 
you're multiplexing userland green threads on top of it, then you're 
going to get significantly worse performance than you would if you had 
used a blocking I/O setup. We can make up some of the performance 
differential by switching uv over to pipes, and maybe we can play 
dirty tricks like having the main thread spin on the read lock so that 
we don't have to fall into the scheduler to punt it awake, but I still 
don't see any way we will make up the 10x performance difference for 
this particular use case without a fundamental change to the 
architecture. Work stealing doesn't seem to be a viable solution here 
since the uv task really needs to be one-task-per-thread.


Maybe the best thing is just to make the choice of nonblocking versus 
blocking I/O a choice that tasks can make on an individual basis. It's 
a footgun to be sure; if you use blocking I/O you run the risk of 
starving other tasks on the same scheduler to death, so perhaps we 
should restrict this mode to schedulers with 1:1 scheduling. But this 
would be in line with the general principle that we've been following 
that the choice of 1:1 and M:N scheduling should be left to the user, 
because there are performance advantages and disadvantages to each mode.


Once this sort of switch is implemented, I would suspect the 
performance differential between Ruby and Rust to be much less.


So I think I should benchmark it against Erlang, for example, or any other 
evented language which also does message passing instead of direct callbacks.
I can imagine that if I used libuv directly (let's say in C), and as such 
avoided message sending and scheduling, it would have similar 
performance to the blocking solution.

Would you agree?

The best thing I can do is to use blocking I/O here anyway as it's 
better to have just one connection to Redis and multiplex that, so I can 
easily use one native thread for that.
I am just very new to Rust, and the only thing I found was tcp_net. So I 
think I should define my own FFI socket calls, right?


Thanks!

Best,

  Michael
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev