Re: [rust-dev] Declaring the API unsafe while keeping the internal safety checks

2015-01-01 Thread Huon Wilson

Attributes cannot (yet) be attached to blocks so this can't work atm, e.g.

fn main() {
#[whoops] { /* ... */ }
}

gives

:2:13: 2:14 error: expected item after attributes
:2 #[whoops] { /* ... */ }
 ^

It possibly makes sense to have a distinction between "functions that 
are unsafe to call" and "functions that can call unsafe functions 
internally", i.e. have `unsafe fn foo() { ... }` *not* be the same as 
`unsafe fn foo() { unsafe { ... } }`, although I imagine that this may 
make certain cases rather uglier.


Huon

On 01/01/15 22:46, Manish Goregaokar wrote:
It should be reasonably easy to write a lint as a compiler plugin such 
that the following function:


#[unsafe_specific]
unsafe fn foo () {
 #[allowed_unsafe] { // or just #[allow(unsafe_something_something)]
  // do unsafe things here
 }
 // no unsafe blocks or functions allowed here.
}

would not compile with any unsafe code in the latter half of the function.

Alternatively:

unsafe fn foo() {
 fn bar() {
  unsafe {
// unsafe stuff here
  }
  // no unsafe stuff here
 }
 bar();
}
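
Fleshed out a little, that second pattern can be written as the following rough sketch (the names are made up and a local variable stands in for the real volatile register write): everything outside the explicit unsafe block stays compiler-checked, and callers still have to write `unsafe`.

fn main() {
    // callers must still opt in to the (realtime) unsafety
    unsafe { set_clock_mode(3) };
}

pub unsafe fn set_clock_mode(mode: u32) {
    // Safe inner function: its body is fully checked by the compiler,
    // except for the explicit unsafe block.
    fn inner(mode: u32) {
        // ordinary, checked computation of the value to write
        let encoded = mode & 0xff;

        unsafe {
            // stand-in for the volatile store to a memory-mapped register
            let mut fake_register = 0u32;
            std::ptr::write_volatile(&mut fake_register, encoded);
        }
    }

    inner(mode)
}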

-Manish Goregaokar

On Thu, Jan 1, 2015 at 4:47 PM, Vladimir Pouzanov wrote:


I had this idea for some time and I'd like to discuss it to see if
it is something reasonable to be proposed for rust to implement or
there are other ways around the problem.

Let's say I have a low level function that manipulates the
hardware clock using some platform-specific argument. Internally
this function will do an unsafe volatile mem write to store value
in the register, but this is the only part of the code that is
unsafe compiler-wise, whatever else is in there in the function is
safe and I want the compiler to warn me if any other unsafe
operation happens.

Now, given that this function is actually unsafe in terms of its
realtime consequences (it does not do any validation of the input
for speed) I want to mark it as unsafe fn and make a safer wrapper
for end users, while keeping this for the little subset of users
that will need the actual speed benefits.

Unfortunately, marking it unsafe will now allow any other unsafe
operation in the body of the function, when what I wanted is just
to force users of it to be aware of unsafety via compiler validation.

A safe {} block could have helped me in this case. But is it a
good idea overall?

Some pseudo-code to illustrate

pub unsafe fn set_clock_mode(mode: u32) {
  // ...
  // doing some required computations
  // this code must be 'safe' for the compiler
  unsafe {
    // ...
    // writing the value to the mmapped register; this one is unsafe
    // but validated by the programmer
  }
}

pub fn set_clock_mode_safe(mode: u32) -> bool {
  // ...
  // validate input
  unsafe {
    // I want this call to require an unsafe block
    set_clock_mode(mode);
  }
  true
}

-- 
Sincerely,

Vladimir "Farcaller" Pouzanov
http://farcaller.net/

___
Rust-dev mailing list
Rust-dev@mozilla.org 
https://mail.mozilla.org/listinfo/rust-dev






Re: [rust-dev] Count of lines

2014-09-15 Thread Huon Wilson
Are you compiling with optimisations? (Pass the -O flag to rustc.) For 
me, the Go and the optimised Rust are pretty similar, with the Go a few 
percent faster on a 1.3GB MP4 file, and the two languages 
indistinguishable on a 30MB XML file.



Also, Rust prefers iterators to manual indexing since it avoids 
unnecessary bounds checks, that is, replace the inner loop with:


for b in buf.slice_to(n).iter() {
if *b == b'\n' {
lines += 1;
}
}

However, in this case, LLVM optimises that to a large chunk of SSE 
instructions https://gist.github.com/huonw/37cdd3dea3518abdb1c4 which 
seem to be reading only 2 bytes per memory access and only 4 bytes per 
tick of the loop (i.e. it's rather suboptimal); this means that the 
iterator version is only minutely faster than the naive range(0, n) loop 
in this case.
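
(For anyone wanting a complete, self-contained version of that loop, here is a rough sketch in current, post-1.0 syntax; the buffer size and file name are arbitrary choices, not taken from the original program.)

use std::fs::File;
use std::io::{self, Read};

fn count_lines(path: &str) -> io::Result<u64> {
    let mut file = File::open(path)?;
    let mut buf = [0u8; 64 * 1024];
    let mut lines = 0u64;
    loop {
        let n = file.read(&mut buf)?;
        if n == 0 {
            break; // EOF
        }
        // Iterate over the filled prefix only; the iterator avoids the
        // per-element bounds checks that manual indexing would incur.
        lines += buf[..n].iter().filter(|&&b| b == b'\n').count() as u64;
    }
    Ok(lines)
}

fn main() -> io::Result<()> {
    println!("{}", count_lines("newlines.txt")?);
    Ok(())
}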


I also wrote a 16-bytes-at-a-time SSE byte counter (it's unlikely to be 
optimal), which is about 20% faster than the Go on that same 1.3GB MP4 
file and 3-4 times faster on the XML, so everyone has room for 
improvement. :) Code: https://gist.github.com/huonw/b6bfe4ad3623b6c37717



Btw, the Go is significantly slower when the file contains a lot of 
newlines, e.g. this file is 75% \n's


yes 'foo' | head -1 > newlines.txt

The Rust built with -O takes ~0.4s for me, and the Go takes 2.6s. The 
crossover point on my computer for very regular files like that is about 
1 \n in every 30 bytes.



Huon



On 15/09/14 16:34, Petr Novotnik wrote:

Hello folks,

recently I've been playing around with Rust and I'm really impressed. 
I like the language a lot!


While writing a program to count the number of lines in a file, I 
realized it ran twice as slow as an older program I wrote in Go some 
time ago. After more experiments, I've figured out that Go's speed 
boost comes from an optimized function in its standard library 
utilizing SSE instructions, in particular the function bytes.IndexByte 
(http://golang.org/pkg/bytes/#IndexByte).


I was wondering whether Rust would ever provide such optimized 
implementations as well as part of its standard library or whether 
it's up to developers to write their own versions of such functions.


Maybe I just missed something, so I'm attaching the rust program:

  http://pastebin.com/NfFgMNGe

And for the sake of completeness, here's the Go version I compared with:

  http://pastebin.com/4tiLsRpu

Pete.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




Re: [rust-dev] Can Rust Git HEAD build Git HEAD itself?

2014-08-29 Thread Huon Wilson
The information is recorded in 
https://github.com/rust-lang/rust/blob/master/src/snapshots.txt e.g. at 
the time of writing the most recent snapshot is of commit a86d9ad == 
a86d9ad15e339ab343a12513f9c90556f677b9ca.


In any case, building from source with anything other than the provided 
stage0 snapshots (i.e. the ones that are downloaded by default by the 
makefiles) is going to be annoying.



Huon

On 30/08/14 14:15, Kai Noda wrote:

Hi Ogino-san and Huon,

Thanks a lot for your quick but detailed answers!  Now I have a better 
understanding of the build process. So if I use the 
terminology defined in the below (a bit outdated) wiki page, it's like 
you removed "stage0" and then split "stage1" into two sub-stages (with 
and without #[cfg] clauses.)


https://github.com/rust-lang/rust/wiki/Note-compiler-snapshots

(~_~).oO(It seems to me that stage2 and 3 are identical now...)

Anyways, I get to understand that living only with the Git repo is 
practically impossible since there's no official way to know what 
version #[cfg(stage0)] clauses expect and thus there's no other way 
than to download the nightly build frequently enough.


Kai

2014-08-30 10:15 GMT+08:00 Huon Wilson <dbau...@gmail.com>:


Yes, the first stage of bootstrapping uses a binary snapshot of
some older commit (you need a Rust compiler to build Rust, so we
need to do this). There's no particular guarantee that the
snapshot has the same behaviour as the version being compiled, and
in general it doesn't, as you see.

The libraries have to phase themselves, using #[cfg(stage0)] to
specify the code that is compiled with the snapshot, and
#[cfg(not(stage0))] for the code that should be compiled with the
non-snapshot stages (i.e. the newer compiler with all the latest
features). e.g. the error you see there has these markings

https://github.com/rust-lang/rust/blob/5419b2ca2c27b4745fa1f2773719350420542c76/src/libcore/cell.rs#L325-L341
. This means using a compiler inappropriate for stage0 as the
snapshot (like HEAD, which needs to be using the not(stage0) code)
will try to compile code that is broken for it.

The fact that it works with nightly is essentially just a
coincidence that the versions match closely enough (the Rust
nightly is 12 hours behind HEAD on average, and so may be missing
the breaking commits).


In summary: local-rust failing is not a bug, it's just a
consequence of us bootstrapping while changing the language.


Huon


On 30/08/14 11:49, Kai Noda wrote:

Hi Rustlers,

When I build Rust Git HEAD with the binary package from
rust-lang.org in this way,

$ CC=gcc47 CXX=g++47 ./configure --prefix=$HOME/local/rust
--enable-valgrind --enable-local-rust
--local-rust-root=$HOME/local/rust-nightly-x86_64-unknown-linux-gnu/
$ make -j8 install

it (usually) works fine.  However, when I try to use the above
installation of Rust to build the same Git HEAD itself like this way,

CC=gcc47 CXX=g++47 ./configure --prefix=$HOME/local/rust
--enable-valgrind --enable-local-rust
--local-rust-root=$HOME/local/rust
$ make

I've never been successful (as of today, I get errors quoted at
the end of this mail.)

Is this normal (do all developers usually use the binary
package?) or should I file bug reports to Github whenever I see
them?  I was a bit surprised to see them because I thought
(platform-triple)/stage[0-3] were bootstrapping phases.  Of
course I'm well aware that Rust is in a very early stage of
development...

Regards,
Kai

野田 開 <noda...@gmail.com>


rustc:

x86_64-unknown-linux-gnu/stage0/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore
/home/nodakai/src/rust-HEAD/src/libcore/lib.rs:61:37: 61:57
warning: feature has added to rust, directive not necessary
/home/nodakai/src/rust-HEAD/src/libcore/lib.rs:61
#![feature(simd, unsafe_destructor, issue_5723_bootstrap)]
^~~~
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:337:1: 341:2
error: the parameter type `T` may not live long enough; consider
adding an explicit lifetime bound `T:'b`...
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:337 pub struct Ref<'b, T> {
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:338 // FIXME #12808: strange name to try to avoid interfering with
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:339 // field accesses of the contained type via Deref
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:340 _parent: &'b RefCell<T>
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:341 }

Re: [rust-dev] Can Rust Git HEAD build Git HEAD itself?

2014-08-29 Thread Huon Wilson
Yes, the first stage of bootstrapping uses a binary snapshot of some 
older commit (you need a Rust compiler to build Rust, so we need to do 
this). There's no particular guarantee that the snapshot has the same 
behaviour as the version being compiled, and in general it doesn't, as 
you see.


The libraries have to phase themselves, using #[cfg(stage0)] to specify 
the code that is compiled with the snapshot, and #[cfg(not(stage0))] for 
the code that should be compiled with the non-snapshot stages (i.e. the 
newer compiler with all the latest features). e.g. the error you see 
there has these markings 
https://github.com/rust-lang/rust/blob/5419b2ca2c27b4745fa1f2773719350420542c76/src/libcore/cell.rs#L325-L341 
. This means using a compiler inappropriate for stage0 as the snapshot 
(like HEAD, which needs to be using the not(stage0) code) will try to 
compile code that is broken for it.
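
As a purely illustrative sketch of that gating (the function here is hypothetical, not taken from libcore), the two definitions sit side by side and the build selects one per stage:

// Hypothetical item, for illustration only.
#[cfg(stage0)]
pub fn frobnicate(x: u32) -> u32 {
    // written in whatever older syntax the snapshot compiler still accepts
    x + 1
}

#[cfg(not(stage0))]
pub fn frobnicate(x: u32) -> u32 {
    // free to use features that only the freshly built compiler understands
    x + 1
}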


The fact that it works with nightly is essentially just a coincidence 
that the versions match closely enough (the Rust nightly is 12 hours 
behind HEAD on average, and so may be missing the breaking commits).



In summary: local-rust failing is not a bug, it's just a consequence of 
us bootstrapping while changing the language.



Huon

On 30/08/14 11:49, Kai Noda wrote:

Hi Rustlers,

When I build Rust Git HEAD with the binary package from rust-lang.org 
 in this way,


$ CC=gcc47 CXX=g++47 ./configure --prefix=$HOME/local/rust 
--enable-valgrind --enable-local-rust 
--local-rust-root=$HOME/local/rust-nightly-x86_64-unknown-linux-gnu/

$ make -j8 install

it (usually) works fine.  However, when I try to use the above 
installation of Rust to build the same Git HEAD itself like this way,


CC=gcc47 CXX=g++47 ./configure --prefix=$HOME/local/rust 
--enable-valgrind --enable-local-rust --local-rust-root=$HOME/local/rust

$ make

I've never been successful (as of today, I get errors quoted at the 
end of this mail.)


Is this normal (do all developers usually use the binary package?) or 
should I file bug reports to Github whenever I see them?  I was a bit 
surprised to see them because I thought (platform-triple)/stage[0-3] 
were bootstrapping phases.  Of course I'm well aware that Rust is in a 
very early stage of development...


Regards,
Kai

野田 開 <noda...@gmail.com>


rustc: 
x86_64-unknown-linux-gnu/stage0/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore
/home/nodakai/src/rust-HEAD/src/libcore/lib.rs:61:37: 61:57 warning: 
feature has added to rust, directive not necessary
/home/nodakai/src/rust-HEAD/src/libcore/lib.rs:61  
#![feature(simd, unsafe_destructor, issue_5723_bootstrap)]

  ^~~~
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:337:1: 341:2 error: 
the parameter type `T` may not live long enough; consider adding an 
explicit lifetime bound `T:'b`...
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:337 
 pub struct Ref<'b, T> {
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:338 
 // FIXME #12808: strange name to try to avoid 
interfering with
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:339 
 // field accesses of the contained type via Deref
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:340 
 _parent: &'b RefCell<T>

/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:341  }
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:337:1: 341:2 note: 
...so that the reference type `&'b cell::RefCell<T>` does not outlive 
the data it points at
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:337 
 pub struct Ref<'b, T> {
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:338 
 // FIXME #12808: strange name to try to avoid 
interfering with
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:339 
 // field accesses of the contained type via Deref
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:340 
 _parent: &'b RefCell<T>

/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:341  }
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:392:1: 396:2 error: 
the parameter type `T` may not live long enough; consider adding an 
explicit lifetime bound `T:'b`...
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:392 
 pub struct RefMut<'b, T> {
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:393 
 // FIXME #12808: strange name to try to avoid 
interfering with
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:394 
 // field accesses of the contained type via Deref
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:395 
 _parent: &'b RefCell<T>

/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:396  }
/home/nodakai/src/rust-HEAD/src/libcore/cell.rs:392:1: 396:2 note: 
...so that the reference type `&'b cell::RefCell<T>` does not outlive 
the data 

Re: [rust-dev] std::num::pow() is inadequate / language concepts

2014-07-24 Thread Huon Wilson

On 25/07/14 09:21, Tommy M. McGuire wrote:

On 07/24/2014 05:55 PM, Huon Wilson wrote:

1.0 will not stabilise every function in every library; we have precise
stability attributes[1] so that the compiler can warn or error if you
are using functionality that is subject to change. The goal is to have
the entirety of the standard library classified and marked appropriately
for 1.0.


[1]: http://doc.rust-lang.org/master/rust.html#stability

How would that solve the general problem? What would the stability of
pow() be if Gregor had not brought up the issue now?




I was just pointing out that we aren't required to solve any/every 
library issue before 1.0 (since the text I was quoting was rightfully 
concerned about backwards incompatible API changes), not that this isn't 
an issue.




Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] std::num::pow() is inadequate / language concepts

2014-07-24 Thread Huon Wilson

On 25/07/14 08:46, Gregor Cramer wrote:


Probably in this case it might be a solution to move pow() into a trait, but 
I'm speaking about a general problem. Rust 1.0 will be released, and someone 
is developing a new module for version 1.1. But some of the functions in 1.0 
are inadequate for the new module; how to solve this without changing the API 
in 1.1?



1.0 will not stabilise every function in every library; we have precise 
stability attributes[1] so that the compiler can warn or error if you 
are using functionality that is subject to change. The goal is to have 
the entirety of the standard library classified and marked appropriately 
for 1.0.



[1]: http://doc.rust-lang.org/master/rust.html#stability


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Debugging rust for a newbie

2014-07-23 Thread Huon Wilson
It is unlikely to be a lifetimes thing; far, far more likely to be a 
"normal" infinite recursion. The size of the stack frame of each 
function is fixed at compile time, so the way to blow the stack is by 
calling a lot of functions deeply, e.g. it's not possible to write a 
loop that places more and more objects on the stack (not in safe code, 
anyway).


You can get a backtrace by running the test in a conventional debugger, 
e.g. `gdb --args ./tester produces_a_move`, then type `run`. When it 
hits the abort, gdb will freeze execution and you can run `backtrace` to 
see the function call stack, to see what is recursing deeply.


You can make rustc emit debug info which makes gdb far more useful, by 
compiling with `-g` or, equivalently, `--debuginfo=2`. (Depending on 
your platform, 'lldb' may be better.)



If all else fails, you can fall back to println debugging, e.g.

fn gen_move(&self, ...) -> Move {
println!("calling gen_move");

// ...
}

---

Just glancing over your code, it looks like there's mutual recursion 
between Playout::run and McEngine::gen_move:


- McEngine::gen_move calls Playout::run 
https://github.com/ujh/iomrascalai/blob/88e09fdd/src/engine/mc/mod.rs#L82
- Playout::run calls Playout::gen_move 
https://github.com/ujh/iomrascalai/blob/88e09fdd/src/playout/mod.rs#L42
- Playout::gen_move calls McEngine::gen_move 
https://github.com/ujh/iomrascalai/blob/88e09fdd/src/playout/mod.rs#L49
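
Reduced to its essence (with made-up names and types, not the project's actual ones), the cycle looks like the sketch below: each function calls the other with no terminating condition, so the call depth grows until the stack is exhausted.

// run() and gen_move() call each other forever, adding a frame per step.
fn run(depth: u64) -> u64 {
    gen_move(depth)
}

fn gen_move(depth: u64) -> u64 {
    run(depth + 1)
}

fn main() {
    // in a debug build this aborts with a stack overflow
    println!("{}", run(0));
}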



Huon


On 23/07/14 17:42, Urban Hafner wrote:

Hey there,

I'm still quite new to Rust. Until now I was able to fix all my bugs 
by writing tests and/or randomly adding lifetime parameters to keep 
the compiler happy. Now I've hit my first stack overflow. I assume 
it's due to the fact that I've screwed up the lifetimes and the 
objects live too long although I'm not even sure about that. Now my 
question is: How do I debug this? Is there a way to figure out how 
long objects live? Or how would one go about debugging this?


Oh, if you're interested in the failing code: 
https://github.com/ujh/iomrascalai/pull/46


Cheers,

Urban
--
Freelancer

Available for hire for Ruby, Ruby on Rails, and JavaScript projects

More at http://urbanhafner.com




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Mutable files

2014-07-22 Thread Huon Wilson

On 23/07/14 07:10, Tobias Müller wrote:



... in C++. Not in Rust. That's because, unlike C++, Rust is designed
from the ground up to support moves and copies in a first class way.


It's just strange that you can change the semantic of an already existing
operation just by adding new capabilities. Adding traits should define new
operations with new semantics, not changing the semantics of existing
operations. At least that's how it works for all other traits, and
deviating from that is at least surprising.

Hence the Opt-In Built-In Traits proposal

Opt-In built-In traits makes things a bit better but my point is still
valid. By adding Copy (implicitly or explicitly) you remove the possibility
of move semantics from the type.
Usually you don't work alone on a project and some coworker adding Copy to
a type that I expected to be Move may be fatal.

No other trait works like that.


You can't just add Copy to anything: the contents has to be Copy itself, 
and, you can't have a destructor on your type (i.e. a Drop 
implementation removes the possibility to be Copy). Thus, almost all 
types for which by-value uses *should* invalidate the source (i.e. "move 
semantics") are automatically not Copy anyway.


The only way one can get a fatal error due to an incorrect Copy 
implementation is if the type with the impl is using `unsafe` code 
internally. In this case, that whole API needs to be considered very 
carefully anyway, ensuring correctness by avoiding Copy is just part of it.



I'll also note that an implementation of Copy just states that a 
byte-copy of a value is also a semantic copy, it doesn't offer any 
control over how the copy is performed. At runtime, by-value use of a 
Copy type is essentially identical to a by-value use of a non-Copy type 
(both are memcpy's of the bytes), the only major difference is the 
compiler statically prevents further uses of the source for non-Copy ones.
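
A small sketch of both points above, written in today's syntax rather than that of this thread: Copy is only allowed when the fields are Copy and there is no destructor, and a Copy "move" leaves the source usable while a non-Copy move does not.

#[derive(Clone, Copy)]
struct Plain {
    x: u32, // all fields Copy, no Drop impl: Copy is allowed
}

struct HasDrop {
    x: u32,
}

impl Drop for HasDrop {
    fn drop(&mut self) {
        println!("cleaning up {}", self.x);
    }
}

// impl Copy for HasDrop {} // error: a type with a destructor cannot be Copy

fn main() {
    let a = Plain { x: 1 };
    let b = a; // byte copy; `a` remains usable
    println!("{} {}", a.x, b.x);

    let c = HasDrop { x: 2 };
    let d = c; // also just a byte copy at runtime, but `c` is now moved-from
    println!("{}", d.x);
}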



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] [ANN] Initial Alpha of Cargo

2014-06-24 Thread Huon Wilson

On 24/06/14 20:41, György Andrasek wrote:

The FAQ says:

> Our solution: Cargo allows a package to specify a script to run 
before invoking `rustc`. We plan to add support for platform-specific 
configuration, so you can use `make` on Linux and `cmake` on BSD, for 
example.


Just to make it perfectly clear, this will force a Cygwin dependency 
on cargo in practice. One popular package using autotools is enough to 
make it mandatory. Is this a conscious tradeoff?


Just to be clear: what's the trade-off here? That is, what is the 
alternative: not supporting running external scripts at all?



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Integer overflow, round -2147483648

2014-06-23 Thread Huon Wilson

On 23/06/14 14:46, comex wrote:

On Mon, Jun 23, 2014 at 12:35 AM, Daniel Micay  wrote:

An operation that can unwind isn't pure. It impedes code motion such as
hoisting operations out of a loop, which is very important for easing
the performance issues caused by indexing bounds checks. LLVM doesn't
model the `nounwind` effect on functions simply for fun.

No it doesn't!  Or maybe it does today, but an unwindable operation is
guaranteed to be repeatable without consequence, which I'd like to
think can account for most cases where operations are hoisted out of
loops (again, could be wrong :), and not to modify any memory (unless
it traps, but in Rust at that point you are guaranteed to be exiting
the function immediately).



I would think that something simple like

  let mut sum = 0;
  for x in some_int_array.iter() {
      sum += *x;
  }

would be very hard to vectorise with unwinding integer operations.


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] &self/&mut self in traits considered harmful(?)

2014-06-11 Thread Huon Wilson

On 11/06/14 23:27, SiegeLord wrote:
Aside from somewhat more complicated impl's, are there any downsides 
to never using anything but by value 'self' in traits?


Currently trait objects do not support `self` methods (#10672), and, 
generally, the interactions with trait objects seem peculiar, e.g. if 
you've implemented Trait for &Type, then you would want to be coercing a 
`&Type` to a `&Trait`, *not* a `&(&Type)` as is currently required.


However, I don't think these concerns affect the operator overloading 
traits.



https://github.com/mozilla/rust/issues/10672


Huon

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Porting a small DSP test from C++ to Rust: Comments and performance observations

2014-06-10 Thread Huon Wilson

On 11/06/14 07:42, Steve Klabnik wrote:

Rust doesn't have prefix/postfix increment? Or, I just didn't find the right 
syntax of using it?

It does not. x = x + 1. Much more clear, no confusion about what comes
back. Tricky code leads to bugs. :)


FWIW, primitive types offer `x += 1`, and #5992 covers extending this to 
all types.


https://github.com/mozilla/rust/issues/5992

Huon

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] how is Rust bootstrapped?

2014-06-09 Thread Huon Wilson

On 09/06/14 20:12, Zoltán Tóth wrote:
My question is rather theoretical, from the 
libre-and-open-source-software point of view.


Bootstrapping needs an already existing language to compile the first 
executable version of Rust.


I read that this was OCaml at some time. I do not have OCaml on my 
machine, but still managed to build from a cloned Rust repo. The 
documentation says that building requires a C++ compiler. These 
suggest that the project moved from OCaml to C++.


But there are also some texts on the web and in the source that 
suggests that stage0 is actually not compiled from the source 
repository, but is downloaded as a binary snapshot. If this latter is 
the case, then can someone compile a suitable stage0 from [C++|OCaml] 
source himself?







Yes, those texts are correct, one downloads a stage0 compiler as a 
binary snapshot to compile Rust from source. The stage0 compiler is just 
stored binaries compiled from some commit in the past. Every so often 
someone makes a new stage0 by making the buildbots build a snapshot of a 
more recent commit, allowing the libraries to be completely written in a 
newer iteration of Rust (they no longer have to be able to be compiled 
by the old snapshot).


There's a wiki page about this snapshot process: 
https://github.com/mozilla/rust/wiki/Note-compiler-snapshots



If one was really interested, one could theoretically backtrace through 
history, all the way back to the last version[1] of rustboot (the OCaml 
compiler), and use this to do a "full bootstrap". That is, use the 
rustboot compiler to build the first written-in-Rust compiler as a 
snapshot, and then use this snapshot to build the next one, following 
the chain of snapshotted commits[2] through to eventually get to modern 
Rust.



As others have said, the C++ dependency is just for building LLVM, which 
is linked into rustc as a library, it's not used by the snapshot (that 
is, LLVM is a dependency required when building librustc to get a rustc 
compiler for the next stage; one can use the snapshot to compile 
libraries like libstd etc. without needing LLVM).



Huon


[1]: 
https://github.com/mozilla/rust/tree/ef75860a0a72f79f97216f8aaa5b388d98da6480/src/boot

[2]: https://github.com/mozilla/rust/blob/master/src/snapshots.txt
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] 7 high priority Rust libraries that need to be written

2014-06-05 Thread Huon Wilson

On 05/06/14 19:11, kimhyunk...@gmail.com wrote:


I was also planning to add an sql!() macro almost exactly the same as Chris 
Morgan suggests. However, you can't directly access the type-checking part 
of rustc in #![phase(syntax)] modules, which means you need some dirty 
hacks to properly type-check such macros.



The conventional approach is to expand to something that uses certain 
traits, meaning any external data has to satisfy those traits for the 
macro invocation to work. This technique is used by `println!` and 
`#[deriving]`, for example.


(I don't know if you regard this as a dirty hack or not.)
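
As a purely hypothetical illustration of that technique (the trait and macro names below are invented, not from any real sql!() implementation, and the syntax is current Rust): the macro does no type checking itself, it just expands to a plain trait-method call, so the ordinary type checker rejects any argument type lacking an impl after expansion.

// Invented trait standing in for whatever bound the macro wants to impose.
trait ToSqlParam {
    fn to_sql_param(&self) -> String;
}

impl ToSqlParam for i32 {
    fn to_sql_param(&self) -> String {
        self.to_string()
    }
}

// The expansion just calls the trait method; the check happens afterwards.
macro_rules! sql_param {
    ($e:expr) => {
        ToSqlParam::to_sql_param(&$e)
    };
}

fn main() {
    println!("{}", sql_param!(42i32));
    // println!("{}", sql_param!(vec![1, 2])); // error: Vec<i32> lacks the impl
}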


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Clone and enum<'a>

2014-06-03 Thread Huon Wilson

Somewhat. It is due to auto-deref: deriving(Clone) essentially expands to

fn clone(&self) -> List<'a> {
match *self {
Nil => Nil,
Next(ref x) => Next(x.clone())
}
}

`x` is of type `&&List<'a>`, but the `x.clone()` call auto-derefs 
through both layers of & to be calling List's clone directly (returning 
a `List<'a>`), rather than duplicating the reference.


This will be fixed with UFCS, which will allow deriving to expand to 
something like `Next(Clone::clone(x))` and this does not undergo auto-deref.



You can work around this by writing a Clone implementation by hand. In 
this case, List is Copy, so the implementation can be written as


impl<'a> Clone for List<'a> {
fn clone(&self) -> List<'a> {
*self
}
}

(Clone for more "interesting" List types (which aren't Copy, in general) 
will likely need to be implemented with a match and some internal Clones.)
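
For instance, a hand-written impl for an owning list might look like the following sketch, written with today's Box in place of the old ~ pointer and an arbitrary element type:

enum OwnedList {
    Nil,
    Next(i32, Box<OwnedList>),
}

impl Clone for OwnedList {
    fn clone(&self) -> OwnedList {
        match *self {
            OwnedList::Nil => OwnedList::Nil,
            // copy the element and recursively clone the boxed tail
            OwnedList::Next(v, ref rest) => OwnedList::Next(v, rest.clone()),
        }
    }
}

fn main() {
    let list = OwnedList::Next(1, Box::new(OwnedList::Nil));
    let copy = list.clone();
    if let OwnedList::Next(v, _) = copy {
        println!("head of the clone: {}", v);
    }
}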


Huon


On 03/06/14 19:59, Igor Bukanov wrote:

Consider the following enum:

#[deriving(Clone)]
enum List<'a> {
 Nil,
 Next(&'a List<'a>)
}


It generates en error:

:4:10: 4:22 error: mismatched types: expected `&List<>` but
found `List<>` (expected &-ptr but found enum List)
:4 Next(&'a List<'a>)
   ^~~~
note: in expansion of #[deriving]
:1:1: 2:5 note: expansion site
error: aborting due to previous error

Is it a bug in #[deriving] ?


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] A better type system

2014-05-31 Thread Huon Wilson

References (&T) are Copy.


Huon

On 01/06/14 09:42, Tommi wrote:
On 2014-06-01, at 1:02, Patrick Walton wrote:



    fn my_transmute<T, U: Clone>(value: T, other: U) -> U {
        let mut x = Left(other);
        let y = match x {
            Left(ref mut y) => y,
            Right(_) => fail!()
        };
        *x = Right(value);
        (*y).clone()
    }


If `U` implements `Copy`, then I don't see a (memory-safety) issue 
here. And if `U` doesn't implement `Copy`, then it's the same situation as 
it was in the earlier example given by Matthieu, where there was an 
assignment to an `Option>` variable while a different 
reference pointing to that variable existed. The compiler shouldn't 
allow that assignment just as in your example the compiler shouldn't 
allow the assignment `x = Right(value);` (after a separate reference 
pointing to the contents of `x` has been created) if `U` is not a 
`Copy` type.


But, like I said in an earlier post, even though I don't see this 
(transmuting a `Copy` type in safe code) as a memory-safety issue, it 
is a code correctness issue. So it's a compromise between preventing 
logic bugs (in safe code) and the convenience of more liberal mutation.








___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Detection of early end for Take

2014-05-31 Thread Huon Wilson

I believe this can be done with something like


let mut counted = iter.scan(0, |count, x| { *count += 1; Some(x) });

for x in counted { ... }

println!("I saw {} elements", counted.state)


or even just


let mut count = 0;

{ // scope to restrict the closure's borrow of `count`.
let mut counted = iter.inspect(|_| count += 1);
for x in counted { ... }
}

println!("I saw {} elements", count);



Huon


On 31/05/14 12:02, Kevin Ballard wrote:

I suspect a more generally interesting solution would be a Counted iterator 
adaptor that keeps track of how many non-None values it's returned from next(). 
You could use this to validate that your Take iterator returned the expected 
number of values.

pub struct Counted<T> {
    iter: T,
    /// Incremented by 1 every time `next()` returns a non-`None` value
    pub count: uint
}

impl<A, T: Iterator<A>> Iterator<A> for Counted<T> {
    fn next(&mut self) -> Option<A> {
        match self.iter.next() {
            x@Some(_) => {
                self.count += 1;
                x
            }
            None => None
        }
    }

    fn size_hint(&self) -> (uint, Option<uint>) {
        self.iter.size_hint()
    }
}
// plus various associated traits like DoubleEndedIterator

-Kevin

On May 30, 2014, at 9:31 AM, Andrew Poelstra  wrote:


Hi guys,


Take is an iterator adaptor which cuts off the contained iterator after
some number of elements, always returning None.

I find that I need to detect whether I'm getting None from a Take
iterator because I've read all of the elements I expected or because the
underlying iterator ran dry unexpectedly. (Specifically, I'm parsing
some data from the network and want to detect an early EOM.)


This seems like it might be only me, so I'm posing this to the list: if
there was a function Take::is_done(&self) -> bool, which returned whether
or not the Take had returned as many elements as it could, would that be
generally useful?

I'm happy to submit a PR but want to check that this is appropriate for
the standard library.



Thanks

Andrew



--
Andrew Poelstra
Mathematics Department, University of Texas at Austin
Email: apoelstra at wpsoftware.net
Web:   http://www.wpsoftware.net/andrew

"If they had taught a class on how to be the kind of citizen Dick Cheney
worries about, I would have finished high school."   --Edward Snowden



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] How to find Unicode string length in rustlang

2014-05-28 Thread Huon Wilson

On 29/05/14 06:38, Kevin Ballard wrote:
On May 28, 2014, at 1:26 PM, Benjamin Striegel wrote:


> Unicode is not a simple concept. UTF-8 on the other hand is a 
pretty simple concept.


I don't think we can fully divorce these two ideas. Understanding 
UTF-8 still implies understanding the difference between code points, 
code units, and grapheme clusters. If we have a single unadorned 
`len` function, that implies the existence of a "default" length to a 
UTF-8 string, which is a lie. It also *fails* to suggest the 
existence of alternative measures of length of a UTF-8 string. 
Finally, the choice of byte length as the default length metric 
encourages the horrid status quo, which is the perpetuation of code 
that is tested and works in ASCII environments but barfs as soon as 
anyone from a sufficiently-foreign culture tries to use it. 
Dedicating ourselves to Unicode support does us no good if the 
remainder of our API encourages the depressingly-typical ASCII-ism 
that pervades nearly every other language.


Do you honestly believe that calling it .byte_len() will do anything 
besides confusing anyone who expects .len() to work, and resulting in 
code that looks any different than just using .byte_len() everywhere 
people use .len() today?


Forcing more verbose, annoying, unconventional names on people won't 
actually change how they process strings. It will just confuse and 
annoy them.


-Kevin




Changing the names of methods on strings seems very similar to how Path 
does not implement Show (except with even stronger motivation, because 
strings have at least 3 sensible interpretations of what the length 
could be).



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Interaction between private fields and auto-dereferencing

2014-05-24 Thread Huon Wilson
I believe much of the underlying issue is covered by #12808. I have just 
filed a pull request #14402 to "fix" the immediate symptoms of this bug 
by making it harder to accidentally have struct fields that conflict 
with the private fields of the libstd types.


There's also issue #13126 and RFC #25 about general issues with public 
fields/methods and Deref:



Huon


https://github.com/mozilla/rust/issues/12808
https://github.com/mozilla/rust/pull/14402
https://github.com/rust-lang/rfcs/pull/25
https://github.com/mozilla/rust/issues/13126


On 24/05/14 20:52, Paulo Sérgio Almeida wrote:
No, I was just getting feedback if this was a problem already 
considered. I have not been following all Rust progress very closely 
(trying to catch up now). But I can try to do so later.


Regards,
Paulo


On 23 May 2014 22:06, Kevin Ballard wrote:


This looks like a legitimate problem. Have you filed an issue on
the GitHub issues page? https://github.com/mozilla/rust/issues/new

-Kevin

On May 23, 2014, at 6:04 AM, Paulo Sérgio Almeida
<pssalme...@gmail.com> wrote:


Hi all, (resending from different email address; there seems
to be a problem with my other address)

I don't know if this has been discussed, but I noticed an
unpleasant interaction between private fields in the
implementation of things like pointer types and auto-dereferencing.

The example I noticed is: if I want to store a struct with field
"x" inside an Arc, and then auto-dereference it I get the error:

error: field `x` of struct `sync::arc::Arc` is private

A program showing this, if comments are removed, where the ugly
form (*p).x must be used to solve it:

---
extern crate sync;
use sync::Arc;

struct Point { x: int, y: int }

fn main() {
let p =  Arc::new(Point { x: 4, y: 8 });
let p1 = p.clone();
spawn(proc(){
println!("task v1: {}", p1.y);
//println!("task v1: {}", p1.x);
println!("task v1: {}", (*p1).x);
});
println!("v: {}", p.y);
//println!("v: {}", p.x);
println!("v: {}", (*p).x);
}
-- 


The annoying thing is that a user of the pointer-type should not
have to know or worry about what private fields the pointer
implementation contains.

A better user experience would be if, if in a context where there
is no corresponding public field and auto-deref is available,
auto-deref is attempted, ignoring private-fields of the pointer
type.

If this is too much of a hack or with complex or unforeseen
consequences, a practical almost-solution without changing the
compiler would be renaming private fields in pointer
implementations, like Arc, so as to minimize the collision
probability, e.g., use something like __x__ in arc.rs:

pub struct Arc<T> {
    priv __x__: *mut ArcInner<T>,
}

Regards,
Paulo


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Qt5 Rust bindings and general C++ to Rust bindings feedback

2014-05-22 Thread Huon Wilson

Hi Alexander,

I wrote up some feedback and tried to post it on your blog, but 
unfortunately submitting a comment was failing with an nginx error, so I 
posted it on /r/rust instead: 
http://www.reddit.com/r/rust/comments/269t6i/cxx2rust_the_pains_of_wrapping_c_in_rust_on_the/



(It's good to see people experimenting with GUI frameworks in Rust!)


Huon


On 23/05/14 06:27, Alexander Tsvyashchenko wrote:


Hi All,

Recently I was playing with bindings generator from C++ to Rust. I 
managed to make things work for Qt5 wrapping, but stumbled into 
multiple issues along the way.


I tried to summarize my "pain points" in the following blog 
post: http://endl.ch/content/cxx2rust-pains-wrapping-c-rust-example-qt5


I hope that others might benefit from my experience and that some of 
these "pain points" can be fixed in Rust.


I'll try to do my best in answering questions / acting on feedback, if 
any, but I have a very limited amount of free time right now, so sorry in 
advance if answers take some time.


Thanks!

--
Good luck! Alexander


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] Brainfuck-compiler macro

2014-05-09 Thread Huon Wilson

Hi Rustillians,

There was some very serious discussion on IRC about writing macros for 
various esolangs (*obviously* very serious :P ), so I dived in and wrote 
a simple one for brainfuck.


https://github.com/huonw/brainfuck_macro

Example:

#![feature(phase)]

#[phase(syntax)] extern crate brainfuck;

use std::io;

fn main() {
let hello_world = brainfuck!{
        ++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++
        ..+++.>>.<-.<.+++.------.--------.>>+.>++.
};

hello_world(&mut io::stdin(), &mut io::stdout());
}


It's pretty basic (the macro implementation is less than 200 lines), and 
so probably makes a reasonable example of a procedural macro, especially 
macros that wish to directly handle the token stream a macro receives 
(rather than extracting Rust expressions etc. from it).



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Convincing compiler that *T is safe

2014-04-14 Thread Huon Wilson

On 14/04/14 06:12, Vladimir Pouzanov wrote:

I have a number of I/O mapped registers that look like:

struct Dev {
  .. // u32 fields
}

pub static Dev0 : *mut Dev = 0xsomeaddr as *mut Dev;

with macro-generated getters and setters:

pub fn $getter_name(&self) -> u32 {
  unsafe { volatile_load(&(self.$reg)) }
}

unfortunately, calling a getter is calling a method on *Reg, which is 
unsafe and looks like:


unsafe { (*Dev0).SOME_REG() };

is there any way to simplify the syntax, hopefully to simple 
Dev0.SOME_REG()? I'm ok with any "unsafe" tricks including transmuting 
it to &Dev (that doesn't seem to be possible though), as the 
getter/setter methods are always safe in this scenario.


--
Sincerely,
Vladimir "Farcaller" Pouzanov
http://farcaller.net/





The compiler cannot verify that these method calls are safe, so the 
`unsafe`s are asserting "trust me compiler, I know more than you".



You could write a macro like `get!(Dev0, SOME_REG)` that expanded to the 
`unsafe { (*Dev0).SOME_REG() }` invocation.
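
To make that concrete, here is a rough, self-contained sketch: the device type, method, and macro name are all stand-ins, and a local variable plays the role of the memory-mapped device so the snippet actually runs on a host.

struct Dev {
    reg: u32,
}

impl Dev {
    // stand-in for a generated volatile getter
    pub fn some_reg(&self) -> u32 {
        self.reg
    }
}

// Wraps the raw-pointer dereference so call sites stay short while the
// `unsafe` assertion lives in exactly one place.
macro_rules! reg_get {
    ($dev:expr, $method:ident) => {
        unsafe { (*$dev).$method() }
    };
}

fn main() {
    let mut backing = Dev { reg: 42 };
    let dev0: *mut Dev = &mut backing;
    println!("{}", reg_get!(dev0, some_reg));
}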



But... I don't understand why transmuting to &Dev wouldn't be possible. 
I'd strongly recommend the weaker version `unsafe { &*Dev0 }`, to go 
from *mut Dev to &Dev. However, this likely breaks the guarantees of &, 
since the data to which it points is supposed to be immutable, but 
being I/O mapped registers it would imply that external events can 
change the values. (I guess this is where Flavio's suggestion of storing 
a *Unsafe<Dev> comes into play, but this still leaves you dealing with 
a *mut pointer to get at the data, at the end of the day.)



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Convincing compiler that *T is safe

2014-04-14 Thread Huon Wilson

On 14/04/14 19:04, Flaper87 wrote:




2014-04-13 22:22 GMT+02:00 György Andrasek:


You could make a container struct:

struct Dev {
ptr: *mut InternalDev
}

and then impl your methods on that.


I'd recommend using `Unsafe<T>`, which was added to wrap a type T and 
indicate an *unsafe interior*. It exposes a `get` method that returns 
a `*mut` pointer to the wrapped data.


Here's the link to the Unsafe docstring (which also contains an 
example): https://github.com/mozilla/rust/blob/master/src/libstd/ty.rs#L16



Flavio


--
Flavio (@flaper87) Percoco
http://www.flaper87.com
http://github.com/FlaPer87





I'm not sure that Unsafe is particularly useful here, since the goal is 
to avoid unsafety by providing a sensible wrapper around the raw pointer 
(since some methods are "known" to be safe). Also, the rendered docs are 
a better place to which to link: 
http://static.rust-lang.org/doc/master/std/ty/struct.Unsafe.html



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] TotalOrd and cmp::max

2014-04-05 Thread Huon Wilson
Floating point numbers don't have a total ordering (all of these are 
false: NaN < NaN, NaN == NaN, NaN > NaN).


Use the floating-point specific method, `self.a.max(self.b)`.
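
Applied to the snippet from the question (renamed slightly here as a sketch), that looks like the following; the float-specific max returns the other operand when one side is NaN rather than requiring a total order.

struct Vec2d {
    a: f32,
    b: f32,
}

impl Vec2d {
    pub fn max(&self) -> f32 {
        // f32's own max, not cmp::max, so no TotalOrd/Ord bound is needed
        self.a.max(self.b)
    }
}

fn main() {
    let v = Vec2d { a: 1.5, b: 2.5 };
    println!("{}", v.max()); // 2.5
}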


Huon

On 05/04/14 19:30, Rémi Fontan wrote:

Hi,

when compiling following code with rust 0.10 I get following error:

use std::cmp;

struct vec2d { a: f32, b: f32 }

impl vec2d {
    pub fn max(&self) -> f32 {
        cmp::max(self.a, self.b)
    }
}

test.rs:6:9: 6:17 error: failed to find an implementation of trait 
std::cmp::TotalOrd for f32

test.rs:6  cmp::max(self.a, self.b)
^~~~
make: *** [test-test] Error 101


have I missed something?


cheers,

Rémi


--
Rémi Fontan : remifon...@yahoo.fr 
mobile: +64 21 855 351
93 Otaki Street, Miramar 6022
Wellington, New Zealand




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Reminder: ~[T] is not going away

2014-04-03 Thread Huon Wilson

On 03/04/14 10:22, Niko Matsakis wrote:

On Wed, Apr 02, 2014 at 04:03:37PM -0400, Daniel Micay wrote:

I have no sane proposal to fix this beyond passing a size to free.

I don't believe there is a problem with just not using null to
represent such pointers (for example, 1 would suffice). This does
impose some additional burdens on slice conversion and the like.

This conversation has focused on low-level effects, which is important
to understand, but I think the bigger question is: how do we WANT the
language to look? Is it useful to have a distinct `Vec` and `~[T]`
or -- in our ideal world -- would they be the same? I think we can
make the interconversion fast for the default allocator, but we should
design for the language we want to use.

I could go either way on this. In the kind of programs I write, at
least, most vectors get built up to a specific length and then stop
growing (frequently they stop changing as well, but not
always). Sometimes they continue growing. I actually rather like the
idea of using `Vec` as a kind of builder and `~[T]` as the
end-product. In those cases where the vector continues to grow, of
course, I can just keep the `Vec` around. Following this logic, I
would imagine that most APIs want to consume and produce `~[T]`, since
they consume and produce end products.


I don't think the basic routines returning vectors in libstd etc. are 
producing end-products; they are fundamental building blocks, and their 
output will be used in untold ways. (There are not many that consume 
`~[T]`s by-value.)




On the other hand, I could imagine and appreciate an argument that we
should just take and produce `Vec`, which gives somewhat more
flexibility. In general, Rust takes the philosophy that "if you own
it, you can mutate it", so why make growing harder than it needs to
be? Preferring Vec also means fewer choices, usually a good thing.

Perhaps the best thing is to wait a month (or two or three) until DST
is more of a reality and then see how we feel.


Are you thinking we should also wait before converting the current uses 
of ~[T] to Vec? Doing the migration gives us the performance[1] and 
zero-length-zero-alloc benefits, but there were some concerns about 
additional library churn if we end up converting back to DST's ~[T].


(I'd also guess doing a complete migration now would make the transition 
slightly easier: no need for staging the libstd changes, and it would 
allow the current ~[] handling to be removed from libsyntax/librustc 
completely, leaving a slightly cleaner slate.)




Huon


[1]: https://github.com/mozilla/rust/issues/8981
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Reminder: ~[T] is not going away

2014-04-02 Thread Huon Wilson

On 03/04/14 17:15, comex wrote:



You're talking about allocators designed around the limitation of an
API. The design no longer needs to make the same compromises if you're
going to know the size. The difference between no cache miss and a cache
miss is not insignificant...

I explained why I think a chunk header is necessary in any case.
Maybe it is still a significant win.  The C++14 proposal claims Google
found one with GCC and tcmalloc, although tcmalloc is rather
inefficient to start with... I would like to see numbers.


Really? I was under the impression that tcmalloc was one of the faster 
allocators in common use. e.g. two posts I found just now via Google:


- https://github.com/blog/1422-tcmalloc-and-mysql
- 
http://www.mysqlperformanceblog.com/2013/03/08/mysql-performance-impact-of-memory-allocators-part-2/



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Reminder: ~[T] is not going away

2014-04-02 Thread Huon Wilson

On 03/04/14 08:54, Patrick Walton wrote:

On 4/2/14 2:51 PM, Huon Wilson wrote:

Specifically, I don't see any concrete positives to doing this for
library functions other than "lets keep using ~[T]" and ~[T] & &[T]
having the same in-memory representation (covered below).

Under any scheme I can think of, there are negatives:

1. without calling shrink_to_fit in the conversion, we lose the ability
to have sized deallocations (covered by others in this thread)

2. if we do call it, then anything returning a ~[T] after building it
with a Vec is unavoidably slower

3. either way, you're throwing away (the knowledge of) any extra
capacity that was allocated, so if someone wishes to continue extending
the slice returned by e.g. `foo`, then `let v = foo().into_vec();
v.push(1)` will always require a realloc. (And for library functions, we
shouldn't be dictating how people use the return values.)

4. it adds two vector-like types that someone needs to think about: in
the common case the benefits of ~[] (one word smaller) are completely
useless, it's really only mostly-immutable heavily-nested data types
with a lot of vectors like Rust's AST where it helps[1]. I.e. almost all
situations are fine (or better) with a Vec.

5. how will the built-in ~[] type use allocators? (well, I guess this is
really "how will the built-in ~ type use allocators?", but that question
still needs answering[2].)


On the representation of ~[T] and &[T] being the same: this means that
theoretically a ~[T] in covariant(?) position can be coerced to a &[T],
e.g. Vec<~[T]> -> Vec<&[T]>. However, this only really matters for
functions returning many nested slices/vectors, e.g. the same Vec
example, because pretty much anything else will be able to write
`vec.as_slice()` cheaply. (In the code base, the only things mentioning
/~[~[/ now are a few tests and things handling the raw argc/argv, i.e.
returning ~[~[u8]].)

I don't think this should be a major concern, because I don't see us
suddenly growing functions a pile of new functions returning ~[~[T]],
and if we do, I would think that they would be better suited to being an
iterator (assuming that's possible) over Vec's, and these internal Vec
can be then be mapped to ~[T] cheaply before collecting the iterator to
a whole new Vec (or Vec<~[]>) (assuming a &[Vec]/&[~[]] is wanted).



I'm concerned we are wanting to stick with ~[T] because it's what we
currently have, and is familiar; as I said above, I don't see many
positives for doing it for library functions.


What about strings? Should we be using `StrBuf` as well?

Patrick




I don't see why not. The same arguments apply.


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Reminder: ~[T] is not going away

2014-04-02 Thread Huon Wilson
Personally, I'm strongly against using ~[] as return values from 
library functions.


Imagine we were in world were we only had Vec and were adding a new 
type OwnedSlice that was (pointer, length) like ~[T]. For how many 
library functions would we say "it is sensible to throw away the 
capacity information before returning"? I don't think anything in libstd 
etc. would have a strong 'yes' answer to this question.



Specifically, I don't see any concrete positives to doing this for 
library functions other than "lets keep using ~[T]" and ~[T] & &[T] 
having the same in-memory representation (covered below).


Under any scheme I can think of, there are negatives:

1. without calling shrink_to_fit in the conversion, we lose the ability 
to have sized deallocations (covered by others in this thread)


2. if we do call it, then anything returning a ~[T] after building it 
with a Vec is unavoidably slower


3. either way, you're throwing away (the knowledge of) any extra 
capacity that was allocated, so if someone wishes to continue extending 
the slice returned by e.g. `foo`, then `let v = foo().into_vec(); 
v.push(1)` will always require a realloc. (And for library functions, we 
shouldn't be dictating how people use the return values.)


4. it adds two vector-like types that someone needs to think about: in 
the common case the benefits of ~[] (one word smaller) are completely 
useless, it's really only mostly-immutable heavily-nested data types 
with a lot of vectors like Rust's AST where it helps[1]. I.e. almost all 
situations are fine (or better) with a Vec.

5. how will the built-in ~[] type use allocators? (well, I guess this is 
really "how will the built-in ~ type use allocators?", but that question 
still needs answering[2].)
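
(A rough modern analogue of point 3, using today's Box<[T]> in place of ~[T]: converting away the capacity means the next push has to reallocate.)

fn main() {
    let mut v = Vec::with_capacity(16);
    v.extend([1, 2, 3]);

    // today's rough equivalent of ~[T]: length + pointer, no capacity
    let boxed: Box<[i32]> = v.into_boxed_slice(); // drops the spare capacity

    let mut v2 = boxed.into_vec(); // capacity is now exactly 3
    v2.push(4);                    // so growing again forces a reallocation
    println!("{:?} (capacity {})", v2, v2.capacity());
}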



On the representation of ~[T] and &[T] being the same: this means that 
theoretically a ~[T] in covariant(?) position can be coerced to a &[T], 
e.g. Vec<~[T]> -> Vec<&[T]>. However, this only really matters for 
functions returning many nested slices/vectors, e.g. the same Vec 
example, because pretty much anything else will be able to write 
`vec.as_slice()` cheaply. (In the code base, the only things mentioning 
/~[~[/ now are a few tests and things handling the raw argc/argv, i.e. 
returning ~[~[u8]].)


I don't think this should be a major concern, because I don't see us 
suddenly growing a pile of new functions returning ~[~[T]], 
and if we do, I would think that they would be better suited to being an 
iterator (assuming that's possible) over Vec's, and these internal Vec 
can then be mapped to ~[T] cheaply before collecting the iterator to 
a whole new Vec (or Vec<~[]>) (assuming a &[Vec]/&[~[]] is wanted).




I'm concerned we are wanting to stick with ~[T] because it's what we 
currently have, and is familiar; as I said above, I don't see many 
positives for doing it for library functions.





Huon


[1]: And even in those cases, it's not a particularly huge gain, e.g. 
taking *two* words off the old OptVec type by replacing it with a 
library equivalent to DST's ~[T] only gained about 40MB: 
http://huonw.github.io/isrustfastyet/mem/#f5357cf,bbf8cdc


[2]: The sanest way to support allocators I can think of would be 
changing `~T` to `Uniq<T, A>`, and then we have `Uniq<[T], A>` 
which certainly feels less attractive than `~[T]`.


On 03/04/14 02:35, Alex Crichton wrote:

I've noticed recently that there seems to be a bit of confusion about the fate
of ~[T] with an impending implementation of DST on the horizon. This has been
accompanied with a number of pull requests to completely remove many uses of
~[T] throughout the standard distribution. I'd like to take some time to
straighten out what's going on with Vec and ~[T].

# Vec

In a post-DST world, Vec will be the "vector builder" type. It will be the
only type for building up a block of contiguous elements. This type exists
today, and lives inside of std::vec. Today, you cannot index Vec, but this
will be enabled in the future once the indexing traits are fleshed out.

This type will otherwise largely not change from what it is today. It will
continue to occupy three words in memory, and continue to have the same runtime
semantics.

# ~[T]

The type ~[T] will still exist in a post-DST, but its representation will
change. Today, a value of type ~[T] is one word (I'll elide the details of this
for now). After DST is implemented, ~[T] will be a two-word value of the length
and a pointer to an array (similarly to what slices are today). The ~[T] type
will continue to have move semantics, and you can borrow it to &[T] as usual.

The major difference between today's ~[T] type and a post-DST ~[T] is that the
push() method will be removed. There is no knowledge of a capacity in the
representation of a ~[T] value, so a push could not be supported at all. In
theory a pop() can be efficiently supported, but it will likely not be
implemented at first.

# [T]

As part of DST, the type grammar will start accepti

Re: [rust-dev] Static_assert on a hash

2014-03-30 Thread Huon Wilson

On 30/03/14 20:12, Simon Sapin wrote:

On 30/03/2014 09:50, Vladimir Pouzanov wrote:

I have a chunk of code that toggles different functions on hardware
pins, that looks like this:

   enum Function {
 GPIO = 0,
 F1 = 1,
 F2 = 2,
 F3 = 3,
   }
   fn set_mode(port: u8, pin: u8, fun: Function)

What I would like to have is an enum of human-readable values instead of
F1/F2/F3

[...]

let fun_idx: u8 = FUNCTIONS[port][pin][fun]



I don’t know if you can directly access the static strings for the 
variants’ names behind this, but you can use #[deriving(Show)] to get 
a string representation of any enum value:


For example:

#[deriving(Show)]
enum Function {
    GPIO = 0,
    F1 = 1,
    F2 = 2,
    F3 = 3,
}

fn main() {
    let f = GPIO;

    println!("{:?}", f);

    let s: ~str = format!("{:?}", f);
    println!("{}", s);
}

Output:

GPIO
GPIO



Note that {:?} is a reflection-based formatter that works with every 
type (although it has various problems, like being very slow), not the Show 
one. One should just use {} to use the Show impl, i.e. in your example:


println!("{}", f)


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Compiling with no bounds checking for vectors?

2014-03-28 Thread Huon Wilson

On 28/03/14 23:49, Tommi wrote:

On 28 Mar 2014, at 14:27, Daniel Micay  wrote:


On 28/03/14 08:25 AM, Tommi wrote:

On 28 Mar 2014, at 05:56, Patrick Walton  wrote:

I think that Rust should give you the ability to opt out of safety, but on a 
per-operation basis. Having it as a compiler option is too much of a 
sledgehammer: often you want some non-performance-critical bounds to be checked 
in the name of safety, while you want some bounds checks to be turned off.

One other argument I can give for a "sledgehammer" feature like this is that it can be 
used as a marketing tool against people who are worried about performance. You can say to those 
people: "Look, if, at the end of the day, you decide that you'd rather take raw speed over 
safety, then there's this compiler flag you can use to disable all runtime memory safety checking 
in your code and get performance on par with C++".

It's called `unsafe`. There's a whole keyword reserved for it.

 From a marketing standpoint, I don't think that the following sounds very 
appealing:
"Look, if, at the end of the day, you'd rather choose raw speed over safety, then 
you can go over all the hundreds of thousands of lines of code you have and change 
everything to their unsafe, unchecked variants".




Flip it around: "Look, if, at the end of the day, you'd rather choose 
safety over raw speed, then you can go over all the hundreds of 
thousands of lines of code you have and change everything to their safe, 
checked variants". Getting code correct is the first step to getting it 
fast: it doesn't matter how fast a program runs if it's just doing the 
wrong thing really quickly (e.g. exposing the user's computer to hijacking).


Most code isn't in a tight inner loop, and so the piece-of-mind of it 
being safe by default is worth the effort one has to put in to profile 
and examine the very core logic that gets called millions of times. It's 
much harder to use automated tools to find all of the memory safety 
bugs. And anyway, as Daniel and Patrick say, if you don't need the 
utmost safety, then Rust isn't the language you're looking for: things 
like C++ work well in the speed department, at the cost of safety.



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Compiling with no bounds checking for vectors?

2014-03-28 Thread Huon Wilson

On 28/03/14 22:04, Lee Braiden wrote:

On 28/03/14 03:48, Daniel Micay wrote:

On 27/03/14 11:04 PM, Tommi Tissari wrote:
Case by case is all fine and good. But you're trying to argue what a 
programmer *should* do if he knew what was good for him.

Rust doesn't view the programmer as an infallible, trusted entity.


Actually, I believe it does, with this policy.  The "infallible 
programmer" it imagines, however, is not the developer; true.  It is 
worse: this policy currently assumes that the policy makers / compiler 
creators themselves are infallible; that language users (even language 
users of the future, who may have much more knowledge and experience 
than anyone participating in this discussion today) are idiots who 
don't know what they're doing, or at least, will never know more than 
the language creators.  This is NOT trusting the tireless work of a 
compiler: it's being arrogant, and trusting yourself more than others, 
whose abilities and circumstances you do not even know.


At no point do the compiler writers assume they themselves are 
infallible; if they did, the compiler and stdlib could just be written 
with unsafe everywhere, to avoid having to satisfy rustc, which can be 
annoyingly picky. You'll notice that this is not the case, and reviewers 
strongly discourage adding large amounts of `unsafe`, preferring 
completely safe code, or, if that's not possible, the unsafety wrapped 
into self-contained segments, to make auditing them for correctness 
easier. In fact, a large motivation for working on Rust is because the 
developers know how fallible they are.


A lot of work has gone, and is going, into making the compiler and 
libraries safe and correct. (Of course, getting a completely 
verified-correct compiler is really, really hard work, and even the 
proof of correctness of the type system is incomplete, but is currently 
being worked on.)



In any case, those writing the compiler are likely to be the ones with 
the best knowledge of Rust, and, in particular, the best knowledge of 
how to write correct unsafe/low-level code: e.g. how to avoid inducing 
undefined behaviour, and how to maintain the various invariants that 
exist. I think it's reasonable to assume that most other people using 
Rust will not have this in depth knowledge. Personally, I would prefer 
to put more trust in the experienced users.


(Experience in writing safe Rust code is less relevant here, since, 
assuming the proof works smoothly/we adjust the language to make it 
work, it will be impossible to write vulnerable code without `unsafe`.)




Worse: it is failing to learn from history.  The very reason that C / 
C++ succeeded is that they don't force things on developers: they 
assist, give options.  They choose defaults, yes, and make things 
easier, yes; but they always provide the option to move out of the 
way, when it turns out that those defaults are actually making things 
harder.  The very reason that many other languages fail is that they 
failed to provide the ability to adapt to changing needs.


Forcing bounds checking on everyone is really not that different from 
forcing garbage collection on everyone: it may seem like a good idea 
to some --- even many --- but to others, it is overly limiting.


Rust explicitly gives options too, and definitely does *not* force 
bounds checking on everyone: the `unsafe` escape hatch allows someone to 
choose to forgo bounds checking when it has a noticeable effect on 
performance, and there's no safe substitute (i.e. non-sequential vector 
accesses in a tight loop).
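
To make the contrast concrete, a rough sketch (the unchecked accessor is
written here as `get_unchecked`, which is an assumption for the sketch
rather than a reference to one specific std method):

fn sum(xs: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..xs.len() {
        // Safe, bounds-checked indexing: an out-of-range index fails loudly.
        total += xs[i];
    }

    let mut unchecked = 0;
    for i in 0..xs.len() {
        // Opting out: the programmer asserts `i` is in range, so no check
        // is performed. This is only allowed inside `unsafe`.
        unchecked += unsafe { *xs.get_unchecked(i) };
    }

    assert_eq!(total, unchecked);
    total
}

fn main() {
    println!("{}", sum(&[1, 2, 3, 4]));
}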



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Help needed writing idiomatic rust to pass sequences of strings around

2014-03-24 Thread Huon Wilson
It would be necessary (but not sufficient*) for them to have the same 
in-memory representation, and currently ~str and &str don't.


~str is `*{ length: uint, capacity: uint, data... }`, a pointer to a 
vector with the length and capacity stored inline, i.e. one word; &str 
is just `{ data: *u8, length: uint }`, a pointer to a chunk of memory 
along with the length of that chunk, i.e. two words.
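
Written out as illustrative struct definitions (names invented for this
sketch, and the integer type spellings are assumptions; this is not how the
compiler actually spells these types):

// ~str: one word, a pointer to a heap box that stores length and capacity
// inline, followed by the bytes themselves.
struct OwnedStrRepr {
    ptr: *mut StrBox,
}
struct StrBox {
    length: usize,
    capacity: usize,
    // ...followed inline by `length` bytes of UTF-8 data
}

// &str: two words, a pointer into someone else's buffer plus the length.
struct StrSliceRepr {
    data: *const u8,
    length: usize,
}

fn main() {
    // On a 64-bit target this prints "8 16": one word versus two.
    println!("{} {}",
             std::mem::size_of::<OwnedStrRepr>(),
             std::mem::size_of::<StrSliceRepr>());
}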



(*E.g. std::cell::Cell and uint have the same in-memory 
representation, but coercing a &[uint] to a &[Cell] is a very bad 
idea... when it would theoretically be possible relates to 
subtyping/type variance.)


Huon

On 24/03/14 17:36, Phil Dawes wrote:
To complete my understanding: is there a reason a 'sufficiently smart 
compiler' in the future couldn't do this conversion implicitly?


I.e. if a function takes a borrowed reference to a container of 
pointers, could the compiler ignore what type of pointers they are 
(because they won't be going out of scope)?


Thanks,

Phil


On Sun, Mar 23, 2014 at 7:14 AM, Patrick Walton wrote:


On 3/23/14 12:11 AM, Phil Dawes wrote:

On Sun, Mar 23, 2014 at 2:04 AM, Patrick Walton wrote:

Why not change the signature of `search_crate` to take `~str`?

Patrick


Hi Patrick,

The main reason I haven't done this is that it is already used
from a
bunch of places where a path is &[&str] as the result of an
earlier
split_str("::")
e.g.
let path : ~[&str] = s.split_str("::").collect();
...
search_crate(path);


Ah, I see. Well, in that case you can make a trait (say,
`String`), which implements a method `.as_str()` that returns an
`&str`, and have that trait implemented by both `&str` and `~str`.
(IIRC the standard library may have such a trait already, for `Path`?)

You can then write:

fn search_crate<T: String>(x: &[T]) {
...
for string in x.iter() {
... string.as_str() ...
}
}

And the function will be callable with both `&str` and `~str`.
Again, I think the standard library has such a trait implemented
already, for this use case.

Patrick




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Help needed writing idiomatic rust to pass sequences of strings around

2014-03-23 Thread Huon Wilson

On 23/03/14 18:14, Patrick Walton wrote:

On 3/23/14 12:11 AM, Phil Dawes wrote:

On Sun, Mar 23, 2014 at 2:04 AM, Patrick Walton wrote:

Why not change the signature of `search_crate` to take `~str`?

Patrick


Hi Patrick,

The main reason I haven't done this is that it is already used from a
bunch of places where a path is &[&str] as the result of an earlier
split_str("::")
e.g.
let path : ~[&str] = s.split_str("::").collect();
...
search_crate(path);


Ah, I see. Well, in that case you can make a trait (say, `String`), 
which implements a method `.as_str()` that returns an `&str`, and have 
that trait implemented by both `&str` and `~str`. (IIRC the standard 
library may have such a trait already, for `Path`?)


You can then write:

fn search_crate<T: String>(x: &[T]) {
...
for string in x.iter() {
... string.as_str() ...
}
}

And the function will be callable with both `&str` and `~str`. Again, 
I think the standard library has such a trait implemented already, for 
this use case.


Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


std::str::Str http://static.rust-lang.org/doc/master/std/str/trait.Str.html
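
A sketch of the pattern, using an `AsRef<str>`-style bound purely as a
stand-in for the `Str`/`as_slice` approach (the exact trait and method
names here are assumptions for the sketch):

fn search_crate<S: AsRef<str>>(path: &[S]) {
    for segment in path {
        // View each element as a &str, whether it owns its data or not.
        let s: &str = segment.as_ref();
        println!("segment: {}", s);
    }
}

fn main() {
    let owned: Vec<String> = "std::vec::Vec".split("::").map(|s| s.to_string()).collect();
    let borrowed: Vec<&str> = "std::vec::Vec".split("::").collect();

    search_crate(&owned);    // works with owned strings...
    search_crate(&borrowed); // ...and with borrowed slices
}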


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Documentation with links to github

2014-03-18 Thread Huon Wilson
Rustdoc actually renders the source code itself, and puts little [src] 
links on (most) things, e.g. the [src] link on the top right of the 
arena crate docs links to 
http://static.rust-lang.org/doc/master/src/arena/home/rustbuild/src/rust-buildbot/slave/doc/build/src/libarena/lib.rs.html#11-597


However, this isn't as good as it could be, e.g. the link is easy to 
miss, and:

- https://github.com/mozilla/rust/issues/12926
- https://github.com/mozilla/rust/issues/12932


Huon


On 19/03/14 11:20, benjamin adamson wrote:
I recently had to learn enough Ruby at work to implement some new 
behavior to a relatively old program. I ran across one website where 
the documentation of the API I was learning was embedded into the HTML 
directly. Having immediate access to the source code allowed me to 
understand what the API was doing more so than the actual 
documentation. This was immensely helpful, and I'm wondering (hoping) 
that Rust could steal the motivation for this idea. Here is an 
example, the source code is embedded onto the page:

http://apidock.com/ruby/URI/HTTP/request_uri

As a developer having as much information as possible is what I always 
want. My idea is to provide a link on the generated documentation page 
that links to the source code on github. I think that's a little more 
sane then having the source code embedded into the generated HTML.


As an example, currently I'm looking at the documentation for an 
arena, http://static.rust-lang.org/doc/master/arena/index.html and it 
would be *convenient* for the documentation to link to 
https://github.com/mozilla/rust/blob/master/src/libarena/lib.rs


There's a ton of useful documentation about the Arena in the source 
code the user can read too, 
https://github.com/mozilla/rust/blob/master/src/libarena/lib.rs#L66 
for example. Any shortcomings of the documentation can be some-what 
circumvented if the user wants to just look at the source code. I want 
to make one thing clear, I understand that users can go on github and 
find the source-code, this feature would just automate that 
(potentially distracting/long/difficult task, especially for newcomers 
to Rust) step reducing barrier to entry.


Another benefit is that this would get more rust users looking at the 
source code, possibly leading to more PR's improving documentation or 
implementation (educated guess). Furthermore users can understand 
performance implications of using any public library, conveniently. 
This may be really useful for users of under-documented 
modules/libraries. This might be an immensely useful addition to 
Rust's documentation.



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Why we don't like glob use (use std::vec::*)?

2014-03-12 Thread Huon Wilson
Certain aspects of them dramatically complicate the name resolution 
algorithm (as I understand it), and, anyway, they have various downsides 
for the actual code, e.g. the equivalent in Python is frowned upon: 
http://python.net/~goodger/projects/pycon/2007/idiomatic/handout.html#importing


Maybe they aren't so bad in a compiled & statically typed language? I 
don't know; either way, I personally find code without glob imports 
easier to read, because I can work out which function is being called 
very easily, whereas glob imports require more effort.
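
A trivial example of the difference in what the reader has to know:

// With a glob, the reader has to go find out where `max` and `min` come from:
use std::cmp::*;

// With explicit imports, the origin of every name is visible at the top:
// use std::cmp::{max, min};

fn main() {
    println!("{}", max(1, 2));
    println!("{}", min(1, 2));
}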



Huon

On 12/03/14 20:44, Liigo Zhuang wrote:
"glob use" just make compiler loading more types, but make programmers 
a lot easy (to write, to remember). perhaps I'm wrong? thank you!


--
by *Liigo*, http://blog.csdn.net/liigo/
Google+ https://plus.google.com/105597640837742873343/


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] About RFC: "A 30 minute introduction to Rust"

2014-03-03 Thread Huon Wilson
I recently wrote a tool that helps with auditing unsafe blocks: 
https://github.com/huonw/unsafe_ls


It lists all the unsafe blocks in a crate and prints the lines with the 
actual unsafe actions on them, with a crude filter to omit FFI (and/or 
only see FFI). This doesn't do anything intelligent at all, just makes 
it easier for humans to find and read any unsafe code; and it's still up 
to the programmer to work out what safe code needs to be checked too.


(Works well with emacs' compilation-mode, in my experience.)


Huon

On 04/03/14 13:58, Patrick Walton wrote:
It's plain hyperbolic to call Rust's unsafe blocks something that 
leads to a false sense of security. If you don't want your unsafe code 
to be reliant on safe code, don't call that safe code, or be 
conservative and defend yourself against it going wrong. Unsafe code 
should be simple and easy to understand, and in practice this has 
worked well so far.


Such a tool would be useful and would help evaluate the unsafe code 
for correctness, but let's not pretend that it's needed for Rust to be 
much safer than C++. However that is determined, if the unsafe code is 
correct, all the safe code is guaranteed to be free from memory safety 
problems. Action-at-a-distance (unmarked code affecting safe code) is 
an unfortunate hazard, and one that we should mitigate, but in 
practice changing safe code hasn't affected much, because our unsafe 
code tends to be small and localized.


Patrick

Daniel Micay  wrote:

On 03/03/14 08:54 PM, Patrick Walton wrote:

On 3/3/14 5:53 PM, Daniel Micay wrote:

On 03/03/14 08:19 PM, Steve Klabnik wrote:

Part of the issue with that statement is that you may
or may not program in this way. Yes, people choose
certain subsets of C++ that are more or less safe, but
the language can't help you with that.

You can choose to write unsafe code in Rust too.

You have to write the *unsafe* keyword to do so. Patrick


You need an `unsafe` keyword somewhere, but the memory safety bug can
originate in safe code. Any safe code called by unsafe code is trusted
too, but not marked as such. A memory safety bug can originate
essentially anywhere in librustc, libsyntax, libstd and the other
libraries because they're freely mixed with `unsafe` code.

It's pretty much a false sense of security without tooling to show which
code is trusted by an `unsafe` block/function *somewhere*, even in
another crate.


--
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] reader.lines() swallows io errors

2014-02-19 Thread Huon Wilson
#12368 has 3 concrete suggestions for possible solutions (which are 
included in your list of 6):


- A FailingReader wrapper that wraps another reader and fails on errors
- A ChompingReader wrapper that "chomps" errors, but stores them so that 
they are externally accessible (a rough sketch of this one is below)
- Have the lines() iterator itself store the error, so that it can be 
accessed after the loop



https://github.com/mozilla/rust/issues/12368
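
A rough sketch of the second option, written against a generic `Read`-style
trait rather than the exact Reader trait under discussion (the type and
method names here are assumptions):

use std::io::{self, Read};

// Wraps another reader; read errors are remembered and reported to the
// caller as end-of-stream, so they can be inspected after iteration.
struct ChompingReader<R> {
    inner: R,
    error: Option<io::Error>,
}

impl<R> ChompingReader<R> {
    fn new(inner: R) -> ChompingReader<R> {
        ChompingReader { inner: inner, error: None }
    }

    // After the loop, the caller can ask whether anything went wrong.
    fn error(&self) -> Option<&io::Error> {
        self.error.as_ref()
    }
}

impl<R: Read> Read for ChompingReader<R> {
    fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
        match self.inner.read(buf) {
            Ok(n) => Ok(n),
            Err(e) => {
                // Remember the error, but pretend the stream simply ended.
                self.error = Some(e);
                Ok(0)
            }
        }
    }
}

The idea being that you wrap e.g. a file in this, iterate over lines as
normal, and then ask the wrapper afterwards whether an error was swallowed.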


Huon


On 20/02/14 09:50, Kevin Ballard wrote:
On Feb 19, 2014, at 2:34 PM, Lee Braiden wrote:


Then we could introduce a new struct to wrap any Reader that 
translates non-EOF errors into EOF specifically to let you say “I 
really don’t care about failure”.


It sounds like a very specific way to handle a very general problem.  
People like (modern, complete) scripting languages because they 
handle this sort of intricacy in elegant, ways, not because they 
gloss over it and make half-baked programs that don't handle errors.  
It's just that you can, say, handle IOErrors in one step, at the top 
of your script, except for one particular issue that you know how to 
recover from, six levels into the call stack.  Exceptions (so long as 
there isn't a lot of boilerplate around them) let you do that, 
easily.  Rust needs a similarly generic approach to propagating 
errors and handling them five levels up, whether that's exceptions or 
fails (I don't think they currently are flexible enough), or monads, 
or something else.


In my experience, exceptions are actually a very /inelegant/ way to 
handle this problem. The code 5 levels higher that catches the 
exception doesn’t have enough information about the problem in order 
to recover. Maybe it just discards the entire computation, or perhaps 
restarts it. But it can’t recover and continue.


We already tried conditions for this, which do let you recover and 
continue, except that turned out to be a dismal failure. Code that 
didn’t touch conditions were basically just hoping nothing went wrong, 
and would fail!() if it did. Code that did try to handle errors was 
very verbose because conditions were a PITA to work with.


As for what we’re talking about here. lines() is fairly unique right 
now in its discarding of errors. I can’t think of another example 
offhand that will discard errors. As I said before, I believe that 
.lines() exists to facilitate I/O handling in a fashion similar to 
scripting languages, primarily because one of the basic things people 
try to do with new languages is read from stdin and handle the input, 
and it’s great if we can say our solution to that is:


fn main() {
    for line in io::stdin().lines() {
        print!("received: {}", line);
    }
}

It’s a lot more confusing and off-putting if our example looks like

fn main() {
    for line in io::stdin().lines() {
        match line {
            Ok(line) => print!("received: {}", line),
            Err(e) => {
                println!("error: {}", e);
                break;
            }
        }
    }
}

or alternatively

fn main() {
    for line in io::stdin().lines() {
        // new user says "what is .unwrap()?" and is still not handling errors here
        let line = line.unwrap();

        print!("received: {}", line);
    }
}

Note that we can’t even use try!() (née if_ok!()) here because main() 
doesn’t return an IoResult.


The other thing to consider is that StrSlice also exposes a .lines() 
method and it may be confusing to have two .lines() methods that yield 
different types.


Given that, the only reasonable solutions appear to be:

1. Keep the current behavior. .lines() already documents its behavior; 
anyone who cares about errors should use .read_line() in a loop


2. Change .lines() to fail!() on a non-EOF error. Introduce a new 
wrapper type IgnoreErrReader (name suggestions welcome!) that 
translates all errors into EOF. Now the original sample code will 
fail!() on a non-EOF error, and there’s a defined way of turning it 
back into the version that ignores errors for people who legitimately 
want that. This could be exposed as a default method on Reader called 
.ignoring_errors() that consumes self and returns the new wrapper.


3. Keep .lines() as-is and add the wrapper struct that fail!()s on 
errors. This doesn’t make a lot of sense to me because the struct 
would only ever be used with .lines(), and therefore this seems worse 
than:


4. Change .lines() to fail!() on errors and add a new method 
.lines_ignoring_errs() that behaves the way .lines() does today. 
That’s kind of verbose though, and is a specialized form of suggestion 
#2 (and therefore less useful).


5. Remove .lines() entirely and live with the uglier way of reading 
stdin that will put off new users.


6. Add some way to retrieve the ignored error after the fact. This 
would require uglifying the Buffer trait to have .err() and .set_err() 
methods, as well as expanding all the implementors to provide a field 
to store that information.


I’m in favor of solutions

Re: [rust-dev] Improving our patch review and approval process (Hopefully)

2014-02-19 Thread Huon Wilson
Another alternative is to have a few fast builders (e.g. whatever 
configuration the try bots run) just run through the queue as they're 
r+'d to get fast feedback.



(A (significant) problem with all these proposals is the increase in 
infrastructure complexity: there's already semi-regular automation 
failures.)



Huon

On 20/02/14 08:21, Clark Gaebel wrote:
As an alternative to "arbitrary code running on the buildbot", there 
could be a b+ which means "please try building this" which core 
contributors can comment with after a quick skim through the patch.



On Wed, Feb 19, 2014 at 3:38 PM, Felix S. Klock II wrote:



On 19/02/2014 21:12, Flaper87 wrote:

2. Approval Process

[...] For example, requiring 2 r+ from 2 different reviewers
instead of 1. This might seem a bit drastic now, however as the
number of contributors grows, this will help with making sure
that patches are reviewed at least by 2 core reviewers and they
get enough attention.


I mentioned this on the #rust-internals irc channel but I figured
I should broadcast it here as well:

regarding fractional r+, someone I was talking to recently
described their employer's process, where the first reviewer (who
I think is perhaps part of a priveleged subgroup) assigned the
patch with the number of reviewers it needs so that it isn't a
flat "every patch needs two reviewers" but instead, someone says
"this looks like something big/hairy enough that it needs K reviewers"

just something to consider, if we're going to look into
strengthening our review process.

Cheers,
-Felix


On 19/02/2014 21:12, Flaper87 wrote:

Hi all,

I'd like to share some thoughts with regard to our current test
and approval process. Let me break this thoughts into 2 separate
sections:

1. Testing:

Currently, all patches are being tested after they are approved.
However, I think it would be of great benefit for contributors -
and reviewers - to test patches before and after they're
approved. Testing the patches before approval will allow folks
proposing patches - although they're expected to test the patches
before submitting them - and reviewers to know that the patch is
indeed mergeable. Furthermore, it will help spotting corner
cases, regressions that would benefit from a good discussion
while the PR is hot.

I think we don't need to run all jobs, perhaps just Windows, OSx
and Linux should be enough for a first test phase. It would also
be nice to run lint checks, stability checks etc. IIRC, GH's API
should allow us to notify this checks failures.

2. Approval Process

I'm very happy about how patches are reviewed. The time a patch
waits before receiving the first comment is almost 0 seconds and
we are spread in many patches. If we think someone else should
take a look at some patch, we always make sure to mention that
person.

I think the language would benefit from a more strict approval
process. For example, requiring 2 r+ from 2 different reviewers
instead of 1. This might seem a bit drastic now, however as the
number of contributors grows, this will help with making sure
that patches are reviewed at least by 2 core reviewers and they
get enough attention.


I think both of these points are very important now that we're
moving towards 1.0 and the community keeps growing.

Thoughts? Feedback?

-- 
Flavio (@flaper87) Percoco

http://www.flaper87.com
http://github.com/FlaPer87


___
Rust-dev mailing list
Rust-dev@mozilla.org  
https://mail.mozilla.org/listinfo/rust-dev



-- 
irc: pnkfelix on irc.mozilla.org

email: {fklock,pnkfelix}@mozilla.com


___
Rust-dev mailing list
Rust-dev@mozilla.org 
https://mail.mozilla.org/listinfo/rust-dev




--
Clark.

Key ID : 0x78099922
Fingerprint: B292 493C 51AE F3AB D016  DD04 E5E3 C36F 5534 F907


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] reader.lines() swallows io errors

2014-02-19 Thread Huon Wilson
It would need a red mark because *one* convenience method doesn't 
provide the full error handling suite?? Surely Rust at least gets "Some".



That said, the failing-reader and/or `operate` techniques sound like 
they might be nice, on first blush. (I'll note that `operate` is very 
similar to the "conditions" we used to use for all IO errors.)



Huon

On 19/02/14 20:32, Lee Braiden wrote:

On 19/02/14 09:14, Phil Dawes wrote:
I understand, but it seems like a bad tradeoff to me in a language 
with safety as a primary feature.
'.lines()' looks like the way to do line iteration in rust, so people 
will use it without thinking, especially as it is part of the std::io 
introduction.




I agree.  IO is certainly an area where you want to know that ALL 
errors are propagated upwards.  Even with the best of intentions, it's 
very easy to write IO functions that handle 20 cases, across 5 levels 
of abstraction, and inadvertently not handle all failure conditions 
for one particular case.  It may not even matter for the purposes of 
the original code, but can later bite you pretty hard, when you come 
to add functionality, and find that something is missing from your API 
to handle the more general case correctly.



At the moment, Rust would have to get a red mark on this table:

http://en.wikipedia.org/wiki/Comparison_of_programming_languages#Failsafe_I.2FO_and_system_calls

Which would be quite sad, considering its goals.


IMO, this should be acknowledged as the simple oversight that it is, 
and fixed.



--
Lee



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: About the library stabilization process

2014-02-18 Thread Huon Wilson
There are some docs for these attributes: 
http://static.rust-lang.org/doc/master/rust.html#stability (which may 
need to be updated as we formalise exactly what each one means, and so on.)


And, FWIW, the default currently implemented is unmarked nodes are 
unstable: that is, putting #[deny(unstable)] on an item will emit errors 
at the uses of functions etc. that lack an explicit stability attribute.
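
For example, roughly (attribute spellings as in the linked docs; an
untested sketch of where they go, not a statement of exactly how the lint
behaves):

#[experimental]
pub fn shiny_new_api() { /* may change or be removed */ }

#[stable]
pub fn settled_api() { /* interface is frozen */ }

#[deny(unstable)]
mod strict {
    // uses in here of items that lack an explicit stability attribute
    // (or that are marked below stable) get reported by the lint
}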


Huon


On 19/02/14 12:40, Brian Anderson wrote:

Hey there.

I'd like to start the long process of stabilizing the libraries, and 
this is the opening salvo. This process and the tooling to support it 
has been percolating on the issue tracker for a while, but this is a 
summary of how I expect it to work. Assuming everybody feels good 
about it, we'll start trying to make some simple API's stable starting 
later this week or next.



# What is the stability index and stability attributes?

The stability index is a way of tracking, at the item level, which 
library features are safe to use backwards-compatibly. The intent is 
that the checks for stability catch all backwards-incompatible uses of 
library features. Between feature gates and stability


The stability index of any particular item can be manually applied 
with stability attributes, like `#[unstable]`.


These definitions are taken directly from the node.js documentation. 
node.js additionally defines the 'locked' and 'frozen' levels, but I 
don't think we need them yet.


* Stability: 0 - Deprecated

This feature is known to be problematic, and changes are
planned.  Do not rely on it.  Use of the feature may cause warnings.
Backwards compatibility should not be expected.

* Stability: 1 - Experimental

This feature was introduced recently, and may change
or be removed in future versions.  Please try it out and provide 
feedback.
If it addresses a use-case that is important to you, tell the node 
core team.


* Stability: 2 - Unstable

The API is in the process of settling, but has not yet had
sufficient real-world testing to be considered stable.
Backwards-compatibility will be maintained if reasonable.

* Stability: 3 - Stable

The API has proven satisfactory, but cleanup in the underlying
code may cause minor changes.  Backwards-compatibility is guaranteed.

Crucially, once something becomes 'stable' its interface can no longer 
change outside of extenuating circumstances - reviewers will need to 
be vigilant about this.


All items may have a stability index: crates, modules, structs, enums, 
typedefs, fns, traits, impls, extern blocks;
extern statics and fns, methods (of inherent impls only).

Implementations of traits may have their own stability index, but 
their methods have the same stability as the trait's.



# How is the stability index determined and checked?

First, if the node has a stability attribute then it has that 
stability index.


Second, the AST is traversed and stability index is propagated 
downward to any indexable node that isn't explicitly tagged.


Reexported items maintain the stability they had in their original 
location.


By default all nodes are *stable* - library authors have to opt-in to 
stability index tracking. This may end up being the wrong default and 
we'll want to revisit.


During compilation the stabilization lint does at least the following 
checks:


* All components of all paths, in all syntactic positions are checked, 
including in

  * use statements
  * trait implementation and inheritance
  * type parameter bounds
* Casts to traits - checks the trait impl
* Method calls - checks the method stability

Note that not all of this is implemented, and we won't have complete 
tool support to start with.



# What's the process for promoting libraries to stable?

For 1.0 we're mostly concerned with promoting large portions of std to 
stable; most of the other libraries can be experimental or unstable. 
It's going to be a lengthy process, and it's going to require some 
iteration to figure out how it works best.


The process 'leader' for a particular module will post a stabilization 
RFC to the mailing list. Within, she will state the API's under 
discussion, offer an overview of their functionality, the patterns 
used, related API's and the patterns they use, and finally offer 
specific suggestions about how the API needs to be improved or not 
before it's final. If she can confidently recommend that some API's 
can be tagged stable as-is then that helps everybody.


After a week of discussion she will summarize the consensus, tag 
anything as stable that already has agreement, file and nominate 
issues for the remaining, and ensure that *somebody makes the changes*.


During this process we don't necessarily need to arrive at a plan to 
stabilize everything that comes up; we just need to get the most 
crucial features stable, and make continual progress.


We'll start by establishing a stability baseline, tagging most 
everything experimental or 

Re: [rust-dev] issue numbers in commit messages

2014-02-18 Thread Huon Wilson

I wrote a quick & crappy script that automates going from commit -> PR:

#!/bin/sh
# Print the GitHub pull request that merged a given commit into master.

if [ $# -eq 0 ]; then
    echo 'Usage: which-pr COMMIT'
    exit 0
fi

# Find the (bors) merge commit that first brought $1 into master, then turn
# the "#NNNN" in its subject line into a pull request URL.
git log master ^$1 --ancestry-path --oneline --merges | \
    tail -1 | \
    sed 's@.*#\([0-9]*\) : .*@http://github.com/mozilla/rust/pull/\1@'

Putting this in your path gives:

$ which-pr 6555b04
http://github.com/mozilla/rust/pull/12345

$ which-pr a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2
http://github.com/mozilla/rust/pull/12162

Of course, I'm sure there are corner cases that don't work, and it's 
definitely not as usable as something directly encoded in the commit.



Huon


On 18/02/14 13:17, Nick Cameron wrote:
Right, that is exactly what I want to see, just on every commit. For 
example, 
https://github.com/mozilla/rust/commit/a02b10a0621adfe36eb3cc2e46f45fc7ccdb7ea2. 
has none of that info and I can't see any way to get it (without the 
kind of Git-fu suggested earlier). (Well, I can actually see that 
r=nikomatsakis from the comments at the bottom, but I can't see how 
that r+ came about, whether there was any discussion, whether there 
was an issue where this was discussed or not, etc.).



On Tue, Feb 18, 2014 at 3:02 PM, Corey Richardson wrote:



https://github.com/mozilla/rust/commit/25147b2644ed569f16f22dc02d10a0a9b7b97c7e
seems to provide all of the information you are asking for? It
includes the text of the PR description, the PR number, the name of
the branch, and who reviewed it. I agree with your premise but I'm not
sure I agree that the current situation isn't adequate. But I wouldn't
be opposed to such a change.

> On Mon, Feb 17, 2014 at 8:54 PM, Nick Cameron wrote:
> Whether we need issues for PRs is a separate discussion. There
has to be
> _something_ for every commit - either a PR or an issue, at the
least there
> needs to be an r+ somewhere. I would like to see who reviewed
something so I
> can ping someone with questions other than the author (if they
are offline).
> Any discussion is likely to be useful.
>
> So the question is how to find that, when necessary. GitHub
sometimes fails
> to point to the info. And when it does, you do not know if you
are missing
> more info. For the price of 6 characters in the commit message
(or "no
> issue"), we know with certainty where to find that info and that
we are not
> missing other potentially useful info. This would not slow down
development
> in any way.
>
> Note that this is orthogonal to use of version control - you
still need to
> know Git in order to get the commit message - it is about how
one can go
> easily from a commit message to meta-data about a commit.
>
>
> On Tue, Feb 18, 2014 at 12:53 PM, Kevin Ballard mailto:ke...@sb.org>> wrote:
>>
>> This is not going to work in the slightest.
>>
>> Most PRs don't have an associated issue. The pull request is
the issue.
>> And that's perfectly fine. There's no need to file an issue
separate from
>> the PR itself. Requiring a referenced issue for every single
commit would be
>> extremely cumbersome, serve no real purpose aside from aiding an
>> unwillingness to learn how source control works, and would
probably slow
>> down the rate of development of Rust.
>>
>> -Kevin
>>
>> On Feb 17, 2014, at 3:50 PM, Nick Cameron mailto:li...@ncameron.org>> wrote:
>>
>> At worst you could just use the issue number for the PR. But I
think all
>> non-trivial commits _should_ have an issue associated. For
really tiny
>> commits we could allow "no issue" or '#0' in the message. Just
so long as
>> the author is being explicit, I think that is OK.
>>
>>
>> On Tue, Feb 18, 2014 at 12:16 PM, Scott Lawrence
mailto:byt...@gmail.com>> wrote:
>>>
>>> Maybe I'm misunderstanding? This would require that all commits be
>>> specifically associated with an issue. I don't have actual
stats, but
>>> briefly skimming recent commits and looking at the issue
tracker, a lot of
>>> commits can't be reasonably associated with an issue. This
requirement would
>>> either force people to create fake issues for each commit, or
to reference
>>> tangentially-related or overly-broad issues in commit
messages, neither of
>>> which is very useful.
>>>
>>> Referencing any conversation that leads to or influences a
commit is a
>>> good idea, but something this inflexible doesn't seem right.
>>>
>>> My 1.5¢.
>>>
>>>
>>> On Tue, 18 Feb 2014, Nick Cameron wrote:
>>>
 How would people feel about a requirement for all commit
messages to
 have
 an issue number in them? And could we make bors enforce that?
   

Re: [rust-dev] RFC: Conventions for "well-behaved" iterators

2014-02-14 Thread Huon Wilson
Taking `self` would make it much harder to thread iterators statefully 
through other function calls, and also make using Iterator trait objects 
harder/impossible, since `next` requires `~self` until 10672 is fixed, 
which is unacceptable, and mentions `Self` in the return value, making 
it uncallable through something with the type erased.
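
(For concreteness, the by-value shape under discussion looks like this in
isolation, as a toy example that sidesteps the trait-object questions
entirely:)

struct Countdown {
    remaining: u32,
}

impl Countdown {
    // Consumes the old state and hands back the new one along with the
    // item, so an exhausted iterator cannot be used again by accident.
    fn next(self) -> Option<(Countdown, u32)> {
        if self.remaining == 0 {
            None
        } else {
            let value = self.remaining;
            Some((Countdown { remaining: self.remaining - 1 }, value))
        }
    }
}

fn main() {
    let mut iter = Countdown { remaining: 3 };
    loop {
        match iter.next() {
            Some((rest, value)) => {
                println!("{}", value); // 3, 2, 1
                iter = rest;
            }
            None => break,
        }
    }
}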



Huon

On 14/02/14 21:46, Gábor Lehel wrote:




On Fri, Feb 14, 2014 at 12:35 AM, Daniel Micay wrote:


On Thu, Feb 13, 2014 at 6:33 PM, Erick Tryzelaar
mailto:erick.tryzel...@gmail.com>> wrote:
> On Thursday, February 13, 2014, Gábor Lehel
mailto:glaebho...@gmail.com>> wrote:
>>
>>
>>
>> This is not strictly true.
>>
>> If instead of
>>
>> fn next(&mut self) -> Option<A>;
>>
>> we had something like
>>
>> fn next(self) -> Option<(Self, A)>;
>>
>> then access to exhausted iterators would be ruled out at the
type level.
>>
>> (But it's more cumbersome to work with and is currently
incompatible with
>> trait objects.)
>
> This is an appealing option. If it is really this simple to
close this
> undefined behavior, I think we should consider it. Are there any
other
> downsides? Does it optimize down to the same code as our current
iterators?

It's certainly not as convenient and would only work if all iterators
were marked as `NoPod`.


Even if it were `Pod` (i.e. copyable), the state of the old copy would 
be left unchanged by the call, so I don't think this is a problem.


You could also recover the behavior of the existing `Fuse` adapter 
(call it any number of times, exhaustion checked at runtime) by 
wrapping it in an `Option` like so:


fn next_fused<A, I: Iterator<A>>(opt_iter: &mut Option<I>) -> Option<A> {

opt_iter.take().and_then(|iter| {
iter.next().map(|(next_iter, result)| {
*opt_iter = Some(next_iter);
result
})
})
}

Dunno about performance. Lots of copies/moves with this scheme, so it 
seems possible that it might be slower.



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Chaining Results and Options

2014-02-10 Thread Huon Wilson
Ah, true enough. I think you were correct, and I misread the original 
email. `swap(&mut map, &mut and_then)` and fmap <-> monadic bind in my 
response.


Huon

On 10/02/14 23:00, Sergei Maximov wrote:
I believe it depends on what return type `.method` has, but I got your 
point.


On 02/10/2014 10:50 PM, Huon Wilson wrote:
It's actually Haskell's fmap, which we have in the form of .map for 
both Option[1] and Result[2], e.g. the proposed expr->method() is the 
same as expr.map(|x| x.method()) (which is still quite verbose).



Monadic bind comes in the form of the .and_then methods (which both 
Option and Result have).



[1]: 
http://static.rust-lang.org/doc/master/std/option/enum.Option.html#method.map
[2]: 
http://static.rust-lang.org/doc/master/std/result/enum.Result.html#method.map



Huon


On 10/02/14 22:45, Sergei Maximov wrote:
It looks very similar to Haskell's monadic bind operator (>>=) at 
first glance.


On 02/10/2014 10:40 PM, Armin Ronacher wrote:

Hi,

I was playing around with the new IO system a lot and got some very 
high level feedback on that. Currently the IoResult objects 
implement a few traits to pass through to the included success 
value and to dispatch method calls.


That's nice but it means that from looking at the code you can 
easily miss that a result is involved and if you add methods you 
need to add manual proxy methods on the result.


At the end of the day the only thing you actually need to pass 
through is the error. So I would propose a new operator "->" that 
acts as a "resolve deref" operator. It would be operating on the 
"Some" part of an Option and the "Ok" part of a result and pass 
through the errors unchanged:


So essentially this:

let rv = match expr() {
Ok(tmp) => tmp.method(),
err => err,
};

Would be equivalent to:

let rv = expr->method();

Likewise for options:

let rv = match expr {
Some(tmp) => tmp.method(),
None => None,
}

Would likewise be equivalent to:

let rv = expr->method();

As a result you could only ever call this on things that return 
results/options themselves.


Annoyingly enough this also means that the results need to be 
compatible which is still a problem. The example there would be an 
IO trait that is implemented by another system that also has its 
own error cases. Case in point: SSL wrapping that wants to fail 
with SSL errors in addition to IO errors. I fail to understand at 
the moment how library authors are supposed to deal with this.


Thoughts on this? Am I missing something entirely?


Regards,
Armin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Chaining Results and Options

2014-02-10 Thread Huon Wilson
It's actually Haskell's fmap, which we have in the form of .map for both 
Option[1] and Result[2], e.g. the proposed expr->method() is the same as 
expr.map(|x| x.method()) (which is still quite verbose).



Monadic bind comes in the form of the .and_then methods (which both 
Option and Result have).



[1]: 
http://static.rust-lang.org/doc/master/std/option/enum.Option.html#method.map
[2]: 
http://static.rust-lang.org/doc/master/std/result/enum.Result.html#method.map
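
A tiny illustration of the two (the string-to-integer conversion here is
just a placeholder for any fallible step):

fn parse_and_check(s: &str) -> Option<i64> {
    // `map`: transform the inner value, leaving None untouched.
    // This is what the proposed expr->method() would correspond to.
    let doubled = s.parse::<i64>().ok().map(|n| n * 2);

    // `and_then`: chain another step that can itself produce None.
    doubled.and_then(|n| if n >= 0 { Some(n) } else { None })
}

fn main() {
    assert_eq!(parse_and_check("21"), Some(42));
    assert_eq!(parse_and_check("-3"), None);
    assert_eq!(parse_and_check("x"), None);
}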



Huon


On 10/02/14 22:45, Sergei Maximov wrote:
It looks very similar to Haskell's monadic bind operator (>>=) at 
first glance.


On 02/10/2014 10:40 PM, Armin Ronacher wrote:

Hi,

I was playing around with the new IO system a lot and got some very 
high level feedback on that. Currently the IoResult objects implement 
a few traits to pass through to the included success value and to 
dispatch method calls.


That's nice but it means that from looking at the code you can easily 
miss that a result is involved and if you add methods you need to add 
manual proxy methods on the result.


At the end of the day the only thing you actually need to pass 
through is the error. So I would propose a new operator "->" that 
acts as a "resolve deref" operator. It would be operating on the 
"Some" part of an Option and the "Ok" part of a result and pass 
through the errors unchanged:


So essentially this:

let rv = match expr() {
Ok(tmp) => tmp.method(),
err => err,
};

Would be equivalent to:

let rv = expr->method();

Likewise for options:

let rv = match expr {
Some(tmp) => tmp.method(),
None => None,
}

Would likewise be equivalent to:

let rv = expr->method();

As a result you could only ever call this on things that return 
results/options themselves.


Annoyingly enough this also means that the results need to be 
compatible which is still a problem. The example there would be an IO 
trait that is implemented by another system that also has its own 
error cases. Case in point: SSL wrapping that wants to fail with SSL 
errors in addition to IO errors. I fail to understand at the moment 
how library authors are supposed to deal with this.


Thoughts on this? Am I missing something entirely?


Regards,
Armin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] user input

2014-02-08 Thread Huon Wilson
There is read_line: 
http://static.rust-lang.org/doc/master/std/io/trait.Buffer.html#method.read_line


use std::io::{stdin, BufferedReader};

fn main() {
    let mut stdin = BufferedReader::new(stdin());
    let line = stdin.read_line().unwrap();
    println!("{}", line);
}



Huon

On 09/02/14 11:44, Liigo Zhuang wrote:



On 9 February 2014 at 7:35 AM, "Alex Crichton" wrote:

>
> We do indeed want to make common tasks like this fairly lightweight,
> but we also strive to require that the program handle possible error
> cases. Currently, the code you have shows well what one would expect
> when reading a line of input. On today's master, you might be able to
> shorten it slightly to:
>
> use std::io::{stdin, BufferedReader};
>
> fn main() {
> let mut stdin = BufferedReader::new(stdin());
> for line in stdin.lines() {
> println!("{}", line);
> }
> }
>
> I'm curious thought what you think is the heavy/verbose aspects of
> this? I like common patterns having shortcuts here and there!
>

This is not a common pattern for stdin. Programs often need to process 
something immediately when the user presses the return key. So reading one 
line is more useful than reading multiple lines, at least for stdin. I 
agree we need stdin.readln or read_line.


> On Sat, Feb 8, 2014 at 3:06 PM, Renato Lenzi wrote:
> > I would like to manage user input for example by storing it in a 
string. I

> > found this solution:
> >
> > use std::io::buffered::BufferedReader;
> > use std::io::stdin;
> >
> > fn main()
> > {
> > let mut stdin = BufferedReader::new(stdin());
> > let mut s1 = stdin.read_line().unwrap_or(~"nothing");
> > print(s1);
> >  }
> >
> > It works but it seems (to me) a bit verbose, heavy... is there a 
cheaper way

> > to do this simple task?
> >
> > Thx.
> >
> > ___
> > Rust-dev mailing list
> > Rust-dev@mozilla.org 
> > https://mail.mozilla.org/listinfo/rust-dev
> >
> ___
> Rust-dev mailing list
> Rust-dev@mozilla.org 
> https://mail.mozilla.org/listinfo/rust-dev



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


[rust-dev] The rustpkg source lives on

2014-02-04 Thread Huon Wilson

Hi all,

Some people have expressed unhappiness that rustpkg was removed rather 
than fixed, so I've extracted the last rustpkg source from mozilla/rust 
and thrown it up on github[1], keeping the git history.


I'm personally not planning on doing any particular work on it, but I 
have copied the docs in, as well as activated travis (just building it, 
no tests, for now; and not on Rust CI yet either); and am going to 
attempt to copy the issues tagged A-pkg across from mozilla/rust.


If anyone is interested in working on it (or thinks they may be possibly 
interested at some point), feel free to contact me to be added to the 
organisation. If there are people interested in being a 
leader/coordinator of some sort, that would be very nice... since I'm 
not going to be filling that role.



[1]: https://github.com/rustpkg/rustpkg


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Proposal: Change Parametric Polymorphism Declaration Syntax

2014-02-01 Thread Huon Wilson

On 02/02/14 14:18, Daniel Micay wrote:

On Sat, Feb 1, 2014 at 10:12 PM, Eric Reed  wrote:

Well there's only 260 uses of the string "size_of" in rustc's src/ according
to grep and only 3 uses of "size_of" in servo according to GitHub, so I
think you may be overestimating its usage.

The number of calls to `size_of` isn't a useful metric. It's the
building block required to allocate memory (vectors, unique pointers)
and in the slice iterators (to perform pointer arithmetic). If it
requires a bound, then so will any code using a slice iterator.


Either way, I'm not proposing we get rid of size_of. I just think we should
put it in an automatically derived trait instead of defining a function on
all types.
Literally the only thing that would change would be code like this:

fn foo<T>(t: T) {
    let size = mem::size_of(t);
}

would have to be changed to:

fn foo<T: SizeOf>(t: T) {
    let size = SizeOf::size_of(t); // or t.size_of()
}

Is that really so bad?

Yes, it is.


Now the function's type signature documents that the function's behavior
depends on the size of the type.
If you see a signature like `fn foo<T>(t: T)`, then you know that it
doesn't.
There's no additional performance overhead and it makes size_of like other
intrinsic operators (+, ==, etc.).

The operators are not implemented for every type as they are for `size_of`.


I seriously don't see what downside this could possibly have.

Using unique pointers, vectors and even slice iterators will require a
semantically irrelevant `SizeOf` bound. Whether or not you allocate a
unique pointer to store a value internally shouldn't be part of the
function signature.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


To add to this, a SizeOf bound would be essentially equivalent to the 
Sized bound from DST, and I believe experimentation a while ago decided 
that requiring Sized is the common case (or, at least, so common that it 
would be extremely annoying to require it be explicit).
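
For reference, the shape being proposed would look something like this
(trait name from the thread; the blanket impl is only there to make the
sketch self-contained, not a claim about how the deriving would work):

use std::mem;

trait SizeOf {
    fn size_of_val(&self) -> usize;
}

// Stand-in for "automatically derived for every type".
impl<T> SizeOf for T {
    fn size_of_val(&self) -> usize {
        mem::size_of::<T>()
    }
}

// The point of the proposal: the signature now records that the body
// depends on the size of T, unlike a bare `fn foo<T>(t: T)`.
fn foo<T: SizeOf>(t: T) {
    println!("{} bytes", t.size_of_val());
}

fn main() {
    foo(0u32);      // 4 bytes
    foo([0u8; 16]); // 16 bytes
}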



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] "let mut" <-> "var"

2014-01-29 Thread Huon Wilson
Running the same commands just inside src/librustc (which is essentially 
client code) gives:


let mut = 1051
let (no mut) = 5752

So the ratio is even more skewed toward immutable lets, and, librustc is 
written in "old" Rust, so I can only see the ratio moving away from let 
mut even more: e.g. there are still instances of patterns like `let mut 
v = ~[]; for x in iter { v.push(x); }` rather than just `let v = 
iter.collect();`.



(Obviously those grep commands give a very simplistic view, but I think 
it's accurate enough to demonstrate that `let mut` is not the common case.)


On 30/01/14 14:18, Samuel Williams wrote:
Do you think for client code it would be the same proportion as in 
"library" code?



On 30 January 2014 16:13, Huon Wilson wrote:


On 30/01/14 14:09, Samuel Williams wrote:

I agree that it is syntactic salt and that the design is to
discourage mutability. I actually appreciate that point as a
programmer.

w.r.t. this specific issue: I think what concerns me is that it
is quite a high burden for new programmers (I teach COSC1xx
courses to new students so I have some idea about the level of
new programmers). For example, you need to know more detail about
what is going on - new programmers would find that difficult as
it is one more concept to overflow their heads.

Adding "var" as a keyword identically maps to new programmer's
expectations from JavaScript. Writing a program entirely using
"var" wouldn't cause any problems right? But, could be optimised
more (potentially) if using "let" for immutable parts.

Anyway, I'm not convinced either way, I'm not sure I see the
entire picture yet. But, if I was writing code, I'd certainly get
sick of writing "let mut" over and over again - and looking at
existing rust examples, that certainly seems like the norm..



Inside the main rust repository:

$  git grep 'let ' -- '*.rs' | grep -v mut | wc -l
17172
$ git grep 'let ' -- '*.rs' | grep mut | wc -l
5735

i.e. there are approximately 3 times more non-mutable variable
bindings than there are mutable ones.








On 30 January 2014 15:59, Samuel Williams wrote:

I guess the main gain would be less typing of what seems to
be a reasonably common sequence, and the formalisation of a
particular semantic pattern which makes it easier to
recognise the code when you visually scanning it.


On 30 January 2014 15:50, Kevin Ballard wrote:

> On Jan 29, 2014, at 6:43 PM, Brian Anderson wrote:

> On 01/29/2014 06:35 PM, Patrick Walton wrote:
>> On 1/29/14 6:34 PM, Samuel Williams wrote:
>>> Perhaps this has been considered already, but when
I'm reading rust code
>>> "let mut" just seems to stick out all over the place.
Why not add a
>>> "var" keyword that does the same thing? I think there
are lots of good
>>> and bad reasons to do this or not do it, but I just
wanted to propose
>>> the idea and see what other people are thinking.
>>
>> `let` takes a pattern. `mut` is a modifier on
variables in a pattern. It is reasonable to write `let
(x, mut y) = ...`, `let (mut x, y) = ...`, `let (mut x,
mut y) = ...`, and so forth.
>>
>> Having a special "var" syntax would defeat this
orthogonality.
>
> `var` could potentially just be special-case sugar for
`let mut`.

To what end? Users still need to know about `mut` for all
the other uses of patterns. This would reserve a new
keyword and appear to duplicate functionality for no gain.

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org <mailto:Rust-dev@mozilla.org>
https://mail.mozilla.org/listinfo/rust-dev





___
Rust-dev mailing list
Rust-dev@mozilla.org  <mailto:Rust-dev@mozilla.org>
https://mail.mozilla.org/listinfo/rust-dev



___
Rust-dev mailing list
Rust-dev@mozilla.org <mailto:Rust-dev@mozilla.org>
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] "let mut" <-> "var"

2014-01-29 Thread Huon Wilson

On 30/01/14 14:09, Samuel Williams wrote:
I agree that it is syntactic salt and that the design is to discourage 
mutability. I actually appreciate that point as a programmer.


w.r.t. this specific issue: I think what concerns me is that it is 
quite a high burden for new programmers (I teach COSC1xx courses to 
new students so I have some idea about the level of new programmers). 
For example, you need to know more detail about what is going on - new 
programmers would find that difficult as it is one more concept to 
overflow their heads.


Adding "var" as a keyword identically maps to new programmer's 
expectations from JavaScript. Writing a program entirely using "var" 
wouldn't cause any problems right? But, could be optimised more 
(potentially) if using "let" for immutable parts.


Anyway, I'm not convinced either way, I'm not sure I see the entire 
picture yet. But, if I was writing code, I'd certainly get sick of 
writing "let mut" over and over again - and looking at existing rust 
examples, that certainly seems like the norm..




Inside the main rust repository:

$  git grep 'let ' -- '*.rs' | grep -v mut | wc -l
17172
$ git grep 'let ' -- '*.rs' | grep mut | wc -l
5735

i.e. there are approximately 3 times more non-mutable variable bindings 
than there are mutable ones.








On 30 January 2014 15:59, Samuel Williams wrote:


I guess the main gain would be less typing of what seems to be a
reasonably common sequence, and the formalisation of a particular
semantic pattern which makes it easier to recognise the code when
you're visually scanning it.


On 30 January 2014 15:50, Kevin Ballard <ke...@sb.org> wrote:

On Jan 29, 2014, at 6:43 PM, Brian Anderson
<bander...@mozilla.com> wrote:

> On 01/29/2014 06:35 PM, Patrick Walton wrote:
>> On 1/29/14 6:34 PM, Samuel Williams wrote:
>>> Perhaps this has been considered already, but when I'm
reading rust code
>>> "let mut" just seems to stick out all over the place. Why
not add a
>>> "var" keyword that does the same thing? I think there are
lots of good
>>> and bad reasons to do this or not do it, but I just wanted
to propose
>>> the idea and see what other people are thinking.
>>
>> `let` takes a pattern. `mut` is a modifier on variables in
a pattern. It is reasonable to write `let (x, mut y) = ...`,
`let (mut x, y) = ...`, `let (mut x, mut y) = ...`, and so forth.
>>
>> Having a special "var" syntax would defeat this orthogonality.
>
> `var` could potentially just be special-case sugar for `let
mut`.

To what end? Users still need to know about `mut` for all the
other uses of patterns. This would reserve a new keyword and
appear to duplicate functionality for no gain.

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org 
https://mail.mozilla.org/listinfo/rust-dev





___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Compile-time function evaluation in Rust

2014-01-28 Thread Huon Wilson

On 29/01/14 10:45, Kevin Ballard wrote:

On Jan 28, 2014, at 3:16 PM, Pierre Talbot  wrote:


On 01/28/2014 11:24 PM, Kevin Ballard wrote:

It sounds like you're proposing that arbitrary functions may be eligible for 
CTFE if they happen to meet all the requirements, without any special 
annotations. This seems like a bad idea to me. I understand why it's 
attractive, but it means that seemingly harmless changes to a function's 
implementation (but not its signature) can cause compiler errors in other 
modules, or even other crates if the AST for the function happens to be made 
extern.
A more conservative approach would be to require the #[ctfe] annotation, which 
then imposes all the given restrictions on the function. The downside is such a 
function then is restricted to only calling other CTFE functions, so we'd have 
to go in to the standard libraries and add this annotation whenever we think 
it's both useful and possible.

This approach mirrors the approach being used by C++11/C++14.

-Kevin

I understand your point of view but adding #[ctfe] doesn't solve the problem 
either; the library designer could remove this annotation, couldn't they? I didn't 
spell it out, but I gave a different semantic to #[ctfe] than what you 
understood. Let me rephrase it:

* #[ctfe] hints the compiler that performing CTFE outside of the contexts (as 
specified) is safe. It means that for any input this function will terminate 
[in a reasonable amount of time and memory].

We should keep in mind the drawbacks of the constexpr semantic:

1. Force the library designer to think about CTFE, the user might be in a 
better position since he knows well which parameters he'll give to this 
function.
2. Annotate functions means more maintenance, more changes and more errors. 
Moreover, the C++11 constexpr only allow a subset of the language, which is 
practical for the compiler implementor but not for the library designer. In D, 
they specify when a function is *not* eligible.

Yes, I was using #[ctfe] to mean something slightly different than you were. In my case, 
it meant "mark this function as eligible for CTFE, and impose all the CTFE 
restrictions". And it does fix the problem I mentioned, because #[ctfe] would be 
considered part of the function signature, not the function implementation. Everyone is 
already used to the idea that modifying the function signature may cause compiler errors 
at the call site. But the only example I can think of right now for when changing a 
function's _implementation_ causes call site compiler errors is when you're using C++ 
templates.


FWIW, `transmute` causes such errors in Rust. e.g. `fn errors<T>(x: T) { 
unsafe { std::cast::transmute::<T, uint>(x); } }` will fail to compile 
when passed u16, but not when passed uint, and this isn't encoded in the 
type signature.
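A hedged sketch of the two call sites described above (same-era syntax,
not from the original message):

fn main() {
    errors(1u);    // compiles: uint -> uint is a same-size transmute
    errors(1u16);  // rejected: u16 and uint have different sizes
}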



Huon



Not only that, but with your approach, changing the implementation of one 
function could accidentally cause a whole host of other functions to become 
ineligible for CTFE. And the farther apart the actual source of the problem, 
and the resulting error, the harder it is to diagnose and fix such errors.

That said, I was not aware that D already takes this approach, of allowing CTFE 
by default. I'm curious how it works for them, and how they handle these 
problems.

-Kevin
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Deprecating rustpkg

2014-01-28 Thread Huon Wilson

On 28/01/14 19:36, György Andrasek wrote:
I never quite understood the problem `rustpkg` was meant to solve. For 
building Rust code, `rustc --out-dir build` is good enough. For 
running tests and benchmarks, `rustc` is good enough. For downloading 
things, I still need to feed it a github address, which kinda takes 
away any value it could have over `git clone` or git submodules.


rustpkg (theoretically) manages fetching and building dependencies (with 
the appropriate versions), as well as making sure those dependencies can 
be found (i.e. what the -L flag does for rustc).




What I would actually need from a build system, i.e. finding 
{C,C++,Rust} libraries, building {C,C++,Rust} libraries/executables 
and linking them to said {C,C++,Rust} libraries, it doesn't do. It 
also doesn't bootstrap rustc.




rustpkg is unfinished and has several bugs, so describing its current 
behaviour/usage as if it were its intended behaviour/usage is not 
correct. I believe it was designed to handle native (non-Rust) 
dependencies to some degree.



Huon


[Disclaimer: I've never quite got a rustpkg workflow going. It's 
probably awesome, but completely overshadowed by `rustc`.]


On 01/28/2014 09:02 AM, Tim Chevalier wrote:

On Mon, Jan 27, 2014 at 10:20 PM, Val Markovic  wrote:


On Jan 27, 2014 8:53 PM, "Jeremy Ong"  wrote:


I'm somewhat new to the Rust dev scene. Would anybody care to 
summarize
roughly what the deficiencies are in the existing system in the 
interest of
forward progress? It may help seed the discussion for the next 
effort as

well.


I'd like to second this request. I haven't used rustpkg myself but 
I've read

its reference manual (
https://github.com/mozilla/rust/blob/master/doc/rustpkg.md) and it 
sounds

like a reasonable design. Again, since I haven't used it, I'm sure I'm
missing some obvious flaws.


Thirded. I implemented rustpkg as it's currently known, and did so in
the open, detailing what I was thinking about in a series of
exhaustively detailed blog posts. Since few people seemed very
interested in providing feedback on it as I was developing it (with
the exception of Graydon, who also worked on the initial design), I
assumed that it was on the right track. I rewrote rustpkg because
there was a perception that the initial design of rustpkg was not on
the right track, nor was cargo, but obviously simply rewriting the
whole system from scratch in the hopes that it would be better didn't
work, since people are talking about throwing it out. So, before
anybody embarks on a third rewrite in the hopes that *that* will be
better, I suggest that a working group form to look at what went wrong
in the past 2 or 3 attempts at implementing a build system / package
system for Rust, so that those mistakes can be learned from. Perhaps
all that needs to be done differently is that someone more central to
the community needs to write it, but if that's what it takes, it seems
preferable to the wasted time and effort that I imagine will ensue
from yet another rewrite for the sake of throwing out code.

Cheers,
Tim

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Deriving keyword

2014-01-24 Thread Huon Wilson

On 24/01/14 19:28, Lee Braiden wrote:

On 24/01/14 04:37, Steven Fackler wrote:
The deriving infrastructure is implemented as a procedural macro (or 
syntax extension) that's built into the compiler. Historically, _all_ 
syntax extensions had to be built in but that is no longer the case: 
https://github.com/mozilla/rust/pull/11151. It's now possible to 
write something like #[deriving_Drawable] that will implement 
Drawable for types but you can't currently add new traits to 
#[deriving(..)] to make something like #[deriving(Clone, Drawable)] 
work. It would be possible to support that, but it would make 
#[deriving(..)] "special" in ways that other syntax extensions aren't 
and it's unclear whether or not that's a good idea.


What exactly is the point of this #[...] syntax, anyway?  I'm sure 
there's a reason, but I *currently* don't see how #[deriving(...)] is 
better than simply "deriving", like Haskell has.  Is maintaining a low 
keyword count really THAT important, that we have to have ugly #[] 
wrappers around things?  I had thought that #[] represented 
meta-information, like how to compile/link the file, but if deriving 
is in there, it's very much involving the language proper, too.


Also, if it's built into the compiler, that makes it special anyway, 
in my book.  However, the derivation feature provides such great 
functionality, that I'd be very OK with it being a keyword. At least, 
if it could be extended for other types -- i.e., was made to support 
deriving_Drawable and so forth.


Finally (and this is more curiosity than suggestion, because it could 
make the language too dynamic/magic), I wonder what's involved in 
dropping the "...deriving..." syntax altogether, and automatically 
deriving functionality for types that implement all the necessary 
underlying features?  It seems like that's what's done for types that 
fit POD, for example.




The #[] is just the form of attribute attached to an item[1], and these 
attributes are general annotations that can be used by any part of the 
compilation process (and even by external tools), e.g. #[no_mangle] to 
stop a function's symbol being mangled by the compiler, or 
#[allow(unused_variable)] to stop the 'unused_variable' compiler 
warning, and, syntax extensions (aka procedural macros), which is what 
#[deriving] is: it's just an AST based transformation (which 
unfortunately results in some weird error messages), and users can use 
the functionality added by #11151 to implement their own (e.g. one, if 
they were so inclined, could write a #[getters] syntax extension that 
would automatically create getter methods for the fields of a struct).
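As a small, hedged illustration of such attributes (2014-era syntax; the
items and names below are made up, not from this thread):

#[allow(unused_variable)]   // silence one specific lint inside this function
fn demo() { let x = 3; }

#[deriving(Clone, Eq)]      // ask the compiler to generate Clone and Eq impls
struct Point { x: int, y: int }

#[no_mangle]                // keep the symbol name unmangled, e.g. for FFI
pub extern fn make_point() { }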


Also, with some effort, you can *now* write custom derivings using the 
same core code as the real #[deriving] does; the only difference is you 
don't get to call it like #[deriving(Drawable)].



There are a few "kinds"[2] that automatically inherit their properties 
(Pod is among them), but, as Daniel says, it would be incorrect to do it 
for all traits automatically.



[1]: The exact syntax of these may/will be changing, see 
https://github.com/mozilla/rust/issues/2569


[2]: http://static.rust-lang.org/doc/master/std/kinds/index.html


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Deriving keyword

2014-01-23 Thread Huon Wilson

On 24/01/14 15:32, benjamin adamson wrote:
Question, what constitutes whether a 'trait' is applicable for 
implementation by the #deriving() attribute?


According to the language specification on master:
http://static.rust-lang.org/doc/master/rust.html#deriving

There exists a static list. I found myself interested in the idea of 
using the deriving attribute to derive a simple drawable implementation:


https://github.com/JeremyLetang/rust-sfml/blob/master/src/rsfml/traits/drawable.rs

but then looked up the attribute in the rust manual, and noticed that 
there is a static list of what I will call 'traits that support the 
deriving attribute'. Why the restriction? Is there some prior reading 
on this? Is there any plan on letting libraries define more types that 
can be 'derivable'?



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


There's a static list because they're currently hard-coded into the 
compiler.


With external syntax extensions, it's now possible to define something 
like `#[deriving_Drawable]`, and, iirc, I left the core deriving 
infrastructure public in syntax::ext::deriving::generic, so you can get 
most of the work done for you that way.



There are various things to be worked out for adding traits directly to 
#[deriving(Foo)], including (but not limited to) how namespacing works 
(e.g. if two libraries both define a trait with the same name and 
provide #[deriving] implementations for it), and whether we want it to 
be a privileged syntax extension where users can directly add to its 
map between traits and deriving implementation.



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Lifetime required to outlive itself

2014-01-21 Thread Huon Wilson

On 22/01/14 12:41, Scott Lawrence wrote:
This code compiles successfully: http://ix.io/a34 . I believe this 
behavior is correct. Just so it's clear what this code does: f() takes 
a `&mut int` and adds it to an array - the idea is that all of the 
`&mut int` can be changed at some later time. Naturally, there's some 
fancy lifetime juggling involved in this (which I may have gotten wrong).


Uncommenting the commented parts (the method f() in the impl A, in 
particular) yields the error message shown at the bottom, which 
appears to say that the lifetime created in the second parameter of 
f() does not necessarily outlive itself.


Is there some especially complicated aspect of lifetimes as they 
interact with &self, or is this indeed a bug?




I think it's the compiler tricking you: the 'a in `fn f<'a>(&mut self, 
...)` is shadowing the `impl<'a> A<'a>` i.e. they are different 
lifetimes that happen to have the same identifier. Changing it to `fn 
f(&mut self, &'a mut int)` should work.
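A minimal, hedged sketch of the shadowing issue (the type and field names
are made up; this is not the code from http://ix.io/a34):

struct A<'a> {
    elems: ~[&'a mut int],
}

impl<'a> A<'a> {
    // Shadowing: this 'a would be a fresh lifetime parameter, unrelated to
    // the impl's 'a, so pushing into `self.elems` is rejected:
    // fn f<'a>(&mut self, x: &'a mut int) { self.elems.push(x) }

    // Fixed: reuse the impl's 'a instead of introducing a new one.
    fn f(&mut self, x: &'a mut int) { self.elems.push(x) }
}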



There's a bug open about warning on shadowed generics, because, as this 
demonstrates, you end up with hard-to-diagnose error messages: 
https://github.com/mozilla/rust/issues/11658



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] sandboxing Rust?

2014-01-18 Thread Huon Wilson

On 19/01/14 14:23, Jack Moffitt wrote:

Rust's safety model is not intended to prevent untrusted code from
doing evil things.

We'd like something like this for Servo, but I think the idea was to
see if we couldn't use NaCl to do this kind of sandboxing. The NaCl
devs seemed to think this might be interesting as well.

jack.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Isn't the "correct" way to do this to use the OS's security features?

FWIW, https://github.com/mozilla/rust/issues/6811 covers allowing 
spawning tasks as sandboxed tasks, and strcat wrote up something about 
sandboxing on Linux for Servo: 
https://github.com/mozilla/servo/wiki/Linux-sandboxing



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Call for central external packages repository

2014-01-13 Thread Huon Wilson

On 13/01/14 22:09, Hans Jørgen Hoel wrote:

Hi,

Regarding rust-ci, I'm about to deploy some changes for it sometime in 
the next few days.


Changes will include:

- project editing enabled (including removal of projects :))
- possibility for adding categories to projects and a category based index
- documentation upload from Travis CI builds (by adding two lines to 
the .travis.yml)


Woah, woah. This sounds awesome.



Huon




I've also started working on search which would index project 
descriptions, uploaded documentation and sources.


Source for the webapp will be available on GitHub soon if anyone is 
interested in collaborating.


Regards,

Hans Jørgen


On 13 January 2014 11:43, Gaetan wrote:


Hi

I know this question has been debated, however I'd like to highly
recommend to give a clean infrastructure to register, list, search
and describe external libraries developed by everyone.

For instance, how do I know which http server lib I should use for
rust 0.9?

This mailing list is quite good for announcing new packages, but
not for finding existing projects that might have solved a given
problem before me.

rust-ci


This is the main candidate for this job, however I find it quite
difficult to tell which project does what. It misses a "one line
project description" column. Its main purpose seems to be checking that
this set of projects still compiles against the master git branch,
but there are other libs that are not listed here.

I would recommend a central repository web site, working like pypi
or other community based repo, that would stimulate user contribution.

Such central repository would provide the following features:
- hierarchical project organisation (look at here
)
- provide clean forms to submit, review, publish, vote project
- clearly display which version of the rust compiler (0.8, 0.9,
master,...) this lib is validated against. For master, this would be
linked to rust-ci. I also like the idea of having automatic
rust-ci validation for rust 0.8, 0.9,... Maybe with several levels
of validation: compile validated, peer/administrator validated,
recommended,...
- good search form. This is how users look for a given project
- popular project. I tend to choose a project over its popularity.
The more "popular" a project is, or the more downloads count a lib
have, the more I think it will be actively maintained or more
stable than the others.
- clear project dependency listing
- be promoted by the rust homepage (repo.rust.org?
rustpkg.rust.org?, ...), so any ordinary user can easily find it

At first sight, I think we could just extend rust-ci to do
this, reoriented towards package listing for a given rust version, by
adding new pages "package index for 0.9" with just a project name
column ("rust-http" and not "chris-morgan/rust-http") and a description
column (extracted from the github project description? This also
forces every project to be on github?). And what about
tarballs or non-github projects?

What do you think about this idea? I am interested in working on
this, but would like to have your opinion on it.

Thanks
-
Gaetan


___
Rust-dev mailing list
Rust-dev@mozilla.org 
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-12 Thread Huon Wilson

On 13/01/14 00:23, james wrote:

On 11/01/2014 22:38, Owen Shepherd wrote:
I agree, however, I feel that the names like "i32" and "u32" should 
be trap-on-overflow types. The non overflow ones should be "i32w" 
(wrapping) or similar.


Why? Because I expect that otherwise people will default to the 
wrapping types. Less typing. "It'll never be a security issue", or 
"Looks safe to me", etc, etc. Secure by default is a good thing, IMO
I don't think making 'i32' have different semantics by default from 
int32_t (or from the 'i32' typedef most of us will have used for 
years) is a good idea in a wannabe systems programming language.  It 
is too surprising.


There might be a good case for having a pragma control some 'check for 
overflow' in a paranoid test mode, but i think that most programmers, 
most of the time, will expect 2s complement arithmetic 'as usual'.


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Coincidentally, i32 already has different semantics to int32_t: overflow 
of signed types is undefined behaviour in C, but is defined (as 
wrap-around) in Rust.
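A small, hedged illustration of that difference (Rust semantics as
discussed in this thread; later Rust versions trap on overflow in debug
builds instead):

let x: i32 = 0x7fffffff;   // the maximum i32 value
let y = x + 1;             // defined in Rust: wraps around to -2147483648
// The equivalent int32_t addition in C is undefined behaviour instead.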



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Macros expanding to multiple statements

2014-01-11 Thread Huon Wilson

That test is for multiple *items*, not statements.

For the moment, you just have to wrap the interior of a macro expanding 
to an expression in a set of braces, so that it becomes a single statement.


macro_rules! my_print(
($a:expr, $b:expr) => (
{
println!("{:?}", a);
println!("{:?}", b);
}
);
)

Multi-statement macros are covered by 
https://github.com/mozilla/rust/issues/10681 .



Huon


On 12/01/14 13:40, Ashish Myles wrote:
Rust 0.9 indicates that support for expansion of macros into multiple 
statements is now supported, and the following example from the test 
suite works for me.

https://github.com/mozilla/rust/blob/master/src/test/run-pass/macro-multiple-items.rs

However, I receive an error for the following code

#[feature(macro_rules)];

macro_rules! my_print(
($a:expr, $b:expr) => (
println!("{:?}", a);
println!("{:?}", b);
);
)

fn main() {
let a = 1;
let b = 2;
my_print!(a, b);
}

(Note that the ^~~ below points at println.)

$ rustc macro_ignores_second_line.rs 
macro_ignores_second_line.rs:6:9: 6:16 error: macro expansion ignores 
token `println` and any following
macro_ignores_second_line.rs:6  
println!("{:?}", b);

   ^~~
error: aborting due to previous error
task 'rustc' failed at 'explicit failure', 
/home/marcianx/devel/rust/checkout/rust/src/libsyntax/diagnostic.rs:75 

task '' failed at 'explicit failure', 
/home/marcianx/devel/rust/checkout/rust/src/librustc/lib.rs:453 




What's the right way to do this?

Ashish


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-10 Thread Huon Wilson

On 11/01/14 16:58, Isaac Dupree wrote:
Scheme's numeric tower is one of the best in extant languages.  Take a 
look at it.  Of course, its dynamic typing is poorly suited for Rust.


Arbitrary-precision arithmetic can get you mathematically perfect 
integers and rational numbers, but not real numbers.  There are an 
uncountably infinite number of real numbers and sophisticated computer 
algebra systems are devoted the problem (or estimates are used, or you 
become unable to compare two real numbers for equality).  The MPFR C 
library implements arbitrarily high precision floating point, but that 
still has all the pitfalls of floating-point that you complain about. 
For starters, try representing sqrt(2) and testing its equality with 
e^(0.5 ln 2).


In general, Rust is a systems language, so fixed-size integral types 
are important to have.  They are better-behaved than in C and C++ in 
that signed types are modulo, not undefined behaviour, on overflow.  
It could be nice to have integral types that are task-failure on 
overflow as an option too. 


We do already have some Checked* traits (using the LLVM intrinsics 
internally), which let you have task failure as one possibility on 
overflow. e.g. 
http://static.rust-lang.org/doc/master/std/num/trait.CheckedAdd.html 
(and Mul, Sub, Div too).
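A hedged sketch of using it (2014-era API; the exact method signature may
have shifted since, and today this is the inherent method `checked_add`):

use std::num::CheckedAdd;

let big: i32 = 0x7fffffff;
match big.checked_add(&1) {
    Some(sum) => println!("{:?}", sum),
    None => fail!("i32 addition overflowed"), // opt in to task failure
}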



Huon


As you note, bignum integers are important too; it's good they're 
available.  I think bignum rationals would be a fine additional choice 
to have (Haskell and GMP offer them, for example).


-Isaac


On 01/11/2014 12:15 AM, Lee Braiden wrote:

This may be go nowhere, especially so late in Rust's development, but I
feel like this is an important, relatively small change (though a
high-profile one).  I believe it could have a large, positive impact in
terms of targeting new developer communities, gaining more libraries and
applications, giving a better impression of the language, AND on
performance and futureproofing.

However, a lot of people who are interested in performance will probably
baulk at this, on first sight.  If you're in that group, let me
encourage you to keep reading, at least until the points on performance
/improvements/.  Then baulk, if you like ;)

Also, I said it in the post as well, but it's late here, so apologies
for any readability / editing issues.  I tried, but sleep beckons ;)


http://blog.irukado.org/2014/01/an-appeal-for-correct-capable-future-proof-math-in-nascent-programming-languages/ 




--
Lee



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Appeal for CORRECT, capable, future-proof math, pre-1.0

2014-01-10 Thread Huon Wilson

On 11/01/14 16:48, Patrick Walton wrote:

On 1/10/14 9:41 PM, Corey Richardson wrote:

The current consensus on this subject, afaik, is the rename int/uint
to intptr/uintptr. They're awful names, but it frees up int for a
*fast* bigint type. Fast here is key. We can't have a suboptimal
numeric type be the recommended default. We need to perform at least
as well as GMP for me to even consider it. Additionally we'd have
generic numeric literals. I don't think anyone wants what we current
have for *numerics*. Fixed-size integers are necessary for some tasks,
but numerics is not one of them.


I wasn't aware of this consensus. I'm not sure what I think; int and 
uint as they are is pretty nice for array indexing.


The RFC/issue is https://github.com/mozilla/rust/issues/9940


Huon



Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Emscripten for Rust?

2014-01-05 Thread Huon Wilson
On first glance it seems like C -> Rust would be very feasible via a lot 
of `unsafe`, `*T` and `*mut T`.



On 06/01/14 13:21, Corey Richardson wrote:

Any such conversion is going to be lossy enough as to be worthless.
It's only acceptable for emscripten because the web platform can't run
native code. But any use of Rust is already going to be  targetting
something that can run C.

On Sun, Jan 5, 2014 at 9:11 PM, Greg  wrote:

I'd happily chip in for a Kickstarter-type project to automatically convert
C/C++ to Rust.

Anything like this exists or anyone planning on making this type of
announcement?

- Greg

--
Please do not email me anything that you are not comfortable also sharing
with the NSA.


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Thoughts on the Rust Roadmap

2013-12-31 Thread Huon Wilson

On 01/01/14 04:32, Patrick Walton wrote:

On 12/31/13 6:52 AM, Huon Wilson wrote:

 fn my_pipeline<I: Iterator<int>>(x: I) ->
MapIterator<FilterIterator<I, what_do_I_write_for_the_function_type_here>, and_again> {
  x.filter(|un| boxed).map(|also| unboxed)
 }

(where the function/closure is the second param to 
{Map,Filter}Iterator.)


Ah, OK. You're right -- in that case you would need to use a struct 
and implement `call` manually, unless we had `-> _`.


Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Random thought: if we do get some form of -> _, but people dislike 
general return value inference a lot, we could restrict _ in return 
values to just anonymous types that can't be written explicitly, e.g.


-> MapIterator<FilterIterator<I, _>, _>

in this case.


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Thoughts on the Rust Roadmap

2013-12-31 Thread Huon Wilson

On 01/01/14 01:43, Patrick Walton wrote:

On 12/31/13 6:09 AM, Huon Wilson wrote:

On 01/01/14 00:55, Armin Ronacher wrote:

Hi,

On 30/12/2013 17:29, Patrick Walton wrote:
This is the first time I've heard of this as a missing feature, and 
I'm

opposed. This would make typechecking significantly more complex.

I'm not saying someone should add decltype :)  Just that from using
the iterators we have now it becomes quite obvious that there are
missing tools to use them to the fullest extent.


The question in my mind is whether all these things that we want are 
backwards compatible, not whether they're nice to have. Anything 
backwards compatible that is not already in the language is probably 
post-1.0 at this point. *This is not because we don't want to add new 
features*—it's OK to add new features to the language. Version 1.0 
doesn't mean we are done and the language is frozen for all time. It 
just means we're going to stop making breaking language changes.



If we were to have unboxed closures similar to C++, where each closure
is an (implicit) unique type, we'd likely need something like decltype
(well, decltype(auto)) to make it possible to return them and things
containing them, e.g. higher-order iterators like .map and .filter.

(cc https://github.com/mozilla/rust/issues/3228 and
https://github.com/mozilla/rust/issues/10448)


Returning unboxed closures doesn't strike me as all that useful at the 
moment, unless we add back capture clauses to allow values to be moved 
in. Otherwise all you can refer to as upvars are passed-in parameters 
(due to lifetime restrictions).


In the event that you want to return closures that move in values, I 
would say just use an explicit struct and a trait for now. We can 
revisit this decision backwards compatibly if we need to in the future.


Returning an iterator containing an unboxed closure that was passed in 
doesn't require that feature, I don't think:


pub fn map>(x: &[T], f: F) -> MapIterator { ... }


I mean

fn my_pipeline<I: Iterator<int>>(x: I) -> 
MapIterator<FilterIterator<I, what_do_I_write_for_the_function_type_here>, and_again> {

 x.filter(|un| boxed).map(|also| unboxed)
}

(where the function/closure is the second param to {Map,Filter}Iterator.)

Huon


Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Thoughts on the Rust Roadmap

2013-12-31 Thread Huon Wilson

On 01/01/14 01:15, Oren Ben-Kiki wrote:
Not to re-ignite the thread about this, but one way `proc`s aren't 
sufficient because they are send-able (that is, allocated on the 
heap). Rust lacks call-once stack-allocated closure types, which are 
immensely useful for manipulating container elements, creating DSL-ish 
syntax, etc.




(Nitpick, being sendable and being allocated on the heap are independent 
in general. e.g. int is sendable, but isn't heap allocated (if we get 
unboxed closures that capture things by value, we'll likely be able to 
send them too despite not necessarily being heap allocated), and ~(@int) 
is heap allocated, but isn't sendable.)



That's separate from the `decltype` issue, though.

On Tue, Dec 31, 2013 at 3:55 PM, Armin Ronacher 
<armin.ronac...@active-4.com> wrote:


Hi,

On 30/12/2013 17:29, Patrick Walton wrote:

Is `proc` not sufficient? We could prioritize adding unboxed
closures,

but since they're backwards compatible as far as I know, I
don't see a
major need to add them before 1.0.

Procs can be called once.



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Thoughts on the Rust Roadmap

2013-12-31 Thread Huon Wilson

On 01/01/14 00:55, Armin Ronacher wrote:

Hi,

On 30/12/2013 17:29, Patrick Walton wrote:

This is the first time I've heard of this as a missing feature, and I'm
opposed. This would make typechecking significantly more complex.
I'm not saying someone should add decltype :)  Just that from using 
the iterators we have now it becomes quite obvious that there are 
missing tools to use them to the fullest extent.




If we were to have unboxed closures similar to C++, where each closure 
is an (implicit) unique type, we'd likely need something like decltype 
(well, decltype(auto)) to make it possible to return them and things 
containing them, e.g. higher-order iterators like .map and .filter.


(cc https://github.com/mozilla/rust/issues/3228 and 
https://github.com/mozilla/rust/issues/10448)




Huon


Is `proc` not sufficient? We could prioritize adding unboxed closures,
but since they're backwards compatible as far as I know, I don't see a
major need to add them before 1.0.

Procs can be called once.


It'd be best to file specific issues here. I'm sympathetic to wanting to
adding more features if they're necessary, but none of the *specific*
things mentioned in this post seem like blockers to me.
I will surely file issues for things that I encounter.  It's just that 
I was a bit surprised to see that there are ambitions to stabilize the 
language quickly.  It just feels like that might be too early.


Being active in the Python community I can tell you that a Python 3.0 
was the worst decision ever.  It would be a shame if a Rust 2.0 
suffers from the same problems.



Regards,
Armin

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Using libgreen/libnative

2013-12-28 Thread Huon Wilson

This is awesome!


I have a question: does the #[boot] addition mean that we now have 5 
ways to (partially) set-up the entry point of a program?

- fn main
- #[main]
- #[start]
- #[boot]
- #[lang="start"]


Huon


On 29/12/13 05:37, Alex Crichton wrote:

Greetings rusticians!

Recently pull request #10965 landed, so the rust standard library no longer has
any scheduling baked into it, but rather it's refactored out into two libraries.
This means that if you want a 1:1 program, you can jettison all M:N support with
just a few `extern mod` statements. A brief overview of the current state of
things is:

1. All programs not using std::rt directly should still continue to operate as
usual today
2. All programs still start up in M:N mode, although this will likely change
once 1:1 I/O work has been completed
3. There are two more libraries available, libgreen and libnative, which allow
custom fine-grained control over how programs run.
4. Whenever a new task is spawned, it is by default spawned as a "sibling" which
means that it is spawned in the same mode as the spawning thread. This means
that if a green thread spawns a thread then it will also be a green thread,
while a native thread will spawn another OS thread.

With this migration, there have been a few changes in the public APIs, and
things still aren't quite where I'd like them to be. PR #11153 is the last major
step in this process as it allows you to link to both libnative and libgreen,
yet still choose which one is used to boot your program. Some breaking changes
you may notice are:

* it's still not possible to easily start up in 1:1 mode - This is fixed by
   #11153. In the meantime, you can use #[start] with native::start in order to
   boot up in 1:1 mode. Be warned though that the majority of I/O is still
   missing from libnative (see PR #11159 for some progress)

   https://gist.github.com/8162357

* std::rt::{start, run} are gone - These are temporarily moved into green/native
   while #[boot] is getting sorted out. The green/native counterparts perform as
   you would expect.

   https://gist.github.com/8162372

* std::rt::start_on_main_thread is gone - This function has been removed with no
   direct counterpart. As a consequence of refactoring the green/native
   libraries, the "single threaded" spawn mode for a task has been removed (this
   doesn't make sense in 1:1 land). This behavior can be restored by directly
   using libnative and libgreen. You can use libgreen to spin up a pool of
   schedulers and then use libnative for the main task to do things like GUI
   management.

   https://gist.github.com/8162399

And of course with the removal of some features comes the addition of new ones!
Some new things you may notice are:

* libstd is no longer burdened with libgreen and libnative! This means that the
   compile times for libstd should be a little faster, but most notably those
   applications only using libstd will have even less code pulled in than 
before,
   meaning that libstd is that much closer to being used in a "bare metal"
   context. It's still aways off, but we're getting closer every day!

* libgreen has a full-fleged SchedPool type. You can see a bit of how it's used
   in gist I posted above. This type is meant to represent a dynamic pool of
   schedulers. Right now it's not possible to remove a scheduler from the pool
   (requires some more thought and possibly libuv modifications), but you can 
add
   new schedulers dynamically to the pool.

   This type supercedes the ThreadPool type in libextra at this point, and
   management of a SchedPool should provide any fine-grained control needed over
   the 'M' number in an M:N runtime.

* libgreen and libnative can be used directly to guarantee spawning a green or a
   native task, regardless of the flavor of task that is doing the spawning.

In the coming months, I plan on filling out more native I/O to bring it up to
speed with the M:N implementation. I also plan on rewriting the core components
of extra::comm to be performant in both scheduling modes in order to bring the
extra::{comm, arc, sync} primitives up to date with their std::comm
counterparts.

If there are any questions about any of this, feel free to ask me! This thread
is always available, and I'm also reachable as acrichto on IRC or alexcrichton
on github.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] IRC moderation due to attacks

2013-12-15 Thread Huon Wilson

And we're having a large amount of bot-spam in the main channel right now.

(Although, it did get the channel up to 1000 people at one point, so 
it's not all bad.)



Huon


On 15/12/13 23:41, Brendan Zabarauskas wrote:

On 15 Dec 2013, at 1:39 am, Jack Moffitt  wrote:


Some botnet is picking on #rust. Until the situation is resolved[1] or
the botnet gives up, we've been turning on moderation in the #rust IRC
room.

This means that you won't be able to join #rust or talk in the channel
unless you have registered with NickServ[2].

Right now the attack seems focused on #rust, but if the others
channels start getting attacked as well we will have to turn on
moderation there too.

It would be useful if someone who has some free time could update the
IRC page on the wiki to suggest people register their nicks and point
them to the instructions below. You might also mention that moderation
is employed from time to time to prevent spam bot attacks and that
will prevent unregistered participants from joining and/or talking.

We apologize for the inconvenience, and we'll hopefully be able to go
back to our previous, unmoderated setup soon.

jack.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=950187
[2] https://wiki.mozilla.org/IRC#Register_your_nickname
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev

I was just hit by a bot spamming /querys on me using random nicks. Can’t 
connect to #rust. :(

~Brendan
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Rust crates and the module system

2013-12-13 Thread Huon Wilson

On 13/12/13 20:29, Liigo Zhuang wrote:

looks like a good idea.

I always think "extern mod extra" is repetitious, because, when i "use 
extra::Something", the compiler always know I'm using the mod "extra".




You only need one `extern mod extra` per program which is hardly 
repetitive, and, what if you have a submodule called `extra`, how can 
the compiler tell that you want to be importing from it, rather than the 
extra crate?


(In general, one has to explicitly import external crates, because they 
can be named anything.)




Huon



2013/12/13 Piotr Kukiełka wrote:


Hi all,

I'm new to rust but I'm quite interested in learning it more
deeply and contributing to language itself.
However right now I'm just studying tutorial/manual/wiki to
understand it better, and sometimes I have doubts about language
design.
My biggest concern so far was connected with module system.
Right now rust have 3 different (if I'm counting correctly)
keywords used for marking which library/module should be imported
and used.
Why not just one? It would actually solve few problems at once.

So, this is my idea:
Drop *mod* and *extern mod *keywords and just leave *use*.
Since there would be only one keyword responsible for importing
left, you wouldn't need to care about their order.
Then, when the compiler sees a new identifier it checks:
1) If there is local definition of it available? If there is, then
we are done.
2) Is there matching import in scope?
a) If yes, then is it labeled with resource path marker?
*) If yes, then check if it's:
  +) Path to local file => it's equivalent of
#[path=".."] mod rust;
  +) Something like
"http://github.com/mozilla/rust" => it's equivalent of
extern mod rust = "github.com/mozilla/rust";
  +) Something like "rust:0.8" => it's equivalent
of extern mod my_rust (name = "rust", vers = "0.8");
*) If
no, then first check local mods, and if nothing is found then scan
libs (extern mod)
b) If no, then it's unknown symbol and we got error

What are the real benefits?

1) End users just need to remember and understand one syntax, *use*,
examples:
use farm;
use my_farm (path = "farm", vers = "2.5");
use my_farm (path = "http://github.com/farming/farm", vers = "2.5");
use my_farm (path = "../../farm/farm.rs");
2) You don't need 3 special keywords
3) Order of use doesn't matter (and there are no doubts about
shadowing order).
4) You don't need the special cases use super and use self, you can use
path to replace them as well (i.e. to replace self:
use some_child_module::some_item (path = ".");)
5) You won't link unused modules even if they are imported with *use.*
6) If I decide to move part of my modules to a separate library
it can still work without code changes
7) It would be easy to implement imports which work only in a scope
this way (as is done in scala for example)
8) this way use works more like import on the jvm than include in
C/C++ (and I think C style include should be avoided)

Downsides?
I probably don't know all corner cases and maybe there are good
and valid reasons to do it as it is done right now.
But unless they are really good then I think simplification of
imports should be seriously considered.

Best regards,
Piotr Kukielka



___
Rust-dev mailing list
Rust-dev@mozilla.org 
https://mail.mozilla.org/listinfo/rust-dev




--
by *Liigo*, http://blog.csdn.net/liigo/
Google+ https://plus.google.com/105597640837742873343/


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Interface around SQL databases

2013-12-11 Thread Huon Wilson

On 12/12/13 09:57, Carter Schonwald wrote:
as another point in the design space, a pretty idiom for SQL style dbs 
in haskell is the *-simple family of libs


postgres simple
http://hackage.haskell.org/package/postgresql-simple-0.3.7.0/docs/Database-PostgreSQL-Simple.html

mysql-simple
http://hackage.haskell.org/package/mysql-simple-0.2.2.4/docs/Database-MySQL-Simple.html
and

sqlite-simple
https://hackage.haskell.org/package/sqlite-simple-0.4.4.0/docs/Database-SQLite-Simple.html

the common pattern is a nice way to write query strings and 
argument/result tuples in a type safe way, and are only really related 
in being a nice pattern for sql db interaction.


note that by type safe, i mean injection safe! I think similar could 
be adapted to rust, perhaps with a wee bit of help using macros (which 
maybe aren't needed once higher kinded types / traits are around)?


FWIW, I believe SQL tokenises, so a macro like

   let x = 10;
   let y = "foo";

   // injection-safe substitution.
   let query = sql!(SELECT relevant_column FROM table WHERE 
search_column = $y LIMIT $x);


   let result = db_driver.execute(&query);

might be possible (if anyone is interested in a raw SQL interface). Of 
course, it is extremely unlikely to be possible as a structural macro 
(i.e. macro_rules), it probably requires a full blown procedural one 
(i.e. a syntax extension, which have to be hard-coded into libsyntax for 
the moment).



Huon



On Wed, Dec 11, 2013 at 3:27 PM, spir wrote:


On 12/11/2013 06:04 PM, Patrick Walton wrote:

We aren't likely to block 1.0 on this. Instead of stabilizing
all libraries once
and for all in 1.0 like Go did, we're taking a gradual
approach to libraries
similar to that of node.js, in which 1.0 will have some
library modules stable
and some modules unstable, and releases 1.1, 1.2, and beyond
will stabilize more
and more libraries as time goes on.


A good thing, I guess, especially in that the latest trend in Rust
seems to be moving primitives into the library.

Denis

___
Rust-dev mailing list
Rust-dev@mozilla.org 
https://mail.mozilla.org/listinfo/rust-dev




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Let’s avoid having both foo() and foo_opt()

2013-12-07 Thread Huon Wilson

On 07/12/13 20:55, Simon Sapin wrote:

On 07/12/2013 01:14, Huon Wilson wrote:

I personally think a better solution is something like Haskell's do
notation[1], where you can chain several computations that return
Option<..> such that if any intermediate one returns None, the later
ones are not evaluated and the whole expression returns None, which
saves having to call .get()/.unwrap()/.expect() a lot.


We have that, it’s Option’s .and_then() method.



This can work for types like Result too (in fact, the Haskell
implementation of `do` is sugar around some monad functions, so any
monad can be used there; we currently don't have the power to express
the monad typeclass/trait in Rust so the fully general form probably
isn't possible as a syntax extension yet, although a limited version 
is).





Yes, that's what my syntax extension[1] (to which Ziad alludes) 
desugared to (well, the former name, .chain).


However, even something simple like `foo.and_then(|x| bar.and_then(|y| 
baz.map(|z| x + y + z)))` is significantly worse than


do
x <- foo
y <- bar
z <- baz
return $ x + y + z

as it is in Haskell. And more complicated examples with additional 
within the closures are significantly worse for the .and_then form 
(leading to pyramids of doom, and Rust already has enough of those).
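A self-contained, hedged version of that chain (the functions are made up
for illustration, not taken from the thread):

fn foo() -> Option<int> { Some(1) }
fn bar() -> Option<int> { Some(2) }
fn baz() -> Option<int> { None }

fn main() {
    let result = foo().and_then(|x| bar().and_then(|y| baz().map(|z| x + y + z)));
    assert_eq!(result, None); // any None short-circuits the whole chain
}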



Huon

[1]: https://mail.mozilla.org/pipermail/rust-dev/2013-May/004182.html
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Let’s avoid having both foo() and foo_opt()

2013-12-06 Thread Huon Wilson

On 07/12/13 12:08, Jordi Boggiano wrote:

On Sat, Dec 7, 2013 at 2:01 AM, spir  wrote:

On 12/07/2013 01:12 AM, Gaetan wrote:

I am in favor of two versions of the api: from_str which has already done
the unwrap, and a from_str_safe for instance that returns a Result or
Option.

This provides the important semantic information (that I've evoked at the
end end of a long earlier reply in this thread) of whether func failure is
expected and belongs to the logic of the present app and we must deal with
it, or not.

But I'm still torn on this topic, finding it also annoying, like Simon,
to have to duplicate whole categories of such funcs (of which we cannot know
in advance whether they'll fail or not), if only in the interface as apparently
proposed by Gaëtan.

Syntax sugar like this would be nice:

let str = std::str::from_utf8("Parse this optimistically, and fail otherwise");
// str is a string or the task fails

vs.

let opt_str = std::str::from_utf8?("Parse this if valid"); // note the
question mark
if opt_str.is_some() {  }

Problem is, this sounds scary to implement at the compiler level, if
it's possible at all :) I am just throwing it out there for others to
judge.

Cheers



I personally think a better solution is something like Haskell's do 
notation[1], where you can chain several computations that return 
Option<..> such that if any intermediate one returns None, the later 
ones are not evaluated and the whole expression returns None, which 
saves having to call .get()/.unwrap()/.expect() a lot.


This can work for types like Result too (in fact, the Haskell 
implementation of `do` is sugar around some monad functions, so any 
monad can be used there; we currently don't have the power to express 
the monad typeclass/trait in Rust so the fully general form probably 
isn't possible as a syntax extension yet, although a limited version is).



Huon

[1]: http://en.wikibooks.org/wiki/Haskell/do_Notation

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Safely writing iterators over idiomatic C structures

2013-12-06 Thread Huon Wilson

On 06/12/13 19:11, Edward Z. Yang wrote:

Imagine that we have a structure of the form:

 typedef struct {
 int payload1;
 foo *link;
 int payload2;
 } foo;

This structure is characterized by two things:

 1) It is a singly linked list, and thus has a simple ownership
 structure which can be captured by Rust's owned pointers

 2) The payload of this struct is interleaved with links, in
 order to save space and an extra indirection.  The layout may
 be fixed, by virtue of being exposed by a C library.

The question now is: can we write an Iterator<&mut foo> for
the corresponding Rust structure foo, without using any unsafe code?

There is a fundamental problem with this structure: iterator
invalidation.  If we are able to issue a &mut foo reference, then the
link field could get mutated.  Rust's borrow checker would reject this,
since the only possible internal state for the iterator (a mutable
reference to the next element) aliases with the mutable reference
returned by next().  I am not sure how to solve this without changing
the layout of the struct; perhaps there might be a way if one could
selectively turn off the mutability of some fields.

Suppose we are willing to change the struct, as per the extra::dlist
implementation, we still fall short of a safe implementation: the
internal state of the iterator utilizes a raw pointer (head), which
provides a function resolve() which simply mints a mutable reference to
the element in question. It seems to be using Rawlink to hide the fact
that it has its fingers on a mutable borrowed reference to the list.  It
recovers some safety by maintaining a mutable reference to the whole
list in the iterator structure as well, but it would be better if no
unsafe code was necessary at all, and I certainly don't feel qualified
to reason about the correctness of this code. (Though, I understand and
appreciate the fact that the back pointers have to be handled unsafely.)

So, is it possible? I just want (provably) safe, mutable iteration over
singly-linked lists...

Edward
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


One could have an Iterator<(&mut int, &mut int)>, where the references 
point to just the fields. Off the top of my head:


struct List {
payload1: int,
next: Option<~List>,
payload2: f64
}

struct ListMutIterator<'a> {
elem: Option<&'a mut List>
}

impl<'a> Iterator<(&'a mut int, &'a mut f64)> for ListMutIterator<'a> {
fn next(&mut self) -> Option<(&'a mut int, &'a mut f64)> {
 let elem = std::util::replace(&mut self.elem, None); // I think this 
might be necessary to get around the borrow checker.


 let (ret, next) = match elem {
 Some(&List { payload1: ref mut payload1, next: ref mut 
next, payload2: ref mut payload2 }) => {

  (Some((payload1, payload2)), next.as_mut_ref())
 }
 None => (None, None)
 };
 self.elem = next;
 ret
}
}

(The List pattern match will look nicer if 
https://github.com/mozilla/rust/issues/6137 gets solved; `List { ref mut 
payload1, ref mut next, ref mut payload2 }`.)


I imagine this will end up being very ugly if there are more than just 2 
fields before and after, although one could easily replace the tuple 
with a struct `ListMutRef<'a> { payload1: &'a mut int, payload2: &'a mut 
f64 }`.


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Persistent data structures

2013-12-04 Thread Huon Wilson

On 04/12/13 20:51, Michael Woerister wrote:

I've implemented a persistent HAMT [1] a while back:
https://github.com/michaelwoerister/rs-persistent-datastructures/blob/master/hamt.rs 



It's the data structure used for persistent maps in Clojure (and 
Scala, I think). It's not too hard to implement and it's pretty nifty. 
I'm not sure about the performance with atomic reference counting 
being used for memory management. It will definitely be worse than 
with a  stop-the-world garbage collector. Although, it's noteworthy 
that look ups in the data structure only have to copy one `Arc` for 
returning the result, so the high fan-out of the data structure should 
not hurt if you mostly read from it. I'd be very interested in a 
performance comparison to other persistent map implementations in Rust 
(e.g. a red-black tree or splay tree).


Here are some things I came across during implementing this:
* I too discovered that I couldn't parametrize on the type of 
reference being used without higher kinded types. I did the 
implementation with regular @ pointers at first and later switched to 
`Arc`, since concurrent contexts are where persistent data structures 
really shine. Switching the implementation from @ to Arc was pretty 
straight forward.
* I found there is no standardized trait for persistent maps in libstd 
or libextra. It would be nice to have one!
* It's probably a very good idea to provide a non-persistent "builder" 
that avoids the excessive copying during the insertion phase. In 
Clojure one can switch between "transient" and persistent mode for a 
data structure instance which also allows for optimized batch 
modifications. An `insert_from(iterator)` method might also do the 
trick. There's quite a bit of design space here.



For reference, the FromIterator & Extendable traits are the things to 
implement if one has a structure that can be constructed from/extended 
with an iterator and wishes to share the behaviour.



http://static.rust-lang.org/doc/master/std/iter/trait.FromIterator.html
http://static.rust-lang.org/doc/master/std/iter/trait.Extendable.html
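A hedged sketch of how they show up in use (2014-era names and signatures;
Extendable has since been renamed Extend):

// FromIterator is what .collect() goes through:
let squares: ~[int] = range(0, 5).map(|x| x * x).collect();

// Extendable lets an existing collection absorb an iterator:
let mut v: ~[int] = ~[0, 1];
v.extend(&mut range(2, 5));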


Huon

* I would have liked to avoid some allocations and pointer chasing by 
using fixed size vectors directly within nodes but I could not get 
that to work without a lot of unsafe code that I was not sure would be 
correct in all cases. So I just used owned vectors in the end.


Looking forward to seeing more in this area :)

-Michael

[1] https://en.wikipedia.org/wiki/Hash_array_mapped_trie


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Placement new and the loss of `new` to keywords

2013-11-30 Thread Huon Wilson

On 30/11/13 21:24, György Andrasek wrote:

On 11/30/2013 09:34 AM, Patrick Walton wrote:

IMHO sigils made sense back when there were a couple of ways to allocate
that were built into the language and there was no facility for custom
smart pointers. Now that we have moved all that stuff into the library,
sigils seem to make less sense to me



What really bugs me about `~` is that it conflates the idea of 
lifetime and ownership (which is a zero-cost annotation) with 
allocation (which is an actual expensive operation to stay away from). 
This wasn't a problem with `@`, but it's gone now.


My choice would be to keep `~` in types, but use `new` for allocation:

let foo: ~Foo = new Foo(bar);
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


The ownership ideas apply to all values always, i.e. `let foo: Foo = 
Foo(bar);` is equally as "owned" as `let foo: ~Foo = ...;`, the only 
difference is the latter is always pointer sized (and always has a 
destructor, even if `Foo` doesn't).




Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Define copyable types for [T, ..2] static vector initialization

2013-11-29 Thread Huon Wilson

On 30/11/13 10:33, Ashish Myles wrote:

Previously we had the Copy trait, which when implemented by trait T
allowed one to write
[Zero::zero(), ..SZ] where T implemented the Zero trait.  But now I am
not sure how to get that behavior.  Concretely, here is the code I
want to get compiling. (Just to check, I added both Clone and
DeepClone, even though they don't automatically allow implicit
copyability).


use std::num::Zero;

enum Constants {
 SZ = 2
}

struct Foo<T>([T, ..SZ]);

impl<T: Zero + Clone + DeepClone> Foo<T> {
    pub fn new() -> Foo<T> {
        Foo([Zero::zero(), ..SZ])
    }
}


The error I get is:

error: copying a value of non-copyable type `T`
tmp.rs:155 Foo([Zero::zero(), ..SZ])


Any way to do this? Or is this no longer supported for general types?
Any intentions to add this behavior again?  Otherwise, I can't even
initialize my struct while being agnostic to the specific value of SZ.

Ashish
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Initialising a fixed size vector [T, ..n] by [foo(), .. n] unfortunately 
requires that `T` is implicitly copyable (that is, it doesn't move 
ownership when passed by value), and there's no way around this other 
than [foo(), foo(), foo(),...].
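
For what it's worth, the restriction is still there in today's Rust: 
`[expr; N]` needs the element type to be Copy (or the expression to be a 
constant), and the usual escape hatch is to build each element 
individually. A sketch in current syntax, with Default standing in for 
the old Zero trait:

    // Builds [T; N] by calling T::default() once per element; no Copy needed.
    fn zeroed<T: Default, const N: usize>() -> [T; N] {
        std::array::from_fn(|_| T::default())
    }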



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Problem to use Encodable as fn parameter

2013-11-29 Thread Huon Wilson
Yes; wait for Erick to submit the PR that fixes extra::json to use &mut 
Writer rather than @mut Writer, so that one can write something like


  pub fn buffer_encode<T: Encodable<Encoder>>(to_encode_object: &T) -> ~[u8] {

  let mut m = MemWriter::new();
  { // I think this scope is necessary to restrict the borrows
  let mut encoder = Encoder(&mut m as &mut Writer);
  to_encode_object.encode(&mut encoder);
  }
  m.inner()
  }

One is then just moving the `~[u8]` around without copying all the elements.

Also, `str_encode` shouldn't need to do any additional copies, even 
right now, since the `std::str::from_utf8_owned` function exists. 
(`from_utf8` is bad with its implicit and normally unnecessary copy, 
I've got https://github.com/mozilla/rust/pull/10701 open that removes 
that function for that reason.)



(Tangentially (and a genuine question), do we really need these 
functions in extra::json (assuming this is part of the plan)? How often 
does one need to encode to a buffer or string, rather than to stdout, a 
file or network socket, which can be done by writing directly to the 
corresponding writers?)


Huon


On 29/11/13 23:57, Philippe Delrieu wrote:
I try to implement the two methods and I have a problem with the copy 
of memory.

The first attempt :

 pub fn buffer_encode<T: Encodable<Encoder>>(to_encode_object: &T) -> ~[u8] {

   //Serialize the object in a string using a writer
let m = @mut MemWriter::new();
let mut encoder = Encoder(m as @mut Writer);
to_encode_object.encode(&mut encoder);
let buff:&~[u8] = m.inner_ref();
(buff.clone())
}

pub fn str_encode<T: Encodable<Encoder>>(to_encode_object: &T) -> ~str {
let buff:~[u8] = buffer_encode(to_encode_object);
str::from_utf8(*buff)
}

When I call str_encode I copy at least two times the content of the 
MemWriter buffer (one inside clone and one inside from_utf8).

If I implements str_encode like that

pub fn str_encode<T: Encodable<Encoder>>(to_encode_object: &T) -> ~str {
let m = @mut MemWriter::new();
let mut encoder = Encoder(m as @mut Writer);
to_encode_object.encode(&mut encoder);
let buff:&~[u8] = m.inner_ref();
str::from_utf8(*buff)
}

I'll do at least one copy (one less than the other impl) but the code 
is copied.


Is there a better way to manage pointer of memory across function calls?

Philippe

On 29/11/2013 12:55, Philippe Delrieu wrote:
I made a remark about that on the GitHub pull request where the idea 
was proposed. I agree with you. It's simpler to return a str, or 
perhaps a [u8] if it's used for streaming.

I'm not a big fan of creating a MemWriter and returning it.

I'll modify to add two functions:
pub fn str_encode<T: Encodable<Encoder>>(to_encode_object: &T) -> ~str
pub fn buffer_encode<T: Encodable<Encoder>>(to_encode_object: &T) -> ~[u8]

and remove the other.

Any remarks?

Philippe


On 29/11/2013 10:44, Gaetan wrote:


I would prefer this function to return a str.

On 29 Nov 2013 09:27, "Philippe Delrieu" wrote:


Thank you for the help. I tried this signature and I had a
compile error. I thought it came from the signature but the
problem came from the call.
It works now.

As for the return type @mut MemWriter: I'm working on the json docs and
some usage examples, so I can make the change. I didn't find the
issue about it. Did you create it?

Philippe

Le 28/11/2013 22:27, Erick Tryzelaar a écrit :

Good afternoon Phillippe,

Here's how to do it, assuming you're using rust 0.8 and the
json library:

```
#[feature(managed_boxes)];

extern mod extra;

use std::io::mem::MemWriter;
use extra::serialize::{Encoder, Encodable};
use extra::json;

pub fn memory_encode<
T: Encodable<json::Encoder>
>(to_encode_object: &T) -> @mut MemWriter {
//Serialize the object in a string using a writer
let m = @mut MemWriter::new();
let mut encoder = json::Encoder(m as @mut Writer);
to_encode_object.encode(&mut encoder);
m
}

fn main() {
}
```

Regarding the trouble returning a `MemWriter` instead of a
`@mut MemWriter`, the easiest thing would be to fix library to
use `&mut ...` instead of `@mut ...`. I'll put in a PR to do that.



On Thu, Nov 28, 2013 at 3:55 AM, Philippe Delrieu
<philippe.delr...@free.fr> wrote:

I try to develop a function that take a Encodable parameter
but I have the error wrong number of type arguments:
expected 1 but found 0

pub fn memory_encode(to_encode_object:
&serialize::Encodable) -> @mut MemWriter  {
   //Serialize the object in a string using a writer
let m = @mut MemWriter::new();
let mut encoder = Encoder(m as @mut Writer);
to_encode_object.encode(&mut encoder);
m
}

The encodable trait is :
pub trait Encodable<S: Encoder> {
fn encode(&self, s: &mut S);
}

I tried this definition
memory_encode>(to_encode_object:

Re: [rust-dev] function overloading, double dispatch technique and performance

2013-11-28 Thread Huon Wilson

On 28/11/13 20:26, Rémi Fontan wrote:

Hi,

would you know what the cost is of implementing the double dispatch 
technique as described at the following link? (section "What if I want 
overloading?")


http://smallcultfollowing.com/babysteps/blog/2012/10/04/refining-traits-slash-impls/

does using traits mean making use of something similar to a vtable? Or 
would the compiler optimise that double dispatch into very efficient code?


The motivation is to have multiple implementations of mathematical 
operators for structs like 2D/3D vectors and matrices, and having them as 
efficient as possible is important.


cheers,

Rémi



Generics are always monomorphised to be static dispatch; explicit trait 
objects (~Trait, &Trait, etc) are the only times that one gets dynamic 
dispatch (using a vtable). So, trait methods will be resolved purely at compile time if 
you never box the types into trait objects, i.e.


fn static_with_generics<T: Trait>(x: &T) { ... }
fn dynamic_with_vtables(x: &Trait) { ... }

The layered traits like in the example in that blog-post are the same as 
normal generics, just the compiler has to do a little more work (but 
this is only at compile time).
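
A small self-contained illustration of the two cases, in current syntax 
(`Draw` and `Circle` are invented names for the example):

    trait Draw { fn draw(&self) -> String; }

    struct Circle;
    impl Draw for Circle { fn draw(&self) -> String { "circle".to_string() } }

    // Monomorphised per concrete T: no vtable, the call can be inlined.
    fn static_dispatch<T: Draw>(x: &T) -> String { x.draw() }

    // One compiled body; the call goes through the trait object's vtable.
    fn dynamic_dispatch(x: &dyn Draw) -> String { x.draw() }

    fn main() {
        let c = Circle;
        assert_eq!(static_dispatch(&c), dynamic_dispatch(&c));
    }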


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Some questions about dead-code elimination pass

2013-11-24 Thread Huon Wilson

On 24/11/13 20:15, Alex Crichton wrote:

3. My code also marks the function `load_argc_and_argv` in
libstd/os.rs as dead when in fact it isn't. I would guess it's because
that function is only used when compiling the rustc source code on
Mac, whereas I'm compiling it on Linux. How do I modify my code to
take account of that?

The function should probably be #[cfg(target_os = "linux")]


This should be `#[cfg(target_os = "macos")]` since it's only used on Mac.
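
i.e. roughly this shape (a sketch only, not the real signature from 
libstd/os.rs):

    #[cfg(target_os = "macos")]
    fn load_argc_and_argv() { /* compiled only on Mac */ }

    #[cfg(not(target_os = "macos"))]
    fn load_argc_and_argv() { /* compiled everywhere else */ }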



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] RFC: Put Unicode identifiers behind a feature gate

2013-11-22 Thread Huon Wilson

On 22/11/13 15:53, Patrick Walton wrote:
There are several issues in the backwards-compatible milestone related 
to Unicode identifiers:


#4928: XID_Start/XID_Continue might not be correct
#2253: Do NKFC normalization in lexer

Given the extreme lack of use of Unicode identifiers and the fact that 
we have much more pressing issues for 1.0, I propose putting support 
for identifiers that don't match 
/^(?:[A-Za-z][A-Za-z0-9]*|_[A-Za-z0-9]+)$/ behind a feature gate.


Thoughts?

Patrick
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Implemented in https://github.com/mozilla/rust/pull/10605 for if/when we 
make a decision. (Feel free to reject it.)


(Another small data-point against our current unicode support: 
https://github.com/mozilla/rust/issues/8706 )



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] A la python array access

2013-11-20 Thread Huon Wilson

On 20/11/13 21:03, Gaetan wrote:

Hello

I'd like to know if you think it could be feasible to have a 
Python-like syntax for indices in array and vector?


I find it so obvious and practical.

let i = ~[1, 2, 3, 4, 5]

i[1] returns the second item ("2")
i[1:] returns ~[2, 3, 4, 5]
i[:-2] returns ~[1, 2, 3]
i[-2:] returns ~[4, 5]
i[1:-1] returns ~[2, 3, 4]

Thanks !

-
Gaetan



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev



This has been proposed in #4160, and was even mentioned in Niko's recent 
"Treating vectors like any other container" blog post[1]; although, as 
Daniel says, they would return slices (&[T]), not new allocations.


[4160]: https://github.com/mozilla/rust/issues/4160
[1]: 
http://smallcultfollowing.com/babysteps/blog/2013/11/14/treating-vectors-like-any-other-container/
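
For comparison, the slice-returning forms that this turned into look like 
the following in today's syntax; every expression borrows from `v` rather 
than allocating:

    fn main() {
        let v = vec![1, 2, 3, 4, 5];
        assert_eq!(&v[1..], [2, 3, 4, 5]);          // everything after the first element
        assert_eq!(&v[..v.len() - 2], [1, 2, 3]);   // drop the last two
        assert_eq!(&v[1..v.len() - 1], [2, 3, 4]);  // drop the first and the last
    }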



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Please simplify the syntax for Great Justice

2013-11-19 Thread Huon Wilson

On 19/11/13 20:51, Gaetan wrote:
In the French presentation for Rust 0.8 [1], the author gives the 
analogy with C++ semantics

- ~ is a bit like unique_ptr
- @ is an enhanced shared_ptr
- borrowed pointer works like C++ reference

and I think it was very helpful for understanding them better. I don't 
know if it is accurate or not, but this comparison helps a lot in 
understanding the concepts. You can present them like this, and afterwards 
add more precision and the differences from the C++ counterparts.


A tutorial worth making would be "Rust for C++ programmers" :)


Something like that already exists: 
https://github.com/mozilla/rust/wiki/Rust-for-CXX-programmers



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Why does string formatting in Rust have to be different from other languages?

2013-11-18 Thread Huon Wilson

On 18/11/13 18:02, Derek Chiang wrote:

Hi all,

I'm a newcomer to Rust.  One of the things that I find confusing is 
the use of {} in formatted strings.  In all other languages I've ever 
used, it's always %.  So instead of writing "%d", you write "{:d}" in 
Rust.  Why is this so?  What benefits do we get?


Thanks,
Derek


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


In addition to what Daniel said, being statically typed means you can 
normally just write `{}` and let the std::fmt::Default implementation 
handle everything (for the primitive types it's the same as the {:d}, 
{:f}, {:s} etc).
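
(In today's Rust the equivalent of std::fmt::Default is called Display, 
but the point is the same; a tiny example:)

    fn main() {
        let (n, x, s) = (42, 1.5, "hi");
        // One generic placeholder; each type's formatting impl is chosen statically.
        println!("{} {} {}", n, x, s);
    }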



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] capnproto-rust: performance and safety compared to C++

2013-11-17 Thread Huon Wilson

On 18/11/13 06:30, David Renshaw wrote:

Hi everyone.

I wrote up some of my latest experiences implementing Cap'n Proto 
encoding for Rust.


A performance comparison to C++, or "capnproto-rust is pretty fast":
http://dwrensha.github.io/capnproto-rust/2013/11/16/benchmark.html

A discussion of safety, or "why I'm so keen to see support for static 
trait methods with lifetime parameters":

http://dwrensha.github.io/capnproto-rust/2013/11/17/lifetimes.html



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


This is cool!

I see that you mention slowness in IO and some extra string copies; are 
there any other low-hanging fruit that you know of? Or is it all down to 
micro-optimisation now?



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Type system thoughts

2013-11-15 Thread Huon Wilson

On 16/11/13 03:05, Gábor Lehel wrote:

Hello list,

I have some ideas about typey things and I'm going to write them down. 
It will be long.





No kidding.




It would be nice if `Trait1 + Trait2` were itself a trait, legal in 
the same positions as any trait. This is already partly true: in trait 
bounds on type parameters and super-traits of traits. Where it's not 
true is trait objects, e.g. `~(ToStr + Send)`. Having this could 
remove the need for the current `~Trait:OtherTraits` special syntax.


I wonder whether lifetimes could also be interpreted as traits, with 
the meaning: "[object of type implementing lifetime-trait] does not 
outlive [the given lifetime]". This is an honest wondering: I'm not 
sure if it makes sense. If it does make sense, it would fit in 
perfectly with the fact that 'static is already a trait. Together with 
the above, it might also allow a solution for capturing borrowed data 
in a trait object: you could write `~(Trait + 'a)`.




How does this interact with vtables?

Note we can already get something similar with

   trait Combine1And2: Trait1 + Trait2 {}
   impl<T: Trait1 + Trait2> Combine1And2 for T {}

   // use ~Combine1And2

which makes it clear how the vtables work, since Combine1And2 has its 
own vtable explicitly constructed from the two traits. (I guess ~(Trait1 
+ Trait2) would be most useful if one could cast down to ~Trait1 and 
~Trait2.)
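
Spelled out with a concrete use (current syntax; `Thing` and the method 
names are made up):

    trait Trait1 { fn one(&self) -> u32; }
    trait Trait2 { fn two(&self) -> u32; }

    // The combined trait gets its own vtable containing both sets of methods.
    trait Combine1And2: Trait1 + Trait2 {}
    impl<T: Trait1 + Trait2> Combine1And2 for T {}

    struct Thing;
    impl Trait1 for Thing { fn one(&self) -> u32 { 1 } }
    impl Trait2 for Thing { fn two(&self) -> u32 { 2 } }

    fn main() {
        let obj: Box<dyn Combine1And2> = Box::new(Thing);
        assert_eq!(obj.one() + obj.two(), 3);
    }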






The next few are going to be about higher- (or just different-) kinded 
generics. To avoid confusion, I'm going to use "built-in trait" to 
mean things like `Freeze` and `Send`, and "kind" to mean what it does 
everywhere else in the non-Rustic world.


I think the best available syntax for annotating the kinds of types 
would be to borrow the same or similar syntax as used to declare them, 
with either `type` or `struct` being the kind of "normal" non-generic 
types (the only kind of type that current Rust lets you abstract 
over). [I prefer `type`, because both structs and enums inhabit the 
same kind.] This is kind of like how C++ does it. For example, the 
kind of `Result` would be `type<type, type>`. Our `type` corresponds 
to C++'s `typename` and Haskell's `*`, and `type<type, type>` to C++'s 
`template<typename, typename> class` and Haskell's `* -> * -> *`. So, 
for example, you could write the fully kind-annotated signature of an 
identity function restricted to Result-shaped types (yeah, actually 
doing this would be pointless!) as:


fn dumb_id<type<type, type> R, type A, type B>(x: R<A, B>) -> R<A, B>;

To explicitly annotate the kind of the Self-type of a trait, we could 
borrow the `for` syntax used in `impl`s. Here's the fully 
kind-annotated version of the `Functor` trait familiar from Haskell:


trait Functor for type<type> Self {
    fn fmap<A, B>(a: &Self<A>, f: |&A| -> B) -> Self<B>;
}

(Obviously, explicitly annotating every kind would be tiresome, and 
`Self` is a little redundant when nothing else could go there. I could 
imagine `trait Functor for type<type>`, `trait Functor for Self<type>`, 
and/or `trait Functor` all being legal formulations of the above. I'll 
get back to this later.)





Could this be:

   fn dumb_id<R<type, type>, A, B>(x: R<A, B>) -> R<A, B>;

and

   trait Functor {
       fn fmap<A, B>(a: &Self<A>, f: |&A| -> B) -> Self<B>;
   }

Then something like A<type<type>, type> would correspond to A :: (* -> 
*) -> * -> *. Speaking of which, could we just use `*`, `R<*, *>`, 
`R<*<*>, *>`, `trait Functor<*>`? Although that looks a little cryptic 
in the nested case.



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Treating Vectors Like Any Other Container

2013-11-15 Thread Huon Wilson

On 16/11/13 00:39, Oren Ben-Kiki wrote:
If the traits were polymorphic in the index type (instead of always 
expecting an integer), then one could use them to make hash tables use 
vector syntax (e.g., `hash["foo"] = 1`)... Ruby does that, for 
example. So something like bitset (with integer indices) isn't the 
only example.


Not sure whether we want to go that way, though...


Note that the traits in the blog posts are parameterised by the index type:

trait Index<I, E> { fn index<'a>(&'a self, index: I) -> &'a E; }

Etc.


Huon

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] typed loop variables & int num types

2013-11-15 Thread Huon Wilson

On 15/11/13 23:27, spir wrote:

These are 2 points of secondary importance (or even less).

What about the following pattern:
for x:Type in expr {
// proceed with x
}
as equivalent to:
for y in expr {
let x = y as Type;
// proceed with x
}
both for iterator loops and range loops?
Actually, my need is to have uint's or u8's in range loops, while the 
default is int.


I'm also unhappy with "let n = 1;" yielding by default an int instead 
of an uint. I would suggest that if an int value is unsigned, the 
default type is uint. A signed int is, semantically, a difference, 
thus always signed: to get a default signed int, just write "let n = 
+1;". Seems both logical and self-documenting, imo.


Again, quite unimportant and definitely not of high priority. Just 
sending this for the record.


Denis
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


I'd like to be able to write `pattern: Type` and `expression: Type` in 
arbitrary contexts, forcing the LHS to have that type, so `let (a: u8, 
b: f64) = foo(1: int);` would be valid (this would cover for-loops, 
since they are pattern contexts); I know it's been mentioned before, but 
I couldn't find any issues already, so I filed it as #10502.


In any case, you can write `range(0u8, 10)` or `range(0u, 10)` to 
override the inference and get a u8 or uint range, respectively.
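
i.e. (with today's range syntax) the literal suffix alone pins down the 
loop variable's type:

    fn main() {
        let mut total: u32 = 0;
        for i in 0u8..10 {            // `i` is a u8 because of the 0u8 suffix
            total += i as u32;
        }
        assert_eq!(total, 45);
    }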


BTW, Removing the int default has actually already been suggested in #6023.

6023: https://github.com/mozilla/rust/issues/6023
10502: https://github.com/mozilla/rust/issues/10502


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] "Implementation Inheritance" / mixins

2013-11-15 Thread Huon Wilson

On 15/11/13 18:47, Oren Ben-Kiki wrote:
In your code, when providing a default implementation for 
`inflate_by`, you are invoking the trait (hence "virtual") 
method `get_radius`. If the compiler compiles `inflate_by` when seeing 
just the `Inflate` source code, then this must be translated to an 
indirect call through the vtable.


The point of anonymous members (and, to a greater extent, of the 
single inheritance) is to ensure that access to data members is just 
that, without any function calls.


To achieve that with the approach you described, the compiler will 
need to re-compile `inflate_by` for each and every struct that 
implements it; only then it would be able to inline `get_radius`.


Is this what the Rust compiler does today? I have no specific 
knowledge of the answer, but the simplest thing for the compiler would 
be to keep `get_radius` as a virtual function call.



As I understand it, default methods are specialised/monomorphised for 
each type for which the trait is implemented. The only time a virtual 
call ever happens is when one explicitly has a trait object.



Huon



Doing otherwise would require quite a bit of machinery (e.g., what 
if `Inflate` is defined in another crate, and all we have is its 
fully-compiled shared object file? There would need to be some "extra 
stuff" available to the compiler to do this re-compilation, as the 
source is not available at that point).


Therefore I suspect that this approach would suffer from significant 
performance issues.



On 2013-11-15, at 2:09, Tommi mailto:rusty.ga...@icloud.com>> wrote:


trait Inflate {
fn get_radius<'s>(&'s mut self) -> &'s mut int;

fn inflate_by(&mut self, amount: int) {
*self.get_radius() += amount;
}
}

trait Flatten {
fn get_radius<'s>(&'s mut self) -> &'s mut int;

fn release_all_air(&mut self) {
*self.get_radius() = 0;
}
}

struct Balloon {
radius: int
}

impl Inflate for Balloon {
fn get_radius<'s>(&'s mut self) -> &'s mut int {
&mut self.radius
}
}

impl Flatten for Balloon {
fn get_radius<'s>(&'s mut self) -> &'s mut int {
&mut self.radius
}
}




___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Rust docs

2013-11-14 Thread Huon Wilson

On 15/11/13 08:08, Daniel Glazman wrote:

Honestly, I am not sure "tutorial quality" and "automatic generation" live
well together. We hire tech evangelists for their ability to present well
information and make messages percolate better, and similarly good doc
requires good tech writers who only do that.

As I said earlier, a tutorial is crucial material to attract people and I
think writing talent, excellent presentation and correct readability are
need for such a tutorial. When I say "attract people" or "community", I am
of course thinking of reaching critical masses for Rust-based projects,
including Servo, and that requires making sure documentation material are
good enough to self-generate a pool of potential hires.



___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Just for reference, the tutorial being suboptimal is a "known bug": 
https://github.com/mozilla/rust/issues/9874



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Built-in types should accept a pointer rhs-argument to binary operators

2013-11-13 Thread Huon Wilson

On 14/11/13 16:00, Tommi wrote:

On 2013-11-14, at 6:20, Tommi  wrote:


On 2013-11-14, at 5:54, Huon Wilson  wrote:


On 14/11/13 14:50, Tommi wrote:

let mut n = 11;
let p: &int = &123;

p + n; // OK
n + p; // error: mismatched types: expected `int` but found `&int`

Shouldn't this error be considered a compiler-bug? The Add trait says '&' for 
rhs after all:
fn add(&self, rhs: &RHS) -> Result;

-Tommi

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev

This is https://github.com/mozilla/rust/issues/8895 . I believe it is caused by the 
implementation details of the overloaded-operator desugaring (a + b is equivalent to 
`a.add(&b)`, so auto-deref allows for `a: &T` with `b: T`), I personally think 
the more sensible resolution would be for `p + n` to be disallowed as well as `n + p`.

Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Thanks, I tried to search for a bug report for this but didn't manage to find that one. To me 
it makes perfect sense that p + n would work because if a + b is specified to be 
"just" syntactic sugar for a.add(b), then a + b should behave exactly like 
a.add(b), with implicit pointer dereferencing and all that good stuff. But, I don't 
understand why is it a.add(&b) like you said, though.

By the way, I would hate all of this behavior if Rust had pointer arithmetic, 
but luckily it doesn't seem to have it.

-Tommi

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


You're right in that it's not a bug in built-in types, but affects user-defined 
types also. But n.add(p) works just fine, while n.add(&p) doesn't:

struct Num { n: int }

impl Add<Num, Num> for Num {
 fn add(&self, other: &Num) -> Num { Num { n: self.n + other.n } }
}

fn main() {
 let n = 11i;
 let p = &22i;

 p + n; // OK
 n + p; // error: mismatched types: expected `int` but found `&int`
 n.add(p); // OK
 n.add(&p); // error: mismatched types: expected `&int` but found `&&int`

 let n = Num { n: 11 };
 let p = &Num { n: 22 };

 p + n; // OK
 n + p; // error: mismatched types: expected `Num` but found `&Num`
 n.add(p); // OK
 n.add(&p); // error: mismatched types: expected `&Num` but found `&&Num`
}

-Tommi


p has type &Num, which matches the type signature of add for Num, so it 
is expected that it compiles. And, &p has type & &Num, which doesn't 
match the signature of add for Num and so correspondingly doesn't 
compile. Similarly, for the + sugar, `n + p` and `n.add(&p)` (with the 
extra &) are the same (despite the error messages having one fewer 
layers of &).



The operator traits all take references because they (in the common 
case) don't want ownership of their arguments. If they didn't take 
references (i.e. `fn add(self, other: Num) -> Num`) then something like


  bigint_1.add(&bigint_2) // bigint_1 + bigint_2

would likely require

  bigint_1.clone().add(bigint_2.clone()) // bigint_1.clone() + 
bigint_2.clone()


if one wanted to reuse the values in bigint_1 and bigint_2 later.


The current definitions of the traits mean that the operator desugaring 
needs to implicitly add the references (that is a + b -> a.add(&b), 
rather than a.add(b)`) to match the type signature, or else we'd be 
forced to write `x + &1` everywhere.
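
(For the record, the shape that later stuck is the opposite: the operator 
traits take their operands by value, and types that are expensive to copy 
additionally implement the operator for references so that the operands 
stay usable. Roughly, with a toy stand-in for a bigint:)

    use std::ops::Add;

    #[derive(Clone, Debug, PartialEq)]
    struct Big(Vec<u32>);             // toy stand-in for a bigint

    impl<'a, 'b> Add<&'b Big> for &'a Big {
        type Output = Big;
        fn add(self, rhs: &'b Big) -> Big {
            // purely illustrative element-wise add (no carries)
            Big(self.0.iter().zip(rhs.0.iter()).map(|(a, b)| a + b).collect())
        }
    }

    fn main() {
        let (a, b) = (Big(vec![1, 2]), Big(vec![3, 4]));
        let c = &a + &b;              // a and b remain usable afterwards
        assert_eq!(c, Big(vec![4, 6]));
        drop((a, b));
    }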


(Just to be clear, the auto-deref only happens on the "self" value, i.e. 
the value on which you are calling the method, not any arguments.)



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Built-in types should accept a pointer rhs-argument to binary operators

2013-11-13 Thread Huon Wilson

On 14/11/13 14:50, Tommi wrote:

let mut n = 11;
let p: &int = &123;

p + n; // OK
n + p; // error: mismatched types: expected `int` but found `&int`

Shouldn't this error be considered a compiler-bug? The Add trait says '&' for 
rhs after all:
fn add(&self, rhs: &RHS) -> Result;

-Tommi

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


This is https://github.com/mozilla/rust/issues/8895 . I believe it is 
caused by the implementation details of the overloaded-operator 
desugaring (a + b is equivalent to `a.add(&b)`, so auto-deref allows for 
`a: &T` with `b: T`), I personally think the more sensible resolution 
would be for `p + n` to be disallowed as well as `n + p`.


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] The future of M:N threading

2013-11-13 Thread Huon Wilson

On 14/11/13 09:37, Patrick Walton wrote:

On 11/14/13 6:45 AM, Patrick Walton wrote:

If we need to split out the Rust standard library into a core set of
interfaces that remain the same across different implementations, then I
think it would be much more productive to talk about doing that. I've
reviewed rust-core and I don't really see any fundamental differences
that prevent compatibility with the standard library--in fact, I'd
really like to try to just merge them: pulling in the I/O as the "native
I/O" module, eliminating redundant traits from libstd, eliminating
conditions, taking whichever algorithms are faster, and so on. We can
find and shake out bugs as we go.


To be more specific, here's how I propose to move forward.

1. Finish fleshing out the native I/O in libstd to achieve parity with 
the uv-based I/O.


Coincidentally, Alex opened #10457 in the last day, which makes things 
like println() work without libuv (and without the runtime!).


https://github.com/mozilla/rust/pull/10457

Examples: https://github.com/mozilla/rust/pull/10457/files#diff-10


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] mutability cascade

2013-11-13 Thread Huon Wilson

On 13/11/13 23:55, spir wrote:

Hello,

I have an error "cannot assign to immutable field". However, as I 
understand it, the field is mutable according to the "inherited 
mutability" rule described in the tutorial (section 'Structs'). The field 
in question is in a struct that is itself an item of an array, itself a 
field in a super-structure (lol!) which is self. All this happens in a 
method called on "&mut self". (And the compiler lets me change self 
elsewhere in the same method.)
Thus, what is wrong? Or is it that the mutability cascade is broken 
due to the array? How then to restore it and be able to change a struct 
item's field? I guess arrays are mutable by default, aren't they? (Or 
else, here it should be, due to inherited mutability, since it is a field 
of a mutable struct.)

What do I miss?


You need to use the .mut_iter() method: [T].iter() yields &T references, 
while the former yields &mut T references (and can only be used on 
vectors that have mutable contents, like ~[T] in a mutable slot, or &mut 
[T] or @mut [T]).


http://static.rust-lang.org/doc/master/std/vec/trait.MutableVector.html#tymethod.mut_iter

(Unfortunately the docs are not good with respect to built-in types. 
Methods can only be implemented on built-ins via traits, and the 
indication that the built-ins implement a trait is only shown on the 
page for the trait itself.)



[Code below untested yet due to error.]

fn expand (&mut self) {
// A mod table expands by doubling its list bucket capacity.
// Double array of lists, initially filled with NO_CELL values.
let NO_CELL : uint = -1 as uint;// flag for end-of-list
self.n_lists *= 2;
self.lists = vec::from_elem(self.n_lists, NO_CELL);

// replace cells into correct lists according to new key modulo.
let mut i_list : uint;
let mut i_cell : uint = 0;
for cell in self.cells.iter() {
// Link cell at start of correct list.
i_list = cell.key % self.n_lists;
cell.i_next = self.lists[i_list];   // ERROR ***
self.lists[i_list] = i_cell;
i_cell += 1;
}
}

I take the opportunity to ask whether there's an (i, item) iterator 
for arrays.


You can use the .enumerate() method on iterators, so

for (i, cell) in self.cells.mut_iter().enumerate() {
...
}

http://static.rust-lang.org/doc/master/std/iter/trait.Iterator.html#method.enumerate


(Also, you will possibly meet with mutable borrow complaints (I'm not 
100% sure of this), which you can address by destructuring self, so that 
the compiler knows all the borrows are disjoint:


let StructName { lists: ref mut lists, cells: ref mut cells, 
n_lists: ref mut n_lists } = *self;


and then use `lists`, `cells` and `n_lists` (they are just &mut 
references pointing to the corresponding fields of self).)
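
In today's syntax the same trick is shorter, because destructuring through 
&mut self borrows each field separately (the field names here just mirror 
the example above, and the body is only a placeholder):

    struct Table { lists: Vec<usize>, cells: Vec<usize>, n_lists: usize }

    fn expand_sketch(t: &mut Table) {
        // One pattern, three disjoint &mut borrows of the fields.
        let Table { lists, cells, n_lists } = t;
        *n_lists *= 2;
        lists.push(cells.len());
    }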


Huon

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] About owned pointer

2013-11-08 Thread Huon Wilson

On 09/11/13 10:06, Igor Bukanov wrote:

On 8 November 2013 23:10, Oren Ben-Kiki  wrote:

Yes, the down side is another level of indirection. This could be optimized
away for &'static str, but not for &str. Good point.

The level of indirection comes from passing strings as &str, not just
as a plain str. But this raises a question: for immutable values,
implementing an &T parameter as T should not be observable from safe
code, right? If so, why not declare that &T as a parameter is not a
pointer but rather a hint to use the fastest way to pass an instance
of T to the function. Then one can use &str as a parameter without any
performance impact due to indirection even if str itself is
fixed-sized 3-word block pointing to the heap allocated data.
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


`&T` is pointer-sized but `T` isn't always.

(I believe that LLVM will optimise references to pass-by-value in 
certain circumstances; presumably when functions are internal to a 
compilation unit.)


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] About owned pointer

2013-11-08 Thread Huon Wilson

On 09/11/13 09:44, Patrick Walton wrote:

On 11/8/13 2:43 PM, Huon Wilson wrote:

This will make transmuting ~ to get a * (e.g., to allocate memory for
storage in an Rc, with automatic clean-up by transmuting back to ~ on
destruction) harder to get right, won't it?


You'll want a `Sized` bound, I think.

Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Ah, of course. Presumably this means that `Rc<Trait>` or `Rc<[T]>` would 
require a separate code-path?



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] About owned pointer

2013-11-08 Thread Huon Wilson

On 09/11/13 09:27, Patrick Walton wrote:

On 11/8/13 2:12 PM, Oren Ben-Kiki wrote:

Is there a place where one can see new proposals (is it all in the
issues list in github and/or here, or is there a 3rd place?)

E.g. there's the whole lets-get-rid-of-@, and now is the 1st time I
heard there's a "dynamically sized types" proposal... well, other than
in passing in this thread, that is.


Here's the stuff we have on dynamically sized types (DSTs):

https://github.com/mozilla/rust/issues/6308

http://smallcultfollowing.com/babysteps/blog/2013/04/30/dynamically-sized-types/ 



http://smallcultfollowing.com/babysteps/blog/2013/05/13/recurring-closures-and-dynamically-sized-types/ 



Notice that the representation changes discussed in Niko's initial 
blog post are out of date. We aren't planning to do them, since there 
doesn't seem to be a soundness problem with having smart pointers like 
like `~` having different representations depending on what they point 
to.


This will make transmuting ~ to get a * (e.g., to allocate memory for 
storage in an Rc, with automatic clean-up by transmuting back to ~ on 
destruction) harder to get right, won't it?




Patrick

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] struct def

2013-11-06 Thread Huon Wilson

On 06/11/13 18:46, spir wrote:


I can write this:
struct Points {xs:~[uint], ys:~[uint]}
fn main () {
   let mut ps = Points{xs:~[1u], ys:~[1u]};
   ...
}

But I cannot write that:
struct Points<T> {xs:~[T], ys:~[T]}
fn main () {
   let mut ps = Points{xs:~[1u], ys:~[1u]};
   ...
}

In the second case, I get the error:
sparse_array.rs:106:31: 106:32 error: expected one of `; }` but found `:`
sparse_array.rs:106let mut ps = Points{xs:~[1u], ys:~[1u]};
   ^


The correct syntax is `let mut ps = Points::<uint> { xs: ~[1u], ys: 
~[1u] };`, but this actually does nothing, it's totally ignored ( 
https://github.com/mozilla/rust/issues/9620 ). In general, type 
inference means that you can normally just write `let mut ps = Points { 
xs: ~[1u], ys: ~[1u] }` even with the generic declaration of `Points`.




Sorry to bother you with that, I find myself unable to find the right 
syntactic schema (and could not find any example in any doc online). 
I'm blocked, stupidly.


spir@ospir:~$ rust -v
rust 0.8
host: x86_64-unknown-linux-gnu


Also, I have a general problem with writing struct instances with the 
type apart; meaning, without any type param, I get the same error:

struct Points {xs:~[uint], ys:~[uint]}

fn main () {
   let mut ps : Points = {xs:~[1], ys:~[1]};
   ...
}
==>
parse_array.rs:106:28: 106:29 error: expected one of `; }` but found `:`
sparse_array.rs:106let mut ps : Points = {xs:~[1], ys:~[1]};
^


You need to write the struct name always, since { } is valid expression, 
e.g. `let x = { foo(); bar(); baz() };`. So, in this case, `let mut ps: 
Points = Points { xs: ~[1], ys: ~[1] };`.




More generally, I don't know why there are 2 syntactic schemas to 
define vars. I would be happy with the latter alone (despite the 
additional pair of spaces) since it is more coherent and more general 
in allowing temporarily uninitialised declarations.


Which two schema are you referring to? `let x: Type = value;` vs. `let x 
= value;`? If so, they're actually the same, the first just has the 
optional type specified.


(One can provide temporarily uninitialised declarations by just `let 
x;`, as long as the compiler has enough information to (a) know that the 
variable is assigned before every use, and (b) infer the type.)
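
For example, this is accepted because every path assigns the variable 
exactly once before it is read, and the type is inferred from those 
assignments:

    fn main() {
        let label;                    // declared, not yet initialised
        if std::env::args().count() > 1 {
            label = "with arguments";
        } else {
            label = "no arguments";
        }
        println!("{}", label);        // initialised on every path above
    }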



Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Separating heaps from tasks

2013-11-04 Thread Huon Wilson

On 04/11/13 20:09, Oren Ben-Kiki wrote:
ARCs have their place, sure! But "letting it leak" isn't acceptable in 
my case.


Instead, in my use case, "no deletes  until the whole heap is 
released" makes way more sense (heaps are small, grow a bit, and get 
released). Since the lifetime of the object becomes == the lifetime of 
the heap, there's no issue with cycles. There's only an issue with 
multiple mutations, which like I said only needs a bit per pointer 
(and a non-atomic one at that as each heap is accessed by one thread - 
the only thing that gets sent between tasks is the whole heap!).


So... different use cases, different solutions. ARC is a different 
trade-off. I guess the right thing to do would be to implement some 
"sufficiently smart" AppendOnlyHeap pointer, but this seems hard to 
do (same heap can hold objects of multiple types, etc.) so for now I 
have some AlmostSafeHeapPointer instead :-(


Language support for heaps-separate-from-tasks would have solved it 
(and a bit more)...




Is this essentially an "arena allocator" where one can transfer the 
whole arena and all the objects allocated in it into another task?

___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] real-time programming? usability?

2013-10-18 Thread Huon Wilson

On 19/10/13 10:37, Jerry Morrison wrote:


  * Use postfix syntax for pointer dereference, like in Pascal:
 (~rect).area() becomes  rect~.area() . That reads left-to-right
with nary a precedence mistake.

While Rust’s auto-dereference feature and type checker will
sometimes catch that mistake, it's better to just fix the failure
mode. All security holes in the field got past the type checker
and unit tests.



Do you realise that `~rect` means "create an owned box [a pointer] that 
contains `rect`" and is not a dereference in Rust? (That is performed 
by `*rect`.)




  * Don’t let ; default its second operand. Require an explicit value,
even if (). That fixes an opportunity to goof that might be
frequent among programmers used to ending every statement with a ;.



`;` isn't a binary operator, and anyway in `a; b` the `a` doesn't affect the 
behaviour of `b` at all. Could you describe what you mean a little more?


(Also the type system means that writing `fn foo() -> int { 1; }` (with 
the extra & incorrect semicolon) is caught immediately.)



  * AIUI,  let mut x, y defines 2 mutable variables but  |mut x, y|
defines mutable x and immutable y. This is harder to learn and
easier to goof.



`let mut x, y` doesn't exist at the moment precisely because of the 
confusion; one has to write `let mut x; let mut y;`. (Relatedly There is 
a proposal to make `mut ` a valid pattern, so that `let (mut x, 
y)` is valid: https://github.com/mozilla/rust/issues/9792 )




Thanks for listening!

--
   Jerry


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] about unicode problem

2013-10-02 Thread Huon Wilson

On 03/10/13 12:50, leef huo wrote:
I want to print "hello world, 世界" (Chinese) to the console, but the 
result is garbled (messy code). Where in the following code is the problem?

fn main() {
let s=~"hello,世界";
println(s);
}
Should output:hello,世界

but it print " hello,涓栫晫 " to the console.



Is the file saved as UTF-8, and is your console using UTF-8? Rust 
essentially only supports UTF-8 for strings at the moment.
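
A quick way to check what the program itself sees, assuming the source 
file really is saved as UTF-8:

    fn main() {
        let s = "hello,世界";
        println!("{}", s);                                           // needs a UTF-8 console
        println!("{} bytes, {} chars", s.len(), s.chars().count());  // 12 bytes, 8 chars
    }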


Huon
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] The new format!

2013-09-30 Thread Huon Wilson

On 01/10/13 16:26, Oren Ben-Kiki wrote:
Perhaps if the type system was smart enough to provide a default 
implementation of the Default trait for everything that had the ToStr 
trait, and allowing overrides for specific types?


I know that currently this sort of inference is not supported, but it 
is intended that it would be possible in the future, right?


I think we should just replace ToStr and the #[deriving] with Default 
and a default method .to_str() on that trait, since ToStr's current 
design makes it very allocation-heavy (it has to allocate a new ~str for 
each subfield, rather than just appending to a common buffer as using 
the new format infrastructure would allow).
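
(That is roughly the design that later landed: a single Display impl 
writes into a shared buffer, and a blanket ToString impl supplies 
.to_string() on top of it. A sketch in current syntax:)

    use std::fmt;

    struct Point { x: i32, y: i32 }

    impl fmt::Display for Point {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "({}, {})", self.x, self.y)   // appends into the caller's buffer
        }
    }

    fn main() {
        let p = Point { x: 1, y: 2 };
        assert_eq!(p.to_string(), "(1, 2)");        // via the blanket impl<T: Display> ToString
    }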



Also, not directly related to this exact discussion, but we could 
probably cope with having fewer format specifiers, e.g. format!("{:b}", 
true) could just be format!("{}", true), and similarly for `c`. (and 
even `s` itself!)


___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev

