Re: [rust-dev] [ANN] Initial Alpha of Cargo

2014-06-24 Thread SiegeLord
It wasn't clear from the documentation I read, but are multi-package 
repositories supported? The manifest format, in particular, doesn't seem to 
mention it (unless the manifest format is also incomplete).

-SL
___
Rust-dev mailing list
Rust-dev@mozilla.org
https://mail.mozilla.org/listinfo/rust-dev


Re: [rust-dev] Integer overflow, round -2147483648

2014-06-22 Thread SiegeLord

On 06/22/2014 11:32 AM, Benjamin Striegel wrote:

This is a mistaken assumption. Systems programming exists on the extreme
end of the programming spectrum where edge cases are the norm, not the
exception, and where 80/20 does not apply.


Even in systems programming not every line is going to be critical for 
performance. There is still going to be a distribution of some lines 
just taking more time than others. Additionally, in a single project, 
there's a nontrivial cost in using Rust for the 20% of code that's fast 
and using some other language for the remaining 80%. How are you going 
to transfer Rust's trait abstractions to, e.g., Python?


> If you don't require absolute speed, why are you using Rust?

Because it's a nice, general purpose language? "Systems programming 
language" is a statement about capability, not a statement about the sole 
type of programming the language supports.


C++ can be and is used effectively in applications where speed is of the 
essence and in applications where speed doesn't matter. Is Rust going to 
be purposefully less generally useful than C++? There's always this talk 
of "C++ programmers won't use Rust because of reason X". Which C++ 
programmers? In my experience the vast majority of C++ programmers don't 
push C++ to its performance limits. Are they using the wrong language 
for the job? I don't think so as there are many reasons to use C++ 
beside its speed potential.


Rust will never become popular if it caters to the tiny percentage of 
C++ users who care about the last few percent of speed while alienating 
everybody else (via language features or statements like yours). The 
better goal is to a) enable both styles of programming and b) make the 
super-fast style easy enough that everybody uses it.


-SL


Re: [rust-dev] Why are generic integers not usable as floats?

2014-06-20 Thread SiegeLord

On 06/20/2014 07:36 AM, Nathan Typanski wrote:

On 06/19, Benjamin Striegel wrote:

I'm actually very pleased that floating point literals are entirely
separate from integer literals, but I can't quite explain why. A matter of
taste, I suppose. Perhaps it stems from symmetry with the fact that I
wouldn't want `let x: int = 1.0;` to be valid.


I agree that `let x: int = 1.0` should not be valid. But that is type
*demotion*, and with `let x: f32 = 1` we are doing type *promotion*.


This isn't promotion because 1.0 does not have a concrete type. E.g. 
consider this code:


let a: f32 = 1.0;
let mut b: f64 = 1.0;
b = a as f64; // Cast has to be here

Even though we used 1.0 to initialize both f32 and f64 variables, we 
can't assign f32 to f64 without a cast.


I'm in favor of this unification and I think somebody should write an 
RFC for this.


-SL


Re: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?)

2014-06-19 Thread SiegeLord

On 06/19/2014 07:59 AM, SiegeLord wrote:

I will note that you could very well implement a by-value self operator
overload trait


Forgot to finish this one. I was going to go into how you could 
implement it for a &Foo to get your 'by-ref' behavior back. Of course 
this ruins generics, but they are tricky regardless (see the rest of 
that email).


-SL



Re: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?)

2014-06-19 Thread SiegeLord

On 06/19/2014 07:08 AM, Sebastian Gesemann wrote:

No, it's more like

   a + x * (b + x * (c + x * d))


It can't be that and be efficient as it is right now (returning a clone 
for each binop). I am not willing to trade efficiency for sugar, 
especially not when the trade is this bad (in fact, that's the entire point 
of this thread). In my matrix library, through the use of lazy evaluation, 
I can make my code run as fast as a manual loop implementation would. 
This would not be the case here.


I will note that you could very well implement a by-value self operator 
overload trait



The difference is that the short operator-based expression *works*
right now whereas forcing a "moving self type" on to people makes this
expression not compile for types that are not Copy. So, given the two
options, I actually prefer the way things are right now.


It's inefficient, so it doesn't 'work'.


I think, to convince people (either way) a deeper analysis is needed.


I don't think merely changing the type of self/arg to be by move is the 
only solution, or the best solution. It does seem to be the case that 
different operators benefit from reusing temporaries differently, but 
it's clear that *some* operators definitely do (e.g. thing/scalar 
operators, addition/subtraction, element-wise matrix-operators) and the 
current operator overloading traits prevent that optimization without 
the use of RefCell or lazy evaluation. Both of those have non-trivial 
semantic costs. In the case of operators that do benefit from reusing 
temporaries, doing so via moving seems very natural and relatively clean.


Regardless, bignum and linear-algebra are prime candidates for operator 
overloading, but the way it is done now by most implementations is just 
unsatisfactory. Maybe bignums in std::num and all the linear algebra 
libraries could be converted to use a lazy evaluation, thus avoiding 
this issue of changing the operator overload traits. Alternatively, 
maybe the whole operator-overloading-via-traits idea is broken anyway 
since operators do have such different semantics for different types (in 
particular, the efficient semantics of operators differ between built-in 
numeric types and complicated library numeric types). Maybe adding 
attributes to methods (e.g. #[operator_add] fn my_add(self, whatever) {} 
) is more flexible since it is becoming clear that generic numeric code 
(generic to bignums and normal integers and matrices) might not be a 
possibility.


I should note that it is already an impossibility in many cases (I 
guarantee that my lazy matrix library won't fulfill the type bounds of 
any function due to how its return types differ from the argument types).


-SL


Re: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?)

2014-06-19 Thread SiegeLord

On 06/17/2014 07:41 AM, Vladimir Matveev wrote:

Overloading Mul for matrix multiplication would be a mistake, since that 
operator does not act the same way multiplication acts for scalars.


I think that one of the main reasons for overloading operators is not their 
genericity but their usage in the code.

 let a = Matrix::new(…);
 let x = Vector::new(…);
 let b = Vector::new(…);
 let result = a * x + b;

Looks much nicer than

 let result = a.times(x).plus(b);

In mathematical computations you usually use concrete types, and having 
overloadable operators just makes your code nicer to read.


Fair enough (indeed I don't think the current operator overloading is 
usable for generics), but I still stand by my point. It is just more 
useful in practice to sugar the element-wise multiplication than matrix 
multiplication (i.e. I find that I use the elementwise multiplication a 
lot more often than matrix multiplication). This isn't unprecedented, as 
Python's Numpy library does this too for its multi-dimensional array 
class (notably it doesn't do this for the dedicated matrix class... but 
I found it to be very limited and not useful in generic code).


-SL




Re: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?)

2014-06-17 Thread SiegeLord

On 06/16/2014 06:17 PM, Cameron Zwarich wrote:

On Jun 16, 2014, at 3:03 PM, Cameron Zwarich wrote:
I stated the right case, but the wrong reason. It’s not for vectorization, it’s 
because it’s not easy to reuse the storage of a matrix while multiplying into 
it.


Overloading Mul for matrix multiplication would be a mistake, since that 
operator does not act the same way multiplication acts for scalars. I.e. 
you'd overload it, but passing two matrices into a generic function 
could do very unexpected things if the code, e.g., relies on the 
operation being commutative. Additionally, if your matrices contain the 
size in their type, then some generic code wouldn't even accept them. 
Matrix scalar multiplication is a better candidate for the Mul overload.
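As a sketch of that suggestion (modern post-1.0 Rust syntax, where `std::ops::Mul` takes `self` by value; the `Matrix` type here is illustrative, not from any library in the thread):

```rust
use std::ops::Mul;

#[derive(Clone, Debug, PartialEq)]
struct Matrix {
    data: Vec<f64>,
}

// Matrix * scalar behaves like ordinary multiplication (it distributes and
// commutes with scalars), so it is a safer fit for `Mul` than matrix * matrix.
impl Mul<f64> for Matrix {
    type Output = Matrix;
    fn mul(mut self, rhs: f64) -> Matrix {
        for v in self.data.iter_mut() {
            *v *= rhs;
        }
        self // by-value self: the operand's storage is reused
    }
}

fn main() {
    let m = Matrix { data: vec![1.0, 2.0, 3.0] };
    let m2 = m * 2.0;
    assert_eq!(m2.data, vec![2.0, 4.0, 6.0]);
}
```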


-SL


Re: [rust-dev] Fwd: &self/&mut self in traits considered harmful(?)

2014-06-17 Thread SiegeLord

On 06/16/2014 07:03 PM, Sebastian Gesemann wrote:

Good example! I think even with scalar multiplication/division for
bignum it's hard to do the calculation in-place of one operand.


Each bignum can carry with it some extra space for this purpose.



Suppose I want to evaluate a polynomial over "BigRationals" using
Horner's method:

   a + x * (b + x * (c + x * d))

What I DON'T want to type is

   a + x.clone() * (b + x.clone() * (c + x.clone() * d))


But apparently you want to type

a.inplace_add(x.clone().inplace_mul(b.inplace_add(x.clone().inplace_mul(c.inplace_add(x.clone().inplace_mul(d))))))

Because that's the alternative you are suggesting to people who want to 
benefit from move semantics. Why would you deliberately set up a 
situation where less efficient code is much cleaner to write? This 
hasn't been the choice made by Rust in the past (consider the 
overflowing arithmetic getting sugar while the non-overflowing one did not).


-SL



Re: [rust-dev] &self/&mut self in traits considered harmful(?)

2014-06-11 Thread SiegeLord

On 06/11/2014 10:10 AM, Sebastian Gesemann wrote:

On Wed, Jun 11, 2014 at 3:27 PM, SiegeLord wrote:

[...] Along the same lines, it is not immediately obvious
to me how to extend this lazy evaluation idea to something like num::BigInt.
So far, it seems like lazy evaluation will force dynamic dispatch in that
case which is a big shame (i.e. you'd store the operations in one array,
arguments in another and then play them back at the assignment time).


I haven't tried something like expression templates in Rust yet. How
did you come to the conclusion that it would require dynamic dispatch?


It's just the first idea I had for how this could work, but you're 
right, I can envision a way to do this without using dynamic dispatch. 
It'd look something like this: 
https://gist.github.com/SiegeLord/f1af81195df89ec04d10 . So, if nothing 
comes out of this discussion, at least you'd be able to do that. Note 
that the API is uglier, since you need to call 'eval' explicitly. 
Additionally, you need to manually borrow 'm' because you can't specify 
the lifetime of the &self argument in mul (another problem with 
by-ref-self methods).
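The linked gist isn't reproduced here, but the statically dispatched expression-template idea can be sketched like this (modern Rust syntax; `Vector`/`ScaledVec` are illustrative names, and `eval` must be called explicitly, as noted above):

```rust
use std::ops::Mul;

struct Vector(Vec<f64>);

// Lazy scalar-multiply node: borrows the vector, folds the scalars together,
// and does no element-wise work until `eval` (static dispatch, no trait objects).
struct ScaledVec<'a> {
    v: &'a Vector,
    k: f64,
}

impl<'a> Mul<f64> for &'a Vector {
    type Output = ScaledVec<'a>;
    fn mul(self, k: f64) -> ScaledVec<'a> {
        ScaledVec { v: self, k }
    }
}

impl<'a> Mul<f64> for ScaledVec<'a> {
    type Output = ScaledVec<'a>;
    fn mul(self, k: f64) -> ScaledVec<'a> {
        ScaledVec { v: self.v, k: self.k * k }
    }
}

impl<'a> ScaledVec<'a> {
    // One element-wise pass over the data, however long the operator chain was.
    fn eval(&self) -> Vector {
        Vector(self.v.0.iter().map(|x| x * self.k).collect())
    }
}

fn main() {
    let m = Vector(vec![1.0, 2.0, 3.0]);
    // `m` must be borrowed explicitly, and `eval` is an explicit call.
    let m2 = (&m * 2.0 * 5.0 * 10.0).eval();
    assert_eq!(m2.0, vec![100.0, 200.0, 300.0]);
}
```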


-SL



[rust-dev] &self/&mut self in traits considered harmful(?)

2014-06-11 Thread SiegeLord
First, let me begin with a small discussion about C++ rvalue references. 
As some of you know, they were introduced to C++ in part to solve 
problems like this:


Matrix m;
m.data = {1.0, 2.0, 3.0};
Matrix m2 = m * 2.0 * 5.0 * 10.0;

Before C++11, in most implementations the multiplications on the third line 
would create two (unnecessary) temporary copies of the Matrix, causing 
widespread inefficiency if Matrix was large. By using rvalue references 
(see the implementation in this gist: 
https://gist.github.com/SiegeLord/85ced65ab220a3fdc1fc ) we can reduce the 
number of copies to one. What the C++ code does is that the first 
multiplication (* 2.0) creates a copy of the matrix, and the remaining 
multiplications move that copy around.


If you look at the implementation, you'll note how complicated the C++ 
move semantics are compared to Rust's (you have to use std::move 
everywhere, define move-constructors and move-assignment with 
easy-to-get-wrong implementations etc.). Since Rust has simpler move 
semantics, can we do the same thing in Rust?


It turns out we cannot, because the operator overloading in Rust is done 
by overloading a trait with a method that takes self by reference:


pub trait Mul<RHS, Result>
{
fn mul(&self, rhs: &RHS) -> Result;
}

This means that the crucial step of moving out from the temporary cannot 
be done without complicated alternatives (explained at the end of this 
email). If we define a multiplication trait that takes self by value, 
however, then this is possible and indeed relatively trivial (see the 
implementation here: 
https://gist.github.com/SiegeLord/11456760237781442cfe ). This code will 
act just like the C++ version did: it will copy during the first move_mul 
call, and then move the temporary around:


let m = Matrix{ data: vec![1.0f32, 2.0, 3.0] };
let m2 = (&m).move_mul(2.0).move_mul(5.0).move_mul(10.0);
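A sketch of the same pattern (not the gist's actual code; modern Rust syntax, with the `MoveMul` trait written out): the `&Matrix` impl clones once, and the `Matrix` impl then moves and mutates the temporary, so the whole chain copies exactly once.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Matrix {
    data: Vec<f32>,
}

// By-value-self multiplication trait, as described in the email.
trait MoveMul<RHS, Res> {
    fn move_mul(self, rhs: RHS) -> Res;
}

impl MoveMul<f32, Matrix> for Matrix {
    fn move_mul(mut self, rhs: f32) -> Matrix {
        for v in self.data.iter_mut() {
            *v *= rhs;
        }
        self // the temporary is moved along, its storage reused
    }
}

impl<'a> MoveMul<f32, Matrix> for &'a Matrix {
    fn move_mul(self, rhs: f32) -> Matrix {
        self.clone().move_mul(rhs) // the single copy in the whole chain
    }
}

fn main() {
    let m = Matrix { data: vec![1.0f32, 2.0, 3.0] };
    let m2 = (&m).move_mul(2.0).move_mul(5.0).move_mul(10.0);
    assert_eq!(m2.data, vec![100.0, 200.0, 300.0]);
    assert_eq!(m.data, vec![1.0, 2.0, 3.0]); // original untouched
}
```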

So there's nothing in Rust move semantics which prevents this useful 
pattern, and it'd be possible to do that with syntax sugar if the 
operator overload traits did not sabotage it. Pretty much all the 
existing users (e.g. num::BigInt and sebcrozet's nalgebra) of operator 
overloading traits take the inefficient route of creating a temporary 
copy for each operation (see 
https://github.com/mozilla/rust/blob/master/src/libnum/bigint.rs#L283 
and 
https://github.com/sebcrozet/nalgebra/blob/master/src/structs/dmat.rs#L593 
). If the operator overloading traits do not allow you to create 
efficient implementations of BigNums and linear algebra operations (the 
two use cases that justify even *having* operator overloading as a 
language feature), why even have that feature?


I think this goes beyond just operator overloading, however, as these 
kinds of situations may arise in many other traits. By defining trait 
methods as taking &self and &mut self, we are preventing these useful 
optimizations.


Aside from somewhat more complicated impl's, are there any downsides to 
never using anything but by value 'self' in traits? If not, then I think 
that's what they should be using to allow people to create efficient 
APIs. In fact, this probably should extend to every member generic 
function argument: you should never force the user to tie their hands by 
using a reference. Rust has amazing move semantics, I just don't see 
what is gained by abandoning them whenever you use most traits.


Now, I did say there are complicated alternatives to this. First, you 
actually *can* move out through a borrowed pointer using 
RefCell<Matrix>. You can see what this looks like here: 
https://gist.github.com/SiegeLord/e09c32b8cf2df72b2422 . I don't know 
how efficient that is, but it is certainly more fragile. With my 
by-value MoveMul implementation, the moves are checked by the 
compiler... in this case, they are not. It's easy to end up with a 
moved-out, dangling Matrix. This is what essentially has to be done, 
however, if you want to preserve the general semantic of the code.
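A minimal sketch of that fragility, using today's `std::cell::RefCell` API rather than the gist's code (the `Matrix` type and function names are illustrative): moving out through a shared reference compiles, but the compiler no longer tracks the move.

```rust
use std::cell::RefCell;

#[derive(Debug, PartialEq, Default)]
struct Matrix {
    data: Vec<f32>,
}

// Move out through a shared reference by swapping a default value in.
// The borrow checker no longer tracks the move: reading the cell again
// yields the empty "moved-out" matrix, a bug it would normally reject.
fn scale_through_ref(cell: &RefCell<Matrix>, k: f32) -> Matrix {
    let mut m = cell.replace(Matrix::default()); // move out, leave a husk
    for v in m.data.iter_mut() {
        *v *= k;
    }
    m
}

fn main() {
    let cell = RefCell::new(Matrix { data: vec![1.0, 2.0, 3.0] });
    let m2 = scale_through_ref(&cell, 10.0);
    assert_eq!(m2.data, vec![10.0, 20.0, 30.0]);
    // The dangling, moved-out matrix the email warns about:
    assert!(cell.borrow().data.is_empty());
}
```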


Alternatively, you can use lazy evaluation/expression templates. This is 
the route I take in my linear algebra library. Essentially, each 
operation returns a struct (akin to what happens with many Iterator 
methods) that stores the arguments by reference. When it comes time to 
perform assignment, the chained operations are performed element-wise. 
There are no unnecessary copies and it optimizes well. The problem is 
that it's a lot more complicated to implement and it pretty much forces 
you to use interior mutability (just Cell this time) if you don't want a 
crippled API. The latter bit introduces a whole slew of subtle bugs (in 
my opinion they are less common than the ones introduced by RefCell). 
Also, I don't think expression templates are the correct way to wrap, 
e.g., a LAPACK library. I.e. they only work well when you're 
implementing the math yourself, which is not ideal for the more 
complicated algorithms. Along the same lines, it is not immediately 
obvious to me how to extend this lazy evaluation idea to something like 
num::BigInt. So far, it seems like lazy evaluation will force dynamic 
dispatch in that case, which is a big shame (i.e. you'd store the 
operations in one array, the arguments in another and then play them 
back at assignment time).

Re: [rust-dev] Specifying lifetimes in return types of overloaded operators

2014-04-16 Thread SiegeLord

On 04/16/2014 03:09 PM, Brendan Zabarauskas wrote:

For one, the Index trait is in dire need of an overhaul.

In respect to the operator traits in general, I have actually been thinking of 
submitting an RFC proposing that they take thier parameters by-value instead of 
by-ref. That would remove the auto-ref behaviour of the operators which is more 
consistent with the rest of Rust:


 impl<'a, 'b, T> Mul<&'b Mat<T>, Mat<T>> for &'a Mat<T> {
 fn mul(&'a self, other: &'b Mat<T>) -> Mat<T> { ... }
 }

 let m2: Mat<_> = &m0 * &m1;



It's not super clear to me how this is different from what you can do 
right now, e.g. here's the Mul implementation from my linear algebra 
library:


impl<'l,
 RHS: VectorGet + Clone>
Mul<RHS, VectorBinOp<&'l Vector, RHS, OpMul>> for
&'l Vector
{
fn mul(&self, rhs: &RHS) -> VectorBinOp<&'l Vector, RHS, OpMul>
{
VectorBinOp::new(self.clone(), rhs.clone(), OpMul::new())
}
}

impl<'l>
VectorGet for
&'l Vector
{
...
}

The usage syntax (with the explicit borrowing and all) is the same.

-SL


Re: [rust-dev] Removing ~"foo"

2014-04-15 Thread SiegeLord

On 04/15/2014 01:12 PM, Patrick Walton wrote:

The new replacement for `~"foo"` will be `"foo".to_owned()`. You can
also use the `fmt!()` macro or the `.to_str()` function. Post-DST, you
will likely also be able to write `Heap::from("foo")`.


Why not `box "foo"`? Is that scheduled to break?

-SL



Re: [rust-dev] Compiling with no bounds checking for vectors?

2014-03-27 Thread SiegeLord

On 03/27/2014 10:04 PM, Tommi Tissari wrote:

By the way, D is memory safe (although it's opt-in) and it has this 
noboundscheck flag. So I don't see what the problem is.


Rust takes a very different stance on safety than D does (e.g. making it 
safe by default). In the D community, my perception was that for any 
benchmark written in D the suggestion has been to turn on that 
noboundscheck flag in order to get extra speed, forming a perception 
that D is only fast if you turn off this safety feature completely. Not 
only is Rust's approach more fine-grained (although it is possible to do 
unsafe array access in specific locations in D as well), but it also 
encourages Rust to be fast /despite/ the safety features.


A big reason for Rust's existence is that it aims to provide C++-level 
performance while /at the same time/ providing safety. If the only way a 
Rust user can match C++ speed is by using 'unsafe' everywhere, then Rust 
will have failed, in my opinion. I didn't look at how the shootout 
benchmarks are implemented, but if I had any say, I'd forbid them from 
using any unsafe code (except for FFI where absolutely necessary) to 
prove the above point.


There are many ways you can avoid bounds checks in safe Rust code. For 
example, if you restrict yourself to using iterators instead of indexing 
operators, there will be no bounds checks. You can also code up 
abstractions (like I did in my matrix library) so that you don't need to 
pay the bounds check cost as much as well. I really don't think this is 
a real concern.
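A sketch of the iterator point in modern Rust (illustrative functions, not from any benchmark): the indexed version can pay a bounds check per access, while the `zip`-based version encodes the bounds once and compiles to an unchecked loop, all in safe code.

```rust
// Indexing `b[i]` can bounds-check on every access.
fn dot_indexed(a: &[f64], b: &[f64]) -> f64 {
    let mut s = 0.0;
    for i in 0..a.len() {
        s += a[i] * b[i]; // `b[i]` is bounds-checked each iteration
    }
    s
}

// Iterators: `zip` stops at the shorter slice, so no per-element checks
// are needed and the optimizer can elide them entirely.
fn dot_iter(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b.iter()).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = [1.0, 2.0, 3.0];
    let b = [4.0, 5.0, 6.0];
    assert_eq!(dot_indexed(&a, &b), 32.0);
    assert_eq!(dot_iter(&a, &b), 32.0);
}
```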


-SL


Re: [rust-dev] "Virtual fn" is a bad idea

2014-03-11 Thread SiegeLord

On 03/11/2014 04:52 PM, Brian Anderson wrote:

Fortunately, this feature is independent of others and we can feature
gate it until it's right.


I think that's the crux of the issue some have with this. If a whole 
other, completely disjoint path for inheritance and dynamic 
polymorphism is required for the sake of efficiency, then maybe trait 
objects should go?


-SL


Re: [rust-dev] Using Default Type Parameters

2014-02-03 Thread SiegeLord

On 02/03/2014 12:41 PM, Gábor Lehel wrote:

Possibly, but it's not particularly well-trodden ground (I think Ur/Web
might have something like it?).

And would you really want to write `HashMap`?


Naturally it would be optional, subject to some rules. I think this has 
nice parallels with the as-yet non-existent keyword/default function arguments.


-SL



Re: [rust-dev] Compile-time function evaluation in Rust

2014-01-29 Thread SiegeLord

On 01/29/2014 11:44 AM, Niko Matsakis wrote:

On Tue, Jan 28, 2014 at 07:01:44PM -0500, comex wrote:

Actually, Rust already has procedural macros as of recently.  I was
wondering whether that could be combined with the proposed new system.


I haven't looked in detail at the procedural macro support that was
recently added, but off hand I think I favor that approach. That is,
I'd rather compile a Rust module, link it dynamically, and run it as
normal, versus defining some subset of Rust that the compiler can
execute. The latter seems like it'll be difficult to define,
implement, and understand. Our experience with effect systems and
purity has not been particularly good, and I think staged compilation
is easier to explain and free from the twin hazards of "this library
function is pure but not marked pure" (when using explicit
declaration) or "this library function is accidentally pure" (when
using inference).



I was under the impression from some time ago that this was going to be 
the way CTFE is implemented in Rust. Having tried CTFE in D, I was not 
impressed by the nebulous definition of the constant language used 
there: it was never clear ahead of time what would work and what wouldn't 
(although maybe the problem won't be as big in Rust, as Rust is a 
smaller language). Additionally, it was just plain slow (you are 
essentially creating a very slow scripting language without a JIT).


It seems to me (judging by the size of the loadable procedural macro 
commit) that using the staged compilation approach will be easier to 
implement and be more powerful at the cost of, perhaps, less convenient 
usage.


-SL



Re: [rust-dev] Deprecating rustpkg

2014-01-28 Thread SiegeLord

On 01/27/2014 11:53 PM, Jeremy Ong wrote:

I'm somewhat new to the Rust dev scene. Would anybody care to summarize
roughly what the deficiencies are in the existing system in the interest
of forward progress? It may help seed the discussion for the next effort
as well.


I can only speak for myself, but here are some reasons why I abandoned 
rustpkg and switched to CMake.


Firstly, and overarchingly, it was the attitude of the project 
development with respect to issues. As a comparison, let me consider 
Rust the language. It is a pain to make my code pass the borrow check 
sometimes, the lifetimes are perhaps the most frustrating aspect of 
Rust. I put up with them however, because they solve a gigantic problem 
and are the keystone of Rust's safety-without-GC story. rustpkg also has 
many incredibly frustrating aspects, but they are there (in my opinion) 
arbitrarily and not as a solution to any real problem. When I hit them, 
I do not get the same sense of purposeful sacrifice I get with Rust's 
difficult points. Let me outline the specific issues I personally hit (I 
know of other ones, but I haven't encountered them personally).


Conflation of package id and source. That fact, combined with the fact 
that to depend on some external package you have to write `extern mod foo 
= "pkgid"`, meant that you needed to create bizarre directory structures 
to depend on locally developed packages (e.g. you'd have to put your 
locally developed project in a directory tree like so: 
github.com/SiegeLord/Project). This is not something I was going to do.


The package dependencies are written in the source file, which makes it 
onerous to switch between versions/forks. A simple package script would 
have solved it, but it wasn't present by design.


My repositories have multiple crates, and rustpkg is woefully 
under-equipped to handle that case. You cannot build them without 
dealing with pkg.rs, and using them from other projects seemed 
impossible too (the extern mod syntax wasn't equipped to handle multiple 
crates per package). This is particularly vexing when you have multiple 
example programs alongside your library. I was not going to split my 
repository up just because rustpkg wasn't designed to handle that case.


All of those points would be solved by having an explicit package 
description file/script which was THE overarching design non-goal of 
rustpkg. After that was made clear to me, I just ditched it and went to 
C++ style package "management" and a CMake build system.


-SL


Re: [rust-dev] Bitwise operations in rust.

2014-01-15 Thread SiegeLord

On 01/15/2014 11:29 AM, Nicolas Silva wrote:

I think enums are not a good fit for bitwise operations, it's not really
meant for that.


I came to the same conclusion and came up with a nice macro to automate 
that, seen here: 
https://github.com/SiegeLord/RustAllegro/blob/master/src/rust_util.rs#L3..L56 
and used here 
https://github.com/SiegeLord/RustAllegro/blob/master/src/allegro/internal/bitmap_like.rs#L9..L23 
.


This enables a nice, C-like API when composing/extracting flags via the 
bit-or and bit-and operators.
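The linked macro isn't reproduced here, but what it expands to can be sketched roughly like this (modern Rust syntax; the flag names are hypothetical, merely Allegro-flavored): a newtype over the raw bits with `|` and `&` overloaded.

```rust
use std::ops::{BitAnd, BitOr};

// Hypothetical stand-in for the macro's expansion: a typed flag set that
// still composes and extracts like a C bitmask.
#[derive(Clone, Copy, PartialEq, Debug)]
struct BitmapFlags(u32);

const MEMORY_BITMAP: BitmapFlags = BitmapFlags(1);
const KEEP_BITMAP_FORMAT: BitmapFlags = BitmapFlags(2);

impl BitOr for BitmapFlags {
    type Output = BitmapFlags;
    fn bitor(self, rhs: BitmapFlags) -> BitmapFlags {
        BitmapFlags(self.0 | rhs.0)
    }
}

impl BitAnd for BitmapFlags {
    type Output = BitmapFlags;
    fn bitand(self, rhs: BitmapFlags) -> BitmapFlags {
        BitmapFlags(self.0 & rhs.0)
    }
}

fn main() {
    let flags = MEMORY_BITMAP | KEEP_BITMAP_FORMAT; // compose
    assert_eq!(flags & MEMORY_BITMAP, MEMORY_BITMAP); // extract
}
```

The type-level wrapper keeps unrelated flag sets from being mixed, which a plain `u32` of enum casts would allow.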


-SL


Re: [rust-dev] RFC: Future of the Build System

2014-01-10 Thread SiegeLord

On 01/10/2014 06:19 AM, Robert Knight wrote:

Hello,

CMake does have a few things going for it:


One more consideration is that LLVM can be built with CMake afaik, so if 
we switch to CMake we may be able to drop the autotools dependency, 
which is a more annoying dependency to fulfill (on Windows) than CMake 
(I don't know if Rust has other components that require autotools though).


Along the same lines, we also require Python for whatever reason, so 
SCons would be a natural option too (it can't build LLVM though). I'd 
only use SCons conditional on it accepting a Rust dependency scanner 
into its source: using its current custom scanner infrastructure is not 
practical as I found out.


As for waf... they and Debian have been having a tiff (e.g. see 
http://waf-devel.blogspot.com/2012/01/debian.html , 
https://lists.debian.org/debian-devel/2012/02/msg00207.html ). I would 
not suggest it based on that.


-SL



Re: [rust-dev] Using CMake with Rust

2014-01-01 Thread SiegeLord

On 01/02/2014 12:05 AM, György Andrasek wrote:

The proper way to support a language in CMake is outlined in
`Modules/CMakeAddNewLanguage.txt`:


I was guided away from that method by this email: 
http://www.cmake.org/pipermail/cmake/2011-March/043444.html . My 
approach is amenable to generating files for alternative build systems, 
like ninja. If you are aware that that email is incorrect, I'm glad to 
be corrected. Independently of that email, I have looked into doing it 
that way, but I found that it just did not mesh with the Rust 
compilation model, and I saw no clear way to using the information given 
by 'rustc --dep-info' to inform the build system. Additionally, my 
macros allow an easy way of doing documentation generation, which that 
method doesn't clearly allow.


I'll be glad to be corrected on all those points though.

-SL



[rust-dev] Using CMake with Rust

2014-01-01 Thread SiegeLord
(Posting this here as I think it might be useful for some but I don't 
feel like getting a Reddit account and there's no rust-announce).


I've been experimenting with building Rust crates using some established 
build systems, focusing on SCons and CMake due to their popularity. 
Neither turned out to be a perfect fit for Rust, but CMake was 
marginally less bad so I chose it in the end. To that end I created some 
CMake modules that make that integration as painless as possible. 
Here's what a minimal CMakeLists.txt would look like for a single 
library crate project:


~~~
cmake_minimum_required(VERSION 2.8)
project(testlib NONE)

list(APPEND CMAKE_MODULE_PATH "${CMAKE_SOURCE_DIR}/cmake")
find_package(rustc)
find_package(rustdoc)
include(Rust)

set(RUSTC_FLAGS "-L${CMAKE_BINARY_DIR}/lib")

rust_crate_auto(src/lib.rs TARGET_NAME TESTLIB)

add_custom_target(library_target
  ALL
  DEPENDS ${TESTLIB_FULL_TARGET})

install(FILES ${TESTLIB_ARTIFACTS}
DESTINATION lib)
~~~

And then you'd do the usual out-of-source build. It will reconfigure the 
build system if any of the files referenced by src/lib.rs (as reported 
by rustc --dep-info) get changed. A more complete example that shows 
building several inter-dependent crates, documentation and tests can be 
seen here: https://github.com/SiegeLord/RustCMake . The modules for this 
to work are also found there.


Caveats: CMake doesn't know what Rust is, so it has to reconfigure the 
entire build system whenever you change any of the source files needed 
for *ANY* crate in your project (it doesn't need to do that with C/C++ 
because it has an internal dependency scanner). This won't be a big deal 
in small projects, and I find the convenience of using CMake to be worth 
it, but it does suggest that CMake is not the ultimate solution to 
building Rust projects.


-SL


Re: [rust-dev] [whoami] "crate", "package" and "module" confused me!

2013-12-15 Thread SiegeLord

On 12/15/2013 07:52 AM, Liigo Zhuang wrote:

Rust compiler compiles "crates", rustpkg manages "packages".

When developing a library for Rust, I write this code in lib.rs:
```
#[pkgid = "whoami"];
#[crate_type = "lib"];

```
Note, I set its "package" id, and set its "crate" type, and it is
compiled to an .so library. Now please tell me, what "it" is? A package?
A crate? Or just a library? If it's a package, why do I set its crate type?
If it's a crate, why do I set its package id? Confusing.


I agree that 'pkgid' is a really unfortunate name that stems from the 
confusion of whether or not a package is allowed to contain multiple 
crates. I think it should, but it feels like I'm in a minority. 'pkgid' 
is used to generate the crate output name, at least for libraries (i.e. 
lib<name>-<hash>-<version>.so). Frankly, I liked the old link metadata 
idea better where you specified the Author, Version and output name 
separately, and it was not conflated with the rustpkg idea. I believe 
the movement to #[pkgid] was driven by the desire for external tools to 
be able to compute the hash (e.g. see 
http://metajack.im/2013/12/12/building-rust-code--using-make/)... but I 
think that's a mistake. I really don't think this is a solution:


RUST_CRATE_PKGID = $(shell sed -ne 's/^#\[ *pkgid *= *"\(.*\)" *\];$$/\1/p' 
$(firstword $(1)))
RUST_CRATE_HASH = $(shell printf $(strip $(1)) | shasum -a 256 | sed -ne 
's/^\(.\{8\}\).*$$/\1/p')

_rust_crate_pkgid = $$(call RUST_CRATE_PKGID, $$(_rust_crate_lib))
_rust_crate_hash = $$(call RUST_CRATE_HASH, $$(_rust_crate_pkgid))

I'd rather just do:

RUST_CRATE_HASH = $(shell rustc --get-crate-hash $1)
_rust_crate_hash = $$(call RUST_CRATE_HASH, $$(_rust_crate_lib))

Personally, I'd remove the notion of a package id from the language, and 
leave it entirely up to the external package management tool to deal with.


That'd mean the removal of the unfortunate `extern mod foo = "pkgid";` 
syntax which conflates crates and packages (it could be replaced with 
`#[pkgid="pkgid"] extern mod foo;` if source annotation of external 
dependencies is really desired. It'd have to have different semantics in 
order not to repeat the confusion.).




And when use it ("whoami"), I must write the code:
```
extern mod whoami;
```
Now was it transmuted to a "module"?? If it's a module, why haven't I
set its module id and module type? If it's not a module, why not use
`extern package whoami` or `extern crate whoami` here? Full of confusion.

Can you please tell me WHAT it really is? Thank you.


For what it is worth, "extern mod" is slated to be changed to be 
"crate", I think: https://github.com/mozilla/rust/issues/9880 .


-SL


Re: [rust-dev] Rust crates and the module system

2013-12-13 Thread SiegeLord

On 12/13/2013 02:57 PM, Piotr Kukiełka wrote:

I understand that one could want flexibility and matching file with
package is not always optimal.
And it's not what I wanted to suggest in first place ;)

Let's clear one confusion first. When I said that I think *mod* and
*extern mod* are redundant, I thought about using them in the context of
importing.


I do not think it is correct to think of `mod foo;` as importing 
anything. Rust's module system is not based upon files: files are just a 
convenience. `mod foo;` is literally just a shortcut for 
`mod foo { <contents of foo.rs here> }`. You could write 
every crate in Rust as a single file per crate without changing the 
semantics at all.
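A minimal sketch of that equivalence, in today's Rust syntax (the module name is illustrative):

```rust
// `mod b;` in a file would splice in the contents of b.rs; writing the
// body inline is semantically identical.
mod b {
    pub fn answer() -> i32 {
        42
    }
}

fn main() {
    // Paths resolve the same whether `b` came from a file or was inline.
    assert_eq!(b::answer(), 42);
}
```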


> Now, I don't see a big difference between mod and extern mod, because 
in both cases the intention is very similar: you want to signal that 
the required modules are in an external crate.


`mod foo;` does not signify that modules are in an external crate, it 
signifies that they are in the same crate. There are some big 
differences depending on whether a module is in a separate crate or not. For 
example, if crate Crate1 has a trait T and a type A, Crate2 cannot 
implement trait T for type A. If T and A were in a different module of 
Crate2, then it would be able to implement T for A. This is a very 
important property of the crate system, and making it confusing by 
merging `mod` and `extern mod` is a bad idea, in my opinion.
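A sketch of that coherence property in today's syntax (trait and type names are illustrative): within one crate, any module may implement a local trait for a local type, while a second crate could not.

```rust
mod traits {
    pub trait Area {
        fn area(&self) -> f64;
    }
}

mod types {
    pub struct Square(pub f64);
}

mod glue {
    // Allowed: the trait and the type live in this same crate, even
    // though all three items sit in different modules. A *separate*
    // crate could not write this impl (the coherence rule).
    impl crate::traits::Area for crate::types::Square {
        fn area(&self) -> f64 {
            self.0 * self.0
        }
    }
}

fn main() {
    use traits::Area;
    assert_eq!(types::Square(3.0).area(), 9.0);
}
```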


> This means to make it work correctly I need to add extern mod clib; 
in both files (then it compiles and works).


No, that's not the case. To make it work you can do:

~~~
//- a.rs ---
mod b;

pub mod a_inner {
  pub fn a_func() {  ::b::clib::c_func2(); }
}

fn main() {
  a_inner::a_func();
}
~~~

A similar modification will work with moving `extern mod clib` into a.rs.

The 'notices' you've written suggest to me that you don't completely 
understand how the module system works today, which is very important to 
do before suggesting changes to it (even if it's flawed you still need 
to understand it). I suggest reading the relevant portion of the 
tutorial again. All 4 'notices' you mentioned may become clearer then.


-SL


Re: [rust-dev] Rust front-end to GCC

2013-12-03 Thread SiegeLord

On 12/03/2013 02:02 PM, Val Markovic wrote:

Agreed with Daniel. The D approach would be best: there is one frontend,
and dmd (frontend + proprietary backend), ldc (frontend + LLVM) and gdc
(frontend + GCC backend) all use it. This would be the best for the Rust
ecosystem, users and developers of the various compilers; there's no
duplication of efforts and critically, very few compiler-specific bugs.
One can reasonably assume that any code written for dmd will work with
ldc and gdc.


LDC and GDC do not use the same frontend by choice, but because 
historically every attempt to create a new D frontend has failed to bear 
fruit (SDC and DIL) in part because of a lack of a concrete 
specification and the resultant lack of clarity of how exactly some code 
should work. The reliance (over-reliance in my opinion) on the DMD 
frontend resulted in a lack of focus on creating a good language 
specification which to this day results in debates about what exactly 
constitutes a language change and what constitutes a bug fix.



Multiple independent compilers would require a common, detailed spec to
reach any level of interoperability. That comes with its own costs and
complexities and IMO I don't think they're worth it.


A new frontend implementation can motivate and help the writing of the 
spec to begin with, assuming Philip is willing to keep implementation 
notes as he goes along.


-SL


[rust-dev] Practical usage of rustpkg

2013-12-03 Thread SiegeLord
So I've been thinking for awhile about how one would actually use 
rustpkg. Let me first outline my use case. I have a library and a 
program that uses that library, both of which I host on GitHub. One of 
the features of the rustpkg system is that I should be able to write 
this to refer to the library in my program:


extern mod lib = "package_id";

Unfortunately, there is no obvious thing to put into the 'package_id' 
slot. There are two options:


First, I could use "github.com/SiegeLord/library" as my package_id. This 
is problematic, as it would require one of these sub-optimal courses of 
action:


- Stick the source of the library into workspace/src/library where I 
would actually develop and then use a duplicate package in the 
workspace/src/github.com/SiegeLord/library that would be created by 
rustpkg (the program is located in workspace/src/program). Somehow this 
duplicate package will be synced to the actual package: either through 
pushing to GitHub and then pulling somehow via rustpkg (this is less 
than ideal, as I may want to test WIP changes without committing them 
elsewhere/I may have no internet connection e.g. when traveling), or 
some manual, local git operation.


- Stick the source of the library into 
workspace/src/github.com/SiegeLord/library and develop the library 
there. There is no duplication, but it really seems bizarre to me to 
locate a project in a directory named like that. Also, I'd be a bit 
paranoid about rustpkg not realizing that I never want to communicate 
with GitHub and having it accidentally overwriting my local changes.


The second option is to use "library" as the package id. This allows me 
to locate my library in a logical location (workspace/src/library), but 
it prevents other users of my program from building it automatically. 
Essentially what they'll have to do is to manually check out the library 
repository inside their workspaces so as to create the 
workspace/src/library directory on their system: the `extern mod` syntax 
will not work otherwise.


I don't think any of these options are ideal. I don't want to suggest 
solutions to these issues because I'm not sure how things are supposed 
to be used/what the planned design is. Does anybody use rustpkg 
seriously today? Is everybody making workspaces with a github.com/ 
directory where they develop their software?


-SL


Re: [rust-dev] Danger of throwing exceptions through Rust code

2013-11-12 Thread SiegeLord

On 11/12/2013 10:18 PM, Patrick Walton wrote:

On 11/13/13 12:06 PM, Daniel Micay wrote:

If a library takes a callback, writing safe Rust bindings isn't going to
turn out well. Rust functions can fail, so the Rust API can't pass an
arbitrary function to the callback.


Yeah, this has been a concern for a while (years). Maybe we should have
an unsafe "catch" just for this case, to allow turning a Rust failure
into a C-style error code.

Patrick


Is that not, essentially, just spawning a task inside the Rust callback 
and catching the result (via task::try)?
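In today's Rust the analogous mechanism is std::panic::catch_unwind, which plays the role such an unsafe "catch" (or task::try) would here; a sketch of containing a failure at a C callback boundary (modern syntax, illustrative names):

```rust
use std::panic;

// A callback handed to a C library must not unwind across the FFI
// boundary; catch the panic and report a C-style error code instead.
extern "C" fn callback() -> i32 {
    match panic::catch_unwind(|| {
        panic!("failure inside Rust callback");
    }) {
        Ok(_) => 0,   // success
        Err(_) => -1, // failure reported as an error code
    }
}

fn main() {
    // Silence the default panic message for this sketch.
    panic::set_hook(Box::new(|_| {}));
    assert_eq!(callback(), -1);
}
```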


-SL



Re: [rust-dev] State of private

2013-11-08 Thread SiegeLord

I find that in several cases I would like to have sibling modules
access private members, that is, allow "foo::bar::Bar" access the
private members of "foo::baz::Baz", but disallow any code in
"qux::*" from accessing these members.


The solution to prevent qux::* from accessing privates of bar and
baz is to introduce a private barrier module, and manually re-export
a subset of the internal API across that barrier. It'll look sort of 
like this:

pub mod foo
{
   pub use self::barrier::bar::pub_bar;
   pub use self::barrier::baz::pub_baz;

   mod barrier
   {
   pub mod bar
   {
   pub fn priv_bar() {}
   pub fn pub_bar() { ::foo::barrier::baz::priv_baz(); }
   }

   pub mod baz
   {
   pub fn priv_baz() {}
   pub fn pub_baz() { ::foo::barrier::bar::priv_bar(); }
   }
   }
}

pub mod qux
{
   pub fn test()
   {
   // Can't do it, as barrier is private
   // ::foo::barrier::bar::priv_bar;
   ::foo::pub_bar();
   ::foo::pub_baz();
   }
}
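For comparison, today's Rust can state the same intent directly with pub(super) visibility instead of a barrier module (a sketch in modern syntax; the return values only exist to make the calls observable):

```rust
pub mod foo {
    pub mod bar {
        // Visible within `foo` (and thus to sibling `baz`), but not
        // outside it.
        pub(super) fn priv_bar() -> i32 { 1 }
        pub fn pub_bar() -> i32 { super::baz::priv_baz() + 1 }
    }

    pub mod baz {
        pub(super) fn priv_baz() -> i32 { 2 }
        pub fn pub_baz() -> i32 { super::bar::priv_bar() + 2 }
    }
}

fn main() {
    assert_eq!(foo::bar::pub_bar(), 3);
    assert_eq!(foo::baz::pub_baz(), 3);
    // foo::bar::priv_bar(); // error: only visible inside `foo`
}
```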

-SL


Re: [rust-dev] New privacy rules and crate-local scope

2013-10-10 Thread SiegeLord

On 10/10/2013 06:33 PM, Andrei de Araújo Formiga wrote:

(hit send by mistake)

Maybe I'm misunderstanding something, but you don't need to put
everything inside the private module. There's even an example in the
manual that does something like this:

// this is the crate root
mod internal
{
pub fn internal_1() { }
}

pub mod external
{
// public functions here will be available outside the crate

use internal; // pub functions in internal are now usable here

pub fn pub_api_1() { internal::internal_1() }
}


The external API may need to access private implementation. Writing an 
internal API just so that the external API can access that 
implementation seems even more disruptive than sticking everything into 
a private module. E.g. in my example you'd have to write an accessor for 
the member m of struct S.


-SL



Re: [rust-dev] New privacy rules and crate-local scope

2013-10-09 Thread SiegeLord

On 10/09/2013 11:48 PM, Alex Crichton wrote:

What you've described above is all correct and intended behavior. I
can imagine that it's difficult to port old libraries using the old
behavior, but the idea of the new rules is to as naturally as possible
expose the visibility of an item.


It wasn't so much that I was balking at the prospect of rewriting my 
code, as it was that the rewrite was ending up so voluminous and 
roundabout. That said, I think I came up with a solution that seems to 
resonate a bit better with me:


pub use mod_a = internal::mod_a::external;

mod internal
{
pub mod mod_a
{
pub struct S { priv m: int }

pub mod external
{
pub use super::S;
impl super::S
{
pub fn pub_api(&self) -> int { self.m }
}
}

pub fn crate_api_2() -> S { S{m: 0} }
}
}

It seems to be a pleasant inversion of what was done with the old 
approximate rules, except that you have to root the entire module tree 
in a private module to establish the visibility wall (which is still 
unfortunate as far as documentation goes).


-SL


[rust-dev] New privacy rules and crate-local scope

2013-10-09 Thread SiegeLord
I'll start right away with an example. Consider this crate that could 
have been written using the old privacy rules:


pub mod mod_a
{
pub struct S { priv m: int; }

impl S
{
pub fn pub_api(&self) -> int { self.m }
}

mod internal
{
use super::S;
impl S
{
/* This one doesn't actually work due to bugs */
pub fn crate_api_1() -> S { S{m: 0} }
}

pub fn crate_api_2() -> S { S{m: 0} }
}
}

Under the old privacy rules, items were accessible within a crate if 
they were pub'd, regardless of the intervening privacy. Note that 
crate_api_1() and crate_api_2() can only be used within the crate. Under 
the new privacy rules, this sort of looks like this:


mod internal
{
pub struct S { priv m: int; }

impl S
{
pub fn pub_api(&self) -> int { self.m }
}

pub fn crate_api_2() -> S { S{m: 0} }
}

pub mod mod_a
{
pub use internal::S;
}

The general idea is that you have a top-level, private module which 
contains the crate-scoped API. Note that I don't think you can even 
write crate_api_1 anymore. Also note how the public API moved inside the 
internal module: in fact, pretty much everything is now placed inside 
it, and the public API is now re-exported elsewhere (reminiscent of the 
old export lists). If you weren't planning to have crate-local API, then 
you'd really have to restructure your code to support it... so much so, 
that it might be prudent to code as if you had crate-scoped API even if 
you don't (yet).
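A compilable sketch of that private-root pattern in today's syntax (modern `crate::` paths; an illustration, not the 2013 code):

```rust
mod internal {
    pub struct S {
        m: i32, // private field, only touchable inside `internal`
    }

    impl S {
        pub fn pub_api(&self) -> i32 {
            self.m
        }
    }

    pub fn crate_api_2() -> S {
        S { m: 0 }
    }
}

// The public facade: re-export the public surface out of the private
// root module.
pub mod mod_a {
    pub use crate::internal::S;
}

fn main() {
    let s = internal::crate_api_2();
    assert_eq!(s.pub_api(), 0);
}
```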


Is my analysis incorrect? Is there a way to write crate_api_1 (i.e. a 
crate-scoped method) ? Is there a way to avoid moving the entire code 
base inside the 'internal' mod?


One last note... since the new privacy rules don't mention crates, this 
problem becomes a bit more general, i.e. it's very tricky to isolate a 
particular API to a sub-tree of modules. I feel like it's almost simpler 
to just make everything public and rely on documentation or place 
everything in a single module.


-SL


[rust-dev] Crate-scoped non-trait implementations

2013-09-27 Thread SiegeLord
Given the proposed privacy resolution rules ( 
https://github.com/mozilla/rust/issues/8215#issuecomment-24762188 ) 
there exists a concept of crate-scope for items: these can be used 
within the implementation of the crate, but are inaccessible from other 
crates. This is easy to do by introducing a private module:


mod private
{
   pub fn crate_scoped_fn();
   pub trait CrateScopedTrait;
   pub struct CrateScopedStruct;
}

It's not clear to me whether or not this is possible (or whether it 
should be) for non-trait implementations. Right now, if I do this:


pub struct S;
mod private
{
   impl super::S
   {
   pub fn new() -> super::S { super::S }
   pub fn crate_local_api(&self) {}
   }
}

I find that the associated function can't be used at all within a crate 
or cross crate (issue #9584), while the method resolves in both cases, 
but does not link cross-crate. What should happen in this case? I'd 
prefer for the function and method to resolve within a crate, but not 
cross crate.


Notably, this is not how trait implementations work today, as those are 
resolved by looking at the location/privacy of the trait and not the 
implementation.  I think crate-scoped methods and associated functions 
are very useful though, and it's worthwhile to have a different rule 
for them.


Or maybe there should be an explicit keyword for the crate scope and 
not bother with these bizarre privacy rules. Or maybe I am missing an 
alternate way to accomplish this?


-SL


Re: [rust-dev] RFC: Syntax for "raw" string literals

2013-09-22 Thread SiegeLord

On 09/22/2013 07:45 PM, Kevin Ballard wrote:

It would require changing the rules for lifetimes, with no benefit (and no clear new rule to 
use anyway). &'foo"delim" is perfectly legal today, and I see no reason to 
change that.

It's not as big a change as you make it out to be, but fair enough.

Looking at the parser right now, it seems to me that implementing the 
leading 'R' in C++'s syntax will be just as difficult/easy as doing my 
delim"stuff"delim proposal so I'm sticking to that idea as my 'vote'.


If the C++ way is chosen, I'd suggest the following permutation of the 
delimiters, as I think it looks lighter (by virtue of using smaller 
characters):


r'delim"raw string"delim'
r'"raw string"'
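(For reference, the syntax Rust ultimately adopted uses `r` with zero or more matching `#` marks as the delimiter; a sketch in today's syntax:)

```rust
fn main() {
    // No escapes are processed inside a raw string.
    let plain = r"backslash \n stays literal";
    // Hash marks let the raw string itself contain quotes.
    let hashed = r#"she said "hi""#;
    assert_eq!(plain, "backslash \\n stays literal");
    assert_eq!(hashed, "she said \"hi\"");
}
```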

-SL


Re: [rust-dev] RFC: Syntax for "raw" string literals

2013-09-22 Thread SiegeLord

On 09/22/2013 07:10 PM, Kevin Ballard wrote:

' doesn't work because 'delim is parsed as a lifetime.


The parser will have to be modified to support raw strings in any of 
their manifestations. Is it a fact that there is no possible parser that 
can differentiate between 'delim and 'delim" ? I guess it'll give 
trouble to this current syntax &'foo"blah", but it wouldn't be the first 
place in the grammar where a space was necessary to disambiguate between 
constructs (& & comes to mind).


-SL


Re: [rust-dev] RFC: Syntax for "raw" string literals

2013-09-22 Thread SiegeLord

On 09/22/2013 05:40 PM, Kevin Ballard wrote:

I've filed a summary of this conversation as an RFC issue on the GitHub issue 
tracker.

https://github.com/mozilla/rust/issues/9411


I've used a variation of option 10 for my own configuration format's 
raw strings:


delim"raw text"delim

Where delim was an equivalent of an identifier.

If ` is a problem, then maybe using ' works too?

'delim"raw text"delim'

'"raw text"'

-SL


Re: [rust-dev] rustdoc_ng: 95% done

2013-08-12 Thread SiegeLord

On 08/12/2013 12:15 PM, Evan Martin wrote:

You could make the links to source more stable by linking to exactly the
version of the source that the docs were built from.
E.g. rather than having
http://seld.be/rustdoc/master/std/either/fn.lefts.html
link to
https://github.com/mozilla/rust/blob/master/src/libstd/either.rs#L121-130
you could replace 'master' there with the commit, like
https://github.com/mozilla/rust/blob/fad7857c7b2c42da6081e593ab92d03d88643c81/src/libstd/either.rs#L121-130


Along the same lines, what I'm planning on doing is providing snapshots 
of the source (created via pygments or something similar), so it'd be 
nice to be able to specify the format of the line anchors/file 
extensions in some way. As far as I can tell, for pygments it will look 
like this: module.html#foo-123, for example.


-SL


Re: [rust-dev] Crate local visibility

2013-07-30 Thread SiegeLord

On 07/30/2013 08:10 AM, Corey Richardson wrote:

On Tue, Jul 30, 2013 at 8:09 AM, Dov Reshef  wrote:

Hello,

I'd like to get people's opinions about crate local visibility. I feel that
the way the public / private scope is divided now is encouraging making
either too much code public or creating very large modules.


I agree! It's very counter-intuitive that to access sibling modules they
need to be exposed to the whole world.


I don't think that's the case. I submitted a bug that concerns this: 
https://github.com/mozilla/rust/issues/7388 . I note, however, that now 
that code gives you a linker error (which is better than nothing). To 
reduce the example a bit:


// This is a crate-local module, inaccessible from the outside of the crate
mod private
{
pub fn private_fun()
{

}
}

// This is a public module, accessible from the outside of the crate
pub mod public
{
use super::private::*;
pub fn public_fun()
{
private_fun();
}
}

I'm not sure that's how it is supposed to be, but I'd prefer it to be by 
design. Incidentally, accessing that private_fun() from outside the 
crate gives you a linker error today.
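(For reference, today's Rust expresses crate-local visibility directly with pub(crate); a sketch in modern syntax with illustrative names:)

```rust
// `private_fun` is callable from any module in this crate, but is
// invisible to other crates.
mod private {
    pub(crate) fn private_fun() -> i32 { 7 }
}

pub mod public {
    pub fn public_fun() -> i32 {
        crate::private::private_fun()
    }
}

fn main() {
    assert_eq!(public::public_fun(), 7);
}
```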


-SL


Re: [rust-dev] bikeshedding println() and friends

2013-07-14 Thread SiegeLord

On 07/13/2013 10:39 PM, Jack Moffitt wrote:


1) Do away with the formatting stuff as the default. print!() and
println!() should just take a variable number of arguments, and each
one should be printed in its default string representation with a
space between each one. This is how Clojure's (and Python's?) print
and println work.

This would change code like this: println!("The result is %f", foo);
to this: println!("The result is", foo)

It's much easier. There are no formatting codes to remember and it
does exactly what you want in most cases. Consider: println!(a, b, c,
"d=", d); This seems great for the standard printf-style debugging.


Well, you could also do it D's way:

writefln("The result is %s", foo);

Where "%s" grabs the "default string representation" (note that it is 
very unlike "%?" in Rust). You (in all cases your proposal would work) 
don't need to remember any other formatting codes. Or you can do Tango's 
way (a D library):


Stdout.formatln("The result is {}", foo);

Where "{}" does the same as above. I think separating formatting from 
values is a valid approach that I sometimes (but not always) prefer 
using. Just like Brendan in another email, I'd prefer both the 
formatting and the non-formatting macros to exist, with the following 
semantics:


print!(a, b, c); // grabs the default string representation and puts a 
space between each one


print!(); // prints nothing

printf!("a = %f", a); // prints using the format string

printf!("a"); // prints just the format string

printf!(a); // illegal (or, alternatively grabs the default string 
representation of a and uses it as the format string, especially if you 
want to use this for gettext-like functionality)


printf!(a, b); // same as above

printf!(); // illegal

Speaking of "default string representation", it'd only be useful if it 
is obtained via a trait like ToStr (but with a writer interface, for 
efficiency). Using "%?" outside of debugging is almost never what I want 
(it's unsafe anyway). It'd allow for an efficient implementation of fmt! 
too (when used with this macro).
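For reference, the formatting system Rust later settled on splits these roles across two traits: Display backs the user-facing `{}` and Debug backs the debugging-oriented `{:?}`. A sketch in today's syntax (the Point type is illustrative):

```rust
use std::fmt;

#[derive(Debug)]
struct Point {
    x: i32,
    y: i32,
}

// Display is the opt-in, user-facing representation behind `{}`.
impl fmt::Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "({}, {})", self.x, self.y)
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    assert_eq!(format!("{}", p), "(1, 2)");
    assert_eq!(format!("{:?}", p), "Point { x: 1, y: 2 }");
}
```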




If formatting is needed, it's easy to get to:
println!("My name is",  name, "and I scored", fmt!("%0.2f", score));

2) Since println!() is likely to be used the most often, I feel like
it should have a shorter name. Ie, we should call it just print!(),
and have a newline-less version with a different name, or perhaps a
different style of invocation of the macro.


I'd prefer "ln" to stay, as it makes it clear what is happening. As a 
newcomer to Python (and bash with its echo) I always have to double-check 
whether or not it outputs a newline. I like the documentation 
benefit of keeping the name.


-SL


Re: [rust-dev] rustpkg use scenarios

2013-06-27 Thread SiegeLord

On 06/27/2013 10:09 PM, Zack Corr wrote:


I think you're confused here. You don't need to name the directory
github.com/<user>/<project> at all; this is just an
abstracted URL that can be used to fetch a Rustpkg project from GitHub.
More abstracted URLs could be added to fetch projects from other sources.


Well, consider this. Let's say I have a project A has a different 
project B as its dependency, which I, by accident of history, host on 
github.


I.e. project A has this line in it (I think this is the proposed syntax):

extern mod "github.com/SiegeLord/ProjectB";

How would I get rustpkg to use my local copy of it? I assume it will try 
to do: rustpkg install github.com/SiegeLord/ProjectB which will only 
work the way I want if I re-create that directory structure on my system.


-SL


Re: [rust-dev] rustpkg use scenarios

2013-06-27 Thread SiegeLord

On 06/27/2013 06:14 PM, Graydon Hoare wrote:


This does not mean that you must fetch a package named
"github.com/graydon/foo" from github.com, but it means that if you don't
have any other source for that package, you can guess at where to get
it. And it has a unique name (assuming you decide to use that fact).


I have a Rust project hosted on GitHub, but the package name is 
certainly not github.com/<username>/<projectname> in my mind. 
What if it was hosted on Bitbucket? What if I delete the repository? I 
find it puzzling to have to place my local repository inside a directory 
named github.com/<username>/<projectname> for me to be able to 
build my local copy using rustpkg.


It's the last bit that bothers me the most: I can't build my local 
project unless I introduce a completely irrelevant aspect (my git host 
and username) into my metadata (the directory structure).




This was chosen very carefully, very intentionally, and (for the time
being) we're not revisiting this choice. We experimented before with
having multiple points of name-indirection or metadata and it appears to
have just annoyed and confused everyone.


Most package systems I've reviewed prior to writing that email use a 
metadata file. It really is not clear why Rust 
must be nearly unique on this point. In the 3 sources of documentation 
you've linked to, there really wasn't a motivation for why the 
widespread metadata-file approach was wrong.


I understand that this is too late to change this, but at the same time 
I find integrating my (sole for now) Rust project into rustpkg to be 
really disruptive, relative to simply including a metadata file like 
I've done for my projects in other languages.


-SL


[rust-dev] rustpkg use scenarios

2013-06-27 Thread SiegeLord
I've been trying to understand how rustpkg is meant to be used, and I 
found that it didn't quite mesh with my experience with other package 
management systems. Since it's quite incomplete and under-documented, 
instead of trying to comment on the status quo, I decided its best to 
form a few scenarios of usage, and hopefully have the people who 
understand where rustpkg is going help me to see if these scenarios are 
going to be possible (or if not, what's wrong with them). To that end, 
the syntax for rustpkg will be my own and not the current syntax.


Prelude 1:

1 Package != 1 crate. I think a package should be able to contain 
several crates. E.g. Debian packages routinely contain multiple 
binaries/libraries etc.


Prelude 2:

Package source != package name. Regardless of where you get the package 
from and no matter what the package file name was, the package metadata 
(name/author/version) should not be affected, and should be the only 
things that determine the identity of the package. I don't think these 
things are inferrable, so I envision a metadata file with all those 
things specified.


Each package in these scenarios is named 'bar' and has a 'baz' crate.

Scenario 1: Installation from a package file

Given a package file foo.zip, I want to be able to do this:

 sudo rustpkg install foo.zip

This would install it to /usr/local. `--prefix` switch would specify 
where to install it (perhaps something more clever can be done here for 
the sake of Windows).


You should be able to install multiple packages with the same name but 
different authors/versions, e.g this should install two separate 
packages if their metadata are different.


 rustpkg install ~/dir1/foo.zip
 rustpkg install ~/dir2/foo.zip

Scenario 2: Installation from an appropriately structured directory

Same thing should work with a local directory.

 rustpkg install ~/foo

...and a remote one (like a github repository):

rustpkg install https://github.com/user/foo.git --branch master

If nothing is specified, it uses the current directory.

Scenario 3: Uninstallation

This should work if there's only one bar.

 rustpkg uninstall --package bar

If there are multiple bars (different metadata), then you should be 
able to disambiguate between them:


 rustpkg uninstall --author Author --package bar

Or you can specify a package file here as well, instead of a package name.

Scenario 4: Fetching/building without installation

Building without installing:

 rustpkg build <url>

If you then run `rustpkg install <url>` or `rustpkg install --package 
bar`, then it'll use that pre-built copy without fetching the sources 
and building them again. Speaking of fetching, this would just fetch the 
sources:


 rustpkg fetch <url>

Scenario 5: Specifying a workspace

All of the above commands used some system workspace to store the 
intermediate files. You should also be able to specify a custom 
workspace, e.g. for the sake of development:


 rustpkg build <url> --workspace .

That command would fetch into the local directory as well as build in it.

-SL


Re: [rust-dev] rustdoc rewrite and redesign

2013-06-21 Thread SiegeLord

On 06/21/2013 02:32 PM, Corey Richardson wrote:

Using https://github.com/huonw/rust-rand as the example crate, I've
started a sketch at http://octayn.net/rdoc-proto/. Not intended to
work or be pretty or usable, but intended to present the structure of
documentation.


I don't know how I feel about the struct definition. I feel like it 
should be replaced with a list of public fields (in order, so you can 
use the documentation to instantiate a struct manually). Alternatively 
it could stay, but each field could link to a documentation entry for 
that field.


This also brings a point of public vs private API, it'd be nice if the 
doc generator could generate a public-only API documentation and a 
public + private API documentation.


-SL



Re: [rust-dev] rustdoc rewrite and redesign

2013-06-19 Thread SiegeLord

On 06/19/2013 04:01 PM, Corey Richardson wrote:


Please discuss, give me your feature requests, comments, etc.


I am unclear why the XML/JSON is part of the parsing/extraction step, it 
seems like it's on the same level as the generator step. I.e. after the 
parser/extractor/filter do their thing, the XML generator will take the 
internal representation and create an XML file for external tools to do 
what they want with, just like is done with Markdown for pandoc. I don't 
see what good the XML is with all the extra indentation that is only 
removed a step below, but without useful things like the brief 
descriptions. Speaking of brief descriptions... this might not be a 
concept that is universal to all generators (e.g. sphinx doesn't seem to 
use them).


I'm not sure what differentiates the filter step from the extraction 
step. Is it meant to be done in parallel while the rest is done 
serially? Overall it's not clear which steps are to be done in parallel 
(which should be expanded upon, as it seems to be one of the main 
motivations behind this).


Overall I guess what I'm really not clear about is how this is different 
than what is done today design-wise (aside from the multiple backends 
bit). It seems to me to be mostly a refactoring effort.


-SL
