Hide input string from stdin

2016-05-22 Thread Michael Chen via Digitalmars-d
I tried to write a small program that receives a string as a password.
However, I couldn't find a library for hiding the input string,
even in the core library. Any suggestions?
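There is indeed nothing for this in Phobos or druntime. A minimal POSIX-only sketch (my own, not from any library): temporarily turn off terminal echo via termios while reading a line. Error handling and Windows support are omitted.

```d
import core.sys.posix.termios;
import std.stdio;
import std.string : chomp;

string readPassword()
{
    termios saved, raw;
    tcgetattr(0, &saved);          // save current settings for stdin
    raw = saved;
    raw.c_lflag &= ~ECHO;          // disable echoing of typed characters
    tcsetattr(0, TCSANOW, &raw);
    scope(exit) tcsetattr(0, TCSANOW, &saved); // always restore echo
    return readln().chomp;
}

void main()
{
    write("Password: ");
    stdout.flush();
    auto pw = readPassword();
    writeln();                     // Enter wasn't echoed, so print a newline
}
```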


Re: GSoC 2012 Proposal: Continued Work on a D Linear Algebra library (SciD - std.linalg)

2012-04-09 Thread Michael Chen
Hi, Cristi,
From the change log of D 2.059 I saw that uniform function call syntax
was implemented. I hope you can leverage this feature to make non-member
function calls look nicer. Another suggestion: please use shorter
function names, for example M.t() instead of M.transpose(), so
that long expressions are easy to read.

Best,
Mo

On Mon, Apr 9, 2012 at 6:52 AM, Cristi Cobzarenco
 wrote:
> On 8 April 2012 18:59, Caligo  wrote:
>>
>> On Tue, Apr 3, 2012 at 6:20 AM, Cristi Cobzarenco
>>  wrote:
>> >
>> > The point of these is to have lightweight element-wise operation
>> > support. It's true that in theory the built-in arrays do this.
>> > However, this library is built on top of BLAS/LAPACK, which means
>> > operations on large matrices will be faster than on D arrays.
>> >
>>
>> I can't agree with building it on top of LAPACK or any other BLAS
>> implementation, but perhaps I shouldn't complain because I'm not the
>> one who's going to be implementing it.  I like the approach Eigen has
>> taken where it offers its own BLAS implementation and, iirc, other
>> BLAS libraries can be used as optional back-ends.
>
>
> Yes, I agree with you. I already built naive implementations of BLAS &
> LAPACK functions as part of my project last year; using external libraries
> is optional (building with version=nodeps ensures absolutely no dependencies
> are needed). The argument still stands, though. If you _do_ use external
> BLAS libraries then using value arrays will _still_ be faster.
>
>>
>>
>> > Also, as far as I know, D doesn't support
>> > allocating dynamic 2-D arrays (as in not arrays of arrays), not to
>> > mention
>> > 2-D slicing (keeping track of leading dimension).
>> >
>>
>> I fail to see why there is any need for 2D arrays.  We need to make
>> sure multidimensional arrays (matrices) have data in very good
>> arrangements.  This is called tiling and it requires 1D arrays:  2D
>> arrays are stored as 1D arrays together with an indexing mechanism.
>
>
> That is precisely what I meant. We need wrappers around D arrays, because
> they, by themselves, do not support 2-D indexing. By providing a wrapper we
> allow the nice syntax of matrix[ a, b ]. This kind of wrapping is already
> done in SciD by CowMatrix. I meant we shouldn't use D built-in arrays
> directly, not that we shouldn't use them at all. Also, as mentioned before,
> we can't use new for allocation, because we want the library to be
> GC-independent.
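A 2-D wrapper of the kind described could look like this hypothetical sketch (the name MatrixView and the column-major layout are assumptions on my part, not SciD's actual design):

```d
// Hypothetical 2-D view over a flat buffer, column-major with an
// explicit leading dimension (the layout BLAS/LAPACK expect).
struct MatrixView(T)
{
    T* data;
    size_t rows, cols;
    size_t ld; // leading dimension: stride between consecutive columns

    ref T opIndex(size_t i, size_t j)
    {
        assert(i < rows && j < cols);
        return data[j * ld + i]; // column-major element access
    }
}

void example()
{
    auto buf = new double[6];
    auto m = MatrixView!double(buf.ptr, 2, 3, 2); // 2x3 matrix, ld == rows
    m[1, 2] = 42.0; // writes buf[2*2 + 1]
}
```

Because the view carries its own leading dimension, a sub-matrix is just another view into the same buffer with a different `ld`, which is exactly the 2-D slicing built-in arrays can't express.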
>>
>>
>> > Also I'm not sure how a case like this will be compiled, it may or may
>> > not
>> > allocate a temporary:
>> >
>> > a[] = b[] * c[] + d[] * 2.0;
>> >
>> > The expression templates in SciD mean there will be no temporary
>> > allocation
>> > in this call.
>> >
>>
>> Why are expression templates used?
>
>
> As H. S. Teoh rightfully pointed out, it is important not to allocate
> temporaries in matrix operations. You want to evaluate A = B * 3.0 + D * 2.0
> (where * scales each element) as (in BLAS terms):
>   copy( B, A );
>   scal( 3.0, A );
>   axpy( D, 2.0, A );
>
> Or with naive expression template evaluation:
>   for( i = 0 ; i < n ; ++ i ) {
>      A[ i ] = B[ i ] * 3.0 + D[ i ] * 2.0;
>   }
>
> The D compiler would instead evaluate this as:
>   // allocate temporaries
>   allocate( tmp1, A.length );
>   allocate( tmp2, A.length );
>   allocate( tmp3, A.length );
>
>   // compute tmp1 = B * 3.0
>   copy( B, tmp1 );
>   scal( 3.0, tmp1 );
>
>   // compute tmp2 = D * 2.0;
>   copy( D, tmp2 );
>   scal( 2.0, tmp2 );
>
>   // compute tmp3 = tmp1 + tmp2;
>   copy( tmp1, tmp3 );
>   axpy( tmp2, 1.0, tmp3 );
>
>   // copy tmp3 into A
>   copy( tmp3, A );
>
> Plenty of useless computation. Note this is not a fault of the compiler, it
> has no way of knowing which temporaries can be optimised away for user types
> (i.e. our matrices).
>
>> Are you pretty much rewriting Eigen in D?
>
> No. It is just the interface - and even that only to a certain extent - that
> will mimic Eigen. The core of the library will be very similar to what I
> implemented for SciD last year, which, you will find, is very D-like and not
> at all like Eigen.
>
>
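The single-loop evaluation described above is what expression templates buy: operators build lightweight nodes, and the whole expression is evaluated element by element only on assignment. A toy sketch in D (illustrative names, not SciD's actual types):

```d
// Toy expression-template sketch: B*3.0 + D*2.0 with no temporary arrays.
struct Scaled
{
    const(double)[] v;
    double s;
    double opIndex(size_t i) const { return v[i] * s; }
}

struct Sum(L, R)
{
    L l;
    R r;
    double opIndex(size_t i) const { return l[i] + r[i]; }
}

auto scaled(const(double)[] v, double s) { return Scaled(v, s); }
auto plus(L, R)(L l, R r) { return Sum!(L, R)(l, r); }

// Assignment evaluates the whole expression tree in a single pass.
void assignTo(E)(E expr, double[] dst)
{
    foreach (i; 0 .. dst.length)
        dst[i] = expr[i];
}

void example()
{
    auto B = [1.0, 2.0], D = [3.0, 4.0];
    auto A = new double[2];
    assignTo(plus(scaled(B, 3.0), scaled(D, 2.0)), A); // A == [9.0, 14.0]
}
```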


Re: GSoC 2012 Proposal: Continued Work on a D Linear Algebra library (SciD - std.linalg)

2012-04-05 Thread Michael Chen
Thanks for the explanation, now I get it. In case you are interested,
there is an excellent article about monad-style C++ template
metaprogramming by Bartosz Milewski which might be helpful for
compile-time optimization of evaluation order.
Really looking forward to the official release of SciD. D is really
suitable for scientific computation. It will be great to have an
efficient and easy-to-use linear algebra library.


On Thu, Apr 5, 2012 at 7:42 AM, Cristi Cobzarenco
 wrote:
> Thanks for the feedback!
>
> On 4 April 2012 10:21, Michael Chen  wrote:
>>
>> another btw, there is also another great C++ linear algebra library
>> besides Eigen: Armadillo, which has a very simple Matlab-like interface
>> and performs on a par with Eigen.
>
>
> I'll look into it, thanks.
>
>>
>> On Wed, Apr 4, 2012 at 5:14 PM, Michael Chen  wrote:
>> > Btw, I really don't like the matrix API to be member functions. It is
>> > hard for users to extend the library in a unified way. It is also ugly
>> > when you want to chain the function calls.
>>
>> >
>> > On Wed, Apr 4, 2012 at 5:09 PM, Michael Chen  wrote:
>> >> For Point 4, I would really like to have higher-order functions like
>> >> reduceRow and reduceCol. Then the function argument is simply
>> >> reduceRow!foo(0,mat), where foo is not a function operating on the
>> >> whole column but simply a function of two elements (e.g.
>> >> reduceRow!("a+b")(0,mat)). Or even better, we could have a general
>> >> reduce function with row and col as template parameters so that we can
>> >> do reduce!(foo,row)(0,mat). I don't know whether we can optimize such a
>> >> reduce function for different matrix types, but such a function would be
>> >> extremely useful from a user perspective.
>
>
> Well, as I said before, there's nothing stopping us from providing free
> functions that call the member functions. However, there is something
> stopping us from not providing a member function alternative. D's lack of
> ADL means we would not allow easy extensibility of possible operations. Say
> Alice invents a new kind of matrix type, the DiagonalMatrix type, which
> stores its elements in a 1-D array (this isn't exactly how it would work with
> the design we have, she would in fact have to define a storage type, but
> bear with me). If she wants to implement sum on her matrix, she can't simply
> add a specialisation to the sum( T ) function because of the lack of ADL. If
> instead we implemented sum(T) as:
>
> auto sum( T )( T matrix ) {
>      static if( is( typeof( T.init.sum() ) ) )
>          return matrix.sum;
>      else
>          return reduce!"a+b"( 0, matrix.elements() );
> }
>
> then Alice could simply define a DiagonalMatrix.sum() method that wouldn't
> need to go through all the zero elements, for example. The idea of rowReduce
> and columnReduce still works - we can provide this for when a user wants to
> use her own operation for reducing. We could also have a way of
> automatically optimising rowReduce!"a+b"( mat ) by calling
> mat.rowwise().sum() if the operation is available - but that wouldn't be
> exactly high priority.
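The DiagonalMatrix scenario above could look like this hypothetical illustration (the type and its fields are invented for the example; only the static-if dispatch pattern comes from the quoted code):

```d
import std.algorithm : reduce;

// Free function that forwards to a member sum() when the type has one,
// falling back to a generic range-based reduction otherwise.
auto sum(T)(T matrix)
{
    static if (is(typeof(T.init.sum())))
        return matrix.sum();            // use the type's optimised version
    else
        return reduce!"a+b"(0.0, matrix.elements());
}

struct DiagonalMatrix
{
    double[] diag; // only the diagonal is stored

    double sum()
    {
        // never touches the implicit zero off-diagonal elements
        double acc = 0.0;
        foreach (d; diag)
            acc += d;
        return acc;
    }
}

void example()
{
    auto m = DiagonalMatrix([1.0, 2.0, 3.0]);
    auto s = sum(m); // dispatches to DiagonalMatrix.sum(), s == 6.0
}
```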
>
>>
>>
>> >>
>> >> On Tue, Apr 3, 2012 at 7:20 PM, Cristi Cobzarenco
>> >>  wrote:
>> >>>
>> >>>
>> >>> On 3 April 2012 02:37, Caligo  wrote:
>> >>>>
>> >>>> I've read **Proposed Changes and Additions**, and I would like to
>> >>>> comment and ask a few questions if that's okay.  BTW, I've used Eigen
>> >>>> a lot and I see some similarities here, but a direct rewrite may not
>> >>>> be the best thing because D > C++.
>> >>>>
>> >>>> 2.  Change the matrix & vector types, adding fixed-sized matrix
>> >>>> support in the process.
>> >>>>
>> >>>> This is a step in the right direction I think, and by that I'm
>> >>>> talking
>> >>>> about the decision to remove the difference between a Vector and a
>> >>>> Matrix.  Also, fixed-size matrices are also a must.  There is
>> >>>> compile-time optimization that you won't be able to do for
>> >>>> dynamic-size matrices.
>> >>>>
>> >>>>
>> >>>> 3. Add value arrays (or numeric arrays, we can come up with a good
>> >>>> name).
>> >>>>
>> >>>> I really don't see

Re: GSoC 2012 Proposal: Continued Work on a D Linear Algebra library (SciD - std.linalg)

2012-04-04 Thread Michael Chen
another btw, there is also another great C++ linear algebra library
besides Eigen: Armadillo, which has a very simple Matlab-like interface
and performs on a par with Eigen.
On Wed, Apr 4, 2012 at 5:14 PM, Michael Chen  wrote:
> Btw, I really don't like the matrix API to be member functions. It is
> hard for users to extend the library in a unified way. It is also ugly
> when you want to chain the function calls.
>
> On Wed, Apr 4, 2012 at 5:09 PM, Michael Chen  wrote:
>> For Point 4, I would really like to have higher-order functions like
>> reduceRow and reduceCol. Then the function argument is simply
>> reduceRow!foo(0,mat), where foo is not a function operating on the
>> whole column but simply a function of two elements (e.g.
>> reduceRow!("a+b")(0,mat)). Or even better, we could have a general
>> reduce function with row and col as template parameters so that we can
>> do reduce!(foo,row)(0,mat). I don't know whether we can optimize such a
>> reduce function for different matrix types, but such a function would be
>> extremely useful from a user perspective.
>>
>> On Tue, Apr 3, 2012 at 7:20 PM, Cristi Cobzarenco
>>  wrote:
>>>
>>>
>>> On 3 April 2012 02:37, Caligo  wrote:
>>>>
>>>> I've read **Proposed Changes and Additions**, and I would like to
>>>> comment and ask a few questions if that's okay.  BTW, I've used Eigen
>>>> a lot and I see some similarities here, but a direct rewrite may not
>>>> be the best thing because D > C++.
>>>>
>>>> 2.  Change the matrix & vector types, adding fixed-sized matrix
>>>> support in the process.
>>>>
>>>> This is a step in the right direction I think, and by that I'm talking
>>>> about the decision to remove the difference between a Vector and a
>>>> Matrix.  Also, fixed-size matrices are also a must.  There is
>>>> compile-time optimization that you won't be able to do for
>>>> dynamic-size matrices.
>>>>
>>>>
>>>> 3. Add value arrays (or numeric arrays, we can come up with a good name).
>>>>
>>>> I really don't see the point for these.  We have the built-in arrays
>>>> and the one in Phobos (which will get even better soon).
>>>
>>>
>>> The point of these is to have lightweight element-wise operation support.
>>> It's true that in theory the built-in arrays do this. However, this library
>>> is built on top of BLAS/LAPACK, which means operations on large matrices will
>>> be faster than on D arrays. Also, as far as I know, D doesn't support
>>> allocating dynamic 2-D arrays (as in not arrays of arrays), not to mention
>>> 2-D slicing (keeping track of the leading dimension).
>>> Also I'm not sure how a case like this will be compiled, it may or may not
>>> allocate a temporary:
>>>
>>> a[] = b[] * c[] + d[] * 2.0;
>>>
>>> The expression templates in SciD mean there will be no temporary allocation
>>> in this call.
>>>
>>>>
>>>>
>>>> 4. Add reductions, partial reductions, and broadcasting for matrices and
>>>> arrays.
>>>>
>>>> This one is similar to what we have in Eigen, but I don't understand
>>>> why the operations are member functions (even in Eigen).  I much
>>>> rather have something like this:
>>>>
>>>> rowwise!sum(mat);
>>>>
>>>> Also, that way the users can use their own custom functions with much
>>>> ease.
>>>
>>>
>>> There is a problem with this design. You want each matrix type (be it
>>> general, triangular, sparse or even an expression node) to be able to
>>> define its own implementation of sum: calling the right BLAS function and
>>> making whatever specific optimisations it can. Since D doesn't have
>>> argument-dependent look-up (ADL), users can't provide specialisations for
>>> their own types. The same arguments apply to rowwise() and columnwise(),
>>> which will return proxies specific to the matrix type. You could do
>>> something like this, in principle:
>>>
>>> auto sum( T )( T mat ) {
>>>   return mat.sum();
>>> }
>>>
>>> And if we want that we can add it, but this will provide no additional
>>> extensibility. By the way, you can use std.algorithm with matrices since
>>> they offer range functionality, but it will be much slower to use
>>> reduce!my

Re: GSoC 2012 Proposal: Continued Work on a D Linear Algebra library (SciD - std.linalg)

2012-04-04 Thread Michael Chen
Btw, I really don't like the matrix API to be member functions. It is
hard for users to extend the library in a unified way. It is also ugly
when you want to chain the function calls.

On Wed, Apr 4, 2012 at 5:09 PM, Michael Chen  wrote:
> For Point 4, I would really like to have higher-order functions like
> reduceRow and reduceCol. Then the function argument is simply
> reduceRow!foo(0,mat), where foo is not a function operating on the
> whole column but simply a function of two elements (e.g.
> reduceRow!("a+b")(0,mat)). Or even better, we could have a general
> reduce function with row and col as template parameters so that we can
> do reduce!(foo,row)(0,mat). I don't know whether we can optimize such a
> reduce function for different matrix types, but such a function would be
> extremely useful from a user perspective.
>
> On Tue, Apr 3, 2012 at 7:20 PM, Cristi Cobzarenco
>  wrote:
>>
>>
>> On 3 April 2012 02:37, Caligo  wrote:
>>>
>>> I've read **Proposed Changes and Additions**, and I would like to
>>> comment and ask a few questions if that's okay.  BTW, I've used Eigen
>>> a lot and I see some similarities here, but a direct rewrite may not
>>> be the best thing because D > C++.
>>>
>>> 2.  Change the matrix & vector types, adding fixed-sized matrix
>>> support in the process.
>>>
>>> This is a step in the right direction I think, and by that I'm talking
>>> about the decision to remove the difference between a Vector and a
>>> Matrix.  Also, fixed-size matrices are also a must.  There is
>>> compile-time optimization that you won't be able to do for
>>> dynamic-size matrices.
>>>
>>>
>>> 3. Add value arrays (or numeric arrays, we can come up with a good name).
>>>
>>> I really don't see the point for these.  We have the built-in arrays
>>> and the one in Phobos (which will get even better soon).
>>
>>
>> The point of these is to have lightweight element-wise operation support.
>> It's true that in theory the built-in arrays do this. However, this library
>> is built on top of BLAS/LAPACK, which means operations on large matrices will
>> be faster than on D arrays. Also, as far as I know, D doesn't support
>> allocating dynamic 2-D arrays (as in not arrays of arrays), not to mention
>> 2-D slicing (keeping track of the leading dimension).
>> Also I'm not sure how a case like this will be compiled, it may or may not
>> allocate a temporary:
>>
>> a[] = b[] * c[] + d[] * 2.0;
>>
>> The expression templates in SciD mean there will be no temporary allocation
>> in this call.
>>
>>>
>>>
>>> 4. Add reductions, partial reductions, and broadcasting for matrices and
>>> arrays.
>>>
>>> This one is similar to what we have in Eigen, but I don't understand
>>> why the operations are member functions (even in Eigen).  I much
>>> rather have something like this:
>>>
>>> rowwise!sum(mat);
>>>
>>> Also, that way the users can use their own custom functions with much
>>> ease.
>>
>>
>> There is a problem with this design. You want each matrix type (be it
>> general, triangular, sparse or even an expression node) to be able to
>> define its own implementation of sum: calling the right BLAS function and
>> making whatever specific optimisations it can. Since D doesn't have
>> argument-dependent look-up (ADL), users can't provide specialisations for
>> their own types. The same arguments apply to rowwise() and columnwise(),
>> which will return proxies specific to the matrix type. You could do
>> something like this, in principle:
>>
>> auto sum( T )( T mat ) {
>>   return mat.sum();
>> }
>>
>> And if we want that we can add it, but this will provide no additional
>> extensibility. By the way, you can use std.algorithm with matrices since
>> they offer range functionality, but it will be much slower to use
>> reduce!mySumFunction(mat) than mat.sum(), which uses a BLAS backend.
>>
>>
>>>
>>>
>>>
>>> 6. Add support for interoperation with D built-in arrays (or pointers).
>>>
>>> So I take it that Matrix is not a sub-type? Why? If we have something like
>>> this:
>>>
>>> struct Matrix(Real, size_t row, size_t col) {
>>>
>>>  Real[row*col] data;
>>>  alias data this;
>>> }
>>>
>>> then we wouldn't need any kind of interoperation with built-i

Re: GSoC 2012 Proposal: Continued Work on a D Linear Algebra library (SciD - std.linalg)

2012-04-04 Thread Michael Chen
For Point 4, I would really like to have higher-order functions like
reduceRow and reduceCol. Then the function argument is simply
reduceRow!foo(0,mat), where foo is not a function operating on the
whole column but simply a function of two elements (e.g.
reduceRow!("a+b")(0,mat)). Or even better, we could have a general
reduce function with row and col as template parameters so that we can
do reduce!(foo,row)(0,mat). I don't know whether we can optimize such a
reduce function for different matrix types, but such a function would be
extremely useful from a user perspective.
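A rough sketch of the suggested rowReduce, written over plain arrays-of-arrays for illustration (the real library would work on its own matrix types; the signature is my guess at the proposal, not an existing API):

```d
import std.functional : binaryFun;

// Fold each row with a two-element function, seeded with an initial value,
// e.g. rowReduce!"a+b"(0.0, mat) sums every row.
T[] rowReduce(alias fun, T)(T seed, const(T)[][] mat)
{
    auto result = new T[](mat.length);
    foreach (i, row; mat)
    {
        T acc = seed;
        foreach (e; row)
            acc = binaryFun!fun(acc, e);
        result[i] = acc;
    }
    return result;
}

void example()
{
    auto mat = [[1.0, 2.0], [3.0, 4.0]];
    auto sums = rowReduce!"a+b"(0.0, mat); // [3.0, 7.0]
}
```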

On Tue, Apr 3, 2012 at 7:20 PM, Cristi Cobzarenco
 wrote:
>
>
> On 3 April 2012 02:37, Caligo  wrote:
>>
>> I've read **Proposed Changes and Additions**, and I would like to
>> comment and ask a few questions if that's okay.  BTW, I've used Eigen
>> a lot and I see some similarities here, but a direct rewrite may not
>> be the best thing because D > C++.
>>
>> 2.  Change the matrix & vector types, adding fixed-sized matrix
>> support in the process.
>>
>> This is a step in the right direction I think, and by that I'm talking
>> about the decision to remove the difference between a Vector and a
>> Matrix.  Also, fixed-size matrices are also a must.  There is
>> compile-time optimization that you won't be able to do for
>> dynamic-size matrices.
>>
>>
>> 3. Add value arrays (or numeric arrays, we can come up with a good name).
>>
>> I really don't see the point for these.  We have the built-in arrays
>> and the one in Phobos (which will get even better soon).
>
>
> The point of these is to have lightweight element-wise operation support.
> It's true that in theory the built-in arrays do this. However, this library
> is built on top of BLAS/LAPACK, which means operations on large matrices will
> be faster than on D arrays. Also, as far as I know, D doesn't support
> allocating dynamic 2-D arrays (as in not arrays of arrays), not to mention
> 2-D slicing (keeping track of the leading dimension).
> Also I'm not sure how a case like this will be compiled, it may or may not
> allocate a temporary:
>
> a[] = b[] * c[] + d[] * 2.0;
>
> The expression templates in SciD mean there will be no temporary allocation
> in this call.
>
>>
>>
>> 4. Add reductions, partial reductions, and broadcasting for matrices and
>> arrays.
>>
>> This one is similar to what we have in Eigen, but I don't understand
>> why the operations are member functions (even in Eigen).  I much
>> rather have something like this:
>>
>> rowwise!sum(mat);
>>
>> Also, that way the users can use their own custom functions with much
>> ease.
>
>
> There is a problem with this design. You want each matrix type (be it
> general, triangular, sparse or even an expression node) to be able to
> define its own implementation of sum: calling the right BLAS function and
> making whatever specific optimisations it can. Since D doesn't have
> argument-dependent look-up (ADL), users can't provide specialisations for
> their own types. The same arguments apply to rowwise() and columnwise(),
> which will return proxies specific to the matrix type. You could do
> something like this, in principle:
>
> auto sum( T )( T mat ) {
>   return mat.sum();
> }
>
> And if we want that we can add it, but this will provide no additional
> extensibility. By the way, you can use std.algorithm with matrices since
> they offer range functionality, but it will be much slower to use
> reduce!mySumFunction(mat) than mat.sum(), which uses a BLAS backend.
>
>
>>
>>
>>
>> 6. Add support for interoperation with D built-in arrays (or pointers).
>>
>> So I take it that Matrix is not a sub-type? Why? If we have something like
>> this:
>>
>> struct Matrix(Real, size_t row, size_t col) {
>>
>>  Real[row*col] data;
>>  alias data this;
>> }
>>
>> then we wouldn't need any kind of interoperation with built-in arrays,
>> would we?  I think this would save us a lot of headache.
>>
>> That's just me and I could be wrong.
>
>
> Inter-operation referred more to having a matrix object wrapping a pointer
> to an already available piece of memory - maybe allocated through a region
> allocator, maybe resulting from some other library. This means we need to
> take care of different strides and different storage orders which cannot be
> handled by built-in arrays. Right now, matrices wrap ref-counted
> copy-on-write array types (ArrayData in the current code) - we decided last
> year that we don't want to use the garbage collector, because of its current
> issues. Also I would prefer not using the same Matrix type for
> pointer-wrappers and normal matrices because the former must have reference
> semantics while the latter have value semantics. I think it would be
> confusing if some matrices would copy their data and some would share
> memory.
>
>>
>>
>> I've got to tell you though, I'm very excited about this project and
>> I'll be watching it closely.
>>
>> cheers.
>
> ---
> Cristi Cobzarenco
> BSc in Artificial Intelligence and Computer Science
>

Re: Improvements to std.string

2011-06-13 Thread Michael Chen
I vote for the changes. They are better names for newbies like me.

On Mon, Jun 13, 2011 at 9:29 AM, Adam D. Ruppe
 wrote:
> Jonathan M Davis wrote:
>> Would it be better to rename toStringz to toCString when fixing it
>
> I think it should stay just how it is: toStringz, with a lowercase
> z.
>
> The reason is a "stringz" is actually a proper name of sorts -
> the z at the end isn't a new word, but part of the first one.
> At least that's the way it was in assembly!
>
>
> Also, it ain't broke. I've sometimes gotten tolower wrong due to
> case. I've never made a mistake on toStringz. I'd be surprised if
> anyone has.
>


Re: Problem with string.whitespace and newline

2011-06-11 Thread Michael Chen
Thanks Andrej, it works.

On Sun, Jun 12, 2011 at 9:14 AM, Andrej Mitrovic
 wrote:
> try return join(split(x),whitespace[]);
>
> It seems whitespace is a static array.
>
> On 6/12/11, Michael Chen  wrote:
>> The following code cannot be compiled
>> string clean(string x)
>> {
>>       return join(split(x),whitespace);
>> }
>>
>> The compile error is
>> Error 2       Error: template std.array.join(RoR,R) if (isInputRange!(RoR)
>> && isInputRange!(ElementType!(RoR)) && isForwardRange!(R)) cannot
>> deduce template function from argument types
>> !()(string[],immutable(char[6u]))
>>
>> However this line is fine
>>  join(split(x)," ");
>>
>>
>> The same problem happens when using newline. Is this a bug?
>>
>> Best,
>> Mike
>>
>
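The underlying issue: in that version of Phobos, std.string.whitespace was a fixed-size array, and a static array is not a range, so it fails join's isForwardRange constraint; slicing it with [] yields a dynamic array, which is a range. A self-contained illustration using a local static array as a stand-in for the library constant:

```d
import std.array : join, split;
import std.stdio;

void main()
{
    // Stand-in for a static-array constant like the old whitespace.
    static immutable char[1] sep = [' '];
    string x = "a   b\tc";

    // join(split(x), sep);           // fails: static arrays aren't ranges
    auto cleaned = join(split(x), sep[]); // OK: the slice is a range
    writeln(cleaned); // a b c
}
```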


Problem with string.whitespace and newline

2011-06-11 Thread Michael Chen
The following code cannot be compiled
string clean(string x)
{
return join(split(x),whitespace);
}

The compile error is
Error   2   Error: template std.array.join(RoR,R) if (isInputRange!(RoR)
&& isInputRange!(ElementType!(RoR)) && isForwardRange!(R)) cannot
deduce template function from argument types
!()(string[],immutable(char[6u]))

However this line is fine
 join(split(x)," ");


The same problem happens when using newline. Is this a bug?

Best,
Mike


Re: duck!

2010-10-16 Thread Michael Chen
Totally agreed. Letting advertisability influence a function name is
ridiculous to me. You've got to have some principles for names, but
advertisability? I don't think so.

On Sunday, October 17, 2010, Steven Schveighoffer  wrote:
> On Sat, 16 Oct 2010 16:26:15 -0400, Walter Bright 
>  wrote:
>
>
> Steven Schveighoffer wrote:
>
> Think of it another way. Remember zip files? What a great name, and yes, it 
> seemed silly at first, but zip entered the lexicon and D has a zip module and 
> it never occurs to anyone it might be better named std.compressedArchive. 
> Phil Katz renamed arc files "zip" files, called his compressor "pkzip" and 
> blew away arc so badly that most people are unaware it even existed.
>
> I think the catchy, silly "zip" name was a significant factor in getting 
> people to notice his program. In contrast, the superior "lharc" with its 
> "lzh" files never caught on.
>
>  These are completely unsubstantiated statements focused on a very narrow set 
> of variables.  It's like all those studies that say X causes cancer because 
> look most people who use X have cancer.  Well, yeah, but they are all 40-70 
> yr old people, who freaking knows how many factors went into them getting 
> cancer!!!  And it proves itself again and again when the next year, they say, 
> 'well that study was flawed, we now *know* that it was really Y'.
>
>
> It's an example of a phenomenon I've seen over and over. How about the names 
> Google and Yahoo? Boy did I think they were stupid names for companies and 
> products. Boy was I wrong. How about the perjorative name "twitter" and the 
> hopelessly undignified verb "tweet"? I still can't bring myself to say I 
> "tweeted". Ugh.
>
>
> This is called cherry picking.  What about microsoft, IBM, apple, gillette, 
> DOS, etc.  All these names aren't "wacky", yet they are still successful.  
> How do you explain that?  You might lump GoDaddy.com as one of those 'wacky' 
> names that made it, but that has nothing to do with it.
>
> Google and Yahoo succeeded because their product was good.  D will succeed 
> because it does duck typing, not because the function that does duck typing 
> is called 'duck'.  Now, if D was all about duck typing, and you called it 
> 'ducky', then I think that the name might be appropriate, and actually help 
> with marketing.  But naming the function that does duck typing 'duck' doesn't 
> seem to me like it makes or breaks D at all.  I want to be clear that duck is 
> not my first choice, but it's certainly a name that makes sense.  I'm just 
> saying that marketability of D does not change no matter what appropriate 
> term you choose.
>
>
> I also couldn't believe all the mileage Borland got out of naming minor 
> features "zoom technology" and "smart linking". So I don't buy that we 
> programmers are above all that.
>
>
> But were there functions named zoomTechnology() and smartLink()?  Were their 
> tools named zoom or smartl or something?  Is that what pushed them over the 
> edge, or was it the bullet on the packaging that said:
>
> * Includes zoom technology!
>
>
>
> "duck" *is* indicative of what the feature does, and so it is a lot better 
> than "zoom" or "smart" or "yahoo", which I'd have a hard time justifying. I 
> guess that's why I'm not a marketer!
>
>
> Yes, duck is a valid option.  And the fact that duck typing is what it does 
> is a very good reason to use it.  I just don't see 'marketing draw' as being 
> a factor whatsoever.  It's useless noise.
>
>
> Besides, duck isn't the compiler name, it's a very very small part of the 
> library.  I think you associate more weight to this than there actually is.
>
>
> A lot of people do think duck typing is very important.
>
>
> And D already does duck typing.  Templates do duck typing.  'adaptTo' does it 
> too, and it's cool, but it's not *that* important (no offense, Kenji).
>
>
> Let's concentrate on finding the name that best describes the function.  This 
> might be 'duck', but let's leave marketing considerations out of it.  If duck 
> was a verb that meant 'walk like a...'  then I'd agree it was a fine term.
>  How about if we can say D's functions are named intuitively instead of after 
> some colloquial term that describes the function?
>  And yeah, I agree zip is now a de-facto term, so much so that I think 
> std.range.Zip should be renamed :)  But was it zip that made the tool famous 
> or the tool that made zip famous?
>  Let's also not forget the hundreds, probably thousands, of 'cute' names that 
> didn't save their respective products because the marketing material sucked.
>
>
> I think 'zip' got peoples' attention, and then pkzip delivered the goods 
> (better than arc). lharc, on the other hand, had a ponderous name and failed 
> despite being significantly better. So yeah, I think the name got pkzip on 
> the map, but yes, the product also had to deliver. A cute name is not enough 
> to save a crap product, but it will help with a good one.
>
> If you want peopl

Re: What would you rewrite in D?

2010-10-05 Thread Michael Chen
There is DDMD.

On Wed, Oct 6, 2010 at 10:04 AM, BCS  wrote:
> DMD
>
> --
> ... <
>
>
>
>


Is there anybody working on a linear algebra library for D2?

2010-10-05 Thread Michael Chen
I remember that one of D's goals is easy scientific computation.
However, I haven't seen any linear algebra package for D2. My work
relies heavily on all kinds of matrix operations (matrix multiplication,
factorization, linear systems, etc.). I like D and am willing to work
with D. However, without these facilities I can hardly start. I'd like
to have a matrix library whose API is kind of like Matlab's.
Is anybody working on this or planning to work on this?

Regards,
Michael


Re: Fedora 14 will integrate D into the distribution

2010-09-28 Thread Michael Chen
Somebody should suggest to them that they package D 2.0 instead.

On Tue, Sep 28, 2010 at 11:37 PM, Paulo Pinto  wrote:
> Hi,
>
> it seems that Fedora will provide D out of the box in their distribution.
>
> But they seem to be providing an old version of it.
>
> https://fedoraproject.org/wiki/Features/D_Programming
>
> Cheers,
> Paulo
>
>
>