Re: Tuple to tuple conversion
On Tue, 08 Jun 2010 03:29:05 +0200, Simen kjaeraas wrote:

> Sounds stupid, don't it?
> 123456789012345678901234567890123456789012345678901234567890123456789012
> Carrying in my hat my trusty foo, a std.typecons.Tuple!(float), I want
> to use it as a parameter to a function taking non-tuple parameters, i.e.
> a single float. foo.tupleof gives me an unwieldy conglomerate of
> tuple((Tuple!(float))._field_field_0,(Tuple!(float))._0). First, I'm not
> sure what all of this means; second, I'm completely sure it does not
> mean what I want.
>
> foo.field seems much closer to what I want, returning a nice and clean
> (float) when I ask for it. However, doing so in the context of being a
> function parameter yields other problems, in the form of:
>
>     src\phobos\std\typecons.d(424): Error: static assert
>         (is(Tuple!(string,float) == Tuple!(string,float))) is false
>     src\phobos\std\typecons.d(413): instantiated from here: Tuple!(string,float)
>     src\phobos\std\typecons.d(423): instantiated from here: slice!(1,3)
>     problem.d(15): 3 recursive instantiations from here: Tuple!(float)

FWIW, I've run across the same error while writing code that had nothing
to do with tuples, and I've seen others complaining about it too. It
seems to be a rather elusive bug in Phobos.

-Lars
Re: why is this cast necessary?
On 06/07/2010 10:02 PM, Graham Fawcett wrote:

> Hi folks,
>
> This program works as expected in D2:
>
> import std.stdio;
> import std.algorithm;
>
> T largestSubelement(T)(T[][] lol) {
>     alias reduce!"a>b?a:b" max;
>     return cast(T) max(map!max(lol)); // the cast matters...
> }
>
> void main() {
>     auto a = [[1,2,3],[4,5,6],[8,9,7]];
>     assert (largestSubelement(a) == 9);
>     auto b = ["howdy", "pardner"];
>     assert (largestSubelement(b) == 'y');
>     auto c = [[1u, 3u, 45u, 2u], [29u, 1u]];
>     assert (largestSubelement(c) == 45u);
> }
>
> But if I leave out the 'cast(T)' in line 7, then this program will not
> compile:
>
>     lse.d(6): Error: cannot implicitly convert expression
>         (reduce(map(lol))) of type dchar to immutable(char)
>     lse.d(14): Error: template instance
>         lse.largestSubelement!(immutable(char)) error instantiating
>
> Where did the 'dchar' come from? And why does the cast resolve the issue?
>
> Best,
> Graham

Curious. In std.array, front for string types is defined as

    dchar front(A)(A a);

Ick. Unicode.
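The dchar return type is easy to confirm directly; a minimal sketch, assuming a reasonably current compiler where the string `front` is supplied by std.range:

```d
import std.range;

void main()
{
    string s = "howdy";
    // For char[]/string, front decodes the next code point and
    // returns it as a dchar, not as the array's element type.
    static assert(is(typeof(s.front) == dchar));
    assert(s.front == 'h');
}
```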
Re: why is this cast necessary?
On Mon, 07 Jun 2010 23:02:48 -0400, Graham Fawcett wrote:

> Hi folks,
>
> This program works as expected in D2:
>
> import std.stdio;
> import std.algorithm;
>
> T largestSubelement(T)(T[][] lol) {
>     alias reduce!"a>b?a:b" max;
>     return cast(T) max(map!max(lol)); // the cast matters...
> }
>
> void main() {
>     auto a = [[1,2,3],[4,5,6],[8,9,7]];
>     assert (largestSubelement(a) == 9);
>     auto b = ["howdy", "pardner"];
>     assert (largestSubelement(b) == 'y');
>     auto c = [[1u, 3u, 45u, 2u], [29u, 1u]];
>     assert (largestSubelement(c) == 45u);
> }
>
> But if I leave out the 'cast(T)' in line 7, then this program will not
> compile:
>
>     lse.d(6): Error: cannot implicitly convert expression
>         (reduce(map(lol))) of type dchar to immutable(char)
>     lse.d(14): Error: template instance
>         lse.largestSubelement!(immutable(char)) error instantiating
>
> Where did the 'dchar' come from? And why does the cast resolve the issue?

In a recent update, Andrei changed char[] and wchar[] to be treated as
bi-directional ranges of dchar instead of plain arrays (at least, I think
that was the change) in the eyes of the range primitives. I think this is
where the dchar comes from.

If you had a char[], and the 'max' element was a sequence of 2 code
units, how would you return a single char for that result?

-Steve
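Steve's point about multi-unit characters can be made concrete; a small sketch (the example character is my choice, not from the thread):

```d
import std.range;

void main()
{
    string s = "é";          // one code point, two UTF-8 code units
    assert(s.length == 2);   // .length counts code units (char)
    dchar c = s.front;       // front decodes the whole code point
    assert(c == 'é');        // a single dchar holds it; a char cannot
}
```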
why is this cast necessary?
Hi folks,

This program works as expected in D2:

import std.stdio;
import std.algorithm;

T largestSubelement(T)(T[][] lol) {
    alias reduce!"a>b?a:b" max;
    return cast(T) max(map!max(lol)); // the cast matters...
}

void main() {
    auto a = [[1,2,3],[4,5,6],[8,9,7]];
    assert (largestSubelement(a) == 9);
    auto b = ["howdy", "pardner"];
    assert (largestSubelement(b) == 'y');
    auto c = [[1u, 3u, 45u, 2u], [29u, 1u]];
    assert (largestSubelement(c) == 45u);
}

But if I leave out the 'cast(T)' in line 7, then this program will not
compile:

    lse.d(6): Error: cannot implicitly convert expression
        (reduce(map(lol))) of type dchar to immutable(char)
    lse.d(14): Error: template instance
        lse.largestSubelement!(immutable(char)) error instantiating

Where did the 'dchar' come from? And why does the cast resolve the issue?

Best,
Graham
Re: Tuple to tuple conversion
Simen kjaeraas wrote:

> I guess what I'm asking for here is: is there a way to do what I want?

Hm, it seems the problem was not where I thought it was. However, this
is getting curiouser and curiouser.

--
Simen
Tuple to tuple conversion
Sounds stupid, don't it?
123456789012345678901234567890123456789012345678901234567890123456789012

Carrying in my hat my trusty foo, a std.typecons.Tuple!(float), I want to
use it as a parameter to a function taking non-tuple parameters, i.e. a
single float. foo.tupleof gives me an unwieldy conglomerate of
tuple((Tuple!(float))._field_field_0,(Tuple!(float))._0). First, I'm not
sure what all of this means; second, I'm completely sure it does not mean
what I want.

foo.field seems much closer to what I want, returning a nice and clean
(float) when I ask for it. However, doing so in the context of being a
function parameter yields other problems, in the form of:

    src\phobos\std\typecons.d(424): Error: static assert
        (is(Tuple!(string,float) == Tuple!(string,float))) is false
    src\phobos\std\typecons.d(413): instantiated from here: Tuple!(string,float)
    src\phobos\std\typecons.d(423): instantiated from here: slice!(1,3)
    problem.d(15): 3 recursive instantiations from here: Tuple!(float)

Especially interesting might be line 424, as that assert ought to be true
in most cases.

I guess what I'm asking for here is: is there a way to do what I want?

--
Simen
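For what it's worth, spreading a Tuple's fields into a parameter list does work through the member that exposes them as a value sequence; a minimal sketch (in current Phobos that member is spelled expand; the field name above was the 2010-era spelling):

```d
import std.typecons : tuple;

void takesFloat(float x) { }

void main()
{
    auto foo = tuple(1.5f);   // a Tuple!(float)
    takesFloat(foo.expand);   // the fields, spread out as arguments
    takesFloat(foo[0]);       // or index a single field directly
}
```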
Re: Handy templates
Simen kjaeraas wrote:

Another few that showed up now with my work on combinatorial products of
ranges:

/**
 * Determines whether a template parameter is a type or a value (alias).
 *
 * Example:
 *   template foo( T... ) if ( allSatisfy!( isAlias, T ) ) {...}
 */
template isAlias( alias T ) {
    enum isAlias = true;
}

template isAlias( T ) {
    enum isAlias = false;
}

/**
 * Switches between template instantiations depending on the parameters
 * passed.
 *
 * Example:
 *   alias staticSwitch!( foo, 1, 2, 3 ).With callFoo;
 *   callFoo( 2 ); // Actually calls foo!(2)( )
 */
template staticSwitch( alias F, T... ) if ( allSatisfy!( isAlias, T ) ) {
    auto With( CommonType!T index, ParameterTypeTuple!( F!( T[0] ) ) args ) {
        switch ( index ) {
            foreach ( i, e; T ) {
                mixin( Format!( q{case %s:}, e ) );
                return F!( e )( args );
                break;
            }
        }
        assert( false );
    }
}

version( unittest ) {
    int foo( int n ) {
        return n;
    }
}

unittest {
    assert( staticSwitch!( foo, 1, 2 ).With( 2 ) == 2 );
}

The latter currently does not work, due to bug 4292, but a patch has been
submitted. Granted, a simple template would work around that problem, but
better to remove it at the root.

Philippe, if you or anyone else want to add any of these templates to
your dranges or your own collection of templates, I would be pleased to
allow it. But please do give credit.

--
Simen
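As a quick illustration of what isAlias distinguishes (using the two-overload template exactly as posted above):

```d
template isAlias( alias T ) { enum isAlias = true; }
template isAlias( T )       { enum isAlias = false; }

// Values and symbols match the alias overload; types match the other.
static assert(  isAlias!( 42 ) );
static assert( !isAlias!( int ) );
```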
Re: delegates with C linkage
Simen kjaeraas wrote:

> dennis luehring wrote:
>
>> D still won't accept a delegate in an extern(C) declaration, because
>> this type does not exist in the C world.
>
> Nor do classes, and those certainly can be passed to a C-linkage
> function.

Yes, but I think that's a bug too. Quite a horrible one, in fact, since
the class may get GC'd. On the "Interfacing to C" page, class, type[],
type[type] and delegate() are listed as having no C equivalent. They
should all fail to compile.
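The underlying issue is that a D delegate is a two-word (context pointer, function pointer) pair, which has no C counterpart; the usual workaround is to pass the two halves separately, the way C callback APIs do. A sketch (names are mine, not from the thread):

```d
// What a C API can actually accept: a plain function pointer plus an
// opaque context pointer, passed separately.
extern (C) void cCallback(void* context, int value)
{
    auto counter = cast(int*) context;
    *counter += value;
}

void main()
{
    int total = 0;
    // A C library would store the pair (cCallback, &total), then invoke:
    cCallback(&total, 42);
    assert(total == 42);
}
```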