On 15/08/2012, at 3:42 AM, Dobes Vandermeer wrote:

> 
> Yes yes, that's not the point. The parser isn't doing the unrolling, it's
> generating classes with a virtual procedure and instances that will do the
> unrolling.
> 
> You can do what you want now, but you have to write
> 
>         class myfor[T] { virtual proc P: T; }
>         instance[T,U] myfor[T ** U] { ... }
>         instance[T,U] myfor[T * U] { ... }
> 
> 
> for every such loop. The parser would just do all that housekeeping for you.
> The body of the loop is  the procedure P (with a tail call to P[U] to make
> the polymorphic recursion happen).
> 
> Ah, hmm, I was thinking I would just unroll the loop there somehow - copy the 
> loop body once for each element of the tuple, with a different type.

Felix used to be able to do that with the syntax macro system.

I would like to point out the problem. This worked just fine:

        macro for i in (1,2,"three") do ... done

That works. But this does not:

        var x = 1,2,"three";
        macro for i in x do ... done

because the "tuple" in the syntax macro version of "for" has to be
a literal tuple. You could do this:

        macro var x = 1,2,3;

and then it worked again. But this doesn't solve the problem:

        fun f() => 1,2,3;

So yes, you can say:

        macro fun f() => 1,2,3;

and again you've solved the problem, but now the point is clear: the tuple
has to be calculated at compile time for it to work.

With the tuple**cons technology, the tuple TYPE has to be calculated
at compile time, but not the actual tuple; the value can be calculated
at run time.

It's a big difference! What's more, you can write a for loop which uses
an existential type (a type variable) which is not known when
you write the loop.

It's not polymorphic; the type variable eventually has to resolve
to an actual type at compile time, but it is "generic" in the sense
that you can write the loop without knowing the type.

So, what the tuple cons stuff is doing is "an order of magnitude"
more powerful than mere syntax macros, and it is
"an order of magnitude" weaker than dynamic typing.

The particular structure is "type safe, with late discovery
of type errors at instantiation time", but "not as late as
run time".

Type classes provide that feature in general: delayed dispatch,
but only delayed from "compile time" to "link time". This is one hell
of a lot better than "run time" in those cases where the problem
can be represented with this technology.

That isn't always possible, but it covers a LOT of cases where weaker languages
would only allow run time checking (or worse -- run time crashing!)
(or MUCH worse -- no crash, no error, just the wrong answer!!)

>  I suppose this approach might work as well, though.
> 
> I think a similar concept should apply to a sum type, except the body is 
> executed once using the appropriate type of variable.

Yes, the same concept has to be applied to sums.
I am doing the tuple stuff first because it is easier to understand for
my poor brain. Once that's working, there's a model for doing the sum type,
by turning some of the tuple code "inside out". In particular the places
I have to handle the tuple stuff in the compiler are likely to also be
the places to handle the sum stuff.

///////////

The biggest problem I have at the moment is that I'm halfway through
making a mess of implementing compact linear types. I thought what
was supported was working, but no: the "array join" function doesn't
work at all now.

The idea of compact linear types is simple enough: you just pack
several small enumerations into a single enumeration,
a bit like packing bitfields into a single int in C, except we use
division, multiplication and modulo instead of bit shifts, to ensure
the resulting representation is compact (no "wasted bits" as it were).
In turn this means you can enumerate all the values by enumerating
the underlying integer from 0 to the maximum value of the compact type,
which allows complicated array indexes to be turned into a simple
integer index.
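
Just to illustrate the arithmetic (in C++, not Felix, with made-up sizes):
packing three enumerations of sizes 2, 3 and 4 gives a compact type with
exactly 2 * 3 * 4 = 24 values, represented as a single integer 0..23:

        #include <cassert>

        // components: a in 0..1, b in 0..2, c in 0..3
        int pack(int a, int b, int c) { return (a * 3 + b) * 4 + c; }

        void unpack(int v, int &a, int &b, int &c) {
          c = v % 4; v /= 4;  // modulo and division instead of masks and shifts
          b = v % 3; v /= 3;
          a = v;
        }

        int main() {
          for (int v = 0; v < 24; ++v) {  // enumerate every value of the compact type
            int a, b, c;
            unpack(v, a, b, c);
            assert(pack(a, b, c) == v);   // round trips: no wasted bits, no gaps
          }
        }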

All we need to do is cast the array from a multi-dimensional
array to a linear array with a multi-index, and then coerce that
index to integer type and we have polyadic arrays (fold, map,
etc over the array written ONCE and working for all ranks
and dimensions).
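
Again as a sketch of the payoff in C++ terms (made-up names, not the Felix
API): once the multi-index has been reduced to a single integer 0..size-1,
a fold can be written once over the linear index and reused regardless of
the rank of the array:

        #include <cstddef>
        #include <iostream>

        // one fold, written once, for an array of any rank viewed linearly
        template<class T>
        T sum(T const *data, std::size_t size) {
          T acc = T();
          for (std::size_t i = 0; i < size; ++i) acc += data[i];
          return acc;
        }

        int main() {
          int a[2][3][4];                        // rank 3: index type 2 * 3 * 4
          int *linear = &a[0][0][0];             // view it as a linear array of 24 cells
          for (int i = 0; i < 24; ++i) linear[i] = i;
          std::cout << sum(linear, 24) << "\n";  // prints 276
        }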

But the theory is confused by needing to keep the old style
arrays, which use type class based abstraction and methods,
working as well: these use an index of type "size".

I will eventually scuttle that, but I can't until the compact linear
type stuff works, so I have to support two concepts of an
array, which I'm doing based on the type of the index.
This is all very confusing because the "old" way of
providing the projection function was this:

        fun projection : size * array -> value = "$2[$1]";

i.e. it was just done in C.     


--
john skaller
skal...@users.sourceforge.net
http://felix-lang.org



