Thanks for the clarification on types.  I started to follow the links on
the "twelf" page but ended up here -
http://twelf.org/wiki/Proving_metatheorems:Representing_the_syntax_of_the_natural_numbers
- where the crucial equations don't display because of an error in
assembling the page(?) - so that's frustrating. It also hints at an
essential danger to the quest of avoiding errors in advance that seems to
motivate at least some of the type-based programming efforts: sources of
error are infinite in their variety, and Murphy often seems to have the
last word.


On Mon, Apr 6, 2015 at 3:46 PM, Justin R. Slepak <[email protected]>
wrote:

> In the interest of pushing some terminology/definitions back towards the
> conventional usage...
> (I've built a type system for a language with J-like operator lifting, but
> the following is more about embedding J-style arrays into already-existing
> type theory and based on a couple afternoons Pierre-Evariste Dagand and I
> spent with Agda).
>
> > A "type" is an enumeration of possible values.
>
> A "type" in type theory is really a set of possible *program terms*
> (typically expressions). It is common to use types to classify expressions
> according to what possible results they might produce, but the
> classification might ascribe different types to expressions that produce
> the same value.
>
> > A "sum type" is a type which can represent one of a set of values.
>
> Specifically, a thing whose type is a sum of several options must be
> *exactly one* of those options. So `2` can belong to a sum of even-number
> and natural-number, but we would have to specify whether we mean
> two-as-even or two-as-natural. For possibly-overlapping options, it may be
> more convenient to use union types, which do not need to have this
> specified (by comparison, you can think of a sum as a tagged union).
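>
> As a tiny illustrative sketch in Haskell (the names EvenOrNat, AsEven, and
> AsNatural are made up for the example):
>
>   -- A sum type: a value is exactly one of the alternatives, and the
>   -- constructor tag records which one we mean.
>   data EvenOrNat = AsEven Int | AsNatural Int
>
>   twoAsEven, twoAsNatural :: EvenOrNat
>   twoAsEven    = AsEven 2
>   twoAsNatural = AsNatural 2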
>
> > A "product type" is a type that can reprsent a sequence of values.
>
> These are more commonly described as tuples rather than sequences. Saying
> "sequence" usually gives people the idea that it can be of any length,
> whereas a particular product type must have exactly one length.
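>
> A quick Haskell-flavored sketch (purely illustrative):
>
>   -- A product type: a fixed number of components, each of a fixed type.
>   -- (Int, String, Bool) always has exactly three components.
>   example :: (Int, String, Bool)
>   example = (2, "two", True)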
>
> > Now, traditional type theory sort of peters out here.
>
> One key bit from simple type theory that's missing here (though of limited
> applicability in J itself) is function types. Given any two types S and T,
> we can talk about the type S->T whose instances are functions that consume
> things of type S and produce things of type T. J verbs can *almost* be
> described this way (each can be used as a noun->noun or as a
> noun*noun->noun).
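>
> For instance, in a Haskell-ish sketch (names are made up):
>
>   -- A function type S -> T consumes an S and produces a T.
>   double :: Int -> Int
>   double n = n * 2
>
>   -- The "dyadic" shape, consuming a product of two arguments.
>   add :: (Int, Int) -> Int
>   add (x, y) = x + y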
>
> There's also a lot more than just simple types, sums, and products out
> there these days.
>
> Adding recursive types lets us, for example, define singly-linked lists of
> integers as, "a LinkedList is either (sum type!) Empty or a tuple
> (product!) with an Integer and another LinkedList." (Empty is just some
> designated special value here). Now we have the ability to use one type to
> describe sequences of any length.
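>
> In Haskell this might look like the following (IntList and Cons are just
> illustrative names):
>
>   -- Either Empty (the sum part) or an Int paired with another IntList
>   -- (the product part).
>   data IntList = Empty | Cons Int IntList
>
>   oneTwoThree :: IntList
>   oneTwoThree = Cons 1 (Cons 2 (Cons 3 Empty))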
>
> We can also have terms and types that are parameterized over other types
> ("parametric polymorphism" and "higher-kinded types"... or simply
> "generics"). Now we can make our LinkedList definition handle any element
> type: "a LinkedList<T> is either Empty or a tuple with a T and another
> LinkedList<T>." Now `LinkedList` itself is like a function, but at the type
> level. Its input is a type (perhaps Float), and its output is a type
> (LinkedList<Float>).
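>
> The same idea, parameterized over the element type (again just a sketch;
> Haskell writes LinkedList t where the text above writes LinkedList<T>):
>
>   -- LinkedList is now a type-level function: give it Float and you get
>   -- back the type LinkedList Float.
>   data LinkedList t = Nil | Node t (LinkedList t)
>
>   floats :: LinkedList Float
>   floats = Node 1.5 (Node 2.5 Nil)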
>
>
> Two main tasks remain if we want to describe arrays with type theory.
>
>
> First, how do we talk about what we can *do* with arrays?
>
> We have a few options for ascribing types to arrays (which will specify
> how they can be used), and they'll give us differing levels of safety
> guarantee but also differing levels of complexity to implement and use.
>
> 1. All arrays of a given element type have the same type (Array<E>). It's
> easy to describe the behavior of a function/verb when you only need to
> state what type of elements its arguments and results should have (and you
> can always write a verb whose result shape is "too dynamic"). On the other
> hand, we get no guarantee about having shape-matched arguments. `1 2 3 + i.
> 2 2` would have no type error.
>
> 2. All arrays of a given element type *and rank* have the same type. This
> is a sensible enough plan for languages without J's implicit operator
> lifting. There are a couple ways we could do it:
> 2a. Describe arrays with a Vector<E> type, where E is a scalar, Vector<E>
> has rank 1, Vector<Vector<E>> has rank 2, etc. It fits nicely with the type
> theory we have, but it's hard to describe the behavior of some functions
> that change the rank of their input (our "language" of rank behavior is
> limited to adding or subtracting a constant).
> 2b. The type for a rank-N array of E is Array<N,E>. This calls for
> somewhat more complicated type theory: we need to have types parameterized
> over terms ("dependent types") or at least have a "type-level" version of
> natural numbers.
>
> 3. All arrays of a given element type and *shape* have the same type.
> Again, we want types parameterized over terms, but now the terms include
> sequences of natural numbers (not just naturals themselves). This is enough
> to describe even argument shape agreement, but we're forced to be much more
> particular in describing function behavior (sometimes too particular --
> option #1 can describe the behavior of dyadic `#`, but option #3 cannot).
> Again, two basic options:
> 3a. Vector<N,E> -- like 2a, but with the added parameter specifying the
> size of this axis.
> 3b. Array<S,E> -- like 2b, but with a sequence of naturals specifying the
> dimensions rather than one natural specifying the rank. (A rough sketch of
> options 1, 2b, and 3b follows this list.)
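>
> For concreteness, here is one way options 1, 2b, and 3b might look in
> Haskell with type-level data; all of the names (DynArray, Array, Vec,
> Shaped) are invented for the sketch and not taken from any particular
> library:
>
>   {-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}
>
>   -- Option 1: every array of element type e has the same type; the shape
>   -- is ordinary run-time data, so `1 2 3 + i. 2 2` is not a type error.
>   data DynArray e = DynArray [Int] [e]   -- shape, ravelled elements
>
>   -- Type-level natural numbers (promoted by DataKinds).
>   data Nat = Z | S Nat
>
>   -- Option 2b: the rank appears as a type index.
>   data Array (n :: Nat) e where
>     Scalar :: e -> Array 'Z e
>     Nested :: [Array n e] -> Array ('S n) e   -- items along a leading axis
>
>   -- A length-indexed vector, used below for each axis.
>   data Vec (n :: Nat) e where
>     VNil  :: Vec 'Z e
>     VCons :: e -> Vec n e -> Vec ('S n) e
>
>   -- Option 3b: the whole shape (a type-level list of axis lengths)
>   -- appears as a type index.
>   data Shaped (s :: [Nat]) e where
>     SScalar :: e -> Shaped '[] e
>     SAxis   :: Vec n (Shaped s e) -> Shaped (n ': s) e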
>
> A bit about dependent types...
> One way to think about a type that's parameterized over, say, a natural
> number is as a very large (infinite, really) product type. For a verb that
> takes rank-n arrays to rank-(n+1) arrays, we can think of it as offering
> many different versions (one for each possible n):
> (Array<0,E>->Array<1,E>) * (Array<1,E>->Array<2,E>) *
> (Array<2,E>->Array<3,E>) * ...
>
> We could shorten this with product notation:
> Π(n:Natural).(Array<n,E> -> Array<(n+1),E>)
>
> If we do something similar for sums, we can have an array of "unknown"
> rank. It's either a rank-0, a rank-1, a rank-2, etc.:
> Array<0,E> + Array<1,E> + Array<2,E> + ...
>
> Using summation notation:
> Σ(r:Natural).Array<r,E>
>
> This gives us a sort of escape hatch when we need to describe something in
> more detail than we actually know. It also works for the case where an
> array's type specifies its shape. This lets us say that the entire shape is
> unknown (like a box), or even that specific pieces of the shape are unknown.
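>
> Keeping to the same hypothetical rank-indexed Array sketch (definitions
> repeated so this stands alone; deepen and SomeArray are invented names):
>
>   {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
>
>   data Nat = Z | S Nat
>
>   data Array (n :: Nat) e where
>     Scalar :: e -> Array 'Z e
>     Nested :: [Array n e] -> Array ('S n) e
>
>   -- The Pi type: one definition usable at every rank n, i.e.
>   -- Π(n:Natural).(Array<n,E> -> Array<(n+1),E>).
>   deepen :: Array n e -> Array ('S n) e
>   deepen a = Nested [a]   -- wrap in a new length-1 leading axis
>
>   -- The Sigma type: an array whose rank is hidden (statically unknown),
>   -- i.e. Σ(r:Natural).Array<r,E>.
>   data SomeArray e where
>     SomeArray :: Array n e -> SomeArray e
>
>   -- We can still compute with it by unwrapping the existential.
>   ravel :: Array n e -> [e]
>   ravel (Scalar x)  = [x]
>   ravel (Nested xs) = concatMap ravel xs
>
>   countAtoms :: SomeArray e -> Int
>   countAtoms (SomeArray a) = length (ravel a)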
>
>
> Second, how do we say what an array *is*?
> When embedding arrays into an existing dependent type theory, the simplest
> ways are:
> 1. Use a singly linked list, since we can specify those pretty easily and
> even include the length as part of the type.
> 2. Use a function whose domain is the finite set {0,...,n-1}. It's
> unambiguous which element is "at" position 0, position 1, etc., even
> though there is no sequentialized datum to pick apart piece by piece
> whenever we want to do anything with it. (Both representations are
> sketched below.)
> In writing a *new* language (not embedding in another), this is largely a
> moot point.
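>
> A rough Haskell sketch of both representations (Vec, Fin, and index are
> invented names; Fin n is a type with exactly n values):
>
>   {-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
>
>   data Nat = Z | S Nat
>
>   -- Representation 1: a linked list whose length is part of its type.
>   data Vec (n :: Nat) e where
>     VNil  :: Vec 'Z e
>     VCons :: e -> Vec n e -> Vec ('S n) e
>
>   -- Representation 2: a function whose domain is the finite set
>   -- {0,...,n-1}, encoded as the type Fin n.
>   data Fin (n :: Nat) where
>     FZ :: Fin ('S n)
>     FS :: Fin n -> Fin ('S n)
>
>   type FnArray n e = Fin n -> e
>
>   -- Moving between the two: look an index up in a Vec.
>   -- (No VNil case is needed: Fin 'Z has no values, so it can't be reached.)
>   index :: Vec n e -> FnArray n e
>   index (VCons x _)  FZ     = x
>   index (VCons _ xs) (FS i) = index xs i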
>
>
>
> What does this not handle?
>
> Array types that specify shapes do not by themselves determine the result
> shape of a lifted verb. That said, we can build that reasoning into a new
> language if we're not trying to embed it in an existing one (or if we are,
> we can write a procedure that takes a verb and some nouns and does the
> shape magic).
>
> It doesn't cover the full flexibility of J's boxes. In
> Σ(s:Shape).Array<s,elt>, the `elt` part effectively specifies whether the
> contents of the box themselves contain boxes. A box of boxes is fine, but only
> as long as all of those boxes have the same "box nesting" depth.
>
> Many J verbs have shape behavior that is difficult to describe, is not
> even fully determined by the shapes of their arguments (which would
> require using the Σ escape hatch), or includes special cases which act
> differently.
>
> J verbs can have separate monadic and dyadic cases (with completely
> different behavior). This type theory only allows a particular function to
> have one domain. Typed Racket has a `case->` type for this sort of thing,
> but most formulations of type theory do not include it.
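>
> One possible workaround, sketched in Haskell with made-up names (Noun here
> is just a stand-in for a J array), is to take a sum of the two argument
> forms:
>
>   type Noun = [Double]
>
>   data Args = Monadic Noun | Dyadic Noun Noun
>
>   -- Like J's `-`: negation when used monadically, subtraction dyadically.
>   minusVerb :: Args -> Noun
>   minusVerb (Monadic y)  = map negate y
>   minusVerb (Dyadic x y) = zipWith (-) x y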
>
> Sums may not be the best way to describe J's atomic values. A verb
> which produces booleans can have its result passed to a verb which consumes
> floats without extracting from and then reinjecting into a sum.
>
>
> > But I never find a shortage of people willing to tell me how wonderful
> > it is, nor how wrong I am. Which is maybe a bit amazing since they
> > rarely have the patience to get past the promotional issues.
>
> Generally speaking, dynamism is at odds with static typing. There are
> researchers working on designing type systems to handle things that are
> "too dynamic" for existing systems to handle, but when dynamism is the
> goal, I suspect it's better for a language designer to avoid trying to keep
> everything within reach of a particular type theory. There's enough
> existing code, expectation, intuition, idiom, etc. which would break that I
> don't think J itself should try to integrate type theory.
> That said, the prefix-agreement and implicit operator lifting is the piece
> of dynamism I really care about in J, and I have found mainstream type
> theory a convenient enough way to identify that behavior statically.
>
> ---
> Justin Slepak
> PhD student, Computer Science dept.
>
> ----- Original Message -----
> From: Raul Miller <[email protected]>
> To: Chat forum <[email protected]>
> Sent: Sun, 05 Apr 2015 20:06:29 -0400 (EDT)
> Subject: [Jchat] Type theory terminology
>
> We've started using type theory terminology, so I thought I'd go over
> the definitions I use for "types", "sum types" and "product types".
>
> A "type" is an enumeration of possible values.
>
> For example, an 8 bit literal can represent 2^8 distinct values, while
> a 32 bit integer can represent 2^32 distinct values.
>
> A "sum type" is a type which can represent one of a set of values. So
> a sum type that lets you represent either an 8 bit literal or a 32 bit
> integer can represent (2^8)+(2^32) possible values. J arrays are sum
> types in the sense that they can represent any of a variety of
> possible underlying array types.
>
> A "product type" is a type that can reprsent a sequence of values. So
> a pair of 32 bit integers can represent (2^32)*(2^32) possible values.
> J arrays are product types in the sense that each additional element
> in the array represents an additional set of possible values.
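>
> For example, in Haskell-ish terms (illustrative type names only):
>
>   import Data.Int  (Int32)
>   import Data.Word (Word8)
>
>   -- A sum of an 8-bit value and a 32-bit value: 2^8 + 2^32 possibilities.
>   type ByteOrInt = Either Word8 Int32
>
>   -- A product (pair) of two 32-bit values: 2^32 * 2^32 possibilities.
>   type IntPair = (Int32, Int32)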
>
> Now, traditional type theory sort of peters out here. Anything is
> possible, which means the burden is on the programmer to impose any
> additional structure.
>
> Meanwhile, J arrays carry considerable structure but are somewhat
> intractable to deal with, using that sort of type theory. You might be
> tempted to call J arrays "infinite types" though they really aren't
> because of machine limitations. Or maybe they could be called
> "polynomial types" though that term also doesn't really do them
> justice.
>
> Conceptually, the J array type is a type of the form C*i0*i1*i2*...
> where C represents the sum of the various element types and i0, i1, ...
> represent the dimensions.
>
> But boxed arrays mean that C is incredibly large.
>
> Anyways, the net result is that I have never gotten very far when I
> have tried to find a useful application for type theory.
>
> But I never find a shortage of people willing to tell me how wonderful
> it is, nor how wrong I am. Which is maybe a bit amazing since they
> rarely have the patience to get past the promotional issues.
>
> Meanwhile, other people are creating amazing things, and those are the
> people I want to be learning from.
>
> --
> Raul
>



-- 
Devon McCormick, CFA
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
