There's been lots of talk about functors on the list in the last few days.
Here are a few remarks about functors, starting with the general mathematical
notion and then winding down to Haskell and J.  Whoever survives most of
what follows, or skips over it judiciously, will get one nontrivial example
of a functor in J.

The notion of a functor in contemporary mathematics comes from category
theory.  In short, a functor is a mapping from one category to another that
preserves categorical structure.  To get a feel for it, it helps to have an
idea of what a category is.  In the context of programming
languages, a category is a collection of one or more types (objects) with
functions (maps) that map one type (the domain) to another (the codomain),
with some generic properties characteristic of function composition (like
associativity)--the other important concept here is that of a monoid, so
whenever some system has monoidal structure it is likely that category theory
will work, and vice versa: if it doesn't have monoidal structure,
then one had better reformulate it to have one :)

(Should I even mention that a function always has a domain and a codomain?
Well, it does, by definition.)
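
To make the "types as objects, functions as maps" picture concrete, here is a
minimal Haskell sketch (nothing beyond the standard Prelude) of the
composition-and-identity structure that the monoid remark above refers to:

 -- Maps compose, composition is associative, and id is the unit:
 -- exactly the monoid-like structure mentioned above.
 assocExample :: Bool
 assocExample = ((h . g) . f) 3 == (h . (g . f)) 3
   where
     f = (+ 1) :: Int -> Int
     g = (* 2) :: Int -> Int
     h = show  :: Int -> String

 unitExample :: Bool
 unitExample = (id . (+ 1)) 3 == ((+ 1) . id) (3 :: Int)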

Back to functors in their general setting: they are important because they
precisely connect two different categorical descriptions.  So once there is a
bunch of categories describing either a single
system/language/formalism/structure or several different ones, we can try to
find functors going from one description to another.

The first important thing here is that we can now treat these categories as
objects and the functors between them as maps, and the newly formed system
will have the properties of, yes, a category.  So in this precise way functors
are not "meta" or "external" or "higher-order" things, but just maps in a
suitable category, namely the category describing maps between a collection of
categories.

The second important thing is that if we consider a functional programming
language which has, say, just one type, we can actually still describe it
alternatively by breaking down or parametrizing or inventing several types,
under the condition that our new description describes exactly the same
language.  So if we categorize the two descriptions, there will almost
certainly be one or more functors precisely connecting them.
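
As a toy illustration (my own sketch with made-up types, not a claim about any
particular language): the "one type" description can lump every value into a
single universal type, while an alternative description splits it into several
types, with translations back into the one-type world; a functor between the
two categorized descriptions would run along such translations.

 -- "One type" description: every value is a Val.
 data Val = VNum Double | VList [Val]
   deriving Show

 -- Alternative description: separate types for numbers and for vectors of
 -- numbers, plus translations back into the single-type world.
 embedNum :: Double -> Val
 embedNum = VNum

 embedVec :: [Double] -> Val
 embedVec = VList . map VNum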

How about functors in programming?  Well, somebody already mentioned 
functors in Haskell, and they are in accordance with the category theory.  
Here is the declaration:

 class Functor f where
   fmap :: (a -> b) -> f a -> f b 

So then, is a functor a function fmap that maps some function (a -> b) to
another function f a -> f b?  No, not really.  Ok, it says "class Functor f",
so that means it describes a type (actually a class of types).  Is a functor
then a family of types?  No, not really.  It is two things:
a type constructor f *together* with a function, called fmap (satisfying some
properties).  In a sense, a functor here describes the "mappability" of a
one-argument function (a -> b) over a type f, be it a list, array, tree,
whatever is mappable.
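
For instance, here is what that "mappability" looks like for a homemade tree
type: a standard textbook-style instance, just to make the declaration above
concrete.

 data Tree a = Leaf | Node (Tree a) a (Tree a)
   deriving Show

 instance Functor Tree where
   fmap _ Leaf         = Leaf
   fmap g (Node l x r) = Node (fmap g l) (g x) (fmap g r)

 -- fmap (* 2) (Node Leaf 3 Leaf)   evaluates to   Node Leaf 6 Leaf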

So, the third important thing is that although there are instances of
functors everywhere, and lots of programming languages with sophisticated
type systems, mappings and mappers, not just any language will let us define
functors in the general form.

Ok, back to Haskell and category theory: I said that a functor maps one
category to another, so which two categories are referred to in the above
declaration?  They are both one and the same: the functor here maps the
Haskell category of types and single-argument functions to itself (it's a
so-called endofunctor).
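
For completeness, the properties alluded to above are the usual functor laws;
every instance (including the Tree sketch earlier) is expected to satisfy
them:

 fmap id      ==  id
 fmap (g . h) ==  fmap g . fmap h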

Wonderful, we'll then just find another categorical description of Haskell
and the game can go on indefinitely!  We could of course formulate another
pure functional language, better or worse than Haskell, but the important
thing here is the deep result that there is a particular kind of category,
the Cartesian Closed Category (CCC), which is "good enough" for any
Turing-complete *language*.
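
To make the CCC remark a bit more concrete, here is roughly where the
Cartesian closed structure shows up in Haskell (a sketch of the idea, not a
formal statement): products are pair types, exponentials are function types,
and currying is the defining correspondence.

 -- products:      (a, b)   with projections  fst  and  snd
 -- exponentials:  a -> b   with evaluation   \(g, x) -> g x
 curryExample :: ((a, b) -> c) -> (a -> b -> c)
 curryExample = curry

 uncurryExample :: (a -> b -> c) -> ((a, b) -> c)
 uncurryExample = uncurry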

So we may roughly think of it like this: once upon a time Turing-completeness
was, and of course still is, the characteristic of our language that
guarantees that we can program arbitrarily complicated algorithms.  With CCC,
we can go further: CCC-equivalence provides us with a formal framework that
guarantees the mathematical soundness of our languages in the sense,
so to speak, that the language "describes itself" formally.

Now to the example of a non-trivial functor in J:

Claim:
  J has the following type: vectors (one-dimensional arrays) of non-negative
  integers, represented as nouns of rank *strictly* greater than 1.
  Furthermore, J will do the type-checking of this type automatically.

Demonstration:

  Consider the following functor:

  f =: $~ 0&,
  fmap =: 1 : 'f @: u @: }.@$'

NB. f <vector> is our "type"; fmap lets us map a monadic
NB. verb u over the vector in the new type.
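NB. How it works: f is the hook ($~ 0&,), so  f y  is  (0,y) $ y -- an
NB. empty array whose *shape* is 0 followed by the elements of y.  The data
NB. survives only in the shape, which is why nothing displays.  In fmap,
NB. }.@$ drops that leading 0 from the shape to recover the vector, u is
NB. applied to it, and f re-encodes the result.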

   vec=:1 2 3 4 5
   *: vec
1 4 9 16 25

   f vec
NB. nothing shows up.

   *: fmap f vec
NB. nothing shows up.

   rank =: #@$        NB. the definition of rank.
   rank *: fmap f vec  NB. The result was hidden but nevertheless has rank 6.
6

   $ *: fmap f vec  NB. When we peek with $ into the hidden world, voila!
0 1 4 9 16 25

NB. J will even do typechecking for us:

   f _1 0 1 2 3    NB. It's not a vector of non-negative integers.
|domain error: f
|       f _1 0 1 2 3

   f 1.1 1 2 3 4   NB. It's not a vector of integers.
|domain error: f
|       f 1.1 1 2 3 4

NB. Magic of functors.