Send Beginners mailing list submissions to
        beginners@haskell.org

To subscribe or unsubscribe via the World Wide Web, visit
        http://www.haskell.org/mailman/listinfo/beginners
or, via email, send a message with subject or body 'help' to
        beginners-requ...@haskell.org

You can reach the person managing the list at
        beginners-ow...@haskell.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Beginners digest..."


Today's Topics:

   1. Re:  Reactive Banana Data Structures Sanity Check
      (Heinrich Apfelmus)
   2.  Comfortable handling of module hierarchies (Christopher Howard)
   3.  Algorithmic Advances by Discovering Patterns in  Functions
      (Costello, Roger L.)


----------------------------------------------------------------------

Message: 1
Date: Fri, 14 Sep 2012 15:45:49 +0200
From: Heinrich Apfelmus <apfel...@quantentunnel.de>
Subject: Re: [Haskell-beginners] Reactive Banana Data Structures
        Sanity Check
To: beginners@haskell.org
Message-ID: <k2vcec$du1$1...@ger.gmane.org>
Content-Type: text/plain; charset=UTF-8; format=flowed

Michael Litchard wrote:
> I'm making an implementation of Grand Theft Wumpus using
> reactive-banana. I'm using the slot machine example to work from.
> 
> For now, I'm just building a simple graph where a Player can move from
> Node to Node. I'm not sure I have the Data Structures right, so I
> wanted to run it by the community. I'm conceptualizing each Node (a
> Street) and the Player as a Behavior.  I reason that since the Graph
> won't change, just the values inside a Node, I can update any Node as
> needed, instead of creating a new Graph whenever a value in a Node
> changes. It seems though, as I scale up, I'd end up with a big union
> of Behaviors.  Does it make sense to describe each Node as a Behavior?
> Even though I'm starting simply, I intend to write a complete
> implementation.
> 
> http://nostarch.com/download/Lisp08.pdf
> 
> data StreetName = [...]
> 
> type Player t = Behavior t Player_
> type Street t = Behavior t Street_
> 
> data Player_ = Player {location :: StreetName } deriving Show
> data Street_ = Street_ {sName :: StreetName
>                        ,player :: Maybe Player_
>                        } deriving Show
> 
> data GameEvent = MovePlayer Street
>                | Look
> 
> does that look okay so far?

That's hard to say. It certainly doesn't look wrong, but whether it's a 
good idea or not depends on what you will do later.


My process for programming with Functional Reactive Programming (FRP) is 
  usually this:

* What is the "end product", i.e. the thing that users ultimately 
interact with? In your case, it's probably the drawing of a map of the city.

* Does the end product "vary in time"? Yes? Then it's going to be a 
Behavior City  , i.e. a data structure that describes the evolution of 
the city in time.

* How does the city evolve in time? Here I start to think of the events 
and other behaviors that I use to define the behavior, for instance by 
writing

     bcity :: Behavior City
     bcity = stepper ... -- step function from events

or

     bcity = (++) <$> bsuburbs <*> bcenter  -- combine other behaviors

and so on.


The process is a bit like thinking as a physicist in god-mode: for each 
object in my little universe, I specify where it should be at any 
particular time, like 5 o'clock.


That said, I found it useful to decouple the interaction of individual 
parts -- city, node, player -- from their time evolution. Have a look at 
the Asteroids.hs example

http://www.haskell.org/haskellwiki/Reactive-banana/Examples#asteroids

The bottom part of the source contains functions like  advanceRocks  or 
collide  that specify more complicated interactions between different 
objects. The FRP part of the source file just specifies *when* these 
interactions take place, i.e.  advanceRocks  happens when  etick 
happens, while the player moves when  eleft  or  eright  happen.

In other words, my rule of thumb is

   How  does it happen? => ordinary functions
   When does it happen? => events and behaviors

In particular, you can write your Wumpus game logic by asking the "how" 
questions first and only then ask the "when" questions. Thus, I would 
rename  Player_  to  Player  for the "how" part and simply use  Behavior 
t Player  for the fairly small "when" part.
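
(If it helps to see that split without any FRP library: the sketch below 
uses hypothetical Wumpus-style names, and a plain list of events stands in 
for an event stream, with scanl playing the role that an accumulating 
behavior would play in reactive-banana.)

```haskell
-- Library-free sketch of the "how vs when" split; all names here are
-- hypothetical, and a list of events stands in for an FRP event stream.
data StreetName = MainSt | ElmSt deriving (Eq, Show)

newtype Player = Player { location :: StreetName } deriving (Eq, Show)

data GameEvent = MovePlayer StreetName | Look deriving Show

-- "How" does it happen? An ordinary function describing one interaction.
step :: GameEvent -> Player -> Player
step (MovePlayer s) p = p { location = s }
step Look           p = p

-- "When" does it happen? Fold the interactions over the event stream.
playerOverTime :: Player -> [GameEvent] -> [Player]
playerOverTime = scanl (flip step)

main :: IO ()
main = mapM_ print (playerOverTime (Player MainSt) [Look, MovePlayer ElmSt])
```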

Hope that helps.


Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com




------------------------------

Message: 2
Date: Fri, 14 Sep 2012 17:23:45 -0800
From: Christopher Howard <christopher.how...@frigidcode.com>
Subject: [Haskell-beginners] Comfortable handling of module
        hierarchies
To: Haskell Beginners <beginners@haskell.org>
Message-ID: <5053d8a1.7030...@frigidcode.com>
Content-Type: text/plain; charset="iso-8859-1"

So, let's say I've got this hierarchy of modules:

Plant
  `-- Vegetable
  |    `-- Celery
  |    `-- Lettuce
  `-- Fruit
       `-- Raspberry
       `-- Apple

What I would like to be able to do is import and use modules like so:

code:
--------
import qualified Plant.Vegetable as V

-- call a function in module Celery
V.Celery.grow
--------

and

code:
--------
import qualified Plant as P

P.Fruit.Raspberry.jam
--------

I tried using connecting modules, which I read about online:

code (Plant.hs):
--------
module Plant ( module Plant.Vegetable
             , module Plant.Fruit
             ) where

import Plant.Vegetable
import Plant.Fruit
--------

(And likewise for Plant/Vegetable.hs and Plant/Fruit.hs.) However, this
pools the functions from the sub-modules all into one name space, which
does not allow me to make the sort of function calls I described above.
Although sometimes it is okay to have the functions all in one name
space, it doesn't fit the style I'm aiming for, and it can be a real
pain if multiple modules have a function of the same name (e.g., if all
fruits and veggies have a "grow" function). Is it possible to do what
I'm trying to do?

-- 
frigidcode.com
indicium.us



------------------------------

Message: 3
Date: Sat, 15 Sep 2012 08:36:38 +0000
From: "Costello, Roger L." <coste...@mitre.org>
Subject: [Haskell-beginners] Algorithmic Advances by Discovering
        Patterns in     Functions
To: "beginners@haskell.org" <beginners@haskell.org>
Message-ID:
        <b5fee00b53cf054aa8439027e8fe17751e65b...@imcmbx04.mitre.org>
Content-Type: text/plain; charset="us-ascii"

Hi Folks,

This is a story about discovery. It is a story about advancing the 
state-of-the-art by discovering patterns in functions.

Scientists look for recurring patterns in nature. Once a pattern is discovered, 
its essential ingredients are identified and formalized into a law or 
mathematical equation. Discovery of a pattern is important and those 
individuals who make such discoveries are rightfully acknowledged in our annals 
of science.

Scientists are not the only ones who look for recurring patterns, so do 
functional programmers. Once a pattern is discovered, its essential ingredients 
are identified and formalized into a function. That function may then become 
widely adopted by the programming community, thus elevating the programming 
community to a new level of capability.

The following is a fantastic example of discovering a pattern in multiple 
functions, discerning the essential ingredients of the pattern, and replacing 
the multiple functions with a single superior function. The example is from 
Richard Bird's book, Introduction to Functional Programming using Haskell.

Before looking at the example let's introduce an important concept in 
functional programming: partial function application. 

A function that takes two arguments may be rewritten as a function that takes 
one argument and returns a function. The function returned also takes one 
argument. For example, consider the min function which returns the minimum of 
two integers:

min 2 3         -- returns 2

Notice that min is a function that takes two arguments.

Suppose min is given only one argument:

min 2

[Definition: When fewer arguments are given to a function than it can accept, 
it is called partial application of the function.  That is, the function is 
partially applied.]

min 2 is a function. It takes one argument. It returns the minimum of the 
argument and 2. For example:

(min 2) 3       -- returns 2

To see this more starkly, let's assign g to be the function min 2:

let g = min 2

g is now a function that takes one argument and returns the minimum of the 
argument and 2:

g 3     -- returns 2

Let's take a second example: the addition operator (+) is a function and it 
takes two arguments. For example:

2 + 3   -- returns 5

To make it more obvious that (+) is a function, here is the same example in 
prefix notation:

(+) 2 3 -- returns 5

Suppose we provide (+) only one argument:

(+) 2

That is a partial application of the (+) function. 

(+) 2 is a function. It takes one argument. It returns the sum of the argument 
and 2. For example:

((+) 2) 3       -- returns 5

We can succinctly express (+) 2 as:

(+2)

Thus,

(+2) 3          -- returns 5
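
Both examples can be checked in one small program (the helper names g and 
addTwo are mine, just for illustration):

```haskell
-- Partial application in action; g and addTwo are illustrative names.
g :: Int -> Int
g = min 2          -- min applied to only one of its two arguments

addTwo :: Int -> Int
addTwo = (+2)      -- the (+) operator partially applied, as a section

main :: IO ()
main = do
  print (g 3)       -- 2
  print (addTwo 3)  -- 5
```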

Okay, now we are ready to embark on our journey of discovery. We will examine 
three functions and find their common pattern.

The functions process values from this recursive data type:

data Nat = Zero | Succ Nat

Here are some examples of Nat values:

Zero, Succ Zero, Succ (Succ Zero), Succ (Succ (Succ Zero)), ...

The following functions perform addition, multiplication, and exponentiation on 
Nat values. Examine the three functions. Can you discern a pattern?

-- Addition of Nat values:
m + Zero = m
m + Succ n = Succ(m + n)

-- Multiplication of Nat values:
m * Zero = Zero
m * Succ n = (m * n) + m

-- Exponentiation of Nat values:
m ** Zero = Succ Zero
m ** Succ n = (m ** n) * m
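
As written, these equations clash with the (+), (*), and (**) already 
exported by the Prelude. Here is one runnable transcription, with the 
operators renamed addN, mulN, and expN (my names, not Bird's), plus a small 
helper for reading results off as ordinary Ints:

```haskell
data Nat = Zero | Succ Nat deriving (Eq, Show)

-- The three definitions above, renamed to avoid the Prelude operators.
addN :: Nat -> Nat -> Nat
addN m Zero     = m
addN m (Succ n) = Succ (addN m n)

mulN :: Nat -> Nat -> Nat
mulN m Zero     = Zero
mulN m (Succ n) = addN (mulN m n) m

expN :: Nat -> Nat -> Nat
expN m Zero     = Succ Zero
expN m (Succ n) = mulN (expN m n) m

-- Helper to read off results as ordinary Ints.
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

main :: IO ()
main = print (toInt (expN (Succ (Succ Zero)) (Succ (Succ (Succ Zero)))))  -- 2^3 = 8
```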

For each function the right-hand operand is either Zero or Succ n: 

(+m) Zero
(+m) (Succ n)

(*m) Zero
(*m) (Succ n)

(**m) Zero
(**m) (Succ n)

So abstractly there is a function f that takes as argument either Zero or Succ 
n:

f Zero
f (Succ n)

Given Zero as the argument, each function immediately returns a value:

(+m)  Zero      = m
(*m)  Zero      = Zero
(**m) Zero      = Succ Zero

Let c denote the value. 

So, invoke f with a value for c and Zero:

f c Zero = c

For addition provide m as the value for c:

m + Zero = f m Zero

For multiplication provide Zero as the value for c:

m * Zero = f Zero Zero

For exponentiation provide (Succ Zero) as the value for c:

m ** Zero = f (Succ Zero) Zero

Next, consider how each of the functions (addition, multiplication, 
exponentiation) deals with Succ n.

Given the argument Succ n,  each function processes n and then applies a 
function to the result:

(+m)  (Succ n)  = Succ ((+m) n)
(*m)  (Succ n)  = (+m) ((*m) n)
(**m) (Succ n)  = (*m) ((**m) n)

Let h denote the function that is applied to the result. 

So, Succ n is processed by invoking f with a value for h and Succ n:

f h (Succ n) = h (f n)

For multiplication provide (+m) as the value for h:

m * Succ n = f (+m) (Succ n)

Recall that (+m) is a function: it takes one argument and returns the sum of 
the argument and m. 

Thus one of the arguments of function f is a function. 

f is said to be a higher order function. 

[Definition: A higher order function is a function that takes a function as an 
argument or returns a function.] 

Continuing on with the other values for h:

For addition provide (Succ) as the value for h:

m + Succ n = f (Succ) (Succ n)

For exponentiation provide (*m) as the value for h:

m ** Succ n = f (*m) (Succ n)

Recap: to process Nat values use these equations:

f c Zero = c
f h (Succ n) = h (f n)
 
Actually we need to supply f with h, c, and the Nat value:

f h c Zero = c
f h c (Succ n) = h (f h c n)

f has this type signature: its arguments are a function, a value, and a Nat 
value; the result is a value. So this is f's type signature:

f :: function -> value -> Nat -> value

That type signature is well-known in functional programming. It is the 
signature of a fold function. Here is some intuition on why it is called the 
fold function:

Imagine folding a towel. You make your first fold. The second fold builds on 
top of the first fold. Each fold builds on top of the previous folds. That is 
analogous to what the function f does. "You make your first fold" is analogous 
to immediately returning c:

f h c Zero = c

"Each fold builds on top of the previous folds" is analogous to processing Succ 
n by applying h to the result of processing n: 

f h c (Succ n) = h (f h c n)

Recap: In analyzing the functions (addition, multiplication, exponentiation) on 
Nat we have discovered that they are all doing a fold operation. 

That is a remarkable discovery. 

Let's now see how to perform addition, multiplication, and exponentiation using 
f. However, since we now recognize that f is actually the fold function, let's 
rename it foldn (fold Nat). Here is its type signature and function definition:

foldn                   :: (a -> a) -> a -> Nat -> a
foldn h c Zero          =  c
foldn h c (Succ n)      =  h (foldn h c n)

The functions are implemented as:

m + n                   = foldn Succ m n
m * n                   = foldn (+m) Zero n
m ** n                  = foldn (*m) (Succ Zero) n

Let's take an example and trace its execution:

Zero + Succ Zero
        = foldn Succ Zero (Succ Zero)
        = Succ (foldn Succ Zero Zero)
        = Succ Zero
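
Putting it all together as one runnable file (again with addN, mulN, and 
expN standing in for the operator names, to stay clear of the Prelude):

```haskell
data Nat = Zero | Succ Nat deriving (Eq, Show)

foldn :: (a -> a) -> a -> Nat -> a
foldn h c Zero     = c
foldn h c (Succ n) = h (foldn h c n)

-- The three operations, each a single foldn equation.
addN, mulN, expN :: Nat -> Nat -> Nat
addN m n = foldn Succ m n
mulN m n = foldn (addN m) Zero n
expN m n = foldn (mulN m) (Succ Zero) n

-- Even conversion back to Int is a foldn.
toInt :: Nat -> Int
toInt = foldn (+1) 0

main :: IO ()
main = print (toInt (addN Zero (Succ Zero)))  -- the traced example: prints 1
```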

The holy trinity of functional programming is:

1.  User-defined recursive data types.
2.  Recursively defined functions over recursive data types.
3.  Proof by induction: show that some property P(n) holds for each element of 
a recursive data type.

Nat is a user-defined recursive data type. The addition, multiplication, and 
exponentiation functions are recursively defined functions over Nat. 

We saw that the fold function can be defined for the Nat data type. In fact, a 
suitable fold function can be defined for every recursive data type.

Wow!

There are two advantages of writing recursive definitions in terms of fold:

1.  The definitions are shorter; rather than having to write down two 
equations, we have only to write down one.
2.  It is possible to prove general properties of fold and use them to prove 
properties of specific instantiations. In other words, rather than having to 
write down many inductive proofs, we have only to write down one.

Recap: we have approached programming like a scientist; namely, we have 
examined a set of algorithms (the addition, multiplication, and exponentiation 
functions), discovered that they have a recurring pattern, and then evolved the 
algorithms to a higher order algorithm. That is advancing the field. 
Incremental advancements such as this are what take Computer Science to new 
heights.

/Roger



------------------------------

_______________________________________________
Beginners mailing list
Beginners@haskell.org
http://www.haskell.org/mailman/listinfo/beginners


End of Beginners Digest, Vol 51, Issue 22
*****************************************
