Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-02-05 Thread Dylan Thurston
On Fri, Feb 04, 2005 at 03:08:51PM +0100, Henning Thielemann wrote:
 On Thu, 3 Feb 2005, Dylan Thurston wrote:
 
 On Fri, Jan 28, 2005 at 08:16:59PM +0100, Henning Thielemann wrote:
 
 O(n)
which should be O(\n -> n) (a remark by Simon Thompson in
The Craft of Functional Programming)
 
 I don't think this can be right.  Ken argued around this point, but
 here's a more direct argument: in
 
  f(x) = x + 1 + O(1/x)
 
 all the 'x's refer to the same variable; so you shouldn't go and
 capture the one inside the 'O'.
 
 I didn't argue that textually replacing all O(A) by O(\n -> A) is a 
 general solution. For your case I suggest
 
 (\x -> f(x) - x - 1)   \in   O (\x -> 1/x)

This kind of replacement on the top level is exactly what
continuations (which Ken was suggesting) can achieve.  If you think
carefully about exactly what the big-O notation means in general
expressions like this, you'll be led to the same thing.

 I haven't yet seen the expression 2^(O(n)). I would interpret it as 
 lifting (\x -> 2^x) to sets of functions, then applying it to the function 
 set O(\n -> n). But I assume that this set can't be expressed by an O set.

That's right; for instance, in your terminology, 3^n is in 2^(O(n)).

 But I see people writing f(.) + f(.-t) and they don't tell whether this 
 means
 
   (\x -> f x) + (\x -> f (x-t))
 
 or
 
   (\x -> f x + f (x-t))
 
 Have you really seen people use that notation with either of those
 meanings?
 
 In principle, yes.

I'm curious to see examples.

 That's really horrible and inconsistent.  I would have interpreted f(.) + 
 f(.-t) as
 
 \x \y -> f(x) + f(y-t)
 
 to be consistent with notation like .*. , which seems to mean
 \x \y -> x*y
 in my experience.
 
 The problems with this notation are: You can't represent constant 
 functions, which is probably no problem for most people, since they 
 identify scalar values with constant functions. But the bigger problem is 
 the scope of the dot: How much shall be affected by the 'functionisation' 
 performed by the dot? The minimal scope is the dot itself; then '.' alone 
 would mean the identity function. But in principle it could also mean the whole 
 expression.
  I think there are good reasons why such a notation isn't implemented for 
 Haskell. But I have seen it in SuperCollider.

I certainly don't want to defend this notation...

Now that you mention it, Mathematica also has this notation, with
explicit delimiters; for instance, `(#+2)&' is the function that adds
two.

Peace,
Dylan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-02-04 Thread Dylan Thurston
(Resurrecting a somewhat old thread...)

On Fri, Jan 28, 2005 at 08:16:59PM +0100, Henning Thielemann wrote:
 On Fri, 28 Jan 2005, Chung-chieh Shan wrote:
  But I would hesitate with some of your examples, because they may simply
  illustrate that mathematical notation is a language with side effects --
  see the third and fifth examples below.
 I can't imagine mathematics with side effects, because there is no order
 of execution.

Not all side effects require an order of execution.  For instance,
dependence on the environment is a side effect (in the sense that it
is related to a monad), but it does not depend on the order of
execution.  There are many other examples too, like random variables.
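A small Haskell sketch of that point (illustrative only, nothing from the thread): in the Reader monad, 'ask' is an effect only in the sense of environment-dependence, and swapping two reads cannot change the result, so no order of execution is observable:

```haskell
import Control.Monad.Reader

-- Two computations that each read the shared environment.
readTwice :: Reader Int Int
readTwice = do
  a <- ask
  b <- ask
  return (a + b)

-- The same reads in the opposite order: reordering them is
-- unobservable, because 'ask' only depends on the environment.
readTwice' :: Reader Int Int
readTwice' = do
  b <- ask
  a <- ask
  return (a + b)

main :: IO ()
main = print (runReader readTwice 21, runReader readTwice' 21)  -- (42,42)
```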

   O(n)
 which should be O(\n -> n) (a remark by Simon Thompson in
  The Craft of Functional Programming)

I don't think this can be right.  Ken argued around this point, but
here's a more direct argument: in

  f(x) = x + 1 + O(1/x)

all the 'x's refer to the same variable; so you shouldn't go and
capture the one inside the 'O'.

This is established mathematical notation, it's very useful, and can
be explained almost coherently.  The one deficiency is that we should
interpret 'O' as an asymptotically bounded function of... but that
doesn't say what it is a function of and where we should take the
asymptotics.  But the patch you suggest doesn't really help.

 But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
 functions bounded from above by f.  So 1/O(f) has no meaning at the
 first glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x)
 (i.e. lifting from scalar reciprocal to the reciprocal of a function) and
 then as lifting from a reciprocal of a function to the reciprocal of each
 function of a set. Do you mean that?

I think this is the only reasonable generalization from the
established usage of, e.g., 2^(O(n)).  In practice, this means that
1/O(n^2) is the set of functions asymptotically bounded from below by
1/(k*n^2) for some k.
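Spelled out (my reading of the lifting just described; an interpretation, not standard notation):

```latex
% 1/O(n^2) as the image of O(n^2) under pointwise reciprocal:
\[
  1/O(n^2) \;=\; \bigl\{\, n \mapsto 1/g(n) \;\bigm|\; g \in O(n \mapsto n^2) \,\bigr\}
\]
% Hence h \in 1/O(n^2) iff there exist k > 0 and N such that
% |h(n)| \ge 1/(k\,n^2) for all n > N: an asymptotic *lower* bound.
```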

 Hm, (3+) is partial application, a re-ordered notation of ((+) 3), which
 is only possible if the omitted value is needed only once. But I see
 people writing f(.) + f(.-t) and they don't tell whether this means
 
   (\x -> f x) + (\x -> f (x-t))
 
 or
 
   (\x -> f x + f (x-t))

Have you really seen people use that notation with either of those
meanings?  That's really horrible and inconsistent.  I would have
interpreted f(.) + f(.-t) as

 \x \y -> f(x) + f(y-t)

to be consistent with notation like .*. , which seems to mean
 \x \y -> x*y
in my experience.

 It seems to me that the dot is somehow more variable than variables, and a
 dot-containing expression represents a function where the function
 arguments are inserted where the dots are.

Right.  I don't know how to formalize this, but that doesn't mean it
can't be done.
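For what it's worth, the silent lifting that makes the two readings of f(.) + f(.-t) agree can be spelled out in Haskell (a sketch; the pointwise Num instance below is defined here for illustration, it is not in the Prelude):

```haskell
-- Pointwise numeric operations on functions, so that f + g
-- means \x -> f x + g x.
instance Num b => Num (a -> b) where
  f + g = \x -> f x + g x
  f * g = \x -> f x * g x
  f - g = \x -> f x - g x
  abs f = abs . f
  signum f = signum . f
  fromInteger n = const (fromInteger n)

f :: Double -> Double
f x = x * x        -- an arbitrary example function

t :: Double
t = 1

-- The two readings of f(.) + f(.-t):
lifted, pointwise :: Double -> Double
lifted    = (\x -> f x) + (\x -> f (x - t))   -- (+) on functions
pointwise = \x -> f x + f (x - t)             -- (+) on numbers

main :: IO ()
main = print (lifted 3, pointwise 3)  -- both give 13.0
```

With the instance in scope the two expressions denote the same function, which is exactly the lifting mathematicians perform silently.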

Peace,
Dylan




Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-02-04 Thread William Lee Irwin III
On Fri, Jan 28, 2005 at 08:16:59PM +0100, Henning Thielemann wrote:
 But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
 functions bounded to the upper by f.  So 1/O(f) has no meaning at the
 first glance. I could interpret it as lifting (1/) to (\f x - 1 / f x)
 (i.e. lifting from scalar reciprocal to the reciprocal of a function) and
 then as lifting from a reciprocal of a function to the reciprocal of each
 function of a set. Do you mean that?

On Thu, Feb 03, 2005 at 08:16:49PM -0500, Dylan Thurston wrote:
 I think this is the only reasonable generalization from the
 established usage of, e.g., 2^(O(n)).  In practice, this means that
 1/O(n^2) is the set of functions asymptotically bounded below by
 1/kn^2 for some k.

Careful, 2^x is monotone increasing; 1/x is monotone decreasing. I
said 1/O(n^2) is Omega(1/n^2) for a good reason. Inequalities are
reversed by monotone decreasing functions. Likewise, sech(O(n^2)) =
Omega(sech(n^2)), which is happily immune to the effects of sign.
Usually f(n) = O(g(n)) is read as: there exist N, K so that
|f(n)| <= K*|g(n)| for all n > N, so e.g.
e^x \in O((-1)^{\chi_\mathbb{Q}}\cdot e^x) etc.

Also, you're in a bit of trouble wrt. 2^(O(n)): O(2^n) is something
rather different. O(2^n) has |f(n)| <= K*|2^n|, but 2^(O(n)) is 2^|f(n)|
where |f(n)| <= K*|n|. If K = 1/log(2) is sharp, then we have
2^|f(n)| = e^|n| \in \omega(2^n).
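A quick numeric illustration of this gap (a sanity check, not a proof): log2 of 3^n grows like (log2 3)*n, so 3^n is in 2^(O(n)) with K = log2 3, while the ratio 3^n/2^n = 1.5^n outgrows any fixed K:

```haskell
-- 3^n = 2^(n * log2 3), so 3^n is in 2^(O(n)) with K = log2 3 ~ 1.585.
log2_3 :: Double
log2_3 = logBase 2 3

-- But 3^n is not in O(2^n): the ratio (3/2)^n is unbounded.
ratio :: Int -> Double
ratio n = 1.5 ^ n

main :: IO ()
main = do
  print (logBase 2 (3 ** 10) / 10)  -- ~1.585, independent of n
  print (ratio 10, ratio 20)        -- grows without bound
```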


-- wli
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-31 Thread Chung-chieh Shan
(Is Lemming the same person as Henning Thielemann?)

On 2005-01-30T21:24:24+0100, Lemming wrote:
 Chung-chieh Shan wrote:
  Wait a minute -- would you also say that 1+x has no meaning at the
  first glance, because x is a variable whereas 1 is an integer, so
  some lifting is called for?
 For me 'x' is a place holder for a value.

But you can only add two *numbers* to get a number.  It makes no sense
to add a number to a placeholder.  So it makes no sense to write 1+x,
no?

 For the expression '1+x' I 
 conclude by type inference that 'x' must be a variable for a scalar 
 value, since '1' is, too. But the expression '1/O(n^2)' has the scalar 
 value '1' on the left of '/' and a set of functions on the right side. 
 Type inference fails, so my next try is to make the operands compatible 
 in a somehow natural way.

It seems to me that your classification of certain notations as wrong
and others as right assumes a certain type inference system that
allows adding a number to a placeholder but disallows dividing a
function by a set of functions.

 Since mathematical notation invokes many 
 implicit conversions, it's sometimes no longer unique or obvious what 
 implicit conversion to use. Many users of O(n^2) seem to consider it as 
 a placeholder for some expression, where the value of the expression is 
 bounded by n^2.

Lambda notation also involves much ambiguity and many implicit
conversions.  In particular, \x -> x can mean the identity function from
integers to integers as well as the identity function from booleans to
booleans, as well as a function that maps an integer-boolean pair to the
integer of the pair, as well as a function that maps an integer-boolean
pair to the boolean of the pair, and so on.

  Right; they are control operators in the sense that call/cc is a control
  operator.
 So they seem to be operators that work on expressions rather than 
 values. In this respect they are similar to the lambda operator, aren't 
 they?

Yes.

 You use 'ask' twice in the second expression. Does this mean that there 
 may be two different values for 'ask'? If this is the case your second 
 interpretation differs from my second interpretation.

No -- when I use ask I mean the one defined in Control.Monad.Reader.

 Church style, System F(-omega), alpha-conversion, lambda calculus, eta 
 reduction, currying  - Where can I find some introduction to them? What 
 about Haskell Curry? What about Bourbaki? - I have heard they worked 
 hard to find a unified notation.

(I'm sure that other people would have better suggestions, but I rather
like Benjamin Pierce's book Types and Programming Languages.)

Ken

-- 
Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
War crimes will be prosecuted. War criminals will be punished. And it
will be no defense to say, 'I was just following orders.'
George W. Bush, address to the nation, 2003-03-17
  http://www.whitehouse.gov/news/releases/2003/03/20030317-7.html




[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-31 Thread Henning Thielemann

On Mon, 31 Jan 2005, Chung-chieh Shan wrote:

 (Is Lemming the same person as Henning Thielemann?)

Yes. :-)

  For the expression '1+x' I 
  conclude by type inference that 'x' must be a variable for a scalar 
  value, since '1' is, too. But the expression '1/O(n^2)' has the scalar 
  value '1' on the left of '/' and a set of functions on the right side. 
  Type inference fails, so my next try is to make the operands compatible 
  in a somehow natural way.
 
 It seems to me that your classification of certain notations as wrong
 and others as right assumes a certain type inference system that
 allows adding a number to a placeholder but disallows dividing a
 function by a set of functions.

Let's see if we share a common interpretation of notation before
discussing who uses it consistently and who does not. :-)
 For me '1', 'x' and '+' are identifiers. The strings '1', 'x' and '+' are
not mathematical objects. But '1' denotes a unique mathematical object. Only
almost, because depending on the underlying explanation it may be
identified with the natural number 1, with the rational number 1, with the
real 1, or with the complex 1, and you will probably need a different
representation for each. Further, this representation depends on how you
represent natural numbers, reals and so on, e.g. by sets. '+' also denotes a
mathematical object, more precisely a function, and again we have
ambiguities like that for '1', since '+' might be the natural addition,
the rational one or even a mixed one. 'x' denotes no special mathematical
object but it may be replaced by one, and the special thing is that
within a certain scope all occurrences of 'x' must be replaced with the
same mathematical object.  Functions are special in the sense that if I
write '1+2' I don't want to consider this as the sequence of the three
objects denoted by '1', '+', '2'; rather, I want the function denoted by
'+' to be applied to the values denoted by '1' and '2'.
 So I imagine a layer of notation and a layer of mathematical objects. Do
you share this interpretation?
 What are consequences of this interpretation? '2+2' and '4' denote the
same object. A function has access only to the value (the mathematical
object) but not to the generating expression. So how can I define e.g.  
differentiation with respect to a variable? I can't, but I don't miss it
because I can define it for functions, and yes Jerzy :-), also for other
objects. For me differentiation was introduced via limits. The definition
of limits doesn't need expressions as mathematical objects, and the
differentiation of functions doesn't need them either.  But later we got
used to differentiating expressions (e.g. x^2 + y^2 is turned into
2*x * dx + 2*y * dy), but no-one defined what that is.
 What are the consequences of treating expressions as mathematical
objects, too? '2+2' and '4' are different expressions. I guess we would
still ask for a value, thus we need a mathematical function which maps
expressions to values. I think Mathematica's Evaluate is made for this
purpose. If mathematical functions can transform expressions - is it
possible that they change my writing? ;-)
 Indeed, I really like this separation: On the one side expressions, on the
other side mathematical objects. Simplifications of expressions are
nothing I would allow a mathematical function to do. But if I simplify an
expression I must assert that the denoted mathematical object remains the
same.  Differentiation of a function is possible if you only know its
graph, but a computer algebra system can find an expression for the
derivative if you have an expression for the function. There are many
functions that can be integrated, but for which a computer algebra system
cannot find an expression without the Integrate function. When a pupil
says the function cannot be integrated he means that he couldn't find an
expression for the integrated function without the integral sign. When a
professor teaching calculus says the function cannot be integrated he
means that there is some divergence. It seems to me that in this case the
pupil considers expressions as mathematical objects and the professor
does not.

 Lambda notation also involves much ambiguity and many implicit
 conversions.

Following the interpretation above I like to see lambda as a notation
rather than a mathematical operator. Lambda expressions denote functions.  
I can replace every occurrence of a lambda expression textually. If \A ->
B occurs I can also write f, where \forall A: f(A) = B holds. So
lambda notation involves no more ambiguities than the rest of the
notation.

  You use 'ask' twice in the second expression. Does this mean that there 
  may be two different values for 'ask'? If this is the case your second 
  interpretation differs from my second interpretation.
 
 No -- when I use ask I mean the one defined in Control.Monad.Reader.

Sorry for my imprecise question :-) What I meant was that 'ask' somehow
injects a value into the mathematical operation, I wondered if the values
it introduces are equal or not.

[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-30 Thread Lemming
Chung-chieh Shan wrote:
On 2005-01-28T20:16:59+0100, Henning Thielemann wrote:
I can't imagine mathematics with side effects, because there is no order
of execution.
To clarify, I'm not saying that mathematics may have side effects, but
that the language we use to talk about mathematics may have side
effects, even control effects with delimited continuations.
I understand.
But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
functions bounded from above by f.  So 1/O(f) has no meaning at the
first glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x)
(i.e. lifting from scalar reciprocal to the reciprocal of a function) and
then as lifting from a reciprocal of a function to the reciprocal of each
function of a set. Do you mean that?
Wait a minute -- would you also say that 1+x has no meaning at the
first glance, because x is a variable whereas 1 is an integer, so
some lifting is called for?
For me 'x' is a place holder for a value. For the expression '1+x' I
conclude by type inference that 'x' must be a variable for a scalar
value, since '1' is, too. But the expression '1/O(n^2)' has the scalar
value '1' on the left of '/' and a set of functions on the right side.
Type inference fails, so my next try is to make the operands compatible
in a somehow natural way. Since mathematical notation invokes many
implicit conversions, it's sometimes no longer unique or obvious what
implicit conversion to use. Many users of O(n^2) seem to consider it as
a placeholder for some expression, where the value of the expression is
bounded by n^2.
I never heard about shift and reset operators, but they don't seem to be
operators in the sense of higher-order functions.
Right; they are control operators in the sense that call/cc is a control
operator.
So they seem to be operators that work on expressions rather than
values. In this respect they are similar to the lambda operator, aren't
they?
But I see
people writing f(.) + f(.-t) and they don't tell whether this means
 (\x -> f x) + (\x -> f (x-t))
or
 (\x -> f x + f (x-t))
In this case for most mathematicians this doesn't matter because in the
above case (+) is silently lifted to the addition of functions.
Yes, so in my mind an environment monad is in effect (so to speak) here,
and the difference between the two meanings you pointed out is the
difference between
liftM2 (+) (liftM f ask) (liftM f (liftM2 (-) ask (return t)))
and
(+) (liftM f ask) (liftM f (liftM2 (-) ask (return t)))
(where import Monad and Control.Monad.Reader :).
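Made runnable (a sketch; 'f' and 't' below are arbitrary placeholders, not anything fixed in the thread), the lifted reading and the plain function it denotes agree:

```haskell
import Control.Monad (liftM, liftM2)
import Control.Monad.Reader

-- Placeholders chosen for illustration:
f :: Int -> Int
f x = x * x

t :: Int
t = 1

-- First reading: (+) itself lifted into the environment monad.
sumR :: Reader Int Int
sumR = liftM2 (+) (liftM f ask) (liftM f (liftM2 (-) ask (return t)))

-- The function this denotes, written directly:
sumF :: Int -> Int
sumF x = f x + f (x - t)

main :: IO ()
main = print (runReader sumR 3, sumF 3)  -- (13,13): f 3 + f 2 = 9 + 4
```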
You use 'ask' twice in the second expression. Does this mean that there
may be two different values for 'ask'? If this is the case your second
interpretation differs from my second interpretation.
I found that using a notation respecting the functional idea allows very
clear terms. So I used Haskell notation above to explain, what common
mathematical terms may mean.
But Haskell notation does -not- respect the functional idea.  First
there's the issue of variables: to respect the functional idea we must
program in point-free style.
Hm, you are right, I also use function definitions like (\x -> f x + g
x) ... Sure, I know from the theory of partial recursive functions that
I can do everything with notationally pure functions, but I don't
know if it is convenient.
Second there's the issue of types: to
respect the (set-theoretic) functional idea we must abolish polymorphism
and annotate our lambda abstractions in Church style.  Surely we don't
want the meaning of our mathematical formulas to depend on the semantics
of System F(-omega)!
Church style, System F(-omega), alpha-conversion, lambda calculus, eta
reduction, currying  - Where can I find some introduction to them? What
about Haskell Curry? What about Bourbaki? - I have heard they worked
hard to find a unified notation.


[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-29 Thread Stefan Monnier
 a < b < c
which is a short-cut of a < b \land b < c

The confusion between f(x) and λx.f(x) is indeed a real bummer.
OTOH I like the a<b<c shorthand because it's both obvious and unambiguous
(as long as the return value of < can't be passed as an argument to <, which
is typically the case when the return value is boolean and there's no
ordering defined on booleans).


Stefan



Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-01-29 Thread Marcin 'Qrczak' Kowalczyk
Stefan Monnier [EMAIL PROTECTED] writes:

 OTOH I like the a<b<c shorthand because it's both obvious and
 unambiguous (as long as the return value of < can't be passed as an
 argument to <, which is typically the case when the return value is
 boolean and there's no ordering defined on booleans).

It's unambiguous even if the return value of < can be passed as an
argument to <. Operators are usually left-associative, right-associative
or non-associative. A non-associative operator can have an additional
semantics defined when it's used multiple times, just like a,b,c in
OCaml is neither a,(b,c) nor (a,b),c, or even a*b*c as a type.
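One way to give < such an additional chained semantics in Haskell (an illustrative sketch; the combinators below are invented for this example) is to thread the truth so far and the previous operand through the chain:

```haskell
-- A chain carries the result so far and the last operand, so that
-- a < b < c can be rendered as: chain a <. b <. c
data Chain a = Chain Bool a

chain :: a -> Chain a
chain = Chain True

(<.) :: Ord a => Chain a -> a -> Chain a
Chain ok x <. y = Chain (ok && x < y) y
infixl 4 <.

holds :: Chain a -> Bool
holds (Chain ok _) = ok

main :: IO ()
main = do
  print (holds (chain 1 <. 2 <. 3))  -- True:  1 < 2 and 2 < 3
  print (holds (chain 1 <. 3 <. 2))  -- False: 3 < 2 fails
```

Left associativity does the threading, much like the special multi-use semantics of a,b,c in OCaml.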

-- 
   __( Marcin Kowalczyk
   \__/   [EMAIL PROTECTED]
^^ http://qrnik.knm.org.pl/~qrczak/


[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-29 Thread Aaron Denney
On 2005-01-29, Stefan Monnier [EMAIL PROTECTED] wrote:
 a < b < c
which is a short-cut of a < b \land b < c

 The confusion between f(x) and λx.f(x) is indeed a real bummer.
 OTOH I like the a<b<c shorthand because it's both obvious and unambiguous
 (as long as the return value of < can't be passed as an argument to <, which
 is typically the case when the return value is boolean and there's no
 ordering defined on booleans).

Of course, it _is_ defined on Bools in Haskell, with True > False.
But see Marcin's answer.

-- 
Aaron Denney
--



[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-28 Thread Chung-chieh Shan
Henning Thielemann [EMAIL PROTECTED] wrote in article [EMAIL PROTECTED] in 
gmane.comp.lang.haskell.cafe:
 Over the past years I became more and more aware that common mathematical
 notation is full of inaccuracies, abuses and stupidity. I wonder if
 mathematical notation is subject of a mathematical branch and whether
 there are papers about this topic, e.g. how one can improve common
 mathematical notation with the knowledge of functional languages.

I would like to know too!

But I would hesitate with some of your examples, because they may simply
illustrate that mathematical notation is a language with side effects --
see the third and fifth examples below.

 Things I'm unhappy about are for instance
 f(x) \in L(\R)
where f \in L(\R) is meant

(I'm not sure what you are referring to here -- is there a specific L
that you have in mind?)

 F(x) = \int f(x) \dif x
where x shouldn't be visible outside the integral

(Not to mention that F(x) is only determined up to a constant.)

 O(n)
which should be O(\n -> n) (a remark by Simon Thompson in
The Craft of Functional Programming)

I'm worried about this analysis, because would O(1) mean O(\n -> 1), and
1/O(n^2) mean 1/O(\n -> n^2)?  And what about the equal sign in front of
most uses of big-O notation?  I would rather analyze this notation as

O = shift k. exists g. g is an asymptotically bounded function
   and k(g(n))

where shift is Danvy and Filinski's control operator (paired with
reset).  This way, we can use (as people do)

reset(f(n) = 2^{-O(n)})

to mean that

exists g. g is an asymptotically bounded function
  and f(n) = 2^{-g(n)*n}.

Note that the argument to g is not specified in the original notation,
and neither is the reset explicit.  But the parentheses in O(n) is now
regarded as mere multiplication (making O-of-n a mispronunciation).

With some more trickery underlying the equal sign, one can state
meanings such that O(n) = O(n^2) is true but O(n^2) = O(n) is false.

 a < b < c
which is a short-cut of a < b \land b < c

What do you think of [a,b,c] for lists?

 f(.)
which means \x -> f x or just f

I regard this as reset(f(shift k. k)).  Besides, even Haskell has (3+).

 All of these examples expose a common misunderstanding of functions, so I
 assume that the pioneers of functional programming also must have worried
 about common mathematical notation.

But AFAIK, nobody ever promised that the mathematical notation used to
talk about functions must denote functions themselves.  To take another
example, even though programs are morphisms, we don't always program
in point-free style.  And the English word nobody does not denote
anybody!



[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-28 Thread Henning Thielemann

On Fri, 28 Jan 2005, Chung-chieh Shan wrote:

 Henning Thielemann [EMAIL PROTECTED] wrote in article [EMAIL PROTECTED] 
 in gmane.comp.lang.haskell.cafe:
  Over the past years I became more and more aware that common mathematical
  notation is full of inaccuracies, abuses and stupidity. I wonder if
  mathematical notation is subject of a mathematical branch and whether
  there are papers about this topic, e.g. how one can improve common
  mathematical notation with the knowledge of functional languages.

 I would like to know too!

 But I would hesitate with some of your examples, because they may simply
 illustrate that mathematical notation is a language with side effects --
 see the third and fifth examples below.

I can't imagine mathematics with side effects, because there is no order
of execution.

  Things I'm unhappy about are for instance
  f(x) \in L(\R)
 where f \in L(\R) is meant

 (I'm not sure what you are referring to here -- is there a specific L
 that you have in mind?)

Erm yes, my examples are taken from functional analysis. L(\R) means the
space of Lebesgue integrable functions mapping from \R to \R.

  F(x) = \int f(x) \dif x
 where x shouldn't be visible outside the integral

 (Not to mention that F(x) is only determined up to a constant.)

right, so \int needs a further parameter for the constant

  O(n)
 which should be O(\n -> n) (a remark by Simon Thompson in
 The Craft of Functional Programming)

 I'm worried about this analysis, because would O(1) mean O(\n -> 1), and
 1/O(n^2) mean 1/O(\n -> n^2)?

O(n^2) means O(\n -> n^2), yes.

People say, it is obvious what O(n^2) means. For me it is obvious that
they probably want to pass a constant function in, because O takes a
function as argument and because I know people often don't distinguish
between constant functions and scalar values.  Then O(n) = O(n^2) because
O(n) and O(n^2) denote the set of constant functions. :-)

But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
functions bounded from above by f.  So 1/O(f) has no meaning at the
first glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x)
(i.e. lifting from scalar reciprocal to the reciprocal of a function) and
then as lifting from a reciprocal of a function to the reciprocal of each
function of a set. Do you mean that?

  And what about the equal sign in front of most uses of big-O notation?

This must be an \in, and that spares us any trouble.

  I would rather analyze this notation as

 O = shift k. exists g. g is an asymptotically bounded function
and k(g(n))

 where shift is Danvy and Filinski's control operator (paired with
 reset).  This way, we can use (as people do)

 reset(f(n) = 2^{-O(n)})

 to mean that

 exists g. g is an asymptotically bounded function
   and f(n) = 2^{-g(n)*n}.

I never heard about shift and reset operators, but they don't seem to be
operators in the sense of higher-order functions.

 With some more trickery underlying the equal sign, one can state
 meanings such that O(n) = O(n^2) is true but O(n^2) = O(n) is false.

Sticking to the set definition of O we would need no tricks at all:

O(\n -> n) \subset O(\n -> n^2)

and

O(\n -> n)  /=  O(\n -> n^2)

  a < b < c
 which is a short-cut of a < b \land b < c

 What do you think of [a,b,c] for lists?

I learnt to dislike separators like commas, because
 1. (practical reason) it's harder to reorder lists in an editor
 2. (theoretical reason) it's inconsistent, since there is always one
separator fewer than list elements, except when the list is empty. In that
case there are 0 separators instead of -1.

So I think (a:b:c:[]) is the better notation.

  f(.)
 which means \x -> f x or just f

 I regard this as reset(f(shift k. k)).  Besides, even Haskell has (3+).

Hm, (3+) is partial application, a re-ordered notation of ((+) 3), which
is only possible if the omitted value is needed only once. But I see
people writing f(.) + f(.-t) and they don't tell whether this means

  (\x -> f x) + (\x -> f (x-t))

or

  (\x -> f x + f (x-t))

In this case for most mathematicians this doesn't matter because in the
above case (+) is silently lifted to the addition of functions.

It seems to me that the dot is somehow more variable than variables, and a
dot-containing expression represents a function where the function
arguments are inserted where the dots are.

  All of these examples expose a common misunderstanding of functions, so I
  assume that the pioneers of functional programming also must have worried
  about common mathematical notation.

 But AFAIK, nobody ever promised that the mathematical notation used to
 talk about functions must denote functions themselves.

I found that using a notation respecting the functional idea allows very
clear terms. So I used Haskell notation above to explain, what common
mathematical terms may mean.


[Haskell-cafe] Re: mathematical notation and functional programming

2005-01-28 Thread Chung-chieh Shan
On 2005-01-28T20:16:59+0100, Henning Thielemann wrote:
 On Fri, 28 Jan 2005, Chung-chieh Shan wrote:
  But I would hesitate with some of your examples, because they may simply
  illustrate that mathematical notation is a language with side effects --
  see the third and fifth examples below.
 I can't imagine mathematics with side effects, because there is no order
 of execution.

To clarify, I'm not saying that mathematics may have side effects, but
that the language we use to talk about mathematics may have side
effects, even control effects with delimited continuations.

Shift and reset, and other delimited control operators such as control
and prompt (which date earlier than shift and reset), are neat.  Given
that we are talking about language semantics, you may be interested in
my paper at http://www.eecs.harvard.edu/~ccshan/cw2004/cw.pdf (which
quickly introduces shift and reset).

   F(x) = \int f(x) \dif x
  where x shouldn't be visible outside the integral
  (Not to mention that F(x) is only determined up to a constant.)
 right, so \int needs a further parameter for the constant

Or you can take integration to return a set of functions, and = as \in.
That's not actually how I would ultimately analyze things, but it seems
to be a start.

 People say, it is obvious what O(n^2) means. For me it is obvious that
 they probably want to pass a constant function in, because O takes a
 function as argument and because I know people often don't distinguish
 between constant functions and scalar values.  Then O(n) = O(n^2) because
 O(n) and O(n^2) denote the set of constant functions. :-)

I am interested in descriptive mathematical notation rather than
prescriptive mathematical notation (these are just my terms), in the
sense that I want to first ask people what they take certain existing
pieces of mathematical notation to mean, then figure out formal rules
that underlie these obvious interpretations.

 But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
 functions bounded above by f.  So 1/O(f) has no meaning at first
 glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x)
 (i.e. lifting from scalar reciprocal to the reciprocal of a function) and
 then as lifting from a reciprocal of a function to the reciprocal of each
 function of a set. Do you mean that?

Wait a minute -- would you also say that 1+x has no meaning at first
glance, because x is a variable whereas 1 is an integer, so
some lifting is called for?
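The two-stage lifting described above can be sketched directly in Haskell, modelling a set of functions by its membership predicate (the type FunSet and all names below are illustrative, not from the thread):

```haskell
-- Stage one: lift (1/) from scalars to functions.
recipF :: Fractional b => (a -> b) -> (a -> b)
recipF f = recip . f

-- Stage two: model a set of functions by its membership predicate
-- and lift once more, to sets: g is in 1/S iff 1/g is in S.
type FunSet a b = (a -> b) -> Bool

recipSet :: Fractional b => FunSet a b -> FunSet a b
recipSet s g = s (recipF g)
```

So 1/O(f) would be recipSet applied to (the predicate for) O(f).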

 I never heard about shift and reset operators, but they don't seem to be
 operators in the sense of higher-order functions.

Right; they are control operators in the sense that call/cc is a control
operator.

  With some more trickery underlying the equal sign, one can state
  meanings such that O(n) = O(n^2) is true but O(n^2) = O(n) is false.
 Sticking to the set definition of O we would need no tricks at all:
 O(\n -> n) \subset O(\n -> n^2)
 and
 O(\n -> n)  /=  O(\n -> n^2)

By trickery I meant taking = to not denote equality.  So for me,
taking = to denote the subset relation would count as a trick.
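Sticking to the set view, membership in O(g) can be made computational by carrying the witnesses c and n0 explicitly. Since the "for all n >= n0" part is undecidable, this toy sketch (all names invented here) only spot-checks a finite prefix:

```haskell
-- f is in O(g), witnessed by c and n0, when |f n| <= c * |g n|
-- for all n >= n0; we can only check finitely many n here.
bigOWitness :: Double -> Int -> (Int -> Double) -> (Int -> Double) -> Bool
bigOWitness c n0 g f =
  all (\n -> abs (f n) <= c * abs (g n)) [n0 .. n0 + 1000]
```

On this reading, "h = O(g)" is an existential claim about some pair (c, n0), which is why it cannot literally be an equality.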

   f(.)
  which means \x -> f x or just f
 
  I regard this as reset(f(shift k. k)).  Besides, even Haskell has (3+).
 
 Hm, (3+) is partial application, a re-ordered notation of ((+) 3), which
 is only possible if the omitted value is needed only once. But I see
 people writing f(.) + f(.-t) and they don't tell, whether this means
   (\x -> f x) + (\x -> f (x-t))
 or
   (\x -> f x + f (x-t))
 In this case for most mathematicians this doesn't matter because in the
 above case (+) is silently lifted to the addition of functions.
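The silent lifting of (+) to functions is easy to make explicit in Haskell; the pointwise Num instance below is a standard trick, written out here as a sketch rather than quoted from the thread:

```haskell
-- Pointwise arithmetic on functions: (f + g) x = f x + g x.
-- fromInteger embeds scalars as constant functions, matching the
-- identification of scalars with constant functions mentioned above.
instance Num b => Num (a -> b) where
  f + g       = \x -> f x + g x
  f * g       = \x -> f x * g x
  negate f    = negate . f
  abs f       = abs . f
  signum f    = signum . f
  fromInteger = const . fromInteger
```

With this instance in scope, the first reading (\x -> f x) + (\x -> f (x-t)) type-checks and denotes the same function as the second.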

Yes, so in my mind an environment monad is in effect (so to speak) here,
and the difference between the two meanings you pointed out is the
difference between

liftM2 (+) (liftM f ask) (liftM f (liftM2 (-) ask (return t)))

and

(+) (liftM f ask) (liftM f (liftM2 (-) ask (return t)))

(where we import Monad and Control.Monad.Reader :).
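Since ((->) r) is itself a reader monad in base (its ask is just id), the first of these two terms runs directly without mtl; f and t below are made-up stand-ins so the sketch is self-contained:

```haskell
import Control.Monad (liftM, liftM2)

-- Illustrative choices, not from the thread:
f :: Double -> Double
f x = x * x

t :: Double
t = 1

-- ((->) r) is a reader monad whose ask is the identity:
ask :: r -> r
ask = id

-- (\x -> f x + f (x - t)), phrased via monadic lifting in
-- the environment monad:
sumReading :: Double -> Double
sumReading = liftM2 (+) (liftM f ask) (liftM f (liftM2 (-) ask (return t)))
```

Here sumReading 3 computes f 3 + f (3 - 1).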

  But AFAIK, nobody ever promised that the mathematical notation used to
  talk about functions must denote functions themselves.
 I found that using a notation respecting the functional idea allows very
 clear terms. So I used Haskell notation above to explain what common
 mathematical terms may mean.

But Haskell notation does -not- respect the functional idea.  First
there's the issue of variables: to respect the functional idea we must
program in point-free style.  Second there's the issue of types: to
respect the (set-theoretic) functional idea we must abolish polymorphism
and annotate our lambda abstractions in Church style.  Surely we don't
want the meaning of our mathematical formulas to depend on the semantics
of System F(-omega)!

Or, as I would prefer, we could design our notation to be both intuitive
and formally tractable, without requiring that the concrete syntax of
our language directly correspond to the semantics of mathematics or
programming.  The (simply-typed, pure) 

Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-01-28 Thread Ketil Malde
Chung-chieh Shan [EMAIL PROTECTED] writes:

 O(n)
which should be O(\n -> n) (a remark by Simon Thompson in
The Craft of Functional Programming)

It's a neat thought, IMHO.  I usually try to quantify the variables
used, making the equivalent of 'let n = .. in O(n)'

 And what about the equal sign in front of most uses of big-O notation? 

This is a peeve of mine; I've always preferred to view O(n) etc. as sets
of functions, so that a particular function can be a *member* of this
set. 

Otherwise, you could have:   f=O(n) /\ g=O(n) => f=g  :-)

 With some more trickery underlying the equal sign, one can state
 meanings such that O(n) = O(n^2) is true but O(n^2) = O(n) is false.

I would rather say that O(n) is a subset of O(n^2).

 a < b < c
which is a short-cut of a < b \land b < c

What's the problem with this one?
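For what it's worth, the desugaring of a < b < c into a < b \land b < c can even be mimicked as a first-class Haskell operator by threading the truth value so far together with the previous operand (a toy encoding; the names chain and <: are invented here):

```haskell
infixl 4 <:

-- Carry (result so far, last operand) from left to right:
(<:) :: Ord a => (Bool, a) -> a -> (Bool, a)
(ok, a) <: b = (ok && a < b, b)

-- Start a chain:
chain :: a -> (Bool, a)
chain a = (True, a)
```

So fst (chain a <: b <: c) plays the role of a < b < c.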

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-01-28 Thread William Lee Irwin III
On Fri, Jan 28, 2005 at 08:16:59PM +0100, Henning Thielemann wrote:
 But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
 functions bounded above by f.  So 1/O(f) has no meaning at first
 glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x)
 (i.e. lifting from scalar reciprocal to the reciprocal of a function) and
 then as lifting from a reciprocal of a function to the reciprocal of each
 function of a set. Do you mean that?

My first guess is Omega(1/n^2).


-- wli