Re: [Haskell] Guidelines for respectful communication

2018-12-10 Thread Jerzy Karczmarczuk

Le 09/12/2018 à 19:03, Richard Eisenberg a écrit :

What this email seems to suggest to me is that our guidelines assume 
good faith, and yet some participants act in bad faith. I agree this 
is not well accounted-for in the guidelines.

...

I don't really think that Philippa Cowderoy's warning

/... guidelines like this risk doing even more damage than not having
any. Not only do they lack the means to handle incidents that have
already occurred, they actively discourage the community from finding
those means./

points to a true danger. Teaching "correct" behaviour is, anyway, a
never-ending process.
I have seen a good deal of nastiness on the Web, but practically never
related to Haskell. There have been some doctrinal, not very serious
disputes, and occasionally an X or Y had too much adrenaline, but true
bad faith is at most marginal. Perhaps the reason is -- I cite Simon:
/The Haskell community is such a rich collection of *intelligent*,
passionate, and committed people/.
The intelligence is crucial here. It is not democratically distributed
[[my goodness, am I already insulting people?!]], so we will always
need Constitutions, Catechisms, sportsmanship rules, etc., even without
the accompanying "criminal codes". Simon's text is NOT a proposal to
introduce a Haskell Inquisition.


In the context of the Haskell community, spending time on prevention & 
punishment of potential bad faith seems to me a bit horrible.


Ben Lippmeier says
/The way I see it, guidelines for Respectful Communication are 
statements of the desired end goal, but they don’t provide much 
insight as to the root causes of the problems, or how to address them. 
At the risk of trivialising the issue, one could reduce many such 
statements to “Can everyone please stop shouting and be nice to each 
other.”/
It is true that most etiquette rules, like dress codes, etc., are
somewhat superficial, but the "root causes of the problem" may be
terribly complicated. It is possible to degrade a communication system
without shouting or being manifestly brutal or impolite, and here and
there the wish to be '/effective/' wins over diplomacy.


Some of my students stopped asking questions on Stack Overflow because
of that, and there are many other places avoided by newbies and by
fragile people... Sending people away because of (apparently; often not
really) duplicate questions, "downvoting", forming castes of
power-enabled "gurus" who behave disrespectfully, since they are gurus,
issuing statements such as "read /some/ tutorial, and /then/ come
back", etc.: all this exists and may trigger angry answers, but it does
not imply bad faith (although too often it signals a somewhat weak
knowledge of psychology).


Let's be optimistic. I think it would do the [larger] community a
favour if Simon agreed to send the guidelines to haskell-cafe (and
perhaps to some forum outside Haskell as well). I know many people (my
former students, for example) who read only the -café list...


Live long and prosper.  🖖
Jerzy Karczmarczuk
[France.]
___
Haskell mailing list
Haskell@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell


Re: [Haskell] please improve this code - thanks

2013-08-24 Thread Jerzy Karczmarczuk

Le 24/08/2013 14:45, Me a écrit :

I'm new to haskell. I have a puny piece of code:

... blah.

Is it good haskell, bad haskell or average haskell? How can it be 
rewritten? 


Don't you think that

1. Telling us WHAT YOU WANT from your code
2. Signing your letter with a human name

might help?

Jerzy Karczmarczuk
Caen, France





Re: [Haskell] Image manipulation

2007-10-30 Thread jerzy . karczmarczuk

Dan Piponi adds to a short exchange:


jerzy.karczmarczuk:

Björn Wikström writes:

> Hi! I have lots and lots of images (jpegs) that I would like to manipulate
> and shrink (in size). They are around 5 Mb big, so I thought this would
> be a good Haskell project since it's a lazy evaluating language.
...
I must say that I don't see much use of laziness here.



Laziness plays a big role in real world image processing. Typically,
in applications like Apple's Shake, you build a dataflow
representation of the image processing operations you wish to perform,
and the final result is computed lazily so as to reduce the amount of
computation. For example, if you blur an image, and then zoom in on
the top left corner, then only the top left corner will be loaded up
from the original image (assuming your image file format supports
tiled access). You still work on tiles or scan-lines, rather than
individual pixels, so the laziness has a 'coarse' granularity.

But I'm not sure if this is what the original poster was talking about.


Neither am I...
Still, Dan, I think that there is quite a difference between the
incremental processing of signals, images, etc., and the *lazy
evaluation* of them. Of course a stream is consumed as economically as
it can be, but not less. If you filter an image (shrinking, so some
low-pass MUST be done), a pixel must be loaded together with its
neighbourhood, which means *some* scan lines.
With a JPEG this means that an 8x8 block should be loaded along with
its vicinity. But would you suggest that individual pixel processors
should be lazy? It would be useless, and would probably result in some
penalties.

So, the laziness of Haskell for me here is less than useful.
Now, the lazy *generation* of streams is another story...
Generating music (low level, i.e., sound patterns) through lazy
algorithms is quite interesting.
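To illustrate what I mean by lazy *generation*, here is a minimal sketch of my own (the sample rate and function names are arbitrary choices, not from any particular library):

```haskell
-- An infinite stream of samples of a 440 Hz sine wave at 44.1 kHz:
-- the stream is produced on demand; the consumer forces only what it needs.
sineWave :: Double -> Double -> [Double]
sineWave freq rate =
  [ sin (2 * pi * freq * fromIntegral n / rate) | n <- [0 :: Integer ..] ]

-- one millisecond's worth of samples, taken from the infinite stream
oneMs :: [Double]
oneMs = take 44 (sineWave 440 44100)
```

Nothing here ever tries to materialize the whole (infinite) signal; that is where laziness genuinely pays, as opposed to pixel-level lazy evaluation.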

Jerzy Karczmarczuk




Re: [Haskell] Image manipulation

2007-10-29 Thread jerzy . karczmarczuk

Björn Wikström writes:


Hi! I have lots and lots of images (jpegs) that I would like to manipulate
and shrink (in size). They are around 5 Mb big, so I thought this would
be a good Haskell project since it’s a lazy evaluating language.

...

I must say that I don't see much use for laziness here. In any language
you can read an image as incrementally as its format permits, but some
solid chunks must anyway be present in memory in order to do the
filtering, the index mapping, or whatever is needed to resize (or
rotate, or...) the image. Actually, filling the memory with thunks may
degrade the performance of an image processing tool...

Jerzy Karczmarczuk



Re: [Haskell] Power series in a nutshell

2007-07-12 Thread jerzy . karczmarczuk
Derek Elkins writes: 

Doug McIlroy wrote: 

For lovers of things small and beautiful,
http://www.cs.dartmouth.edu/~doug/powser.html

...

and a link to your earlier Functional Pearl,
http://citeseer.ist.psu.edu/mcilroy98power.html


If somebody is interested in similar manipulations, sometimes a bit
more involved, there is a paper (sorry for the shameless self-ad)
published more than 10 years ago; a copy is here:
http://users.info.unicaen.fr/~karczma/arpap/lazysem.pdf
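For a taste of the style, here is a minimal fragment of lazy power-series arithmetic on coefficient lists (my sketch, in the spirit of McIlroy's pearl, not his exact code):

```haskell
-- A power series a0 + a1*x + a2*x^2 + ... as a lazy list of coefficients.
addPS :: Num a => [a] -> [a] -> [a]
addPS (f:fs) (g:gs) = f + g : addPS fs gs
addPS fs     []     = fs
addPS []     gs     = gs

-- (f + x*fs) * g  =  f*g  +  x*(f*gs + fs*g)
mulPS :: Num a => [a] -> [a] -> [a]
mulPS (f:fs) ggs@(g:gs) = f * g : addPS (map (f *) gs) (mulPS fs ggs)
mulPS _      _          = []
```

Squaring the geometric series 1/(1-x), i.e., the infinite list of ones, lazily yields the coefficients 1, 2, 3, 4, ...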

Jerzy Karczmarczuk 





Re: [Haskell] Silly question on interactive import

2005-12-15 Thread Jerzy Karczmarczuk

Mirko Rahn answers my query:


:m + Char
:l parse

 would work, but loading destroys the access to the module Char. (:add 
as well).



I remember that accessing functions from Char in a qualified manner
should still be possible.


Sure. I know that I can write, e.g., Char.isUpper 'c' interactively.
But I cannot put that into my parse.hs file anyway; it is not
recognized. So, I believe the *only* way is to import Char etc. in my
private files. There is nothing wrong with that; I was just interested
in whether an "incremental, interactive" import is possible. It seems
not. Unless somebody knows how, but then I would already have an
answer; you are all very helpful. Thanks.
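A sketch of that workaround (parse.hs and startsUpper are hypothetical names; the pre-hierarchical import was `import Char`, spelled `Data.Char` in today's GHC):

```haskell
-- parse.hs (sketch): import Char's functions in the file itself,
-- instead of trying to :m + the module before :load.
import Data.Char (isUpper)   -- formerly: import Char

startsUpper :: String -> Bool
startsUpper (c:_) = isUpper c
startsUpper []    = False
```

After `:l parse`, both startsUpper and the imported isUpper are in scope, with no interactive-import gymnastics.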

Jerzy Karczmarczuk




Re: [Haskell] Silly question on interactive import

2005-12-14 Thread Jerzy Karczmarczuk

I tried to add a module, and then to load a file into GHCi, in order to use
both.

Tomasz Zielonka wrote:


Try doing it in reverse order.
:load resets the session, or whatever it is called
:module +/- doesn't


Mmm, no. The file cannot be loaded, since it won't compile without the
module in question...

Jerzy Karczmarczuk


[Haskell] Silly question on interactive import

2005-12-14 Thread Jerzy Karczmarczuk

Could you remind me of the easiest way to load a specific module
interactively before loading a file into GHCi? For example, I have a
file parse.hs using some functions, such as isUpper, which belongs to
the module Char.

I thought that writing

:m + Char
:l parse

 would work, but loading destroys the access to the module Char. (:add as well).

Well, I won't die if there is no way to do it.
But am I obliged to import Char within parse? Of course, interactive import
is illegal...

Thanks.

Jerzy Karczmarczuk


Re: [Haskell] Can anyone help...

2005-11-28 Thread Jerzy Karczmarczuk

I apologize for the posting in which I mention the inadequacy of
Doaitse Swierstra's partition program; it has been commented on by
others, and the thread is obsolete. But my posting (issued immediately
at the time) got delayed by the moderator because of the schizoidal
nature of my e-mail address... Sorry.

Jerzy Karczmarczuk


Re: [Haskell] Can anyone help me with partition numbers?

2005-11-28 Thread Jerzy Karczmarczuk

Doaitse Swierstra wrote:

Or (since we started to do someone's  homework anyway)

generate 0 = [[]]
generate n = [x:rest | x <- [1..n], rest <- generate (n-x)]


Unless I am misled, this will generate the *compositions* (ordered
sequences summing to n), e.g., for n=7, 64 of them, not the 15
partitions.

Jerzy Karczmarczuk


Re: [Haskell] Re: About Random Integer without IO

2004-11-12 Thread Jerzy Karczmarczuk
This is my *last* word, promised...
Keean Schupke wrote:
Hmm... It is impossible to write a purely functional program to 
generate random numbers. Not only that it is impossible for a computer 
to generate random numbers (except using hardware like a noise 
generator). Pseudo random numbers require a seed. Functional programs 
by definition only depend on their inputs - therefore the seed is 
either fixed (same numbers each run) or one of the inputs (which means 
it must be IO).
Will some of you folks finally *be willing to understand* what the
issue is about?

First, we don't care about 'real random' numbers; actually there are
problems even with their definition. We need sequences which *behave*
randomly from the point of view of feasible tests:
spectral/statistical, correlational, etc. RN generators work well, and
that's it. Stop with the slogans that computers don't do anything
random. It reminds me of discussions on other lists, where people spend
three months arguing whether the brain is a computer, or whether the
Universe can be assimilated to a Turing machine. I wish them and you
all the best...

Second, as the example of the ergodic function I told you about before
demonstrates, there exist plenty of functions which are pure, don't
propagate any 'seed', behave "wildly", and *can* be used as a pure
"random function". I hate to do this, but you will find such a
definition, and even a plot thereof, in my recent paper about sound
synthesis:
http://users.info.unicaen.fr/~karczma/arpap/cleasyn.pdf
Clive Brettingham-Moore points out very correctly that chaos is not the
same as randomness. But, still, unstable dynamical systems, hardware
and simulated, are used to make noise: "random" sequences with adequate
properties. Continuous systems produce quasi-regular functions (Lorenz
equations, Chua circuit, etc.), but if discretized, the results of,
say, the Hénon system may be used as weakly correlated random
generators.
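A toy illustration of that point (not the function from my paper; the map and its constant are arbitrary choices here): a discretized chaotic map is a pure function, carries no hidden state, and still produces a noise-like stream.

```haskell
-- The logistic map x' = r*x*(1-x) with r near 4: a pure, perfectly
-- reproducible stream whose successive values look noise-like.
-- (A toy, not a statistically vetted generator.)
logisticNoise :: Double -> [Double]
logisticNoise = iterate (\x -> 3.99 * x * (1 - x))
```

Two runs from the same x0 give identical streams; two nearby x0's decorrelate after a handful of iterations.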

Third, OK, let's assume we need a seed, and we use a standard RN
generator which propagates it. Now, of course everybody knows that if
you launch a functional program 57686514 times, you will get 57686514
identical results. My goodness, what a tragic perspective, what
horror!! Of course everybody launches the same program several times
just in order to get different results, no?
Seriously, if somebody has a computational problem which is <> stateful,
let him use Monads, or whatever. The Haskell conceptors put a lot of
effort into it. But, conceptually, I thought that Haskell is mainly for
people who elaborate functional programs in functional style, using
functional design patterns and thinking functionally.
And, personally, I use random streams. Or a Perlin noise, constructed
once and then used in different program instances with different
initializations. I provide these initializations manually, outside any
'random' context, since I still think that Georg Martius is really
wrong writing

I think automatic random initialisation is very important and handy in 
programs that run non-deterministic simulations. 

This is perhaps my own idiosyncrasy, but I taught simulation for some
years; I am not a speculator. The first slogan I tried to convey to my
students is:
-- The FIRST thing you should learn is that a good simulation should
share one common property with a good experiment: that you be able to
REPRODUCE IT. --

People who run Monte-Carlo computations requiring many weeks, and who
break their program into temporal slices (two days now, let's see, then
continue for a week more...) never, repeat *never*, initialize their
RNG automatically. The first run outputs the result together with the
current value of the seed, and this value is reinjected into the next
run, in order to prevent the improbable, but possible, repetition of
the sequence, which would invalidate the soundness of the gathering of
statistical data.
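A sketch of that seed-reinjection protocol (the generator here is a minimal linear congruential toy standing in for a real library one; names are mine):

```haskell
-- Toy LCG (Numerical Recipes constants); NOT for serious Monte-Carlo.
-- The point is the protocol: every slice returns its final seed,
-- which the next run reinjects.
type Seed = Int

nextRand :: Seed -> (Double, Seed)
nextRand s = (fromIntegral s' / 4294967296, s')
  where s' = (1664525 * s + 1013904223) `mod` 4294967296

-- one temporal slice: n samples plus the seed to restart from
runSlice :: Seed -> Int -> ([Double], Seed)
runSlice s0 0 = ([], s0)
runSlice s0 n = let (x,  s1)   = nextRand s0
                    (xs, sEnd) = runSlice s1 (n - 1)
                in  (x : xs, sEnd)
```

Two five-sample slices chained through the returned seed reproduce exactly one ten-sample run, which is the reproducibility property the slogan above demands.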

Thank you for your interest (if you got down to here...)
Jerzy Karczmarczuk


Re: [Haskell] Re: 2-D Plots, graphical representation of massive data

2004-08-28 Thread Jerzy Karczmarczuk




These awful graphic problems are immortal...

Glynn Clements comments my suggestion about using PostScript as the
output interface. I had written:

  Well, more often than not, proposing another *language* to process raw
  data might be an overkill.

to which he replies:

  I wasn't proposing *processing* the data in PostScript.
Well, if you think about writing a PostScript program containing,
instead of raw data, say 10, 25, 65, the commands
0 10 moveto 1 10 lineto 2 25 lineto, etc., with stroking, filling,
curveto's, scaling (as Sergey Zaharchenko proposes), then you *do*
write a data processing program in PostScript. Where the output goes is
another story.
Sergey gives some simple-minded examples, quite OK. But I challenge him
or you to explain *to a newbie* how to perform the automatic scaling of
the plot axes, how to make an EPS document embeddable in another one
with all those bounding boxes (which depend on the chosen scale, etc.),
how to make legends, and how to fill histograms with patterns. No,
don't tell me this is self-evident. I know PostScript.
Well, there are lots of ways in which you could draw stuff from
Haskell.

One is to use an in-process graphics library (e.g. GLUT/OpenGL,
wxHaskell, GTK+HS). However, if your Haskell environment doesn't
already include these, it could be highly non-trivial to actually get
to the point where you can use them.

Another is to use the core I/O functions (e.g. writeFile) to generate
files for an external program.

Either approach requires that you learn (or already know) the details
of a graphics library or file format. If you don't already know one, I
don't feel that PostScript would necessarily be any more involved than
e.g. OpenGL or GDK. You don't really have to understand the *language*
as such; there's no reason why you can't treat it as simply data, i.e. 
just write lots of 'show x ++ " " ++ show y ++ " lineto\n"'.
  

You don't need any special file format to prepare data for Matlab.
Use free format, fscanf it into a matrix, and then call hist or
whatever.
The advantage is that all scaling, smoothing, annotating, filling etc.
is there, at your disposal. Just imagine the amount of extra text
which would have to be output by a Haskell program...

The alternative, if there is no Matlab at hand, is Scilab, Gnuplot,
etc. Here too, output the data only, and let the formatting be done by
the specialized package.

It would be a very good idea to prepare a comprehensive library of
high-level PostScript code, permitting a Haskell programmer to output a
well-tuned, colourful plot, with axes, etc., quickly. A step in that
direction was made quite some time ago: the diploma thesis of Joachim
Korittky, "Functional MetaPost"; see, e.g., the pages of Ralf Hinze,
some documentation by Marco Kuhlmann, and some work of Feri Wagner and
(documentation) Meik Hellmund, if you manage to find that stuff; the
'revival' of the system was rather chaotic...
It is still not *very* high level, and for the moment it seems less
appropriate for serious plots than piping raw data to Matlab or
similar. But to draw diagrams or simplistic curves, why not?
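The "just print lineto" approach Glynn describes, as a sketch (function name is mine; note that the scaling, axes, and bounding boxes discussed above are exactly what is left out):

```haskell
-- Emit a minimal PostScript path for a list of points: no scaling,
-- no axes, no EPS bounding box -- precisely the hard parts are missing.
psPolyline :: [(Double, Double)] -> String
psPolyline []              = ""
psPolyline ((x0, y0) : ps) =
  unlines $  ["newpath", coord x0 y0 ++ " moveto"]
          ++ [coord x y ++ " lineto" | (x, y) <- ps]
          ++ ["stroke", "showpage"]
  where coord x y = show x ++ " " ++ show y
```

Writing `psPolyline [(0,10),(1,10),(2,25)]` to a .ps file gives something Ghostscript will render; turning it into a publishable plot is the part I challenge the newbie with.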

Jerzy Karczmarczuk




Re: [Haskell] 2-D Plots, graphical representation of massive data

2004-08-27 Thread Jerzy Karczmarczuk
Jacques Carette wrote:
John Meacham <[EMAIL PROTECTED]> wrote:
What would be cooler (IMHO) would be bringing all of Matlab's
functionality into Haskell via Haskell libraries, so one may use 'ghci'
sort of as one uses Matlab, but with the advantages Haskell brings.

One could create Haskell libraries that are Matlab-like, but most of
the advantages of Haskell (i.e., strong typing) are not realizable in
Haskell. To express even the most basic of matrix datatypes and
operations requires dependent types.
I did not understand what is not realizable where...
I wonder what you need those dependent types for.
Matlab is quite orthodox; its main flexibility comes from dynamic
typing, resolution of the dimensions at run time, etc. Numerical Python
gives a good share of Matlab's functionality.
Now, overloading arithmetic operations for some bulk data (lists, lists
of lists, arrays, etc.), casting them to some general "matrix" types,
should not be impossible without dependent types.
Hm. I will not bet my head on it, but, please, *provide an example* of
such a situation.
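As a half-answer to my own challenge, a sketch (names are mine) of Matlab-style matrices with dimensions resolved and checked at run time, no dependent types in sight:

```haskell
import Data.List (transpose)

newtype Matrix = Matrix [[Double]] deriving (Eq, Show)

-- elementwise addition; dimensions verified dynamically, Matlab-style
addM :: Matrix -> Matrix -> Matrix
addM (Matrix a) (Matrix b)
  | map length a == map length b = Matrix (zipWith (zipWith (+)) a b)
  | otherwise                    = error "addM: dimension mismatch"

-- matrix product; the inner dimension is checked at run time
mulM :: Matrix -> Matrix -> Matrix
mulM (Matrix a) (Matrix b)
  | all ((== length b) . length) a =
      Matrix [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]
  | otherwise = error "mulM: dimension mismatch"
```

Mismatched shapes fail at run time with a message, exactly as Matlab itself fails; no type-level naturals needed for that behaviour.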

...  It is too bad that Aldor (www.aldor.org) was too far ahead of its 
time with its first-class and dependent type system :-(  Scarily, it 
is essentially deemed a 'failure' in Computer Algebra circles, as its 
type system, powerful as it is, is still too weak to conveniently 
express the mathematics of calculus.  And calculus/analysis is what 
most people use Matlab, Maple and Mathematica for.
I have the impression that the true calculus/math-analysis percentage
in Matlab programs is negligible. Look at the composition of the Matlab
toolboxes. With symbolic packages such as Maple or Mathematica it is a
bit different, but statistically what counts is pure algebra + a good
deal of visualization facilities. Actually, with the development of
automatic differentiation techniques, one needs much less symbolic
processing nowadays...

Anyway, scientific computing and its direct concrete applications
(robotics, DSP, experiment simulation, etc.) remains an unexploited
niche for Haskell, and I hope that this will change one day.

I would like to ask the original poster, who first asked about the
Matlab<->Haskell links, what her/his *concrete* problems are...

Jerzy Karczmarczuk





Re: [Haskell] System.Random

2004-03-02 Thread Jerzy Karczmarczuk
Simon Marlow wrote:
 
why is the Random module situated under System?  Wouldn't 
something like Data be more adequate?
 
There is usually an external source of randomness, which is why the
library in placed in System rather than Data.  A purely functional
random library would be rather less useful...


Now, I don't understand this at all...

All the development of the Random stuff, in all languages, has nothing
random about it whatsoever. Perhaps *some* people like to seed the
generator with the clock time, but most *real* developers *known to me*
usually choose the seed deterministically, in order to reproduce the
sequence, until the program is ready to run in the end-user
environment.
Anyway, conceptually, the behaviour of random generators is very far
from any "external source of randomness", so the question 'why
"System"?' remains valid for me. The Random module might of course use
Time or similar entities for the randomization/initialization, but this
is a contingency.
Jerzy Karczmarczuk







Re: Why are strings linked lists?

2003-12-10 Thread Jerzy Karczmarczuk
Robert Will wrote:

Why is 'last' so much slower than 'head'?  Why is 'head' not called
'first'?  Why does 'but_last' (aka init) copy the list, but 'but_first'
(aka tail) does not?
Are those rhetorical questions, asked just to inspire some discussion,
or do you *really* not know why?


Jerzy Karczmarczuk





Re: show function

2003-12-03 Thread Jerzy Karczmarczuk
Christian Maeder wrote:
rui yang wrote:

I want to print a function which itself have some functions as it's parameters 
and will return some functions as the results, and I want to print out the 
result, does anyone knows how to define the instance declaration of show class 
to this function type?


I don't know if it is possible to define a Show-instance for a function
type, but you should not do so, because functions are usually
"unshowable". If you have a function as a result you can only apply it
to some further argument and (try to) show the result of that application.


Please don't be so categorical, and if you confess that you don't know
whether it is possible, then test first.
There are cases where you might create some complex data structures
containing functional objects: for specific dispatching, simulating OO,
for writing interpreters, etc.
Then you *MAY* need, if only for debugging, to look at your data. Of
course you cannot "print the sine function", but it is easy to write,
say, in Hugs:
instance Show (a->b) where
  show f = "<function>"


Moreover, if we get down from the Haskell crystal mountain, we might
see languages where functional entities KEEP A LOT of secondary
information, which permits auto-documenting them in a more specific,
communicative way. Look at Python functions...
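For the record, the same debugging instance works in GHC too, modulo a FlexibleInstances pragma (the placeholder string is of course an arbitrary choice):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- A debugging Show instance: every function prints as the same
-- placeholder token, which is enough to inspect mixed structures.
instance Show (a -> b) where
  show _ = "<function>"
```

With this in scope, showing a list of functions, or a record mixing data and dispatch functions, no longer breaks the derived Show machinery.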
Jerzy Karczmarczuk



Re: Why are strings linked lists?

2003-11-28 Thread Jerzy Karczmarczuk
John Meacham wrote:
[EMAIL PROTECTED] wrote:
...
As a matter of pure speculation, how big an impact would it have if, in
the next "version" of Haskell, Strings were represented as opaque types
with appropriate functions to convert to and from [Char]?  Would there be
rioting in the streets?
I also have wondered how much the string representation hurts haskell
program performance.. Something I'd like to see (perhaps a bit less
drastic) would be a String class, similar to Num so string constants
would have type 
String a => a 

then we can make [Char], PackedString, and whatnot instances. It should
at least make working with alternate string representations easier.
One (among many) reasons why I use Clean [[and for some years I could
not decide whether Haskell is the legitimate wife and Clean a
responsive mistress, or vice versa...]] is that strings, being unboxed
arrays, make it easy to communicate with lower-level binary file
processing, which may then be processed by higher-level code. Thus, I
can easily read and write image files (at least uncompressed ones, say
.bmp), binary sound files, etc. There is nothing fundamental there;
simply, a string *is* almost directly the file buffer. Haskell
introduces some overhead.
I believe that the only advantage of keeping strings as lists is to
facilitate their lazy processing: writing parsers and other long-string
consumers. But, as ajb said above, that can pass through a lazy
conversion stage, comprehensions, etc.
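An anachronistic aside: today's bytestring package (shipped with GHC) plays roughly the role of Clean's unboxed strings here; a sketch with hypothetical helper names:

```haskell
import qualified Data.ByteString as B

-- a "string" that *is* directly a byte buffer: the two-byte .bmp signature
bmpMagic :: B.ByteString
bmpMagic = B.pack [0x42, 0x4D]   -- the ASCII bytes "BM"

-- inspect a raw file buffer with no [Char] boxing overhead at all
looksLikeBmp :: B.ByteString -> Bool
looksLikeBmp buf = B.take 2 buf == bmpMagic
```

The buffer read by B.readFile can be fed to such predicates directly, which is exactly the "string is the file buffer" property I praise in Clean.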
Jerzy Karczmarczuk
Caen, France




Re: "interact" behaves oddly if used interactively

2003-10-01 Thread Jerzy Karczmarczuk
Christian Maeder wrote:
Colin Runciman wrote:

Let not the eager imperative tail wag the lazy functional dog!


Ideally functional programs should be independent of evaluation strategy 
and I assume that this is the case for about 90% of all Haskell 
programs. This leaves maybe the head or only the nose for laziness of 
the "functional dog".


"Ideally"?

You just proved that you never *needed* laziness in your life.
There is a full-fledged category of functional programs which wouldn't work
without laziness. Saying that it is 10, or 0.1% has simply no sense.
Colin demonstrated one such category.
I need laziness to implement co-recursive data structures for scientific
applications.
(If you wish, have another Great Truth:

   "Ideally any programs should be independent of the language used for
coding them..."
Now, try to convince the world.

)



Jerzy Karczmarczuk



Re: Haskell for non-Haskell's sake

2003-09-01 Thread Jerzy Karczmarczuk
Since the opening of this thread by Hal Daume 11 (binary), we see a constant
flow of interesting contributions/confessions. Plenty of applications, it
seems that Haskell is really used in a wider context than we might think.
It is a pleasure to read all this.
I have just one question, though. Why are application-oriented papers
devoted to Haskell rather rare at ICFP, including the Haskell workshop?
Are people reluctant to contribute, or are the reviewers not so
fascinated?

(Well, it happened once to me, but it had *nothing to do with Haskell*, and
nothing to do with ICFP, just some other workshop, elsewhere. Simply a BTW
remark. One reviewer wrote: "this is just an application work, not a scientific
paper". Presumably this reviewer has his particular visions what a science is,
but I don't believe that such people dominate in the milieu of FPL. I believe
that it would be interesting to organize some workshops on "practical"
applications of functional programming...)
Jerzy Karczmarczuk
Caen, France


Re: User-Defined Operators

2003-07-17 Thread Jerzy Karczmarczuk
Wolfgang Jeltsch wrote:
On Thursday, 2003-07-17, 09:08, CEST, Johannes Waldmann wrote:

A similar discussion sometimes surfaces in mathematics - where they have
"user-defined" operators all over the place, and especially so since LaTeX.


Well, for the most part, LaTeX only provides common operators. One problem, I 
came across some weeks ago, is that it is *not* possible to define his/her own 
operators (or, at least, that Lamport's "LaTeX - A Document Preparation 
System" doesn't tell you how you can define them).
I am sorry, but that is simply untrue. You can define a \mathop with
all the \limits, \nolimits, etc. properties. You have \mathchardef's,
etc. How do you think the AMS package has been constructed? Everything
is written in standard TeX; your liberty to create the most disgusting
operators is unlimited. Some Haskell-related papers dealing with
lenses, bananas and barbed wires have already exploited this
possibility.
/// In another posting, commenting on the "graphical" ways to make
operator-like icons, from a posting by Robert Ennals: ///
I think in both cases you don't define an *operator*. LaTeX probably
won't use the correct spacing around the symbol.

A related problem is that I cannot see a way to define a new "log-like 
function" (as Lamport names them), i.e., a function with a name consisting of 
several letters which have to be set in upright font with no spaces between 
them. Examples are log, min, max, sin, cos and tan.


What's wrong with  $ ... \mathrm{brumble}(2\cdot x) ... $  ?

How do you think the existing "standard ones" have been manufactured?

\def \arctan {\mathop {\rm Arctan}}

You can also put \hbox'es inside a math environment, which will prevent
the automatic choice of math italic.
Read something about font families, about \mathchardef, and about such
options as \displaystyle, \scriptstyle, etc., in order to choose
automatically the correct size of the math fonts. Also read something
about big operators, useful to define objects like sum, product, etc.
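For completeness (the operator name is made up), the amsmath way and the plain way side by side:

```latex
% amsmath: a new log-like operator, upright, with correct \mathop spacing
\DeclareMathOperator{\brumble}{brumble}

% plain TeX / LaTeX without amsmath: the same by hand
\def\Arctg{\mathop{\rm Arctg}\nolimits}

% usage: $\brumble(2\cdot x) + \Arctg x$
```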
Cheer up. YOU CAN DO EVERYTHING YOU WISH, and much more.

Jerzy Karczmarczuk








Re: How overload operator in Haskell?

2003-07-09 Thread Jerzy Karczmarczuk
Hal Daume answers a question on how to define nice, infix ops
acting on vectors:
What you want to do is make your Vector an instance of the Num(eric)
type class.  For instance:
instance Num Vector where
  (+) v1 v2 = zipWith (+) v1 v2
  (-) v1 v2 = zipWith (-) v1 v2
  negate v1 = map negate v1
  abs v1 = map abs v1
  (*) v1 v2 = ...
  fromInteger i = ...
  signum v1 = ...
I've left the last three blank because it's unclear what should go
there.  Since (*) has type Vector->Vector->Vector (in this instance),
you can't use dot product or something like that.
signum :: Vector->Vector could be written and just return [-1], [0] or
[1], I suppose.
fromInteger :: Integer->Vector is a little harder.  Perhaps just
'fromInteger i = [fromInteger i]' would be acceptable.  Or you could
leave these undefined.
 --
 Hal Daume III   | [EMAIL PROTECTED]
 "Arrest this man, he talks in maths."   | www.isi.edu/~hdaume
While this is a possible solution, I would shout loudly: "Arrest this
man, he is disrespectful wrt math!". Actually, this shows once more
that the Num class and its relatives are a horror...
Signum in this context makes no sense. The multiplication might be the
cross product, but its anti-commutativity shows plainly that this is
not a 'standard' multiplication. 'fromInteger' makes even less sense
than signum...
I am particularly horrified by "abs v = map abs v", and I am sure all
of you see why.
I think that a saner solution would be the definition of a particular
class with operations bearing names like <+>, or ^+^, or whatever:
similar to the standard ones, but different.
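A sketch of what I have in mind (the class and operator names are ad-hoc choices of mine; the list instance needs FlexibleInstances in GHC):

```haskell
{-# LANGUAGE FlexibleInstances #-}

infixl 6 ^+^, ^-^

-- A dedicated class: visibly non-Num operator names, and *only* the
-- operations that actually make sense for vectors.
class VectorSpace v where
  (^+^), (^-^) :: v -> v -> v
  scale        :: Double -> v -> v

instance VectorSpace [Double] where
  (^+^)   = zipWith (+)
  (^-^)   = zipWith (-)
  scale s = map (s *)
```

No signum, no abs, no fromInteger to fake; the type class says exactly what a vector can do, and nothing more.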
Jerzy Karczmarczuk



Re: let (d,k) = (g x y, h x d) ..

2003-06-27 Thread Jerzy Karczmarczuk
Serge D. Mechveliani wrote:

>   f' n m l = let (d,k) = (gcd n m, quot n d)  in  (k, l*k)
...
>
> The intended program was
>
>   f  n m l = let {d = gcd n m;  k = quot n d} in  (k, l*k)
>
> But f' gives the intended results, at least in the GHC
> implementation.
> So that I did not notice the `error' for a long time.
>
> Is really the Haskell pattern matching semantic so that f and f'
> are equivalent ?
But in a lazy language it is the same: the let definitions are processed
collaterally (they are mutually recursive); the only difference is the
(possibly optimized away) creation of the intermediate tuple (d,k).
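A quick check of the collateral reading, with both definitions from the post side by side (the type signature is mine):

```haskell
-- Both definitions mean the same thing: let bindings are mutually
-- recursive, so the `d` on the right-hand side of the tuple refers
-- to the `d` being bound on the left.
f, f' :: Int -> Int -> Int -> (Int, Int)
f  n m l = let { d = gcd n m; k = quot n d } in (k, l * k)
f' n m l = let (d, k) = (gcd n m, quot n d) in (k, l * k)

main :: IO ()
main = print (f 12 9 2, f' 12 9 2)  -- ((4,8),(4,8))
```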
Jerzy Karczmarczuk



Re: palm?

2003-03-10 Thread Jerzy Karczmarczuk
Bjorn Lisper wrote:

There is an interesting research question in here: how to design "lean"
implementations of lazy functional languages so they can run on small
handheld and embedded systems with restricted resources. In particular the
restricted memory available poses an interesting challenge. What I would
like to see is an implementation that is designed to be easy to port among
different handheld/embedded systems, since there are quite a few of them (in
particular there are many embedded processors). Probably a bytecode
implementation is good since byte code is compact.  Nhc might provide a good
starting point since it uses bytecode and was designed to be resource lean
in the first place. I think the people at York even did some experiments
putting it on some embedded system some years ago.
Just a side remark.
I wonder whether the byte-code approach is the best possible solution
taking into account the overload of the decoder. Why not threaded code?
The FORTH (and similar) experience, PostScript implementations, etc.
show that this paradigm may be more interesting. Anyway, when you read
for the first time the Talmud, ehmmm., I mean the description of
the STG machine by Simon PJ and others, you see that some of their
ideas are not very far from code threading.
The classical FORTH style, with the separation between tha data and
return stacks seems quite appropriate for easy implementations of
higher-order control structures. If you saw the bells and whistles
inside a FORTH processor implemented on 8bit machines, you would
agree with me.
But I do not exclude the possibility that all this has been already
discussed and rejected for some serious reasons...
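A toy illustration of the threaded-code idea, in Haskell itself (all names are mine; real threaded code lives below this level of abstraction, of course):

```haskell
-- "Threaded code": a program is a sequence of directly executable
-- operations (here: stack transformers), so running it involves no
-- opcode decoding at all, unlike a bytecode interpreter.
type Stack = [Int]
type Op    = Stack -> Stack

push :: Int -> Op
push n s = n : s

add :: Op
add (a : b : s) = a + b : s
add s           = s

run :: [Op] -> Stack -> Stack
run ops s0 = foldl (flip ($)) s0 ops

main :: IO ()
main = print (run [push 2, push 3, add] [])  -- [5]
```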
Jerzy Karczmarczuk



Re: Random Permutations

2003-03-06 Thread Jerzy Karczmarczuk
[EMAIL PROTECTED] comments my suggestion:

1. Generate N random numbers r_k, say, uniform between 0 and 1.
2. Form N pairs (1,r_1), (2,r_2), (3, r_3) ... (N,r_n).
3. Sort this list/vector wrt the *second* (random) number.
4. In the sorted list the first values (indices) give you the result.

I'm sorry but this algorithm does NOT in general provide the perfect
random permutation. Here's an excerpt from
http://pobox.com/~oleg/ftp/Haskell/perfect-shuffle.txt
that deals extensively with this issue:
Oleg, I have read that, I know also the comments here:
http://www.nist.gov/dads/HTML/perfectShuffle.html
This is a known issue, and I am not *so* ignorant.

Let us consider the simplest example (which happens to be the worst
case): a sequence of two elements, [a b]. According to the
shuffle-by-random-keys algorithm, we pick two binary random numbers,
and associate the first number with the 'a' element, and the second
number with the 'b' element. The two numbers are the tags (keys) for
the elements of the sequence. We then sort the keyed sequence in the
ascending order of the keys. We assume a stable sort algorithm. There
are only 4 possible combinations of two binary random
numbers. Therefore, there are only four possible tagged sequences:
[(0,a) (0,b)]
[(0,a) (1,b)]
[(1,a) (0,b)]
[(1,a) (1,b)]
...

In this context the generator of the random tags should be a little
more serious than choosing randomly two binary digits, don't you think?
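The quoted two-element example can be replayed exhaustively; a small sketch (all names are mine) enumerating the four equally likely binary keyings and stable-sorting each:

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- All 2^2 equally likely binary keyings of "ab", each stable-sorted
-- by its key: "ab" comes out three times, "ba" only once.
outcomes :: [String]
outcomes = [ map snd (sortBy (comparing fst) (zip [k1, k2] "ab"))
           | k1 <- [0, 1 :: Int], k2 <- [0, 1] ]

main :: IO ()
main = print outcomes  -- ["ab","ab","ba","ab"]
```

With many more key values than elements the bias shrinks, which is the practical point being argued below.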
Your criticism is perfectly valid, as most hair-splitting objections
are, but it is really not practical.
Furthermore, if we have a sequence of N elements and associate with
each element a key -- a random number uniformly distributed within [0,
M-1] (where N!>M>=N), we have the configurational space of size M^N
(i.e., M^N ways to key the sequence). There are N! possible
permutations of the sequence. Alas, for N>2 and M in that range, the M^N
equally likely keyings cannot map uniformly onto the N! permutations,
so some permutations must come out more likely than others.

   I am still waiting to see *any* practical consequences of this
   theoretical conclusion when N is of the order of dozens, hundreds or
   more. Nobody interested in the practical generation of those
   permutations would dream of generating *all* of them; if that were
   the case, exhaustive enumeration would be better. So
   the fact that some permutations are slightly more likely than
   others is practically not very meaningful.
The swapping perfect shuffle method is obviously quite fine. Why did I
suggest the sorting method? Because I just glanced over the original
query, and I was not sure whether the author of the posting used
vectors or lists. With lists the swapping algorithm becomes inefficient,
as you may clearly see. With mergesort I believe that you get your
permutations faster. Anyway, I will not defend the sorting approach
as something ideal.
Jerzy Karczmarczuk



Re: Random Permutations

2003-03-06 Thread Jerzy Karczmarczuk
[EMAIL PROTECTED] wrote:
Is there a library routine for random permutations?

I didn't find any and did a quick hack...
There are many algorithms.
One, quite natural and quite fast
(n log n; slower than linear, though...)
consists in:
1. Generate N random numbers r_k, say, uniform between 0 and 1.
2. Form N pairs (1,r_1), (2,r_2), (3, r_3) ... (N,r_n).
3. Sort this list/vector wrt the *second* (random) number.
4. In the sorted list the first values (indices) give you the result.
This is of course quite general, not restricted to any Haskell
peculiarities.
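The four steps above can be sketched directly (function names and the choice of generator are mine, not from the post):

```haskell
import Data.List (sortOn)
import System.Random (RandomGen, mkStdGen, randoms)

-- Steps 1-4: tag each element with a uniform random key in [0,1),
-- sort by the key, and read off the elements in key order.
shuffleByKeys :: RandomGen g => g -> [a] -> [a]
shuffleByKeys g xs = map snd (sortOn fst (zip keys xs))
  where keys = randoms g :: [Double]

main :: IO ()
main = print (shuffleByKeys (mkStdGen 42) [1 .. 10 :: Int])
```

The cost is dominated by the sort, hence the n log n quoted above.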
Jerzy Karczmarczuk



Re: Interesting Read (fwd)

2003-02-19 Thread Jerzy Karczmarczuk
Iavor S. Diatchki wrote:


my programs always prove IO().  this must be the best proven theorem in 
Haskell.  and people just keep on proving it :-)

I believe that I have proven more often that

undef = undef


and my students prove usually that GHC typechecker is a nasty,
unforgiving beast.

Jerzy Karczmarczuk

(of course this posting belongs rather to the list haskell-beer  ...)





Re: ANNOUNCE: Learning Haskell portal, version 0.1

2003-02-13 Thread Jerzy Karczmarczuk
Arjan van IJzendoorn wrote:


Often we see messages from people who want to learn Haskell (something we
applaud), but don't know where to begin.

...



Enough talk: http://www.cs.uu.nl/~afie/haskell/LearningHaskell.html


Thanks, Arjan, nice work.
I would add some significant papers, such as John Hughes'
"Why functional programming matters", etc.

and perhaps - for some, a little bit advanced readers - some
other papers, introducing type classes, perhaps monads (Wadler)
etc. Everything can be found through the Home of Haskell, but
gathering essential references on your page would shorten the
search path.

I believe - from some discussions here and on comp.lang.functional
(some of them quite annoying...) - that it would perhaps be a good
idea to put down a relatively comprehensive, easy comparison
between Haskell and other languages, notably functional ones: Clean,
also Scheme, absolutely the ML variants, and Erlang.
Such questions recur, and will continue to recur.


Jerzy Karczmarczuk







Dispatch on what? (Was: seeking ideas for short lecture on type classes)

2003-02-04 Thread Jerzy Karczmarczuk
This is a somewhat older thread, but I ask you to enlighten me.

Norman Ramsey wrote:

A fact that I know but don't understand the implication of is that
Haskell dispatches on the static type of a value, whereas OO languages
dispatch on the dynamic type of a value.  But I suspect I'll leave
that out :-)



Dean Herington:

Perhaps I misunderstand, but I would suggest that "fact" is, if not 
incorrect, at least oversimplified.  I would say Haskell dispatches on the
dynamic type of a value, in the sense that a single polymorphic function
varies its behavior based on the specific type(s) of its argument(s).
What may distinguish Haskell from typical OO languages (I'm not an expert
on them) is that in Haskell such polymorphic functions could (always or at
least nearly so) be specialized statically for their uses at different types.


Fergus Henderson wrote:
> I agree.  The above characterization is highly misleading.  It would be
> more accurate and informative to say that both Haskell and OO languages
> dispatch on the dynamic type of a value.
>




Now my brain has ceased to understand... Are you sure that OO dispatch schemes
are based on the *argument* type?

I would say that - unless I am dead wrong - OO languages such as Smalltalk
do not dispatch on the dynamic type of a value. The receiver is known, so its
virtual function table (belonging to the receiver's class) is known as well; the dispatching
is based on the *message identifiers*, independently of subsidiary arguments.
Only after that - perhaps - some "reversions", message propagation depending on
the arg(s) value(s), etc. may take place, but all this is irrelevant...
Forgive me if I write stupidities.
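For comparison, a small illustration of how Haskell picks an instance from the static type at the call site (the class and its names are mine, not from the thread):

```haskell
-- Haskell resolves the instance from the argument's static type,
-- known at compile time; no run-time type test is involved here.
class Describe a where
  describe :: a -> String

instance Describe Int where
  describe _ = "an Int"

instance Describe Bool where
  describe _ = "a Bool"

main :: IO ()
main = do
  putStrLn (describe (3 :: Int))   -- "an Int"
  putStrLn (describe True)         -- "a Bool"
```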


Jerzy Karczmarczuk







Re: Lazy evaluation alternative

2003-01-24 Thread Jerzy Karczmarczuk
Chris Clearwater wrote:

On Fri, Jan 24, 2003 at 01:51:57PM +0100, Jerzy Karczmarczuk wrote:


> Hey, Maestro, why don't you check before posting, hm? What is the type
> of ones? I am afraid you will get a nasty surprise...


Check what, the type? Or are you refering to the double posting?...

It seems the type would recursive? Is it that a problem?
Enlighten me? :)


Whatever some gentlemen impute about my politeness (as below), I don't
react to double postings, because such things happen to everybody. Of
course I commented on the fact that you sent a Haskell expression to
quite a wide audience before asking Hugs or GHCi with :t. Here
are their answers. If:

ones c = c (1:ones)

then the answers are, GHCi, then Hugs:

claz.hs:1:
Couldn't match `[a]' against `(t1 -> t2) -> t'
Expected type: [a]
Inferred type: (t1 -> t2) -> t
In the second argument of `(:)', namely `ones'
In the first argument of `c', namely `(1 : ones)'

**

Type checking
ERROR claz.hs:1 - Type error in function binding
*** Term   : ones
*** Type   : ([a] -> b) -> b
*** Does not match : [a]


Now, simply: it costs really nothing to check that, and BTW the result
is almost obvious when you look at (1:ones). (Note that Ghci and Hugs
give slightly different answers...)

You might construct a tree, or just for testing replace (1:ones) by
(1,ones), but I suspect that then you will get the infinite recursion
in type definition. CPS is delicate...
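One way out of the infinite-type trap, for the curious reader, is to break the recursion with a newtype; a sketch under my own naming, not something proposed in the thread:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The newtype breaks the otherwise-infinite type that made
-- `ones c = c (1 : ones)` fail to check.
newtype Stream a = Stream { runStream :: forall r. (a -> Stream a -> r) -> r }

ones :: Stream Int
ones = Stream (\c -> c 1 ones)

takeS :: Int -> Stream a -> [a]
takeS n s
  | n <= 0    = []
  | otherwise = runStream s (\x rest -> x : takeS (n - 1) rest)

main :: IO ()
main = print (takeS 5 ones)  -- [1,1,1,1,1]
```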

Next posting:


On Fri, Jan 24, 2003 at 03:07:48PM +0100, Thomas Johnsson wrote:

...

I think Jerzy (in his usual polite manner :-) refers to the 

every group has it's moshez (don't ask :)


cons operator, the :, which in a strongly typed language
the right argument, the tail, is required to be a list.


...


Well, let's pretend I made my own datatype then that supports the right
type class interfaces, and has a function as a tail :)


I must confess that I don't know what moshez are, but I won't ask.
Thanks to both of you for a new English word I learned.

I suspect that the infinite type unification gets in the way anyhow.
You can define your 'ones' in, say, Scheme, but this is clumsy, much
less transparent than using macros (delay, cons-stream, etc.), or
just the lambda-ification, as proposed by Kevin Millikin. It is
not even clear *to what* you should apply your continuation.

However, Kevin S Millikin is too pessimistic about


So your trick *is* used to implement lazy evaluation in other
languages.  It's not very pleasant if you write a lot of lazy code,
because you have to explicitly suspend evaluation of values using
delay and explicitly demand evaluation using force.


because if macros are there, all the administrative chores can be hidden.
Hm. By the way, was the fact that Haskell has no macros a conscious
decision, or by default, because nobody needed them?

On the other hand, there are languages where the construction and
processing of objects representing lazy streams is different, uses
*generators* (Smalltalk, Icon, Python); they are not functions, but
'objects' with an internal updateable state and a 'method' "next" or
equivalent. They can also be simulated in Haskell.
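Such generators can indeed be simulated; a minimal sketch (all names mine), modelling an "object" whose next yields a value together with the successor state:

```haskell
-- A generator: `next` yields one value and the generator's next state,
-- mimicking the Smalltalk/Icon/Python "object with a next method".
newtype Gen a = Gen { next :: (a, Gen a) }

nats :: Int -> Gen Int
nats n = Gen (n, nats (n + 1))

takeG :: Int -> Gen a -> [a]
takeG n g
  | n <= 0    = []
  | otherwise = let (x, g') = next g in x : takeG (n - 1) g'

main :: IO ()
main = print (takeG 4 (nats 0))  -- [0,1,2,3]
```

In pure Haskell the "updated state" is of course a fresh value, not a mutation.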

===

I suppose that it would be better to move this thread to haskell-café.


Jerzy Karczmarczuk










Re: Lazy evaluation alternative

2003-01-24 Thread Jerzy Karczmarczuk
Chris Clearwater wrote:

It seems to
me you could get some of the desired effects of lazy evaluation by using
continuation passing style in code. For example, take this pseudo-code
using CPS to represent an infinite data type.

Using non-CPS this would be something like:
ones = 1 : ones

using strict evaluation this would lead to an infinite loop.

However using CPS this could be represented as:
ones c = c (1 : ones)

where c is the continuation.

This would not infinite loop as ones is still waiting for the
continuation argument. Another example:

natural n c = c (n : natural n+1)



Hey, Maestro, why don't you check before posting, hm? What is the type
of ones? I am afraid you will get a nasty surprise...

... BTW, are you sure there aren't any missing parentheses in the
def. of natural? (But they won't help anyway...)


Jerzy Karczmarczuk






Re: Implementing forward refs in monadic assembler and interpreter

2002-11-15 Thread Jerzy Karczmarczuk


Glynn Clements comments the request for the implementation of
forward references in an assembly code simulated in Haskell.


If you are assembling into a list, the simplest approach is to perform
the assembly phase twice. The first phase generates the list of
label/address pairs. The second phase, which has the complete list of
label addresses available, performs the complete assembly process.

Both phases could use identical code; you just need to ensure that the
first phase can assemble a branch instruction for which the label is
unknown.

Alternatively, you could perform one pass plus a post-processing phase
which "fixes" any forward references. This would require either that
you can store a label in an assembled branch instruction in place of
an actual address, or that you generate a list of incomplete branch
instructions so that you can go back and fix them.

OTOH, if you're actually interpreting the "assembly language"
directly, then a forward branch would have to store the label in a
"variable" to indicate that instructions are just to be skipped until
that label is reached.


This is squeezing the power of a modern lazy language into a soap
box...

What are the labels good for, hm?
Just to identify your chunks of code which are targets of some
jumps?

Well, use these chunks themselves, their references as your
targets; the branching instruction picks up this chunk as
the next segment of code to execute.

Connect your chunks lazily. Then no forward reference can hurt
you.
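The "use the chunks themselves as targets" idea can be sketched like this (a toy of my own making, not the FDPE paper's interpreter):

```haskell
-- Branch targets are the code chunks themselves, connected lazily,
-- so a "forward reference" is just an ordinary recursive binding:
-- no label table, no fix-up pass.
data Instr = Push Int | Add | Jump Code | Halt
type Code  = [Instr]

run :: [Int] -> Code -> [Int]
run st (Push n : rest)        = run (n : st) rest
run (a : b : st) (Add : rest) = run (a + b : st) rest
run st (Jump c : _)           = run st c   -- continue at the target chunk
run st _                      = st         -- Halt, or end of code

prog :: Code
prog = start
  where
    start = [Push 1, Jump after]           -- "forward" reference to `after`
    after = [Push 2, Add, Halt]

main :: IO ()
main = print (run [] prog)  -- [3]
```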

If I may, a shameless personal plug. Look at my paper presented
at the last FDPE, a construction of a CPS "assembly-style"
interpreter, with lazy code deployment tricks.

Jerzy Karczmarczuk





A question concerning functional dependencies

2002-09-02 Thread Jerzy Karczmarczuk

I wanted to write a small package implementing vector spaces,
etc. A part of it is

class Module v s
 where
  (*>) :: s->v->v

defining the multiplication of a vector by a scalar: w = a*>v
Now, as in many other circumstances, concrete vectors are based
on concrete scalars, and I defined really:   class Module v s | v->s  .

One typical instance of vectors is the orthodox functional 
construction

instance Num s => Module (v->s) s 
 where
  (s*>f) x = s * (f x)

and such tests:  u = 2.5 *> sin;   res = u 3.14
pass without tears.

But I wanted also that operators of the type (b->s) -> (b->s),
for example:  inver f = recip . f . recip
be vectors. So:

instance ...=> Module ((v->s)->(v->s)) s
 where
  (s*>op) f = s*>(op f)

But GHCi yells that, in view of the declared functional
dependency, the two instances are in conflict. Since I believe that
I do not really understand fundeps in Haskell, and this is not
a GHC 'feature' only, I send this query to the haskell list.
I don't see this conflict. I could remove the fundep, but then
I have other (smaller, but a bit annoying) problems, so I want
to keep it, if only for my own instruction. Good people, help,
please.

Why cannot v [->s] "coexist" in this context with
 ((v->s)->v) [->s] ?

Of course all extensions, including overlapping instances are on.
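The conflict is a unification one: with v := (v'->s'), the first head `Module (v -> s) s` already covers the operator type and forces its scalar to be (v'->s'), while the second instance declares the scalar to be s' - and the fundep demands a unique scalar per vector type. A hedged sketch of the usual newtype workaround (names are mine; I rename (*>) to (*>>) only to dodge the modern Applicative operator):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Module v s | v -> s where
  (*>>) :: s -> v -> v

-- Functions into a scalar form a module over that scalar.
instance Num s => Module (v -> s) s where
  (s *>> f) x = s * f x

-- Wrapping the operator type in a newtype keeps its instance head
-- from unifying with (v -> s), so the fundep sees two distinct vectors.
newtype Op v s = Op ((v -> s) -> (v -> s))

instance Num s => Module (Op v s) s where
  s *>> Op op = Op (\f -> s *>> op f)

main :: IO ()
main = print ((2 *>> (+ 1)) (3 :: Int))  -- 8
```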

Jerzy Karczmarczuk
Caen, France



Re: Lists representations (was: What does FP do well? (was ...))

2002-06-01 Thread Jerzy Karczmarczuk

[I removed private addresses from the header, and I invite cordially
all you folks *not* to send your posting simultaneously to the Haskell
list and to the guy who reads the list as well, otherwise the exchange
would never take place...]


Just 3 centimes [the former French currency; the word survives - nobody will say
euro-cents here...]

Claus Reinke:

...
> But the moral for the current discussion: a more intelligent list
> representation could have substantially more benefits for the
> average Haskell program than any compiler optimization twiddling,
> and I'd really like to see someone (PhD student?) investigating that
> topic seriously, as the answers are unlikely to be obvious.
> 
> The representation chosen in the reduction systems could be a first
> hint, but as Jerzy points out, things may be more complicated in the
> context of Haskell.  For comparison, Haskell array performance was
> somewhere between non-existent and terrible in those days (another
> clear win for both the compiled and the interpreted reduction
> systems) and has only recently improved somewhat. That needs to
> continue and, please, someone do the same for lists.
...

Alastair Reid:

...
> Zhong's work [1] was in the context of a strict language (SML) which
> meant that you can know how long a list is as you are building it so
> you can use the Cons4 cells a lot.
> 
> Cordy's work [2] was in the context of a lazy language (Haskell) which
> meant that you usually don't know the length of a list (if it is even
> finite) as you are building it.  This requires a bit of cunningness to
> overcome.
> 
> IIRC, the key part of that cunningness was that Cordy does the most
> interesting stuff near the tail of the list while Zhong does the most
> interesting things near the head of the list.
...


Maestros,
I know you know all that, but some of the new readers might have some
doubts, since for years and years people pose the same question: what is
all this damned lazy business about?! The laziness
seems to be such a nuisance that it seems incredible that it is still
there.

So, please recall the following:

In a lazy program, the fact that "I don't know"
the length of my list is not an issue.
I DON'T WANT TO KNOW!!!
The very notion of length is or may be spurious, it depends on the
list consumer, not on its creator.

Lazy structures may - in my workbenches they always do - represent
iterative, dynamic, sometimes wildly interacting processes.
They may emulate backtracking. 
Lazy continuations may be used to implement coroutines, dataflow
control structures, etc. etc.

===

OK, for me the moral is now - I wouldn't say clean [pun intended], but
quite clear:

Strict data structures in Haskell should belong to different types than 
the co-recursive ones. With different implementation, in particular, 
for arrays.



Jerzy Karczmarczuk



Re: What does FP do well? (was How to get ...)

2002-05-17 Thread Jerzy Karczmarczuk

Bjorn Lisper:


> ...sometimes the length of a list being returned from a
> function can be a simple function of the function arguments (or the sizes of
> the arguments), think of map for instance. In such cases, a static program
analysis can sometimes find the length function. If we know the functions
> for all list-producing functions in a closed program, then the lists could
> be represented by arrays rather than linked structures.
> 
> I know Christoph Herrmann worked on such a program analysis some years
> ago. Also, I think Manuel Hermenegildo has done this for some logic
> language.


Andrew Appel wrote something about "pointer-less" lists as well.

What bothers me quite strongly is the algorithmic side of operations
upon such objects. 

Typical map- (or zip-) style iterations - do something with the head, recurse
on the tail - would demand "intelligent" arrays, with the indexing
header detached from the bulk data itself. The "consumed" part could not be
garbage collected. In a lazy language this might produce a considerable
amount of rubbish which otherwise would be destroyed quite fast. The
concatenation of (parts of) such lists might also behave very badly.

Can you calm my anxiety?

Jerzy Karczmarczuk



Re: ideas for compiler project

2002-01-25 Thread Jerzy Karczmarczuk

Simon Peyton-Jones:
 
> Lots of people have observed that Haskell might be a good "scripting
> language" for numerical computation.  In complicated numerical
> applications, the program may spend most of its time in (say) matrix
> multiply, which constitutes a tiny fraction of the code for the
> application. So write the bulk of the application in Haskell (where the
> logic is complex but the performance doesn't matter) and then link to a
> C or Fortran library to do the small part that is really compute
> intensive.
...
> You'd need to find a "real" application though. The classical "kernels"
> (matrix multiply, inversion etc) are precisely the things you may not
> want to do in Haskell.

That's it. With one "grain of salt". It happened to me that I wanted to
write some structurally trivial routines for matrix inversion, iterators for
ODEs, etc., but I wanted them *polymorphic*. (And, sometimes, lazy:
manipulators of power series, asymptotic expansions, some automatic
differentiation stuff, etc.)

Then Haskell was a decent tool, and - as Björn Lisper remarks, the hindrance
was the lack of integrated tools. Writing a triple loop to invert a matrix
instead of having something like `recip` defined within the field of square
matrices, and implemented with some efficiency considerations in mind, is
a bit clumsy.

Björn quotes and comments:

> >The classical "kernels" [...] you may not want to do in Haskell.
> 
> With the current compiler technology for Haskell, one would add. I don't
> think it would be impossible to compile such Haskell programs into efficient
> code. Functional languages for matrix/array computing was a quite active
> research area 10-15 years ago, with efforts like Sisal and Id. These
> languages were strict and first order, but you can write such programs in
> Haskell. I think it would be possible to have a Haskell compiler that could
> manage a subset of Haskell matrix programs quite well. 

Steven Bevan wrote interesting numeric routines a long time ago.

Thorsten Zoerner wrote a Clean package Class with several Lin. Algebra
and some "slicing" utilities, which emulated the "vectorized" approach
of Matlab. This can be in its greater part translated into Haskell.

--- But, please, some criticisms of Matlab are weakly justified. BL says:

> Also, MATLAB is very ill-suited to
> expressing block-recursive matrix algorithms, which are becoming
> increasingly important in numerical computing. And, of course, there is no
> decent type system, no higher order functions, etc...

Block-recursive schemes in Matlab are easier than in C++. Implementing
pyramid algorithms is not difficult. Slicing, reshaping, cloning, etc.
of matrices are very powerful tools, but they are so imperative, that
it is not easy to see how to replace them with something "functionally
purified".

The Matlab type system is dynamic and "indecent", but you have objects
and inheritance, and you *HAVE* higher-order functions as well. All the
Matlab GUI tools, very powerful and reconfigurable are based on objects,
which accept callbacks as parameters. This is not so clean as we would
like to have, but pragmatically OK.

What bothers me a bit in our Haskell world is the fact that the efforts
are atomized. People work on GUIs, and don't care about drawing/painting
routines. Those who care are often far away from the numerical world.
The numerically-oriented folk often quite casually disregard
the attempts to put the Haskell numerical classes in an abstract algebraic
framework (really useful from the point of view of code reuse), etc.
I have the impression that this is changing, but slowly, while the scientific
computation/visualization world is marching very fast, not only in the
direction of very-fast-even-more-dirty routines, but also in the direction
of new conceptualization/representation models (like, e.g., "actors" in VTK).


Jerzy Karczmarczuk
Caen, France




Re: differentiation. Reply

2002-01-14 Thread Jerzy Karczmarczuk

Ketil Z Malde adds a comment concerning the query of Zhe Fu:
>

> >>> Is there any built-in functions in Haskell to implement
> >>> differential operation and partial differential operation?
> >>> Or can anyone give me some advices about how to implement them
> >>> with Haskell? Thanks.
 
> Jerzy Karczmarczuk has some interesting papers on his web site using a
> different approach to differentiation, maintaining, IIRC, a lazy list
> of all derivatives of functions.  I don't have the URL ready, but a
> your favorite search engine should be able to help you out.


===
1. First, I suggest to move this to Haskell-café. I leave the original
   address, though.

2. My work is about the "Automatic Differentiation" in Haskell. The paper
   http://users.info.unicaen.fr/~karczma/arpap/diffalg.pdf
   has been published in HOSC last year.
   But somebody might get interested in some geometric extensions thereof
   (differential forms)
   http://users.info.unicaen.fr/~karczma/arpap/ltforms.pdf
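For a taste of the approach - a lazy tower of all derivatives - here is a sketch in the spirit of the cited paper (a reconstruction under my own naming, not the paper's actual code):

```haskell
-- A Dual carries a value together with the lazy stream of all its
-- derivatives at the same point.
data Dual = Dual Double Dual

val :: Dual -> Double
val (Dual a _) = a

deriv :: Dual -> Dual
deriv (Dual _ d) = d

con :: Double -> Dual          -- a constant: all derivatives are 0
con c = Dual c (con 0)

var :: Double -> Dual          -- the variable x: derivative 1, then 0s
var x = Dual x (con 1)

instance Num Dual where
  Dual a a' + Dual b b'         = Dual (a + b) (a' + b')
  p@(Dual a a') * q@(Dual b b') = Dual (a * b) (a' * q + p * b')  -- Leibniz
  negate (Dual a a')            = Dual (negate a) (negate a')
  fromInteger n                 = con (fromInteger n)
  abs    = undefined             -- not meaningful for this sketch
  signum = undefined

f :: Dual -> Dual
f u = u * u + 3 * u            -- f x = x^2 + 3x, so f' x = 2x + 3

main :: IO ()
main = print (val (deriv (f (var 2.0))))  -- 7.0
```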


Jerzy Karczmarczuk




Re: Haskell in the teaching of Maths

2001-11-22 Thread Jerzy Karczmarczuk

John Hughes:
> 
> Look at Rex Page's Beseme project
> 
> http://www.cs.ou.edu/research/beseme.shtml
> 
> (which uses the Hall and O'Donnell book to do some interesting educational
> research).

Just two useless words, on that project and *many* others.

Rex Page focuses on *discrete math*. I believe that most of other
people interested in dancing on a bridge between math and functional
programming are interested in discrete math. structures.

Not too much about analysis, topological problems... About differential
equations. About the manipulation of "continuous objects": functions ::
Real -> Real and generalizations thereof.

Convergence of numerical algorithms. Algorithms for the asymptotic behaviour
of some functions (and concrete asymptotic expansions: see the book by Knuth/
Graham/Patashnik...) etc. etc.

Well, this is my personal field of interest, so I cannot be objective, but 
I assure you that all enormous niche of mathematical programming is still
open. Maths, especially applied maths are often taught with the aid of
computer algebra programs. Teachers, students, implementors, use Maple or
Mathematica, or you name it, in order to transform *formulae*, to crunch, munch,
and digest the external representation of mathematical entities not because
this is something very profound, but because "standard" programming languages 
do not offer any reasonable facilities to manipulate objects with some
mathematical contents. Named symbols, the "indeterminates"  replace those
entities: algebra generators, differential forms, fields, operators, etc.

Good, polymorphic functional languages offer those missing tools. On this very
mailing list we had at least 100 postings about math structures and their
implementation in Haskell. Everybody agrees that the situation is far from
ideal. Classes are not categories. Types are not domains. We don't know how
to specify operationally  such properties as commutativity for arbitrary 
binary functions, etc. We need an "object-oriented" type system adapted to
math hierarchies, and this is far from trivial, people from Axiom, Magma and
MuPAD zones worked for years on that.


Jerzy Karczmarczuk
Caen, France




GHC installation

2001-11-16 Thread Jerzy Karczmarczuk

I might be dead wrong, in that case I apologize...

I just took the Windows installer and tried to put GHC etc. on my laptop.
I suspect that the installer absolutely wants to put the stuff in Program Files
and doesn't give the user the opportunity to install it on another disk.
Anyway, I have plenty of space on another partition, but the installer complains
that it lacks space.

Any suggestions, please?


Jerzy Karczmarczuk
Caen, France




strong typing is not a panaceum, and, anyway...

2001-10-19 Thread Jerzy Karczmarczuk

Brian Boutel to Sergey Mechveliani:

> > There is no scientific reason why  all  computations with types and
> > type resolution should preceed all computations with non-types.

> No scientific reason, but a strong engineering reason.
> 
> The engineering idea is to test a design with all available tools before
> building it. That way there will be no disasters that could have been
> forseen. The computing equivalent of an engineering disaster is for a
> program to get a run-time error or to produce an incorrect result. If
> this outcome is acceptable, then the program probably wasn't important
> enough to be worth writing in the first place.

If an entity is sufficiently complex, there will be always a margin of
error. Good if avoidable, but...

Would you apply the same philosophy of "non-importance" of a possibly bugged
result, to procreating children?...

Jerzy Karczmarczuk
Caen, France




Re: Standard Prelude - Floating Class

2001-10-16 Thread Jerzy Karczmarczuk

George Russell wants to terminate the discussion with Dylan Thurston, who
corrects some inadequacies of his previous posting:

> > ... Surely sinh x is at least 1/2 of exp x, leaving only a
> > very narrow range for this to happen.  Behaviour of sinh x near 0 is
> > more important, unless I'm missing something?

> If we are planning to introduce bugs into the Haskell standard, I am not
> going to argue about which bug is more important than which other bug.
> Personally I think we should avoid all bugs.

There IS a big difference between "bugs in the standard" and numerically
unstable or incomplete algorithms. I go to Canossa now: I agree, of
course, that the default sinh = (exp - recip . exp)/2 is numerically
disgraceful, although as a mathematical definition it is a proper
default. Yes, in this sense - as Lennart pointed out - the complex sinh
which uses the real sinh is numerically better near zero (provided that
the real sinh is properly implemented!! Did Joe Fasel include this
consciously? If yes, my respect - already almost infinite - is even
bigger now).
But the defaults should find a reasonable compromise between accuracy
and ease.
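To make the accuracy point concrete, here is a small illustrative sketch (the names sinhNaive and sinhStable are mine, purely for demonstration; it assumes GHC's expm1 from GHC.Float, available as a Floating method in modern base):

```haskell
import GHC.Float (expm1)  -- expm1 x = exp x - 1, computed accurately near 0

-- The naive default from the discussion: for small x, u and recip u are
-- both close to 1, and their difference cancels most significant digits.
sinhNaive :: Double -> Double
sinhNaive x = (u - recip u) / 2  where u = exp x

-- A standard remedy: with u = exp x - 1,
--   sinh x = (u + u / (u + 1)) / 2
-- which involves no cancellation for small x.
sinhStable :: Double -> Double
sinhStable x = (u + u / (u + 1)) / 2  where u = expm1 x
```

For tiny arguments such as 1e-12, sinhStable agrees with x to full double precision, while the naive form loses a large part of its digits.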

The passage below is methodologically dangerous.

> I'm afraid that I have very little faith in the numerical analysis
> expertise of the typical Haskell implementor, so I think it is dangerous
> to give them an incorrect "default" implementation.  I am reminded of
> the notorious ASCII C (very)-pseudo-random number generator . . .


> > I don't think it's worth worrying about much.


> This is a good argument for leaving things as they are.

Absolutely NOT - unless you don't care at all about the potential
scientific users of the language. Neglecting details which are of utmost
importance for professional applications is killing the language. Most
readers of this forum are very far from numerics, and this is normal. But
languages live through their libraries. At least 4 times a year somebody
on this list complains about the lack of such support, even though the
actual libraries are already quite impressive.

So, I would encourage organizing one day a group - not necessarily a
"task force" like the GUI people - of people who would test all the
numerics, and at least give the freshmen some implementation prototypes,
e.g. Padé approximants for small arguments of sinh, etc.

And what is this "typical Haskell implementor"? Do you know many of
them? Do you really think that some fellow totally ignorant of STANDARD
numerical mathematics, somebody who never heard about IEEE etc., will
NOW engage in implementing Haskell? What is the rationale behind your
little faith, Man of Little Faith?


Jerzy Karczmarczuk
===

PS. One more thing. HARMFUL SPAMMERS ARE AMONG US.
May I humbly suggest that people who send postings to haskell@ avoid  
sending copies to all individuals who ever took part in the discussion?




Re: Haskell 98 - Standard Prelude - Floating Class

2001-10-15 Thread Jerzy Karczmarczuk

Simon Peyton-Jones:
> 
> Russell O'Connor suggests:

> | but sinh and cosh can easily be defined in terms of exp
> |
> | sinh x = (exp(x) - exp(-x))/2
> | cosh x = (exp(x) + exp(-x))/2

> | I suggest removing sinh and cosh from the minimal complete
> | definition, and add the above defaults.
> 
> This looks pretty reasonable to me.  We should have default methods
> for anything we can.
> 
> Comments?

Three.

1. Actually, I wouldn't even call that "default definitions". These ARE
   definitions of sinh and cosh.

2. So, they hold for the Complex numbers as well. The gymnastics with
   complex sinh and cosh seems to be redundant.

3. The above code is less than useful for a person who
   really needs it. I would propose rather the most obvious

   sinh x = (u-recip u)/2 where u=exp x

   etc.
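For concreteness, the way such defaults sit in a class declaration can be sketched on a toy class (MyFloating, exp', sinh', cosh' are illustrative names, not the Report's actual Floating class):

```haskell
-- A toy fragment of a Floating-like class, showing how the proposed
-- defaults would be written: one shared call to exp' per evaluation.
class Fractional a => MyFloating a where
  exp'  :: a -> a
  sinh' :: a -> a
  cosh' :: a -> a
  sinh' x = (u - recip u) / 2  where u = exp' x
  cosh' x = (u + recip u) / 2  where u = exp' x

-- An instance then needs only exp'; sinh' and cosh' come for free.
instance MyFloating Double where
  exp' = exp
```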

Jerzy Karczmarczuk




macros. Was: Arrow notation, etc.

2001-10-12 Thread Jerzy Karczmarczuk

Dylan Thurston:
> 
> On Fri, Oct 12, 2001 at 01:02:07PM +0100, Keith Wansbrough wrote:
> > Sadly, there's not a concrete proposal - it seems that no one sees a
> > need for macros in a lazy language.  Most of what they do can be
> > achieved through laziness - you can write "if" in Haskell already, for
> > example, whereas you need a macro for it in Lisp.  Your arrow notation
> > example may provide some motivation, though.
> 
> I wonder if macros could also be used to implement views.

They are heavily used in Clean, so there *are* people who see a need
for them in a lazy language.


Jerzy Karczmarczuk
Caen, France




Harmful spammers

2001-10-10 Thread Jerzy Karczmarczuk

Sorry for the pollution.

Is there a way to kill the guys from @bid4placement.com?

They have already managed 3 times to block my mailer with their HTML,
via the Haskell list.


Jerzy Karczmarczuk
Caen, France




Unary minus (was: micro-rant)

2001-08-13 Thread Jerzy Karczmarczuk

matt hellige:

> ... i think people would
> be pretty frustrated if "-56.2" worked but not "-x"...

Well, the Cleaners around have to live with it. -5 works. For variables
they write ~x. But "-" in front of a number is not a lexical entity, but
a syntactic one, so f -5 and f (-5) are a bit different.

I didn't follow this discussion, but please avoid this mess in Haskell.
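For Haskell readers who have not tripped over this yet, a minimal illustration (f and ok are names invented for the example):

```haskell
f :: Int -> Int
f x = 10 * x

ok :: Int
ok = f (-5)       -- parentheses: f applied to the literal -5

-- bad = f -5     -- parsed as the subtraction f - 5, and rejected:
--                   there is no Num instance for Int -> Int
```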

Jerzy Karczmarczuk




Homeworks and art of flying.

2001-05-28 Thread Jerzy Karczmarczuk

Ashley Yakeley answers the query of Rab Lee:
> 
> >hi, i'm having a bit more touble, can anyone help me
> >or give me any hints on how to do this :
> >"x 2 3 4" = ("x", [2, 3, 4])
> 
> Generally we don't solve homework for people. Unless they're studying
> under Prof. Karczmarczuk, of course.
> 
> --
> Ashley Yakeley, Seattle WA


Now, now, this is cruel and anti-pedagogical. I changed my mind: we
should all be friends and help each other. Here is the ideal solution,
very fast, and protected against viral contamination:

homework "x 2 3 4" = ("x", [2, 3, 4])
homework _ = error "You can't"

Note the cleverness and universality of the solution. 

Jerzy Karczmarczuk
Caen, France

PS. A deep philosophical quotation seems appropriate here. Here is one from
   Douglas Adams:

   Flying is an art, or rather a knack.
   The knack consists in throwing oneself at the ground and missing.




Re: Templates in FPL?

2001-05-23 Thread Jerzy Karczmarczuk

Fergus Henderson :

> I agree that it would be very nice if Haskell and other FPLs had some
> equivalent feature, with a nicer syntax.  I think you might be able to
> do a lot of it using ordinary Haskell syntax with just some additional
> annotation that directs the compiler to evaluate part of the program at
> compile time.

Actually, my whole posting was driven by that. (And by Clean macros,
which are not quite macros but not yet templates; and by the fact that
you can parameterize templates with constants, but not Haskell classes.)

Your answers, very thorough, sometimes concentrated on the realization
of templates in C++, and I am the last to defend them, or to want to see
them implemented in Haskell. But, as a not-too-run-time-expensive way
to deal with some facets of polymorphism by compiling specialized
functions, they are a possible option. The syntactic issues, and the
relation of these "macros" to the class system, are another story.

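For what it's worth, GHC already offers one modest step in this direction: the SPECIALISE pragma asks the compiler to emit a dedicated monomorphic copy of an overloaded function, somewhat in the spirit of template instantiation. A sketch (power is an illustrative name; the pragma is GHC-specific):

```haskell
-- An ordinary overloaded function...
power :: Num a => a -> Int -> a
power _ 0 = 1
power x n = x * power x (n - 1)

-- ...and a request for a specialized compilation at one type.
-- The pragma affects generated code only, never the meaning:
{-# SPECIALISE power :: Double -> Int -> Double #-}
```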
Thanks.

Jerzy Karczmarczuk
Caen, France




Templates in FPL?

2001-05-18 Thread Jerzy Karczmarczuk

Maestri, Primaballerine,

I have a really provocative question.
One of my students posed it, and I could not respond in a satisfactory
manner; even for myself the answer was really unsatisfactory.


We know that a good part of "top-down" polymorphism (don't ask me what
I mean by that...) in C++ is emulated using templates.

Whenever somebody mentions templates in the presence of a True
Functionalist Sectarian, the reaction is "What!? Abomination!!".

Now the question: WHY?

Why do so many people say "the C++ templates are *wrong*" (while at the
same time so many people use them every day...)?

Is it absolutely senseless to make a functional language with templates?
Or is it just out of fashion, and difficult to implement?

==
This is a sequel to a former discussion about macros, of course...


Jerzy Karczmarczuk
Caen, France




Re: Monads

2001-05-17 Thread Jerzy Karczmarczuk

Rijk-Jan van Haaften >>= Hannah Schroeter:


> > ... However, not using the Monadic do syntax results in
> > hardly-readable code.
> 
> I don't really think so. The operator precedences for >> and >>= are
> quite okay, especially combined to the precedence of lambda binding.

...
> main = do
> putStr "Hello! What's your name?"

...

> Yes, I use do syntax where appropriate (e.g. also for usual parser
> monads), however, the operator syntax can be written quite readably
> too.

I would add that sometimes you may be interested in Monadic SEMANTICS
at a more profound level, trying to hide it completely at the surface.
Then, the do syntax is an abomination.

The examples are already in the Wadler's "Essence". Imagine the 
construction of a small interpreter, a virtual machine which not only
evaluates the expressions (belonging to a trivial Monad), but perform
some side effects, or provides for exceptions propagated through a
chain of Maybes. Then the idea is to

* base the machine on the appropriate Monad
* "lift" all standard operators so that an innocent user can write (f x)
  and never x >>= f (or even worse).

The do construct in such a context resembles programming in
assembler, and calling it more readable is h... not very convincing.
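To illustrate what "lifting all standard operators" can look like, here is a small sketch (Exc, lift2, divE, example, failing are all invented names; the exception monad is a bare Maybe underneath):

```haskell
import Control.Monad (liftM2)

-- A value in a tiny exception monad: Nothing is a propagated error.
newtype Exc = Exc (Maybe Double) deriving Show

-- Lift a binary operator through the monad, once and for all:
lift2 :: (Double -> Double -> Double) -> Exc -> Exc -> Exc
lift2 op (Exc a) (Exc b) = Exc (liftM2 op a b)

-- Lift the standard operators behind the scenes...
instance Num Exc where
  (+) = lift2 (+)
  (-) = lift2 (-)
  (*) = lift2 (*)
  fromInteger    = Exc . Just . fromInteger
  abs    (Exc a) = Exc (fmap abs a)
  signum (Exc a) = Exc (fmap signum a)

-- ...plus one operation that can actually raise the exception:
divE :: Exc -> Exc -> Exc
divE (Exc a) (Exc b) =
  Exc (do x <- a
          y <- b
          if y == 0 then Nothing else Just (x / y))

-- The innocent user now writes plain-looking arithmetic and never
-- sees a single >>=:
example :: Exc
example = (1 + 2) `divE` (4 - 1) + 5   -- succeeds with 6.0

failing :: Exc
failing = 1 `divE` 0 + 7               -- the failure propagates silently
```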

(My favourite example is the "time-machine" monad, a counter-clockwise
State monad proposed once by Wadler, and used by myself to implement the
reverse automatic differentiation algorithm. Understanding what's going
on there is difficult. The do syntax makes it *worse*.)

Jerzy Karczmarczuk
Caen, France




Re: Monads

2001-05-17 Thread Jerzy Karczmarczuk

Ashley Yakeley comments:
> 
> Jerzy Karczmarczuk wrote:
> 
> >Monads are *much* more universal than that. They are "convenient patterns"
> >to code the non-determinism (lazy list monads), to generalize the concept
> >of continuations, to add tracing, exceptions, and all stuff called
> >"computation" by the followers of Moggi. They are natural thus to construct
> >parsers. Imperative programming is just one facet of the true story.
> 
> Perhaps, but mostly monads are used to model imperative actions. And
> their use in imperative programming is the obvious starting point to
> learning about them.

"Mostly" is very relative. The real power of monads is their universality.
This "modelling of imperative actions" is just a way to hide the State,
which in IO is rather unavoidable. But in my opinion it is rather
anti-pedagogical to introduce monads to beginners in such a way.

"Obvious starting point"? My goodness, but this is selling a black, closed  
box, which smells badly (imperatively) to innocent souls. People then see
just
  do
 rubbish <- rubbish
 more_rubbish

and don't know anything about the true sense of return, or the relation
of <- to >>=, and finally they can use ONLY the IO monad, nothing else.

They start posing questions what is the difference between
 a <- b
and
 let a = b ...

and they often get ungodly answers, answers which say that the
main difference is that <- "executes side-effects" while let doesn't. It
choked me a bit. (Was it on comp.lang.functional, or on one of the
Haskell lists?)


My philosophy is completely opposite. Introduce Monads as a natural way
of chaining complex data transfer and hiding useless information, and when
the idea is assimilated, then pass to IO. I usually begin with Maybe,
then with the backtracking monad, and some simple state transformers.
Then the students can grasp Wadler's slogan-style "Monads can Change
the World".
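That order of presentation can be made concrete in a few lines: chaining computations that may fail, first with explicit plumbing and then recognised as the bind of the Maybe monad (safeDiv, calc1, calc2 are names invented for the example):

```haskell
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- First by hand: the plumbing the student writes out in full...
calc1 :: Int -> Int -> Int -> Maybe Int
calc1 a b c = case safeDiv a b of
                Nothing -> Nothing
                Just r  -> safeDiv r c

-- ...then the same pattern, recognised as (>>=) of the Maybe monad:
calc2 :: Int -> Int -> Int -> Maybe Int
calc2 a b c = safeDiv a b >>= \r -> safeDiv r c
```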

Oh, well, all teaching approaches are imperfect.


Jerzy Karczmarczuk
Caen, France




Re: Monads

2001-05-17 Thread Jerzy Karczmarczuk

Ashley Yakeley answer to Mads Skagen:

> >My question is why are monads necessary in the
> >language ?
> >
> >Is it not possible to construct the features provided
> >by Monads using basic functional constructs ?
> 
> Monads themselves are made purely out of basic functional constructs.
> 
> >What do I gain using Monads ?
> 
> They happen to be a very convenient pattern. Mostly they're used to model
> imperative actions: while a purely functional language cannot actually
> execute actions as part of its evaluation, it can compose them, along the
> lines of "AB is the action of doing A, and then doing B with its result".
> Monads happen to be a useful pattern for such things.

PLEASE!!!

I disagree quite strongly with such severely limited answers addressed to 
people who don't know about monads.

Monads are *much* more universal than that. They are "convenient patterns"
to code the non-determinism (lazy list monads), to generalize the concept
of continuations, to add tracing, exceptions, and all stuff called
"computation" by the followers of Moggi. They are natural thus to construct
parsers. Imperative programming is just one facet of the true story.
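A concrete instance of the non-determinism mentioned above is the list monad, where each generator is a choice point and (>>=) explores every alternative; a small sketch (triples is an invented name, nothing beyond the Prelude assumed):

```haskell
-- Each <- below is a choice point, not an action; the list monad's
-- (>>=) tries every alternative, so `triples` enumerates all
-- Pythagorean triples with components up to n.
triples :: Int -> [(Int, Int, Int)]
triples n = do
  a <- [1 .. n]
  b <- [a .. n]
  c <- [b .. n]
  if a*a + b*b == c*c then return (a, b, c) else []
```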

Mads Skagen: please read the paper by Wadler on the essence of functional
programming, and other stuff picked, say, from here:

http://hypatia.dcs.qmw.ac.uk/SEL-HPC/Articles/FuncArchive.html

That's right, you don't really NEED monads (unless you are forced to do
IO...), but when you learn them you will feel better and older.


Jerzy Karczmarczuk
Caen, France




Re: BAL paper available >> graphic libraries

2001-05-16 Thread Jerzy Karczmarczuk

Dylan Thurston cites :

> > ... if we [...] want an
> > adapted functional language, either we will have to wait quite long,
> > or perhaps it is the time to invent another language, with a more
> > dynamic type system, with intrinsic graphic utilities, and other
> > goodies.
> 
> For myself, I don't see a problem here.  For "intrinsic graphic
> utilities", someone has to write the graphic libraries.  (You don't
> mean they should be built into the language, do you?)  The type system
> is about as strong as anyone knows how to make a static type system;
> and it's not too hard to fall back on run-time type checking when
> necessary (as Sergey does in BAL).


Look what happens with functional graphics. Dozens of really good papers;
I could mention many names here. (Some bad papers as well; some people
formalize and formalize without any contact with reality...)

So, one may want to check all this. You take Alastair Reid's Hugs graphics
library, and you discover the following.

The IO monadic framework is so rigid that you program imperatively, and
after a few minutes you ask yourself why, for goodness' sake, do it in
Haskell? The programming style is not functional at all. In C++ or Java
you do it faster and with no more pain.
(From the syntactic point of view this is a step backward wrt. Henderson's
book showing how to compose graphics objects in Scheme...)

You can't draw individual pixels (OK, lines of length 1 work).
So it is impossible to create complicated textures, impossible to
generate complex geometric models rendered pixel by pixel. Try, please,
to fill a rectangle with a texture: the graphic updates by the Draw
monad will explode the memory quite fast.

No relation between bitmaps and arrays. (If the bitmaps work at all.)

Similar problems are visible in the Clean library.

Both, Hugs and Clean libraries have been added ad hoc to the language
environment. Plenty of horrible Windows quirks squeezed into a functional
interface.

What I mean by *intrinsic* graphics: The graphic primitives should be
WELL integrated with the underlying virtual machine.
No silly "external" bitmaps impossible to garbage-collect, and impossible
to process (thresholding, transfer curves, algebra).
No pixels drawn using "line". True, fast access to primitive data.
Efficient 2dim and 3dim vectors, with optimized processing thereof. 
Mapped arrays, easy passage from screen to bitmap DContext.

Possibility to have decent functional binding of OpenGL calls. And 
mind you, OpenGL is not just a bunch of graphic calls, but a "state
machine".

Some graphic data must be processed by strict algorithms; laziness
may deteriorate the efficiency considerably.

No, this is not JUST a problem of external libraries. It would be, if
the language were at the level of, say, C. But if the runtime is complex,
with plenty of dynamic data to protect and to recover, and if the
internal control transfer is non-trivial (continuations, private stacks,
etc.), then adding efficient and powerful graphics is not easy.

===

In my opinion, one of the best decisions taken by the Rice mafia was to
base the DrScheme interface on wxWindows. What a pleasure to produce
graphic exercises for students under Linux, test them under Solaris, and
work with them under W2000 without a single incompatibility.

I am still unhappy, because it is too slow to generate textures at
a respectable rate, but there is no comparison with Hugs, which bombs.
Perhaps the next version (if ...) will optimize a few things.

===

And, if you want *interactive* graphics, then obviously you must
provide some kind of event-processing functionality. Is this just an
external-library question?

**

No place to discuss type systems here, but "falling back" into run-time
checks is not enough in this context; we know that we need genuine
object-oriented genericity for graphical entities, perhaps even with
multiple inheritance or Java-style "interfaces". So, again, a bit more
than just a "graphic library".


Jerzy Karczmarczuk
Caen, France




Re: BAL paper available

2001-05-15 Thread Jerzy Karczmarczuk

Serge Mechveliani :
> 
> Paper announcement
> --
> 
> The file
> http://www.botik.ru/pub/local/Mechveliani/basAlgPropos/haskellInCA1.ps.zip
> 
> contains more expanded explanations on the BAL (Basic Algebra Library)
> project
> (previous variant was  haskellInCA.ps.zip).
> 
> My real intention in whole this line of business was always not just
> to propose a standard but rather to discuss and to find, what may be
> an appropriate way to program mathematics in Haskell.
> The matter was always in parametric domains ...
> Whoever tried to program real CA in Haskell, would agree that such a
> problem exists.


Absolutely.
The point is that this seems - for various, perfectly understandable reasons -
not to be the priority of the implementors. 
Graphics/imagery neither.
Nor hard numeric work (efficient, easy to manipulate arrays).

And now I will say something horrible.

Despite my extremely deep respect for all the people contributing to
Haskell, despite my love for the language, etc., I begin to suspect that
it was standardized too early; and if we (Sergey, other people interested
in math, such as Dylan Thurston, myself, etc., as well as people who want
to do *serious* graphics in a functional way) want an adapted functional
language, either we will have to wait quite long, or perhaps it is time
to invent another language, with a more dynamic type system, with
intrinsic graphic utilities, and other goodies. For the moment - this is
my personal viewpoint - it might be better to write concrete applications,
and papers describing those applications. Then we shall perhaps know
better what kinds of structures, algorithms, representations,
genericities, etc. we REALLY need for practical purposes.

Anyway, Sergey did a formidable job, and this should be acknowledged even
by those on this list who criticized his presentation. Thanks.


Jerzy Karczmarczuk
Caen, France




Re: Class RealFrac: round

2001-05-09 Thread Jerzy Karczmarczuk

Lennart Augustsson comment to :

> Rijk-Jan van Haaften wrote:
 ...
> > The strange case is if signum (abs r - 0.5) is 0:
> > such numbers are round to the nearest EVEN integer. In mathematics,
> > computer science (the studies I'm doing) and physics, as far as I
> > know, it is usual to round such numbers up, rather than to the nearest
> > integer. For example:
> ...
> Rounding to the nearest even number is considered the best practice by
> people doing numerical analysis.  Even when I went to school (25 - 30
> years ago) we were taught to round to the nearest even number, so it's
> not exactly new.
> 
> -- Lennart

I wonder whether it has anything to do with "best practice". I suppose
that this is the natural way to get rid of one more bit of mantissa.

Anyway, there is a standard, IEEE 754 (with its 4 rounding modes...), and
it is good to have a published standard, even if it gives some people a
hangover. The reported behaviour is also visible in Clean, and I spent
2 days debugging because of that "round-ties-to-even" rule... It is not
very "natural" psychologically, and from time to time I make mistakes,
although I know the stuff.
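Haskell's round does implement the ties-to-even rule, which is easy to check (halves is a name invented for the demonstration; all the inputs are exact binary halves, so each one is a genuine tie):

```haskell
-- The ties go to the nearest EVEN integer, as the Report prescribes
-- (and as IEEE 754 rounds by default):
halves :: [Integer]
halves = map round [0.5, 1.5, 2.5, 3.5 :: Double]
-- halves == [0, 2, 2, 4], not [1, 2, 3, 4]
```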

Look here: (among 100 other references I could give you)

http://www.validgh.com/goldberg/addendum.html
http://developer.intel.com/technology/itj/q41999/articles/art_6.htm
http://www.cs.umass.edu/~weems/CmpSci535/535lecture6.html


Jerzy Karczmarczuk
Caen, France




Re: List of words

2001-05-02 Thread Jerzy Karczmarczuk

I am relatively new to Haskell.

Somebody told me that it is a very good language, because all the
people on its mailing list are so nice that they solve all the
homework, even quite silly homework, of all students around, provided
they ask for a solution in Haskell.

Is that true, or a little exaggerated?

Jerzy Karczmarczuk




Re: toRational (0.9). Reply

2001-04-18 Thread Jerzy Karczmarczuk

Lennart Augustsson wrote:

> "S.D.Mechveliani" wrote:
... ...
> > Probably, the source of a `bug' is a language agreement that the
> > input is in decimal representation (`0.9') and its meaning is a
> > floating approximation in _binary_ representation.
> 
> What are you talking about?  Input in decimal representation is
> stored as a Rational number.  There is absolutely no loss of
> precision.

No need for the what-are-you-talking-about preamble.
Input in decimal representation *in general* is stored as the
implementors wish. You can't know it all a priori if you are far
from the implementors and if the relevant documentation is
hard to find... How many people on this mailing list are really
au courant?

I had a very nasty surprise a few weeks ago: the "educational
variant/teaching language" of Rice's DrScheme stores a decimal
constant as an EXACT number, and the "full language" as a floating-point
INEXACT one. For two days I thought that the function 'floor'
was buggy.
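In Haskell the point is directly observable: a literal like 0.9 denotes fromRational of the exact decimal fraction at whatever type the context demands, so it is exact as a Rational but inexact once forced through Double (exact and viaDouble are names invented for the example):

```haskell
import Data.Ratio ((%))

-- The literal means the exact fraction at the demanded type:
exact :: Rational
exact = 0.9                            -- exactly 9 % 10

-- Forcing it through Double first picks the nearest binary float,
-- a ratio of large powers of 2, no longer 9 % 10:
viaDouble :: Rational
viaDouble = toRational (0.9 :: Double)
```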


Jerzy Karczmarczuk
Caen, France




Re: constants and functions without arguments

2001-03-30 Thread Jerzy Karczmarczuk

Andreas Leitner wrote at the end of his discussion about
constants/functions sans arguments:


> I mean couldn't one say that there are no constants, just functions
> with no arguments or the Void/Unit argument that return an expression.
> Since we have lazy evaluation, there won't be a problem at runtime,
> but would the type system allow such a thing?



Lennart Augustsson:

> From a pedantic point of view your question makes no sense.  
> The definition of a function is something that takes an argument 
> and transforms it to a result.  So a function always has exactly 
> one argument.  Period.
> 
> But from a practical point of view, yes you can regard constants 
> as functions with no arguments.  And it makes sense from a syntactic 
> point of view:
...

There are different kinds of pedantry.

In Clean there are constants-constants, and constants-functions,
or rather unevaluated graphs, and an assignment

x = expr

may mean something different from

x =: expr

If expr produces a long lazy structure, sometimes treating it
as an unevaluated thunk (or unreduced graph) is better than
having the "final" result, although in a pure functional language
there is no semantic difference.

This is another problem, most probably beyond what interests A. L.,
but as you see, people think about such things.
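Haskell's nearest analogue of this distinction: a top-level constant is a CAF, computed at most once and then shared (and retained), while a function of a dummy argument is recomputed at each call. Semantically the two are the same; only the operational behaviour differs (primes and primesFn are invented names; the retention behaviour described is GHC's usual one):

```haskell
-- A CAF: the sieve runs at most once; the prefix already forced is
-- shared by every later use, and retained for the whole run --
-- handy for an expensive value, a space leak for a huge one.
primes :: [Int]
primes = sieve [2 ..]
  where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

-- A function of a dummy argument: each call rebuilds the list,
-- so nothing is retained between calls.
primesFn :: () -> [Int]
primesFn () = sieve [2 ..]
  where sieve (p : xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
```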



Jerzy Karczmarczuk
Caen, France




Re: A GUI toolkit looking for a friend

2001-02-22 Thread Jerzy Karczmarczuk

Simon Peyton-Jones wrote:


> Nothing's wrong with ambitious systems!  But there's an 
> ambition/pragmatism tradeoff.  If there was a consensus about the 
> Right Way to build an ambitious (more declarative) system then we 
> could all go for it.  But there isn't.  So let the experiments 
> flourish, but meanwhile it would be of practical use to many
> people to have a stable, portable (if less sexy) platform on which 
> to build applications.
> 
> Simon


As a (sometimes quite nervous...) user of the Clean GUI, I assure
all the uninitiated who never touched it that

* It is quite ambitious. Oh yes, ambitious it is.
* It is quite sexy, especially for those for whom everything is
  in a sense sexy. But it is inspiring, and has some metallic
  elegance. (For some people Robocop *IS* sexy).
* It is powerful!

===

I began to scratch my head.
Presumably all this business of local and global states is translatable
to Haskell without much disturbance.
But I am not sure whether I could monadise all unique-access objects
(including Picture etc.) without serious trouble. Even in my short
private essays, where I tried to "Haskellize" the Clean I/O, refraining
from using this pseudo-imperative style:
 # object = doSomethingWith object
 # object = andMoreProcessingOf object
...

and constructing monadic chains instead, I finally gave up, because my
fingers generated too many bugs. But this is a personal observation;
I am sure that more disciplined people can do it better.



Jerzy Karczmarczuk
Caen, France




Re: 'any' and 'all' compared with the rest of the Report

2001-01-24 Thread Jerzy Karczmarczuk

Eric Shade:

> It would be one thing if the
> Report were littered with functions whose specifications were
> obviously not intended as implementations.  But 'any', 'all', and
> 'findIndices' were the only inefficient ones I noticed out of the
> entire Report.
> 
> And it's obvious that clarity is not the only goal in the
> specifications.  For example, why bother to write a messy O(log n)
> version of x^n when the following is more clear *and* more concise?
> 
> x ^ n | n >= 0 = product (replicate n x)
> _ ^ _  = error "Prelude.^: negative exponent"

===
1. elem, notElem, etc. follow the same pattern.
2. The rational number package is not too optimal, as far as I can
   judge it.

3. Now, now, you fight against the inefficiency of linear algorithms
   which use map, and here you propose EXACTLY the same for the product?
   (Unless, as Bjorn hopes, one day the compilers will do all the
   parallelisation/logarithm-optimisation for us.)

   BY THE WAY.

   The power algorithm which uses the binary splitting of the exponent
   is very popular in the pedagogical context, and sometimes abused, for
   example to compute huge powers of "infinite precision" integers.
   If we assume that the multiplication algorithm for two long numbers
   of lengths M and N is proportional to M*N, see for yourself what is
   the asymptotic complexity of the power which uses the logarithmic
   method vs. the linear one. You might be surprised. //Sorry for
   deviating from Haskell...//

   On the other hand, having an even more generic logarithmic iterator
   for associative operations seems to me a decent idea. You might even
   need it one day (I had this pleasure) for the multiplication of an
   object by an integer, where the object was so non-standard that the
   only way of implementing N*X was: X+X+...+X.
   So, Eric, don't call this algorithm "messy". (I suspect that you
   are joking, but ALL comp. sci. students should know it, and perhaps
   some of them who read this list may believe you...)
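The "generic logarithmic iterator" mentioned above is only a few lines; a sketch (times is an invented name; op must be associative, and n >= 1):

```haskell
-- times op n x computes x `op` x `op` ... `op` x (n copies) in
-- O(log n) applications of op, by binary splitting of n.
times :: (a -> a -> a) -> Integer -> a -> a
times op = go
  where
    go 1 x = x
    go n x
      | even n    = go (n `div` 2) (x `op` x)
      | otherwise = x `op` go (n - 1) x
```

With op = (*) this is the O(log n) power; with op = (+) it is exactly the N*X built only from additions.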

Jerzy Karczmarczuk
Caen, France




Re: Specifications of 'any', 'all', 'findIndices'

2001-01-23 Thread Jerzy Karczmarczuk

Koen Claessen wrote:

(about the definitions of any, all, etc.)

> The definitions in the Haskell report are a *specification*,
> not an implementation. It probably depends on the compiler
> which particular implementation runs faster.
> 
> Therefore, the Haskell report provides a clear (yes, this is
> debatable) *possible* implementation, and the compiler
> writer is free to implement this in whatever way (s)he
> likes. As long as the implementation has the same functional
> behavior as the specification in the report.


I am sorry, but

any p = or . map p

is not an implementation-neutral, *functional* specification. It is
a very concrete way of doing things. As everybody knows, this is
a folding process. Of course, 'or' uses 'foldr' (normally, again,
according to the Report, if I am not mistaken), and it is essentially
trivial to get rid of 'map' by putting (||) and 'p' together in the
fold function, but:

1. Perhaps it is too optimistic to think that the compilers will do
   that optimisation by themselves. Hugs uses literally this
   "specification".

2. I maintain my opinion that from the pedagogical point of view
   this definition is imperfect. I think that the specification
   should say no more and no less than what 'any', 'notElem', etc.
   provide, and possibly note in the Report that possible
   implementations are (...).
   But the generation of this "garbage", the intermediate list of
   booleans, whether real or only virtual, goes beyond the semantics
   of 'all', etc.
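For the record, the fold formulation in question is a one-liner (primed names to avoid clashing with the Prelude): same observable behaviour, no intermediate list of booleans in sight.

```haskell
any' :: (a -> Bool) -> [a] -> Bool
any' p = foldr (\x r -> p x || r) False

all' :: (a -> Bool) -> [a] -> Bool
all' p = foldr (\x r -> p x && r) True
```

Both still stop at the first decisive element, even on infinite lists: any' even [1 ..] terminates with True.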

As Koen said, several people already commented on that.
And, I am afraid that this will continue. I can promise
you that...

Bjorn Lisper adds:

> ...  What I in turn would like to add is that specifications like
> 
> any p = or . map p
> 
> are on a higher level of abstraction than definitions like
> 
> any p [] = False
> any p (x:xs) = p x || any p xs
> 
> This makes it easier to find different implementations, which makes it
> easier to adapt the implementation to fit different architectures. 
> The first specification is, for instance, directly data parallel 
> which facilitates an implementation on a parallel machine or in hardware.
> 
> Björn Lisper

Pardon?
map is data parallel; foldr, not so obviously...
I am not sure about this higher level of abstraction. Unless, of course,
we want to use generalized, monadic maps; but then, also folds. And
we will produce, say, trees of boolean garbage instead of lists.



Jerzy Karczmarczuk




Re: Specifications of 'any', 'all', 'findIndices'

2001-01-23 Thread Jerzy Karczmarczuk

Hannah Schroeter wrote:

> Eric Shade wrote:
> > I have some questions about the specifications of 'any', 'all', and
> > 'findIndices' 
...
> > any p = or . map p
> > all p = and . map p

> > ...It seems clearer and more efficient to me to use the
> > following definitions:
> 
> > any p [] = False
> > any p (x:xs) = p x || any p xs
> 
> > all p [] = True
> > all p (x:xs) = p x && all p xs


...

> > Even if the apparent inefficiencies melt away, I think that my
> > versions of 'any', 'all', 'and', and 'or' are clearer as
> > specification
> > than the current ones.
> 
> I don't think so. The specifications are quite concise.
> Hannah.

Just a moment, please. Do we speak about "concise" or about "clear"?
Johannes Waldmann makes the same conflation, first saying that it
is concise, and finishing with a statement about clear programming.

Personally, I am a convinced lazy programmer; I adore a concise
and obfuscated style, and I used the original definitions with some
internal pleasure, until I started using Haskell for teaching.
(I do not teach Haskell; we *use* it for compilation stuff, sometimes
for some graphics projects, and the students have to learn it
"off-line".) THREE TIMES I have been asked about that. Somebody
quite clever remarked that any and all are *typical* cases for
fold rather than for map.

There are plenty of historical accidents in the standard prelude.
[I won't complain any more about the Num stuff...]

Johannes Waldmann's last sentence:

> Who said this, "premature optimization is the root of all evil".

Who said that what Eric Shade proposes is an evil optimization,
while the curried "pearl" "any p = or . map p" is a nice shorthand,
full of vitamins, especially for beginners?
BTW., why not promote something like

any =  (or .) . map

to make everybody happy? 
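For what it's worth, the student's remark above, that any and all are
*typical* folds, writes itself; a hedged sketch (anyF and allF are local
names, to avoid clashing with the Prelude):

```haskell
-- Fold-style definitions; thanks to laziness, (||) and (&&) short-circuit,
-- so both stop at the first decisive element.
anyF :: (a -> Bool) -> [a] -> Bool
anyF p = foldr (\x b -> p x || b) False

allF :: (a -> Bool) -> [a] -> Bool
allF p = foldr (\x b -> p x && b) True

main :: IO ()
main = do
  print (anyF even [1, 3, 4, 5 :: Int])              -- True
  print (allF even [2, 4, 5 :: Int])                 -- False
  print (anyF even (1 : 2 : error "never reached"))  -- True: the tail is not forced
```

The last line shows that the fold version keeps the short-circuiting
behaviour of the explicit recursion.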

Jerzy Karczmarczuk
Caen, France

PS. Johannes Waldmann raises some doubts:
> so it's not at all clear that the above implementation
> is indeed more efficient.

Please, don't speculate. If you have something to say in this
context, perform some tests. I did, with Hugs. Eric Shade's
implementation seems to be indeed more efficient, but only very
slightly (on my test; I won't claim anything general).




Re: group theory. Reply

2000-10-25 Thread Jerzy Karczmarczuk

S.D.Mechveliani wrote:
> 
> Hi, all,
> 
> To   Eric Allen Wohlstadter's 
> 
> : Are there any Haskell libraries or programs related to group theory? 

...

> Marc van Dongen <[EMAIL PROTECTED]>  writes
> 
> > I think Sergey Mechveliani's docon (algebraic DOmain CONstructor)
> > has facilities for that. 
...

> Sorry,
> DoCon  (<http://www.botik.ru/pub/local/Mechveliani/docon/2.01/>)
> 
> really supports the Commutative Rings,
> but provides almost nothing for the Group theory.
> 

EAW again:
> : ... I think it might be a fun exercies to write myself but
> : I'd like to see if it's already been done or what you guys 
> : think about it.

SM:
> I never programmed this. It looks like some exercise in algorithms.
> There are also books on the combinatorial group theory, maybe, they
> say something about efficient procedures for this.

==
"Some exercise in algorithms". Hm. There is more to it than that...

This issue has been recently stirred a bit in the comp.lang.functional
newsgroup, in a larger context: general math, not necessarily
group theory. There are at least two people *interested* in it,
although they haven't done much yet (for various reasons...)

Suggestion: Take GAP!
( http://www-history.mcs.st-and.ac.uk/~gap/ )

Plenty of simply coded algorithms, specifically in this domain.
I coded a few simple things in Haskell just for fun some time ago,
and it was a real pleasure. The code is cleaner and simpler. Its
presentation is also much cleaner than the original algorithms
written in the GAP language. But I discarded all this stuff, thinking
that I would never have time enough to get back to it...

This is a nice project, and I would participate with pleasure in it,
although the time factor is still there...
Dima Pasechnik (<[EMAIL PROTECTED]>; does he read it?) 
- apparently - as well.  


Jerzy Karczmarczuk
Caen, France




Re: numericEnumFromThenTo strangeness

2000-07-10 Thread Jerzy Karczmarczuk

George Russell wrote:
> 
> Lennart Augustsson wrote:
> > By definition, if you follow the standard you can't be wrong. :)
> > But the standard can be wrong.  Perhaps this is a typo in the report?
> I think I looked at this a while back.  The standard is kaput.  It gets even
> worse if you try to make sense of the definitions of succ and pred as applied
> to floating-point number.  My suggestion: get rid of Enum on floating-point
> numbers.  Maybe it'll make floating point loops a little lengthier to code,
> but at least it will be clear what exactly is being coded.

Clear?

I remind you that there is still an uncorrected bug in the domain of
rationals (at least in Hugs, but I believe that also elsewhere, since
this is a plain Haskell bug in the Prelude).

succ (3%2)

gives 2%1.

[3%2 .. something]

gives [1%1, 2%1, ... etc.]

Well, if you see this definition: fromEnum = truncate
for Rationals, then this is hardly a surprise.
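Indeed, the reported value can be reconstructed by hand. A hedged sketch:
badSucc is a hypothetical name composing the Report-default pieces,
succ = toEnum . (+1) . fromEnum, with fromEnum = truncate:

```haskell
import Data.Ratio

-- Composing the default-method pieces by hand: truncate to an Integer,
-- add one, and re-embed into the Rationals. The fractional part is lost.
badSucc :: Rational -> Rational
badSucc = toRational . (+ 1) . (truncate :: Rational -> Integer)

main :: IO ()
main = print (badSucc (3 % 2))  -- 2 % 1, exactly as reported: the 1%2 vanishes
```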

[Unless I have an obsolete version of everything, which is possible.
I apologize then, but the following paragraph remains.]

My permanent, constant suggestion: revise all the numeric classes
very thoroughly. Beginning at the beginning.

Jerzy Karczmarczuk
Caen, France




Re: More on Quantum vectors...

2000-06-05 Thread Jerzy Karczmarczuk

Frank Atanassow wrote:

> ... maybe Jerzy could write up something which elaborates this remark:
> 
>  > I confess that I became interested in Haskell *because* of its possible
>  > applications to scientific computing, and *in particular* to quantum
>  > physics. (And some statistical physics; the underlying math is very
>  > similar, and this is not accidental).
>  >
>  >
>  > Mind you, this is a domain where you see immediately the necessity of
>  > computing using higher-order functions!
>  >
>  > Your states are functions. Your mathematical objects are functions. Your
>  > physical quantities (observables) are functions acting on states.

etc. 

Well, try to have a look here:

http://www.info.unicaen.fr/~karczma/arpap/quantfun.pdf


Concretely: section 3, page 6. This is an introduction to *such*
applications of FP. The beginning of the paper is an elementary
introduction to FP which you probably won't need...

Anyway, thank you *very* much for your interest. 

==


Jerzy Karczmarczuk
Caen, France




More on Quantum vectors...

2000-06-05 Thread Jerzy Karczmarczuk

...although apparently there are exactly two readers/writers
of this thread on this list. Oh, well, it is as boring as any 
other subject.

Jan Skibinski comments on my observation:

> > ... I assure you that some non-orthogonal
> > bases are of extreme importance in physics, a canonical example
> > being the coherent states in optics.


> I think Jerzy is talking about cases conceptually sketched below.
> 
>   / e2|e2'
>  /|
> / |
>/___   | contravariant basis
>e1  \
>covariant basis  \
>  \ e1'
> 
> e1 * e2' = 0   e1*e1' = 1
> e2 * e1' = 0   e2*e2' = 1
==

Not really. This is not a contra/co geometric problem. In fact, this
begins to be interesting in infinite-dimensional spaces. The coherent
states (eigenstates of the annihilation operator) describe laser beams,
superfluidity, currents in the Josephson junction, and (quasi)
"classical" distributions within the quantum formalism. They are truly
non-orthogonal and redundant (it is an over-complete basis; there are
*analytic* relations between various basis vectors, which are labelled
by complex numbers. That's why they are called "coherent").


Now, what has all that to do with Haskell?
For most of you probably nothing.

I confess that I became interested in Haskell *because* of its possible
applications to scientific computing, and *in particular* to quantum
physics. (And some statistical physics; the underlying math is very
similar, and this is not accidental).


Mind you, this is a domain where you see immediately the necessity of
computing using higher-order functions!

Your states are functions. Your mathematical objects are functions. Your
physical quantities (observables) are functions acting on states.

Most problems in QM cannot be solved without using perturbation methods.
The perturbation formulae are usually very tedious to implement, unless 
one dares to use some lazy coding.
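The flavour of such lazy coding can be hinted at with power series as
infinite coefficient lists - a generic sketch of mine here, not taken from
any particular perturbation formula:

```haskell
-- A series a0 + a1*x + a2*x^2 + ... as the infinite list [a0, a1, a2, ...].
type Series = [Integer]

sAdd :: Series -> Series -> Series
sAdd = zipWith (+)

-- (a0 + x*as) * b  =  a0*b0 : (a0 * tail b + as * b)
sMul :: Series -> Series -> Series
sMul (a:as) bbs@(b:bs) = a*b : sAdd (map (a *) bs) (sMul as bbs)
sMul _ _ = []

-- 1/(1-x): all coefficients are 1; the series is defined as its own tail.
geo :: Series
geo = 1 : geo

main :: IO ()
main = print (take 5 (sMul geo geo))  -- [1,2,3,4,5]: coefficients of 1/(1-x)^2
```

No truncation order is fixed in advance; one simply takes as many
coefficients as needed.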

Then you can save a few days of pencil work, and spend that
time rolling on the ground, laughing at the people who claim that
Haskell is useless for practical computations because they don't know
how to implement some middle-Chinese chess in it.


Jerzy Karczmarczuk
Caen, France




Re: Module QuantumVector

2000-06-02 Thread Jerzy Karczmarczuk

Just a remark on:

Jan Skibinski finally begins to put this down:

> Here is our first attempt to model the abstract Dirac's
> formalism of Quantum Mechanics in Haskell.
> www.numeric-quest.com/haskell/QuantumVector.html

.

> The base vectors are abstract: on one hand they are just
> used for identification purposes, on another -- they obey
> all the rules of a vector space. Any vector | x > can
> be represented as a linear combination of the base vectors
> and complex scalars. [..]
> 
> We only require and impose the condition, that any two
> base vectors from the same basis are orthonormal, as in:
> 
> < (i, j) | (p, q) > = d (i, j) (p, q)
> 
> where the left hand side is a scalar product and on the
> right is a generalized definition of the classical Kronecker's
> delta.
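The quoted orthonormality condition can be sketched in a few lines; this is
a hypothetical minimal model of mine (kets as formal sums of labelled base
vectors), not the cited QuantumVector module:

```haskell
import Data.Complex

-- A ket as a formal linear combination: pairs of a base-vector label
-- and a complex amplitude.
type Ket b = [(b, Complex Double)]

-- Base vectors of one basis are orthonormal: a generalized Kronecker delta.
delta :: Eq b => b -> b -> Complex Double
delta i j = if i == j then 1 else 0

-- < x | y > under the orthonormality assumption.
bracket :: Eq b => Ket b -> Ket b -> Complex Double
bracket xs ys = sum [ conjugate a * delta i j * b | (i, a) <- xs, (j, b) <- ys ]

main :: IO ()
main = print (bracket [((1, 2) :: (Int, Int), 1)] [((1, 2), 0 :+ 1)])
```

The pair-shaped labels echo the quoted < (i, j) | (p, q) > example.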

=

There are a lot of interesting things one can do in the domain of
Quantum Mechanics using the FP paradigms. The first thing to notice
is that the states in QM are elements of a Hilbert space, and in
order to do something with them at some abstract level (not just
symbolic manipulations like in M. Horbasch's book) we *need* a powerful,
higher-order functional system integrated with a decent mathematical
layer. That's why I insist that the Haskell Num hierarchy should be
replaced or augmented by something more scholarly.

Having a possibility to work on quantum systems using functional
languages is not just a pastime for failed physicists. This is
a practical issue, a wonderful application domain. Moreover, FP
is - for the moment - a good starting point to construct a simulator
of a "quantum computer", which would be horribly slow, of course,
but at least it would permit many people to try to understand the
basic ideas. I know several computer scientists who would like to
know something more on that, but they have been repelled by the
fact that the examples are elaborated purely theoretically...



I hope that this work will progress. For the moment I would only
say that the orthogonality requirement of Jan is a bit constraining,
limiting the applicability of the theory to discrete (even finite?)
spaces, while it would be interesting to work with, say, |x>, with
x belonging to R^3... Moreover, I assure you that some non-orthogonal
bases are of extreme importance in physics, a canonical example
being the coherent states in optics.



Jerzy Karczmarczuk




Re: Haskell -> Java bytecode?

2000-05-24 Thread Jerzy Karczmarczuk

Johannes Waldmann:

> Wouldn't it be nice if there were a Haskell compiler backend
> that produced Java bytecode? Then I could write applets
> in my favourite language, compile them, put them on my web page,
> and everyone could execute them in their browser...
> 
> Seriously, is there any work in that direction?
> Surely someone must have investigated this before.
> Perhaps there are convincing arguments why it can't/shouldn't be done?

I would pose a different question: could you tell us *what kind
of applets* you would like to write in a pure lazy language, and why
it would be more pleasant than in Java (or Tcl/Tk, or ...)
*IN THIS CONCRETE CONTEXT*?

(There are two problems here: the coding of some nice algorithm
which does something very inspiring, and ...
... the interfacing. And here Haskell and Java are worlds apart.)


Jerzy Karczmarczuk
Caen, France




Re: import List(..) // fromInteger etc.

2000-05-22 Thread Jerzy Karczmarczuk

Fergus Henderson quoting Simon P J:

> > ... Sergey essentially wants to
> > replace the entire prelude, special syntax and all.  There are lots
> > of small but important things under the heading of special syntax:
> >
> >   Explicit lists [a,b,c]
> >   List comprehensions
> >   Numeric constants (1 means 'fromInteger 1')
> >   do notation
> >
> > Here is an idea ...

> >   import {-# SYNTAX #-} MyPrelude
> >
> > Here, I've expressed it as a pragma.  ... 
> > I wonder what other people think?
> 
> I like this proposal.  I'm not rapt about the particular syntax you've
> chosen for it, though.  I think I'd prefer something that was part
> of the language syntax proper, rather than a pragma. 

===

My opinion: Anything, any syntax, any protocol, but *DO IT PLEASE*!
And convince the Hugs people to include that as well.

I don't want to replace the entire Prelude. But I work with 
non-standard (*) mathematical objects, and this will continue for 
some time. 

In contrast with Sergey, I don't want to reconstruct the full-fledged
formal algebra, but I could mention here several dozens specific
datatypes which have some mathematical/structural "personalities"
needing some fundamental support from the compiler. (From audio-streams
and procedural graphical objects: 3d models and textures; differential
forms, lazy state transformers pretending to be normal arithmetical
functions, ... up to Feynman diagrams...)

The suggestion of M. Kowalczyk that Num should be the superclass of
AdditiveBlahBlah is - as I see it - erroneous. It won't help.


In Paris I had the honour of exchanging a few words with Simon Peyton
Jones, and his comment on this part of my checklist was (more or
less): "So, you want that the compiler recognize that explicit numeric
constants should be *automatically* converted using *your*, user-defined
fromInt, fromDouble, etc."

EXACTLY. This is also what Sergey wants. I don't know whether this
would imply some hidden inefficiency, but I don't care. Other things
are more important. You might know that the exponentiation, or taking
a logarithm has some structural meaning, for example in the theory of
graphs. (The differentiation as well; it is related to labelling).
Yet, the current Haskell hierarchy forces you to declare the concerned
data structures as belonging to "Floating", which at the beginning I
found annoying, and now I consider very harmful.

Down with the slavery imposed by the Standard Prelude!
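To fix notation: the class-based literal mechanism already routes an
explicit numeric constant through the fromInteger of whatever Num instance
is requested; what is asked for above is the same freedom outside Num. A
hedged sketch, with a hypothetical tropical-semiring wrapper:

```haskell
-- A hypothetical "tropical" wrapper: a numeric literal in a Tropical
-- context elaborates through *this* fromInteger, not the Prelude's.
newtype Tropical = T Double deriving Show

instance Num Tropical where
  T a + T b     = T (min a b)   -- tropical "sum"
  T a * T b     = T (a + b)     -- tropical "product"
  fromInteger n = T (fromInteger n)
  abs    = error "not needed here"
  signum = error "not needed here"
  negate = error "not needed here"

main :: IO ()
main = print (2 * 3 + 1 :: Tropical)  -- T 1.0: every literal went through our fromInteger
```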


Jerzy Karczmarczuk
Caen, France. 

===


(*) Sorry, this is not true. Standard as potatoes, just a bit non-
trivial...

Another three euro-cents:
Monsieur Kowalczyk rides again:

> Standard Prelude has to be present anyway. You cannot construct
> a reasonable implementation of Char from the thin air, and also
> fromInteger must use a known to the compiler Integer type.

> If they cannot be redefined, I see no reason to be able to export
> and import them. And I think that they should not be redefined;...

Perhaps we should not confuse the syntactic (and attributed)
recognition of constants by the compiler with their "lifting"
to user-specified datatypes. When you write "sialaBabaMak"
assigned within a declaration to a variable of your special class
in C++, the compiler invokes *the constructor YOU wish to be
used*, and I don't see why we cannot have the same facility in
Haskell.

You write that you are happy with the Standard Prelude. May Allah
be with you. But show us please some of your Haskell programs, OK?!




Re: Block simulation / audio processing

2000-05-19 Thread Jerzy Karczmarczuk

Mike Jones wrote:


>.. . My be problem is how to
> dynamically control the step size of algorithms to support algorithms that
> have non-uniform step size. Perhaps some kind of clock divider scheme.

In general this is one of the gray areas between continuous (discretized
anyway) and discrete (clocked) simulations. The synchronous, clocked
approach is less than well adapted to this sort of problem, because the
link between the global clock time and the local discrete time of an
adaptive DE algorithm ceases to exist. Yes, why not division? Or, if you
are interested in the final solutions as trajectories and *not* as
sequences of events, you simply forget about clocks. (This is my very
naive and minimalistic viewpoint...)

The *strictly technical* problem on how to control the step size depends
on the algorithm. I wonder whether there are some generic strategies
here.

Jerzy Karczmarczuk
Caen, France




Re: Block simulation / audio processing

2000-05-19 Thread Jerzy Karczmarczuk

Koen Claessen wrote:

> The reason we removed the monads was that circuits with
> feedback (loops) in them became very tedious to define. One
> had to use monadic fixpoint operators (or "softer" variants
> on them), which were really unnatural to use. Also, the
> monadic style enforces an ordering on the components that
> you are using, while in a circuit, there is no such ordering
> (everything works in parallel).

I always thought that monads (or just more concretely: CPS) *help*
to sequentialize the processing of streams, but that one is
never obliged to put them where unneeded.

Loops ("short" loops which in Matlab are called "algebraic")
either must be sequentialized anyway, or - as in Matlab - they
generate some equations which must be solved globally; one gets
into something like constraint programming.

I wonder what the Lava approach to those loops is, then. OK, I will
read the cited paper. For the moment Koen mentioned that the system
"detects" loops. And the real fun *begins* there...
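I have no idea how Lava does it internally, but the non-monadic treatment
of feedback usually amounts to tying a lazy knot; a generic sketch of mine,
nothing Lava-specific:

```haskell
-- A clocked "register": its output is its initial value followed by its input.
delay :: a -> [a] -> [a]
delay x0 xs = x0 : xs

-- Feedback without any monadic fixpoint: the definition refers to itself.
counter :: [Int]
counter = out
  where out = map (+ 1) (delay 0 out)

main :: IO ()
main = print (take 5 counter)  -- [1,2,3,4,5]
```

No ordering is imposed on the components; the recursion expresses the loop
directly.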

Jerzy Karczmarczuk
Caen, France




Re: Block simulation / audio processing

2000-05-18 Thread Jerzy Karczmarczuk

Johannes Waldmann :

> > > Has anyone built any block simulators (for modeling continuous electronic
> > > systems, like OP Amps, RC networks, etc) in Haskell?
> 
> I'm also interested in this. I am thinking of extending
> Paul Hudak's Haskore system to generate and handle true audio data
> (instead of, or in addition to) MIDI data.


> In fact one student who read the course announcement
> (and the book's web page) already asked me
> about functional audio signal processing.
> 
> Any pointers appreciated,

There are two distinct problems/areas here.

1. Block simulators, dataflow interfacing, etc...
   People mentioned Fran, but somehow I missed (improbable
   that nobody fired the *obvious* keyword here): HAWK!!!
   See the Haskell home page; you will find all about it there.

2. DSP, audio streams, etc.
   This is another story, although DSP in a dataflow style is
   something full of sex-appeal (at least for me, 
   an old physicist...).

   Frankly, there is not much about the functional approach to DSP
   on the Web. I can give you some dozens of pointers to tutorials,
   algorithm descriptions, etc., since I am interested (at least
   conceptually) myself. Lazy algorithms for filter design,
   for mad recursive special effects (flanging, reverb), for
   spectral synthesis, pitch shifting - all this is nice,
   elegant, fascinating, clever...

   ...and horribly inefficient ...

   Do you realize the amount of data processed in order to generate
   10 seconds of audio stream at 96kHz of the sampling frequency?

   First, real-time generation might have severe problems with
   garbage collection. Generating all this off-line is OK.
   (BTW, I remember that Paul Hudak thought about generating CSound
   streams from Haskore, but I lost track of it...)

   Generating true audio data might be quite heavy. Frankly, I think
   that perhaps one should begin with something intermediate between
   MIDI and real audio streams, we could for example make a functional
   tracker which combines (and transforms) pre-formed audio samples.
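As an illustration of the "nice but inefficient" lazy style mentioned
above: a feedback echo over an (infinite) sample stream; a sketch for
off-line use, assuming a mono stream of Doubles:

```haskell
-- Feedback echo: the output feeds back into itself, delayed by d samples
-- and attenuated by g. The self-reference is a lazy knot.
echo :: Int -> Double -> [Double] -> [Double]
echo d g xs = ys
  where ys = zipWith (+) xs (replicate d 0 ++ map (* g) ys)

main :: IO ()
main = print (take 6 (echo 2 0.5 (1 : repeat 0)))  -- [1.0,0.0,0.5,0.0,0.25,0.0]
```

Elegant, three lines - and each output sample drags a chain of thunks
behind it, which is exactly the efficiency worry raised above.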

Thank you for the inspiration. If I had time enough...

Jerzy Karczmarczuk
Caen, France




classes and algebra

2000-05-12 Thread Jerzy Karczmarczuk

Rob MacAulay quoting M. Kowalczyk:

> > Classes are not the appropriate tool for modelling domains of a
> > sufficiently advanced algebra system.

> If I understand correctly, you propose a system where Domains are
> record types, whose fields are functions corresponding to
> operations in the Domain.

> Maple uses this technique. There is a package called "Gauss"
> which sets up domains in this manner. ...

> To be honest, I have always felt that this was a bit clumsy, and I
> was hoping that Haskell would provide a more elegant solution,
> though I am less sure of this now!


I thought that Gauss was more or less dead, because it integrated
very badly with the rest of Maple. The Waterloo team wanted to
introduce the domains/categories in a cleaner way.


Axiom and Magma of course use all that. But if you really want to
see some details, to look at OO code full of dynamic bindings
but trying to resolve things statically, and to see what is
considered really needed by an active CA community, you should
look at MuPAD, which is free (there is also a commercial
bells-and-whistles Windows version) and decently documented. There
are things there I love, and things I personally hate, such as the
lack of lexical closures, which makes it difficult to construct our
favourite HO functional algorithms (they might have changed
something in the last version, but I doubt it; I tried once
to implement some of my lazy numerical stuff in MuPAD, and I had
very severe difficulties).

A "Category" is a *property* of a data structure, not its
classification into a class (domain). I would compare Domains here
to classes in Python. Haskell is worlds apart. It might provide
a more elegant solution, and although I share Rob's doubts
about it, I am sure that some offspring of Haskell might
make the life of mathematically oriented people sweeter.

[Unless - which is 3.1416 times more probable - the designers
of popular CA systems finally recognize the importance of HO
functional techniques, of laziness, etc., and most people trying
to use Haskell for mathematically oriented manipulations
leave this ship and move elsewhere.]


Jerzy Karczmarczuk
Caen, France




Re: Impasse for math ...

2000-05-02 Thread Jerzy Karczmarczuk

Just for the record:

Jan Skibinski writes (comment on my divagations on
mathematical "goodies" in Haskell):

> ... then you worry about "jury"
> and "their benevolent consideration". Forget about
> the later - there is no jury and never be.


BUT I KNOW THAT!! I KNOW, I KNOW...

I am not worrying at all.

It was a somewhat sarcastic echo of Sergey's posting, whose
philosophy is to *change* this layer of the language, so
he wants a kind of "official imprimatur" or "nihil obstat"
or whatever you find in your personal Latin vocabulary.

For me it is obvious that Simon PJ, the Oregon Strong Team,
Lennart, and others who actively work/ed *on the language
itself* have different priorities! Changing a mature
programming language is dangerous, everybody knows that
(in particular if one teaches it...).

==

I believe that the situation will get unblocked with the
birth of the *successor* to Haskell. But the question is
whether squeezing the general math hierarchy into a universal 
language is worth the effort. Shall we reconstruct Axiom or 
Magma in Haskell? Most probably NOT.

Although - on the other hand - I deplore very strongly the
fact that all computer algebra packages on the market use
strict algorithms. Laziness is so nice and crazy that I *had*
to rework some known algorithms in this way: in Haskell
(and Clean).

[A shameless piece of self-advertising follows]

I even found a use for a -seemingly- mad Wadler's monadic
Time-Vehicle (the counterclock-world State Transformer) in
the implementation of the Adjoint Computational Differentiation
technique... (If somebody here read my drunk posting on the
non-existent language "Søren" on the comp.lang.functional newsgroup,
sent just for fun - well, it was not +completely+ meaningless.)
Yes, I need good math in Haskell!!

==


About the Category theory and Sergey's remarks - this merits
a longer discussion.

I will say only one thing about the "uselessness" of it:

You draw a commutative diagram, you prove that it is
commutative, blahBlah, and somebody will say that it doesn't
solve any real problem. But then you start to implement
something, and you realize that you essentially get some
"theorems for free". Something is easy to specify formally
along one path, and the actual code goes along the
other. And you deforest the chain of "maps" without even
moving your finger.
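In its simplest incarnation this "deforestation" is the map-fusion law; a
quick sketch (an empirical check on one sample, not a proof):

```haskell
-- map f . map g = map (f . g): one traversal, no intermediate list.
main :: IO ()
main = do
  let f  = (* 2) :: Int -> Int
      g  = (+ 1)
      xs = [1 .. 5]
  print (map f (map g xs) == map (f . g) xs)  -- True
```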


Jerzy Karczmarczuk
Caen, France




Re: FFT in Haskell, plagiarism?

2000-05-02 Thread Jerzy Karczmarczuk

O Tempora, O Mores !!

You (?) give as an assignment the recursive, list-based
coding of the FFT algorithm, ...

and you worry that the solution might not be original?

Well, it isn't.

I would write more or less the same code (I didn't check
it for bugs...) 
Everybody would.

There is almost no choice, only some details might be different,
the "frequency" decimation instead of "time", different
administration of complex numbers, etc.

My opinion - of an old teacher (seriously).

1. Check the bugs. If the program works, accept it.
2. Ask about the source.

3. If the student claims that he invented it himself,
   suggest that he write an article about it, and send
   it directly to the Nobel prize committee.

Jerzy Karczmarczuk
Caen, France.




Re: Impasse for math in Haskell 2

2000-05-02 Thread Jerzy Karczmarczuk

A sane mathematical structure - or rather: a sane description
of math structures in Haskell - is something which has worried
me for years. On this list, on the functional newsgroup and
elsewhere this is a recurring, cyclic theme.

-- And we still have this horrible Num hierarchy, which does
not correspond to anything serious. So, in my opinion such
initiatives as the DoCon of Sergey Mechveliani merit all our
attention, and objections written in the style

"... did I say that it is too complicated?"

are not very constructive, even if DoCon is really
complicated. Personally I don't use it; I have my own,
private library with AdditiveGroups, Rings, Modules, etc.,
and I have used it to work with differential algebras,
forms, generators of parametric surfaces, lazy sequences,
power series, and other silly stuff I like. But other
people like other things, and I will not submit my stuff
to any "jury" or "committee" for their benevolent
consideration, because it is intrinsically ill and incomplete.

And it will remain so, because Haskell doesn't seem ready
to permit a global approach to math. structure definition.

Jan Skibinski writes:

> It appears to me that we have reached some impasse
> in a design of basic mathematical structure for
> Haskell 2. Sergey's proposal [...] does not seem to
> reverberate on this group.
> 
> Shouldn't we thus start with something more moderate,
> that does not offer a concrete solution as yet,
> but at least presents some framework for a serious
> discussion?

and later:

> If I could suggest [...]
> 
> + Start with a big picture and forget details
>   for a moment.
>   Use standard naming convention from Mathematics
>   Subject Classification, so we all could refer
>   to it, check it, and compare notes. Graphic
>   representation would be nice.
> 
> + Justify the needs for all those elements
>   from the big picture. What am I buying
>   from this as a whole and why I need this
>   particular structure? What can I do with it?
>   [That's why I cited Tegmar's diagram yesterday:
>   he evidently knew where he was heading]

Well, all this is ambiguous. A "big picture" and
"something moderate" contradict each other, IMHO.

Such diagrams as presented in the TOE paper are to be
found elsewhere. See the cover of the AXIOM manual for
example. The "object-like" classification of math.
structures *is not enough*. 

Not only can some properties of operations, such as
commutativity, not be expressed by such diagrams; several
links - subsumptions, implicit inheritance, etc. -
will always be missing. For example:

Any additive group *must* be a Module over the integers.
A Ring inherits a semi-group twice.
A modular Ring with N generators, for N prime, becomes
"miraculously" a Field.
An ordering generates an algebraic structure.

etc. In Axiom, Magma and MuPAD (and also GAP) there is
plenty of dynamics, the "types" (categories, domains,
axioms, hyla, callThemAsYouLike,...) combine the class
approach, only *partially* resolved statically, with
some constraint semantics.

===

I believe that a modest approach is really what we need,
but for me modesty means: try to *apply* to concrete
problems whatever you have, and if you miss something -
CRY LOUD! (Perhaps in such a way I will one day see the
possibility of using my own *IMPLICIT* fromInt or fromDouble
conversion of constants, and not the one inserted by the
compiler, "who" naively thinks that I use the standard
preludes... /Hugs/)

===

"Mad Max" Tegmark's TOE seems to ignore the theory of categories;
his approach to math structures in the Universe is a little
"Bourbakiste"...
I see why Jan Skibinski liked this paper: some physics
background becomes visible. This is also my case. But I
disagree with the statement that Tegmark knows where he
is heading.

Getting back to categories, they began to appear in math.
physics as well, although I still remember when one of
my professors many years ago told us publicly that there
are some branches of mathematics which belong to a purely
speculative layer of science/philosophy, and will *never*
find any applications, for example the theory of categories,
or non-classical logic.

http://www.math.sunysb.edu/~kirillov/tensor/tensor.html
http://math.nwu.edu/~getzler/conf97.html

===

Jerzy Karczmarczuk
Caen, France




Re: doubly linked list

2000-04-28 Thread Jerzy Karczmarczuk

Chris Angus:
> 
> Would it not be better to tag a start point then we can manipulate this
> easier
> and move it back to a singly linked list etc.
> 
> data Db a = Dd (Db a) a (Db a)
>   | DStart (Db a) a (Db a)
> 
> ...

Well, I am sufficiently old to confess that one of my favourite OO
languages, and the one where I found doubly-linked lists for the first
time was ...

Yes, Simula-67.

Actually *they did* that. A "node" had two subclasses, the link and the
head, and the link chain was doubly attached to the head. This structure
was heavily used for the maintenance of the co-routine bedlam
exploited in simulation programs.

The idea of double lists was to permit fast two-directional
navigation, and the ease of insertion/deletion.

But in Haskell, where the beasts are not mutable:

... Actually, has anybody really used them for practical purposes?

Jerzy Karczmarczuk
Caen, France




Re: doubly linked list

2000-04-28 Thread Jerzy Karczmarczuk

> Jan Brosius wrote:

> I wonder if it is possible to simulate a doubly linked list in
> Haskell.

... and the number of answers was impressive...

Want some more?
This is a short program for *making* true double
lists, and as an extra bonus it is circular. Slightly longer than
the solution of Jan Kort; no empty lists.

A data record with three fields: the central one is the value, the
others are pointers.

> data Db a = Dd (Db a) a (Db a) deriving Show
-- (don't try to derive Eq...)


dlink constructs a circular list out of a standard list. It cannot
be empty. The internal function dble is the main iterator, which
constructs a dlist and links it at both ends to prev and foll.

> dlink ll =
>   let (hd, lst) = dble ll lst hd
>       dble [x] prev foll =
>         let h = Dd prev x foll in (h, h)
>       dble (x:xq) prev foll =
>         let h = Dd prev x nxt
>             (nxt, lst) = dble xq h foll
>         in (h, lst)
>   in hd

You might add some navigation utilities, e.g.

> left  (Dd a _ _) = a
> right (Dd _ _ a) = a
> val   (Dd _ x _) = x

etc. At least you don't need Monads or Zippers. Keith Wansbrough
proposes his article; I don't know it - when you find it, please
send me the reference. But there is previous work: see the article
by Lloyd Allison, "Circular programs and self-referential
structures", Software - Practice and Experience 19(2), 1989.
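To check that the knot really closes the circle, here is a self-contained
variant of the code above (same logic, with a "next" selector; the names
are mine):

```haskell
data Db a = Dd (Db a) a (Db a)

val :: Db a -> a
val (Dd _ x _) = x

next :: Db a -> Db a
next (Dd _ _ r) = r

-- Same construction as above, compressed into a where clause.
dlink :: [a] -> Db a
dlink ll = hd
  where
    (hd, lst) = dble ll lst hd
    dble [x] prev foll = let h = Dd prev x foll in (h, h)
    dble (x:xq) prev foll = (h, l)
      where h        = Dd prev x nxt
            (nxt, l) = dble xq h foll
    dble [] _ _ = error "dlink: empty list"

main :: IO ()
main = print (map val (take 4 (iterate next (dlink [1, 2, 3 :: Int]))))
-- [1,2,3,1]: after three steps to the right we are back at the start
```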


Jerzy Karczmarczuk
Caen, France

PS. Oh, I see now that the KW article has been found...
Well, I send you my solution anyway.




Just for fun

2000-03-28 Thread Jerzy Karczmarczuk

If you try to exercise some popular Web search engines on:

"Haskell Library"

you will get this: http://www.vtonly.com/hstydec9.htm

Perhaps we should send them the Report?

Jerzy Karczmarczuk
Caen, France.




Re: Ratio: (-1:%1) < (-1:%1)?

2000-03-24 Thread Jerzy Karczmarczuk

Marc van Dongen wrote:

> I am not quite sure how to express this in Haskell
> terms but here it goes anyway: Why is :% in Ratio
> not hidden?
> 
> By allowing a user program to construct elements of
> the form (a:%b) one can create objects which lead to
> inconsistencies 

...

>  (-1:%1) < (0:%1) < (1:%-1) == (-1:%1),
> The last equality follows from (3) and the fact that:
>  (1:%-1) <= (-1:%1) and (-1:%1) <= (1:%-1).
> Using (2) it now follows that (-1:%1) < (-1:%1) which
> according to (1) should not be true.
> 
> According to the language definition (1:%-1) != (-1:%1).



First of all: at least in Hugs (:%) is *not* exported by
the Prelude.

So, it is hidden, and a sane, well-educated gentleman would
not procreate a fraction with a negative denominator.


The form (1:%-1) is an abomination. Perhaps less so than (1:%0),
but still. Yet how can one have low-level efficiency and
direct access to data structures, together with respect for all
the mathematical constraints? In principle we could have a polar
representation of complex numbers, (r,theta), and somebody really
funny could put a negative r inside.

And then somebody really sad would cry that (r,theta) is equal,
but not really, to (r,theta+2*Pi).

The only safe way of dealing with this is brutal
screening of the constructor and the selectors: make them
private, and use OO-style methods to access everything. The automatic
derivation of Eq is obviously silly here, but I don't think we would
like to abandon its simplicity.
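A sketch of that screening, with a hypothetical smart constructor (%%) that refuses a zero denominator, pushes the sign into the numerator, and reduces to lowest terms - roughly what the library's own (%) does. The names here are mine:

```haskell
-- In a real module, export MyRatio and (%%) but hide (://).
data MyRatio = Int :// Int deriving (Eq, Show)

(%%) :: Int -> Int -> MyRatio
_ %% 0 = error "MyRatio: zero denominator"
n %% d = (s * n `div` g) :// (s * d `div` g)
  where s = signum d          -- force a positive denominator
        g = gcd n d           -- gcd is always non-negative
```

With the constructor hidden, every value is canonical, so the derived Eq (and Ord) become sound again: (1 %% (-1)) and ((-1) %% 1) build the very same representation.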

Jerzy Karczmarczuk
Caen, France




Re: speed of everything

2000-03-21 Thread Jerzy Karczmarczuk

"Andreas C. Doering" wrote:

...
> I would love to get higher performance without much effort.

> For one result I had to wait for over a week, ...


So do I, so do I!

Twice I had to wait 9 months. But the results are nice.

Jerzy Karczmarczuk
Caen, France




Nothing important

2000-03-13 Thread Jerzy Karczmarczuk

...
but could someone fix the word Feburary on the Hugs98 page?
BTW, it is very nice, and sounds Japanese.

Jerzy Karczmarczuk
Caen, Rfance



Re: rounding in haskell

2000-02-08 Thread Jerzy Karczmarczuk

Ian.Stark about:
> 
> > > Prelude> 1.0 - 0.8 - 0.2
> > > -1.49012e-08


> ... I don't quote
> this example because it is a fault with single vs double precision,
> a mistake in Hugs, or indeed a problem at all.  It's just
> interesting to see how our perception of real numbers can clash with
> the (entirely sensible) mechanics of floating-point arithmetic.

Still, I prefer to get -5.551115123125783e-17 rather than -1.49012e-08,
especially if the "standard" specification says that the system uses doubles.

Jerzy Karczmarczuk



Re: rounding in haskell

2000-02-08 Thread Jerzy Karczmarczuk

I remarked:

> > Since the Paleozoic Era Hugs has been distributed with HAS_DOUBLE_PRECISION
> > deactivated (can some guru explain why?...), ...

and Julian Seward (Intl Vendor) answers:

> The newer STG Hugs which we are developing has "real" Doubles as
> standard, so you should get the same(ish) results as with GHC.
> It runs standard Haskell98 fairly stably, if you want to try it.
> (available from the Hugs site, "thrill seekers" section).


I regret, but both the November 1999 and the February 2000 versions
contain, in win32/options.h:

#define USE_DOUBLE_PRECISION 0


Perhaps you would recommend that I abandon using Windows...?

Jerzy Karczmarczuk
Caen, France.



Re: rounding in haskell

2000-02-08 Thread Jerzy Karczmarczuk

Ian Stark:
 
> George rightly points out how tricky trig functions are.  My own
> favourite curious operation is subtraction:
> 
> Prelude> 1.0 - 0.8 - 0.2
> -1.49012e-08

Since the Paleozoic Era, Hugs has been distributed with
HAS_DOUBLE_PRECISION deactivated (can some guru explain why?...), and
the first thing I do with it is recompile. Nasties like the one above
then become less dangerous.

About the trickiness of trig functions: it is my firm conviction
that people who permit in their programs such horrors as sin(10^100)
either never *really* need to compute them seriously, or they are
negligent. Sometimes people really do need long oscillating streams,
e.g. in digital signal processing, and some delicate algorithms such as
phase shifting/adjustment preclude the use of sampling periods
commensurable with 2*Pi, but one should always try to perform all
the needed reductions before the floating-point precision gets out of
control. There are nice recurrences which simplify computations
with trigonometric functions. And once the argument is reduced to
between 0 and Pi/4, the job should be well done.
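A naive form of the reduction described above, as my own illustration (a serious implementation would do the reduction in extended precision, and nothing rescues sin(10^100) in plain doubles):

```haskell
-- Reduce the argument modulo 2*pi before calling sin, so the
-- library function works near zero, where it is well-conditioned.
-- Adequate for moderately large x only: the rounding error of the
-- stored twoPi grows linearly with the multiple k subtracted.
sinReduced :: Double -> Double
sinReduced x = sin (x - k * twoPi)
  where twoPi = 2 * pi
        k     = fromIntegral (round (x / twoPi) :: Integer)
```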

By the way: would it be too much to ask for more flexible and complete
decimal conversion/printing routines for real numbers? (Perhaps this
has been done already and I am behind; in that case I apologize.)

Jerzy Karczmarczuk
Caen, France.



Re: More on randoms

2000-02-04 Thread Jerzy Karczmarczuk

Two things.

We have seen such definitions many times (most recently from Matt Harden):

> > class RandomGen g where
> >next :: g -> (Int, g)
> >split :: g -> (g, g)
> >genRange :: g -> (Int, Int)
> >genRange _ = (minBound, maxBound)

Do you always use integer random numbers?

I don't know about you, but in my milieu 99% of random-number
applications need *real*, floating-point RNs, as fast as possible. If
the Haskell standard libraries offer only a basic integer RNG,
forcing all users to reconstruct the needed reals,
that is not extremely painful, but still.
I would love to have 'next' return reals as well...
And vectors (with decently uncorrelated elements). Etc.

Do you think that all that must be manufactured by the user, or
can one parameterize the RandomGen class a bit differently?
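As an illustration of what the reconstruction looks like, here is a sketch deriving uniform floating-point numbers from an integer generator. The tiny linear congruential generator and all names are mine, standing in for any RandomGen instance:

```haskell
-- A minimal linear congruential generator standing in for any RandomGen.
newtype LCG = LCG Int

nextInt :: LCG -> (Int, LCG)
nextInt (LCG s) = (s', LCG s')
  where s' = (s * 1103515245 + 12345) `mod` 2147483648

-- The explicit range: this generator yields values in 0 .. 2^31 - 1.
lcgRange :: (Int, Int)
lcgRange = (0, 2147483647)

-- A real-valued 'next': uniform Double in [0, 1). Without knowing
-- the range, this scaling could not be written at all.
nextDouble :: LCG -> (Double, LCG)
nextDouble g = (x, g')
  where (i, g')  = nextInt g
        (lo, hi) = lcgRange
        x = fromIntegral (i - lo) / fromIntegral (hi - lo + 1)
```

This is exactly why a generator that "does not know" its own range is useless for deriving reals.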

==

I haven't followed this discussion from the beginning, so I may be
forcing an open door. The question is the following: would it
be a bad idea to provide a 'randomize' primitive, generating an
unpredictable random seed based on the internal clock or other
system properties? I haven't seen that mentioned here. It *is* useful.

Jerzy Karczmarczuk
Caen, France



More on randoms

2000-02-03 Thread Jerzy Karczmarczuk

Simon Peyton-Jones:

> ... --->>> genRange, 
> I think it's worth adding.  The reasons not to are
> 
> a) it's a change to H98, which should be strongly discouraged
> 
> b) I suppose it's possible that some RNG might not know
> its own range, or the range might change --- but
> in that case I'm even less sure whether the cunning
> tricks would give statistically good results.

A generator which "does not know" its own range is IMHO an abomination
which should never be used, with or without tricks. Well, I can see
some exceptions to this strong statement, for example Gaussian
generators over +/- infinity which truncate at +/- 6*sigma (the Dozen
algorithm), or at +/- log(BigNumber)*sigma (if you add some dirty
nonlinear tricks), etc. For a casual user this most probably has no
importance, but *uniform* generators without an explicit range are
deadly.


=

Fergus Henderson:

> Wouldn't it be easiest if the RNG just guaranteed to
> return integers that range over the whole range of `Int'?
> Note that the range of `Int' can be obtained using the `minBound'
> and `maxBound' functions in the standard prelude.

This is a limitation which precludes the use of some clever
generation algorithms needing e.g. a prime modulus, or which introduces
a computational overhead, most probably useless.


>  * The next operation allows one to extract a pseudo-random Int
>from the generator, returning a new generator as well.
>The integer returned may be positive or negative.
>Over a sufficiently large sequence of calls to next,
>the integers returned should be uniformly distributed
>over the whole range of Int (from minBound to maxBound,
>inclusive).
> 
> If that clarification were made, there would be no need to
> introduce the `genRange' method that Simon P-J suggested.

No, please, *no clarification*, but *sound and sane generators*.
Please add the range. I consider myself a real user of those beasts,
and I need it, not just something to speculate about.



Jerzy Karczmarczuk
Caen, France



Unevaluated worlds

2000-01-26 Thread Jerzy Karczmarczuk

Adrian Hey, commenting on Michael Hobbs:

> > ... if you describe 'IO a' values simply
> > as _unevaluated_ imperative actions and throw away notions of
> > referential transparency and World states, then *poof* no more nasty
> > philosophy debates. :-)
> 
> This agree with this, though I would use a word like 'unexecuted' rather
> than 'unevaluated'. A critisism often made of the 'many worlds'
> interpretation of quantum physics is that it's superfluous. 
> You can't disprove it, but there's nothing that can be explained with 
> it that can't be explained just as well without it. I feel this way 
> about 'functions' operating on 'world values'.

In both cases (IO and the Everett model of QM) you may, and in physics
you *should*, distinguish between "explaining", "modelling",
"predicting", etc.

The many-worlds model has the conceptual elegance of getting rid of
acausality; the theory becomes fully deterministic. It is not true
that you cannot explain anything. On the contrary, you "explain why"
(sometimes "unexplain", throwing away such questions as why the
electron turned left),
but you cannot *predict* anything, you cannot falsify the model,
thus violating the basic laws of Saint Popper.

The IO issue is really different, and your analogy is too philosophical
(which is why I enjoy it; who dared to use the word "nasty"?!).
The World must really be implemented, and even if the Functional User
is provided with only the strict minimum of knowledge, there exist
people who know much more than the speculators, people who really put
their fingers into the implementation, such as Mark Jones or Lennart.

BTW, Hobbs' term "_unevaluated_ imperative action" is about as
nice and meaningful to me as, say, "revolutionary justice" or
"popular democracy". [[appropriate smiley, please.]]

Jerzy Karczmarczuk
Caen, France.



"Main" reasoning

2000-01-25 Thread Jerzy Karczmarczuk

I permitted myself to protest (veeery mildly) against some very strict
statements concerning a non-strict language, by S. Marlow and
P.E. Martinez Lopez:

> In fact, there's no way to perform an IO computation in Haskell 
> other than binding it to a value called 'Main.main', compiling 
> and running it.

> ... you have NO means to produce that execution other that binding 
> your IO to Main.main.

==

I showed an utterly silly script in Hugs, which used
readFile f >>= putStr, and I got from Simon Marlow:

> Well, pedantically speaking that's not a program (see Section 5 of 
> the H98 report). 
> Simon


Oh, thank you. (I will reread the remaining sections as well.)
But, if you permit -

- with this kind of pedantic reasoning I might have said (if I were
malicious, which is not the case):

"There is no way to do ANYTHING, even *without* IO, without writing
a program containing Main, because it will not execute...
(And you don't even need Haskell; this is true in "C" as well.)"

(And I remind you that in your incriminated statement you did not say
anything about a "Haskell program", but about "in Haskell".) But
please, let's stop discussing formulations.

Jerzy Karczmarczuk
Caen, France



Re: Haskell & Clean

2000-01-25 Thread Jerzy Karczmarczuk

Simon Marlow:

> Strictly speaking, you can't "evaluate" a value of type (IO a) and 
> have it perform some I/O operations.  In fact, there's no way to 
> perform an IO computation in Haskell other than binding it to a 
> value called 'Main.main',
> compiling and running it. (*)
 
Pablo E. Martinez Lopez:

> Recall: the type (IO a) is
> that of PROGRAMS performing I/O, but the EXECUTION of those programs is
> another matter. And as Simon Marlow pointed out, you have NO means to
> produce that execution other that binding your IO to Main.main.


Well, I see what you mean: no way, NO means, etc. So the program below
would not work in Hugs as it should? Too bad...

-- ===
-- ioio.hs

dump f = readFile f >>= putStr
gimmeThat = dump "ioio.hs"
-- ===

Jerzy Karczmarczuk
Caen, France



On Haskell and Freedom

2000-01-13 Thread Jerzy Karczmarczuk
But stop
these nonsensical offenses addressed to people who think
differently.

The revolution will not come, I sincerely hope. Do you know
what is the most crazy and annoying feature of the communist
system |*in this context*!|??

Well, the answer is: they worked like hell to find the "correct"
way to distribute goods, and they didn't give a damn about
producing them...

Jerzy Karczmarczuk
Caen, France



Re: Clean and Haskell

2000-01-12 Thread Jerzy Karczmarczuk

Ian Jackson defends Haskell, and attacks Clean for "obvious reasons":
Clean is not free, etc.:


> The operating system I run on my computers, Debian (www.debian.org),
> consists only of software and documentation to which I have (or can
> download) the source code, which I can use at work as well at home, to
> which I can make modifications if I need or want to, and which I can
> share (modified or not) with anyone else.  The same applies to the
> implementations I use of the languages I write in.  Millions of people
> like me have made the same choice.

I am not an advocate of Rinus Plasmeijer, but I use, and I WILL USE,
Clean; for me it *is* free. I find it slightly preposterous to insist
on the freedom to modify the source code; almost nobody does that.
Millions of people??? Compare those "millions" to this "small bunch"
of Windows users...

BTW, do you know of a reasonable Computer Algebra package that is free
with sources?

> Why should anyone want to tie themselves to a language with only one
> implementation, where you don't get the source code, where the
> provider insists that you may not share it (or your improvements to
> it) with others, where you are dependent on a corporation for support
> and which isn't available on all the platforms you might work on ?

The fact that there is only one implementation is *NOT THE FAULT
OF HILT*.
You may write your own if you wish, may you not? The Clean language is
not patented, as far as I know.

==

Haskell is wonderful, and its authors as well. But the FSF philosophy
is a bit extreme, and I do not appreciate at all their comparison of
commercial attitudes wrt. software with the Soviet tyranny. On the
contrary, the Soviets managed to corrupt the notion of "property" in
a very harmful way, with well-known consequences. So please, let the
liberal people live their lives. Don't buy their products if you don't
want to (I am with you!), but this never-ending criticism is becoming
annoying. There is no point in throwing offenses, Haskellians at
Clean, and Yahoos at Haskell.

===

Please note that

Fran apparently works under Windows only.

The "Visual Haskell" project is based on a commercial interfacing tool.

Many Haskell gurus love using "Visio".

If Mark Jones, and now the OGI+Yale team who distribute Hugs, had kept
this whiter-than-white philosophy, they could never have made it
multi-platform, because they use compilers whose source code is not
available.



So, I would have nothing against a commercial implementation of
Haskell. This might promote the language, facilitate its teaching, and
contribute to its development.

Personally, I find much more harmful, and even strongly disgusting if
not worse (the appropriate swearwords I know only in Polish...),
those funny fellows who patent *algorithms*. Especially algorithms
developed during their work in an educational institution.

Jerzy Karczmarczuk
Caen, France



Re: Clean and Haskell

2000-01-06 Thread Jerzy Karczmarczuk

Steve Tarsk wrote:

> I just want to say that Haskell is a fat old slow
> dinosaur compared with Clean. Download Clean at
> www.cs.kun.nl/~clean and get rid of your Haskell
> installation.
> 
> __
> Do You Yahoo!?


==

I do not appreciate offensive nonsense, on this list or elsewhere.
Please don't behave like a Yahoo, in the sense of

http://www.jaffebros.com/lee/gulliver/bk4/index.html

(especially Chapter VIII).



Jerzy Karczmarczuk
Caen, France.



Ray tracing again

2000-01-04 Thread Jerzy Karczmarczuk

Brett disagrees with my statement that

> > ... Concretely, you generate pixel by pixel, and you have to 
> > operate upon an *updateable* frame buffer.

> I was planning on just writing the resultant image to a file.  
> I'm not sure what I would gain by accessing an updateable frame 
> buffer, as displaying the resultant image would be accomplished by 
> some other program.

If you wish so...
Of course, you might produce a stream of pixels, and this is as
functional as any stream generation.
   But I cannot stop thinking about a *serious* tracer with all
kinds of standard optimizations, for example being able to
undersample the rendering and fill the holes by interpolation. This
doesn't need a full-fledged framebuffer either, but doing it while
generating a stream seems a bit clumsy.
   And anyway, dumping this pixel stream into an output file will
be *much, much* more expensive than putting everything in an array.

===

I mentioned the shader language of Renderman-compatible packages.
Brett:

> The RT I am familiar with (Pov-Ray) also has its own specific language.
> This was also part of my motivation, as haskell seems like a very good
> replacement for it.

I don't think so. The POV language, like the Renderman RIB (and VRML,
etc.), is a *scene description language*. Of course, Haskell is
sufficiently powerful to represent 3D objects, no problem. But this is
very far from the ray-tracing engine. The external, linguistic
descriptions of objects and scenes are massaged quite a lot before
becoming *fast* representations, adapted to the RT algorithm. So
Haskell "data" will help the human user, but it should not be used as
the object implementation language for the rendering. (IMHO)

On the other hand, the shaders are dynamic procedures, and here we
may play with the power of a higher-order FL as we really like it!
===


> I am unfamiliar with Clean.  Is this a speed issue?

Yes. But Clean is a lazy, pure functional language which is very
similar to Haskell, so it is a matter of a few days to learn it.
It has some nice features, but as it is considered a competitor,
I should not elaborate upon this theme, lest the Haskell
list gurus electrocute me...


Jerzy Karczmarczuk
Caen, France



Re: Inverse function (and ray tracing)

2000-01-03 Thread Jerzy Karczmarczuk
 which should be noted here: a good part of
its visualisation code is quite functional in *STYLE*, but relies
upon very efficient array processing procedures. 

Give me that, and I will make you a reasonable ray tracer in Haskell. 
For the moment I plan (?) to do something of that kind in Clean.

Ray tracers show their power outside the polygonized world: they
are good for rendering blobs and other fuzzy objects, for casting
shadows in an easy way (not necessarily super-efficiently, but easy
to program, in fact there is nothing to do...), and for playing with
textured light sources, etc. The underlying models might be very
nicely represented functionally, but the RT engine seems unfortunately
out of this game.


Jerzy Karczmarczuk
Caen, France



Re: Dynamic scopes in Haskell

1999-12-02 Thread Jerzy Karczmarczuk

José Romildo Malaquias:

> M.E.F.N ==>
> > In fact you would probably be better of by hiding the prelude
> > and overloading + and friends on your own.


> Is there any directions on how to hide the prelude and still use the
> definitions it exports?
> 
> Romildo.

Selective import or selective hiding from the Prelude is easy;
you mention explicitly what you want to hide: classes, methods,
etc.

Cuidado, though!

If you tinker with the class Num, you might get some nasty surprises;
for example, the automatic overloading of numeric constants through
fromInt etc. will not work. I did it: I introduced such classes as
AdditiveGroup, Monoid, Ring, Field, etc., getting rid of Num and
its friends, and I spent a few very bad nights.

Jerzy Karczmarczuk
Caen, France.

PS. Caeterum censeo, categoria Num delendam esse puto!!



Re: Dynamic scopes in Haskell

1999-12-01 Thread Jerzy Karczmarczuk

José Romildo Malaquias:

> One of the algorithms I have to implement is the
> addition of symbolic expressions. It should have
> two symbolic expressions as arguments and should
> produce a symbolic expression as the result. But
> how the result is produced is depending on series
> of flags that control how the expressions is to
> be manipulated. This set of flags should then be
> passed as a third argument to addition function.
> This is the correct way of doing it. But, being
> a Mathematics application, my system should preserve
> the tradicional Math notation (that is, infix
> operators with suitable associations defined). So
> my symbolic expression type should be an instance
> of the Num class so that the (+) operator can
> be overloaded for it. But, as the function has
> now three arguments, it cannot be a binary operator
> anymore.

... then about Monads and a few other more or less pretty little
things.

==

I don't fully understand the issue. If it is only
a syntactic problem, and for a given chunk, say
a module, your set of flags is fixed and does not change
between one expression and another, you can always define

add flagSet x y = ...   -- your addition function

and then overload

x + y = add myCurrentEnv x y

in this module. (I can't resist complaining once more about
the inadequacy of the Num class hierarchy in Haskell...;
one will have to do the same in the Fractional or Floating
instance definitions, which is clumsy.)
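In compilable form, the trick looks like this. The Expr and Flags types, the body of add, and myCurrentEnv are hypothetical placeholders for the poster's real system:

```haskell
import Prelude hiding ((+))

-- A stand-in symbolic-expression type and flag set.
data Expr  = Var String | Sum [Expr] deriving (Eq, Show)
data Flags = Flags { expandSums :: Bool }

-- The three-argument addition described in the message.
add :: Flags -> Expr -> Expr -> Expr
add _ x y = Sum [x, y]          -- real flag-driven logic elided

-- The environment fixed for this module.
myCurrentEnv :: Flags
myCurrentEnv = Flags { expandSums = False }

-- The overloaded infix operator, shadowing the hidden Prelude.(+).
infixl 6 +
(+) :: Expr -> Expr -> Expr
x + y = add myCurrentEnv x y
```

The price, as the parenthesis complains, is that numeric literals and the Fractional/Floating operators need the same treatment in such a module.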

Jerzy Karczmarczuk
Caen, France



Re: Scientific uses of Haskell?

1999-11-30 Thread Jerzy Karczmarczuk

Rob MacAulay a propos of the visual/dataflow programming:


> Visual programming sounds nice, but in practice it is of limited
> use. If you have a smallish number of modules to link together, you
> could do this as easily by hand-coding. If you have a larger number
> of modules, you probably ought to think about simplifying the
> system!

> Rob MacAulay
> Cambridge

First, there are at least two different concepts in visual programming.
The first one is the visual building of event-driven interfaces
- you know what I mean - putting widgets on the screen and filling
in the popped-up forms for the callbacks: Visual Basic or C++, NetBeans
or other Java builders, etc. I am not particularly interested in this,
although it is an important domain.

I spoke about the dataflow-style languages, the "circuit builders":
Simulink, Scilab/SciCos, WiT, Khoros, IBM Data Explorer (now Open
Source), the diagrammatic layer in MathCad, LabView, etc. (+ the
defunct Java Studio).
And, of course, the notorious Visio used by some Haskell gurus
around, for example by Erik Meijer.

The whole idea is to connect some modules together through *several*
data paths. Doing it through a linear, textual program is of
course possible, but unwieldy. Please look at the definition of
a Simulink "circuit" (M-file); it is frustrating. Dataflow languages
are mostly functional, without side effects, with a kind of laziness
in the sense that the order of execution is not specified - a block
simply waits for its data.

If the blocks present themselves as "objects", not functions, on
the screen, reusing them consists in packaging smaller modules
into bigger ones, and you will end up with a large hierarchy, which
might be difficult to code manually.
So, I disagree: two-dimensional, graphical programming has a nice
future, although dataflow languages like Lucid won't ever
become very popular.

I am quite sure that such a system could be built in Haskell or
Clean in a clean way. BTW, already in Abelson & Sussman^2 you can
find such constructions in Scheme, in the context of constraint
programming.

Jerzy Karczmarczuk
Caen, France.



Haskell and Computer Algebra

1999-11-30 Thread Jerzy Karczmarczuk

Andreas C. Doering:

> not only the collection of algorithms is important but also the
> data base of algebraic objects.
> For instance the group theoretic package Magma (formerly Caley) comes
> with as much information on finite groups as the the libraries of algorithms.
> This data base represents condensed information from hundrets of papers
> and an incredible large amount of computation.

Yes.
(See http://www.maths.usyd.edu.au:8000/u/magma/.)

See also GAP: http://www-history.mcs.st-and.ac.uk/~gap/ which is FREE.

Magma is advertised as something with a powerful functional subset:
higher-order functions, partial evaluation, closures, etc. I suppose
that this part would be easily implementable in Haskell. One has in
Magma such mathematical hierarchies as Rings, Lattices, Modules, and
God knows what else, and it was one of my old dreams to represent them
through the Haskell type class system. It didn't work properly, but in
order to explain why, I would need several pages.

(Anyway, I would rather start with the categories of Axiom or MuPAD...)

Magma operates upon sequences and other iterative structures which are
well adapted to lazy algorithms, so implementable in Haskell with a lot
of fun. Well, you might have a quick look here:
http://www.info.unicaen.fr/~karczma/arpap/laseq.pdf, but I was mainly
interested in numerical processing...

So, I still have some optimism, but I wholeheartedly disagree with what
follows:

> One option would be interfacing or translating a FPL into the CA system.
> For instance Axion uses an own language (strongly typed, very powerful)
> which is translated into lisp, than to C, assembler and finally machine language
> (at least these are the steps done on IBM mainframes).
> I could imagine that a second front end, Haskell could translated in compatible
> Lisp code easily. This would allow using the libraries (data base and
> algorithms) of Axiom.


Sorry, but this is a typical Frankensteinisation, if you know what I
mean.
Getting rid of the type system and of the lazy formulation of
algorithms, and transforming Haskell into Lisp? I would rather go to a
Buddhist monastery.

No, I believe that CA packages could eventually be rewritten, and
*especially* the algorithms. But it is pointless to start with Maple
as such; this *IS* crazy.
Why not implement *some* symbolic algorithms in Haskell?
There is plenty of things to do.
I managed to implement Differential Forms in a simplistic way (but one
which killed the audience at Stirling anyway...). Marc McConnell
implemented some homological algebra and topology in Lisp:

http://www.math.okstate.edu/~mmcconn/shh0596.html

Implement just a partial polynomial package for a specific purpose:
Gröbner bases, Galois field manipulation (e.g. for CRC), etc. ...

It will (for some time) remain an academic initiative, but we can gather
some experience. 
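A taste of the "partial package for a specific purpose" idea: polynomials over GF(2) packed into Integers (bit i holding the coefficient of x^i), with the mod-2 polynomial remainder that sits at the heart of every CRC. All names here are mine:

```haskell
import Data.Bits (shiftL, shiftR, xor)

-- Polynomials over GF(2): bit i = coefficient of x^i.
type Poly2 = Integer

-- Degree of a polynomial (-1 for the zero polynomial).
degree :: Poly2 -> Int
degree p = go p (-1)
  where go 0 d = d
        go q d = go (q `shiftR` 1) (d + 1)

-- Addition (= subtraction) in GF(2)[x] is bitwise XOR.
padd :: Poly2 -> Poly2 -> Poly2
padd = xor

-- Remainder of polynomial division: the CRC of a message is just
-- pmod of the (shifted) message by the generator polynomial.
pmod :: Poly2 -> Poly2 -> Poly2
pmod a m = go a
  where dm = degree m
        go r | degree r < dm = r
             | otherwise     = go (r `xor` (m `shiftL` (degree r - dm)))
```

For instance, with the generator x^3 + x + 1 (the Integer 11), x^4 reduces to x^2 + x: pmod 16 11 == 6.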

===
I don't know anything (yet) about the system of Malaquias, but I
assure you that I have seen at least 20 small experimental CA systems,
written very often as PhD theses. (It seems that Jacal, based on Aubrey
Jaffer's SCM Scheme, is such a system, quite well known.) There is
plenty of material to look up; one of my favourites, dead for more
than 25 years, was a nice system (made somewhere in Germany) embedded
in Algol68, which used its type system.




A short comment on the hypothesis of Sergey, who reacted to my
statement that CA packages are popular because of the mathematical
bedlam concerning the status of symbolic indeterminates:

> Maybe, Maple is popular for other reason? Maybe, due to the rich 
> library of efficient algorithms?  Integration, numeric methods, and
> so on. I suppose this, i do not know. 


Well, I have a strong opinion about that, oscillating between my
previous world of physicists and Comp. Sci., especially educational
Comp. Sci.

Nope!
The power, the longevity, the commercial success among professionals
is obviously the result of delivering a complete, universal package,
with all kinds of goodies, bells, whistles, demos, etc. But let me
assure you:

1. Numerical methods in Maple are lousy! Inefficient as hell for all
   serious numericists. For prototyping they are OK, but not for long
   runs.

2. A lot of other algorithms are also inefficient (I suppose that even
   the basic pattern-substitution algorithms are somewhat obsolete,
   but that changes all the time...). It doesn't really matter; they
   work well enough.

3. The *POPULARITY*, the flourishing life of the MUG discussion
   lists/newsgroups, the usage of Maple in schools, etc. - this has
   *nothing* to do with any super-power of algebraic hierarchies, or
   whatever. The system was conceived to be used interactively in a
   most silly way. A typical user wants typing "x+x" to give "2x", and
   "x-y", where y was assigned x, should give zero. No question about
   typing, no questions about 'what is x'

Scientific uses of Haskell?

1999-11-26 Thread Jerzy Karczmarczuk

Eduardo Costa:

> With a little make up, things
> like Zermello-Frankel notation would give a good replacement
> for SQL. A good computer algebra library (like the one that
> prof. R. Malaquias is creating) would make Haskell a good
> scripting language to replace things like Mathlab, Maple, etc.
> I really think that it is possible to lure a software company
> into investing in Haskell.
> 
> You could say that it would be better to have groups
> of voluntary programmers (like the people who created Linux
> and GNU), instead of companies like Microsoft. Well, I guess
> that Haskell has atractive features to these groups too. For instance,
> Haskell could be used to produce a free version of Maple,
> Matlab, or even Labview.

The problem is that no *notation*, even a particularly nice one, can
replace the existing heap of applications, scripts, etc. It was not
the functional notation which pushed me towards this domain, but the
clean and powerful semantics of functional languages.

So, I don't understand at all the idea of using Haskell as a
'scripting' language. It might be a very nice *implementation*
language for all kinds of scientific computation packages, especially
those which need elaborate memory-management schemes, or tortuous
algorithms. For the moment it is less adapted, as an interface
language, to the psyche of this community (physicists, engineers,
etc.).

Do you know what makes Maple so attractive to newbies, to teachers,
etc.? One of the reasons is simply scandalous, awful, unbelievably
silly: the lack of distinction between a symbolic indeterminate
and a program variable. You write ... f(x) ..., and if x has not
previously been assigned, you get 'x'. Computer algebra packages are
- from the programming-discipline perspective - monsters. You won't
try to do things like that in Haskell, ML, or anything reasonable.

Or, change the user habits drastically...

Reimplementing a computer algebra system, why not? Sergey Mechveliani
has worked for some years on a computer algebra library. I play with
some lazy algorithms on mathematical structures, trying to avoid
"symbols" altogether, working directly with the objects which they
should represent (which is far from the universal CA philosophy).

What really matters here is 

1. A huge, huge library of algorithms. Rewriting Maple is a horrendous
   task. Making some toy system with limited functionality is aimless.

2. Nice graphical interfacing.

(All this applies also to Matlab).

There *is* a free Maple-like system: MuPAD, from Paderborn, Germany.
It is semantically more powerful than Maple, object-oriented (a little
bit in the style of Python), but less functional - new Maple at least
has closures, while the binding protocol of MuPAD is dynamic and
obscure.

Manufacturing it was so tiresome that finally Benno Fuchssteiner
decided to commercialize it, still keeping the free version, open
for everybody. (I see some small analogies with the Clean project,
don't you?...)

There *are* free clones of Matlab: RLab, SciLab, Octave, Tela.

There is one *absolutely essential* requirement for the implementation
of the kernel of such systems: VERY EFFICIENT array processing, with
vectorized interface, slicing, sparse matrices handled efficiently,
etc. Haskell has - it seems - still quite a mileage to go in this
direction.

The visual, dataflow-style programming systems like LabView, Khoros,
WiT, etc. -- hm, this is one of my dreams, to do a thing like that. To,
say, transform a Fudget-like package into a genuine development system
for a functional language, with blocks linked together by data paths
(and joyful Monads implementing these chains), with a lot of simulation-
oriented blocks: signal generators, filters, image processors, etc.
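In miniature, the lazy-stream flavour of such a thing already writes
itself in Haskell (all names below are mine, a sketch of the idea, not
of any existing package): a "block" is a function on infinite streams,
and wiring blocks together is plain composition.

```haskell
-- Dataflow blocks as stream transformers.
type Signal a = [a]

-- A signal-generator block: a slow sine wave.
generator :: Signal Double
generator = [sin (0.1 * fromIntegral n) | n <- [0 :: Int ..]]

-- An amplifier block.
gain :: Double -> Signal Double -> Signal Double
gain k = map (k *)

-- A crude low-pass filter: running average of neighbouring samples.
lowpass :: Signal Double -> Signal Double
lowpass xs = zipWith (\a b -> (a + b) / 2) xs (tail xs)

-- Wiring:  generator --> lowpass --> gain 2
chain :: Signal Double
chain = gain 2 (lowpass generator)

main :: IO ()
main = print (take 3 chain)
```

Laziness gives the data paths for free; what a visual system would add
on top is exactly the boxes-and-wires editor.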

But is this dataflow style really compatible with canonical FP? I don't
really know. I have seen the thesis of Hideki John Reekie (and another
one, farther from FP, of Choi). If you know anything about the
functional approach to dataflow, please let me know.

Jerzy Karczmarczuk,
Caen, France



Some Cleaning

1999-11-25 Thread Jerzy Karczmarczuk

Eduardo Costa answers to (?):

> > Please, correct me if I am wrong: Clean is a proprietary language.


> Yes. You are right. What is worse: they do not make this point very
> clear (for instance, I could not find the price anywhere). You know, I do
> not mind if the language is proprietary or not. However, if it is proprietary,
> it should offer the services of a good proprietary system: a publisher,
> books, advertising to produce a volume of installed systems large enough,
> and competitive prices (a compiler should cost something around 200 dollars).

You can't find what? Oh, please,...

http://www.hilt.nl/

(this page is referenced from the main Clean page). You will find
all about the *free evaluation*, the prices -- well, $495 anyhow...,
the statement about free educational use,
courses, technical support, and the offer to advertise *your*
applications.

Perhaps it should be noticed that Clean also arose from within an
academic community. Rinus is not (yet?) Bill Gates..., and he works
at Nijmegen university, not at Microsoft Research. So, I disagree
with Simon Peyton Jones (...

hmmm, I am still here, I was afraid that a lightning would kill me
on the spot when I wrote these sinful words ... )

that making an industrial compiler is a matter of resources. Yes, but
not only; perhaps that is not even the main point here. It seems
to be rather a question of philosophy. Both have their niches; both
are good for some people. I love Haskell (even if I want to have
it changed...), but I use Clean as well, and I don't see any real sense
in sweeping it aside because it is "proprietary".


Jerzy Karczmarczuk
Caen, France



How to murder a cat

1999-06-10 Thread Jerzy Karczmarczuk

There is a law obeyed by newsgroups, which seems to be
respected here as well: the most trivial problem, when presented in
a provocative sauce, focuses the attention of so many people
that the issue becomes disturbing.


Lars Lundgren continues to save the soul of Friedrich Dominicus:


> > I disagree, small scripts spend most of the time doing I/O. If I don't
> > understand how to do that, I'm not able to even write the most simple
> > things. This is e.g. true for my cat ...
> 
> If you just want to read stdin and write to stdout - use interact; if you
> want to read a file - use readFile; if you want to write to a file - use
> writeFile. What is the problem? I think you are making it harder than it
> is.
> 
> /Lars L
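For the record, the "cat" under discussion really is a one-liner with
interact (the file-copying variant, with hypothetical filenames, is
just as short with readFile/writeFile):

```haskell
-- The simplest "cat": copy stdin to stdout, lazily.
main :: IO ()
main = interact id

-- A file-to-file variant (the names src and dst are placeholders):
-- copyFile :: FilePath -> FilePath -> IO ()
-- copyFile src dst = readFile src >>= writeFile dst
```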


I don't want to be just acrimonious, but I would say: if you just
want a "cat", use a "cat" in any imperative language, and don't waste
your time on Haskell. 90% of my functional programs were <>
in Haskell, CAML, Scheme, etc., and I find the statement that such
programs spend most of their time "doing IO" completely misplaced. It is
true that the system layer of the program (or of the interpreter) uses
most of the execution time, but this is completely irrelevant to the
sense of the program. Using the interpreter main loop is easier than
launching "interact". But first, pose the question: what do you *really*
need? Otherwise the problem reduces to the following:

   "Yes, these knives you want to sell me are very nice. Sharp,
but comfortable to handle, not expensive, and easy to clean.
But, you see, I spend *MOST* of my time in the kitchen opening
bottles and drinking water. So, if your knife can't help me to
drink water more efficiently, go away, and take your knife with 
you".

More seriously, I jumped into this FP paradise from a good, Fortran-
loving milieu, when I found that there were problems very awkward to
solve using imperative programming. I wouldn't have started to use FP
just to check that it is possible to repeat the standard imperative
archetypes functionally, because that is not very rewarding.

Sorry for bothering you with my pseudopedagoguerese.

Jerzy Karczmarczuk
Caen, France.




