Re: [fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-10 Thread David Barbour
On Wed, Nov 9, 2011 at 7:18 PM, David Barbour dmbarb...@gmail.com wrote:

 That said, I have no doubts that anti-aliased rasterization can be
 achieved in a functional style - I've read a few simple functional
 ray-tracing engines, for example, and I understand that raycast per pixel
 results in images of a quality on par with many anti-aliasing techniques
 even without super-sampling or sub-pixel sampling. I would only
 express some concerns about their efficiency.


Ah yes, it seems the anti-aliased quality really depends on the content
of those images (e.g. a bunch of big spheres) and something about
image frequencies that I haven't really studied. But I still expect that
ray tracing is the most likely option for finding anti-aliased
rasterization in a pure functional `style`.

Whatever. Need to get back to useful work.

Regards,

Dave
___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-09 Thread David Barbour
On Tue, Nov 8, 2011 at 11:13 PM, Dan Amelang daniel.amel...@gmail.com wrote:


 I have never seen input prefixing in a stream-processing/dataflow
 language before. I could only find one passing reference in the
 literature, so unless someone points me to previous art, I'll be
 playing this up as an original contribution in my dissertation :)


It's old, old art. Even C file streams and C++ iostreams allow `get`, `put`,
and `putback` - where `putback` means put something back onto a stream you
just `get` from. I've seen this pattern many times - often in lexers and
parsers, Iteratees, and various other stream-processing models.
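For concreteness, here is a rough Python analogue of the pattern (the class name and API are mine, chosen to mirror C++'s `get`/`putback`, not taken from any of the systems above):

```python
from collections import deque

class PushbackStream:
    """Minimal stream reader with putback, in the spirit of C++ istream::putback."""
    def __init__(self, iterable):
        self._it = iter(iterable)
        self._front = deque()          # items pushed back, most recent first

    def get(self):
        if self._front:
            return self._front.popleft()
        return next(self._it)

    def putback(self, item):
        self._front.appendleft(item)   # rejoins the stream at the front

s = PushbackStream("abc")
x = s.get()        # 'a'
s.putback(x)       # 'a' is next again
print(s.get(), s.get())  # a b
```

The point is only that a pushback buffer in front of any iterator gives you input prefixing for free.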


 Regarding your question about which processes would map poorly: the
 built-in Nile processes DupZip, SortBy, and Reverse (maybe DupCat,
 too). Many Gezira processes are a problem, such as ExpandSpans,
 CombineEdgeSamples, ClipBeziers, DecomposeBeziers, pretty much all of
 the processes in the file stroke.nl (pen stroking). There's probably
 more, these are off the top of my head.


Thanks. I'll peruse these.



 The theory behind using Unicode in Nile is that source code is read a
 lot more than it is written. So I'm willing to make code a bit harder
 to write for a payoff in readability. And if Nile becomes what it
 should be, one shouldn't have to write much code anyway.


With that philosophy, maybe we should be writing markup. That way we can
read code in a comfortable `document` format. I think Fortress takes that
approach.



 He's never taken on pen stroke approximation (which is vital for 2D vector graphics).


Why is this vital? I think there are different understandings of the
`image` abstraction here. One can understand images in terms of drawing
arcs then filling between edges - and such a model is commonly seen in
PostScript and Cairo and apparently Gezira. But it is not an authoritative
abstraction. Pen-strokes with fill is a very imperative approach to
graphics modeling.

Elliott favors modeling lines in terms of areas. So do I. This seems to
shift pen stroke approximation to a utility role - valuable, but not vital.

Areas seem an effective basis for scalable scene-graph maintenance,
declarative models, occlusion, and level-of-detail indexing compared to a
line/fill approach. With the assumption that a pen stroke is modeled as an
area - perhaps defined by a cubic Bezier path, a width, and a brush (e.g.
for dashes and colors and flair) - one is still left with a challenge of
building a useful library of glyphs and brushes.
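To make the area view concrete, here is a small Python sketch of my own (not Elliott's API or Gezira code): a stroke is just the set of points within half the pen width of the path, with Bezier paths assumed flattened to polylines first:

```python
import math

def dist_point_segment(px, py, ax, ay, bx, by):
    # distance from point P to segment AB
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    L2 = vx*vx + vy*vy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, (wx*vx + wy*vy) / L2))
    cx, cy = ax + t*vx, ay + t*vy
    return math.hypot(px - cx, py - cy)

def in_stroke(px, py, path, width):
    """Is (px,py) inside the area swept by a round-capped pen of the given
    width along the polyline `path`?  (Bezier paths would be flattened first.)"""
    r = width / 2.0
    return any(dist_point_segment(px, py, *a, *b) <= r
               for a, b in zip(path, path[1:]))

path = [(0.0, 0.0), (10.0, 0.0)]
print(in_stroke(5.0, 0.4, path, 1.0))   # True: within half-width of the path
print(in_stroke(5.0, 2.0, path, 1.0))   # False
```

Under this model a stroke is queried like any other area, so clipping, occlusion, and composition need no special pen-stroke machinery.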



He's never taken on, say, analytical geometry clipping.


Granted. Elliott focuses on the rather generic (Real,Real) → PixelData
abstraction, and doesn't bother with a static ontology of geometries
subject to easy analysis. Clipping is certainly achieved, though.

One could work with geometry based analyses, bounding boxes, and the like.
The diagrams package certainly does so.



there's a lot _after_ rasterization


True. And your ability to squeeze all this stuff into a few hundred lines
of Nile code is certainly a valuable contribution to the Steps project.


  Anti-aliased rasterization can certainly be modeled in
  a purely functional system,

 Easier said than done, I think. Again, I struggled quite a bit to come
 up with the Gezira rasterizer (which is basically purely functional).
 I don't know of any previous anti-aliased rasterizer done in a purely
 functional style, do you? Pointers appreciated.


I think the challenge you are imagining is a technical one, not a logical
one. Modeling anti-aliased rasterization in a purely functional system is
quite straightforward, at least if you aren't composing images in
rasterized form. The best anti-aliasing is very much mathematical (cf.
claims by Morphic 3 project,
http://www.jvuletich.org/Morphic3/Morphic3-201006.html). The trick is to
make such a model high performance. At the moment, one will still
ultimately compile down to an imperative machine.
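As a tiny illustration of 'mathematical' anti-aliasing as a pure function, the sketch below (my own construction, not Morphic 3's actual algorithm) computes the exact coverage of a unit pixel against a half-plane edge by clipping and measuring, with no sampling at all:

```python
def clip_halfplane(poly, a, b, c):
    """Sutherland-Hodgman: clip polygon to the half-plane a*x + b*y <= c."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        d1, d2 = a*x1 + b*y1 - c, a*x2 + b*y2 - c
        if d1 <= 0:
            out.append((x1, y1))
        if (d1 < 0) != (d2 < 0):          # edge crosses the boundary
            t = d1 / (d1 - d2)
            out.append((x1 + t*(x2 - x1), y1 + t*(y2 - y1)))
    return out

def area(poly):
    """Shoelace formula."""
    return 0.5 * abs(sum(x1*y2 - x2*y1
                         for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1])))

def coverage(px, py, a, b, c):
    """Exact fraction of the unit pixel at (px,py) inside a*x + b*y <= c."""
    pixel = [(px, py), (px+1, py), (px+1, py+1), (px, py+1)]
    return area(clip_halfplane(pixel, a, b, c))

print(coverage(0, 0, 0.0, 1.0, 0.5))   # 0.5: lower half of the pixel
print(coverage(0, 0, 1.0, 1.0, 1.0))   # 0.5: diagonal cut
```

Composing many such edges analytically, fast, is where the real challenge begins.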



 You could just reproduce Foley et al., but that's such an imperative
 algorithm, I would think you'd end up with C-in-Haskell looking code.
 If so, I wouldn't count that.


It is true that some pure functions are best implemented with imperative
algorithms. Haskell offers facilities for doing this (the ST monad, the
State monad). But, while writing the algorithms may be imperative, using
them can still be purely functional. So I guess the question is whether
you'll be spending more time writing them or using them. ;)
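Python has no ST monad, but the discipline can be illustrated: the function below is imperative inside (in-place insertion sort on a private buffer) yet observationally pure outside, which is roughly the guarantee Haskell's runST seals in the type system:

```python
def sorted_pure(xs):
    """Externally pure: same input, same output, no visible effects.
    Internally imperative: in-place insertion sort on a private copy,
    analogous in spirit to local mutation sealed by Haskell's runST."""
    buf = list(xs)               # private mutable state, never escapes
    for i in range(1, len(buf)):
        key, j = buf[i], i - 1
        while j >= 0 and buf[j] > key:
            buf[j + 1] = buf[j]  # destructive update, invisible to callers
            j -= 1
        buf[j + 1] = key
    return buf

xs = (3, 1, 2)
print(sorted_pure(xs))  # [1, 2, 3]; the input tuple is untouched
```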


  My own interest in this: I've been seeking a good graphics model for
  reactive systems, i.e. rendering not just one frame, but managing
  incremental computations and state or resource maintenance for future
  frames. I don't think Gezira is the right answer for my goals,

 I think you're probably right. Gezira is fundamentally about the
 ephemeral process of rendering. Managing state and resources is a
 whole other ball game. At Viewpoints, I think the Lesserphic project
 is closer to what you're looking for.


Thanks for the suggestion.

One 

Re: [fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-09 Thread K. K. Subramaniam
On Wednesday 09 Nov 2011 12:43:00 PM Dan Amelang wrote:
 Input prefixing is what I call this pushing of data onto the input
 stream, though I'm not set on that term. You used the term pushback,
 which I like, but the problem is that we're pushing onto the front
 of the input stream, and pushfront just doesn't have the same ring
I thought 'pushback' means 'push back into', so if you have an input stream,
abcdefgh 
and you read in 'h' and then 'g', you could push 'g' back into the input 
stream for later processing. I suppose one could also 'putback' g into the 
input stream.

But then many computer terms border on the weird :-). I can understand 
'output' and something that one 'puts out' but then what is 'input'?  If it is 
something one 'gets in', shouldn't it have been 'inget' ;-) ?

Subbu



Re: [fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-09 Thread Ondřej Bílka
On Wed, Nov 09, 2011 at 09:55:22PM +0530, K. K. Subramaniam wrote:
 On Wednesday 09 Nov 2011 12:43:00 PM Dan Amelang wrote:
  Input prefixing is what I call this pushing of data onto the input
  stream, though I'm not set on that term. You used the term pushback,
  which I like, but the problem is that we're pushing onto the front
  of the input stream, and pushfront just doesn't have the same ring
 I thought 'pushback' means 'push back into', so if you have an input stream,
 abcdefgh 
 and you read in 'h' and then 'g', you could push 'g' back into the input 
 stream for later processing. I suppose one could also 'putback' g into the 
 input stream.
 
 But then many computer terms border on the weird :-). I can understand 
 'output' and something that one 'puts out', but then what is 'input'? If it is 
 something one 'gets in', shouldn't it have been 'inget' ;-) ?
You should use russian terminology. They use words Putin and getout.
 
 Subbu
 




Re: [fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-09 Thread Dan Amelang
On Wed, Nov 9, 2011 at 1:31 AM, David Barbour dmbarb...@gmail.com wrote:

 On Tue, Nov 8, 2011 at 11:13 PM, Dan Amelang daniel.amel...@gmail.com
 wrote:

 I have never seen input prefixing in a stream-processing/dataflow
 language before. I could only find one passing reference in the
 literature, so unless someone points me to previous art, I'll be
 playing this up as an original contribution in my dissertation :)

 It's old, old art. Even C file streams and C++ iostreams allow get, put,
 putback - where `putback` means put something back onto a stream you just
 `get` from. I've seen this pattern many times - often in lexers and parsers,
 Iteratees, and various other stream-processing models.

Of course I'm aware of these :) There's a Nile parser written in
OMeta, and there's one in Maru now. Both put objects on their input.
And I'm familiar with C++ streams, notice how I based the Nile `<<`
and `>>` syntax on them.

Notice the first sentence of the paragraph that you quoted. I'm
pointing out that, as useful as input prefixing is, it doesn't appear
at all in stream processing languages. Furthermore, it doesn't appear
in stream processing models of computation.

Here's a bit of background. Take the early research, such as Duane
Adams' "A Computation Model with Data Flow Sequencing" (1968).
(Strachey used streams to model I/O before that, like UNIX uses file
handles). Around this time, you also had Seror's DCPL, and Scott's
"Outline of a Mathematical Theory of Computation".

If you start there, and go through Karp and Miller's "Properties of a
Model for Parallel Computations", Kahn's process network papers,
Dennis' dataflow work (esp. Id and VAL), Wadge and Ashcroft's dataflow
(particularly GLU), McGraw's SISAL, Lee's "Dataflow Process Networks",
up to recent work like StreamIt and GRAMPS, you won't find a single
one that even proposes input prefixing (corrections welcome).

My point is that introducing this feature into a stream processing
language and demonstrating its utility might be a research
contribution.

I do appreciate your interest in Nile/Gezira, and you've brought up
interesting questions. Due to time constraints, though, I'm going to
have to put less effort into comments like the above that strike me as
somewhat glib. I hope not to offend anyone or dismiss truly informed
comments, though. I just have a lot on my plate right now.

 Regarding your question about which processes would map poorly: the
 built-in Nile processes DupZip, SortBy, and Reverse (maybe DupCat,
 too). Many Gezira processes are a problem, such as ExpandSpans,
 CombineEdgeSamples, ClipBeziers, DecomposeBeziers, pretty much all of
 the processes in the file stroke.nl (pen stroking). There's probably
 more, these are off the top of my head.

 Thanks. I'll peruse these.

As you look those over, it might help to know that the double arrow
⇒ is for process substitution, which is analogous to Kahn's
reconfiguration (see Kahn and MacQueen, 1976). That is, the effect
of the statement is to dynamically replace the current process with
the newly created sub-network following the arrow.
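A loose Python analogue of this reconfiguration idea, using generators (my own sketch, not Nile semantics): a process reads part of its input, then hands the rest of the stream to a newly built sub-network and disappears behind it:

```python
def double(items):
    for x in items:
        yield 2 * x

def add_one(items):
    for x in items:
        yield x + 1

def dispatch(items):
    """Reads one control token, then *replaces itself* with a sub-pipeline,
    loosely analogous to Nile's double arrow / Kahn-MacQueen reconfiguration."""
    items = iter(items)
    mode = next(items)
    pipeline = double(add_one(items)) if mode == "bump" else double(items)
    yield from pipeline          # this process is now the sub-network

print(list(dispatch(["bump", 1, 2, 3])))   # [4, 6, 8]
print(list(dispatch(["plain", 1, 2, 3])))  # [2, 4, 6]
```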

 The theory behind using Unicode in Nile is that source code is read a
 lot more than it is written. So I'm willing to make code a bit harder
 to write for a payoff in readability. And if Nile becomes what it
 should be, one shouldn't have to write much code anyway.

 With that philosophy, maybe we should be writing markup. That way we can
 read code in a comfortable `document` format. I think Fortress takes that
 approach.

Yes, similar idea. Though as Alan points out, markup is very weak, and
we can do better with interactive, graphical environments. Thus, I've
always felt that my games with Nile syntax are somewhat futile.

 He's never taken on pen stroke approximation (which is vital for 2D vector graphics).

 Why is this vital? I think there are different understandings of the `image`
 abstraction here. One can understand images in terms of drawing arcs then
 filling between edges - and such a model is commonly seen in PostScript and
 Cairo and apparently Gezira. But it is not an authoritative abstraction.
 Pen-strokes with fill is a very imperative approach to graphics modeling.
 Elliott favors modeling lines in terms of areas. So do I. This seems to
 shift pen stroke approximation to a utility role - valuable, but not vital.

Is this conclusion really important enough to argue for? That
rendering lines should be considered valuable but not vital? I think
graphic designers would generally disagree. Regardless, just replace
all instances of 'vital' with 'valuable' in my original argument, and
I still stand by it.

I'm sorry but at this point, I think you're grasping at straws. I can
address one more comment, then I have to move on:

 Pen-strokes with fill is a very imperative approach...

This is just too much. Let's go over the details. In Gezira, I use the
stroke-to-path approach to pen stroking. This means that the
stroking pipeline takes a stream of Beziers that 

[fonc] Nile/Gezira (was: Re: +1 FTW)

2011-11-08 Thread Dan Amelang
Hi David,

On Tue, Nov 8, 2011 at 6:23 PM, David Barbour dmbarb...@gmail.com wrote:

 The high-level model of computation is a variation of Kahn process
 networks. The low-level part is a single-assignment,
 mathematics-oriented language for specifying the internal behavior of
 a process.

 I've been reading through Nile and Gezira code and understand the model
 better at this point. It's basically pure functional stream processing,
 consuming and generating streams. I understand that `>>` generates one
 output, and `<<` seems to push something back onto the input stream for
 re-processing.

Yes, you are correct. The spatial metaphor here is that streams flow
from left to right, so `>> x` pushes x to the right, onto the tail of
the output stream. `<< x` pushes (pulls? :) x to the left, onto the
head of the input stream. Pipeline construction works this way too,
e.g., ClipBeziers → Rasterize → ApplyTexture → WriteToImage. A bit
silly, perhaps, but it works.

Input prefixing is what I call this pushing of data onto the input
stream, though I'm not set on that term. You used the term pushback,
which I like, but the problem is that we're pushing onto the _front_
of the input stream, and pushfront just doesn't have the same ring
:)

Whatever the name, this feature is vital to writing expressive
programs in Nile. It provides a recursion-like capability. For
example, the DecomposeBeziers process successively decomposes Beziers
until they are small enough to process. This is done by splitting the
Bezier into two parts (à la De Casteljau), and pushing each sub-Bezier
onto the input stream.
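Here is a rough Python sketch of that recursion-by-prefixing pattern (my own construction, not Nile's DecomposeBeziers; quadratic rather than cubic Beziers, and a size test on endpoints only, for brevity). The front of the deque plays the role of the process's own input stream:

```python
from collections import deque

def midpoint_split(bz):
    """De Casteljau split of a quadratic Bezier (3 control points) at t = 1/2."""
    p0, p1, p2 = bz
    m01 = ((p0[0]+p1[0])/2, (p0[1]+p1[1])/2)
    m12 = ((p1[0]+p2[0])/2, (p1[1]+p2[1])/2)
    m   = ((m01[0]+m12[0])/2, (m01[1]+m12[1])/2)
    return (p0, m01, m), (m, m12, p2)

def small_enough(bz, limit=1.0):
    (x0, y0), _, (x2, y2) = bz
    return abs(x2 - x0) <= limit and abs(y2 - y0) <= limit

def decompose(beziers, limit=1.0):
    """Recursion via input prefixing: too-big Beziers are split and both
    halves are pushed back onto the *front* of the input stream."""
    pending = deque(beziers)
    while pending:
        bz = pending.popleft()          # read from the input stream
        if small_enough(bz, limit):
            yield bz                    # emit output (Nile's >>)
        else:
            a, b = midpoint_split(bz)
            pending.appendleft(b)       # prefix onto the input (Nile's <<)
            pending.appendleft(a)

curve = [((0.0, 0.0), (4.0, 8.0), (8.0, 0.0))]
pieces = list(decompose(curve))
print(all(small_enough(b) for b in pieces))  # True: every piece is small enough
```

Because both halves rejoin the front of the stream in order, the output pieces come out in traversal order, with no explicit recursion or call stack in the process body.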

I have never seen input prefixing in a stream-processing/dataflow
language before. I could only find one passing reference in the
literature, so unless someone points me to previous art, I'll be
playing this up as an original contribution in my dissertation :)

 Which Nile operators do you anticipate would translate poorly to shaders? I
 guess `zip` might be a problem. SortBy and pushback operators - at least if
 finite - could be modeled using shader global state, but that would be a bit
 of a hack (e.g. receive some sort of EOF indicator to emit final elements).
 Hmmm

Yes, this is the beginning of the difficulties. Just taking the input
prefixing issue, it's problematic to model the unbounded input stream
as global state. You have issues because of the finiteness of the
global state, and because of the inefficiency of global write/read
access in the shader (see GPU docs).

And as I brought up before, even if one can get something to run on
the GPU, that's very different from getting something to run much
faster than on the CPU.

Regarding your question about which processes would map poorly: the
built-in Nile processes DupZip, SortBy, and Reverse (maybe DupCat,
too). Many Gezira processes are a problem, such as ExpandSpans,
CombineEdgeSamples, ClipBeziers, DecomposeBeziers, pretty much all of
the processes in the file stroke.nl (pen stroking). There's probably
more, these are off the top of my head.

 I think I'd be in trouble actually writing Nile code... I don't have a text
 editor with easy Unicode macros. Which do you use?

I use vim. So I hit ctrl-v u2200 for ∀.

Ideally, we'd have a Nile IDE with keyboard macros in addition to a
little char map to click on (Bert built one for Frank).

The theory behind using Unicode in Nile is that source code is read a
lot more than it is written. So I'm willing to make code a bit harder
to write for a payoff in readability. And if Nile becomes what it
should be, one shouldn't have to write much code anyway.

 I agree that Conal Elliott's focus has certainly been on composable,
 morphable, zoomable graphics models

I'm glad we agree...wait a second...when did I say the above?

 - primarily, everything that happens
 before rasterization.

Ah, well, now I don't agree that his focus has been on everything
that happens before rasterization. He's left out a lot. He's never
taken on pen stroke approximation (which is vital for 2D vector
graphics). I had to struggle a bit to come up with my functional
approach to pen stroking (if I missed prior art, let me know!). He's
never taken on, say, analytical geometry clipping. On top of that,
there's a lot _after_ rasterization, and he doesn't address that
territory much either.

I like Conal's work, really. I read all his papers on functional
graphics several years ago, and it probably subconsciously influenced
my research. I'm just objecting to the idea that he covered very much
functionality in the computer graphics space. I think he took on the
easiest niche to model in a purely functional language.

 Anti-aliased rasterization can certainly be modeled in
 a purely functional system,

Easier said than done, I think. Again, I struggled quite a bit to come
up with the Gezira rasterizer (which is basically purely functional).
I don't know of any previous anti-aliased rasterizer done in a purely
functional style, do you? Pointers appreciated.

You could