Am 06.05.2008, 09:28 Uhr, schrieb Chris.Barker <[EMAIL PROTECTED]>:

> Nitro wrote:
>> 1) We need an IRenderer interface ("who"). There will be two concrete
>> implementations, DCRenderer and GCRenderer. Right now I am not sure if we
>> can make both share the same interface, but it should be similar enough
>> to get away with it. The renderer is responsible for drawing an object
>> (or list of them whenever possible).
>
> I'm a bit on the fence about maintaining DC use -- it can be faster and
> easier, but there are a number of features it simply doesn't support, and
> I'm concerned about the overhead of maintaining two rendering APIs.
> Perhaps some profiling is in order to see how much difference there really is.

Ok, I'll whip up a quick performance test on my machine and see how it
goes. If the result is not clear, I'll post the benchmark here so people
can try it on different machines and graphics cards.

> The other issue is that the hit-test system won't work with
> anti-aliasing -- can you turn that off with a GC? Or should we go with a
> different system for that anyway?

That's a good question. As I understand it, you draw each object in a
distinct colour and then look at the colour of the pixel the user clicked
on. Then you map this colour back to the polygon. If you use
anti-aliasing (or alpha-blending), the colours are mixed and you cannot
do the mapping anymore. The wxWidgets docs show no sign of being able to
turn anti-aliasing on or off.
The good thing about this system is that we don't have to worry about
exactly how objects are drawn; we just use the result. If we wanted to do
hit-testing without drawing anything, we'd need to know how complex paths
are drawn, which areas are filled, etc., so that's not a good idea. There
are probably other ways to do hit-testing even with
anti-aliasing/alpha-blending, but those would certainly require multiple
passes.
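To make that concrete, here is a minimal sketch of the colour-keyed
scheme. All names are invented; the actual off-screen drawing and the
pixel read-back (which must happen with anti-aliasing off) are left to wx:

```python
class ColorKeyedHitTest:
    """Map each object to a unique flat RGB colour. Drawing the scene
    with these colours into an off-screen buffer lets a single pixel
    lookup identify the clicked object. Anti-aliasing must be OFF for
    this buffer, or blended edge pixels will not match any key."""

    def __init__(self):
        self._next_id = 1          # 0 is reserved for "background"
        self._by_color = {}        # (r, g, b) -> object

    def register(self, obj):
        # Encode a 24-bit id as an RGB triple.
        i = self._next_id
        self._next_id += 1
        color = ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)
        self._by_color[color] = obj
        return color   # draw obj with this colour in the hit buffer

    def object_at(self, pixel_color):
        # pixel_color is read back from the hit buffer at the click position
        return self._by_color.get(tuple(pixel_color))
```

With 24 bits of key space this handles millions of objects; the limiting
factor is really the anti-aliasing constraint discussed above.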

>> 2) Separate the data to be drawn from the draw object itself ("what",
>> IRenderable). This way you can instantiate thousand circle objects which
>> all share the same circle object. This saves memory and has another
>> advantage I'll outline later. Basically IRenderable will have a "Render"
>> method which gets passed an IRenderer object. Then it can call various
>> methods on this object.
>
> I'm a bit lost here -- I think I need a more concrete example.

The comment I made here is a bit obsolete with MVC taken into account.  
With MVC it would look like this:

class IFlowerData(object):
    radius = 5
    center = (2, 1)
    noBlades = 8

class FlowerRenderer(IRenderable):
    def Render(self, renderer, data):
        # first draw the blossom center
        renderer.DrawCircle(data.center, data.radius)

        # then draw the blades
        for i in range(data.noBlades):
            vertices = ...  # set up the polygon for blade i here
            renderer.DrawPolygon(vertices)

So as you said somewhere below, if you want to have 100 flowers with  
radius 5 etc, you can create one FlowerData object and one FlowerRenderer.  
The only thing a FlowerRenderer basically knows is how to render a  
FlowerData object. The FlowerData object could be anything though, it  
could be a FlowerDataFromUserDatabase object which implements the  
FlowerData interface (radius, center, noBlades attributes).
You could also have multiple FlowerRenderers for the same FlowerData (they  
might render it differently). For example you might have a MapData object  
which stores roads and a heat map. You could create two renderers for  
this, one knows how to render the heat map and one how to render the  
roads. If the renderer object is not shared between several different data  
objects it could also be used to cache render-related things (for example  
draw the roads into a bitmap, remember it and then just draw this bitmap  
the next time).
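A caching renderer along those lines might look roughly like this;
`RenderToBitmap` and `DrawBitmap` are hypothetical renderer methods, not
existing fc API:

```python
class CachedRoadRenderer:
    """Hypothetical renderer that rasterizes the road data into a
    bitmap once and then just blits that bitmap on later frames,
    until the data is flagged dirty."""

    def __init__(self):
        self._bitmap = None
        self._dirty = True

    def invalidate(self):
        # call this when the underlying road data changes
        self._dirty = True

    def Render(self, renderer, data):
        if self._dirty or self._bitmap is None:
            # expensive pass: draw all roads into an off-screen bitmap
            self._bitmap = renderer.RenderToBitmap(data.roads)
            self._dirty = False
        # cheap pass: reuse the cached bitmap
        renderer.DrawBitmap(self._bitmap)
```

This only works because the renderer is not shared between data objects:
the cache belongs to exactly one data object's rendering.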

>> 3) Separate the way the data is drawn from the draw object ("how",
>> ILook). This means things like brushes, pens, fonts. This allows you to
>> draw the same circle object in different fashions. You just create
>> different description for the Look.
>
> Again, a more concrete example may help me get this -- how is this
> different than a different Object, with different "Look" properties --
> though I like the idea of encapsulating a lot of the "Look" in an class,
> rather than having it be simply a collection of attributes (or
> properties, or traits). That will make it easier to copy or change the
> look, and also provide a consistent API across different Objects.

It's different from attributes because it is orthogonal to the object:
the look encapsulates only "look" functionality. This allows it to be
shared between objects or duplicated, and changing one look object can
affect thousands of drawn objects at once. You can think of it as
similar to the separation between CSS and HTML.
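A minimal sketch of that orthogonality (the `Look` class and its
attribute names are invented for illustration):

```python
class Look:
    """Encapsulates only appearance (pen/brush-style settings), kept
    orthogonal to the objects it styles."""
    def __init__(self, line_colour="black", line_width=1, fill_colour=None):
        self.line_colour = line_colour
        self.line_width = line_width
        self.fill_colour = fill_colour

# one Look shared by many drawn objects, like one CSS rule styling many tags
red_outline = Look(line_colour="red", line_width=3)
circles = [("circle", (i, 0), red_outline) for i in range(1000)]

red_outline.line_width = 5  # every one of the 1000 circles now draws wider
```

Editing the single shared `Look` restyles all thousand circles, which is
exactly the CSS-like behaviour described above.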

>> 4) Based on 2 and 3 it is possible to do more optimization. This means
>> we need some kind of RenderSet object which binds 2) and 3) together.
>> For example one can now sort objects by their RenderSet (data and look).
>> This might enable fc to automatically determine objects suitable for
>> list drawing without the user having to pay attention to this. If the
>> look is the same for lots of objects it also saves lots of calls to the
>> DC's "look" functions.
>
> Do you mean avoiding extra calls for DC.SetBrush() and the like? Are
> these calls expensive enough to bother trying to avoid?

If you have 10,000 objects and have to call SetBrush, SetPen,
SetTransform etc. for each of them, the calls themselves can quickly
generate a lot of overhead, even if they were empty functions.
We don't have to implement this sorting right from the start. It's just
something the design makes possible and that might be useful in the
future. There might also be other uses for a "RenderManager" like this.
The purpose of a RenderManager is to gather all nodes that should be
rendered by some criterion and output the final list of which nodes
should be rendered, and in which order.
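Such a gathering pass could be sketched like this, assuming hypothetical
node attributes (`priority`, `renderset`, `enabled`):

```python
from dataclasses import dataclass

@dataclass
class Node:
    priority: int        # z-order: higher numbers draw later, i.e. more in front
    renderset: object    # the shared (renderable, look) pair
    enabled: bool = True

def gather(nodes):
    # Keep only enabled nodes and sort by z-priority first; within equal
    # priority, group nodes sharing a RenderSet together so
    # SetPen/SetBrush-style state changes are minimized without
    # breaking the draw order.
    visible = [n for n in nodes if n.enabled]
    return sorted(visible, key=lambda n: (n.priority, id(n.renderset)))
```

The secondary sort key is only a tie-breaker, so it never violates the
z-order the user asked for.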

>> Additionally this might screw up the order,
>
> Key problem -- if the objects don't overlap, then I suppose order (often
> called z-order) doesn't matter, but often objects do overlap,
> so now we have to sort by "look" and z-order. My first instinct is that
> it isn't worth the complication.

You'll have to sort by z-order in any case, so sorting by a second
criterion is just a very small modification to the sort key.

>> "priority" attribute to the RenderSet. Higher priorities are drawn more
>> front.
>
> It has been suggested on the floatcanvas list, and it's a pretty common
> approach to have a z-order property -- then you can change that value to
> move objects up and down -- I'm not sure that's the best way to go,
> though -- I kind of like having an ordered list -- and the methods to
> move things up and down that list (which FC does not have now). This
> mirrors a lot of drawing programs (InkScape is a good example), where
> you can move an object: "up", "down", "to the top", or "to the bottom".

Basically the list is the same as the outcome of sorting by priority. I
can see your concerns about the relative nature of priorities, though.
I'll think about this a bit more.

>> This could replace the current "draw in foreground" mode
>
> I don't think so. The key point of the foreground is that it is a
> separate buffer (actually not buffered), so that you can re-draw the
> foreground objects often and fast, and the background buffer does not
> change. This allows animation in front of a complex background, for
> instance -- it's handy for GUI manipulation of objects, too.

What you do with priorities is assign very high numbers to the
foreground objects, say 100000 and upwards. Then you could, for
instance, create two RenderManagers as discussed above (instead of
sorting objects, these render managers select which objects to render).
The first gathers all objects with priorities greater than 100000, the
second all with smaller priorities. The second then renders its objects
to a bitmap, creates a BitmapRenderable and adds it to the node list of
the first RenderManager. This way things are more flexible; for example,
you could easily add a third layer which is also rendered to a bitmap.
Example: you have a topographic map, a road map and a heat map. Assume
the topographic map does not change at all, the road map changes rarely,
but the heat map changes often. The user would probably draw the
topographic map into one bitmap and the roads into a separate one. Now
when the roads are updated, the topographic bitmap does not have to be
redrawn too. This is good since redrawing the topographic map might take
a long time. In this example you no longer have the strict
foreground/background distinction, but you can easily reproduce the
behaviour of having only a foreground and a background layer.
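The split between the two RenderManagers could be as simple as this (the
threshold comes from the example above; the function itself is
hypothetical):

```python
FOREGROUND_THRESHOLD = 100000  # cut-off used in the example above

def split_layers(nodes):
    # The first manager gathers the foreground (priority above the
    # threshold); the second gathers the rest, which can then be
    # rendered once into a bitmap and handed back to the first
    # manager as a single BitmapRenderable.
    foreground = [n for n in nodes if n.priority > FOREGROUND_THRESHOLD]
    background = [n for n in nodes if n.priority <= FOREGROUND_THRESHOLD]
    return foreground, background
```

Adding a third layer would just mean adding a second threshold, which is
the flexibility argument made above.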

So there should probably be a notion of what a renderer draws to. This  
could either be the window or a bitmap for now. Let's call it  
RenderSurface.

>> For convenience one can also add an
>> "enabled" property and if it is set to False skip rendering this object.
>
> yup. One key here is that if an object can be on more than one Canvas,
> then whether it is enabled or not may be a property of both the Object
> and the canvas -- not just the object. Or is that what your Renderable
> is -- the connection between a drawobject and a canvas -- hmmm.

No, you are right. I mentioned this in one of the addenda: the property
should be part of the scene node and no other class.

>> 5) Create the concept of a scenegraph ("where"). A scene graph node has
>> a root node and can have children. Each node holds a transform and the
>> children inherit this.
>
> This seems like a performance killer -- but maybe in most cases, it won't
> be a deep structure -- in the simple case, you have one root node, and
> it has all the objects as children.

It might be a performance killer. We can also optimize for the common
case. Note that the transforms will be updated lazily; that is, a dirty
flag tells whether a transform has changed. When drawing an object, it
is only re-transformed if either its own transform or one of its
ancestors' (parent, grandparent, ...) transforms has changed since last
time. This means that if your objects are all static (no movement), the
final transforms will be calculated exactly once. If you're moving a
leaf node, only its final transformation has to be recalculated, since
the others are up to date.
For linear transforms, the cost of concatenating a few matrices on
today's SIMD hardware is small compared to other costs. For the general
case, where we have to evaluate an arbitrary transform function, things
tend to be a lot more costly. But again, lots of things can be cached
and only updated when they change. Then the update costs are really
minimal.
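Here is a sketch of the lazy update, reduced to translation-only
transforms to keep it short; a real node would concatenate matrices
instead of adding offsets:

```python
class SceneNode:
    """Scene-graph node with a lazily evaluated world transform.
    A dirty flag set on local changes, plus the parent's cached
    result, decide whether the concatenation is redone."""

    recomputes = 0  # class-wide counter, only to make the laziness visible

    def __init__(self, parent=None, offset=(0, 0)):
        self.parent = parent
        self._offset = offset
        self._dirty = True
        self._world = None
        self._parent_world_seen = None

    def set_offset(self, offset):
        self._offset = offset
        self._dirty = True          # invalidate our cached world transform

    def world(self):
        pw = self.parent.world() if self.parent else (0, 0)
        # recompute only if we changed, or the parent's result moved under us
        if self._dirty or pw != self._parent_world_seen:
            SceneNode.recomputes += 1
            self._world = (pw[0] + self._offset[0], pw[1] + self._offset[1])
            self._parent_world_seen = pw
            self._dirty = False
        return self._world
```

For a static scene each node's world transform is computed exactly once,
no matter how often `world()` is queried.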

> This would open up the ability to do things like create plotting axes,
> and the like.

Yes, you have an arbitrary number of frames of reference instead of only  
the world frame.

>> This implies adding yet another set of coordinates
>> to your suggestion: local coordinates. Meaning objects are always
>> expressed in local coordinates (for example a circle is likely to be
>> centered around (0,0)) and then the objects are transformed to world
>> coordinates via their node's transform.
>
> At the moment, some DrawObjects are like this: Rectangles, Circles, but
> others are not: Polygons, Polylines. Certainly, something like a polygon
> could be expressed in terms of a local coordinate system, and then
> transformed to world coords later, but what I tried to do in the past is
> have the coordinate system be natural for the object and use. For
> example, when one describes a polygon on a map in latitude-longitude
> coordinates, one usually thinks in terms of each vertex having
> coordinates in world space. In a way, each vertex is its own entity with
> a position on the earth, rather than in a position relative to the other
> vertices.

Vertices are not relative to other vertices; there is still one frame of
reference for all vertices of an object. For a world map object it
probably makes sense to put it at the intersection of the equator and
the Greenwich meridian. Then you express the position of any place
relative to this point. But you could also choose to place the reference
point at the North Pole, or wherever your coordinates are expressed.

>> For example there will be at
>> least two concrete implementations, first is the default node (since it
>> can have children it will replace the current group drawobject,
>
> And indeed, the whole canvas or document, or whatever would be a node,
> wouldn't it?

The canvas should still be a separate object, I think. But it could have
an attribute called "rootNode" which holds all the other nodes.

>> The second implementation is a RenderableNode.
>> It holds a RenderSet object.
>
> I'm still a bit confused to what a RenderSet object is.

It's basically something as simple as this:

class RenderSet(object):
    def __init__(self, renderable, look):
        self.renderable = renderable
        self.look = look

Together this is enough information to render an object. There might be
an additional renderer attribute. Right now I am not sure whether the
renderable holds the renderer and the data, or whether the renderset
should hold both; it's not really important at this point, though.

>> The scene graph can further be used to cull away large portions of the
>> drawing. We can establish a parallel bounding box tree where objects are
>> inserted/removed from whenever a node is added to the regular scene
>> graph.
>
> would it be parallel? or could it simply be the way that the nodes are
> stored?

It would have to be parallel. For example, the apples of a tree would be
child objects of the stem, but the r-tree would make some of the apples
parents and other apples in the vicinity their children. So the spatial
tree does not correspond to the logical tree.
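As a placeholder for the r-tree, the parallel structure only needs an
interface like this; a flat list scan stands in for the real tree, and
all names here are invented:

```python
class FlatSpatialIndex:
    """Stand-in for the parallel bounding-box tree: kept separate from
    the logical scene graph, updated whenever a node is added to or
    removed from it. A real implementation would be an r-tree; a flat
    list of (bbox, node) pairs exposes the same API."""

    def __init__(self):
        self._entries = []   # (bbox, node); bbox = (xmin, ymin, xmax, ymax)

    def insert(self, bbox, node):
        self._entries.append((bbox, node))

    def remove(self, node):
        self._entries = [(b, n) for b, n in self._entries if n is not node]

    def query(self, view):
        # return nodes whose bbox intersects the view rectangle,
        # i.e. the candidates that survive culling
        vx0, vy0, vx1, vy1 = view
        return [n for (x0, y0, x1, y1), n in self._entries
                if x0 <= vx1 and x1 >= vx0 and y0 <= vy1 and y1 >= vy0]
```

Swapping the list for an r-tree later would only change the internals,
which is the "API now, r-tree underneath later" approach agreed below.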

> Also where would z-order fit in to a node -- would all the objects in a
> node render all above or all below other nodes?

That's up to the RenderManager to decide. The RenderManager can output the  
nodes in depth-first traversal or sort by priority or do whatever.
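For instance, the depth-first strategy is just this (assuming nodes
expose a `children` list):

```python
def depth_first(node):
    # One possible RenderManager ordering: depth-first traversal,
    # so children are emitted after (and thus drawn on top of)
    # their parent.
    yield node
    for child in node.children:
        yield from depth_first(child)
```

A priority-sorting RenderManager would instead collect the nodes and
sort them, as sketched earlier; both produce the same kind of flat
render list.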

>> If somebody feels up to the task he can implement this:
>> http://en.wikipedia.org/wiki/R-tree
>
> I've thought about that -- there is a python wrapper around an r-tree
> implementation. It could be very handy if we want to re-factor the
> hit-test code a different way.

Yes.

>> That way it's even self balancing
>> :-) For now I suggest something less sophisticated.
>
> I agree -- maybe make an API that could have an r-tree underneath, but
> keep in simple in implementation for now.

Ok, I'll keep an eye on this.

>> 6) So the common final high level object is probably the RenderableNode
>> one. It addresses the "object on multiple canvasses" problem as well.
>> Each canvas is associated with a scenegraph. So if you have 2 canvasses
>> and you want the same object to be drawn on both of them, you'd do this:
>>
>> - create the renderer (let's say GCRenderer)
>> - create the IRenderable (let's say a line)
>> - create the ILook (say width = 3, colour = red)
>> - create the RenderSet (with the newly created renderable and look)
>> - create a renderable scene node, set its renderset to the newly created
>> renderset and then attach it to the first canvas
>> - create a renderable scene node, set its renderset to the newly created
>> renderset and then attach it to the second canvas
>>
>> Note how objects are shared.

> Let's say we want it visible on one canvas, and not on the other (though
> maybe we don't need to support that at all) -- could we do that by
> putting the "visible" flag in the renderable scene node?

Yes.

>> There can be helper functions to perform some of the steps (like
>> creating scene node, renderable and look could all be done in one call).
>
> That is key -- while it's be nice to have more control when needed, I
> really want FC to have a simple API to do simple things -- that's why I
> put in all the Canvas.Add*** helper functions. One should be able to put
> an object in a canvas with one call.

Definitely! I totally agree about making it simple to use. Keep in mind
that most things we discuss here are about the internal implementation.
The user will see a strongly simplified picture most of the time.
However, if he wants to tweak something or needs advanced functionality,
he can also do this by using fc's capabilities directly.

>> 9) How do bounding boxes interact with arbitrary transformations? If the
>> transformations are non-continous bigger problems arise.
>
> Wow! I'm not sure how easy (or necessary) it is to support a
> non-continuous transformation -- do you have an example? Even with
> fairly simple transformations, a rectangle transformed may well no
> longer be a rectangle. In the current version, I try to only deal with
> BBs in World coords -- so I'm never looking at the transformed box.

No, I don't have an example :-) And I doubt anybody would use a
non-continuous transform. Not sure what to do about other transforms for
now.

>> Frankly I am not sure whether FloatCanvas should support any kind of
>> default controller/document model.
>
> I think it's critical -- first of all, there can be no view without a
> model to view. Second, what I'd really like to improve is the ability to
> manipulate objects with a GUI -- when I started the whole thing, I saw
> that as a userland problem, but it's very highly desired, and hard to do
> cleanly -- we need to make sure FloatCanvas is well structured to
> support it, and the only way to do that is to have an implementation.
>
> That being said, I do want to be able to have the Document model stand
> on its own.
>
> By the way, I already have another view I'd like to support (not part of
> floatcanvas, though): A tree view of the document -- you could see the
> layers, the objects on them, and click on them to see and set
> properties, make them visible or not, etc.

Sounds good.

>> The user application will have to provide the model and controller part
>> of the MVC pattern.
>
> We should make that possible, but there should be default versions too.
>
>> # adapter (assuming user data does not change)
>>
>> def adoptMyDatabaseCircle(db_connection):
>>     c = fc.Circle()
>>     c.center, c.radius = db_connection.RetrieveCircleData()
>>     return c
>
> Is this an adapter? or just a factory function? Or is there a difference?

It might be more of a factory function :-) Adapters are different again,
I think. I should look at them more closely when we get to that point.

> It would be nice to be able to do that too. Hmm, maybe have the data
> (position, dimensions, etc.) and the look (color, line thickness, etc.)
> be two properties, and their getters and setters could be overridden in
> a subclass to get the data from somewhere else, rather than storing them
> directly in the object.

Yes, see above.

>> I am not entirely sure about the MVC deal here,
>
> I have the same problem -- Are we working on the level where the Model
> is the whole document? or does each DrawObject have a Model (the data it
> represents)? or both? If we can do both without too much overhead, that
> could be pretty cool.

I guess we might have both. The model thing still confuses me a bit. We
have a model-in-model problem, as you mentioned; the scene graph is
another Document/Model. I'll think more about how to solve this in the
best way.

>> what do you think? Say a user changes the position of a circle, should
>> fc be able to automatically change the underlying model (user does not
>> have to register for any event)?
>
> I think so, but I may not quite understand. Do you mean:
>
> MyCircle.Move(cx, dy)
>
> And the Canvas just updates itself?

I guess the question was a bit silly. It has to change the model in order  
to make the circle move.

>> In this case it probably makes sense to
>> divide the IRenderable even further, having it split up in something
>> like CircleData (holds center, radius) and finally the Render(renderer)
>> method which knows how to draw a CircleData. What do you think?
>
> I'd have to see sample code to see how ugly it gets, but I think I like  
> it.

This is basically what was discussed at the very beginning of this mail.  
The CircleData would be the model and the IRenderable knows how to render  
it. CircleData is probably an ICircleData really.

>> As a sidenote, sharing objects like outlined in my last email should
>> also reduce the storage space for persisting by a great deal.
>
> I'm still a bit confused about how this can work. Are you talking about
> a situation in which you have 100 Red circles of diameter 10, but at
> different locations? so they would all share the same Circle object, but
> have different Position objects of some sort? I'm not sure how that
> would work.

Yes. See above for how it works.

>> I think fc should be split into separate subpackages.
>
> subpackages or submodules? Either way, yes.

Probably subpackages. I am not too keen on having lots of files in the  
FloatCanvas folder.

>> Example: A user has a database with blossom objects. He can retrieve
>> number of blades, colour of blades and colour of blossom center. A view
>> for this would probably draw a circle (blossom center) and a bunch of
>> polygons in some way. Now say the database changed, then the user has to
>> make sure the flower data is updated and flagged "dirty" so that the
>> bounding box is recalculated and all caches are invalidated. This is
>> basically the model sending an event to the view.
>
> yup. I'm thinking lib.pubsub might be a good idea here -- the model
> isn't sending an event to the view (it should know nothing of the view),
> it is sending an event out to the world -- and the view happens to be
> listening for it.

Yes.
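A tiny stand-in for the pubsub idea (hand-rolled here; wx.lib.pubsub
provides the real thing, with its own API, and these method names are
invented):

```python
class Pubsub:
    """Minimal publish/subscribe bus: the model publishes on a topic
    without knowing who listens; the view subscribes and reacts."""

    def __init__(self):
        self._listeners = {}

    def subscribe(self, topic, callback):
        self._listeners.setdefault(topic, []).append(callback)

    def send(self, topic, **data):
        for callback in self._listeners.get(topic, []):
            callback(**data)
```

In fc terms, the flower model would call something like
`bus.send("flower.changed", ...)` after a database update, and the
view's listener would invalidate the bounding box and any caches.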

> An issue here: in the current version, you have to call
> FloatCanvas.Draw() to re-draw. This is so that if you are changing a
> lot, you can make all the changes, and then re-render, rather than
> having it re-render with each change. With MVC, that may be too much
> coupling -- so, does the drawobject send out a message that it has
> changed, that gets picked up by the Document object, which decides if it
> wants to send out a message that the document needs updating? This is
> MVC within MVC... Or model within model, anyway.

I guess something like this sounds sensible. It's the model-within-model
problem: we basically have one Document/Model for the view and one for
the data we want to view. The question is how to treat this. It's also a
problem because the view actually *is* the view model, at least
partially. For example, a node with a transform and the "enabled"
attribute is just some data. If you dig further down the object
hierarchy, the type of an IRenderable (e.g. FlowerRenderer) is also just
data. So I'm not sure the view is the same as a "view model".

>> In the flower example above
>> view data is something like flower is at position (x,y) on screen. This
>> coordinate attribute is not part of the underlying data model though.
>
> You have a model within a model problem here. The position of the flower
> may not be part of the flower model, but it is part of the document  
> model.
>
> I'd tend to say that it's part of the flower too -- it'll just get too
> ugly to try to be that pure about it.

I think it's not that easy. Say the user really retrieves the flower
data from a database. You can't expect him to change his database so
that the position of each flower is stored there. So I guess we have to
be pure about it and consider the view a separate model.

Thanks for your input and concerns!

-Matthias
_______________________________________________
FloatCanvas mailing list
[email protected]
http://mail.mithis.com/cgi-bin/mailman/listinfo/floatcanvas
