Nitro wrote:
>> Perhaps some profiling is in order to see how much difference there really is.
>
> Ok, I'll whip up a quick performance test on my machine and see how it
> goes.
That would be great.
> If the result is not clear, I'll post the benchmark here so people
> can try on different machines and graphics cards.
And perhaps suggest optimizations for GCs. One thing noted on the list
in the past is that apparently it takes longer to create a path than to
render it.
> That's a good question. As I understand it you draw objects with different
> colours now and then take a look at the colour of the pixel the user
> clicked on. Then you map this colour to the polygon.
Exactly -- it has the advantage of being O(1) to do the hit test -- thus
fast enough for mouse-over tests, regardless of the complexity of the
drawing. The downsides are: you're drawing every hittable object twice,
so it's slower there, and you can only get the top object under the
mouse if there is more than one object.
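A rough sketch of the color-keyed lookup, for anyone following along
(the names are made up for illustration, not actual FloatCanvas API):

```python
# Sketch of color-keyed hit testing: each hittable object gets a
# unique RGB color, objects are drawn to an offscreen "hit bitmap"
# in that color, and a click is resolved by reading back the pixel
# color -- an O(1) lookup, independent of drawing complexity.

def index_to_color(i):
    """Encode an object index as a unique (r, g, b) tuple."""
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def color_to_index(rgb):
    """Decode the (r, g, b) read from the hit bitmap back to an index."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

class HitTestMap:
    def __init__(self):
        self._objects = []

    def register(self, obj):
        """Assign the next free color to obj; draw obj in this color
        on the (not shown) offscreen hit bitmap."""
        self._objects.append(obj)
        return index_to_color(len(self._objects) - 1)

    def lookup(self, pixel_rgb):
        """Map the color under the mouse back to the object, if any."""
        i = color_to_index(pixel_rgb)
        return self._objects[i] if i < len(self._objects) else None
```

This is exactly why anti-aliasing breaks it: a blended pixel decodes
to an index that was never assigned.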
> anti-aliasing (or alpha-blending) then the colours are mixed and you
> cannot do the mapping anymore.
Yup.
> The wxWidgets docs don't show any sign of
> being able to turn anti-aliasing on/off.
Darn.
> The good thing about the system is we don't have to worry about how
> objects are drawn exactly, we just use the result. If we'd want to do
> hit-testing without drawing anything we'd need to know how complex paths
> are drawn, which areas are filled etc, so that's not a good idea.
no -- way too much work for complex objects
> There
> are probably other ways to do the hit-testing even with
> anti-aliasing/alpha-blending, but those certainly require multiple passes.
The other options I know are:
1) use the Path.Contains() method -- see Robin's note.
2) first do a Bounding Box check, then draw the object in a small bitmap
surrounding the mouse point, and see if that pixel changes -- this is
similar to the current system, but you draw on demand, instead of ahead
of time, and you're only drawing one object at a time, so any color
change will be detected, so anti-aliasing is OK.
The downside of both of those is that they are O(n) (and the constant is
larger), but some of that could be improved with a spatial index
(O(log n)), and maybe it's plenty fast anyway. Bounding box checks are
pretty quick.
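A sketch of option 2's structure -- illustrative names, assuming each
object can report a bounding box and do an exact per-object test:

```python
# Cheap bounding-box rejection pass first, exact test only for the
# survivors. O(n) as written; a spatial index over the bboxes would
# cut the candidate set to O(log n).

class BBox:
    def __init__(self, xmin, ymin, xmax, ymax):
        self.xmin, self.ymin, self.xmax, self.ymax = xmin, ymin, xmax, ymax

    def contains(self, x, y):
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

class Circle:
    def __init__(self, cx, cy, r):
        self.cx, self.cy, self.r = cx, cy, r
        self.bbox = BBox(cx - r, cy - r, cx + r, cy + r)

    def exact_hit(self, x, y):
        # Stand-in for "draw into a tiny bitmap around the mouse point
        # and check the pixel" -- any exact test fits here.
        return (x - self.cx) ** 2 + (y - self.cy) ** 2 <= self.r ** 2

def hit_test(objects, point):
    """Return the top-most object under point, or None."""
    x, y = point
    for obj in reversed(objects):      # top of the z-order first
        if obj.bbox.contains(x, y) and obj.exact_hit(x, y):
            return obj
    return None
```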
>> I'm a bit lost here -- I think I need a more concrete example.
>
> The comment I made here is a bit obsolete with MVC taken into account.
> With MVC it would look like this:
>
> class IFlowerData(object):
>     radius = 5
>     center = (2,1)
>     noBlades = 8
>
> class FlowerRenderer(IRenderable):
>     def Render(self, renderer, data):
>         # first draw center
>         renderer.DrawCircle( data.center, data.radius )
>
>         # draw blades
>         for i in range(0, data.noBlades):
>             # setup some polygon for the blade here
>             renderer.DrawPolygon( vertices )
I get it now. I think I like it, but... (note: playing the skeptic here)
1) It just may be more complicated than we need -- it should be easy to
define new Objects, and having to define multiple classes to do so feels
onerous. On the other hand, you could probably re-use a lot of existing
classes and mix-ins and combine them in different ways to get new things
without writing much new code.
2) Maybe it's because this example is fairly simple, but I fail to see
the advantage of the FlowerRenderer class over a Flower class with a
Render method. It seems that FlowerData and FlowerRenderer are so
closely coupled that there is not a lot of point to separating them like
that. Note that if .radius, .center and .noBlades are properties, then a
subclass could redefine the getters to pull from a database, etc. also.
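A sketch of that property idea -- the dict here is just a stand-in for
a real database, and all the names are made up:

```python
# The base class stores plain attributes behind properties; a subclass
# keeps the same interface but redefines the getters to fetch from
# some other store (a dict standing in for a database here).

class FlowerData(object):
    def __init__(self, radius=5, center=(2, 1), noBlades=8):
        self._radius, self._center, self._noBlades = radius, center, noBlades

    @property
    def radius(self):
        return self._radius

    @property
    def center(self):
        return self._center

    @property
    def noBlades(self):
        return self._noBlades

class DBFlowerData(FlowerData):
    """Same interface; getters hit a 'database' row instead."""
    def __init__(self, db, key):
        self._db, self._key = db, key

    @property
    def radius(self):
        return self._db[self._key]["radius"]

    @property
    def center(self):
        return self._db[self._key]["center"]

    @property
    def noBlades(self):
        return self._db[self._key]["noBlades"]
```

The renderer never knows the difference -- it just reads .radius etc.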
Perhaps we should select a subset of the current DrawObjects to use as
samples to see how things will all fit together with the new code --
including a couple more complex ones, like the PieChart.
> So as you said somewhere below, if you want to have 100 flowers with
> radius 5 etc, you can create one FlowerData object and one FlowerRenderer.
Well, no. FlowerData.center is different for each one, so you'd need a
separate one for each anyway (or have I missed something?). They could
all share a FlowerLook object though.
> You could also have multiple FlowerRenderers for the same FlowerData (they
> might render it differently).
That is cool -- though you could do this with subclassing too -- which
is easier to use?
I've been thinking a bit about MVC at the DrawObject level. This makes
sense, as I've always thought of FloatCanvas as being a tool to
visualize data, rather than to draw a picture (though it is both, of
course). So example data may be the population of a bunch of cities,
each at a location. So the data model looks like:
class PopulationData:
    population = 50000
    location = (-89.5, 47.5)
Now you may want to represent those as circles, with the color
representing population. Or you could want the radius to scale with the
population -- two views of the same data. However, this gets
complicated, as I was thinking of radius as being a property of the
CircleData object, and color as a property of the CircleLook object, but
in this example, either one could represent the data value. So why are
we keeping them apart? And how do we decide what to put where?
Also, to see if I'm understanding you, would there be a Flower object
that tied these together:
class Flower:
    def __init__(....):
        self.Look = FlowerLook
        self.Data = FlowerData
        self.Renderer = FlowerRenderer

    def Render(....):
        self.Renderer.Render(renderer, self.Data, self.Look)
As I wrote that, I realized that there is nothing Flower-specific about
it -- it's simply a generic DrawObject (or node?). However, some
objects have a lot more complications -- see ScaledTextBox for example
-- where would all that other code go?
> If the renderer object is not shared between several different data
> objects it could also be used to cache render-related things
I do think we need to do that -- I expect that we'll need to cache GC
paths, for example.
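Sketched generically, the caching I have in mind is just a dirty flag
(the tuple below stands in for a real gc.CreatePath() result -- names
are illustrative):

```python
# Build the (expensive) path once, reuse it until the data changes,
# then rebuild lazily on the next draw.

class CachedPathObject:
    def __init__(self, vertices):
        self._vertices = list(vertices)
        self._path = None            # built lazily, reused until dirty
        self.path_builds = 0         # instrumentation for the example

    def set_vertices(self, vertices):
        self._vertices = list(vertices)
        self._path = None            # mark the cached path dirty

    def get_path(self):
        if self._path is None:       # rebuild only when needed
            self.path_builds += 1
            self._path = tuple(self._vertices)  # stand-in for gc.CreatePath()
        return self._path
```

This matters because, as noted above, building a path apparently costs
more than rendering it.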
>>> 3) Separate the way the data is drawn from the draw object ("how",
>>> ILook).
By the way, does the "I" signify an instance?
> It's different from attributes since it is orthogonal to the object. The
> look encapsulates only "look" functionality. This allows it to be shared
> between objects, duplicated, if you change one look object it might affect
> thousands of drawn objects etc.
Yes, I like this. I'm still not sure about the Data object, as the kind
of data is more likely to be coupled to the type of object -- OK, I'm
wrong here already! -- for instance, a polyline, spline, and polygon all
have the same type of data -- a sequence of vertices.
Anyway, the "Look" can be shared among a LOT of objects -- fill color,
line(stroke) color, etc...
OK, I'm coming around!
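To spell out what sharing a Look buys us (illustrative names, not the
final fc API):

```python
# Thousands of objects can reference one Look instance, so changing
# that one instance restyles all of them at once.

class Look:
    def __init__(self, fill_color, line_color, line_width=1):
        self.fill_color = fill_color
        self.line_color = line_color
        self.line_width = line_width

class DrawObject:
    def __init__(self, look):
        self.look = look             # shared reference, not a copy

default_look = Look("white", "black")
objects = [DrawObject(default_look) for _ in range(1000)]

default_look.fill_color = "red"      # one assignment restyles them all
```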
>> Do you mean avoiding extra calls for DC.SetBrush() and the like? Are
>> these calls expensive enough to bother trying to avoid?
>
> If you have 10,000 objects and you have to call SetBrush, SetPen,
> SetTransform etc. then the calls themselves can quickly generate a lot
> of overhead, even if they were empty functions.
Maybe, but maybe not compared to the drawing itself. The exception to
this is very simple objects, like single points -- which is why I
created PointSet objects -- to avoid just that! However, we may want
this anyway: even one Python<=>C++ translation may be too much for
things like 1000s of points (a use case of mine!).
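One way to cut the per-object state calls is to group the draw list by
Look, so pen/brush get set once per run of equal looks. A sketch (the
dc calls are just recorded here, not real wx calls -- and note the
caveat that sorting by look only works where draw order within the
group doesn't matter):

```python
from itertools import groupby

def render(draw_list, calls):
    """draw_list is (look, obj) pairs; set state once per look-group."""
    ordered = sorted(draw_list, key=lambda item: item[0])
    for look, group in groupby(ordered, key=lambda item: item[0]):
        calls.append(("SetPen", look))       # one state change per group
        for _look, obj in group:
            calls.append(("Draw", obj))
```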
> We don't have to implement this sorting right from the start. It's just
> something that's possible with the design and might be useful in the
> future. There might also be other uses for a "RenderManager" like this.
> The purpose of a "RenderManager" is to gather all nodes that should be
> rendered by some criterion and output the final list which nodes and in
> which order the nodes should be rendered.
Fair enough. We'll see how it shakes out.
> Basically the list is the same as the outcome of sorting the priorities.
Yes, but the user has to keep track of the z-order value of all their
objects in order to do something like move an object up a bit.
> What you do with priorities is assign very high numbers to the front
> objects, say 100000 and upwards. Then you could for instance create two
> RenderManagers like discussed above (instead of sorting objects these
> render managers select which objects to render). The first gathers all
> objects with priorities greater than 100000, the second all with smaller
> priorities.
OK, so it's kind of like auto-generating layers. It seems a bit like
magic though -- I'd like it to be more explicit -- "I want these objects
in the foreground".
However, if you can now blit transparent bitmaps on top of each other
fast (something else to profile), then you could kind of auto-generate a
bunch of separate offscreen bitmaps -- each with N objects -- and then
only re-draw the layers that have changes on them -- a cool optimization
(now that memory is cheap!).
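The layer-caching idea, sketched without any wx (the list of objects
stands in for a rendered bitmap; names are illustrative):

```python
# Each layer keeps an offscreen "bitmap" and is only re-rendered when
# flagged dirty; clean layers are just re-blitted into the frame.

class Layer:
    def __init__(self, objects):
        self.objects = objects
        self.dirty = True
        self._bitmap = None
        self.renders = 0             # instrumentation for the example

    def bitmap(self):
        if self.dirty:               # re-render only when needed
            self.renders += 1
            self._bitmap = list(self.objects)  # stand-in for a real draw
            self.dirty = False
        return self._bitmap

def compose(layers):
    """Blit all layer bitmaps bottom-to-top into the final frame."""
    frame = []
    for layer in layers:
        frame.extend(layer.bitmap())
    return frame
```

So moving one object in the front layer leaves the (expensive)
background layer untouched.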
> For example you could
> easily add a third layer which is also rendered to a bitmap.
I do want to be able to do multiple layers like that -- the only reason
FC doesn't have that now is that it was painfully slow to render a
transparent bitmap a few wx versions ago (2.4 maybe?)
> So there should probably be a notion of what a renderer draws to. This
> could either be the window or a bitmap for now. Let's call it
> RenderSurface.
Yup -- probably always a bitmap, though. Except maybe for printing, PDF,
(SVG?), etc.
Which reminds me -- have you looked at the SVG model at all? Does it
follow any of these ideas?
> It might be a performance killer. We can also optimize for the common
> case. Note that the transforms will be updated lazily. That is, there is a
> dirty flag which tells whether the transform has changed. Now when drawing
> an object it is only re-transformed if either its own or one of its
> ancestors' (parent, grandparent, ...) transform has changed since last
> time.
That means you're caching a lot. As you zoom in and out and move around
the canvas, the pixel coords keep changing -- or do you not cache those,
just the intermediate transforms?
> This means if your objects are all static (no movement) then the
> final transforms will be calculated exactly once. If you're moving a leaf
> node, then only its final transformation has to be recalculated since the
> other ones are up-to-date.
Got it -- that may work well.
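If I understand the scheme, it's something like this sketch -- world
transforms reduced to translation offsets to keep it short, and version
numbers standing in for the dirty flags (all names made up):

```python
# Each node caches its world transform plus the version numbers of
# every ancestor it was computed from; it recomputes only when its
# own or an ancestor's local transform changed since last time.

class Node:
    def __init__(self, parent=None, offset=(0, 0)):
        self.parent = parent
        self._local = offset
        self._version = 0            # bumped on every local change
        self._cached = None          # (ancestor_versions, world_offset)
        self.recomputes = 0          # instrumentation for the example

    def set_offset(self, offset):
        self._local = offset
        self._version += 1

    def _chain(self):
        node, chain = self, []
        while node:
            chain.append(node)
            node = node.parent
        return chain

    def world_offset(self):
        versions = tuple(n._version for n in self._chain())
        if self._cached is None or self._cached[0] != versions:
            self.recomputes += 1
            x = sum(n._local[0] for n in self._chain())
            y = sum(n._local[1] for n in self._chain())
            self._cached = (versions, (x, y))
        return self._cached[1]
```

Static objects get their final transform computed exactly once; moving
a leaf invalidates only that leaf.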
> Vertices are not relative to other vertices. There's still one frame of
> reference for all vertices of an object. For a world map object it
> probably makes sense to put it at the intersection of the equator and
> the Greenwich meridian.
Right, so the local coords can happen to coincide with the world coords.
You're right, that's not a limitation.
> class RenderSet(object):
>     def __init__(self, renderable, look):
>         self.renderable = renderable
>         self.look = look
>
> Together this is enough information to render an object.
Don't you need the data object too? Or is that referenced by the renderable?
> There might be an
> additional renderer attribute. Right now I am not sure whether the
> renderable holds the renderer and the data or whether the renderset should
> hold both. It's not really important right now though.
This may be more clear as you write examples...
> It would have to be parallel. For example the apples of a tree would be
> child objects of the stem. But the r-tree would make some of the apples
> parent and other apples in the vicinity children. So the spatial tree does
> not correspond with the logical tree.
you're right, of course.
> Definitely! I totally agree about making it simple to use. Keep in mind
> that most things we discuss here are about the internal implementation.
> The user will see a strongly simplified picture most of the time. However,
> if he wants to tweak something or needs advanced functionality he can also
> do this by using fc's capabilities directly.
Fair enough.
> I think it's not that easy. Say the user really retrieves the flower data
> out of a database. You can't expect him to change his database so the
> positions of flowers are stored there. So I guess we have to be pure about
> it and consider the view a separate model.
Well, as I stated above, what's "view" and what's "data" is domain-
specific. You might have just the properties of a flower in your DB, and
the location somewhere else, or the location could be in the DB -- it's
all application dependent.
> Thanks for your input and concerns!
No, thank you for your work on this!
> - I'd like to introduce the concept of a camera/view. This holds data
> about the spot you are looking at, things like zooming etc. It also allows
> for multiple views of the same document.
Yes, I do want that -- FC now has a ViewPort and a Scale. I'm not sure
you need anything else for 2-d.
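For 2-d that really is the whole camera -- a center, a scale, and the
screen size. A minimal sketch along the lines of the current ViewPort +
Scale (names are illustrative):

```python
# World -> pixel: shift to the view center, scale (pixels per world
# unit), and flip y for screen coordinates.

class Camera:
    def __init__(self, center=(0.0, 0.0), scale=1.0, screen_size=(800, 600)):
        self.center = center         # world coords at the screen center
        self.scale = scale           # pixels per world unit (zoom)
        self.screen_size = screen_size

    def world_to_pixel(self, pt):
        cx, cy = self.center
        w, h = self.screen_size
        px = (pt[0] - cx) * self.scale + w / 2.0
        py = h / 2.0 - (pt[1] - cy) * self.scale   # y flipped
        return (px, py)
```

Multiple views of one document then just means multiple Camera
instances over the same object tree.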
> - Any coding guidelines?
You mean coding style? Let's follow the wxPython guidelines:
http://wxpython.org/codeguidelines.php
and
http://wiki.wxpython.org/wxPython%20Style%20Guide
Then there's version support: I'm inclined to say wxPython2.8+ and
Python 2.5+ (though wxPython supports 2.4), but we might want to poll
users about that -- I'm often surprised to find people constrained to
old versions.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
[EMAIL PROTECTED]
_______________________________________________
FloatCanvas mailing list
[email protected]
http://mail.mithis.com/cgi-bin/mailman/listinfo/floatcanvas