On 07.05.2008, 19:56, Christopher Barker wrote:
<[EMAIL PROTECTED]>:

> Just a couple notes -- more later:
>  > Bitmaps are about 100 times slower
>
> well Darn. I was hoping they'd be faster!, at least with transparency.

Robin replied that bitmap drawing will get faster in future versions  
(unless the bitmaps are constantly changing). Do you want to rely on that  
or not?

>> My results were obtained on Vista x64, so keep this in mind.
>
> hmm. you'd think that would support native drawing for all of this, and
> thus be as fast as it gets -- anyway, it's the newest platform, so we'd
> better support it well.

I guess if the platform were used in a completely native way, the conversion
problem would not arise. On the other hand, wx's main target is not
real-time bitmap drawing :-)

>> or only matters for lots of objects.
>
> I thought it mattered for objects with large paths -- This came from
> Chris Mellon's experience with an SVG renderer. You might look for that
> in the wxPython-users archives.

Here it is:
http://article.gmane.org/gmane.comp.python.wxpython/50278/match=mellon+svg+graphicspath
Also interesting:
http://article.gmane.org/gmane.comp.python.wxpython/49545/match=mellon+svg+graphicspath

The conclusion is a bit indeterminate. I am all for coding things first, then
benchmarking, then optimizing. Caching the paths can't be too hard if we
need it.

>> This is true. Might be hard to select an object which is completely  
>> hidden
>> behind another one.
>
> I have had that feature request.

Ok, sounds like we should abandon the hit-test bitmap approach in this
case. Maybe use bounding boxes first and then, for the fine-grained test,
something like the "pixel changes" or the Path.Contains approach. If we test
in reverse z-order, then chances are very high we'll find the hit object
after one or two checks. Additionally, if the depth complexity is really
high, then drawing will probably be the bottleneck, not the "mouse over"
checking. There can probably be further optimizations, like exploiting
spatial coherence between mouse moves. However, I think theorizing about
this does not help; we should just test it and see how it goes. After all,
optimization should come after coding and benchmarking :-)
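A minimal sketch of that reverse z-order test (the class and method names here are placeholders for the discussion, not existing FloatCanvas API; the bounding-box test stands in for the real fine-grained Path.Contains or "pixel changes" check):

```python
class HitObject:
    """Toy stand-in for a scene object with a bounding box and a fine test."""
    def __init__(self, name, bbox):
        self.name = name
        self.bbox = bbox  # (xmin, ymin, xmax, ymax)

    def bbox_contains(self, x, y):
        xmin, ymin, xmax, ymax = self.bbox
        return xmin <= x <= xmax and ymin <= y <= ymax

    def contains(self, x, y):
        # Fine-grained test; a real version would use something like
        # wx.GraphicsPath.Contains.  Here the bbox check stands in.
        return self.bbox_contains(x, y)


def hit_test(objects, x, y):
    """Walk objects in reverse z-order (topmost first); with high depth
    complexity, the first one or two checks usually find the hit."""
    for obj in reversed(objects):  # last drawn = topmost
        if obj.bbox_contains(x, y) and obj.contains(x, y):
            return obj
    return None


scene = [HitObject("back", (0, 0, 10, 10)),
         HitObject("front", (2, 2, 6, 6))]
print(hit_test(scene, 3, 3).name)  # "front" wins even though "back" also contains the point
```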

> Yes, though from a user perspective, they have their data, and the
> FloatCanvas object is the view (see my population example), so this
> comes down to: Is there a benefit to MVC style at the FloatCanvas object
> level?

Say the user has a database of a flowerbed with positions for all flowers.
Then he could hook into fc somehow so that the flowers in the db correspond
to the flowers seen in fc. This means creating/deleting the nodes in the
fc scenegraph (document). I'm not sure if you count this as the "FloatCanvas
object level" or as "each flower is a view and together they form the entire
document".

>> And yes, the IFlowerData is an interface (hence the I- prefix)
>
> OK -- I'll look up zope interfaces, though it feels a bit JAVA-ish to me!

Yes, it doesn't feel too Pythonic. But the user needs some
code/documentation when he wants to extend things, and interfaces tend to be
OK for that.

>> Well, maybe ICircleData will not have a center property :-). The scene
>> node takes already care of transforming it,
>
> So then the location of the circle becomes a property of the Renderer?
> Or Node? I'm getting confused here... That makes sense for this example,
> but sometimes the location is part of the data, but the diameter not --
> I think it may be too implementation specific to specify this way.

Ok, we can do it this way: the user plugs his custom views in at the
nodes. The nodes hold the transform (their main job), the data, the look
and the renderable; this is something like the DrawObject then. The user
can then change things just as he likes. He can introduce functionality
which sets the node's transform from the data and fixes the diameter, for
example, or change the look depending on the data (as in your population
example).
So he plugs in at the node level if he wants to do something which affects
look and renderable and where he needs lots of power. Of course he is also
free to just code a new renderable or to change the look only; no need to
subclass the entire node then, just exchange the attributes on the fly.
For example, to render rectangles instead of circles, you could swap the
renderable at runtime. Inheritance is bad here since you can't easily
change the way an object (object = data) is rendered on the fly (like
switching between renderables which draw a flower in different ways).
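A sketch of such a node, with the renderable exchanged on the fly (all names here are design proposals from this discussion, not real FloatCanvas classes; a list of recorded commands stands in for a real renderer):

```python
class Node:
    """Proposed node: holds the transform, the data, the look, and a
    swappable renderable -- roughly the role of today's DrawObject."""
    def __init__(self, transform, data, look, renderable):
        self.transform = transform
        self.data = data
        self.look = look
        self.renderable = renderable

    def render(self, renderer):
        self.renderable.render(renderer, self.data, self.look)


class CircleRenderable:
    def render(self, renderer, data, look):
        renderer.append(("circle", data, look))


class RectRenderable:
    def render(self, renderer, data, look):
        renderer.append(("rect", data, look))


commands = []  # toy "renderer" that just records commands
node = Node(transform=None, data=(0, 0, 5), look="red",
            renderable=CircleRenderable())
node.render(commands)
node.renderable = RectRenderable()  # exchanged on the fly, no subclassing
node.render(commands)
```

The same data is now drawn as a rectangle instead of a circle, without touching the node's class.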

> If each Circle is the same class, then what is the advantage of having
> one Circle, and 100 nodes, rather than 100 circles? Maybe some memory
> advantage, but since I don't think we can determine a priori which data
> is shared, and which individual, I don't know that we can do it. On the
> other hand, this structure allows users to design their own objects that
> share what they want to share. Can't that be done with subclassing and
> class attributes, though:
>
> class PopulationCircle(FC.DrawObjects.Circle):
>      color = 'red'
>
>      def __init__(self, diameter, xy):
>          self.diameter = diameter
>          self.xy = xy
>
> Now you can create a bunch of PopulationCircle objects, each with a
> different diameter, but they will all be red, and they all share
> everything else the same.

I think sharing can be useful sometimes. Say instead of drawing 100
circles, you draw a complex map 100 times, and suppose the map is drawn into
a bitmap for caching purposes. If each map is created as an individual
object, then you don't know that you can share the cache too, and you end up
with 100 bitmaps instead of one. And if you want to add a line or two to the
map, changing one object is easier than changing 100.
Doing it by inheritance has a problem: say you want to draw 100 red circles
and 100 green circles. How do you do that without creating two different
PopulationCircle classes?
I think inheritance doesn't really scale well to composition. It destroys
the idea that things are orthogonal to each other.
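Composition handles the red/green case with one class and two shared look objects (a sketch of the idea only; `Look` and the attribute names are made up for illustration):

```python
class Look:
    """One shared look per colour; change it once, every user sees it."""
    def __init__(self, color):
        self.color = color


class Circle:
    def __init__(self, diameter, xy, look):
        self.diameter = diameter
        self.xy = xy
        self.look = look  # shared by reference, not baked in by inheritance


red, green = Look("red"), Look("green")
circles = ([Circle(d, (d, d), red) for d in range(100)] +
           [Circle(d, (d, d), green) for d in range(100)])

red.color = "dark red"  # one change updates all 100 red circles at once
```

With subclassing you would need a `RedPopulationCircle` and a `GreenPopulationCircle`; here color and geometry stay orthogonal.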

>>>> You could also have multiple FlowerRenderers for the same FlowerData
>>>> (they might render it differently).
>>> That is cool -- though you could do this with subclassing too -- which
>>> is easier to use??
>>
>> Subclassing what from what?
>
> Subclassing from a FlowerObject -- overriding the Render method.
>
> That may not allow you to drop a different renderer into an existing
> instance though (well, you can do almost anything with python, but not
> as cleanly). An example I've though of for this is the Line and Spline
> objects - they really are the same except for the renderer. In the
> current version, spline is subclassed from Line, with the _Draw method
> overridden:
>
> class Spline(Line):
>      def __init__(self, *args, **kwargs):
>              Line.__init__(self, *args, **kwargs)
>
>      def _Draw(self, dc , WorldToPixel, ScaleWorldToPixel, HTdc=None):
>          Points = WorldToPixel(self.Points)
>          dc.SetPen(self.Pen)
>          dc.DrawSpline(Points)
>          if HTdc and self.HitAble:
>              HTdc.SetPen(self.HitPen)
>              HTdc.DrawSpline(Points)
>
> So now we have two classes that share everything except how they are
> rendered. This works fine, until you ask: How do you turn a Line into a
> Spline? This is a common operation in drawing programs.
>
> Of course, I could have built them as a single class, with a "IsSpline"
> attribute, and the rendering would switch on that -- but that doesn't
> anticipate users adding their own way to render a Line-like object, and
> being able to switch between them.

I am not sure I understand the problem here and how it relates to having
more than one renderer for an object. I'd write your example like this:

class PointsData(list):
    # Instead of subclassing from list, one could also subclass from
    # object and add:
    # def __init__(self, points):
    #     self.points = points
    pass

# could possibly be omitted
class LinesData(PointsData):
    pass
    # Do you want to add a ConvertToSpline() method here?

# could possibly be omitted
class SplinesData(PointsData):
    pass

class LineRenderer(IRenderable):
    def Render(self, renderer, data):
        renderer.DrawLines(data)

class SplineRenderer(IRenderable):
    def Render(self, renderer, data):
        renderer.DrawSpline(data)

myLine = LinesData([(1, 2), (3, 4)])
myLineRenderer = LineRenderer()
myLineRenderer.Render(renderer, myLine)

>> Maybe you
>> are right and we should collapse renderable and look into one object ala
>> RenderSet and change the RenderSet in this case. Or the  
>> IRenderable.Render
>> function is not only passed the renderer, but also the look which can  
>> then
>> be modified by the Renderable. Or the IRenderable can set/return a  
>> custom
>> look instead of the default one. This needs further thought though.
>
> OK -- we need a straw-man example -- I'm still confused about what is
> what here!

I agree. We need a complete use-case example. I'll write one up with the
usage as I intend it; then we'll continue the discussion on that. After we
have finished the use-case discussion, I'd like to stop theorizing, actually
start coding, and do refinements as we go along. What do you think?

>  >> some
>>> objects have a lot more complications -- see ScaledTextBox for example
>>> -- where would all that other code go?
>
> Maybe a straw man would help here. And ScaledTextBox is a good example,
> as it's as complicated as it gets in the current floatcanvas (though
> ScaledBitmap is getting there!)

I'll take care of this in the example. I am thinking of a "Flowerbed"  
application with labels for each flower.

>> OK. I'd like to make the "foreground" thing part of a general layer
>> concept though.
>
> Agreed -- what I'm not sure about is if each layer gets its own buffer
> -- that would depend on how fast it is to blit a transparent bitmap --
> it used to be dog slow. Also memory use, of course.

I think this should be configurable, hence the notion of a RenderSurface
(we could also call it RenderTarget). If you tell the layer to draw to a
bitmap, it's drawn to a bitmap; if you tell the layer to draw directly to
the window, it will be drawn there. Layers should not be coupled to any
buffers directly.
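The decoupling could look roughly like this (`Layer`, `RenderSurface` and the stand-in surface classes are design sketches, not existing API; command lists stand in for real wx.MemoryDC/wx.PaintDC targets):

```python
class BitmapSurface:
    """Stand-in for an off-screen target (wx.Bitmap + wx.MemoryDC)."""
    def __init__(self):
        self.commands = []

    def draw(self, cmd):
        self.commands.append(cmd)


class WindowSurface:
    """Stand-in for drawing straight to the window (wx.PaintDC)."""
    def __init__(self):
        self.commands = []

    def draw(self, cmd):
        self.commands.append(cmd)


class Layer:
    """A layer draws to whatever RenderSurface it is handed; it owns
    no buffer of its own."""
    def __init__(self, surface):
        self.surface = surface

    def render(self, objects):
        for obj in objects:
            self.surface.draw(obj)


bg = Layer(BitmapSurface())   # background: cached in a bitmap
fg = Layer(WindowSurface())   # foreground: straight to the window
bg.render(["map"])
fg.render(["cursor"])
```

Whether a given layer is buffered then becomes a per-layer configuration choice, traded off against blit speed and memory use as Chris notes.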

>>> Which reminds me -- have you looked at the SVG model at all? Does it
>>> follow any of these ideas?
>
> I haven't looked closely either, but it does have a path-based and
> object-based drawing model -- it may be informative.

I know more about it now. It might be interesting to write an "SVG
renderer" in addition to the DC/GC ones. The renderer just records all the
commands issued to it and writes them out as an SVG file. Doing the
reverse (loading SVG) might also be possible; we'd probably need a special
"SVG renderable" for this, where the data is set to an element of the
SVG file (for example a "rect" one). You'd create one renderable for each
element in the tree, and the SVG renderable knows how to interpret it.
The other option is some kind of importer hook where the SVG file is
loaded and converted to fc-native renderables.
Using SVG as a general format to load/save a whole fc is impossible, I
think.
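A record-and-serialize renderer along those lines could be sketched like this (the `Draw*` method names mirror the DC/GC-style calls assumed in this discussion and are not a real API; the emitted `circle`/`rect` elements follow the SVG spec):

```python
class SVGRenderer:
    """Records drawing commands and writes them out as a minimal SVG
    document instead of drawing to a DC/GC."""
    def __init__(self):
        self.elements = []

    def DrawCircle(self, cx, cy, r):
        self.elements.append('<circle cx="%g" cy="%g" r="%g"/>' % (cx, cy, r))

    def DrawRectangle(self, x, y, w, h):
        self.elements.append(
            '<rect x="%g" y="%g" width="%g" height="%g"/>' % (x, y, w, h))

    def ToSVG(self, width, height):
        body = "\n".join("  " + e for e in self.elements)
        return ('<svg xmlns="http://www.w3.org/2000/svg" '
                'width="%g" height="%g">\n%s\n</svg>' % (width, height, body))


r = SVGRenderer()
r.DrawCircle(10, 10, 5)
r.DrawRectangle(0, 0, 20, 20)
svg = r.ToSVG(100, 100)
```

The scene renders itself as usual; only the renderer object passed in changes, which is exactly the payoff of keeping renderables and renderers separate.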

>> The intermediary transforms would probably be cached. Things like  
>> zooming
>> can be handled by the DC.SetUserScale and GC.SetTransform methods.
>
> not DC.SetUserScale -- DCs require integers, so that all goes to heck
> when your world coordinates may vary between 0-1. That's why I didn't
> use it. I'm not sure about GCs -- do they use float or double?

Ahh, I see your point. Then this transformation has to be done manually
anyway. GC uses wxDouble everywhere; it probably maps to the C++ double
type (a Python float == C++ double). But I don't know whether the underlying
platform supports the real double range/precision or casts down to float. In
any case, it's better than an int :-)
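The manual transformation amounts to doing the float scale/offset yourself and only handing rounded integers to the DC, roughly in the spirit of FloatCanvas's WorldToPixel (a simplified sketch, not the actual FloatCanvas code):

```python
def make_world_to_pixel(scale, world_origin, pixel_origin):
    """World coordinates may well live in 0-1; the DC only takes ints,
    so all float math happens here and only the result is rounded."""
    ox, oy = world_origin
    px, py = pixel_origin

    def world_to_pixel(x, y):
        # y is flipped: world y grows upward, pixel y grows downward
        return (int(round((x - ox) * scale + px)),
                int(round(py - (y - oy) * scale)))

    return world_to_pixel


# map the world square (0,0)-(1,1) onto a 500x500 pixel area
w2p = make_world_to_pixel(scale=500.0, world_origin=(0.0, 0.0),
                          pixel_origin=(0, 500))
print(w2p(0.5, 0.5))  # (250, 250)
```

With SetUserScale this would have collapsed to integer garbage for 0-1 world coordinates; keeping the transform in Python floats (C doubles) avoids that.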

>> Or no location at all! And that's when you need a separate view model.
>
> So the location is part of the Document(Scene?) model, rather than the
> FlowerModel?

Yes. Say you have a db of flowers with different properties and you want
to use fc to visualize and edit them. The user would initially position them
on the canvas somewhere, set their properties, and test which flowers look
good together. They'd save the view for use next time. The flower
database would only be updated with the changes to properties such as color,
size and number of blades.

>>> yes, I do want that -- FC now has a ViewPort and a Scale. I'm not sure
>>> you need anything else for 2-d.
>>
>> Probably not. Maybe some weird projection :-)
>
> actually, that is an issue -- the viewport is rectangular in Pixel
> coords, it has to be, and I think it has to be in Projected coords, but
> it may not be in World coords. Then we'd need to go to an arbitrary
> polygon (quadrilateral, at least?). As a straight line in Projected
> isn't a straight line in World -- now I know why GIS systems generally
> work in projected coordinates. But I think we can probably get away with
> that for now -- if need be, it could be accommodated by a
> projection-dependent Bounding Box check -- define the four corners, and
> ask the question -- is this object inside that box? more complex math,
> but the same question.

Ok.

>> Ok. Please tell me when you see ugly parts in my code (not counting the
>> Benchmark).
>
> Will do. I tend to write a bit in my own style, which isn't quite the
> wxPython style, so correct me too!

Ok.

>>> Then there's version support: I'm inclined to say wxPython2.8+ and
>>> Python 2.5+ (though wxPython supports 2.4), but we might want to poll
>>> users about that -- I'm often surprised to find people constrained to
>>> old versions.
>>
>> Ok, I will be developing and testing on wxPython 2.8.7 msw-unicode and
>> Python 2.5.
>
> Do you have any other platforms? I can test on OS-X and WinXP. I'm not
> using Linux much lately, though I'd like to get back to that.

I can only test on XP; no other platforms around here. If we really need
to, I can probably set up a virtual machine with Linux, but not unless it's
absolutely necessary. Any FloatCanvas users here who use Linux and would
like to help with testing?

-Matthias
_______________________________________________
FloatCanvas mailing list
[email protected]
http://mail.mithis.com/cgi-bin/mailman/listinfo/floatcanvas
