On Mon, Feb 11, 2013 at 8:55 PM, Rick Johnson <rantingrickjohn...@gmail.com> wrote:
> On Monday, February 11, 2013 7:27:30 AM UTC-6, Chris Angelico wrote:
>
>> So...
>> flatten([None, 23, [1, 2, 3], (2, 3), ["spam", "ham"]])
>>
>> would return
>>
>> [None, 23, 1, 2, 3, (2, 3), "spam", "ham"]
>>
>> I think that's even more unexpected.
>
> Why? Are you over-analyzing? Show me a result that /does/ make you happy.
>
> Do you remember when i was talking about how i attempt to intuit interfaces
> before reading any docs? Well i have news for you Chris, what you are doing
> is NOT "intuiting" how flatten will work, what you are doing is "projecting"
> how flatten will work; these are two completely different concepts Chris.
>
> You can't procrastinate over this method forever because NEWSFLASH you will
> /never/ find a perfect flatten algorithm that will please /everyone/, so just
> pick the most logical and consistent, and MOVE ON!
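(For reference, one reading of the behaviour shown in the quoted example is a single-level flatten that descends into lists only, leaving tuples, strings, and other objects alone. A minimal sketch, under that assumption — the name `flatten` and its exact semantics are the very thing under debate in the thread:)

```python
def flatten(items):
    """One-level flatten: splice sub-*lists* into the result,
    but leave tuples, strings, and everything else untouched."""
    result = []
    for item in items:
        if isinstance(item, list):
            result.extend(item)   # splice list contents in place
        else:
            result.append(item)   # keep non-lists (incl. tuples) as-is
    return result

print(flatten([None, 23, [1, 2, 3], (2, 3), ["spam", "ham"]]))
# → [None, 23, 1, 2, 3, (2, 3), 'spam', 'ham']
```

Note this is deliberately non-recursive: a nested list inside a list survives one call (`flatten([[1, [2]], 3])` gives `[1, [2], 3]`), which is exactly the kind of design choice being argued over.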
Yeah, this is where one has to consider the idea of a unified data model (a sort of OOPv2). Right now, it's all confused because people are using their own internal, subconscious ideas of data. There are natural ways of working with data that ***actually map onto the world we all share*** and there are other ways which are purely abstract and non-pragmatic, however "pure". (Apart from this, there is the ultra-concrete data model, like C's, which only maps onto the machine architecture.) This is where pretty much every computer language is today. What I'm suggesting is, I think, somewhat novel.

The first version of OOP was too concrete, in the sense that it was actually trying to make real-world objects in the machine (class Chevy(Car):). This is ridiculous. There needs to be a refactor of the OOP paradigm. In practice, OOP never was used to represent real-world objects. It came to model virtual-world objects, a very different world with different relationships. It became the evolution of the data type itself.

The unified object model needs to do for OOP what arithmetic did for number: define a very basic and general set of operations on the concept of "quantification". But here we're trying to do that not for quantification but for structures.

My suggestion is to create the "fractal graph" data type to end (and represent) all data types. (Keep all the special, high-speed matrix ideas in SciPy/VPython.) But generally, re-arrange the data model around the fractal graph for efficiency and start watching the magic happen.

markj
pangaia.sf.net
--
http://mail.python.org/mailman/listinfo/python-list