On Thu, Apr 17, 2008 at 3:03 PM, Greg Ewing <[EMAIL PROTECTED]> wrote:
> Patrick Mullen wrote:
> > Also, if you are using sqrt for your distance check, you are likely
> > wasting cpu cycles, if all you need to know is whether they are
> > "close enough."

Nope; in this case, the calculations' margin of error must be very small.

> Also note that if you do need to compare exact Euclidean distances
> for some reason, you can avoid square roots by comparing the squares
> of the distances instead of the distances themselves.

I already do that.

On Thu, Apr 17, 2008 at 4:02 PM, Greg Ewing <[EMAIL PROTECTED]> wrote:
> Rather than a tree, you may be just as well off using a regular array
> of cells. That makes it much easier to find the neighbouring cells to
> test, and there's also less overhead from code to manage and traverse
> the data structure.

That's what I meant. What is the difference between a tree and a grid of
cells, though? Aren't cells regular anyway? Either way, I had planned to
use cells.

> The only time you would really need a tree is if the distribution of
> the objects can be very clumpy, so that you benefit from an adaptive
> subdivision of the space.
>
> Another possibility to consider is instead of testing neighbouring
> cells, insert each object into all cells that are within the collision
> radius of it. That might turn out to be faster if the objects don't
> move very frequently.

I like that idea. Still, the objects can and do move. (There's a rough
sketch of the sort of grid I'm picturing in the P.S. at the end of this
mail.)

On Thu, Apr 17, 2008 at 4:23 PM, Greg Ewing <[EMAIL PROTECTED]> wrote:
> There are an extremely large number of modifications that could be
> made to Python. Only a very small number of them will result in any
> improvement, and of those, all the easy-to-find ones have already
> been found.

Then the harder ones must be attacked; solving one difficult speed issue
in Python itself might save hundreds of users the tiresome work of
optimizing their own code.

> If you want to refute that, you're going to have to come up with an
> actual, specific proposal, preferably in the form of a code patch
> together with a benchmark that demonstrates the improvement. If you
> can't do that, you're not really in a position to make statements
> like "it can't be that hard".

Like I said, because I am the application programmer, not a Python
developer, it is my job to make my programs run fast. By "can't be that
hard", I mean that if C++ can do it, Python should be able to as well.
Obviously, I don't know how Python is structured internally, and I doubt
I have the experience of the people on the Python team, but if I can
make optimizations in my code, they should be able to make improvements
in Python.

> If wishing could make it so, Python would already be blazingly fast!

<Ian is wishing...>

On Thu, Apr 17, 2008 at 4:32 PM, Greg Ewing <[EMAIL PROTECTED]> wrote:
> Even if your inefficient algorithm is being executed as fast as
> possible, it's still an inefficient algorithm, and you will run into
> its limitations with a large enough data set. Then you will have to
> find a better algorithm anyway.
>
> Part of the skill of being a good programmer is having the foresight
> to see where such performance problems are likely to turn up further
> down the track, and choosing an algorithm at the outset that at least
> isn't going to be spectacularly bad.
>
> Doing this saves you work in the long run, since you spend less time
> going back and re-coding things.

Not necessarily. I've had situations where I've decided to do something,
then draft-coded it, and then decided that the game feature wasn't
necessary, that it was the wrong approach, or simply that the whole
project should be scrapped.
If I had decided to spend the time coding it with all of the
optimizations up front, it would have taken longer, and when I deleted
it, that extra effort would have been wasted. Once a feature is
implemented, tested, and so on, I decide whether it is actually needed,
and only then add optimizations and comments. Because optimizing often
means rewriting code in a more efficient way, the original code serves
as a guideline for what the optimized version needs to do. All this
saves me effort in the long run, though I can't speak for anyone else.

> Greg

Ian
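
P.S. For concreteness, here is roughly the sort of cell grid plus
squared-distance test I have in mind. It's only an untested sketch; the
cell size, the (x, y, radius) tuples, and the find_collisions name are
placeholders for illustration, not code from my project.

from collections import defaultdict

CELL_SIZE = 10.0  # arbitrary; should be >= the largest sum of two radii

def find_collisions(objects):
    """Broad phase with a regular grid of cells, narrow phase with a
    squared-distance test, so no square root is ever taken.
    Each object is an (x, y, radius) tuple, purely for illustration."""
    # Hash every object index into the cell its centre falls in.
    grid = defaultdict(list)
    for i, (x, y, _) in enumerate(objects):
        grid[(int(x // CELL_SIZE), int(y // CELL_SIZE))].append(i)

    hits = set()
    for (cx, cy), members in grid.items():
        # Candidates live in this cell or one of its 8 neighbours.
        candidates = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                candidates.extend(grid.get((cx + dx, cy + dy), []))
        for i in members:
            xi, yi, ri = objects[i]
            for j in candidates:
                if j <= i:
                    continue             # test each pair only once
                xj, yj, rj = objects[j]
                d_sq = (xi - xj) ** 2 + (yi - yj) ** 2
                r = ri + rj
                if d_sq <= r * r:        # compare squares, never sqrt
                    hits.add((i, j))
    return hits

# Tiny example: circles 0 and 1 overlap, circle 2 is far away.
print(find_collisions([(0.0, 0.0, 3.0), (4.0, 0.0, 3.0), (50.0, 50.0, 1.0)]))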