I wish sometimes that we could actually talk, because typing can become
cumbersome when trying to convey ideas.  But basically, imagine that I have
a yellow marble lying on a map, and I want to know how many blue marbles
lie within a mile of it.  I go through all the conditionals to make sure a
candidate is a round blue marble, and if it is, I calculate the distance
between the yellow and blue marbles; if it is less than a mile, I record
the distance in a sparse array.  Because of you guys' suggestions, I now
also add it to the sparse array in two places, because if I know the
distance from the yellow marble to the blue one, I also know the distance
from the blue one to the yellow.  So now I am doing half the number of
distance calculations, but I have added the overhead of putting information
into, and later pulling it out of, a new array.  None of the averages,
medians, and other statistics I need can be computed at this point, because
we have deferred them in order to gain speed.
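
In rough ActionScript, that one-pass scan looks something like the sketch
below.  The record layout (shape, color, wantShape, wantColor, x, y) and
the straight-line distanceMiles() are placeholders I made up, not the real
data or formula:

// A minimal sketch of the one-pass scan with the symmetric store.
var records:Array = [];        // loaded elsewhere
var distances:Object = {};     // sparse: only pairs under a mile get an entry

function key(i:int, j:int):String {
    return i + "," + j;
}

function distanceMiles(a:Object, b:Object):Number {
    // placeholder: straight-line distance in map units
    var dx:Number = a.x - b.x;
    var dy:Number = a.y - b.y;
    return Math.sqrt(dx * dx + dy * dy);
}

function scanFrom(i:int):void {
    var target:Object = records[i];
    for (var j:int = i + 1; j < records.length; j++) {
        var other:Object = records[j];
        // the conditionals: only the shape/color this target asks for
        if (other.shape != target.wantShape || other.color != target.wantColor)
            continue;
        var d:Number = distanceMiles(target, other);
        if (d < 1.0) {
            // one calculation, two entries, since d(i,j) == d(j,i);
            // assumes the pairing is mutual, as it is with the marbles
            distances[key(i, j)] = d;
            distances[key(j, i)] = d;
        }
    }
}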

Now I go to the next record, but this time it is a red square looking for
all of the green squares within a mile of it.  So I HAVE to go through all
of the records again with respect to what the new record specifies.
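
With the criteria hanging off each target record, that repeat scan is just
the outer loop driving the sketch above:

// every record is a fresh target with its own criteria, so the
// inner scan in scanFrom() runs once per record no matter what
// shape or color it asks for
for (var i:int = 0; i < records.length; i++) {
    scanFrom(i);
}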

Now, once all distances have been calculated, I can go back through the
sparse array, and wherever a distance has been recorded, calculate the
averages and sums for that particular record.
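
That second pass reads straight out of the sparse array; again just a
sketch, with the median and the rest hanging off the same list:

// second pass: pull the recorded distances for record i and
// compute the per-record numbers from them
function statsFor(i:int):Object {
    var ds:Array = [];
    for (var j:int = 0; j < records.length; j++) {
        var d:* = distances[key(i, j)];
        if (d !== undefined)
            ds.push(Number(d));
    }
    if (ds.length == 0)
        return null;    // nothing within a mile of this record
    var sum:Number = 0;
    for each (var v:Number in ds)
        sum += v;
    return { count: ds.length, sum: sum, average: sum / ds.length };
}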

So, yes, we could probably get it closer to a minute if distance
calculations were all we had to do, but a lot of numbers have to be
calculated, and they have to be calculated with respect to the target
record.  Cutting down the number of distance calculations does not change
the fact that other numbers have to be computed, in addition to the
distance, for every record.


