I probably should quit whining on here about this issue, but sometimes I get
so baffled that I just want someone to share in the misery :)

So, I have two different versions of my program.  One just chugs through
all of the numbers in a little over 2 minutes.  The other calculates half
of the distances, writes the data to a sparse array, and then calculates
my equations, all in about 1 minute and 40 seconds.

I want to use the faster version, but the data sets that I get out of the
two programs are just slightly different.  This is where I am baffled.  I
exported my data to CSV files so that I could compare them in Excel, and in
95% of the records the two programs calculate exactly the same answers --
all the totals and sums and averages.  But in about 200 of 38K records, the
faster version just goes off the rails and finds no objects within the
calculated distance, while the original program will find like 9.  It seems
to happen more frequently towards the end of the dataset.  How does the
logic work 95% of the time, and not the other 5%?  I wonder if the sparse
array is unstable or unreliable, but it always returns the exact same, but
faulty, data.  Ugh.  End of rant.
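For what it's worth, one classic way a "compute only half the distances"
optimization goes wrong in exactly this pattern (deterministic, wrong only
for some records, worse near the end of the dataset) is storing each pair
once under (i, j) with i < j, but then looking up a record's neighbors by
scanning only the pairs where it appears as the *first* index.  Later
records appear as the first index in fewer and fewer pairs, so they come
back with few or no neighbors.  I obviously can't see the actual code, so
this is just a hypothesis; here is a minimal sketch in Python (not
Flex/ActionScript), with all names made up for illustration:

```python
import random

# Hypothetical data: 50 random 2-D points and a neighbor cutoff distance.
random.seed(1)
points = [(random.random(), random.random()) for _ in range(50)]
cutoff = 0.2

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# The "sparse array": each qualifying pair stored once, keyed (i, j), i < j.
half = {}
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        d = dist(points[i], points[j])
        if d <= cutoff:
            half[(i, j)] = d

def neighbors_buggy(i):
    # Bug: only scans pairs where i is the first index.  The last record
    # is never a first index, so it always appears to have no neighbors.
    return [b for (a, b) in half if a == i]

def neighbors_fixed(i):
    # Fix: check both orientations of each symmetric pair.
    out = []
    for (a, b) in half:
        if a == i:
            out.append(b)
        elif b == i:
            out.append(a)
    return out

last = len(points) - 1
print(len(neighbors_buggy(last)), len(neighbors_fixed(last)))
```

In this sketch the buggy lookup always reports zero neighbors for the last
record, while the fixed one matches a brute-force scan -- the same
"deterministic but faulty" behavior described above, since the stored data
itself is fine and only the lookup is incomplete.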



--
View this message in context: 
http://apache-flex-users.2333346.n4.nabble.com/Workers-and-Speed-tp13098p13238.html
Sent from the Apache Flex Users mailing list archive at Nabble.com.