I get what Terry means now. But it still uses 625/800 = 78% of the
naive method's disk space (or memory, whatever), so I think the saving
is not big enough (the job interview is R&D targeted, so I assume they
want to hear an approach with a "large" saving).

Prateek's idea is to reduce the time spent reading in the whole sets
(suppose they are saved to disk, since there are many of them). It
should work if the given k does not show up frequently. For each k, on
average, I think we only need to read in 200/5000 = 4% of the whole
sets; 96% of the time we only need to check the beginning and ending
index. A sketch of how I read this idea is below. But I am a bit
worried about step 4), which varies the value of k many times.
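
To make Prateek's idea concrete (at least as I read it), here is a
small Python sketch. The index layout (one (set_id, min, max) triple
per sorted set) and the names find_sets_containing / read_set are my
own assumptions, not part of the original problem:

import bisect

def find_sets_containing(k, index, read_set):
    """index: list of (set_id, min_val, max_val), one entry per sorted set.
    read_set(set_id) -> the full sorted list of ints (the expensive read).
    Returns the ids of the sets that actually contain k."""
    hits = []
    for set_id, lo, hi in index:
        # Cheap check on the beginning/ending values only: if k falls
        # outside [lo, hi], the set cannot contain k, so we skip the
        # disk read entirely (the ~96% case above).
        if k < lo or k > hi:
            continue
        # Otherwise pay for the full read and binary-search for k
        # (the ~4% case).
        values = read_set(set_id)
        pos = bisect.bisect_left(values, k)
        if pos < len(values) and values[pos] == k:
            hits.append(set_id)
    return hits

# Tiny in-memory stand-in for the on-disk sets, just to exercise it.
sets = {0: [1, 5, 9], 1: [20, 30, 40], 2: [4, 8, 15]}
index = [(sid, vals[0], vals[-1]) for sid, vals in sets.items()]
print(find_sets_containing(8, index, lambda sid: sets[sid]))  # -> [2]

Note that set 0 still costs a full read for k = 8 (its range [1, 9]
covers 8 even though 8 is absent), which is why the 4% figure depends
on how the values are distributed across the sets.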

beelzebub mentioned "co-prime", interesting. I will think a bit more
about it.

