Hi all:

   I came across a problem in Programming Pearls, as follows:

  Q: Given a very long sequence (say, billions or trillions) of bytes,
how would you efficiently count the total number of one bits?

  A: The first approach is to count the number of one bits in each
input unit (perhaps an 8-bit character or perhaps a 32-bit integer),
and sum them.  To find the number of one bits in a 16-bit integer, one
could look at each bit in order, or iterate through the bits that are
on, or perform a lookup in a table of bit counts.  What effect will
the cache size have on your choice of unit?
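
     For reference, here is how I understand the three per-unit
counting methods, sketched for a 16-bit unit.  This is my own code,
not the book's, and the names are just illustrative:

#include <stdint.h>

/* Method 1: look at each bit in order. */
int count_by_bit(uint16_t x)
{
    int n = 0;
    for (int i = 0; i < 16; i++)
        n += (x >> i) & 1;
    return n;
}

/* Method 2: iterate only through the bits that are on
   (each x &= x - 1 clears the lowest set bit). */
int count_by_set_bits(uint16_t x)
{
    int n = 0;
    while (x) {
        x &= x - 1;
        n++;
    }
    return n;
}

/* Method 3: look up a precomputed table of bit counts,
   here indexed by the two 8-bit halves of the unit. */
static unsigned char bit_count[256];

void init_table(void)      /* call once at startup */
{
    for (int i = 1; i < 256; i++)
        bit_count[i] = bit_count[i >> 1] + (i & 1);
}

int count_by_table(uint16_t x)
{
    return bit_count[x & 0xFF] + bit_count[x >> 8];
}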

     The second approach is to count the number of occurrences of
each input unit value in the input, and then at the end take the sum
of each such count multiplied by the number of one bits in that unit.
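
     My best guess at what this second approach means, sketched for
8-bit units, is the following (again my own code, and I am not sure
it is what the answer intends): first build a histogram of how often
each byte value occurs, then multiply each occurrence count by that
value's bit count and sum.

#include <stdint.h>
#include <stddef.h>

long long count_ones(const unsigned char *data, size_t len)
{
    long long hist[256] = {0};

    /* Pass 1: count how many times each byte value appears. */
    for (size_t i = 0; i < len; i++)
        hist[data[i]]++;

    /* Pass 2: for each of the 256 possible values, multiply its
       occurrence count by the number of one bits in that value. */
    long long total = 0;
    for (int v = 0; v < 256; v++) {
        int bits = 0;
        for (int b = v; b; b &= b - 1)
            bits++;
        total += hist[v] * bits;
    }
    return total;
}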


   My question is, I am puzzled by the solution.  What is the
difference between the two approaches?  In the second approach, what
quantities are multiplied, and is my guess above on the right track?

   Thanks!

