I thought about a trie first, but then I changed my mind and decided I'd rather use a simple binary tree or even a sorted array.
Since the set of distinct first names and surnames is quite limited, we can easily index them, i.e. store them in a sorted array and keep an index into that array instead of the full name/surname (an array like this also compresses well). Then we store (surname_index, name_index) pairs in a sorted array and answer each request with a binary search. The lookup cost is:

log(S) - to find the surname index, where S is the number of distinct surnames
+ log(N) - to find the first-name index, where N is the number of distinct first names
+ log(10 000 000) - to find the (surname_index, name_index) pair

The memory consumption estimate is straightforward. Can you estimate the complexity and memory consumption of a trie? That would be interesting! (A rough sketch of the array-based idea is below the quoted thread.)

On Nov 9, 2:59 pm, "Varun S V" <[EMAIL PROTECTED]> wrote:
> I was asked the same question in my Google interview! The best
> solution for this is to use the TRIE data structure!
>
> Google "TRIE data structure" for more details. It also gives an
> optimized search complexity.
>
> On Nov 8, 2007 11:40 AM, Rajat Gogri <[EMAIL PROTECTED]> wrote:
>
> > If you have to implement a phone book of 10 million people in NYC, what
> > data structure would you use and why?
> > Show the implementation of HashTable or Binary Trees?
>
> > Thanks,
> > Raj
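Here is a minimal Python sketch of the array-based scheme, just to make the idea concrete. It assumes each record also carries a phone number, and the names PhoneBook and lookup are only illustrative, not from the thread:

import bisect

class PhoneBook:
    def __init__(self, entries):
        """entries: iterable of (surname, first_name, phone) tuples."""
        # Sorted arrays of distinct surnames and first names; every
        # record then refers to them by index instead of storing the
        # full strings again.
        self.surnames = sorted({s for s, _, _ in entries})
        self.first_names = sorted({n for _, n, _ in entries})
        # Records are (surname_index, name_index, phone), kept sorted
        # by the index pair so lookups are a single binary search.
        self.records = sorted(
            (bisect.bisect_left(self.surnames, s),
             bisect.bisect_left(self.first_names, n),
             phone)
            for s, n, phone in entries
        )

    def lookup(self, surname, first_name):
        """Return all phone numbers for the given full name."""
        si = bisect.bisect_left(self.surnames, surname)        # log(S)
        ni = bisect.bisect_left(self.first_names, first_name)  # log(N)
        if si == len(self.surnames) or self.surnames[si] != surname:
            return []
        if ni == len(self.first_names) or self.first_names[ni] != first_name:
            return []
        # log(10 000 000) search in the sorted record array; people who
        # share the same full name sit next to each other.
        lo = bisect.bisect_left(self.records, (si, ni))
        result = []
        for rec in self.records[lo:]:
            if rec[0] != si or rec[1] != ni:
                break
            result.append(rec[2])
        return result

# Tiny usage example:
# book = PhoneBook([("Smith", "John", "212-555-0101"),
#                   ("Smith", "John", "212-555-0177"),
#                   ("Doe", "Jane", "212-555-0102")])
# book.lookup("Smith", "John")  -> ['212-555-0101', '212-555-0177']

Memory-wise this stores each distinct surname and first name once, plus two small integers and a phone number per person, which is where the savings over a per-record string copy come from.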