I don't really understand why the analysis suggests two different approaches for the "small" and "large" datasets. The only difference is that the small dataset contains strings composed only of the letters A and B, while the strings in the large dataset can be made of any of the 26 letters. But both datasets have the same constraint L <= 50.
This means that the brute-force solution works for both datasets (I just tried, and it takes 1 sec). The only difference is a constant factor of 13, because you need to compare an array of size 26 instead of just two variables. I noticed that during the contest and found it really weird. I did implement the O(n^2) solution for the contest anyway, just to be safe, but it seems it wasn't necessary.

So my question is: why two different analyses? It seems like the analysis and the data don't really match. Was there a miscommunication, and the large dataset was supposed to be larger?
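For what it's worth, here's a sketch of the kind of brute force I mean. The problem statement isn't quoted above, so the task here (counting pairs of strings with identical letter counts) and the helper names are my own illustration, but it shows why the two datasets cost nearly the same: the inner comparison is a 26-entry vector either way, just mostly zeros in the small case.

```python
from itertools import combinations

def letter_counts(s):
    # Build a 26-entry count vector. For the small dataset,
    # only indices 0 ('A') and 1 ('B') are ever nonzero, so the
    # comparison is the same O(26) work with more zeros.
    counts = [0] * 26
    for ch in s:
        counts[ord(ch) - ord('A')] += 1
    return counts

def count_matching_pairs(strings):
    # Brute force: compare every pair of count vectors.
    # With L <= 50, this is O(n^2 * 26) overall -- the alphabet
    # size only changes the constant factor, not the asymptotics.
    vecs = [letter_counts(s) for s in strings]
    return sum(1 for a, b in combinations(vecs, 2) if a == b)
```

With n in the hundreds and L <= 50, both datasets finish in well under a second, which matches what I observed during the contest.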
