Hi Ikai,

The purpose of this JSP page is to make my AJAX pages crawlable by 
following these guidelines: http://code.google.com/web/ajaxcrawling/

The JSP page parses the query parameters and calls an RPC function if 
those parameters are present; otherwise some default text is used for the 
results.  
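For context, under that scheme the crawler rewrites a hash-bang URL like 
`#!a=Foo&b=Bar` into a `_escaped_fragment_=a=Foo%26b=Bar` query parameter, 
and the server must decode it back. A minimal sketch of that reverse mapping 
(class and method names here are illustrative, not from my actual code):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class EscapedFragment {
    // Recover the original #! fragment from the crawler's
    // _escaped_fragment_ query parameter. Returns null when the
    // request did not come from the crawler.
    static String fragmentFromQuery(String queryString) {
        String prefix = "_escaped_fragment_=";
        if (queryString == null || !queryString.startsWith(prefix)) {
            return null; // plain request, or no query string at all
        }
        try {
            // The crawler percent-encodes the fragment; decode it back.
            return URLDecoder.decode(queryString.substring(prefix.length()),
                    "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        // Crawler fetches ?_escaped_fragment_=a=Foo%26b=Bar
        System.out.println(fragmentFromQuery("_escaped_fragment_=a=Foo%26b=Bar"));
    }
}
```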

I know this code is ugly and needs revising, but here it is:

<%@ page import="java.net.URLDecoder" %>
<%
    String pageA = "", pageB = "";
    String query = "";
    try {
        // getQueryString() returns null when there is no query string
        if (request.getQueryString() != null) {
            query = "?"
                    + URLDecoder.decode(request.getQueryString(), "UTF-8");
        }
    } catch (Exception e) {
        // malformed query: fall back to the empty string
    }

    String title = "Wiki-Hop People Search";
    String metaUrl = "";
    String description = "";
    String result = "... default result text ...";

    if (query.matches("^\\?!?/.*")) {
        query = query.replaceFirst("^\\?!?", "?_escaped_fragment_=");
    }

    if (query.matches("^\\?a=.*&b=.*")) {
        query = query.replace("?a=", "?_escaped_fragment_=");
        query = query.replace("&b=", "//");
        query = query.replace("+", "_");
    }

    String prefix = "?_escaped_fragment_=";
    if (query.startsWith(prefix)) {
        String anchor = query.substring(prefix.length()); // avoids the magic number 20

        if (anchor.matches("^.*//.*")) {
            String[] p = anchor.split("//", -1);
            if (p.length == 2) {
                if (p[0].startsWith("!"))
                    p[0] = p[0].substring(1);
                if (p[0].startsWith("/"))
                    p[0] = p[0].substring(1);
                pageA = p[0].replace("\\/\\/\\", "//")
                        .replace("_", " ").trim();
                pageB = p[1].replace("\\/\\/\\", "//")
                        .replace("_", " ").trim();
                SearchPathRequest search = new SearchPathRequest();
                try {
                    pageA = pageA.replace("<", "&#60;").replace(">",
                            "&#62;");
                    pageB = pageB.replace("<", "&#60;").replace(">",
                            "&#62;");
                    title = "Wiki-Hop: " + pageA + " // " + pageB;
                    metaUrl = "?/" + p[0].replace("\"", "%22") + "//"
                            + p[1].replace("\"", "%22");
                    description = ", from "
                            + pageA.replace("\"", "&#34;") + " to "
                            + pageB.replace("\"", "&#34;");
                    result = search.find(pageA, pageB);
                } catch (Exception e) {
                    result = "<span class='serverResponseLabelError'>"
                            + e.getMessage() + "</span>";
                }
            }
        }
    }
%>

In the above code I instantiate a new SearchPathRequest only if the URL 
contains a query with two pages.  All of the CPU usage I have reported is 
for a null query (in which case SearchPathRequest should not be 
instantiated and no search performed).  I am having a hard time seeing any 
correlation between the excessive CPU use and other events, such as the 
time since the last request.
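To illustrate that claim, here is the guard logic extracted in isolation 
(a hypothetical helper, not my actual servlet code): a null query string 
can never reach the branch that creates a SearchPathRequest.

```java
public class QueryGuard {
    // Returns true only when the query holds two pages separated by "//",
    // i.e. the only case where a SearchPathRequest would be created.
    static boolean shouldSearch(String queryString) {
        if (queryString == null) {
            return false; // null query: no search, no instantiation
        }
        String prefix = "?_escaped_fragment_=";
        String query = "?" + queryString;
        if (!query.startsWith(prefix)) {
            return false;
        }
        String anchor = query.substring(prefix.length());
        // Same two-part check as the JSP: split on "//" and require 2 pieces
        return anchor.split("//", -1).length == 2;
    }

    public static void main(String[] args) {
        System.out.println(shouldSearch(null));                      // false
        System.out.println(shouldSearch("_escaped_fragment_=A//B")); // true
    }
}
```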

Thanks for your help,
 -Erik

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to google-appengine@googlegroups.com.
To unsubscribe from this group, send email to 
google-appengine+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
