Excellent suggestions.
I will have to investigate a bit more how much text will be on the
different pages and whether I can cut down the number of names
returned in each XML file. (I might also be able to get the list of
names as a JSON feed.)


I was also thinking that the routine you built could be converted into
a search-term highlighter.
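
For what it's worth, here's a rough sketch of what that highlighter
might look like in plain JavaScript (the names `highlightTerms` and
`escapeRegExp` are just mine, not from any library; escaping the
pattern also keeps terms with regex metacharacters from breaking):

```javascript
// Escape regex metacharacters so a term like "C++" matches literally.
function escapeRegExp(s) {
  return s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}

// Wrap each case-insensitive match of each term in a <span>,
// preserving the original capitalization via the $& back-reference.
function highlightTerms(html, terms) {
  terms.forEach(function (term) {
    var rx = new RegExp(escapeRegExp(term), 'gi');
    html = html.replace(rx, '<span class="hl">$&</span>');
  });
  return html;
}
```

The same loop could then be dropped into the jQuery routine in place
of the name-linking replace.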

Thanks,
Stephen

On Feb 24, 12:05 pm, Dave Stewart <[EMAIL PROTECTED]>
wrote:
> As well, when your name list gets rather large, I'm sure other pre-
> processing routines could speed things up.
> For example, you could split the XML list into 26 sections, based on
> First Name, then test the html for which capital letters it contains.
> You then only loop through again for those names which contain those
> capital letters.
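
(Sketching that pre-filtering idea in plain JavaScript -- the
function names here are mine, and I'm assuming names always start
with a capital letter:)

```javascript
// Group names by their first letter, e.g. {S: [...], F: [...]}.
function bucketByInitial(names) {
  var buckets = {};
  names.forEach(function (name) {
    var initial = name.charAt(0).toUpperCase();
    (buckets[initial] = buckets[initial] || []).push(name);
  });
  return buckets;
}

// Only return the names whose initial capital letter actually
// appears somewhere in the html -- the rest can be skipped entirely.
function candidateNames(html, names) {
  var buckets = bucketByInitial(names);
  var result = [];
  Object.keys(buckets).forEach(function (letter) {
    if (html.indexOf(letter) !== -1) {
      result = result.concat(buckets[letter]);
    }
  });
  return result;
}
```

You would then run the replace loop over `candidateNames(html, names)`
instead of the full list.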
>
> I'm sure there's lots of other optimization routines out there too,
> like B-Trees (http://en.wikipedia.org/wiki/B-tree) and what have you.
>
> On Feb 24, 4:21 pm, sspboyd <[EMAIL PROTECTED]> wrote:
>
> > Dave, That is great! Thanks.
> > I've tested it out with an array of 1000 names as a worst case
> > scenario and it is pretty slow. I'll have to see about refining the
> > list of names if possible to keep it as small as possible.
>
> > Thanks again,
> > Stephen
>
> > On Feb 23, 7:16 pm, Dave Stewart <[EMAIL PROTECTED]>
> > wrote:
>
> > > Heh heh, this is cool!
>
> > > var names = ['Stephen Boyd', 'Fred Von Brown'];
> > > $("body").each(function () {
> > >     var html = $(this).html();
> > >     $(names).each(function (i, e) {
> > >         var rx = new RegExp(e, 'gi');
> > >         html = html.replace(rx,
> > >             '<a href="javascript:void(0);" onclick="alert(this.innerHTML)">' + e + '</a>');
> > >     });
> > >     $(this).html(html);
> > > });
