The string I am using is built as follows:

    x = "";
    while ( s.length < 850 * 1024 ){
        s += "x";
    }

If you call String::Utf8Value or String::AsciiValue on this string, it
takes about 2-3s to complete.
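In case it helps, this is roughly the shape of the C++ side I am timing
(a sketch rather than my exact harness; it assumes an entered context, a
handle to the ~850KB string built above, and a v8.h of this vintage where
String::AsciiValue still exists and Utf8Value does not take an isolate
argument):

    // Sketch: time the three v8::String conversion helpers on one string.
    #include <v8.h>
    #include <sys/time.h>
    #include <cstdio>

    static double NowMs() {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000.0 + tv.tv_usec / 1000.0;
    }

    void TimeConversions(v8::Handle<v8::String> str) {
        double t0 = NowMs();
        v8::String::Utf8Value utf8(str);    // UTF-8 copy: the slow case (~2-3s here)
        double t1 = NowMs();
        v8::String::AsciiValue ascii(str);  // ASCII copy: also ~2-3s
        double t2 = NowMs();
        v8::String::Value wide(str);        // UTF-16 ("wchar") copy: ~10ms
        double t3 = NowMs();
        printf("utf8 %.0fms  ascii %.0fms  wchar %.0fms\n",
               t1 - t0, t2 - t1, t3 - t2);
    }
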
But oddly enough, if I instead start with a huge ascii string and then
concatenate a few "x", the result seems to remain ascii and the
conversion takes no time.
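To be concrete, by "start with a huge ascii string" I mean something
along these lines (again just a sketch; the Array.join is only a stand-in
for however the big string actually gets produced in my app):

    // Build the bulk of the string in one step instead of ~870k separate
    // appends, then tack one more "x" on the end.
    s = new Array(850 * 1024 + 1).join("x");
    s += "x";   // this variant still converts to utf8/ascii quickly for me
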
thx

On Oct 14, 1:22 pm, ajg <[email protected]> wrote:
> This is a bit of a repeat of this thread:
> https://groups.google.com/group/v8-users/browse_thread/thread/66fef74...
> But my question is a bit different:
> when creating a string from JS code, does it default to ascii? Under
> what circumstances does it become ucs2?
> I am creating a string of one repeated ascii char, doing some
> concatenation to obtain a large string, and it seems to end up as ucs2.
>
> On Oct 14, 9:53 am, ajg <[email protected]> wrote:
>
> > I am looking to potentially switch my project from spidermonkey to v8.
> > v8 is quite a bit faster on most of my tests, but 10x slower on a
> > few.
> > The slow cases involve large strings (about 1MB) being converted from
> > JS to a C++ C string.
> > The string is pure ascii, and I was hoping it would be ascii by
> > default in JS.
> > But judging by the conversion times, it seems to be stored as wchar:
>
> > conversion to utf8: 2724ms
> > conversion to ascii: 2622ms
> > conversion to wchar: 10ms
>
> > I know that when creating a string from C++ I can force it to be ascii
> > in JS.
> > But is there a way to get strings created in JS to be ascii by
> > default?
> > As it is, the conversion is way too slow, and the string also takes 2x
> > the memory.
> > thanks for pointers
> > AG

-- 
v8-users mailing list
[email protected]
http://groups.google.com/group/v8-users
