Re: [webkit-dev] ARM JIT for WinCE

2010-01-07 Thread Zoltan Herczeg
Hi Patrick,

hm, I feel I found something. Please have a look at
JavaScriptCore/jit/JITOpcodes.cpp : privateCompileCTIMachineTrampolines.
The second one, compiled when JSVALUE32_64 is disabled. If JIT_OPTIMIZE_NATIVE_CALL
is enabled, specialized code is generated to call native built-in
functions (like Date.toString). This code for ARM is around line 1733.
Perhaps the WinCE ABI passes the arguments differently than GCC does. The
faulting address according to your call stack is 0x003e01d4, which is the
call(Address(regT1, OBJECT_OFFSETOF(JSFunction, m_data))); macro
assembler instruction at line 1768. (Thank you for sending the instruction
dump.) Please try to fix this code according to the WinCE ABI, since I am not
sure JIT_OPTIMIZE_NATIVE_CALL can be disabled.

Regards
Zoltan

 Hi Gabor,

 Thanks for your prompt reply.

 Make sure your assembler does not break ctiVMThrowTrampoline
 and ctiOpThrowNotCaught functions. This approach requires that
 ctiVMThrowTrampoline falls through to ctiOpThrowNotCaught
 after the 'bl cti_vm_throw' call. Or you can simply copy the body of
 ctiOpThrowNotCaught into ctiVMThrowTrampoline after the
 call.
 I've copied it, but I think it's unnecessary (see disassembly)

 Did you do anything with DEFINE_STUB_FUNCTION macro?
 I've done it like for the RVCT compiler. (e.g. see cti_op_end in
 disassembly)

 When I run jsc.exe tests\mozilla\ecma_2\shell.js it crashes with the
 following callstack:
 0x
 jsc.EXE!JSC::JSCell::inherits(JSC::ClassInfo* info = 0x00189818) Line:
 335,
 Byte Offsets: 0x2c
 jsc.EXE!JSC::JSValue::inherits(JSC::ClassInfo* classInfo = 0x00189818)
 Line:
 345, Byte Offsets: 0x40
 jsc.EXE!JSC::dateProtoFuncGetTimezoneOffset(JSC::ExecState* exec =
 0x00601b60,
 JSC::JSObject* __formal = 0x00601b40, JSC::JSValue thisValue = {...},
 JSC::ArgList __formal = {...}) Line: 764, Byte Offsets: 0x1c
 0x003e01d4

 Is there a better javascript file to start with? When I enter a simple
 1+2+3
 into the interactive jsc.exe it prints the correct result.

 Here are some parts of the disassembly:

 // Execute the code!
 inline JSValue execute(RegisterFile* registerFile, CallFrame*
 callFrame, JSGlobalData* globalData, JSValue* exception)
 {
 000A7868  mov r12, sp
 000A786C  stmdb   sp!, {r0 - r3}
 000A7870  stmdb   sp!, {r12, lr}
 000A7874  sub sp, sp, #0x20
 return
 JSValue::decode(ctiTrampoline(m_ref.m_code.executableAddress(),
 registerFile,
 callFrame, exception, Profiler::enabledProfilerReference(), globalData));
 000A7878  bl  |JSC::Profiler::enabledProfilerReference ( 1b2e0h )|
 000A787C  str r0, [sp, #0x14]
 000A7880  ldr r0, this
 000A7884  bl  |WTF::RefPtr<JSC::Profile>::operator-> ( d2e3ch )|
 000A7888  str r0, [sp, #0x18]
 000A788C  ldr r3, globalData
 000A7890  str r3, [sp, #4]
 000A7894  ldr r3, [sp, #0x14]
 000A7898  str r3, [sp]
 000A789C  ldr r3, exception
 000A78A0  ldr r2, callFrame
 000A78A4  ldr r1, registerFile
 000A78A8  ldr r0, [sp, #0x18]
 000A78AC  bl  0014A000
 000A78B0  str r0, [sp, #0x1C]
 000A78B4  ldr r1, [sp, #0x1C]
 000A78B8  ldr r0, [sp, #0x2C]
 000A78BC  bl  |JSC::JSValue::decode ( 1b94ch )|
 000A78C0  ldr r3, [sp, #0x2C]
 000A78C4  str r3, [sp, #0x10]
 }
 000A78C8  ldr r0, [sp, #0x10]
 000A78CC  add sp, sp, #0x20
 000A78D0  ldmia   sp, {sp, pc}

 

 ctiTrampoline:
 0014A000  stmdb   sp!, {r1 - r3}
 0014A004  stmdb   sp!, {r4 - r8, lr}
 0014A008  sub sp, sp, #0x24
 0014A00C  mov r4, r2
 0014A010  mov r5, #2, 24
 0014A014  mov lr, pc
 0014A018  bx  r0    // r0 = 0x003e0270
 0014A01C  add sp, sp, #0x24
 0014A020  ldmia   sp!, {r4 - r8, lr}
 0014A024  add sp, sp, #0xC
 0014A028  bx  lr
 ctiVMThrowTrampoline:
 0014A02C  mov r0, sp
 0014A030  bl  0014A6D4
 0014A034  add sp, sp, #0x24
 0014A038  ldmia   sp!, {r4 - r8, lr}
 0014A03C  add sp, sp, #0xC
 0014A040  bx  lr
 ctiOpThrowNotCaught:
 0014A044  add sp, sp, #0x24
 0014A048  ldmia   sp!, {r4 - r8, lr}
 0014A04C  add sp, sp, #0xC
 0014A050  bx  lr
 cti_op_convert_this:
 0014A054  str lr, [sp, #0x20]
 0014A058  bl  |JITStubThunked_op_convert_this ( ae718h )|
 0014A05C  ldr lr, [sp, #0x20]
 0014A060  bx  lr
 cti_op_end:
 0014A064  str lr, [sp, #0x20]
 0014A068  bl  |JITStubThunked_op_end ( ae878h )|
 0014A06C  ldr lr, [sp, #0x20]
 0014A070  bx  lr

 

 003E017C  mov pc, r0
 003E0180  mov r0, lr
 003E0184  str r0, [r4, #-0x14]
 003E0188  ldr r1, [r4, 

[webkit-dev] webkit-patch

2010-01-07 Thread Adam Barth
In the wee hours of the morning, I renamed bugzilla-tool to
webkit-patch to reflect the tool's general purpose of helping
contributors manage their patches.  I also renamed a bunch of the
common commands to make more sense with this name.  Here's a brief
summary of my three favorite commands:

* webkit-patch upload

This command automates the process of uploading a patch for review.
After you're done writing the code, the command will create a bug for
you (or let you pick an existing bug), prepare a ChangeLog, let you
edit the ChangeLog, and then upload the patch to the bug and mark it
for review.

* webkit-patch land

This command will help you land a patch that's been reviewed.  The
command will fill in the reviewer from the bug, build WebKit, and run
the LayoutTests.  If the tests pass, the command will land your patch.

* webkit-patch land-from-bug

This command does everything that land does except it downloads the
reviewed patch from the bug and applies it to your working copy first.
 If the bug has more than one patch, it will land all the reviewed
patches in the order they appear in the bug.

You can find out about these and other commands by running
webkit-patch --help.  If you'd like more details about a command
(e.g., land), run webkit-patch help land.

These commands still need some amount of polish, especially upload,
which is the newest.  If you have suggestions for improving
webkit-patch, please let me, Eric Seidel, or David Kilzer know.

Happy patching!
Adam
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] ARM JIT for WinCE

2010-01-07 Thread Patrick Roland Gansterer
Hi,

Many thanks! It already works when I disable OPTIMIZE_NATIVE_CALL (the other 3 
OPTIMIZE flags are turned on). I think you're right about the ABI problem. Maybe you 
can help me with it too: here are the instruction dumps with and without 
OPTIMIZE_NATIVE_CALL:

==
== #define OPTIMIZE_NATIVE_CALL = 1 ==
==

003E0100  ldr r8, [r2, #8] 
003E0104  cmp r8, #0 
003E0108  bgt 003E012C 
003E010C  mov r7, lr 
003E0110  mov r0, sp 
003E0114  str r4, [sp, #0x40] 
003E0118  mov lr, pc 
003E011C  ldr pc, [pc, #0x128] 
003E0120  ldr r1, [sp, #0xC] 
003E0124  mov lr, r7 
003E0128  ldr r2, [r0, #0x18] 
003E012C  ldr r8, [r2, #8] 
003E0130  cmp r8, r1 
003E0134  beq 003E0160 
003E0138  mov r7, lr 
003E013C  str r7, [sp, #8] 
003E0140  mov r0, sp 
003E0144  str r4, [sp, #0x40] 
003E0148  mov lr, pc 
003E014C  ldr pc, [pc, #0x100] 
003E0150  mov r4, r1 
003E0154  ldr r1, [sp, #0xC] 
003E0158  mov lr, r7 
003E015C  ldr r2, [r0, #0x18] 
003E0160  str r1, [r4, #-0xC] 
003E0164  ldr r1, [r0, #0x1C] 
003E0168  ldr r8, [pc, #0xE8] 
003E016C  str r8, [r4, #-4] 
003E0170  str r0, [r4, #-8] 
003E0174  str r1, [r4, #-0x1C] 
003E0178  ldr r0, [r2, #0xC] 
003E017C  mov pc, r0 
003E0180  mov r0, lr 
003E0184  str r0, [r4, #-0x14] 
003E0188  ldr r1, [r4, #-0x18] 
003E018C  ldr r1, [r1, #-0x1C] 
003E0190  str r1, [r4, #-0x1C] 
003E0194  ldr r0, [r4, #-0xC] 
003E0198  subs sp, sp, #8 
003E019C  subs r0, r0, #1 
003E01A0  str r0, [sp, #4] 
003E01A4  mov r1, r4 
003E01A8  subs r1, r1, #0x20 
003E01AC  mov r3, #4 
003E01B0  muls r0, r3, r0 
003E01B4  subs r1, r1, r0 
003E01B8  str r1, [sp] 
003E01BC  ldr r2, [r1, #-4] 
003E01C0  ldr r1, [r4, #-8] 
003E01C4  mov r0, r4 
003E01C8  mov r3, sp 
003E01CC  mov lr, pc 
003E01D0  ldr pc, [r1, #0x1C] 
// R0 = 0x003f8080 R1 = 0x00601780 R2 = 0x00601760 R3 = 0x182af984
// R4 = 0x003f8080 R5 = 0x0200 R6 = 0x0060 R7 = 0x003e07b8
// R8 = 0x R9 = 0x182afbfc R10 = 0x R11 = 0x002b0370
// R12 = 0x182af8f0 Sp = 0x182af984 Lr = 0x003e01d4
// Pc = 0x00073468 Psr = 0x201f
003E01D4  adds sp, sp, #8 
003E01D8  ldr r3, [pc, #0x7C] 
003E01DC  ldr r2, [r3] 
003E01E0  bics r3, r2, #0 
003E01E4  bne 003E01F8 
003E01E8  ldr r1, [r4, #-0x14] 
003E01EC  ldr r4, [r4, #-0x18] 
003E01F0  mov lr, r1 
003E01F4  mov pc, lr 
003E01F8  ldr r1, [r4, #-0x14] 
003E01FC  ldr r2, [pc, #0x60] 
003E0200  str r1, [r2] 
003E0204  ldr r2, [pc, #0x5C] 
003E0208  ldr r4, [r4, #-0x18] 
003E020C  str r4, [sp, #0x40] 
003E0210  mov lr, r2 
003E0214  mov pc, lr 

==

JSValue JSC_HOST_CALL dateProtoFuncGetTimezoneOffset(ExecState* exec, 
JSObject*, JSValue thisValue, const ArgList&)
{
00073468  mov r12, sp 
0007346C  stmdb   sp!, {r0 - r3} 
00073470  stmdb   sp!, {r4, r12, lr} 
00073474  sub sp, sp, #0x1C 
if (!thisValue.inherits(DateInstance::info))
00073478  ldr r1, [pc, #0x100] 
// R0 = 0x003f8080 R1 = 0x00601780 R2 = 0x00601760 R3 = 0x182af984
// R4 = 0x003f8080 R5 = 0x0200 R6 = 0x0060 R7 = 0x003e07b8
// R8 = 0x R9 = 0x182afbfc R10 = 0x R11 = 0x002b0370
// R12 = 0x182af984 Sp = 0x182af94c Lr = 0x003e01d4 
// Pc = 0x00073478 Psr = 0x201f 
0007347C  add r0, sp, #0x34 
00073480  bl  |JSC::JSValue::inherits ( 6997ch )| 
00073484  strb r0, [sp, #0xC] 
00073488  ldrb r3, [sp, #0xC] 
0007348C  cmp r3, #0 
00073490  bne |JSC::dateProtoFuncGetTimezoneOffset + 0x54 ( 734bch )| 
return throwError(exec, TypeError);
00073494  mov r1, #5 
00073498  ldr r0, exec 
0007349C  bl  |JSC::throwError ( 5dd78h )| 
000734A0  str r0, [sp, #0x10] 
000734A4  ldr r1, [sp, #0x10] 
000734A8  ldr r0, [sp, #0x28] 
000734AC  bl  |
WTF::OwnArrayPtr<JSC::Register>::OwnArrayPtr<JSC::Register> ( 110e8h )| 
000734B0  ldr r3, [sp, #0x28] 
000734B4  str r3, [sp, #8] 
000734B8  b   |JSC::dateProtoFuncGetTimezoneOffset + 0x100 ( 73568h )| 

DateInstance* thisDateObj = asDateInstance(thisValue); 
000734BC  ldr r0, thisValue 
000734C0  bl  |JSC::asRegExpConstructor ( 697b8h )| 
000734C4  str r0, [sp, 

[webkit-dev] Accept Header and application/xml

2010-01-07 Thread Tobias Tom
Hello,

While developing our web-application framework we implemented content 
negotiation, based on the accept header. In all webkit based browsers in my 
tests (Safari, Chrome) the accept header was the same, so I assume it's right 
to ask the question here. If it's not please forgive me and give me a hint into 
the right direction. 

The default accept header of webkit seems to be
application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5

Which has as a result that content of the type application/xml and xhtml+xml 
are the most preferred ones. 

Is there any reason why text/html has a lower quality than application/xml? I 
can understand that xhtml should be preferred over html, but WebKit 
itself is not able to render application/xml in a human-readable format anyway 
(no offense here, I think that's really OK). It is able to render text/html. 

My real-life problem here is the following: A resource is available in a 
(machine-optimized) XML format and in a (human-optimized) HTML format under the 
same URL. WebKit will always get the XML version. Is there any reason for that? 

From what I understand, removing application/xml from the header seems the most 
reasonable solution. Currently I cannot see any sense in this value because, 
as mentioned above, WebKit isn't even able to render it for humans. 
Even if it could, e.g. like Gecko, shouldn't HTML be the format of choice for 
humans?

Thank you very much for your comments.
Tobias


Re: [webkit-dev] elementFromPoint in webkit

2010-01-07 Thread Simon Fraser
On Jan 7, 2010, at 3:12 AM, Ritesh Ranjan wrote:

 Hi Sam ,
 This is regarding 
 https://bugs.webkit.org/attachment.cgi?id=39511&action=edit 
 I am facing a similar problem. 
 For me, at times elementFromPoint doesn't return the exact node. 
 Any clue?

Please file a new bug with a testcase that shows the problem you are seeing.

Simon



[webkit-dev] Fwd: Accept Header and application/xml

2010-01-07 Thread Tobias Tom
Hey Alexey,

2010/1/7 Alexey Proskuryakov a...@webkit.org:

 07.01.2010, в 07:53, Tobias Tom написал(а):

 My real-life problem here is the following: A resource is available in 
 a (machine-optimized) XML format and in a (human-optimized) HTML format under 
 the same URL. WebKit will always get the XML version. Is there any reason 
 for that?

 Historically, this is a result of mimicking what Firefox did at some point.

Thanks for the information.

 They don't do that any more, so I think that WebKit should prefer text/html, 
 too. As of version 3.5, Firefox sends (for main resources):

 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8

 https://bugs.webkit.org/show_bug.cgi?id=27267 tracks this problem.

I fully agree that this would help everyone. Sorry for not researching
the issue first.

 I can understand the fact that xhtml should be preferred before html, but 
 Webkit itself is not able to render application/xml in a human format anyway 
 (no offense here, I think that's really ok).

 Actually, WebKit can render plain XML if it has an appropriate XSL stylesheet 
 (e.g. one that converts it to HTML, or to SVG). Also, XHTML or SVG can be 
 sent as application/xml. But anyway, we should probably just mimic Firefox 
 again.

Of course you are right; still, for most XML files there's none – but
that's another topic.

Thank you for your time.
Tobias


Re: [webkit-dev] Possible bug in elementFromPoint-relative-to-viewport LayoutTest or not?

2010-01-07 Thread Afonso Costa
On 01/06/2010 08:30 PM, Darin Adler wrote:
 On Jan 6, 2010, at 3:13 PM, Afonso Costa wrote:

   
 Is there any thing wrong or not?
 
 I think the zoomOrNot argument and if statement are both unnecessary now that 
 bug 30034 is fixed, but the test is otherwise OK.

 -- Darin


   
Hi Darin,

thanks for replying.

I was looking at bug 30689, and the values inside the if statement
were changed to the same as those in the else. That was my doubt, and I
suspected there was something wrong in that block of code.

Anyway, I've created a patch removing the zoomOrNot argument, but I
don't know if I need to create a bugzilla entry for it or just put it in
a place where someone can review it.

Br,

-- 
Afonso R. Costa Jr.
openBossa Labs 
Instituto Nokia de Tecnologia - INdT



Re: [webkit-dev] Possible bug in elementFromPoint-relative-to-viewport LayoutTest or not?

2010-01-07 Thread Darin Adler
On Jan 7, 2010, at 10:13 AM, Afonso Costa wrote:

 Anyway, I've created a patch removing the zoomOrNot argument, but I
 don't know if I need to create a bugzilla entry for it or just put it in
 a place where someone can review it.

A bugzilla entry for it is the best place for someone to review it.

-- Darin



[webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Dimitri Glazkov
Are we planning to run pixel tests on the build bots? What's the
general opinion here?

We're running them over at Chromium and it seems like a really good
idea. Case in point:

Change http://trac.webkit.org/changeset/52900 broke a bunch of layout
tests, all pixel results, and as such didn't register on the
waterfall.

I rolled out the change for now.

:DG


Re: [webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Darin Adler
On Jan 7, 2010, at 10:19 AM, Dimitri Glazkov wrote:

 Are we planning to run pixel tests on the build bots?

If we can get them green, we should. It’s a lot of work. We need a volunteer to 
do that work. We’ve tried before.

-- Darin



Re: [webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Dirk Schulze
It would be great to have pixel tests back on a bot. And it would be great
if the commit queue ran them too, especially for patches from
non-committers.

-Dirk

Am Donnerstag, den 07.01.2010, 10:19 -0800 schrieb Dimitri Glazkov:
 Are we planning to run pixel tests on the build bots? What's the
 general opinion here?
 
 We're running them over at Chromium and it seems like a really good
 idea. Case in point:
 
 Change http://trac.webkit.org/changeset/52900 broke a bunch of layout
 tests, all pixel results, and as such didn't register on the
 waterfall.
 
 I rolled out the change for now.
 
 :DG


Re: [webkit-dev] Blacklisting some sqlite functions

2010-01-07 Thread Darin Fisher
On Thu, Jan 7, 2010 at 10:02 AM, Brady Eidson beid...@apple.com wrote:


 On Jan 6, 2010, at 2:55 PM, Dumitru Daniliuc wrote:

 while doing a security review of chromium's implementation of HTML5 DBs,
 chris noted that some sqlite functions are potential security risks. thus,
 we would like to blacklist them (or rather, have a list of whitelisted
 functions). currently, WebCore's sqlite authorizer allows all functions, but
 has a FIXME comment that wonders what the right thing to do is
 (WebCore/storage/DatabaseAuthorizer.cpp:281).


 This code has long since shipped and although it hasn't achieved what you
 might call wide use, a widespread change in behavior might break important
 clients.

 That said... the less function surface area available, the better.

 here are the functions we'd like to whitelist:
 http://www.sqlite.org/lang_corefunc.html: all of them, except
 load_extension(), random() and randomblob() (once we fix some layout tests
 that currently use randomblob()).


 No argument on disallowing load_extension().

 Are random() and randomblob() security risks?  Could you point us to a
 source explaining this?

 http://www.sqlite.org/lang_datefunc.html: all of them

 http://www.sqlite.org/lang_aggfunc.html: all of them


 Seems okay.

 in addition to these standard functions, we'd like to whitelist some
 functions from a few extensions chromium uses:
 full text search (fts2.c): whitelist snippet(), offsets(), optimize(), but
 not fts2_tokenizer().
 unicode data (icu.c): whitelist regexp(), lower(), upper(), like(), but not
 icu_load_collation().


 Is there any reason these are still Chromium-only?  Even though we're
 having problems getting different vendors to agree on SQL dialect issues
 with the spec, I think we should make an effort to keep WebKit unified.


Yes, it would be great if all WebKit based browsers supported these same
sqlite features.
-Darin




 I'm also going to forward this message on to some of our security
 colleagues at Apple, and we might have more feedback shortly.

 ~Brady


 any objection?

 thanks,
 dumi




Re: [webkit-dev] Blacklisting some sqlite functions

2010-01-07 Thread Adam Barth
On Thu, Jan 7, 2010 at 10:02 AM, Brady Eidson beid...@apple.com wrote:
 Are random() and randomblob() security risks?  Could you point us to a
 source explaining this?

They're fairly low risk, but you tend to leak a surprising amount of
information when you expose non-cryptographic random sources to
attackers.  We've already gotten a rather detailed report of the leaks
from Math.random, for example.  If these functions are useful, we can
keep them, but it does cost some amount of attack surface.

Adam


[webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Mike Belshe
Hi -

I've been working on SPDY, but I think I may have found a good performance
win for HTTP.  Specifically, the PreloadScanner, which is responsible for
scanning ahead within an HTML document to find subresources, is throttled
today.  The throttling is intentional and probably sometimes necessary.
 Nonetheless, un-throttling it may lead to a 5-10% performance boost in some
configurations.  I believe Antti is no longer working on this?  Is there
anyone else working in this area that might have data on how aggressive the
PreloadScanner should be?  Below I'll describe some of my tests.

The PreloadScanner throttling happens in a couple of ways.  First, the
PreloadScanner only runs when we're blocked on JavaScript (see
HTMLTokenizer.cpp).  But further, as it discovers resources to be fetched,
it may delay or reject loading the subresource at all due to throttling in
loader.cpp and DocLoader.cpp.  The throttling is very important, depending
on the implementation of the HTTP networking stack, because throwing too
many resources (or the low-priority ones) into the network stack could
adversely affect HTTP load performance.  This latter problem does not impact
my Chromium tests, because the Chromium network stack does its own
prioritization and throttling (not too dissimilar from the work done by
loader.cpp).

*Theory*:
The theory I'm working under is that when the RTT of the network is
sufficiently high, the *best* thing the browser can do is to discover
resources as quickly as possible and pass them to the network layer so that
we can get started with fetching.  This is not speculative - these are
resources which will be required to render the full page.   The SPDY
protocol is designed around this concept - allowing the browser to schedule
all resources it needs to the network (rather than being throttled by
connection limits).  However, even with SPDY enabled, WebKit itself prevents
resource requests from fully flowing to the network layer in 3 ways:
   a) loader.cpp orders requests and defers requests based on the state of
the page load and a number of criteria.
   b) HTMLTokenizer.cpp only looks for resources further in the body when
we're blocked on JS
   c) preload requests are treated specially (docloader.cpp); if they are
discovered too early by the tokenizer, then they are either queued or
discarded.

*Test Case*
Can aggressive preload scanning (e.g., always running the preload scan before
parsing an HTML document) improve page load time?

To test this, I'm calling the PreloadScanner basically as the first part of
HTMLTokenizer::write().  I've then removed all throttling from loader.cpp
and DocLoader.cpp.  I've also instrumented the PreloadScanner to measure its
effectiveness.

*Benchmark Setup*
Windows client (chromium).
Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0% packet
loss.
I run through a set of 25 URLs, loading each 30 times; not recycling any
connections and clearing the cache between each page.
These are running over HTTP; there is no SPDY involved here.

*Results:*
(Baseline is without my changes.)

                               Baseline   Unthrottled  Notes
Average PLT                    2377ms     2239ms       +5.8% latency redux.
Time in the PreloadScanner     1160ms     4540ms       As expected, we spend about 4x more time in
                                                       the PreloadScanner. In this test, we loaded
                                                       750 pages, so it is about 6ms per page. My
                                                       machine is fast, though.
Preload Scripts discovered     2621       9440         4x more scripts discovered
Preload CSS discovered         348        1022         3x more CSS discovered
Preload Images discovered      11952      39144        3x more images discovered
Preload items throttled        9983       0
Preload Complete hits          3803       6950         This is the count of items which were
                                                       completely preloaded before WebKit even
                                                       tried to look them up in the cache. This is
                                                       pure goodness.
Preload Partial hits           1708       7230         These are partial hits, where the item had
                                                       already started loading, but not finished,
                                                       before WebKit tried to look them up.
Preload Unreferenced           421        30           These are bad and the count should be zero.
                                                       I'll try to find them and see if there isn't
                                                       a fix - the PreloadScanner is just sometimes
                                                       finding resources that are never used. It is
                                                       likely due to clever JS which changes the DOM.



*Conclusions:*
For this network speed/client processor, more aggressive PreloadScanning
clearly is a win.   More testing is needed for slower machines and other
network types.  I've tested many network types; the aggressive preload
scanning seems to always be either a win or a wash; for very slow network
connections, where we're already at capacity, the extra CPU burning is
basically free.  For super fast networks, with very low RTT, it also appears
to be a wash.  The networks in the middle (including mobile simulations) see
nice gains.

*Next Steps and Questions:*
I'd like to land my changes so that we can continue to gather data.  I can
enable these via macro definitions or via dynamic settings.  I can then
try to do more A/B testing.

Are there any existing web pages which the WebKit team would like tested
under these configurations?  I don't see a lot of testing 

Re: [webkit-dev] Blacklisting some sqlite functions

2010-01-07 Thread Dumitru Daniliuc

 in addition to these standard functions, we'd like to whitelist some
 functions from a few extensions chromium uses:
 full text search (fts2.c): whitelist snippet(), offsets(), optimize(), but
 not fts2_tokenizer().
 unicode data (icu.c): whitelist regexp(), lower(), upper(), like(), but not
 icu_load_collation().


 Is there any reason these are still Chromium-only?  Even though we're
 having problems getting different vendors to agree on SQL dialect issues
 with the spec, I think we should make an effort to keep WebKit unified.


FTS and ICU are sqlite standard extensions that live in the sqlite tree.
Chromium compiles its own sqlite library and includes these 2 extensions;
I'm not sure if they're included in WebKitLibraries/libWebCoreSQLite3.a
though.


 I'm also going to forward this message on to some of our security
 colleagues at Apple, and we might have more feedback shortly.


great, thanks!

dumi


Re: [webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Maciej Stachowiak


On Jan 7, 2010, at 12:09 PM, Mike Belshe wrote:


Hi -

I've been working on SPDY, but I think I may have found a good  
performance win for HTTP.  Specifically, the PreloadScanner,  
which is responsible for scanning ahead within an HTML document to  
find subresources, is throttled today.  The throttling is  
intentional and probably sometimes necessary.  Nonetheless,  
un-throttling it may lead to a 5-10% performance boost in some  
configurations.  I believe Antti is no longer working on this?  Is  
there anyone else working in this area that might have data on how  
aggressive the PreloadScanner should be?  Below I'll describe some  
of my tests.


The PreloadScanner throttling happens in a couple of ways.  First,  
the PreloadScanner only runs when we're blocked on JavaScript (see  
HTMLTokenizer.cpp).  But further, as it discovers resources to be  
fetched, it may delay or reject loading the subresource at all due  
to throttling in loader.cpp and DocLoader.cpp.  The throttling is  
very important, depending on the implementation of the HTTP  
networking stack, because throwing too many resources (or the  
low-priority ones) into the network stack could adversely affect HTTP  
load performance.  This latter problem does not impact my Chromium  
tests, because the Chromium network stack does its own  
prioritization and throttling (not too dissimilar from the work done  
by loader.cpp).


The reason we do this is to prevent head-of-line blocking by  
low-priority resources inside the network stack (mainly considering how  
CFNetwork / NSURLConnection works).




Theory:
The theory I'm working under is that when the RTT of the network is  
sufficiently high, the *best* thing the browser can do is to  
discover resources as quickly as possible and pass them to the  
network layer so that we can get started with fetching.  This is not  
speculative - these are resources which will be required to render  
the full page.   The SPDY protocol is designed around this concept -  
allowing the browser to schedule all resources it needs to the  
network (rather than being throttled by connection limits).   
However, even with SPDY enabled, WebKit itself prevents resource  
requests from fully flowing to the network layer in 3 ways:
   a) loader.cpp orders requests and defers requests based on the  
state of the page load and a number of criteria.
   b) HTMLTokenizer.cpp only looks for resources further in the body  
when we're blocked on JS
   c) preload requests are treated specially (docloader.cpp); if  
they are discovered too early by the tokenizer, then they are either  
queued or discarded.


I think your theory is correct when SPDY is enabled, and possibly when  
using HTTP with pipelining. It may be true to a lesser extent with  
non-pipelining HTTP implementations when the network stack does its own  
prioritization and throttling, by reducing latency in getting the  
request to the network stack. This is especially so when issuing a  
network request to the network stack may involve significant latency  
due to IPC or cross-thread communication or the like.




Test Case
Can aggressive preloadscanning (e.g. always preload scan before  
parsing an HTML Document) improve page load time?


To test this, I'm calling the PreloadScanner basically as the first  
part of HTMLTokenizer::write().  I've then removed all throttling  
from loader.cpp and DocLoader.cpp.  I've also instrumented the  
PreloadScanner to measure its effectiveness.


Benchmark Setup
Windows client (chromium).
Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0%  
packet loss.
I run through a set of 25 URLs, loading each 30 times; not recycling  
any connections and clearing the cache between each page.

These are running over HTTP; there is no SPDY involved here.


I'm interested in the following:

- What kind of results do you get in Safari?
- How much of this effect is due to more aggressive preload scanning  
and how much is due to disabling throttling? Since the test includes  
multiple logically independent changes, it is hard to tell which are  
the ones that had an effect.




Results:
(Baseline is without my changes.)

                               Baseline   Unthrottled  Notes
Average PLT                    2377ms     2239ms       +5.8% latency redux.
Time in the PreloadScanner     1160ms     4540ms       As expected, we spend about 4x more time in
                                                       the PreloadScanner. In this test, we loaded
                                                       750 pages, so it is about 6ms per page. My
                                                       machine is fast, though.
Preload Scripts discovered     2621       9440         4x more scripts discovered
Preload CSS discovered         348        1022         3x more CSS discovered
Preload Images discovered      11952      39144        3x more images discovered
Preload items throttled        9983       0
Preload Complete hits          3803       6950         This is the count of items which were
                                                       completely preloaded before WebKit even
                                                       tried to look them up in the cache. This is
                                                       pure goodness.
Preload Partial hits           1708       7230         These are partial hits, where the  
item had already started 

Re: [webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Joe Mason
I don't think every port should be required to implement prioritization
and throttling itself - that's just duplication of effort.  Maybe
there's a good middle-ground, where PreloadScanner is run more often but
still does the priority sorting?



Joe



From: webkit-dev-boun...@lists.webkit.org
[mailto:webkit-dev-boun...@lists.webkit.org] On Behalf Of Mike Belshe
Sent: Thursday, January 07, 2010 3:09 PM
To: webkit-dev@lists.webkit.org
Subject: [webkit-dev] PreloadScanner aggressiveness



Hi -


I've been working on SPDY, but I think I may have found a good
performance win for HTTP.  Specifically, the PreloadScanner, which is
responsible for scanning ahead within an HTML document to find
subresources, is throttled today.  The throttling is intentional and
probably sometimes necessary.  Nonetheless, un-throttling it may lead to
a 5-10% performance boost in some configurations.  I believe Antti is no
longer working on this?  Is there anyone else working in this area that
might have data on how aggressive the PreloadScanner should be?  Below
I'll describe some of my tests.



The PreloadScanner throttling happens in a couple of ways.  First, the
PreloadScanner only runs when we're blocked on JavaScript (see
HTMLTokenizer.cpp).  But further, as it discovers resources to be
fetched, it may delay or reject loading the subresource at all due to
throttling in loader.cpp and DocLoader.cpp.  The throttling is very
important, depending on the implementation of the HTTP networking stack,
because throwing too many resources (or the low-priority ones) into the
network stack could adversely affect HTTP load performance.  This latter
problem does not impact my Chromium tests, because the Chromium network
stack does its own prioritization and throttling (not too dissimilar
from the work done by loader.cpp).



Theory:

The theory I'm working under is that when the RTT of the network is
sufficiently high, the *best* thing the browser can do is to discover
resources as quickly as possible and pass them to the network layer so
that we can get started with fetching.  This is not speculative - these
are resources which will be required to render the full page.   The SPDY
protocol is designed around this concept - allowing the browser to
schedule all resources it needs to the network (rather than being
throttled by connection limits).  However, even with SPDY enabled,
WebKit itself prevents resource requests from fully flowing to the
network layer in 3 ways:

   a) loader.cpp orders requests and defers requests based on the state
of the page load and a number of criteria.

   b) HTMLTokenizer.cpp only looks for resources further in the body
when we're blocked on JS

   c) preload requests are treated specially (docloader.cpp); if they
are discovered too early by the tokenizer, then they are either queued
or discarded.
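For illustration only, the three gates above amount to a decision of the following shape. The names, limits, and structure here are hypothetical (none of them exist in WebKit); the real logic is spread across loader.cpp, DocLoader.cpp, and HTMLTokenizer.cpp.

```cpp
#include <cassert>

// Hypothetical model of the gating described in (a)-(c): a subresource
// discovered by the preload scanner is issued, deferred, or discarded
// depending on parser state and how many requests are already in flight.
enum class Decision { Issue, Defer, Discard };

struct PreloadGate {
    bool discoveredTooEarly = false; // (c): found before the tokenizer is ready
    int requestsInFlight = 0;        // (a): loader.cpp defers under load
    int maxInFlight = 6;             // illustrative per-host connection limit

    Decision decide() const
    {
        if (discoveredTooEarly)
            return Decision::Discard; // or queued, per (c)
        if (requestsInFlight >= maxInFlight)
            return Decision::Defer;   // deferred, per (a)
        return Decision::Issue;
    }
};
```

Un-throttling, in these terms, means collapsing decide() to always return Issue and letting the network stack do its own prioritization.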



Test Case

Can aggressive preloadscanning (e.g. always preload scan before parsing
an HTML Document) improve page load time?



To test this, I'm calling the PreloadScanner basically as the first part
of HTMLTokenizer::write().  I've then removed all throttling from
loader.cpp and DocLoader.cpp.  I've also instrumented the PreloadScanner
to measure its effectiveness.



Benchmark Setup

Windows client (chromium).

Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0%
packet loss.

I run through a set of 25 URLs, loading each 30 times; not recycling any
connections and clearing the cache between each page.

These are running over HTTP; there is no SPDY involved here.



Results (Baseline without my changes vs. Unthrottled):

Average PLT: 2377ms vs. 2239ms (+5.8% latency redux.)

Time spent in the PreloadScanner: 1160ms vs. 4540ms. As expected, we spend
about 4x more time in the PreloadScanner. In this test, we loaded 750
pages, so it is about 6ms per page. My machine is fast, though.

Preload Scripts discovered: 2621 vs. 9440 (4x more scripts discovered)

Preload CSS discovered: 348 vs. 1022 (3x more CSS discovered)

Preload Images discovered: 11952 vs. 39144 (3x more images discovered)

Preload items throttled: 9983 vs. 0

Preload Complete hits: 3803 vs. 6950. This is the count of items which were
completely preloaded before WebKit even tried to look them up in the cache.
This is pure goodness.

Preload Partial hits: 1708 vs. 7230. These are partial hits, where the item
had already started loading, but not finished, before WebKit tried to look
them up.

Preload Unreferenced: 42 vs. 130. These are bad and the count should be
zero. I'll try to find them and see if there isn't a fix - the
PreloadScanner is just sometimes finding resources that are never used. It
is likely due to clever JS which changes the DOM.







Conclusions:

For this network speed/client processor, more aggressive PreloadScanning
clearly is a win.   More testing is needed for slower machines and other
network types.  I've tested many network types; the aggressive preload
scanning seems to always be either a win or a wash; for very slow

[webkit-dev] Question about PopupMenuClient

2010-01-07 Thread Kenneth Christiansen
Hi there,

A co-worker of mine is implementing the ability to reimplement how popup
menus (comboboxes) etc will appear from our Qt WebKit API.

As these UI delegates are per page, we thought about adding a method to our
ChromeClientQt like AbstractPopupMenu* createPopupMenu(PopupMenuClient*
client), but currently PopupMenuClient has no access to the page so that we
can get hold of that.

I guess that it is not safe to assume that PopupMenuClient::hostWindow() is
always a Chrome, so would it be acceptable substituting the
PopupMenuClient::hostWindow()  with a PopupMenuClient::page() [1]  method
and then change the uses to page->chrome()?

Cheers,
Kenneth

[1] easily implemented as return document()->page() instead of return
document()->view()->hostWindow()
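A minimal sketch of the substitution being proposed, with stripped-down stand-ins for the WebCore types. These classes are hypothetical skeletons (the real ones carry far more state); the point is only the shape of the proposed page() accessor.

```cpp
#include <cassert>

// Hypothetical, stripped-down versions of the WebCore types involved.
struct Chrome {};

struct Page {
    Chrome* chrome() { return &m_chrome; }
    Chrome m_chrome;
};

struct Document {
    explicit Document(Page* p) : m_page(p) {}
    Page* page() { return m_page; }
    Page* m_page;
};

// Today PopupMenuClient exposes hostWindow(); the proposal is to expose
// page() instead, "easily implemented as return document()->page()".
struct PopupMenuClient {
    explicit PopupMenuClient(Document* d) : m_document(d) {}
    Document* document() { return m_document; }
    Page* page() { return document()->page(); } // proposed accessor
    Document* m_document;
};
```

A port's chrome client would then be reachable as client->page()->chrome(), without having to assume the host window is a Chrome.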

-- 
Kenneth Rohde Christiansen
Technical Lead / Senior Software Engineer
Qt Labs Americas, Nokia Technology Institute, INdT
Phone  +55 81 8895 6002 / E-mail kenneth.christiansen at openbossa.org
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Question about PopupMenuClient

2010-01-07 Thread Darin Fisher
Chromium actually does something like this.  See ChromeClientChromium.

In our case, we have two modes:  one where the embedder just has to wrap the
given PopupContainer with a window frame, and in another mode, where the
embedder is responsible for drawing the popup menu.

We are forced to delegate out to the embedding application because of
Chromium's sandbox, which prevents WebKit from directly communicating with
the native widget system (cannot directly create an HWND for example).

In the case where the embedder only supplies framing, the popup menu is
drawn by embedding a ScrollView.  Then, all of the drawing happens in
WebCore.  You can see this code in PopupMenuChromium.cpp.

-Darin



On Thu, Jan 7, 2010 at 2:44 PM, Kenneth Christiansen 
kenneth.christian...@openbossa.org wrote:

 Hi there,

 A co-worker of mine is implementing the ability to reimplement how popup
 menus (comboboxes) etc will appear from our Qt WebKit API.

 As these UI delegates are per page, we thought about adding a method to our
 ChromeClientQt like AbstractPopupMenu* createPopupMenu(PopupMenuClient*
 client), but currently PopupMenuClient has no access to the page so that we
 can get hold of that.

 I guess that it is not safe to assume that PopupMenuClient::hostWindow() is
 always a Chrome, so would it be acceptable substituting the
 PopupMenuClient::hostWindow()  with a PopupMenuClient::page() [1]  method
 and then change the uses to page->chrome()?

 Cheers,
 Kenneth

 [1] easily implemented as return document()->page() instead of return
 document()->view()->hostWindow()

 --
 Kenneth Rohde Christiansen
 Technical Lead / Senior Software Engineer
 Qt Labs Americas, Nokia Technology Institute, INdT
 Phone  +55 81 8895 6002 / E-mail kenneth.christiansen at openbossa.org





Re: [webkit-dev] Question about PopupMenuClient

2010-01-07 Thread Adam Treat
On Thursday 07 January 2010 05:44:46 pm Kenneth Christiansen wrote:
 I guess that it is not safe to assume that PopupMenuClient::hostWindow() is
 always a Chrome, so would it be acceptable substituting the

Why do you say this? 

Maybe add a virtual like:

virtual PlatformPopupMenu platformPopupMenu() const = 0; 

to HostWindow.h ?

Much like Qt already returns QWebPageClient for 'platformPageClient()' ?  
Maybe you can even re-use this QWebPageClient for the popup menu delegation 
too?

Cheers,
Adam



Re: [webkit-dev] ARM JIT for WinCE

2010-01-07 Thread Patrick Roland Gansterer
Hi,

I did some further investigation today.

I did a quick hack in privateCompileCTIMachineTrampolines to get the same
(maybe correct) register values as without JIT_OPTIMIZE_NATIVE_CALL.

 move(callFrameRegister, regT0);

+move(ARMRegisters::r2, ARMRegisters::r3);
+move(ARMRegisters::r1, ARMRegisters::r2);
+move(ARMRegisters::r0, ARMRegisters::r1);
-move(stackPointerRegister, ARMRegisters::r3);
+move(stackPointerRegister, ARMRegisters::r0);
-call(Address(regT1, OBJECT_OFFSETOF(JSFunction, m_data)));
+call(Address(regT2, OBJECT_OFFSETOF(JSFunction, m_data)));
 
 addPtr(Imm32(sizeof(ArgList)), stackPointerRegister);

Now it produces the following code:

003E01B0  muls    r0, r3, r0 
003E01B4  subs    r1, r1, r0 
003E01B8  str     r1, [sp] 
003E01BC  ldr     r2, [r1, #-4] 
003E01C0  ldr     r1, [r4, #-8] 
003E01C4  mov     r0, r4 
003E01C8  mov     r3, r2 
003E01CC  mov     r2, r1 
003E01D0  mov     r1, r0 
003E01D4  mov     r0, sp 
003E01D8  mov     lr, pc 
003E01DC  ldr     pc, [r2, #0x1C] 
003E01E0  adds    sp, sp, #8 
003E01E4  ldr     r3, [pc, #0x80] 
003E01E8  ldr     r2, [r3] 
003E01EC  bics    r3, r2, #0 
003E01F0  bne     003E0204 

The arguments seem to be sane now in the call to 
dateProtoFuncGetTimezoneOffset, but it crashes afterwards.
When I step through it with the debugger I get the following registers after 
the function finished and it jumps to 0x000139d8 instead of 0x003e01e0:
(lr = 0x003e01e0 when I enter the function!)

R0 = 0x182af984 R1 = 0x003f8054 R2 = 0x00601500 R3 = 0x0060
R4 = 0x003f8054 R5 = 0x0200 R6 = 0x182af984 R7 = 0x003f8054
R8 = 0x R9 = 0x182afbfc R10 = 0x R11 = 0x002b0370
R12 = 0x182af8f0 Sp = 0x182af95c Lr = 0x003e01e0 
Pc = 0x000139d8 Psr = 0x201f 

I then tried to return jsNaN(exec) always. So R4 won't be used and 
prolog/epilog changed:

00071600  mov r12, sp 
00071604  stmdb   sp!, {r0 - r3} 
00071608  stmdb   sp!, {r4, r12, lr} 
0007160C  sub sp, sp, #0x1C 

00071700  ldr r0, [sp, #8] 
00071704  add sp, sp, #0x1C 
00071708  ldmia   sp, {r4, sp, pc} 

changed to

000734EC  mov r12, sp 
000734F0  stmdb   sp!, {r0 - r3} 
000734F4  stmdb   sp!, {r12, lr} 
000734F8  sub sp, sp, #0x1C 

000735A4  ldr r0, [sp, #8] 
000735A8  add sp, sp, #0x1C 
000735AC  ldmia   sp, {sp, pc} 

I now get following registers and it jumps to the correct address 
(0x003e01e0), but it crashes then in functionPrint.

R0 = 0x182af984 R1 = 0x182af8f8 R2 = 0x R3 = 0x182af984
R4 = 0x003f8080 R5 = 0x0200 R6 = 0x0060 R7 = 0x003e07c8
R8 = 0x R9 = 0x182afbfc R10 = 0x R11 = 0x002b0370
R12 = 0x03fc2c50 Sp = 0x182af984 Lr = 0x0001bc18 
Pc = 0x003e01e0 Psr = 0x601f

I tried jsc.exe with the following javascript file:
print(getTimeZoneDiff());
function getTimeZoneDiff() { 
return (new Date(2000, 1, 1)).getTimezoneOffset();
}

This doesn't make much sense to me at the moment.

- Patrick


Re: [webkit-dev] [webkit-changes] [52439] trunk/WebCore

2010-01-07 Thread Eric Seidel
I think the git svn diff one is a good fix.  Such could start out as a
wrapper like the svn-create-patch we use for SVN.  We could even just
teach svn-create-patch how to do the right thing for git. :)

-eric

On Wed, Dec 23, 2009 at 4:34 AM, Evan Martin e...@chromium.org wrote:
 On Mon, Dec 21, 2009 at 12:20 PM, David Kilzer ddkil...@webkit.org wrote:
 Setting [diff] renames = copies in ~/.gitconfig or in your .git/config 
 file for each project will make git diff try to do rename detection when 
 creating a patch.  (You may also use --find-copies-harder or 
 --find-copies-harder -C switches on the command line.)  This will provide 
 hints in the git diff about file renames, but it still only uses a 
 heuristic, and svn-apply currently doesn't know about these hints:
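For reference, the setting David describes is a standard one-line git configuration entry; it would look like this in the file:

```ini
# ~/.gitconfig (global) or a project's .git/config
[diff]
    renames = copies
```

With it set, git diff attempts rename and copy detection by default, equivalent to passing -C on the command line.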

 This sort of thing has been a persistent problem for Chrome as well.

 Since our code review tool and our trybot also rely on SVN-specific
 features (including stuff like revprops, as well as the way it handles
 new files and renames), we are already doing work in multiple places
 to extend these tools to either understand git-style diffs or produce
 SVN-style diffs from Git.

 See for example GetMungedDiff in:
 http://src.chromium.org/viewvc/chrome/trunk/tools/depot_tools/git-try?revision=34087&view=markup

 One option I've been considering is extending git-svn to include a
 git svn diff that produces an SVN-style patch.  That would fix
 this problem at the source, at the cost of needing to retrain everyone
 to use it when submitting WebKit patches.  :\



Re: [webkit-dev] DOM Touch Event Support

2010-01-07 Thread Eric Seidel
I think David Kilzer may be your best bet for getting feedback on
things like this.  Or at least he would know who to point you to get
feedback.

-eric

2009/12/9 Simon Hausmann hausm...@kde.org:
 Hi,

 Kim and I have been looking into DOM/JavaScript touch event support (see
 https://bugs.webkit.org/show_bug.cgi?id=32114 ). After some research we
 concluded that the best option for the interface is the de-facto standard
 shipped by the iPhone, Android and Palm-Pre.

 So we took the BSD licensed IDL files from Android's eclaire branch, simple
 PlatformTouchEvent and PlatformTouchPoint abstraction classes, a Qt
 implementation for these and some basic layout tests to test the values and
 behaviour of the JS events.

 We've uploaded the patches for review and would love to get feedback on them.


 Simon



Re: [webkit-dev] Bot redness :-(

2010-01-07 Thread Eric Seidel
I went through again last night and recorded all known flakey tests as bugs.

I related them all to this bug:
https://bugs.webkit.org/showdependencytree.cgi?id=33296&hide_resolved=1

which is about teaching webkit-patch and the commit-queue to block
commits on more bots than it does now.  Right now the commit-queue
won't commit when a subset of bots are red, and webkit-patch land
will warn for the same subset.  I'm interested in making that set
include all builders over time once we can make them all reliable.

-eric

On Tue, Dec 8, 2009 at 1:16 PM, Maciej Stachowiak m...@apple.com wrote:

 There's been some persistent bot redness the past few days. Here's the
 current problems I see:

 - storage/change-version-handle-reuse.html crashed on the Leopard Intel
 Debug bot - apparent threadsafety bug
 - some media tests are sporadically failing on SnowLeopard Intel Release
 - many storage tests failed on SnowLeopard Intel Leaks
 - fast/js/lastModified.html crashed on SnowLeopard Intel Leaks
 - various SVG, DOM and JS failures on Windows
 - fast/js/method-check.html crashing on Windows
 - websocket/tests/cross-origin.html crashing on Windows

 Is anyone looking into these?

 Regards,
 Maciej




Re: [webkit-dev] Blacklisting some sqlite functions

2010-01-07 Thread Chris Evans
On Thu, Jan 7, 2010 at 11:13 AM, Adam Barth aba...@webkit.org wrote:

 On Thu, Jan 7, 2010 at 10:02 AM, Brady Eidson beid...@apple.com wrote:
  Are random() and randomblob() security risks?  Could you point us to a
  source explaining this?

 They're fairly low risk, but you tend to leak a surprising amount of
 information when you expose non-cryptographic random sources to
 attackers.  We've already gotten a rather detailed report of the leaks
 from Math.random, for example.  If these functions are useful, we can
 keep them, but it does cost some amount of attack surface.


[reposting with my @chromium.org address]

I'd prefer to have JavaScript going to just one source of random. For now,
Math.random(). It makes a lot of things simpler in the future. Perhaps one
day all the browsers will adopt a standard secure random API.
I think Apple Safari was the only browser to adjust their Math.random()
implementation based on this report?
http://www.trusteer.com/files/Temporary_User_Tracking_in_Major_Browsers.pdf
It's not serious at all, but is interesting.

Anyway, I think we get better options for the future by not randomly adding
more sources of randomness available to JavaScript.


Cheers
Chris


 Adam



Re: [webkit-dev] Blacklisting some sqlite functions

2010-01-07 Thread Chris Evans
On Thu, Jan 7, 2010 at 12:13 PM, Dumitru Daniliuc d...@chromium.org wrote:

 in addition to these standard functions, we'd like to whitelist some
 functions from a few extensions chromium uses:
 full text search (fts2.c): whitelist snippet(), offsets(), optimize(), but
 not fts2_tokenizer().
 unicode data (icu.c): whitelist regexp(), lower(), upper(), like(), but
 not icu_load_collation().


 Is there any reason these are still Chromium-only?  Even though we're
 having problems getting different vendors to agree on SQL dialect issues
 with the spec, I think we should make an effort to keep WebKit unified.


 FTS and ICU are sqlite standard extensions that live in the sqlite tree.
 Chromium compiles its own sqlite library and includes these 2 extensions;
 I'm not sure if they're included in WebKitLibraries/libWebCoreSQLite3.a
 though.


[resending from @chromium.org]

Probably not. Last time I tested Safari for a sqlite-related security bug, I
didn't see any fts or icu extensions.
Chromium uses icu and fts2 (even though there is a newer fts3 module). These
modules are useful to offline apps that want powerful text indexing and
search.

Isn't it every WebKit consumer's own job to deal with which version of
sqlite to use? This is unfortunate because there are a lot of factors here.
For example, we get grief on Linux for not linking to the system sqlite
library. However one of the problems with doing that is that these sqlite
extensions are compile-time options that are often simply not compiled in.
etc. etc.


Cheers
Chris




 I'm also going to forward this message on to some of our security
 colleagues at Apple, and we might have more feedback shortly.


 great, thanks!

 dumi



Re: [webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Nikolas Zimmermann


Am 07.01.2010 um 19:19 schrieb Dimitri Glazkov:


Are we planning to run pixel tests on the build bots? What's the
general opinion here?

We're running them over at Chromium and it seems like a really good
idea. Case in point:

Change http://trac.webkit.org/changeset/52900 broke a bunch of layout
tests, all pixel results, and as such didn't register on the
waterfall.

I rolled out the change for now.


I'd also love to see pixel tests again; otherwise we have to rely on Dirk
and me running SVG pixel tests on a regular basis to find these
regressions. Just checking the -expected.txt files is not sufficient for
SVG.

Though enabling pixel tests for all layout tests will be a lot of work,
as Darin already pointed out. How about we start only with svg/ pixel
tests? Getting SVG pixel tests working across the ports would be a huge
leap forward.

Cheers,
Niko




[webkit-dev] Did I break the build?

2010-01-07 Thread Eric Seidel
http://build.webkit.org/console

Will let you know.

The tree has been very red today, and even redder yesterday.  I'm
working on yet another bot to help with this...

-eric


Re: [webkit-dev] Did I break the build?

2010-01-07 Thread Eric Seidel
Oh, btw, you should see green all the way across, always.

I'm currently chasing down the tests which are making some of the bots
less reliable.

The leaves of 
https://bugs.webkit.org/showdependencytree.cgi?id=33296&hide_resolved=1
are all the bot reliability issues I've seen in the last 12 hours. :(

-eric

On Thu, Jan 7, 2010 at 4:21 PM, Eric Seidel e...@webkit.org wrote:
 http://build.webkit.org/console

 Will let you know.

 The tree has been very red today, and even redder yesterday.  I'm
 working on yet another bot to help with this...

 -eric



Re: [webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Mike Belshe
On Thu, Jan 7, 2010 at 12:49 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jan 7, 2010, at 12:09 PM, Mike Belshe wrote:

 Hi -

 I've been working on SPDY, but I think I may have found a good performance
 win for HTTP.  Specifically, the PreloadScanner, which is responsible for
 scanning ahead within an HTML document to find subresources, is throttled
 today.  The throttling is intentional and probably sometimes necessary.
  Nonetheless, un-throttling it may lead to a 5-10% performance boost in some
 configurations.  I believe Antti is no longer working on this?  Is there
 anyone else working in this area that might have data on how aggressive the
 PreloadScanner should be?  Below I'll describe some of my tests.

 The PreloadScanner throttling happens in a couple of ways.  First, the
 PreloadScanner only runs when we're blocked on JavaScript (see
 HTMLTokenizer.cpp).  But further, as it discovers resources to be fetched,
 it may delay or reject loading the subresource at all due to throttling in
 loader.cpp and DocLoader.cpp.  The throttling is very important, depending
 on the implementation of the HTTP networking stack, because throwing too
 many resources (or the low-priority ones) into the network stack could
 adversely affect HTTP load performance.  This latter problem does not impact
 my Chromium tests, because the Chromium network stack does its own
 prioritization and throttling (not too dissimilar from the work done by
 loader.cpp).


 The reason we do this is to prevent head-of-line blocking by low-priority
 resources inside the network stack (mainly considering how CFNetwork /
 NSURLConnection works).


Right - understood.




 *Theory*:
 The theory I'm working under is that when the RTT of the network is
 sufficiently high, the *best* thing the browser can do is to discover
 resources as quickly as possible and pass them to the network layer so that
 we can get started with fetching.  This is not speculative - these are
 resources which will be required to render the full page.   The SPDY
 protocol is designed around this concept - allowing the browser to schedule
 all resources it needs to the network (rather than being throttled by
 connection limits).  However, even with SPDY enabled, WebKit itself prevents
 resource requests from fully flowing to the network layer in 3 ways:
a) loader.cpp orders requests and defers requests based on the state of
 the page load and a number of criteria.
b) HTMLTokenizer.cpp only looks for resources further in the body when
 we're blocked on JS
c) preload requests are treated specially (docloader.cpp); if they are
 discovered too early by the tokenizer, then they are either queued or
 discarded.


 I think your theory is correct when SPDY is enabled, and possibly when
 using HTTP with pipelining. It may be true to a lesser extent with
 non-pipelining HTTP implementations when the network stack does its own
 prioritization and throttling, by reducing latency in getting the request to
 the network stack.


right.


 This is especially so when issuing a network request to the network stack
 may involve significant latency due to IPC or cross-thread communication or
 the like.


I hadn't considered IPC or cross thread latencies.  When I've measured these
in the past they are very very low.  One problem with the single-threaded
nature of our preloader and parser right now is that if the HTMLTokenizer is
in the middle of executing JS code, we're not doing anything to scan for
preloads; tons of data can be flowing in off the network which we're
oblivious to.  I'm not trying to change this for now, though, it's much more
involved, I think, due to thread safety requirements for the webcore cache.




 *Test Case*
 Can aggressive preloadscanning (e.g. always preload scan before parsing an
 HTML Document) improve page load time?

 To test this, I'm calling the PreloadScanner basically as the first part of
 HTMLTokenizer::write().  I've then removed all throttling from loader.cpp
 and DocLoader.cpp.  I've also instrumented the PreloadScanner to measure its
 effectiveness.

 *Benchmark Setup*
 Windows client (chromium).
 Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0% packet
 loss.
 I run through a set of 25 URLs, loading each 30 times; not recycling any
 connections and clearing the cache between each page.
 These are running over HTTP; there is no SPDY involved here.


 I'm interested in the following:

 - What kind of results do you get in Safari?


I've not done much benchmarking in Safari; do you have a good way to do
this?  Is there something I can read about or tools I can use?

For chromium, I use the benchmarking extension which lets me run through
lots of pages quickly.



 - How much of this effect is due to more aggressive preload scanning and
 how much is due to disabling throttling? Since the test includes multiple
 logically indpendent changes, it is hard to tell which are the ones that had
 an effect.


Great 

Re: [webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Mike Belshe
On Thu, Jan 7, 2010 at 12:52 PM, Joe Mason jma...@rim.com wrote:

  I don’t think every port should be required to implement prioritization
 and throttling itself – that’s just duplication of effort.

I agree.  I wasn't thinking of turning this on globally; rather thinking
about how to turn it on selectively for ports that want it.

Mike



 Maybe there’s a good middle-ground, where PreloadScanner is run more often
 but still does the priority sorting?



 Joe



 *From:* webkit-dev-boun...@lists.webkit.org [mailto:
 webkit-dev-boun...@lists.webkit.org] *On Behalf Of *Mike Belshe
 *Sent:* Thursday, January 07, 2010 3:09 PM
 *To:* webkit-dev@lists.webkit.org
 *Subject:* [webkit-dev] PreloadScanner aggressiveness



 Hi -


 I've been working on SPDY, but I think I may have found a good performance
 win for HTTP.  Specifically, the PreloadScanner, which is responsible for
 scanning ahead within an HTML document to find subresources, is throttled
 today.  The throttling is intentional and probably sometimes necessary.
  Nonetheless, un-throttling it may lead to a 5-10% performance boost in some
 configurations.  I believe Antti is no longer working on this?  Is there
 anyone else working in this area that might have data on how aggressive the
 PreloadScanner should be?  Below I'll describe some of my tests.



 The PreloadScanner throttling happens in a couple of ways.  First, the
 PreloadScanner only runs when we're blocked on JavaScript (see
 HTMLTokenizer.cpp).  But further, as it discovers resources to be fetched,
 it may delay or reject loading the subresource at all due to throttling in
 loader.cpp and DocLoader.cpp.  The throttling is very important, depending
 on the implementation of the HTTP networking stack, because throwing too
 many resources (or the low-priority ones) into the network stack could
 adversely affect HTTP load performance.  This latter problem does not impact
 my Chromium tests, because the Chromium network stack does its own
 prioritization and throttling (not too dissimilar from the work done by
 loader.cpp).



 *Theory*:

 The theory I'm working under is that when the RTT of the network is
 sufficiently high, the *best* thing the browser can do is to discover
 resources as quickly as possible and pass them to the network layer so that
 we can get started with fetching.  This is not speculative - these are
 resources which will be required to render the full page.   The SPDY
 protocol is designed around this concept - allowing the browser to schedule
 all resources it needs to the network (rather than being throttled by
 connection limits).  However, even with SPDY enabled, WebKit itself prevents
 resource requests from fully flowing to the network layer in 3 ways:

a) loader.cpp orders requests and defers requests based on the state of
 the page load and a number of criteria.

b) HTMLTokenizer.cpp only looks for resources further in the body when
 we're blocked on JS

c) preload requests are treated specially (docloader.cpp); if they are
 discovered too early by the tokenizer, then they are either queued or
 discarded.



 *Test Case*

 Can aggressive preloadscanning (e.g. always preload scan before parsing an
 HTML Document) improve page load time?



 To test this, I'm calling the PreloadScanner basically as the first part of
 HTMLTokenizer::write().  I've then removed all throttling from loader.cpp
 and DocLoader.cpp.  I've also instrumented the PreloadScanner to measure its
 effectiveness.



 *Benchmark Setup*

 Windows client (chromium).

 Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0% packet
 loss.

 I run through a set of 25 URLs, loading each 30 times; not recycling any
 connections and clearing the cache between each page.

 These are running over HTTP; there is no SPDY involved here.



 *Results:* (Baseline without my changes vs. Unthrottled)

 Average PLT: 2377ms vs. 2239ms (+5.8% latency redux.)

 Time spent in the PreloadScanner: 1160ms vs. 4540ms. As expected, we spend
 about 4x more time in the PreloadScanner. In this test, we loaded 750
 pages, so it is about 6ms per page. My machine is fast, though.

 Preload Scripts discovered: 2621 vs. 9440 (4x more scripts discovered)

 Preload CSS discovered: 348 vs. 1022 (3x more CSS discovered)

 Preload Images discovered: 11952 vs. 39144 (3x more images discovered)

 Preload items throttled: 9983 vs. 0

 Preload Complete hits: 3803 vs. 6950. This is the count of items which were
 completely preloaded before WebKit even tried to look them up in the cache.
 This is pure goodness.

 Preload Partial hits: 1708 vs. 7230. These are partial hits, where the item
 had already started loading, but not finished, before WebKit tried to look
 them up.

 Preload Unreferenced: 42 vs. 130. These are bad and the count should be
 zero. I'll try to find them and see if there isn't a fix - the
 PreloadScanner is just sometimes finding resources that are never used. It
 is likely due to clever JS 

Re: [webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Ojan Vafai
On Thu, Jan 7, 2010 at 10:22 AM, Darin Adler da...@apple.com wrote:

 On Jan 7, 2010, at 10:19 AM, Dimitri Glazkov wrote:
  Are we planning to run pixel tests on the build bots?

 If we can get them green, we should. It’s a lot of work. We need a
 volunteer to do that work. We’ve tried before.


Two possible long-term solutions come to mind:
1. Turn the bots orange on pixel failures. They still need fixing, but are
not as severe as text diff failures. I'm not a huge fan of this, but it's an
option.
2. Add a concept of expected failures and only turn the bots red for
*unexpected* failures. More details on this below.

In chromium-land, there's an expectations file that lists expected failures
and allows for distinguishing different types of failures (e.g. IMAGE vs.
TEXT). It's like Skipped lists, but doesn't necessarily skip the test.
Fixing the expected failures still needs doing of course, but can be done
asynchronously. The primary advantage of this approach is that we can turn
on pixel tests, keep the bots green and avoid further regressions.

Would something like that make sense for WebKit as a whole? To be clear, we
would be nearly as loath to add tests to this file as we are to add them to
the Skipped lists. This just provides a way forward.

While it's true that the bots used to be red more frequently with pixel
tests turned on, for the most part, there weren't significant pixel
regressions. Now, if you run the pixel tests on a clean build, there are a
number of failures and a very large number of hash-mismatches that are
within the failure tolerance level.

-Ojan

For reference, the format of the expectations file is something like this:

// Fails the image diff but not the text diff.
fast/forms/foo.html = IMAGE

// Fails just the text diff.
fast/forms/bar.html = TEXT

// Fails both the image and text diffs.
fast/forms/baz.html = IMAGE+TEXT

// Skips this test (e.g. because it hangs run-webkit-tests or causes
// other tests to fail).
SKIP : fast/forms/foo1.html = IMAGE
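For illustration, the format above is simple enough to parse mechanically. A minimal sketch (this is my own illustration, not the actual run_webkit_tests.py implementation, and the function and field names are mine):

```python
# Illustrative parser for the expectations format sketched above.
# Not the real run_webkit_tests.py code.

def parse_expectations(text):
    """Map test path -> (modifiers, list of expected outcomes)."""
    expectations = {}
    for line in text.splitlines():
        line = line.split('//')[0].strip()   # drop comments and blank lines
        if not line:
            continue
        modifiers = []
        if ':' in line:                       # e.g. "SKIP : path = IMAGE"
            mods, line = line.split(':', 1)
            modifiers = mods.split()
        test, outcomes = (part.strip() for part in line.split('='))
        expectations[test] = (modifiers, outcomes.split('+'))  # IMAGE+TEXT etc.
    return expectations

sample = """
// Fails the image diff but not the text diff.
fast/forms/foo.html = IMAGE
fast/forms/baz.html = IMAGE+TEXT
SKIP : fast/forms/foo1.html = IMAGE
"""
print(parse_expectations(sample))
```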
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Eric Seidel
I'm totally in favor of adding test_expectations.txt-like
functionality to WebKit (and we'll get it for free when Dirk finishes
upstreaming run_webkit_tests.py).

But the troubles with the pixel tests in the past were more to do with
text metrics changing between OS releases, and individual font
differences between machines.  I suspect that those issues are very
solvable.

I think we mostly need someone willing to set up the pixel test bots.

-eric

On Thu, Jan 7, 2010 at 5:01 PM, Ojan Vafai o...@chromium.org wrote:
 On Thu, Jan 7, 2010 at 10:22 AM, Darin Adler da...@apple.com wrote:

 On Jan 7, 2010, at 10:19 AM, Dimitri Glazkov wrote:
  Are we planning to run pixel tests on the build bots?

 If we can get them green, we should. It’s a lot of work. We need a
 volunteer to do that work. We’ve tried before.

 Two possible long-term solutions come to mind:
 1. Turn the bots orange on pixel failures. They still need fixing, but are
 not as severe as text diff failures. I'm not a huge fan of this, but it's an
 option.
 2. Add in a concept of expected failures and only turn the bots red for
 *unexpected* failures. More details on this below.
 In chromium-land, there's an expectations file that lists expected failures
 and allows for distinguishing different types of failures (e.g. IMAGE vs.
 TEXT). It's like Skipped lists, but doesn't necessarily skip the test.
 Fixing the expected failures still needs doing of course, but can be done
 asynchronously. The primary advantage of this approach is that we can turn
 on pixel tests, keep the bots green and avoid further regressions.
 Would something like that make sense for WebKit as a whole? To be clear, we
 would be nearly as loathe to add tests to this file as we are about adding
 them to the Skipped lists. This just provides a way forward.
 While it's true that the bots used to be red more frequently with pixel
 tests turned on, for the most part, there weren't significant pixel
 regressions. Now, if you run the pixel tests on a clean build, there are a
 number of failures and a very large number of hash-mismatches that are
 within the failure tolerance level.
 -Ojan
 For reference, the format of the expectations file is something like this:
 // Fails the image diff but not the text diff.
 fast/forms/foo.html = IMAGE
 // Fails just the text diff.
 fast/forms/bar.html = TEXT
 // Fails both the image and text diffs.
 fast/forms/baz.html = IMAGE+TEXT
 // Skips this test (e.g. because it hangs run-webkit-tests or causes other
 tests to fail).
 SKIP : fast/forms/foo1.html = IMAGE




[webkit-dev] Language Bindings

2010-01-07 Thread AS
Hello

I'm looking into adding bindings for Python to the WebKit DOM objects.

Ideally, what I would like to create is a WebKit-based application that can
use Python for DHTML manipulation.

From the code, I can see that there are Perl scripts that parse the IDL files
and generate binding files, such as those for JavaScript and Objective-C; I
would like to add a set that does the same for Python.

From the generated files, I am a bit unclear as to where the actual objects
are instantiated, i.e. if some JavaScript creates, say, a new node, is there
a table of class names / constructors somewhere?

Would it be possible for someone to give a quick overview of the process
that should be followed to create a new language binding?

thanks


Re: [webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Ojan Vafai
Do we really need a separate set of bots for pixel tests? Let's just turn the
pixel tests on for the current bots. The only thing stopping us from doing
that is the currently failing tests, hence the suggestion for adding an
expectations file (or we could skip all the failures).

I don't know enough about text metrics changes between Mac releases. With
Windows releases, we've been able to support XP, Vista and 7 pretty easily
by using a generic theme for OS controls. Also, I think we have some hooks
to turn off ClearType or something. There are only ~10 tests that needed
custom results for Vista & Windows 7. I wonder if a similar set of steps
could be taken for supporting different Mac releases.

Ojan

On Thu, Jan 7, 2010 at 5:08 PM, Eric Seidel e...@webkit.org wrote:

 I'm totally in favor of adding test_expectations.txt like
 functionality to webkit (and we'll get it for free when Dirk finishes
 up-streaming run_webkit_tests.py)

 But the troubles with the pixel tests in the past were more to do with
 text metrics changing between OS releases, and individual font
 differences between machines.  I suspect that those issues are very
 solvable.

 I think we mostly need someone willing to set up the pixel test bots.

 -eric

 On Thu, Jan 7, 2010 at 5:01 PM, Ojan Vafai o...@chromium.org wrote:
  On Thu, Jan 7, 2010 at 10:22 AM, Darin Adler da...@apple.com wrote:
 
  On Jan 7, 2010, at 10:19 AM, Dimitri Glazkov wrote:
   Are we planning to run pixel tests on the build bots?
 
  If we can get them green, we should. It’s a lot of work. We need a
  volunteer to do that work. We’ve tried before.
 
  Two possible long-term solutions come to mind:
  1. Turn the bots orange on pixel failures. They still need fixing, but
 are
  not as severe as text diff failures. I'm not a huge fan of this, but it's
 an
  option.
  2. Add in a concept of expected failures and only turn the bots red for
  *unexpected* failures. More details on this below.
  In chromium-land, there's an expectations file that lists expected
 failures
  and allows for distinguishing different types of failures (e.g. IMAGE vs.
  TEXT). It's like Skipped lists, but doesn't necessarily skip the test.
  Fixing the expected failures still needs doing of course, but can be done
  asynchronously. The primary advantage of this approach is that we can
 turn
  on pixel tests, keep the bots green and avoid further regressions.
  Would something like that make sense for WebKit as a whole? To be clear,
 we
  would be nearly as loathe to add tests to this file as we are about
 adding
  them to the Skipped lists. This just provides a way forward.
  While it's true that the bots used to be red more frequently with pixel
  tests turned on, for the most part, there weren't significant pixel
  regressions. Now, if you run the pixel tests on a clean build, there are
 a
  number of failures and a very large number of hash-mismatches that are
  within the failure tolerance level.
  -Ojan
  For reference, the format of the expectations file is something like
 this:
  // Fails the image diff but not the text diff.
  fast/forms/foo.html = IMAGE
  // Fails just the text diff.
  fast/forms/bar.html = TEXT
  // Fails both the image and text diffs.
  fast/forms/baz.html = IMAGE+TEXT
  // Skips this test (e.g. because it hangs run-webkit-tests or causes
 other
  tests to fail).
  SKIP : fast/forms/foo1.html = IMAGE
 
 



Re: [webkit-dev] Language Bindings

2010-01-07 Thread Brent Fulgham
Please refer to the Appcelerator project, which does something similar
to what you describe.

On Thu, Jan 7, 2010 at 5:15 PM, AS szoylent.gr...@gmail.com wrote:
 Hello

 I'm looking into adding bindings for Python to the webkit DOM objects.

 Ideally, what I would like to create is WebKit based application that can
 use Python for DHTML manipulation.

 From the code, I can see how there are perl scripts that parse the idl files
 and generate binding files, such as those for Javascript and Objective C, I
 would like to add a set that does the same for Python.

 From the generated files, I am a bit unclear as to where the actual objects
 are instantiated, i.e. if some JavaScript creates, say, a new node, is there
 a table of class names / constructors somewhere?

 Would it be possible for someone to give a quick overview of what process
 should be followed to create a new language binding.

 thanks




[webkit-dev] postProgressFinishedNotification

2010-01-07 Thread Brian Edmond
Hello,

 

I have a test application which tries to load a list of sites in an
automated manner.  This is done by loading a new site when the
FrameLoaderClient postProgressFinishedNotification is called.  When this
happens, a new URL from the list is loaded, and this repeats in a loop.  I
am seeing some odd behavior: at some point it gets into an endless loop
where a completion comes in, it immediately tries to load a new page, which
sends another completion, and so on.  Is it safe to load a new URL from the
postProgressFinishedNotification method?  Or is this the wrong place to do
this?
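The failure mode described above can be sketched in miniature: starting the next load synchronously from inside a "load finished" callback can re-enter the loader before the first call returns, whereas deferring the work to an event loop keeps the stack flat. This is a generic illustration of the hazard, not WebKit's FrameLoaderClient API; all names here are invented:

```python
# Generic sketch: a "finished" callback that starts the next load.
# Synchronous loading recurses; deferring to a queue (event loop) does not.

class Loader:
    def __init__(self, urls, defer_queue=None):
        self.urls = list(urls)
        self.defer_queue = defer_queue   # None => load synchronously
        self.max_depth = 0
        self._depth = 0

    def on_progress_finished(self):
        # Analogous role to postProgressFinishedNotification.
        if self.urls:
            next_url = self.urls.pop(0)
            if self.defer_queue is None:
                self.load(next_url)      # re-entrant: grows the stack
            else:
                self.defer_queue.append(lambda: self.load(next_url))

    def load(self, url):
        self._depth += 1
        self.max_depth = max(self.max_depth, self._depth)
        self.on_progress_finished()      # e.g. a cached page finishes at once
        self._depth -= 1

reentrant = Loader(["a", "b", "c", "d"])
reentrant.load("start")
print("synchronous max stack depth:", reentrant.max_depth)  # grows with list

queue = []
deferred = Loader(["a", "b", "c", "d"], defer_queue=queue)
deferred.load("start")
while queue:                             # stand-in for the event loop
    queue.pop(0)()
print("deferred max stack depth:", deferred.max_depth)      # stays at 1
```

The usual cure, in any toolkit, is the deferred variant: post the next load back to the run loop rather than issuing it from inside the completion notification.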

 

Thanks,

Brian

 

Brian Edmond

Crank Software Inc.

Office: 613-595-1999

Mobile: 613-796-1320

Online: http://www.cranksoftware.com/

Blog: http://cranksoftware.com/blog/



Re: [webkit-dev] Running pixel tests on build.webkit.org

2010-01-07 Thread Darin Fisher
On Thu, Jan 7, 2010 at 5:17 PM, Ojan Vafai o...@chromium.org wrote:

 Do we really need a separate set of bots for pixel tests? Lets just turn
 the pixel tests on for the current bots. The only thing stopping us doing
 that is the currently failing tests, hence the suggestion for adding an
 expectations file (or we could skip all the failures).

 I don't know enough about text metrics changes between Mac releases. With
 Windows releases, we've been able to support XP, Vista and 7 pretty easily
 by using a generic theme for OS controls. Also, I think we have some hooks
 to turn off cleartype or something.


...and making sure all of the right / same fonts are installed :)
-darin



 There are only ~10 tests that needed custom results for Vista & Windows 7.
 I wonder if a similar set of steps could be taken for supporting different
 Mac releases.

 Ojan

 On Thu, Jan 7, 2010 at 5:08 PM, Eric Seidel e...@webkit.org wrote:

 I'm totally in favor of adding test_expectations.txt like
 functionality to webkit (and we'll get it for free when Dirk finishes
 up-streaming run_webkit_tests.py)

 But the troubles with the pixel tests in the past were more to do with
 text metrics changing between OS releases, and individual font
 differences between machines.  I suspect that those issues are very
 solvable.

 I think we mostly need someone willing to set up the pixel test bots.

 -eric

 On Thu, Jan 7, 2010 at 5:01 PM, Ojan Vafai o...@chromium.org wrote:
  On Thu, Jan 7, 2010 at 10:22 AM, Darin Adler da...@apple.com wrote:
 
  On Jan 7, 2010, at 10:19 AM, Dimitri Glazkov wrote:
   Are we planning to run pixel tests on the build bots?
 
  If we can get them green, we should. It’s a lot of work. We need a
  volunteer to do that work. We’ve tried before.
 
  Two possible long-term solutions come to mind:
  1. Turn the bots orange on pixel failures. They still need fixing, but
 are
  not as severe as text diff failures. I'm not a huge fan of this, but
 it's an
  option.
  2. Add in a concept of expected failures and only turn the bots red for
  *unexpected* failures. More details on this below.
  In chromium-land, there's an expectations file that lists expected
 failures
  and allows for distinguishing different types of failures (e.g. IMAGE
 vs.
  TEXT). It's like Skipped lists, but doesn't necessarily skip the test.
  Fixing the expected failures still needs doing of course, but can be
 done
  asynchronously. The primary advantage of this approach is that we can
 turn
  on pixel tests, keep the bots green and avoid further regressions.
  Would something like that make sense for WebKit as a whole? To be clear,
 we
  would be nearly as loathe to add tests to this file as we are about
 adding
  them to the Skipped lists. This just provides a way forward.
  While it's true that the bots used to be red more frequently with pixel
  tests turned on, for the most part, there weren't significant pixel
  regressions. Now, if you run the pixel tests on a clean build, there are
 a
  number of failures and a very large number of hash-mismatches that are
  within the failure tolerance level.
  -Ojan
  For reference, the format of the expectations file is something like
 this:
  // Fails the image diff but not the text diff.
  fast/forms/foo.html = IMAGE
  // Fails just the text diff.
  fast/forms/bar.html = TEXT
  // Fails both the image and text diffs.
  fast/forms/baz.html = IMAGE+TEXT
  // Skips this test (e.g. because it hangs run-webkit-tests or causes
 other
  tests to fail).
  SKIP : fast/forms/foo1.html = IMAGE
 
 




