Re: [webkit-dev] Deployment of new EWS Non-Unified builder

2022-06-04 Thread Mark Lam via webkit-dev
I think Ross’ point about supported ports being bitten by missing includes is 
valid.  I also agree that it can take more time (possibly a lot more time) for 
an engineer who steps on the issue later to fix the missing includes than for 
the original author to fix it when a non-unified EWS points out where the 
issue is.

I have personally encountered this on Mac builds.  The build error seemed 
nonsensical at first, as it had nothing to do with the patch I was developing. 
I wasted some time ruling out that I had accidentally made a bad edit 
somewhere (especially since my patch was necessarily large).  Once I realized 
that it was due to skew in the unification boundaries, figuring out which 
include needed to be added also wasn’t obvious (at least at the time I 
encountered this, years ago).  I will grant that it appears not to happen very 
often anymore (at least in my experience, it hasn’t happened to me since), but 
there is an argument to be made that this may be thanks to Igalia’s effort of 
fixing these on a regular basis, preventing them from becoming a problem.

Anecdotal data aside, I would like to make a few points:

1. Build problems with skew in the unified files will only occur when one adds 
new .cpp files into the unified mix.  This means:
a. It will not affect most engineers, who are just modifying code in 
existing files.
b. It may affect engineers who are building features that require adding 
new .cpp files (not .h files).
c. It may affect engineers who are refactoring code when it requires adding 
new .cpp files.
d. It may affect engineers working on non-supported ports that are not 
using unified builds.

The activities in (a) and (b) are common.  (c) is rarer.  (d) we agree we 
shouldn’t care about (in the sense that it should not be a burden on engineers 
of supported ports).  As someone who does a lot of (b) and (c) activities, I 
appreciate having this non-unified EWS.

Note that (a), (b), and (c) apply to supported ports working with unified 
builds.  This is not a port-specific issue, nor is it just an unsupported-port 
issue.

2. Unlike build issues due to platform-specific code, build issues due to 
missing includes tend to be easy to fix if identified by a non-unified EWS 
because:
a. The error will point exactly at the .cpp file that is failing to compile 
and needs the missing include (as opposed to a unified file containing many 
.cpp files).
b. The missing include is directly relevant to the patch being worked on by 
the engineer at the time.  Hence, it is very likely the author of the patch 
will immediately recognize the type / variable declaration that is missing, and 
know which header file to include.

There is a rare chance that the non-unified build error involves some 
platform-specific code.  In this case, I think it is fair to just ping the 
platform maintainers on Slack to give them a heads up about it, and ask them 
to help resolve it.  It does not need to block the patch from landing.  Even 
so, this makes it way easier for the platform maintainers to figure out the 
issue than unsuspectingly stepping on it much later, after hundreds of patches 
have landed.

Some more thoughts in response to Darin below ...

> On Jun 4, 2022, at 5:42 AM, Darin Adler via webkit-dev 
>  wrote:
> 
> Yes, I don’t oppose adding another EWS bot, and I was not trying to argue 
> against that proposal. I did intend to express my disagreement with many 
> points made in follow-up replies that seem quite wrong to me.
> 
> Three other thoughts:
> 
> 1) Even though I don’t object to adding a new bot, I will say that the idea 
> that a single “non-unified” bot will add a lot of value at very little cost 
> to the WebKit project doesn’t sound right to me. These arguments about how 
> long things will take don’t seem correct. My experience is that it’s quite 
> difficult to find, understand, and resolve errors seen in bot builds, far 
> more difficult than resolving errors I can see locally as I make code 
> changes. In my view every EWS bot we add helps weigh down the project, making 
> each change more difficult.

I agree that this is true in general.  However, if the build issue is due to a 
missing include, I think this is not true (see my point (2) above).

> 2) In my contributions, I don’t just want to add missing includes, I want to 
> remove unnecessary ones taking full advantage of forward declarations and 
> moving code out of headers. Too many includes and too much dependency has a 
> dramatic bad effect on the project, making colossal project build times even 
> worse.

I too try to do this when writing my patches.  I would argue that a 
non-unified EWS gives me greater confidence that a forward declaration is 
sufficient, and that I’m not just mistakenly thinking it is because some other 
file in the same unified unit happens to include the header.  So, again, a 
non-unified EWS is helping here.

> 3) We 

Re: [webkit-dev] Switching open source Debug bots to building and testing with configuration --force-opt=O3

2020-06-24 Thread Mark Lam
I forgot to add ...

> On Jun 24, 2020, at 10:26 AM, Mark Lam  wrote:
> 
> Based on all the feedback given so far, it looks like we can move forward 
> with the following plan:
> 1. JSC Debug test bots will build their own local jsc with O3 before running 
> the tests.

1.5 JSC EWS bot will also run with an O3 Debug build.

Mark

> 2. The rest of the build and test bots will remain unchanged.
> 
> Let's move forward with this and get the Debug JSC test bot functional again.
> 
> Thanks.
> 
> Mark
> 
> 
>> On Jun 19, 2020, at 2:58 PM, Alexey Proskuryakov > <mailto:a...@webkit.org>> wrote:
>> 
>> 
>> 
>>> On Jun 19, 2020, at 1:11 PM, Mark Lam >> <mailto:mark@apple.com>> wrote:
>>> 
>>> 
>>> 
>>>> On Jun 19, 2020, at 10:24 AM, Geoffrey Garen >>> <mailto:gga...@apple.com>> wrote:
>>>> 
>>>>>> On Jun 19, 2020, at 8:04 AM, Geoffrey Garen >>>>> <mailto:gga...@apple.com>> wrote:
>>>>>> 
>>>>>> Can you explain more about what "O3 with no-inlining” is? How does 
>>>>>> --force-opt=O3 avoid inlining? Would this fully resolve Simon’s concern 
>>>>>> about stack traces, or would something still be different about stack 
>>>>>> traces?
>>>>> There doesn't exist a way to do this now, but it'd be trivial to add a 
>>>>> way. I won't claim it fixes all stack traces differences, but I'd think 
>>>>> compiling using "-fno-inline -fno-optimize-sibling-calls" would get us 
>>>>> pretty far in crashing stack traces being similar enough.
>>>> 
>>>> Sounds good.
>>>> 
>>>> I think we should try to refine the proposal along these lines, to 
>>>> minimize the downsides. I won’t speak for Simon, but for me, being able to 
>>>> ensure a clear backtrace from a bot would be a big improvement.
>>>> 
>>>>>> And again, I think this discussion would get a lot more focused if the 
>>>>>> change could apply only to JSC code, and not also to WTF code.
>>>>> I believe Mark's proposal, initially, is just to make JSC do this. So I 
>>>>> don't see the point of compiling WTF differently. JSC can kick off its 
>>>>> own build, and run Debug+O3 tests faster than it can run Debug+O0 tests. 
>>>>> Given people working on JSC want this, and people working on JSC defend 
>>>>> these tests, and that these test results are more stable (see below), we 
>>>>> should make this change for JSC.
>>>>> 
>>>>> I was trying to convince folks defending non-JSC testing, that they too, 
>>>>> should want this. I'm not going to pull teeth here. If folks want their 
>>>>> tests to take ~10x longer to run, they're entitled to make that tradeoff.
>>>> 
>>>> Got it.
>>> 
>>> I'm of the same mind as Saam.  We want this change for the JSC bots, and 
>>> from the time measurements I’ve collected, we can afford to do a clean 
>>> build for the JSC Debug test runs using O3, and still come out way ahead.
>> 
>> This seems like a reasonable plan. You didn't mention what hardware you 
>> measured with, but it seems certain to be beneficial on any current hardware.
>> 
>> I need to mention that we saw unexplained and very large impact on JSC test 
>> speed from enabling/disabling TCSM. That may be a good thing to look into 
>> while optimizing JSC test speed.
>> 
>> - Alexey
>> 
>> 
>>> As for non-JSC test runs, I have not actually measured what the time 
>>> savings are.  Given there is resistance to going with O3 there, we don’t 
>>> have to share the build artifacts for running the tests.  JSC test runs 
>>> should be able to just build JSC for each O3 Debug JSC test run and it is 
>>> still a win over the current status quo i.e. test runs never complete.
>>> 
>>> Regarding Geoff’s earlier question about whether I know for sure that 
>>> switching to O3 will fix the current Debug JSC bot failures to run tests, 
>>> the answer is I’m not certain.  The failure is a timeout due to the master 
>>> bot not seeing any output from the tester bot for more than 20 minutes.  
>>> I’ve not been able to reproduce this yet.  But with a Debug build test run 
>>> taking 4+ hours, it can only be a progression to switch the Debug JSC test 
>>> bots to O3.
>>> 
>>> Mark
>>> 
>>

Re: [webkit-dev] Switching open source Debug bots to building and testing with configuration --force-opt=O3

2020-06-24 Thread Mark Lam
Based on all the feedback given so far, it looks like we can move forward with 
the following plan:
1. JSC Debug test bots will build their own local jsc with O3 before running 
the tests.
2. The rest of the build and test bots will remain unchanged.

Let's move forward with this and get the Debug JSC test bot functional again.

Thanks.

Mark


> On Jun 19, 2020, at 2:58 PM, Alexey Proskuryakov  wrote:
> 
> 
> 
>> On Jun 19, 2020, at 1:11 PM, Mark Lam > <mailto:mark@apple.com>> wrote:
>> 
>> 
>> 
>>> On Jun 19, 2020, at 10:24 AM, Geoffrey Garen >> <mailto:gga...@apple.com>> wrote:
>>> 
>>>>> On Jun 19, 2020, at 8:04 AM, Geoffrey Garen >>>> <mailto:gga...@apple.com>> wrote:
>>>>> 
>>>>> Can you explain more about what "O3 with no-inlining” is? How does 
>>>>> --force-opt=O3 avoid inlining? Would this fully resolve Simon’s concern 
>>>>> about stack traces, or would something still be different about stack 
>>>>> traces?
>>>> There doesn't exist a way to do this now, but it'd be trivial to add a 
>>>> way. I won't claim it fixes all stack traces differences, but I'd think 
>>>> compiling using "-fno-inline -fno-optimize-sibling-calls" would get us 
>>>> pretty far in crashing stack traces being similar enough.
>>> 
>>> Sounds good.
>>> 
>>> I think we should try to refine the proposal along these lines, to minimize 
>>> the downsides. I won’t speak for Simon, but for me, being able to ensure a 
>>> clear backtrace from a bot would be a big improvement.
>>> 
>>>>> And again, I think this discussion would get a lot more focused if the 
>>>>> change could apply only to JSC code, and not also to WTF code.
>>>> I believe Mark's proposal, initially, is just to make JSC do this. So I 
>>>> don't see the point of compiling WTF differently. JSC can kick off its own 
>>>> build, and run Debug+O3 tests faster than it can run Debug+O0 tests. Given 
>>>> people working on JSC want this, and people working on JSC defend these 
>>>> tests, and that these test results are more stable (see below), we should 
>>>> make this change for JSC.
>>>> 
>>>> I was trying to convince folks defending non-JSC testing, that they too, 
>>>> should want this. I'm not going to pull teeth here. If folks want their 
>>>> tests to take ~10x longer to run, they're entitled to make that tradeoff.
>>> 
>>> Got it.
>> 
>> I'm of the same mind as Saam.  We want this change for the JSC bots, and 
>> from the time measurements I’ve collected, we can afford to do a clean build 
>> for the JSC Debug test runs using O3, and still come out way ahead.
> 
> This seems like a reasonable plan. You didn't mention what hardware you 
> measured with, but it seems certain to be beneficial on any current hardware.
> 
> I need to mention that we saw unexplained and very large impact on JSC test 
> speed from enabling/disabling TCSM. That may be a good thing to look into 
> while optimizing JSC test speed.
> 
> - Alexey
> 
> 
>> As for non-JSC test runs, I have not actually measured what the time savings 
>> are.  Given there is resistance to going with O3 there, we don’t have to 
>> share the build artifacts for running the tests.  JSC test runs should be 
>> able to just build JSC for each O3 Debug JSC test run and it is still a win 
>> over the current status quo i.e. test runs never complete.
>> 
>> Regarding Geoff’s earlier question about whether I know for sure that 
>> switching to O3 will fix the current Debug JSC bot failures to run tests, 
>> the answer is I’m not certain.  The failure is a timeout due to the master 
>> bot not seeing any output from the tester bot for more than 20 minutes.  
>> I’ve not been able to reproduce this yet.  But with a Debug build test run 
>> taking 4+ hours, it can only be a progression to switch the Debug JSC test 
>> bots to O3.
>> 
>> Mark
>> 
>> 
>>> 
>>>>> And again, on the run more tests front, it would be helpful to know 
>>>>> whether this change was enough to get the stress tests running or not.
>>>> My experience running the tests locally supports this fully. I don't get 
>>>> timeouts when running O3+Debug locally. When running Debug+O0 locally, I'd 
>>>> get timeouts all the time, and the total test run would take ~4-8 hours. 
>>>> We can wait for official confirmation from Mark.

Re: [webkit-dev] Switching open source Debug bots to building and testing with configuration --force-opt=O3

2020-06-19 Thread Mark Lam


> On Jun 19, 2020, at 10:02 AM, Alexey Proskuryakov  wrote:
> 
> 
> 
>> On Jun 19, 2020, at 9:59 AM, Mark Lam > <mailto:mark@apple.com>> wrote:
>> 
>> 
>> 
>>> On Jun 19, 2020, at 9:53 AM, Alexey Proskuryakov >> <mailto:a...@webkit.org>> wrote:
>>> 
>>> 
>>> 
>>>> On Jun 18, 2020, at 9:30 PM, Saam Barati >>> <mailto:sbar...@apple.com>> wrote:
>>>> 
>>>> Why are we insisting on doing something on the bots that takes ~10x longer 
>>>> to run than necessary? I’d rather have that time spent running more tests.
>>> 
>>> Replying to this point specifically, I wanted to point out that WebKit 
>>> tests take 2x longer in debug, not 10x longer. JSC tests take 3.6x longer.
>> 
>> Since I collected real data on this last night on the actual bot that runs 
>> the JSC stress tests, I’ll share the data here:
>> 
>> Build time
>> A clean build using build-jsc for a normal Debug build on bot638 takes about 
>> 4.5 minutes.
>> A clean build using build-jsc for a --force-opt=O3 Debug build on bot638 
>> takes about 6 minutes.
>> 
>> Test run time
>> Running with a regular Debug build, the test completed in about 4 hours 41 
>> minutes with 1 timeout.
>> Running with a --force-opt=O3 Debug build, the test completed in about 39 
>> minutes with 0 timeouts.
>> 
>> The difference in test run time is 281 minutes vs 39 minutes.  That is a 
>> 7.2x ratio, not 3.6x.
> 
> https://build.webkit.org/builders/Apple-Catalina-Debug-JSC-Tests/builds/1080 
> <https://build.webkit.org/builders/Apple-Catalina-Debug-JSC-Tests/builds/1080>
>  - 4 hrs, 5 secs
> https://build.webkit.org/builders/Apple-Catalina-Release-JSC-Tests/builds/2546
>  
> <https://build.webkit.org/builders/Apple-Catalina-Release-JSC-Tests/builds/2546>
>  - 1 hrs, 6 mins, 27 secs
> 
> That's 3.6x.

Some details to consider:

1. 
https://build.webkit.org/builders/Apple-Catalina-Release-JSC-Tests/builds/2546 
<https://build.webkit.org/builders/Apple-Catalina-Release-JSC-Tests/builds/2546>
 runs on a different bot than 
https://build.webkit.org/builders/Apple-Catalina-Debug-JSC-Tests/builds/1080 
<https://build.webkit.org/builders/Apple-Catalina-Debug-JSC-Tests/builds/1080> .

2. You’re comparing a Release build run vs a Debug build run, not an O3 Debug 
build vs a regular Debug build.

3. Release builds do less work, e.g. all debug ASSERTs are disabled, and 
debug-only validation code is disabled.

4. Not every file in a Release build is compiled with O3.  If I remember 
correctly, some are O3, others are not.  None are O0.

5. The collection of JSC tests run for Debug builds and Release builds is 
slightly different.  For example, Debug builds run 71793 jsc stress tests, 
while Release builds run 69950 jsc stress tests.  Hence, the Debug builds run 
1843 more tests.

My main point here is that Release builds are not the same as O3 Debug builds: 
some things are faster, some things are slower.  A Release build test run is at 
best an approximate representation of how a Debug O3 build test run will 
behave.  FWIW, my data came from actually running an O3 Debug build and a 
regular Debug build on the same bot.  

But my numbers are purely from timing the run of the tests.  If the numbers 
reported from the bot include time for any bot scripts in addition to the 
test run times (I don’t know if there are any), then the total run time will 
also be different from the numbers I collected.  Feel free to use the 3.6x 
ratio if you prefer.

About -Og numbers (using the same measurement methodology as my O3 Debug and 
regular Debug runs):

Build time: A clean build using build-jsc for a --force-opt=Og Debug build on 
bot638 takes about 5.5 minutes.
Test run time: Running with a --force-opt=Og Debug build, the test completed in 
about 2 hours 19 minutes with 0 timeouts.

That's about 3.5x slower than the Debug O3 run.

Mark


> 
> - Alexey
> 
> 
> 
>>>> Overall, how we’re doing things now feels like a bad allocation of bot 
>>>> resources. The differences I see between O3 with no-inlining vs O0 is:
>>> 
>>> Last time this was discussed, we talked about -Og, which is specifically 
>>> designed for the purpose I believe. Where do we stand on understanding and 
>>> adopting that?
>> 
>> I tried -Og last time after your suggestion, and I remember thinking that 
>> the perf was not acceptable back then.  I’ll collect the data again and 
>> report back with real numbers later today.
>> 
>> Mark
>> 
>>> 
>>> - Alexey
>>> 
>>>> - Some race conditions will behave differently. 

Re: [webkit-dev] Switching open source Debug bots to building and testing with configuration --force-opt=O3

2020-06-19 Thread Mark Lam


> On Jun 19, 2020, at 9:53 AM, Alexey Proskuryakov  <mailto:a...@webkit.org>> wrote:
> 
> 
> 
>> On Jun 18, 2020, at 9:30 PM, Saam Barati > <mailto:sbar...@apple.com>> wrote:
>> 
>> Why are we insisting on doing something on the bots that takes ~10x longer 
>> to run than necessary? I’d rather have that time spent running more tests.
> 
> Replying to this point specifically, I wanted to point out that WebKit tests 
> take 2x longer in debug, not 10x longer. JSC tests take 3.6x longer.

Since I collected real data on this last night on the actual bot that runs the 
JSC stress tests, I’ll share the data here:

Build time
A clean build using build-jsc for a normal Debug build on bot638 takes about 4.5 
minutes.
A clean build using build-jsc for a --force-opt=O3 Debug build on bot638 takes 
about 6 minutes.

Test run time
Running with a regular Debug build, the test completed in about 4 hours 41 
minutes with 1 timeout.
Running with a --force-opt=O3 Debug build, the test completed in about 39 
minutes with 0 timeouts.

The difference in test run time is 281 minutes vs 39 minutes.  That is a 7.2x 
ratio, not 3.6x.

>> Overall, how we’re doing things now feels like a bad allocation of bot 
>> resources. The differences I see between O3 with no-inlining vs O0 is:
> 
> Last time this was discussed, we talked about -Og, which is specifically 
> designed for the purpose I believe. Where do we stand on understanding and 
> adopting that?

I tried -Og last time after your suggestion, and I remember thinking that the 
perf was not acceptable back then.  I’ll collect the data again and report back 
with real numbers later today.

Mark

> 
> - Alexey
> 
>> - Some race conditions will behave differently. Race conditions are already 
>> non predictable. I don’t think we’re losing anything here.
>> - O0 vs O3 is a different compiler. We may encounter bugs in O3 we don’t in 
>> O0, and vice versa. In general, we probably care more about O3 compiler bugs 
>> than O0, since we don’t ship O0, but ship a lot of O3.
>> 
>> (And if we’re going to insist on “I want it to run what I build at my desk”: 
>> I run debug with O3 at my desk, and I can run debug tests in a reasonable 
>> amount of time now.)
>> 
>> In evaluating what’s the better setup, I think it’s helpful to think about 
>> this from the other side. Let’s imagine we had Debug+O3 as our current 
>> setup. And someone proposed to move it to O0 and make our tests take ~10x 
>> longer. I think that’d be a non-starter.
>> 
>> - Saam
>> 
>>> On Jun 17, 2020, at 9:48 PM, Simon Fraser >> <mailto:simon.fra...@apple.com>> wrote:
>>> 
>>> I also object to losing good stack traces for crashes on Debug bots.
>>> 
>>> Also, I don't think Debug bots should build something different from what I 
>>> build at my desk.
>>> 
>>> Simon
>>> 
>>>> On Jun 17, 2020, at 1:36 PM, Mark Lam >>> <mailto:mark@apple.com>> wrote:
>>>> 
>>>> Hi folks,
>>>> 
>>>> We're planning to switch the JSC EWS bot and build.webkit.org 
>>>> <http://build.webkit.org/> Debug build and test bots to building with the 
>>>> following set first:
>>>> ./Tools/Scripts/set-webkit-configuration --force-opt=O3
>>>> 
>>>> This means the Debug builds will be built with optimization level forced 
>>>> to O3.
>>>> 
>>>> Why are we doing this?
>>>> 1. So that the JSC EWS will start catching ASSERT failures.
>>>> 2. JSC stress test Debug bots have been timing out and not running tests 
>>>> at all.  Hopefully, this change will fix this issue.
>>>> 3. Tests will run to completion faster and we’ll catch regressions sooner.
>>>> 
>>>> The downside: crash stack traces will be like Release build stack traces.  
>>>> But I don’t think we should let this deter us.  It’s not like there’s no 
>>>> stack information.  And just as we do with debugging Release build test 
>>>> failures, we can always do a Debug build locally to do our debugging.
>>>> 
>>>> We would like to apply this change to all Debug build and test bots, not 
>>>> just the JSC ones.  Does anyone strongly object to this change?
>>>> 
>>>> Thanks.
>>>> 
>>>> Cheers,
>>>> Mark
>>>> 

[webkit-dev] Switching open source Debug bots to building and testing with configuration --force-opt=O3

2020-06-17 Thread Mark Lam
Hi folks,

We're planning to switch the JSC EWS bot and build.webkit.org 
 Debug build and test bots to building with the 
following set first:
./Tools/Scripts/set-webkit-configuration --force-opt=O3

This means the Debug builds will be built with optimization level forced to O3.

Why are we doing this?
1. So that the JSC EWS will start catching ASSERT failures.
2. JSC stress test Debug bots have been timing out and not running tests at 
all.  Hopefully, this change will fix this issue.
3. Tests will run to completion faster and we’ll catch regressions sooner.

The downside: crash stack traces will be like Release build stack traces.  But 
I don’t think we should let this deter us.  It’s not like there’s no stack 
information.  And just as we do with debugging Release build test failures, we 
can always do a Debug build locally to do our debugging.

We would like to apply this change to all Debug build and test bots, not just 
the JSC ones.  Does anyone strongly object to this change?

Thanks.

Cheers,
Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] PSA: on make targets "testing" and "release+assert", ASSERT_ENABLED, and NDEBUG

2020-01-08 Thread Mark Lam
Hi folks,

Ever wanted to run tests with Debug ASSERTs enabled, but found the Debug build 
so slow that it is a drag on your productivity?  Well, now you can get a faster 
testing build in one of two ways:

1. Build WebKit with "make testing” (or “make t”)
2. Build WebKit with “make release+assert” (or “make ra”)

testing
The “testing” make target builds a Debug build with the clang optimization 
level forced to -O3.  

The forcing of -O3 is achieved using
"Tools/Scripts/set-webkit-configuration --force-optimization-level=O3".  See
trac.webkit.org/r254080.
With this configuration, you can run JSC tests or layout tests like normal, 
except it will run faster.

How much faster?
Using a normal debug build, the JSC tests take about 6 hours to
complete.
Using the "testing" build, the JSC tests take less than 30 minutes to
complete.

release+assert
The “release+assert” make target builds a Release build with ASSERT_ENABLED=1 
defined.

With this configuration, you can run JSC tests.
Currently, if you run this on the layout tests, you will see a lot of 
ASSERT failures, which brings us to ...

#if ASSERT_ENABLED vs #ifndef NDEBUG
Since trac.webkit.org/r254087, the
ASSERT_DISABLED flag has been completely replaced with ASSERT_ENABLED.

For code that guards fields and code needed to support debug ASSERTs, please 
use #if ASSERT_ENABLED going forward.

Please do NOT use #ifndef NDEBUG.

The reason the layout tests are failing ASSERTs when run on the
"release+assert" build is because there is assertion support code above
JavaScriptCore that is still guarded with #ifndef NDEBUG.  If you would like
to help fix some of these, do the following:

1. Build WebKit with “make release+assert”.
2. Run "Tools/Scripts/run-webkit-tests --release".
3. Look for tests that crashed with "ASSERTION FAILED: " in the crash log e.g.
"ASSERTION FAILED: m_tokenizer.isInDataState()".
4. Find anything that the assertion depends on which is guarded by #ifndef
NDEBUG, and replace it with #if ASSERT_ENABLED.
5. Re-build WebKit and retest.

In general, there is probably no reason to ever use #ifndef NDEBUG.  Apart from 
assertion code, logging (and possibly other) code may also be affected by this 
same issue.

Miscellaneous details
The “testing” and “release+assert” make targets are available to use starting
in trac.webkit.org/r254227.  I only tested
these make targets on Mac.  For other ports, some work may be required to get
the builds configured similarly.

One caution about using the “testing” target: it configures the build 
environment using set-webkit-configuration.  That means unless you clear the 
configuration using "set-webkit-configuration --force-optimization-level=none",
it will force the clang optimization level to -O3 for all builds that follow,
debug or release.  This is similar to how "set-webkit-configuration --asan"
works.  Alternatively, you can also do "set-webkit-configuration --reset" to
clear all configurations set using set-webkit-configuration.

If typing "--force-optimization-level" is too long and painful, "--force-opt"
also works.

Thanks.

Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] CI builds fail on all platforms with "ReferenceError: Can't find variable: WebAssembly"

2019-09-06 Thread Mark Lam
I’m about to move it into JSTests/wasm/stress.  See
https://bugs.webkit.org/show_bug.cgi?id=201551.

> On Sep 6, 2019, at 11:31 AM, Saam Barati  wrote:
> 
> Those tests should be checking if WebAssembly is enabled at all before 
> running the test. Something like:
> if (this.WebAssembly) { ... put test here ... }
> 
> Feel free to post a patch and I can review it.
> 
> - Saam
> 
>> On Sep 6, 2019, at 2:59 AM, Eike Rathke  wrote:
>> 
>> Hi,
>> 
>> our CI builds keep failing on all platforms with the following messages:
>> 
>> Running 
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.default
>> Running 
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.mini-mode
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.mini-mode:
>>  Exception: ReferenceError: Can't find variable: WebAssembly
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.mini-mode:
>>  global 
>> c...@web-assembly-constructors-should-not-override-global-object-property.js:2:12
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.default:
>>  Exception: ReferenceError: Can't find variable: WebAssembly
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.default:
>>  global 
>> c...@web-assembly-constructors-should-not-override-global-object-property.js:2:12
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.default:
>>  ERROR: Unexpected exit code: 3
>> FAIL: 
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.default
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.mini-mode:
>>  ERROR: Unexpected exit code: 3
>> FAIL: 
>> stress/web-assembly-constructors-should-not-override-global-object-property.js.mini-mode
>> 
>> 
>> The exceptions look quite similar to the output given at
>> https://bugs.webkit.org/show_bug.cgi?id=192020
>> 
>> Eike
>> 
>> -- 
>> GPG key 0x6A6CD5B765632D3A - 2265 D7F3 A7B0 95CC 3918  630B 6A6C D5B7 6563 
>> 2D3A

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Code Style: Opinion on returning void

2019-02-20 Thread Mark Lam
I also prefer it, and I think some coding patterns may require it, e.g. in
templates where sometimes we want to specialize into a void function, and other
times into a function that returns a value.  However, this is rarely needed in
practice.  Without being able to return void, writing such a template would be
a pain if not impossible.

Mark

> On Feb 20, 2019, at 7:26 AM, Saam Barati  wrote:
> 
> I prefer it as well.
> 
> - Saam
> 
> On Feb 20, 2019, at 6:58 AM, Chris Dumez  > wrote:
> 
>> I also prefer allowing returning void. 
>> 
>> Chris Dumez
>> 
>> On Feb 19, 2019, at 10:35 PM, Daniel Bates > > wrote:
>> 
>>> 
>>> 
>>> On Feb 19, 2019, at 9:42 PM, Ryosuke Niwa >> > wrote:
>>> 
 On Tue, Feb 19, 2019 at 8:59 PM Daniel Bates >>> > wrote:
 > On Feb 7, 2019, at 12:47 PM, Daniel Bates >>> > > wrote:
 >
 > Hi all,
 >
 > Something bothers me about code like:
 >
 > void f();
 > void g()
 > {
 >     if (...)
 >         return f();
 >     return f();
 > }
 >
 > I prefer:
 >
 > void g()
 > {
 >     if (...) {
 >         f();
 >         return;
 >     }
 >     f();
 > }
 >
 Based on the responses it seems there is sufficient leaning to codify
 the latter style.
 
 I don't think there is a sufficient consensus as far as I can tell. Geoff
>>> 
>>> I didn't get this from Geoff's remark. Geoff wrote:
>>> 
>>> ***“return f()” when f returns void is a bit mind bending.***
>>> Don't want to put words in Geoff's mouth. So, Geoff can you please confirm: 
>>> for the former style, for the latter style, no strong opinion.
>>> 
 and Alex both expressed preferences for being able to return void,
>>> 
>>> I got this from Alex's message
>>> 
 and Saam pointed out that there is a lot of existing code which does this.
>>> 
>>> I did not get this. He wrote emphasis mine:
>>> 
>>> I've definitely done this in JSC. ***I don't think it's super common***, 
>>> but I've also seen code in JSC not written by me that also does this.
>>> 
 Zalan also said he does this in his layout code.
>>> 
>>> I did not get this, quoting, emphasis mine:
>>> 
>>> I use this idiom too in the layout code. I guess I just prefer a more
>>> compact code.
>>> ***(I don't feel too strongly about it though)***
>>> 
>>> By the way, you even acknowledged that "WebKit ... tend[s] to have a 
>>> separate return.". So, I inferred you were okay with it. But from this 
>>> email I am no longer sure what your position is. Please state it clearly.
>>> 
 - R. Niwa
 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Exciting JS features (class fields) in need of review :)

2019-02-14 Thread Mark Lam
Hi Xan,

FYI, if you’re writing JSC stress tests, you can add test specific options by 
putting the following at the top of the test file:

//@ requireOptions("--myOption=true", "--myOtherOption=1234")

For LayoutTests, you can add the following to the top line of the test html 
file:



This way, the feature can be disabled by default but still receive testing.

Mark

> On Feb 14, 2019, at 9:27 AM, Maciej Stachowiak  wrote:
> 
> 
> 
>> On Feb 14, 2019, at 1:37 AM, Xan Lopez > > wrote:
>> 
>> Hi Maciej,
>> 
>> the first patches had the flag indeed, so it should be easy to add it back 
>> to the patch. Not sure what's the usual procedure, but I guess it makes 
>> sense to enable it by default in the bug so that the bots keep testing the 
>> code? Then we'll disable it before landing if that's our decision.
> 
> If it’s a runtime flag then it’s possible to have the tests run without 
> having it enabled by default. This is one reason runtime flags are the 
> preferred choice, the feature can be running tests all along while still 
> getting further polish and refinement.
> 
> For a compile-time flag, your approach sounds reasonable to me (patch that 
> includes a default-on flag, ideally with a note in the ChangeLog that it will 
> be flipped before landing).
> 
> We almost never turn on features by default right in the patch where they 
> first land, unless it is really small and simple, to reduce risk of anyone 
> accidentally shipping it if they happen to cut a release branch soon after it 
> lands (and before it has had enough bake time).
> 
> BTW I appreciate the contribution of such a substantial feature, I regret 
> that you haven’t gotten more active review, and I am glad that Saam is now 
> giving you another review pass.
> 
> Cheers,
> Maciej
> 
>> 
>> Cheers,
>> Xan
>> 
>> On Thu, Feb 14, 2019 at 8:47 AM Maciej Stachowiak > > wrote:
>> 
>> I left the boring review feedback that this work should be behind a feature 
>> flag. Mentioning it here because this may apply to other feature patches you 
>> have in progress. (I am not qualified to review the substance of what the 
>> patch is doing.)
>> 
>> > On Feb 13, 2019, at 1:51 PM, ca...@igalia.com  
>> > wrote:
>> > 
>> > Hi WebKitters,
>> > 
>> > My colleagues at Igalia have been working on a number of JS language 
>> > features! We want WebKit to have implementations in order to provide 
>> > feedback for TC39, and to help meet the requirements to have them merged 
>> > into the specification proper.
>> > 
>> > Unfortunately, JSC suggested reviewers have been occupied for the past 6 
>> > months, unable to provide feedback and help us upstream the patches. I'm 
>> > reaching out to see if there are other folks reading webkit-dev who might 
>> > be able to check on the work, see how it looks, and help us get this stuff 
>> > upstream. We've been doing internal reviews, but unfortunately don't yet 
>> > have any JSC reviewers on our team.
>> > 
>> > You can find the patch for instance class fields at 
>> > https://bugs.webkit.org/show_bug.cgi?id=174212 
>> > . Any kind of constructive 
>> > feedback (from anyone) would be much appreciated :)
>> > 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] WTFCrash()

2018-03-06 Thread Mark Lam


> On Mar 6, 2018, at 9:09 PM, Michael Catanzaro  wrote:
> 
> Hi,
> 
> After [1], WTFCrash() is used only in debug builds on Darwin. For Darwin 
> release builds, inline assembly is used to trigger a SIGTRAP. I experimented 
> with this today and found it works quite badly on Linux, somehow confusing 
> gdb and clobbering the top frames of the stacktrace, so I think we should 
> leave that unchanged and keep it Darwin-only. So this mail applies only to 
> debug builds on Darwin, and to non-Darwin ports.
> 
> Now, currently, WTFCrash() does three things:
> 
> (1) Calls a user-configurable crash hook
> (2) Print a low-quality backtrace to stderr
> (3) Crash somehow:
>  - If ASAN is used, with __builtin_trap() (that's SIGILL on Linux x86_64)
>  - Then with *(int *)(uintptr_t)0xbbadbeef = 0, which might fail to crash if 
> 0xbbadbeef is a valid address, and is SIGSEGV otherwise
>  - Then with __builtin_trap() if COMPILER(GCC_OR_CLANG)
>  - Then with ((void(*)())0)() otherwise (presumably SIGSEGV or SIGBUS, dunno)
> 
> This is all rather more complicated than it needs to be.
> 
> First off, the crash hook is (almost) unused and should simply be removed, 
> see [2].
> 
> Next, the low-quality backtrace. Does anyone think this is useful? It's 
> mainly annoying to me, because it's not anywhere near as good as a proper 
> backtrace that shows stack members, it's mangled so function names are 
> unnecessarily-difficult to read, and it takes all of five seconds to get a 
> much nicer one with modern Linux developer tools. If other developers like 
> it, perhaps we could keep it for debug builds only, and skip right to the 
> crashing in release builds? I suppose we could keep printing it always if 
> there is desire to do this, because it has never caused any problems with 
> Linux crash telemetry and doesn't seem to be harming anything, but otherwise 
> my instinct is to simplify.

On Darwin, we currently only use WTFCrash() on Debug builds (see definition of 
the CRASH() macro).  Feel free to make linux do the same.  FWIW, I use this 
crash trace a lot when debugging crashes when I don’t already have a debugger 
attached yet.  Of course, with a debugger attached, it is of less value.

> Now, as for how exactly to crash. Current logic, with asan disabled, prefers 
> SIGSEGV, then SIGILL if that fails, then SIGSEGV again. I don't like that 
> WTFCrash() triggers a SIGSEGV. This is not very clean; at least on Linux, 
> it's conventional for assertion failures and other intentional crashes to 
> cause SIGABRT instead of trying to write to 0xbbadbeef. SIGILL is also quite 
> unusual. I think I'd be happy if we replace all of WTFCrash() with a single 
> call to abort(). Any objections to this? Is there a special reason for the 
> current convoluted logic?

For Darwin, I think we want assertion failures to generate crash logs for 
post-mortem analysis.  I’m not sure if SIGABRT will give us the crash logs we 
want.

The choice of a SIGSEGV on 0xbbadbeef (for debug builds) and WTFBreakpointTrap(),
e.g. int3 on x86_64 (for release builds), is so that we can discern the crash
log for an assertion failure vs any other crash.

The reason for release builds using WTFBreakpointTrap() is so that we can get a 
crash with minimal perturbation to the register state, to help with post-mortem 
crash analysis.  Debug builds are only used during development and internal 
testing.  So, we take the opportunity to get more crash info there (hence, the 
dumping of the crash trace).  The reason that ASan builds use __builtin_trap()
is because ASan does not like the memory access of 0xbbadbeef (if I remember
correctly).

In addition, we also have infrastructure in place that looks for these crash 
patterns.  So, changing to use SIGABRT (even if it generates a crash log on 
Darwin) would mean additional cost and churn to update that infrastructure, 
with not much gain to show for it.  Hence, I object to the change.

Feel free to change the linux implementation of CRASH() to use abort() if that 
works better for linux folks.  BTW, CRASH() is what you want to 
define/redirect.  WTFCrash() is only one implementation of CRASH().  No client 
should be calling WTFCrash() directly.

Mark

> Michael
> 
> [1] https://bugs.webkit.org/show_bug.cgi?id=153996
> [2] https://bugs.webkit.org/show_bug.cgi?id=183369
> 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Javier Fernandez is now a WebKit reviewer

2018-01-05 Thread Mark Lam
Hi everyone,

I would like to announce that Javier Fernandez (lajava on #webkit) is now a 
WebKit reviewer.  Javier has worked on CSS Grid Layout and Box Alignment, as 
well as the selection and editing code.

Please join me in congratulating Javier, and send him some patches to review.

Javier, congratulations.

Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Frédéric Wang is now a WebKit reviewer

2017-09-25 Thread Mark Lam
Hi everyone,

I would like to announce that Frédéric Wang (nick: fredw) is now a WebKit 
reviewer.  Frédéric has worked on MathML, frame sandboxing, and scrolling.  He 
is also a committer in Gecko and Chromium projects.  Please congratulate him, 
and send him some patches to review.

Thanks.

Mark
 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] How we enable template functions

2017-08-23 Thread Mark Lam
One application of enable_if I’ve needed in the past is where I want 
specialization of a template function with the same argument signatures, but 
returning a different type.  The only way I know to make that happen is to use 
enable_if in the return type, e.g.

typename std::enable_if<condition1, Type1>::type doStuff() { }
typename std::enable_if<condition2, Type2>::type doStuff() { }

This works around the problem of “duplicate function definitions” which arises 
if the enable_if is not in the function signature itself.  So, I’m not sure 
your ENABLE_TEMPLATE_IF macro will give me a solution for this.

Mark


> On Aug 22, 2017, at 11:14 PM, Keith Miller  wrote:
> 
>> On Aug 22, 2017, at 9:17 PM, JF Bastien > > wrote:
>> 
>> I'd suggest considering what it'll look like when we're migrating to 
>> concepts in C++20.
>> 
>> Here's an example for our bitwise_cast:
>> https://github.com/jfbastien/bit_cast/blob/master/bit_cast.h#L10 
>> 
>> 
>> Notice the 3 ways to enable. There's also the option of using enable_if on 
>> the return value, or as a defaulted function parameter, but I'm not a huge 
>> fan of either.
> 
> I think the concepts approach is the cleanest. I’d avoid the macro if we go 
> that way.
> 
> But C++20 is a long way away and I only expect this problem to get worse over 
> time. So I’d rather find a nearer term solution.
> 
>> On Aug 22, 2017, at 9:13 PM, Chris Dumez > > wrote:
>> 
>> I personally prefer std::enable_if<>. For e.g.
>> 
>> template<typename T, typename = std::enable_if_t<std::is_same<T, int>::value>>
>> class Foo { }
> 
> I just find this much harder to parse since I now have to:
> 
> 1) recognize that the last class is not actually a polymorphic parameter
> 2) figure out exactly what the condition is given that it’s hidden inside an 
> enable_if*
> 
> The plus side of using a static_assert based approach is that it doesn’t 
> impact the readability of function/class signature at a high level since it’s 
> nested inside the body. It’s also not hidden particularly hidden since I 
> would expect it to be the first line of the body
> 
> Another downside of enable_if as a default template parameter is that someone 
> could make a mistake and pass an extra template value, e.g. Foo, 
> and it might pick the wrong template parameter. This isn’t super likely but 
> it’s still a hazard.
> 
> Admittedly, we could make a macro like (totes not stolen from JF’s GitHub):
> 
> #define ENABLE_TEMPLATE_IF(condition) typename = typename 
> std::enable_if<condition>::type
> 
> and implement Foo as:
> 
> template<typename T, ENABLE_TEMPLATE_IF(std::is_same<T, int>::value)>
> class Foo { };
> 
> I think this approach is pretty good, although, I think I care about the 
> enable_if condition rarely enough that I’d rather not see it in the 
> signature. Most of the time the code will look like:
> 
> template<typename T, ENABLE_TEMPLATE_IF(Condition1<T>::value)>
> class Foo {...};
> 
> template<typename T, ENABLE_TEMPLATE_IF(Condition2<T>::value)>
> class Foo {...};
> 
> template<typename T, ENABLE_TEMPLATE_IF(Condition3<T>::value)>
> class Foo {...};
> 
> So when I know I want to use a Foo but I forgot the signature I now need to 
> mentally skip the enable_if macro, which I’d rather avoid.
> 
>> 
>> I don’t like that something inside the body of a class / function would 
>> cause a template to be enabled or not.
> 
> I believe there are cases where this basically already happens e.g. 
> bitwise_cast. Although, I think those cases could be fixed with a more 
> standard approach.
> 
> Cheers,
> Keith
> 
>> 
>> --
>>  Chris Dumez
>> 
>> 
>> 
>> 
>>> On Aug 22, 2017, at 8:34 PM, Keith Miller >> > wrote:
>>> 
>>> Hello fellow WebKittens,
>>> 
>>> I’ve noticed over time that we don’t have standard way that we enable 
>>> versions of template functions/classes (flasses?). For the most part it 
>>> seems that people use std::enable_if, although, it seems like it is 
>>> attached to every possible place in the function/class.
>>> 
>>> I propose that we choose a standard way to conditionally enable a template.
>>> 
>>> There are a ton of options; my personal favorite is to add the following 
>>> macro:
>>> 
>>> #define ENABLE_TEMPLATE_IF(condition) static_assert(condition, "template 
>>> disabled")
>>> 
>>> Then have every function do:
>>> 
>>> template<typename T>
>>> void foo(...)
>>> {
>>>     ENABLE_TEMPLATE_IF(std::is_same<T, int>::value);
>>>     ...
>>> }
>>> 
>>> And classes:
>>> 
>>> template<typename T>
>>> class Foo {
>>>     ENABLE_TEMPLATE_IF(std::is_same<T, int>::value);
>>> };
>>> 
>>> I like this proposal because it doesn’t obstruct the signature/declaration 
>>> of the function/class but it’s still obvious when the class is enabled. 
>>> Obviously, I think we should require that this macro is the first line of 
>>> the function or class for visibility. Does anyone else have thoughts or 
>>> ideas?
>>> 
>>> Cheers,
>>> Keith
>>> 
>>> P.S. in case you are wondering why this macro works (ugh C++), it’s because 
>>> if 

Re: [webkit-dev] JIT probe mechanism soon required for DFG and FTL OSR Exit

2017-08-10 Thread Mark Lam
FYI, following up on this work, I’ll soon land patches that relies on the probe 
mechanism being available.  So, I will now make the JIT probe (MASM_PROBE) a 
mandatory feature in order to enable the DFG and FTL (see 
https://bugs.webkit.org/show_bug.cgi?id=175446).  As a consequence, I’ll be
disabling the DFG for the MIPS port in this patch.

Mark

> On Jul 19, 2017, at 9:52 AM, Mark Lam <mark@apple.com> wrote:
> 
> 
> 
>> On Jul 19, 2017, at 9:49 AM, Konstantin Tokarev <annu...@yandex.ru> wrote:
>> 
>> 
>> 
>> 19.07.2017, 02:49, "Mark Lam" <mark@apple.com>:
>>> FYI, I’m looking into making OSR exits work in a more memory efficient way 
>>> using the JIT probe mechanism.  Once we make this transition, the DFG and 
>>> FTL will not work for any CPU targets that don’t support the JIT probe 
>>> mechanism.  The API to the JIT probe will probably change as well as this 
>>> work progresses.  I’ll take care of updating all the probe implementations 
>>> for ARM and x86 variants.
>>> 
>>> AFAICT, only CPU(MIPS) does not currently support the probe mechanism.  
>>> It’s up to the MIPS folks to implement that support if they want to 
>>> continue to use the DFG.  Feel free to contact me if you have questions 
>>> regarding the probe mechanism.
>>> 
>>> This work will be tracked in https://bugs.webkit.org/show_bug.cgi?id=174645.
>> 
>> Could you elaborate, what code needs to be implemented for MIPS?
> 
> See everything guarded by ENABLE(MASM_PROBE) in MacroAssemblerX86Common.cpp.  
> You’ll need to have the equivalent for MIPS.  Eventually, we may need some 
> additional changes as well, but that depends on the solution currently being 
> investigated.
> 
> Mark
> 
>> 
>>> 
>>> Thanks.
>>> 
>>> Mark
>>> 
>>> ,
>>> 
>> 
>> 
>> -- 
>> Regards,
>> Konstantin
> 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] JIT probe mechanism soon required for DFG and FTL OSR Exit

2017-07-19 Thread Mark Lam


> On Jul 19, 2017, at 9:49 AM, Konstantin Tokarev <annu...@yandex.ru> wrote:
> 
> 
> 
> 19.07.2017, 02:49, "Mark Lam" <mark@apple.com>:
>> FYI, I’m looking into making OSR exits work in a more memory efficient way 
>> using the JIT probe mechanism.  Once we make this transition, the DFG and 
>> FTL will not work for any CPU targets that don’t support the JIT probe 
>> mechanism.  The API to the JIT probe will probably change as well as this 
>> work progresses.  I’ll take care of updating all the probe implementations 
>> for ARM and x86 variants.
>> 
>> AFAICT, only CPU(MIPS) does not currently support the probe mechanism.  It’s 
>> up to the MIPS folks to implement that support if they want to continue to 
>> use the DFG.  Feel free to contact me if you have questions regarding the 
>> probe mechanism.
>> 
>> This work will be tracked in https://bugs.webkit.org/show_bug.cgi?id=174645.
> 
> Could you elaborate, what code needs to be implemented for MIPS?

See everything guarded by ENABLE(MASM_PROBE) in MacroAssemblerX86Common.cpp.  
You’ll need to have the equivalent for MIPS.  Eventually, we may need some 
additional changes as well, but that depends on the solution currently being 
investigated.

Mark

> 
>> 
>> Thanks.
>> 
>> Mark
>> 
>> ,
>> 
> 
> 
> -- 
> Regards,
> Konstantin

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] JIT probe mechanism soon required for DFG and FTL OSR Exit

2017-07-18 Thread Mark Lam
FYI, I’m looking into making OSR exits work in a more memory efficient way 
using the JIT probe mechanism.  Once we make this transition, the DFG and FTL 
will not work for any CPU targets that don’t support the JIT probe mechanism.  
The API to the JIT probe will probably change as well as this work progresses.  
I’ll take care of updating all the probe implementations for ARM and x86 
variants.

AFAICT, only CPU(MIPS) does not currently support the probe mechanism.  It’s up 
to the MIPS folks to implement that support if they want to continue to use the 
DFG.  Feel free to contact me if you have questions regarding the probe 
mechanism.

This work will be tracked in https://bugs.webkit.org/show_bug.cgi?id=174645.

Thanks.

Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Jiewen Tan is now a WebKit reviewer

2017-05-31 Thread Mark Lam
Hi everyone,

I would like to announce that Jiewen Tan (jiewen on #webkit) is now a WebKit 
reviewer.  Jiewen usually works on the WebCrypto API.

Please join me in congratulating Jiewen, and send him some patches to review.

Jiewen, congratulations.

Mark
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Konstantin Tokarev is now a WebKit reviewer

2017-05-22 Thread Mark Lam
Hi everyone,

I would like to announce that Konstantin Tokarev (annulen on #webkit) is now a 
WebKit reviewer.  Konstantin has extensive knowledge of WebKit, and has 
resurrected the QtWebKit port, helped WebKit compete against Chromium in the Qt 
project, and ensured that the dozens of applications that depend on QtWebKit 
will soon be able to receive updated versions of WebKit.

Please join me in congratulating Konstantin, and send him some patches to 
review.

Konstantin, congratulations.

Mark
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] JF Bastien is now a WebKit reviewer

2017-04-04 Thread Mark Lam
Oops, I have sticky fingers today.  My apologies to JF.  The title should have 
been "JF Bastien is now a WebKit reviewer”.


> On Apr 4, 2017, at 3:57 PM, Mark Lam <mark@apple.com> wrote:
> 
> Hi everyone,
> 
> I would like to announce that JF Bastien (jfbastien on #webkit) is now a 
> WebKit reviewer.  JF is on the C++ standards committee, and had previously 
> worked on LLVM for 4 years.  He currently works on JavaScriptCore and knows 
> lots of things about WASM.  Please join me in congratulating JF, and send him 
> some patches to review.
> 
> JF, congratulations.
> 
> Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] JS Bastien is now a WebKit reviewer

2017-04-04 Thread Mark Lam
Hi everyone,

I would like to announce that JF Bastien (jfbastien on #webkit) is now a WebKit 
reviewer.  JF is on the C++ standards committee, and had previously worked on 
LLVM for 4 years.  He currently works on JavaScriptCore and knows lots of 
things about WASM.  Please join me in congratulating JF, and send him some 
patches to review.

JF, congratulations.

Mark
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Carlos Alberto Lopez Perez is now a WebKit Reviewer

2017-03-30 Thread Mark Lam
Hi folks,

I just want to announce that Carlos Alberto Lopez Perez (clopez on #webkit) is 
now a WebKit Reviewer.  Carlos has been contributing on the WebKitGTK+ port and 
has expertise on tools and build/test infrastructure.  Please join me in 
congratulating Carlos, and send him some patches to review.

Carlos, congratulations.

Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Proposal to limit the size of the captured exception stack

2017-03-28 Thread Mark Lam
Matt,
I tested this on Firefox and Chrome and saw that Chrome captures up to 200 
frames.  I don’t see Firefox capturing frames at all when not throwing an 
Error.  Were you looking at Error.stack for Firefox when you came up with the 
128 frames number?  Maybe there’s a Firebug option I’m not familiar with?

Geoff,
It just occurred to me that the developer does have one recourse: he/she can 
use break on exception thrown / on uncaught exception, and inspect the full 
stack via the debugger.

Mark


> On Mar 28, 2017, at 9:08 PM, Mark Lam <mark@apple.com> wrote:
> 
> Currently, there’s no way for the developer to change this.  We can certainly 
> make it an option that the Inspector can change if desired.
> 
> Mark
> 
> 
>> On Mar 28, 2017, at 7:35 PM, Matt Baker <matthew_a_ba...@apple.com 
>> <mailto:matthew_a_ba...@apple.com>> wrote:
>> 
>>> On Mar 28, 2017, at 4:23 PM, Geoffrey Garen <gga...@apple.com 
>>> <mailto:gga...@apple.com>> wrote:
>>> 
>>> Does the separate exceptionStackTraceLimit mean that if a developer gets a 
>>> truncated stack trace in the Web Inspector, there’s no way for the 
>>> developer to remedy that? Is that what other browsers’ developer tools do?
>> 
>> FireFox and Chrome show console entires with exception stack traces with 128 
>> and 200 frames (respectively).
>> 
>>> 
>>> Geoff
>>> 
>>>> On Mar 28, 2017, at 4:09 PM, Mark Lam <mark@apple.com 
>>>> <mailto:mark@apple.com>> wrote:
>>>> 
>>>> To follow up, I’ve implemented the change in r214289: 
>>>> <http://trac.webkit.org/r214289>.  Error.stackTraceLimit is now 100.  I 
>>>> also implemented a separate exceptionStackTraceLimit for stack traces 
>>>> captured at the time of throwing a value (not to be confused with 
>>>> Error.stack which is captured at the time of instantiation of the Error 
>>>> object).  exceptionStackTraceLimit is also limited to 100 by default.
>>>> 
>>>> Mark
>>>> 
>>>> 
>>>>> On Mar 17, 2017, at 1:04 PM, Mark Lam <mark@apple.com 
>>>>> <mailto:mark@apple.com>> wrote:
>>>>> 
>>>>> @Geoff, my testing shows that we can do 200 frames and still perform well 
>>>>> (~1 second to console.log Error.stack).  Based on what we have at present, I 
>>>>> think 100 is a good round number to use as our default stackTraceLimit.
>>>>> 
>>>>>> On Mar 17, 2017, at 11:40 AM, Maciej Stachowiak <m...@apple.com 
>>>>>> <mailto:m...@apple.com>> wrote:
>>>>>> 
>>>>>>> 
>>>>>>> On Mar 17, 2017, at 11:09 AM, Mark Lam <mark@apple.com 
>>>>>>> <mailto:mark@apple.com>> wrote:
>>>>>>> 
>>>>>>> Thanks for the reminder to back observations up with data.  I was 
>>>>>>> previously running some tests that throws StackOverflowErrors a lot 
>>>>>>> (which tainted my perspective), and I made a hasty conclusion which 
>>>>>>> isn’t good.  Anyway, here’s the data using an instrumented VM to take 
>>>>>>> some measurements and a simple test program that recurses forever to 
>>>>>>> throw a StackOverflowError (run on a MacPro):
>>>>>>> 
>>>>>>> 1. For a release build of jsc shell:
>>>>>>> Time to capture exception stack = 0.002807 sec
>>>>>>> Number of stack frames captured = 31722
>>>>>>> sizeof StackFrame = 24
>>>>>>> total memory consumed = ~761328 bytes.
>>>>>>> 
>>>>>>> 2. For a debug build of jsc shell:
>>>>>>> Time to capture exception stack = 0.052107 sec
>>>>>>> Number of stack frames captured = 31688
>>>>>>> sizeof StackFrame = 24
>>>>>>> total memory consumed = ~760512 bytes.
>>>>>>> 
>>>>>>> So, regarding performance, I was wrong.  The amount of time taken to 
>>>>>>> capture the entire JS stack each time is insignificant.
>>>>>>> Regarding memory usage, ~760K is not so good, but maybe it’s acceptable.
>>>>>>> 
>>>>>>> Comparing browsers with their respective inspectors open:
>>>>>>> 
>>>>>>&

Re: [webkit-dev] Proposal to limit the size of the captured exception stack

2017-03-28 Thread Mark Lam
Currently, there’s no way for the developer to change this.  We can certainly 
make it an option that the Inspector can change if desired.

Mark


> On Mar 28, 2017, at 7:35 PM, Matt Baker <matthew_a_ba...@apple.com> wrote:
> 
>> On Mar 28, 2017, at 4:23 PM, Geoffrey Garen <gga...@apple.com 
>> <mailto:gga...@apple.com>> wrote:
>> 
>> Does the separate exceptionStackTraceLimit mean that if a developer gets a 
>> truncated stack trace in the Web Inspector, there’s no way for the developer 
>> to remedy that? Is that what other browsers’ developer tools do?
> 
> Firefox and Chrome show console entries with exception stack traces of 128 
> and 200 frames (respectively).
> 
>> 
>> Geoff
>> 
>>> On Mar 28, 2017, at 4:09 PM, Mark Lam <mark@apple.com 
>>> <mailto:mark@apple.com>> wrote:
>>> 
>>> To follow up, I’ve implemented the change in r214289: 
>>> <http://trac.webkit.org/r214289>.  Error.stackTraceLimit is now 100.  I 
>>> also implemented a separate exceptionStackTraceLimit for stack traces 
>>> captured at the time of throwing a value (not to be confused with 
>>> Error.stack which is captured at the time of instantiation of the Error 
>>> object).  exceptionStackTraceLimit is also limited to 100 by default.
>>> 
>>> Mark
>>> 
>>> 
>>>> On Mar 17, 2017, at 1:04 PM, Mark Lam <mark@apple.com 
>>>> <mailto:mark@apple.com>> wrote:
>>>> 
>>>> @Geoff, my testing shows that we can do 200 frames and still perform well 
>>>> (~1 second to console.log Error.stack).  Based on what we have at present, I 
>>>> think 100 is a good round number to use as our default stackTraceLimit.
>>>> 
>>>>> On Mar 17, 2017, at 11:40 AM, Maciej Stachowiak <m...@apple.com 
>>>>> <mailto:m...@apple.com>> wrote:
>>>>> 
>>>>>> 
>>>>>> On Mar 17, 2017, at 11:09 AM, Mark Lam <mark@apple.com 
>>>>>> <mailto:mark@apple.com>> wrote:
>>>>>> 
>>>>>> Thanks for the reminder to back observations up with data.  I was 
>>>>>> previously running some tests that throws StackOverflowErrors a lot 
>>>>>> (which tainted my perspective), and I made a hasty conclusion which 
>>>>>> isn’t good.  Anyway, here’s the data using an instrumented VM to take 
>>>>>> some measurements and a simple test program that recurses forever to 
>>>>>> throw a StackOverflowError (run on a MacPro):
>>>>>> 
>>>>>> 1. For a release build of jsc shell:
>>>>>> Time to capture exception stack = 0.002807 sec
>>>>>> Number of stack frames captured = 31722
>>>>>> sizeof StackFrame = 24
>>>>>> total memory consumed = ~761328 bytes.
>>>>>> 
>>>>>> 2. For a debug build of jsc shell:
>>>>>> Time to capture exception stack = 0.052107 sec
>>>>>> Number of stack frames captured = 31688
>>>>>> sizeof StackFrame = 24
>>>>>> total memory consumed = ~760512 bytes.
>>>>>> 
>>>>>> So, regarding performance, I was wrong.  The amount of time taken to 
>>>>>> capture the entire JS stack each time is insignificant.
>>>>>> Regarding memory usage, ~760K is not so good, but maybe it’s acceptable.
>>>>>> 
>>>>>> Comparing browsers with their respective inspectors open:
>>>>>> 
>>>>>> 1. Chrome
>>>>>> number of frames captured: 10
>>>>>> length of e.stack string: 824 chars
>>>>>> time to console.log e.stack: 0.27 seconds
>>>>>> 
>>>>>> 2. Firefox
>>>>>> number of frames captured: 129
>>>>>> length of e.stack string: 8831 chars
>>>>>> time to console.log e.stack: 0.93 seconds
>>>>>> 
>>>>>> 3. Safari
>>>>>> number of frames captured: 31722
>>>>>> length of e.stack string: 218821 chars
>>>>>> time to console.log e.stack: 50.8 seconds
>>>>>> 
>>>>>> 4. Safari (with error.stack shrunk to 201 frames at time of capture to 
>>>>>> simulate my proposal)
>>>>>> number of frames capt

Re: [webkit-dev] Proposal to limit the size of the captured exception stack

2017-03-28 Thread Mark Lam
To follow up, I’ve implemented the change in r214289: 
<http://trac.webkit.org/r214289>.  Error.stackTraceLimit is now 100.  I also 
implemented a separate exceptionStackTraceLimit for stack traces captured at 
the time of throwing a value (not to be confused with Error.stack which is 
captured at the time of instantiation of the Error object).  
exceptionStackTraceLimit is also limited to 100 by default.

Mark


> On Mar 17, 2017, at 1:04 PM, Mark Lam <mark@apple.com> wrote:
> 
> @Geoff, my testing shows that we can do 200 frames and still perform well (~1 
> second to console.log Error.stack).  Based on what we have at present, I think 100 
> is a good round number to use as our default stackTraceLimit.
> 
>> On Mar 17, 2017, at 11:40 AM, Maciej Stachowiak <m...@apple.com 
>> <mailto:m...@apple.com>> wrote:
>> 
>>> 
>>> On Mar 17, 2017, at 11:09 AM, Mark Lam <mark@apple.com 
>>> <mailto:mark@apple.com>> wrote:
>>> 
>>> Thanks for the reminder to back observations up with data.  I was 
>>> previously running some tests that throws StackOverflowErrors a lot (which 
>>> tainted my perspective), and I made a hasty conclusion which isn’t good.  
>>> Anyway, here’s the data using an instrumented VM to take some measurements 
>>> and a simple test program that recurses forever to throw a 
>>> StackOverflowError (run on a MacPro):
>>> 
>>> 1. For a release build of jsc shell:
>>> Time to capture exception stack = 0.002807 sec
>>> Number of stack frames captured = 31722
>>> sizeof StackFrame = 24
>>> total memory consumed = ~761328 bytes.
>>> 
>>> 2. For a debug build of jsc shell:
>>> Time to capture exception stack = 0.052107 sec
>>> Number of stack frames captured = 31688
>>> sizeof StackFrame = 24
>>> total memory consumed = ~760512 bytes.
>>> 
>>> So, regarding performance, I was wrong.  The amount of time taken to 
>>> capture the entire JS stack each time is insignificant.
>>> Regarding memory usage, ~760K is not so good, but maybe it’s acceptable.
>>> 
>>> Comparing browsers with their respective inspectors open:
>>> 
>>> 1. Chrome
>>> number of frames captured: 10
>>> length of e.stack string: 824 chars
>>> time to console.log e.stack: 0.27 seconds
>>> 
>>> 2. Firefox
>>> number of frames captured: 129
>>> length of e.stack string: 8831 chars
>>> time to console.log e.stack: 0.93 seconds
>>> 
>>> 3. Safari
>>> number of frames captured: 31722
>>> length of e.stack string: 218821 chars
>>> time to console.log e.stack: 50.8 seconds
>>> 
>>> 4. Safari (with error.stack shrunk to 201 frames at time of capture to 
>>> simulate my proposal)
>>> number of frames captured: 201
>>> length of e.stack string: 13868 chars
>>> time to console.log e.stack: 1 second
>>> 
>>> With my proposal, the experience of printing Error.stack drops from 50.8 
>>> seconds to about 1 second.  The memory used for capturing the stack also 
>>> drops from ~760K to 5K.
>>> 
>>> I wasn’t aware of the Error.stackTraceLimit, but that does sound like a 
>>> better solution than my proposal since it gives developers the ability to 
>>> capture more stack frames if they need it.  Chrome’s default 
>>> Error.stackTraceLimit appears to be 10.  MS appears to support it as well 
>>> and defaults to 10 
>>> (https://docs.microsoft.com/en-us/scripting/javascript/reference/stacktracelimit-property-error-javascript
>>>  
>>> <https://docs.microsoft.com/en-us/scripting/javascript/reference/stacktracelimit-property-error-javascript>).
>>>   Firefox does now.
>> 
>> Out of curiosity: Why does Firefox capture 129 frames instead of 31722 in 
>> this case? Do they have a hardcoded limit?
> 
> Actually, my previous frame counts are a bit off.  I was using 
> e.stack.split(/\r\n|\r|\n/).length as the frame count.  Below, I just copy 
> the console.log dump into an editor and take the line count from there as the 
> frame count instead.  The result of that string.split appears to be a bit off 
> from the actual frames printed by console.log. 
> 
> I also modified my recursing test function to console.log the re-entry count 
> on entry and this is what I saw:
> 
> 1. Chrome
> test reported reentry count = 10150
> split(…).length = 11 (

Re: [webkit-dev] Proposal to limit the size of the captured exception stack

2017-03-17 Thread Mark Lam
@Geoff, my testing shows that we can do 200 frames and still perform well (~1 
second to console.log Error.stack).  Based on what we have at present, I think 100 is 
a good round number to use as our default stackTraceLimit.

> On Mar 17, 2017, at 11:40 AM, Maciej Stachowiak <m...@apple.com> wrote:
> 
>> 
>> On Mar 17, 2017, at 11:09 AM, Mark Lam <mark@apple.com 
>> <mailto:mark@apple.com>> wrote:
>> 
>> Thanks for the reminder to back observations up with data.  I was previously 
>> running some tests that throws StackOverflowErrors a lot (which tainted my 
>> perspective), and I made a hasty conclusion which isn’t good.  Anyway, 
>> here’s the data using an instrumented VM to take some measurements and a 
>> simple test program that recurses forever to throw a StackOverflowError (run 
>> on a MacPro):
>> 
>> 1. For a release build of jsc shell:
>> Time to capture exception stack = 0.002807 sec
>> Number of stack frames captured = 31722
>> sizeof StackFrame = 24
>> total memory consumed = ~761328 bytes.
>> 
>> 2. For a debug build of jsc shell:
>> Time to capture exception stack = 0.052107 sec
>> Number of stack frames captured = 31688
>> sizeof StackFrame = 24
>> total memory consumed = ~760512 bytes.
>> 
>> So, regarding performance, I was wrong.  The amount of time taken to capture 
>> the entire JS stack each time is insignificant.
>> Regarding memory usage, ~760K is not so good, but maybe it’s acceptable.
>> 
>> Comparing browsers with their respective inspectors open:
>> 
>> 1. Chrome
>> number of frames captured: 10
>> length of e.stack string: 824 chars
>> time to console.log e.stack: 0.27 seconds
>> 
>> 2. Firefox
>> number of frames captured: 129
>> length of e.stack string: 8831 chars
>> time to console.log e.stack: 0.93 seconds
>> 
>> 3. Safari
>> number of frames captured: 31722
>> length of e.stack string: 218821 chars
>> time to console.log e.stack: 50.8 seconds
>> 
>> 4. Safari (with error.stack shrunk to 201 frames at time of capture to 
>> simulate my proposal)
>> number of frames captured: 201
>> length of e.stack string: 13868 chars
>> time to console.log e.stack: 1 second
>> 
>> With my proposal, the experience of printing Error.stack drops from 50.8 
>> seconds to about 1 second.  The memory used for capturing the stack also 
>> drops from ~760K to 5K.
>> 
>> I wasn’t aware of the Error.stackTraceLimit, but that does sound like a 
>> better solution than my proposal since it gives developers the ability to 
>> capture more stack frames if they need it.  Chrome’s default 
>> Error.stackTraceLimit appears to be 10.  MS appears to support it as well 
>> and defaults to 10 
>> (https://docs.microsoft.com/en-us/scripting/javascript/reference/stacktracelimit-property-error-javascript
>>  
>> <https://docs.microsoft.com/en-us/scripting/javascript/reference/stacktracelimit-property-error-javascript>).
>>   Firefox does now.
> 
> Out of curiosity: Why does Firefox capture 129 frames instead of 31722 in 
> this case? Do they have a hardcoded limit?

Actually, my previous frame counts are a bit off.  I was using 
e.stack.split(/\r\n|\r|\n/).length as the frame count.  Below, I just copy the 
console.log dump into an editor and take the line count from there as the frame 
count instead.  The result of that string.split appears to be a bit off from 
the actual frames printed by console.log. 

I also modified my recursing test function to console.log the re-entry count on 
entry and this is what I saw:

1. Chrome
test reported reentry count = 10150
split(…).length = 11 (because Chrome starts e.stack with a line 
“RangeError: Maximum call stack size exceeded”)
e.stack lines according to editor = 10 frames

2. Firefox
test reported reentry count = 222044
split(…).length = 129 (probably because there’s an extra newline in 
there somewhere)
e.stack lines according to editor = 128 frames

3. Safari
test reported reentry count = 31701
split(…).length = 31722 (I don’t know why there’s a 21 frame 
discrepancy here.  I’ll debug this later)
e.stack lines according to editor = ??? frames (WebInspector hangs every 
time I try to scroll in it, let alone let me highlight and copy the stack 
trace.  So I gave up)

Assuming the test function frame is not significantly different in size for all 
browsers, it looks like:
1. Chrome uses a much smaller stack (about 1/3 of our stack).
2. Firefox uses a much larger stack (possibly the full machine stack), but caps 
its Erro

Re: [webkit-dev] Proposal to limit the size of the captured exception stack

2017-03-17 Thread Mark Lam
Thanks for the reminder to back observations up with data.  I was previously 
running some tests that throw StackOverflowErrors a lot (which tainted my 
perspective), and I drew a hasty conclusion, which wasn’t good.  Anyway, here’s 
the data using an instrumented VM to take some measurements and a simple test 
program that recurses forever to throw a StackOverflowError (run on a MacPro):

1. For a release build of jsc shell:
Time to capture exception stack = 0.002807 sec
Number of stack frames captured = 31722
sizeof StackFrame = 24
total memory consumed = ~761328 bytes.

2. For a debug build of jsc shell:
Time to capture exception stack = 0.052107 sec
Number of stack frames captured = 31688
sizeof StackFrame = 24
total memory consumed = ~760512 bytes.

So, regarding performance, I was wrong.  The amount of time taken to capture 
the entire JS stack each time is insignificant.
Regarding memory usage, ~760K is not so good, but maybe it’s acceptable.

Comparing browsers with their respective inspectors open:

1. Chrome
number of frames captured: 10
length of e.stack string: 824 chars
time to console.log e.stack: 0.27 seconds

2. Firefox
number of frames captured: 129
length of e.stack string: 8831 chars
time to console.log e.stack: 0.93 seconds

3. Safari
number of frames captured: 31722
length of e.stack string: 218821 chars
time to console.log e.stack: 50.8 seconds

4. Safari (with error.stack shrunk to 201 frames at time of capture to simulate 
my proposal)
number of frames captured: 201
length of e.stack string: 13868 chars
time to console.log e.stack: 1 second

With my proposal, the experience of printing Error.stack drops from 50.8 
seconds to about 1 second.  The memory used for capturing the stack also drops 
from ~760K to 5K.

I wasn’t aware of the Error.stackTraceLimit, but that does sound like a better 
solution than my proposal since it gives developers the ability to capture more 
stack frames if they need it.  Chrome’s default Error.stackTraceLimit appears 
to be 10.  MS appears to support it as well and defaults to 10 
(https://docs.microsoft.com/en-us/scripting/javascript/reference/stacktracelimit-property-error-javascript
 
<https://docs.microsoft.com/en-us/scripting/javascript/reference/stacktracelimit-property-error-javascript>).
  Firefox does not.

Does anyone object to us adopting Error.stackTraceLimit and setting the default 
to 10 to match Chrome?

Mark



> On Mar 16, 2017, at 11:29 PM, Geoffrey Garen <gga...@apple.com> wrote:
> 
> Can you be more specific about the motivation here?
> 
> Do we have any motivating examples that will tell us whether time+memory were 
> unacceptable before this change, or are acceptable after this change?
> 
> In our motivating examples, does Safari use more time+memory than other 
> browsers? If so, how large of a stack do other browsers capture?
> 
> We already limit the size of the JavaScript stack to avoid performance 
> problems like the ones you mention in many other contexts. Why is that limit 
> not sufficient?
> 
> Did you consider implementing Chrome’s Error.stackTraceLimit behavior?
> 
> Geoff
> 
>> On Mar 16, 2017, at 10:09 PM, Mark Lam <mark@apple.com> wrote:
>> 
>> Hi folks,
>> 
>> Currently, if we have an exception stack that is incredibly deep (especially 
>> for a StackOverflowError), JSC may end up thrashing memory just to capture 
>> the large stack trace in memory.  This is bad for many reasons:
>> 
>> 1. the captured stack will take a lot of memory.
>> 2. capturing the stack may take a long time (due to memory thrashing) and 
>> makes for a bad user experience.
>> 3. if memory availability is low, capturing such a large stack may result in 
>> an OutOfMemoryError being thrown in its place.
>>   The OutOfMemoryError thrown there will also have the same problem with 
>> capturing such a large stack.
>> 4. most of the time, no one will look at the captured Error.stack anyway.
>> 
>> Since there isn’t a standard on what we really need to capture for 
>> Error.stack, I propose that we limit how much stack we capture to a 
>> practical size.  How about an Error.stack that consists of (1) the top N 
>> frames, (2) an ellipses, and (3) the bottom M frames?  If the number of 
>> frames on the stack at the time of capture  is less or equal to than N + M 
>> frames, then Error.stack will just show the whole stack with no ellipses.  
>> For example, if N is 4 and M is 2, the captured stack will look something 
>> like this:
>> 
>> foo10001
>> foo10000
>> foo9999
>> foo9998
>> …
>> foo1
>> foo0
>> 
>> If we pick a sufficiently large number for N and

[webkit-dev] Proposal to limit the size of the captured exception stack

2017-03-16 Thread Mark Lam
Hi folks,

Currently, if we have an exception stack that is incredibly deep (especially 
for a StackOverflowError), JSC may end up thrashing memory just to capture the 
large stack trace in memory.  This is bad for many reasons:

1. the captured stack will take a lot of memory.
2. capturing the stack may take a long time (due to memory thrashing) and makes 
for a bad user experience.
3. if memory availability is low, capturing such a large stack may result in an 
OutOfMemoryError being thrown in its place.
The OutOfMemoryError thrown there will also have the same problem with 
capturing such a large stack.
4. most of the time, no one will look at the captured Error.stack anyway.

Since there isn’t a standard on what we really need to capture for Error.stack, 
I propose that we limit how much stack we capture to a practical size.  How 
about an Error.stack that consists of (1) the top N frames, (2) an ellipsis, 
and (3) the bottom M frames?  If the number of frames on the stack at the time 
of capture is less than or equal to N + M frames, then Error.stack will just 
show the whole stack with no ellipsis.  For example, if N is 4 and M is 2, the 
captured stack will look something like this:

	foo10001
	foo10000
	foo9999
	foo9998
	…
	foo1
	foo0

If we pick a sufficiently large number for N and M (I suggest 100 each), I think 
this should provide sufficient context for debugging uses of Error.stack, while 
keeping an upper bound on how much memory and time we throw at capturing the 
exception stack.

My plan for implementing this is:
1. change Exception::finishCreation() to only capture the N and M frames, plus 
possibly one ellipsis placeholder between them.
2. change all clients of Exception::stack() to be able to recognize and render 
the ellipsis.

Does anyone object to doing this or have a compelling reason why this should 
not be done?

Thanks.

Mark





[webkit-dev] Devin Rousso is now a WebKit reviewer

2017-03-09 Thread Mark Lam
Hi everyone,

I would like to announce that Devin Rousso (dcrousso on #webkit) is now a 
WebKit reviewer.  Devin has been contributing to Web Inspector for nearly two 
years.  Please join me in congratulating Devin, and send him some patches to 
review.

Devin, congratulations.

Mark



Re: [webkit-dev] VM::setExclusiveThread()

2017-02-28 Thread Mark Lam
I’ve run Speedometer many times on a quiet 13” MacBookPro: removing the use of 
exclusive thread status does not appear to impact performance in any measurable 
way.  I also did some measurements on a microbenchmark locking and unlocking 
the JSLock using a JSLockHolder in a loop.  The microbenchmark shows that 
removing exclusive thread status makes the locking and unlocking operation up 
to 20% slower.

Given that locking and unlocking the JSLock is a very small fraction of the 
work done in a webpage, it’s not surprising that the 20% increase in time for 
the lock and unlock operation is not measurable in Speedometer.  Note also that 
the 20% only impacts WebCore which uses the exclusive thread status.  For all 
other clients of JSC (which never use exclusive thread status), it may 
actually be faster to have exclusive thread checks removed (simply due to that 
code doing less work).

I’ll put up a patch to remove the use of exclusive thread status.  This will 
simplify the code and make it easier to move forward with new features.

Mark


> On Feb 24, 2017, at 9:01 PM, Filip Pizlo <fpi...@apple.com> wrote:
> 
> Seems like if the relevant benchmarks (speedometer) are ok with it then we 
> should just do this. 
> 
> -Filip
> 
>> On Feb 24, 2017, at 20:50, Mark Lam <mark@apple.com> wrote:
>> 
>> The JSC VM has this method setExclusiveThread().  Some details:
>> 1. setExclusiveThread() is only used to forego actually locking/unlocking 
>> the underlying lock inside JSLock.
>> 2. setExclusiveThread() is only used by WebCore where we can guarantee that 
>> the VM will only ever be used exclusively on one thread.
>> 3. the underlying lock inside JSLock used to be a slow system lock.
>> 
>> Now that we have fast locking, I propose that we simplify the JSLock code by 
>> removing the concept of the exclusiveThread and always locking/unlocking the 
>> underlying lock.  This also gives us the ability to tryLock the JSLock 
>> (something I would like to be able to do for something new I’m working on).
>> 
>> Does anyone see a reason why we can’t remove the concept of the 
>> exclusiveThread?
>> 
>> Thanks.
>> 
>> Mark
>> 



[webkit-dev] VM::setExclusiveThread()

2017-02-24 Thread Mark Lam
The JSC VM has this method setExclusiveThread().  Some details:
1. setExclusiveThread() is only used to forego actually locking/unlocking the 
underlying lock inside JSLock.
2. setExclusiveThread() is only used by WebCore where we can guarantee that the 
VM will only ever be used exclusively on one thread.
3. the underlying lock inside JSLock used to be a slow system lock.

Now that we have fast locking, I propose that we simplify the JSLock code by 
removing the concept of the exclusiveThread and always locking/unlocking the 
underlying lock.  This also gives us the ability to tryLock the JSLock 
(something I would like to be able to do for something new I’m working on).

Does anyone see a reason why we can’t remove the concept of the exclusiveThread?

Thanks.

Mark



Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-23 Thread Mark Lam
To give an update: I plan to experiment and get some size and perf numbers and 
report back later.  But since this is not a high priority task for me, it may 
be a while before I get back to this.

Mark

> On Feb 22, 2017, at 1:25 PM, Ryosuke Niwa <rn...@webkit.org> wrote:
> 
> On Wed, Feb 22, 2017 at 12:41 PM, Mark Lam <mark@apple.com> wrote:
>> 
>> I would like to point out that we might be able to get the best of both 
>> worlds.  Here’s how we can do it:
>> 
>> #define RELEASE_ASSERT(assertion) do { \
>>     if (UNLIKELY(!(assertion))) { \
>>         preserveRegisterState(); \
>>         WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #assertion); \
>>         restoreRegisterState(); \
>>         CRASH(); \
>>     } \
>> } while (0)
>> 
>> preserveRegisterState() and restoreRegisterState() will carefully push and 
>> pop registers onto / off the stack (like how the JIT probe works).
>> 
> 
> I'm afraid this would bloat the binary size too much. You can test but
> I'd be surprised if this didn't negatively impact perf especially in
> WebCore code.
> 
> - R. Niwa



Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Mark Lam

> On Feb 22, 2017, at 12:46 PM, Filip Pizlo <fpi...@apple.com> wrote:
> 
>> 
>> On Feb 22, 2017, at 12:41 PM, Mark Lam <mark@apple.com 
>> <mailto:mark@apple.com>> wrote:
>> 
>>> 
>>> On Feb 22, 2017, at 12:35 PM, Filip Pizlo <fpi...@apple.com 
>>> <mailto:fpi...@apple.com>> wrote:
>>> 
>>>> 
>>>> On Feb 22, 2017, at 12:33 PM, Mark Lam <mark@apple.com 
>>>> <mailto:mark@apple.com>> wrote:
>>>> 
>>>> 
>>>>> On Feb 22, 2017, at 12:16 PM, Filip Pizlo <fpi...@apple.com 
>>>>> <mailto:fpi...@apple.com>> wrote:
>>>>> 
>>>>> 
>>>>>> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com 
>>>>>> <mailto:gga...@apple.com>> wrote:
>>>>>> 
>>>>>> I’ve lost countless hours to investigating CrashTracers that would have 
>>>>>> been easy to solve if I had access to register state.
>>>>> 
>>>>> The current RELEASE_ASSERT means that every assertion in what the 
>>>>> compiler thinks is a function (i.e. some function and everything inlined 
>>>>> into it) is coalesced into a single trap site.  I’d like to understand 
>>>>> how you use the register state if you don’t even know which assertion you 
>>>>> are at.
>>>> 
>>>> Correction: they are not coalesced.  I was mistaken about that.  The fact 
>>>> that we turn them into inline asm (for emitting the int3) means the 
>>>> compiler cannot optimize it away or coalesce it.  The compiler does move 
>>>> it to the end of the emitted code for the function though because we end 
>>>> the CRASH() macro with __builtin_unreachable().
>>>> 
>>>> Hence, each int3 can be correlated back to the RELEASE_ASSERT that 
>>>> triggered it (with some extended disassembly work).
>>> 
>>> This never works for me.  I tested it locally.  LLVM will even coalesce 
>>> similar inline assembly.
>> 
>> With my proposal, I’m emitting different inline asm now after the int3 trap 
>> because I’m embedding line number and file strings.  Hence, even if the 
>> compiler is smart enough to compare inline asm code blobs, it will find them 
>> to be different, and hence, it doesn’t make sense to coalesce.
> 
> Are you claiming that LLVM does not currently now coalesce RELEASE_ASSERTS, 
> or that it will not coalesce them anymore after you make some change?

Here, I’m claiming that it will not coalesce after I make some changes.

> 
>> 
>>> 
>>>> 
>>>>> I believe that if you do want to analyze register state, then switching 
>>>>> back to calling some function that prints out diagnostic information is 
>>>>> strictly better.  Sure, you get less register state, but at least you 
>>>>> know where you crashed.  Knowing where you crashed is much more important 
>>>>> than knowing the register state, since the register state is not useful 
>>>>> if you don’t know where you crashed.
>>>>> 
>>>> 
>>>> I would like to point out that we might be able to get the best of both 
>>>> worlds.  Here’s how we can do it:
>>>> 
>>>> #define RELEASE_ASSERT(assertion) do { \
>>>>     if (UNLIKELY(!(assertion))) { \
>>>>         preserveRegisterState(); \
>>>>         WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #assertion); \
>>>>         restoreRegisterState(); \
>>>>         CRASH(); \
>>>>     } \
>>>> } while (0)
>>>> 
>>>> preserveRegisterState() and restoreRegisterState() will carefully push and 
>>>> pop registers onto / off the stack (like how the JIT probe works).
>>> 
>>> Why not do the preserve/restore inside the WTFReport call?
>> 
>> Because I would like to preserve the register values that were used in the 
>> comparison that failed the assertion.
> 
> That doesn't change anything.  You can create a WTFFail that is written in 
> assembly and first saves all registers, and restores them prior to trapping.

A meaningful call here requires passing __FILE__, __LINE__, 
WTF_PRETTY_FUNCTION, and #assertion as arguments.  Hence, this will necessarily 
perturb register state at the call site.  The compiler is also free to load the 
reporting function into a register to make the call.  My approach of preserving 
regs before any c

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Mark Lam

> On Feb 22, 2017, at 12:35 PM, Filip Pizlo <fpi...@apple.com> wrote:
> 
>> 
>> On Feb 22, 2017, at 12:33 PM, Mark Lam <mark@apple.com 
>> <mailto:mark@apple.com>> wrote:
>> 
>> 
>>> On Feb 22, 2017, at 12:16 PM, Filip Pizlo <fpi...@apple.com 
>>> <mailto:fpi...@apple.com>> wrote:
>>> 
>>> 
>>>> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com 
>>>> <mailto:gga...@apple.com>> wrote:
>>>> 
>>>> I’ve lost countless hours to investigating CrashTracers that would have 
>>>> been easy to solve if I had access to register state.
>>> 
>>> The current RELEASE_ASSERT means that every assertion in what the compiler 
>>> thinks is a function (i.e. some function and everything inlined into it) is 
>>> coalesced into a single trap site.  I’d like to understand how you use the 
>>> register state if you don’t even know which assertion you are at.
>> 
>> Correction: they are not coalesced.  I was mistaken about that.  The fact 
>> that we turn them into inline asm (for emitting the int3) means the compiler 
>> cannot optimize it away or coalesce it.  The compiler does move it to the 
>> end of the emitted code for the function though because we end the CRASH() 
>> macro with __builtin_unreachable().
>> 
>> Hence, each int3 can be correlated back to the RELEASE_ASSERT that triggered 
>> it (with some extended disassembly work).
> 
> This never works for me.  I tested it locally.  LLVM will even coalesce 
> similar inline assembly.

With my proposal, I’m now emitting different inline asm after the int3 trap 
because I’m embedding line number and file strings.  So even if the compiler is 
smart enough to compare inline asm code blobs, it will find them to be 
different and won’t coalesce them.

> 
>> 
>>> I believe that if you do want to analyze register state, then switching 
>>> back to calling some function that prints out diagnostic information is 
>>> strictly better.  Sure, you get less register state, but at least you know 
>>> where you crashed.  Knowing where you crashed is much more important than 
>>> knowing the register state, since the register state is not useful if you 
>>> don’t know where you crashed.
>>> 
>> 
>> I would like to point out that we might be able to get the best of both 
>> worlds.  Here’s how we can do it:
>> 
>> #define RELEASE_ASSERT(assertion) do { \
>> if (UNLIKELY(!(assertion))) { \
>> preserveRegisterState(); \
>> WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, \
>> #assertion); \
>> restoreRegisterState(); \
>> CRASH(); \
>> } \
>> } while (0)
>> 
>> preserveRegisterState() and restoreRegisterState() will carefully push and 
>> pop registers onto / off the stack (like how the JIT probe works).
> 
> Why not do the preserve/restore inside the WTFReport call?

Because I would like to preserve the register values that were used in the 
comparison that failed the assertion.

Mark

> 
>> This allows us to get a log message on the terminal when we’re running 
>> manually.
>> 
>> In addition, we can capture some additional information about the assertion 
>> site by forcing the compiler to emit code to capture the code location info 
>> after the trapping instruction.  This is redundant but provides an easy 
>> place to find this info (i.e. after the int3 instruction).
>> 
>> #define WTFBreakpointTrap() do { \
>> __asm__ volatile ("int3"); \
>> __asm__ volatile( "" :  : "r"(__FILE__), "r"(__LINE__), 
>> "r"(WTF_PRETTY_FUNCTION)); \
>> } while (false)
>> 
>> We can easily get the line number this way.  However, the line number is not 
>> very useful by itself when we have inlining.  Hence, I also capture the 
>> __FILE__ and WTF_PRETTY_FUNCTION.  However, I haven’t been able to figure 
>> out how to decode those from the otool disassembler yet.
>> 
>> The only downside of doing this extra work is that it increases the code 
>> size for each RELEASE_ASSERT site.  This is probably insignificant in total.
>> 
>> Performance-wise, it should be neutral-ish because the 
>> __builtin_unreachable() in the CRASH() macro + the UNLIKELY() macro would 
>> tell the compiler to put this in a slow path away from the main code path.
>> 
>> Any thoughts on this alternative?
>> 
>> Mark
>> 
>> 

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Mark Lam
For some context, we used to see aggregation of the CRASH() for 
RELEASE_ASSERT() in the old days before we switched to using the int3 trap.  
Back then we called a crash() function that never returns.  As a result, the 
C++ compiler was able to coalesce all the calls.  With the int3 trap emitted by 
inline asm, the C++ compiler has less ability to determine that the crash sites 
have the same code (probably because it doesn’t bother comparing what’s in the 
inline asm blobs).

Mark


> On Feb 22, 2017, at 12:33 PM, Mark Lam <mark@apple.com> wrote:
> 
>> 
>> On Feb 22, 2017, at 12:16 PM, Filip Pizlo <fpi...@apple.com 
>> <mailto:fpi...@apple.com>> wrote:
>> 
>> 
>>> On Feb 22, 2017, at 11:58 AM, Geoffrey Garen <gga...@apple.com 
>>> <mailto:gga...@apple.com>> wrote:
>>> 
>>> I’ve lost countless hours to investigating CrashTracers that would have 
>>> been easy to solve if I had access to register state.
>> 
>> The current RELEASE_ASSERT means that every assertion in what the compiler 
>> thinks is a function (i.e. some function and everything inlined into it) is 
>> coalesced into a single trap site.  I’d like to understand how you use the 
>> register state if you don’t even know which assertion you are at.
> 
> Correction: they are not coalesced.  I was mistaken about that.  The fact 
> that we turn them into inline asm (for emitting the int3) means the compiler 
> cannot optimize it away or coalesce it.  The compiler does move it to the end 
> of the emitted code for the function though because we end the CRASH() macro 
> with __builtin_unreachable().
> 
> Hence, each int3 can be correlated back to the RELEASE_ASSERT that triggered 
> it (with some extended disassembly work).
> 
>> I believe that if you do want to analyze register state, then switching back 
>> to calling some function that prints out diagnostic information is strictly 
>> better.  Sure, you get less register state, but at least you know where you 
>> crashed.  Knowing where you crashed is much more important than knowing the 
>> register state, since the register state is not useful if you don’t know 
>> where you crashed.
>> 
> 
> I would like to point out that we might be able to get the best of both 
> worlds.  Here’s how we can do it:
> 
> #define RELEASE_ASSERT(assertion) do { \
> if (UNLIKELY(!(assertion))) { \
> preserveRegisterState(); \
> WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, \
> #assertion); \
> restoreRegisterState(); \
> CRASH(); \
> } \
> } while (0)
> 
> preserveRegisterState() and restoreRegisterState() will carefully push and 
> pop registers onto / off the stack (like how the JIT probe works).
> This allows us to get a log message on the terminal when we’re running 
> manually.
> 
> In addition, we can capture some additional information about the assertion 
> site by forcing the compiler to emit code to capture the code location info 
> after the trapping instruction.  This is redundant but provides an easy place 
> to find this info (i.e. after the int3 instruction).
> 
> #define WTFBreakpointTrap() do { \
> __asm__ volatile ("int3"); \
> __asm__ volatile( "" :  : "r"(__FILE__), "r"(__LINE__), 
> "r"(WTF_PRETTY_FUNCTION)); \
> } while (false)
> 
> We can easily get the line number this way.  However, the line number is not 
> very useful by itself when we have inlining.  Hence, I also capture the 
> __FILE__ and WTF_PRETTY_FUNCTION.  However, I haven’t been able to figure out 
> how to decode those from the otool disassembler yet.
> 
> The only downside of doing this extra work is that it increases the code size 
> for each RELEASE_ASSERT site.  This is probably insignificant in total.
> 
> Performance-wise, it should be neutral-ish because the 
> __builtin_unreachable() in the CRASH() macro + the UNLIKELY() macro would 
> tell the compiler to put this in a slow path away from the main code path.
> 
> Any thoughts on this alternative?
> 
> Mark
> 
> 
>>> 
>>> I also want the freedom to add RELEASE_ASSERT without ruining performance 
>>> due to bad register allocation or making the code too large to inline. For 
>>> example, hot paths in WTF::Vector use RELEASE_ASSERT.
>> 
>> Do we have data about the performance benefits of the current RELEASE_ASSERT 
>> implementation?
>> 
>>> 
>>> Is some compromise solution possible?
>>> 
>>> Some options:
>>> 
>>> (1) Add a variant of RELEASE_ASSERT that takes a string and logs.
>> 

Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-22 Thread Mark Lam
crashed.  So, I use the explicit:
> 
> if (!thing) {
>   dataLog(“…”);
>   RELEASE_ASSERT_NOT_REACHED();
> }
> 
> -Filip
> 
> 
>> 
>> Geoff
>> 
>>> On Feb 22, 2017, at 11:09 AM, Filip Pizlo <fpi...@apple.com> wrote:
>>> 
>>> I disagree actually.  I've lost countless hours to converting this:
>>> 
>>> RELEASE_ASSERT(blah)
>>> 
>>> into this:
>>> 
>>> if (!blah) {
>>> dataLog("Reason why I crashed");
>>> RELEASE_ASSERT_NOT_REACHED();
>>> }
>>> 
>>> Look in the code - you'll find lots of stuff like this.
>>> 
>>> I don't think analyzing register state at crashes is more important than 
>>> keeping our code sane.
>>> 
>>> -Filip
>>> 
>>> 
>>>> On Feb 21, 2017, at 5:56 PM, Mark Lam <mark@apple.com> wrote:
>>>> 
>>>> Oh yeah, I forgot about that.  I think the register state is more 
>>>> important for crash analysis, especially if we can make sure that the 
>>>> compiler does not aggregate the int3s.  I’ll explore alternatives.
>>>> 
>>>>> On Feb 21, 2017, at 5:54 PM, Saam barati <sbar...@apple.com> wrote:
>>>>> 
>>>>> I thought the main point of moving to SIGTRAP was to preserve register 
>>>>> state?
>>>>> 
>>>>> That said, there are probably places where we care more about the message 
>>>>> than the registers.
>>>>> 
>>>>> - Saam
>>>>> 
>>>>>> On Feb 21, 2017, at 5:43 PM, Mark Lam <mark@apple.com> wrote:
>>>>>> 
>>>>>> Is there a reason why RELEASE_ASSERT (and friends) does not call 
>>>>>> WTFReportAssertionFailure() to report where the assertion occurs?  Is 
>>>>>> this purely to save memory?  svn blame tells me that it has been this 
>>>>>> way since the introduction of RELEASE_ASSERT in r140577 many years ago.
>>>>>> 
>>>>>> Would anyone object to adding a call to WTFReportAssertionFailure() in 
>>>>>> RELEASE_ASSERT() like we do for ASSERT()?  One upside (side effect) of 
>>>>>> adding this call is that it appears to stop the 
>>>>>> compiler from aggregating all the RELEASE_ASSERTS into a single code 
>>>>>> location, and this will help with post-mortem crash debugging.
>>>>>> 
>>>>>> Any thoughts?
>>>>>> 
>>>>>> Mark
>>>>>> 
>>>>>> ___
>>>>>> webkit-dev mailing list
>>>>>> webkit-dev@lists.webkit.org
>>>>>> https://lists.webkit.org/mailman/listinfo/webkit-dev
>>>>> 
>>>> 
>> 
> 



Re: [webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-21 Thread Mark Lam
Oh yeah, I forgot about that.  I think the register state is more important for 
crash analysis, especially if we can make sure that the compiler does not 
aggregate the int3s.  I’ll explore alternatives.

> On Feb 21, 2017, at 5:54 PM, Saam barati <sbar...@apple.com> wrote:
> 
> I thought the main point of moving to SIGTRAP was to preserve register state?
> 
> That said, there are probably places where we care more about the message 
> than the registers.
> 
> - Saam
> 
>> On Feb 21, 2017, at 5:43 PM, Mark Lam <mark@apple.com> wrote:
>> 
>> Is there a reason why RELEASE_ASSERT (and friends) does not call 
>> WTFReportAssertionFailure() to report where the assertion occurs?  Is this 
>> purely to save memory?  svn blame tells me that it has been this way since 
>> the introduction of RELEASE_ASSERT in r140577 many years ago.
>> 
>> Would anyone object to adding a call to WTFReportAssertionFailure() in 
>> RELEASE_ASSERT() like we do for ASSERT()?  One upside (side effect) 
>> of adding this call is that it appears to stop the compiler from aggregating 
>> all the RELEASE_ASSERTS into a single code location, and this will help with 
>> post-mortem crash debugging.
>> 
>> Any thoughts?
>> 
>> Mark
>> 
> 



[webkit-dev] Why does RELEASE_ASSERT not have an error message?

2017-02-21 Thread Mark Lam
Is there a reason why RELEASE_ASSERT (and friends) does not call 
WTFReportAssertionFailure() to report where the assertion occurs?  Is this 
purely to save memory?  svn blame tells me that it has been this way since the 
introduction of RELEASE_ASSERT in r140577 many years ago.

Would anyone object to adding a call to WTFReportAssertionFailure() in 
RELEASE_ASSERT() like we do for ASSERT()?  One upside (side effect) of 
adding this call is that it appears to stop the compiler from aggregating all 
the RELEASE_ASSERTS into a single code location, and this will help with 
post-mortem crash debugging.

Any thoughts?

Mark



Re: [webkit-dev] Reducing the use of EncodedJSValue and use JSValue directly instead.

2017-01-04 Thread Mark Lam
JF and I took a cursory look at this before, but I think it warrants a more 
rigorous check before we commit to this change.  I’ll do the due diligence.

> On Jan 3, 2017, at 2:54 PM, Geoffrey Garen <gga...@apple.com> wrote:
> 
> EncodedJSValue was always just a work-around to convince the compiler to put 
> a JSValue in registers (instead of on the stack, with different compilers 
> disagreeing about where on the stack).
> 
> I agree with removing EncodedJSValue if possible.
> 
> Did something change to make this possible? For example, are you sure that 
> Windows and 32bit are OK with this change?
> 
> I don’t understand why C linkage would affect things: Either the compiler 
> puts structs in registers or it doesn’t.
> 
> Geoff
> 
>> On Jan 3, 2017, at 2:44 PM, Filip Pizlo <fpi...@apple.com 
>> <mailto:fpi...@apple.com>> wrote:
>> 
>> I think that this is great!
>> 
>> I agree with the policy that we should use JSValue everywhere that it would 
>> give us the same codegen/ABI (args in registers, return in registers, etc). 
>> 
>> -Filip
>> 
>> On Jan 3, 2017, at 14:33, Mark Lam <mark@apple.com 
>> <mailto:mark@apple.com>> wrote:
>> 
>>> Over the holiday period, I looked into the possibility of giving 
>>> EncodedJSValue a default constructor (because I didn’t like having to 
>>> return encodedJSValue() instead of { } in lots of places), and learned why 
>>> we had EncodedJSValue in the first place (i.e. for C linkage).  This led me 
>>> to discover (in my reading of the code) that we don’t really need to use 
>>> EncodedJSValue for most of our host functions (those of type 
>>> NativeFunction, GetValueFunc, and PutValueFunc).  
>>> 
>>> I propose that we switch to using JSValue directly where we can.  This has 
>>> the benefit of:
>>> 1. better type safety with the use of JSValue (EncodedJSValue is an int64_t 
>>> typedef).
>>> 2. more readable code (less marshaling back and forth with 
>>> JSValue::encode()/decode()).
>>> 
>>> The patch for this change can be found here:
>>> https://bugs.webkit.org/show_bug.cgi?id=166658 
>>> <https://bugs.webkit.org/show_bug.cgi?id=166658>
>>> 
>>> Perf is neutral.  Any opinions?
>>> 
>>> Thanks.
>>> 
>>> Mark
> 



[webkit-dev] Reducing the use of EncodedJSValue and use JSValue directly instead.

2017-01-03 Thread Mark Lam
Over the holiday period, I looked into the possibility of giving EncodedJSValue 
a default constructor (because I didn’t like having to return encodedJSValue() 
instead of { } in lots of places), and learned why we had EncodedJSValue in the 
first place (i.e. for C linkage).  This led me to discover (in my reading of 
the code) that we don’t really need to use EncodedJSValue for most of our host 
functions (those of type NativeFunction, GetValueFunc, and PutValueFunc).  

I propose that we switch to using JSValue directly where we can.  This has the 
benefit of:
1. better type safety with the use of JSValue (EncodedJSValue is an int64_t 
typedef).
2. more readable code (less marshaling back and forth with 
JSValue::encode()/decode()).

The patch for this change can be found here:
https://bugs.webkit.org/show_bug.cgi?id=166658 


Perf is neutral.  Any opinions?

Thanks.

Mark


[webkit-dev] Said Abou-Hallawa is now a WebKit reviewer

2016-10-31 Thread Mark Lam
Hi everyone,

I would like to announce that Said Abou-Hallawa is now a WebKit reviewer.  
Said’s expertise is on SVG/graphics and image decoders.  You may now ask Said 
to help review your code.

Said, congratulations.

Mark


[webkit-dev] Wenson Hsieh is now a WebKit reviewer

2016-10-24 Thread Mark Lam
Hi everyone,

I would like to announce that Wenson Hsieh is now a WebKit reviewer.  Wenson 
has worked on CSS Scroll Snapping, HTML forms, iOS gesture code, HTML editing, 
media, and more.  You may now ask Wenson to help review your code.

Wenson, congratulations.

Mark



[webkit-dev] Manuel Rego Casasnovas is now a WebKit Reviewer

2016-07-30 Thread Mark Lam
Hi everyone,

I just want to announce that Manuel Rego Casasnovas (aka rego) is now a WebKit 
Reviewer.  He has expertise in CSS, layout, selection, and the GTK port.  You 
may now ping him for reviews.

Congratulations, Rego.

Cheers,
Mark



[webkit-dev] Jon Lee is now a WebKit Reviewer

2016-05-31 Thread Mark Lam
FYI, I just want to announce that Jon Lee is now a WebKit reviewer.  You may 
now send him patches to review.

Jon, congratulations.

Mark



[webkit-dev] Keith Miller is now a WebKit reviewer

2016-04-05 Thread Mark Lam
Hi everyone,

Just want to announce that Keith Miller is now a reviewer.  Keith primarily 
works on JavaScriptCore.  Feel free to ask him for reviews.

Congratulations, Keith.

Mark



[webkit-dev] Joanmarie Diggs is now a WebKit reviewer

2016-03-28 Thread Mark Lam
Hi everyone,

Just want to announce that Joanmarie Diggs is now a reviewer.  Joanmarie 
primarily works on GTK accessibility and is a current editor of the ARIA spec 
(https://www.w3.org/TR/wai-aria-1.1/).

Congratulations, Joanmarie.

Mark


Re: [webkit-dev] Sukolsak Sakshuwong is now a WebKit Reviewer

2016-03-08 Thread Mark Lam
Turns out Sukolsak wasn’t on webkit-dev earlier.  But he is now.

Sukol, Filip also says congrats, but was not on the thread below.

Mark


> On Mar 8, 2016, at 7:26 PM, Saam barati <sbar...@apple.com> wrote:
> 
> Congrats Sukol!
> 
> Saam
> 
>> On Mar 8, 2016, at 6:33 PM, Yusuke SUZUKI <utatane@gmail.com 
>> <mailto:utatane@gmail.com>> wrote:
>> 
>> Congrats!
>> 
>> ---
>> Regards,
>> Yusuke Suzuki
>> 
>> On Wed, Mar 9, 2016 at 6:01 AM, Ryosuke Niwa <rn...@webkit.org 
>> <mailto:rn...@webkit.org>> wrote:
>> Congratulations, Sukolsak!
>> 
>> - R. Niwa
>> 
>> On Tue, Mar 8, 2016 at 11:33 AM, Mark Lam <mark@apple.com 
>> <mailto:mark@apple.com>> wrote:
>> Hi everyone,
>> 
>> Just want to announce that Sukolsak Sakshuwong is now a reviewer.  
>> Congratulations, Sukolsak.
>> 
>> Mark
>> 


[webkit-dev] Sukolsak Sakshuwong is now a WebKit Reviewer

2016-03-08 Thread Mark Lam
Hi everyone,

Just want to announce that Sukolsak Sakshuwong is now a reviewer.  
Congratulations, Sukolsak.

Mark



[webkit-dev] PSA: Distinguishing between failed release assertions and other crashes

2016-02-10 Thread Mark Lam
As I shared in a previous email, I’ll be changing the implementation of 
WTFCrash() for non-debug OS(DARWIN) builds to use an inlined asm statement that 
issues a breakpoint instruction.  See 
https://bugs.webkit.org/show_bug.cgi?id=153996 for details and discussion.  I 
will be landing this patch shortly.

After this patch lands, for release builds, failed assertions will manifest as 
a EXC_BREAKPOINT (SIGTRAP).  Failed assertions should be your only source of 
EXC_BREAKPOINT crashes.
For debug builds, failed assertions will continue to manifest as EXC_BAD_ACCESS 
(SIGSEGV) on access of invalid address 0xbbadbeef.

Again, this change will only apply to OS(DARWIN) builds.  I’m leaving it up to 
the linux folks to decide if they also want to adopt this behavior on linux.

Thanks.

Mark


Re: [webkit-dev] PSA: Distinguishing between failed release assertions and other crashes

2016-02-10 Thread Mark Lam
FYI, the change has landed in r196397: <http://trac.webkit.org/r196397>.

> On Feb 10, 2016, at 3:01 PM, Mark Lam <mark@apple.com> wrote:
> 
> As I shared in a previous email, I’ll be changing the implementation of 
> WTFCrash() for non-debug OS(DARWIN) builds to use an inlined asm statement 
> that issues a breakpoint instruction.  See 
> https://bugs.webkit.org/show_bug.cgi?id=153996 for details and discussion.  I 
> will be landing this patch shortly.
> 
> After this patch lands, for release builds, failed assertions will manifest 
> as a EXC_BREAKPOINT (SIGTRAP).  Failed assertions should be your only source 
> of EXC_BREAKPOINT crashes.
> For debug builds, failed assertions will continue to manifest as 
> EXC_BAD_ACCESS (SIGSEGV) on access of invalid address 0xbbadbeef.
> 
> Again, this change will only apply to OS(DARWIN) builds.  I’m leaving it up 
> to the linux folks to decide if they also want to adopt this behavior on 
> linux.
> 
> Thanks.
> 
> Mark
> 



Re: [webkit-dev] Change WTFCrash to not trash the crash site register state.

2016-02-08 Thread Mark Lam
A store to 0xbbadbeef will still require the use of a register (at least on 
ARM).  The breakpoint instruction uses no registers (hence, we don’t have to 
choose which register to sacrifice).  We can still identify the crash as an 
assertion by looking for the EXC_BREAKPOINT instead of the 0xbbadbeef address.

Mark


> On Feb 8, 2016, at 12:14 PM, Filip Pizlo <fpi...@apple.com> wrote:
> 
> I like this idea.  I’ve wanted this for a while.
> 
> Can you explain why your approach doesn’t inline a store to 0xbbadbeef, so 
> that this aspect of the current behavior is preserved?
> 
> -Filip
> 
> 
>> On Feb 8, 2016, at 11:55 AM, Mark Lam <mark@apple.com> wrote:
>> 
>> Hi WebKit folks,
>> 
>> For non-debug OS(DARWIN) builds, I would like to change WTFCrash()’s 
>> implementation into an inlined function that has a single inlined asm 
>> statement that issues a breakpoint trap.  The intent is to crash directly in 
>> the caller’s frame and preserve the register values at the time of the 
>> crash.  As a result, for non-debug OS(DARWIN) builds, crashes due to failed 
>> RELEASE_ASSERTs will now show up in crash reports as crashing due to 
>> EXC_BREAKPOINT (SIGTRAP) instead of a EXC_BAD_ACCESS (SIGSEGV) on address 
>> 0xbbadbeef.
>> 
>> This is in contrast to the current implementation where WTFCrash() is a 
>> function that calls a lot of handler / callback functions before actually 
>> crashing.  As a result, by the time it crashes, the caller’s register values 
>> have most likely been trashed by all the work that WTFCrash and the 
>> handlers / callbacks do.  The register values in the captured crash report 
>> will, therefore, no longer be useful for crash site analysis. 
>> 
>> You can find the patch for this change at 
>> https://bugs.webkit.org/show_bug.cgi?id=153996.  This change will only be 
>> applied for non-debug OS(DARWIN) builds for now.  I’m leaving all other 
>> build build configurations with the existing WTFCrash() implementation and 
>> behavior.
>> 
>> Does anyone have any opinion / feedback on this change?
>> 
>> Thanks.
>> 
>> Regards,
>> Mark
>> 
> 



[webkit-dev] Change WTFCrash to not trash the crash site register state.

2016-02-08 Thread Mark Lam
Hi WebKit folks,

For non-debug OS(DARWIN) builds, I would like to change WTFCrash()’s 
implementation into an inlined function that has a single inlined asm statement 
that issues a breakpoint trap.  The intent is to crash directly in the caller’s 
frame and preserve the register values at the time of the crash.  As a result, 
for non-debug OS(DARWIN) builds, crashes due to failed RELEASE_ASSERTs will now 
show up in crash reports as crashing due to EXC_BREAKPOINT (SIGTRAP) instead of 
a EXC_BAD_ACCESS (SIGSEGV) on address 0xbbadbeef.

This is in contrast to the current implementation where WTFCrash() is a 
function that calls a lot of handler / callback functions before actually 
crashing.  As a result, by the time it crashes, the caller’s register values 
have most likely been trashed by all the work that WTFCrash and the handlers 
/ callbacks do.  The register values in the captured crash report will, 
therefore, no longer be useful for crash site analysis. 

You can find the patch for this change at 
https://bugs.webkit.org/show_bug.cgi?id=153996.  This change will only be 
applied for non-debug OS(DARWIN) builds for now.  I’m leaving all other 
build configurations with the existing WTFCrash() implementation and behavior.

Does anyone have any opinion / feedback on this change?

Thanks.

Regards,
Mark



[webkit-dev] Xabier Rodriguez-Calvar and Michael Catanzaro are now WebKit reviewers

2015-12-22 Thread Mark Lam
Hi everyone,

With pleasure, I would like to announce that Xabier Rodriguez-Calvar and 
Michael Catanzaro are now WebKit reviewers.

Congratulations to Xabier and Michael.

Mark



[webkit-dev] Youenn Fablet is now a WebKit reviewer

2015-10-26 Thread Mark Lam
Hello everyone,

I would like to announce that Youenn Fablet is now a WebKit reviewer.  Please 
send him your congratulations, and some requests for patch reviews too. :-)

Youenn, congratulations.

Mark



[webkit-dev] Lucas Forschler is now a WebKit reviewer

2015-09-24 Thread Mark Lam
Hi everyone,

Just want to announce that Lucas Forschler is now a reviewer.  Congratulations, 
Lucas.

Mark


Re: [webkit-dev] Infinite JavaScript loop blocks the MiniBrowser

2015-07-30 Thread Mark Lam
Pascal,
I was wrong.  Please see https://bugs.webkit.org/show_bug.cgi?id=131082#c1 for 
details.

The gist of it is that the existing watchdog mechanism requires that the user 
of the VM explicitly reset it before any scripts are allowed to run again.  For 
example, you can do that by calling JSContextGroupSetExecutionTimeLimit() again.

Hence, you’re probably not resetting the watchdog after a time out.

Regards,
Mark


 On Jul 29, 2015, at 9:54 AM, Mark Lam mark@apple.com wrote:
 
 Pascal,
 
 You can take a look at ExecutionTimeLimitTest.cpp for examples of the 
 watchdog in action.  The way the watchdog works is by throwing a 
 non-catchable TerminatedExecutionException.  Normally, the JSC APIs will 
 clear the vm->exception() before returning to client code; the exception is 
 returned to the caller via an argument.  Thereafter, you should be able to 
 re-enter the VM and execute JS code like normal without needing to do any 
 explicit resets.
 
 If you’re not able to do this, then I suspect you have a code path where the 
 termination exception is not cleared before re-entering the VM.  All the 
 entry points in Interpreter.cpp already check for this with an 
 “ASSERT(!vm.exception())”.  If you see that assertion fail on a debug build 
 when re-entering the VM, then the bug is that the exception is not being 
 cleared.  If you don’t see that assertion, then you’ll need to do some 
 debugging to see what went wrong.
 
 Another detail is that the Watchdog::Scope is responsible for disarming the 
 watchdog when you exit the VM.  Watchdog::Scope is already being used in all 
 the proper places.  Disarming in this case also means that the watchdog will 
 be ready to start fresh when you enter the VM again.  If you have a bug in 
 your disarming code where it’s not resetting the watchdog, then it is possible 
 that your watchdog is firing immediately when you re-enter the VM.  Some 
 tracing / debugging in your watchdog implementation should quickly show if 
 that’s the case.
 
 Regards,
 Mark
 
 
 
 On Jul 28, 2015, at 11:14 AM, Geoffrey Garen gga...@apple.com 
 mailto:gga...@apple.com wrote:
 
 Mark, do you know how to restart JavaScript after it has reached a watchdog 
 time limit?
 
 Geoff
 
 On Jul 28, 2015, at 9:09 AM, Pascal Jacquemart p.jacquem...@samsung.com 
 mailto:p.jacquem...@samsung.com wrote:
 
 Hello,
 
 I am trying to protect the MiniBrowser from executing faulty JavaScript 
 code taking too much time / CPU. All browsers normally raise a pop-up 
 allowing the user to stop the script and run away. 
 But MiniBrowser does not seem to have such feature. It is just stuck 
 forever ;-(
 
 After a little digging I found this JSC API: 
 JSContextGroupSetExecutionTimeLimit()
 I had to implement a JSC Watchdog back-end because it is not implemented 
 for EFL, fair enough - https://bugs.webkit.org/show_bug.cgi?id=147107 
 https://bugs.webkit.org/show_bug.cgi?id=147107 (ongoing)
 
 Now the JSContextGroupSetExecutionTimeLimit() have the expected behaviour.
 I can stop the JavaScript execution and run away... Great but the big 
 problem now is that the JavaScript won't restart. Even while loading other 
 pages, the JavaScript remains disabled.
 
 If you have any hints about this, I would be grateful.
 How to restart JavaScript execution? Where to look? Is it an EFL bug?
 
 Thanks,   Pascal Jacquemart
 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Infinite JavaScript loop blocks the MiniBrowser

2015-07-29 Thread Mark Lam
Pascal,

You can take a look at ExecutionTimeLimitTest.cpp for examples of the watchdog 
in action.  The way the watchdog works is by throwing a non-catchable 
TerminatedExecutionException.  Normally, the JSC APIs will clear the 
vm->exception() before returning to client code; the exception is returned to 
the caller via an argument.  Thereafter, you should be able to re-enter the VM 
and execute JS code like normal without needing to do any explicit resets.

If you’re not able to do this, then I suspect you have a code path where the 
termination exception is not cleared before re-entering the VM.  All the entry 
points in Interpreter.cpp already check for this with an 
“ASSERT(!vm.exception())”.  If you see that assertion fail on a debug build 
when re-entering the VM, then the bug is that the exception is not being 
cleared.  If you don’t see that assertion, then you’ll need to do some 
debugging to see what went wrong.

Another detail is that the Watchdog::Scope is responsible for disarming the 
watchdog when you exit the VM.  Watchdog::Scope is already being used in all 
the proper places.  Disarming in this case also means that the watchdog will be 
ready to start fresh when you enter the VM again.  If you have a bug in your 
disarming code where it’s not resetting the watchdog, then it is possible that 
your watchdog is firing immediately when you re-enter the VM.  Some tracing / 
debugging in your watchdog implementation should quickly show if that’s the 
case.

Regards,
Mark



 On Jul 28, 2015, at 11:14 AM, Geoffrey Garen gga...@apple.com wrote:
 
 Mark, do you know how to restart JavaScript after it has reached a watchdog 
 time limit?
 
 Geoff
 
On Jul 28, 2015, at 9:09 AM, Pascal Jacquemart <p.jacquem...@samsung.com> wrote:
 
 Hello,
 
 I am trying to protect the MiniBrowser from executing faulty JavaScript code 
 taking too much time / CPU. All browsers normally raise a pop-up allowing 
 the user to stop the script and run away. 
But MiniBrowser does not seem to have such a feature. It is just stuck forever 
 ;-(
 
 After a little digging I found this JSC API: 
 JSContextGroupSetExecutionTimeLimit()
 I had to implement a JSC Watchdog back-end because it is not implemented for 
EFL, fair enough - https://bugs.webkit.org/show_bug.cgi?id=147107 (ongoing)
 
Now the JSContextGroupSetExecutionTimeLimit() has the expected behaviour.
 I can stop the JavaScript execution and run away... Great but the big 
 problem now is that the JavaScript won't restart. Even while loading other 
 pages, the JavaScript remains disabled.
 
 If you have any hints about this, I would be grateful.
 How to restart JavaScript execution? Where to look? Is it an EFL bug?
 
 Thanks,   Pascal Jacquemart
 

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Using OfflineAsm

2014-09-05 Thread Mark Lam
Hi Chris,
I’ll try to answer your questions below …

 On Sep 5, 2014, at 3:23 PM, Vienneau, Christopher cvienn...@ea.com wrote:
 
 Hello,
  
I’m looking for some information about the JavaScript interpreter ASM 
 backend, aka OfflineAsm.  
  
 First a bit of background information; the code I have been using is about a 
 year old, on several platforms we don’t run with JIT and won’t be able to 
 enable it.  This means that we’ve been using the LLINT_C_LOOP path in these 
 cases.  It was considered that the ASM backend may be able to improve 
 performance in this case, currently I’m taking a look at how things are done 
 in a more recent snapshot of the trunk to be sure this development effort 
 will be going in the right direction on what is done there.  As a side note I 
 see that LLINT_C_LOOP has been replaced with #if !ENABLE(JIT) which I guess 
 is pretty much how it has always behaved.

Yes, LLINT_C_LOOP is synonymous with !ENABLE_JIT.  The C loop is only meant as a 
stand-in for ports that don’t want to spend the effort to get the JIT running.


 So my questions are:
 1)  How is the ASM backend intended to be used? Is there a doc that 
 covers this?

The code is the only documentation on this.  Some context that may help guide 
you in understanding how it works:

1. The “ASM” code for the interpreter is in 
Source/JavaScriptCore/llint/LowLevelInterpreter*.asm.
2. This ASM code is assembled using the offlineasm assembler which is a ruby 
program.
3. The code for the offlineasm assembler is in Source/JavaScriptCore/offlineasm.
4. At build time:
   1. LLIntOffsetsExtractor.cpp (which contains a lot of offsets that the 
      LLINT needs) is built for the target platform.
   2. The offlineasm will parse the binary generated from step 1 for the 
      offset values.
   3. The offlineasm will parse the LLINT ASM files plus the previously 
      computed offsets, and emit target-specific assembly code using an 
      appropriate backend.  See Source/JavaScriptCore/offlineasm/backends.rb 
      for a list of current backends.
   4. The generated target-specific “assembly” file (typically named 
      LLIntAssembly.h) is assembled with the target build tools.  For most 
      platforms, LLIntAssembly.h contains inline assembly that is #include’d 
      into llint/LowLevelInterpreter.cpp and built.  On Windows, an actual 
      ASM file is generated (not inline assembly) and is built using MASM.
5. The built assembly code is linked into the JSC VM.

 2)  Can I essentially move from the “C LOOP” to neither “C LOOP” nor 
 “JIT”? or is the ASM backend dependent on JIT itself?  It appears this would 
 be handled by the defines in 
 Source\JavaScriptCore\llint\LLIntOfflineAsmConfig.h

LLINT_C_LOOP and ENABLE_JIT are mutually exclusive.  If you want the JIT, you 
must support the ASM LLINT.  Once you have the ASM LLINT, you will essentially 
have JIT support (unless you’re working with some exotic CPU that is not 
currently supported).

That said, you can build the ASM LLINT and not run with the JIT.  You can set 
Options::useJIT() = false to only run with the LLINT.

 3)  If JIT is expected to use the ASM backend is there a way this can be 
 worked around?

Already answered above.

 4)  Should I actually expect a performance increase compared to the “C 
 LOOP”?

Yes.  The ASM LLINT should be faster than the C LOOP LLINT.

 5)  Are there any other suggestions on how to improve performance of 
 JavaScript code in a non-JIT port?

I presume you mean options that are currently supported.  The ASM LLINT is 
your best bet.  Alternatively, you can try optimizing the C Loop LLINT if you 
don’t want to tackle the ASM LLINT.  The C Loop LLINT is intended to be simple 
rather than highly optimized, but any improvement that doesn’t come at the 
cost of too much added complexity is welcome.

Regards,
Mark


  
 Thanks for any feedback
  
 Chris Vienneau
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Using OfflineAsm

2014-09-05 Thread Mark Lam

 On Sep 5, 2014, at 4:05 PM, Mark Lam mark@apple.com wrote:
 4)  Should I actually expect a performance increase compared to the “C 
 LOOP”?
 
 Yes.  The ASM LLINT should be faster than the C LOOP LLINT.

To clarify, the C LOOP LLINT emulates a CPU’s behavior using emitted C 
instructions.  As a result, it incurs a performance penalty of under 10% (vs 
the ASM LLINT), if I remember correctly.  So, the perf gain you’ll get comes 
from recouping the losses due to the C loop being a CPU emulator.  You should 
not expect to see any of the more substantive gains that come from the JITs.

Regards,
Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] ftlopt merge - basically done

2014-08-08 Thread Mark Lam
Ossy,

Filip is on vacation.  I’m keeping an eye on this issue, and will see what we 
can do about it.  It may take some time.  Just letting you know that we’re not 
ignoring it.  Thanks.

Mark


 On Aug 8, 2014, at 12:59 AM, Osztrogonác Csaba o...@inf.u-szeged.hu wrote:
 
 Hi Filip,
 
 the latest part of ftlopt branch merge made some performance
 tests crash everywhere. I filed a bug report with the details:
 https://bugs.webkit.org/show_bug.cgi?id=135750
 
 Could you possibly fix this regression?
 
 br,
 Ossy
 
 Filip Pizlo wrote:
 Hi everyone,
 The JSC ftlopt branch is basically merged.  I think I have one more 
 revision to merge over, and it is a minor one.  Please don't land more 
 things on the branch.  Landing on trunk is fine; it's unlikely to get in my 
 way as I merge the last revision over.
 Thanks to everyone who helped with diagnosing problems and fixing things!
 -Filip
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Questions about JSC CLoop crashes on ppc64

2014-04-01 Thread Mark Lam
cc’ing webkit-dev because other folks may be interested in this information, 
and/or may also be able to help.

On Apr 1, 2014, at 6:57 AM, Tomas Popela tpop...@redhat.com wrote:

 Hi Mark,
 if you don't mind I have some questions for you regarding 
 https://bugs.webkit.org/show_bug.cgi?id=128743 - CLoop crashes on PPC64, 
 S390X - as I'm lost in it.  While looking at it I came to 
 https://bugs.webkit.org/show_bug.cgi?id=97586#c10 where you suggest modifying 
 llint/LowLevelInterpreter.cpp and adding some debug prints. I tried it and 
 I saw that t0.i32 != t0.i.  What could that mean?
 
 t0.i32 = 0x3fffd709693c
 t0.i   = 0x3fffd7096938
 Setting t0.i = 0xfedcba9876543211;
 raw t0  = 0xfedcba9876543211
 Setting t0.i32 = 0x;
 raw t0  = 0xfedcba98
 
 Also another thing. I managed to compile trunk on that PPC64 machine. When I 
 open jsc and just type 1 there it crashes as well. I have modified the 
 *.asm files to print names of macros and labels to see the flow with:
 
 $ sed 's@^\(\..*\):$@\ncloopDo // printf("\1\\n");@' LowLevelInterpreter.asm
 $ sed 's@^macro \(.*\).*$@\ncloopDo // printf("\1\\n");@' LowLevelInterpreter.asm
 
FYI, there’s already some built-in tracing functionality for the C loop LLINT.  
In LowLevelInterpreter.cpp, search for TRACE_OPCODE.  You can define 
ENABLE_TRACE_OPCODE 1 to enable that code.  Similar to the tracing that you’ve 
added, it will tell you which opcodes are being executed, but only at the opcode 
granularity.


 and the same for LowLevelInterpreter64.asm as well. The flow is different 
 between ppc64 and x86_64:
 
 [tpopela@ibm-power7r2-01 bin]$ ./jsc 
  1
  t0.i32 = 0x31982c2c
  t0.i   = 0x31982c28
  Setting t0.i = 0xfedcba9876543211;
  raw t0  = 0xfedcba9876543211
  Setting t0.i32 = 0x;
  raw t0  = 0xfedcba98
 doCallToJavaScript(makeCall)
 callToJavaScriptPrologue()
 checkStackPointerAlignment(tempReg, location)
 .stackHeightOK
 .copyHeaderLoop
 .copyHeaderLoop
 .copyHeaderLoop
 .copyHeaderLoop
 .copyHeaderLoop
 .copyArgs
 .copyArgsLoop
 Segmentation fault
 
 When I do this on my x86_64 machine the process flow is different.
 
  tpopela  ~/dev/WebKit/WebKitBuild/Debug/bin14:24  [master]  
 $ ./jsc 
  1
 doCallToJavaScript(makeCall)
 callToJavaScriptPrologue()
 checkStackPointerAlignment(tempReg, location)
 .stackHeightOK
 .copyHeaderLoop
 .copyHeaderLoop
 .copyHeaderLoop
 .copyHeaderLoop
 .copyHeaderLoop
 .fillExtraArgsLoop  <-
 .copyArgs
 .copyArgsLoop
 .copyArgsDone
 checkStackPointerAlignment(tempReg, location)
 makeJavaScriptCall(entry, temp)
 prologue(codeBlockGetter, codeBlockSetter, osrSlowPath, traceSlowPath)
 preserveCallerPCAndCFR()
 notFunctionCodeBlockGetter(targetRegister)
 notFunctionCodeBlockSetter(sourceRegister)
 moveStackPointerForCodeBlock(codeBlock, scratch)
 dispatch(advance)
 jumpToInstruction()
 traceExecution()
 checkStackPointerAlignment(tempReg, location)
 .opEnterDone
 callSlowPath(slowPath)
 prepareStateForCCall()
 cCall2(function, arg1, arg2)
 checkStackPointerAlignment(tempReg, location)
 restoreStateAfterCCall()
 dispatch(advance)
 jumpToInstruction()
 traceExecution()
 loadConstantOrVariable(index, value)
 .constant
 .done
 dispatch(advance)
 jumpToInstruction()
 traceExecution()
 loadConstantOrVariable(index, value)
 .constant
 .done
 dispatch(advance)
 jumpToInstruction()
 traceExecution()
 checkSwitchToJITForEpilogue()
 checkSwitchToJIT(increment, action)
 assertNotConstant(index)
 assert(assertion)
 doReturn()
 restoreCallerPCAndCFR()
 checkStackPointerAlignment(tempReg, location)
 .calleeFramePopped
 checkStackPointerAlignment(tempReg, location)
 callToJavaScriptEpilogue()
 1

Looking at .fillExtraArgsLoop in LowLevelInterpreter64.asm, I see that it is 
part of the loop that starts at .copyHeaderLoop.
I suggest you add some “cloopDo // printf("…");” tracing in the instructions 
there and inspect the CLoopRegisters to see what values they contain and why 
the difference exists.

You can also add “cloopDo // myTestFunction();” and set a gdb breakpoint in 
myTestFunction().  Thereafter, step thru the code to see how it works and 
compare between x86_64 and ppc64. 


 When it crashes on ppc64 it is crashing here:
 
 106   pcBase.i64 = *CAST<int64_t*>(t3.i8p + (pc.i << 3));  // 
 /home/tpopela/WebKit/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm:225
 (gdb) p t3
 $1 = {{i = 0, u = 0, {i32padding = 0, i32 = 0}, {u32padding = 0, u32 = 0}, 
 {i8padding = \000\000\000\000\000\000, i8 = 0 '\000'}, {u8padding = 
 \000\000\000\000\000\000, u8 = 0 '\000'}, ip = 0x0, i8p = 0x0, vp = 0x0, 
 callFrame = 0x0, execState = 0x0, instruction = 0x0, vm = 0x0, cell = 0x0, 
 protoCallFrame = 0x0, nativeFunc = 0x0, i64 = 0, u64 = 0, encodedJSValue = 0, 
 castToDouble = 0, opcode = 0x0}}
 
 As you see the t3 register is completely empty, thus it crashes. The problem 
 is that it went to copy the args as
 
 if (pc.i32 == 0)

Re: [webkit-dev] Seeking help/guidance with ParserError::StackOverflow porting JavaScriptCore to Android

2014-02-25 Thread Mark Lam
One more time from the right email address:

Eric,

As a rule of thumb, you’ll want the following:
   errorModeRequiredStack < requiredStack < total stack size.

There is some minimum amount of stack space that some reasonable native 
(C/C++) code can use without needing to do a stack check.  Let’s call that 
amount N_min.
In addition, we will want to be able to enter the VM and execute some minimal 
amount of JS code to throw an exception / error.  Let’s call this amount 
N_exception.

With that, the following should be true:
1. errorModeRequiredStack >= N_min
2. requiredStack >= N_min + N_exception

We picked 64k as a conservative estimate for both the N values.  Your fix 
should be to determine whether you can make do with smaller values for N_min 
and N_exception on your system.

The downside of picking an N_min that is too small is:
1. your app crashes if the stack edge is guarded by a guard page, and you don’t 
overshoot the stack by more than the size of that guard page.
2. your app corrupts memory beyond the edge of the stack if there isn’t a guard 
page there, or if you overshoot the stack by more than the size of the guard 
page.

The downside of picking an N_exception that is too small is that you will get an 
uninformative StackOverflowError where the JS stack trace may not contain 
useful info.

If your total stack is that small, you will need to work with smaller N_min and 
N_exception values.

Regards,
Mark

On Feb 21, 2014, at 6:05 PM, Eric Wing ewmail...@gmail.com wrote:

 Please bear with me. I'm trying to articulate a problem that another
 developer ran into. Both of us only know certain pieces of the problem 
 so I offered to post for both of us.
 
 Quick background: I've been working with Appcelerator to port
 JavaScriptCore to Android. The Appcelerator branch forked off of
 Webkit I think in the July timeframe, shortly after WWDC. There are
 plans to remerge with mainline, but this is down the road.
 
 I currently have a working JavaScriptCore that runs on my 1st gen
 Nexus 7 running 4.4. We wrote a test262 conformance app to check our
 work, and it passes the 11,000+ tests with less than 20 failures.
 
 I was contacted by the other developer (Shuan Zhao) who had found my
 branch and reproduced my build of JavaScriptCore for Android. However, on 
 their Android devices (Android 4.0.4, 4.1.2 and 4.3 (one HTC Desire
 HD, one CoolPad and one Samsung Galaxy S4), JavaScriptCore fails to
 successfully execute scripts when on the main thread. It appears that
 ParserError::StackOverflow is triggered inside JavaScriptCore.
 
 So our belief is that there is some problem related to the stack space
 on these devices or these versions of Android.
 
 We started in Source/WTF/wtf/StackBounds.cpp, we found:
 
 void StackBounds::initialize()
 {
     void* stackBase = 0;
     size_t stackSize = 0;
 
     pthread_t thread = pthread_self();
     pthread_attr_t sattr;
     pthread_attr_init(&sattr);
 #if HAVE(PTHREAD_NP_H) || OS(NETBSD)
     // e.g. on FreeBSD 5.4, neund...@kde.org
     pthread_attr_get_np(thread, &sattr);
 #else
     // FIXME: this function is non-portable; other POSIX systems may have
     // different np alternatives
     pthread_getattr_np(thread, &sattr);
 #endif
     int rc = pthread_attr_getstack(&sattr, &stackBase, &stackSize);
     (void)rc; // FIXME: Deal with error code somehow? Seems fatal.
     ASSERT(stackBase);
     pthread_attr_destroy(&sattr);
     m_bound = stackBase;
     m_origin = static_cast<char*>(stackBase) + stackSize;
 }
 
 
 Shuan Zhao says that running on the main thread, the stack size is
 131072 (the pthread_attr_getstack and pthread_attr_getstacksize functions
 returned the same result), whereas it is 1M when running on a new thread. I
 don't know why it is much smaller on the main thread. The stack size I
 get from a program running on my Mac OS is 512K, which is also smaller
 than 1M.
 
 Additionally:
 I see the following code in JSCore. The value returned by
 requiredCapacity is 128K when not in error handling mode, which is as
 much as the stack size I get on the main thread. I think this will
 cause isSafeToRecurse to return false, which will cause a stack
 overflow exception. I don't know how to fix this.
 
 
 class VMStackBounds {
 public:
     VMStackBounds(VM& vm, const StackBounds& bounds)
         : m_vm(vm)
         , m_bounds(bounds)
     {
     }
 
     bool isSafeToRecurse() const { return m_bounds.isSafeToRecurse(requiredCapacity()); }
 
 private:
     inline size_t requiredCapacity() const
     {
         Interpreter* interpreter = m_vm.interpreter;
 
         // We have two separate stack limits, one for regular JS execution, and one
         // for when we're handling errors. We need the error stack to be smaller
         // otherwise there would obviously not be any stack left to execute JS in when
         // there's a stack overflow.
         //
         // These sizes were derived from the stack usage of a number of sites when
         // layout occurs when we've already consumed most of the C stack.
         const size_t 

[webkit-dev] C stack direction

2013-11-20 Thread Mark Lam
Hi folks,

I’m currently doing some work involving checks on the bounds of the native 
stack.  According to WTF::StackBounds, it appears that WINCE is the only port 
that can potentially have a stack that grows up.  Is anyone still building, 
testing, and using this configuration with an upward growing stack?  I’m 
wondering whether to drop support for upward growing stacks altogether.  Our 
JITs already don’t support it.  Any opinions?

Thanks.

Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Moving LayoutTests/fast/js to LayoutTests/js

2013-09-10 Thread Mark Lam
FYI, the move has been completed.

On Sep 6, 2013, at 5:06 PM, Mark Lam mark@apple.com wrote:

 This is a courtesy notice: FYI, I’m in the process of moving 
 LayoutTests/fast/js to LayoutTests/js for 
 https://bugs.webkit.org/show_bug.cgi?id=120899.  This change will touch many 
 files in the test files and in Tools/Scripts to update the paths that expect 
 fast/js to look for js instead.
 
 Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Moving LayoutTests/fast/js to LayoutTests/js

2013-09-06 Thread Mark Lam
This is a courtesy notice: FYI, I’m in the process of moving 
LayoutTests/fast/js to LayoutTests/js for 
https://bugs.webkit.org/show_bug.cgi?id=120899.  This change will touch many 
files in the test files and in Tools/Scripts to update the paths that expect 
fast/js to look for js instead.

Mark
 
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Instrumenting JavaScriptCore

2012-11-12 Thread Mark Lam
Erick,

If your instrumentation is not performance critical, you might be interested in 
using the llint C++ backend.  To use that:

1. in WTF/wtf/Platform.h (or equivalent), #define ENABLE_JIT 0, and #define 
ENABLE_LLINT 1.  This will allow you to build the llint with the C++ backend, 
which generates C++ code.

2. Look at Source/JavaScriptCore/llint/LowLevelInterpreter.cpp.  In there, 
you'll find macros that implement interpreter labels, including one for each 
bytecode opcode.  You can modify those for your experiments.

3. Look in Source/JavaScriptCore/offlineasm/instructions.rb for the cloopDo 
instruction.  You can add this instruction to the llint asm code to insert 
instrumentation to your liking. e.g.

cloopDo // printf("I just added a printf\n");

This embeds the code after the “//” in the generated llint interpreter (see 
the generated LLIntAssembly.h). 

Hope that helps.  And yes, the old interpreter is no longer available.

Regards,
Mark


On Nov 12, 2012, at 7:55 PM, Erick Lavoie erick.lav...@gmail.com wrote:

 Hi,
 
 A research team instrumented JavaScriptCore in 2010 to gather empirical data 
 about the dynamic behavior of JavaScript [1]. I am currently wondering how 
 easy it would be to replicate their setup using the latest WebKit release.
 
 I noticed, in the latest release, that either the JIT or the Low-level 
 Interpreter must be enabled for the build to succeed. Does that mean that the 
 previous interpreter is not available anymore? If it is still available, is 
 there a way to use only the old interpreter, without the JIT or the LLInt?
 
 Also, I would like an opinion from one of the dev guy on how easy it would be 
 to add instrumentation code for every bytecode in the new Low-level 
 Interpreter, given that some part of it are now written in an assembler 
 dialect.
 
 Thanks,
 
 Erick
 
 [1] http://dl.acm.org/citation.cfm?id=1806598
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] …Inlines.h vs …InlineMethods.h

2012-11-02 Thread Mark Lam
FYI, some time in the near future (maybe this weekend), I plan to do some work 
to break inline methods in JavaScriptCore out of header files into their own 
inline header files.

Naming-wise, the existing JSC code has a few inline headers named …Inlines.h 
and more named …InlineMethods.h.  On the WebCore side, the few that exist 
there are named …InlineMethods.h.  I have a preference for the …Inlines.h 
convention because it is shorter and concisely communicates the intent.  I plan 
to rename all these inline headers to be …Inlines.h for consistency.  Does 
anyone object to my renaming the inline headers?

Also, a lot of the existing inlined methods are 1 liners.  I plan to leave 
these in the original header file.  Here's the convention I'm going to adopt:

1. If the inline method is a 1 liner, then leave it in the original header file.
2. If it takes more than 1 line, then move it into a corresponding inline 
header file.

I will only be breaking out the inline methods (per the above convention) in 
the JSC sources.  Apart from renaming the few WebCore inline headers, I won't 
be looking into WebCore's placement of inline methods for now.  

If anyone has an opinion on this, please let me know asap.  Thanks.

Regards,
Mark
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev


[webkit-dev] Replacing the JSC classic C++ interpreter with a llint generated C++ interpreter

2012-09-06 Thread Mark Lam
As part of an effort to simplify future development, the JSC team is 
deprecating the classic C++ interpreter (and will delete it shortly on Sept 
24).  In its place, we will use the LLINT (low level interpreter) with the C++ 
back-end (see https://bugs.webkit.org/show_bug.cgi?id=91052) to generate a 
llint C++ interpreter for new ports that do not support the JITs yet.

In order to deprecate the classic C++ interpreter, we will need your help to 
convert your ports to use the llint (if you are using JSC for your port).  I 
will lay out some details below on how this conversion works:


Why deprecate the classic C++ interpreter?
=
The llint is where active development will take place as we add new JIT and 
runtime enhancements.  Having the classic C++ interpreter around only slows 
down the development effort.  The classic C++ interpreter is also prone to 
experience bit-rot, and hence will easily get buggy over time.


How will you bring up a new platform without the classic C++ interpreter?

You will have to add the llint offline ASM build phase to generate the llint 
C++ interpreter (see below for details on the llint offline build phase).  
Alternatively, you can generate the llint C++ interpreter on a port that 
already supports it, and use the generated LLIntAssembly.h from there to 
bootstrap your port.  It is preferred that every port supports the llint 
natively because this will enable you to get the latest bug fixes and 
performance enhancements.


What is the performance of the llint C++ interpreter?

The new llint C++ interpreter is currently slower than the classic C++ 
interpreter, but within approximately 10% on average.  The goal of 
the llint C++ interpreter is not to achieve the highest performance, but to 
provide a functional C++ interpreter to help bootstrap new ports.  As such, we 
don't plan to spend a lot of time on optimizing it.

sunspider results:
    arithmetic: classic 18.3123+-0.1593 vs llint 18.6551+-0.1095
    (definitely 1.0187x slower)

v8-real results:
    geometric: classic 15.31772+-0.22338 vs llint 16.92580+-0.09735
    (definitely 1.1050x slower)


How is the llint built?

Here's a summary of the steps:

Step 1: Generate LLIntDesiredOffsets.h

mkdir -p ${BUILT_PRODUCTS_DIR}/LLIntOffsets/

/usr/bin/env ruby ${SRCROOT}/offlineasm/generate_offset_extractor.rb 
${SRCROOT}/llint/LowLevelInterpreter.asm 
${BUILT_PRODUCTS_DIR}/LLIntOffsets/LLIntDesiredOffsets.h

Step 2: Build LLIntOffsetsExtractor from LLIntDesiredOffsets.h (from step 1) 
and Source/JavaScriptCore/llint/LLIntOffsetsExtractor.cpp using the 
cross-compiler for your target.

LLIntOffsetsExtractor is supposed to be an executable binary for your target 
port.  However, we will only be using it to extract offsets that we need for 
the next step, and won't be running it.

Step 3: Generate LLIntAssembly.h

/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb 
JavaScriptCore/llint/LowLevelInterpreter.asm 
${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor LLIntAssembly.h || exit 1

LLIntAssembly.h provides the body of the interpreter, and will be #included in 
Source/JavaScriptCore/llint/LowLevelInterpreter.cpp to be built into JSC.


How to get JSC to build with the llint C++ interpreter?


In Platform.h (or equivalent), set the following settings:

#define ENABLE_JIT 0
#define ENABLE_LLINT 1
#define ENABLE_CLASSIC_INTERPRETER 0

This combination of settings will turn on ENABLE(LLINT_C_LOOP), which builds the llint 
C++ interpreter.  Eventually, ENABLE_CLASSIC_INTERPRETER won't be needed when 
the classic C++ interpreter gets deprecated and deleted.


What do you need to do for your port?
=
The Mac port already works with the llint.  I'm planning to fix the Windows 
port to also work with the llint.  On Sept 24, we plan to delete the classic 
C++ interpreter.  This means your port will cease to build if you are still 
relying on the classic C++ interpreter then.

Please look into migrating your port to use the llint C++ interpreter (if not 
the Assembly one) within the next few weeks.  If you encounter issues and need 
some help getting the llint to work for your port, please feel free to contact 
me.  I will do my best to help you out.


Thanks.

Regards,
Mark
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev