[chromium-dev] Moving mailing lists from @googlegroups.com to @chromium.org

2010-01-20 Thread John Abd-El-Malek
This is a heads up that as part of moving all of our resources to
chromium.org (and vanity!), I'll be switching our mailing lists to be under
chromium.org today.  I will send an email when it's done, but for now you
may want to update your rules to match either googlegroups.com or
chromium.org so that no emails are mislabelled.

Note that unfortunately there's no way to migrate the old messages.  So I
will subscribe the googlegroups.com lists to the chromium.org ones so that
googlegroups has all the emails.

Please bear with me through the move.
-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Re: Moving mailing lists from @googlegroups.com to @chromium.org

2010-01-20 Thread John Abd-El-Malek
Just to explain to everyone the steps that I will follow.  For each mailing
list, I will:

1) create a corresponding list on chromium.org with the same members
2) subscribe the old list to the new list so that the archives can be
searched in one place
3) disable posting to the old list (only group managers can post)
4) send an email to the old list telling everyone that it's now disabled and
that they must now post to the new list


On Wed, Jan 20, 2010 at 11:08 AM, John Abd-El-Malek j...@chromium.orgwrote:

 This is a heads up that as part of moving all of our resources to
 chromium.org (and vanity!), I'll be switching our mailing lists to be
 under chromium.org today.  I will send an email when it's done, but for
 now you may want to update your rules to be either googlegroups.com or
 chromium.org so that no emails are mislabelled.

 Note that unfortunately there's no way to migrate the old messages.  So I
 will subscribe the googlegroups.com lists to the chromium.org ones so that
 googlegroups has all the emails.

 Please bear with me through the move.

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Chromium-dev switched to @chromium.org

2010-01-20 Thread John Abd-El-Malek
Hello,

The chromium-dev mailing list has now been switched to
chromium-...@chromium.org.  Please update your filters and address book
accordingly.

Existing members have been subscribed, with the exception of users who don't
permit managers to add them to a group.  You can visit the new group at
http://groups.google.com/a/chromium.org/group/chromium-dev.
-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Re: Moving mailing lists from @googlegroups.com to @chromium.org

2010-01-20 Thread John Abd-El-Malek
Note: I've converted chromium-bugs, chromium-checkins, chromium-dev, and
chromium-os-checkins.  I'll convert the rest tomorrow.

It seems that there are some issues with Groups, so I've emailed the team
before converting any more lists.  People who were not receiving email before
might start receiving it, and everyone on chromium-dev is marked for
moderation for no apparent reason.  I'm trying to fix both of these issues.

On Wed, Jan 20, 2010 at 1:14 PM, John Abd-El-Malek j...@chromium.org wrote:

 Just to explain to everyone the steps that I will follow.  For each mailing
 list, I will:

 1) create a corresponding list on chromium.org with the same members
 2) subscribe the old list to the new list so that the archives can be
 searched in one place
 3) disable posting to the old list (only group managers can post)
 4) send an email to the old list telling everyone that it's now disabled
 and that they must now post to the new list


 On Wed, Jan 20, 2010 at 11:08 AM, John Abd-El-Malek j...@chromium.orgwrote:

 This is a heads up that as part of moving all of our resources to
 chromium.org (and vanity!), I'll be switching our mailing lists to be
 under chromium.org today.  I will send an email when it's done, but for
 now you may want to update your rules to be either googlegroups.com or
 chromium.org so that no emails are mislabelled.

 Note that unfortunately there's no way to migrate the old messages.  So I
 will subscribe the googlegroups.com lists to the chromium.org ones so
 that googlegroups has all the emails.

 Please bear with me through the move.



-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Best Practices when disabling tests?

2009-12-14 Thread John Abd-El-Malek
(this topic has been on my mind a lot, so here's my vent :) )

I think we shouldn't allow any test to be disabled without a bug to track it
that includes an initially assigned owner.

I've seen it happen too often that a test gets disabled to quickly turn the
tree green, and it stays disabled for a long time without anyone noticing.
 This destroys the whole point of tests, since it's trivial to keep a tree
green by disabling each test when it fails.  It's also a large burden to
expect that people monitor checkins to files they're familiar with and
create bugs when they find a disabled test.  It's harder to enforce this,
and it's also unclear who should be doing this when multiple people touch an
area.

Filing a bug and looking at the annotations on viewvc to pick an initial
owner shouldn't take more than a minute or two so I don't think it's a large
burden.
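
For concreteness, here's roughly what an annotated disabled test could look
like (the bug number, owner, and test body are made up for illustration; the
gtest DISABLED_ prefix is the real mechanism):

  // Flaky on Linux.  Tracked by http://crbug.com/99999 (owner: someone@).
  // TODO(someone): re-enable once the worker shutdown race is fixed.
  TEST_F(WorkerTest, DISABLED_SharedWorkerFastLayoutTests) {
    RunLayoutTest("shared-worker-fast.html");
  }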

On Mon, Dec 14, 2009 at 8:54 AM, Drew Wilson atwil...@chromium.org wrote:

 I spent a few hours last week and this weekend trying to untangle the mess
 that was the worker ui_tests. The problem is that the tests have been
 sporadically flakey due to various causes, and so various sheriffs/good
 samaritans have disabled them to keep the trees green. Some of the tests
 were disabled due to failing under valgrind, but now that we have a way to
 disable tests specifically for valgrind and some of the worker bugs have
 been fixed upstream I figured it was a good time to clean house a bit and
 re-enable some of the tests.

 I found when I was going through the worker_uitest file that it was hard to
 figure out why a given test was disabled, so it was unclear whether it was
 safe to re-enable them - some of the tests had comments pointing at a
 tracking bug, but some of them didn't, and it was a pain to track the root
 cause especially since the specific lines of code had sometimes been touched
 multiple times (adding new platforms to disable, etc).

 Anyhow, what's our best practices for disabling tests? I think ideally we'd
 always log a tracking bug and add a comment, akin to what we do in the
 test_expectations file for layout tests. This might be too much of a burden
 on sheriffs, so an alternative is for people who are working on various
 areas (like workers) track checkins to the associated files and add some
 documentation after the fact. Or we can just live with the SVN logs, in
 which case I need to get better about figuring out how to track through the
 SVN/git history of the various files :)


 -atw

 --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Best Practices when disabling tests?

2009-12-14 Thread John Abd-El-Malek
On Mon, Dec 14, 2009 at 10:42 AM, Drew Wilson atwil...@chromium.org wrote:

 They'll sometimes get disabled due to webkit updates, other times they'll
 get disabled due to other things (for example, we changed the valgrind bots
 to fail noisily if individual tests fail, regardless of whether they
 actually generate valgrind errors - this meant that previously-silent worker
 test failures suddenly started causing redness, leading to sheriffs
 disabling them).

 But, yeah, it'd be nice to have ui_tests run by the webkit.org bots,
 although in the case of flaky tests I'm not sure whether that'd help (not
 sure if the gardener would pick up on a 25% failure on the FYI bots).

 At this point, I'm just trying to figure out what people are *supposed* to
 do when disabling tests - should they always log a bug and add a comment?
 I'd say yes, that if you have time to babysit a CL through the build bot
 process, you have time to log a bug/add a comment, even if you don't know
 the correct owner.


Note: I think a bug should always have an initial owner.  It doesn't take
much time to get an initial guess from the viewvc annotations page.  I've
seen bugs with no owner for failing tests just sit there for months
because no one has seen them.



 -atw


 On Mon, Dec 14, 2009 at 10:28 AM, Darin Fisher da...@chromium.org wrote:

 I presume it is frequently the case that these tests get disabled after a
 webkit update?

 Only the Linux Perf (webkit.org) bot appears to run the ui_tests.
  Perhaps that is not sufficient?

 -Darin



 On Mon, Dec 14, 2009 at 8:54 AM, Drew Wilson atwil...@chromium.orgwrote:

 I spent a few hours last week and this weekend trying to untangle the
 mess that was the worker ui_tests. The problem is that the tests have been
 sporadically flakey due to various causes, and so various sheriffs/good
 samaritans have disabled them to keep the trees green. Some of the tests
 were disabled due to failing under valgrind, but now that we have a way to
 disable tests specifically for valgrind and some of the worker bugs have
 been fixed upstream I figured it was a good time to clean house a bit and
 re-enable some of the tests.

 I found when I was going through the worker_uitest file that it was hard
 to figure out why a given test was disabled, so it was unclear whether it
 was safe to re-enable them - some of the tests had comments pointing at a
 tracking bug, but some of them didn't, and it was a pain to track the root
 cause especially since the specific lines of code had sometimes been touched
 multiple times (adding new platforms to disable, etc).

 Anyhow, what's our best practices for disabling tests? I think ideally
 we'd always log a tracking bug and add a comment, akin to what we do in the
 test_expectations file for layout tests. This might be too much of a burden
 on sheriffs, so an alternative is for people who are working on various
 areas (like workers) track checkins to the associated files and add some
 documentation after the fact. Or we can just live with the SVN logs, in
 which case I need to get better about figuring out how to track through the
 SVN/git history of the various files :)

 -atw

 --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev


-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] [WebGL] Recommending --no-sandbox

2009-12-11 Thread John Abd-El-Malek
On Thu, Dec 10, 2009 at 11:34 PM, Darin Fisher da...@chromium.org wrote:

 I don't think we should take away --no-sandbox in official builds.  It's a
 valuable debugging tool in case an end-user is experiencing a startup crash
 or other wackiness.


I understand the argument, but do we really end up using this for end-users
in debugging problems?  Given how many Chrome users we have, my impression
is we've fixed any issues with the sandbox long ago.

I don't feel that strongly about disabling --no-sandbox, but I'd like to be
more convinced of the arguments against it :)


 I think we should just add a modal dialog at startup that you must dismiss
 each time you launch Chrome until you remove the --no-sandbox option.  That
 should be annoying enough to cause people to remove it once they can.  We
 don't need to expend energy on anything fancier IMO.

 -Darin


 On Thu, Dec 10, 2009 at 11:02 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Thu, Dec 10, 2009 at 10:57 PM, Jeremy Orlow jor...@chromium.orgwrote:

 On Thu, Dec 10, 2009 at 10:25 PM, Peter Kasting pkast...@google.comwrote:

 On Thu, Dec 10, 2009 at 9:38 PM, John Abd-El-Malek 
 j...@chromium.orgwrote:

 We disable --single-process and --in-process-plugins on release Google
 Chrome builds to avoid the support headache that it causes.  I think we
 should do the same for --no-sandbox.


 There are legit reasons we have asked users to try temporarily disabling
 the sandbox, more frequently than for those other flags.  I'd prefer to 
 just
 make the UI turn ugly a la Jeremy's bug.


 It might even make sense to re-enable --single-process and use the same
 UI technique to discourage it.


 --single-process is buggy and not well tested, and can cause deadlocks in
 some scenarios.

 I think only developers should run without the sandbox, as those are the
 ones who'd be able to understand the risks in doing so, and are the only
 ones who need to test out features like webgl that aren't ready yet.  So I
 still think we should disable --no-sandbox in shipping Google Chrome builds,
 and if someone needs it, they can use Chromium builds.




-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] [WebGL] Recommending --no-sandbox

2009-12-11 Thread John Abd-El-Malek
(adding Alice)

Alice: do you have a rough estimate for how often we ask users to turn off
the sandbox when debugging problems?

Thanks

On Fri, Dec 11, 2009 at 11:33 AM, John Abd-El-Malek j...@chromium.orgwrote:



 On Thu, Dec 10, 2009 at 11:34 PM, Darin Fisher da...@chromium.org wrote:

 I don't think we should take away --no-sandbox in official builds.  It's a
 valuable debugging tool in case an end-user is experiencing a startup crash
 or other wackiness.


 I understand the argument, but do we really end up using this for end-users
 in debugging problems?  Given how many Chrome users we have, my impression
 is we've fixed any issues with the sandbox long ago.

 I don't feel that strongly about disabling --no-sandbox, but I'd like to be
 more convinced of the arguments against it :)



 I think we should just add a modal dialog at startup that you must dismiss
 each time you launch Chrome until you remove the --no-sandbox option.  That
 should be annoying enough to cause people to remove it once they can.  We
 don't need to expend energy on anything fancier IMO.

 -Darin


 On Thu, Dec 10, 2009 at 11:02 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Thu, Dec 10, 2009 at 10:57 PM, Jeremy Orlow jor...@chromium.orgwrote:

 On Thu, Dec 10, 2009 at 10:25 PM, Peter Kasting pkast...@google.comwrote:

 On Thu, Dec 10, 2009 at 9:38 PM, John Abd-El-Malek 
 j...@chromium.orgwrote:

 We disable --single-process and --in-process-plugins on release Google
 Chrome builds to avoid the support headache that it causes.  I think we
 should do the same for --no-sandbox.


 There are legit reasons we have asked users to try temporarily
 disabling the sandbox, more frequently than for those other flags.  I'd
 prefer to just make the UI turn ugly a la Jeremy's bug.


 It might even make sense to re-enable --single-process and use the same
 UI technique to discourage it.


 --single-process is buggy and not well tested, and can cause deadlocks in
 some scenarios.

 I think only developers should run without the sandbox, as those are the
 ones who'd be able to understand the risks in doing so, and are the only
 ones who need to test out features like webgl that aren't ready yet.  So I
 still think we should disable --no-sandbox in shipping Google Chrome builds,
 and if someone needs it, they can use Chromium builds.





-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] [WebGL] Recommending --no-sandbox

2009-12-11 Thread John Abd-El-Malek
Alice's reply is below.  I'm still convinced that Google Chrome shouldn't
run without the sandbox, and if someone needs that, then they can use
Chromium builds.

We actually rarely ask users to turn off the sandbox, and after we confirm
that they can run with the flag, we tell them to remove it immediately due to
security vulnerabilities.  The only problem is that after this point, it's
hard for users to figure out what's preventing Google Chrome from running
properly.  We need to find an easier way, or some sort of utility, to do
this.  I agree that we need to make it obvious to the user that they
shouldn't be running without the sandbox, but we need to give them a better
way to figure out what's wrong so that they can continue to use it.
-- 
Alice Lin | Google, Inc. | Senior Strategist, Consumer Operations |
650.253.6827 | a...@google.com

On Fri, Dec 11, 2009 at 11:48 AM, John Abd-El-Malek j...@chromium.orgwrote:

 (adding Alice)

 Alice: do you have a rough estimate for how often we ask users to turn off
 the sandbox when debugging problems?

 Thanks


 On Fri, Dec 11, 2009 at 11:33 AM, John Abd-El-Malek j...@chromium.orgwrote:



 On Thu, Dec 10, 2009 at 11:34 PM, Darin Fisher da...@chromium.orgwrote:

 I don't think we should take away --no-sandbox in official builds.  It's
 a valuable debugging tool in case an end-user is experiencing a startup
 crash or other wackiness.


 I understand the argument, but do we really end up using this for
 end-users in debugging problems?  Given how many Chrome users we have, my
 impression is we've fixed any issues with the sandbox long ago.

 I don't feel that strongly about disabling --no-sandbox, but I'd like to
 be more convinced of the arguments against it :)



 I think we should just add a modal dialog at startup that you must
 dismiss each time you launch Chrome until you remove the --no-sandbox
 option.  That should be annoying enough to cause people to remove it once
 they can.  We don't need to expend energy on anything fancier IMO.

 -Darin


 On Thu, Dec 10, 2009 at 11:02 PM, John Abd-El-Malek 
 j...@chromium.orgwrote:



 On Thu, Dec 10, 2009 at 10:57 PM, Jeremy Orlow jor...@chromium.orgwrote:

 On Thu, Dec 10, 2009 at 10:25 PM, Peter Kasting 
 pkast...@google.comwrote:

 On Thu, Dec 10, 2009 at 9:38 PM, John Abd-El-Malek 
 j...@chromium.orgwrote:

 We disable --single-process and --in-process-plugins on release
 Google Chrome builds to avoid the support headache that it causes.  I 
 think
 we should do the same for --no-sandbox.


 There are legit reasons we have asked users to try temporarily
 disabling the sandbox, more frequently than for those other flags.  I'd
 prefer to just make the UI turn ugly a la Jeremy's bug.


 It might even make sense to re-enable --single-process and use the same
 UI technique to discourage it.


 --single-process is buggy and not well tested, and can cause deadlocks
 in some scenarios.

 I think only developers should run without the sandbox, as those are the
 ones who'd be able to understand the risks in doing so, and are the only
 ones who need to test out features like webgl that aren't ready yet.  So I
 still think we should disable --no-sandbox in shipping Google Chrome 
 builds,
 and if someone needs it, they can use Chromium builds.






-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Thoughts on // NOLINT?

2009-12-10 Thread John Abd-El-Malek
On Thu, Dec 10, 2009 at 10:45 AM, Jonathan Dixon j...@chromium.org wrote:



 2009/12/10 Brett Wilson bre...@chromium.org

 On Wed, Dec 9, 2009 at 4:24 PM, John Abd-El-Malek j...@chromium.org
 wrote:
  btw I searched the code, almost all the instances are in code from
 different
  repositories, like v8, gtest, gmock.  I counted only 17 instances in
  Chrome's code.


 Most of the Chrome NOLINTs look like they're around ifdefs, where the
 ifdef code sometimes means that a comma or a semicolon goes on the line
 after the ifdef. We should be working to remove these anyway since the
 ifdefs are super ugly, and I'm not sure the NOLINT comment actually
 makes them worse. Some of these may not be practical or desirable to
 remove, though.


 This is exactly the case I came across recently (which may be what inspired
 John to start this thread).

 In essence:

 return DoWork(foo)
 #if defined(OS_POSIX)
     && DoWork(posix_specific)
 #endif
     ;  // <-- Lint complains about this guy

 We converged on NOLINT as the solution, but I admit my ingrained instinct
 is to pull out the #if altogether and try to make both calls to DoWork()
 acceptable to call on all platforms, or at the very least wrap the second
 call so it has a null implementation on !OS_POSIX.

 Do we have agreed guidelines on when to use #if for portability, or which
 patterns to prefer to it?


From the code I've seen, the preference is that if only one platform needs a
function to be called, we call it under an ifdef instead of defining an empty
version of that function on all the other platforms.  Things like
PlatformInit and PlatformDestroy are an exception, since it makes sense to
give each platform a hook there.  But if you look at places like
PluginService or PluginProcessHost, there are things that only make sense to
call on one platform.
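
To make the distinction concrete, here's a rough sketch of the two patterns
(the function names are illustrative, not actual Chromium code):

  // Pattern 1: a hook every platform implements, even if some bodies are
  // empty (the PlatformInit/PlatformDestroy style):
  void PluginProcessHost::PlatformInit() { /* per-platform body */ }

  // Pattern 2: a call that only makes sense on one platform stays behind an
  // ifdef at the call site instead of getting empty stubs everywhere:
  #if defined(OS_LINUX)
    ConfigurePluginSandboxOnLinux();  // illustrative name
  #endif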

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Thoughts on // NOLINT?

2009-12-10 Thread John Abd-El-Malek
On Thu, Dec 10, 2009 at 1:22 PM, Evan Stade est...@chromium.org wrote:

 On Thu, Dec 10, 2009 at 10:58 AM, Peter Kasting pkast...@google.comwrote:

 On Thu, Dec 10, 2009 at 10:45 AM, Jonathan Dixon j...@chromium.orgwrote:

 In essence:

 return DoWork(foo)
 #if defined(OS_POSIX)
     && DoWork(posix_specific)
 #endif
     ;  // <-- Lint complains about this guy


 I'd prefer this:

 #if defined(OS_POSIX)
    return DoWork(foo) && DoWork(posix_specific);
 #else
   return DoWork(foo);
 #endif

 The same number of lines, but much easier to read.


 disagree. It's harder to read because it's not immediately obvious that
 some of the code overlaps. Scott's solution seems best to me.


+1 Scott's solution seems best for me.  The problem with the above solution
is that it contains code duplication.  For DoWork(foo), that might seem
small, but as parameters get added, functions get renamed, etc, it's more
work (and error prone) to update two locations instead of one.




 PK




-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Extensions and the Mac

2009-12-10 Thread John Abd-El-Malek
NaCl is the answer to all these problems...

On Thu, Dec 10, 2009 at 5:15 PM, Jeremy Orlow jor...@google.com wrote:

 Or reject extensions that could be written without a NPAPI component.
  *ducks*

 On Thu, Dec 10, 2009 at 5:12 PM, Peter Kasting pkast...@google.comwrote:

 On Thu, Dec 10, 2009 at 5:02 PM, Avi Drissman a...@google.com wrote:

 Q: Can't we have the extensions gallery warn that it won't work?
 A: Sorry, we can't do that in an automated fashion. The extensions author
 should mention it. Too bad they don't.


 But we explicitly review patches with binary components.  I can't see how
 it could be impossible for us to also mark them as Win-only.

 PK

 --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev


-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Extensions and the Mac

2009-12-10 Thread John Abd-El-Malek
The goal is to expose all this through Pepper.

On Thu, Dec 10, 2009 at 5:50 PM, Jeremy Orlow jor...@chromium.org wrote:

 Much of what can't be done on the web platform also can't be done inside
 the NaCl sandbox.


 On Thu, Dec 10, 2009 at 5:49 PM, John Abd-El-Malek jabdelma...@google.com
  wrote:

 NaCl is the answer to all these problems...

 On Thu, Dec 10, 2009 at 5:15 PM, Jeremy Orlow jor...@google.com wrote:

 Or reject extensions that could be written without a NPAPI component.
  *ducks*

 On Thu, Dec 10, 2009 at 5:12 PM, Peter Kasting pkast...@google.comwrote:

 On Thu, Dec 10, 2009 at 5:02 PM, Avi Drissman a...@google.com wrote:

 Q: Can't we have the extensions gallery warn that it won't work?
 A: Sorry, we can't do that in an automated fashion. The extensions
 author should mention it. Too bad they don't.


 But we explicitly review patches with binary components.  I can't see
 how it could be impossible for us to also mark them as Win-only.

 PK

 --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev





-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] [WebGL] Recommending --no-sandbox

2009-12-10 Thread John Abd-El-Malek
We disable --single-process and --in-process-plugins on release Google
Chrome builds to avoid the support headache that it causes.  I think we
should do the same for --no-sandbox.

On Thu, Dec 10, 2009 at 8:22 PM, Darin Fisher da...@chromium.org wrote:

 After reading the WebGL blog post today, and following the link to the
 wiki, it struck me as fairly *bad* that we are telling people to disable the
 sandbox.  A good number of folks are going to disable the sandbox and forget
 that they had ever done so.

 Once we can support WebGL in the sandbox, what will we do?  It would be
 nice if we could somehow restore the sandbox automatically.  But renaming
 --no-sandbox doesn't seem like a great choice, and it isn't a scalable
 solution for other things like this that come up in the future.

 Perhaps --enable-webgl should instead implicitly disable the sandbox today
 so that tomorrow, when WebGL just works, folks won't have to change any
 command line options to restore sandbox functionality.  I can see a counter
 argument that people should have to explicitly opt-in to disabling the
 sandbox, but I'm not sure that out-weighs the cost of having a good number
 of dev channel users running *permanently* without the sandbox.

 Was this idea considered?  Any other ideas?

 -Darin

 --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] [WebGL] Recommending --no-sandbox

2009-12-10 Thread John Abd-El-Malek
On Thu, Dec 10, 2009 at 10:57 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Thu, Dec 10, 2009 at 10:25 PM, Peter Kasting pkast...@google.comwrote:

 On Thu, Dec 10, 2009 at 9:38 PM, John Abd-El-Malek j...@chromium.orgwrote:

 We disable --single-process and --in-process-plugins on release Google
 Chrome builds to avoid the support headache that it causes.  I think we
 should do the same for --no-sandbox.


 There are legit reasons we have asked users to try temporarily disabling
 the sandbox, more frequently than for those other flags.  I'd prefer to just
 make the UI turn ugly a la Jeremy's bug.


 It might even make sense to re-enable --single-process and use the same UI
 technique to discourage it.


--single-process is buggy and not well tested, and can cause deadlocks in
some scenarios.

I think only developers should run without the sandbox, as those are the
ones who'd be able to understand the risks in doing so, and are the only
ones who need to test out features like webgl that aren't ready yet.  So I
still think we should disable --no-sandbox in shipping Google Chrome builds,
and if someone needs it, they can use Chromium builds.

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Thoughts on // NOLINT?

2009-12-09 Thread John Abd-El-Malek
Lately I've been seeing more and more // NOLINT added to the code.  It's
great that people are running lint to make sure that they're following the
guidelines, but I personally find it a bit ironic that we add comments or
gibberish to our code to keep the tools that are supposed to make the code
better/more consistent happy.  I also find it distracting when reading the
code.  Am I the only one?
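
(For anyone who hasn't run into it: // NOLINT is cpplint.py's mechanism for
suppressing warnings on a single line.  A made-up example:

  SomeMacroTheLinterMisparses(arg1, arg2);  // NOLINT

The macro name above is illustrative; the point is just that the comment is
per-line noise in the source.)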

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Thoughts on // NOLINT?

2009-12-09 Thread John Abd-El-Malek
btw I searched the code, almost all the instances are in code from different
repositories, like v8, gtest, gmock.  I counted only 17 instances in
Chrome's code.

On Wed, Dec 9, 2009 at 4:08 PM, Evan Stade est...@chromium.org wrote:

 I didn't even know that I could disable the linter like that. Good to
 know---dozens more NOLINTs coming up!

 Jokes aside, I agree the linter seems a little draconian, especially as it
 seems to apply to all code in the files you touch, not just your changes.

 -- Evan Stade



-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Re: Plugin Manager UI Proposal

2009-12-07 Thread John Abd-El-Malek
This is much needed, thanks for working on it :)

Some notes:
- Putting it in a URL, but not having a menu item, removes much of the
benefit, since only a tiny percentage of users would be able to find it.
- Checking that each plugin is up to date should be automatic and enabled for
all users, just like our plugin installer is automatic.  I don't see the
benefit of having that functionality be listed as a plugin, especially
since it can't be completely done inside an NPAPI plugin.

Can you put this information in a mini design doc somewhere online?  Also
include things like exactly how you propose to implement this, and the future
features (e.g. disabling plugins that have big security holes, asking the
user to update, etc.).

On Mon, Dec 7, 2009 at 11:08 AM, Panayiotis panayio...@google.com wrote:

 Hello, we are a small group of Googlers who'd like to volunteer our 20%
 time to work on plugin management, and eventually plugin update in Chromium.
 At first we'd like to start working only on management. We'd appreciate if
 you could review our proposal and provide feedback.

 Thanks,
 Panayiotis, Noe, Nav.


 *Plugin Manager UI Proposal *

 *Purpose*: UI for managing (enabling/disabling) plugins in Chromium.

 *Background*: There are a lot of reasons why this would be desirable; most
 are reported in http://code.google.com/p/chromium/issues/detail?id=736.
 Major ones include security and performance.

 *Suggested solution: *

 The user interface could look a lot like chrome://extensions. We could use,
 for example, *chrome://plugins*/ for its URL. A user would have to type the
 URL to get to it; we would not link to it from any menu. Very similar to
 chrome://extensions, it would be a table of all plugins, with the ability to
 enable/disable each one. A mockup is attached.

 For each plugin, the name, description, version (see below), and full
 filename on the filesystem would be listed. Disabled plugins look like
 disabled extensions and have an enable link; enabled plugins have a
 disable link.

 *Non-goals:*

- We don't want to be able to enable/disable plugins per-site/ per-tab
at this point.
- There won't be any notification to the user that a plugin was
attempted to be used -- this can be hard to detect because many pages use JS
to see if the plugin is installed and show different content if it's not.
It's not impossible, but we think we shouldn't aim for this in the first
iteration.

 *Alternative ideas we considered:*

- Modify about:plugins for this purpose. That page is a simple
javascript page that iterates over navigator.plugins. Similar to other
about: pages it doesn't have access to chrome internals. We feel
chrome://plugins follows the model of chrome://extensions better, in that
this page cannot be inspected and allows the user to modify state of the
browser.
- Have a popup window similar to Firefox's Add-on manager. We feel
mimicking chrome://extensions is going to be more consistent; besides, having
it be a URL allows you to load it in a tab.
- Add the plugins to chrome://extensions -- we can do that too; we'll
let Chromium devs chime in on what's the preferred option here.

 *Implementation details:*

- Modify mostly PluginService (chrome/browser/plugin_service.cc)
- Will store state in the sqlite database, per user.
- A plugin is identified by its path in the filesystem. Different paths
are considered different plugins.
- Once a plugin is disabled, all subsequent calls to create a plugin
instance will fail. Similarly for enabling. Pages that have the plugin
already loaded should still work.
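
A rough sketch of how the check could sit in PluginService (the member,
method name, and signature here are illustrative only, not part of the
proposal):

  // Refuse to instantiate a plugin whose path is in the per-user disabled
  // set; the set is assumed to be loaded from the per-user store at startup
  // and updated whenever the user toggles a plugin in chrome://plugins.
  bool PluginService::CanLoadPlugin(const FilePath& plugin_path) {
    return disabled_plugin_paths_.find(plugin_path) ==
           disabled_plugin_paths_.end();
  }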



-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Re: heads up: Rietveld reply by email isn't working

2009-11-22 Thread John Abd-El-Malek
Note: this should be working again now.  We decided to put the issue numbers
in the subject line like Mondrian (but at the end, so as not to conflict with
bug numbers, which are put at the front of Python bug mail).

On Thu, Nov 19, 2009 at 11:40 AM, John Abd-El-Malek j...@chromium.orgwrote:

 There is a bug in App Engine with some new APIs that we're using.  I'm
 working on a workaround with another outside contributor which will
 hopefully land in a day or two.  Until then, if you care about your reply
 showing up on Rietveld, don't reply by email.

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] heads up: Rietveld reply by email isn't working

2009-11-19 Thread John Abd-El-Malek
There is a bug in App Engine with some new APIs that we're using.  I'm
working on a workaround with another outside contributor which will
hopefully land in a day or two.  Until then, if you care about your reply
showing up on Rietveld, don't reply by email.

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Heads up: safely ignored regression on Linux Startup perf test

2009-11-19 Thread John Abd-El-Malek
On Thu, Nov 19, 2009 at 1:57 PM, Tony Chang t...@chromium.org wrote:

 For reasons unknown to me, this line jumped back up. It seems it's because
 of Matt's revert:
 http://src.chromium.org/viewvc/chrome?view=rev&revision=32524

 This is a startup test, so it basically times how long it takes for
 LaunchApp to return.  Maybe the methodology here is a bit off?


Yeah, I think New Tab tests are more important, since they measure what the
user sees, not when internal APIs are called. In the case of the above
change by Matt, New Tab Cold hasn't changed so I think it's fine to ignore
the rise.
http://build.chromium.org/buildbot/perf/linux-release-hardy/new-tab-ui-cold/report.html?history=150


 On Wed, Nov 18, 2009 at 6:02 PM, Chase Phillips c...@google.com wrote:

 t_ref shouldn't move, though, since it was isolated from your change.

 Tony, I don't think there's a problem with the graph pulling the wrong
 numbers.  I see the same difference between extension_toolstrip50 and
 extension_toolstrip1 when comparing the linux release hardy's graph values,
 the .dat file the graph code uses, and the output of the startup test
 itself.  I thought maybe extension_toolstrip50 could be using the reference
 build on accident, so I verified startup_test.cc runs extension_toolstrip50
 on the current build instead of the reference build (it does).

 Things look fine on Windows (the perf graph is what I'd expect, and
 running the test locally results in toolstrip50 results greater than
 toolstrip1).  These tests don't run on Mac.  We should run the tests on
 Linux to verify things look sane locally, too.  No explanation for the odd
 results yet.

 Chase

 On Wed, Nov 18, 2009 at 3:08 PM, John Abd-El-Malek j...@chromium.orgwrote:

 I don't have an answer to that.  The t_ref line didn't move either.

 On Wed, Nov 18, 2009 at 11:42 AM, Tony Chang t...@google.com wrote:

 Why didn't the black line on the linux warm perf bot change?  It says
 that that is the extension_toolstrip50 test, which I would expect to run
 slower than the extension_toolstrip1 test.  Maybe the graph is pulling the
 wrong numbers?


 http://build.chromium.org/buildbot/perf/linux-release-hardy/startup/report.html?history=150&graph=warm

 On Wed, Nov 18, 2009 at 9:53 AM, John Abd-El-Malek 
 j...@chromium.orgwrote:

 Yep, that was my plan.  I'm planning on doing the same thing for the
 rest of the child processes, and if I see any significant changes on the
 perf test (which I don't expect), I'll update the reference builds again.

 On Wed, Nov 18, 2009 at 6:46 AM, Brett Wilson bre...@google.comwrote:

  On Tue, Nov 17, 2009 at 10:57 PM, Darin Fisher da...@chromium.org
 wrote:
  This sounds like goodness.  Updating the reference builds is usually
 a good
  thing to do in cases like this so that new changes are easier to
 notice.

 We'll be doing this soon anyway. Al has a patch for the IPC message
 types running out which will break the reference build.

 Brett


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev



  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev



  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev


-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Heads up: safely ignored regression on Linux Startup perf test

2009-11-19 Thread John Abd-El-Malek
On Thu, Nov 19, 2009 at 2:17 PM, Matt Perry mpcompl...@chromium.org wrote:

 On Thu, Nov 19, 2009 at 2:14 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Thu, Nov 19, 2009 at 1:57 PM, Tony Chang t...@chromium.org wrote:

 For reasons unknown to me, this line jumped back up. It seems it's
 because of Matt's revert:
 http://src.chromium.org/viewvc/chrome?view=rev&revision=32524

 This is a startup test, so it basically times how long it takes for
 LaunchApp to return.  Maybe the methodology here is a bit off?


 Yeah, I think New Tab tests are more important, since they measure what
 the user sees, not when internal APIs are called. In the case of the above
 change by Matt, New Tab Cold hasn't changed so I think it's fine to ignore
 the rise.
 http://build.chromium.org/buildbot/perf/linux-release-hardy/new-tab-ui-cold/report.html?history=150


 New Tab hasn't changed because none of those tests load any extensions.
 Maybe we should add some.


good point, thanks for the explanation.

I think your change will improve the user-visible response time even if this
particular test shows a drop in perf.  If I'm reading your change correctly,
things have to be faster since the process creation on the launcher thread
is kicked off earlier (instead of after a PostTask to the UI thread).




 On Wed, Nov 18, 2009 at 6:02 PM, Chase Phillips c...@google.com wrote:

 t_ref shouldn't move, though, since it was isolated from your change.

 Tony, I don't think there's a problem with the graph pulling the wrong
 numbers.  I see the same difference between extension_toolstrip50 and
 extension_toolstrip1 when comparing the linux release hardy's graph values,
 the .dat file the graph code uses, and the output of the startup test
 itself.  I thought maybe extension_toolstrip50 could be using the reference
 build on accident, so I verified startup_test.cc runs extension_toolstrip50
 on the current build instead of the reference build (it does).

 Things look fine on Windows (the perf graph is what I'd expect, and
 running the test locally results in toolstrip50 results greater than
 toolstrip1).  These tests don't run on Mac.  We should run the tests on
 Linux to verify things look sane locally, too.  No explanation for the odd
 results yet.

 Chase

 On Wed, Nov 18, 2009 at 3:08 PM, John Abd-El-Malek 
 j...@chromium.orgwrote:

 I don't have an answer to that.  The t_ref line didn't move either.

 On Wed, Nov 18, 2009 at 11:42 AM, Tony Chang t...@google.com wrote:

 Why didn't the black line on the linux warm perf bot change?  It says
 that that is the extension_toolstrip50 test, which I would expect to run
 slower than the extension_toolstrip1 test.  Maybe the graph is pulling 
 the
 wrong numbers?


 http://build.chromium.org/buildbot/perf/linux-release-hardy/startup/report.html?history=150&graph=warm

 On Wed, Nov 18, 2009 at 9:53 AM, John Abd-El-Malek 
 j...@chromium.orgwrote:

 Yep, that was my plan.  I'm planning on doing the same thing for the
 rest of the child processes, and if I see any significant changes on the
 perf test (which I don't expect), I'll update the reference builds 
 again.

 On Wed, Nov 18, 2009 at 6:46 AM, Brett Wilson bre...@google.comwrote:

  On Tue, Nov 17, 2009 at 10:57 PM, Darin Fisher da...@chromium.org
 wrote:
  This sounds like goodness.  Updating the reference builds is
 usually a good
  thing to do in cases like this so that new changes are easier to
 notice.

 We'll be doing this soon anyway. Al has a patch for the IPC message
 types running out which will break the reference build.

 Brett


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev



  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev



  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev





-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Heads up: safely ignored regression on Linux Startup perf test

2009-11-18 Thread John Abd-El-Malek
Yep, that was my plan.  I'm planning on doing the same thing for the rest of
the child processes, and if I see any significant changes on the perf test
(which I don't expect), I'll update the reference builds again.

On Wed, Nov 18, 2009 at 6:46 AM, Brett Wilson bre...@google.com wrote:

 On Tue, Nov 17, 2009 at 10:57 PM, Darin Fisher da...@chromium.org wrote:
  This sounds like goodness.  Updating the reference builds is usually a
 good
  thing to do in cases like this so that new changes are easier to notice.

 We'll be doing this soon anyway. Al has a patch for the IPC message
 types running out which will break the reference build.

 Brett


-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Heads up: safely ignored regression on Linux Startup perf test

2009-11-18 Thread John Abd-El-Malek
I don't have an answer to that.  The t_ref line didn't move either.

On Wed, Nov 18, 2009 at 11:42 AM, Tony Chang t...@google.com wrote:

 Why didn't the black line on the linux warm perf bot change?  It says that
 that is the extension_toolstrip50 test, which I would expect to run slower
 than the extension_toolstrip1 test.  Maybe the graph is pulling the wrong
 numbers?


 http://build.chromium.org/buildbot/perf/linux-release-hardy/startup/report.html?history=150&graph=warm

 On Wed, Nov 18, 2009 at 9:53 AM, John Abd-El-Malek j...@chromium.orgwrote:

 Yep, that was my plan.  I'm planning on doing the same thing for the rest
 of the child processes, and if I see any significant changes on the perf
 test (which I don't expect), I'll update the reference builds again.

 On Wed, Nov 18, 2009 at 6:46 AM, Brett Wilson bre...@google.com wrote:

  On Tue, Nov 17, 2009 at 10:57 PM, Darin Fisher da...@chromium.org
 wrote:
  This sounds like goodness.  Updating the reference builds is usually a
 good
  thing to do in cases like this so that new changes are easier to
 notice.

 We'll be doing this soon anyway. Al has a patch for the IPC message
 types running out which will break the reference build.

 Brett


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev




-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Heads up: safely ignored regression on Linux Startup perf test

2009-11-17 Thread John Abd-El-Malek
Heads up to explain the sudden jump on Linux Startup perf test.

I just submitted r32264 which makes opening and closing processes happen off
the UI thread.  Surprisingly enough, according to UMA stats these would take
an average of 1s on Linux for the first renderer, and 100ms on Windows.
 Subsequent launches were very quick on Linux, but on Windows averaged 50ms.
 When using session restore with many tabs, this would block the UI thread
for quite a bit.  Also, Linux had code which might sleep for up to 2s on the
UI thread waiting for the process to die.

The Linux Warm Startup perf test
(http://build.chromium.org/buildbot/perf/linux-release-hardy/startup/report.html?history=150&graph=warm)
is showing a big regression (200ms-300ms).  With Elliott's insight, I tracked
this down to the fact that the UI thread is very busy at startup handling GTK
messages, so the posted task back to it to tell BrowserRenderProcessHost that
the handle is available is queued up.  This parallelization is exactly what
we want though, and Linux New Tab Cold
(http://build.chromium.org/buildbot/perf/linux-release-hardy/new-tab-ui-cold/report.html?history=150)
went from ~615ms to ~440ms.  It's hard to see a change on Linux Cold Startup
(http://build.chromium.org/buildbot/perf/linux-release-hardy/startup/report.html?history=150&graph=cold)
because of all the noise.

As for other platforms: Mac Startup
(http://build.chromium.org/buildbot/perf/mac-release-10.5/startup/report.html?history=150&graph=warm)
(both warm and cold) went from around 307ms to 290ms, while Mac New Tab Cold
(http://build.chromium.org/buildbot/perf/mac-release-10.5/new-tab-ui-cold/report.html?history=150)
went from 720ms to 620ms.  Windows didn't have much change.
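
For the curious, the shape of the r32264 change is roughly the following (an
illustrative sketch with made-up names, not the actual diff):

  // Before: the first renderer was launched synchronously on the UI thread,
  // which UMA says could take ~1s on Linux.  After: the expensive launch is
  // posted to a dedicated launcher thread, and the resulting handle is
  // reported back asynchronously.
  process_launcher_thread_->message_loop()->PostTask(FROM_HERE,
      NewRunnableFunction(&LaunchRendererProcess, cmd_line, client_callback));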

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Re: Waiting for privacy blacklists to load in ResourceDispatcherHost

2009-11-12 Thread John Abd-El-Malek
On Wed, Nov 11, 2009 at 10:40 PM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 On Thu, Nov 12, 2009 at 01:00, John Abd-El-Malek j...@chromium.org wrote:

 On Wed, Nov 11, 2009 at 1:22 PM, Paweł Hajdan Jr. 
 phajdan...@chromium.org wrote:

 To do that, I'd need to listen for BlacklistManager notifications in RDH
 (on IO thread). Does it seem OK to make RDH a NotificationObserver and use
 NotificationRegistrar inside?


 The part about making RDH a NotificationObserver sounds fine.  However, if
 you block all requests from even starting until BlackListManager is ready,
 you're basically delaying the IO thread until the data is loaded anyways, so
 it doesn't seem that making it happen asynchronously buys much.


 The problem is that, for example, a privacy blacklist may prevent a request
 from sending the referrer.  Another thing they can block is cookies.  That
 means that probably the only thing we can do before all blacklists are
 loaded is connecting to the host (which is still a win).  Is it possible to
 do that using ResourceHandlers?


I think it would depend on when in the connection phase that the
ResourceHandler tries to do this.



 And of course the performance of loading blacklists from disk is considered
 to be critical. They are stored in one binary file on disk, in a simple
 format.


My question still stands: if this list is needed in order to process the
first network request, why add extra complexity to RDH to make more things
asynchronous, when either way any IO is basically blocked on the blacklist?
 You might as well load the list on the IO thread.

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Re: Waiting for privacy blacklists to load in ResourceDispatcherHost

2009-11-12 Thread John Abd-El-Malek
On Thu, Nov 12, 2009 at 11:45 AM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 On Thu, Nov 12, 2009 at 20:26, John Abd-El-Malek j...@chromium.org wrote:

 My question still stands: if this list is needed in order to process the
 first network request, why add extra complexity to RDH to make more things
 asynchronous, when either way any IO is basically blocked on the blacklist?
  You might as well load the list on the IO thread.


 Indeed, that would simplify a lot of things. However, we may recompile the
 blacklist when browser is running, for example when loading a new extension.
 In that case, we probably don't want to stall the IO thread.

 The difference is that when starting the browser, we don't have yet any
 blacklist. We should definitely wait for the blacklists to load, because it
 would be surprising for the user if the blacklist doesn't take effect for
 some time after starting the browser.

 When changing the blacklists when the browser is already running, it's more
 complicated.

 Nick, could you comment whether new network requests should be paused when
  a blacklist is changing? (for example, due to a new extension being loaded)
 Not waiting then would make things trivial. And anyway, waiting for
 blacklists after the browser started is even more surprising, as some
 request may already be started (it's not obvious what to do with them).


Per our chat on IRC, I think if a new privacy blacklist is added while the
browser is running, it should be loaded on the file thread, and when it's
available the IO thread should be updated.  The user won't have expectations
about how already-pending requests are scheduled, and won't be loading a
blacklist and then loading a page within 100ms.
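
In other words, something shaped roughly like this (names are hypothetical
and the real BlacklistManager/thread-posting code may differ):

  // On the file thread: compile the new blacklist off disk, then hand the
  // result to the IO thread without ever blocking it.
  void BlacklistManager::OnBlacklistChanged(const FilePath& path) {
    Blacklist* compiled = CompileBlacklistFromDisk(path);  // hypothetical
    ChromeThread::PostTask(
        ChromeThread::IO, FROM_HERE,
        NewRunnableMethod(this, &BlacklistManager::PublishOnIOThread,
                          compiled));
  }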


  --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
 http://groups.google.com/group/chromium-dev

-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] Re: Fwd: Switching vs2008 to be preferred when present.

2009-11-12 Thread John Abd-El-Malek
Thank you!  That was it.  VS 2008 is now fast and I've switched back.

On Wed, Nov 11, 2009 at 8:38 PM, Eric Roman ero...@chromium.org wrote:

 Yuck, I experienced the same gut-wrenchingly slow single stepping problem!

 The problem was the Autos view.

 VS 2008 conveniently turns this on by default, but it is crazy slow.
 Even if it doesn't have focus, it will pause to update it after each step.

 Go ahead and right click on Autos and select Hide.
  The evil Autos tab is near the Locals tab.

  It should be as zippy as 2005 now.

 On Thu, Oct 29, 2009 at 7:23 PM, Mike Belshe mbel...@google.com wrote:
  On Thu, Oct 29, 2009 at 3:34 PM, John Abd-El-Malek j...@chromium.org
 wrote:
 
 
  On Thu, Oct 29, 2009 at 3:23 PM, Finnur Thorarinsson 
 fin...@chromium.org
  wrote:
 
   I was just about to reply and say the same thing (on Win 7).  It
 would
   take me 2-3 seconds each time I step over a line.  I couldn't use it
 anymore
   and switched back to VS 2005.
  Isn't the stepping speed affected by things like what you have in your
  Watch window and what conditional breakpoints you have set, or
 something?
  Anyway, you've probably gone through this thought exercise many
 times...
  Just thought I'd mention it.
 
  This was an apple-to-apple comparison, only a few breakpoints in both,
  fresh build, both on SSD etc.
 
  I don't think it has anything to do with the watch window; it is just
  unbearable - I'd say about 1-1.5seconds per step.
 
 
 
 
 
  On Thu, Oct 29, 2009 at 14:51, John Abd-El-Malek j...@chromium.org
  wrote:
 
 
  On Thu, Oct 29, 2009 at 2:38 PM, Mike Belshe mbel...@google.com
 wrote:
 
  I've been using VS2008 on Win7 for the last month or so.
  I hate it.
  Problems:
  1) Stepping in the debugger is SOOO slow.  I am thinking about
  going back to VS2005.
 
  I was just about to reply and say the same thing (on Win 7).  It would
  take me 2-3 seconds each time I step over a line.  I couldn't use it
 anymore
  and switched back to VS 2005.
 
 
  2) If you turn on Intellisense, it crashes like crazy very regularly.
   I've turned off intellisense, but it is a big loss in productivity
 to do so
  I strongly recommend against VS2008; do others have these problems?
  Mike
 
  On Thu, Oct 29, 2009 at 2:27 PM, Ben Goodger (Google)
  b...@chromium.org wrote:
 
  FYI.
  VS2008 builds with /MP by default, and it's well supported, so when
  present there's no reason for us to not use it by default. Note that
 you can
  still force VS2005 by setting GYP_MSVS_VERSION=2005 in your
 environment.
  Thanks Brad!
  -Ben
 
  -- Forwarded message --
  From: b...@chromium.org
  Date: Thu, Oct 29, 2009 at 2:26 PM
  Subject: Re: Switching vs2008 to be preferred when present.
  To: bradnel...@google.com
  Cc: gyp-develo...@googlegroups.com
 
 
  LGTM
 
  http://codereview.chromium.org/341041
 
 
 
 
 
 
 
 
 
 
 
 
 
  Chromium Developers mailing list: chromium-dev@googlegroups.com
  View archives, change email options, or unsubscribe:
  http://groups.google.com/group/chromium-dev
 
 
 


-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

[chromium-dev] Re: Waiting for privacy blacklists to load in ResourceDispatcherHost

2009-11-11 Thread John Abd-El-Malek
On Wed, Nov 11, 2009 at 1:22 PM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 Initially I got an advice to use PauseRequest and ResourceHandler to wait
 with servicing requests until all privacy blacklists are loaded. However,
 there are problems with that.

 When you look at ResourceDispatcher code, we need a Blacklist::Match* even
 before creating URLRequest.

 I thought about separating the RDH::BeginRequest into two parts: the
 initial part, and the second part, which executes when we finally have a
 blacklist match. If we don't have it at the time BeginRequest is executed,
 we'd save all needed data and put it into a queue. When the blacklist is
 loaded, all queued requests will be started by the second part of split
 BeginRequest.

 To do that, I'd need to listen for BlacklistManager notifications in RDH
 (on IO thread). Does it seem OK to make RDH a NotificationObserver and use
 NotificationRegistrar inside?


The part about making RDH a NotificationObserver sounds fine.  However, if
you block all requests from even starting until the BlacklistManager is ready,
you're basically delaying the IO thread until the data is loaded anyway, so
it doesn't seem that making it happen asynchronously buys much.
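
For concreteness, the wiring could look roughly like this (just a sketch of the
additions; the notification type name and StartQueuedRequests are made up for
illustration):

  class ResourceDispatcherHost : public NotificationObserver {
   public:
    void Init() {
      // Runs on the IO thread; start listening for the blacklist-ready event.
      registrar_.Add(this, NotificationType::BLACKLIST_READY,
                     NotificationService::AllSources());
    }

    // NotificationObserver implementation.
    virtual void Observe(NotificationType type,
                         const NotificationSource& source,
                         const NotificationDetails& details) {
      // Kick off every request that was queued while the blacklist loaded.
      StartQueuedRequests();
    }

   private:
    NotificationRegistrar registrar_;
  };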


 The BlacklistManager will publish the compiled blacklist on the IO thread
 (the CL I published doesn't yet have that fix).

 Or maybe I should do it some other way? I'm not really familiar with the
 code in browser/renderer_host.


-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chromium-dev] How to compile Google Chrome with Visual C++ 2008 Express Edition

2009-11-10 Thread John Abd-El-Malek
Enabling contribution to Chrome on Windows without having to purchase any
software is very welcome, good job :)

I looked at the steps, and as I'm sure you're thinking, if these could be
incorporated into the gyp .sln generation, that would be ideal.

On Mon, Nov 9, 2009 at 6:21 PM, Dominic Jodoin dominic.jod...@gmail.comwrote:

 Hello,

 I just wanted to share with you all that I've succeeded in compiling
 Chrome using Visual C++ 2008 Express Edition and ATL 7.1 bundled with
 the Windows Driver Kit (WDK).

 I've written a blog post that gives the details of my recipe here:

 http://cotsog.wordpress.com/2009/11/08/how-to-compile-google-chrome-with-visual-c-2008-express-edition/

 It would be nice if some Chrome developers could review the steps and
 see if it could be a valid setup for contributing to the project, thus
 lowering the entry bar for developing on Windows.

 I also wonder if GYP could be made Express Edition aware, e.g. not
 include solution folders in that case?

 Do not hesitate to make comments or suggestions.

 Thanks!

 -- Dominic.

 --
 Chromium Developers mailing list: chromium-dev@googlegroups.com
 View archives, change email options, or unsubscribe:
http://groups.google.com/group/chromium-dev


-- 
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev

Re: [chrome-devtools-team] Re: [chromium-dev] Re: How to log exceptions/console messages from shared worker context?

2009-11-08 Thread John Abd-El-Malek
On Sun, Nov 8, 2009 at 12:04 PM, Drew Wilson atwil...@chromium.org wrote:

 In base webkit (i.e. the default, single-process implementation of
 SharedWorkers), I route all of these errors back to the individual parent
 documents on the main thread, and log them via
 ScriptExecutionContext::reportException() and
 ScriptExecutionContext::addMessage().

 That base code is not executed in the chrome version of SharedWorkers,
 because the multi-process architecture is different - we don't route
 exceptions generated in worker processes back to renderer processes (if you
 think about it, it doesn't make sense to route exceptions in worker
 processes back to the renderer process to call the ScriptExecutionContext
 APIs, only to have that renderer process immediately bounce it back up to
 the browser process).


You're saying it doesn't make sense for efficiency reasons?  I
wouldn't worry too much about it at this point, if your goal is to get this
up and working.  We already have a large overhead for workers compared to
other browsers because of our need to go to a different process.


 -atw

 On Sun, Nov 8, 2009 at 11:54 AM, Pavel Feldman pfeld...@chromium.orgwrote:



 On Sun, Nov 8, 2009 at 10:41 PM, Drew Wilson atwil...@chromium.orgwrote:

 That makes sense.

 In the meantime, I'd like to come up with a pragmatic solution for how to
 log exceptions from shared worker context within Chromium, as I think it's a
 valuable debugging aid while we wait for upstream functionality to be added
 - I don't want to block the release of shared workers in the meantime.

 Is it possible to report things to the console directly from the browser
 process? I believe I can get a reference to all of the RenderViewHost
 objects for the worker, so I could do the same thing in Chromium that we do
 in base webkit: log all messages to every single console associated with
 every parent frame. It's not ideal, but at least they are exposed to the
 user.


 I am confused. In 'base webkit' do you report to the console client only, or
 to the inspector controller of the parent frame's page as well? If you do the
 latter, you will get console messages routed both to the web inspector and
 devtools.


 It's not clear to me how I would route my messages through them to the
 inspector, though - I see RenderViewHost::OnAddMessageToConsole(), but that
 doesn't seem to do what I want (doesn't look like it routes to the
 inspector).

 -atw

 On Sun, Nov 8, 2009 at 11:29 AM, Pavel Feldman pfeld...@chromium.orgwrote:

 [now from chromium address]


 These are essentially different requests (although related). Please
 submit bugs for both.

 - Shared workers, workers, etc. require InspectorController-alike
 infrastructure upstream that they would report exceptions / console 
 messages
 to. They should also be able to report JavaScript events there and hence
 allow debugging. Going that way would allow opening inspector with limited
 capabilities (resources, scripts and console) for shared worker context. I
 think we need to start with the bug upstream for this one.

 - This thing above will lead to the UI complexity upstream that Aaron's
 team is already facing in Chromium: too hard to manage several devtools
 windows.

 Thanks
 Pavel

 On Sun, Nov 8, 2009 at 9:59 PM, Aaron Boodman a...@chromium.org wrote:

 It would be cool if the devtools window had some kind of UI where you
 could see all running top-level contexts. Like tabs that had:

 + extensions
  - foo extension
- background page
- worker
- tab 1
- tab 2
- content script a
  + bar extensions
 + worker 1
 + worker 2
 + tab 1
 + tab 2

 Then we could reconfigure the current entrypoints into devtools to go
 automatically to the right place in this tree.

 - a

 On Sun, Nov 8, 2009 at 10:34 AM, Ojan Vafai o...@chromium.org wrote:
  +chrome-devtools-team
  I think this is a general UI problem that needs to be solved for the
  inspector. There are more and more contexts that don't have a page visible
  to the developer (shared workers, extension background pages, etc.). I can't
  think of any good solutions off the top of my head though.
  Ojan
 
  On Sun, Nov 8, 2009 at 10:14 AM, Drew Wilson atwil...@chromium.org
 wrote:
 
  I'm trying to figure out the best way to report exceptions/console
  messages from shared workers. Currently, dedicated workers just send
 their
  messages back to their parent frame, where they are reported like
 any other
  frame-level exception. SharedWorkers don't have a specific parent
 window
  (they may have many parents) so this approach won't really work for
 them.
  I can think of a couple of ways to approach this:
  1) These exceptions/messages are already marshaled up to the browser
  process - perhaps they could be routed to the inspector/dev tools
 from
  there?
  2) There's a shadow frame that is created in worker context to
 handle
  resource requests from the worker. It's possible that this could be
  leveraged to report 

[chromium-dev] Ensuring that all destructors of ref counted objects are private

2009-11-05 Thread John Abd-El-Malek
I've gone through the code and made all destructors of objects that derive
from base::RefCounted or base::RefCountedThreadSafe private.  This helps to
catch corruption bugs at compile time.  For refcounted classes that are meant
to be derived from, make the destructor protected and ensure that all derived
classes have private destructors.

The reason I have gone through and changed all classes is to ensure that new
code always has correct examples to copy from.

From now on, please ensure that you follow this pattern in code that you
write and review.  I've updated the wiki and the headers as well.
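
For reference, the shape of the pattern (Foo is just an example; the friend
declaration is what lets the refcounting base class call the now-private
destructor when the last reference goes away):

  class Foo : public base::RefCounted<Foo> {
   public:
    void DoSomething();

   private:
    friend class base::RefCounted<Foo>;  // So Release() can delete us.
    ~Foo();  // Private: hold a scoped_refptr<Foo>; never delete directly.
  };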

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Changes to using threads in the browser process

2009-11-05 Thread John Abd-El-Malek
On Thu, Nov 5, 2009 at 1:15 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Mon, Nov 2, 2009 at 9:50 PM, John Abd-El-Malek j...@chromium.orgwrote:

 Over the last week, I've been making some changes to how threads are used
 in the browser process, with the goal of simplifying cross-thread access and
 improving stability.

 *The problem*
 We were using a number of patterns that were problematic:

 1) using ChromeThread::GetMessageLoop
   -this isn't safe, since it could return a valid pointer, but since the
 caller isn't holding on to a lock anymore, the target MessageLoop could be
 destructed while it's being used

 2) caching of MessageLoop pointers in order to use them later for PostTask
 and friends
   -this was more efficient previously (more on that later) since using
 ChromeThread::GetMessageLoop involved a lock
   -but it spread logic about the order of thread destruction all over the
 code.  Code that moved from the IO thread to the file thread and back, in
 order to avoid doing disk access on the IO thread, ended up having to do an
 extra hop through the UI thread on the way back to the IO thread since the
  file thread outlives the IO thread.  Of course, most code learnt this the
  hard way, after doing the straightforward IO->file->IO thread hop and
 updating the code after seeing reliability or user crashes
   -it made the browser shutdown fragile and hence difficult to update

 3) file thread hops using RefCountedThreadSafe objects which have
 non-trivial destructors
   -to reduce jank, frequently an object on the UI or IO thread would
 execute some code on the file thread, then post the result back to the
 original thread.  We make this easy using RefCountedThreadSafe and
 NewRunnableMethod so this pattern happened often, but it's not always
 safe: NewRunnableMethod holds an extra reference on the object to ensure
 that it doesn't invoke a method on a deleted object.  But it's quite
 possible that before the file thread's stack unwinds and it releases the
 extra reference, that the response task on the original thread executed and
 released its own additional reference.  The file thread is then left with
 the remaining reference and the object gets destructed there.  While for
 most objects this is ok, many have non-trivial destructors, with the worst
 being ones that register with the per-thread NotificationService.  Dangling
 pointers would be left behind and tracking these crashes from ChromeBot or
 the crash dumps has wasted several days at least for me.

 4) having browser code take different code paths if a thread didn't exist
   -this could be either deceptively harmless  (i.e. execute synchronously
 when it was normally asynchronous), when in fact it makes shutdown slower
 because disk access is moved to the UI thread
   -it could lead to data loss, if tasks are silently not posted because
 the code assumes this only happens in unit tests, when it could occur on
 browser shutdown as well


 *The solution*
 1+2: Use ChromeThread::PostTask and friends (i.e. PostDelayedTask,
 DeleteSoon, ReleaseSoon) which are safe and efficient: no locks are grabbed
 if the target thread is known to outlive the current thread.  The four
 static methods have the same signature as the ones from MessageLoop, with
 the addition of the first parameter to indicate the target thread.

 ChromeThread::PostTask(ChromeThread::FILE, FROM_HERE, task);

 3: If your object must be destructed on a specific thread, use a trait
 from ChromeThread:

  class Foo : public base::RefCountedThreadSafe<Foo,
  ChromeThread::DeleteOnIOThread>


 This is really cool and a great idea, but I'm a little concerned about what
 happens when the thread has already been torn down.  It seems that it calls
 DeleteSoon which calls PostNonNestableTask which calls PostTaskHelper.
  PostTaskHelper deletes the task if the thread is shut down.

 This works fine if something is only supposed to touch 2 threads.  But what
 if 2 threads simultaneously delete something which is supposed to be deleted
 on a third thread that's already been shut down?


I'm not sure which object you're referring to.  The PostTaskHelper will
delete the task.  But if you have a DeleteSoon task, deleting the task (i.e.
because the target thread is gone) doesn't delete the object.

As for simultaneously deleting an object, if more than one thread is
accessing an object, they should each have a reference to it.  They can't
ensure that releasing their reference will cause deletion, since there could
be other references.


  And what if they both call a method on another class that's not thread
 safe?


If they use NewRunnableMethod on an object that's not thread safe, then the
resulting task will only execute on the target thread.  If the task couldn't
execute because the target thread is gone, then the method won't be invoked.


  I ask because this scenario is going to prevent me from using this for DOM
 Storage.

 A possible solution would be to guarantee that, if the thread

[chromium-dev] Re: Changes to using threads in the browser process

2009-11-05 Thread John Abd-El-Malek
On Thu, Nov 5, 2009 at 1:42 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Thu, Nov 5, 2009 at 1:34 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Thu, Nov 5, 2009 at 1:15 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Mon, Nov 2, 2009 at 9:50 PM, John Abd-El-Malek j...@chromium.orgwrote:

 Over the last week, I've been making some changes to how threads are
 used in the browser process, with the goal of simplifying cross-thread
 access and improving stability.

 *The problem*
 We were using a number of patterns that were problematic:

 1) using ChromeThread::GetMessageLoop
   -this isn't safe, since it could return a valid pointer, but since the
 caller isn't holding on to a lock anymore, the target MessageLoop could be
 destructed while it's being used

 2) caching of MessageLoop pointers in order to use them later for
 PostTask and friends
   -this was more efficient previously (more on that later) since using
 ChromeThread::GetMessageLoop involved a lock
   -but it spread logic about the order of thread destruction all over
 the code.  Code that moved from the IO thread to the file thread and back,
 in order to avoid doing disk access on the IO thread, ended up having to do
 an extra hop through the UI thread on the way back to the IO thread since
  the file thread outlives the IO thread.  Of course, most code learnt this
  the hard way, after doing the straightforward IO->file->IO thread hop and
 updating the code after seeing reliability or user crashes
   -it made the browser shutdown fragile and hence difficult to update

 3) file thread hops using RefCountedThreadSafe objects which have
 non-trivial destructors
   -to reduce jank, frequently an object on the UI or IO thread would
 execute some code on the file thread, then post the result back to the
 original thread.  We make this easy using RefCountedThreadSafe and
 NewRunnableMethod so this pattern happened often, but it's not always
 safe: NewRunnableMethod holds an extra reference on the object to ensure
 that it doesn't invoke a method on a deleted object.  But it's quite
 possible that before the file thread's stack unwinds and it releases the
 extra reference, that the response task on the original thread executed and
 released its own additional reference.  The file thread is then left with
 the remaining reference and the object gets destructed there.  While for
 most objects this is ok, many have non-trivial destructors, with the worst
 being ones that register with the per-thread NotificationService.  Dangling
 pointers would be left behind and tracking these crashes from ChromeBot or
 the crash dumps has wasted several days at least for me.

 4) having browser code take different code paths if a thread didn't
 exist
   -this could be either deceptively harmless  (i.e. execute
 synchronously when it was normally asynchronous), when in fact it makes
 shutdown slower because disk access is moved to the UI thread
   -it could lead to data loss, if tasks are silently not posted because
 the code assumes this only happens in unit tests, when it could occur on
 browser shutdown as well


 *The solution*
 1+2: Use ChromeThread::PostTask and friends (i.e. PostDelayedTask,
 DeleteSoon, ReleaseSoon) which are safe and efficient: no locks are grabbed
 if the target thread is known to outlive the current thread.  The four
 static methods have the same signature as the ones from MessageLoop, with
 the addition of the first parameter to indicate the target thread.

 ChromeThread::PostTask(ChromeThread::FILE, FROM_HERE, task);

 3: If your object must be destructed on a specific thread, use a trait
 from ChromeThread:

  class Foo : public base::RefCountedThreadSafe<Foo,
  ChromeThread::DeleteOnIOThread>


 This is really cool and a great idea, but I'm a little concerned about
 what happens when the thread has already been torn down.  It seems that it
 calls DeleteSoon which calls PostNonNestableTask which calls PostTaskHelper.
  PostTaskHelper deletes the task if the thread is shut down.

 This works fine if something is only supposed to touch 2 threads.  But
 what if 2 threads simultaneously delete something which is supposed to be
 deleted on a third thread that's already been shut down?


  I'm not sure which object you're referring to.  The PostTaskHelper
 will delete the task.  But if you have a DeleteSoon task, deleting the task
 (i.e. because the target thread is gone) doesn't delete the object.


 So this means the memory will just be leaked?  That won't work for DOM
 Storage since the backing database is shut down in destructors.


If the object doesn't need to be closed on a specific thread, and you want
to ensure that it's deleted on any thread, then ReleaseSoon and
ChromeThread::DeleteOnIOThread aren't going to serve your need.  You'll
probably want to have either a singleton or an object like
ResourceDispatcherHost that BrowserProcess can delete directly.


 As for simultaneously deleting an object, if more than 1 thread

[chromium-dev] Re: Singleton, LazyInstance, and testability

2009-11-04 Thread John Abd-El-Malek
On Tue, Nov 3, 2009 at 11:49 PM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 I encountered another problem related to Singletons in unit tests.
 PluginService is a Singleton, and it listens to extensions notifications. In
 one of my tests when I was using the extensions notifications the
 PluginService crashed because I passed NULL as the Extension* - because the
 listener I was testing didn't even read that value, but PluginService did
 (it was unexpected).

 I tried to fix the issue by adding a shadow AtExitManager to the test which
 instantiated PluginService (resource_dispatcher_host_unittest.cc). And then
 another crash appeared on Windows: NPAPI::PluginList is a LazyInstance,
 which means that the shadow AtExitManager destroyed it, but it didn't
 re-instantiate itself the next time it was used.

 Is there a big difference between a Singleton and LazyInstance? I was
 thinking about making NPAPI::PluginList a Singleton instead...


It's fine to switch PluginList to Singleton instead of LazyInstance.  The
big benefit of LazyInstance is using it with TLS objects and not using up a
slot unless the code runs.  That doesn't apply here, since PluginList isn't
stored per thread.
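
For comparison, the two look roughly like this at the definition site (a
sketch; which method you then call on the list is beside the point here):

  // LazyInstance: a lazily-constructed global slot, no heap allocation.
  static base::LazyInstance<NPAPI::PluginList> g_plugin_list(
      base::LINKER_INITIALIZED);
  // ...
  NPAPI::PluginList* lazy_list = g_plugin_list.Pointer();

  // Singleton: constructed on first use, destroyed via AtExitManager.
  NPAPI::PluginList* singleton_list = Singleton<NPAPI::PluginList>::get();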


 By the way, I have a more general fix in the queue (an unwanted Singleton
 detector). If you're interested, please star http://crbug.com/12710. It
 seems that the most recent gtest (1.4.0) has necessary support.

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: revert now, ask questions later? WAS: Reverting a change, the fast way

2009-11-03 Thread John Abd-El-Malek
But this means that the person didn't use the trybot.

I think we need to be harsher on people who commit changes that didn't
complete, or that failed, on the trybot.  They need a really good reason
for trying their change on the buildbot and possibly delaying many
other engineers.

On Tue, Nov 3, 2009 at 3:11 PM, Ben Goodger (Google) b...@chromium.orgwrote:


  The most common case of a < 5 minute bustage fix is a file that was omitted
  from the changelist.

 -Ben

 On Tue, Nov 3, 2009 at 2:34 PM, Peter Kasting pkast...@google.com wrote:
  On Tue, Nov 3, 2009 at 2:08 PM, Ojan Vafai o...@chromium.org wrote:
 
  To be clear, here's the proposed policy: Any change that would close the
  tree can be reverted if it can't be fixed in 2 minutes.
 
  How about:
  If a change closes the tree, the change author has 1 or 2 minutes to
 respond
  to a ping.  The change should be reverted if the author doesn't respond,
 if
  he says to revert, or if he does not say he has a fix within the next 5
  minutes.
  I can't fix _any_ problem in 2 minutes.  But I can fix most of them in 5.
   The goal is to allow the author a reasonable chance to fix trivial
 problems
  before we revert.  And I think the tree should go ahead and close during
  that interval.
  PK
  
 

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Changes to using threads in the browser process

2009-11-03 Thread John Abd-El-Malek
On Tue, Nov 3, 2009 at 12:03 AM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 On Tue, Nov 3, 2009 at 06:50, John Abd-El-Malek j...@chromium.org wrote:

 *The solution*
 1+2: Use ChromeThread::PostTask and friends (i.e. PostDelayedTask,
 DeleteSoon, ReleaseSoon) which are safe and efficient: no locks are grabbed
 if the target thread is known to outlive the current thread.  The four
 static methods have the same signature as the ones from MessageLoop, with
 the addition of the first parameter to indicate the target thread.

 ChromeThread::PostTask(ChromeThread::FILE, FROM_HERE, task);

 3: If your object must be destructed on a specific thread, use a trait
 from ChromeThread:

 class Foo : public base::RefCountedThreadSafe<Foo,
 ChromeThread::DeleteOnIOThread>


 Awesome!


 4: I've removed all the special casing and always made the objects in the
 browser code behave in one way.  If you're writing a unit test and need to
 use an object that goes to a file thread (where before it would proceed
 synchronously), you just need:

 ChromeThread file_thread(ChromeThread::FILE, MessageLoop::current());
 foo->StartAsyncTaskOnFileThread();
 MessageLoop::current()->RunAllPending();

 There are plenty of examples now in the tests.


 Are there any examples for asynchronous processing (with a real backend
 thread)?


Sure, look at BufferedResourceHandler or ResourceMessageFilter, which both
need to load plugins on the file thread.
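
The shape of it is roughly this (a sketch; Foo and the method names are
invented, and Foo would be RefCountedThreadSafe so NewRunnableMethod can keep
it alive across the hops):

  // On the IO thread: hand the blocking work to the file thread.
  void Foo::Start() {
    ChromeThread::PostTask(
        ChromeThread::FILE, FROM_HERE,
        NewRunnableMethod(this, &Foo::LoadPluginsOnFileThread));
  }

  // On the file thread: safe to touch the disk here.
  void Foo::LoadPluginsOnFileThread() {
    // ... blocking plugin enumeration ...
    ChromeThread::PostTask(
        ChromeThread::IO, FROM_HERE,
        NewRunnableMethod(this, &Foo::OnPluginsLoaded));
  }

  // Back on the IO thread with the result.
  void Foo::OnPluginsLoaded() {
    // ... continue servicing the request ...
  }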



 I think that chrome/browser/privacy_blacklist/blacklist_manager.cc uses
 some of the anti-patterns you described (it caches base::Thread* instead of
 MessageLoop*, but I understand it's equally bad). And it will benefit a lot
 from ChromeThread::DeleteOnIOThread. The good part is that BlacklistManager
 is not yet used in the browser. I will fix the issues before the final
 integration of BlacklistManager.


Great, I missed that one since I was just looking for caching of MessageLoop
pointers.  Thanks.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Changes to using threads in the browser process

2009-11-03 Thread John Abd-El-Malek
I've updated the wiki.  I was wary of putting this stuff up because of bit
rot, but since there's already documentation we won't be worse off.

As for the renderer process: this object would have to be recreated.  I'd
prefer to keep ChromeThread just for one process to keep things clearer.  I
think this pattern doesn't happen often in child processes (yours is the
first example I've heard of), so it seems a one-off solution is ok for now.

On Tue, Nov 3, 2009 at 11:49 AM, Andrew Scherkus scher...@chromium.orgwrote:

 Very cool!

 Could this idea be done in the render process?  We try to keep the media
 processing code off the render thread but we've been bitten using cached
 MessageLoops which have been destructed (usually on tab close when the
 render thread goes away).

 On Mon, Nov 2, 2009 at 9:50 PM, John Abd-El-Malek j...@chromium.orgwrote:

 Over the last week, I've been making some changes to how threads are used
 in the browser process, with the goal of simplifying cross-thread access and
 improving stability.

 *The problem*
 We were using a number of patterns that were problematic:

 1) using ChromeThread::GetMessageLoop
   -this isn't safe, since it could return a valid pointer, but since the
 caller isn't holding on to a lock anymore, the target MessageLoop could be
 destructed while it's being used

 2) caching of MessageLoop pointers in order to use them later for PostTask
 and friends
   -this was more efficient previously (more on that later) since using
 ChromeThread::GetMessageLoop involved a lock
   -but it spread logic about the order of thread destruction all over the
 code.  Code that moved from the IO thread to the file thread and back, in
 order to avoid doing disk access on the IO thread, ended up having to do an
 extra hop through the UI thread on the way back to the IO thread since the
  file thread outlives the IO thread.  Of course, most code learnt this the
  hard way, after doing the straightforward IO->file->IO thread hop and
 updating the code after seeing reliability or user crashes
   -it made the browser shutdown fragile and hence difficult to update

 3) file thread hops using RefCountedThreadSafe objects which have
 non-trivial destructors
   -to reduce jank, frequently an object on the UI or IO thread would
 execute some code on the file thread, then post the result back to the
 original thread.  We make this easy using RefCountedThreadSafe and
 NewRunnableMethod so this pattern happened often, but it's not always
 safe: NewRunnableMethod holds an extra reference on the object to ensure
 that it doesn't invoke a method on a deleted object.  But it's quite
 possible that before the file thread's stack unwinds and it releases the
 extra reference, that the response task on the original thread executed and
 released its own additional reference.  The file thread is then left with
 the remaining reference and the object gets destructed there.  While for
 most objects this is ok, many have non-trivial destructors, with the worst
 being ones that register with the per-thread NotificationService.  Dangling
 pointers would be left behind and tracking these crashes from ChromeBot or
 the crash dumps has wasted several days at least for me.

 4) having browser code take different code paths if a thread didn't exist
   -this could be either deceptively harmless  (i.e. execute synchronously
 when it was normally asynchronous), when in fact it makes shutdown slower
 because disk access is moved to the UI thread
   -it could lead to data loss, if tasks are silently not posted because
 the code assumes this only happens in unit tests, when it could occur on
 browser shutdown as well


 *The solution*
 1+2: Use ChromeThread::PostTask and friends (i.e. PostDelayedTask,
 DeleteSoon, ReleaseSoon) which are safe and efficient: no locks are grabbed
 if the target thread is known to outlive the current thread.  The four
 static methods have the same signature as the ones from MessageLoop, with
 the addition of the first parameter to indicate the target thread.

 ChromeThread::PostTask(ChromeThread::FILE, FROM_HERE, task);

 3: If your object must be destructed on a specific thread, use a trait
 from ChromeThread:

  class Foo : public base::RefCountedThreadSafe<Foo,
  ChromeThread::DeleteOnIOThread>

 4: I've removed all the special casing and always made the objects in the
 browser code behave in one way.  If you're writing a unit test and need to
 use an object that goes to a file thread (where before it would proceed
 synchronously), you just need:

 ChromeThread file_thread(ChromeThread::FILE, MessageLoop::current());
  foo->StartAsyncTaskOnFileThread();
  MessageLoop::current()->RunAllPending();

 There are plenty of examples now in the tests.


 *Gotchas*
 -PostTask will silently delete a task if the thread doesn't exist.  This
 is done to avoid having all the code that uses it have special cases if the
 target thread exists or not, and to make Valgrind happy.  As usual, the task
 for DeleteSoon

[chromium-dev] Re: FYI: Linux interactive_ui_tests a bit hosed

2009-11-02 Thread John Abd-El-Malek
(CCing today's sheriffs + Jay)

I was thinking of marking the bug as P1, except that won't change much if no
one has the time to work on it now.

I think to keep the tree green, the entire test should be disabled until
singletons are cleared between each test, or it's switched to browser_tests
launcher.

On Mon, Nov 2, 2009 at 12:44 AM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 I think that the right fix is to switch interactive_ui_tests to
 browser_tests launcher (jcampan did some great work to make the launcher
 more flexible, it may be quite simple to do the switch now - if there are no
 UI tests in interactive_ui_tests).

 I also suggest bumping the priority of http://crbug.com/25997 to P1 and
 possibly marking it with FlakyTest label.

 On Mon, Nov 2, 2009 at 09:37, John Abd-El-Malek j...@chromium.org wrote:

  This is related to bug 25997.  The tests are inherently flaky because
  singletons aren't released between runs, so cached MessageLoop pointers are
 bogus.  Sometimes they're ok when the order of construction/destruction of
 MessageLoop pointers is the same.  But if that changes, or other memory
 allocations change, the tests start failing.

 Just wanted to send a heads up since I spent a day debugging this last
 week, and now they're failing again.

 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Changes to using threads in the browser process

2009-11-02 Thread John Abd-El-Malek
That sounds interesting, but there are still two changes remaining, and other
unrelated changes have gone in in the meantime, so it wouldn't be an
apples-to-apples comparison.  I can let you know when I'm done though (when
the bug is marked fixed).

On Mon, Nov 2, 2009 at 10:09 PM, Huan Ren hu...@google.com wrote:

 Cool! I could compare the builds before and after these changes to see
 what difference it makes. Of course it also prevents future issues.

 Huan

 On Mon, Nov 2, 2009 at 9:50 PM, John Abd-El-Malek j...@chromium.org
 wrote:
  Over the last week, I've been making some changes to how threads are used
 in
  the browser process, with the goal of simplifying cross-thread access and
  improving stability.
  The problem
  We were using a number of patterns that were problematic:
  1) using ChromeThread::GetMessageLoop
-this isn't safe, since it could return a valid pointer, but since the
  caller isn't holding on to a lock anymore, the target MessageLoop could
 be
  destructed while it's being used
  2) caching of MessageLoop pointers in order to use them later for
 PostTask
  and friends
-this was more efficient previously (more on that later) since using
  ChromeThread::GetMessageLoop involved a lock
-but it spread logic about the order of thread destruction all over the
  code.  Code that moved from the IO thread to the file thread and back, in
  order to avoid doing disk access on the IO thread, ended up having to do
 an
  extra hop through the UI thread on the way back to the IO thread since
 the
  file thread outlives the IO thread.  Of course, most code learnt this the
  hard way, after doing the straightforward IO->file->IO thread hop and
  updating the code after seeing reliability or user crashes
-it made the browser shutdown fragile and hence difficult to update
  3) file thread hops using RefCountedThreadSafe objects which have
  non-trivial destructors
-to reduce jank, frequently an object on the UI or IO thread would
 execute
  some code on the file thread, then post the result back to the original
  thread.  We make this easy using RefCountedThreadSafe and
 NewRunnableMethod
  so this pattern happened often, but it's not always
 safe: NewRunnableMethod
  holds an extra reference on the object to ensure that it doesn't invoke a
  method on a deleted object.  But it's quite possible that before the file
  thread's stack unwinds and it releases the extra reference, that the
  response task on the original thread executed and released its own
  additional reference.  The file thread is then left with the remaining
  reference and the object gets destructed there.  While for most objects
 this
  is ok, many have non-trivial destructors, with the worst being ones that
  register with the per-thread NotificationService.  Dangling pointers
 would
  be left behind and tracking these crashes from ChromeBot or the crash
 dumps
  has wasted several days at least for me.
  4) having browser code take different code paths if a thread didn't exist
-this could be either deceptively harmless  (i.e. execute synchronously
  when it was normally asynchronous), when in fact it makes shutdown slower
  because disk access is moved to the UI thread
-it could lead to data loss, if tasks are silently not posted because
 the
  code assumes this only happens in unit tests, when it could occur on
 browser
  shutdown as well
 
  The solution
  1+2: Use ChromeThread::PostTask and friends (i.e. PostDelayedTask,
  DeleteSoon, ReleaseSoon) which are safe and efficient: no locks are
 grabbed
  if the target thread is known to outlive the current thread.  The four
  static methods have the same signature as the ones from MessageLoop, with
  the addition of the first parameter to indicate the target thread.
  ChromeThread::PostTask(ChromeThread::FILE, FROM_HERE, task);
  3: If your object must be destructed on a specific thread, use a trait
 from
  ChromeThread:
  class Foo : public base::RefCountedThreadSafe<Foo,
  ChromeThread::DeleteOnIOThread>
  4: I've removed all the special casing and always made the objects in the
  browser code behave in one way.  If you're writing a unit test and need
 to
  use an object that goes to a file thread (where before it would proceed
  synchronously), you just need:
  ChromeThread file_thread(ChromeThread::FILE, MessageLoop::current());
  foo->StartAsyncTaskOnFileThread();
  MessageLoop::current()->RunAllPending();
  There are plenty of examples now in the tests.
 
  Gotchas
  -PostTask will silently delete a task if the thread doesn't exist.  This
 is
  done to avoid having all the code that uses it have special cases if the
  target thread exists or not, and to make Valgrind happy.  As usual, the
 task
  for DeleteSoon/ReleaseSoon doesn't do anything in its destructor, so this
  won't cause unexpected behavior with them.  But be wary of posted Task
  objects which have non-trivial destructors or smart pointers as members.
   I'm still on the fence about

[chromium-dev] Re: Fwd: Switching vs2008 to be preferred when present.

2009-10-29 Thread John Abd-El-Malek
On Thu, Oct 29, 2009 at 2:38 PM, Mike Belshe mbel...@google.com wrote:

 I've been using VS2008 on Win7 for the last month or so.

 I hate it.

 Problems:

 1) Stepping in the debugger is SOOO slow.  I am thinking about going
 back to VS2005.


I was just about to reply and say the same thing (on Win 7).  It would take
me 2-3 seconds each time I step over a line.  I couldn't use it anymore and
switched back to VS 2005.


 2) If you turn on Intellisense, it crashes like crazy very regularly.  I've
 turned off intellisense, but it is a big loss in productivity to do so

 I strongly recommend against VS2008; do others have these problems?

 Mike


 On Thu, Oct 29, 2009 at 2:27 PM, Ben Goodger (Google) 
 b...@chromium.orgwrote:

 FYI.

 VS2008 builds with /MP by default, and it's well supported, so when
 present there's no reason for us to not use it by default. Note that you can
 still force VS2005 by setting GYP_MSVS_VERSION=2005 in your environment.

 Thanks Brad!

 -Ben

 -- Forwarded message --
 From: b...@chromium.org
 Date: Thu, Oct 29, 2009 at 2:26 PM
 Subject: Re: Switching vs2008 to be preferred when present.
 To: bradnel...@google.com
 Cc: gyp-develo...@googlegroups.com


 LGTM


 http://codereview.chromium.org/341041





 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Fwd: Switching vs2008 to be preferred when present.

2009-10-29 Thread John Abd-El-Malek
On Thu, Oct 29, 2009 at 3:23 PM, Finnur Thorarinsson fin...@chromium.orgwrote:


  I was just about to reply and say the same thing (on Win 7).  It would
 take me 2-3 seconds each time I step over a line.  I couldn't use it anymore
 and switched back to VS 2005.

 Isn't the stepping speed affected by things like what you have in your
 Watch window and what conditional breakpoints you have set, or something?

 Anyway, you've probably gone through this thought exercise many times...
 Just thought I'd mention it.


This was an apples-to-apples comparison: only a few breakpoints in both, fresh
build, both on SSD, etc.





 On Thu, Oct 29, 2009 at 14:51, John Abd-El-Malek j...@chromium.org wrote:



 On Thu, Oct 29, 2009 at 2:38 PM, Mike Belshe mbel...@google.com wrote:

 I've been using VS2008 on Win7 for the last month or so.

 I hate it.

 Problems:

 1) Stepping in the debugger is SOOO slow.  I am thinking about going
 back to VS2005.


 I was just about to reply and say the same thing (on Win 7).  It would
 take me 2-3 seconds each time I step over a line.  I couldn't use it anymore
 and switched back to VS 2005.


 2) If you turn on Intellisense, it crashes like crazy very regularly.
  I've turned off intellisense, but it is a big loss in productivity to do so

 I strongly recommend against VS2008; do others have these problems?

 Mike


 On Thu, Oct 29, 2009 at 2:27 PM, Ben Goodger (Google) 
 b...@chromium.orgwrote:

 FYI.

 VS2008 builds with /MP by default, and it's well supported, so when
 present there's no reason for us to not use it by default. Note that you 
 can
 still force VS2005 by setting GYP_MSVS_VERSION=2005 in your environment.

 Thanks Brad!

 -Ben

 -- Forwarded message --
 From: b...@chromium.org
 Date: Thu, Oct 29, 2009 at 2:26 PM
 Subject: Re: Switching vs2008 to be preferred when present.
 To: bradnel...@google.com
 Cc: gyp-develo...@googlegroups.com


 LGTM


 http://codereview.chromium.org/341041








 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] git users and svn:eol-style

2009-10-29 Thread John Abd-El-Malek
Per the Chromium style guide (
http://dev.chromium.org/developers/coding-style), we require all new files
to have the svn:eol-style property set. We even have a presubmit check for
it, in case you don't configure Subversion to add it automatically per the
link above.

git users don't seem to have such a presubmit check, so nothing reminds them
to set the property. Can the git-cl script be modified to do this, and if not
quickly, can git users please try to remember and follow this?

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: git users and svn:eol-style

2009-10-29 Thread John Abd-El-Malek
On Thu, Oct 29, 2009 at 5:31 PM, Evan Martin e...@chromium.org wrote:

 On Thu, Oct 29, 2009 at 5:28 PM, Evan Martin e...@chromium.org wrote:
  If you add the junk to your ~/.subversion/config that's specified on this
 page
   http://dev.chromium.org/developers/coding-style
  then git will do the right thing as well.  (I just tested it on a
  local svn repo to be sure.)

 BTW, it looks like Yaar helpfully added this to the commit
 instructions a month ago!
  http://code.google.com/p/chromium/wiki/UsingGit#Committing


Nice, so all new users will have it set.  Existing users: please please set
this.  Otherwise you punt the work onto svn users when they next modify the
file.
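
For anyone who hasn't set it up yet, the relevant bit of ~/.subversion/config
looks roughly like this (a sketch; the coding-style page linked above has the
authoritative list of patterns and the exact eol value to use):

  [miscellany]
  enable-auto-props = yes

  [auto-props]
  *.cc = svn:eol-style=LF
  *.h = svn:eol-style=LF
  *.gyp = svn:eol-style=LF

With that in place, git picks up the property too, per Evan's test above.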

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How to use PauseRequest

2009-10-23 Thread John Abd-El-Malek
Check out BufferedResourceHandler; it pauses requests until plugins are
loaded (needed to know which MIME types are available).

On Fri, Oct 23, 2009 at 1:23 PM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 I'm going to use PauseRequest for privacy blacklists. It seems that I
 should create a new ResourceHandler, and resource handlers seem to wrap
 another resource handlers. Then I'd have to add code to use the new
 ResourceHandler in ResourceDispatcherHost.

 I'd need to write a ResourceHandler which would pause any requests until
 the BlacklistManager has loaded its blacklist (I can get a notification for
 that, this is the easy part). After receiving the notification, I'd un-pause
 all pending requests. What's a good way to do that (writing such
 ResourceHandler)?

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: [LTTF][WebKit Gardening]: Keeping up with the weeds.

2009-10-13 Thread John Abd-El-Malek
On Tue, Oct 13, 2009 at 3:46 PM, Dimitri Glazkov dglaz...@chromium.orgwrote:


 Based on the feedback, it sounds like we need to take the approach
 with LTTF team adding more resources on cleaning up the bottom of the
 test_expectations file (i.e. stuff recently added by the gardeners).

 It is still the gardener's responsibility to take care of the
 rebaselines, right? What should they do with the rest? File bugs? Mark
 as BUG_LTTF?


I agree with the need to have some method of assigning failed tests and having
someone fix them soon.  I'm not sure the sheriff is the best person, for the
reasons given above.  However, one thing that I think we need to be very
diligent about is assigning bugs to the correct person.  The sheriff usually
creates bugs, but often they may not have an owner, the owner might be left
at the default (which is themselves), etc.  The person who owns that area is
then unaware that tests in their area are failing.

I wish that any bug that is created, whether because of a failing layout
test or a Chrome unit test, had to have an owner.  Finding one shouldn't take
much time: go through the svn log or ask on #chromium.



 :DG

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
Saw this on slashdot:
http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
The conclusion is an average of 25,000–75,000 FIT (failures in time per
billion hours of operation) per Mbit.

On my machine the browser process is usually > 100MB, so that averages out
to 176 to 493 errors per year, with those numbers having big variance
depending on the machine.  Most users don't have ECC, which means this
will lead to corruption.  Sqlite is a heavy user of memory, so even if it's
1/4 of the 100MB, that means we'll see an average of 40-120 errors occurring
naturally because of faulty DIMMs.
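
As a rough sanity check of that arithmetic (assuming 100MB is about 800 Mbit
and roughly 8,766 hours in a year):

  25,000 FIT/Mbit * 800 Mbit = 20,000,000 failures per 10^9 hours
                             = 0.02 failures/hour
                            ~= 175 errors/year

The upper bound of the FIT range scales the same way, which is where the
few-hundred-errors-per-year figure comes from.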

Given that sqlite corruption means (repeated) crashing of the browser
process, this data heavily suggests we should move the sqlite code
into a separate process.  The IPC overhead is negligible compared to disk
access.  My hunch is that the complexity is also not that high, since the
code that deals with it is already asynchronous (we don't use sqlite on
the UI/IO threads).

What do others think?

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
I'm not sure how Carlos is doing it.  Will we know if something is corrupt
just on load/save?  I imagine there's no way we can know when corruption
happens in steady state and the next query leads to some other browser memory
(or another database) getting corrupted?

On Tue, Oct 6, 2009 at 3:58 PM, Huan Ren hu...@google.com wrote:

 It will be helpful to get our own measurement on database failures.
 Carlos just added something like that.

 Huan

 On Tue, Oct 6, 2009 at 3:49 PM, John Abd-El-Malek j...@chromium.org
 wrote:
  Saw this on
  slashdot: http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
  The conclusion is an average of 25,000–75,000 FIT (failures in time per
  billion hours of operation) per Mbit.
  On my machine the browser process is usually  100MB, so that averages
 out
  to 176 to 493 error per year, with those numbers having big variance
  depending on the machine.  Since most users don't have ECC, which means
 this
  will lead to corruption.  Sqlite is a heavy user of memory, so even if
 it's
  1/4 of the 100MB, that means we'll see an average of 40-120 errors
 naturally
  because of faulty DIMMs.
  Given that sqlite corruption means (repeated) crashing of the browser
  process, it seems this data heavily suggests we should separate sqlite
 code
  into a separate process.  The IPC overhead is negligible compared to disk
  access.  My hunch is that the complexity is also not that high, since the
  code that deals with it is already asynchronous since we don't use sqlite
 on
  the UI/IO threads.
  What do others think?
   
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
On Tue, Oct 6, 2009 at 4:30 PM, Carlos Pizano c...@google.com wrote:

 On Tue, Oct 6, 2009 at 4:14 PM, John Abd-El-Malek j...@chromium.org
 wrote:
  I'm not sure how Carlos is doing it?  Will we know if something is
 corrupt
  just on load/save?

  Many sqlite calls can return sqlite_corrupt, for example a query or an
  insert.
  We just check for error codes 1 to 26, with 5 or 6 of them being
  serious errors such as sqlite_corrupt.

  I am sure that random bit flips in memory and on disk are the cause of
  some crashes; this is probably the 'limit' factor of how low the crash
  rate of a perfect program deployed on millions of computers can go.


The point I was trying to make is that the 'limit' factor as you put it is
proportional to memory usage.  Given our large memory consumption in the
browser process, the numbers from the paper imply dozens of corruptions just
in sqlite memory per user.  Even if only a small fraction of these are
harmful, spread over millions of users that's a lot of corruption.


  But I am unsure how to calculate whether, for example, a random bit flip on
  the backing stores, which add up to at least 10M on most machines, does not
  hurt, or one in the middle of a cache entry, or in the data part of some
  structure.


   I imagine there's no way we can know when corruption
  happen in steady-state and the next query leads to some other browser
 memory
  (or another database) getting corrupted?
 
  On Tue, Oct 6, 2009 at 3:58 PM, Huan Ren hu...@google.com wrote:
 
  It will be helpful to get our own measurement on database failures.
  Carlos just added something like that.
 
  Huan
 
  On Tue, Oct 6, 2009 at 3:49 PM, John Abd-El-Malek j...@chromium.org
  wrote:
   Saw this on
   slashdot: http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
   The conclusion is an average of 25,000–75,000 FIT (failures in time
 per
   billion hours of operation) per Mbit.
   On my machine the browser process is usually  100MB, so that averages
   out
   to 176 to 493 error per year, with those numbers having big variance
   depending on the machine.  Since most users don't have ECC, which
 means
   this
   will lead to corruption.  Sqlite is a heavy user of memory, so even if
   it's
   1/4 of the 100MB, that means we'll see an average of 40-120 errors
   naturally
   because of faulty DIMMs.
   Given that sqlite corruption means (repeated) crashing of the browser
   process, it seems this data heavily suggests we should separate sqlite
   code
   into a separate process.  The IPC overhead is negligible compared to
   disk
   access.  My hunch is that the complexity is also not that high, since
   the
   code that deals with it is already asynchronous since we don't use
   sqlite on
   the UI/IO threads.
   What do others think?
 
  
 
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
On Tue, Oct 6, 2009 at 5:09 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Tue, Oct 6, 2009 at 4:59 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Tue, Oct 6, 2009 at 4:30 PM, Carlos Pizano c...@google.com wrote:

  On Tue, Oct 6, 2009 at 4:14 PM, John Abd-El-Malek j...@chromium.org
 wrote:
  I'm not sure how Carlos is doing it?  Will we know if something is
 corrupt
  just on load/save?

 Many sqlite calls can return sqlite_corrupt. For example a query or an
 insert
 We just check for error codes 1 to 26 with 5 or 6 of them being
 serious error such as sqlite_corrupt

 I am sure that random bit flip in memory and on disk is the cause of
 some crashes, this is probably the 'limit' factor of how low the crash
 rate of a perfect program deployed in millions of computers can go.


 The point I was trying to make is that the 'limit' factor as you put it is
 proportional to memory usage.  Given our large memory consumption in the
 browser process, the numbers from the paper imply dozens of corruptions just
 in sqlite memory per user.  Even if only a small fraction of these are
 harmful, spread over millions of users that's a lot of corruption.


 For what it's worth:  This makes sense to me.  It seems like pulling SQLite
 into its own process would be helpful for the reasons you laid out.  I
 wonder if the only reason no one else has chimed in on this thread is that
 no one wants to have to implement it.  :-)


Chase is going to start investigating it (i.e. figure out what the cost of
doing it is, how much change it requires, and ways of measuring the benefit).




 But I am unsure how to calculate, for example a random bit flip on the
 backingstores, which add to at least 10M on most machines does not
 hurt, or in the middle of a cache entry, or in the data part of some
 structure.



   I imagine there's no way we can know when corruption
  happen in steady-state and the next query leads to some other browser
 memory
  (or another database) getting corrupted?
 
  On Tue, Oct 6, 2009 at 3:58 PM, Huan Ren hu...@google.com wrote:
 
  It will be helpful to get our own measurement on database failures.
  Carlos just added something like that.
 
  Huan
 
  On Tue, Oct 6, 2009 at 3:49 PM, John Abd-El-Malek j...@chromium.org
  wrote:
   Saw this on
   slashdot: http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
   The conclusion is an average of 25,000–75,000 FIT (failures in time
 per
   billion hours of operation) per Mbit.
   On my machine the browser process is usually  100MB, so that
 averages
   out
   to 176 to 493 error per year, with those numbers having big variance
   depending on the machine.  Since most users don't have ECC, which
 means
   this
   will lead to corruption.  Sqlite is a heavy user of memory, so even
 if
   it's
   1/4 of the 100MB, that means we'll see an average of 40-120 errors
   naturally
   because of faulty DIMMs.
   Given that sqlite corruption means (repeated) crashing of the
 browser
   process, it seems this data heavily suggests we should separate
 sqlite
   code
   into a separate process.  The IPC overhead is negligible compared to
   disk
   access.  My hunch is that the complexity is also not that high,
 since
   the
   code that deals with it is already asynchronous since we don't use
   sqlite on
   the UI/IO threads.
   What do others think?

  
 
 



 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
On Tue, Oct 6, 2009 at 5:10 PM, Scott Hess sh...@chromium.org wrote:

 Our use of exclusive locking and page-cache preloading may open us up
 more to this kind of shenanigans.  Basically SQLite will trust those
 pages which we faulted into memory days ago.  We could mitigate
 against that somewhat, but this problem reaches into areas we cannot
 materially impact, such as filesystem caches.  And don't even begin to
 imagine that there are not similar issues with commodity disk drives
 and controllers.

 That said, I don't think this is an incremental addition of any kind.
 I've pointed it out before, there are things in the woods which
 corrupt databases.  We could MAYBE reduce occurrences to a suitable
 minimum using check-summing or something of the sort, but in the end
 we still have to detect corruption and decide what course to take from
 there.


I do think these are two separate problems.  Personally, I don't care as
much if my history or any other database is corrupted and I have to start from
scratch.  But random crashes that I can't isolate are something else.



 -scott


 On Tue, Oct 6, 2009 at 4:59 PM, John Abd-El-Malek j...@chromium.org
 wrote:
 
 
  On Tue, Oct 6, 2009 at 4:30 PM, Carlos Pizano c...@google.com wrote:
 
  On Tue, Oct 6, 2009 at 4:14 PM, John Abd-El-Malek j...@chromium.org
  wrote:
   I'm not sure how Carlos is doing it?  Will we know if something is
   corrupt
   just on load/save?
 
  Many sqlite calls can return sqlite_corrupt. For example a query or an
  insert
  We just check for error codes 1 to 26 with 5 or 6 of them being
  serious error such as sqlite_corrupt
 
  I am sure that random bit flip in memory and on disk is the cause of
  some crashes, this is probably the 'limit' factor of how low the crash
  rate of a perfect program deployed in millions of computers can go.
 
  The point I was trying to make is that the 'limit' factor as you put it
 is
  proportional to memory usage.  Given our large memory consumption in the
  browser process, the numbers from the paper imply dozens of corruptions
 just
  in sqlite memory per user.  Even if only a small fraction of these are
  harmful, spread over millions of users that's a lot of corruption.
 
  But I am unsure how to calculate, for example a random bit flip on the
  backingstores, which add to at least 10M on most machines does not
  hurt, or in the middle of a cache entry, or in the data part of some
  structure.
 
 
I imagine there's no way we can know when corruption
   happen in steady-state and the next query leads to some other browser
   memory
   (or another database) getting corrupted?
  
   On Tue, Oct 6, 2009 at 3:58 PM, Huan Ren hu...@google.com wrote:
  
   It will be helpful to get our own measurement on database failures.
   Carlos just added something like that.
  
   Huan
  
   On Tue, Oct 6, 2009 at 3:49 PM, John Abd-El-Malek j...@chromium.org
   wrote:
    Saw this on slashdot:
    http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf
    The conclusion is an average of 25,000–75,000 FIT (failures in time per
    billion hours of operation) per Mbit.
    On my machine the browser process is usually  100MB, so that averages out
    to 176 to 493 errors per year, with those numbers having big variance
    depending on the machine.  Since most users don't have ECC, this will
    lead to corruption.  Sqlite is a heavy user of memory, so even if it's
    1/4 of the 100MB, that means we'll see an average of 40-120 errors
    naturally because of faulty DIMMs.
    Given that sqlite corruption means (repeated) crashing of the browser
    process, it seems this data heavily suggests we should separate sqlite
    code into a separate process.  The IPC overhead is negligible compared
    to disk access.  My hunch is that the complexity is also not that high,
    since the code that deals with it is already asynchronous since we don't
    use sqlite on the UI/IO threads.
    What do others think?
 
   
  
  
 
 
   
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
On Tue, Oct 6, 2009 at 5:16 PM, Scott Hess sh...@chromium.org wrote:

 On Tue, Oct 6, 2009 at 5:09 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Tue, Oct 6, 2009 at 4:59 PM, John Abd-El-Malek j...@chromium.org
 wrote:
  On Tue, Oct 6, 2009 at 4:30 PM, Carlos Pizano c...@google.com wrote:
  On Tue, Oct 6, 2009 at 4:14 PM, John Abd-El-Malek j...@chromium.org
 wrote:
   I'm not sure how Carlos is doing it?  Will we know if something is
   corrupt
   just on load/save?
 
  Many sqlite calls can return sqlite_corrupt, for example on a query or an
  insert.
  We just check for error codes 1 to 26, with 5 or 6 of them being
  serious errors such as sqlite_corrupt.
 
  I am sure that random bit flips in memory and on disk are the cause of
  some crashes; this is probably the 'limit' factor of how low the crash
  rate of a perfect program deployed on millions of computers can go.
 
  The point I was trying to make is that the 'limit' factor as you put it
 is
  proportional to memory usage.  Given our large memory consumption in the
  browser process, the numbers from the paper imply dozens of corruptions
 just
  in sqlite memory per user.  Even if only a small fraction of these are
  harmful, spread over millions of users that's a lot of corruption.
 
  For what it's worth:  This makes sense to me.  It seems like pulling
 SQLite
  into its own process would be helpful for the reasons you laid out.  I
  wonder if the only reason no one else has chimed in on this thread is
 that
  no one wants to have to implement it.  :-)

 I don't understand how this paper makes it very useful to pull SQLite
 into a separate process.  I can see how it would make it useful to
 minimize how much in-memory data SQLite keeps, regardless of where
 SQLite lives.  I can also see how this effect would increase our
 incidence of memory stompers, but in a lot of cases it just won't
 matter.  If it corrupts a piece of data which we pass to SQLite,
 passing it direct or via IPC won't change that.  Most memory stompers
 won't manifest by hitting SQLite anyhow, unless there's reason to
 believe specific bits will be munged.


You would only pass POD over IPC, the same as what is passed using Task
objects across threads.  So the only corruption you'll get over IPC would be
data corruption, but you ensure that crashes due to corruption are isolated
to the sqlite process.



 -scott


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
On Tue, Oct 6, 2009 at 5:30 PM, Peter Kasting pkast...@google.com wrote:

 On Tue, Oct 6, 2009 at 3:49 PM, John Abd-El-Malek j...@chromium.orgwrote:

 Given that sqlite corruption means (repeated) crashing of the browser
 process, it seems this data heavily suggests we should separate sqlite code
 into a separate process.


 What does this mean for cases like the in-memory URL database, which is
 sqlite code running on a completely in-memory dataset?  This is currently
  used synchronously inside the UI thread to do things like inline
 autocomplete.  We _cannot_ make it async or move it to another thread.

 So are you proposing to move _some_ sqlite accesses elsewhere?


I'm not sure; I think this is one of the questions that needs to be explored
more by whoever investigates this.

If corruption in the in-memory URL database doesn't survive a crash (i.e.
because it's recreated each time), then perhaps it's ok to keep it in the
browser process.  However, if it's not, I'm not sure I understand why moving
it to another process is unworkable?  Chrome code is littered with many
examples of how we turned synchronous operations into asynchronous ones.
 But if worst comes to worst and we can't, you can always do sync IPC calls
to another process.  The overhead is in the microseconds so it won't
be noticeable.



 (Personally, I don't think this paper is evidence that we should move
 sqlite into a separate process, for similar reasons as to Scott.)

 PK


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
On Tue, Oct 6, 2009 at 5:56 PM, Peter Kasting pkast...@google.com wrote:

 On Tue, Oct 6, 2009 at 5:53 PM, John Abd-El-Malek j...@chromium.orgwrote:

 If corruption in the in-memory URL database doesn't survive a crash (i.e.
 because it's recreated each time),


 It doesn't

 I'm not sure I understand why moving it to another process is unworkable?
  Chrome code is littered with many examples of how we turned synchronous
 operations into asynchronous ones.


 The user-visible behavior cannot be async.  Inline autocomplete must never
 race the user's action or it goes from awesome to hellishly annoying and
 awful.

 But if worst comes to worst and we can't, you can always do sync IPC calls
 to another process.  The overhead is in the microseconds so it won't
 be noticeable.


 I'd far prefer to keep it in the UI thread even if it were subject to these
 corruption issues.  Doing sync IPC from the UI thread to a sqlite thread
 sounds like a recipe for Jank.


Note, I don't know what the right answer is for this specific case (I think
more investigation is necessary), but I do want to point out that I don't
think moving it to another process will introduce jank.  If it's currently
not done on the same thread as other databases, then if it were moved to
another process, it would have to be done on a separate thread as well.  Our
overhead for sync IPCs is in the microseconds.

Of course, actually implementing this and comparing histograms of in-process
and out-of-process delays is the fool-proof way of proving this :)
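
A sketch of what such a measurement could look like; the histogram name and
call site below are invented, but UMA_HISTOGRAM_TIMES and base::TimeTicks are
the existing primitives:

  #include "base/histogram.h"  // UMA_HISTOGRAM_TIMES
  #include "base/time.h"       // base::TimeTicks

  void RunAutocompleteQuery() {
    base::TimeTicks start = base::TimeTicks::Now();
    // ... do the lookup, either directly in-process or via a sync IPC ...
    UMA_HISTOGRAM_TIMES("Autocomplete.QueryTime",
                        base::TimeTicks::Now() - start);
  }

Comparing the resulting distributions for the in-process and out-of-process
configurations would settle the jank question with data.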


 PK


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Paper about DRAM error rates

2009-10-06 Thread John Abd-El-Malek
This isn't about decreasing memory usage.  It's about getting rid of nasty
problems like the browser process crashing on every startup because of a
corrupt database and decreasing browser process crashes in general.
I think it's fair that not everyone shares the same opinion, but I do hope
that an experiment is run and we can compare numbers to see the effects on
crash rates.  If Chase or others do it, that's great; if not, I'll try to do
it after the jank task force.

On Tue, Oct 6, 2009 at 6:20 PM, cpu c...@chromium.org wrote:


  I am with the others that don't see moving sqlite to another process
  as a natural outcome of this thread.

  If using more memory is the concern, another process uses more memory.
  sqlite is not crashing *that* much; yes, it was the top crasher for a
  while, but it was a data race.





 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Who owns crash...@chromium.org and why are there so many updates from it?

2009-10-06 Thread John Abd-El-Malek
I got 21 emails in the last day for
http://code.google.com/p/chromium/issues/detail?id=20915

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Who owns crash...@chromium.org and why are there so many updates from it?

2009-10-06 Thread John Abd-El-Malek
On Tue, Oct 6, 2009 at 8:57 PM, Anthony LaForge lafo...@chromium.orgwrote:

 The short of it:

- People manually associated bugs in http:crash
- My tool creates its own map of signatures and for whatever reason
that bug is on all of them
- It goes through each aggregate signature and updates the status of
the bug (which is a flash crasher)
- It's made much worse because we are dealing w/ crashes that don't
have symbols and cannot be aggregated...

 What went wrong:

- The limiting function (seeing if crash-VERSION was applied) does not
happen for priority updates.  This was actually intentional since we start
looking at crash data early on.  However, this should no longer be needed
since we wait for enough data that the priority should no longer be shifting.


 What can be done about it?

- I will put a limiter on setting the priority


Thanks.



 Kind Regards,

 Anthony Laforge
 Technical Program Manager
 Mountain View, CA


 On Tue, Oct 6, 2009 at 8:46 PM, Patrick Johnson patr...@chromium.orgwrote:

 John's question is about why it's generating so many issue updates.

 Patrick


 On Tue, Oct 6, 2009 at 8:41 PM, Anthony LaForge lafo...@chromium.org
 wrote:
  It's the role account that manages crashes and crash related bugs.
  Kind Regards,
 
  Anthony Laforge
  Technical Program Manager
  Mountain View, CA
 
 
  On Tue, Oct 6, 2009 at 7:26 PM, Patrick Johnson patr...@chromium.org
  wrote:
 
  [+laforge]
 
  On Tue, Oct 6, 2009 at 6:51 PM, John Abd-El-Malek j...@chromium.org
  wrote:
   I got 21 emails in the last day
   for http://code.google.com/p/chromium/issues/detail?id=20915
 
  
 
 




--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Jank: performance and animation frame rates

2009-10-02 Thread John Abd-El-Malek
On Fri, Oct 2, 2009 at 12:54 PM, Evan Stade est...@chromium.org wrote:


  We try to fire the timer rapidly, but if we get bogged down, it just
 won't fire until later; when it actually does fire, we update our state
  based on how much time has really passed instead of how many times the
 timer has triggered.

 In this case, something is not working as expected (at least on
 Linux), because when I test on the download shelf slide animation, the
 number of AnimationProgressed calls is exactly what one would
 calculate based on the frame rate and duration, even if I put a
 Sleep(1000) in the middle of the callback.

 Reading the Animation code, it seems the number of iterations is hard
 coded at the start: iteration_count_ = duration_ / timer_interval_; So
 we don't update our state on how much time has actually passed, we
 update it based on the number of times the timer has fired. It would
 be easy enough to fix this to do as you say while only touching the
 Animation class, although on a very bogged down machine the effect
 would be that instead of having some slow-looking animation we have a
 jerky (but fast!) animation. I personally think that trade-off is
 worth it, maybe others don't?


I also agree this is the better tradeoff.  Better to have a jerky animation
than to slow the machine down even more.
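
A rough sketch of the elapsed-time approach; the class and field names below
are invented for illustration and are not the existing Animation class:

  #include <algorithm>
  #include "base/time.h"  // base::TimeTicks / base::TimeDelta

  class TimeBasedAnimation {
   public:
    explicit TimeBasedAnimation(base::TimeDelta duration)
        : start_(base::TimeTicks::Now()), duration_(duration) {}

    // Called from the timer callback, however late it fires.  Progress is
    // derived from wall-clock time, so a bogged-down machine gets a jerky
    // animation that still finishes on schedule.
    double CurrentProgress() const {
      base::TimeDelta elapsed = base::TimeTicks::Now() - start_;
      double progress = elapsed.InSecondsF() / duration_.InSecondsF();
      return std::min(1.0, std::max(0.0, progress));
    }

   private:
    base::TimeTicks start_;
    base::TimeDelta duration_;
  };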


  I'm not sure how to deal with this other than to create a separate queue
 of
  events triggered by animations and let later updates overwrite earlier
  ones that haven't yet been processed; that would be really hard to plumb,
  though.

 Hmm, why would we need a separate queue for that? Seems we could
 search for other animation events when dequeueing an animation event
 on the normal message loop. But the above fix seems cleaner/safer
 anyway.

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Has anyone built successfully on Win7 64bit? I am getting bash.exe errors

2009-10-01 Thread John Abd-El-Malek
I built using the instructions on Windows 7 64 bit without installing
another cygwin.

On Thu, Oct 1, 2009 at 4:03 PM, vha14 vuh...@gmail.com wrote:


 So it looks like I need to install Cygwin separately before I can
 build Chrome? If so this should be added to the Windows build
 instruction page.

 On Oct 1, 2:59 pm, Mohamed Mansour m...@chromium.org wrote:
  I run Cygwin Bash Shell not from Command Prompt:
 
  moha...@mohamed-pc ~$ bash
 
   -Mohamed
 
 
 
  On Thu, Oct 1, 2009 at 5:32 PM, vha14 vuh...@gmail.com wrote:
 
   Hi Mohamed, can you run bash.exe from cmd? I get the following error:
 
   E:\chromium\home\chrome-svn\tarball\chromium\src\third_party\cygwin
   \binbash
9 [main] bash 8384 _cygtls::handle_exceptions: Exception:
   STATUS_ACCESS_VIOLATION
 9964 [main] bash 8384 open_stackdumpfile: Dumping stack trace to
   bash.exe.stackdump
211065 [main] bash 8384 _cygtls::handle_exceptions: Exception:
   STATUS_ACCESS_VIOLATION
229623 [main] bash 8384 _cygtls::handle_exceptions: Error while
   dumping state (probably corrupted stack)
 
   On Oct 1, 2:17 pm, Mohamed Mansour m...@chromium.org wrote:
Windows 7 64bit works fine here (using the default settings from
dev.chromium.org)  Make sure you have access to the folder your
 writing
   to.
Some folders require admin mode.
 -Mohamed
 
On Thu, Oct 1, 2009 at 4:10 PM, vha14 vuh...@gmail.com wrote:
 
 Detailed error message:
 
 1-- Build started: Project: js2c, Configuration: Debug Win32
 --
 1js2c
 2-- Build started: Project: cygwin, Configuration: Debug Win32
 --
 2setup_mount
 1 30 [main] bash 8980 _cygtls::handle_exceptions: Exception:
 STATUS_ACCESS_VIOLATION
 1   3645 [main] bash 8980 open_stackdumpfile: Dumping stack trace
 to
 bash.exe.stackdump
 2The operation completed successfully.
 2The operation completed successfully.
 1Project : error PRJ0002 : Error result 35584 returned from 'C:
 \Windows\SysWow64\cmd.exe'.
 
 Content of bash.exe.stackdump:
 
 E:\chromium\home\chrome-svn\tarball\chromium\src\chromemore
 bash.exe.stackdump
 Exception: STATUS_ACCESS_VIOLATION at eip=0043AE30
 eax= ebx= ecx=61106EC8 edx= esi=611021A0
 edi=0045A3E0
 ebp=0028CBC8 esp=0028C58C
 program=e:\chromium\home\chrome-svn\tarball
 \chromium\s
 rc\third_party\cygwin\bin\bash.exe, pid 8296, thread main
 cs=0023 ds=002B es=002B fs=0053 gs=002B ss=002B
 Stack trace:
 Frame Function  Args
 0028CBC8  0043AE30  (0003, 02361C00, 02360090, 772F)
 0028CD68  610060D8  (, 0028CDA0, 61005450, 0028CDA0)
 61005450  61004416  (009C, A02404C7, E8611021, FF48)
  431390 [main] bash 8296 _cygtls::handle_exceptions: Exception:
 STATUS_ACCESS_VI
 OLATION
  509897 [main] bash 8296 _cygtls::handle_exceptions: Error while
 dumping state (
 probably corrupted stack)
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Has anyone built successfully on Win7 64bit? I am getting bash.exe errors

2009-10-01 Thread John Abd-El-Malek
Chrome's cygwin installation is hermetic.  It works for a number of Google
devs on Windows 7/64 bit so I don't think it's a simple case of the cygwin
installation being broken.  What do you mean the binaries crash?  Did they
fail to resolve any dependent dlls?  Are you sure your gclient sync finished
successfully?  If you can move your working binaries off to another
directory, sync again, and try to repro this and get this data, it would be
helpful in determining how we can fix this.
Thanks

On Thu, Oct 1, 2009 at 4:46 PM, vuh...@gmail.com wrote:

 Ok, I fixed the problem by installing Cygwin (1.5.25-15), copying sh.exe
 and bash.exe over to src\third_party\cygwin\bin.

 So it may be a good idea to check in this Cygwin version.

 On Oct 1, 2009 4:43pm, John Abd-El-Malek j...@chromium.org wrote:
  I built using the instructions on Windows 7 64 bit without installing
 another cygwin.
 
  On Thu, Oct 1, 2009 at 4:03 PM, vha14 vuh...@gmail.com wrote:
 
 
 
  So it looks like I need to install Cygwin separately before I can
 
  build Chrome? If so this should be added to the Windows build
 
  instruction page.
 
 
 
  On Oct 1, 2:59 pm, Mohamed Mansour m...@chromium.org wrote:
 
   I run Cygwin Bash Shell not from Command Prompt:
 
  
 
   moha...@mohamed-pc ~$ bash
 
  
 
-Mohamed
 
  
 
  
 
  
 
 
 
   On Thu, Oct 1, 2009 at 5:32 PM, vha14 vuh...@gmail.com wrote:
 
  
 
Hi Mohamed, can you run bash.exe from cmd? I get the following error:
 
  
 
E:\chromium\home\chrome-svn\tarball\chromium\src\third_party\cygwin
 
\binbash
 
 9 [main] bash 8384 _cygtls::handle_exceptions: Exception:
 
STATUS_ACCESS_VIOLATION
 
  9964 [main] bash 8384 open_stackdumpfile: Dumping stack trace to
 
bash.exe.stackdump
 
 211065 [main] bash 8384 _cygtls::handle_exceptions: Exception:
 
STATUS_ACCESS_VIOLATION
 
 229623 [main] bash 8384 _cygtls::handle_exceptions: Error while
 
dumping state (probably corrupted stack)
 
  
 
On Oct 1, 2:17 pm, Mohamed Mansour m...@chromium.org wrote:
 
 Windows 7 64bit works fine here (using the default settings from
 
 dev.chromium.org)  Make sure you have access to the folder your
 writing
 
to.
 
 Some folders require admin mode.
 
  -Mohamed
 
  
 
 On Thu, Oct 1, 2009 at 4:10 PM, vha14 vuh...@gmail.com wrote:
 
  
 
  Detailed error message:
 
  
 
  1-- Build started: Project: js2c, Configuration: Debug Win32
 
  --
 
  1js2c
 
  2-- Build started: Project: cygwin, Configuration: Debug
 Win32
 
  --
 
  2setup_mount
 
  1 30 [main] bash 8980 _cygtls::handle_exceptions: Exception:
 
  STATUS_ACCESS_VIOLATION
 
  1   3645 [main] bash 8980 open_stackdumpfile: Dumping stack
 trace to
 
  bash.exe.stackdump
 
  2The operation completed successfully.
 
  2The operation completed successfully.
 
  1Project : error PRJ0002 : Error result 35584 returned from 'C:
 
  \Windows\SysWow64\cmd.exe'.
 
  
 
  Content of bash.exe.stackdump:
 
  
 
  E:\chromium\home\chrome-svn\tarball\chromium\src\chromemore
 
  bash.exe.stackdump
 
  Exception: STATUS_ACCESS_VIOLATION at eip=0043AE30
 
  eax= ebx= ecx=61106EC8 edx= esi=611021A0
 
  edi=0045A3E0
 
  ebp=0028CBC8 esp=0028C58C
 program=e:\chromium\home\chrome-svn\tarball
 
  \chromium\s
 
  rc\third_party\cygwin\bin\bash.exe, pid 8296, thread main
 
  cs=0023 ds=002B es=002B fs=0053 gs=002B ss=002B
 
  Stack trace:
 
  Frame Function  Args
 
  0028CBC8  0043AE30  (0003, 02361C00, 02360090, 772F)
 
  0028CD68  610060D8  (, 0028CDA0, 61005450, 0028CDA0)
 
  61005450  61004416  (009C, A02404C7, E8611021, FF48)
 
   431390 [main] bash 8296 _cygtls::handle_exceptions: Exception:
 
  STATUS_ACCESS_VI
 
  OLATION
 
   509897 [main] bash 8296 _cygtls::handle_exceptions: Error while
 
  dumping state (
 
  probably corrupted stack)
 
   
 
 
 
 
 
 
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: WebKit Hygiene: Please be kind. Test your patches. Warn about two-sided patches.

2009-09-23 Thread John Abd-El-Malek
Marc-Antoine knows the try scripts best, so to avoid volunteering him
directly, I'll say that I'm sure he can answer any questions from whoever
does this.  I don't think it'll be much work.

On Wed, Sep 23, 2009 at 10:04 AM, Dimitri Glazkov dglaz...@google.comwrote:

 I think this is a great idea! Do we have a Python/gcl/rietveld expert
 who can tackle this?

 :DG

 On Wed, Sep 23, 2009 at 9:49 AM, John Abd-El-Malek j...@chromium.org
 wrote:
  I didn't know this was possible.  It seems it will get a lot more usage
 if
  it just works, i.e. the try script grabs these settings automatically
 from
  a codereview.settings file.  If we start by putting this file in
  third_party\WebKit, then people who start with their patch there (also to
  upload) can use this transparently.  What do you think?
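
A sketch of what such a file might contain for third_party\WebKit, reusing
the repository and root from the try command quoted below; the exact key
names that gcl reads are an assumption here, not verified:

  # codereview.settings (sketch only; key names assumed)
  CODE_REVIEW_SERVER: codereview.chromium.org
  TRYSERVER_SVN_URL: svn://svn.chromium.org/chrome-try/try
  TRYSERVER_ROOT: src/third_party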
  On Wed, Sep 23, 2009 at 7:42 AM, Marc-Antoine Ruel mar...@chromium.org
  wrote:
 
  On Tue, Sep 22, 2009 at 9:35 PM, Adam Barth aba...@chromium.org
 wrote:
   On Tue, Sep 22, 2009 at 6:25 PM, John Abd-El-Malek j...@chromium.org
   wrote:
   Is this even possible?  i.e. I had uploaded a WebKit patch on
   codereview but
   none of the patchsets got run on the try server
   http://codereview.chromium.org/178030/show
  
   It is possible:
  
   aba...@zenque:~/svn/kr/src/third_party/WebKit$ gcl try scriptcontext
   --use_svn --svn_repo=svn://svn.chromium.org/chrome-try/try --bot
   layout_win,layout_mac,layout_linux --root src/third_party
 
  I know the format is not very user friendly but it is definitely
  possible. You probably can skip --use_svn and --svn_repo or at least
  use the --http stuff instead, see go/chrometryserver.
 
  You could want to use both the layout tests and the normal tests so
  you can send the patch to all the slaves at the same time, e.g.
  --bot win,linux,mac,layout_win,layout_mac,layout_linux
 
  If the layout tests try slaves get overused, I'll add more slaves.
 
  M-A
 
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: WebKit Hygiene: Please be kind. Test your patches. Warn about two-sided patches.

2009-09-23 Thread John Abd-El-Malek
So I just tried a few experiments.  For some reason, none of the changes in
third_party\WebKit do an automatic try when uploading.
When I try to do gcl try with --root src\third_party, the change doesn't
apply on the server, since it tries to patch a diff that has
src\third_party\WebKit... in the filenames inside the src\third_party directory.

I tried without --root, and that doesn't work because then the filenames,
which are under the WebCore directory (see
http://codereview.chromium.org/download/issue223008_1.diff), don't have
src\third_party in them.

Something seems to be fishy in how the filenames are generated when run from
the WebKit directory.  Marc-Antoine knows this best, so I'm handing it off.

On Wed, Sep 23, 2009 at 10:13 AM, Dimitri Glazkov dglaz...@google.comwrote:

 BTW, I wasn't trying to be snarky in my request.  More like begging :)

 :DG

 On Wed, Sep 23, 2009 at 10:08 AM, John Abd-El-Malek j...@chromium.org
 wrote:
  Marc-Antoine knows the try scripts best, so to avoid volunteering him
  directly, I'll say that I'm sure he can answer any questions from whoever
  does this.  I don't think it'll be much work.
 
  On Wed, Sep 23, 2009 at 10:04 AM, Dimitri Glazkov dglaz...@google.com
  wrote:
 
  I think this is a great idea! Do we have a Python/gcl/rietveld expert
  who can tackle this?
 
  :DG
 
  On Wed, Sep 23, 2009 at 9:49 AM, John Abd-El-Malek j...@chromium.org
  wrote:
   I didn't know this was possible.  It seems it will get a lot more
 usage
   if
   it just works, i.e. the try script grabs these settings
 automatically
   from
   a codereview.settings file.  If we start by putting this file in
   third_party\WebKit, then people who start with their patch there (also
   to
   upload) can use this transparently.  What do you think?
   On Wed, Sep 23, 2009 at 7:42 AM, Marc-Antoine Ruel 
 mar...@chromium.org
   wrote:
  
   On Tue, Sep 22, 2009 at 9:35 PM, Adam Barth aba...@chromium.org
   wrote:
On Tue, Sep 22, 2009 at 6:25 PM, John Abd-El-Malek 
 j...@chromium.org
wrote:
Is this even possible?  i.e. I had uploaded a WebKit patch on
codereview but
none of the patchsets got run on the try server
http://codereview.chromium.org/178030/show
   
It is possible:
   
aba...@zenque:~/svn/kr/src/third_party/WebKit$ gcl try
 scriptcontext
--use_svn --svn_repo=svn://svn.chromium.org/chrome-try/try --bot
layout_win,layout_mac,layout_linux --root src/third_party
  
   I know the format is not very user friendly but it is definitely
   possible. You probably can skip --use_svn and --svn_repo or at least
   use the --http stuff instead, see go/chrometryserver.
  
   You could want to use both the layout tests and the normal tests so
   you can send the patch to all the slaves at the same time, e.g.
   --bot win,linux,mac,layout_win,layout_mac,layout_linux
  
   If the layout tests try slaves get overused, I'll add more slaves.
  
   M-A
  
  
 
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Problem with gyp every time v8 updates their branch

2009-09-23 Thread John Abd-El-Malek
Every time the v8 team updates which branch is used in Chrome,
gclient sync fails on Windows.  The error is below.

 running 'svn update a:\chrome2\src\tools\tryserver' in 'a:\chrome2'
At revision 3275.
Error: Can't switch the checkout to http://v8.googlecode.com/svn/tr...@2966;
UUID don't match and there is local changes in a:\chrome2\src\v8. Delete the
directory and try again.


a:\chrome2\src\v8>svn st
?   tools\gyp\v8.vcproj.GOOGLE.jabdelmalek.user
?   tools\gyp\js2c.vcproj.GOOGLE.jabdelmalek.user
?   tools\gyp\v8_nosnapshot.vcproj.GOOGLE.jabdelmalek.user
?   tools\gyp\v8_snapshot.vcproj.GOOGLE.jabdelmalek.user
?   tools\gyp\v8_shell.vcproj.GOOGLE.jabdelmalek.user
?   tools\gyp\v8_base.vcproj.GOOGLE.jabdelmalek.user
?   tools\gyp\mksnapshot.vcproj.GOOGLE.jabdelmalek.user



I have to delete the v8 directory and run gclient sync again (or remove the
generated files).  This is annoying, since it breaks syncing for all the
Windows developers (I don't think this happens on Mac and Linux).

Can the generated files be put elsewhere?  Or is there a way that gyp can
tell svn to ignore .user files?  I realize I could put it in my svn config
file, but this doesn't solve it for all the other developers.
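
For reference, the per-user workaround mentioned above is a one-line change
to ~/.subversion/config (or %APPDATA%\Subversion\config on Windows); append
*.user to the existing global-ignores entry rather than replacing it.  It
only hides the files locally and doesn't fix the problem for everyone:

  [miscellany]
  global-ignores = *.user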

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] When disabling a test, please assign to whoever wrote it

2009-09-22 Thread John Abd-El-Malek
It only takes a few moments to figure out this information, and ensures that
the bug lands on the right person's desk.
http://src.chromium.org/viewvc/chrome/trunk/src/ can show who wrote the
initial test.
For commits before we moved to the open source repository, look at
http://chrome-corpsvn.mtv/viewvc/trunk/?root=chrome  (internal only)

Thanks

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: debugging the browser started by UI tests

2009-09-22 Thread John Abd-El-Malek
Agreed, we should have a --browser-startup-dialog flag that's added to
BrowserMain.  ui_tests can then pass it along, together with
--renderer-startup-dialog, --plugin-startup-dialog, --in-process-plugins, and
--single-process, if they're specified.
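
A sketch of the intent: the switch name comes from the suggestion above,
while the helper below is a portable stand-in rather than the real
CommandLine/debugger plumbing.

  #include <chrono>
  #include <cstring>
  #include <thread>

  static bool HasSwitch(int argc, char** argv, const char* name) {
    for (int i = 1; i < argc; ++i) {
      if (std::strcmp(argv[i], name) == 0)
        return true;
    }
    return false;
  }

  // Called near the top of BrowserMain (sketch): give a developer a window
  // to attach a debugger before startup continues.
  void MaybeWaitForDebugger(int argc, char** argv) {
    if (HasSwitch(argc, argv, "--browser-startup-dialog"))
      std::this_thread::sleep_for(std::chrono::seconds(30));
  }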

On Tue, Sep 22, 2009 at 4:03 PM, Scott Violet s...@chromium.org wrote:


 If you uncomment WAIT_FOR_DEBUGGER_ON_OPEN in ui_test you'll be
 prompted.  We really need to convert this into a command line option.

  -Scott

 On Tue, Sep 22, 2009 at 3:00 PM, Paweł Hajdan Jr.
 phajdan...@chromium.org wrote:
  What's the best way to attach the debugger to a browser started by a UI
  test? How about doing that only in case of a crash?
  I'm looking for solution both for Windows and Linux, so if you have good
  techniques, it'd be really nice. I can even document them on the wiki,
 but
  currently I'm using LOG statements when debugging the browser (I know
 it's
  not the optimal and kind of sucks, but I couldn't find a good way to
 attach
  the debugger).
  
 

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: WebKit Hygiene: Please be kind. Test your patches. Warn about two-sided patches.

2009-09-22 Thread John Abd-El-Malek
On Tue, Sep 22, 2009 at 6:06 PM, Dimitri Glazkov dglaz...@google.comwrote:


 Today wasn't a happy day for p...@. He did a seemingly innocuous roll
 that broke the world: selenium, ui tests, layout tests. I am sure it
 was stressful and probably added unnecessary gray to his hair.

 Stuff like this happens to WebKit gardeners. We're used to breakages
 upstream. That's the cost of being unforked, right?

 The problem however, is that since we unforked, most of these
 breakages and regressions are caused by fellow teammates. There are
 two major issues:

 1) writers of patches don't mention that the patch is two-sided and
 will break Chromium if landed prematurely. I don't have to go far for
 an example. Commit queue bot landed
 http://trac.webkit.org/changeset/48659 a few minutes ago and broke the
 canary. This means that the canary will be red all night and any
 subsequent regressions will either not be noticed or create more
 complications.

 2) writers of patches don't test them properly. In Paul's case, it was
 http://trac.webkit.org/changeset/48639, again by a teammate, and it
 looks like the patch wasn't very thoroughly tested -- it showed a few
 regressions in the canary, but because it had to do with V8 garbage
 collection, the failures were intermittent and seemingly random.
 However, landing it on trunk looked like a shrapnel blast.

 This all means that we have to be a bit more diligent. We shouldn't be
 paying these unnecessary costs. So, from now on, I propose a fairly
 simple set of new rules:

 1) if you write a Chromium patch for WebKit, you must provide URLs of
 successful trybot runs with your submission. Chromium WebKit reviewers
 will not r+ your patch otherwise. If you can't provide the trybot URLs
 for some reason, please explain in detail why this patch could still
 land.


Is this even possible?  i.e. I had uploaded a WebKit patch on codereview but
none of the patchsets got run on the try server
http://codereview.chromium.org/178030/show


 2) if the two-sided patch you authored broke the canary and this
 happened with no coordination with the WebKit gardener, you assume
 WebKit gardening responsibility for the next 24 hours.

 Hopefully, these amendments to our existing ways will bring a bit more
 peace and stability to the Chromium land. What do you think?

 :DG

 :DG

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Shipping features behind a run-time flag can sometimes still be dangerous

2009-09-21 Thread John Abd-El-Malek
On Mon, Sep 21, 2009 at 2:31 PM, Jeremy Orlow jor...@chromium.org wrote:

 I think we need to re-consider our practice of shipping beta/stable
 browsers with experimental features hidden behind flags--at least when they
 have any side-effects in JavaScript.  An example of where this has bitten us
 is http://code.google.com/p/chromium/issues/detail?id=22181

 Although part of the problem is the way they coded things (since both
 SessionStorage and LocalStorage use the Storage interface,
 its existence doesn't imply SessionStorage is necessarily available), this
 bug has pointed out a couple problems.  1) constructors are visible to
 javascript even when the feature is totally disabled.   2) When an object
 (like the Storage interface) wraps a NULL it shows up as null in JavaScript.
  Since returning NULL/0 is the standard thing to do when the feature is
 disabled, this means that the functions return null when disabled at run
 time and undefined when disabled at compile time.  3) Even if we fixed the
 undefined problem, |'localStorage' in window| would still return true.

 We've been discussing these issues in a WebKit-dev thread (
 https://lists.webkit.org/pipermail/webkit-dev/2009-September/thread.html#9860)
 and although (2) is probably something we can solve without too much effort
 (Drew is going to look into it), (1) and (3) probably aren't worth changing
 if the runtime flag is just temporary.

 *As such, I feel fairly strongly that we should start disabling features
 behind a run-time flag with compile-time flags in future beta/stable builds
 if they have any side-effects in JavaScript.*


Having features enabled behind a command line flag has been very helpful in
tracking down bugs and getting the wider population to test something.  It's
also useful when debugging on someone's machine.

It sounds like the problem is how we're keying off the run time flag (i.e.
not in the bindings).  When I looked at the database bindings, there was no
way to both use the automatically generated constructor and not have side
effects.  I think it's worth the tradeoff to make it custom in the meantime.


 I'm working with Anthony LaForge to fix this for
 LocalStorage/SessionStorage right now.  I'm not sure if it's worth
 doing preemptively for other features, but it might be.  I definitely think
 we should do it the next time we cut a beta build.  Especially if there's a
 chance the beta will be an ancestor of a stable channel release.

 J

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Shipping features behind a run-time flag can sometimes still be dangerous

2009-09-21 Thread John Abd-El-Malek
On Mon, Sep 21, 2009 at 3:43 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Mon, Sep 21, 2009 at 3:29 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Mon, Sep 21, 2009 at 2:31 PM, Jeremy Orlow jor...@chromium.orgwrote:

 I think we need to re-consider our practice of shipping beta/stable
 browsers with experimental features hidden behind flags--at least when they
 have any side-effects in JavaScript.  An example of where this has bitten us
 is http://code.google.com/p/chromium/issues/detail?id=22181

 Although part of the problem is the way they coded things (since both
 SessionStorage and LocalStorage use the Storage interface,
 its existence doesn't imply SessionStorage is necessarily available), this
 bug has pointed out a couple problems.  1) constructors are visible to
 javascript even when the feature is totally disabled.   2) When an object
 (like the Storage interface) wraps a NULL it shows up as null in JavaScript.
  Since returning NULL/0 is the standard thing to do when the feature is
 disabled, this means that the functions return null when disabled at run
 time and undefined when disabled at compile time.  3) Even if we fixed the
 undefined problem, |'localStorage' in window| would still return true.

 We've been discussing these issues in a WebKit-dev thread (
 https://lists.webkit.org/pipermail/webkit-dev/2009-September/thread.html#9860)
 and although (2) is probably something we can solve without too much effort
 (Drew is going to look into it), (1) and (3) probably aren't worth changing
 if the runtime flag is just temporary.

 *As such, I feel fairly strongly that we should start disabling features
  behind a run-time flag with compile-time flags in future beta/stable builds
 if they have any side-effects in JavaScript.*


 Having features enabled behind command line flag has been very helpful in
 tracking down bugs and getting the wider population to test something.  It's
 also useful when debugging on someone's machine.


 Which features/flags (with side effects in JavaScript) have you found
 helpful when debugging *stable/beta* versions?  Which of those were
 helpful in stable versions?


Things like dev tools, OOP, new safe browsing, different memory models and
allocators, no activex, and disabling plugins/javascript are what come to
mind.  Not just for debugging, but also for using them, and for having users
with problems try them out to see if it fixes them.  As Chrome is maturing
as a browser, feature development will move from infrastructure to web
features, which is why the list is biased to the former, but the same
benefits still exist.

There is also a cost of maintaining different dev/beta/stable and
Chrome/Chromium releases, in terms of testing, release processes, user
confusion etc.


 It sounds like the problem is how we're keying off the run time flag (i.e.
 not in the bindings).


 No, the problem is the bindings.  We'd need to add more logic to them to
 handle these cases.


oops, I meant in the bindings, hence my statement below :)




 When I looked at the database bindings, there was no way to both use the
 automatically generated constructor and not have side effects.


 Exactly.  And so the alternative is to add custom bindings or to change the
 code generator.  Both have a price in terms of development, performance,
 and maintenance.  As I said, I believe Drew is going to explore this when he
 enables shared workers behind a flag.


 I think it's worth the tradeoff to make it custom in the meantime.


 Please reply to
 https://lists.webkit.org/pipermail/webkit-dev/2009-September/009995.htmlthen.


What's wrong with this thread?  We're discussing what to do in Chrome.




 Whether or not we make these run time flags have no side-effects in the
 future, there's still the matter of what we should do for Chrome 3 since
 adding in custom getters and such seems like a very high risk for little
 reward.


Really?  Why is it very high risk?



 J


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Shipping features behind a run-time flag can sometimes still be dangerous

2009-09-21 Thread John Abd-El-Malek
On Mon, Sep 21, 2009 at 3:59 PM, Jeremy Orlow jor...@chromium.org wrote:

 On Mon, Sep 21, 2009 at 3:51 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Mon, Sep 21, 2009 at 3:43 PM, Jeremy Orlow jor...@chromium.orgwrote:

 On Mon, Sep 21, 2009 at 3:29 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Mon, Sep 21, 2009 at 2:31 PM, Jeremy Orlow jor...@chromium.orgwrote:

 I think we need to re-consider our practice of shipping beta/stable
 browsers with experimental features hidden behind flags--at least when 
 they
 have any side-effects in JavaScript.  An example of where this has bitten 
 us
 is http://code.google.com/p/chromium/issues/detail?id=22181

 Although part of the problem is the way they coded things (since both
 SessionStorage and LocalStorage use the Storage interface,
 its existence doesn't imply SessionStorage is necessarily available), this
 bug has pointed out a couple problems.  1) constructors are visible to
 javascript even when the feature is totally disabled.   2) When an object
 (like the Storage interface) wraps a NULL it shows up as null in 
 JavaScript.
  Since returning NULL/0 is the standard thing to do when the feature is
 disabled, this means that the functions return null when disabled at run
 time and undefined when disabled at compile time.  3) Even if we fixed the
 undefined problem, |'localStorage' in window| would still return true.

 We've been discussing these issues in a WebKit-dev thread (
 https://lists.webkit.org/pipermail/webkit-dev/2009-September/thread.html#9860)
 and although (2) is probably something we can solve without too much 
 effort
 (Drew is going to look into it), (1) and (3) probably aren't worth 
 changing
 if the runtime flag is just temporary.

 *As such, I feel fairly strongly that we should start disabling
  features behind a run-time flag with compile-time flags in future
 beta/stable builds if they have any side-effects in JavaScript.*


 Having features enabled behind command line flag has been very helpful
 in tracking down bugs and getting the wider population to test something.
  It's also useful when debugging on someone's machine.


 Which features/flags (with side effects in JavaScript) have you found
 helpful when debugging *stable/beta* versions?  Which of those were
 helpful in stable versions?


 Things like dev tools, OOP, new safe browsing, different memory models and
 allocators, no activex, disabling plugins/javascript are what comes to my
 mind.  Not just for debugging, but also for using them, and for having users
 with problems try them out to see if it fixes them.  As Chrome is maturing
 as a browser, feature development will move from infrastructure to web
 features, which is why the list is biased to the former, but the same
 benefits still exist.


 Sorry, I meant to say with side effects in JavaScript even when not
 enabled.  Do any on that list still apply?


I don't think anyone is arguing that side-effects in JS are ok.



 There is also cost of maintaining  different dev/beta/stable and
 Chrome/Chromium releases, in terms of testing, release processes, user
 confusion etc.


  The maintenance should only be a difference in compile time flags and
 which tests get run, but I do agree that there's a cost to this approach as
 well.


There are also manual tests.  The same binary can't be reused, so a new
build will have to be made which means every test that was run before has to
be run again (say there was corruption in the second build).  We can't just
promote one build from a channel to another, etc.  You can talk to Mark or
others to see how much time the release processes already take.




  It sounds like the problem is how we're keying off the run time flag
 (i.e. not in the bindings).


 No, the problem is the bindings.  We'd need to add more logic to them to
 handle these cases.


 oops, I meant in the bindings, hence my statement below :)




 When I looked at the database bindings, there was no way to both use the
 automatically generated constructor and not have side effects.


 Exactly.  And so the alternative is to add custom bindings or to change
 the code generator.  Both have a price in terms of development, performance,
 and maintenance.  As I said, I believe Drew is going to explore this when he
 enables shared workers behind a flag.


 I think it's worth the tradeoff to make it custom in the meantime.


 Please reply to
 https://lists.webkit.org/pipermail/webkit-dev/2009-September/009995.htmlthen.


 What's wrong with this thread?  We're discussing what to do in Chrome.


 What we do in Chrome affects WebKit in terms of performance
 and maintenance.  This is why the thread existed to begin with.  Any
 decision that we make that affects the rest of the ports should be discussed
 there unless it's obviously the right thing to do for them.  (Which this
 isn't.)


Chrome has very different requirements than other products that use WebKit.
 Our dev/beta/stable channel approach means we can move fast

[chromium-dev] Re: Enabling --disable-hang-monitor for new windows when Chrome is already running

2009-09-11 Thread John Abd-El-Malek
For reference, something similar is done for popups:
void NPN_PushPopupsEnabledState(NPP instance, NPBool enabled);
void NPN_PopPopupsEnabledState(NPP instance);

Perhaps we can do the same thing here:

void NPN_PushPluginHangDetectorState(NPP instance, NPBool enabled);
void NPN_PopPluginHangDetectorState(NPP instance);

On Fri, Sep 11, 2009 at 2:46 PM, John Tamplin j...@google.com wrote:

 On Fri, Sep 11, 2009 at 5:24 PM, Darin Fisher da...@chromium.org wrote:

 I think that is a reasonable feature request.  It would be nice however if
 there were some way to know when to restore the old behavior.
  Unfortunately, Chrome won't know when you are done.


 I was thinking something like this for my case (substitute appropriate
 method names):
 NPN_SetPluginWarning(false);
 processSocketMessages();
 NPN_SetPluginWarning(true);

 and trying to call NPN_SetPluginWarning where you didn't request that
 permission in the manifest would fail.

 --
 John A. Tamplin
 Software Engineer (GWT), Google

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Enabling --disable-hang-monitor for new windows when Chrome is already running

2009-09-11 Thread John Abd-El-Malek
On Fri, Sep 11, 2009 at 3:44 PM, John Tamplin j...@google.com wrote:

 On Fri, Sep 11, 2009 at 6:38 PM, John Abd-El-Malek j...@chromium.orgwrote:

 I presume you're referring to Chrome extensions?  I don't see the
 advantage of making this depend on the plugin being distributed via
 extensions.


 How else would an end-user get a plugin installed for Chrome?


Through whatever plugin installer they have (e.g. Flash's installer) or the
toolkit (e.g. Flash Builder).


 I don't think you want to tell them to go create a directory if it doesn't
 exist, and copy the file there, and you don't want to have to write a
 platform-specific installer to do that either.


 --
 John A. Tamplin
 Software Engineer (GWT), Google


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Enabling --disable-hang-monitor for new windows when Chrome is already running

2009-09-11 Thread John Abd-El-Malek
On Fri, Sep 11, 2009 at 4:01 PM, Mike Morearty m...@morearty.com wrote:



 On Fri, Sep 11, 2009 at 3:54 PM, John Tamplin j...@google.com wrote:

 On Fri, Sep 11, 2009 at 6:37 PM, John Abd-El-Malek j...@chromium.orgwrote:

 For reference, something similar is done for popups:
 void NPN_PushPopupsEnabledState(NPP instance, NPBool enabled);
 void NPN_PopPopupsEnabledState(NPP instance);

 Perhaps we can do the same thing here:

 void NPN_PushPluginHangDetectorState(NPP instance, NPBool enabled);
 void NPN_PopPluginHangDetectorState(NPP instance);


 Sounds fine to me.


 Me too.


If this sounds good to you, the next step would be getting a broader
discussion with other browser vendors on the plugin-futures mailing list (
https://mail.mozilla.org/listinfo/plugin-futures).



 As for a ping solution, my response is the same as John's: I could make
 that work if necessary, but it would be awkward. Having an NPN_... API seems
 like a cleaner solution.


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Enabling --disable-hang-monitor for new windows when Chrome is already running

2009-09-11 Thread John Abd-El-Malek
On Fri, Sep 11, 2009 at 4:32 PM, John Tamplin j...@google.com wrote:

 On Fri, Sep 11, 2009 at 7:31 PM, John Abd-El-Malek j...@chromium.orgwrote:

 If this sounds good to you, the next step would be getting a broader
 discussion with other browser vendors on the plugin-futures mailing list (
 https://mail.mozilla.org/listinfo/plugin-futures).


 Since the other browsers do not run plugins in a separate thread, they
 don't have this issue.  Is that list still relevant then?


Firefox and Safari (at least on Mac) are also moving towards out of process
plugins.



 --
 John A. Tamplin
 Software Engineer (GWT), Google


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Enabling --disable-hang-monitor for new windows when Chrome is already running

2009-09-11 Thread John Abd-El-Malek
On Fri, Sep 11, 2009 at 4:52 PM, Mike Morearty m...@morearty.com wrote:

 So, since Flash is installed by means other than as part of an Extension,
 does that mean that John Tamplin's suggestion of giving permissions via
 manifest.json won't work for me?  I take it manifest.json is something that
 only applies to extensions, and not to the other methods of installing a
 plugin.


right


 On the other hand, it seems to me that since (as far as I know) plugins are
 native code that can do whatever they want, there is no need for giving a
 plugin special permission to use the new NPN API -- just grant that
 permission to all plugins.  Native plugins can already do just about
 anything, including read/write access to the filesystem and the Internet, so
 it doesn't seem necessary for them to need special permission to access this
 API.


agreed





 On Fri, Sep 11, 2009 at 4:30 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Fri, Sep 11, 2009 at 4:03 PM, Mike Morearty m...@morearty.com wrote:



 On Fri, Sep 11, 2009 at 3:44 PM, John Tamplin j...@google.com wrote:

 On Fri, Sep 11, 2009 at 6:38 PM, John Abd-El-Malek 
 j...@chromium.orgwrote:

 I presume you're referring to Chrome extensions?  I don't see the
 advantage of making this depend on the plugin being distributed via
 extensions.


 How else would an end-user get a plugin installed for Chrome?  I don't
 think you want to tell them to go create a directory if it doesn't exist,
 and copy the file there, and you don't want to have to write a
 platform-specific installer to do that either.


 I don't know quite how the Flash player got into my Chrome, but all I
 know is, it's there.  Although I don't know for sure, I sort of suspect that
 when Chrome installed, it looked for either (a) all existing Netscape
 plugins, or (b) just Flash, and enabled it.


 We crawled the disk/registry for pointers to NPAPI plugins, using the same
 algorithm that other NPAPI browsers use.  You probably already had the
 plugin from when you used Firefox.  If you didn't, we have a plugin
 installer UI that, once given permission, would download and install it.



 As far as I know, we (Adobe) don't have any special Chrome extension for
 installing Flash player.  We just have the ActiveX version and the Netscape
 plugin version.





--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: autolinking BUG= lines in Rietveld

2009-09-10 Thread John Abd-El-Malek
This is just for Rietveld issues.  We can possibly do the same to the commit
log in gcl/git scripts, again patches welcome :)
Thanks Mohamed for the patch, I committed it and made it live with two minor
changes.

On Wed, Sep 9, 2009 at 10:04 PM, PhistucK phist...@gmail.com wrote:

 Are we only talking about when it is viewed in Rietveld?  What about when it
 is actually committed?  Can the change log be tweaked that way,
 automatically, too?
 That would be nice when reading change logs.

 Thank you!

 ☆PhistucK



 On Thu, Sep 10, 2009 at 05:01, John Abd-El-Malek j...@chromium.org wrote:

 Thanks, glad you enjoy it :)
 This was a side distraction to fix the annoying issue of having to
  manually copy and paste the bug ids.  It severely tested my regex-fu
 and since the multiple bugs case should be rare, I'll hide behind the excuse
 that if they're duplicates they should be marked as such and only end up
 with one, and if they're different bugs there should be separate changelists
 ;)  The rietveld change is at
 http://code.google.com/p/rietveld/source/detail?r=455, if anyone sends me
 a patch to make it work with multiple bug ids, I'd be happy to push it.


 On Wed, Sep 9, 2009 at 6:15 PM, Mohamed Mansour m...@chromium.org wrote:

  Very nice, no longer do I have to put crbug.com/#.  Whoever did that
 autolink, can you support multiple bugs as well? The format bugdroid uses is
 BUG=1, 2, 3
 - Mohamed Mansour



 On Wed, Sep 9, 2009 at 8:58 PM, Paweł Hajdan Jr. 
 phajdan...@chromium.org wrote:

 I just noticed that BUG=1234 lines are autolinked to the correct bug
 when viewed in Rietveld (codereview.chromium.org). This is very useful,
 thanks!









 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: memory leak in render_thread_unittest

2009-09-03 Thread John Abd-El-Malek
These leaks have always existed in the unit tests.  They don't happen in
Chrome builds since the posted tasks always get a chance to run on a different
thread where they clean up.  But in unit tests that doesn't (always) happen.
 It's been low priority for me to look at because it's not a real leak, but
if you have time to look into it, we can chat about the details.

On Thu, Sep 3, 2009 at 10:33 AM, Antony Sargent asarg...@google.com wrote:

 Looks like we have a leak in render_thread_unittest.cc :


 http://build.chromium.org/buildbot/waterfall/builders/XP%20Unit%20(purify)/builds/5386/steps/purify%20test:%20unit/logs/stdio

 There are several distinct stacks, but most boil down to this allocation:

ipc/ipc_sync_channel.cc:347  
 IPC::SyncChannel::SyncChannel(basic_string::std const,...
chrome/common/child_thread.cc:42  ChildThread::Init(void)
chrome/common/child_thread.cc:27  
 ChildThread::ChildThread(basic_string::std const)
chrome/renderer/render_thread.cc:97  
 RenderThread::RenderThread(basic_string::std const)
chrome/renderer/render_thread_unittest.cc:22  RenderThreadTest::SetUp(void)


 This has been happening since before the oldest entry on the buildbot front
 page, so I'm not sure how to figure out when it started happening.

 Does this seem familiar to anyone?



 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: memory leak in render_thread_unittest

2009-09-03 Thread John Abd-El-Malek
There's already this one that I know of:
http://code.google.com/p/chromium/issues/detail?id=20703, there may be
others.

On Thu, Sep 3, 2009 at 11:11 AM, Antony Sargent asarg...@google.com wrote:

 In that case I think I'll just put a bug in for these and add a suppression
 to get the tree green.
 Thanks.

 On Thu, Sep 3, 2009 at 10:53 AM, John Abd-El-Malek j...@chromium.orgwrote:

 These leaks have always existed in the unit tests.  They don't happen in
 Chrome builds since the posted tasks always get a chance to run on different
 thread where they clean up.  But in unit tests that doesn't (always) happen.
  It's been low priority for me to look at because it's not a real leak, but
 if you have time to look into it, we can chat about the details.

 On Thu, Sep 3, 2009 at 10:33 AM, Antony Sargent asarg...@google.comwrote:

  Looks like we have a leak in render_thread_unittest.cc :


 http://build.chromium.org/buildbot/waterfall/builders/XP%20Unit%20(purify)/builds/5386/steps/purify%20test:%20unit/logs/stdio

 There are several distinct stacks, but most boil down to this allocation:

ipc/ipc_sync_channel.cc:347  
 IPC::SyncChannel::SyncChannel(basic_string::std const,...
chrome/common/child_thread.cc:42  ChildThread::Init(void)
chrome/common/child_thread.cc:27  
 ChildThread::ChildThread(basic_string::std const)
chrome/renderer/render_thread.cc:97  
 RenderThread::RenderThread(basic_string::std const)
chrome/renderer/render_thread_unittest.cc:22  
 RenderThreadTest::SetUp(void)


 This has been happening since before the oldest entry on the buildbot
 front page, so I'm not sure how to figure out when it started happening.

 Does this seem familiar to anyone?



 




--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Long (exciting?) writeup on improving Mac New Tab Performance (not entirely Mac-specific)

2009-08-31 Thread John Abd-El-Malek
On Mon, Aug 31, 2009 at 4:50 PM, Scott Hess sh...@chromium.org wrote:


 On Mon, Aug 31, 2009 at 8:56 AM, Mark Mentovaim...@chromium.org wrote:
  Well, that annoying throbber is still chewing up time, causing some
  amount of UI loop contention while the images, thumbnails, and icons
  are fetched.  Windows and Linux don't have a throbber for the new tab
  page.  We shouldn't either.  Excellent, now we're down to 200ms.  It's
  still high, but it's reasonable.  It's a perceptible improvement from
  the 300ms we started with.

 It might be interesting to have the IO thread tag messages with the
 time as they go by, and have the UI thread keep track of the
 distribution of times that messages spend enqueued.  Then we can set
 goals around how fast IPCs get pulled off the queue.


The IPC logging code already keeps track of message dispatch time (i.e. from
when Send() was called until the message handler started executing) and
processing time (i.e. how long the handler took).  I don't know how much of
the UI is implemented on Mac, but on Windows this was only a few hours to
implement, and it'll give a lot of insight on which messages are backed
up/taking a lot of time.
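
As a rough illustration of the measurement being discussed (the struct and
function below are invented for the example, not the actual IPC logging code),
each message can be stamped when it is queued on the IO thread, and the UI
thread can then record both the queue delay and the handler time:

#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

struct Message {
  int id;
  Clock::time_point enqueue_time;  // stamped by the IO thread as it goes by
};

// Runs on the UI thread once the message is finally pulled off the queue.
void Dispatch(const Message& msg) {
  const Clock::time_point start = Clock::now();
  const auto queued_us = std::chrono::duration_cast<std::chrono::microseconds>(
      start - msg.enqueue_time).count();

  // ... invoke the real message handler here ...

  const auto handler_us = std::chrono::duration_cast<std::chrono::microseconds>(
      Clock::now() - start).count();
  std::printf("msg %d: %lld us in queue, %lld us in handler\n", msg.id,
              static_cast<long long>(queued_us),
              static_cast<long long>(handler_us));
}

int main() {
  Message m{1, Clock::now()};
  std::this_thread::sleep_for(std::chrono::milliseconds(5));  // busy UI thread
  Dispatch(m);  // both numbers can feed a histogram that goals are set against
  return 0;
}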


 -scott

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Long (exciting?) writeup on improving Mac New Tab Performance (not entirely Mac-specific)

2009-08-31 Thread John Abd-El-Malek
On Mon, Aug 31, 2009 at 4:50 PM, John Grabowski j...@chromium.org wrote:

 jam: It's a good idea; we came to the same conclusion.

Good to hear :)


 I implemented the about:ipc UI as part of our performance focus last week.
  However, when I hooked it up, I noticed the logging itself was never fully
 ported from Windows; it uses a cross-process waitable event, but
 waitable_event_posix.cc isn't cross-process.  So I'm working on that too.


It might be easier to make this code work without events, i.e. by sending an
IPC message to all child processes when logging state changes.  agl has
spent a bit of time looking into this and came to the conclusion that it's
too much work (it's the reason that showModalDialog doesn't work on
mac/linux btw).  If you can make it work with the same behavior as the
Windows event though, that'd be good since it'll make showModalDialog work.


 jrg

 On Mon, Aug 31, 2009 at 9:57 AM, John Abd-El-Malek j...@chromium.orgwrote:



 On Mon, Aug 31, 2009 at 4:50 PM, Scott Hess sh...@chromium.org wrote:


 On Mon, Aug 31, 2009 at 8:56 AM, Mark Mentovaim...@chromium.org wrote:
  Well, that annoying throbber is still chewing up time, causing some
  amount of UI loop contention while the images, thumbnails, and icons
  are fetched.  Windows and Linux don't have a throbber for the new tab
  page.  We shouldn't either.  Excellent, now we're down to 200ms.  It's
  still high, but it's reasonable.  It's a perceptible improvement from
  the 300ms we started with.

 It might be interesting to have the IO thread tag messages with the
 time as they go by, and have the UI thread keep track of the
 distribution of times that messages spend enqueued.  Then we can set
 goals around how fast IPCs get pulled off the queue.


 The IPC logging code already keeps track of message dispatch time (i.e.
 from when Send() was called until the message handler started executing) and
 processing time (i.e. how long the handler took).  I don't know how much of
 the UI is implemented on Mac, but on Windows this was only a few hours to
 implement, and it'll give a lot of insight on which messages are backed
 up/taking a lot of time.


 -scott




 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: [linux] plugin paths

2009-08-28 Thread John Abd-El-Malek
Can't we blacklist nspluginwrapper, and use the same logic that it uses to
find the real plugins?

On Fri, Aug 28, 2009 at 11:01 AM, Evan Martin e...@chromium.org wrote:


 Right now our plugin loading code matches Firefox in the search path order.
 1  $MOZ_PLUGIN_PATH
 2  ~/.mozilla/plugins
 3  path_to_chrome_binary/plugins  - analogous to Firefox
 4  /usr/lib/mozilla/plugins and related directories - what Firefox uses

 On systems that have nspluginwrapper installed, they have an
 nspluginwrapper instance in the 4th directory and a copy of Flash
 (etc.) hidden off to the side.  That means Chrome will also use
 nspluginwrapper, which is suboptimal: Chrome spawns a plugin process
 which loads nspluginwrapper which itself spawns another plugin
 process.

 It would be nice to not use nspluginwrapper, but we cannot just
 request people install plugins into the normal plugins directories,
 as you want other browsers (Firefox, etc.) to continue using
 nspluginwrapper.

 I propose the solution to this is to put Chrome-specific plugin
 directories at the front of the search path.  Something like
 1  ~/.config/google-chrome/plugins  (not sure on this one... a bit
 weird to stick plugins in a config dir)
 2  path_to_chrome_binary/plugins
 and then the Mozilla paths as before
 1  $MOZ_PLUGIN_PATH
 2  ~/.mozilla/plugins
 4  /usr/lib/mozilla/plugins and related directories - what Firefox uses

 Then people can install (symlink) the real plugins into the
 chrome-specific dirs if they want.
 Does that seem reasonable?
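
If it helps to see the proposal as code, here is a sketch of the intended
search order (the paths and helper name are illustrative only, not the actual
plugin loader):

#include <cstdlib>
#include <iostream>
#include <string>
#include <vector>

// Plugin directories in the order they would be searched: Chrome-specific
// locations first, so a symlinked "real" Flash wins over the nspluginwrapper
// instance sitting in the shared Mozilla directories.
std::vector<std::string> GetPluginSearchPath(const std::string& chrome_dir,
                                             const std::string& home) {
  std::vector<std::string> dirs;
  dirs.push_back(home + "/.config/google-chrome/plugins");
  dirs.push_back(chrome_dir + "/plugins");
  // Mozilla-compatible fallbacks, matching Firefox's existing order.
  if (const char* moz = std::getenv("MOZ_PLUGIN_PATH"))
    dirs.push_back(moz);
  dirs.push_back(home + "/.mozilla/plugins");
  dirs.push_back("/usr/lib/mozilla/plugins");
  return dirs;
}

int main() {
  for (const std::string& dir : GetPluginSearchPath("/opt/chrome", "/home/me"))
    std::cout << dir << "\n";
  return 0;
}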

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Question about resource_dispatcher_host.h

2009-08-27 Thread John Abd-El-Malek
On Thu, Aug 27, 2009 at 11:57 AM, hap 497 hap...@gmail.com wrote:



 On Mon, Aug 24, 2009 at 1:26 PM, John Abd-El-Malek j...@chromium.orgwrote:



 On Mon, Aug 24, 2009 at 1:06 PM, Brett Wilson bre...@chromium.orgwrote:


 On Mon, Aug 24, 2009 at 12:49 PM, hap 497hap...@gmail.com wrote:
  Thanks. But the picture in the document shows there is only 1
  ResourceDispatcherHost and there are 2 Renderer Processes:
 
 
 http://dev.chromium.org/developers/design-documents/multi-process-architecture
  And the ResourceDispatcherHost has access to both Channels for each
 Renderer
  Process.

 Information about each request including the originating renderer is
 tacked onto each URLRequest in the form of ExtraRequestInfo. See one
 of the functions in there such as
 ResourceDispatcherHost::OnResponseCompleted for how this is retrieved.


 Right, information such as renderer process id is available, but there are
 no pointers to the ResourceMessageFilter in there.  Using the process id,
 you could get to the RenderProcessHost (but only on the UI thread) and from
 there to the RMF.



 Thanks for all the answers.

 If ResourceMessageFilter has all the information for dispatching the
 message, why does it need to talk to ResourceDispatcherHost (a singleton class)
 for dispatching? Looking at resource_dispatcher_host.h, I am not sure what
 centralized information it is holding so that each ResourceMessageFilter
 needs ResourceDispatcherHost for dispatching.


ResourceDispatcherHost is also used by other classes that represent
plugin/worker sub-processes.

Additionally, separating resource loading into a separate class makes
testing easier.
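
A loose sketch of the shape described above, with invented type names (the
real classes are ExtraRequestInfo, RenderProcessHost, and so on, and their
details differ): the per-request info carries ids rather than pointers, and
turning an id back into a host object is a UI-thread operation.

#include <map>
#include <memory>

// Per-request metadata attached to each in-flight request, in the spirit of
// ExtraRequestInfo: enough to say which child process asked, but no pointer
// back to the message filter that delivered the request.
struct RequestInfo {
  int child_process_id;
  int request_id;
};

struct Request {
  std::unique_ptr<RequestInfo> info;  // owned by the request
};

class ProcessHost {};  // stand-in for RenderProcessHost

// UI-thread-only lookup: from the id stored in RequestInfo you can reach the
// process host, and from there whatever filter it owns.
class ProcessHostRegistry {
 public:
  void Register(int child_process_id, ProcessHost* host) {
    hosts_[child_process_id] = host;
  }
  ProcessHost* Lookup(int child_process_id) const {
    auto it = hosts_.find(child_process_id);
    return it == hosts_.end() ? nullptr : it->second;
  }
 private:
  std::map<int, ProcessHost*> hosts_;
};

int main() {
  ProcessHost renderer_host;
  ProcessHostRegistry registry;
  registry.Register(7, &renderer_host);

  Request request;
  request.info.reset(new RequestInfo{7, 1});
  return registry.Lookup(request.info->child_process_id) ? 0 : 1;
}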


 Thank you for your help.






 Brett

 




--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Copying of profiles across systems

2009-08-26 Thread John Abd-El-Malek
I think this is one of those things that aren't worth the effort.  Brett's
change was also to fix other bugs, so it wasn't just for this.  I'd be
surprised if people spend time to fix bugs solely for this.

On Wed, Aug 26, 2009 at 10:23 AM, Avi Drissman a...@google.com wrote:

 I've heard people proclaim the principle of being able to copy a profile
 across systems as being a deciding factor for certain changes (e.g. the
 history epoch change). However, it doesn't seem to be universally held or
 obeyed, and I'm not sure to the extent to which it can be obeyed. So some
 questions:

 - Is profile platform independence a guiding principle?
 - To what extent do we work to make it reality?
 - In the cases where it can't be kept (e.g. download folder path) should we
 keep a copy for each platform?
 - Is it worth rewriting today's code that doesn't conform (e.g. extensions
 and themes which use full paths and platform path separators)?

 Avi

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Copying of profiles across systems

2009-08-26 Thread John Abd-El-Malek
On Wed, Aug 26, 2009 at 12:41 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 12:16 PM, Brett Wilson bre...@chromium.orgwrote:


 On Wed, Aug 26, 2009 at 10:23 AM, Avi Drissmana...@google.com wrote:
  I've heard people proclaim the principle of being able to copy a profile
  across systems as being a deciding factor for certain changes (e.g. the
  history epoch change). However, it doesn't seem to be universally held
 or
  obeyed, and I'm not sure to the extent to which it can be obeyed. So
 some
  questions:
 
  - Is profile platform independence a guiding principle?
  - To what extent do we work to make it reality?

 I don't think this is a guiding principle. We should do it when it's
 reasonable. I think most places it is reasonable to do so.

  - In the cases where it can't be kept (e.g. download folder path) should
 we
  keep a copy for each platform?

 I don't think so.

  - Is it worth rewriting today's code that doesn't conform (e.g.
 extensions
  and themes which use full paths and platform path separators)?

 I don't think so, although I do wonder if full paths are necessary,


 It was painful to remove all of the full paths from mozilla's profile.  It
 was something people wanted because they wanted to be able to just copy
 their profile from one computer to another.  Think of moving a profile from
 WinXP to Vista where the path to your windows profile changes.


I think moving between different versions of the OS is a simpler and more
common problem than moving between different OSs.  I could come up with
convoluted scenarios about why I'd want to move my profile between different
OSs, but if it's not a common user request, then it's not worth the effort
IMO.


 -darin



 even if we're not worried about platform portability (not knowing
 anything about what these are used for if there are any).

 Brett




 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Copying of profiles across systems

2009-08-26 Thread John Abd-El-Malek
On Wed, Aug 26, 2009 at 1:20 PM, Dan Kegel d...@kegel.com wrote:


 On Wed, Aug 26, 2009 at 12:59 PM, Jeremy Orlowjor...@chromium.org wrote:
  I really like the idea of being able to move people between
  operating systems and just bringing the profile along
  without having to export and import...
 
  (seems to me there are online services that offer that
  convenience, but being able to do it with the raw file
  is cool.)
 
  In what scenarios would this be useful?  If it's easy to do, it'd be
 cool,
  but it seems like this would have a very small minority of users.

 When migrating users between operating systems.  Say,
 a company decides to migrate all its office workers from
 Windows to Linux.   Or if somebody installs Ubuntu on
 a Windows machine on its own; I believe they try
 to migrate settings when you do that.

 One goal of Chrome was to make operating systems not matter;
 one way to do that is to make the profile file (mostly) portable.




On Wed, Aug 26, 2009 at 12:41 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Aug 26, 2009 at 12:16 PM, Brett Wilson bre...@chromium.org
  wrote:


 On Wed, Aug 26, 2009 at 10:23 AM, Avi Drissmana...@google.com wrote:
  I've heard people proclaim the principle of being able to copy a profile
  across systems as being a deciding factor for certain changes (e.g. the
  history epoch change). However, it doesn't seem to be universally held
 or
  obeyed, and I'm not sure to the extent to which it can be obeyed. So
 some
  questions:
 
  - Is profile platform independence a guiding principle?
  - To what extent do we work to make it reality?

 I don't think this is a guiding principle. We should do it when it's
 reasonable. I think most places it is reasonable to do so.

  - In the cases where it can't be kept (e.g. download folder path) should
 we
  keep a copy for each platform?

 I don't think so.

  - Is it worth rewriting today's code that doesn't conform (e.g.
 extensions
  and themes which use full paths and platform path separators)?

 I don't think so, although I do wonder if full paths are necessary,


 It was painful to remove all of the full paths from mozilla's profile.  It
 was something people wanted because they wanted to be able to just copy
 their profile from one computer to another.  Think of moving a profile from
 WinXP to Vista where the path to your windows profile changes.


I think moving between different versions of the OS is a simpler and more
common problem than moving between different OSs.  I could come up with
convoluted scenarios about why I'd want to move my profile between different
OSs, but if it's not a common user request, then it's not worth the effort
IMO.

- Dan

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Splitting up chrome.dll

2009-08-25 Thread John Abd-El-Malek
This is very cool work; it'll especially be useful for outside developers
who don't have our beefy machines :)
Would it be possible to get a sense of what percentage of chrome.dll is due
to our code (i.e. base, net, chrome) vs third party (i.e. everything in
\third_party including webkit)?  Perhaps this estimate can be calculated by
looking at obj file size.  My hunch is that if we were to move off
everything in third_party, that would get rid of a huge amount of code that
hardly changes and it would be easier because that code is self contained
(i.e. none of the problems that you mention below).

On Tue, Aug 25, 2009 at 3:25 PM, Nicolas Sylvain nsylv...@chromium.orgwrote:

 Hello,
 In the past a lot of you have asked to split up chrome.dll in multiple
 DLLs.  For this quarter, I had a goal to see how feasible this is.

 Background information:

 Breaking up chrome.dll would make linking time faster and use less memory.
 It would also enforce a cleaner cut between our modules. Eventually
 it might make some of the modules be more easily reusable or swappable.

 This is likely not suitable for release or official builds, because adding
 more dlls means slower startup time. And the penalty we would get from
 loading 2 or 3 more dlls won't be acceptable.

 This is why the scope of this change is Debug mode only.  (Actually, it
 would be possible to enable this with a gyp define for release too, but it
 would
 be enabled by default only in debug). Oh, and this is Windows only for now
 too. (Linux already supports that AFAIK)

 The results:

 The good news is that it is feasible. The bad news is that the cost
 associated with it might be too high.
 This is what we need to decide here.

 I started by looking at base and net. Right now I have a checkout that can
 build the full solution with a base.dll and a net.dll. (205 modified files)

 The initial goal was to export only what was needed, but since the unit
 tests do a good job of testing almost everything,
 we were exporting almost everything, so I am now exporting full classes
 instead. This is a lot less intrusive
 and the code is cleaner, even though the export table can look a little bit
 ugly.
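
For readers who haven't done a DLL split on Windows, "exporting full classes"
usually means an export macro along the following lines (the macro and class
names are illustrative; the actual patch may differ):

// base_export.h -- resolves to dllexport while building base.dll itself,
// dllimport for code that links against it, and nothing in static builds.
#if defined(WIN32) && defined(BASE_DLL)
  #if defined(BASE_IMPLEMENTATION)
    #define BASE_EXPORT __declspec(dllexport)
  #else
    #define BASE_EXPORT __declspec(dllimport)
  #endif
#else
  #define BASE_EXPORT
#endif

// Annotating the whole class exports every member in one place, which keeps
// the header readable at the cost of a bigger export table than annotating
// only the methods the tests and other modules actually need.
class BASE_EXPORT ExportedExample {
 public:
  void DoWork();
};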

 Some of the problems I am seeing:

 - ICU needs to be registered per module. Right now since the Initialize
 function is in base.dll, it always gets initialized there, even though it's
 called
 by other dlls. We would need to move to a more central dll icu mode. I
 think it's already supported.

 - Some classes in base were clearly designed to be in a lib. One example is
 PathService. Right now when I call PathService::Get(FILE_MODULE), I
 always get base.dll. There are some similar problems with the VersionInfo
 classes. The solution here would be to create a new project called
 base_lib, which is always a lib. Unfortunately, PathService depends on a
 lot of other base classes, so base.lib would need to depend on base.dll, and
 base.dll would need to depend on base.lib (since it does use PathService).
 So a major cleanup would be required here. I'm sure we can find a better
 solution though.

 - Global variables are not global for the app, they are global per module.
 So if you use a lib that has a global variable, and this is used
 from multiple dlls (base, net, chrome, etc), then they will have different
 values.  The solution here would be to move this lib to a dll too.

 - Singletons. They are not too much of an issue right now because they seem
 to be all registered by base.dll, but there is a risk that you would
 have multiple instances of a singleton. (one per module).

 Eventually we would like to be able to split webkit into its own dll too. I
 heard it's not possible right now due to some inter-dependency problems,
 but people are working on that. Webkit was designed to be in its own dll,
 so it should be less of a problem.

 My next step is to back off a little bit from base and net, and go look at
 the other more self contained modules, like libxml and the rest of the
 third_party libs we use.

 Depending on the feedback on this thread, I will trash the base/net split,
 or finish it.

 Let me know what you think. I want to know if you have huge concerns about
 this, or if you think this would result in too much code
 change for benefit we would get out of it.

 Thanks

 Nicolas

 [Oh, and right now i'm using the DLL CRT, not tcmalloc. This is something
 else I will need to get working before we can roll this out]








 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Question about resource_dispatcher_host.h

2009-08-24 Thread John Abd-El-Malek
There's one ResourceDispatcherHost for all renderers (but there is one
ResourceMessageFilter per renderer process).
That graph shows a connection between the filter (ResourceMessageFilter) and
the ResourceDispatcherHost, but it's the former that has a pointer to the
latter.  RDH doesn't have a list of active RMF objects.
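
In sketch form (simplified, with invented suffixes on the names; not the real
headers), the pointer runs in one direction only:

// A single instance serves every renderer (and plugin/worker) process.
class ResourceDispatcherHostSketch;

// One of these exists per renderer process, created alongside that
// renderer's IPC channel; it forwards resource messages onward.
class ResourceMessageFilterSketch {
 public:
  explicit ResourceMessageFilterSketch(ResourceDispatcherHostSketch* rdh)
      : rdh_(rdh) {}
  // OnMessageReceived() would hand resource requests to rdh_ ...
 private:
  ResourceDispatcherHostSketch* rdh_;  // filter -> host
};

class ResourceDispatcherHostSketch {
  // Deliberately no list of ResourceMessageFilterSketch* here: each request
  // carries per-request info instead, so the host never needs to enumerate
  // the filters.
};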

On Mon, Aug 24, 2009 at 12:49 PM, hap 497 hap...@gmail.com wrote:

 Thanks. But the picture in the document shows there is only 1
 ResourceDispatcherHost and there are 2 Renderer Processes:


 http://dev.chromium.org/developers/design-documents/multi-process-architecture
 And the ResourceDispatcherHost has access to both Channels for each
 Renderer Process.




 On Mon, Aug 24, 2009 at 12:37 PM, Jeremy Orlow jor...@google.com wrote:

 There's one host per renderer, so there's no need for any list.

 On Mon, Aug 24, 2009 at 11:22 AM, hap 497 hap...@gmail.com wrote:

 From
 http://dev.chromium.org/developers/design-documents/multi-process-architecture,
 ResourceDispatcherHost should have the list of all the channels opened with
 each Renderer Process. But when I look at resource_dispatcher_host.h, I
 don't find any attribute of ResourceDispatcherHost which maintains that
 list.  Can you please tell me how/where ResourceDispatcherHost keeps
 track of that list of Channels?
 Thank you.






 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Question about resource_dispatcher_host.h

2009-08-24 Thread John Abd-El-Malek
On Mon, Aug 24, 2009 at 1:06 PM, Brett Wilson bre...@chromium.org wrote:


 On Mon, Aug 24, 2009 at 12:49 PM, hap 497hap...@gmail.com wrote:
  Thanks. But the picture in the document shows there is only 1
  ResourceDispatcherHost and there are 2 Renderer Processes:
 
 
 http://dev.chromium.org/developers/design-documents/multi-process-architecture
  And the ResourceDispatcherHost has access to both Channels for each
 Renderer
  Process.

 Information about each request including the originating renderer is
 tacked onto each URLRequest in the form of ExtraRequestInfo. See one
 of the functions in there such as
 ResourceDispatcherHost::OnResponseCompleted for how this is retrieved.


Right, information such as renderer process id is available, but there are
no pointers to the ResourceMessageFilter in there.  Using the process id,
you could get to the RenderProcessHost (but only on the UI thread) and from
there to the RMF.



 Brett

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] PSA: do not include X_messages.h in other headers

2009-08-20 Thread John Abd-El-Malek
Including files like render_messages.h and automation_messages.h from other
header files is unnecessary and slows down the build (adds about ~100K lines
of headers to each cc file).  Last time I removed all these occurrences, it
improved the build time by 15%.  Looks like a few more crept in now, so I'm
removing them.  Please be careful not to reintroduce this, and look out for
this in code reviews (yes, it would be great to have an automated way of
catching this, patches welcome).
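
A small before/after illustration of the rule (generic file names; a forward
declaration is the usual substitute when a header only mentions the type by
reference or pointer):

// foo_handler.h -- instead of
//   #include "chrome/common/render_messages.h"
// forward-declare the one type the interface actually names:
namespace IPC { class Message; }

class FooHandler {
 public:
  // Fine with only a forward declaration, since the parameter is a reference.
  void OnMessageReceived(const IPC::Message& message);
};

// foo_handler.cc -- the heavy messages header is included only here, where
// individual message types are actually unpacked, so its ~100K lines are
// compiled once instead of by every file that includes foo_handler.h.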

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: PSA: do not include X_messages.h in other headers

2009-08-20 Thread John Abd-El-Malek
Great!  Please try to add this to an existing check, or do it in a way that
doesn't involve the files being read once for each presubmit check, as the
presubmit step is already too slow.

On Thu, Aug 20, 2009 at 11:16 AM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 Cool! Thanks so much. I'm going to write a presubmit check for that.

 On Thu, Aug 20, 2009 at 11:12, John Abd-El-Malek j...@chromium.org wrote:

 Including files like render_messages.h and automation_messages.h from
 other header files is unnecessary and slows down the build (adds about ~100K
 lines of headers to each cc file).  Last time I removed all these
 occurrences, it improved the build time by 15%.  Looks like a few more
 crept in now, so I'm removing them.  Please be careful not to reintroduce
 this, and look out for this in code reviews (yes, it would be great to have
 an automated way of catching this, patches welcome).
 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: PSA: do not include X_messages.h in other headers

2009-08-20 Thread John Abd-El-Malek
On Thu, Aug 20, 2009 at 11:40 AM, Marc-Antoine Ruel mar...@chromium.orgwrote:

 The commit check is bound to 2x appengine latency (hint hint) since
 it parses try job results registered on rietveld and looks up
 chromium-status to know if the tree is open.


I wasn't talking about commit check, just upload (although of course it's
better to make both faster).



 presubmit_support.py still reads the whole file. It's *supposed* to
 only load the new lines from the diff. I just assumed that would be
 done once it gets slow enough, you know, I didn't want to do an early
 optimization :D


I think whether it loads the whole file or just a small part won't make a
big difference.  I suspect it's all the file operations that happen on a
change with lots of files.


 M-A

 On Thu, Aug 20, 2009 at 2:33 PM, Jeremy Orlowjor...@chromium.org wrote:
  Are you positive it's the per-file presubmit checks slowing things down?
  If
  so, maybe the presubmit stuff needs to be re-factored?  Right now, it
 does
  each presubmit check one by one (and each check might read in the files).
   If it were changed to go file by file (reading fully into memory,
 running
   all the per-file pre-submit checks at once) it might be faster.
  That said, it would surprise me if this was adding more than a second or
 two
  to the time.  I bet most of it is waiting on other servers and such.
 
  On Thu, Aug 20, 2009 at 11:20 AM, John Abd-El-Malek j...@chromium.org
  wrote:
 
  Great!  Please try to add this to an existing check, or do it in a way
  that doesn't involve the files being read once for each presubmit check,
 as
  the presubmit step is already too slow.
 
  On Thu, Aug 20, 2009 at 11:16 AM, Paweł Hajdan Jr.
  phajdan...@chromium.org wrote:
 
  Cool! Thanks so much. I'm going to write a presubmit check for that.
 
  On Thu, Aug 20, 2009 at 11:12, John Abd-El-Malek j...@chromium.org
  wrote:
 
  Including files like render_messages.h and automation_messages.h from
  other header files is unnecessary and slows down the build (adds about
 ~100K
  lines of headers to each cc file).  Last time I removed all
  these occurrences, it improved the build time by 15%.  Looks like a
 few more
  crept in now, so I'm removing them.  Please be careful not to
 reintroduce
  this, and look out for this in code reviews (yes, it would be great to
 have
  an automated way of catching this, patches welcome).
 
 
 
 
 
 
 
   
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: PSA: do not include X_messages.h in other headers

2009-08-20 Thread John Abd-El-Malek
On Thu, Aug 20, 2009 at 11:33 AM, Jeremy Orlow jor...@chromium.org wrote:

 Are you positive it's the per-file presubmit checks slowing things down?
  If so, maybe the presubmit stuff needs to be re-factored?  Right now, it
 does each presubmit check one by one (and each check might read in the
 files).  If it were changed to go file by file (reading fully into memory,
  running all the per-file pre-submit checks at once) it might be faster.
 That said, it would surprise me if this was adding more than a second or
 two to the time.  I bet most of it is waiting on other servers and such.


This gives me an idea: I'll add the time it takes to run presubmit checks to
the output, so we can see how long it's taking.



 On Thu, Aug 20, 2009 at 11:20 AM, John Abd-El-Malek j...@chromium.orgwrote:

 Great!  Please try to add this to an existing check, or do it in a way
 that doesn't involve the files being read once for each presubmit check, as
 the presubmit step is already too slow.


 On Thu, Aug 20, 2009 at 11:16 AM, Paweł Hajdan Jr. 
 phajdan...@chromium.org wrote:

 Cool! Thanks so much. I'm going to write a presubmit check for that.

 On Thu, Aug 20, 2009 at 11:12, John Abd-El-Malek j...@chromium.orgwrote:

 Including files like render_messages.h and automation_messages.h from
 other header files is unnecessary and slows down the build (adds about 
 ~100K
 lines of headers to each cc file).  Last time I removed all these
 occurrences, it improved the build time by 15%.  Looks like a few more
 crept in now, so I'm removing them.  Please be careful not to reintroduce
 this, and look out for this in code reviews (yes, it would be great to have
 an automated way of catching this, patches welcome).




 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Plugin control flow and race conditions

2009-08-18 Thread John Abd-El-Malek
On Mon, Aug 17, 2009 at 11:43 AM, bsmedberg bsmedb...@gmail.com wrote:


 At Mozilla we're currently working on implementing multi-process
 plugin hosts similar to the model used by Chromium. However, we're
 having trouble working through the many potential race conditions
 introduced by having multiple processes (and therefore multiple flows
 of control). I've read
 http://dev.chromium.org/developers/design-documents/plugin-architecture
 but that doesn't seem to address most of the issues we're facing.


For the detailed view of how we solve these issues, the best place to look
at is the code.  Specifically, plugin_channel.cc and ipc_sync_channel.cc.

To avoid deadlock, when one process is sending a synchronous message, it
responds to other synchronous messages in the meantime.  However to avoid
unnecessary reentrancy, we disable this for synchronous messages from the
plugin process unless that process is itself responding to synchronous
messages.  We did this because otherwise a plugin process could try to force
layout while it's already happening, which is not expected in WebKit.  You
can find more about this in PluginChannel's constructor, where we call
SendUnblockingOnlyDuringDispatch, which makes it so that synchronous
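
Very roughly, the policy reads like the sketch below (this is only the shape
of the rule, not the SyncChannel implementation): while a process is blocked
on its own synchronous send, incoming synchronous messages are dispatched only
if the channel is currently allowed to unblock, and for plugin channels that
permission exists only while we are ourselves dispatching a message.

#include <queue>

struct SyncMessage { int id; };

class SyncChannelSketch {
 public:
  explicit SyncChannelSketch(bool unblock_only_during_dispatch)
      : unblock_only_during_dispatch_(unblock_only_during_dispatch) {}

  // Called while this process is blocked waiting for its own sync reply.
  void PumpWhileWaiting(std::queue<SyncMessage>& incoming) {
    const bool may_unblock = !unblock_only_during_dispatch_ || dispatching_;
    while (may_unblock && !incoming.empty()) {
      Dispatch(incoming.front());
      incoming.pop();
    }
    // Otherwise incoming sync messages stay queued until our send completes,
    // so e.g. a plugin can't force a re-entrant layout inside WebKit.
  }

 private:
  void Dispatch(const SyncMessage&) {
    dispatching_ = true;
    // ... run the handler; it may legitimately issue nested sync sends ...
    dispatching_ = false;
  }

  const bool unblock_only_during_dispatch_;  // true for plugin channels
  bool dispatching_ = false;
};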


 The most obvious problem is that both processes may send a synchronous
 IPC message at the same time. Assuming that these don't deadlock, the
 native stack for the two calls would end up interleaved. What happens
 when the renderer process and the plugin process send conflicting
 messages at roughly the same time? For example: the browser finishes a
 network request with NPP_DestroyStream and the plugin (responding to a
 UI event, perhaps) calls NPN_DestroyStream simultaneously? I can't
 imagine that a plugin would expect to receive an NPP_DestroyStream
 message after it has already called NPN_DestroyStream, and this is
 likely to cause erratic plugin behavior.


This specific case is not really a problem.  If you look at our
implementation of PluginInstance::NPP_DestroyStream(), we set NPStream.ndata
to NULL.  The second call would early exit if it's already null.
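
The guard amounts to something like this (the type names follow NPAPI, but the
body is a simplified illustration, not the actual PluginInstance code):

#include <cstddef>

// Minimal stand-ins for the NPAPI pieces involved; the real definitions live
// in npapi.h and carry many more fields.
typedef short NPError;
typedef short NPReason;
const NPError NPERR_NO_ERROR = 0;
struct NPStream { void* ndata; };

NPError DestroyStreamOnce(NPStream* stream, NPReason /*reason*/) {
  // If the stream was already torn down -- say the plugin called
  // NPN_DestroyStream while the browser's NPP_DestroyStream was in flight --
  // ndata is already null and this second call becomes a no-op.
  if (stream == nullptr || stream->ndata == nullptr)
    return NPERR_NO_ERROR;

  // ... notify the plugin and release the stream's resources ...
  stream->ndata = nullptr;  // mark destroyed so a racing call exits early
  return NPERR_NO_ERROR;
}

int main() {
  int instance_data = 0;
  NPStream stream{&instance_data};
  DestroyStreamOnce(&stream, 0);  // does the real work
  DestroyStreamOnce(&stream, 0);  // the racing duplicate is harmless
  return 0;
}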


 Are the IPC delegates/stubs responsible for checking the state of each
 call and avoiding improper nesting? Do you have any procedure/system
 for detecting racy improper nesting? For example, racing pairs
 NPP_Write/NPN_DestroyStream and NPN_RequestRead/NPP_DestroyStream are
 equally unexpected. And all these example come only from the NPStream
 interface; I haven't even begun to examine potential message races in
 the core NPP APIs or the NPObject APIs.


We did run into a bunch of these issues early on, but they were all easy to
work around (i.e. always checking first if the stream is destroyed, since it
could be called afterwards in the situations that you describe).

The hardest issues we've had to solve are related to performance.  The
initial release had poor performance when scrolling with lots of
windowed/windowless plugins.  To solve this, we moved to an asynchronous
painting/scrolling model.  While adding extra complexity and memory usage,
the user experience when scrolling is much better.  The techniques we used
should be applicable in your implementation as well, we can talk more about
this when you're ready (plugin lunch? :) ).


 --BDS

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Can't edit others' codereviews?

2009-08-18 Thread John Abd-El-Malek
On Tue, Aug 18, 2009 at 5:37 AM, Evan Martin e...@chromium.org wrote:

 On Sat, Aug 15, 2009 at 11:44 PM, Nico Webertha...@chromium.org wrote:
  `git cl patch` reuses existing issues, so when landing stuff for other
  people, I used to edit the issue on codereview to add Patch by
  someone+usuallytru...@example.org before landing. So for me editing
  other people's issues is useful.

 $ git cl dcommit --help | grep -- -c
  -c CONTRIBUTOR  external contributor for patch (appended to description)

 My typical workflow is:
  grep trung AUTHORS
  select line with full name
  git cl dcommit -c middle click here

 I think it'd be neat to somehow use git's distinction of author vs
 committer such that when you patch in a cl (with git cl patch) it
 plumbs the rietveld-side author info into the local git commit info
 and from there into the svn commit, but I think the mapping of
 rietveld username to AUTHORS username isn't very clear.


A little hacky, but you could reuse the url used for autocomplete in
reviewers/CC boxes.  i.e.
http://codereview.chromium.org/account?q=Evan%20Martin returns
e...@chromium.org (Evan Martin)



 PS: Personal OKR of responding to 100% of messages mentioning git
 remains green.  :D


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: running reliability tests

2009-08-18 Thread John Abd-El-Malek
The information on how to do this is currently on the internal wiki,
although it seems there's no reason it shouldn't be moved to the public wiki
(not it :) ).
http://wiki.corp.google.com/twiki/bin/view/Main/ChromeDebuggingChromeBotCrashes

On Tue, Aug 18, 2009 at 3:42 PM, James Hawkins jhawk...@chromium.orgwrote:


 Hi,

 How can I run the reliability tests, specifically the automated UI
 tests, locally?

 Thanks,
 James Hawkins

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Can't edit others' codereviews?

2009-08-17 Thread John Abd-El-Malek
This does affect closing, adding reviewers, changing the description, and
deleting-with-no-undo.  Given the feedback on this thread, I'll restore
almost-full functionality to all chromium accounts (except for delete, I'll
make it only work for the owner since it's irreversible).

On Sun, Aug 16, 2009 at 3:14 PM, Mark Mentovai m...@chromium.org wrote:

 John Abd-El-Malek wrote:
  Intentional but I have a TODO in the Chromium branch of Rietveld to
 update
  this.  I just synced ~60 changes from trunk to our branch, and now we get
 a
  close button to the left of each issue.  Since everyone with a @chromium
  account could edit an issue, I thought it was too easy to accidentally
 close
  others' issues.  So I disabled the give-every-chromium-user-full-access
  change.

 But they're still reopenable, right?  We're talking about close, not
 delete-there-is-no-undo?  If so, it doesn't really sound like a
 problem to me at all.

 I don't think that restricting our access to edit and close issues is
 a reasonable solution to this.  Personally, I rarely edited others'
 descriptions, but I frequently closed issues for others when
 committing their changes for them.

 Mark


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Can't edit others' codereviews?

2009-08-16 Thread John Abd-El-Malek
Intentional but I have a TODO in the Chromium branch of Rietveld to update
this.  I just synced ~60 changes from trunk to our branch, and now we get a
close button to the left of each issue.  Since everyone with a @chromium
account could edit an issue, I thought it was too easy to accidentally close
others' issues.  So I disabled the give-every-chromium-user-full-access
change.
We can get rid of the close button to the left of issues in our branch, or
with a little bit of work only show it for your issues.  I didn't have time
to do this so I just removed the full access for everyone for now.

What is the preferred behavior?  Whoever feels strongly about this can reply
to this thread or to me and I can implement the popular option.  I don't
feel strongly about any of the options.

On Sat, Aug 15, 2009 at 2:24 PM, Peter Kasting pkast...@google.com wrote:

 I noticed yesterday and today the codereviews I'm looking at (sent by other
 people) no longer give me the option to Edit.  Is this an intentional
 change?  Now I can't assign reviewers when appropriate.
 PK

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: gcl change that you should be aware of

2009-08-06 Thread John Abd-El-Malek
whoa, please revert this change.  if I have a big changelist, now every time
I run gcl I have to expend effort to make sure no new files crept in.  this
will lead to more build failures as people check in other files accidentally.

On Wed, Aug 5, 2009 at 5:56 PM, Anthony LaForge lafo...@google.com wrote:

 Howdy,
 Quick little change to gcl that everyone should be aware of.  When you
 execute the script it will now automatically pull all physically modified
 files into the Paths in this changelist section, which means no more
 copying and pasting the files you changed into your CL.  The behavior is
 closer to that of P4 (where we delete files as opposed to adding them).  All
 the unchanged files are still below.

 Kind Regards,

 Anthony Laforge
 Technical Program Manager
 Mountain View, CA

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Running UI test in parallel (experimental)

2009-08-06 Thread John Abd-El-Malek
This is very cool, but I ran into a few problems when I tried to run it:
a:\chrome2\src\chrome>tools\test\smoketests.py --tests=ui

 You must have your local path of trunk/src/tools/python added to your
PYTHONPATH.

Traceback (most recent call last):
  File "A:\chrome2\src\chrome\tools\test\smoketests.py", line 32, in <module>
    import google.httpd_utils
ImportError: No module named google.httpd_utils


hmmm, never needed PYTHONPATH set before.  Can't this script set it itself?
 Setting it manually will fail when I move depot locations, etc.  But
anyways, I set it and then

a:\chrome2\src\chrome>set PYTHONPATH=a:\chrome\src\tools\python

a:\chrome2\src\chrome>tools\test\smoketests.py --tests=ui
Traceback (most recent call last):
  File "A:\chrome2\src\chrome\tools\test\smoketests.py", line 274, in <module>
    result = main(options, args)
  File "A:\chrome2\src\chrome\tools\test\smoketests.py", line 155, in main
    sys.path.insert(0, _BuildbotScriptPath('slave'))
  File "A:\chrome2\src\chrome\tools\test\smoketests.py", line 84, in _BuildbotScriptPath
    'scripts', sub_dir)
  File "a:\chrome\src\tools\python\google\path_utils.py", line 72, in FindUpward
    parent = FindUpwardParent(start_dir, *desired_list)
  File "a:\chrome\src\tools\python\google\path_utils.py", line 55, in FindUpwardParent
    (desired_path, start_dir))
google.path_utils.PathNotFound: Unable to find tools\buildbot\scripts\slave
above A:\chrome2\src\chrome\tools\test



tools\buildbot isn't in the public tree I think, since I don't find it
here: http://src.chromium.org/viewvc/chrome/trunk/src/chrome/tools/.  So
external contributors can't use this?  Also, why should this script depend
on the buildbot scripts?  I don't have them so I can't try this out.

Can't we just have a minimal python script that runs ui_tests in parallel?

On Wed, Jul 29, 2009 at 3:28 PM, Huan Ren hu...@google.com wrote:


 I just checked in a change to run ui_tests in parallel based on
 sharding mechanism provided by GTest. Each ui_tests instance has its
 own user data dir, and the number of ui_tests instances is
 NUMBER_OF_PROCESSORS. I have updated
 src/chrome/tools/test/smoketests.py so you can run it through command
 line:

 python.exe smoketests.py --tests=ui [--verbose]

 Running ui_tests.exe directly still gives the old behavior of running
 sequentially. On my 4-core machine, the running time has been
 reduced by half, from 832 secs to 443 secs. But I need to make sure
 all tests can run reliably in this parallel fashion. So if you try it
 out, I will be interested to know how much the performance improves
 and which additional tests fail.

 Huan

 P.S. this change is for the Windows platform, as I think Linux/Mac is
 already using GTest sharding.

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Design Doc: out of process (v8) proxy resolving

2009-07-29 Thread John Abd-El-Malek
The plug-in sandbox is too weak to be robust; see the other thread about it.
 As long as it has access to HWNDs, it's very easy to break out of it.

On Wed, Jul 29, 2009 at 10:43 AM, Adam Barth aba...@chromium.org wrote:


 I wonder if we could use something like the plug-in sandbox for the
 main browser process in the intermediate term.  That way the browser
 could still have HWNDs and the like.

 Adam


 On Wed, Jul 29, 2009 at 9:44 AM, Linus Upsonli...@google.com wrote:
  I realize this is not a small request, but it would be better if we could
  move to a model where the browser was sandboxed and talked to a much
 simpler
  process to carry out trusted operations on its behalf.
  Linus
 
  On Wed, Jul 29, 2009 at 3:29 AM, Eric Roman ero...@chromium.org wrote:
 
  Here is a design document for http://crbug.com/11746
 
 
 
 http://sites.google.com/a/chromium.org/dev/developers/design-documents/out-of-process-v8-pac
 
  Feedback welcome.
 
 
 
 
  
 

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Quota UI for LocalStorage (and others in the future)

2009-07-29 Thread John Abd-El-Malek
On Tue, Jul 28, 2009 at 8:40 PM, Jeremy Orlow jor...@chromium.org wrote:

 I'm starting to think ahead to how quotas will work with LocalStorage (and
 I assume database and maybe even AppCache).  To begin with, I'll probably
 just set a fixed quota (5mb is pretty standard), but some apps will surely
 desire more space than that, so I think we'll need a more robust solution
 fairly quickly.  (Maybe even before it comes out from behind the
 --enable-local-storage flag.)
 The question is how should we handle quotas from a UI perspective.

 One approach that seems obvious to a lot of people I've talked to is asking
 the user (maybe via an info bar?) whenever an origin hits its limit (before
 we decide whether to return a quota exceeded error or not).  The problem is
 that WebKit (in the renderer) can't block on the UI thread (since it may be
 blocked on WebKit).  Maybe it's safe to pump plugin related events while in
 the middle of a JavaScript context?  If not, then I'm not sure if any
 just-in-time solution like this is going to be viable.


It is possible to stop JS execution to wait on the browser process, just
like what we do for alerts/showModalDialog.  This involves running a nested
message loop, which we try to avoid unless absolutely necessary because of
reentrancy issues (but in this case, it's necessary so it's ok).
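
In other words, the quota check could be made to look like any other blocking
call from script. Here is a hypothetical sketch of the renderer-side shape
(the helper, message semantics, and types are invented for illustration, not
an existing Chromium API):

#include <cstddef>
#include <string>

struct QuotaReply { bool allow_more; };

// Stands in for a synchronous IPC to the browser process.  The browser UI
// thread can show an infobar and answer whenever the user decides; meanwhile
// the renderer spins a nested message loop, just as it does for alert() and
// showModalDialog(), so the calling script stays blocked.
QuotaReply AskBrowserForMoreQuota(const std::string& origin,
                                  std::size_t requested_total_bytes) {
  (void)origin;
  (void)requested_total_bytes;
  return QuotaReply{false};  // stubbed out for the sketch
}

// Called from the storage implementation while script is on the stack.
bool MayStore(const std::string& origin, std::size_t new_total,
              std::size_t quota) {
  if (new_total <= quota)
    return true;  // within quota; no user interaction needed
  // Over quota: block script and let the browser decide.  A "no" answer
  // surfaces to the page as the quota-exceeded error.
  return AskBrowserForMoreQuota(origin, new_total).allow_more;
}

int main() {
  return MayStore("http://example.com", 6 * 1024 * 1024, 5 * 1024 * 1024) ? 0 : 1;
}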



 We could implement heuristics that predict when LocalStorage is about to
 run out of space, but if we don't predict it in time, we will have to send a
 quota exceeded exception to the script.  A lot of common use cases, like
 syncing Gmail for the first time, would make this difficult to do well, I
 think.

 Safari buries the setting in the preferences menu.  That would be better
 than nothing, I suppose.

 Any other ideas?

 Jeremy

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---


