[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Aaron Boodman

On Tue, Oct 27, 2009 at 8:55 PM, Tim Steele t...@chromium.org wrote:
 I can take a stab at more formal heuristics for bookmarks, at least.  We
 will have a better idea of actual limiting parameters for bookmarks (as in
 how many operations in a certain time frame is reasonable) once the
 ExtensionsActivityMonitor I just landed percolates and we can aggregate
 across representative sample data it produces.

A couple thoughts:

a) I think it is overly clever to hash the changes to the bookmarks
and count per unique-params. This can be easily or accidentally
defeated by just doing something like update({title:foo}),
update({url:blech}), over and over, anyway. Instead, at least for
bookmarks, I think a simple per-item-count is very reasonable. It
doesn't make sense to me to update the same bookmark more than a few
times per minute. An easy heuristic could be that updating the same
bookmark more than twice a minute sustained over 10 minutes. For
creates it's a bit tricker. In that case, maybe the best we can do is
the same combination of properties.

b) What is wrong with solving this by rate limiting the communication
between the Chrome sync client and the Chrome sync server? It seems
like that would be more foolproof; if the client tries to send too
many updates for any reason, they are rate limited.

- a

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: [Green Tree] Task Force Status Update 10/26

2009-10-28 Thread Jeremy Orlow
On Tue, Oct 27, 2009 at 11:37 PM, Paweł Hajdan Jr.
phajdan...@chromium.orgwrote:

 On Wed, Oct 28, 2009 at 00:44, Ian Fette i...@chromium.org wrote:

 In an effort to provide more transparency into what the team is working
 on, I'm sending out the meeting notes from the green tree task force to
 chromium-dev, below. I will try to send further notes to chromium-dev from
 our meetings.


 Sounds great!



- Flakiness
   - Brought down to acceptable levels for unit tests. Need help to
   keep it that way and not regress.

 I think that unit_tests are well and happy, but ui_tests are another
 story. We have UI tests which have more than 100 flaky flips in just two
 weeks. And we have UI tests which fail on almost every try run. IMHO such
 outstanding (is that the right word?) cases of flakiness should be fixed
 with a high enough priority.


A bunch of these are related to DOM Storage.  I'm pretty sure the cause is
related to what's causing a lot of crashes in the dev channel.  I'm working
actively to figure it outeven though it has been this way for a while
now.


 And we still have lots of problems with resource-related flakiness, like
 this:

 [FATAL:resource_bundle.cc(133)] Check failed: false.

 [FATAL:tab_renderer.cc(132)] Check failed: waiting_animation_frames-width() 
 % waiting_animation_frames-height() == 0.

 This is making all ui tests super-flaky. It's windows-specific, and I was
 asking about the issue several times on chromium-dev. Unfortunately, I don't
 know how to fix or even analyse it (but I can help). Please note that fixing
 this should visibly decrease flakiness of all ui tests, not only the Top25
 flaky ones.


 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: chromium linux' Native Client Plugin hides my NPAPI plugin

2009-10-28 Thread Anselm R Garbe

2009/10/27 Evan Martin e...@chromium.org:
 On Fri, Oct 23, 2009 at 1:58 AM, Anselm R Garbe garb...@gmail.com wrote:
 However, I'm not sure if chrome resp. libnpGoogleNaClPluginChrome.a
 does it right with exporting these symbols as plain C symbols because
 this might conflict with other existing plugins as well in the same
 way.

 I checked in a fix for this in r30253.  Thanks for bringing up the
 problem, and please let us know if you encounter any other problems.

Thanks a lot for this, it works fine for me now.

Kind regards,
Anselm

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: chromium linux' Native Client Plugin hides my NPAPI plugin

2009-10-28 Thread Anselm R Garbe

2009/10/26 Antoine Labour pi...@google.com:
 On Fri, Oct 23, 2009 at 1:58 AM, Anselm R Garbe garb...@gmail.com wrote:

 2009/10/23 Anselm R Garbe garb...@gmail.com:
  I rebuilt chromium yesterday from yesterday's tip on Linux (last time
  I did that was about 8 weeks ago or so). I'm involved in developing
  some NPAPI plugins that used to work well with my older chromium linux
  build and that work without any issues in all other NPAPI supporting
  browsers including firefox, Opera and some webkitgtk based ones, such
  as surf.
 
  Anyway, the symptom I see in chromium is this (in about:plugins):
 
  Native Client Plugin
 
  File name: npfoo.so
  Native Client Plugin was built on Oct 22 2009 at 12:28:12 and expires
  on 11/4/2009 (mm/dd/)
  MIME Type       Description     Suffixes        Enabled
  application/x-nacl-srpc NativeClient Simple RPC module   nexe   Yes
 
  So the Native Client Plugin somehow overrides the original mime type
  (application/x-vnd-foo-x and foo as suffix) now and makes it
  impossible to use the NPAPI plugin as usual. Also note that this
  Native Client Plugin stuff seems to be totally untested on Linux, at
  least the borken extension nexe (.exe?) hints that there seems to be
  something not in sync between code that's used on Windows perhaps?
 
  On the other hand some plugins seem not to be wrapped by Native Client
  Plugin, such as Adobe's PDF plugin (nppdf.so) and I wonder why that's
  the case (is there any hard-coded trusted plugins list somewhere?).
 
  Any insight on how to make it work would be really helpful.

 Ok, I'm a moron and found the issue. The problem is that the chrome
 executable (in particular statically linked in
 libnpGoogleNaClPluginChrome.a) exports 'char
 *NPP_GetMIMEDescription(void)' as a C symbol, which my plugin code
 also implements and calls when NP_GetMIMEDescription() is called by
 the browser. So chrome's Native Client Plugin's
 NPP_GetMIMEDescription() takes preference and is called by my plugin
 code instead of its own build-in NPP_GetMIMEDescription()
 implementation.

 The original pattern came from some initial mozilla NPAPI plugin
 example I started from a long while ago and that contained the pattern
 to call NPP_GetMIMEDescription() on returning from
 NP_GetMIMEDescription entry point. I never bothered to change that
 until now ;)

 This might not happen of course if I'd use a C++ compiler that mangles
 the symbols, but my definitions are plain C symbols and hence the
 conflict.

 However, I'm not sure if chrome resp. libnpGoogleNaClPluginChrome.a
 does it right with exporting these symbols as plain C symbols because
 this might conflict with other existing plugins as well in the same
 way.

 http://gcc.gnu.org/wiki/Visibility
 Make your plugin compile with -fvisibility=hidden, and only explicitly
 export the plugin functions by adding __attribute__((visibility(default)))
 That should fix this, and similar problems.
 Though agreed, Chrome should aim to do the same...

I realized that chromium is using this flag now, though I really
dislike this gcc specific flag (which wouldn't have the same result
with a different compiler anyways). I'd rather recommend to declare
those NPP functions static to aim for a more future-proof and portable
solution. In my plugin code I fixed the issue through using a
different prefix.

Anyway, it's up to you what you prefer, just my 2 cts ;)

Kind regards,
Anselm

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] A Dictionary-Evaluation Plan

2009-10-28 Thread 坊野 博典

Greetings chromium-developers,

(Please feel free to ignore this e-mail if you are not interested in
our spell-checker and its dictionaries.)

As you may know, we have updated hunspell used by Chromium to the
latest version. It adds lots of features that improve its
spell-checking quality (especially for non-English dictionaries.)

To use these new features, we would like to update the dictionaries
used by Chromium and support more spell-checker dictionaries.
Nevertheless, we would like to evaluate the new dictionaries before
releasing them to check their qualities, i.e. whether or not the new
dictionaries are better than the old ones.

Unfortunately, I don't have clear ideas for this evaluation, I would
like to write my random thought and ask your opinions.

Even though this is still a random thought, I would personally like to
use chromium to evaluate the new dictionaries: i.e. uploading the new
dictionaries to our dictionary server, changing the chromium code to
use the updated ones, asking users to compare the new dictionaries
with the old ones and give us their feedbacks (*1). If users like the
new dictionaries, we would like to release the new ones. Otherwise we
will keep the old ones.

(*1) I don't have clear ideas how to get feedbacks, it may be good to
use Google Spreadsheet?

Since this is just my random thought to start a discussion, any
comments and suggestions are definitely welcome.
Would it be possible to give me your opinions?

Best regards,

Hironori Bono
E-mail: hb...@chromium.org

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread Anton Muhin

On Wed, Oct 28, 2009 at 7:10 AM, Mike Belshe mbel...@google.com wrote:
 On Tue, Oct 27, 2009 at 3:24 PM, Jens Alfke s...@chromium.org wrote:

 Do we plan to switch the Mac build of Chromium to use tcmalloc instead of
 the system malloc? I thought this was the case, but I can't find any bug
 covering the task to do that. I'm on the memory task force, and this
 decision could definitely have an impact (one direction or the other) on
 footprint on Mac.

 From a performance perspective, it may be critical to use tcmalloc to match
 safari performance.  It was literally a 50% speedup on most of the DOM perf
 when running on WinXP.


 I just tried enabling tcmalloc (by changing USE_SYSTEM_MALLOC to 0 in
 JavaScriptCore.gyp), and Chromium launches but logs a ton of warnings about
 unknown pointers being freed.

 I suspect this will use the version of TCMalloc which is embedded in WTF.
  I'd recommend against this approach.  We should try to use a single
 allocator for the entire app, and specialize when necessary.  Having the
 rendering engine drive the allocator is a bit backwards; it seems better for
 chrome itself to choose the allocator and then the rendering engine should
 use that.
 To make this work, you'll need to figure out whatever build magic is
 necessary on the Mac; I'm kinda clueless in that regard, but if you want to
 know what we did on Windows, I'm happy to share.
 You'll find the tcmalloc version that we use on windows available in
 third_party/tcmalloc.
 Keep in mind also that the version of tcmalloc in webcore is heavily
 modified by apple.  Some of those changes have been similarly made in
 tcmalloc's open source project, but others have not.  Apple has not seemed
 interested in syncing back and appears to be on the fork it route.
 Using third_party/tcmalloc will offer a couple of advantages:
    - we are continuing to improve it all the way from google3 to the
 google-perftools to chrome.
    - it will provide the same allocator on windows and mac (easier
 debugging)
    - the chromium implementation allows for selection of multiple allocators
 fairly easily.  Using the allocator_shim.cc, you can plug in your own
 allocators pretty quickly.
 There is a disadvantage.  I suspect Apple is farther along in optimizing the
 mac-specific optimizations of tcmalloc than the google3 guys are.  You may
 have to stumble into these.  Either Jim or I could probably give you some
 pointers for what to look at.  tcmalloc is solid, but the ports other than
 linux have had some very serious port related bugs.

The last time I looked into WTF FastMalloc, the main difference was
another treatment of committing/decommitting of the pages: they
decommit in ::Delete and have a thread which periodically scavenges
committed pages and decommit a share of those.  But what works fine
for them might do not work for us---they have a pretty different model
after all---many pages renderers residing in the same process vs. page
renderer per process for Chrome.

yours,
anton.

 Mike


 I think this happens because tcmalloc is built by the 'wtf' target of
 JavaScriptCore.xcodeproj, which is a static library, so we may be ending up
 with multiple instances of the library with their own copies of its globals,
 leading to this problem if one instance allocates a block and another one
 tries to free it. But this usually happens only if there are multiple
 dynamic libraries using the same static library — do we have more than just
 Chromium.framework?
 —Jens



 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Mike Pinkerton

When we were all out in MtnView last, one of the action items for some
of the Mac QA folks was to get a machine that triple-boots
(Mac/Win/Linux) so that we could run the same version of chrome on the
same hardware and see the differences between platforms and then to
run a bunch of tests (startup, new tab, page-cycler, etc). I'm pretty
sure krisr got the machine created, but I don't think we ever ran any
tests on it beyond that.

Anyone know what happened our best laid plans? This seems like
something we should be very active in tracking.

On Wed, Oct 28, 2009 at 12:11 AM, Adam Barth aba...@chromium.org wrote:

 My three laptops have relatively comparable hardware and run Chrome on
 Windows, Mac, and Linux respectively.  The Linux version of Chrome
 feels ridiculously faster than Windows and Mac.  Do we understand why
 this is?  Can we make Windows and Mac feel that fast too?

 General observations:

 1) Scroll performance is extremely good.  Even on Gmail, I can only
 get the mouse to lead the scroll bar by a dozen pixels.  On Slashdot,
 it doesn't even look like I can do that.

 2) Tab creation is very fast.  Maybe the zygote is helping here?  Can
 we pre-render the NTP on other platforms?

 3) Startup time is faster than calculator.

 Adam

 




-- 
Mike Pinkerton
Mac Weenie
pinker...@google.com

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Mark Mentovai

Darin Fisher wrote:
 I suspect this is at least one of the bigger issues.
 I also suspect that process creation is a problem on Windows.  We should
 probably look into having a spare child process on Windows to minimize new
 tab jank.  Maybe there is a bug on this already?

This shouldn't be restricted to Windows, we should do it on all
platforms.  And we should start the first one as early as possible
during the startup process.

When I benchmarked this a few months ago on a fairly ordinary Mac, it
took nearly 100ms from the time that the browser started a renderer to
the time that the renderer was ready to service requests.  A decent
chunk of that is load time and pre-main initialization in system
libraries.  It's beyond our control, but there's no reason we can't
make it happen sooner.

Mark

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Is there any plan to support Visual Studio 2010? (Current in Beta 2)

2009-10-28 Thread Marc-Antoine Ruel

[Subtly setting expectations here]

Updated http://crbug.com/25628 accordingly. You have your answer in
this feature request and read my comment on it before adding any
comment. (as in don't add any please)

Feel free to star it though.

But as a sane person, well, as sane as I can be, I can only say the
IDE is barely usable (sorry Borris, even if you fixed Intellisense)
which doesn't help in motivating me to add any kind of support to
VS2010 in the short term. Anyway most of the grunt work will be done
by either Brad or Steve so it's them you need to convince and they
have a lot of other much more needed things to do before.

Thanks for your interest, gyp patches to kick start a new msbuild
exporter are really welcomed.

M-A

On Wed, Oct 28, 2009 at 1:05 AM, sam mhsien.t...@gmail.com wrote:

 I saw this issue, which was accepted, in gyp project.
 http://code.google.com/p/gyp/issues/detail?id=96q=type%3DEnhancement

 Good to know this.

 BR,
 mht



 On Wed, Oct 28, 2009 at 12:16 PM, Dan Kegel d...@kegel.com wrote:
 On Tue, Oct 27, 2009 at 9:00 PM, mht mhsien.t...@gmail.com wrote:
 Some information about Visual Studio 2010. But question first: Is
 there any plan (long term / short term) to support it?

 It would probably be a good idea, eventually.
 If anyone wants to write an msbuild backend for gyp, have at it!
 - Dan


 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A Dictionary-Evaluation Plan

2009-10-28 Thread Evan Martin

2009/10/28 Hironori Bono (坊野 博典) hb...@chromium.org:
 Even though this is still a random thought, I would personally like to
 use chromium to evaluate the new dictionaries: i.e. uploading the new
 dictionaries to our dictionary server, changing the chromium code to
 use the updated ones, asking users to compare the new dictionaries
 with the old ones and give us their feedbacks (*1). If users like the
 new dictionaries, we would like to release the new ones. Otherwise we
 will keep the old ones.

In the web search world when we made changes like these, we'd try to
measure it without users giving explicit feedback.  For example: give
some users the new dictionary, and others the old one.  Log which
entry index from the dictionary suggestion list people are frequently
choosing, then compare the aggregate counts between the two sets of
users.  (Maybe call the Add to Dictionary menu option -1.)  For a
better dictionary, I'd expect people to use the earlier entries from
the suggestion list more frequently, and the Add to Dictionary
option less frequently.

This can be done with our existing histogram framework and I believe
Jim wrote a framework within that for doing these sorts of
experiments.

It still might be worth soliciting feedback from users directly.  For
example, if the new dictionary is missing a common word the above
measure would get a high count of Add to Dictionary, and maybe users
could tell us about this.  But in general, you get higher quality data
when you involve more users, and a spreadsheet will be limited to
people who understand the English instructions.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread Marc-Antoine Ruel

On Wed, Oct 28, 2009 at 1:30 AM, Darin Fisher da...@chromium.org wrote:
 I'm pretty sure that enabling USE_SYSTEM_MALLOC will also lead to corruption
 since WebKit is not hermetic (we allocate things externally that we then
 delete inside WebKit).
 -Darin

Wouha! That really limit our capacity to link wekbit.dll.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Improving our documentation

2009-10-28 Thread Mike Pinkerton

One of my personal OKRs for this quarter is to identify areas where we
need better docs, especially on Mac where we've been so busy getting
caught up that we haven't taken the time to explain how things work.

To that end, I've started a doc on our public Google Code wiki:

  http://code.google.com/p/chromium/wiki/DesignDocsToWrite

Feel free to add anything to it. It doesn't commit you to writing the
doc, we can always find someone down the line to do the work, but
knowing in what areas we need better coverage will help when we come
time to do a doc fixit. Eventually, we'll add all of these docs to
dev.chromium.org.

-- 
Mike Pinkerton
Mac Weenie
pinker...@google.com

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Evan Martin

On Tue, Oct 27, 2009 at 9:11 PM, Adam Barth aba...@chromium.org wrote:
 My three laptops have relatively comparable hardware and run Chrome on
 Windows, Mac, and Linux respectively.  The Linux version of Chrome
 feels ridiculously faster than Windows and Mac.  Do we understand why
 this is?  Can we make Windows and Mac feel that fast too?

My first instinct is to say because (1) we're awesome and (2) Linux is
awesome, but I'd prefer to have facts back it up.  :)

There's a perf link on http://build.chromium.org that has builders
tracking various metrics.  If we get perf tests for the behaviors you
care about, we can better compare and improve them.

On the other hand, I'm not sure if the hardware lines up between
platforms so maybe the comparisons I do below are not valid...

 General observations:

General comments: Linux tends to be lighter which means it does
better on older hardware, so depending on what sorts of laptops you're
talking about that could be a major factor.  Windowses later than 2000
or so need surprising amounts of hardware to run well.  (I don't
mention Mac below because there hasn't been much performance work
there yet.)

 1) Scroll performance is extremely good.  Even on Gmail, I can only
 get the mouse to lead the scroll bar by a dozen pixels.  On Slashdot,
 it doesn't even look like I can do that.

On plain pages (one scrollbar on the right, no Flash) scrolling is
literally shifting the pixels down.  On Linux we do this by sending a
command to the X server, which is a single process that even has the
graphics drivers built in so it talks directly to your graphics card
and can in theory do a hardware-accelerated copy.  I would expect this
to be pretty fast.

However, Gmail is a complicated page (the main scrollbar is an
iframe) so in that case I guess rendering speed is getting involved.
There I'd expect Windows Chrome to be faster because the compiler is
better and there have been more people looking at performance (I saw
in another thread that tcmalloc, currently only used on Windows,
improved the page cycler by 50%?).

The page cycler perf graphs are intended to test rendering speed.  Do
the numbers match your perception?  I can't get the right graphs to
load right now.  It looks like spew from NOTIMPLEMENTED()s may be
obscuring the data.

 2) Tab creation is very fast.  Maybe the zygote is helping here?  Can
 we pre-render the NTP on other platforms?

The zygote is paused right at process start, before we've even started
a renderer.  On the other hand Windows process creation is more
expensive.

There is a new tab graph that attempts to measure this.  The various
lines on the graph are tracking how quickly we get to each stage in
constructing the page.  We hit the first line 20ms faster on Linux
than Windows likely due to the zygote and slow Windows process
creation, but process startup seems to be a relatively small part of
the total time.  Linux hits other lines later and Linux and Windows
hit the finish line at around the same time.

In your case, I wonder if you have more history accumulated on your
Windows profile, making the new tab computation more expensive than
the equivalent one on the Linux box.

I'd expect the faster file system on Linux to eventually be help here.
 (My experience with git has been you get an order of magnitude slower
each step from Linux-Mac-Windows, but that could be git or
hardware-specific.)

 3) Startup time is faster than calculator.

I'm not sure if you're kidding.  Do you mean Windows calculator?
Maybe there's something wrong with your Windows box -- maybe a virus
scanner or disk indexer or some other crap procmon will show is
continually thrashing your computer.  Or maybe you have a spare Chrome
instance on another virtual desktop on your Linux box so clicking the
Chrome button is just telling it to show another window.

The startup tests are intended to track startup performance, and again
the Windows graphs are much better than the Linux ones.  However, the
difference between the two is milliseconds and my experience as a user
is that Chrome rarely starts that fast, so I wonder if these graphs
are really measuring what a user perceives (which frequently involves
disk).

In the limit, I'd expect us to pay a lot more on Linux due to using
more libraries, GTK initialization, round trips to the X server, etc.
but I don't know much about Windows here.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Reference build has been changed?

2009-10-28 Thread Anton Muhin

Dear chromerers,

Looks like reference build (for buildbots) has been changed recently.
Does anybody know exact build which is a reference now?

yours,
anton.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: [Green Tree] Task Force Status Update 10/26

2009-10-28 Thread Scott Violet

On Tue, Oct 27, 2009 at 11:37 PM, Paweł Hajdan Jr.
phajdan...@chromium.org wrote:
 On Wed, Oct 28, 2009 at 00:44, Ian Fette i...@chromium.org wrote:

 In an effort to provide more transparency into what the team is working
 on, I'm sending out the meeting notes from the green tree task force to
 chromium-dev, below. I will try to send further notes to chromium-dev from
 our meetings.

 Sounds great!


 Flakiness

 Brought down to acceptable levels for unit tests. Need help to keep it
 that way and not regress.

 I think that unit_tests are well and happy, but ui_tests are another story.
 We have UI tests which have more than 100 flaky flips in just two weeks. And
 we have UI tests which fail on almost every try run. IMHO such outstanding
 (is that the right word?) cases of flakiness should be fixed with a high
 enough priority.
 And we still have lots of problems with resource-related flakiness, like
 this:

 [FATAL:resource_bundle.cc(133)] Check failed: false.

 [FATAL:tab_renderer.cc(132)] Check failed: waiting_animation_frames-width()
 % waiting_animation_frames-height() == 0.

I suspect this happens when the theme resources aren't correctly
built. Perhaps we should have this check early on in ui tests so that
we don't run any tests if this check fails.

  -Scott

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: [Green Tree] Task Force Status Update 10/26

2009-10-28 Thread Paweł Hajdan Jr .
On Wed, Oct 28, 2009 at 16:35, Scott Violet s...@chromium.org wrote:

 I suspect this happens when the theme resources aren't correctly
 built. Perhaps we should have this check early on in ui tests so that
 we don't run any tests if this check fails.


Yeah, I was even thinking about a build step, right after the compile (so we
detect it early, on the builder - and seeing the patch that cause the
problem would be easier).

So there are two possibilities:

- theme resources are not rebuilt correctly
- .cc files depending on the resources are not rebuilt correctly

Where the second one is harder to debug.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Scott Violet

I'm confused by the diagram. In step 5, why does F' get added to the
model. Are you saying the 'extension cloud' service always creates a
new bookmark, without verifying if the model already has a matching
entry?

  -Scott

On Wed, Oct 21, 2009 at 5:08 PM, Tim Steele t...@chromium.org wrote:
 [re-sending from correct email account]
 Hi,
 I wrote up a document that discusses some interesting unintentional
 relationships that can exist between independent extensions, and how this
 general problem also currently affects the browser sync engine.  This issue
 was discovered from trying to explain the primary symptom of unusually high
 syncing traffic generated by Chrome clients.  Please find it here:
 A Tale of Two (or more) Sync Engines
 You should read that before continuing!
 This led to me thinking about what we do long term, short term, or basically
 before Chrome Sync and extensions are running in parallel in a beta channel
 environment. You'll see a bit of this at the end of the first document, but
 after posing the problem as an extensions problem I ended up at a random
 idea that I think makes at least a little sense, though I admit I was having
 fun thinking and writing about it so maybe I missed some major roadblocks
 along the way.  There are downsides, mainly revolving around the added
 hand-holding we would impose on extensions.  Please read! Hoping for
 comments and feedback. Extensions API quotaserver
 In addition to that, Colin and Todd (cc'ed) brought up some sync specific
 ideas they have (I mention it a bit at the end of the first doc).  We'll try
 to get a separate thread going about this soon!
 Thanks!
 Tim
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Erik Corry

Do you have anti-virus software on your Windows machines?

2009/10/28 Adam Barth aba...@chromium.org:

 My three laptops have relatively comparable hardware and run Chrome on
 Windows, Mac, and Linux respectively.  The Linux version of Chrome
 feels ridiculously faster than Windows and Mac.  Do we understand why
 this is?  Can we make Windows and Mac feel that fast too?

 General observations:

 1) Scroll performance is extremely good.  Even on Gmail, I can only
 get the mouse to lead the scroll bar by a dozen pixels.  On Slashdot,
 it doesn't even look like I can do that.

 2) Tab creation is very fast.  Maybe the zygote is helping here?  Can
 we pre-render the NTP on other platforms?

 3) Startup time is faster than calculator.

 Adam

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Marc-Antoine Ruel

An additional note:

Most Windows boxes have an AV installed while most linux boxes don't.
Never underestimate the sluggishness of AVs.

M-A

On Wed, Oct 28, 2009 at 11:05 AM, Evan Martin e...@chromium.org wrote:

 On Tue, Oct 27, 2009 at 9:11 PM, Adam Barth aba...@chromium.org wrote:
 My three laptops have relatively comparable hardware and run Chrome on
 Windows, Mac, and Linux respectively.  The Linux version of Chrome
 feels ridiculously faster than Windows and Mac.  Do we understand why
 this is?  Can we make Windows and Mac feel that fast too?

 My first instinct is to say because (1) we're awesome and (2) Linux is
 awesome, but I'd prefer to have facts back it up.  :)

 There's a perf link on http://build.chromium.org that has builders
 tracking various metrics.  If we get perf tests for the behaviors you
 care about, we can better compare and improve them.

 On the other hand, I'm not sure if the hardware lines up between
 platforms so maybe the comparisons I do below are not valid...

 General observations:

 General comments: Linux tends to be lighter which means it does
 better on older hardware, so depending on what sorts of laptops you're
 talking about that could be a major factor.  Windowses later than 2000
 or so need surprising amounts of hardware to run well.  (I don't
 mention Mac below because there hasn't been much performance work
 there yet.)

 1) Scroll performance is extremely good.  Even on Gmail, I can only
 get the mouse to lead the scroll bar by a dozen pixels.  On Slashdot,
 it doesn't even look like I can do that.

 On plain pages (one scrollbar on the right, no Flash) scrolling is
 literally shifting the pixels down.  On Linux we do this by sending a
 command to the X server, which is a single process that even has the
 graphics drivers built in so it talks directly to your graphics card
 and can in theory do a hardware-accelerated copy.  I would expect this
 to be pretty fast.

 However, Gmail is a complicated page (the main scrollbar is an
 iframe) so in that case I guess rendering speed is getting involved.
 There I'd expect Windows Chrome to be faster because the compiler is
 better and there have been more people looking at performance (I saw
 in another thread that tcmalloc, currently only used on Windows,
 improved the page cycler by 50%?).

 The page cycler perf graphs are intended to test rendering speed.  Do
 the numbers match your perception?  I can't get the right graphs to
 load right now.  It looks like spew from NOTIMPLEMENTED()s may be
 obscuring the data.

 2) Tab creation is very fast.  Maybe the zygote is helping here?  Can
 we pre-render the NTP on other platforms?

 The zygote is paused right at process start, before we've even started
 a renderer.  On the other hand Windows process creation is more
 expensive.

 There is a new tab graph that attempts to measure this.  The various
 lines on the graph are tracking how quickly we get to each stage in
 constructing the page.  We hit the first line 20ms faster on Linux
 than Windows likely due to the zygote and slow Windows process
 creation, but process startup seems to be a relatively small part of
 the total time.  Linux hits other lines later and Linux and Windows
 hit the finish line at around the same time.

 In your case, I wonder if you have more history accumulated on your
 Windows profile, making the new tab computation more expensive than
 the equivalent one on the Linux box.

 I'd expect the faster file system on Linux to eventually be help here.
  (My experience with git has been you get an order of magnitude slower
 each step from Linux-Mac-Windows, but that could be git or
 hardware-specific.)

 3) Startup time is faster than calculator.

 I'm not sure if you're kidding.  Do you mean Windows calculator?
 Maybe there's something wrong with your Windows box -- maybe a virus
 scanner or disk indexer or some other crap procmon will show is
 continually thrashing your computer.  Or maybe you have a spare Chrome
 instance on another virtual desktop on your Linux box so clicking the
 Chrome button is just telling it to show another window.

 The startup tests are intended to track startup performance, and again
 the Windows graphs are much better than the Linux ones.  However, the
 difference between the two is milliseconds and my experience as a user
 is that Chrome rarely starts that fast, so I wonder if these graphs
 are really measuring what a user perceives (which frequently involves
 disk).

 In the limit, I'd expect us to pay a lot more on Linux due to using
 more libraries, GTK initialization, round trips to the X server, etc.
 but I don't know much about Windows here.

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Dan Kegel

On Wed, Oct 28, 2009 at 8:05 AM, Evan Martin e...@chromium.org wrote:
 3) Startup time is faster than calculator.

 I'm not sure if you're kidding.  Do you mean Windows calculator?

On my home linux box (Jaunty, reasonably fast),
warm startup time of chrome is less
than the warm startup time of gnome calculator.
Strange but true.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Evan Martin

On Wed, Oct 28, 2009 at 9:07 AM, Dan Kegel d...@kegel.com wrote:
 On Wed, Oct 28, 2009 at 8:05 AM, Evan Martin e...@chromium.org wrote:
 3) Startup time is faster than calculator.

 I'm not sure if you're kidding.  Do you mean Windows calculator?

 On my home linux box (Jaunty, reasonably fast),
 warm startup time of chrome is less
 than the warm startup time of gnome calculator.
 Strange but true.

Yeah, that's why I was asking -- it wouldn't surprise me if we were
faster than the GNOME one.  Without any numbers I'd blame the icon
theme stuff (Elliot found it can do a truly ludicrous number of disk
accesses on startup).

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A Dictionary-Evaluation Plan

2009-10-28 Thread Jim Roskind
Will we have any chance to ship both, and randomly select (at startup
time??) between the two dictionaries?  Alternatively, could we ship a series
of dev builds, and alternate use of old an new dictionaries.

The bottom line IMO is that when running experiments, you need the closest
to apples-apples comparisons.  Comparisons across different user groups
(beta vs stable?), and different time periods (last months, vs next month),
tend to include too much noise.  Also note that experiments often have long
lasting impact, by which I mean that IF the dictionary is problematic,
then users will proceed into the future to avoid using it, and this will
persist even after we correct the problems.

Do you have any way of supporting and (without asking the user) providing
new vs old dictionaries?

Thanks,

Jim

2009/10/28 Evan Martin e...@chromium.org

 2009/10/28 Hironori Bono (坊野 博典) hb...@chromium.org:
  Even though this is still a random thought, I would personally like to
  use chromium to evaluate the new dictionaries: i.e. uploading the new
  dictionaries to our dictionary server, changing the chromium code to
  use the updated ones, asking users to compare the new dictionaries
  with the old ones and give us their feedbacks (*1). If users like the
  new dictionaries, we would like to release the new ones. Otherwise we
  will keep the old ones.

 In the web search world when we made changes like these, we'd try to
 measure it without users giving explicit feedback.  For example: give
 some users the new dictionary, and others the old one.  Log which
 entry index from the dictionary suggestion list people are frequently
 choosing, then compare the aggregate counts between the two sets of
 users.  (Maybe call the Add to Dictionary menu option -1.)  For a
 better dictionary, I'd expect people to use the earlier entries from
 the suggestion list more frequently, and the Add to Dictionary
 option less frequently.

 This can be done with our existing histogram framework and I believe
 Jim wrote a framework within that for doing these sorts of
 experiments.

 It still might be worth soliciting feedback from users directly.  For
 example, if the new dictionary is missing a common word the above
 measure would get a high count of Add to Dictionary, and maybe users
 could tell us about this.  But in general, you get higher quality data
 when you involve more users, and a spreadsheet will be limited to
 people who understand the English instructions.


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Is there any plan to support Visual Studio 2010? (Current in Beta 2)

2009-10-28 Thread Amit Joshi
Is the compiler tool chain 64-bit in VS2010? In that case, the biggest
advantage may be to produce WPO builds on 64 bit machines.

On Wed, Oct 28, 2009 at 7:50 AM, Marc-Antoine Ruel mar...@chromium.orgwrote:


 [Subtly setting expectations here]

 Updated http://crbug.com/25628 accordingly. You have your answer in
 this feature request and read my comment on it before adding any
 comment. (as in don't add any please)

 Feel free to star it though.

 But as a sane person, well, as sane as I can be, I can only say the
 IDE is barely usable (sorry Borris, even if you fixed Intellisense)
 which doesn't help in motivating me to add any kind of support to
 VS2010 in the short term. Anyway most of the grunt work will be done
 by either Brad or Steve so it's them you need to convince and they
 have a lot of other much more needed things to do before.

 Thanks for your interest, gyp patches to kick start a new msbuild
 exporter are really welcomed.

 M-A

 On Wed, Oct 28, 2009 at 1:05 AM, sam mhsien.t...@gmail.com wrote:
 
  I saw this issue, which was accepted, in gyp project.
  http://code.google.com/p/gyp/issues/detail?id=96q=type%3DEnhancement
 
  Good to know this.
 
  BR,
  mht
 
 
 
  On Wed, Oct 28, 2009 at 12:16 PM, Dan Kegel d...@kegel.com wrote:
  On Tue, Oct 27, 2009 at 9:00 PM, mht mhsien.t...@gmail.com wrote:
  Some information about Visual Studio 2010. But question first: Is
  there any plan (long term / short term) to support it?
 
  It would probably be a good idea, eventually.
  If anyone wants to write an msbuild backend for gyp, have at it!
  - Dan
 
 
  
 

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Reference build has been changed?

2009-10-28 Thread Tony Chang

From the svn log:
r30141 | ch...@chromium.org | 2009-10-26 18:00:16 -0700 (Mon, 26 Oct
2009) | 6 lines

Update Windows reference build to r30072.

BUG=25200
TEST=ref build runs locally, buildbot tests continue
to work
Review URL: http://codereview.chromium.org/339015


On Wed, Oct 28, 2009 at 8:14 AM, Anton Muhin ant...@chromium.org wrote:

 Dear chromerers,

 Looks like reference build (for buildbots) has been changed recently.
 Does anybody know exact build which is a reference now?

 yours,
 anton.

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Reference build has been changed?

2009-10-28 Thread Anton Muhin

Thanks a lot, Tony.

yours,
anton.

On Wed, Oct 28, 2009 at 7:44 PM, Tony Chang t...@chromium.org wrote:
 From the svn log:
 r30141 | ch...@chromium.org | 2009-10-26 18:00:16 -0700 (Mon, 26 Oct
 2009) | 6 lines

 Update Windows reference build to r30072.

 BUG=25200
 TEST=ref build runs locally, buildbot tests continue
 to work
 Review URL: http://codereview.chromium.org/339015


 On Wed, Oct 28, 2009 at 8:14 AM, Anton Muhin ant...@chromium.org wrote:

 Dear chromerers,

 Looks like reference build (for buildbots) has been changed recently.
 Does anybody know exact build which is a reference now?

 yours,
 anton.

 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Reference build has been changed?

2009-10-28 Thread Marc-Antoine Ruel

FTR, you could have got the same info with:

src\chrome\tools\test\reference_build\chromechrome about:version

M-A

On Wed, Oct 28, 2009 at 12:46 PM, Anton Muhin ant...@chromium.org wrote:

 Thanks a lot, Tony.

 yours,
 anton.

 On Wed, Oct 28, 2009 at 7:44 PM, Tony Chang t...@chromium.org wrote:
 From the svn log:
 r30141 | ch...@chromium.org | 2009-10-26 18:00:16 -0700 (Mon, 26 Oct
 2009) | 6 lines

 Update Windows reference build to r30072.

 BUG=25200
 TEST=ref build runs locally, buildbot tests continue
 to work
 Review URL: http://codereview.chromium.org/339015


 On Wed, Oct 28, 2009 at 8:14 AM, Anton Muhin ant...@chromium.org wrote:

 Dear chromerers,

 Looks like reference build (for buildbots) has been changed recently.
 Does anybody know exact build which is a reference now?

 yours,
 anton.

 



 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Reference build has been changed?

2009-10-28 Thread Anton Muhin

Cool, thanks a lot.

yours,
anton.

On Wed, Oct 28, 2009 at 7:48 PM, Marc-Antoine Ruel mar...@chromium.org wrote:
 FTR, you could have got the same info with:

 src\chrome\tools\test\reference_build\chromechrome about:version

 M-A

 On Wed, Oct 28, 2009 at 12:46 PM, Anton Muhin ant...@chromium.org wrote:

 Thanks a lot, Tony.

 yours,
 anton.

 On Wed, Oct 28, 2009 at 7:44 PM, Tony Chang t...@chromium.org wrote:
 From the svn log:
 r30141 | ch...@chromium.org | 2009-10-26 18:00:16 -0700 (Mon, 26 Oct
 2009) | 6 lines

 Update Windows reference build to r30072.

 BUG=25200
 TEST=ref build runs locally, buildbot tests continue
 to work
 Review URL: http://codereview.chromium.org/339015


 On Wed, Oct 28, 2009 at 8:14 AM, Anton Muhin ant...@chromium.org wrote:

 Dear chromerers,

 Looks like reference build (for buildbots) has been changed recently.
 Does anybody know exact build which is a reference now?

 yours,
 anton.

 



 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Elliot Glaysher (Chromium)

On Wed, Oct 28, 2009 at 9:20 AM, Evan Martin e...@chromium.org wrote:
 3) Startup time is faster than calculator.

 I'm not sure if you're kidding.  Do you mean Windows calculator?

 On my home linux box (Jaunty, reasonably fast),
 warm startup time of chrome is less
 than the warm startup time of gnome calculator.
 Strange but true.

 Yeah, that's why I was asking -- it wouldn't surprise me if we were
 faster than the GNOME one.  Without any numbers I'd blame the icon
 theme stuff (Elliot found it can do a truly ludicrous number of disk
 accesses on startup).

Here's an experiment: Set Options  Personal Stuff  Use GTK+ Theme.
Close chrome. Now try another warm start of chrome. Is it still faster
than gnome calculator? If not, it's because you were using the classic
chrome theme, which doesn't have to warm up GTK's *per process* icon
cache. strace gnome-calculator and watch the output for reading
directories and files with icons in them and be blown away by the
amount of work done.

-- Elliot

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Jens Alfke


On Oct 28, 2009, at 7:21 AM, Mark Mentovai wrote:

 When I benchmarked this a few months ago on a fairly ordinary Mac, it
 took nearly 100ms from the time that the browser started a renderer to
 the time that the renderer was ready to service requests.  A decent
 chunk of that is load time and pre-main initialization in system
 libraries.  It's beyond our control, but there's no reason we can't
 make it happen sooner.

Unfortunately, it's nearly impossible to continue a forked process on  
OS X if it uses any higher-level (above POSIX) APIs. The main problem  
is that Mach ports can't be replicated across the fork, so if any  
ports were already open, they'll all be bogus in the new process. And  
all kinds of stuff in the OS is done via IPC across Mach ports, most  
significantly to the window server.

It might be possible to create a forkable renderer by doing as much  
setup as possible without actually invoking any OS X-specific APIs,  
then initializing the rest after the fork. I don't know if this has  
ever been tried, or if it would provide sufficient improvement to be  
worth the effort.

I would expect that rendering speed would suffer somewhat due to the  
extra layer of pixel buffering incurred by Chrome's renderers. Has  
anyone experimented with giving the renderer access to a child window  
of the browser to allow it to draw more directly?

—Jens
--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Nick Carter
On Wed, Oct 28, 2009 at 8:46 AM, Scott Violet s...@chromium.org wrote:


 I'm confused by the diagram. In step 5, why does F' get added to the
 model. Are you saying the 'extension cloud' service always creates a
 new bookmark, without verifying if the model already has a matching
 entry?


This is indeed the behavior we're seeing with one
extensionhttp://www.uniformedopinion.com/ -- delete
all old bookmarks, write a whole new copy.

 - nick



  -Scott

 On Wed, Oct 21, 2009 at 5:08 PM, Tim Steele t...@chromium.org wrote:
  [re-sending from correct email account]
  Hi,
  I wrote up a document that discusses some interesting unintentional
  relationships that can exist between independent extensions, and how this
  general problem also currently affects the browser sync engine.  This
 issue
  was discovered from trying to explain the primary symptom of unusually
 high
  syncing traffic generated by Chrome clients.  Please find it here:
  A Tale of Two (or more) Sync Engines
  You should read that before continuing!
  This led to me thinking about what we do long term, short term, or
 basically
  before Chrome Sync and extensions are running in parallel in a beta
 channel
  environment. You'll see a bit of this at the end of the first document,
 but
  after posing the problem as an extensions problem I ended up at a random
  idea that I think makes at least a little sense, though I admit I was
 having
  fun thinking and writing about it so maybe I missed some major roadblocks
  along the way.  There are downsides, mainly revolving around the added
  hand-holding we would impose on extensions.  Please read! Hoping for
  comments and feedback. Extensions API quotaserver
  In addition to that, Colin and Todd (cc'ed) brought up some sync specific
  ideas they have (I mention it a bit at the end of the first doc).  We'll
 try
  to get a separate thread going about this soon!
  Thanks!
  Tim
  
 

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Amanda Walker
On Wed, Oct 28, 2009 at 1:39 PM, Jens Alfke s...@google.com wrote:

 Unfortunately, it's nearly impossible to continue a forked process on
 OS X if it uses any higher-level (above POSIX) APIs.


Nothing says we have to use fork().  Always having a renderer process
started and waiting for instructions could also be done via other
mechanisms.  The same issue affects plugin startup time (e.g., Flash, i.e.,
YouTube :-)).

I would expect that rendering speed would suffer somewhat due to the
 extra layer of pixel buffering incurred by Chrome's renderers. Has
 anyone experimented with giving the renderer access to a child window
 of the browser to allow it to draw more directly?


There are no public APIs for cross-process child window rendering or
grouping.  10.6 introduces IOSurface, which is roughly speaking a shared GPU
texture, which could be useful once the renderer is GPU accelerated.

We could of course use private APIs, but it would be nice to avoid that at
least for 10.6 and above.

--Amanda

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread Jens Alfke

On Oct 27, 2009, at 9:10 PM, Mike Belshe wrote:

 From a performance perspective, it may be critical to use tcmalloc  
 to match safari performance.  It was literally a 50% speedup on most  
 of the DOM perf when running on WinXP.

Yeah, I've profiled some of the Dromaeo benchmarks, and the DOM- 
mutation test in particular is spending huge amounts of time in malloc  
and free.

Should I open a bug on this task, then?

 I suspect this will use the version of TCMalloc which is embedded in  
 WTF.  I'd recommend against this approach.  We should try to use a  
 single allocator for the entire app

I agree; no sense linking in two different versions of tcmalloc.

 There is a disadvantage.  I suspect Apple is farther along in  
 optimizing the mac-specific optimizations of tcmalloc than the  
 google3 guys are.

I would say more generally client-specific optimizations. Some of  
the recent memory-bloat issues found by the Memory taskforce (jamesr  
in particular) show that baseline tcmalloc is tuned for server  
environments where memory footprint is much less of an issue.

—Jens
--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Aaron Boodman

On Wed, Oct 28, 2009 at 10:42 AM, Nick Carter n...@chromium.org wrote:
 On Wed, Oct 28, 2009 at 8:46 AM, Scott Violet s...@chromium.org wrote:

 I'm confused by the diagram. In step 5, why does F' get added to the
 model. Are you saying the 'extension cloud' service always creates a
 new bookmark, without verifying if the model already has a matching
 entry?

 This is indeed the behavior we're seeing with one extension -- delete all
 old bookmarks, write a whole new copy.

One thing I forgot I wanted to point out about that extension in
particular. In Tim's proposal he says:

...in Chrome Sync, we try really hard to not get into feedback loops
with ourselves, and not to make unnecessary / no-op changes, but we
have seen at least one extension (GBX) periodically delete all
bookmarks and then recreate all bookmarks with no changes to the nodes
in question. That seems almost positively unnecessary.  If we detect
that they have removed the same tree (from the same root position) 10
times in one day, this does really seem fishy and enough to trigger
limiting.  But at that point the onus is on extension developers, not
us.

It sounds like Tim agrees here, but I just want to reiterate: we
should not be the arbiters of quality or taste wrt extensions. There
will be buggy and crappy extensions. We should try and design things
such that you have to go out of your way to write crappy things, but
at the end of the day we want this to be an open environment where
people surprise us -- both in good and bad ways :).

If we need to protect our own systems that is fine, but other than
that we shouldn't prevent things in the extensions system, even if
they don't seem to make much sense.

-a

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Adam Barth

On Wed, Oct 28, 2009 at 8:47 AM, Erik Corry erik.co...@gmail.com wrote:
 Do you have anti-virus software on your Windows machines?

No.  I could editorialize here, but I wont.

Adam

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Mark Mentovai

Jens Alfke wrote:
 Unfortunately, it's nearly impossible to continue a forked process on OS X
 if it uses any higher-level (above POSIX) APIs. The main problem is that
 Mach ports can't be replicated across the fork, so if any ports were already
 open, they'll all be bogus in the new process. And all kinds of stuff in the
 OS is done via IPC across Mach ports, most significantly to the window
 server.

Sure, we understand that.  Why does that become a concern with
pre-warmed renderers in a way that it's not with the renderers we're
using now?

My proposal is to fork a new process, exec the renderer, and then let
it bring itself up.  That's exactly how we start renderers now.  The
only difference is that I'm suggesting we should always keep a spare
one warmed up and ready to go, and we should start the initial one
sooner instead of waiting for something in the browser to say um, I'm
gonna need a renderer.  We can pretty much guarantee that we'll
always need a renderer, let's give it a head start.

I don't want to just pre-fork a process and have it sit around with
its thumb up its Mach port.  That wouldn't really gain us much on the
Mac anyway, because our fork is relatively cheap.  As I mentioned, the
big losses that we experience in bringing up a new renderer process
are in loading and initialization.

Mark

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Mac renderer launch (Was: Why is Linux Chrome so fast?)

2009-10-28 Thread Jens Alfke


On Oct 28, 2009, at 11:08 AM, Mark Mentovai wrote:

 My proposal is to fork a new process, exec the renderer, and then let
 it bring itself up.  That's exactly how we start renderers now.  The
 only difference is that I'm suggesting we should always keep a spare
 one warmed up and ready to go,

How much would that increase memory use? (says the guy on the Memory  
task force...) I.e. what's the RPRVT of a warmed-up renderer process?

 and we should start the initial one
 sooner instead of waiting for something in the browser to say um, I'm
 gonna need a renderer.  We can pretty much guarantee that we'll
 always need a renderer, let's give it a head start.

If bringing up the first renderer is CPU-bound, that would be a great  
idea. If it's disk-bound, then it could have a negative effect on  
launch time. Do we have profiles/samples of renderer launch, both warm  
and cold?

—Jens
--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread Mike Belshe
On Wed, Oct 28, 2009 at 10:54 AM, Jens Alfke s...@chromium.org wrote:


 On Oct 27, 2009, at 9:10 PM, Mike Belshe wrote:

 From a performance perspective, it may be critical to use tcmalloc to match
 safari performance.  It was literally a 50% speedup on most of the DOM perf
 when running on WinXP.


 Yeah, I've profiled some of the Dromaeo benchmarks, and the DOM-mutation
 test in particular is spending huge amounts of time in malloc and free.

 Should I open a bug on this task, then?


SGTM.



 I suspect this will use the version of TCMalloc which is embedded in WTF.
  I'd recommend against this approach.  We should try to use a single
 allocator for the entire app


 I agree; no sense linking in two different versions of tcmalloc.

 There is a disadvantage.  I suspect Apple is farther along in optimizing
 the mac-specific optimizations of tcmalloc than the google3 guys are.


 I would say more generally *client*-specific optimizations. Some of the
 recent memory-bloat issues found by the Memory taskforce (jamesr in
 particular) show that baseline tcmalloc is tuned for server environments
 where memory footprint is much less of an issue.


That may be true too, but, having looked at what they've done, I don't think
they've done a lot to fix this problem (yet).  Chrome now uses a lot less
memory than safari.

I was really pointing out the mac-port potential problems.  See tcmalloc's
TCMalloc_SystemAlloc() routines ported to the mac.  I don't know that the
google3 tools has a mac version at all?  tcmalloc was designed for linux
which leverages a cute trick using MADV_DONT_NEED (is this MADV_FREE_REUSE
on the mac?) to release memory (sorta).It may be that we can port the
webkit implementations for mac back to the google3-perftools open source
project, and also make it release memory better.

If you pick up this task, you can ask Jim, James or me for details; I think
we've all been fairly deep into tcmalloc to help get you ramped up quickly.


Mike





 —Jens


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Mac renderer launch (Was: Why is Linux Chrome so fast?)

2009-10-28 Thread Mark Mentovai

Jens Alfke wrote:
 How much would that increase memory use? (says the guy on the Memory task
 force...) I.e. what's the RPRVT of a warmed-up renderer process?

Does it matter?  At least for the startup case, that's a renderer we
know we'll need anyway.

You could use this argument to shoot down keeping a spare warmed-up
renderer ready to go at other times, but I don't think it's relevant
to the startup case.

 If bringing up the first renderer is CPU-bound, that would be a great idea.
 If it's disk-bound, then it could have a negative effect on launch time. Do
 we have profiles/samples of renderer launch, both warm and cold?

I suspect that most (but not all) of the stuff that the renderer needs
to read to warm itself up will already be in the buffer cache.  The
renderer shouldn't be doing much writing.  But we could profile it.

Mark

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Scott Violet

On Wed, Oct 28, 2009 at 10:42 AM, Nick Carter n...@chromium.org wrote:
 On Wed, Oct 28, 2009 at 8:46 AM, Scott Violet s...@chromium.org wrote:

 I'm confused by the diagram. In step 5, why does F' get added to the
 model. Are you saying the 'extension cloud' service always creates a
 new bookmark, without verifying if the model already has a matching
 entry?

 This is indeed the behavior we're seeing with one extension -- delete all
 old bookmarks, write a whole new copy.

Ugh!

I don't think there is going to be a way to make it impossible to
write poorly written extensions. Perhaps sync should have a way to
detect lots of mutations from an extension and then disable either the
extension or itself at some point with a suitable warning. It should
certainly be possible to track number of mutations from an extension
and to know which extension is the result of the mutations.

  -Scott


  - nick


  -Scott

 On Wed, Oct 21, 2009 at 5:08 PM, Tim Steele t...@chromium.org wrote:
  [re-sending from correct email account]
  Hi,
  I wrote up a document that discusses some interesting unintentional
  relationships that can exist between independent extensions, and how
  this
  general problem also currently affects the browser sync engine.  This
  issue
  was discovered from trying to explain the primary symptom of unusually
  high
  syncing traffic generated by Chrome clients.  Please find it here:
  A Tale of Two (or more) Sync Engines
  You should read that before continuing!
  This led to me thinking about what we do long term, short term, or
  basically
  before Chrome Sync and extensions are running in parallel in a beta
  channel
  environment. You'll see a bit of this at the end of the first document,
  but
  after posing the problem as an extensions problem I ended up at a random
  idea that I think makes at least a little sense, though I admit I was
  having
  fun thinking and writing about it so maybe I missed some major
  roadblocks
  along the way.  There are downsides, mainly revolving around the added
  hand-holding we would impose on extensions.  Please read! Hoping for
  comments and feedback. Extensions API quotaserver
  In addition to that, Colin and Todd (cc'ed) brought up some sync
  specific
  ideas they have (I mention it a bit at the end of the first doc).  We'll
  try
  to get a separate thread going about this soon!
  Thanks!
  Tim
  
 

 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread 陈智昌

On Tue, Oct 27, 2009 at 9:10 PM, Mike Belshe mbel...@google.com wrote:
 On Tue, Oct 27, 2009 at 3:24 PM, Jens Alfke s...@chromium.org wrote:

 Do we plan to switch the Mac build of Chromium to use tcmalloc instead of
 the system malloc? I thought this was the case, but I can't find any bug
 covering the task to do that. I'm on the memory task force, and this
 decision could definitely have an impact (one direction or the other) on
 footprint on Mac.

 From a performance perspective, it may be critical to use tcmalloc to match
 safari performance.  It was literally a 50% speedup on most of the DOM perf
 when running on WinXP.

Just thought I'd throw in the data point that I've experimented with
enabling tcmalloc for Linux builds of Chromium and the perfbots had
negligible changes.  I haven't had time to investigate why.  I'd
recommend you enable it for a run on the perfbots before you spend
much time tweaking the mac port of tcmalloc.



 I just tried enabling tcmalloc (by changing USE_SYSTEM_MALLOC to 0 in
 JavaScriptCore.gyp), and Chromium launches but logs a ton of warnings about
 unknown pointers being freed.

 I suspect this will use the version of TCMalloc which is embedded in WTF.
  I'd recommend against this approach.  We should try to use a single
 allocator for the entire app, and specialize when necessary.  Having the
 rendering engine drive the allocator is a bit backwards; it seems better for
 chrome itself to choose the allocator and then the rendering engine should
 use that.
 To make this work, you'll need to figure out whatever build magic is
 necessary on the Mac; I'm kinda clueless in that regard, but if you want to
 know what we did on Windows, I'm happy to share.
 You'll find the tcmalloc version that we use on windows available in
 third_party/tcmalloc.
 Keep in mind also that the version of tcmalloc in webcore is heavily
 modified by apple.  Some of those changes have been similarly made in
 tcmalloc's open source project, but others have not.  Apple has not seemed
 interested in syncing back and appears to be on the fork it route.
 Using third_party/tcmalloc will offer a couple of advantages:
    - we are continuing to improve it all the way from google3 to the
 google-perftools to chrome.
    - it will provide the same allocator on windows and mac (easier
 debugging)
    - the chromium implementation allows for selection of multiple allocators
 fairly easily.  Using the allocator_shim.cc, you can plug in your own
 allocators pretty quickly.
 There is a disadvantage.  I suspect Apple is farther along in optimizing the
 mac-specific optimizations of tcmalloc than the google3 guys are.  You may
 have to stumble into these.  Either Jim or I could probably give you some
 pointers for what to look at.  tcmalloc is solid, but the ports other than
 linux have had some very serious port related bugs.
 Mike


 I think this happens because tcmalloc is built by the 'wtf' target of
 JavaScriptCore.xcodeproj, which is a static library, so we may be ending up
 with multiple instances of the library with their own copies of its globals,
 leading to this problem if one instance allocates a block and another one
 tries to free it. But this usually happens only if there are multiple
 dynamic libraries using the same static library — do we have more than just
 Chromium.framework?
 —Jens



 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Tim Steele
On Tue, Oct 27, 2009 at 11:00 PM, Aaron Boodman a...@chromium.org wrote:

 On Tue, Oct 27, 2009 at 8:55 PM, Tim Steele t...@chromium.org wrote:
  I can take a stab at more formal heuristics for bookmarks, at least.  We
  will have a better idea of actual limiting parameters for bookmarks (as
 in
  how many operations in a certain time frame is reasonable) once the
  ExtensionsActivityMonitor I just landed percolates and we can aggregate
  across representative sample data it produces.

 A couple thoughts:

 a) I think it is overly clever to hash the changes to the bookmarks
 and count per unique-params. This can be easily or accidentally
 defeated by just doing something like update({title:foo}),
 update({url:blech}), over and over, anyway. Instead, at least for
 bookmarks, I think a simple per-item-count is very reasonable. It
 doesn't make sense to me to update the same bookmark more than a few
 times per minute. An easy heuristic could be that updating the same
 bookmark more than twice a minute sustained over 10 minutes. For
 creates it's a bit tricker. In that case, maybe the best we can do is
 the same combination of properties.


The update{foo} update{blech} case is most likely a different kind of
failure, though, and I was thinking we could limit that with a generic cap
on just the number of updates in a period of time. From the data we have
seen so far, most common is 'update{foo}' in a loop type behavior. But if
the generic cap gets us far enough along, then I agree simpler is better.


 b) What is wrong with solving this by rate limiting the communication
 between the Chrome sync client and the Chrome sync server? It seems
 like that would be more foolproof; if the client tries to send too
 many updates for any reason, they are rate limited.


We have this. The problem is that the user doesn't even realize the
extension is spamming the server. Our server knows the client is producing a
lot of traffic, that's all. So what happens is we limit the client and the
user is left bewildered and helpless because a rogue extension is eating
away his quota.  What I just landed was a way to correlate how many of our
sync commits originate from extensions, but we need to find a way to solve
the problem once we learn from the data.  The reason I suggested this, is
because it dawned on me that this problem affects any extension author
trying to send updates from Chrome to their servers. If extension Bob and
Eve are installed, Eve can bring down Bob's service because of a silly bug.
 I was proposing we just add some layer of protection in here, because we
can, to help our extensions developers out (and Chrome Sync in the process).


 - a


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Tim Steele



 Ugh!

 I don't think there is going to be a way to make it impossible to
 write poorly written extensions. Perhaps sync should have a way to
 detect lots of mutations from an extension and then disable either the
 extension or itself at some point with a suitable warning. It should
 certainly be possible to track number of mutations from an extension
 and to know which extension is the result of the mutations.


Yes, I just added ExtensionsActivityMonitor to get this kind of data.  Our
quickfix idea is precisely to do this and kill either the extension or
Chrome Sync if we detect that things get out of hand.  But this feels like a
problem that could be nipped at a lower level than Chrome Sync. If we have
this kind of data available, we could make it available to the developers by
reporting quota-limited extensions so they can realize that their extension
is acting wrongly. Maybe they already know and/or won't do anything about
it.  Either way, we will likely need to disable that extension *anyway *if
it is bringing down *our* service. So if we have to solve this problem for
us, I don't see a great reason not to offer the help to them (e.g if
Chrome Sync isn't installed) by reporting the data back and choking the
traffic at its origin, instead of falling back and relying on our sync
servers to differentiate traffic from different parts of the client (which
would have to be done at the point when our actual Chrome sync protocol
messages are being processed, which means all the server infrastructure
treats these requests as valid until quite late in the chain eating up
resources along the way).

Granted, I'm not an extensions developer, but I have a hard time believing
this wouldn't be a useful and friendly feature to offer.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Adam Barth

On Wed, Oct 28, 2009 at 8:05 AM, Evan Martin e...@chromium.org wrote:
 General comments: Linux tends to be lighter which means it does
 better on older hardware, so depending on what sorts of laptops you're
 talking about that could be a major factor.  Windowses later than 2000
 or so need surprising amounts of hardware to run well.  (I don't
 mention Mac below because there hasn't been much performance work
 there yet.)

I pulled out the laptops side-by-side to be more scientific about
this.  Here are the stats:

XP: 2GB RAM, Core2 Duo, 2.00 GHz
Ubuntu 9.10: 2GB RAM, Core 2 Duo, 2.40 GHz

So, the Linux machine as 20% more CPU to work with.

 1) Scroll performance is extremely good.  Even on Gmail, I can only
 get the mouse to lead the scroll bar by a dozen pixels.  On Slashdot,
 it doesn't even look like I can do that.

 On plain pages (one scrollbar on the right, no Flash) scrolling is
 literally shifting the pixels down.  On Linux we do this by sending a
 command to the X server, which is a single process that even has the
 graphics drivers built in so it talks directly to your graphics card
 and can in theory do a hardware-accelerated copy.  I would expect this
 to be pretty fast.

Looking at this more carefully, scroll performance on Slashdot is
great in both Windows and Linux.  On Gmail (no chat mole), there's a
noticeable difference.  Here's a visualization of the numb on the
scroll bar:

||
||
||
||
||
||
--  -- Click here and pull down
--
--  -- Linux: mouse latency gets to here
||
||  -- Windows: mouse latency gets to here
||
||
||
||

Admittedly, it's hard to see precisely, but it affects the feel.
Scroll on Windows feels slightly heavier.

 2) Tab creation is very fast.  Maybe the zygote is helping here?  Can
 we pre-render the NTP on other platforms?

 The zygote is paused right at process start, before we've even started
 a renderer.  On the other hand Windows process creation is more
 expensive.

 There is a new tab graph that attempts to measure this.  The various
 lines on the graph are tracking how quickly we get to each stage in
 constructing the page.  We hit the first line 20ms faster on Linux
 than Windows likely due to the zygote and slow Windows process
 creation, but process startup seems to be a relatively small part of
 the total time.  Linux hits other lines later and Linux and Windows
 hit the finish line at around the same time.

So, I retried this with a fresh profile on both.  The differences are
not as dramatic as I remember.  I can't actually see a difference when
I run them side-by-side.

 3) Startup time is faster than calculator.

 I'm not sure if you're kidding.  Do you mean Windows calculator?

I meant Linux calculator.

 In the limit, I'd expect us to pay a lot more on Linux due to using
 more libraries, GTK initialization, round trips to the X server, etc.
 but I don't know much about Windows here.

I tried turning on the GTK theme.  That killed startup performance.

Side-by-side startup easily noticeably faster in Linux.  To be more
precise, drawing the first pixel is noticeably faster.  Total startup
time is harder to say.

Interestingly startup drawing is different between Windows and Linux.
We might want to film with a high-speed camera to see exactly what's
going on, but here are my impressions:

Linux draw order:
1) Fill entire window with blue (This looks bad, can we use a
different color? White?).
2) Paint main UI widgets, including NTP.
3) Paint NTP thumbnails.
I bet that (2) is actually broken in to two pieces, I just can't see it.

Window draw order:
1) Paint NC region (the blue border around the edge).
2) Paint main UI widgets (without omnibox).
3) Blit NTP content area (the sweep from top to bottom is noticeable).
4) Paint omnibox.
5) Paint NTP thumbnails.

Keep in mind that this all happens very fast, so I could be imagining things.

Ideas for improving perceived windows startup time:

1) Draw a fake omnibox with the rest of the main UI widgets.
Currently we draw the omnibox really late and it looks slow and bad.
You can see this if you have a dark desktop wallpaper and you focus on
where the omnibox will be.  You'll see a dark rectangle inside the
main toolbar which is the desktop showing through.  We should never
show a dark rectangle there.

2) Fill the main content area with white when drawing the main UI
widgets.  You can see this if you focus on the bottom of where the
bookmark bar is going to be (especially when the bookmark bar is set
to show only on the NTP).  You'll see an edge there when the bookmark
bar is draw by while the main content area is still transparent.
There's no reason we should ever paint an edge there.

I bet the reason Windows startup feels slower is whatever drawing
operation we're using for the main content area is slow.  The
top-to-bottom sweep probably makes me feel like the browser isn't
loaded until the sweep reaches the bottom, whereas I feel like Linux
is done earlier in its startup sequence.

Adam


[chromium-dev] Re: Mac renderer launch (Was: Why is Linux Chrome so fast?)

2009-10-28 Thread Jens Alfke


On Oct 28, 2009, at 11:29 AM, Mark Mentovai wrote:

 You could use this argument to shoot down keeping a spare warmed-up
 renderer ready to go at other times, but I don't think it's relevant
 to the startup case.

We weren't just talking about startup — f'rinstance, Darin mentioned  
new-tab jank.

 I suspect that most (but not all) of the stuff that the renderer needs
 to read to warm itself up will already be in the buffer cache.

Not on a cold launch, since the renderer uses a lot of code (like  
WebCore) that the browser doesn't, and will be paging that stuff in.  
We'll need to benchmark both scenarios.

—Jens


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A tale of two (or more) syncing extensions, and a proposal

2009-10-28 Thread Aaron Boodman

On Wed, Oct 28, 2009 at 11:39 AM, Tim Steele t...@chromium.org wrote:
 The update{foo} update{blech} case is most likely a different kind of
 failure, though, and I was thinking we could limit that with a generic cap
 on just the number of updates in a period of time. From the data we have
 seen so far, most common is 'update{foo}' in a loop type behavior. But if
 the generic cap gets us far enough along, then I agree simpler is better.

Cool. I hope and suspect we can get far enough with the simple approach.

 We have this. The problem is that the user doesn't even realize the
 extension is spamming the server. Our server knows the client is producing a
 lot of traffic, that's all. So what happens is we limit the client and the
 user is left bewildered and helpless because a rogue extension is eating
 away his quota.  What I just landed was a way to correlate how many of our
 sync commits originate from extensions, but we need to find a way to solve
 the problem once we learn from the data.  The reason I suggested this, is
 because it dawned on me that this problem affects any extension author
 trying to send updates from Chrome to their servers. If extension Bob and
 Eve are installed, Eve can bring down Bob's service because of a silly bug.
  I was proposing we just add some layer of protection in here, because we
 can, to help our extensions developers out (and Chrome Sync in the process).

Thanks for explaining. Makes sense to me.

- a

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Mac renderer launch (Was: Why is Linux Chrome so fast?)

2009-10-28 Thread Amanda Walker
On Wed, Oct 28, 2009 at 3:12 PM, Jens Alfke s...@google.com wrote:

 Not on a cold launch, since the renderer uses a lot of code (like
 WebCore) that the browser doesn't, and will be paging that stuff in.
 We'll need to benchmark both scenarios.


Indeed.  Proof of concept code that we can compare hard numbers about is
always better than speculation :-).

--Amanda

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Antoine Labour
On Wed, Oct 28, 2009 at 12:05 PM, Adam Barth aba...@chromium.org wrote:


 On Wed, Oct 28, 2009 at 8:05 AM, Evan Martin e...@chromium.org wrote:
  General comments: Linux tends to be lighter which means it does
  better on older hardware, so depending on what sorts of laptops you're
  talking about that could be a major factor.  Windowses later than 2000
  or so need surprising amounts of hardware to run well.  (I don't
  mention Mac below because there hasn't been much performance work
  there yet.)

 I pulled out the laptops side-by-side to be more scientific about
 this.  Here are the stats:

 XP: 2GB RAM, Core2 Duo, 2.00 GHz
 Ubuntu 9.10: 2GB RAM, Core 2 Duo, 2.40 GHz

 So, the Linux machine as 20% more CPU to work with.

  1) Scroll performance is extremely good.  Even on Gmail, I can only
  get the mouse to lead the scroll bar by a dozen pixels.  On Slashdot,
  it doesn't even look like I can do that.
 
  On plain pages (one scrollbar on the right, no Flash) scrolling is
  literally shifting the pixels down.  On Linux we do this by sending a
  command to the X server, which is a single process that even has the
  graphics drivers built in so it talks directly to your graphics card
  and can in theory do a hardware-accelerated copy.  I would expect this
  to be pretty fast.

 Looking at this more carefully, scroll performance on Slashdot is
 great in both Windows and Linux.  On Gmail (no chat mole), there's a
 noticeable difference.  Here's a visualization of the numb on the
 scroll bar:

 ||
 ||
 ||
 ||
 ||
 ||
 --  -- Click here and pull down
 --
 --  -- Linux: mouse latency gets to here
 ||
 ||  -- Windows: mouse latency gets to here
 ||
 ||
 ||
 ||

 Admittedly, it's hard to see precisely, but it affects the feel.
 Scroll on Windows feels slightly heavier.

  2) Tab creation is very fast.  Maybe the zygote is helping here?  Can
  we pre-render the NTP on other platforms?
 
  The zygote is paused right at process start, before we've even started
  a renderer.  On the other hand Windows process creation is more
  expensive.
 
  There is a new tab graph that attempts to measure this.  The various
  lines on the graph are tracking how quickly we get to each stage in
  constructing the page.  We hit the first line 20ms faster on Linux
  than Windows likely due to the zygote and slow Windows process
  creation, but process startup seems to be a relatively small part of
  the total time.  Linux hits other lines later and Linux and Windows
  hit the finish line at around the same time.

 So, I retried this with a fresh profile on both.  The differences are
 not as dramatic as I remember.  I can't actually see a difference when
 I run them side-by-side.

  3) Startup time is faster than calculator.
 
  I'm not sure if you're kidding.  Do you mean Windows calculator?

 I meant Linux calculator.

  In the limit, I'd expect us to pay a lot more on Linux due to using
  more libraries, GTK initialization, round trips to the X server, etc.
  but I don't know much about Windows here.

 I tried turning on the GTK theme.  That killed startup performance.

 Side-by-side startup easily noticeably faster in Linux.  To be more
 precise, drawing the first pixel is noticeably faster.  Total startup
 time is harder to say.

 Interestingly startup drawing is different between Windows and Linux.
 We might want to film with a high-speed camera to see exactly what's
 going on, but here are my impressions:

 Linux draw order:
 1) Fill entire window with blue (This looks bad, can we use a
 different color? White?).
 2) Paint main UI widgets, including NTP.
 3) Paint NTP thumbnails.
 I bet that (2) is actually broken in to two pieces, I just can't see it.

 Window draw order:
 1) Paint NC region (the blue border around the edge).
 2) Paint main UI widgets (without omnibox).
 3) Blit NTP content area (the sweep from top to bottom is noticeable).
 4) Paint omnibox.
 5) Paint NTP thumbnails.

 Keep in mind that this all happens very fast, so I could be imagining
 things.

 Ideas for improving perceived windows startup time:

 1) Draw a fake omnibox with the rest of the main UI widgets.
 Currently we draw the omnibox really late and it looks slow and bad.
 You can see this if you have a dark desktop wallpaper and you focus on
 where the omnibox will be.  You'll see a dark rectangle inside the
 main toolbar which is the desktop showing through.  We should never
 show a dark rectangle there.

 2) Fill the main content area with white when drawing the main UI
 widgets.  You can see this if you focus on the bottom of where the
 bookmark bar is going to be (especially when the bookmark bar is set
 to show only on the NTP).  You'll see an edge there when the bookmark
 bar is draw by while the main content area is still transparent.
 There's no reason we should ever paint an edge there.

 I bet the reason Windows startup feels slower is whatever drawing
 operation we're using for the main content area is slow.  The
 top-to-bottom sweep 

[chromium-dev] How can we kill scons?

2009-10-28 Thread 王重傑
If I'm not mistaken, I think like most everyone running on linux is using
the make build nowadays, and the make build seems to work well enough for
most people.  The only time I hear someone mention the scons build, it's in
reference to you broke the scons build, or so you developed on make.  Did
you check it worked on scons?

Given that, what's keeping us from killing the scons build completely?

My current motivation for asking is that I've been spending the last hour
trying to figure out why scons is deciding to insert an -fPIC into my build,
whereas make is not.  This is on top of my previous motivation (from about 3
days ago) where I spent another few hours making something that worked fine
on the make build, scons compatible.  I'd rather spend that time killing
scons if there was a clear list of what was needed to make that happen.

-Albert

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Adam Barth

On Wed, Oct 28, 2009 at 12:24 PM, Antoine Labour pi...@google.com wrote:
 On Wed, Oct 28, 2009 at 12:05 PM, Adam Barth aba...@chromium.org wrote:
 I bet the reason Windows startup feels slower is whatever drawing
 operation we're using for the main content area is slow.  The
 top-to-bottom sweep probably makes me feel like the browser isn't
 loaded until the sweep reaches the bottom, whereas I feel like Linux
 is done earlier in its startup sequence.

 For the UI bits, I'm willing to believe that GTK, which uses cairo, hence
 XRender for rendering, is hardware accelerated and in any case pipelined in
 another process (X), and so is faster than serialized, software rendered
 Skia. How much is the impact ? I don't know, we're not talking a huge amount
 of pixels, but still...

I wonder if the problem is we're using a main-memory-to-video-memory
blit to paint the content area in Windows on startup.  How hard would
it be to use a DDB during startup?  That would give us a
video-memory-to-video-memory blit, which can easily paint the whole
screen at 180 fps.

Adam

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Marc-Antoine Ruel

Have you tried starring http://crbug.com/22044 ?

On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
ajw...@chromium.org wrote:
 If I'm not mistaken, I think like most everyone running on linux is using
 the make build nowadays, and the make build seems to work well enough for
 most people.  The only time I hear someone mention the scons build, it's in
 reference to you broke the scons build, or so you developed on make.  Did
 you check it worked on scons?
 Given that, what's keeping us from killing the scons build completely?
 My current motivation for asking is that I've been spending the last hour
 trying to figure out why scons is deciding to insert an -fPIC into my build,
 whereas make is not.  This is on top of my previous motivation (from about 3
 days ago) where I spent another few hours making something that worked fine
 on the make build, scons compatible.  I'd rather spend that time killing
 scons if there was a clear list of what was needed to make that happen.
 -Albert




 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Marc-Antoine Ruel

Not that it is effective :)

On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel mar...@chromium.org wrote:
 Have you tried starring http://crbug.com/22044 ?

 On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
 If I'm not mistaken, I think like most everyone running on linux is using
 the make build nowadays, and the make build seems to work well enough for
 most people.  The only time I hear someone mention the scons build, it's in
 reference to you broke the scons build, or so you developed on make.  Did
 you check it worked on scons?
 Given that, what's keeping us from killing the scons build completely?
 My current motivation for asking is that I've been spending the last hour
 trying to figure out why scons is deciding to insert an -fPIC into my build,
 whereas make is not.  This is on top of my previous motivation (from about 3
 days ago) where I spent another few hours making something that worked fine
 on the make build, scons compatible.  I'd rather spend that time killing
 scons if there was a clear list of what was needed to make that happen.
 -Albert




 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread 王重傑
On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel mar...@chromium.orgwrote:

 Not that it is effective :)


Starred. :)

Now what?



 On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel mar...@chromium.org
 wrote:
  Have you tried starring http://crbug.com/22044 ?
 
  On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
  ajw...@chromium.org wrote:
  If I'm not mistaken, I think like most everyone running on linux is
 using
  the make build nowadays, and the make build seems to work well enough
 for
  most people.  The only time I hear someone mention the scons build, it's
 in
  reference to you broke the scons build, or so you developed on make.
  Did
  you check it worked on scons?
  Given that, what's keeping us from killing the scons build completely?
  My current motivation for asking is that I've been spending the last
 hour
  trying to figure out why scons is deciding to insert an -fPIC into my
 build,
  whereas make is not.  This is on top of my previous motivation (from
 about 3
  days ago) where I spent another few hours making something that worked
 fine
  on the make build, scons compatible.  I'd rather spend that time killing
  scons if there was a clear list of what was needed to make that happen.
  -Albert
 
 
 
 
   
 
 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Ben Goodger (Google)
Step 2: ???
Step 3: Profit!

-Ben

On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
ajw...@chromium.orgwrote:

 On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel 
 mar...@chromium.orgwrote:

 Not that it is effective :)


 Starred. :)

 Now what?



 On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel mar...@chromium.org
 wrote:
  Have you tried starring http://crbug.com/22044 ?
 
  On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
  ajw...@chromium.org wrote:
  If I'm not mistaken, I think like most everyone running on linux is
 using
  the make build nowadays, and the make build seems to work well enough
 for
  most people.  The only time I hear someone mention the scons build,
 it's in
  reference to you broke the scons build, or so you developed on make.
  Did
  you check it worked on scons?
  Given that, what's keeping us from killing the scons build completely?
  My current motivation for asking is that I've been spending the last
 hour
  trying to figure out why scons is deciding to insert an -fPIC into my
 build,
  whereas make is not.  This is on top of my previous motivation (from
 about 3
  days ago) where I spent another few hours making something that worked
 fine
  on the make build, scons compatible.  I'd rather spend that time
 killing
  scons if there was a clear list of what was needed to make that happen.
  -Albert
 
 
 
 
  
 
 



 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Nico Weber

FWIW, I build with scons. I only build Linux once a month or so, and
the default build instructions told me to use scons. I'd imagine lots
of people who are just playing with chrome on the side use scons too.

On Wed, Oct 28, 2009 at 12:28 PM, Albert J. Wong (王重傑)
ajw...@chromium.org wrote:
 If I'm not mistaken, I think like most everyone running on linux is using
 the make build nowadays, and the make build seems to work well enough for
 most people.  The only time I hear someone mention the scons build, it's in
 reference to you broke the scons build, or so you developed on make.  Did
 you check it worked on scons?
 Given that, what's keeping us from killing the scons build completely?
 My current motivation for asking is that I've been spending the last hour
 trying to figure out why scons is deciding to insert an -fPIC into my build,
 whereas make is not.  This is on top of my previous motivation (from about 3
 days ago) where I spent another few hours making something that worked fine
 on the make build, scons compatible.  I'd rather spend that time killing
 scons if there was a clear list of what was needed to make that happen.
 -Albert




 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Lei Zhang

mmoss has been working on the make gyp generator, maybe he has a
better feel for what's keeping us from switching.

On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
ajw...@chromium.org wrote:
 On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel mar...@chromium.org
 wrote:

 Not that it is effective :)

 Starred. :)
 Now what?


 On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel mar...@chromium.org
 wrote:
  Have you tried starring http://crbug.com/22044 ?
 
  On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
  ajw...@chromium.org wrote:
  If I'm not mistaken, I think like most everyone running on linux is
  using
  the make build nowadays, and the make build seems to work well enough
  for
  most people.  The only time I hear someone mention the scons build,
  it's in
  reference to you broke the scons build, or so you developed on make.
   Did
  you check it worked on scons?
  Given that, what's keeping us from killing the scons build completely?
  My current motivation for asking is that I've been spending the last
  hour
  trying to figure out why scons is deciding to insert an -fPIC into my
  build,
  whereas make is not.  This is on top of my previous motivation (from
  about 3
  days ago) where I spent another few hours making something that worked
  fine
  on the make build, scons compatible.  I'd rather spend that time
  killing
  scons if there was a clear list of what was needed to make that happen.
  -Albert
 
 
 
 
  
 
 


 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Bradley Nelson
So we have set of tests for gyp which are green for all the generators other
than make.
I believe mmoss has been whittling away on them, and I think its down to
just 2 failures.
go/gypbot
After that its just a matter of the will to switch over the buildbots and
fix any unforeseen issues.

-BradN

2009/10/28 Lei Zhang thes...@chromium.org


 mmoss has been working on the make gyp generator, maybe he has a
 better feel for what's keeping us from switching.

 On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel mar...@chromium.org
 
  wrote:
 
  Not that it is effective :)
 
  Starred. :)
  Now what?
 
 
  On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel mar...@chromium.org
 
  wrote:
   Have you tried starring http://crbug.com/22044 ?
  
   On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
   ajw...@chromium.org wrote:
   If I'm not mistaken, I think like most everyone running on linux is
   using
   the make build nowadays, and the make build seems to work well enough
   for
   most people.  The only time I hear someone mention the scons build,
   it's in
   reference to you broke the scons build, or so you developed on
 make.
Did
   you check it worked on scons?
   Given that, what's keeping us from killing the scons build
 completely?
   My current motivation for asking is that I've been spending the last
   hour
   trying to figure out why scons is deciding to insert an -fPIC into my
   build,
   whereas make is not.  This is on top of my previous motivation (from
   about 3
   days ago) where I spent another few hours making something that
 worked
   fine
   on the make build, scons compatible.  I'd rather spend that time
   killing
   scons if there was a clear list of what was needed to make that
 happen.
   -Albert
  
  
  
  
   
  
  
 
 
  
 

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Bradley Nelson
Looks like the failures are part of the same test case.
It's the case where the same source file is built as part of two different
targets using different defines.
The make generator appears to build it only one way and use it in both
targets.

-BradN

2009/10/28 Bradley Nelson bradnel...@google.com

 So we have set of tests for gyp which are green for all the generators
 other than make.
 I believe mmoss has been whittling away on them, and I think its down to
 just 2 failures.
 go/gypbot
 After that its just a matter of the will to switch over the buildbots and
 fix any unforeseen issues.

 -BradN

 2009/10/28 Lei Zhang thes...@chromium.org


 mmoss has been working on the make gyp generator, maybe he has a
 better feel for what's keeping us from switching.

 On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
 
  Not that it is effective :)
 
  Starred. :)
  Now what?
 
 
  On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
   Have you tried starring http://crbug.com/22044 ?
  
   On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
   ajw...@chromium.org wrote:
   If I'm not mistaken, I think like most everyone running on linux is
   using
   the make build nowadays, and the make build seems to work well
 enough
   for
   most people.  The only time I hear someone mention the scons build,
   it's in
   reference to you broke the scons build, or so you developed on
 make.
Did
   you check it worked on scons?
   Given that, what's keeping us from killing the scons build
 completely?
   My current motivation for asking is that I've been spending the last
   hour
   trying to figure out why scons is deciding to insert an -fPIC into
 my
   build,
   whereas make is not.  This is on top of my previous motivation (from
   about 3
   days ago) where I spent another few hours making something that
 worked
   fine
   on the make build, scons compatible.  I'd rather spend that time
   killing
   scons if there was a clear list of what was needed to make that
 happen.
   -Albert
  
  
  
  
   
  
  
 
 
  
 

 



--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Elliot Glaysher (Chromium)

On Wed, Oct 28, 2009 at 12:24 PM, Antoine Labour pi...@google.com wrote:
 For the UI bits, I'm willing to believe that GTK, which uses cairo, hence
 XRender for rendering, is hardware accelerated and in any case pipelined in
 another process (X), and so is faster than serialized, software rendered
 Skia. How much is the impact ? I don't know, we're not talking a huge amount
 of pixels, but still...

Not only GTK mode. On linux, we upload (most of) the theme images to
the X server so blitting the images is done server side and
(hopefully) hardware accelerated.

Off the top of my head, the tabstrip and the floating bookmark bar are
the only pieces of the linux UI drawn with skia.

-- Elliot

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread 王重傑
I ran into that yesterday as well trying to make a make generator fix.  I
think I'll hang on until mmoss gets back since I heard he's in the middle of
trying to fix that.  But assuming the unittest can all be made green, then
it's update the public instructions, and finally buildbot work?

I can pickup on fixing the public instructions if no one objects.  I don't
think that needs to be blocked on the unittests, and might as well allow it
to propagate out to the casual developers like while we get our ducks in
line.

-Albert


2009/10/28 Bradley Nelson bradnel...@google.com

 Looks like the failures are part of the same test case.
 It's the case where the same source file is built as part of two different
 targets using different defines.
 The make generator appears to build it only one way and use it in both
 targets.

 -BradN

 2009/10/28 Bradley Nelson bradnel...@google.com

 So we have set of tests for gyp which are green for all the generators
 other than make.
 I believe mmoss has been whittling away on them, and I think its down to
 just 2 failures.
 go/gypbot
 After that its just a matter of the will to switch over the buildbots and
 fix any unforeseen issues.

 -BradN

 2009/10/28 Lei Zhang thes...@chromium.org


 mmoss has been working on the make gyp generator, maybe he has a
 better feel for what's keeping us from switching.

 On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
 
  Not that it is effective :)
 
  Starred. :)
  Now what?
 
 
  On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
   Have you tried starring http://crbug.com/22044 ?
  
   On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
   ajw...@chromium.org wrote:
   If I'm not mistaken, I think like most everyone running on linux is
   using
   the make build nowadays, and the make build seems to work well
 enough
   for
   most people.  The only time I hear someone mention the scons build,
   it's in
   reference to you broke the scons build, or so you developed on
 make.
Did
   you check it worked on scons?
   Given that, what's keeping us from killing the scons build
 completely?
   My current motivation for asking is that I've been spending the
 last
   hour
   trying to figure out why scons is deciding to insert an -fPIC into
 my
   build,
   whereas make is not.  This is on top of my previous motivation
 (from
   about 3
   days ago) where I spent another few hours making something that
 worked
   fine
   on the make build, scons compatible.  I'd rather spend that time
   killing
   scons if there was a clear list of what was needed to make that
 happen.
   -Albert
  
  
  
  
   
  
  
 
 
  
 

 




--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Bradley Nelson
Updating the public instructions would be helpful! Please proceed.
I'd be willing do the buildbot switchover, unless someone is more eager.
I'm a little surprised that the failing test doesn't hork something in the
chromium build.
I known that there are some shared files like that (though it may be only on
windows come to think of it).

-BradN

On Wed, Oct 28, 2009 at 1:13 PM, Albert J. Wong (王重傑)
ajw...@chromium.orgwrote:

 I ran into that yesterday as well trying to make a make generator fix.  I
 think I'll hang on until mmoss gets back since I heard he's in the middle of
 trying to fix that.  But assuming the unittest can all be made green, then
 it's update the public instructions, and finally buildbot work?

 I can pickup on fixing the public instructions if no one objects.  I don't
 think that needs to be blocked on the unittests, and might as well allow it
 to propagate out to the casual developers like while we get our ducks in
 line.

 -Albert


 2009/10/28 Bradley Nelson bradnel...@google.com

 Looks like the failures are part of the same test case.
 It's the case where the same source file is built as part of two different
 targets using different defines.
 The make generator appears to build it only one way and use it in both
 targets.

 -BradN

 2009/10/28 Bradley Nelson bradnel...@google.com

 So we have set of tests for gyp which are green for all the generators
 other than make.
 I believe mmoss has been whittling away on them, and I think its down to
 just 2 failures.
 go/gypbot
 After that its just a matter of the will to switch over the buildbots and
 fix any unforeseen issues.

 -BradN

 2009/10/28 Lei Zhang thes...@chromium.org


 mmoss has been working on the make gyp generator, maybe he has a
 better feel for what's keeping us from switching.

 On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
 
  Not that it is effective :)
 
  Starred. :)
  Now what?
 
 
  On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
   Have you tried starring http://crbug.com/22044 ?
  
   On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
   ajw...@chromium.org wrote:
   If I'm not mistaken, I think like most everyone running on linux
 is
   using
   the make build nowadays, and the make build seems to work well
 enough
   for
   most people.  The only time I hear someone mention the scons
 build,
   it's in
   reference to you broke the scons build, or so you developed on
 make.
Did
   you check it worked on scons?
   Given that, what's keeping us from killing the scons build
 completely?
   My current motivation for asking is that I've been spending the
 last
   hour
   trying to figure out why scons is deciding to insert an -fPIC into
 my
   build,
   whereas make is not.  This is on top of my previous motivation
 (from
   about 3
   days ago) where I spent another few hours making something that
 worked
   fine
   on the make build, scons compatible.  I'd rather spend that time
   killing
   scons if there was a clear list of what was needed to make that
 happen.
   -Albert
  
  
  
  
   
  
  
 
 
  
 

 





--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Nico Weber

On Wed, Oct 28, 2009 at 1:10 PM, Elliot Glaysher (Chromium)
e...@chromium.org wrote:

 On Wed, Oct 28, 2009 at 12:24 PM, Antoine Labour pi...@google.com wrote:
 For the UI bits, I'm willing to believe that GTK, which uses cairo, hence
 XRender for rendering, is hardware accelerated and in any case pipelined in
 another process (X), and so is faster than serialized, software rendered
 Skia. How much is the impact ? I don't know, we're not talking a huge amount
 of pixels, but still...

 Not only GTK mode. On linux, we upload (most of) the theme images to
 the X server so blitting the images is done server side and
 (hopefully) hardware accelerated.

 Off the top of my head, the tabstrip and the floating bookmark bar are
 the only pieces of the linux UI drawn with skia.

The download completion disks in the shelf probably too for what it's worth.


 -- Elliot

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread 王重傑
I actually got some weird warnings on the make build a while back when I
specified the same file in two sources entries...something about circular
dependencies and make ignore one. But don't remember the exact scenario.

I betcha it isn't a problem in chrome cause it'd only trigger a bug if the
file was compiled with different flags that modified behavior.  Since our
defines and compiler options are so stable  (especially within one target),
building once probably doesn't break stuff...

-Albert


2009/10/28 Bradley Nelson bradnel...@google.com

 Updating the public instructions would be helpful! Please proceed.
 I'd be willing do the buildbot switchover, unless someone is more eager.
 I'm a little surprised that the failing test doesn't hork something in the
 chromium build.
 I known that there are some shared files like that (though it may be only
 on windows come to think of it).

 -BradN


 On Wed, Oct 28, 2009 at 1:13 PM, Albert J. Wong (王重傑) ajw...@chromium.org
  wrote:

 I ran into that yesterday as well trying to make a make generator fix.  I
 think I'll hang on until mmoss gets back since I heard he's in the middle of
 trying to fix that.  But assuming the unittest can all be made green, then
 it's update the public instructions, and finally buildbot work?

 I can pickup on fixing the public instructions if no one objects.  I don't
 think that needs to be blocked on the unittests, and might as well allow it
 to propagate out to the casual developers like while we get our ducks in
 line.

 -Albert


 2009/10/28 Bradley Nelson bradnel...@google.com

 Looks like the failures are part of the same test case.
 It's the case where the same source file is built as part of two
 different targets using different defines.
 The make generator appears to build it only one way and use it in both
 targets.

 -BradN

 2009/10/28 Bradley Nelson bradnel...@google.com

 So we have set of tests for gyp which are green for all the generators
 other than make.
 I believe mmoss has been whittling away on them, and I think its down to
 just 2 failures.
 go/gypbot
 After that its just a matter of the will to switch over the buildbots
 and fix any unforeseen issues.

 -BradN

 2009/10/28 Lei Zhang thes...@chromium.org


 mmoss has been working on the make gyp generator, maybe he has a
 better feel for what's keeping us from switching.

 On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
 
  Not that it is effective :)
 
  Starred. :)
  Now what?
 
 
  On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
   Have you tried starring http://crbug.com/22044 ?
  
   On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
   ajw...@chromium.org wrote:
   If I'm not mistaken, I think like most everyone running on linux
 is
   using
   the make build nowadays, and the make build seems to work well
 enough
   for
   most people.  The only time I hear someone mention the scons
 build,
   it's in
   reference to you broke the scons build, or so you developed on
 make.
Did
   you check it worked on scons?
   Given that, what's keeping us from killing the scons build
 completely?
   My current motivation for asking is that I've been spending the
 last
   hour
   trying to figure out why scons is deciding to insert an -fPIC
 into my
   build,
   whereas make is not.  This is on top of my previous motivation
 (from
   about 3
   days ago) where I spent another few hours making something that
 worked
   fine
   on the make build, scons compatible.  I'd rather spend that time
   killing
   scons if there was a clear list of what was needed to make that
 happen.
   -Albert
  
  
  
  
   
  
  
 
 
  
 

 






--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Steven Knight
Yeah, that's about it.  It's definitely time to make this switch.  After the
gyp tests for make are green, it just needs someone with the right buidlbot
knowledge + time to work out the details.

(Last time I did a comparison of the make vs. scons build output there were
still some differences in the built files, some missing, a few different
locations, etc., but that was a long time ago now.  There'll still probably
be some minor shakeout, but nothing insurmountable.)

--SK

On Wed, Oct 28, 2009 at 1:13 PM, Albert J. Wong (王重傑)
ajw...@chromium.orgwrote:

 I ran into that yesterday as well trying to make a make generator fix.  I
 think I'll hang on until mmoss gets back since I heard he's in the middle of
 trying to fix that.  But assuming the unittest can all be made green, then
 it's update the public instructions, and finally buildbot work?

 I can pickup on fixing the public instructions if no one objects.  I don't
 think that needs to be blocked on the unittests, and might as well allow it
 to propagate out to the casual developers like while we get our ducks in
 line.

 -Albert


 2009/10/28 Bradley Nelson bradnel...@google.com

 Looks like the failures are part of the same test case.
 It's the case where the same source file is built as part of two different
 targets using different defines.
 The make generator appears to build it only one way and use it in both
 targets.

 -BradN

 2009/10/28 Bradley Nelson bradnel...@google.com

 So we have set of tests for gyp which are green for all the generators
 other than make.
 I believe mmoss has been whittling away on them, and I think its down to
 just 2 failures.
 go/gypbot
 After that its just a matter of the will to switch over the buildbots and
 fix any unforeseen issues.

 -BradN

 2009/10/28 Lei Zhang thes...@chromium.org


 mmoss has been working on the make gyp generator, maybe he has a
 better feel for what's keeping us from switching.

 On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
 
  Not that it is effective :)
 
  Starred. :)
  Now what?
 
 
  On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
   Have you tried starring http://crbug.com/22044 ?
  
   On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
   ajw...@chromium.org wrote:
   If I'm not mistaken, I think like most everyone running on linux
 is
   using
   the make build nowadays, and the make build seems to work well
 enough
   for
   most people.  The only time I hear someone mention the scons
 build,
   it's in
   reference to you broke the scons build, or so you developed on
 make.
Did
   you check it worked on scons?
   Given that, what's keeping us from killing the scons build
 completely?
   My current motivation for asking is that I've been spending the
 last
   hour
   trying to figure out why scons is deciding to insert an -fPIC into
 my
   build,
   whereas make is not.  This is on top of my previous motivation
 (from
   about 3
   days ago) where I spent another few hours making something that
 worked
   fine
   on the make build, scons compatible.  I'd rather spend that time
   killing
   scons if there was a clear list of what was needed to make that
 happen.
   -Albert
  
  
  
  
   
  
  
 
 
  
 






 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Evan Stade

On Wed, Oct 28, 2009 at 1:18 PM, Nico Weber tha...@chromium.org wrote:

 On Wed, Oct 28, 2009 at 1:10 PM, Elliot Glaysher (Chromium)
 e...@chromium.org wrote:

 On Wed, Oct 28, 2009 at 12:24 PM, Antoine Labour pi...@google.com wrote:
 For the UI bits, I'm willing to believe that GTK, which uses cairo, hence
 XRender for rendering, is hardware accelerated and in any case pipelined in
 another process (X), and so is faster than serialized, software rendered
 Skia. How much is the impact ? I don't know, we're not talking a huge amount
 of pixels, but still...

 Not only GTK mode. On linux, we upload (most of) the theme images to
 the X server so blitting the images is done server side and
 (hopefully) hardware accelerated.

 Off the top of my head, the tabstrip and the floating bookmark bar are
 the only pieces of the linux UI drawn with skia.

 The download completion disks in the shelf probably too for what it's worth.

and extension badges, and the sad tab page

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Evan Martin

On Wed, Oct 28, 2009 at 1:23 PM, Albert J. Wong (王重傑)
ajw...@chromium.org wrote:
 I actually got some weird warnings on the make build a while back when I
 specified the same file in two sources entries...something about circular
 dependencies and make ignore one. But don't remember the exact scenario.
 I betcha it isn't a problem in chrome cause it'd only trigger a bug if the
 file was compiled with different flags that modified behavior.  Since our
 defines and compiler options are so stable  (especially within one target),
 building once probably doesn't break stuff...

Currently, as far as I know, no files in Chrome require this compile
twice behavior.
I am skeptical of the utility of gyp features that are unused by
Chrome.  But this may be the magic bullet to make -fPIC on 64-bit
work.

I think I still see situations where the generated strings aren't
properly rebuilt in the make build.  You have to run it twice.  At
this point, given the number of people hammering on it, I suspect the
gyp rules are wrong and that the make build parallelizes too
aggressively, but that is likely just wishful thinking and there's a
subtle bug in there.  :(
That means for the build bots to switch, they need to always run make
twice to be sure everything was built.

(PS: currently every time you run make it rebuilds some NACL stuff
too.  I am so tired of NACL busting my build that I just turned it off
locally.)

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Antoine Labour
On Wed, Oct 28, 2009 at 1:23 PM, Albert J. Wong (王重傑)
ajw...@chromium.orgwrote:

 I actually got some weird warnings on the make build a while back when I
 specified the same file in two sources entries...something about circular
 dependencies and make ignore one. But don't remember the exact scenario.

 I betcha it isn't a problem in chrome cause it'd only trigger a bug if the
 file was compiled with different flags that modified behavior.  Since our
 defines and compiler options are so stable  (especially within one target),
 building once probably doesn't break stuff...

 -Albert


I found one occurrence of it when building with shared libs. protobuf and
protobuf_lite both try to compile the same file (but with the same options),
and protobuf depends on lite, which effectively makes that file depend on
itself. I'm fixing a few things around that so I'll fix that one.
But that's the only case I know where we're trying to build the same file
twice (except with the new host/target thing for cross-compiles).

Antoine



 2009/10/28 Bradley Nelson bradnel...@google.com

 Updating the public instructions would be helpful! Please proceed.
 I'd be willing do the buildbot switchover, unless someone is more eager.
 I'm a little surprised that the failing test doesn't hork something in the
 chromium build.
 I known that there are some shared files like that (though it may be only
 on windows come to think of it).

 -BradN


 On Wed, Oct 28, 2009 at 1:13 PM, Albert J. Wong (王重傑) 
 ajw...@chromium.org wrote:

 I ran into that yesterday as well trying to make a make generator fix.  I
 think I'll hang on until mmoss gets back since I heard he's in the middle of
 trying to fix that.  But assuming the unittest can all be made green, then
 it's update the public instructions, and finally buildbot work?

 I can pickup on fixing the public instructions if no one objects.  I
 don't think that needs to be blocked on the unittests, and might as well
 allow it to propagate out to the casual developers like while we get our
 ducks in line.

 -Albert


 2009/10/28 Bradley Nelson bradnel...@google.com

 Looks like the failures are part of the same test case.
 It's the case where the same source file is built as part of two
 different targets using different defines.
 The make generator appears to build it only one way and use it in both
 targets.

 -BradN

 2009/10/28 Bradley Nelson bradnel...@google.com

 So we have set of tests for gyp which are green for all the generators
 other than make.
 I believe mmoss has been whittling away on them, and I think its down
 to just 2 failures.
 go/gypbot
 After that its just a matter of the will to switch over the buildbots
 and fix any unforeseen issues.

 -BradN

 2009/10/28 Lei Zhang thes...@chromium.org


 mmoss has been working on the make gyp generator, maybe he has a
 better feel for what's keeping us from switching.

 On Wed, Oct 28, 2009 at 12:35 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  On Wed, Oct 28, 2009 at 12:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
 
  Not that it is effective :)
 
  Starred. :)
  Now what?
 
 
  On Wed, Oct 28, 2009 at 3:34 PM, Marc-Antoine Ruel 
 mar...@chromium.org
  wrote:
   Have you tried starring http://crbug.com/22044 ?
  
   On Wed, Oct 28, 2009 at 3:28 PM, Albert J. Wong (王重傑)
   ajw...@chromium.org wrote:
   If I'm not mistaken, I think like most everyone running on linux
 is
   using
   the make build nowadays, and the make build seems to work well
 enough
   for
   most people.  The only time I hear someone mention the scons
 build,
   it's in
   reference to you broke the scons build, or so you developed
 on make.
Did
   you check it worked on scons?
   Given that, what's keeping us from killing the scons build
 completely?
   My current motivation for asking is that I've been spending the
 last
   hour
   trying to figure out why scons is deciding to insert an -fPIC
 into my
   build,
   whereas make is not.  This is on top of my previous motivation
 (from
   about 3
   days ago) where I spent another few hours making something that
 worked
   fine
   on the make build, scons compatible.  I'd rather spend that time
   killing
   scons if there was a clear list of what was needed to make that
 happen.
   -Albert
  
  
  
  
   
  
  
 
 
  
 








 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: How can we kill scons?

2009-10-28 Thread Antoine Labour
On Wed, Oct 28, 2009 at 1:37 PM, Evan Martin e...@chromium.org wrote:


 On Wed, Oct 28, 2009 at 1:23 PM, Albert J. Wong (王重傑)
 ajw...@chromium.org wrote:
  I actually got some weird warnings on the make build a while back when I
  specified the same file in two sources entries...something about circular
  dependencies and make ignore one. But don't remember the exact scenario.
  I betcha it isn't a problem in chrome cause it'd only trigger a bug if
 the
  file was compiled with different flags that modified behavior.  Since our
  defines and compiler options are so stable  (especially within one
 target),
  building once probably doesn't break stuff...

 Currently, as far as I know, no files in Chrome require this compile
 twice behavior.
 I am skeptical of the utility of gyp features that are unused by
 Chrome.  But this may be the magic bullet to make -fPIC on 64-bit
 work.

 I think I still see situations where the generated strings aren't
 properly rebuilt in the make build.  You have to run it twice.  At
 this point, given the number of people hammering on it, I suspect the
 gyp rules are wrong and that the make build parallelizes too
 aggressively, but that is likely just wishful thinking and there's a
 subtle bug in there.  :(
 That means for the build bots to switch, they need to always run make
 twice to be sure everything was built.

 (PS: currently every time you run make it rebuilds some NACL stuff
 too.  I am so tired of NACL busting my build that I just turned it off
 locally.)


That was a regression that I think I fixed.

Antoine

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Why is Linux Chrome so fast?

2009-10-28 Thread Tony Chang

On Wed, Oct 28, 2009 at 12:05 PM, Adam Barth aba...@chromium.org wrote:
 Linux draw order:
 1) Fill entire window with blue (This looks bad, can we use a
 different color? White?).

http://code.google.com/p/chromium/issues/detail?id=20059

Looking at it again, I imagine one of the widgets has no background
(TabContentsContainer?) and we just need to give it a white
background.

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] NXCOMPAT, DYNAMICBASE and you

2009-10-28 Thread Andrew Scherkus
I'm preparing to land a change to the Chromium XP and Google Chrome FYI
build bots that make sure all Windows DLL and EXE files were built with
/NXCOMPAT and /DYNAMICBASE.  You can read about these neat security features
here:
http://blogs.msdn.com/vcblog/archive/2009/05/21/dynamicbase-and-nxcompat.aspx

To check, you can run my freshly minted tool on a build output directory:
D:\chrome\src tools\checkbins\checkbins.bat chrome\Debug

Currently, most if not all of the Native Client binaries fail the test.
 wow_helper.exe also appears to fail as does our prebuilt icudt42.dll.

For compiled binaries, the gyp/scons files will need to be updated.

For pre-compiled binaries (such as icu and our FFmpeg binaries), you must
run editbin on your binaries before checking them in (the real reason why
I'm adding this change to the bots):
editbin.exe /NXCOMPAT /DYNAMICBASE [files]

Andrew

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread Darin Fisher
On Wed, Oct 28, 2009 at 7:52 AM, Marc-Antoine Ruel mar...@chromium.orgwrote:

 On Wed, Oct 28, 2009 at 1:30 AM, Darin Fisher da...@chromium.org wrote:
  I'm pretty sure that enabling USE_SYSTEM_MALLOC will also lead to
 corruption
  since WebKit is not hermetic (we allocate things externally that we then
  delete inside WebKit).
  -Darin

 Wouha! That really limit our capacity to link wekbit.dll.



Why?  So long as our allocator also lives in a DLL, we should be fine.

The webkit.dll build should only be used for developer builds, where we can
also just enabled the CRT DLL, and build tcmalloc as a DLL too.

-Darin

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread Darin Fisher
On Wed, Oct 28, 2009 at 5:04 PM, Darin Fisher da...@chromium.org wrote:

 On Wed, Oct 28, 2009 at 7:52 AM, Marc-Antoine Ruel mar...@chromium.orgwrote:

 On Wed, Oct 28, 2009 at 1:30 AM, Darin Fisher da...@chromium.org wrote:
  I'm pretty sure that enabling USE_SYSTEM_MALLOC will also lead to
 corruption
  since WebKit is not hermetic (we allocate things externally that we then
  delete inside WebKit).
  -Darin

 Wouha! That really limit our capacity to link wekbit.dll.



 Why?  So long as our allocator also lives in a DLL, we should be fine.

 The webkit.dll build should only be used for developer builds, where we can
 also just enabled the CRT DLL, and build tcmalloc as a DLL too.

 -Darin


I should add that patches are welcome if you'd like to make our webkit
hermetic w.r.t. memory allocation ;-)

I know of some issues...

-darin

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] anyone for a webkit review?

2009-10-28 Thread Evan Stade

https://bugs.webkit.org/show_bug.cgi?id=30832

FYI the chromium side is here: http://codereview.chromium.org/337032/show

-- Evan Stade

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: anyone for a webkit review?

2009-10-28 Thread Adam Langley

On Wed, Oct 28, 2009 at 5:47 PM, Evan Stade est...@chromium.org wrote:
 https://bugs.webkit.org/show_bug.cgi?id=30832

It is generally quite important to attach the patch :)


AGL

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: anyone for a webkit review?

2009-10-28 Thread Eric Seidel

Thankfully I'm telelegetic and can read it from here.  Looks amazing.
Ping me once it's up and I'll give it the ol' r+.

-eric

On Wed, Oct 28, 2009 at 5:51 PM, Adam Langley a...@chromium.org wrote:

 On Wed, Oct 28, 2009 at 5:47 PM, Evan Stade est...@chromium.org wrote:
 https://bugs.webkit.org/show_bug.cgi?id=30832

 It is generally quite important to attach the patch :)


 AGL

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: anyone for a webkit review?

2009-10-28 Thread Evan Stade

On Wed, Oct 28, 2009 at 5:51 PM, Adam Langley a...@chromium.org wrote:
 On Wed, Oct 28, 2009 at 5:47 PM, Evan Stade est...@chromium.org wrote:
 https://bugs.webkit.org/show_bug.cgi?id=30832

 It is generally quite important to attach the patch :)


 AGL


oo, thanks for the pointer. These computer things are tricky.

updated.

-- Evan Stade

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: Are we going to use tcmalloc on Mac?

2009-10-28 Thread John Grabowski
Macs appear to support MADV_DONTNEED (at least they are doc'ed as such).
 However, wtf/Platform.h seems to imply Apple uses mmap/munmap instead of
madvise for their tcmalloc fork.

The tcmalloc in third_party does not appear to have the mmap/munmap support
seen in WTF's tcmalloc, so our TCMalloc_SystemRelease() would be a noop on
the Mac.  Thus a naive compile and use on Mac will be unlikely to reduce the
working set.

Experimenting with tcmalloc (or jemalloc or whatever) on Mac would be
interesting.

jrg


On Wed, Oct 28, 2009 at 11:26 AM, Mike Belshe mbel...@google.com wrote:

 On Wed, Oct 28, 2009 at 10:54 AM, Jens Alfke s...@chromium.org wrote:


 On Oct 27, 2009, at 9:10 PM, Mike Belshe wrote:

 From a performance perspective, it may be critical to use tcmalloc to
 match safari performance.  It was literally a 50% speedup on most of the DOM
 perf when running on WinXP.


 Yeah, I've profiled some of the Dromaeo benchmarks, and the DOM-mutation
 test in particular is spending huge amounts of time in malloc and free.

 Should I open a bug on this task, then?


 SGTM.



 I suspect this will use the version of TCMalloc which is embedded in WTF.
  I'd recommend against this approach.  We should try to use a single
 allocator for the entire app


 I agree; no sense linking in two different versions of tcmalloc.

 There is a disadvantage.  I suspect Apple is farther along in optimizing
 the mac-specific optimizations of tcmalloc than the google3 guys are.


 I would say more generally *client*-specific optimizations. Some of the
 recent memory-bloat issues found by the Memory taskforce (jamesr in
 particular) show that baseline tcmalloc is tuned for server environments
 where memory footprint is much less of an issue.


 That may be true too, but, having looked at what they've done, I don't
 think they've done a lot to fix this problem (yet).  Chrome now uses a lot
 less memory than safari.

 I was really pointing out the mac-port potential problems.  See tcmalloc's
 TCMalloc_SystemAlloc() routines ported to the mac.  I don't know that the
 google3 tools has a mac version at all?  tcmalloc was designed for linux
 which leverages a cute trick using MADV_DONT_NEED (is this MADV_FREE_REUSE
 on the mac?) to release memory (sorta).It may be that we can port the
 webkit implementations for mac back to the google3-perftools open source
 project, and also make it release memory better.

 If you pick up this task, you can ask Jim, James or me for details; I think
 we've all been fairly deep into tcmalloc to help get you ramped up quickly.


 Mike





 —Jens



 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A Dictionary-Evaluation Plan

2009-10-28 Thread 坊野 博典

Hi Evan,

Thank you for your feedback.

2009/10/28 Evan Martin e...@chromium.org:

 It still might be worth soliciting feedback from users directly.  For
 example, if the new dictionary is missing a common word the above
 measure would get a high count of Add to Dictionary, and maybe users
 could tell us about this.

Counting a common word is a good option for English.
On the other hand, I'm wondering how much this idea works for case
languages, such as Russian, Polish, etc.
For example, a Russian noun has six cases (nominative, accusative,
genitive, prepositional, and instrumental) and each noun changes its
form according to the case and number. For example, a masculine noun
стол (table) has six forms: стол, стол, стола, столу,
столе, and столом. If a noun is countable, its plural form столы
(tables) also has six forms: столы, столы, столов, столам,
столах, столами.
So, when we count each form as a separated dictionary word, the
frequency count of a Russian word statistically (*1) becomes 1/6 of an
English word.
To write more about Russian, a Russian noun has three genders
(masculine, feminine, and neutral) and each adjective has to change
its form according to the gender and the case of a noun being
qualified. That is, the frequency count of a Russian adjective
statistically becomes 1/(3*6) = 1/18 of an English adjective. (This is
a reason why our ru_RU.dic_delta file doesn't have adjectives.)
If we can add an option menu so that a user can choose such
grammatical information when the user adds a word, it definitely
helps.

(*1) In reality, some cases (nominative) are used more often than other cases.

Regards,

Hironori Bono
E-mail: hb...@chromium.org

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: anyone for a webkit review?

2009-10-28 Thread Peter Kasting
On Wed, Oct 28, 2009 at 5:55 PM, Evan Stade est...@chromium.org wrote:

 oo, thanks for the pointer. These computer things are tricky.


The files are *in* the computer?

PK

P.S. Bonus quote: Math is hard.  Let's go shopping!

--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---



[chromium-dev] Re: A Dictionary-Evaluation Plan

2009-10-28 Thread Brian Rakowski
Before launch, we asked a team of international Googlers to assess quality.
We could reach out to that group again. Anders Sandholm coordinated that
effort and would be a good person to reach out to if we want to repeat it.

2009/10/28 Hironori Bono (坊野 博典) hb...@chromium.org


 Hi Evan,

 Thank you for your feedback.

 2009/10/28 Evan Martin e...@chromium.org:

  It still might be worth soliciting feedback from users directly.  For
  example, if the new dictionary is missing a common word the above
  measure would get a high count of Add to Dictionary, and maybe users
  could tell us about this.

 Counting a common word is a good option for English.
 On the other hand, I'm wondering how much this idea works for case
 languages, such as Russian, Polish, etc.
 For example, a Russian noun has six cases (nominative, accusative,
 genitive, prepositional, and instrumental) and each noun changes its
 form according to the case and number. For example, a masculine noun
 стол (table) has six forms: стол, стол, стола, столу,
 столе, and столом. If a noun is countable, its plural form столы
 (tables) also has six forms: столы, столы, столов, столам,
 столах, столами.
 So, when we count each form as a separated dictionary word, the
 frequency count of a Russian word statistically (*1) becomes 1/6 of an
 English word.
 To write more about Russian, a Russian noun has three genders
 (masculine, feminine, and neutral) and each adjective has to change
 its form according to the gender and the case of a noun being
 qualified. That is, the frequency count of a Russian adjective
 statistically becomes 1/(3*6) = 1/18 of an English adjective. (This is
 a reason why our ru_RU.dic_delta file doesn't have adjectives.)
 If we can add an option menu so that a user can choose such
 grammatical information when the user adds a word, it definitely
 helps.

 (*1) In reality, some cases (nominative) are used more often than other
 cases.

 Regards,

 Hironori Bono
 E-mail: hb...@chromium.org

 


--~--~-~--~~~---~--~~
Chromium Developers mailing list: chromium-dev@googlegroups.com 
View archives, change email options, or unsubscribe: 
http://groups.google.com/group/chromium-dev
-~--~~~~--~~--~--~---