[webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread zaheer ahmad
hi,

In the Linux Gtk port, with WebKit revision 33493, I see that the resource
handles (curl backend) never get released after completing the data transfer
for that request. This results in big leaks in resource handles as well as
the curl internal data structures (~800k on opening nytimes.com and closing
the connection).

The reason is that the ResourceHandle ref count never drops to 0: resource
loaders drop their refcount correctly, but the ref the ResourceHandle takes
on itself (source below) before handing over to the ResourceHandleManager is
not matched with a deref.

ResourceHandleCurl.cpp:
bool ResourceHandle::start(Frame* frame)
{
    ASSERT(frame);
    ref(); // never balanced by a deref once the transfer completes
    ResourceHandleManager::sharedInstance()->add(this);
    return true;
}

The fix that works for us is to deref in ResourceHandleManager::removeFromCurl;
however, we do not know the impact. A brief look at the latest code doesn't
seem to have changed this much, but I can still verify against it.
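
Roughly, the change I have in mind looks like the sketch below. This is only a
sketch, not a tested patch: the member names (m_curlMultiHandle, d->m_handle,
m_runningJobs) are written from memory and may not match the current tree
exactly; the important line is the final deref that balances the ref() taken
in ResourceHandle::start().

void ResourceHandleManager::removeFromCurl(ResourceHandle* job)
{
    ResourceHandleInternal* d = job->getInternal();
    if (!d->m_handle)
        return;

    m_runningJobs--;
    curl_multi_remove_handle(m_curlMultiHandle, d->m_handle);
    curl_easy_cleanup(d->m_handle);
    d->m_handle = 0;

    job->deref(); // balances the ref() in ResourceHandle::start()
}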

BTW, why does handing the ResourceHandle to the ResourceHandleManager need to
be protected? I guess a weak pointer would do. Also, I don't see this done in
other ports or in gtk/soup, though the interfaces are different.

Thanks in advance for any inputs.

regards,
Zaheer
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] [webkit-changes] [24723] trunk/WebCore

2008-09-10 Thread Arvid Nilsson
Hello, sorry to bring this old topic up again. Nothing has happened in the
last year or so, and this issue is bothering me again...

I am thinking that libxml2 should be somehow considered the "default", since it
is used by most existing ports and is also quite portable, making it likely
that new ports will use it. Maybe this idea will make it possible to write a
patch that permits using something other than libxml2 without having to patch
build systems other than qmake.

Would it be possible to use a variation on the ResourceHandle porting approach
and have a WebCore/dom/XMLTokenizerBase.{h, cpp}? The libxml2 implementation
lives in WebCore/dom/XMLTokenizer.{h, cpp}. If you don't specify in your
build system that you should use WebCore/dom/port/XMLTokenizer.h and
WebCore/dom/port/XMLTokenizerPort.cpp, you get libxml2.

An added bonus is that you don't have to come up with a better name than
"XMLTokenizerLibXml2.cpp", since it's still called "XMLTokenizer.cpp".

/Arvid

On Sun, Jul 29, 2007 at 7:53 PM, Lars Knoll <[EMAIL PROTECTED]> wrote:

> On Sunday 29 July 2007 08:47:50 Maciej Stachowiak wrote:
> > On Jul 28, 2007, at 3:52 AM, Lars Knoll wrote:
> > > On Saturday 28 July 2007 00:26:19 Maciej Stachowiak wrote:
> > >> On Jul 27, 2007, at 11:36 AM, Lars Knoll wrote:
> > >>
> > >> Other organizations have requested the ability to use other XML
> > >> parsers as well, such as expat. Seems like in the long run we want a
> > >> different approach than just ifdefs in the XMLTokenizer.cpp file. It
> > >> seems like the best would be some abstraction layer on top of the
> > >> parser library, but if that is difficult then your option #2 sounds
> > >> like a docent long-run approach. I would have expected just about
> > >> every XML parsing library to have a SAX-like API, which shouldn't be
> > >> too hard to abstract, but perhaps QXml works differently.
> > >
> > > I guess that assumption doesn't hold. QXmlStream is a streaming
> > > parser with an
> > > API that is very different from SAX. It IMO a whole lot simpler to
> > > use than a
> > > SAX like API and is inspired from similar APIs in the Java world. If
> > > you're
> > > interested, have a look at
> > > http://doc.trolltech.com/4.3/qxmlstreamreader.html
> >
> > I'm told libxml has a StreamReader-style API now as well, so if that's
> > the better alternative, we could design the XML code around that style
> > of API (though probably not right at the moment).
>
> No, for the moment, I'd rather just go with the approach I've posted in bug
> 14791. Once there are requests for more parser backends, we could rethink
> this, but for now I think we have more urgent things to do.
>
> Cheers,
> Lars
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> http://lists.webkit.org/mailman/listinfo/webkit-dev
>
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] [webkit-changes] [24723] trunk/WebCore

2008-09-10 Thread Alexey Proskuryakov

On Sep 10, 2008, at 3:08 PM, Arvid Nilsson wrote:

> Hello, sorry to bring this old topic up again. Nothing has happened  
> in the last year or so, and this issue is bothering me again...


There is actually a reviewed patch in 
 now, waiting to be committed. You are welcome to try it and to  
provide comments on this approach.

- WBR, Alexey Proskuryakov

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] [webkit-changes] [24723] trunk/WebCore

2008-09-10 Thread Holger Freyther
On Wednesday 10 September 2008 14:09:31 Alexey Proskuryakov wrote:
> On Sep 10, 2008, at 3:08 PM, Arvid Nilsson wrote:
> > Hello, sorry to bring this old topic up again. Nothing has happened
> > in the last year or so, and this issue is bothering me again...
>
> There is actually a reviewed patch in
> 
>  > now, waiting to be committed. You are welcome to try it and to
>
> provide comments on this approach.

I'll try to do a Mac build and run the tests there during the weekend, and then
land it.

z.
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread Marco Barisione
Il giorno mer, 10/09/2008 alle 13.26 +0530, zaheer ahmad ha scritto:
> hi,
> 
> In the linux Gtk port, with Webkit revision 33493, i see that the
> resource handles (curl backend) never get released after completing
> the data transfer for that request. This results in big leaks in
> resourcehandles as well as the curl internal data structures. (~800k
> on opening nytimes.com and closing the connection)

I started writing some smart pointer classes for the Gtk port a few days ago
(using g_free or g_object_ref/unref) that should be able to fix some of these
issues (I have already fixed several memory leaks).
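
Roughly, the GObject flavour of those wrappers looks like the sketch below
(illustrative only, not the actual classes being written for the port):

#include <glib-object.h>

template <typename T> class GObjectPtr {
public:
    GObjectPtr(T* ptr = 0) : m_ptr(ptr) { if (m_ptr) g_object_ref(m_ptr); }
    GObjectPtr(const GObjectPtr& other) : m_ptr(other.m_ptr) { if (m_ptr) g_object_ref(m_ptr); }
    ~GObjectPtr() { if (m_ptr) g_object_unref(m_ptr); }

    GObjectPtr& operator=(const GObjectPtr& other)
    {
        if (other.m_ptr)
            g_object_ref(other.m_ptr);
        if (m_ptr)
            g_object_unref(m_ptr);
        m_ptr = other.m_ptr;
        return *this;
    }

    T* get() const { return m_ptr; }
    T* operator->() const { return m_ptr; }

private:
    T* m_ptr; // owned reference, dropped in the destructor
};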

Now I have just started using valgrind to find other memory leaks, so this and
other issues should hopefully be fixed soon.

Thanks!

-- 
Marco Barisione

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread Mike Emmel
This leak is fixed in the ncurl port.

On Wed, Sep 10, 2008 at 6:41 AM, Marco Barisione
<[EMAIL PROTECTED]> wrote:
> Il giorno mer, 10/09/2008 alle 13.26 +0530, zaheer ahmad ha scritto:
>> hi,
>>
>> In the linux Gtk port, with Webkit revision 33493, i see that the
>> resource handles (curl backend) never get released after completing
>> the data transfer for that request. This results in big leaks in
>> resourcehandles as well as the curl internal data structures. (~800k
>> on opening nytimes.com and closing the connection)
>
> I started some days ago to write some smart pointer classes for the Gtk
> port (using g_free or g_object_ref/unref) that should be able to fix
> some issues (I already fixed several memory leaks).
>
> Now I just started to use valgrind to find other memory leaks, so this
> and other issues should be hopefully fixed soon.
>
> Thanks!
>
> --
> Marco Barisione
>
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
>
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread Marco Barisione
Il giorno mer, 10/09/2008 alle 13.26 +0530, zaheer ahmad ha scritto:
> hi,
> 
> In the linux Gtk port, with Webkit revision 33493, i see that the
> resource handles (curl backend) never get released after completing
> the data transfer for that request. This results in big leaks in
> resourcehandles as well as the curl internal data structures. (~800k
> on opening nytimes.com and closing the connection)
> 
> The reason is that the ResourceHandle ref count never drops to 0,
> resouce loaders drop their refcount correctly, but the ref done by the
> Resource handle onitself  (source below) before handing over to the
> resourcehandle manager is not matched with a deref.

It seems that the ref was added to all the ports to adapt to the change in
r16803, so that part is right. The problem is that the resource handle should
delete itself when there is an error or when the load is completed, but that
doesn't happen in the curl backend. Later I will also check the soup backend,
as it may have the same problem.

Thanks for your report, you saved me the time of trying to find where the
extra reference should have been added/removed.

-- 
Marco Barisione

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread Marco Barisione
Il giorno mer, 10/09/2008 alle 07.22 -0700, Mike Emmel ha scritto:
> This leak is fixed in the ncurl port.

Is it possible to backport the fix?

-- 
Marco Barisione

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread zaheer ahmad
hi,
The fix only helps a little, as we see the bigger leaks inside curl itself.
Feedback from curl experts suggests that this design is correct... let me know
if you are aware of this issue.

== here's the mail snapshot.
we are seeing big leaks in curl (Curl_connect - 600-800k and Curl_open -
~200k) when we browse through as little as a few websites. These values
keep increasing as we browse more sites.

here's the high-level logic of the WebKit/curl interaction (sketched in code
after this snapshot):
1- create a multi handle at the start of the program
2- keep creating easy handles for each request
3- when a request is done, remove it from the multi handle and clean up the
handle
4- the multi handle is never released (it stays till the end of the program)

This design assumes that the multi handle has bounded memory usage as we
keep adding and removing easy handles, but that seems not to be true
given the leaks.
==
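
In code, the lifecycle above looks roughly like this (simplified, error
handling omitted; this mirrors the described design, not the exact WebKit
sources):

#include <curl/curl.h>

static CURLM* multiHandle = 0;

void initLoader()                      // 1. once, at program start
{
    multiHandle = curl_multi_init();
}

CURL* startRequest(const char* url)    // 2. one easy handle per request
{
    CURL* handle = curl_easy_init();
    curl_easy_setopt(handle, CURLOPT_URL, url);
    curl_multi_add_handle(multiHandle, handle);
    return handle;
}

void finishRequest(CURL* handle)       // 3. when the request is done
{
    curl_multi_remove_handle(multiHandle, handle);
    curl_easy_cleanup(handle);
}

// 4. multiHandle is never passed to curl_multi_cleanup(); it lives until the
//    program exits, which is where we observe the growth.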


>> Now I just started to use valgrind to find other memory leaks, so this
and other issues should be hopefully fixed soon.

These are not traditional memory leaks; things are being held on to longer
than they should be, so they are more like functional leaks. Does valgrind
help with that too?

thanks,
Zaheer


On Wed, Sep 10, 2008 at 8:02 PM, Marco Barisione <
[EMAIL PROTECTED]> wrote:

> Il giorno mer, 10/09/2008 alle 07.22 -0700, Mike Emmel ha scritto:
> > This leak is fixed in the ncurl port.
>
> Is it possible to backport the fix?
>
> --
> Marco Barisione
>
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
>
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread Mike Emmel
Look, I had to change to one multi handle per easy handle (basically just an
asynchronous handle) to get everything to clean up.
It's a significant refactoring and a better design.

What it points to on the curl side is the need for an asynchronous simple
handle.

Also, polling was removed as much as possible; curl does not send decent
timeouts if it has real work to perform, so this is still an issue. However,
open file handles are handled in the event loop select.

Curl needs to be extended to have the concept of a work request and a
longer-term watch timeout.

So in my opinion the issues are fixed, at least to the extent possible
without help from the curl team.
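
For reference, the shape of the "one multi handle per easy handle" idea is
roughly the following (an illustrative sketch, not the actual ncurl code):

#include <curl/curl.h>

struct AsyncJob {
    CURLM* multi;   // private multi handle, owned by this request
    CURL*  easy;
};

AsyncJob* startJob(const char* url)
{
    AsyncJob* job = new AsyncJob;
    job->multi = curl_multi_init();
    job->easy = curl_easy_init();
    curl_easy_setopt(job->easy, CURLOPT_URL, url);
    curl_multi_add_handle(job->multi, job->easy);
    return job;
}

void finishJob(AsyncJob* job)
{
    // everything is torn down together, so nothing outlives the request
    curl_multi_remove_handle(job->multi, job->easy);
    curl_easy_cleanup(job->easy);
    curl_multi_cleanup(job->multi);
    delete job;
}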

On Wed, Sep 10, 2008 at 7:53 AM, zaheer ahmad <[EMAIL PROTECTED]> wrote:
> hi,
> The fix only helps little as we see the bigger leaks in curl. feedback from
> curl experts suggests that this design is correct.. let me know if you are
> aware of this issue
>
> == here's the mail snapshot.
> we are seeing big leaks in curl (Curl_connect - 600-800k and Curl_open -
> ~200k) when we browse through as little as few websites. This values
> keep increasing as we browse more sites.
>
> heres the high level logic of webkit=curl interaction
> 1- create a multi handle at the start of program
> 2- keep creating easy handles for each request
> 3- when request is done remove it from multi handle and clean up the
> handle
> 4- multi handle is never released (stays till the end of program)
>
> This design assumes that multi handle has a bounded memory usage as we
> keep adding and removing easy handles, but that seems to be not true
> with the leaks.
> ==
>
>
>>> Now I just started to use valgrind to find other memory leaks, so this
> and other issues should be hopefully fixed soon.
>
> these are not traditional memory leaks, you are holding on to things longer
> than you should, so they are more functional leaks. Does valgrind help in
> that too?
>
> thanks,
> Zaheer
>
>
> On Wed, Sep 10, 2008 at 8:02 PM, Marco Barisione
> <[EMAIL PROTECTED]> wrote:
>>
>> Il giorno mer, 10/09/2008 alle 07.22 -0700, Mike Emmel ha scritto:
>> > This leak is fixed in the ncurl port.
>>
>> Is it possible to backport the fix?
>>
>> --
>> Marco Barisione
>>
>> ___
>> webkit-dev mailing list
>> webkit-dev@lists.webkit.org
>> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
>
>
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
>
>
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] Can someone give insight on this error?

2008-09-10 Thread Ryan McGrath
Hey,

Lurked for some time now, though still not sure if this is the correct 
mailing list to ask on, so feel free to shout and scream at me if I'm 
wrong. ;)

I was doing some work on making various templates compatible with 
Webkit's engine, and I've run into this error (intermittently, a refresh 
will sometimes fix it) on most versions of Webkit.

"Operation could not be completed. (kCFErrorDomainCFNetwork error 302.)"
(kCFErrorDomainCFNetwork:302)

I've noticed that the exact same symptoms occur in Chrome (I know it's
Webkit, though I'm not sure to what extent the codebases are the same -
shouldn't be too different I assume) under the following bug:

"Error 2 (net::ERR_FAILED): Unknown error."

Google returned various results, nothing really solid in terms of what
the issue is or whether there's a fix - anybody got an idea? Are the two
errors just completely different, and Chrome's the problem (and I should
take it over to their bug tracker)?

Thanks again,

- Ryan McGrath
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Can someone give insight on this error?

2008-09-10 Thread Mark Rowe

On Sep 10, 2008, at 12:32 PM, Ryan McGrath wrote:

> I was doing some work on making various templates compatible with
> Webkit's engine, and I've run into this error (intermittently, a  
> refresh
> will sometimes fix it) on most versions of Webkit.
>
> /Operation could not be completed. (kCFErrorDomainCFNetwork error  
> 302.)”
> (kCFErrorDomainCFNetwork:302)

Error 302 in kCFErrorDomainCFNetwork maps to  
kCFErrorHTTPConnectionLost, which indicates that "The connection to  
the server was dropped. This usually indicates a highly overloaded  
server".

> /I've noticed that the exact same symptoms occur in Chrome (I know  
> it's
> Webkit, though I'm not sure to what extent the codebases are the  
> same -
> shouldn't be too different I assume) under the following bug:
>
> /Error 2 (net::ERR_FAILED): Unknown error.

The underlying HTTP stack in Chrome is completely different than what  
is used in Safari's WebKit.  Safari's WebKit makes use of CFNetwork,  
but I'm not sure what exactly Chrome uses. If you have reproducible  
instances of this problem you should file bug reports against  
CFNetwork () and Chrome 
() so that they can be investigated by the respective teams.  Given  
that two different HTTP stacks are running into a similar issue it is  
quite possible that the problem is in fact a server-side issue where  
the server is prematurely dropping the connection.

- Mark

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] WebKit and Windows/Cygwin

2008-09-10 Thread Frank.Lautenbach
Hi Mark,

First of all, I tried the Apple build by unsetting the QTDIR environment
variable. It got a little farther but died in various places. However, I
decided to abandon that build and attempt the Qt build per the instructions at
http://trac.webkit.org/wiki/BuildingQtOnWindows. It also fails, although
differently. The following is the build output. Please note that my source
tree is in my cygwin directory structure, but I was not building in a cygwin
shell; that's just where the source was from my previous build attempts using
cygwin. Also note that the initial "The system cannot find the path specified"
error is due to a reference to /dev/null in a perl script within a system
call.

At this point, I am at a total loss, as I have yet to be able to build
anything.




The system cannot find the path specified.
Calling 'qmake CONFIG+=qt-port -r OUTPUT_DIR=C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release C:/cygwin/home/Administrator/WebKit/WebKit.pro CONFIG+=release CONFIG-=debug' in C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release

c:\Projects\qt-win-opensource-src-4.4.1\bin\rcc.exe: File does not exist '..\..\..\WebCore\page\inspector\WebKit.qrc'
c:\Projects\qt-win-opensource-src-4.4.1\bin\rcc.exe: File does not exist '..\..\..\WebCore\Resources\WebKitResources.qrc'
c:\Projects\qt-win-opensource-src-4.4.1\bin\rcc.exe: File does not exist '..\..\..\WebCore\platform\qt\WebCoreResources.qrc'
c:\Projects\qt-win-opensource-src-4.4.1\bin\rcc.exe: File does not exist '..\..\..\WebCore\page\inspector\WebKit.qrc'
c:\Projects\qt-win-opensource-src-4.4.1\bin\rcc.exe: File does not exist '..\..\..\WebCore\Resources\WebKitResources.qrc'
c:\Projects\qt-win-opensource-src-4.4.1\bin\rcc.exe: File does not exist '..\..\..\WebCore\platform\qt\WebCoreResources.qrc'
Reading C:/cygwin/home/Administrator/WebKit/JavaScriptCore/JavaScriptCore.pro [C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release//JavaScriptCore]
Reading C:/cygwin/home/Administrator/WebKit/WebCore/WebCore.pro [C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release//WebCore]
Reading C:/cygwin/home/Administrator/WebKit/JavaScriptCore/kjs/jsc.pro [C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release//JavaScriptCore/kjs]
Reading C:/cygwin/home/Administrator/WebKit/WebKit/qt/QtLauncher/QtLauncher.pro [C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release//WebKit/qt/QtLauncher]
Reading C:/cygwin/home/Administrator/WebKit/WebKit/qt/tests/tests.pro [C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release//WebKit/qt/tests]
 Reading C:/cygwin/home/Administrator/WebKit/WebKit/qt/tests/qwebframe/qwebframe.pro [C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release/WebKit/qt/tests//qwebframe]
 Reading C:/cygwin/home/Administrator/WebKit/WebKit/qt/tests/qwebpage/qwebpage.pro [C:/cygwin/home/Administrator/WebKit/WebKitBuild/Release/WebKit/qt/tests//qwebpage]

Microsoft (R) Program Maintenance Utility Version 8.00.50727.42
Copyright (C) Microsoft Corporation.  All rights reserved.

        cd JavaScriptCore\ && "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\nmake.exe" -f Makefile

Microsoft (R) Program Maintenance Utility Version 8.00.50727.42
Copyright (C) Microsoft Corporation.  All rights reserved.

        "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\nmake.exe" -f Makefile.Release

Microsoft (R) Program Maintenance Utility Version 8.00.50727.42
Copyright (C) Microsoft Corporation.  All rights reserved.

        perl C:/cygwin/home/Administrator/WebKit/JavaScriptCore/pcre/dftables tmp\chartables.c --preprocessor="cl /E"
Error in tempfile() using \tmp\dftables-.in: Parent directory (\tmp\) is not a directory at C:/cygwin/home/Administrator/WebKit/JavaScriptCore/pcre/dftables line 245
NMAKE : fatal error U1077: 'C:\Perl\bin\perl.EXE' : return code '0x2'
Stop.
NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin\nmake.exe"' : return code '0x2'
Stop.
NMAKE : fatal error U1077: 'cd' : return code '0x2'
Stop.

-Original Message-
From: Mark Rowe [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, September 09, 2008 4:01 PM
To: Lautenbach Frank SOFTECHNICS
Cc: [EMAIL PROTECTED]; webkit-dev@lists.webkit.org Development
Subject: Re: [webkit-dev] WebKit and Windows/Cygwin


On Sep 9, 2008, at 12:59 PM, [EMAIL PROTECTED] wrote:

> Ok ... but the Windows build requires me to be in cygwin for other
> reasons. I guess I'm basically trying to confirm that as it stands
> today, you cannot build Qt Windows port as specified on webkit.org due
> to cygwin not supporting the required version of qt.

If you want to build the Qt port on Windows, you should follow the  
instructions at .  As  
I mentioned previously, the instructions at
 are related to building Apple's Windows port rather than the Qt port.

Kind regards,

Mark Rowe

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

Re: [webkit-dev] Can someone give insight on this error?

2008-09-10 Thread Mark Pauley
For what it's worth, this is a known issue with CFNetwork; essentially, a
status code translation changed unexpectedly.

No need to file another bug on the 302 errors :)


On Sep 10, 2008, at 12:43 PM, Mark Rowe wrote:

>
> On Sep 10, 2008, at 12:32 PM, Ryan McGrath wrote:
>
>> I was doing some work on making various templates compatible with
>> Webkit's engine, and I've run into this error (intermittently, a
>> refresh
>> will sometimes fix it) on most versions of Webkit.
>>
>> /Operation could not be completed. (kCFErrorDomainCFNetwork error
>> 302.)”
>> (kCFErrorDomainCFNetwork:302)
>
> Error 302 in kCFErrorDomainCFNetwork maps to
> kCFErrorHTTPConnectionLost, which indicates that "The connection to
> the server was dropped. This usually indicates a highly overloaded
> server".
>
>> /I've noticed that the exact same symptoms occur in Chrome (I know
>> it's
>> Webkit, though I'm not sure to what extent the codebases are the
>> same -
>> shouldn't be too different I assume) under the following bug:
>>
>> /Error 2 (net::ERR_FAILED): Unknown error.
>
> The underlying HTTP stack in Chrome is completely different than what
> is used in Safari's WebKit.  Safari's WebKit makes use of CFNetwork,
> but I'm not sure what exactly Chrome uses. If you have reproducible
> instances of this problem you should file bug reports against
> CFNetwork () and Chrome 
> (> ) so that they can be investigated by the respective teams.  Given
> that two different HTTP stacks are running into a similar issue it is
> quite possible that the problem is in fact a server-side issue where
> the server is prematurely dropping the connection.
>
> - Mark
>
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

_Mark
[EMAIL PROTECTED]




___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] [webkit-changes] [24723] trunk/WebCore

2008-09-10 Thread Maciej Stachowiak


On Sep 10, 2008, at 4:08 AM, Arvid Nilsson wrote:

> Hello, sorry to bring this old topic up again. Nothing has happened in the
> last year or so, and this issue is bothering me again...
>
> I am thinking that libxml2 should be somehow considered "default" since it
> is used by most existing ports, and also is quite portable, making it likely
> that new ports will use it. Maybe this idea will make it possible to make a
> patch that permits using something else than libxml2 without having to patch
> build systems other than qmake.
>
> Would it be possible to use a variety on the ResourceHandle porting approach
> and have a WebCore/dom/XMLTokenizerBase.{h, cpp}. The libxml2 implementation
> lives in WebCore/dom/XMLTokenizer.{h, cpp}. If you don't specify in your
> build system that you should use WebCore/dom/port/XMLTokenizer.h and
> WebCore/dom/port/XMLTokenizerPort.cpp, you will get libxml2.
>
> An added bonus is that you don't have to come up with a better name than
> "XMLTokenizerLibXml2.cpp" since it's still called "XMLTokenizer.cpp".


I think we should wrap the XML parser API at the platform/ layer. The  
main risk in doing so would be performance.


 - Maciej




> /Arvid
>
> On Sun, Jul 29, 2007 at 7:53 PM, Lars Knoll <[EMAIL PROTECTED]> wrote:
>
> > On Sunday 29 July 2007 08:47:50 Maciej Stachowiak wrote:
> > > On Jul 28, 2007, at 3:52 AM, Lars Knoll wrote:
> > > > On Saturday 28 July 2007 00:26:19 Maciej Stachowiak wrote:
> > > >> On Jul 27, 2007, at 11:36 AM, Lars Knoll wrote:
> > > >>
> > > >> Other organizations have requested the ability to use other XML
> > > >> parsers as well, such as expat. Seems like in the long run we want a
> > > >> different approach than just ifdefs in the XMLTokenizer.cpp file. It
> > > >> seems like the best would be some abstraction layer on top of the
> > > >> parser library, but if that is difficult then your option #2 sounds
> > > >> like a docent long-run approach. I would have expected just about
> > > >> every XML parsing library to have a SAX-like API, which shouldn't be
> > > >> too hard to abstract, but perhaps QXml works differently.
> > > >
> > > > I guess that assumption doesn't hold. QXmlStream is a streaming
> > > > parser with an API that is very different from SAX. It IMO a whole
> > > > lot simpler to use than a SAX like API and is inspired from similar
> > > > APIs in the Java world. If you're interested, have a look at
> > > > http://doc.trolltech.com/4.3/qxmlstreamreader.html
> > >
> > > I'm told libxml has a StreamReader-style API now as well, so if that's
> > > the better alternative, we could design the XML code around that style
> > > of API (though probably not right at the moment).
> >
> > No, for the moment, I'd rather just go with the approach I've posted in
> > bug 14791. Once there are requests for more parser backends, we could
> > rethink this, but for now I think we have more urgent things to do.
> >
> > Cheers,
> > Lars
> > ___
> > webkit-dev mailing list
> > webkit-dev@lists.webkit.org
> > http://lists.webkit.org/mailman/listinfo/webkit-dev
>
> ___
> webkit-dev mailing list
> webkit-dev@lists.webkit.org
> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] How does the Javascript garbage collection work?

2008-09-10 Thread Josh Chia (谢任中)
Hi,

I'm trying to debug some memory leaks and now need to understand what
collector.{h,cpp} are doing.  Could someone point me to some documents to
explain how the garbage collector works?  I've also run valgrind and it
complained that CollectorBitmap::get() uses an unreferenced value.  I'm not
sure whether this is really wrong, so I'll have to first understand how the
garbage collector works, the alignment magic used with JSCell and whatever
other GC magic I could probably figure out on my own but only after staring
at the code for a long time.

Josh
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread zaheer ahmad
Hi Mike,
The ncurl port is not yet in the official builds; meanwhile, how do you
suggest fixing this in the current baseline?

One of the changes that does help is to periodically clean up the multi handle
when the running job count drops to 0 and recreate it on the next request (this
is just a temporary fix till we find the real issue in curl).
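
In pseudo-patch form the workaround looks roughly like this (a sketch only;
the member names m_runningJobs and m_curlMultiHandle are from memory and may
not match the tree exactly):

void ResourceHandleManager::cleanupIfIdle()
{
    if (m_runningJobs)
        return;
    // drop curl's cached per-multi state so its memory is actually returned
    curl_multi_cleanup(m_curlMultiHandle);
    m_curlMultiHandle = 0;
}

CURLM* ResourceHandleManager::multiHandle()
{
    // lazily recreated when the next request is added
    if (!m_curlMultiHandle)
        m_curlMultiHandle = curl_multi_init();
    return m_curlMultiHandle;
}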

I have a few comments on the ncurl patch:
https://bugs.webkit.org/show_bug.cgi?id=17972
- Is it better than the current timer-driven behavior, since the glib main
loop polls the fds faster if it's free? That seems to be a small gain, though,
since the timeout is very small.
- In the current implementation curl_multi_perform may block if there's lots
of data queued up on multiple handles, but that can be easily mitigated by
returning frequently from curl_multi_perform.
- What about doing the select in a separate thread, as glibcurl does? I think
this is safe as the perform happens in the main thread.

thanks,
Zaheer

On Wed, Sep 10, 2008 at 8:40 PM, Mike Emmel <[EMAIL PROTECTED]> wrote:

> Look I had to change to one multi handle per handle basically just and
> asynchronous handle to get everything to clean up.
> Its a significant refactoring and better design.
>
> What it points to on the curl side is the need for and asynchronous
> simple handle.
>
> Also polling was removed as much as possible curl does not send decent
> time outs if it has real work
> to perform so this is still a issue. However open file handles are
> handled in the event loop select.
>
> Curl needs to be extended to have the concept of a work request and a
> longer term watch timeout.
>
> So in my opinion the issues are fixed at least to the extent possible
> without help from the curl team.
>
> On Wed, Sep 10, 2008 at 7:53 AM, zaheer ahmad <[EMAIL PROTECTED]>
> wrote:
> > hi,
> > The fix only helps little as we see the bigger leaks in curl. feedback
> from
> > curl experts suggests that this design is correct.. let me know if you
> are
> > aware of this issue
> >
> > == here's the mail snapshot.
> > we are seeing big leaks in curl (Curl_connect - 600-800k and Curl_open -
> > ~200k) when we browse through as little as few websites. This values
> > keep increasing as we browse more sites.
> >
> > heres the high level logic of webkit=curl interaction
> > 1- create a multi handle at the start of program
> > 2- keep creating easy handles for each request
> > 3- when request is done remove it from multi handle and clean up the
> > handle
> > 4- multi handle is never released (stays till the end of program)
> >
> > This design assumes that multi handle has a bounded memory usage as we
> > keep adding and removing easy handles, but that seems to be not true
> > with the leaks.
> > ==
> >
> >
> >>> Now I just started to use valgrind to find other memory leaks, so this
> > and other issues should be hopefully fixed soon.
> >
> > these are not traditional memory leaks, you are holding on to things
> longer
> > than you should, so they are more functional leaks. Does valgrind help in
> > that too?
> >
> > thanks,
> > Zaheer
> >
> >
> > On Wed, Sep 10, 2008 at 8:02 PM, Marco Barisione
> > <[EMAIL PROTECTED]> wrote:
> >>
> >> Il giorno mer, 10/09/2008 alle 07.22 -0700, Mike Emmel ha scritto:
> >> > This leak is fixed in the ncurl port.
> >>
> >> Is it possible to backport the fix?
> >>
> >> --
> >> Marco Barisione
> >>
> >> ___
> >> webkit-dev mailing list
> >> webkit-dev@lists.webkit.org
> >> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
> >
> >
> > ___
> > webkit-dev mailing list
> > webkit-dev@lists.webkit.org
> > http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
> >
> >
>
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Curl resourcehandle leaks in Linux/Gtk port

2008-09-10 Thread Mike Emmel
On Wed, Sep 10, 2008 at 10:15 PM, zaheer ahmad <[EMAIL PROTECTED]> wrote:
> hi mike,
> The ncurl port is not yet in the official builds. meanwhile how do you
> suggest to fix this in the current baseline.
>
No suggestions. I wrote the original code and did not care much for it even
when I wrote it. I was waiting on the newer versions of curl that have
callbacks for file descriptors. The original approach was copied from the
examples, which were intended for command-line apps, not browsers. The new
file handle callbacks were added to address this problem and make curl more
UI friendly.
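
For the record, the file-descriptor callbacks in question are curl's
multi_socket API. A rough sketch of how they get wired up (the glib main loop
integration is omitted and the callback bodies here are only illustrative):

#include <curl/curl.h>

static int socketCallback(CURL* easy, curl_socket_t s, int what, void* userp, void* socketp)
{
    // add, modify or remove a watch on s in the event loop, depending on
    // `what` (CURL_POLL_IN, CURL_POLL_OUT, CURL_POLL_REMOVE, ...)
    return 0;
}

static int timerCallback(CURLM* multi, long timeoutMs, void* userp)
{
    // arm a single one-shot timeout for timeoutMs instead of polling on a fixed timer
    return 0;
}

void setupMulti(CURLM* multi)
{
    curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socketCallback);
    curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timerCallback);
    // when a watched descriptor fires, call:
    //   curl_multi_socket_action(multi, fd, 0, &stillRunning);
}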

> one of the changes that does help is to periodically cleanup the multihandle
> when the running job count drops to 0 and recreate on the next request (this
> is just a temporary fix till we find the real issue in curl)
>
> i have few comments on the ncurl patch:
> https://bugs.webkit.org/show_bug.cgi?id=17972
> - is it better than the current timer driven behavior, since the glib main
> loop polls the fds faster if its free- but thats seems to be a small gain
> since the timeout is very small

My primary goal with the changes to the event loop was to work toward
eliminating most timeouts. My focus is on battery-powered systems, and firing
a timer rapidly for minutes at a time drains the battery.

> - in the current implementation curl_multi_perform may block if theres lots
> of data queued up on multiple handles, but that can be easily mitigated by
> returning frequently from curl_multi_perform

In my new implementation, with only one simple handle per multi, the main
event loop runs correctly and checks for events from the user in a timely
fashion.

> - what about doing select in separate thread as glibcurl does. i think this
> is safe as perform happens in the main thread.
>
Why use a different thread to wait in select? That's a design decision left
to the implementor of the main loop; it's outside the scope of the curl
binding to try to make this decision. In general, servicing these data file
handles needs to be cooperative with user input however you implement it.
How this happens depends on the platform.


> thanks,
> Zaheer
>
> On Wed, Sep 10, 2008 at 8:40 PM, Mike Emmel <[EMAIL PROTECTED]> wrote:
>>
>> Look I had to change to one multi handle per handle basically just and
>> asynchronous handle to get everything to clean up.
>> Its a significant refactoring and better design.
>>
>> What it points to on the curl side is the need for and asynchronous
>> simple handle.
>>
>> Also polling was removed as much as possible curl does not send decent
>> time outs if it has real work
>> to perform so this is still a issue. However open file handles are
>> handled in the event loop select.
>>
>> Curl needs to be extended to have the concept of a work request and a
>> longer term watch timeout.
>>
>> So in my opinion the issues are fixed at least to the extent possible
>> without help from the curl team.
>>
>> On Wed, Sep 10, 2008 at 7:53 AM, zaheer ahmad <[EMAIL PROTECTED]>
>> wrote:
>> > hi,
>> > The fix only helps little as we see the bigger leaks in curl. feedback
>> > from
>> > curl experts suggests that this design is correct.. let me know if you
>> > are
>> > aware of this issue
>> >
>> > == here's the mail snapshot.
>> > we are seeing big leaks in curl (Curl_connect - 600-800k and Curl_open -
>> > ~200k) when we browse through as little as few websites. This values
>> > keep increasing as we browse more sites.
>> >
>> > heres the high level logic of webkit=curl interaction
>> > 1- create a multi handle at the start of program
>> > 2- keep creating easy handles for each request
>> > 3- when request is done remove it from multi handle and clean up the
>> > handle
>> > 4- multi handle is never released (stays till the end of program)
>> >
>> > This design assumes that multi handle has a bounded memory usage as we
>> > keep adding and removing easy handles, but that seems to be not true
>> > with the leaks.
>> > ==
>> >
>> >
>> >>> Now I just started to use valgrind to find other memory leaks, so this
>> > and other issues should be hopefully fixed soon.
>> >
>> > these are not traditional memory leaks, you are holding on to things
>> > longer
>> > than you should, so they are more functional leaks. Does valgrind help
>> > in
>> > that too?
>> >
>> > thanks,
>> > Zaheer
>> >
>> >
>> > On Wed, Sep 10, 2008 at 8:02 PM, Marco Barisione
>> > <[EMAIL PROTECTED]> wrote:
>> >>
>> >> Il giorno mer, 10/09/2008 alle 07.22 -0700, Mike Emmel ha scritto:
>> >> > This leak is fixed in the ncurl port.
>> >>
>> >> Is it possible to backport the fix?
>> >>
>> >> --
>> >> Marco Barisione
>> >>
>> >> ___
>> >> webkit-dev mailing list
>> >> webkit-dev@lists.webkit.org
>> >> http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
>> >
>> >
>> > ___
>> > webkit-dev mailing list
>> > webkit-dev@lists.webkit.org
>> > http://lists.webkit.org/mailman/listinfo.cgi

Re: [webkit-dev] HTML5 Application Cache

2008-09-10 Thread Michael(tm) Smith
Michael Nordman <[EMAIL PROTECTED]>, 2008-09-09 11:42 -0700:

> What is the status of the work-in-progress around the HTML5 AppCache that is
> in the repository? Is anybody actively working on that now? I'm interested
> in incorporating support for this feature into Chrome is why I'm asking.

I'd been wondering the same thing myself, so I asked yesterday on
#webkit on irc.freenode.net. The response from a couple of people
there familiar with the code was that support for ApplicationCache
(and I think in general for the offline-webapps part of the HTML5
spec) is on par with what's currently supported in Gecko. One
limitation is that it doesn't support opportunistic caching -- but
Gecko's implementation has the same limitation.

  --Mike

-- 
Michael(tm) Smith
http://people.w3.org/mike/
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] HTML5 Application Cache

2008-09-10 Thread Anders Carlsson

9 sep 2008 kl. 20.42 skrev Michael Nordman:

> What is the status of the work-in-progress around the HTML5 AppCache  
> that is in the repository? Is anybody actively working on that now?  
> I'm interested in incorporating support for this feature into Chrome  
> is why I'm asking.
>
> Michael

Hey Michael!

As far as the specification goes, the two big parts that aren't  
implemented are opportunistic entries, and dynamic entries.

Also, all I/O is currently synchronous, which is of course something that
we'd like to avoid. The relevant code is (as you probably already know) in
WebCore/loader/appcache, but also elsewhere in the loader, surrounded by
#if ENABLE(OFFLINE_WEB_APPLICATIONS).

The code hasn't received a lot of testing (given that the spec is fairly new
and in flux). Some regression tests are in LayoutTests/http/tests/appcache.

Any feedback/comments you have are of course much appreciated!

Anders
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev