On Fri, Feb 4, 2011 at 9:11 AM, Stefan Urbanek <[email protected]> wrote:

> Hi,
>
> I'm trying to fetch 1M+ pages over HTTP from a single site. urlgrabber
> seemed like a fast solution, using "keep alive" connections. However, after
> a while my script fails, without any stack trace, with: "Fatal Python
> error: deallocating None"
>
> Here is isolated Python code that fails (for simplicity, a file:// URL is
> used; it gives the same results):
>
> import urlgrabber
>
> url = "file:///some/existing/file.html"
>
> for i in range(0, 15000):
>    print i
>    handle = urlgrabber.urlopen(url, timeout = 1)
>    # do something useful
>    handle.close()
>
> It fails after ~3231 cycles. I am using Python 2.7.1 and urlgrabber-3.9.1.
>
> What am I doing wrong?
>
> Regards,
>
> Stefan
>

Try adding

del handle

just after handle.close(). "Fatal Python error: deallocating None" usually
means something has decremented None's reference count once too often (a
refcount bug in C extension code), which is why explicitly dropping the last
reference to the handle on every iteration may work around it.


or, equivalently:

i = 0
while i < 15000:
    print i
    handle = urlgrabber.urlopen(url, timeout = 1)
    # do something useful
    handle.close()
    del handle
    i += 1
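
If you also want the close to happen when "do something useful" raises, a
with statement covers that in one place. A minimal sketch, assuming only
that the object urlgrabber.urlopen() returns has a close() method (which is
all contextlib.closing needs):

import contextlib
import urlgrabber

url = "file:///some/existing/file.html"

for i in range(15000):
    print i
    # closing() guarantees handle.close() runs even if the body raises
    with contextlib.closing(urlgrabber.urlopen(url, timeout = 1)) as handle:
        pass  # do something useful
    # the name stays bound after the with block, so drop the last
    # reference explicitly, as above
    del handle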


Tim
_______________________________________________
Yum-devel mailing list
[email protected]
http://lists.baseurl.org/mailman/listinfo/yum-devel
