Me again.

I'm getting this error when parsing an external URL. I understand that
a common cause is badly formed HTML (or XHTML), and that's fair
enough, but is there any way to switch the parser into a forgiving mode?

Since I'm getting this error from documents over which I have no
control, I need to be able to recover from this situation. Is there a
way to clean the document before parsing it, or to have the parser
ignore the issue and proceed (which would probably be fine in my case)?
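(Not sure which parser you're using, since the error isn't quoted, but assuming it's an XML-strict one: a common workaround is to run the document through a lenient HTML parser instead. A minimal sketch using the stdlib `html.parser`, which tolerates unclosed and stray tags rather than raising, might look like this — the `TagCollector` class and the sample markup are just illustrative:)

```python
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Collects start tags from (possibly malformed) HTML without raising."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

parser = TagCollector()
# Deliberately broken markup: unclosed <b>, stray </i> -- parsed anyway.
parser.feed("<html><body><b>bold<p>text</i></body></html>")
print(parser.tags)  # ['html', 'body', 'b', 'p']
```

Third-party options like BeautifulSoup or lxml's HTML parser (with recovery enabled) take the same "parse what's there" approach and can also be used to clean a document up before handing it to a stricter parser.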

Mike

-- 
http://mail.python.org/mailman/listinfo/python-list
