traced
> > >down to _socket.recv. I am calling some web services and each of them
> > >uses about 0.2 sec and 99% of this time is spent on urllib2.urlopen,
> > >while the rest of the call is finished in milliseconds.
> >
> > What happens if you use urlopen
> >uses about 0.2 sec and 99% of this time is spent on urllib2.urlopen,
> >while the rest of the call is finished in milliseconds.
>
> What happens if you use urlopen() by itself?
> --
> Aahz (a...@pythoncraft.com) <*> http://www.pythoncraft.com/
>
by a general problem.
Try to access a "common" https site via urllib2, one you have no
problem connecting to with a browser (or another "http" utility
like "wget" or "curl") from the same machines.
If this works, the problem is associated with the speci
Robin Becker writes:
> I have an application built on 32 bit windows 7 with python 2.7.10.
> The application runs fine on windows 7 and older windows machines, but
> it is failing to connect properly using urllib2 when run on windows 8.
>
> The error CERTIFICATE_VERIFY_FAILED i
I have an application built on 32 bit windows 7 with python 2.7.10.
The application runs fine on windows 7 and older windows machines, but it is
failing to connect properly using urllib2 when run on windows 8.
The error CERTIFICATE_VERIFY_FAILED indicates this is some issue with urllib2 not
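A hedged sketch of the usual fix: Python 2.7.9+ verifies HTTPS certificates by default, so a machine with a missing or stale certificate store fails exactly this way. Passing an explicit `SSLContext` restores control (the `cacert.pem` path below is an assumption; substitute a CA bundle you actually trust):

```python
# Sketch only: build a verifying SSL context to pass to urlopen().
# Python 2.7.9+ accepts urllib2.urlopen(url, context=ctx); Python 3 has
# the same keyword on urllib.request.urlopen.
import ssl

ctx = ssl.create_default_context()   # verification on, system CA store
# If the system store is broken, point at an explicit bundle instead
# (path is hypothetical):
# ctx.load_verify_locations(cafile='cacert.pem')
# urllib2.urlopen('https://example.com/', context=ctx)
```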
Sumeet Sandhu wrote:
> Hi,
>
> I use urllib2 to grab google.com webpages on my Mac over my Comcast home
> network.
>
> I see about 1 error for every 50 pages grabbed. Most exceptions are
> ssl.SSLError, very few are socket.error and urllib2.URLError.
>
> The
Hi,
I use urllib2 to grab google.com webpages on my Mac over my Comcast home
network.
I see about 1 error for every 50 pages grabbed. Most exceptions are
ssl.SSLError, very few are socket.error and urllib2.URLError.
The problem is - after a first exception, urllib2 occasionally stalls for
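An error rate of roughly 1 in 50 usually points at transient network trouble rather than a bug, so the common mitigation is a small retry wrapper. A minimal sketch (the retried exception types and the wrapped function are placeholders; note that in Python 3 both `ssl.SSLError` and `socket.error` are subclasses of `OSError`):

```python
import time

def with_retries(fn, attempts=3, delay=0.1, retry_on=(OSError,)):
    """Call fn(), retrying on the given exceptions with a fixed delay."""
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise          # out of attempts: re-raise the last error
            time.sleep(delay)
```

A fetch then becomes `with_retries(lambda: urlopen(url).read())` instead of a bare `urlopen`.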
h Withings API',
install_requires=['simplejson', 'urllib2', 'hashlib'],
)
When I run the install (sudo python setup.py install) I get an error:
Could not find suitable distribution for Requirement.parse('urllib2')
Any ideas on
e package comes with a setup.py:
from setuptools import setup
setup(
name='pythings',
py_modules=['pythings'],
version='0.1',
description='Python Library for interfacing with Withings API',
install_requires=
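The error follows directly from listing standard-library modules as requirements: `urllib2` and `hashlib` ship with Python 2 itself, have no PyPI distribution, and so can never satisfy an `install_requires` entry. A sketch of the corrected metadata, with only the real third-party dependency left in:

```python
# Corrected metadata sketch for the package above; urllib2 and hashlib
# are removed because they are part of the Python 2 standard library
# and setuptools cannot (and need not) install them.
setup_args = dict(
    name='pythings',
    py_modules=['pythings'],
    version='0.1',
    description='Python Library for interfacing with Withings API',
    install_requires=['simplejson'],   # only installable PyPI packages here
)
# In setup.py proper: from setuptools import setup; setup(**setup_args)
```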
Dennis Lee Bieber wrote:
> >> Connection reset by peer.
> >>
> >> An existing connection was forcibly closed by the remote host.
> >
> >This is not true.
> >The server is under my control. Die client has terminated the connection
> >(or a router between).
> The odds are still good
File "", line 1992, in
File "", line 180, in main
File "", line 329, in get_ID
File "", line 1627, in check_7z
File "C:\Software\Python\lib\urllib2.py", line 154, in urlopen
File "C:\Software\Python\lib\urllib2.py", line 431, in open
File "C:\Software\Python\lib\urllib2.py", line 449, in _open
File "C:\Software\Pyth
Peter Otten <__pete...@web.de> wrote:
> > I have a problem with it: There is no feedback for the user about the
> > progress of the transfer, which can last several hours.
> >
> > For small files shutil.copyfileobj() is a good idea, but not for huge
> > ones.
>
> Indeed. Have a look at the sourc
Ulli Horlacher wrote:
> Peter Otten <__pete...@web.de> wrote:
> > - consider shutil.copyfileobj to limit memory usage when dealing with data
> > of arbitrary size.
> >
> > Putting it together:
> >
> > with open(sz, "wb") as szo:
> > shutil.copyfileobj(u, szo)
>
> This writes the
Peter Otten <__pete...@web.de> wrote:
> Ulli Horlacher wrote:
>
> > if u.getcode() == 200:
> > print(u.read(),file=szo,end='')
> > szo.close()
> > else:
> > die('cannot get %s - server reply: %d' % (szurl,u.getcode()))
>
> More random remarks:
Always welcome - I am here
Peter Otten <__pete...@web.de> wrote:
> > It works with Linux, but not with Windows 7, where the downloaded 7za.exe
> > is corrupt: it has the wrong size, 589044 instead of 587776 Bytes.
> >
> > Where is my error?
>
> > sz = path.join(fexhome,'7za.exe')
> > szurl = "http://fex.belwue.de
Ulli Horlacher wrote:
> if u.getcode() == 200:
> print(u.read(),file=szo,end='')
> szo.close()
> else:
> die('cannot get %s - server reply: %d' % (szurl,u.getcode()))
More random remarks:
- print() gives the impression that you are dealing with text, and using it
with
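The size mismatch (589044 vs 587776 bytes) is exactly the symptom of text-mode writing on Windows: every `\n` byte in the binary payload gets expanded to `\r\n`. Opening the output with `'wb'` and streaming with `shutil.copyfileobj` avoids both the translation and holding the whole file in memory. A runnable sketch, with `io.BytesIO` standing in for the `urlopen()` response so it works offline:

```python
import io
import os
import shutil
import tempfile

payload = b'MZ\x90\x00\n\x1a\nPK'        # fake binary data with raw \n bytes
response = io.BytesIO(payload)           # stands in for urllib2.urlopen(szurl)

path = os.path.join(tempfile.mkdtemp(), '7za.exe')
with open(path, 'wb') as out:            # 'wb': no \n -> \r\n translation
    shutil.copyfileobj(response, out)    # streams in chunks, low memory
```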
sz,e.strerror))
Unrelated, but I recommend that you let the exceptions bubble up for easier
debugging.
Python is not Perl ;)
> import urllib2
> printf("\ndownloading %s\n",szurl)
> try:
> req = urllib2.Request(szurl)
> req.add_header('User-Agent'
szurl = "http://fex.belwue.de/download/7za.exe"
try:
szo = open(sz,'w')
except (IOError,OSError) as e:
die('cannot write %s - %s' % (sz,e.strerror))
import urllib2
printf("\ndownloading %s\n",szurl)
try:
req = urllib2.Request(
on.
It may have looked at the "User-Agent" request header to differentiate
between a browser request and an automated (script) request.
To work around this, you may provide a "User-Agent" header
to your "urllib2.Request" (see the documentation) letting
the "Use
(default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> request = urllib2.Request('http://guggenheiminvestments.com/products/etf/gsy/holdings')
re
Steven,
Thank you! User advice was on point.
Sumit
On Tue, Sep 2, 2014 at 11:29 PM, dieter wrote:
> Steven D'Aprano writes:
> > ...
> > I'm not an expert, but that sounds like a fault at the server end. I just
> > tried it in Chrome, and it worked, and then with wget, and I get the same
> >
Steven D'Aprano writes:
> ...
> I'm not an expert, but that sounds like a fault at the server end. I just
> tried it in Chrome, and it worked, and then with wget, and I get the same
> sort of error:
> ...
> Sounds like if the server doesn't recognise the browser, it gets
> confused and ends up in
On Tue, 02 Sep 2014 02:08:47 -0400, Sumit Ray wrote:
> Hi,
>
> I've tried versions of the following but continue to get errors:
>
> - snip -
> url = 'https://www.usps.com/send/official-abbreviations.htm'
> request = urllib2.buil
Hi,
I've tried versions of the following but continue to get errors:
- snip -
url = 'https://www.usps.com/send/official-abbreviations.htm'
request = urllib2.build_opener(urllib2.HTTPRedirectHandler).open(url)
- snip -
Generat
Hi,
I've tried various versions but continue to get the following error:
--
https://mail.python.org/mailman/listinfo/python-list
On Fri, Jun 20, 2014 at 12:19 AM, Robin Becker wrote:
> in practice [monkeypatching socket] worked well with urllib in python27.
Excellent! That's empirical evidence of success, then.
Like with all monkey-patching, you need to keep it as visible as
possible, but if your driver script is only a p
t sure why it hasn't
been updated beyond that), pull up urllib2.py, and step through
manually, seeing where the hostname gets turned into an IP address.
Hence, this code:
in practice this approach worked well with urllib in python27.
--
Robin Becker
On Thu, Jun 19, 2014 at 9:51 PM, Robin Becker wrote:
>> Since you mention urllib2, I'm assuming this is Python 2.x, not 3.x.
>> The exact version may be significant.
>>
> I can use python >= 3.3 if required.
The main reason I ask is in case something's changed.
..
Since you mention urllib2, I'm assuming this is Python 2.x, not 3.x.
The exact version may be significant.
I can use python >= 3.3 if required.
Can you simply query the server by IP address rather than host name?
According to the docs, urllib2.urlopen() doesn
e
> instance on a different ip address.
Since you mention urllib2, I'm assuming this is Python 2.x, not 3.x.
The exact version may be significant.
Can you simply query the server by IP address rather than host name?
According to the docs, urllib2.urlopen() doesn't check the
certificate
ib or urllib2 to use my host name and a specified ip
address?
I can always change my hosts file, but that is inconvenient and potentially
dangerous.
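One process-local alternative to editing the hosts file is to wrap `socket.getaddrinfo`, which urllib, urllib2 and httplib all funnel name resolution through. A sketch (the host/IP pair is hypothetical; note the Host header still carries the original name, which is usually the point of the exercise):

```python
import socket

PINNED = {'service.example': '10.1.2.3'}   # hypothetical host -> IP pin

_real_getaddrinfo = socket.getaddrinfo

def pinned_getaddrinfo(host, *args, **kwargs):
    # Substitute the pinned IP before the real lookup; this process only,
    # no /etc/hosts changes needed.
    return _real_getaddrinfo(PINNED.get(host, host), *args, **kwargs)

socket.getaddrinfo = pinned_getaddrinfo
```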
--
Robin Becker
Hi,
this is probably a dumb question, but I just cannot find a way
to create an AuthHandler which would add an Authorization header
to the FIRST request. The only things I see in urllib2.py are
various http_error handlers, which add the Authorization header to the
ADDITIONAL request which handles the
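The handlers in urllib2 are indeed reactive: they answer a 401/407 after the first round-trip. To authenticate preemptively you can simply attach the Authorization header yourself. A Basic-auth sketch (Python 3 names; swap in `urllib2.Request` for Python 2; the credentials and URL are placeholders):

```python
import base64
from urllib.request import Request

user, password = 'alice', 'secret'       # hypothetical credentials
token = base64.b64encode(('%s:%s' % (user, password)).encode('ascii')).decode('ascii')

req = Request('http://example.com/api')  # hypothetical URL
req.add_header('Authorization', 'Basic %s' % token)  # sent with the FIRST request
```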
I understand the problem now: the echo is a string, which can contain text but
not an array.
I've changed the PHP script so I get only text separated with commas, and in
Python I separate the text fields and put them in the array, with the split
method I saw in the answer of J. Gordon. Thank yo
On 2014-01-10 20:57, vanommen.rob...@gmail.com wrote:
Hello,
I have a Raspberry Pi with 10 temperature sensors. I send the data from the
sensors and some other values with json encoding and:
result = urllib2.urlopen(request, postData)
to an online PHP script which places the data in a mysql
On Fri, 10 Jan 2014 12:57:59 -0800, vanommen.robert wrote:
> Hello,
>
> I have a Raspberry Pi with 10 temperature sensors. I send the data from
> the sensors and some other values with json encoding and:
>
> result = urllib2.urlopen(request, postData)
>
> to a online PH
On Fri, 10 Jan 2014 12:57:59 -0800 (PST), vanommen.rob...@gmail.com
wrote:
No idea about the PHP.
In Python, when I do
para = result.read()
print para
the output is:
[null,null,null,null,null,"J"]
That's a string that just looks like a list.
This is correct according to the data in P
vanommen.rob...@gmail.com writes:
> result = urllib2.urlopen(request, postData)
> para = result.read()
> print para
> the output is:
> [null,null,null,null,null,"J"]
> print para[1]
> the output is:
> n
Probably because para is a string with the v
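Indeed: `para` is the JSON text itself, so `para[1]` is just its second character (the `n` of `null`). Decoding it yields the list the PHP script actually sent:

```python
import json

para = '[null,null,null,null,null,"J"]'   # what result.read() returned
values = json.loads(para)                 # PHP nulls become Python None
print(values[5])                          # the element actually wanted: 'J'
```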
Hello,
I have a Raspberry Pi with 10 temperature sensors. I send the data from the
sensors and some other values with json encoding and:
result = urllib2.urlopen(request, postData)
to an online PHP script which places the data in a mysql database.
In the result:
result.read()
I am trying to
On 12/10/2013 10:36 AM, David Robinow wrote:
> On Tue, Dec 10, 2013 at 11:59 AM, wrote:
>> On 12/10/2013 09:22 AM, Mark Lawrence wrote:
> ...
>> Mark is one of the resident trolls here. Among his other traits
>> is his delusion that he is Lord High Commander of this list.
>> Like with other trol
On Tue, Dec 10, 2013 at 11:59 AM, wrote:
> On 12/10/2013 09:22 AM, Mark Lawrence wrote:
...
> Mark is one of the resident trolls here. Among his other traits
> is his delusion that he is Lord High Commander of this list.
> Like with other trolls, the best advice is to ignore him (which
> I'm not
On 2013-12-10, Chris Angelico wrote:
> On Wed, Dec 11, 2013 at 3:49 AM, rusi wrote:
>> There are 10 kinds of people in the world: those who understand
>> binary and those who dont.
>
> There are 10 kinds of people in the world: those who understand Gray
> Code, those who don't, and those who conf
On 10/12/2013 16:59, ru...@yahoo.com wrote:
On 12/10/2013 09:22 AM, Mark Lawrence wrote:
On 10/12/2013 15:48, ru...@yahoo.com wrote:
[...]
There is no "you might want to" about it. There are two options here,
either read and action the page so we don't see double spaced crap
amongst other thing
On 12/10/2013 09:22 AM, Mark Lawrence wrote:
> On 10/12/2013 15:48, ru...@yahoo.com wrote:
> [...]
> There is no "you might want to" about it. There are two options here,
> either read and action the page so we don't see double spaced crap
> amongst other things, use another tool, or don't post.
On 10/12/2013 16:49, rusi wrote:
On Tuesday, December 10, 2013 9:52:47 PM UTC+5:30, Mark Lawrence wrote:
On 10/12/2013 15:48, rurpy wrote:
On 12/10/2013 06:47 AM, Chris Angelico wrote:
On Wed, Dec 11, 2013 at 12:35 AM, harish.barvekar wrote:
Also: You appear to be using Google Groups, which i
On Wed, Dec 11, 2013 at 3:49 AM, rusi wrote:
> There are 10 kinds of people in the world: those who understand
> binary and those who dont.
There are 10 kinds of people in the world: those who understand Gray
Code, those who don't, and those who confuse it with binary.
ChrisA
On Tuesday, December 10, 2013 9:52:47 PM UTC+5:30, Mark Lawrence wrote:
> On 10/12/2013 15:48, rurpy wrote:
> > On 12/10/2013 06:47 AM, Chris Angelico wrote:
> >> On Wed, Dec 11, 2013 at 12:35 AM, harish.barvekar wrote:
> >> Also: You appear to be using Google Groups, which is the Mos Eisley of
>
On 10/12/2013 15:48, ru...@yahoo.com wrote:
On 12/10/2013 06:47 AM, Chris Angelico wrote:
On Wed, Dec 11, 2013 at 12:35 AM, wrote:
Also: You appear to be using Google Groups, which is the Mos Eisley of
the newsgroup posting universe. You'll do far better to instead use
some other means of post
On 12/10/2013 06:47 AM, Chris Angelico wrote:
> On Wed, Dec 11, 2013 at 12:35 AM, wrote:
> Also: You appear to be using Google Groups, which is the Mos Eisley of
> the newsgroup posting universe. You'll do far better to instead use
> some other means of posting, such as the mailing list:
Using
On 10/12/2013 14:14, Chris Angelico wrote:
On Wed, Dec 11, 2013 at 1:06 AM, Mark Lawrence wrote:
On 10/12/2013 13:47, Chris Angelico wrote:
On Wed, Dec 11, 2013 at 12:35 AM, wrote:
Is this issue fixed. I am also facing the same issue of tunneling in
https request. Please suggest how to pr
On Wed, Dec 11, 2013 at 1:06 AM, Mark Lawrence wrote:
> On 10/12/2013 13:47, Chris Angelico wrote:
>>
>> On Wed, Dec 11, 2013 at 12:35 AM, wrote:
>>>
>>> Is this issue fixed. I am also facing the same issue of tunneling in
>>> https request. Please suggest how to proceed further
>>
>>
>> You're
On 10/12/2013 13:47, Chris Angelico wrote:
On Wed, Dec 11, 2013 at 12:35 AM, wrote:
Is this issue fixed. I am also facing the same issue of tunneling in https
request. Please suggest how to proceed further
You're responding to something from 2009. It's highly likely things
have changed.
tocol.
2. to confirm python version and extended libs work well
3. to confirm ssl work well.
good luck!
nikekoo
I've reduced my code to the following:
import urllib2
p = "https://user:pass@myproxy:port"
proxy_handler = urllib2.ProxyHandler({"https": p})
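Completing that snippet (Python 3 names; `urllib2` in Python 2). Note that the proxy URL's scheme is normally `http` even when the target is `https`, because the proxy itself is reached over plain HTTP and then asked to CONNECT-tunnel; the credentials and port here are placeholders:

```python
from urllib.request import ProxyHandler, build_opener

p = 'http://user:pass@myproxy:8080'          # hypothetical proxy URL
proxy_handler = ProxyHandler({'https': p})   # route https requests via proxy
opener = build_opener(proxy_handler)
# opener.open('https://example.com/') would now tunnel through the proxy.
```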
On Wed, Dec 11, 2013 at 12:35 AM, wrote:
> Is this issue fixed. I am also facing the same issue of tunneling in https
> request. Please suggest how to proceed further
You're responding to something from 2009. It's highly likely things
have changed.
Does the same code cause an error in Python 2
2. to confirm python version and extended libs work well
> > 3. to confirm ssl work well.
> >
> > good luck!
> >
> > nikekoo
>
> I've reduced my code to the following:
>
> import urllib2
>
> p = "https://user:pass@myproxy:port"
>
hout the user having to play with DNS config and all. Even if the
discussion on my DNS config is interesting, my question was about
urllib2.urlopen() timeout. I thought the timeout parameter was meant to
escape those long delays. Apparently, it is not.
According to this SO topic:
http://stackoverflow.com/que
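That matches the stdlib behaviour: the `timeout` argument is applied to the socket once it exists, but the blocking DNS lookup inside `socket.getaddrinfo` happens first and is not covered. One workaround is to impose your own deadline from a worker thread; a sketch (the helper name is made up, and a hung lookup's worker thread may linger):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def call_with_deadline(fn, args, seconds):
    """Run fn(*args) but give up after `seconds`."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(fn, *args).result(timeout=seconds)
    finally:
        pool.shutdown(wait=False)   # don't block on a hung lookup

# e.g. resolve first under a deadline, then connect by IP:
ip = call_with_deadline(socket.gethostbyname, ('localhost',), 2.0)
```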
Hello,
On Thu, Oct 17, 2013 at 03:34:05PM +0200, Jérôme wrote:
> Hi.
>
> Thank you all for your answers.
>
> --
> Context:
>
> The problem I want to address is the code being stuck too long when the
> network is down.
>
> I'm working on a softwar
I don't see anything I can change in my router/modem config (it is closed and
belongs to my ISP, so I can only change what I'm allowed to).
Any other recommendation?
I'm afraid I'm getting off topic here.
Back to python, is it normal that urllib2's timeout does not apply to thos
If run on my Debian Wheezy computer, or on my Debian Squeeze server,
the answer is instantaneous :
[...]
urllib2.URLError:
When run on my Raspberry Pi with Raspian Wheezy, the answer is
identical but it takes 10 seconds.
What happens when you use ping to resolve that address? Do you get
On 2013-10-16 13:22, Peter Otten wrote:
> The problem might be ipv6-related.
I second this as the likely culprit -- I've had to disable IPv6 on my
Debian laptop since my AT&T router is brain-dead and doesn't seem to
support it, so I would often get timeouts similar to what the OP
describes and
Jérôme writes:
> Hi all.
>
> I'm having troubles with urllib2 timeout.
>
> See the following script :
>
> ----
> import urllib2
> result = urllib2.urlopen("http://dumdgdfgdgmyurl.com/")
> print result.readline()
>
Jérôme wrote:
> Hi all.
>
> I'm having troubles with urllib2 timeout.
>
> See the following script :
>
> ----
> import urllib2
> result = urllib2.urlopen("http://dumdgdfgdgmyurl.com/")
> print result.readline()
> -
On 16/10/2013 11:21, Jérôme wrote:
Hi all.
I'm having troubles with urllib2 timeout.
See the following script :
import urllib2
result = urllib2.urlopen("http://dumdgdfgdgmyurl.com/")
print result.readline()
If r
Hi all.
I'm having troubles with urllib2 timeout.
See the following script :
import urllib2
result = urllib2.urlopen("http://dumdgdfgdgmyurl.com/")
print result.readline()
If run on my Debian Wheezy computer, or on my
On Tuesday, August 6, 2013 5:14:48 PM UTC-7, MRAB wrote:
> On 06/08/2013 23:52, cerr wrote:
>
> > Hi,
> >
> > Why does this code:
> >
> > #!/usr/bin/python
> >
> > import urllib2
On Tue, Aug 6, 2013 at 11:52 PM, cerr wrote:
> ./post.py
> Traceback (most recent call last):
> File "./post.py", line 13, in
> response = urllib2.urlopen(req, 120)
> File "/usr/lib/python2.7/urllib2.py", line 126, in urlopen
> return _opener.o
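The root of the traceback is visible in the call itself: `urlopen(req, 120)` passes `120` as the second positional parameter, which is `data` (a POST body), not a timeout, so the request quietly becomes a malformed POST. The timeout must be passed by keyword. The Python 3 signature (same parameter order as urllib2) makes this explicit:

```python
import inspect
from urllib.request import urlopen

params = list(inspect.signature(urlopen).parameters)
print(params[:3])          # ['url', 'data', 'timeout']
# So the correct call is urlopen(req, timeout=120), never urlopen(req, 120).
```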
On 06/08/2013 23:52, cerr wrote:
Hi,
Why does this code:
#!/usr/bin/python
import urllib2
from binascii import hexlify, unhexlify
host = "localhost"
uri="/test.php"
data ="\x48\x65\x6C\x6C\x6F\x57\x6F\x72\x6C\x64" #Hello World
url="http://{0}{1}?f=te
On Tue, Aug 6, 2013 at 7:35 PM, cerr wrote:
> On Tuesday, August 6, 2013 4:08:34 PM UTC-7, Joel Goldstick wrote:
>> On Tue, Aug 6, 2013 at 6:52 PM, cerr wrote:
>> > Hi,
>> >
>> > Why does this code:
On Tuesday, August 6, 2013 4:08:34 PM UTC-7, Joel Goldstick wrote:
> On Tue, Aug 6, 2013 at 6:52 PM, cerr wrote:
> > Hi,
> >
> > Why does this code:
> >
> > #!/usr/bin/python
> >
> > impor
On Tue, Aug 6, 2013 at 6:52 PM, cerr wrote:
> Hi,
>
> Why does this code:
>
> #!/usr/bin/python
>
>
> import urllib2
> from binascii import hexlify, unhexlify
>
> host = "localhost"
> uri="/test.php"
> data ="\x48\x65\x6C\x6C\x
Hi,
Why does this code:
#!/usr/bin/python
import urllib2
from binascii import hexlify, unhexlify
host = "localhost"
uri="/test.php"
data ="\x48\x65\x6C\x6C\x6F\x57\x6F\x72\x6C\x64" #Hello World
url="http://{0}{1}?f=test".format(host, uri)
req =
I have an httplib-based application, and in an effort to find a quick way to
start leveraging urllib2, including NTLM authentication (via python-ntlm), I am
hoping there is a way to utilize an HTTPConnection object opened by urllib2.
The goal is to change the initial opener to use urllib2, after
Hi,
One problem, thanks for your help.
import gevent.monkey
gevent.monkey.patch_all()
import urllib2
from lxml import etree
# I use xpath to parse the html
def _get(p):
    url = BUILD_URL(p)
    html = urllib2.urlopen(url)
    # BLOCKS HERE
    # ver1
    tree = etree.parse(html, parse
On 07Feb2013 02:43, Steven D'Aprano
wrote:
| On Thu, 07 Feb 2013 10:06:32 +1100, Cameron Simpson wrote:
| > Timing. (Let me say I consider this scenario unlikely, very unlikely.
| > But...)
| > If the latter is consistently slightly slower
|
| On my laptop, the difference is of the order of 10
On Thu, 07 Feb 2013 10:06:32 +1100, Cameron Simpson wrote:
> | I cannot see how the firewall could possible distinguish between using
> | a temporary variable or not in these two snippets:
> |
> | # no temporary variable hangs, or fails
> | urllib2.urlopen("ftp://ftp2
On 24Jan2013 04:12, Steven D'Aprano
wrote:
| On Thu, 24 Jan 2013 01:45:31 +0100, Hans Mulder wrote:
| > On 24/01/13 00:58:04, Chris Angelico wrote:
| >> Possibly it's some kind of race condition??
| >
| > If urllib2 is using active mode FTP, then a firewall on your b
On Thu, 24 Jan 2013 01:45:31 +0100, Hans Mulder wrote:
> On 24/01/13 00:58:04, Chris Angelico wrote:
>> On Thu, Jan 24, 2013 at 7:07 AM, Nick Cash
>> wrote:
>>> Python 2.7.3 on linux
>>>
>>> This has me fairly stumped. It looks like
>>>
On 24/01/13 00:58:04, Chris Angelico wrote:
> On Thu, Jan 24, 2013 at 7:07 AM, Nick Cash
> wrote:
>> Python 2.7.3 on linux
>>
>> This has me fairly stumped. It looks like
>> urllib2.urlopen("ftp://some.ftp.site/path").read()
>> will either i
Nick Cash wrote:
> Python 2.7.3 on linux
>
> This has me fairly stumped. It looks like
> urllib2.urlopen("ftp://some.ftp.site/path").read()
> will either immediately return '' or hang indefinitely. But
> response = urllib2.urlopen("ftp:
On Thu, Jan 24, 2013 at 7:07 AM, Nick Cash
wrote:
> Python 2.7.3 on linux
>
> This has me fairly stumped. It looks like
> urllib2.urlopen("ftp://some.ftp.site/path").read()
> will either immediately return '' or hang indefinitely. But
>
Python 2.7.3 on linux
This has me fairly stumped. It looks like
urllib2.urlopen("ftp://some.ftp.site/path").read()
will either immediately return '' or hang indefinitely. But
response = urllib2.urlopen("ftp://some.ftp.site/path")
response
Hi there:
I'm working with urllib2 to open some urls and grab some data. The url
will be inserted by the user and my script will open it and parse the
page for results.
The thing is, I'm behind an NTLM proxy, and I've tried a lot of
things to authenticate, but it still does
On Friday, September 21, 2012 2:22:08 PM UTC+8, Cosmia Luna wrote:
> I'm porting my code to python3, and found there is no parse_http_list in any
> module of urllib of python3.
>
> So, is there a public API equivalent for urllib2.parse_http_
I'm porting my code to python3, and found there is no parse_http_list in any
module of urllib of python3.
So, is there a public API equivalent for urllib2.parse_http_list?
Thanks.
Cosmia Luna
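For the record, the helper survived the move: in Python 3 it lives, still undocumented, in `urllib.request`:

```python
# parse_http_list moved from urllib2 to urllib.request in Python 3;
# it remains an undocumented helper, so pin-test it if you rely on it.
from urllib.request import parse_http_list

items = parse_http_list('text/html, text/plain')
print(items)
```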
>
>>> On 04/20/2012 06:47 PM, Diego Manenti Martins wrote:
>>>> Hi.
>>>> Anybody knows the data is sent in a different way for Python 2.5, 2.6
>>>> and 2.7 using this code:
>>>>
>>>>>>> import urllib2
>>>
>> Hi.
>>> Anybody knows the data is sent in a different way for Python 2.5, 2.6
>>> and 2.7 using this code:
>>>
>>>>>> import urllib2
>>>>>> url = 'http://server.com/post_image?tid=zoV6LJ'
>>>>>> f = o
nt in a different way for Python 2.5, 2.6
>> and 2.7 using this code:
>>
>>>>> import urllib2
>>>>> url = 'http://server.com/post_image?tid=zoV6LJ'
>>>>> f = open('test.jpg')
>>>>> data = f.read()
>>>
On Fri, Apr 20, 2012 at 10:08 PM, Dave Angel wrote:
> On 04/20/2012 06:47 PM, Diego Manenti Martins wrote:
>> Hi.
>> Anybody knows the data is sent in a different way for Python 2.5, 2.6
>> and 2.7 using this code:
>>
>>>>> import urllib2
>>
On 04/20/2012 06:47 PM, Diego Manenti Martins wrote:
> Hi.
> Anybody knows the data is sent in a different way for Python 2.5, 2.6
> and 2.7 using this code:
>
>>>> import urllib2
>>>> url = 'http://server.com/post_image?tid=zoV6LJ'
>>>> f
On 4/20/2012 6:47 PM, Diego Manenti Martins wrote:
Anybody knows the data is sent in a different way for Python 2.5, 2.6
and 2.7 using this code:
You could check the What's New for 2.7 and see if there is any mention
of a change to urllib2. Or diff the 2.6 and 2.7 versions of urlli
Hi.
Anybody knows the data is sent in a different way for Python 2.5, 2.6
and 2.7 using this code:
>>> import urllib2
>>> url = 'http://server.com/post_image?tid=zoV6LJ'
>>> f = open('test.jpg')
>>> data = f.read()
>>> res = ur
On Feb 28, 10:50 am, Alex Borghgraef
wrote:
> I'll still have to find out a way to get this thing working with proxy
> enabled if I ever want to connect it to our overall network.
Ok, doing os.environ['http_proxy']='' before importing urllib2 seems
to do the trick
On Feb 28, 1:36 am, Steven D'Aprano wrote:
> On Mon, 27 Feb 2012 12:48:27 -0800, Alex Borghgraef wrote:
> > Hi all,
>
> > Some time ago I've written some python code to read video data off an IP
> > camera connected via a router to a laptop. Now I try to run this code on
> > a different laptop and
On Mon, 27 Feb 2012 12:48:27 -0800, Alex Borghgraef wrote:
> Hi all,
>
> Some time ago I've written some python code to read video data off an IP
> camera connected via a router to a laptop. Now I try to run this code on
> a different laptop and router combination, but now I can't access the
> ca
Hi all,
Some time ago I've written some python code to read video data off an
IP camera connected via a router to a laptop. Now I try to run this
code on a different laptop and router combination, but now I can't
access the camera.
Some minimal example code:
import urllib2
url