On 10/05/17 17:06, Rafael Knuth wrote:
>> Then, there is another package, along with a dozen other
>> urllib-related packages (such as aiourllib).
>
> Again, where are you finding these? They are not in
> the standard library. Have you been installing other
> packages that may have their own versions maybe?
They are all available via PyPI.
This is one of those things where, if what you want is simple, they're all
usable and easy. If not, some are frankly horrid.
requests is the current hot module. Go ahead and try it. (urllib.request is not
from requests, it's from urllib.)
On May 8, 2017 9:23:15 AM MDT, Rafael Knuth wrote:
>Which package should I use to fetch and open an URL?
As a side note, see a tutorial on urllib and requests and try them at the
same time.
See one for Python 3.x; 3.4 or 3.6.
Also see the data type received by the different combinations, and when you
should use .read() etc.
Also use UTF-8 or Unicode, like .decode("utf8").
Play around and mess with it, feel free to experiment.
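Putting those pieces together, a minimal Python 3 fetch might look like the sketch below. To keep it runnable without a network connection it reads a local file through a file:// URL; an http:// URL works the same way.

```python
import os
import tempfile
from pathlib import Path
from urllib.request import urlopen

# Make a small local file so the example works offline;
# with a real site you would pass an http:// URL instead.
with tempfile.NamedTemporaryFile("wb", suffix=".txt", delete=False) as f:
    f.write("hello tutor".encode("utf-8"))
    path = f.name

url = Path(path).as_uri()          # e.g. file:///tmp/tmpabc.txt

with urlopen(url) as response:
    raw = response.read()          # .read() gives bytes
    text = raw.decode("utf8")      # .decode("utf8") gives str

os.remove(path)
print(text)                        # hello tutor
```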
On 08/05/17 16:23, Rafael Knuth wrote:
> Which package should I use to fetch and open an URL?
> I am using Python 3.5 and there are presently 4 versions:
>
> urllib2
> urllib3
> urllib4
> urllib5
I don't know where you are getting those from but the
standard install of Python v3.6 only has urllib
Which package should I use to fetch and open an URL?
I am using Python 3.5 and there are presently 4 versions:
urllib2
urllib3
urllib4
urllib5
Common sense is telling me to use the latest version.
Not sure if my common sense is fooling me here though ;-)
Then, there is another package, along with a dozen other
urllib-related packages (such as aiourllib).
On 21Nov2014 15:57, Clayton Kirkwood wrote:
Got a general problem with url work. I’ve struggled through a lot of
code which uses urllib.[parse,request]* and urllib2. First q: I read
someplace in urllib documentation which makes it sound like either
urllib or urllib2 modules are being deprecated
On Fri, Nov 21, 2014 at 01:37:45PM -0800, Clayton Kirkwood wrote:
> Got a general problem with url work. I've struggled through a lot of code
> which uses urllib.[parse,request]* and urllib2. First q: I read someplace in
> urllib documentation which makes it sound like either urllib or urllib2
> modules are being deprecated in 3.5. Don't know if it's only part or whole.
On 21/11/14 21:37, Clayton Kirkwood wrote:
urllib or urllib2 modules are being deprecated in 3.5. Don’t know if
it’s only part or whole.
urllib2 doesn't exist in Python 3; there is only the urllib package.
As to urllib being deprecated, that's the first I've heard of
it, but it may be the case.
Hi all.
Got a general problem with url work. I've struggled through a lot of code
which uses urllib.[parse,request]* and urllib2. First q: I read someplace in
urllib documentation which makes it sound like either urllib or urllib2
modules are being deprecated in 3.5. Don't know if it's only part or whole.
I am trying to make a simple program with Python 3 that tries to open
different pages from a wordlist and prints which are alive. Here is the code:
from urllib import request
fob = open('c:/passwords/pass.txt', 'r')
x = fob.readlines()
for i in x:
    urllib.request.openurl('www.google.gr/' + i)
But it doesn't work.
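For what it's worth, a hedged Python 3 sketch of what the poster seems to be after. The snippet has two bugs: the function is `urlopen` (not `openurl`), reached as `request.urlopen` after `from urllib import request`; and each wordlist line still carries its newline and the URL lacks a scheme. The wordlist path is the one from the post and is only assumed to exist.

```python
from urllib import request
from urllib.error import URLError

# The scheme matters: urlopen rejects a bare 'www.google.gr/...'.
BASE = "http://www.google.gr/"

def page_url(word):
    """Join a wordlist entry onto the base URL, stripping the
    newline that readlines() leaves on every line."""
    return BASE + word.strip()

def is_alive(url):
    """Return True if the URL answers at all, False otherwise."""
    try:
        with request.urlopen(url, timeout=5):   # urlopen, not openurl
            return True
    except (URLError, OSError):
        return False

# for line in open('c:/passwords/pass.txt'):
#     print(page_url(line), is_alive(page_url(line)))
print(page_url("admin\n"))   # http://www.google.gr/admin
```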
"Roelof Wobben" wrote
Finally solved this puzzle.
Now the next one of the 33 puzzles.
Don't be surprised if you get stuck. Python Challenge is quite tricky
and is deliberately designed to make you explore parts of the
standard library you might not otherwise find. Expect to do a lot
of reading.
Hi,
I have this program:
import urllib
import re
f = urllib.urlopen("http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=6")
inhoud = f.read()
f.close()
nummer = re.search('[0-9]', inhoud)
volgende = int(nummer.group())
teller = 1
while teller <= 3 :
    url = "http://www.py
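A Python 3 sketch of the same loop, with two hedged fixes: urllib.urlopen moved to urllib.request.urlopen, and the pattern '[0-9]' only ever matches a single digit, so a number like 44827 would be truncated to 4; '[0-9]+' grabs the whole number. The URL is the one from the post; the network part is left commented out.

```python
import re
from urllib.request import urlopen  # Python 3 home of urlopen

BASE = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing="

def next_nothing(inhoud):
    """Pull the next number out of the page text.
    '[0-9]+' matches the whole number; plain '[0-9]'
    would stop after the first digit."""
    nummer = re.search(r'[0-9]+', inhoud)
    return int(nummer.group()) if nummer else None

# volgende = 6
# for teller in range(3):
#     inhoud = urlopen(BASE + str(volgende)).read().decode("utf8")
#     volgende = next_nothing(inhoud)
#     print(volgende)

print(next_nothing("and the next nothing is 44827"))  # 44827
```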
thanks, Senthil
On Mon, Dec 07, 2009 at 08:38:24AM +0100, Jojo Mwebaze wrote:
> I need help on something very small...
>
> i am using urllib to write a query and what i want returned is 'FHI=128%2C128&
> FLO=1%2C1'
>
The way to use urllib.urlencode is like this:
>>> urllib.urlencode({"key":"value"})
'key=value'
hello Tutor,
I need help on something very small...
I am using urllib to write a query and what I want returned is
'FHI=128%2C128&FLO=1%2C1'
I have tried the statement below and I have failed to get the above:
x1, y1, x2, y2 = 1, 1, 128, 128
query = urllib.urlencode({'FHI':'x2,y2,', 'FLO':'x1,y1'})
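The quoting itself isn't the problem: urlencode happily turns commas into %2C. The snag is that 'x2,y2' is a literal string, so the variable values never make it into the query. A sketch of the fix, in the Python 3 spelling where urlencode lives in urllib.parse:

```python
from urllib.parse import urlencode  # urllib.urlencode in Python 2

x1, y1, x2, y2 = 1, 1, 128, 128

# Format the values into the strings first: '%d,%d' % (x2, y2)
# gives '128,128', and urlencode percent-encodes the comma as %2C.
query = urlencode({'FHI': '%d,%d' % (x2, y2), 'FLO': '%d,%d' % (x1, y1)})
print(query)   # FHI=128%2C128&FLO=1%2C1
```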
Thanks Kent, perhaps I'll cool the Python jets and move on to HTTP and
HTML. I was hoping it would be something I could just pick up along
the way, looks like I was wrong.
dk
2009/7/7 David Kim :
> opener = urllib2.build_opener(MyHTTPRedirectHandler, cookieprocessor)
> urllib2.install_opener(opener)
>
> response =
> urllib2.urlopen("http://www.dtcc.com/products/derivserv/data_table_i.php?id=table1")
> print response.read()
>
>
> I suspect I am not understanding something.
On Tue, Jul 7, 2009 at 7:26 AM, Kent Johnson wrote:
>
> curl works because it ignores the redirect to the ToS page, and the
> site is (astoundingly) dumb enough to serve the content with the
> redirect. You could make urllib2 behave the same way by defining a 302
> handler that does nothing.
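Kent's do-nothing 302 handler might be sketched like this in today's urllib.request (the urllib2 version is the same shape): returning the response from http_error_302 hands the redirect page's own body back to the caller instead of following the Location header. The DTCC URL from the thread is shown only commented out.

```python
import urllib.request

class IgnoreRedirects(urllib.request.HTTPRedirectHandler):
    """A redirect handler that 'does nothing': instead of following
    the redirect it returns the 3xx response itself, body and all."""
    def http_error_302(self, req, fp, code, msg, headers):
        return fp
    http_error_301 = http_error_303 = http_error_307 = http_error_302

opener = urllib.request.build_opener(IgnoreRedirects())
# response = opener.open(
#     "http://www.dtcc.com/products/derivserv/data_table_i.php?id=table1")
# print(response.read())
```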
Many thanks.
Hello all,
I have two questions I'm hoping someone will have the patience to
answer as an act of mercy.
I. How to get past a Terms of Service page?
I've just started learning python (have never done any programming
prior) and am trying to figure out how to open or download a website
to scrape data.
It is my error: the data is a SHA string and it is not possible to get
the string back, unless you use rainbow tables or something of the sort.
On Tue, Feb 17, 2009 at 1:24 PM, Norman Khine wrote:
> Thank you, but is it possible to get the original string from this?
What do you mean by the original string Norman?
Look at these definitions:
Quoted String:
In the different parts of the URL, there are sets of characters, for
e.g. the space character, which are quoted (percent-encoded).
On Tue, Feb 17, 2009 at 08:54, Norman Khine wrote:
> Thank you, but is it possible to get the original string from this?
You mean something like this?
>>> urllib.quote('hL/FGNS40fjoTnp2zIqq73reK60=\n')
'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A'
Greets
Sander
Thank you, but is it possible to get the original string from this?
On Mon, Feb 16, 2009 at 14:12, Norman Khine wrote:
> Type "help", "copyright", "credits" or "license" for more information.
import base64, urllib
data = 'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A'
data = urllib.unquote(data)
print base64.decodestring(data)
> ???Ը???Nzv̊??z?+?
Hello,
Can someone point me in the right direction. I would like to return the
string for the following:
Type "help", "copyright", "credits" or "license" for more information.
>>> import base64, urllib
>>> data = 'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A'
>>> data = urllib.unquote(data)
>>> print base64.decodestring(data)
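In Python 3 the same steps are spread across urllib.parse.unquote and base64.decodebytes. A sketch that also shows why the printed result looked like noise: the decoded value is a raw 20-byte digest (the length of SHA-1), not text, so there is no "original string" to recover or print.

```python
import base64
from urllib.parse import unquote   # urllib.unquote in Python 2

data = 'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A'
unquoted = unquote(data)           # %3D -> '=', %0A -> '\n'
digest = base64.decodebytes(unquoted.encode("ascii"))

print(unquoted == 'hL/FGNS40fjoTnp2zIqq73reK60=\n')  # True
print(len(digest))   # 20: raw bytes the size of a SHA-1 digest,
                     # which is why printing it shows garbage
```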
Hello,
I would like to write a program which looks in a web directory for, say
*.gif files. Then processes those files in some manner. What I need is
something like glob which will return a directory listing of all the files
matching the search pattern (or simply a certain extension).
Is there such a thing?
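There is no true glob over HTTP: the server decides whether to expose a listing at all. When it does serve a directory-index page, though, you can fetch that page and filter its links by extension. A hedged sketch using only the standard library; the sample HTML stands in for a fetched index page, and the fetch itself is left commented out.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

def web_glob(index_html, suffix):
    """Return the links in a directory-index page that end in suffix."""
    parser = LinkCollector()
    parser.feed(index_html)
    return [link for link in parser.links if link.endswith(suffix)]

# index_html = urllib.request.urlopen(url).read().decode("utf8")
sample = '<a href="a.gif">a</a> <a href="b.txt">b</a> <a href="c.gif">c</a>'
print(web_glob(sample, ".gif"))   # ['a.gif', 'c.gif']
```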
Hi again,
I was able to use urllib2_file, which is a wrapper to urllib2.urlopen(). It
seems to work fine, and I'm able to retrieve the contents of the file using:
afile = req.form.list[1].file.read()
Now I have to store this text file (which is about 500k) and an id number into a
mysql database
Hi, You can try this:
import httplib, urllib
params = urllib.urlencode({'ID':'1','Name':'name', 'Eid':'we[at]you.com'})
# Assumed URL: test.com/cgi-bin/myform
h = httplib.HTTP("test.com")
h.putrequest("POST", "/cgi-bin/myform")
h.putheader("Content-length", "%d" % len(params))
h.putheader('A
Hi,
I have used urllib and urllib2 to post data like the following:
dict = {}
dict['data'] = info
dict['system'] = aname
data = urllib.urlencode(dict)
req = urllib2.Request(url)
And to get the data, I emulated a web page with a submit button:
s = ""
s += ""
s += ""
s += ""
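In Python 3 the urllib/urllib2 split is gone: urlencode lives in urllib.parse, Request in urllib.request, and the encoded body must be bytes. A sketch using the field names from the post and a placeholder URL:

```python
from urllib.parse import urlencode
from urllib.request import Request

info, aname = "info", "aname"      # placeholder values

fields = {'data': info, 'system': aname}
body = urlencode(fields).encode("ascii")   # POST bodies must be bytes

req = Request("http://example.com/submit", data=body)
print(req.get_method())   # POST -- a Request with data defaults to POST
# response = urllib.request.urlopen(req)
```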
Please post the code that gave you the error.
Kent
Servando Garcia wrote:
> I tired that and here is the error I am currently getting:
>
> assert hasattr(proxies, 'has_key'), "proxies must be a mapping"
>
> I was trying this:
>
>>> X=urllib.URLopener(name,proxies={'http':'URL').distutils.copy_file('SomeFileName')
Servando Garcia wrote:
> Hello list
> I am on challenge 5. I think I need to some how download a file. I have
> been trying like so
>
> X=urllib.URLopener(name,proxies={'http':'URL').distutils.copy_file('SomeFileName')
>
URLopener() returns a file-like object - something that behaves like an open file.
Hello list
I am on challenge 5. I think I need to some how download a file. I have been trying like so
X=urllib.URLopener(name,proxies={'http':'URL').distutils.copy_file('SomeFileName')
but with no luck.
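URLopener is likely the wrong tool here (and the line above also has an unbalanced brace in the proxies dict). For plainly downloading a file, urllib.request.urlretrieve is probably closer to what challenge 5 needs. A sketch that stays offline by retrieving a file:// URL; an http:// URL works identically.

```python
import os
import tempfile
from pathlib import Path
from urllib.request import urlretrieve

# Create a local source file so the example runs without a network;
# for the challenge you would pass the real http:// URL instead.
src = tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt")
src.write("challenge 5 data")
src.close()

dest = src.name + ".copy"
urlretrieve(Path(src.name).as_uri(), dest)   # download URL -> local file

print(open(dest).read())   # challenge 5 data
os.remove(src.name)
os.remove(dest)
```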
Servando Garcia
John 3:16
For GOD so loved the world...