Re: [Tutor] urllib ... lost novice's question

2017-05-10 Thread Alan Gauld via Tutor
On 10/05/17 17:06, Rafael Knuth wrote: >>> Then, there is another package, along with a dozen other >>> urllib-related packages (such as aiourllib). >> >> Again, where are you finding these? They are not in >> the standard library. Have you been installing other >> packages that may have their own

Re: [Tutor] urllib ... lost novice's question

2017-05-10 Thread Rafael Knuth
>> Then, there is another package, along with a dozen other >> urllib-related packages (such as aiourllib). > > Again, where are you finding these? They are not in > the standard library. Have you been installing other > packages that may have their own versions maybe? they are all available via P

Re: [Tutor] urllib ... lost novice's question

2017-05-09 Thread Mats Wichmann
this is one of those things where if what you want is simple, they're all usable, and easy. if not, some are frankly horrid. requests is the current hot module. go ahead and try it. (urllib.request is not from requests, it's from urllib) On May 8, 2017 9:23:15 AM MDT, Rafael Knuth wrote: >Whic

Re: [Tutor] urllib ... lost novice's question

2017-05-09 Thread Abdur-Rahmaan Janhangeer
As a side note, see a tutorial on urllib and requests and try them at the same time; see for Python 3.x, 3.4 or 3.6; also see the data type received by the different combinations, when you should use .read() etc.; also use utf-8 or unicode like .decode("utf8"). Well, play around, fool around, mess with it, fee

Re: [Tutor] urllib ... lost novice's question

2017-05-08 Thread Alan Gauld via Tutor
On 08/05/17 16:23, Rafael Knuth wrote: > Which package should I use to fetch and open an URL? > I am using Python 3.5 and there are presently 4 versions: > > urllib2 > urllib3 > urllib4 > urllib5 I don't know where you are getting those from but the standard install of Python v3.6 only has urllib
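To illustrate Alan's point that Python 3 ships only the urllib package (with submodules such as urllib.request and urllib.parse), here is a minimal Python 3 sketch; the URL and User-Agent value are illustrative, not from the thread:

```python
from urllib.request import Request, urlopen

# Build a request object without fetching anything yet.
req = Request("https://www.python.org", headers={"User-Agent": "tutor-example"})
print(req.full_url)   # https://www.python.org

# To actually fetch the page (requires network access):
# with urlopen(req) as resp:
#     html = resp.read().decode("utf-8", errors="replace")
```

The third-party `requests` library mentioned elsewhere in the thread is a separate install (`pip install requests`); `urllib2`/`urllib3`-style numbered names are not part of the standard library in Python 3.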

[Tutor] urllib ... lost novice's question

2017-05-08 Thread Rafael Knuth
Which package should I use to fetch and open an URL? I am using Python 3.5 and there are presently 4 versions: urllib2 urllib3 urllib4 urllib5 Common sense is telling me to use the latest version. Not sure if my common sense is fooling me here though ;-) Then, there is another package, along wit

Re: [Tutor] urllib confusion

2014-11-23 Thread Cameron Simpson
On 21Nov2014 15:57, Clayton Kirkwood wrote: Got a general problem with url work. I’ve struggled through a lot of code which uses urllib.[parse,request]* and urllib2. First q: I read someplace in urllib documentation which makes it sound like either urllib or urllib2 modules are being deprecated

Re: [Tutor] urllib confusion

2014-11-22 Thread Steven D'Aprano
On Fri, Nov 21, 2014 at 01:37:45PM -0800, Clayton Kirkwood wrote: > Got a general problem with url work. I've struggled through a lot of code > which uses urllib.[parse,request]* and urllib2. First q: I read someplace in > urllib documentation which makes it sound like either urllib or urllib2 > m

Re: [Tutor] urllib confusion

2014-11-21 Thread Clayton Kirkwood
>-Original Message- >From: Joel Goldstick [mailto:joel.goldst...@gmail.com] >Sent: Friday, November 21, 2014 2:39 PM >To: Clayton Kirkwood >Cc: tutor@python.org >Subject: Re: [Tutor] urllib confusion > >On Fri, Nov 21, 2014 at 4:37 PM, Clayton Kirkwood >wrote:

Re: [Tutor] urllib confusion

2014-11-21 Thread Alan Gauld
On 21/11/14 21:37, Clayton Kirkwood wrote: urllib or urllib2 modules are being deprecated in 3.5. Don’t know if it’s only part or whole. urllib2 doesn't exist in Python3 there is only the urllib package. As to urllib being deprecated, that's the first I've heard of it but it may be the case - I

Re: [Tutor] urllib confusion

2014-11-21 Thread Joel Goldstick
On Fri, Nov 21, 2014 at 4:37 PM, Clayton Kirkwood wrote: > Hi all. > > > > Got a general problem with url work. I’ve struggled through a lot of code > which uses urllib.[parse,request]* and urllib2. First q: I read someplace in > urllib documentation which makes it sound like either urllib or urll

[Tutor] urllib confusion

2014-11-21 Thread Clayton Kirkwood
Hi all. Got a general problem with url work. I've struggled through a lot of code which uses urllib.[parse,request]* and urllib2. First q: I read someplace in urllib documentation which makes it sound like either urllib or urllib2 modules are being deprecated in 3.5. Don't know if it's only par

Re: [Tutor] Urllib Problem

2011-07-29 Thread Steven D'Aprano
George Anonymous wrote: I am trying to make a simple program with Python 3, that tries to open different pages from a wordlist and prints which are alive. Here is the code: from urllib import request fob=open('c:/passwords/pass.txt','r') x = fob.readlines() for i in x: urllib.request.openurl('

Re: [Tutor] Urllib Problem

2011-07-29 Thread Alexander
On Fri, Jul 29, 2011 at 5:58 AM, Karim wrote: > ** > On 07/29/2011 11:52 AM, George Anonymous wrote: > > I am trying to make a simple program with Python 3, that tries to open > different pages from a wordlist and prints which are alive. Here is the code: > from urllib import request > fob=open('c

Re: [Tutor] Urllib Problem

2011-07-29 Thread Karim
On 07/29/2011 11:52 AM, George Anonymous wrote: I am trying to make a simple program with Python 3, that tries to open different pages from a wordlist and prints which are alive. Here is the code: from urllib import request fob=open('c:/passwords/pass.txt','r') x = fob.readlines() for i in x:

[Tutor] Urllib Problem

2011-07-29 Thread George Anonymous
I am trying to make a simple program with Python 3, that tries to open different pages from a wordlist and prints which are alive. Here is the code: from urllib import request fob=open('c:/passwords/pass.txt','r') x = fob.readlines() for i in x: urllib.request.openurl('www.google.gr/' + i) But
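A corrected sketch of the snippet above: the function is `urlopen` (not `openurl`), a URL scheme is required, and each line read from the file still carries its trailing newline. The wordlist path and base URL are taken from the post:

```python
from urllib.request import urlopen
from urllib.error import URLError

def check_pages(wordlist_path, base="http://www.google.gr/"):
    """Try each word from the file as a URL path and report which respond."""
    for line in open(wordlist_path):
        url = base + line.strip()   # strip the newline readlines()/iteration keeps
        try:
            urlopen(url)
            print(url, "is alive")
        except URLError:
            print(url, "is not reachable")
```

Calling `check_pages('c:/passwords/pass.txt')` then needs network access, so it is left to the reader to run.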

Re: [Tutor] urllib problem

2010-10-12 Thread Alan Gauld
"Roelof Wobben" wrote Finally solved this puzzle. Now the next one of the 33 puzzles. Don't be surprised if you get stuck. Python Challenge is quite tricky and is deliberately designed to make you explore parts of the standard library you might not otherwise find. Expect to do a lot of readi

Re: [Tutor] urllib problem

2010-10-12 Thread Roelof Wobben
> From: st...@pearwood.info > To: tutor@python.org > Date: Wed, 13 Oct 2010 01:51:16 +1100 > Subject: Re: [Tutor] urllib problem > > On Tue, 12 Oct 2010 11:58:03 pm Steven D'Aprano wrote: > > On Tue, 12 Oct 2010 11:40:17 pm R

Re: [Tutor] urllib problem

2010-10-12 Thread Roelof Wobben
> From: st...@pearwood.info > To: tutor@python.org > Date: Tue, 12 Oct 2010 23:58:03 +1100 > Subject: Re: [Tutor] urllib problem > > On Tue, 12 Oct 2010 11:40:17 pm Roelof Wobben wrote: >> Hoi, >> >> I have this prog

Re: [Tutor] urllib problem

2010-10-12 Thread Steven D'Aprano
On Tue, 12 Oct 2010 11:58:03 pm Steven D'Aprano wrote: > On Tue, 12 Oct 2010 11:40:17 pm Roelof Wobben wrote: > > Hoi, > > > > I have this program: > > > > import urllib > > import re > > f = > > urllib.urlopen("http://www.pythonchallenge.com/pc/def/linkedlist.ph > >p? nothing=6") inhoud = f.read

Re: [Tutor] urllib problem

2010-10-12 Thread Steven D'Aprano
On Tue, 12 Oct 2010 11:40:17 pm Roelof Wobben wrote: > Hoi, > > I have this program: > > import urllib > import re > f = > urllib.urlopen("http://www.pythonchallenge.com/pc/def/linkedlist.php? >nothing=6") inhoud = f.read() > f.close() > nummer = re.search('[0-9]', inhoud) > volgende = int(nummer

Re: [Tutor] urllib problem

2010-10-12 Thread Evert Rol
> I have this program : > > import urllib > import re > f = > urllib.urlopen("http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=6";) > inhoud = f.read() > f.close() > nummer = re.search('[0-9]', inhoud) > volgende = int(nummer.group()) > teller = 1 > while teller <= 3 : > url = "

[Tutor] urllib problem

2010-10-12 Thread Roelof Wobben
Hoi, I have this program: import urllib import re f = urllib.urlopen("http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing=6";) inhoud = f.read() f.close() nummer = re.search('[0-9]', inhoud) volgende = int(nummer.group()) teller = 1 while teller <= 3 : url = "http://www.py
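One bug the replies circle around: the pattern `'[0-9]'` matches only a single digit, so the extracted "next nothing" is wrong. A hedged Python 3 sketch of one step of the loop (the original post used Python 2's `urllib.urlopen`):

```python
import re
from urllib.request import urlopen   # Python 2: urllib.urlopen

BASE = "http://www.pythonchallenge.com/pc/def/linkedlist.php?nothing="

def next_nothing(nothing):
    """Fetch one challenge page and pull out the next number.

    '[0-9]+' captures the whole number; the original '[0-9]'
    returned only its first digit.
    """
    inhoud = urlopen(BASE + str(nothing)).read().decode("utf-8")
    nummer = re.search(r"[0-9]+", inhoud)
    return nummer.group()
```

The loop in the post would then call `next_nothing` repeatedly, feeding each result back in as the next query value.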

Re: [Tutor] urllib

2009-12-07 Thread Jojo Mwebaze
thanks, Senthil On Mon, Dec 7, 2009 at 11:10 AM, Senthil Kumaran wrote: > On Mon, Dec 07, 2009 at 08:38:24AM +0100, Jojo Mwebaze wrote: > > I need help on something very small... > > > > i am using urllib to write a query and what i want returned is > 'FHI=128%2C128& > > FLO=1%2C1' > > > > The wa

Re: [Tutor] urllib

2009-12-07 Thread Senthil Kumaran
On Mon, Dec 07, 2009 at 08:38:24AM +0100, Jojo Mwebaze wrote: > I need help on something very small... > > i am using urllib to write a query and what i want returned is 'FHI=128%2C128& > FLO=1%2C1' > The way to use urllib.encode is like this: >>> urllib.urlencode({"key":"value"}) 'key=value' >

[Tutor] urllib

2009-12-06 Thread Jojo Mwebaze
hello Tutor, I need help on something very small... i am using urllib to write a query and what i want returned is 'FHI=128%2C128&FLO=1%2C1' i have tried the statement below and i have failed to get the above.. x1,y1,x2,y2 = 1,1,128,128 query = urllib.urlencode({'FHI':'x2,y2,', 'FLO':'x1,y1'})
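The snippet above passes the literal text `'x2,y2,'` as a dictionary value, so the variable values never reach the query string. A corrected sketch (Python 3 spelling; Python 2 used `urllib.urlencode`):

```python
from urllib.parse import urlencode   # Python 2: urllib.urlencode

x1, y1, x2, y2 = 1, 1, 128, 128
# Interpolate the variables into the values instead of quoting their names.
query = urlencode({"FHI": "%s,%s" % (x2, y2), "FLO": "%s,%s" % (x1, y1)})
print(query)   # FHI=128%2C128&FLO=1%2C1
```

`urlencode` percent-encodes the comma as `%2C`, which produces exactly the string the post asks for.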

Re: [Tutor] Urllib, mechanize, beautifulsoup, lxml do not compute (for me)!

2009-07-07 Thread David Kim
Thanks Kent, perhaps I'll cool the Python jets and move on to HTTP and HTML. I was hoping it would be something I could just pick up along the way, looks like I was wrong. dk On Tue, Jul 7, 2009 at 1:56 PM, Kent Johnson wrote: > On Tue, Jul 7, 2009 at 1:20 PM, David Kim wrote: >> On Tue, Jul 7, 2

Re: [Tutor] Urllib, mechanize, beautifulsoup, lxml do not compute (for me)!

2009-07-07 Thread Kent Johnson
On Tue, Jul 7, 2009 at 1:20 PM, David Kim wrote: > On Tue, Jul 7, 2009 at 7:26 AM, Kent Johnson wrote: >> >> curl works because it ignores the redirect to the ToS page, and the >> site is (astoundingly) dumb enough to serve the content with the >> redirect. You could make urllib2 behave the same wa

Re: [Tutor] Urllib, mechanize, beautifulsoup, lxml do not compute (for me)!

2009-07-07 Thread Sander Sweers
2009/7/7 David Kim : > opener = urllib2.build_opener(MyHTTPRedirectHandler, cookieprocessor) > urllib2.install_opener(opener) > > response = > urllib2.urlopen("http://www.dtcc.com/products/derivserv/data_table_i.php?id=table1";) > print response.read() > > > I suspect I am not understanding s

Re: [Tutor] Urllib, mechanize, beautifulsoup, lxml do not compute (for me)!

2009-07-07 Thread David Kim
On Tue, Jul 7, 2009 at 7:26 AM, Kent Johnson wrote: > > curl works because it ignores the redirect to the ToS page, and the > site is (astoundingly) dumb enough to serve the content with the > redirect. You could make urllib2 behave the same way by defining a 302 > handler that does nothing. Many

Re: [Tutor] Urllib, mechanize, beautifulsoup, lxml do not compute (for me)!

2009-07-07 Thread Kent Johnson
On Mon, Jul 6, 2009 at 5:54 PM, David Kim wrote: > Hello all, > > I have two questions I'm hoping someone will have the patience to > answer as an act of mercy. > > I. How to get past a Terms of Service page? > > I've just started learning python (have never done any programming > prior) and am try

Re: [Tutor] Urllib, mechanize, beautifulsoup, lxml do not compute (for me)!

2009-07-06 Thread Stefan Behnel
Hi, David Kim wrote: > I have two questions I'm hoping someone will have the patience to > answer as an act of mercy. > > I. How to get past a Terms of Service page? > > I've just started learning python (have never done any programming > prior) and am trying to figure out how to open or downloa

[Tutor] Urllib, mechanize, beautifulsoup, lxml do not compute (for me)!

2009-07-06 Thread David Kim
Hello all, I have two questions I'm hoping someone will have the patience to answer as an act of mercy. I. How to get past a Terms of Service page? I've just started learning python (have never done any programming prior) and am trying to figure out how to open or download a website to scrape da

Re: [Tutor] urllib unquote

2009-02-17 Thread Norman Khine
it is my error, the data is a sha string and it is not possible to get the string back, unless you use rainbowtables or something of the sort. Kent Johnson wrote: On Mon, Feb 16, 2009 at 8:12 AM, Norman Khine wrote: Hello, Can someone point me in the right direction. I would like to return th

Re: [Tutor] urllib unquote

2009-02-17 Thread Kent Johnson
On Mon, Feb 16, 2009 at 8:12 AM, Norman Khine wrote: > Hello, > Can someone point me in the right direction. I would like to return the > string for the following: > > Type "help", "copyright", "credits" or "license" for more information. import base64, urllib data = 'hL/FGNS40fjoTnp2zIq

Re: [Tutor] urllib unquote

2009-02-17 Thread Senthil Kumaran
On Tue, Feb 17, 2009 at 1:24 PM, Norman Khine wrote: > Thank you, but is it possible to get the original string from this? What do you mean by the original string Norman? Look at these definitions: Quoted String: In the different parts of the URL, there are set of characters, for e.g. space cha

Re: [Tutor] urllib unquote

2009-02-17 Thread Sander Sweers
On Tue, Feb 17, 2009 at 08:54, Norman Khine wrote: > Thank you, but is it possible to get the original string from this? You mean something like this? >>> urllib.quote('hL/FGNS40fjoTnp2zIqq73reK60=\n') 'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A' Greets Sander

Re: [Tutor] urllib unquote

2009-02-16 Thread Norman Khine
Thank you, but is it possible to get the original string from this? Sander Sweers wrote: On Mon, Feb 16, 2009 at 14:12, Norman Khine wrote: Type "help", "copyright", "credits" or "license" for more information. import base64, urllib data = 'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A' data = urllib.unq

Re: [Tutor] urllib unquote

2009-02-16 Thread Sander Sweers
On Mon, Feb 16, 2009 at 14:12, Norman Khine wrote: > Type "help", "copyright", "credits" or "license" for more information. import base64, urllib data = 'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A' data = urllib.unquote(data) print base64.decodestring(data) > ???Ը???Nzv̊??z?+? > >

[Tutor] urllib unquote

2009-02-16 Thread Norman Khine
Hello, Can someone point me in the right direction. I would like to return the string for the following: Type "help", "copyright", "credits" or "license" for more information. >>> import base64, urllib >>> data = 'hL/FGNS40fjoTnp2zIqq73reK60%3D%0A' >>> data = urllib.unquote(data) >>> print base
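As the replies explain, `unquote` only undoes the percent-encoding; the result is a base64-encoded binary digest, not a readable string. A Python 3 sketch using the data from the post:

```python
import base64
from urllib.parse import unquote   # Python 2: urllib.unquote

data = "hL/FGNS40fjoTnp2zIqq73reK60%3D%0A"
decoded = unquote(data)            # undoes %3D -> '=' and %0A -> '\n'
print(decoded)                     # hL/FGNS40fjoTnp2zIqq73reK60=

raw = base64.b64decode(decoded)    # Python 2: base64.decodestring
print(len(raw))                    # 20 bytes: the size of a SHA-1 digest,
                                   # which is why no original plaintext
                                   # can be recovered from it
```

This matches Norman's own conclusion later in the thread: the payload is a SHA hash, so the pre-hash string is not recoverable by decoding.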

Re: [Tutor] URLLIB / GLOB

2007-10-22 Thread Kent Johnson
John wrote: > Hello, > > I would like to write a program which looks in a web directory for, say > *.gif files. Then processes those files in some manner. What I need is > something like glob which will return a directory listing of all the > files matching the search pattern (or just a simply

[Tutor] URLLIB / GLOB

2007-10-22 Thread John
Hello, I would like to write a program which looks in a web directory for, say *.gif files. Then processes those files in some manner. What I need is something like glob which will return a directory listing of all the files matching the search pattern (or just a simply a certain extension). Is t
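HTTP has no true directory listing, so a glob-style match only works if the server serves a browsable HTML index. A hedged sketch under that assumption (a regex is enough for simple index pages; an HTML parser is more robust for messy ones):

```python
import re
from urllib.request import urlopen

def extract_links(html, extension=".gif"):
    """Pull href targets ending in `extension` out of an HTML page."""
    pattern = r'href="([^"]+%s)"' % re.escape(extension)
    return re.findall(pattern, html, re.IGNORECASE)

def list_remote_files(index_url, extension=".gif"):
    """List matching files from a server-generated HTML directory index."""
    html = urlopen(index_url).read().decode("utf-8", errors="replace")
    return extract_links(html, extension)
```

Each returned link can then be fetched individually with `urlopen` for further processing.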

Re: [Tutor] urllib

2006-09-17 Thread Patricia
Hi again, I was able to use urllib2_file, which is a wrapper to urllib2.urlopen(). It seems to work fine, and I'm able to retrieve the contents of the file using: afile = req.form.list[1].file.read() Now I have to store this text file (which is about 500k) and an id number into a mysql database

Re: [Tutor] urllib

2006-09-12 Thread N
Hi, You can try this: import httplib, urllib params = urllib.urlencode({'ID':'1','Name':'name', 'Eid':'we[at]you.com'}) #Assumed URL: test.com/cgi-bin/myform h = httplib.HTTP("test.com") h.putrequest("POST", "/cgi-bin/myform") h.putheader("Content-length", "%d" % len(params)) h.putheader('A

Re: [Tutor] urllib

2006-09-12 Thread Kent Johnson
Patricia wrote: > Hi, > > I have used urllib and urllib2 to post data like the following: > > dict = {} > dict['data'] = info > dict['system'] = aname > > data = urllib.urlencode(dict) > req = urllib2.Request(url) > > And to get the data, I emulated a web page with a submit button: > s =

[Tutor] urllib

2006-09-11 Thread Patricia
Hi, I have used urllib and urllib2 to post data like the following: dict = {} dict['data'] = info dict['system'] = aname data = urllib.urlencode(dict) req = urllib2.Request(url) And to get the data, I emulated a web page with a submit button: s = "" s += "" s += "" s += ""
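The urlencode-plus-Request pattern in the post makes a POST request. A Python 3 sketch of the same idea (the original used Python 2's `urllib.urlencode` and `urllib2.Request`; the URL here is a placeholder):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_post(url, info, aname):
    """Encode two form fields and attach them as a POST body.

    Supplying data= is what turns the request into a POST;
    in Python 3 the body must be bytes, hence the .encode().
    """
    data = urlencode({"data": info, "system": aname}).encode("ascii")
    return Request(url, data=data)

req = build_post("http://example.invalid/submit", "some text", "hostname")
print(req.get_method())   # POST
```

Sending it is then `urlopen(req)`; no HTML page with a submit button needs to be emulated on the client side.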

Re: [Tutor] URLLIB

2005-05-16 Thread Kent Johnson
Please post the code that gave you the error. Kent Servando Garcia wrote: > I tried that and here is the error I am currently getting: > > assert hasattr(proxies, 'has_key'), "proxies must be a mapping" > > I was trying this: >>> X=urllib.URLopener(name,proxies={'http':'URL').distutils.copy_

Re: [Tutor] URLLIB

2005-05-13 Thread Kent Johnson
Servando Garcia wrote: > Hello list > I am on challenge 5. I think I need to some how download a file. I have > been trying like so > > X=urllib.URLopener(name,proxies={'http':'URL').distutils.copy_file('SomeFileName') > urlopener() returns a file-like object - something that behaves like an o

[Tutor] URLLIB

2005-05-13 Thread Servando Garcia
Hello list I am on challenge 5. I think I need to somehow download a file. I have been trying like so X=urllib.URLopener(name,proxies={'http':'URL').distutils.copy_file('SomeFileName') but with no luck. Servando Garcia John 3:16 For GOD so loved the world...
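The `URLopener(...).distutils.copy_file(...)` chain above mixes two unrelated APIs. A sketch of what the post appears to be after, saving a URL to a local file (Python 3 spelling; `urllib.request.urlretrieve(url, dest)` does the same in one call):

```python
import shutil
from urllib.request import urlopen

def download(url, dest):
    """Stream a remote resource straight into a local file."""
    with urlopen(url) as resp, open(dest, "wb") as out:
        shutil.copyfileobj(resp, out)   # copies in chunks, no full read into memory
```

For example, `download("http://example.com/somefile", "somefile")` would fetch and save it; the filename is illustrative, since the challenge's real URL is not in the post.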