Re: MySQL with Python
On Tue, 2012-10-16 at 01:01 +1100, Chris Angelico wrote:
> But you may wish to consider using PostgreSQL instead.

Thanks. As I am very new to databases, I am not really aware of the options I have, and in my library I did not find anything on PostgreSQL. I will google its support as well, but can you kindly let me know whether it is well documented? I can see their mailing list is quite active, so that may not be a problem.

-- http://mail.python.org/mailman/listinfo/python-list
MySQL with Python
Dear friends,

I am starting a project to create a database using MySQL (my first project with databases). I went to my institute library and found that all the books cover MySQL with Perl and PHP. I am new to Python itself and gradually loving it; I mostly use it as an alternative to shell scripts. Since learning a new language for every new project is not possible (it is a self-assigned project, done in my free time), can I do "MySQL with Python"? If yes, can you kindly suggest a book/reference on this?
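To the question above: yes, Python talks to MySQL through the MySQLdb module (the MySQL-python package), which implements the standard DB-API 2.0 interface (PEP 249). A minimal sketch of that interface follows; it uses the stdlib sqlite3 module as a stand-in only so it runs without a MySQL server, and the table and column names are made up for illustration:

```python
# The DB-API 2.0 pattern (PEP 249) that MySQLdb follows.  sqlite3 from the
# standard library is used here only so the sketch runs without a MySQL
# server; with MySQLdb you would instead write
#   conn = MySQLdb.connect(host="localhost", user="me", passwd="...", db="mydb")
# and use %s placeholders instead of ?.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE books (title TEXT, language TEXT)")
# Always pass values as parameters -- never build SQL with string formatting.
cur.execute("INSERT INTO books VALUES (?, ?)", ("Programming Python", "Python"))
conn.commit()
cur.execute("SELECT title FROM books WHERE language = ?", ("Python",))
rows = cur.fetchall()
print(rows)  # [('Programming Python',)]
conn.close()
```

The connect/cursor/execute/fetchall shape is the same across DB-API drivers, so code written this way moves between MySQL, PostgreSQL (psycopg2), and SQLite with few changes beyond the connect call and the placeholder style.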
Re: get google scholar using python
I know one more Python app that does the same thing, http://www.icir.org/christian/downloads/scholar.py , and a few other apps (e.g. Mendeley Desktop) for which I found an explanation (from http://academia.stackexchange.com/questions/2567/api-eula-and-scraping-for-google-scholar ):

"I know how Mendley uses it: they require you to click a button for each individual search of Google Scholar. If they automatically did the Google Scholar meta-data search for each paper when you import a folder-full then they would violate the old Scholar EULA. That is why they make you click for each query: if each query is accompanied by a click and not part of some script or loop then it is in compliance with the old EULA."

So, if I manage to use the User-Agent as shown by you, will I still be violating the Google EULA? This is my first try at scraping HTML, so please help.

On Mon, 2012-10-01 at 16:51 +, Nick Cash wrote:
> > urllib2.urlopen('http://scholar.google.co.uk/scholar?q=albert
> > ...
> > urllib2.HTTPError: HTTP Error 403: Forbidden
>
> Looks like Google blocks non-browser user agents from retrieving this
> query. You *could* work around it by setting the User-Agent header to
> something fake that looks browser-ish, but you're almost certainly
> breaking Google's TOS if you do so.
>
> Should you really really want to, urllib2 makes it easy:
> urllib2.urlopen(urllib2.Request("http://scholar.google.co.uk/scholar?q=albert+einstein%2B1905&btnG=&hl=en&as_sdt=0%2C5&as_sdtp=",
>     headers={"User-Agent":"Mozilla/5.0 Cheater/1.0"}))
>
> -Nick Cash
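For reference, a minimal sketch of the header-spoofing approach Nick describes, building the Request without actually opening it (his warning stands: faking the User-Agent almost certainly breaks Google's TOS). The try/except import makes it run under both Python 2 (urllib2, as in this thread) and Python 3:

```python
try:
    from urllib2 import Request  # Python 2, as used in this thread
except ImportError:
    from urllib.request import Request  # Python 3 equivalent

# Build the request with a browser-ish User-Agent.  urllib stores header
# names in capitalized form, hence get_header("User-agent") below.
req = Request(
    "http://scholar.google.co.uk/scholar?q=albert+einstein%2B1905",
    headers={"User-Agent": "Mozilla/5.0 Cheater/1.0"},
)
print(req.get_header("User-agent"))  # Mozilla/5.0 Cheater/1.0
# To actually fetch, pass req to urlopen() -- at your own TOS risk.
```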
get google scholar using python
If I try to access a Google Scholar search result using Python, I get the following error (403):

$ python
Python 2.7.3 (default, Jul 24 2012, 10:05:38)
[GCC 4.7.0 20120507 (Red Hat 4.7.0-5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from HTMLParser import HTMLParser
>>> import urllib2
>>> response = urllib2.urlopen('http://scholar.google.co.uk/scholar?q=albert+einstein%2B1905&btnG=&hl=en&as_sdt=0%2C5&as_sdtp=')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.7/urllib2.py", line 126, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 406, in open
    response = meth(req, response)
  File "/usr/lib64/python2.7/urllib2.py", line 519, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib64/python2.7/urllib2.py", line 444, in error
    return self._call_chain(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 378, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 527, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
>>>

Will you kindly explain to me how to get rid of this?
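Once the HTML is actually retrieved, the HTMLParser import at the top of the session can be put to work. A minimal sketch of a subclass that collects the text of chosen tags follows; the choice of <h3> for result titles is purely illustrative, not a claim about Scholar's actual markup:

```python
try:
    from HTMLParser import HTMLParser  # Python 2, as in the session above
except ImportError:
    from html.parser import HTMLParser  # Python 3 equivalent

class TitleExtractor(HTMLParser):
    """Collect text that appears inside <h3> tags (hypothetical markup)."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.in_title = False
        self.titles = []
    def handle_starttag(self, tag, attrs):
        if tag == 'h3':
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == 'h3':
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data)

p = TitleExtractor()
p.feed('<html><h3>Some paper title</h3><p>abstract text</p></html>')
print(p.titles)  # ['Some paper title']
```

In practice you would feed it the string returned by response.read(); the parser is event-driven, so feed() can be called repeatedly on chunks.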