On 15 May 2018 18:18, MRAB wrote:
On 2018-05-15 13:12, mahesh d wrote:
import glob,os
[...]
files = glob.glob(path)
You've got a list of filenames here - they would work, _if_ you had passed in a
glob pattern instead of just the directory path. Try this again, joining
"*.msg" or "*.txt" to the directory path.
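MRAB's suggestion can be sketched like this (the helper name is invented; the directory path is the original poster's, shown as a raw string so the backslash isn't treated as an escape):

```python
import glob
import os

def files_with_ext(folder, pattern):
    # glob.glob() on a bare directory path matches only the directory
    # itself; join a wildcard pattern onto it first.
    return glob.glob(os.path.join(folder, pattern))

# e.g., with the original poster's directory:
# txt_files = files_with_ext(r"C:\Users\A-7993\Desktop\task11\sample emails", "*.txt")
# msg_files = files_with_ext(r"C:\Users\A-7993\Desktop\task11\sample emails", "*.msg")
```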
On May 15, 2018 14:12, mahesh d wrote:
import glob,os
import errno
path = 'C:/Users/A-7993\Desktop/task11/sample emails/'
files = glob.glob(path)
'''for name in files:
print(str(name))
if name.endswith(".txt"):
print(name)'''
for file in os.listdir(path):
print(file)
On May 15, 2018 08:54, Steven D'Aprano wrote:
On Tue, 15 May 2018 11:53:47 +0530, mahesh d wrote:
> Hi.
>
> I have a folder; in that folder are some .txt files and some .msg files.
>
> My requirement is to read those files' contents and extract the data in
> those files
On 2018-05-15 13:12, mahesh d wrote:
import glob,os
import errno
path = 'C:/Users/A-7993\Desktop/task11/sample emails/'
files = glob.glob(path)
'''for name in files:
print(str(name))
if name.endswith(".txt"):
print(name)'''
for file in os.listdir(path):
print(file)
On 15/05/18 13:12, mahesh d wrote:
import glob,os
import errno
path = 'C:/Users/A-7993\Desktop/task11/sample emails/'
files = glob.glob(path)
for file in os.listdir(path):
print(file)
if file.endswith(".txt"):
print(os.path.join(path, file))
print(file)
try:
import glob,os
import errno
path = 'C:/Users/A-7993\Desktop/task11/sample emails/'
files = glob.glob(path)
'''for name in files:
print(str(name))
if name.endswith(".txt"):
print(name)'''
for file in os.listdir(path):
print(file)
if file.endswith(".txt"):
On Tue, 15 May 2018 11:53:47 +0530, mahesh d wrote:
> Hi.
>
> I have a folder; in that folder are some .txt files and some .msg files.
>
> My requirement is to read those files' contents and extract the data in
> those files.
The answer to this question is the same as the answe
Hi.
I have a folder; in that folder are some .txt files and some .msg files.
My requirement is to read those files' contents and extract the data in those files.
--
https://mail.python.org/mailman/listinfo/python-list
You could use http://docs.python.org/lib/module-optparse.html
So the return value from getopt.getopt() is a list of tuples, e.g.
>>> import getopt
>>> opts = getopt.getopt('-a 1 -b 2 -a 3'.split(), 'a:b:')[0]; opts
[('-a', '1'), ('-b', '2'), ('-a', '3')]
what's the idiomatic way of using this result? I can think of several
possibilities.
For options not a
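Two common idioms for consuming that list of tuples, depending on whether a repeated option should overwrite or accumulate (a sketch, not the only possibilities):

```python
import getopt
from collections import defaultdict

opts, args = getopt.getopt("-a 1 -b 2 -a 3".split(), "a:b:")

# Idiom 1: fold the (flag, value) pairs into a dict;
# the last occurrence of a repeated flag wins.
config = dict(opts)

# Idiom 2: accumulate repeated flags instead of overwriting.
multi = defaultdict(list)
for flag, value in opts:
    multi[flag].append(value)
```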
Steve Holden <[EMAIL PROTECTED]> writes:
> I especially like the terms and conditions they ask you to acknowledge
> if you want to sign up as a worker:
>http://www.captchasolver.com/join/worker#
Heh, cute, I guess you have to solve a different type of puzzle to
read them.
I'm surprised anyone
Paul Rubin wrote:
> "Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
>> Obviously this wouldn't really help, as you can't predict which
>> events a website actually wants, and possibly in which
>> order. Especially if the site does not _want_ to be scrapable - think
>> of a simple "click on the image
[EMAIL PROTECTED] wrote:
> How do I extract the visible numerical data from this Microsoft financial
> web site?
>
> http://tinyurl.com/yw2w4h
>
> If you simply download the HTML file you'll see the data is *not*
> embedded in it but loaded from some other file.
>
> Surely if I can see the data in my
"Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
> Obviously this wouldn't really help, as you can't predict which
> events a website actually wants, and possibly in which
> order. Especially if the site does not _want_ to be scrapable - think
> of a simple "click on the images in the order of the nu
Paul Rubin wrote:
> "Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
>> Nice idea, but not really helpful in the end. Besides the rather nasty
>> parts of the DOMs that make JS programming the PITA it is, I think the
>> whole event-based stuff makes this basically impossible.
>
> Obviously the Pyt
"Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
> Nice idea, but not really helpful in the end. Besides the rather nasty
> parts of the DOMs that make JS programming the PITA it is, I think the
> whole event-based stuff makes this basically impossible.
Obviously the Python interface would need ways
Paul Rubin wrote:
> "Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
>> Still, some pages are AJAX, you won't be able to scrape them easily
>> without analyzing the JS code.
>
> Sooner or later it would be great to have a JS interpreter written in
> Python for this purpose. It would do all the sa
"Diez B. Roggisch" <[EMAIL PROTECTED]> writes:
> Still, some pages are AJAX, you won't be able to scrape them easily
> without analyzing the JS code.
Sooner or later it would be great to have a JS interpreter written in
Python for this purpose. It would do all the same operations on an
HTML/XML D
> It's an AJAX site. You have to carefully analyze it and see what
> actually happens in the JavaScript, then use that. Maybe something like
> the HTTP-header plugin for Firefox helps you there.
Oops, obviously I wasn't looking closely enough at the site. Sorry for the confusion.
Still, some pages are
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> How do I extract the visible numerical data from this Microsoft
> financial web site?
>
> http://tinyurl.com/yw2w4h
>
> If you simply download the HTML file you'll see the data is *not*
> embedded in it but loaded from some other file.
>
> Surely if I
[EMAIL PROTECTED] wrote:
> How do I extract the visible numerical data from this Microsoft financial
> web site?
>
> http://tinyurl.com/yw2w4h
>
> If you simply download the HTML file you'll see the data is *not*
> embedded in it but loaded from some other file.
>
> Surely if I can see the data in
How do I extract the visible numerical data from this Microsoft financial
web site?
http://tinyurl.com/yw2w4h
If you simply download the HTML file you'll see the data is *not*
embedded in it but loaded from some other file.
Surely if I can see the data in my browser I can grab it somehow right
in a P
[EMAIL PROTECTED] wrote:
> I'm trying to extract some data from an XHTML Transitional web page.
>
> What is best way to do this?
>
> xml.dom.minidom.
As a side note, cElementTree is probably a better choice. Or even a
simple SAX parser.
>parseString("text of web page") gives errors about it
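The cElementTree suggestion, sketched with xml.etree.ElementTree (where cElementTree now lives in modern Pythons); note the XHTML namespace must be given, or stripped, when searching, which often surprises people. The page content here is invented for illustration:

```python
import xml.etree.ElementTree as ET

page = ('<html xmlns="http://www.w3.org/1999/xhtml">'
        '<body><p>data</p></body></html>')
root = ET.fromstring(page)

# Elements inherit the XHTML default namespace, so a plain "p"
# search finds nothing; map a prefix to the namespace instead.
ns = {"x": "http://www.w3.org/1999/xhtml"}
text = root.find(".//x:p", ns).text
```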
[EMAIL PROTECTED] wrote:
> I'm trying to extract some data from an XHTML Transitional web page.
>
> What is best way to do this?
May I suggest html5lib [1]? It's based on the parsing section of the
WHATWG "HTML5" spec [2], which is in turn based on the behavior of major
web browsers, so it should
On Fri, 02 Mar 2007 15:32:58 -0800, [EMAIL PROTECTED] wrote:
> I'm trying to extract some data from an XHTML Transitional web page.
> xml.dom.minidom.parseString("text of web page") gives errors about it
> not being well formed XML.
> Do I just need to add something like or what?
As many HTML Tr
[EMAIL PROTECTED] wrote:
> I'm trying to extract some data from an XHTML Transitional web page.
>
> What is best way to do this?
An XML parser should be sufficient. However...
> xml.dom.minidom.parseString("text of web page") gives errors about it
> not being well formed XML.
>
> Do I just need t
I'm trying to extract some data from an XHTML Transitional web page.
What is best way to do this?
xml.dom.minidom.parseString("text of web page") gives errors about it
not being well formed XML.
Do I just need to add something like or what?
Chris
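Both sides of Chris's problem can be reproduced in a few lines: minidom handles well-formed XHTML fine, and the "not well formed" error comes from expat as soon as the markup is tag soup. The snippets are invented for illustration:

```python
import xml.dom.minidom
import xml.parsers.expat

# Well-formed XHTML parses without complaint...
doc = xml.dom.minidom.parseString("<html><body><p>42</p></body></html>")
value = doc.getElementsByTagName("p")[0].firstChild.data

# ...but typical tag soup (an unclosed <p> here) raises ExpatError,
# which is the "not well formed" error reported above.
try:
    xml.dom.minidom.parseString("<html><body><p>42</body></html>")
    soup_parsed = True
except xml.parsers.expat.ExpatError:
    soup_parsed = False
```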
Milos Prudek wrote:
> > A better solution would be to extract cookies from headers in the
> > request method and return them with response (see the code below). I
>
> Full solution! Wow! Thank you very much. I certainly do not deserve such
> kindness. Thanks a lot Filip!
Glad to help. All in all t
> A better solution would be to extract cookies from headers in the
> request method and return them with response (see the code below). I
Full solution! Wow! Thank you very much. I certainly do not deserve such
kindness. Thanks a lot Filip!
--
Milos Prudek
Milos Prudek wrote:
> > Overload the _parse_response method of Transport in your
> > BasicAuthTransport and extract headers from raw response. See the
> > source of xmlrpclib.py in the standard library for details.
>
> Thank you.
>
> I am a bit of a false beginner in Python. I have written only sho
> Overload the _parse_response method of Transport in your
> BasicAuthTransport and extract headers from raw response. See the
> source of xmlrpclib.py in the standard library for details.
Thank you.
I am a bit of a false beginner in Python. I have written only short scripts. I
want to read "D
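Filip's advice can be sketched for the modern xmlrpc.client module, where the hook is named parse_response rather than Python 2's _parse_response. The class name and server URL are invented:

```python
import xmlrpc.client

class HeaderCapturingTransport(xmlrpc.client.Transport):
    def parse_response(self, response):
        # Stash the raw HTTP headers of the last response before
        # handing the body off to the normal XML-RPC parser.
        self.last_headers = dict(response.getheaders())
        return super().parse_response(response)

# Usage (hypothetical endpoint):
# proxy = xmlrpc.client.ServerProxy("http://example.com/RPC2",
#                                   transport=HeaderCapturingTransport())
```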
Milos Prudek wrote:
> I perform an XML-RPC call by calling xmlrpclibBasicAuth, which in turn calls
> xmlrpclib. This call of course sends an HTTP request with the correct HTTP
> headers. The response is correctly parsed by xmlrpclib, and I get my desired
> values.
>
> However, I also need to get the raw H
I perform an XML-RPC call by calling xmlrpclibBasicAuth, which in turn calls
xmlrpclib. This call of course sends an HTTP request with the correct HTTP
headers. The response is correctly parsed by xmlrpclib, and I get my desired
values.
However, I also need to get the raw HTTP headers from the HTTP r
The 2nd URL seems to be a dead link?
On Mon, 2006-03-20 at 23:01 +1100, John Machin wrote:
> *ALL* [ho ho chuckle chuckle]
> you need to do is step through the tokens and do something with the ones
> that contain references.
And contribute back the code? =)
--
Felipe.
On 20/03/2006 10:37 PM, jcmendez wrote:
> Exactly.
Yes, your requirement is exactly that, a list of references.
> Once I get the formulas, I can do a weak parsing of them and
> find the references.
>
A formula is not stored as input e.g. "(A1+A2)*3.0+$Z$29"; it's kept as
an RPN stream of var
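Since Excel keeps formulas as RPN token streams rather than text, a regex only works once a formula has been decompiled back to a string; as a rough sketch of the "weak parsing" Juan describes (the pattern and helper are invented, and will misfire on sheet-qualified or defined-name references):

```python
import re

# Matches simple A1-style references, with or without $ anchors.
REF = re.compile(r"\$?[A-Z]{1,3}\$?\d+")

def references(formula_text):
    """Pull cell references out of a formula *string*."""
    return REF.findall(formula_text)
```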
Exactly. Once I get the formulas, I can do a weak parsing of them and
find the references.
On 20/03/2006 10:00 PM, jcmendez wrote:
> Hi John
>
> I'd like to create a dependency graph and plot it with Graphviz. I've
> played a bit with exporting the sheet in XML format and parsing the
> XML. That somehow works, but it would be much better if the users
> didn't need to re-save the sheets, just put them in a shared
Hi John
I'd like to create a dependency graph and plot it with Graphviz. I've
played a bit with exporting the sheet in XML format and parsing the
XML. That somehow works, but it would be much better if the users
didn't need to re-save the sheets, just put them in a shared
directory where I ca
On 20/03/2006 9:28 PM, jcmendez wrote:
> John
>
> Thanks for walking us through the comparison. On the xlrd website I
> saw that it does not import formulas from the Excel files, which is
> what I'm looking for. Any suggestions?
>
> Juan C.
>
Juan, what do you want to do with the formulas afte
John
Thanks for walking us through the comparison. On the xlrd website I
saw that it does not import formulas from the Excel files, which is
what I'm looking for. Any suggestions?
Juan C.
Kent Johnson wrote:
> John Machin wrote:
>> * Herewith the biased comparison:
>
> Thank you!
Thank you (John) as well. I realize you are a bit reluctant to toot
your own horn, but it is just this kind of biased comparison that
lets us know whether to investigate further. It also helps th
John Machin wrote:
> On 19/03/2006 2:30 PM, Kent Johnson wrote:
>>That didn't shed much light. I'm interested in your biased opinion,
>>certainly you must have had a reason to write a new package.
>
> * It's not new. First public release was on 2005-05-15. When I started
> writing it, there was
On 19/03/2006 2:30 PM, Kent Johnson wrote:
> John Machin wrote:
>
>> On 19/03/2006 8:31 AM, Kent Johnson wrote:
>>
>>> How does xlrd compare with pyexcelerator? At a glance they look
>>> pretty similar.
>>>
>>
>> I have an obvious bias, so I'll just leave you with a not-very-PC
>> analogy to thi
John Machin wrote:
> On 19/03/2006 8:31 AM, Kent Johnson wrote:
>>How does xlrd compare with pyexcelerator? At a glance they look pretty
>>similar.
>>
>
> I have an obvious bias, so I'll just leave you with a not-very-PC
> analogy to think about:
>
> Depending on the ambient light and the quant
- xlrd seems to be focused on extracting data.
- pyexcelerator can also generate Excel files.
On 19/03/2006 8:31 AM, Kent Johnson wrote:
> John Machin wrote:
>
>> I am pleased to announce a new general release (0.5.2) of xlrd, a Python
>> package for extracting data from Microsoft Excel spreadsheets.
>
>
> How does xlrd compare with pyexcelerator? At a glance they look pretty
> similar.
John Machin wrote:
> I am pleased to announce a new general release (0.5.2) of xlrd, a Python
> package for extracting data from Microsoft Excel spreadsheets.
How does xlrd compare with pyexcelerator? At a glance they look pretty
similar.
Thanks,
Kent
I am pleased to announce a new general release (0.5.2) of xlrd, a Python
package for extracting data from Microsoft Excel spreadsheets.
CHANGES:
* Book and sheet objects can now be pickled and unpickled. Instead of
reading a large spreadsheet multiple times, consider pickling it once
and loading the pickle on subsequent runs.
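The pickle-once pattern the changelog suggests might look like this; load_cached is an invented helper, and with xlrd the loader argument would presumably be xlrd.open_workbook:

```python
import os
import pickle

def load_cached(path, loader, cache_path):
    # Parse the expensive file once, pickle the result, and reload
    # the pickle on later runs instead of re-parsing.
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:
            return pickle.load(f)
    obj = loader(path)
    with open(cache_path, "wb") as f:
        pickle.dump(obj, f)
    return obj
```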
livin wrote:
> I'm looking for an easy way to automate the below web site browsing and pull
> the data I'm searching for.
This is a task that BeautifulSoup[1] is usually good for.
> 4) After search, table shows many links (hundreds sometimes) to the actual
> data I need.
> Links are this fo
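As a dependency-free sketch of the same link-harvesting idea (BeautifulSoup would be the more convenient tool, as suggested; the markup below is invented):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag encountered in the page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

collector = LinkCollector()
collector.feed('<table><tr><td><a href="/data/row1">row 1</a></td></tr></table>')
```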
I'm hoping someone knows of an example script I can see to help me build
mine.
I'm looking for an easy way to automate the below web site browsing and pull
the data I'm searching for.
Here's steps it needs to accomplish...
1) login to the site (windows dialog when hitting web page) *optional*