On Monday, January 21, 2013 8:29:50 PM UTC-6, MRAB wrote:
> On 2013-01-22 01:56, Brian D wrote:
> > Hi,
> >
> > I'm trying to instantiate a class repeatedly and dynamically, as
> > many times as required, storing each instance
Hi,
I'm trying to instantiate a class repeatedly and dynamically, as many
times as required, storing each instance in a container to later
write out to a database. It kind of looks like what's needed is a
two-dimensional class object, but I can't quite conceptualize how to do
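For what it's worth, a minimal sketch of the pattern being described: one instance per scraped row, all kept in a list for a later bulk insert. The `Record` class and the sample data are invented for illustration.

```python
class Record(object):
    """Hypothetical record class; one instance per scraped row."""
    def __init__(self, name, value):
        self.name = name
        self.value = value

    def as_row(self):
        # Tuple form suitable for a parameterized INSERT.
        return (self.name, self.value)

# Instantiate the class as many times as needed, keeping each
# instance in a list (the "container") for a later bulk write.
scraped = [("alpha", 1), ("beta", 2), ("gamma", 3)]
records = [Record(name, value) for name, value in scraped]

rows = [r.as_row() for r in records]
# rows can then be handed to e.g. cursor.executemany(
#     "INSERT INTO t (name, value) VALUES (?, ?)", rows)
```

The list of instances is effectively the "two-dimensional" object: one axis is the instances, the other is each instance's attributes.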
In an HTML page that I'm scraping using urllib2, a \xc2\xa0
bytestring appears.
The page's charset = utf-8, and the Chrome browser I'm using displays
the characters as a space.
The page requires authentication:
https://www.nolaready.info/myalertlog.php
When I try to concatenate strings containi
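A short sketch of what is going on: `b"\xc2\xa0"` is the UTF-8 encoding of U+00A0 (NO-BREAK SPACE), which is why Chrome renders it as an ordinary-looking space. Decoding the page with its declared charset before concatenating, then normalizing the character, avoids the mixed bytes/unicode trouble. The `"Parcel"` sample value is invented.

```python
# b"\xc2\xa0" is the UTF-8 encoding of U+00A0 (NO-BREAK SPACE).
raw = b"Parcel\xc2\xa0123"

# Decode with the page's declared charset first, then normalize
# the no-break space to a plain ASCII space before concatenating.
text = raw.decode("utf-8").replace(u"\xa0", u" ")
```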
On Jan 28, 8:27 am, Lie Ryan wrote:
> On 01/28/10 11:28, Brian D wrote:
>
> > I've tackled this kind of problem before by looping through a patterns
> > dictionary, but there must be a smarter approach.
>
> > Two addresses. Note that the first has incorr
> Correction:
>
> [snip] the expression "parts[1 : -1]" means gather list items from the
> second element in the list (index value 1) to one index position
> before the end of the list. [snip]
MRAB's solution deserved a more complete response:
>>> def parse_address(address):
# Ha
On Jan 28, 7:40 am, Brian D wrote:
> > > [snip]
> > > Regex doesn't gain you much. I'd split the string and then fix the parts
> > > as necessary:
>
> > > >>> def parse_address(address):
> > > ... parts = address.split(
> > [snip]
> > Regex doesn't gain you much. I'd split the string and then fix the parts
> > as necessary:
>
> > >>> def parse_address(address):
> > ... parts = address.split()
> > ... if parts[-2] == "S":
> > ... parts[1 : -1] = [parts[-2]] + parts[1 : -2]
> > ... parts[1 : -1]
On Jan 27, 7:27 pm, MRAB wrote:
> Brian D wrote:
> > I've tackled this kind of problem before by looping through a patterns
> > dictionary, but there must be a smarter approach.
>
> > Two addresses. Note that the first has incorrectly transposed the
> > direct
On Jan 27, 6:35 pm, Paul Rubin wrote:
> Brian D writes:
> > I've tackled this kind of problem before by looping through a patterns
> > dictionary, but there must be a smarter approach.
> > Two addresses. Note that the first has incorrectly transposed the
I've tackled this kind of problem before by looping through a patterns
dictionary, but there must be a smarter approach.
Two addresses. Note that the first has incorrectly transposed the
direction and street name. The second has an extra space in it before
the street type. Clearly done by someone
On Jan 19, 11:51 am, Brian D wrote:
> On Jan 19, 11:28 am, Peter Otten <__pete...@web.de> wrote:
>
> > Brian D wrote:
> > > Here's a simple named group matching pattern:
>
> > >>>> s = "1,2,3"
> > >>
On Jan 19, 11:28 am, Peter Otten <__pete...@web.de> wrote:
> Brian D wrote:
> > Here's a simple named group matching pattern:
>
> >>>> s = "1,2,3"
> >>>> p = re.compile(r"(?P<a>\d),(?P<b>\d),(?P<c>\d)")
> >>>> m =
Here's a simple named group matching pattern:
>>> s = "1,2,3"
>>> p = re.compile(r"(?P<a>\d),(?P<b>\d),(?P<c>\d)")
>>> m = re.match(p, s)
>>> m
<_sre.SRE_Match object at 0x011BE610>
>>> print m.groups()
('1', '2', '3')
Is it possible to call the group names, so that I can iterate over
them?
The result I'
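The archive ate the angle-bracketed group names in the original pattern (they read as HTML tags), so `a`, `b` and `c` below are stand-in names. Given named groups, `groupdict()` maps each name to its match, which answers the iteration question:

```python
import re

# "a", "b", "c" are stand-in group names; the originals were lost
# to HTML stripping in the archive.
p = re.compile(r"(?P<a>\d),(?P<b>\d),(?P<c>\d)")
m = p.match("1,2,3")

# The declared names live on the pattern object itself:
for name in sorted(p.groupindex):
    value = m.group(name)

# groupdict() yields name -> match pairs directly.
pairs = sorted(m.groupdict().items())  # [('a','1'), ('b','2'), ('c','3')]
```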
On Jan 5, 1:08 pm, Nobody wrote:
> On Mon, 04 Jan 2010 08:09:56 -0800, Brian D wrote:
> > If I'm running a process in a loop that runs for a long time, I
> > occasionally would like to look at a log to see how it's going.
>
> > I know about the logging modu
On Jan 4, 10:29 am, Antoine Pitrou wrote:
> Le Mon, 04 Jan 2010 08:09:56 -0800, Brian D a écrit :
>
> > What I've seen is that flush() alone produces a complete log when the
> > loop finishes. When I used fsync(), I lost all of the write entries
> > except t
If I'm running a process in a loop that runs for a long time, I
occasionally would like to look at a log to see how it's going.
I know about the logging module, and may yet decide to use that.
Still, I'm troubled by how fsync() doesn't seem to work as advertised:
http://docs.python.org/library/o
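A sketch of the likely explanation: `flush()` pushes Python's buffer to the OS, and only then can `os.fsync()` push the OS buffer to disk. Calling `fsync()` alone does nothing for data still sitting in Python's buffer, which would produce exactly the lost-entries symptom. The filename is invented.

```python
import os

log = open("progress.log", "a")
for i in range(3):
    log.write("step %d done\n" % i)
    log.flush()             # Python's buffer -> OS
    os.fsync(log.fileno())  # OS buffer -> disk, so a tail of the
                            # file shows progress mid-loop
log.close()
```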
On Dec 30, 7:08 pm, MRAB wrote:
> Brian D wrote:
> > Thanks MRAB as well. I've printed all of the replies to retain with my
> > pile of essential documentation.
>
> > To follow up with a complete response, I'm ripping out of my mechanize
> > module the es
Thanks MRAB as well. I've printed all of the replies to retain with my
pile of essential documentation.
To follow up with a complete response, I'm ripping out of my mechanize
module the essential components of the solution I got to work.
The main body of the code passes a URL to the scrape_record
On Dec 30, 12:31 pm, Philip Semanchuk wrote:
> On Dec 30, 2009, at 11:00 AM, Brian D wrote:
>
> > I'm actually using mechanize, but that's too complicated for testing
> > purposes. Instead, I've simulated in a urllib2 sample below an attempt
> &g
On Dec 30, 11:06 am, samwyse wrote:
> On Dec 30, 10:00 am, Brian D wrote:
>
> > What I don't understand is how to test for a valid URL request, and
> > then jump out of the "while True" loop to proceed to another line of
> > code below the loop. There'
I'm actually using mechanize, but that's too complicated for testing
purposes. Instead, I've simulated in a urllib2 sample below an attempt
to test for a valid URL request.
I'm attempting to craft a loop that will trap failed attempts to
request a URL (in cases where the connection intermittently
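The shape of the loop being asked for, sketched with Python 3's `urllib.request` standing in for the thread's `urllib2` (the `opener` parameter is an invented seam so the retry logic can be exercised without a network):

```python
import urllib.request            # "urllib2" under Python 2
from urllib.error import URLError

def fetch(url, attempts=3, opener=urllib.request.urlopen):
    # Trap intermittent failures and retry; a success returns
    # immediately, so control reaches the code below the loop
    # instead of spinning in "while True" forever.
    for _ in range(attempts):
        try:
            return opener(url, timeout=10)
        except URLError:
            continue             # connection dropped: try again
    return None                  # every attempt failed
```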
On Dec 25, 4:36 am, "Diez B. Roggisch" wrote:
> Brian D schrieb:
>
> > A search form returns a list of records embedded in a table.
>
> > The user has to click on a table row to call a Javascript call that
> > opens up the detail page.
>
> > It'
A search form returns a list of records embedded in a table.
The user has to click on a table row to call a Javascript call that
opens up the detail page.
It's the detail page, of course, that really contains the useful
information.
How can I use Mechanize to click a row?
Any ideas?
--
http:/
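mechanize cannot execute JavaScript, so "clicking" the row usually means reading the onclick handler out of the HTML and requesting the detail URL it would have opened. The markup, handler name, and URL below are all invented for illustration:

```python
import re

# Invented sample row; the real page's onclick handler and detail
# URL would need to be read from its source.
html = '<tr onclick="showDetail(4217)"><td>123 Main St</td></tr>'

m = re.search(r'showDetail\((\d+)\)', html)
detail_url = "http://example.com/detail.php?id=%s" % m.group(1)
# A mechanize.Browser() could then browser.open(detail_url) to
# fetch the page the click targets.
```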
On Dec 23, 8:33 am, Brian D wrote:
> All,
>
> I'm hoping to implement a project that will be historically
> transformational by mapping inequalities in property assessments.
>
> I'm stuck at step one: Scrape data from http://www.opboa.org.
>
> The site uses a bunc
On Dec 24, 8:20 am, Brian D wrote:
> Just kidding. That was a fascinating discussion.
>
> Now I'd like to see if anyone would rather procrastinate than finish
> last-minute shopping.
>
> This problem remains untouched. Anyone want to give it a try? Please?
>
> I
Just kidding. That was a fascinating discussion.
Now I'd like to see if anyone would rather procrastinate than finish
last-minute shopping.
This problem remains untouched. Anyone want to give it a try? Please?
I'm hoping to implement a project that will be historically
transformational by mappin
All,
I'm hoping to implement a project that will be historically
transformational by mapping inequalities in property assessments.
I'm stuck at step one: Scrape data from http://www.opboa.org.
The site uses a bunch of hidden controls. I can't find a way to get
past the initial disclaimer page be
The other thought I had was that I may not be properly trapping the
end of the first row, and the beginning of the next row.
On Oct 2, 8:38 am, John wrote:
> On Oct 2, 1:10 am, "504cr...@gmail.com" <504cr...@gmail.com> wrote:
>
> > I'm kind of new to regular expressions, and I've spent hou
Yes, John, that's correct. I'm trying to trap and discard the row
elements, re-formatting with pipes so that I can more readily
import the data into a database. The tags are, of course, initially
useful for pattern discovery. But there are other approaches -- I
could just replace the tags and cap
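The replace-the-tags approach can be sketched in two substitutions; the row markup here is invented for illustration:

```python
import re

# Invented sample row: turn cell boundaries into pipes, then strip
# the leftover row/cell tags, leaving one pipe-delimited line per
# table row for database import.
row = "<tr><td>123 Main St</td><td>Smith</td><td>41500</td></tr>"

line = re.sub(r"</td>\s*<td>", "|", row)  # cell boundaries -> pipes
line = re.sub(r"</?t[rd]>", "", line)     # drop the remaining tags
```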
On Jun 11, 9:22 am, Brian D wrote:
> On Jun 11, 2:01 am, Lie Ryan wrote:
>
> > 504cr...@gmail.com wrote:
> > > I've encountered a problem with my RegEx learning curve -- how to
> > > escape hash characters # in strings being matched, e.g.:
On Jun 11, 2:01 am, Lie Ryan wrote:
> 504cr...@gmail.com wrote:
> > I've encountered a problem with my RegEx learning curve -- how to
> > escape hash characters # in strings being matched, e.g.:
>
> string = re.escape('123#abc456')
> match = re.match('\d+', string)
> print match
>
>
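The point of the quoted reply, spelled out: `#` is not a regex metacharacter (outside `re.VERBOSE` mode), so the subject string needs no escaping at all; `re.escape()` is for building patterns out of literal text, not for the string being searched.

```python
import re

# "#" has no special meaning in the subject string; \d+ simply
# stops matching at the first non-digit.
match = re.match(r"\d+", "123#abc456")
digits = match.group()              # the leading run of digits

# re.escape() is only needed when embedding the literal text in a
# pattern, where "#" could otherwise be special under re.VERBOSE.
escaped = re.escape("123#abc456")
assert re.match(escaped, "123#abc456")
```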
On Jun 10, 5:17 am, Paul McGuire wrote:
> On Jun 9, 11:13 pm, "504cr...@gmail.com" <504cr...@gmail.com> wrote:
>
> > By what method would a string be inserted at each instance of a RegEx
> > match?
>
> Some might say that using a parsing library for this problem is
> overkill, but let me just put
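For the plain-regex route, `re.sub()` with a backreference inserts a string at every match; the marker and sample text below are invented:

```python
import re

# Insert a marker after each run of digits: the backreference \1
# keeps the matched text and appends the inserted string to it.
s = "lot 12 square 345"
out = re.sub(r"(\d+)", r"\1<END>", s)  # "lot 12<END> square 345<END>"
```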
In article <[EMAIL PROTECTED]>, Edward
Elliott <[EMAIL PROTECTED]> wrote:
> This is just anecdotal, but I still find it interesting. Take it for what
> it's worth. I'm interested in hearing others' perspectives, just please
> don't turn this into a pissing contest.
>
> I'm in the process of con
33 matches