Hi,

I would like some advice on how to deal with a CPU-consuming script.
The script simply fetches an Atom XML file (using urlfetch) and then
parses each item using both minidom and BeautifulSoup. The Atom file
typically has 50 entries.
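For context, the per-entry parsing looks roughly like this. This is a simplified sketch: on App Engine the fetch would go through google.appengine.api.urlfetch, but here a small sample feed string stands in for the HTTP response so the parsing part is runnable anywhere, and the helper name parse_entries is just for illustration.

```python
# Sketch of the hourly job's parsing step, using minidom only.
# SAMPLE_ATOM stands in for the body returned by urlfetch.fetch(url).content.
from xml.dom import minidom

SAMPLE_ATOM = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><id>urn:1</id><title>First</title></entry>
  <entry><id>urn:2</id><title>Second</title></entry>
</feed>"""

def parse_entries(xml_text):
    """Return (id, title) pairs for each Atom <entry> element."""
    doc = minidom.parseString(xml_text)
    entries = []
    for node in doc.getElementsByTagName('entry'):
        entry_id = node.getElementsByTagName('id')[0].firstChild.data
        title = node.getElementsByTagName('title')[0].firstChild.data
        entries.append((entry_id, title))
    return entries

print(parse_entries(SAMPLE_ATOM))
```

The real script does this for ~50 entries per fetch, and runs BeautifulSoup over each entry's content as well, which is where most of the CPU goes.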

Spawning a process for every N entries to be parsed seems like it would be the best option, but I don't think that is possible on GAE.

The Atom file is retrieved every hour. I could reduce the number of entries parsed per run by increasing the frequency of the urlfetch calls. The trade-off seems to be between more urlfetch calls with fewer items to parse each time, or fewer calls with more items to parse.
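One variant I'm weighing: whatever the fetch interval, only parse entries that haven't been seen before, keyed by their Atom id. A sketch with an in-memory set (on GAE the seen ids would live in the datastore or memcache; new_entries is a hypothetical helper name):

```python
def new_entries(entries, seen_ids):
    """Filter out entries whose id was already processed; record the rest.

    entries:  list of (id, title) pairs from the current fetch
    seen_ids: mutable set of ids processed in earlier runs
    """
    fresh = [e for e in entries if e[0] not in seen_ids]
    seen_ids.update(e[0] for e in fresh)
    return fresh

seen = set()
batch1 = [('urn:1', 'First'), ('urn:2', 'Second')]
print(new_entries(batch1, seen))  # first run: both entries are new
batch2 = [('urn:2', 'Second'), ('urn:3', 'Third')]
print(new_entries(batch2, seen))  # next run: only urn:3 is new
```

With 50 mostly-unchanged entries per hourly fetch, this would cut the expensive minidom/BeautifulSoup work down to just the entries added since the last run.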

Is there any other option I'm missing?
In a nutshell: what is the best (optimized and scalable) way to periodically fetch and parse an Atom feed?

Thanks in advance for any comments,
--
Sérgio Nunes
