This would be a great topic for a meeting, either as a report later on what you found and how you used it, or as a workshop to evaluate your options.
[FYI: the schedule for next Tuesday is Federico updating us on his embedded Perl hardware hacking project.]

bill@$dayjob
#include <not_speaking_for_the_firm>

-----Original Message-----
From: Boston-pm [mailto:boston-pm-bounces+william.ricker=fmr....@mail.pm.org] On Behalf Of David Larochelle
Sent: Wednesday, April 03, 2013 10:34 AM
To: Boston Perl Mongers
Subject: [Boston.pm] Passing large complex data structures between process

I'm trying to optimize a database-driven web crawler, and I was wondering if anyone could offer recommendations for interprocess communication.

Currently, the driver process periodically queries a database to get a list of URLs to crawl. It stores these URLs in a complex in-memory data structure and pipes them to separate processes that do the actual downloading. The problem is that the database queries are slow and block the driver process. I'd like to rearchitect the system so that the database queries occur in the background. However, I'm unsure what mechanism to use. In theory, threads would be ideally suited to this case, but I'm not sure how stable they are (I'm running Perl 5.14.1 with 200+ CPAN modules).

Does anyone have any recommendations?

Thanks,

David

_______________________________________________
Boston-pm mailing list
Boston-pm@mail.pm.org
http://mail.pm.org/mailman/listinfo/boston-pm
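Since the thread is about options for backgrounding the slow query, here is one minimal sketch of the fork-plus-pipe approach (no threads needed): a child process runs the blocking database query and streams each URL record back to the driver over a pipe as one JSON line per record. Everything here is illustrative — `fetch_urls_from_db()` is a hypothetical stand-in for the real SQL, and the record shape is invented; only `pipe`, `fork`, and core JSON::PP (core since before 5.14) are assumed.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::PP qw(encode_json decode_json);   # in core as of Perl 5.14

sub fetch_urls_from_db {
    # Hypothetical placeholder for the slow database query; returns
    # hashrefs standing in for the crawler's real URL records.
    return (
        { url => 'http://example.com/a', priority => 1 },
        { url => 'http://example.com/b', priority => 2 },
    );
}

pipe(my $reader, my $writer) or die "pipe: $!";

my $pid = fork();
die "fork: $!" unless defined $pid;

if ($pid == 0) {
    # Child: do the blocking query, serialize one record per line.
    close $reader;
    for my $rec (fetch_urls_from_db()) {
        print {$writer} encode_json($rec), "\n";
    }
    close $writer;
    exit 0;
}

# Parent (driver): consume records as they arrive instead of blocking
# on the whole query; in the real crawler this loop would feed the
# downloader processes.
close $writer;
my @urls;
while (my $line = <$reader>) {
    chomp $line;
    push @urls, decode_json($line);
}
close $reader;
waitpid($pid, 0);

printf "got %d urls, first is %s\n", scalar @urls, $urls[0]{url};
```

For deeply nested structures where JSON round-tripping is awkward, Storable's `nfreeze`/`thaw` over the same pipe is the usual binary alternative; either way, line- or length-delimited framing keeps the driver's read loop simple and non-blocking per record.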