be something like unwittingly pickling
the engine and sending it across processes.
Thanks again
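To illustrate the pickling pitfall mentioned above: database handles generally refuse to pickle, so they have to be created inside each worker process rather than shipped to it. Here is a minimal stdlib-only sketch of that pattern (using `sqlite3` as a stand-in for a SQLAlchemy Engine; `worker` and the lazy `_conn` global are made up for illustration):

```python
import pickle
import sqlite3
from multiprocessing import Pool

def cannot_pickle(conn):
    """A DB-API connection refuses to pickle; trying to send one to
    another process fails loudly instead of silently misbehaving."""
    try:
        pickle.dumps(conn)
        return False
    except TypeError:
        return True

# Per-process handle: each worker process builds its own connection
# lazily on first use, instead of inheriting a pickled one.
_conn = None

def worker(n):
    global _conn
    if _conn is None:                        # first call in this process
        _conn = sqlite3.connect(":memory:")  # hypothetical worker-local DB
    return _conn.execute("SELECT ?", (n,)).fetchone()[0]

if __name__ == "__main__":
    assert cannot_pickle(sqlite3.connect(":memory:"))
    with Pool(2) as pool:
        # Each pool process creates and reuses its own connection.
        print(pool.map(worker, [1, 2, 3]))
```

The same shape applies to celery: create the engine in a per-process initialization hook rather than at module import time in the parent.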
On Friday, January 22, 2016 at 10:03:38 AM UTC-5, Michael Bayer wrote:
>
>
>
> On 01/22/2016 01:23 AM, Maximilian Roos wrote:
> > Great, thanks for the reply Mike.
> >
>
>
>
>
> On 01/21/2016 08:43 PM, Maximilian Roos wrote:
> > We're using celery, a job distribution package. On a single machine,
> > there are 20+ celery workers running, each with their own Python
> > process. We had some issues with t
on the same
machine include those that the docs detail between threads - and so require a
scoped_session (or something that achieves the same goal)?
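For context on what `scoped_session` actually buys you: it is a thread-local registry that hands each thread its own session. A minimal sketch of that registry idea, using a hypothetical stand-in `Session` class rather than SQLAlchemy's (so this says nothing about the separate cross-process question):

```python
import threading

class Session:
    """Hypothetical stand-in for a real database session."""
    pass

class ScopedRegistry:
    """One session per thread, created lazily on first use."""
    def __init__(self, factory):
        self._factory = factory
        self._local = threading.local()  # one slot per thread

    def __call__(self):
        # Reuse this thread's session, or create one on first access.
        if not hasattr(self._local, "session"):
            self._local.session = self._factory()
        return self._local.session

SessionRegistry = ScopedRegistry(Session)

# Repeated calls in the same thread return the same object...
assert SessionRegistry() is SessionRegistry()

# ...while another thread gets its own.
other = []
t = threading.Thread(target=lambda: other.append(SessionRegistry()))
t.start(); t.join()
assert other[0] is not SessionRegistry()
```

Note this only isolates *threads*; separate processes never share the registry at all, which is why per-process engine/session creation is still needed.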
On Thursday, January 21, 2016 at 11:01:41 PM UTC-5, Michael Bayer wrote:
>
>
>
> On 01/21/2016 08:43 PM, Maximilian Roos wrote:
> >
This is mainly a pandas question, but we wanted to ensure we didn't build
something in pandas that uses SQLAlchemy inefficiently.
There are two main approaches:
- Build a DataFrame from the results of a SQLAlchemy query; i.e. pandas
has no knowledge that it's a SQL query
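A sketch of this first approach: run the query yourself and hand pandas only plain Python rows, so pandas never sees any SQL. Here `sqlite3` stands in for the SQLAlchemy query, and the `trades` table is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (sym TEXT, qty INTEGER)")
conn.executemany("INSERT INTO trades VALUES (?, ?)",
                 [("AAPL", 10), ("MSFT", 5)])

cursor = conn.execute("SELECT sym, qty FROM trades")
columns = [d[0] for d in cursor.description]       # column names from the cursor
records = [dict(zip(columns, row)) for row in cursor]

# pandas would then consume this as plain data, e.g. pd.DataFrame(records)
# or pd.DataFrame(rows, columns=columns), with no SQL awareness at all.
print(records)
```

The trade-off is that result rows are fully materialized in Python before pandas sees them, versus letting `pandas.read_sql` drive the query through a SQLAlchemy connectable directly.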