What's the complete error message + stack trace? Are there any
@event listeners going on that might be modifying the query? Try
upgrading to 1.0.17? There is no reason the parameters would be
lost like that, but that's only a log message, so I need to see the
complete error to see if that
Hi Mike,
The job is running at an interval of 5 min and the job in itself is very
small. It finishes its work in less than a minute. It is a single-thread
job forked from the apscheduler process. In terms of memory -- since the job
is very short-running, I don't think there is the scenario of memory
On Tue, Jan 2, 2018 at 7:53 PM, Aubrey wrote:
Hello,
I've just been thrown by what seems like an inconsistency in how parameters can
be passed to an insert statement that adds multiple rows.
To demonstrate:
from sqlalchemy import MetaData, String, Integer, Table, Column
from sqlalchemy.dialects.postgresql.base import PGDialect
m =
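The snippet cuts off before the demonstration; as a general illustration of the two ways a multi-row INSERT can take its parameters, here is a minimal runnable sketch (in-memory SQLite standing in for the poster's PG dialect, table and column names hypothetical, assuming SQLAlchemy 1.4+ Core API):

```python
from sqlalchemy import (MetaData, Table, Column, Integer, String,
                        create_engine, select, func)

m = MetaData()
t = Table(
    "demo", m,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)
engine = create_engine("sqlite://")  # in-memory stand-in for the real backend
m.create_all(engine)

with engine.begin() as conn:
    # "executemany" form: a list of parameter dicts passed alongside the statement
    conn.execute(t.insert(), [{"name": "a"}, {"name": "b"}])
    # "multi-VALUES" form: the rows embedded in the statement itself
    conn.execute(t.insert().values([{"name": "c"}, {"name": "d"}]))
    count = conn.execute(select(func.count()).select_from(t)).scalar()
```

The two forms render different SQL (repeated execution of one parameterized statement vs. a single statement with a multi-row VALUES clause), which is often the source of the perceived inconsistency.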
I have an API endpoint that handles searches from the frontend. A search
can have a dynamic amount of filters applied to it, including (1) sizes,
(2) colors, (3) price, and (4) category that are passed through query
parameters. Sizes and colors are passed as comma separated values (i.e.
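A common shape for that kind of endpoint is to apply each filter only when its query parameter is present. A minimal sketch (the Product model, parameter names, and data are hypothetical; SQLite in-memory, SQLAlchemy 1.4+):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Product(Base):  # hypothetical model standing in for the real table
    __tablename__ = "products"
    id = Column(Integer, primary_key=True)
    size = Column(String)
    color = Column(String)
    price = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    Product(size="M", color="red", price=10),
    Product(size="L", color="blue", price=25),
])
session.commit()

def search(params):
    """Apply only the filters whose query parameters are present."""
    q = session.query(Product)
    if "sizes" in params:   # comma-separated values, e.g. "M,L"
        q = q.filter(Product.size.in_(params["sizes"].split(",")))
    if "colors" in params:
        q = q.filter(Product.color.in_(params["colors"].split(",")))
    if "max_price" in params:
        q = q.filter(Product.price <= int(params["max_price"]))
    return q.all()
```

Because Query objects are chainable, each conditional `filter()` simply narrows the same query, so any combination of parameters composes correctly.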
On Tue, Jan 2, 2018 at 2:18 PM, Seaders Oloinsigh wrote:
This change is the culprit,
https://github.com/zzzeek/alembic/commit/4cdb25bf5d70e6e8a789c75c59a2a908433674ce#diff-e9d21bbd0f4be415139b75917420ad1c
I've a few auto-generated tables, that are one-to-one mappings with an
external data feed. For any fields that have indexes set on them, the
On Tue, Jan 2, 2018 at 12:06 PM, Tim Chen wrote:
Ah yes, makes sense. So there is a slight difference in that add() will
modify the original object, while merge() will not. Thanks for the
explanation!
It would be great if this difference could be added to the docs here
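The difference being discussed can be shown in a few lines: `add()` makes the object you pass persistent, while `merge()` copies its state onto a separate persistent instance and leaves your object untouched. A minimal sketch (hypothetical User model, SQLite in-memory, SQLAlchemy 1.4+):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class User(Base):  # hypothetical model
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(User(id=1, name="original"))
session.commit()
session.close()

session = Session()
added = User(id=2, name="two")
session.add(added)            # add(): this very object becomes persistent
session.commit()

detached = User(id=1, name="updated")
merged = session.merge(detached)  # merge(): state is copied onto a NEW instance
session.commit()
```

After this runs, `added` is in the session, while `merged` is a different object than `detached`, whose row now carries the updated name.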
Great! Thanks a lot for all your help!
On Tuesday, January 2, 2018 at 6:48:35 PM UTC+2, Mike Bayer wrote:
On Tue, Jan 2, 2018 at 11:44 AM, Tim Chen wrote:
Hrmm, that's not what I'm getting. Maybe I'm misunderstanding something -
here's a simple test to illustrate my example. test_add() works as
expected, test_merge() fails.
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker
from sqlalchemy.ext.declarative import declarative_base
I see... I should have warned that I am new to Python and that questions of
this caliber could be expected.
If I may ask one more thing, I would like to check with you if it is
possible to achieve the same effect
without any custom options by simply using the executemany flag in the if clause.
It
On Tue, Jan 2, 2018 at 9:54 AM, Jevgenij Kusakovskij wrote:
> @event.listens_for(SomeEngine, 'before_cursor_execute')
> def receive_before_cursor_execute(conn, cursor, statement, parameters,
> context, executemany):
> if context.execution_options.get('pyodbc_fast_execute', False):
> cursor.fast_executemany = True
Maybe I am missing
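The quoted listener pattern can be exercised end-to-end. In this sketch, SQLite stands in for the real mssql+pyodbc setup, so instead of setting pyodbc's `fast_executemany` attribute (which only exists on pyodbc cursors), the hook just records what it observed; the extra `executemany` guard is an assumption about the intended behavior:

```python
from sqlalchemy import create_engine, event, text

engine = create_engine("sqlite://")  # stand-in; the real case uses mssql+pyodbc
seen = []

@event.listens_for(engine, "before_cursor_execute")
def receive_before_cursor_execute(conn, cursor, statement, parameters,
                                  context, executemany):
    # with pyodbc, this branch would do: cursor.fast_executemany = True
    if executemany and context.execution_options.get("pyodbc_fast_execute", False):
        seen.append(statement)

with engine.connect() as conn:
    conn = conn.execution_options(pyodbc_fast_execute=True)
    conn.execute(text("create table t (x integer)"))
    # a list of parameter dicts makes the hook fire with executemany=True
    conn.execute(text("insert into t (x) values (:x)"), [{"x": 1}, {"x": 2}])
```

Only the insert trips the condition: the DDL statement runs with `executemany=False`, so the flag alone is not enough without the opted-in execution option.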
On Tue, Jan 2, 2018 at 2:21 AM, srishty patel wrote:
> Code-snippet
>
> subq = db.session.query(Subscription.user_id) \
> .filter(((Subscription.subscription_status == "EXPIRED")
> & (Subscription.end_date < expiration_limit)) |
>
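The `&`/`|` composition in the snippet can also be written with `or_()`/`and_()`, which makes the grouping explicit. A runnable sketch (the model columns are taken from the snippet; the second `|` branch, the data, and the cutoff date are hypothetical since the original is truncated):

```python
import datetime
from sqlalchemy import create_engine, Column, Integer, String, Date, or_, and_
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Subscription(Base):  # columns from the snippet; the rest is assumed
    __tablename__ = "subscriptions"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer)
    subscription_status = Column(String)
    end_date = Column(Date)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

expiration_limit = datetime.date(2018, 1, 1)
session.add_all([
    Subscription(user_id=1, subscription_status="EXPIRED",
                 end_date=datetime.date(2017, 6, 1)),
    Subscription(user_id=2, subscription_status="ACTIVE",
                 end_date=datetime.date(2018, 6, 1)),
])
session.commit()

# or_()/and_() spell out the same grouping the &/| operators produce;
# note that & and | need parentheses because of Python operator precedence
subq = session.query(Subscription.user_id).filter(
    or_(
        and_(Subscription.subscription_status == "EXPIRED",
             Subscription.end_date < expiration_limit),
        Subscription.subscription_status == "CANCELLED",  # hypothetical branch
    )
)
matching_ids = [row.user_id for row in subq]
```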
I would like to send a large pandas.DataFrame to a remote server running MS
SQL. I am using pandas-0.20.3, pyODBC-4.0.21 and sqlalchemy-1.1.13.
My first attempt of tackling this problem can be reduced to following code:
import sqlalchemy as sa
engine =
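The message cuts off at the engine setup; a minimal working shape of such a transfer looks like the following (SQLite standing in for the remote MS SQL server, table name and frame contents hypothetical, modern pandas/SQLAlchemy assumed):

```python
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")  # the real target would be mssql+pyodbc://...
df = pd.DataFrame({"a": range(5), "b": list("abcde")})

# chunksize bounds how many rows go into each executemany batch --
# the main lever when pushing a large frame through pyodbc
df.to_sql("demo", engine, index=False, if_exists="replace", chunksize=1000)

with engine.connect() as conn:
    n = conn.execute(sa.text("select count(*) from demo")).scalar()
```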