[sqlalchemy] Session / declarative_base scope

2016-09-28 Thread Warwick Prince
Hi Mike

I would like a little insight into the session object, and the declarative_base 
class.

I have a process running many threads, where each thread may be connected to 
potentially a different engine/database.  If the database connection between 2 
or more threads is the same, then they will share the same engine.  However, 
they each have their own MetaData objects.   

There is a global sessionmaker() that has no binding at that time.  When each 
thread creates its OWN session, it does mySession = 
Session(bind=myThreadsEngine).

The Engines and MetaData part has worked perfectly for years, using basic 
queries like Table('some_table', threadMetaData, 
autoload=True).select().execute().fetchall(), etc.

I’ve started to use the ORM more now, and am using the relationships between 
the objects.  However, I’m hitting an issue that appears to centre around some 
shared registry or class variables or something that is causing a conflict.

I’ve made it so each THREAD has its own Base = 
declarative_base(metadata=theSessionsMetaData)

Then, classes are mapped dynamically based on this new Base, and the columns 
are autoload’ed.  Again, this is working - sometimes.  There’s some 
seemingly random problem that mostly means it does not work when I do a 
mySession.query(myMappedClassWithRelationships), and I get the following 
exception raised:

  File "C:\Python27\lib\site-packages\dap-2.1.2-py2.7.egg\dap\db\dbutils.py", 
line 323, in DAPDB_SetColumns
query = session.query(mappedClass).filter_by(**whereCriteria)
  File "build\bdist.win32\egg\sqlalchemy\orm\session.py", line 1260, in query
return self._query_cls(entities, self, **kwargs)
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 110, in __init__
self._set_entities(entities)
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 120, in 
_set_entities
self._set_entity_selectables(self._entities)
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 150, in 
_set_entity_selectables
ent.setup_entity(*d[entity])
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 3446, in 
setup_entity
self._with_polymorphic = ext_info.with_polymorphic_mappers
  File "build\bdist.win32\egg\sqlalchemy\util\langhelpers.py", line 754, in 
__get__
obj.__dict__[self.__name__] = result = self.fget(obj)
  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 1891, in 
_with_polymorphic_mappers
configure_mappers()
  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 2768, in 
configure_mappers
mapper._post_configure_properties()
  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 1708, in 
_post_configure_properties
prop.init()
  File "build\bdist.win32\egg\sqlalchemy\orm\interfaces.py", line 183, in init
self.do_init()
  File "build\bdist.win32\egg\sqlalchemy\orm\relationships.py", line 1628, in 
do_init
self._process_dependent_arguments()
  File "build\bdist.win32\egg\sqlalchemy\orm\relationships.py", line 1653, in 
_process_dependent_arguments
setattr(self, attr, attr_value())
  File "build\bdist.win32\egg\sqlalchemy\ext\declarative\clsregistry.py", line 
293, in __call__
(self.prop.parent, self.arg, n.args[0], self.cls)
InvalidRequestError: When initializing mapper 
Mapper|SalesDocument|rm_dt_documents, expression 
'SalesDocumentLine.parentID==SalesDocument.id' failed to locate a name ("name 
'SalesDocumentLine' is not defined"). If this is a class name, consider adding 
this relationship() to the  class after 
both dependent classes have been defined.

I understand what this is trying to tell me, however, the classes ARE defined.  
Sometimes the code works perfectly, but mostly not.  If I have ONE Thread 
working and then start up another using exactly the same code, then it will 
probably NOT work but more importantly, the one that WAS working then dies with 
the same error.  Clearly something somewhere is shared - I just can’t find out 
what it is, or how I can separate the code further.

In summary:

- one global sessionmaker(): global Session = sessionmaker()
- each thread (for the example here) shares an Engine
- each thread has its OWN session from mySession = Session(bind=e)
- each thread has its own Base created from 
  declarative_base(metadata=threadsMetaData)
- I'm declaring two classes in this example, SalesDocument and 
  SalesDocumentLine.  The relationships are set up and 'can' work on occasion.

In that error, where exactly are they not 'defined'?  I’ve looked in 
Base.decl_class_registry and both those names are there!  Where else do they 
need to be to be considered 'declared'?
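
For reference, a minimal sketch of the per-thread pattern with the relationship 
bound to the class objects instead of the string 'SalesDocumentLine' (the table 
and column names are guesses from the traceback, and the columns are written 
out rather than autoloaded to keep the sketch self-contained):

from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

def build_models(thread_metadata):
    # one Base per thread, as described above
    Base = declarative_base(metadata=thread_metadata)

    class SalesDocument(Base):
        __tablename__ = 'rm_dt_documents'
        id = Column(Integer, primary_key=True)

    class SalesDocumentLine(Base):
        __tablename__ = 'rm_dt_document_lines'   # hypothetical name
        id = Column(Integer, primary_key=True)
        parentID = Column(Integer, ForeignKey('rm_dt_documents.id'))

    # Passing the class objects means configure_mappers() never has to
    # resolve a name in a registry that another thread might be mutating.
    SalesDocument.lines = relationship(
        SalesDocumentLine,
        primaryjoin=SalesDocumentLine.parentID == SalesDocument.id)

    return SalesDocument, SalesDocumentLine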

Any pointers as to the error of my ways would be most appreciated.  :-)

Cheers
Warwick





[sqlalchemy] How can I programmatically give a hybrid_property its name?

2016-09-28 Thread Jinghui Niu
The documentation shows that hybrid_property should be used as a decorator, 
like:

@hybrid_property
def my_property(self):
    pass


What if I wanted to give this hybrid property a name by referring to a 
variable at runtime? Is this allowed? Thanks.
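
For what it's worth, a minimal sketch: since @hybrid_property is ordinary 
decorator syntax, you can build the descriptor yourself and attach it under a 
runtime-chosen name with setattr(). The add_hybrid() helper and the 
underscore-prefixed backing attribute below are illustrative assumptions, not 
an established recipe.

from sqlalchemy.ext.hybrid import hybrid_property

def add_hybrid(cls, name):
    # hybrid_property() is just a callable taking the getter
    def fget(self):
        return getattr(self, '_' + name)   # hypothetical backing attribute
    setattr(cls, name, hybrid_property(fget))

# usage: add_hybrid(MyModel, runtime_name)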



Re: [sqlalchemy] SQLAlchemy asyncio ThreadPoolExecutor “Too many clients”

2016-09-28 Thread Viktor Roytman
Yes, the problem was the call to engine.dispose()

Thanks!

On Wednesday, September 28, 2016 at 5:11:59 PM UTC-4, Mike Bayer wrote:
>
>
>
> it would appear that either the ThreadPoolExecutor is starting more 
> threads than your Postgresql database has available connections, or your 
> engine.dispose() is leaving PG connections lying open to be garbage 
> collected.   A single Engine should be all that's necessary in a single 
> process. 
>
>



Re: [sqlalchemy] Feedback appreciated

2016-09-28 Thread Seth P
On Wednesday, September 28, 2016 at 5:43:04 PM UTC-4, Mike Bayer wrote:
>
> looks incredibly difficult.   I'm not really going to have the resources 
> to work with a type that awkward anytime soon, unfortunately.   If it 
> could be made to be a drop-in for 1.1's ARRAY feature, that would be 
> helpful but it at least needs bound parameter support to be solid. 
>

Would it be possible to add read-only support? It looks like cx_Oracle 
returns selected varray values in a pretty straightforward form.
That would still be very useful (at least in my case, where I would be 
populating the database using SQL*Loader anyway).
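
If reads really do come back in a plain form, a read-only mapping might be 
sketched as a UserDefinedType that handles results but refuses binds — this is 
entirely an assumption on my part, not an existing SQLAlchemy feature:

from sqlalchemy.types import UserDefinedType

class ReadOnlyVarray(UserDefinedType):
    """Hypothetical SELECT-only mapping for an Oracle VARRAY column."""

    def __init__(self, type_name):
        self.type_name = type_name          # e.g. 'tp_str_vec'

    def get_col_spec(self):
        return self.type_name

    def result_processor(self, dialect, coltype):
        # assumes cx_Oracle hands back something list() can consume
        def process(value):
            return list(value) if value is not None else None
        return process

    def bind_processor(self, dialect):
        def process(value):
            # writes would go through SQL*Loader instead
            raise NotImplementedError("ReadOnlyVarray is read-only")
        return process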



Re: [sqlalchemy] Feedback appreciated

2016-09-28 Thread Mike Bayer



On 09/28/2016 12:45 PM, Seth P wrote:


On Wednesday, September 28, 2016 at 10:16:20 AM UTC-4, Mike Bayer wrote:


So illustrating VARRAY round trip on cx_oracle is the first step.


It looks like cx_Oracle supports reading varrays, but supports writing
them only as column literals, not as bound parameters. The following
code tests a bunch of plausible formats for bound parameters, none of
which work. This is consistent with
https://sourceforge.net/p/cx-oracle/mailman/message/26769899/ .


looks incredibly difficult.   I'm not really going to have the resources 
to work with a type that awkward anytime soon, unfortunately.   If it 
could be made to be a drop-in for 1.1's ARRAY feature, that would be 
helpful but it at least needs bound parameter support to be solid. 
cx_Oracle should have some hooks to help with this.









Re: [sqlalchemy] SQLAlchemy asyncio ThreadPoolExecutor “Too many clients”

2016-09-28 Thread Mike Bayer



On 09/28/2016 04:51 PM, Viktor Roytman wrote:

I wrote a script with this sort of logic in order to insert many records
into a PostgreSQL table as they are generated.

#!/usr/bin/env python3
import asyncio
from concurrent.futures import ProcessPoolExecutor as pool
from functools import partial

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base


metadata = sa.MetaData(schema='stackoverflow')
Base = declarative_base(metadata=metadata)


class Example(Base):
    __tablename__ = 'example'
    pk = sa.Column(sa.Integer, primary_key=True)
    text = sa.Column(sa.Text)


sa.event.listen(Base.metadata, 'before_create',
                sa.DDL('CREATE SCHEMA IF NOT EXISTS stackoverflow'))

engine = sa.create_engine(
    'postgresql+psycopg2://postgres:password@localhost:5432/stackoverflow'
)
Base.metadata.create_all(engine)
session = sa.orm.sessionmaker(bind=engine, autocommit=True)()


def task(value):
    engine.dispose()
    with session.begin():
        session.add(Example(text=value))


async def infinite_task(loop):
    spawn_task = partial(loop.run_in_executor, None, task)
    while True:
        await asyncio.wait([spawn_task(value) for value in range(1)])


def main():
    loop = asyncio.get_event_loop()
    with pool() as executor:
        loop.set_default_executor(executor)
        asyncio.ensure_future(infinite_task(loop))
        loop.run_forever()
    loop.close()


if __name__ == '__main__':
    main()

This code works just fine, creating a pool of as many processes as I
have CPU cores, and happily chugging along forever. I wanted to see how
threads would compare to processes, but I could not get a working
example. Here are the changes I made:

from concurrent.futures import ThreadPoolExecutor as pool

session_maker = sa.orm.sessionmaker(bind=engine, autocommit=True)
Session = sa.orm.scoped_session(session_maker)


def task(value):
    engine.dispose()
    # create new session per thread
    session = Session()
    with session.begin():
        session.add(Example(text=value))
    # remove session once the work is done
    Session.remove()

This version runs for a while before a flood of "too many clients"
exceptions:

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL: sorry, too
many clients already

What's causing the problem?


it would appear that either the ThreadPoolExecutor is starting more 
threads than your Postgresql database has available connections, or your 
engine.dispose() is leaving PG connections lying open to be garbage 
collected.   A single Engine should be all that's necessary in a single 
process.
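
A minimal sketch of that suggestion, reusing the Session and Example names 
from the quoted post: drop the per-task engine.dispose() and let the single 
Engine keep pooling connections.

def task(value):
    # no engine.dispose() here: the one module-level Engine keeps its
    # connection pool alive and shared across all worker threads
    session = Session()          # scoped_session: one Session per thread
    try:
        with session.begin():
            session.add(Example(text=value))
    finally:
        Session.remove()         # close the Session, return its connection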









[sqlalchemy] SQLAlchemy asyncio ThreadPoolExecutor “Too many clients”

2016-09-28 Thread Viktor Roytman
I wrote a script with this sort of logic in order to insert many records 
into a PostgreSQL table as they are generated.

#!/usr/bin/env python3
import asyncio
from concurrent.futures import ProcessPoolExecutor as pool
from functools import partial

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base


metadata = sa.MetaData(schema='stackoverflow')
Base = declarative_base(metadata=metadata)


class Example(Base):
    __tablename__ = 'example'
    pk = sa.Column(sa.Integer, primary_key=True)
    text = sa.Column(sa.Text)


sa.event.listen(Base.metadata, 'before_create',
                sa.DDL('CREATE SCHEMA IF NOT EXISTS stackoverflow'))

engine = sa.create_engine(
    'postgresql+psycopg2://postgres:password@localhost:5432/stackoverflow'
)
Base.metadata.create_all(engine)
session = sa.orm.sessionmaker(bind=engine, autocommit=True)()


def task(value):
    engine.dispose()
    with session.begin():
        session.add(Example(text=value))


async def infinite_task(loop):
    spawn_task = partial(loop.run_in_executor, None, task)
    while True:
        await asyncio.wait([spawn_task(value) for value in range(1)])


def main():
    loop = asyncio.get_event_loop()
    with pool() as executor:
        loop.set_default_executor(executor)
        asyncio.ensure_future(infinite_task(loop))
        loop.run_forever()
    loop.close()


if __name__ == '__main__':
    main()


This code works just fine, creating a pool of as many processes as I have 
CPU cores, and happily chugging along forever. I wanted to see how threads 
would compare to processes, but I could not get a working example. Here are 
the changes I made:

from concurrent.futures import ThreadPoolExecutor as pool

session_maker = sa.orm.sessionmaker(bind=engine, autocommit=True)
Session = sa.orm.scoped_session(session_maker)


def task(value):
    engine.dispose()
    # create new session per thread
    session = Session()
    with session.begin():
        session.add(Example(text=value))
    # remove session once the work is done
    Session.remove()


This version runs for a while before a flood of "too many clients" 
exceptions:

sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) FATAL:  sorry, too 
many clients already

What's causing the problem?



Re: [sqlalchemy] on_conflict_do_update (ON CONFLICT ... Postgresql 9.5) WHERE clause problem

2016-09-28 Thread Mike Bayer
this is likely use cases that have been untested, if you can file this 
w/ a complete test case as a bug report on bitbucket we can start 
looking into it.



On 09/28/2016 12:05 PM, Paweł Szynkiewicz wrote:

Hello all

SA: 1.1.0b3
db: postgresql 9.5

I have a problem with the method on_conflict_do_update for the pg-specific
insert, precisely with the where argument. It looks like the wrong SQL
statement is generated.

example:

class Foo(Base):
  ...
  bar = Column(Integer)

insert_stmt = insert(Foo)

on_update_stmt = insert_stmt.on_conflict_do_update(
    set_=dict(
        bar=insert_stmt.excluded.bar,
    ),
    where=(Foo.bar < insert_stmt.excluded.bar)
)

session.execute(on_update_stmt, data)

it gives an error, and rightly so:

column reference "bar" is ambiguous


The SQL looks like this:

SQL: 'INSERT INTO foo (...) VALUES (...) ON CONFLICT (...) DO UPDATE SET
bar = excluded.bar WHERE bar < bar'

WHERE clause is not expanded properly, the alias EXCLUDED is omitted. Is
this a bug or I'm doing sht wrong?

the workaround I use is:

...
where=(text('foo.bar < EXCLUDED.bar'))
...

Cheers
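
A minimal self-contained reproduction along the lines requested might look 
like the following compile-only sketch (index_elements is assumed to be the 
primary key; no live database is needed to see the rendered WHERE clause):

import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import insert

metadata = sa.MetaData()
foo = sa.Table('foo', metadata,
               sa.Column('id', sa.Integer, primary_key=True),
               sa.Column('bar', sa.Integer))

stmt = insert(foo)
stmt = stmt.on_conflict_do_update(
    index_elements=[foo.c.id],
    set_=dict(bar=stmt.excluded.bar),
    where=(foo.c.bar < stmt.excluded.bar))

# the DO UPDATE ... WHERE clause shows whether EXCLUDED is qualified
print(stmt.compile(dialect=postgresql.dialect()))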





Re: [sqlalchemy] Feedback appreciated

2016-09-28 Thread Seth P

On Wednesday, September 28, 2016 at 10:16:20 AM UTC-4, Mike Bayer wrote:
>
>
> So illustrating VARRAY round trip on cx_oracle is the first step. 
>

It looks like cx_Oracle supports reading varrays, but supports writing them 
only as column literals, not as bound parameters. The following code tests 
a bunch of plausible formats for bound parameters, none of which work. This 
is consistent with 
https://sourceforge.net/p/cx-oracle/mailman/message/26769899/ .

 

if __name__ == '__main__':

    import logging
    logging.basicConfig(level='DEBUG')

    def format_args(args):
        # repr each argument, quoting strings
        return ', '.join(['"%s"' % arg if isinstance(arg, str) else str(arg)
                          for arg in args])

    import cx_Oracle
    with cx_Oracle.Connection(user, password, host) as connection:
        cursor = connection.cursor()
        for drop_object in ("TABLE tb_aaa", "TYPE tp_str_vec",
                            "TYPE tp_num_vec"):
            try:
                cursor.execute("DROP " + drop_object)
            except cx_Oracle.DatabaseError:
                pass
        for execute_args in [
            ("CREATE OR REPLACE TYPE tp_str_vec as VARRAY(3) of VARCHAR2(8)",),
            ("CREATE OR REPLACE TYPE tp_num_vec as VARRAY(3) of NUMBER",),
            ("CREATE TABLE tb_aaa ( strvec tp_str_vec, numvec tp_num_vec )",),
        ]:
            logging.info(format_args(execute_args) + "\n")
            cursor.execute(*execute_args)

        for i, execute_args in enumerate([
            (("INSERT INTO tb_aaa VALUES ( tp_str_vec(%(strvec1)s), tp_num_vec(%(numvec1)s) )" %
              {"strvec1": str(['A1', 'A2', 'A3'])[1:-1],
               "numvec1": str([1.1, 1.2, 1.3])[1:-1]}),),
            ("INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), tp_num_vec(:numvec1) )",
             {"strvec1": ['B1', 'B2', 'B3'], "numvec1": [2.1, 2.2, 2.3]}),
            ("INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), tp_num_vec(:numvec1) )",
             {"strvec1": str(['C1', 'C2', 'C3'])[1:-1],
              "numvec1": str([3.1, 3.2, 3.3])[1:-1]}),
            ("INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), tp_num_vec(:numvec1) )",
             {"strvec1": str(['D1', 'D2', 'D3']),
              "numvec1": str([4.1, 4.2, 4.3])}),
            ("INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), tp_num_vec(:numvec1) )",
             {"strvec1": '(%s)' % str(['E1', 'E2', 'E3'])[1:-1],
              "numvec1": '(%s)' % str([5.1, 5.2, 5.3])[1:-1]}),
            ("INSERT INTO tb_aaa VALUES ( :strvec1, :numvec1 )",
             {"strvec1": ['F1', 'F2', 'F3'], "numvec1": [6.1, 6.2, 6.3]}),
            (("INSERT INTO tb_aaa VALUES ( (%(strvec1)s), (%(numvec1)s) )" %
              {"strvec1": str(['G1', 'G2', 'G3'])[1:-1],
               "numvec1": str([7.1, 7.2, 7.3])[1:-1]}),),
            (("INSERT INTO tb_aaa VALUES ( %(strvec1)s, %(numvec1)s )" %
              {"strvec1": str(['H1', 'H2', 'H3']),
               "numvec1": str([8.1, 8.2, 8.3])}),),
        ]):
            try:
                logging.info("INSERT #%d: %s" % (i, format_args(execute_args)))
                cursor.execute(*execute_args)
                logging.info("INSERT #%d succeeded.\n" % i)
            except cx_Oracle.DatabaseError as e:
                logging.info("INSERT #%d failed: " % i + str(e))

        cursor.execute("SELECT * FROM tb_aaa")
        result = cursor.fetchall()
        logging.info("SELECT returns:\n" + str(result))


INFO:root:"CREATE OR REPLACE TYPE tp_str_vec as VARRAY(3) of VARCHAR2(8)"

INFO:root:"CREATE OR REPLACE TYPE tp_num_vec as VARRAY(3) of NUMBER"

INFO:root:"CREATE TABLE tb_aaa ( strvec tp_str_vec, numvec tp_num_vec )"

INFO:root:INSERT #0: "INSERT INTO tb_aaa VALUES ( tp_str_vec('A1', 'A2', 'A3'), 
tp_num_vec(1.1, 1.2, 1.3) )"
INFO:root:INSERT #0 succeeded.

INFO:root:INSERT #1: "INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), 
tp_num_vec(:numvec1) )", {'numvec1': [2.1, 2.2, 2.3], 'strvec1': ['B1', 'B2', 
'B3']}
INFO:root:INSERT #1 failed: ORA-01484: arrays can only be bound to PL/SQL 
statements

INFO:root:INSERT #2: "INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), 
tp_num_vec(:numvec1) )", {'numvec1': '3.1, 3.2, 3.3', 'strvec1': "'C1', 'C2', 
'C3'"}
INFO:root:INSERT #2 failed: ORA-22814: attribute or element value is larger 
than specified in type

INFO:root:INSERT #3: "INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), 
tp_num_vec(:numvec1) )", {'numvec1': '[4.1, 4.2, 4.3]', 'strvec1': "['D1', 
'D2', 'D3']"}
INFO:root:INSERT #3 failed: ORA-22814: attribute or element value is larger 
than specified in type

INFO:root:INSERT #4: "INSERT INTO tb_aaa VALUES ( tp_str_vec(:strvec1), 
tp_num_vec(:numvec1) )", {'numvec1': '(5.1, 5.2, 5.3)', 'strvec1': "('E1', 
'E2', 'E3')"}
INFO:root:INSERT #4 failed: ORA-22814: attribute or element value is larger 
than specified in type

INFO:root:INSERT #5: "INSERT INTO tb_aaa VALUES ( :strvec1, :numvec1 )", 
{'numvec1': [6.1, 6.2, 6.3], 'strvec1': ['F1', 'F2', 'F3']}
INFO:root:INSERT #5 failed: ORA-01484: arrays can only be bound to PL/SQL 
statements

INFO:root:INSERT #6: "INSERT INTO tb_aaa VALUES ( 

[sqlalchemy] on_conflict_do_update (ON CONFLICT ... Postgresql 9.5) WHERE clause problem

2016-09-28 Thread Paweł Szynkiewicz
Hello all

SA: 1.1.0b3
db: postgresql 9.5

I have a problem with the method on_conflict_do_update for the pg-specific 
insert, precisely with the where argument. It looks like the wrong SQL 
statement is generated.

example:

class Foo(Base):
  ...
  bar = Column(Integer)

insert_stmt = insert(Foo)

on_update_stmt = insert_stmt.on_conflict_do_update(
    set_=dict(
        bar=insert_stmt.excluded.bar,
    ),
    where=(Foo.bar < insert_stmt.excluded.bar)
)

session.execute(on_update_stmt, data)

it gives an error, and rightly so:

column reference "bar" is ambiguous

The SQL looks like this:

SQL: 'INSERT INTO foo (...) VALUES (...) ON CONFLICT (...) DO UPDATE SET 
bar = excluded.bar WHERE bar < bar'

The WHERE clause is not expanded properly; the alias EXCLUDED is omitted. Is 
this a bug, or am I doing something wrong?

the workaround I use is:

...
where=(text('foo.bar < EXCLUDED.bar'))
...

Cheers





Re: [sqlalchemy] Re: [RFC] Connecting SqlAlchemy to NoSQL DBs

2016-09-28 Thread Olemis Lang
On 9/28/16, Ashfaque Siddiqui  wrote:
> Hey Olemis,
>
> Did you ever find anything in response to this request?

Unfortunately, not quite. Though I've heard of a project named
MongoAlchemy [1]_. I am not sure how closely related it is to
SQLAlchemy at all. OTOH Apache Allura connects Turbogears to MongoDB
instances via Ming [2]_.


Beyond this, MongoEngine [3]_ ORM looks similar to Django's.


Sorry for not having more info. Should you find anything else, it
would be nice if you took the time to share.

[...]

.. [1] http://www.mongoalchemy.org/

.. [2] http://ming.readthedocs.io/en/latest/index.html

.. [3] http://mongoengine.org/

-- 
Regards,

Olemis - @olemislc

Apache™ Bloodhound contributor
http://issues.apache.org/bloodhound
http://blood-hound.net

Brython committer
http://brython.info
http://github.com/brython-dev/brython

SciPy Latin America - Cuban Ambassador
Chairman of SciPy LA 2017 - http://scipyla.org/conf/2017

Blog ES: http://simelo-es.blogspot.com/
Blog EN: http://simelo-en.blogspot.com/



Re: [sqlalchemy] concerns over generating a bitwise comparison in postgres - (or "can columns be explicit in a text filter?"

2016-09-28 Thread Jonathan Vanasco
Ok. I'll put a note on the code and leave it as is.

The cast/bit/op is just too hard to read for maintenance.  BUT it might be 
okay in one spot as a @compiles function.  I think I may do that!

Thanks, Mike!
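
A sketch of what that @compiles function could look like; BitToggled and its 
rendered SQL below are illustrative assumptions, not code from this thread:

from sqlalchemy import literal
from sqlalchemy.ext.compiler import compiles
from sqlalchemy.sql.expression import ColumnElement
from sqlalchemy.types import Boolean

class BitToggled(ColumnElement):
    """True when `bit_wanted` is set in an integer toggle column."""

    def __init__(self, col, bit_wanted):
        self.col = col
        self.bit_wanted = bit_wanted
        self.type = Boolean()

@compiles(BitToggled)
def compile_bit_toggled(element, compiler, **kw):
    # renders the hand-written cast/bit test in exactly one place
    return "(CAST(%s AS BIT(32)) & CAST(%s AS BIT(32)) <> CAST(0 AS BIT(32)))" % (
        compiler.process(element.col, **kw),
        compiler.process(literal(element.bit_wanted), **kw))

# usage sketch:
# session.query(Foo).filter(BitToggled(Foo.attribute_toggles, bit_wanted),
#                           Foo.attribute_toggles > 0)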




[sqlalchemy] Re: SQLAlchemy - Bulk update using sqlalchemy core table.update() expects all columns in the values data

2016-09-28 Thread Seth P
Oops, I missed that this is an UPDATE rather than an INSERT. Setting the 
missing columns to None probably isn't what you want.

On Wednesday, September 28, 2016 at 9:08:00 AM UTC-4, Seth P wrote:
>
> Can't you include the missing columns in your dictionary with None values?
>



Re: [sqlalchemy] SQLAlchemy - Bulk update using sqlalchemy core table.update() expects all columns in the values data

2016-09-28 Thread Mike Bayer



On 09/28/2016 04:29 AM, Rajesh Rolo wrote:

I'm trying to do a bulk update using core SQLAlchemy to a postgres
database. bulk_update_mappings does not work (reports StaleDataError).
So I'm trying to use core functions to do a bulk update. This works fine
when the update data passed to the values has all the columns in the db
but fails to work when we update only certain columns. In my
application, during periodic syncs between the server and the client
only a few of the columns will get updated most of the time.

The code snippet I have for update is :

conn = session.connection()
table = table_dict[table_key].__table__
stmt = table.update().where(and_(table.c.user_id == bindparam('uid'),
                                 tbl_pk[table_key] == bindparam('pkey'))).values()
conn.execute(stmt, update_list)

Since I update multiple tables on every sync, table names and primary
keys are indexed through an array. For the example below
table_dict[table_key] would translate to the table 'nwork' and
tbl_pk[table_key] would translate to 'table.c.nwid' which would be
'nwork.nwid'.

The update_list is a list of records (that need to get updated) as python
dictionaries. When a record has values for all the columns it works fine,
but when only some of the columns are getting updated it throws the
following error:

StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required
for bind parameter 'last_sync', in parameter group 1
[SQL: u'UPDATE nwork SET type=%(type)s, name=%(name)s,
last_sync=%(last_sync)s, update_time=%(update_time)s, status=%(status)s,
total_contacts=%(total_contacts)s, import_type=%(import_type)s,
cur_index=%(cur_index)s WHERE nwork.user_id = %(uid)s AND nwork.nwid =
%(pkey)s']

In this case the error was happening for a record where the column
'last_sync' was not getting updated.

What's the way of doing a bulk update where the records may not have all
the columns (the same set of them) getting updated?



the underlying call we use in this case is cursor.executemany():

https://www.python.org/dev/peps/pep-0249/#executemany

that is, the SQL is rendered just once, and then lots of parameter sets.   Such as,


   INSERT INTO table (x, y, z) VALUES (%s, %s, %s)

Then you'd send a list of 3-tuples.   So it should be clear that *every* 
tuple needs to have three values.   SQLAlchemy does not assume what 
value you'd want to put in your dictionary if values are missing.



The three ways this could be dealt with are:

1. provide an onupdate value for the Column that's missing (described at 
http://docs.sqlalchemy.org/en/latest/core/defaults.html).   This is not 
likely to be what you want.


2. Fill in those missing values for your dictionaries.  But since this 
is an UPDATE, I can tell maybe that's not what you want here.  Which 
leaves us with


3. Break up your collection of dictionaries based on which keys are 
present, then use multiple calls to conn.execute() for each batch.   I 
like to use itertools.groupby() for this sort of thing.
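
A sketch of option 3, assuming update_list, stmt and conn are the objects from
the quoted post and every dict carries 'uid' and 'pkey' plus whichever columns
changed:

from itertools import groupby

def keyset(d):
    # dicts sharing the same set of keys can share one executemany() batch
    return tuple(sorted(d))

for _, batch in groupby(sorted(update_list, key=keyset), key=keyset):
    conn.execute(stmt, list(batch))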











I'm running SQLAlchemy 1.0.14.



Re: [sqlalchemy] Feedback appreciated

2016-09-28 Thread Mike Bayer



On 09/28/2016 10:05 AM, Seth P wrote:

On Friday, August 23, 2013 at 3:52:54 PM UTC-4, Konsta Vesterinen wrote:



On Friday, August 23, 2013 1:52:41 AM UTC+3, Michael Bayer wrote:


2. ScalarListType vs. Postgresql ARRAY ?   same/better?  should
SLT use ARRAY on a PG backend ?


Hmm I'm not sure about this yet. It's definitely not better than
using PostgreSQL ARRAY. ARRAY is better in many ways but it's
PostgreSQL-specific. Maybe we could make ScalarListType use ARRAY on
PostgreSQL by default (as you suggested).


I was wondering if there are any plans for SQLAlchemy to support
Oracle's VARRAY column type? I've recently had the misfortune of having
to use Oracle, and would like to have columns of the form
sa.Column(sa.ARRAY(sa.String(8), dimensions=1)). I just looked at
SQLAlchemy-Utils's ScalarListType
(https://github.com/kvesteri/sqlalchemy-utils/blob/master/sqlalchemy_utils/types/scalar_list.py),
but I think then I'd bump into Oracle's 4000-byte limit
(https://docs.oracle.com/cd/B19306_01/server.102/b14237/limits001.htm#i287903).
(It's not clear to me what Oracle's VARRAY length limit is when used as
a column type, but that's a separate issue.) It's also not clear to me
if cx_Oracle supports VARRAYs nicely.


someone just asked about this and your last sentence is the main thing 
to worry about first, getting cx_oracle support confirmed.Oracle's 
OCI is vast, ancient, heavily encrusted, and even really simple things 
become awkward in cx_Oracle due to this.   I won't bash on cx_Oracle too 
much because it just tries to expose OCI as much as possible, but there 
are crap-tons of awkwardnesses, "just don't works", and other edge cases 
in this area.   So illustrating VARRAY round trip on cx_oracle is the 
first step.









Re: [sqlalchemy] Inserting VARBINARY(MAX) columns with MSSQL/SQLAlchemy

2016-09-28 Thread Mike Bayer



On 09/28/2016 02:32 AM, jonathan.schu...@gmail.com wrote:

Usual apologies for newbie question...

I'm trying to use SQLAlchemy to insert data into a MSSQL DB with columns
of type VARBINARY(MAX). The only way I could find to avoid the error message

 Implicit conversion from data type varchar to varbinary(max) is not
allowed. Use the CONVERT function to run this query.

was to wrap the bound column in CONVERT(VARBINARY(MAX), ...). This
however creates problems when I want to use the same code on other
database types. I guess I could work around this issue by manually
checking the database dialect, but since one purpose of SQLAlchemy
appears to be to help write DB agnostic code, I would hope that this
problem could be solved in a cleaner way.

Many thanks for any suggestions.


if you are getting that error on INSERT it sounds like there are driver 
issues involved (e.g. pymssql, pyodbc).  For an INSERT you should not 
get that error; using the VARBINARY SQLAlchemy type, SQLAlchemy calls 
upon a construct provided by your DBAPI (e.g., the driver) called 
Binary().  This is a wrapper that is intended to signal to the driver 
that this value is not a character string, it's bytes.   Internally the 
driver should be doing this step.


For here we'd look to see what driver you're using, if changing drivers 
resolves, and as always a small and self-contained reproducing test 
case.   For a driver issue I'd then look to convert the test case to be 
using the DB driver alone, then we send a bug report to that driver.
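
For comparison, a minimal sketch of an INSERT that should need no manual 
CONVERT, assuming a working driver: pass bytes and let the DBAPI's Binary() 
wrapper do the conversion (the table and DSN below are hypothetical):

import sqlalchemy as sa
from sqlalchemy.dialects.mssql import VARBINARY

metadata = sa.MetaData()
blobs = sa.Table('blobs', metadata,
                 sa.Column('id', sa.Integer, primary_key=True),
                 sa.Column('data', VARBINARY('max')))

# engine = sa.create_engine('mssql+pyodbc://...')   # hypothetical DSN
# metadata.create_all(engine)
# engine.execute(blobs.insert(), {'id': 1, 'data': b'\x00\x01\x02'})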







Re: [sqlalchemy] Feedback appreciated

2016-09-28 Thread Seth P
On Friday, August 23, 2013 at 3:52:54 PM UTC-4, Konsta Vesterinen wrote:
>
>
>
> On Friday, August 23, 2013 1:52:41 AM UTC+3, Michael Bayer wrote:
>  
>
>> 2. ScalarListType vs. Postgresql ARRAY ?   same/better?  should SLT use 
>> ARRAY on a PG backend ?
>>
>
> Hmm I'm not sure about this yet. It's definitely not better than using 
> PostgreSQL ARRAY. ARRAY is better in many ways but it's PostgreSQL-specific. 
> Maybe we could make ScalarListType use ARRAY on PostgreSQL by default (as 
> you suggested).
>

I was wondering if there are any plans for SQLAlchemy to support Oracle's 
VARRAY column type? I've recently had the misfortune of having to use 
Oracle, and would like to have columns of the form 
sa.Column(sa.ARRAY(sa.String(8), dimensions=1)). I just looked at 
SQLAlchemy-Utils's ScalarListType 
(https://github.com/kvesteri/sqlalchemy-utils/blob/master/sqlalchemy_utils/types/scalar_list.py), 
but I think then I'd bump into Oracle's 4000-byte limit 
(https://docs.oracle.com/cd/B19306_01/server.102/b14237/limits001.htm#i287903). 
(It's not clear to me what Oracle's VARRAY length limit is when used as a 
column type, but that's a separate issue.) It's also not clear to me if 
cx_Oracle supports VARRAYs nicely.



Re: [sqlalchemy] concerns over generating a bitwise comparison in postgres - (or "can columns be explicit in a text filter?"

2016-09-28 Thread Mike Bayer



On 09/27/2016 05:12 PM, Jonathan Vanasco wrote:


Let's say that I have a table such as this:

 CREATE TABLE foo (
 id SERIAL PRIMARY KEY,
 attribute_toggles INT DEFAULT NULL
 );
CREATE INDEX idx_attribute_toggles ON foo(CAST(attribute_toggles AS
BIT(32))) WHERE attribute_toggles  <> 0;

represented by such a class

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)
    attribute_toggles = Column(Integer())

and I need to generate the sql:

  SELECT * FROM foo WHERE (CAST(attribute_toggles AS BIT(32)) &
CAST(1 AS BIT(32)) <> CAST (0 AS BIT(32))) AND attribute_toggles > 0;

The casting and `>0` comparison are used to filter on the index instead
of a sequential scan.  that drops a query from around 4000ms to 30ms.

Is it ok/reliable just doing this:

foos = session.query(Foo)\
.filter("""(cast(attribute_toggles as bit(32)) &
cast(:bit_wanted as bit(32)) <> cast(0 as bit(32))) AND
attribute_toggles > 0""",
)\
.params(bit_wanted = bit_wanted,
)\
.order_by(Foo.id.desc())\
.limit(limit)\
.offset(offset)\
.all()

Writing out the filter in SqlAlchemy was getting messy, so I went for a
text filter.

My concern is that the textual filter doesn't specify what
`attribute_toggles` is.  The sqlalchemy core has a somewhat similar
`columns` construct for statements, but that's not applicable here.

i made a few attempts to specify the columns using various sqlalchemy
features that I'm familiar with, but i had no luck.


well it will work as long as that column name is non-ambiguous.  If you 
started using Foo + FooAlias together, then it would have problems.   In 
that case using plain cast() / BIT / op() would be the quickest 
solution.   Or a @compiles construct if you really want the string thing 
to be present.
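
For reference, a sketch of the plain cast()/op() spelling using the Foo model
and variables from the quoted post (BIT comes from the postgresql dialect):

from sqlalchemy import cast, literal
from sqlalchemy.dialects.postgresql import BIT

bit_test = (
    cast(Foo.attribute_toggles, BIT(32))
    .op('&')(cast(literal(bit_wanted), BIT(32)))
    != cast(literal(0), BIT(32))
)

foos = (session.query(Foo)
        .filter(bit_test, Foo.attribute_toggles > 0)
        .order_by(Foo.id.desc())
        .limit(limit).offset(offset)
        .all())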









[sqlalchemy] Re: SQLAlchemy - Bulk update using sqlalchemy core table.update() expects all columns in the values data

2016-09-28 Thread Seth P
Can't you include the missing columns in your dictionary with None values?



[sqlalchemy] SQLAlchemy - Bulk update using sqlalchemy core table.update() expects all columns in the values data

2016-09-28 Thread Rajesh Rolo


I'm trying to do a bulk update using core SQLAlchemy to a postgres 
database. bulk_update_mappings does not work (reports StaleDataError). So 
I'm trying to use core functions to do a bulk update. This works fine when 
the update data passed to the values has all the columns in the db but 
fails to work when we update only certain columns. In my application, 
during periodic syncs between the server and the client only a few of the 
columns will get updated most of the time.

The code snippet I have for update is :

conn = session.connection()
table = table_dict[table_key].__table__
stmt = table.update().where(and_(table.c.user_id == bindparam('uid'),
                                 tbl_pk[table_key] == bindparam('pkey'))).values()
conn.execute(stmt, update_list)

Since I update multiple tables on every sync, table names and primary keys 
are indexed through an array. For the example below table_dict[table_key] 
would translate to the table 'nwork' and tbl_pk[table_key] would translate 
to 'table.c.nwid' which would be 'nwork.nwid'.

The update_list is a list of records (that need to get updated) as python 
dictionaries. When a record has values for all the columns it works fine, 
but when only some of the columns are getting updated it throws the 
following error:

StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required 
for bind parameter 'last_sync', in parameter group 1 
[SQL: u'UPDATE nwork SET type=%(type)s, name=%(name)s, 
last_sync=%(last_sync)s, 
update_time=%(update_time)s, status=%(status)s, 
total_contacts=%(total_contacts)s, 
import_type=%(import_type)s, cur_index=%(cur_index)s WHERE 
nwork.user_id = %(uid)s AND nwork.nwid = %(pkey)s']

In this case the error was happening for a record where the column 
'last_sync' was not getting updated.

What's the way of doing a bulk update where the records may not have all 
the columns (the same set of them) getting updated? 


I'm running SQLAlchemy 1.0.14.



[sqlalchemy] Inserting VARBINARY(MAX) columns with MSSQL/SQLAlchemy

2016-09-28 Thread jonathan . schultz
Usual apologies for newbie question...

I'm trying to use SQLAlchemy to insert data into a MSSQL DB with columns of 
type VARBINARY(MAX). The only way I could find to avoid the error message

>  Implicit conversion from data type varchar to varbinary(max) is not 
> allowed. Use the CONVERT function to run this query.
>
was to wrap the bound column in CONVERT(VARBINARY(MAX), ...). This however 
creates problems when I want to use the same code on other database types. 
I guess I could work around this issue by manually checking the database 
dialect, but since one purpose of SQLAlchemy appears to be to help write DB 
agnostic code, I would hope that this problem could be solved in a cleaner 
way.

Many thanks for any suggestions.
