Re: [sqlalchemy] Inserting VARBINARY(MAX) columns with MSSQL/SQLAlchemy

2016-10-04 Thread Jonathan Schultz
Many thanks for the prompt reply.

if you are getting that error on INSERT it sounds like there are driver 
> issues involved (e.g. pymssql, pyodbc).  For an INSERT you should not 
> get that error; using the VARBINARY SQLAlchemy type, SQLAlchemy calls 
> upon a construct provided by your DBAPI (e.g., the driver) called 
> Binary().  This is a wrapper that is intended to signal to the driver 
> that this value is not a character string, it's bytes.   Internally the 
> driver should be doing this step. 
>
> From here we'd look to see what driver you're using, whether changing drivers 
> resolves it, and as always a small and self-contained reproducing test 
> case.   For a driver issue I'd then look to convert the test case to 
> use the DB driver alone, and then we send a bug report to that driver. 
>

OK, in brief: I'm using pymssql. I've never used pyodbc, but if it's worth 
investigating whether it produces the same error, I'll find the time to 
figure out how to use it.
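
If it does turn out to be a driver issue, my understanding is that a 
driver-only version of the test would look roughly like this (untested on my 
side; the connection details are placeholders, and I'm assuming pymssql's 
standard DBAPI Binary() constructor):

import pymssql

# Connect with the same placeholder credentials as the SQLAlchemy URL below.
conn = pymssql.connect(server='localhost', user='user',
                       password='password', database='test')
cur = conn.cursor()

# Wrap the value in the driver's Binary() so it is sent as bytes, not varchar.
cur.execute("INSERT INTO Test (Id, Test) VALUES (NEWID(), %s)",
            (pymssql.Binary(b'a'),))
conn.commit()
conn.close()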

In the meantime I have produced a small test script that demonstrates the 
problem. You need a database already created with something like:

CREATE DATABASE test
USE test

CREATE USER user FROM LOGIN user
GRANT CONTROL TO user

CREATE TABLE Test(Id UNIQUEIDENTIFIER PRIMARY KEY, Test VARBINARY(MAX) NOT NULL)

Then if you run the attached script testvarbinary.py with a single argument 
that references the database, for example:

.\testvarbinary.py mssql+pymssql://user:password@localhost/test

you'll get the following output:

Test 1 works
Test 2 fails
Traceback (most recent call last):
  File "./testvarbinary.py", line 25, in <module>
    {'Test':bytearray(b'a')})
  File "pymssql.pyx", line 467, in pymssql.Cursor.execute (pymssql.c:7561)
sqlalchemy.exc.OperationalError: (pymssql.OperationalError) (257, 'Implicit conversion from data type varchar to varbinary(max) is not allowed. Use the CONVERT function to run this query.DB-Lib error message 20018, severity 16:\nGeneral SQL Server error: Check messages from the SQL Server\n') [SQL: 'INSERT INTO [Test] ([Test]) OUTPUT inserted.[Id] VALUES (%(Test)s)'] [parameters: {'Test': 'a'}]

What I find curious is that although I tried to pass the value to insert as a 
bytearray, it appears to have been coerced back to a string by the time it 
shows up in the error message. However, in some other cases, where I obtain a 
value via SQLAlchemy from a BLOB column in an SQLite database (which appears 
to be a byte array), I am able to insert it into an MSSQL VARBINARY(MAX) 
column without calling CONVERT.
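
One thing I have not tried yet - so this is only a guess on my part, not 
something I have verified - is giving the bindparam an explicit binary type so 
that SQLAlchemy's bind processing applies the driver's Binary() wrapper, or 
skipping values() altogether so the reflected column type is used (table and 
engine as in the attached script):

from sqlalchemy import VARBINARY, bindparam

# Guess 1: an explicit type on the bindparam, so the binary bind processor runs.
stmt = table.insert().values({'Test': bindparam('Test', type_=VARBINARY())})
engine.execute(stmt, {'Test': bytearray(b'a')})

# Guess 2: no values() at all; the reflected VARBINARY(MAX) column type
# then drives the bind processing for the 'Test' parameter.
engine.execute(table.insert(), {'Test': bytearray(b'a')})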

Hope that's of some use in tracking down any issue, or perhaps you can see 
where I am going wrong.

Thanks again... Jonathan

#!/usr/bin/python
# -*- coding: utf-8 -*-

# testvarbinary.py - minimal script reproducing the VARBINARY(MAX) insert problem.

import argparse
from sqlalchemy import *
from sqlalchemy import exc

try:
    parser = argparse.ArgumentParser(description='Test VARBINARY(MAX) insert operation.')
    parser.add_argument('db', type=str, help='SQLAlchemy database URL.')
    args = parser.parse_args()

    engine = create_engine(args.db)
    metadata = MetaData(bind=engine)
    metadata.reflect(engine)

    table = metadata.tables.get('Test')

    # Test 1: wrap the bound parameter in an explicit CONVERT(VARBINARY(MAX), ...)
    print('Test 1 works')
    engine.execute(table.insert().values({'Test': func.CONVERT(literal_column('VARBINARY(MAX)'), bindparam('Test'))}),
                   {'Test': bytearray(b'a')})

    # Test 2: pass the bytearray through a plain bindparam
    print('Test 2 fails')
    engine.execute(table.insert().values({'Test': bindparam('Test')}),
                   {'Test': bytearray(b'a')})

except exc.SQLAlchemyError:
    raise



Re: [sqlalchemy] Session / declarative_base scope

2016-10-04 Thread Warwick Prince
Hi Mike

I don’t need support very often, but when I do I know I can count on your clear 
and concise responses to save the day.

Everything was SO close - all I needed to do was take my dynamically created 
classes (SalesDocumentLine, for example) and force them into the general 
globals(), and hey presto - all fixed.  ‘Knowing’ that the environment had a 
global scope, and that it was thread-safe, eliminated so many other possible red 
herrings.  I could then focus on why my classes could not be ‘seen’.

Thanks once again.  Best support on the interwebs. ;-)
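
For anyone who lands on this thread later, the change boiled down to something 
like the following sketch (the names are illustrative rather than my actual 
code):

from sqlalchemy import Table
from sqlalchemy.ext.declarative import declarative_base

def make_thread_base(thread_metadata):
    # One declarative Base per thread, bound to that thread's own MetaData.
    return declarative_base(metadata=thread_metadata)

def make_mapped_class(Base, class_name, table_name, engine):
    # Build the class dynamically against the thread's Base, autoloading columns.
    cls = type(str(class_name), (Base,), {
        '__table__': Table(table_name, Base.metadata,
                           autoload=True, autoload_with=engine),
    })
    # Forcing the class into globals() is what fixed the name lookups in my
    # case, so string targets like relationship("SalesDocumentLine") can be
    # found when the mappers are configured.
    globals()[class_name] = cls
    return cls

Calling sqlalchemy.orm.configure_mappers() right after the classes are created 
(as Mike suggests below) also makes any unresolved name fail immediately, in 
the thread that created the classes, rather than during a later query.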

> On 29 Sep 2016, at 11:48 PM, Mike Bayer  wrote:
> 
> 
> 
> On 09/29/2016 01:38 AM, Warwick Prince wrote:
>> Hi Mike
>> 
>> I would like a little insight into the session object, and the
>> declarative_base class.
>> 
>> I have a process running many threads, where each thread may be
>> connected to potentially a different engine/database.  If the database
>> connection between 2 or more threads is the same, then they will share
>> the same engine.  However, they each have their own MetaData objects.
>> 
>> There is a global sessionmaker() that has no binding at that time.
>> When each thread creates its OWN session, then it processes mySession =
>> Session(bind=myThreadsEngine).
>> 
>> The Engines and MetaData part has worked perfectly for years, using
>> basic queries like Table(’some_table', threadMetaData,
>> autoload=True).select().execute().fetchall(). etc.
>> 
>> I’ve started to use the ORM more now, and am using the relationships
>> between the objects.  However, I’m hitting an issue that appears to
>> centre around some shared registry or class variables or something that
>> is causing a conflict.
>> 
>> I’ve made it so each THREAD has its own Base =
>> declarative_base(metadata=theSessionsMetaData)
>> 
>> Then, classes are mapped dynamically based on this new Base, and the
>> columns are autoload’ed.  Again, this is working - sometimes.   There’s
>> some random-like problem that mostly means it does not work when I do a
>> mySession.query(myMappedClassWithRelationships) and I get the following
>> exception being raised;
> 
> So generating new classes in threads can be problematic, because the registry 
> of mappers is essentially global state.   Initialization of mappers against 
> each other, which is where your error is, is mutexed and overall 
> thread-safe, but you still need to make sure that everything your class 
> needs in order to be used actually exists.  Here, somewhere in your program you have a 
> class called SalesDocumentLine, and that class has not been seen by your 
> Python interpreter yet.   That the problem only happens randomly in threads 
> implies some kind of race condition, which will make this harder to diagnose, 
> but basically that name has to exist if your mapping refers to it.   You 
> might want to play with the configure_mappers() call, which will cause this 
> initialization to occur at the point you tell it.
> 
>> 
>>  File
>> "C:\Python27\lib\site-packages\dap-2.1.2-py2.7.egg\dap\db\dbutils.py",
>> line 323, in DAPDB_SetColumns
>>query = session.query(mappedClass).filter_by(**whereCriteria)
>>  File "build\bdist.win32\egg\sqlalchemy\orm\session.py", line 1260, in
>> query
>>return self._query_cls(entities, self, **kwargs)
>>  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 110, in
>> __init__
>>self._set_entities(entities)
>>  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 120, in
>> _set_entities
>>self._set_entity_selectables(self._entities)
>>  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 150, in
>> _set_entity_selectables
>>ent.setup_entity(*d[entity])
>>  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 3446, in
>> setup_entity
>>self._with_polymorphic = ext_info.with_polymorphic_mappers
>>  File "build\bdist.win32\egg\sqlalchemy\util\langhelpers.py", line 754,
>> in __get__
>>obj.__dict__[self.__name__] = result = self.fget(obj)
>>  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 1891, in
>> _with_polymorphic_mappers
>>configure_mappers()
>>  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 2768, in
>> configure_mappers
>>mapper._post_configure_properties()
>>  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 1708, in
>> _post_configure_properties
>>prop.init()
>>  File "build\bdist.win32\egg\sqlalchemy\orm\interfaces.py", line 183,
>> in init
>>self.do_init()
>>  File "build\bdist.win32\egg\sqlalchemy\orm\relationships.py", line
>> 1628, in do_init
>>self._process_dependent_arguments()
>>  File "build\bdist.win32\egg\sqlalchemy\orm\relationships.py", line
>> 1653, in _process_dependent_arguments
>>setattr(self, attr, attr_value())
>>  File
>> "build\bdist.win32\egg\sqlalchemy\ext\declarative\clsregistry.py", line
>> 293, in __call__
>>(self.prop.parent, self.arg, n.args[0], self.cls)
>> InvalidRequestError: When initializing mapper

Re: [sqlalchemy] Baked Queries | .in_() & .count()

2016-10-04 Thread Mike Bayer



On 10/03/2016 07:41 PM, Andrew Kowalik wrote:

> SQLAlchemy == 1.0.3
> mysql-python == 1.2.5
>
> I have been playing around with baked queries and have a couple of
> observations, one which is probably my lack of SQLA knowledge and the
> other a possible feature.
>
> 1) .in_ and bindparam
> I have a query that I am baking that uses in_ with a Python list. For
> example:
> baked_query += lambda q: q.filter(Model.attr1.in_([bindparam('attr1')]))


that's not a supported argument form.  in_() generates SQL like this:

col IN (?, ?, ?, ?)

as you can see, the physical structure of the SQL depends on how many 
bound parameter items are in the list, so a single bound parameter 
cannot accommodate this.  I *think* that some of the MySQL and 
PostgreSQL drivers do attempt to receive a single list as a bound 
parameter and expand it within IN() in some of these cases, but SQLAlchemy 
knows nothing about that, and it's not clear how those drivers accomplish 
this without parsing the SQL statement.


If you are trying to work with the underlying driver's auto-expansion of 
IN, I think at the very least you would not want to use 
in_([bindparam()]); you'd want to use in_(bindparam()), and I'm not sure 
SQLAlchemy allows that.  You'd be better off building up a custom 
construct for now, short of patching SQLAlchemy.
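
For what it's worth, one pattern that keeps the physical SQL fixed per cache 
entry is to bake one statement per distinct IN-list length, with the length 
folded into the cache key.  A rough sketch (not from the thread; the Model 
class here is just a stand-in for the real mapped class):

from sqlalchemy import Column, Integer, String, bindparam
from sqlalchemy.ext import baked
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Model(Base):                      # stand-in for the real mapped class
    __tablename__ = 'model'
    id = Column(Integer, primary_key=True)
    attr1 = Column(String(50))

bakery = baked.bakery()

def models_with_attr1_in(session, values):
    values = list(values)
    n = len(values)

    baked_query = bakery(lambda s: s.query(Model))
    # Pass n as an extra cache-key argument so each distinct list length
    # gets its own baked statement with the right number of placeholders.
    baked_query.add_criteria(
        lambda q: q.filter(
            Model.attr1.in_([bindparam('p%d' % i) for i in range(n)])),
        n)

    params = dict(('p%d' % i, v) for i, v in enumerate(values))
    return baked_query(session).params(**params).all()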





> Generating the query in Python succeeds, but it fails in MySQL because bindparam
> is not being converted to a valid representation for the list ("operand
> should contain 1 column(s)"). I tried manually converting the Python list
> into a string of comma-separated values; the query runs, but I am doing
> something stupid, so the query returns zero results.


So because the physical structure of the IN statement is affected by the 
length of the argument list, this is not an appropriate target for 
something that would be "baked".   Unless the underlying database driver 
is translating for a single bound parameter (and SQLAlchemy is letting 
you do that), you're going to get weird errors.





> 2) The Result class currently does not support count(). I have a use case
> where I want to allow paginating a baked query and, along with it,
> gathering the total number of records. Without count() support I am
> having to call _as_query() and then call count() on the returned query.


Result sets don't work that way.  Relational databases do not send the 
count of rows that would be expected separately; they simply start 
sending rows, and you have no idea how many there will be (nor does 
the database) until it has no more to send.   So for count(), fetch the 
entire result set and take its length.  Obviously this does not work for 
pagination.   For pagination you at the very least need to do 
LIMIT/OFFSET at the SQL level, and depending on platform and specifics 
you may want to use 
https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/WindowedRangeQuery.
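
As a concrete illustration of the LIMIT/OFFSET route (a plain, non-baked 
sketch, reusing the Model stand-in from the earlier sketch):

from sqlalchemy import func

def get_page(session, page, per_page=50):
    # The total comes from a separate COUNT query; the result set itself
    # cannot report up front how many rows it will contain.
    total = session.query(func.count(Model.id)).scalar()

    # A deterministic ORDER BY is needed for stable page boundaries.
    rows = (session.query(Model)
            .order_by(Model.id)
            .limit(per_page)
            .offset(page * per_page)
            .all())
    return total, rows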







> Just some thoughts that I did not see brought up before in my searches.





[sqlalchemy] Re: Baked Queries | .in_() & .count()

2016-10-04 Thread Jonathan Vanasco


On Monday, October 3, 2016 at 7:41:09 PM UTC-4, Andrew Kowalik wrote:
>
> I have a query that I am baking that uses in_ with a Python list. For 
> example:
> baked_query += lambda q: q.filter(Model.attr1.in_([bindparam('attr1')]))
>
Try using a tuple instead of a list.  Some Python drivers require a tuple 
input at the DB Cursor level. 
