Re: [sqlalchemy] Cannot insert strings with a length greater than 2000 into columns with a datatype of Text, VARCHAR, or NVARCHAR using MS SQL Server 2017 and 2019

2020-10-15 Thread Mike Bayer


On Thu, Oct 15, 2020, at 3:26 AM, Nicolas Lykke Iversen wrote:
> Hi Mike,
> 
> I have created an issue for pyodbc: 
> https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-708930870
> 
> I've gotten really good feedback there from Microsoft, and a fix has been 
> proposed that works:
> 
> *"You can try to use setinputsizes on your parameter to tell it to send as 
> varchar(max): cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])"
*
> I'm by no means an SQLAlchemy expert, but shouldn't the pyodbc dialect be 
> updated to support varchar(max) using the proposed method? If not, how can I 
> execute cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),]) using SQLAlchemy, 
> so that I can make use of varchar(max) in my application?

SQLAlchemy has some dialects that make use of setinputsizes() out of necessity, 
but it's an area that is fraught with issues as it means SQLAlchemy is 
second-guessing what the DBAPI is coming up with.  

It's actually news to me that pyodbc supports setinputsizes(); historically, 
cx_Oracle was the only DBAPI that ever implemented it.   We have a hook that 
calls upon setinputsizes(), but right now it's hardcoded to cx_Oracle's 
argument style, so the hook would need alterations to support different 
calling styles on different dialects.
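To make the difference concrete, here is a rough sketch of the two calling conventions (the cx_Oracle call shown in comments follows that driver's documented style; `SQL_WVARCHAR` is inlined with its ODBC value so the snippet runs without pyodbc installed, and the commented `cursor.setinputsizes()` call assumes a live pyodbc cursor):

```python
# cx_Oracle style: setinputsizes() takes one argument per bind parameter,
# positionally or by keyword, e.g.:
#
#     cursor.setinputsizes(None, cx_Oracle.CLOB)
#
# pyodbc style: setinputsizes() takes a single list with one entry per
# positional parameter, each either None (driver default) or a
# (sql_type, column_size, decimal_digits) tuple.

SQL_WVARCHAR = -9  # mirrors pyodbc.SQL_WVARCHAR (the ODBC constant)

# For an INSERT with an integer and a long string parameter, sending the
# string as nvarchar(max):
sizes = [None, (SQL_WVARCHAR, 0, 0)]
print(sizes)  # [None, (-9, 0, 0)]

# On a real pyodbc cursor this would then be:
# cursor.setinputsizes(sizes)
```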

In https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-709385941 it 
is suggested that there should be no need to use this "_SC" collation - then 
in https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-709428774, you 
stated "I previously experimented with non-`_SC` in my application, and it 
caused errors."   Can you be more specific about these errors?   At the 
moment, this is suggesting a major architectural rework of the pyodbc dialect 
to support a use case which has other workarounds.   The architecture of 
SQLAlchemy's set_input_sizes() hook has changed, and at best this would be 
part of 1.4, which is not yet in beta release; a production release is several 
months away.

From that point, there's an event hook at do_setinputsizes(): 
https://docs.sqlalchemy.org/en/13/core/events.html?highlight=do_setinputsizes#sqlalchemy.events.DialectEvents.do_setinputsizes 
that would be usable so that you could set up rules like these on your own, 
and we eventually would document the known workarounds for various unusual 
issues.

This issue is definitely unusual; it's never been reported before and was 
difficult to find in Google searches, so I don't believe we are using pyodbc 
incorrectly.   It would be nice if pyodbc could someday recognize that MS SQL 
Server is the only database anyone really uses their driver with, and perhaps 
add a SQL Server ruleset directly.   If this were a problem that occurred 
frequently, we would begin looking into turning on some of this behavior by 
default, but we need to be very conservative there, as this is an area where 
things break quite a lot.

Below is the recipe that includes a directly vendored version of the 
set_input_sizes() hook to suit your immediate use case.   That's what I can 
get you for now, and it will allow you to set the input sizes any way you'd 
like.

import pyodbc

from sqlalchemy import Column
from sqlalchemy import create_engine
from sqlalchemy import event
from sqlalchemy import Integer
from sqlalchemy import String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import Session


Base = declarative_base()

e = create_engine(
    "mssql+pyodbc://scott:tiger^5HHH@mssql2017:1433/test?driver=ODBC+Driver+13+for+SQL+Server",
    echo=True,
)


@event.listens_for(e, "before_cursor_execute")
def before_cursor_execute(
    conn, cursor, statement, parameters, context, executemany
):
    if not hasattr(context.compiled, "bind_names"):
        return

    inputsizes = {}
    for bindparam in context.compiled.bind_names:
        # check for the specific datatype you care about here
        if bindparam.type._type_affinity is String:
            # pyodbc expects one (sql_type, size, decimal) tuple per parameter
            inputsizes[bindparam] = (pyodbc.SQL_WLONGVARCHAR, 0, 0)
        else:
            inputsizes[bindparam] = None

    positional_inputsizes = []
    for key in context.compiled.positiontup:
        bindparam = context.compiled.binds[key]
        dbtype = inputsizes.get(bindparam, None)
        positional_inputsizes.append(dbtype)

    cursor.setinputsizes(positional_inputsizes)


class A(Base):
    __tablename__ = "a"

    id = Column(Integer, primary_key=True)
    x = Column(Integer)
    data = Column(String)
    y = Column(Integer)


Base.metadata.drop_all(e)
Base.metadata.create_all(e)

s = Session(e)

s.add(A(data="some data", x=1, y=4))
s.commit()

> 
> Can you recommend a hotfix for using varchar(max) in current SQLAlchemy 
> applications that need to handle Unicode supplementary characters (_SC)? 
> 
> I really appreciate your help.
> 
> Best regards
> Nicolas 
> On Wednesday, October 14, 

Re: [sqlalchemy] Text subquery column names in the results

2020-10-15 Thread Mike Bayer


On Thu, Oct 15, 2020, at 2:52 AM, Kotofos online wrote:
> 
> Hi,
> Could you shed some light on what I might be doing incorrectly? I have this 
> text() SELECT * query on top of a one-column sub-query and in the result, I 
> am not getting that column name.
> 
> ```
> stmt = text('select "FirstName", from "Customer"')
> stmt = select('*').select_from(stmt.columns().alias())
> print(stmt)
> 
> SELECT *
> FROM (select "FirstName",  from "Customer") AS anon_1
> ```
> 
> This works, but produces an incorrect column name in the output:
> ```
> res = engine.execute(stmt)
> keys = res.fetchall()[0].keys()
> print(keys)

When using select(), the names in the result rows come from those which were 
provided to the select() construct.   They are matched to the names of the 
actual cursor when possible; when they don't match, the names given to 
select() are used.


> 
> ['*']
> ```
> 
> But when the subquery has two columns, then it works as expected:
> ```
> stmt = text('select "FirstName", "LastName" from "Customer"')
> stmt = select('*').select_from(stmt.columns().alias())
> res = engine.execute(stmt)
> keys = res.fetchall()[0].keys()
> print(keys)
> 
> ['FirstName', 'LastName']
> ```
> So, is there a better way to wrap a text query? Why is the column name lost 
> in the first case? Is it a bug?

It's basically undefined behavior you're dealing with; you wouldn't want to 
pass a "*" to select().   I'm sure there's more context to what you actually 
need to do, but in the example above, there's no need to use a subquery; just 
invoke the text() construct directly.
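For instance, a minimal sketch of that approach, using an in-memory SQLite table as a stand-in for the "Customer" table from the question (connection-level execution is used here so the snippet is not tied to engine.execute()):

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")

with engine.connect() as conn:
    conn.execute(text('CREATE TABLE "Customer" ("FirstName" TEXT, "LastName" TEXT)'))
    conn.execute(text("INSERT INTO \"Customer\" VALUES ('Ada', 'Lovelace')"))

    # Execute the text() construct directly; with no select('*') wrapper,
    # the column name comes straight from the cursor.
    res = conn.execute(text('SELECT "FirstName" FROM "Customer"'))
    keys = list(res.keys())
    print(keys)  # ['FirstName']
```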


> 
> Tested on 1.3.20 and 1.2.19 
> 

> --
> SQLAlchemy - 
> The Python SQL Toolkit and Object Relational Mapper
>  
> http://www.sqlalchemy.org/
>  
> To post example code, please provide an MCVE: Minimal, Complete, and 
> Verifiable Example. See http://stackoverflow.com/help/mcve for a full 
> description.
> --- 
> You received this message because you are subscribed to the Google Groups 
> "sqlalchemy" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to sqlalchemy+unsubscr...@googlegroups.com.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/sqlalchemy/c0be052d-1c20-49d1-a73d-875b4a7afef9n%40googlegroups.com
>  
> .



Re: [sqlalchemy] Cannot insert strings with a length greater than 2000 into columns with a datatype of Text, VARCHAR, or NVARCHAR using MS SQL Server 2017 and 2019

2020-10-15 Thread Nicolas Lykke Iversen
I already tested the _UTF8 encoding using SQL Server 2019 in 
Docker: mcr.microsoft.com/mssql/server:2019-latest

The POC fails here too with the following error when the database is created 
with the LATIN1_GENERAL_100_CI_AS_SC_UTF8 collation:  

Traceback (most recent call last):
  File "test.py", line 56, in 
cursor.execute(sql, (content,))
pyodbc.ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 17 for 
SQL Server][SQL Server]Cannot convert to text/ntext or collate to 
'Latin1_General_100_CI_AS_SC_UTF8' because these legacy LOB types do not 
support UTF-8 or UTF-16 encodings. Use types varchar(max), nvarchar(max) or 
a collation which does not have the _SC or _UTF8 flags. (4189) 
(SQLParamData)")

However, it works when using 
cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),]).

You are not right about the collation being legacy - 
LATIN1_GENERAL_100_CI_AS_SC is used extensively, due to its support for 
supplementary characters. It's the text datatype that's legacy:

*If you store character data that reflects multiple languages in SQL Server 
(SQL Server 2005 (9.x) and later), use Unicode data types (nchar, nvarchar, 
and ntext) instead of non-Unicode data types (char, varchar, and text).*

I'm not saying it's SQLAlchemy's fault or that SQLAlchemy should fix this 
issue. But if you read my pyodbc issue on GitHub, you will see that they 
argue that it's SQLAlchemy that's using pyodbc incorrectly :(

On Thursday, October 15, 2020 at 1:06:29 PM UTC+2 Simon King wrote:

> Do you know if there is a downside to calling setinputsizes like that?
> To put it another way, why doesn't pyodbc configure itself like that
> automatically? Why do you think this belongs in SQLAlchemy rather than
> pyodbc?
>
> I suspect the answer is that most applications don't need it and there
> is a downside (perhaps a performance implication?).
>
> I've never used SQL Server, but the fact that the error message refers
> to these collations as "legacy" suggests that an alternative collation
> might be better.
>
> https://docs.microsoft.com/en-us/sql/relational-databases/collations/collation-and-unicode-support?view=sql-server-ver15#Supplementary_Characters
> says:
>
> SQL Server 2019 (15.x) extends supplementary character support to
> the char and varchar data types with the new UTF-8 enabled collations
> (_UTF8). These data types are also capable of representing the full
> Unicode character range.
>
> If you can restrict yourself to SQL Server 2019, that might be a better 
> option.
>
> Simon
>
> On Thu, Oct 15, 2020 at 10:08 AM Nicolas Lykke Iversen
>  wrote:
> >
> > Thank you, Simon.
> >
> > I'm curious whether this is the way to do it in the future, or whether 
> SQLAlchemy should implement varchar(max) properly?
> >
> > What would the argument be for not implementing varchar(max) in the 
> pyodbc dialect?
> >
> > On Thursday, October 15, 2020 at 11:05:32 AM UTC+2 Simon King wrote:
> >>
> >> You could call 'setinputsizes' in a handler for the
> >> 'before_cursor_execute' event, something like this:
> >>
> >>
> >> from sqlalchemy import event
> >>
> >> @event.listens_for(SomeEngine, 'before_cursor_execute')
> >> def receive_before_cursor_execute(conn, cursor, statement,
> >> parameters, context, executemany):
> >> cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])
> >>
> >>
> >> 
> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents
> >> 
> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute
> >>
> >> Hope that helps,
> >>
> >> Simon
> >>
> >>
> >> On Thu, Oct 15, 2020 at 8:27 AM Nicolas Lykke Iversen  
> wrote:
> >> >
> >> > Hi Mike,
> >> >
> >> > I have created an issue for pyodbc: 
> https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-708930870
> >> >
> >> > I've gotten really good feedback there from Microsoft, and a fix has 
> been proposed that works:
> >> >
> >> > "You can try to use setinputsizes on your parameter to tell it to 
> send as varchar(max): cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])"
> >> >
> >> > I'm by no means an SQLAlchemy expert, but shouldn't the pyodbc 
> dialect be updated to support varchar(max) using the proposed method? If 
> not, how can I execute cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),]) 
> using SQLAlchemy, so that I can make use of varchar(max) in my application?
> >> >
> >> > Can you recommend a hotfix for using varchar(max) in current 
> SQLAlchemy applications that need to handle Unicode supplementary 
> characters (_SC)?
> >> >
> >> > I really appreciate your help.
> >> >
> >> > Best regards
> >> > Nicolas
> >> > On Wednesday, October 14, 2020 at 3:36:11 PM UTC+2 Mike Bayer wrote:
> >> >>
> >> >>
> >> >>
> >> >> On Wed, Oct 14, 2020, at 5:35 AM, Nicolas Lykke Iversen wrote:
> >> >>
> >> >> Hi Mike,
> >> >>
> >> >> I've now tested inserting strings with more than 2000 characters 
> using Azure Data Studio (SQL Server GUI) and everything works.
> >> >>
> >> >> 

Re: [sqlalchemy] Cannot insert strings with a length greater than 2000 into columns with a datatype of Text, VARCHAR, or NVARCHAR using MS SQL Server 2017 and 2019

2020-10-15 Thread Simon King
Do you know if there is a downside to calling setinputsizes like that?
To put it another way, why doesn't pyodbc configure itself like that
automatically? Why do you think this belongs in SQLAlchemy rather than
pyodbc?

I suspect the answer is that most applications don't need it and there
is a downside (perhaps a performance implication?).

I've never used SQL Server, but the fact that the error message refers
to these collations as "legacy" suggests that an alternative collation
might be better.
https://docs.microsoft.com/en-us/sql/relational-databases/collations/collation-and-unicode-support?view=sql-server-ver15#Supplementary_Characters
says:

SQL Server 2019 (15.x) extends supplementary character support to
the char and varchar data types with the new UTF-8 enabled collations
(_UTF8). These data types are also capable of representing the full
Unicode character range.

If you can restrict yourself to SQL Server 2019, that might be a better option.

Simon

On Thu, Oct 15, 2020 at 10:08 AM Nicolas Lykke Iversen
 wrote:
>
> Thank you, Simon.
>
> I'm curious whether this is the way to do it in the future, or whether 
> SQLAlchemy should implement varchar(max) properly?
>
> What would the argument be for not implementing varchar(max) in the pyodbc 
> dialect?
>
> On Thursday, October 15, 2020 at 11:05:32 AM UTC+2 Simon King wrote:
>>
>> You could call 'setinputsizes' in a handler for the
>> 'before_cursor_execute' event, something like this:
>>
>>
>> from sqlalchemy import event
>>
>> @event.listens_for(SomeEngine, 'before_cursor_execute')
>> def receive_before_cursor_execute(conn, cursor, statement,
>> parameters, context, executemany):
>> cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])
>>
>>
>> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents
>> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute
>>
>> Hope that helps,
>>
>> Simon
>>
>>
>> On Thu, Oct 15, 2020 at 8:27 AM Nicolas Lykke Iversen  
>> wrote:
>> >
>> > Hi Mike,
>> >
>> > I have created an issue for pyodbc: 
>> > https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-708930870
>> >
>> > I've gotten really good feedback there from Microsoft, and a fix has been 
>> > proposed that works:
>> >
>> > "You can try to use setinputsizes on your parameter to tell it to send as 
>> > varchar(max): cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])"
>> >
>> > I'm by no means an SQLAlchemy expert, but shouldn't the pyodbc dialect be 
>> > updated to support varchar(max) using the proposed method? If not, how can 
>> > I execute cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),]) using 
>> > SQLAlchemy, so that I can make use of varchar(max) in my application?
>> >
>> > Can you recommend a hotfix for using varchar(max) in current SQLAlchemy 
>> > applications that need to handle Unicode supplementary characters (_SC)?
>> >
>> > I really appreciate your help.
>> >
>> > Best regards
>> > Nicolas
>> > On Wednesday, October 14, 2020 at 3:36:11 PM UTC+2 Mike Bayer wrote:
>> >>
>> >>
>> >>
>> >> On Wed, Oct 14, 2020, at 5:35 AM, Nicolas Lykke Iversen wrote:
>> >>
>> >> Hi Mike,
>> >>
>> >> I've now tested inserting strings with more than 2000 characters using 
>> >> Azure Data Studio (SQL Server GUI) and everything works.
>> >>
>> >> Furthermore, I've tested pyodbc (DBAPI) directly, and it only fails when 
>> >> inserting such strings using parameterised SQL queries (it succeeds 
>> >> without using parametrised queries).
>> >>
>> >>
>> >> that would be expected because all the datatype-related issues occur when 
>> >> bound parameters are passed.
>> >>
>> >>
>> >>
>> >> You can see my POC below, if you have any interest.
>> >>
>> >> I guess it should be submitted as a bug to pyodbc... Do you know if I can 
>> >> disable parametrisation for certain SQL queries in SQLAlchemy?
>> >>
>> >>
>> >> there is not and this is definitely an issue that has to be solved at the 
>> >> pyodbc level, either a bug on their end or something in your 
>> >> configuration that has to be changed.
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> Best regards (and thanks for your help and support!!!)
>> >> Nicolas
>> >>
>> >> System info:
>> >> python v. 3.8.5
>> >> pyodbc v. 4.0.30
>> >> msodbcsql17 v. 17.6.1.1
>> >>
>> >> POC:
>> >> import sys
>> >> import pyodbc
>> >>
>> >> host = 'tcp:127.0.0.1,1433'
>> >> db = 'pyodbc_test'
>> >> user = 'sa'
>> >> pwd = 'P@ssw0rd'
>> >>
>> >> print('started')
>> >>
>> >> cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL 
>> >> Server};SERVER='+host+';DATABASE='+'master'+';UID='+user+';PWD='+ pwd, 
>> >> autocommit=True)
>> >> cursor = cnxn.cursor()
>> >>
>> >> try:
>> >> cursor.execute(f'CREATE DATABASE {db} COLLATE 
>> >> Latin1_General_100_CI_AS_SC')
>> >> except pyodbc.ProgrammingError as e:
>> >> pass # database exists
>> >>
>> >> cursor.execute(f'USE {db}')
>> >>
>> >> try:
>> >> cursor.execute("""
>> >> CREATE TABLE msg (
>> >> id int 

Re: [sqlalchemy] Cannot insert strings with a length greater than 2000 into columns with a datatype of Text, VARCHAR, or NVARCHAR using MS SQL Server 2017 and 2019

2020-10-15 Thread Nicolas Lykke Iversen
Thank you, Simon.

I'm curious whether this is the way to do it in the future, or whether 
SQLAlchemy should implement varchar(max) properly?

What would the argument be for not implementing varchar(max) in the pyodbc 
dialect?

On Thursday, October 15, 2020 at 11:05:32 AM UTC+2 Simon King wrote:

> You could call 'setinputsizes' in a handler for the
> 'before_cursor_execute' event, something like this:
>
>
> from sqlalchemy import event
>
> @event.listens_for(SomeEngine, 'before_cursor_execute')
> def receive_before_cursor_execute(conn, cursor, statement,
> parameters, context, executemany):
> cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])
>
>
>
> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents
>
> https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute
>
> Hope that helps,
>
> Simon
>
>
> On Thu, Oct 15, 2020 at 8:27 AM Nicolas Lykke Iversen  
> wrote:
> >
> > Hi Mike,
> >
> > I have created an issue for pyodbc: 
> https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-708930870
> >
> > I've gotten really good feedback there from Microsoft, and a fix has 
> been proposed that works:
> >
> > "You can try to use setinputsizes on your parameter to tell it to send 
> as varchar(max): cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])"
> >
> > I'm by no means an SQLAlchemy expert, but shouldn't the pyodbc dialect 
> be updated to support varchar(max) using the proposed method? If not, how 
> can I execute cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),]) using 
> SQLAlchemy, so that I can make use of varchar(max) in my application?
> >
> > Can you recommend a hotfix for using varchar(max) in current SQLAlchemy 
> applications that need to handle Unicode supplementary characters (_SC)?
> >
> > I really appreciate your help.
> >
> > Best regards
> > Nicolas
> > On Wednesday, October 14, 2020 at 3:36:11 PM UTC+2 Mike Bayer wrote:
> >>
> >>
> >>
> >> On Wed, Oct 14, 2020, at 5:35 AM, Nicolas Lykke Iversen wrote:
> >>
> >> Hi Mike,
> >>
> >> I've now tested inserting strings with more than 2000 characters using 
> Azure Data Studio (SQL Server GUI) and everything works.
> >>
> >> Furthermore, I've tested pyodbc (DBAPI) directly, and it only fails 
> when inserting such strings using parameterised SQL queries (it succeeds 
> without using parametrised queries).
> >>
> >>
> >> that would be expected because all the datatype-related issues occur 
> when bound parameters are passed.
> >>
> >>
> >>
> >> You can see my POC below, if you have any interest.
> >>
> >> I guess it should be submitted as a bug to pyodbc... Do you know if I 
> can disable parametrisation for certain SQL queries in SQLAlchemy?
> >>
> >>
> >> there is not and this is definitely an issue that has to be solved at 
> the pyodbc level, either a bug on their end or something in your 
> configuration that has to be changed.
> >>
> >>
> >>
> >>
> >>
> >> Best regards (and thanks for your help and support!!!)
> >> Nicolas
> >>
> >> System info:
> >> python v. 3.8.5
> >> pyodbc v. 4.0.30
> >> msodbcsql17 v. 17.6.1.1
> >>
> >> POC:
> >> import sys
> >> import pyodbc
> >>
> >> host = 'tcp:127.0.0.1,1433'
> >> db = 'pyodbc_test'
> >> user = 'sa'
> >> pwd = 'P@ssw0rd'
> >>
> >> print('started')
> >>
> >> cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL 
> Server};SERVER='+host+';DATABASE='+'master'+';UID='+user+';PWD='+ pwd, 
> autocommit=True)
> >> cursor = cnxn.cursor()
> >>
> >> try:
> >> cursor.execute(f'CREATE DATABASE {db} COLLATE 
> Latin1_General_100_CI_AS_SC')
> >> except pyodbc.ProgrammingError as e:
> >> pass # database exists
> >>
> >> cursor.execute(f'USE {db}')
> >>
> >> try:
> >> cursor.execute("""
> >> CREATE TABLE msg (
> >> id int identity(1,1) not null,
> >> content varchar(max) not null
> >> );""")
> >> except pyodbc.ProgrammingError as exc:
> >> pass # table exists
> >>
> >> content = 2000 * 'A'
> >>
> >> cursor.execute(f"""
> >> INSERT INTO msg (content)
> >> VALUES ('{content}')""")
> >> print(f'non-param: {len(content)=}: success')
> >>
> >> sql = f"""
> >> INSERT INTO msg (content)
> >> VALUES (?)"""
> >> cursor.execute(sql, (content))
> >> print(f'param: {len(content)=}: success')
> >>
> >> content = 2001 * 'A'
> >>
> >> cursor.execute(f"""
> >> INSERT INTO msg (content)
> >> VALUES ('{content}')""")
> >> print(f'non-param: {len(content)=}: success')
> >>
> >> # this fails!
> >> sql = f"""
> >> INSERT INTO msg (content)
> >> VALUES (?)"""
> >> cursor.execute(sql, (content))
> >> print(f'param: {len(content)=}: success')
> >>
> >>
> >> #cursor.execute('SELECT * FROM msg')
> >> #rows = cursor.fetchall()
> >> #for r in rows:
> >> # print(r)
> >>
> >> print('finished')
> >>
> >>
> >>
> >> On Wednesday, October 14, 2020 at 12:43:25 AM UTC+2 Mike Bayer wrote:
> >>
> >>
> >>
> >> On Tue, Oct 13, 2020, at 4:57 PM, Nicolas Lykke Iversen wrote:
> >>
> >> Thank you, Mike - very much appreciated!
> >>
> >> Just to be clear, pyodbc 

Re: [sqlalchemy] Cannot insert strings with a length greater than 2000 into columns with a datatype of Text, VARCHAR, or NVARCHAR using MS SQL Server 2017 and 2019

2020-10-15 Thread Simon King
You could call 'setinputsizes' in a handler for the
'before_cursor_execute' event, something like this:


from sqlalchemy import event

@event.listens_for(SomeEngine, 'before_cursor_execute')
def receive_before_cursor_execute(conn, cursor, statement,
                                  parameters, context, executemany):
    cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])


https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents
https://docs.sqlalchemy.org/en/13/core/events.html#sqlalchemy.events.ConnectionEvents.before_cursor_execute

Hope that helps,

Simon


On Thu, Oct 15, 2020 at 8:27 AM Nicolas Lykke Iversen  wrote:
>
> Hi Mike,
>
> I have created an issue for pyodbc: 
> https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-708930870
>
> I've gotten really good feedback there from Microsoft, and a fix has been 
> proposed that works:
>
> "You can try to use setinputsizes on your parameter to tell it to send as 
> varchar(max): cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])"
>
> I'm by no means an SQLAlchemy expert, but shouldn't the pyodbc dialect be 
> updated to support varchar(max) using the proposed method? If not, how can I 
> execute cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),]) using SQLAlchemy, 
> so that I can make use of varchar(max) in my application?
>
> Can you recommend a hotfix for using varchar(max) in current SQLAlchemy 
> applications that need to handle Unicode supplementary characters (_SC)?
>
> I really appreciate your help.
>
> Best regards
> Nicolas
> On Wednesday, October 14, 2020 at 3:36:11 PM UTC+2 Mike Bayer wrote:
>>
>>
>>
>> On Wed, Oct 14, 2020, at 5:35 AM, Nicolas Lykke Iversen wrote:
>>
>> Hi Mike,
>>
>> I've now tested inserting strings with more than 2000 characters using Azure 
>> Data Studio (SQL Server GUI) and everything works.
>>
>> Furthermore, I've tested pyodbc (DBAPI) directly, and it only fails when 
>> inserting such strings using parameterised SQL queries (it succeeds without 
>> using parametrised queries).
>>
>>
>> that would be expected because all the datatype-related issues occur when 
>> bound parameters are passed.
>>
>>
>>
>> You can see my POC below, if you have any interest.
>>
>> I guess it should be submitted as a bug to pyodbc... Do you know if I can 
>> disable parametrisation for certain SQL queries in SQLAlchemy?
>>
>>
>> there is not and this is definitely an issue that has to be solved at the 
>> pyodbc level, either a bug on their end or something in your configuration 
>> that has to be changed.
>>
>>
>>
>>
>>
>> Best regards (and thanks for your help and support!!!)
>> Nicolas
>>
>> System info:
>> python v. 3.8.5
>> pyodbc v. 4.0.30
>> msodbcsql17 v. 17.6.1.1
>>
>> POC:
>> import sys
>> import pyodbc
>>
>> host = 'tcp:127.0.0.1,1433'
>> db = 'pyodbc_test'
>> user = 'sa'
>> pwd = 'P@ssw0rd'
>>
>> print('started')
>>
>> cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL 
>> Server};SERVER='+host+';DATABASE='+'master'+';UID='+user+';PWD='+ pwd, 
>> autocommit=True)
>> cursor = cnxn.cursor()
>>
>> try:
>> cursor.execute(f'CREATE DATABASE {db} COLLATE 
>> Latin1_General_100_CI_AS_SC')
>> except pyodbc.ProgrammingError as e:
>> pass # database exists
>>
>> cursor.execute(f'USE {db}')
>>
>> try:
>> cursor.execute("""
>> CREATE TABLE msg (
>> id int identity(1,1) not null,
>> content varchar(max) not null
>> );""")
>> except pyodbc.ProgrammingError as exc:
>> pass # table exists
>>
>> content = 2000 * 'A'
>>
>> cursor.execute(f"""
>> INSERT INTO msg (content)
>> VALUES ('{content}')""")
>> print(f'non-param: {len(content)=}: success')
>>
>> sql = f"""
>>   INSERT INTO msg (content)
>>   VALUES (?)"""
>> cursor.execute(sql, (content))
>> print(f'param: {len(content)=}: success')
>>
>> content = 2001 * 'A'
>>
>> cursor.execute(f"""
>> INSERT INTO msg (content)
>> VALUES ('{content}')""")
>> print(f'non-param: {len(content)=}: success')
>>
>> # this fails!
>> sql = f"""
>>   INSERT INTO msg (content)
>>   VALUES (?)"""
>> cursor.execute(sql, (content))
>> print(f'param: {len(content)=}: success')
>>
>>
>> #cursor.execute('SELECT * FROM msg')
>> #rows = cursor.fetchall()
>> #for r in rows:
>> #print(r)
>>
>> print('finished')
>>
>>
>>
>> On Wednesday, October 14, 2020 at 12:43:25 AM UTC+2 Mike Bayer wrote:
>>
>>
>>
>> On Tue, Oct 13, 2020, at 4:57 PM, Nicolas Lykke Iversen wrote:
>>
>> Thank you, Mike - very much appreciated!
>>
>> Just to be clear, pyodbc is not a driver, it’s an ODBC-compliant DBAPI, 
>> right? I separately downloaded a driver for SQL Server from Microsoft, which 
>> pyodbc makes use of.
>>
>>
>> Right, pyodbc is the DBAPI in this case, which we usually refer to as a 
>> "driver", but in the case of ODBC the "driver" is more specifically the 
>> separate ODBC driver component.
>>
>>
>> Do you suggest that changing pyodbc to another SQL Server DBAPI would solve 
>> the problem?
>>
>>
>> I 

Re: [sqlalchemy] Cannot insert strings with a length greater than 2000 into columns with a datatype of Text, VARCHAR, or NVARCHAR using MS SQL Server 2017 and 2019

2020-10-15 Thread Nicolas Lykke Iversen
Hi Mike,

I have created an issue for pyodbc: 
https://github.com/mkleehammer/pyodbc/issues/835#issuecomment-708930870

I've gotten really good feedback there from Microsoft, and a fix has been 
proposed that works:


*"You can try to use setinputsizes on your parameter to tell it to send as 
varchar(max): cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),])"*
I'm by no means an SQLAlchemy expert, but shouldn't the pyodbc dialect be 
updated to support varchar(max)using the proposed method? If not, how can I 
execute cursor.setinputsizes([(pyodbc.SQL_WVARCHAR,0,0),]) using 
SQLAlchemy, so that I can make use of  varchar(max)in my application?

Can you recommend a hotfix for using varchar(max)in current SQLAlchemy 
applications that need to handle Unicode supplementary characters (_SC)? 

I appreciate really appreciate your help.

Best regards
Nicolas 
On Wednesday, October 14, 2020 at 3:36:11 PM UTC+2 Mike Bayer wrote:

>
>
> On Wed, Oct 14, 2020, at 5:35 AM, Nicolas Lykke Iversen wrote:
>
> Hi Mike,
>
> I've now tested inserting strings with more than 2000 characters using 
> Azure Data Studio (SQL Server GUI) and everything works.
>
> Furthermore, I've tested pyodbc (DBAPI) directly, and it only fails when 
> inserting such strings using parameterised SQL queries (it succeeds without 
> using parametrised queries).
>
>
> that would be expected because all the datatype-related issues occur when 
> bound parameters are passed.
>
>
>
> You can see my POC below, if you have any interest.
>
> I guess it should be submitted as a bug to pyodbc... Do you know if I can 
> disable parametrisation for certain SQL queries in SQLAlchemy?
>
>
> there is not and this is definitely an issue that has to be solved at the 
> pyodbc level, either a bug on their end or something in your configuration 
> that has to be changed.
>
>
>
>
>
> Best regards (and thanks for your help and support!!!)
> Nicolas
>
> *System info*:
> python v. 3.8.5
> pyodbc v. 4.0.30
> msodbcsql17 v. 17.6.1.1
>
> *POC*:
> import sys
> import pyodbc
>
> host = 'tcp:127.0.0.1,1433'
> db = 'pyodbc_test'
> user = 'sa'
> pwd = 'P@ssw0rd'
>
> print('started')
>
> cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL 
> Server};SERVER='+host+';DATABASE='+'master'+';UID='+user+';PWD='+ pwd, 
> autocommit=True)
> cursor = cnxn.cursor()
>
> try:
> cursor.execute(f'CREATE DATABASE {db} COLLATE 
> Latin1_General_100_CI_AS_SC')
> except pyodbc.ProgrammingError as e:
> pass # database exists
>
> cursor.execute(f'USE {db}')
>
> try:
> cursor.execute("""
> CREATE TABLE msg (
> id int identity(1,1) not null,
> content varchar(max) not null
> );""")
> except pyodbc.ProgrammingError as exc:
> pass # table exists
>
> content = 2000 * 'A'
>
> cursor.execute(f"""
> INSERT INTO msg (content)
> VALUES ('{content}')""")
> print(f'non-param: {len(content)=}: success')
>
> sql = f"""
>   INSERT INTO msg (content)
>   VALUES (?)"""
> cursor.execute(sql, (content))
> print(f'param: {len(content)=}: success')
>
> content = 2001 * 'A'
>
> cursor.execute(f"""
> INSERT INTO msg (content)
> VALUES ('{content}')""")
> print(f'non-param: {len(content)=}: success')
>
> # this fails!
> sql = f"""
>   INSERT INTO msg (content)
>   VALUES (?)"""
> cursor.execute(sql, (content))
> print(f'param: {len(content)=}: success')
>
>
> #cursor.execute('SELECT * FROM msg')
> #rows = cursor.fetchall()
> #for r in rows:
> #print(r)
>
> print('finished')
>
>
>
> On Wednesday, October 14, 2020 at 12:43:25 AM UTC+2 Mike Bayer wrote:
>
>
>
> On Tue, Oct 13, 2020, at 4:57 PM, Nicolas Lykke Iversen wrote:
>
> Thank you, Mike - very much appreciated!
>
> Just to be clear, pyodbc is not a driver, it’s an ODBC-compliant DBAPI, 
> right? I separately downloaded a driver for SQL Server from Microsoft, 
> which pyodbc makes use of.
>
>
> Right, pyodbc is the DBAPI in this case, which we usually refer to as a 
> "driver", but in the case of ODBC the "driver" is more specifically the 
> separate ODBC driver component.
>
>
> Do you suggest that changing pyodbc to another SQL Server DBAPI would 
> solve the problem?
>
>
> I suggest that if there is no issue with the query you're running outside 
> of the context of pyodbc that you submit an issue to pyodbc at 
> https://github.com/mkleehammer/pyodbc/issues .  However I was able to 
> find a discussion thread about your error message that seemed to be 
> independent of ODBC.
>
>
>
>
>
> If so, can you recommend another DBAPI for SQL Server? Or do you think 
> that the problem is caused by Microsoft’s driver?
>
>
> pyodbc is the only supported driver for SQL Server that exists now for 
> Python.   You also definitely want to use Microsoft's ODBC drivers, so 
> you're already there.
>
>
>
> I’m pretty sure SQL Server works fine when accessed using .NET, otherwise 
> the Internet would be full of complaints regarding not being able to insert 
> +2000 characters in a varchar(max).
>

[sqlalchemy] Text subquery column names in the results

2020-10-15 Thread Kotofos online

Hi,
Could you shed some light on what I might be doing incorrectly? I have this 
text() SELECT * query on top of a one-column sub-query and in the result, I 
am not getting that column name.

```
stmt = text('select "FirstName", from "Customer"')
stmt = select('*').select_from(stmt.columns().alias())
print(stmt)

SELECT *
FROM (select "FirstName",  from "Customer") AS anon_1
```

This works, but produces an incorrect column name in the output:
```
res = engine.execute(stmt)
keys = res.fetchall()[0].keys()
print(keys)

['*']
```

But when the subquery has two columns, then it works as expected:
```
stmt = text('select "FirstName", "LastName" from "Customer"')
stmt = select('*').select_from(stmt.columns().alias())
res = engine.execute(stmt)
keys = res.fetchall()[0].keys()
print(keys)

['FirstName', 'LastName']
```
So, is there a better way to wrap a text query? Why is the column name lost 
in the first case? Is it a bug?

Tested on 1.3.20 and 1.2.19
