Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2019-01-31 Thread Mark Pearl
No, the error relates to this:

sqlalchemy ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000]
[Microsoft][ODBC Driver 13 for SQL Server][SQL Server]111214;An attempt to
complete a transaction has failed. No corresponding transaction found.
(111214) (SQLEndTran)') (Background on this error at:
http://sqlalche.me/e/f405)
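
This SQLEndTran failure matches the rollback discussion later in this
thread: Azure SQL DW appears to reject a rollback when it has no
corresponding open transaction. A minimal sketch of one possible
workaround, assuming a recent SQLAlchemy, is to run the engine in
autocommit mode so the pool's rollback-on-return never issues SQLEndTran
(all bracketed values are placeholders, not real credentials):

import sqlalchemy

# Placeholders throughout -- fill in real credentials, server, port, database.
connection_string = (
    "mssql+pyodbc://<user>:<password>@<server>.database.windows.net:<port>/"
    "<database>?driver=ODBC+Driver+13+for+SQL+Server"
)
# AUTOCOMMIT keeps the DBAPI connection out of an implicit transaction, so
# SQLAlchemy's rollback-on-return has nothing to roll back.
engine = sqlalchemy.create_engine(connection_string,
                                  isolation_level="AUTOCOMMIT")
with engine.connect() as conn:
    conn.execute(sqlalchemy.text("SELECT 1"))

This is a sketch only; whether it is enough for pd.to_sql writes depends
on the pandas and SQLAlchemy versions in play.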

On Thu, Jan 31, 2019 at 11:02 AM Mike Bayer wrote:

> for "dm_exec_sessions" ?  that's an old SQLAlchemy bug that was fixed
> long ago.  see https://github.com/sqlalchemy/sqlalchemy/issues/3994
> please upgrade.



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2019-01-31 Thread Mike Bayer
For "dm_exec_sessions"? That's an old SQLAlchemy bug that was fixed
long ago; see https://github.com/sqlalchemy/sqlalchemy/issues/3994 and
please upgrade.
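
A quick way to confirm which version the notebook environment is actually
running (a plain Python check, nothing Azure-specific):

import sqlalchemy
print(sqlalchemy.__version__)  # the fix referenced above landed long before 2019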

On Wed, Jan 30, 2019 at 10:52 PM mjpearl wrote:
>
> Any solution for this?



[sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2019-01-30 Thread mjpearl
Any solution for this?

On Monday, September 11, 2017 at 6:34:47 PM UTC-4, dirk.biesinger wrote:
>
> I am encountering errors when trying to use the pd.to_sql function to 
> write a dataframe to MS SQL Data Warehouse.
> The connection works when NOT using sqlalchemy engines.
> I can read dataframes as well as row-by-row via select statements when I
> use pyodbc connections.
> I can write data via insert statements (as well as delete data) when using 
> pyodbc.
> However, when I try to connect using a sqlalchemy engine I run into a 
> string of error messages starting with:
>
> ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] 
> [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Catalog view 
> 'dm_exec_sessions' is not supported in this version. (104385) 
> (SQLExecDirectW)")
>
>
> I have searched online, and this exact error seems to have been reported / 
> evaluated in May of this year as issue #3994:
>
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3994/azure-sql-datawarehouse-basic
>
>
> I could not find a solution to this, and I'd really dislike to do a
> line-wise or blob insert statement. (I'm working with multiple datasets
> that each have a few million rows, so execution time is a consideration,
> although the result sets I'm getting are more in the 100k-row range each.)
>
>
> I get the same error messages even when I replace the pd.to_sql command with 
> a simple engine.connect()
>
>
> Enclosed my installed packages (packages.list)
>
> Enclosed the full traceback (traceback.txt)
>
>
> This is the code I'm using:
>
> connection_string =
> "mssql+pyodbc://<user>:<password>@<server>.database.windows.net:<port>/<database>?driver=ODBC+Driver+13+for+SQL+Server"
> engn = sqlalchemy.engine.create_engine(connection_string, echo=True)
> engn.connect()
>
>
> I'm very well aware that MS SQL DataWarehouse behaves a bit differently, so
> I'm open to some experimenting to get this issue narrowed down.
>
> In case it matters: I'm running an Ubuntu 16.04 VM on Azure with a Jupyter
> notebook server and Python 3.6.1.
>
> Best,
>
> DB
>
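
A minimal sketch of the batched write being asked about, once the engine
connects, assuming a pandas DataFrame df and the engn engine from the
snippet above; chunksize is the pandas to_sql parameter that batches the
INSERTs instead of sending every row individually:

# df is assumed to be the DataFrame being loaded; engn is the engine above.
df.to_sql(
    "DSI",               # hypothetical target table name, used later in the thread
    engn,
    if_exists="append",  # append to the existing table rather than recreate it
    index=False,         # don't write the DataFrame index as an extra column
    chunksize=10000,     # insert in 10k-row batches to bound round trips
)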




Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2018-10-12 Thread Jingting Lu
Hi Dirk,

Happy to report that there are more projects using DW. I have the same
issues here: I'm using Azure SQL DW at the moment and building a serverless
function app that reads from and sends data back to the SQL DW.
Did you eventually find a solution other than looping through the dataframe?

Cheers,
Nicole

On Wednesday, September 13, 2017 at 5:39:47 PM UTC-5, dirk.biesinger wrote:
>
> yeah, I ran into some 'nice' 'features' in this project.
> the azure datawarehouse does not behave or work like a typical old-school 
> sql server.
> There might be some potential there, just not sure how many projects would 
> be using the datawarehouse

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
There is definitely interest in this platform and we will need to support
it. It's just that it falls under the category of a subscriber database, like
Redshift or Google's whatever-it-was-called, so it needs some folks with
access to get it working, and possibly an external overlay project to
support it; sqlalchemy-redshift is the current example.

On Sep 13, 2017 6:39 PM, "dirk.biesinger" wrote:

yeah, I ran into some 'nice' 'features' in this project.
the azure datawarehouse does not behave or work like a typical old-school
sql server.
There might be some potential there, just not sure how many projects would
be using the datawarehouse


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
Yeah, I ran into some 'nice' 'features' in this project.
The Azure data warehouse does not behave or work like a typical old-school
SQL server.
There might be some potential there, just not sure how many projects would
be using the data warehouse.


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 6:13 PM, dirk.biesinger wrote:
> "oh that is very interesting." he says and then it's getting eerily quiet.
> I guess Mike and Dilly are somewhere in the depth of code and docs...


Unfortunately not. Azure seems to offer free trials if you are willing
to give them a credit card number, so I can perhaps eventually get
around to working with that, but at the moment it would be preferable
if someone wants to work on an Azure variant of the SQL Server
dialect. It shouldn't be hard, but the various glitches need to be
understood before anything can be committed.
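
A rough sketch of how such an Azure variant could plug in as an external
dialect, following the sqlalchemy-redshift pattern mentioned elsewhere in
the thread; every name below is hypothetical and the single override shown
is only illustrative:

from sqlalchemy.dialects import registry
from sqlalchemy.dialects.mssql.pyodbc import MSDialect_pyodbc

class AzureDWDialect_pyodbc(MSDialect_pyodbc):
    """Hypothetical Azure SQL DW variant of the SQL Server dialect."""

    def _get_default_schema_name(self, connection):
        # Azure SQL DW lacks some catalog views (e.g. dm_exec_sessions),
        # so skip the usual probe and assume the default schema.
        return "dbo"

# After registration, create_engine("mssql+pyodbc_azuredw://...") finds it.
registry.register("mssql.pyodbc_azuredw", __name__, "AzureDWDialect_pyodbc")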


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
"oh that is very interesting." he says and then it's getting eerily quiet.
I guess Mike and Dilly are somewhere in the depth of code and docs...


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 4:29 PM, dirk.biesinger wrote:
>
> as for the rollback call,
> adding it after the print command, does not change anything.
> When I insert it between cursor.execute() and rows = cursor.fetchall(), I
> get an error as expected.

oh that is very interesting.



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
I hear your pain,

unfortunately we have to have very tight security on our server instances;
I can only log into the server when I am in the office, and I can't log in
with my work laptop from home (it has a non-authorized IP address).
If it were possible, I would have given you access to a blank area.

As for the rollback call:
adding it after the print command does not change anything.
When I insert it between cursor.execute() and rows = cursor.fetchall(), I
get an error, as expected.
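
A sketch of the two placements just described, assuming a valid pyodbc
connection string in conn_str (elided here, as elsewhere in the thread):

import pyodbc

connection = pyodbc.connect(conn_str)  # conn_str is assumed, not shown
connection.autocommit = 0
cursor = connection.cursor()
cursor.execute("SELECT ...")  # stand-in for a proper statement on a real table

# Placement A: rolling back here, between execute() and fetchall(),
# raises an error, as expected.
# connection.rollback()

rows = cursor.fetchall()
print(rows)
cursor.close()

# Placement B: rolling back after the print; per the report above,
# this "does not change anything".
connection.rollback()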



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 3:58 PM, dirk.biesinger wrote:
> using
>
> connection = pyodbc.connect()
> connection.autocommit = 0
> cursor = connection.cursor()
> cursor.execute([proper sql statement that references a table])
> rows = cursor.fetchall()
> print(rows)
> cursor.close()
>
> gives me the output out of the table that I expect.
> So if you were wondering if a raw pyodbc connection works, this is
> confirmed.

did you call:

connection.rollback()

the stack traces you have given me indicate this method cannot be
called else Azure raises an error.  This must be illustrated as
definitely the problem, and not a side effect of something else.

This is why this would go a lot quicker if someone had a *blank* azure
database on a cloud node somewhere for me to log into.  I don't need
your customer data.







Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
using

connection = pyodbc.connect()
connection.autocommit = 0
cursor = connection.cursor()
cursor.execute([proper sql statement that references a table])
rows = cursor.fetchall()
print(rows)
cursor.close()

gives me the output out of the table that I expect.
So if you were wondering if a raw pyodbc connection works, this is 
confirmed.


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 3:27 PM, dirk.biesinger wrote:
> I have the 'patched' pyodbc.py file active.
> Executing your code snippet does NOT produce an error or any output for that
> matter.
>
>
> On Wednesday, September 13, 2017 at 12:22:30 PM UTC-7, Mike Bayer wrote:
>>
>> On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger
>>  wrote:
>> > Got ya,
>> >
>> > so we could solve the issue on the sqlalchemy end with the alteration of
>> > the
>> > pyodbc.py file.
>> > I assume you'll include this in the next release?
>>
>> um.
>>
>> can you just confirm for me this makes the error?
>>
>>
>> connection = pyodbc.connect()
>> connection.autocommit = 0
>> connection.rollback()


try it like this:


connection = pyodbc.connect()
connection.autocommit = 0
cursor = connection.cursor()
cursor.execute("SELECT 1")
cursor.close()

connection.rollback()
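
(A self-contained version of that check, for reference: a sketch assuming a
placeholder connection string -- fill in the same server/credentials that
work for plain pyodbc. The SELECT is only there to make sure a transaction
is actually open before the rollback, which is the call under test.)

import pyodbc

# Placeholder connection string -- substitute real server/database/credentials.
conn_str = (
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=yourdb;UID=youruser;PWD=yourpassword"
)

connection = pyodbc.connect(conn_str)
connection.autocommit = 0   # stay inside an implicit transaction

cursor = connection.cursor()
cursor.execute("SELECT 1")  # open a real transaction before rolling back
cursor.close()

connection.rollback()       # the call under test
connection.close()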



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
I have the 'patched' pyodbc.py file active.
Executing your code snippet does NOT produce an error or any output for 
that matter.


On Wednesday, September 13, 2017 at 12:22:30 PM UTC-7, Mike Bayer wrote:
>
> On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger 
>  wrote: 
> > Got ya, 
> > 
> > so we could solve the issue on the sqlalchemy end with the alteration of 
> the 
> > pyodbc.py file. 
> > I assume you'll include this in the next release? 
>
> um. 
>
> can you just confirm for me this makes the error? 
>
>
> connection = pyodbc.connect() 
> connection.autocommit = 0 
> connection.rollback() 
>

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger
 wrote:
> Got ya,
>
> so we could solve the issue on the sqlalchemy end with the alteration of the
> pyodbc.py file.
> I assume you'll include this in the next release?

um.

can you just confirm for me this makes the error?


connection = pyodbc.connect()
connection.autocommit = 0
connection.rollback()



> The issue with creating a table when the option "if_exists='append'" is set
> in the df.to_sql() call is a pandas problem.
>
> Thank you for your help.
>
> Best,
> DB

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
Got ya,

so we could solve the issue on the sqlalchemy end with the alteration of 
the pyodbc.py file.
I assume you'll include this in the next release?
The issue with creating a table when the option "if_exists='append'" is set
in the df.to_sql() call is a pandas problem.

Thank you for your help.

Best,
DB

On Wednesday, September 13, 2017 at 11:45:15 AM UTC-7, Mike Bayer wrote:
>
> This is all on the Pandas side so you'd need to talk to them.

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger
 wrote:
> [...]
> Should the create table statement not be omitted when
> "if_exists='append'" is set?

This is all on the Pandas side so you'd need to talk to them.



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
I don't get why the table is getting created in the first place. A table 
with this name exists, and the option "if_exists='append'" should append 
the dataframe to the existing table.
There should not be a dropping of the table (which I have not seen) nor 
creation of the table.

And if the table is being created, I think it should be possible to define
the length of the field, e.g. varchar([variable_to_be_submitted]).
In my case I expect this particular table to grow to several hundred 
million rows, so assigned storage space is a factor.

the existing table was created like this:

CREATE TABLE dbo.DSI
(
a datetime,
b varchar(10) null,
c varchar(100) null,
d varchar(10) null,
e varchar(100) null,
f decimal (8,6) null,
g decimal (8,6) null
)
 
If I read and understand the error stack correctly, the cause is the create
table statement, which I would fully expect to cause an error, as the
table exists.
Should the create table statement not be omitted when
"if_exists='append'" is set?

On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer wrote:
>
> OK so... don't use VARCHAR(max)?  What datatype would you like?  We 
> have most of them and you can make new ones too. 

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
OK so... don't use VARCHAR(max)?  What datatype would you like?  We
have most of them and you can make new ones too.
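
(Concretely, that might look something like the sketch below; the column
names are just the ones from this thread, and the custom type at the end is
an illustrative example rather than anything the dialect ships with.)

from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects.mssql import DATETIME, DECIMAL, VARCHAR
from sqlalchemy.types import TypeDecorator

metadata = MetaData()

# Pick exact SQL Server types instead of letting anything default to VARCHAR(max).
dsi = Table(
    "DSI", metadata,
    Column("a", DATETIME),
    Column("b", VARCHAR(10)),
    Column("f", DECIMAL(8, 6)),
    schema="dbo",
)

# "Make new ones too": a custom type wrapping VARCHAR that strips whitespace
# on the way into the database.
class StrippedString(TypeDecorator):
    impl = VARCHAR

    def process_bind_param(self, value, dialect):
        return value.strip() if value is not None else None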

On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger
 wrote:
> Mike,
>
> here's the error stack (I had to mask some details):
> [...]
> ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL
> Server][SQL Server]The statement failed. Column 'b' has a data type that
> cannot participate in a columnstore index.\r\nOperation cancelled by user.
> (35343) (SQLExecDirectW)")

[sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
Mike,

here's the error stack (I had to mask some details):
The columns (data formats) in the create table statement are wrong. Also, 
this table does not have an index.

2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT 
 SERVERPROPERTY('ProductVersion')
2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT 
schema_name()
2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test unicode returns' AS NVARCHAR(60)) AS anon_1
2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT 
[INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], 
[INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], 
[INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], 
[INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], 
[INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], 
[INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], 
[INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], 
[INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], 
[INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], 
[INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], 
[INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME] 
FROM [INFORMATION_SCHEMA].[COLUMNS] 
WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS 
NVARCHAR(max)) AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? 
AS NVARCHAR(max))
2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine 
('dbo.MSODS_DSI', 'dbo')
2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine 
CREATE TABLE [dbo.MSODS_DSI] (
[a] DATETIME NULL, 
[b] VARCHAR(max) NULL, 
[c] VARCHAR(max) NULL, 
[d] VARCHAR(max) NULL, 
[e] VARCHAR(max) NULL, 
[f] FLOAT(53) NULL, 
[g] FLOAT(53) NULL
)


2017-09-13 15:07:52,152 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:54,374 INFO sqlalchemy.engine.base.Engine ROLLBACK
---
ProgrammingError  Traceback (most recent call last)
/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py 
in _execute_context(self, dialect, constructor, statement, parameters, 
*args)
   1181 parameters,
-> 1182 context)
   1183 except BaseException as e:

/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py
 
in do_execute(self, cursor, statement, parameters, context)
469 def do_execute(self, cursor, statement, parameters, 
context=None):
--> 470 cursor.execute(statement, parameters)
471 

ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL 
Server][SQL Server]The statement failed. Column 'b' has a data type that 
cannot participate in a columnstore index.\r\nOperation cancelled by user. 
(35343) (SQLExecDirectW)")

The above exception was the direct cause of the following exception:

ProgrammingError  Traceback (most recent call last)
 in ()
 17 #cnxn = pyodbc.connect(connection_str)
 18 #engn.connect()
---> 19 df.to_sql(tbl_server_out, engn, if_exists='append', index=False)

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/core/generic.py 
in to_sql(self, name, con, flavor, schema, if_exists, index, index_label, 
chunksize, dtype)
   1343 sql.to_sql(self, name, con, flavor=flavor, schema=schema,
   1344if_exists=if_exists, index=index, 
index_label=index_label,
-> 1345chunksize=chunksize, dtype=dtype)
   1346 
   1347 def to_pickle(self, path, compression='infer'):

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in 
to_sql(frame, name, con, flavor, schema, if_exists, index, index_label, 
chunksize, dtype)
469 pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
470   index_label=index_label, schema=schema,
--> 471   chunksize=chunksize, dtype=dtype)
472 
473 

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in 
to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize, 
dtype)
   1148  if_exists=if_exists, 
index_label=index_label,
   1149  schema=schema, dtype=dtype)
-> 1150 table.create()
   1151 table.insert(chunksize)
   1152 if (not name.isdigit() and not name.islower()):

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in 
create(self)
596 "'{0}' is not valid for 
if_exists".format(self.if_exists))
597 else:
--> 598 self._execute_create()
599 
600 def insert_statement(self):


[sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-11 Thread dirk.biesinger
re-attaching the files

On Monday, September 11, 2017 at 3:34:47 PM UTC-7, dirk.biesinger wrote:
>
> I am encountering errors when trying to use the pd.to_sql function to 
> write a dataframe to MS SQL Data Warehouse.
> The connection works when NOT using sqlalchemy engines.
> I can read dataframes as well as row-by-row via select statements when I 
> use pyodbc connections
> I can write data via insert statements (as well as delete data) when using 
> pyodbc.
> However, when I try to connect using a sqlalchemy engine I run into a 
> string of error messages starting with:
>
> ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] 
> [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]Catalog view 
> 'dm_exec_sessions' is not supported in this version. (104385) 
> (SQLExecDirectW)")
>
>
> I have searched online, and this exact error seems to have been reported / 
> evaluated in May of this year as issue #3994:
>
>
> https://bitbucket.org/zzzeek/sqlalchemy/issues/3994/azure-sql-datawarehouse-basic
>
>
> I could not find a solution to this, and I'd really dislike to do a line-wise 
> or blob insert statement (I'm working with multiple datasets that each has a 
> few million rows, so execution time is a consideration, although the result 
> sets I'm getting are more like in the 100k lines area each.)
>
>
> I get the same error messages even when I replace the pd.to_sql command with 
> a simple engine.connect()
>
>
> Enclosed my installed packages (packages.list)
>
> Enclosed the full traceback (traceback.txt)
>
>
> This is the code I'm using:
>
> connection_string = 
> "mssql+pyodbc://:@.database.windows.net:/?driver=ODBC+Driver+13+for+SQL+Server"
> engn = sqlalchemy.engine.create_engine(connection_string, echo=True)
> engn.connect()
>
>
> I'm very well aware that MS SQL DataWarehouse behaves a bit different, so 
> I'm open for some experimenting to get this issue narrowed down.
>
> In case it matters: I'm running an ubuntu 16.04 VM on azure with jupyter 
> notebook server and python 3.6.1.
>
> Best,
>
> DB
>

[attachment: traceback.txt]

2017-09-11 21:33:50,797 INFO sqlalchemy.engine.base.Engine SELECT  
SERVERPROPERTY('ProductVersion')
INFO:sqlalchemy.engine.base.Engine:SELECT  SERVERPROPERTY('ProductVersion')
2017-09-11 21:33:50,799 INFO sqlalchemy.engine.base.Engine ()
INFO:sqlalchemy.engine.base.Engine:()
2017-09-11 21:33:50,857 INFO sqlalchemy.engine.base.Engine SELECT schema_name()
INFO:sqlalchemy.engine.base.Engine:SELECT schema_name()
2017-09-11 21:33:50,859 INFO sqlalchemy.engine.base.Engine ()
INFO:sqlalchemy.engine.base.Engine:()
ERROR:root:An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line string', (1, 10))

ERROR:root:An unexpected error occurred while tokenizing input
The following traceback may be corrupted or invalid
The error message is: ('EOF in multi-line string', (1, 10))

---
Empty Traceback (most recent call last)
/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/pool.py in 
_do_get(self)
   1121 wait = use_overflow and self._overflow >= self._max_overflow
-> 1122 return self._pool.get(wait, self._timeout)
   1123 except sqla_queue.Empty:

/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/util/queue.py in 
get(self, block, timeout)
144 if self._empty():
--> 145 raise Empty
146 elif timeout is None:

Empty: 

During handling of the above exception, another exception occurred:

ProgrammingError  Traceback (most recent call last)
/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py 
in _wrap_pool_connect(self, fn, connection)
   2146 try:
-> 2147 return fn()
   2148 except dialect.dbapi.Error as e:

/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/pool.py in 
unique_connection(self)
327 """
--> 328 return _ConnectionFairy._checkout(self)
329 

/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/pool.py in 
_checkout(cls, pool, threadconns, fairy)
765 if not fairy:
--> 766 fairy =