Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
There is definitely interest in this platform and we will need to support
it.  It's just that it falls under the category of a subscriber database,
like Redshift or Google's equivalent, so it needs some folks with access to
get it working, and possibly an external overlay project to support it -
sqlalchemy-redshift is the current example.
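For context, an external dialect like sqlalchemy-redshift hooks in through SQLAlchemy's dialect registry; a hypothetical Azure DW overlay could be wired up the same way. Everything below the import is a made-up name, purely illustrative:

```python
from sqlalchemy.dialects import registry

# Hypothetical names: neither the package "sqlalchemy_azuredw" nor the
# class "AzureDWDialect" exists.  register() only records where
# create_engine() should look when a URL says "mssql+azuredw://...".
registry.register(
    "mssql.azuredw",               # dialect name as used in the URL
    "sqlalchemy_azuredw.dialect",  # module that would provide it
    "AzureDWDialect",              # dialect class inside that module
)
```

Until such a module actually exists, loading the dialect simply raises an import error; the registration itself is inert.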

On Sep 13, 2017 6:39 PM, "dirk.biesinger"  wrote:

yeah, I ran into some 'nice' 'features' in this project.
The Azure data warehouse does not behave or work like a typical old-school
SQL Server.
There might be some potential there; just not sure how many projects would
be using the data warehouse.


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
yeah, I ran into some 'nice' 'features' in this project.
The Azure data warehouse does not behave or work like a typical old-school
SQL Server.
There might be some potential there; just not sure how many projects would
be using the data warehouse.

On Wednesday, September 13, 2017 at 3:33:20 PM UTC-7, Mike Bayer wrote:
>
> On Wed, Sep 13, 2017 at 6:13 PM, dirk.biesinger 
>  wrote: 
> > "oh that is very interesting." he says and then it's getting eerily 
> quiet. 
> > I guess Mike and Dilly are somewhere in the depth of code and docs... 
>
>
> unfortunately not, azure seems to offer free trials if you are willing 
> to give them a credit card number so I can perhaps eventually get 
> around to working with that but at the moment it would be preferable 
> if someone wants to work on an azure variant of the SQL Server 
> dialect.  It shouldn't be hard but the various glitches need to be 
> understood for anything to be committed. 

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 6:13 PM, dirk.biesinger
 wrote:
> "oh that is very interesting." he says and then it's getting eerily quiet.
> I guess Mike and Dilly are somewhere in the depth of code and docs...


unfortunately not; Azure seems to offer free trials if you are willing
to give them a credit card number, so I can perhaps eventually get
around to working with that, but at the moment it would be preferable
if someone wants to work on an Azure variant of the SQL Server
dialect.  It shouldn't be hard, but the various glitches need to be
understood before anything can be committed.


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
"oh that is very interesting." he says and then it's getting eerily quiet.
I guess Mike and Dilly are somewhere in the depth of code and docs...

On Wednesday, September 13, 2017 at 2:05:22 PM UTC-7, Mike Bayer wrote:
>
> On Wed, Sep 13, 2017 at 4:29 PM, dirk.biesinger 
>  wrote: 
> > 
> > as for the rollback call, 
> > adding it after the print command, does not change anything. 
> > When I insert it between cursor.execute() and rows = cursor.fetchall(), 
> I 
> > get an error as expected. 
>
> oh that is very interesting. 
>
>

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 4:29 PM, dirk.biesinger
 wrote:
>
> as for the rollback call,
> adding it after the print command, does not change anything.
> When I insert it between cursor.execute() and rows = cursor.fetchall(), I
> get an error as expected.

oh that is very interesting.



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
I hear your pain,

unfortunately we have to have very tight security on our server instances;
I can only log into the server when I am in the office, and I can't log in
with my work laptop from home (its IP address is not authorized).
If it were possible, I would have given you access to a blank area.

as for the rollback call,
adding it after the print command does not change anything.
When I insert it between cursor.execute() and rows = cursor.fetchall(), I
get an error as expected.
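The two placements described above differ only in whether the result set has been consumed when rollback() runs; a minimal sketch over a generic DB-API connection (no Azure specifics, connection supplied by the caller):

```python
def rollback_after_fetch(connection, sql):
    """rollback() once the result set is fully consumed and the cursor
    is closed -- the placement that reportedly works."""
    cursor = connection.cursor()
    cursor.execute(sql)
    rows = cursor.fetchall()
    cursor.close()
    connection.rollback()
    return rows

def rollback_mid_result(connection, sql):
    """rollback() between execute() and fetchall() -- the placement
    that reportedly raises against the Azure data warehouse."""
    cursor = connection.cursor()
    cursor.execute(sql)
    connection.rollback()  # a pending result set is still open here
    return cursor.fetchall()
```

Whether the second variant errors depends on the backend; per the report above, on the Azure data warehouse it does.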


On Wednesday, September 13, 2017 at 1:16:59 PM UTC-7, Mike Bayer wrote:
>
> On Wed, Sep 13, 2017 at 3:58 PM, dirk.biesinger 
>  wrote: 
> > using 
> > 
> > connection = pyodbc.connect() 
> > connection.autocommit = 0 
> > cursor = connection.cursor() 
> > cursor.execute([proper sql statement that references a table]) 
> > rows = cursor.fetchall() 
> > print(rows) 
> > cursor.close() 
> > 
> > gives me the output out of the table that I expect. 
> > So if you were wondering if a raw pyodbc connection works, this is 
> > confirmed. 
>
> did you call: 
>
> connection.rollback() 
>
> the stack traces you have given me indicate this method cannot be 
> called else Azure raises an error.  This must be illustrated as 
> definitely the problem, and not a side effect of something else. 
>
> This is why this would go a lot quicker if someone had a *blank* azure 
> database on a cloud node somewhere for me to log into.  I don't need 
> your customer data. 

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 3:58 PM, dirk.biesinger
 wrote:
> using
>
> connection = pyodbc.connect()
> connection.autocommit = 0
> cursor = connection.cursor()
> cursor.execute([proper sql statement that references a table])
> rows = cursor.fetchall()
> print(rows)
> cursor.close()
>
> gives me the output out of the table that I expect.
> So if you were wondering if a raw pyodbc connection works, this is
> confirmed.

did you call:

connection.rollback()

the stack traces you have given me indicate this method cannot be
called else Azure raises an error.  This must be illustrated as
definitely the problem, and not a side effect of something else.

This is why this would go a lot quicker if someone had a *blank* azure
database on a cloud node somewhere for me to log into.  I don't need
your customer data.
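The suggested sequence can be wrapped into a small probe that reports whether rollback() itself is what blows up; a sketch over a generic DB-API connection (the real test would pass a pyodbc connection to the Azure instance):

```python
def probe_rollback(connection):
    """Execute a trivial statement, close the cursor, then attempt
    rollback(); return the exception rollback() raised, or None if it
    succeeded."""
    connection.autocommit = False
    cursor = connection.cursor()
    cursor.execute("SELECT 1")
    cursor.close()
    try:
        connection.rollback()
        return None
    except Exception as err:  # pyodbc would surface a pyodbc.Error here
        return err
```

Usage would be, with a placeholder connection string: `print(probe_rollback(pyodbc.connect("DSN=...")))` — `None` means rollback is fine, anything else is the Azure error.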






>
> On Wednesday, September 13, 2017 at 12:49:33 PM UTC-7, Mike Bayer wrote:
>>
>> On Wed, Sep 13, 2017 at 3:27 PM, dirk.biesinger
>>  wrote:
>> > I have the 'patched' pyodbc.py file active.
>> > Executing your code snippet does NOT produce an error or any output for
>> > that
>> > matter.
>> >
>> >
>> > On Wednesday, September 13, 2017 at 12:22:30 PM UTC-7, Mike Bayer wrote:
>> >>
>> >> On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger
>> >>  wrote:
>> >> > Got ya,
>> >> >
>> >> > so we could solve the issue on the sqlalchemy end with the alteration
>> >> > of
>> >> > the
>> >> > pyodbc.py file.
>> >> > I assume you'll include this in the next release?
>> >>
>> >> um.
>> >>
>> >> can you just confirm for me this makes the error?
>> >>
>> >>
>> >> connection = pyodbc.connect()
>> >> connection.autocommit = 0
>> >> connection.rollback()
>>
>>
>> try it like this:
>>
>>
>> connection = pyodbc.connect()
>> connection.autocommit = 0
>> cursor = connection.cursor()
>> cursor.execute("SELECT 1")
>> cursor.close()
>>
>> connection.rollback()
>>
>>
>> >>
>> >>
>> >>
>> >> > The issue with creating a table when the option "if_exists='append'"
>> >> > is
>> >> > set
>> >> > in the df.to_sql() call, is a pandas problem.
>> >> >
>> >> > Thank you for your help.
>> >> >
>> >> > Best,
>> >> > DB
>> >> >
>> >> > On Wednesday, September 13, 2017 at 11:45:15 AM UTC-7, Mike Bayer
>> >> > wrote:
>> >> >>
>> >> >> On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger
>> >> >>  wrote:
>> >> >> > I don't get why the table is getting created in the first place. A
>> >> >> > table
>> >> >> > with this name exists, and the option "if_exists='append'" should
>> >> >> > append
>> >> >> > the
>> >> >> > dataframe to the existing table.
>> >> >> > There should not be a dropping of the table (which I have not
>> >> >> > seen)
>> >> >> > nor
>> >> >> > creation of the table.
>> >> >> >
>> >> >> > And in case of creating the table, I think it should be possible
>> >> >> > to
>> >> >> > define
>> >> >> > the length of the field, so varchar([variable_to_be_submitted]).
>> >> >> > In my case I expect this particular table to grow to several
>> >> >> > hundred
>> >> >> > million
>> >> >> > rows, so assigned storage space is a factor.
>> >> >> >
>> >> >> > the existing table was created like this:
>> >> >> >
>> >> >> > CREATE TABLE dbo.DSI
>> >> >> > (
>> >> >> > a datetime,
>> >> >> > b varchar(10) null,
>> >> >> > c varchar(100) null,
>> >> >> > d varchar(10) null,
>> >> >> > e varchar(100) null,
>> >> >> > f decimal (8,6) null,
>> >> >> > g decimal (8,6) null
>> >> >> > )
>> >> >> >
>> >> >> > If I read and understand the error stack correctly, the cause is the
>> >> >> > create
>> >> >> > table statement, which I would very strongly hope to cause an
>> >> >> > error,
>> >> >> > as
>> >> >> > the
>> >> >> > table exists.
>> >> >> > Should the create table statement not be omitted if the option
>> >> >> > "if_exists='append'" is set?
>> >> >>
>> >> >> This is all on the Pandas side so you'd need to talk to them.
>> >> >>
>> >> >>
>> >> >> >
>> >> >> > On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer
>> >> >> > wrote:
>> >> >> >>
>> >> >> >> OK so... don't use VARCHAR(max)?  What datatype would you like?
>> >> >> >> We
>> >> >> >> have most of them and you can make new ones too.
>> >> >> >>
>> >> >> >> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger
>> >> >> >>  wrote:
>> >> >> >> > Mike,
>> >> >> >> >
>> >> >> >> > here's the error stack (I had to mask some details):
>> >> >> >> > The columns (dataformats) in the create table statement are
>> >> >> >> > wrong.
>> >> >> >> > Also,
>> >> >> >> > this table does not have an index.
>> >> >> >> >
>> >> >> >> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine
>> >> >> >> > SELECT
>> >> >> >> > SERVERPROPERTY('ProductVersion')
>> >> >> >> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine ()
>> >> >> >> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine
>> >> >> >> > SELECT
>> >> >> >> > schema_name()
>> >> >> >> > 2017-09-13 

Re: [sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Kent
Never mind that last silly 0.7 question.  (Your patch is compatible it 
seems.)  (And pretend that you didn't hear any mention of 0.7)

Thanks again for your awesome software!
Kent


On Wednesday, September 13, 2017 at 3:42:55 PM UTC-4, Kent wrote:
>
>
>> dude!  it is 2017.   get on this client!   :)   I literally have to 
>> maintain this feature for you personally :). 
>>
>>
>>
> Hahaha!  If you could only feel my pain!  Unfortunately, there are 
> multiple clients still on 8i because they are stuck on it due to their 
> legacy application (which we are working hard to replace and get them on 
> PostgreSQL.)  (So, at least you know the 8i stuff is helping multiple 
> companies :)
>  
>
>> > 
>> > The cleanest approach is specifying that the 'primaryjoin' to the 
>> > relationship in the mapper should include an extra join clause.  I hate 
>> > doing this, but after many other approaches, I've found this is by far 
>> the 
>> > cleanest approach due to bad database design (which I can't control -- 
>> > legacy). 
>> > 
>> > Anyway, the attached script shows a simplified, analogous mock-up, 
>> which 
>> > works correctly when joins are ANSI and incorrectly with 
>> use_ansi=False. 
>> > 
>> > The script demonstrates an inconsistency in use_ansi True vs. False on 
>> > sqlalchemy version 1.1.14 (although my sqlalchemy is older). 
>> > 
>> > In the use_ansi=False SQL, the correct "fix" would be changing the 
>> rendered: 
>> > 
>> > AND bugs_1.deathdate IS NULL 
>> > 
>> > into 
>> > 
>> > AND bugs_1.deathdate(+) IS NULL 
>> > 
>> > This then matches the ANSI join and works on 8i (I've tested it). 
>> > 
>> > Is this something we can fix?  Since the column is on the remote table 
>> and 
>> > specified in the join condition, it really needs "(+)" after the column 
>> name 
>> > in SQL.  This accomplishes the same thing as the ANSI version placing 
>> this 
>> > join condition in the "ON ..." clause instead of the "WHERE". 
>> > 
>> > Alternatively, is there a hack I could use to fix the rendered SQL on 
>> > joinedloads for this particular relationship? 
>>
>> the miracle of Docker means that I now have easy-to-run Oracle, SQL 
>> Server, etc. databases anywhere I need them so I can quickly confirm 
>> that this works with ansi or not: 
>>
>> mapper(Rock, rocks_table, 
>> properties={ 
>> 'livingbugs': relationship(Bug, 
>> primaryjoin=and_( 
>> bugs_table.c.rockid == rocks_table.c.id, 
>> bugs_table.c.deathdate.op("(+)=")(null()), 
>> )), 
>> }) 
>>
>>
> I'll find a shield to hide behind and then dare to ask "Is there a way to 
> hack that fix on SqlAlchemy 0.7?"
>
> Thanks for looking at this!
> Kent
>

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
using

connection = pyodbc.connect()
connection.autocommit = 0
cursor = connection.cursor()
cursor.execute([proper sql statement that references a table])
rows = cursor.fetchall()
print(rows)
cursor.close()

gives me the output out of the table that I expect.
So if you were wondering if a raw pyodbc connection works, this is 
confirmed.

On Wednesday, September 13, 2017 at 12:49:33 PM UTC-7, Mike Bayer wrote:
>
> On Wed, Sep 13, 2017 at 3:27 PM, dirk.biesinger 
>  wrote: 
> > I have the 'patched' pyodbc.py file active. 
> > Executing your code snippet does NOT produce an error or any output for 
> that 
> > matter. 
> > 
> > 
> > On Wednesday, September 13, 2017 at 12:22:30 PM UTC-7, Mike Bayer wrote: 
> >> 
> >> On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger 
> >>  wrote: 
> >> > Got ya, 
> >> > 
> >> > so we could solve the issue on the sqlalchemy end with the alteration 
> of 
> >> > the 
> >> > pyodbc.py file. 
> >> > I assume you'll include this in the next release? 
> >> 
> >> um. 
> >> 
> >> can you just confirm for me this makes the error? 
> >> 
> >> 
> >> connection = pyodbc.connect() 
> >> connection.autocommit = 0 
> >> connection.rollback() 
>
>
> try it like this: 
>
>
> connection = pyodbc.connect() 
> connection.autocommit = 0 
> cursor = connection.cursor() 
> cursor.execute("SELECT 1") 
> cursor.close() 
>
> connection.rollback() 
>
>
> >> 
> >> 
> >> 
> >> > The issue with creating a table when the option "if_exists='append'"
> >> > is set
> >> > in the df.to_sql() call is a pandas problem.
> >> > 
> >> > Thank you for your help. 
> >> > 
> >> > Best, 
> >> > DB 
> >> > 
> >> > On Wednesday, September 13, 2017 at 11:45:15 AM UTC-7, Mike Bayer 
> wrote: 
> >> >> 
> >> >> On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger 
> >> >>  wrote: 
> >> >> > I don't get why the table is getting created in the first place. A 
> >> >> > table 
> >> >> > with this name exists, and the option "if_exists='append'" should 
> >> >> > append 
> >> >> > the 
> >> >> > dataframe to the existing table. 
> >> >> > There should not be a dropping of the table (which I have not 
> seen) 
> >> >> > nor 
> >> >> > creation of the table. 
> >> >> > 
> >> >> > And in case of creating the table, I think it should be possible 
> to 
> >> >> > define 
> >> >> > the length of the field, so varchar([variable_to_be_submitted]). 
> >> >> > In my case I expect this particular table to grow to several 
> hundred 
> >> >> > million 
> >> >> > rows, so assigned storage space is a factor. 
> >> >> > 
> >> >> > the existing table was created like this: 
> >> >> > 
> >> >> > CREATE TABLE dbo.DSI 
> >> >> > ( 
> >> >> > a datetime, 
> >> >> > b varchar(10) null, 
> >> >> > c varchar(100) null, 
> >> >> > d varchar(10) null, 
> >> >> > e varchar(100) null, 
> >> >> > f decimal (8,6) null, 
> >> >> > g decimal (8,6) null 
> >> >> > ) 
> >> >> > 
> >> >> > If I read and understand the error stack correctly, the cause is the 
> >> >> > create 
> >> >> > table statement, which I would very strongly hope to cause an 
> error, 
> >> >> > as 
> >> >> > the 
> >> >> > table exists. 
> >> >> > Should the create table statement not be omitted if the option
> >> >> > "if_exists='append'" is set?
> >> >> 
> >> >> This is all on the Pandas side so you'd need to talk to them. 
> >> >> 
> >> >> 
> >> >> > 
> >> >> > On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer 
> >> >> > wrote: 
> >> >> >> 
> >> >> >> OK so... don't use VARCHAR(max)?  What datatype would you like? 
>  We 
> >> >> >> have most of them and you can make new ones too. 
> >> >> >> 
> >> >> >> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger 
> >> >> >>  wrote: 
> >> >> >> > Mike, 
> >> >> >> > 
> >> >> >> > here's the error stack (I had to mask some details): 
> >> >> >> > The columns (dataformats) in the create table statement are 
> wrong. 
> >> >> >> > Also, 
> >> >> >> > this table does not have an index. 
> >> >> >> > 
> >> >> >> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine 
> SELECT 
> >> >> >> > SERVERPROPERTY('ProductVersion') 
> >> >> >> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine () 
> >> >> >> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine 
> SELECT 
> >> >> >> > schema_name() 
> >> >> >> > 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine () 
> >> >> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine 
> SELECT 
> >> >> >> > CAST('test 
> >> >> >> > plain returns' AS VARCHAR(60)) AS anon_1 
> >> >> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine () 
> >> >> >> > 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine 
> SELECT 
> >> >> >> > CAST('test 
> >> >> >> > unicode returns' AS NVARCHAR(60)) AS anon_1 
> >> >> >> > 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine () 
> >> >> >> > 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine 
> 

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 3:27 PM, dirk.biesinger
 wrote:
> I have the 'patched' pyodbc.py file active.
> Executing your code snippet does NOT produce an error or any output for that
> matter.
>
>
> On Wednesday, September 13, 2017 at 12:22:30 PM UTC-7, Mike Bayer wrote:
>>
>> On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger
>>  wrote:
>> > Got ya,
>> >
>> > so we could solve the issue on the sqlalchemy end with the alteration of
>> > the
>> > pyodbc.py file.
>> > I assume you'll include this in the next release?
>>
>> um.
>>
>> can you just confirm for me this makes the error?
>>
>>
>> connection = pyodbc.connect()
>> connection.autocommit = 0
>> connection.rollback()


try it like this:


connection = pyodbc.connect()
connection.autocommit = 0
cursor = connection.cursor()
cursor.execute("SELECT 1")
cursor.close()

connection.rollback()


>>
>>
>>
>> > The issue with creating a table when the option "if_exists='append'" is
>> > set
>> > in the df.to_sql() call is a pandas problem.
>> >
>> > Thank you for your help.
>> >
>> > Best,
>> > DB
>> >
>> > On Wednesday, September 13, 2017 at 11:45:15 AM UTC-7, Mike Bayer wrote:
>> >>
>> >> On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger
>> >>  wrote:
>> >> > I don't get why the table is getting created in the first place. A
>> >> > table
>> >> > with this name exists, and the option "if_exists='append'" should
>> >> > append
>> >> > the
>> >> > dataframe to the existing table.
>> >> > There should not be a dropping of the table (which I have not seen)
>> >> > nor
>> >> > creation of the table.
>> >> >
>> >> > And in case of creating the table, I think it should be possible to
>> >> > define
>> >> > the length of the field, so varchar([variable_to_be_submitted]).
>> >> > In my case I expect this particular table to grow to several hundred
>> >> > million
>> >> > rows, so assigned storage space is a factor.
>> >> >
>> >> > the existing table was created like this:
>> >> >
>> >> > CREATE TABLE dbo.DSI
>> >> > (
>> >> > a datetime,
>> >> > b varchar(10) null,
>> >> > c varchar(100) null,
>> >> > d varchar(10) null,
>> >> > e varchar(100) null,
>> >> > f decimal (8,6) null,
>> >> > g decimal (8,6) null
>> >> > )
>> >> >
>> >> > If I read and understand the error stack correctly, the cause is the
>> >> > create
>> >> > table statement, which I would very strongly hope to cause an error,
>> >> > as
>> >> > the
>> >> > table exists.
>> >> > Should the create table statement not be omitted if the option
>> >> > "if_exists='append'" is set?
>> >>
>> >> This is all on the Pandas side so you'd need to talk to them.
>> >>
>> >>
>> >> >
>> >> > On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer
>> >> > wrote:
>> >> >>
>> >> >> OK so... don't use VARCHAR(max)?  What datatype would you like?  We
>> >> >> have most of them and you can make new ones too.
>> >> >>
>> >> >> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger
>> >> >>  wrote:
>> >> >> > Mike,
>> >> >> >
>> >> >> > here's the error stack (I had to mask some details):
>> >> >> > The columns (dataformats) in the create table statement are wrong.
>> >> >> > Also,
>> >> >> > this table does not have an index.
>> >> >> >
>> >> >> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT
>> >> >> > SERVERPROPERTY('ProductVersion')
>> >> >> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine ()
>> >> >> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT
>> >> >> > schema_name()
>> >> >> > 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine ()
>> >> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT
>> >> >> > CAST('test
>> >> >> > plain returns' AS VARCHAR(60)) AS anon_1
>> >> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine ()
>> >> >> > 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT
>> >> >> > CAST('test
>> >> >> > unicode returns' AS NVARCHAR(60)) AS anon_1
>> >> >> > 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine ()
>> >> >> > 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT],
>> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME]
>> >> >> > FROM [INFORMATION_SCHEMA].[COLUMNS]
>> >> >> > WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS
>> >> >> > NVARCHAR(max))
>> >> >> > AND 

Re: [sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Kent

>
>
> dude!  it is 2017.   get on this client!   :)   I literally have to 
> maintain this feature for you personally :). 
>
>
>
Hahaha!  If you could only feel my pain!  Unfortunately, there are multiple 
clients still on 8i because they are stuck on it due to their legacy 
application (which we are working hard to replace and get them on 
PostgreSQL.)  (So, at least you know the 8i stuff is helping multiple 
companies :)
 

> > 
> > The cleanest approach is specifying that the 'primaryjoin' to the 
> > relationship in the mapper should include an extra join clause.  I hate 
> > doing this, but after many other approaches, I've found this is by far 
> the 
> > cleanest approach due to bad database design (which I can't control -- 
> > legacy). 
> > 
> > Anyway, the attached script shows a simplified, analogous mock-up, 
> which 
> > works correctly when joins are ANSI and incorrectly with use_ansi=False. 
> > 
> > The script demonstrates an inconsistency in use_ansi True vs. False on 
> > sqlalchemy version 1.1.14 (although my sqlalchemy is older). 
> > 
> > In the use_ansi=False SQL, the correct "fix" would be changing the 
> rendered: 
> > 
> > AND bugs_1.deathdate IS NULL 
> > 
> > into 
> > 
> > AND bugs_1.deathdate(+) IS NULL 
> > 
> > This then matches the ANSI join and works on 8i (I've tested it). 
> > 
> > Is this something we can fix?  Since the column is on the remote table 
> and 
> > specified in the join condition, it really needs "(+)" after the column 
> name 
> > in SQL.  This accomplishes the same thing as the ANSI version placing 
> this 
> > join condition in the "ON ..." clause instead of the "WHERE". 
> > 
> > Alternatively, is there a hack I could use to fix the rendered SQL on 
> > joinedloads for this particular relationship? 
>
> the miracle of Docker means that I now have easy-to-run Oracle, SQL 
> Server, etc. databases anywhere I need them so I can quickly confirm 
> that this works with ansi or not: 
>
> mapper(Rock, rocks_table, 
> properties={ 
> 'livingbugs': relationship(Bug, 
> primaryjoin=and_( 
> bugs_table.c.rockid == rocks_table.c.id, 
> bugs_table.c.deathdate.op("(+)=")(null()), 
> )), 
> }) 
>
>
I'll find a shield to hide behind and then dare to ask "Is there a way to 
hack that fix on SqlAlchemy 0.7?"

Thanks for looking at this!
Kent
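The `.op()` trick in the quoted mapper snippet can be checked without a database: `op()` builds an arbitrary infix operator that the SQL compiler emits verbatim, which is what smuggles Oracle's pre-ANSI `(+)` outer-join marker into the join condition. A minimal sketch (the table and column names are stand-ins for the mock-up script, which is not reproduced here):

```python
from sqlalchemy import Column, Date, Integer, MetaData, Table, null

metadata = MetaData()

# Stand-in for the bugs_table in the mock-up (column names assumed).
bugs_table = Table(
    "bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("rockid", Integer),
    Column("deathdate", Date),
)

# .op() creates an arbitrary infix operator that is rendered as-is,
# so "(+)=" lands next to the column in the compiled SQL, giving
# Oracle's old-style outer-join marker on the join condition.
clause = bugs_table.c.deathdate.op("(+)=")(null())
print(str(clause))
```

This is the same expression used inside the relationship's `primaryjoin` above; printing it shows the custom operator in the compiled clause.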



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
I have the 'patched' pyodbc.py file active.
Executing your code snippet does NOT produce an error or any output for 
that matter.


On Wednesday, September 13, 2017 at 12:22:30 PM UTC-7, Mike Bayer wrote:
>
> On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger 
>  wrote: 
> > Got ya, 
> > 
> > so we could solve the issue on the sqlalchemy end with the alteration of 
> the 
> > pyodbc.py file. 
> > I assume you'll include this in the next release? 
>
> um. 
>
> can you just confirm for me this makes the error? 
>
>
> connection = pyodbc.connect() 
> connection.autocommit = 0 
> connection.rollback() 
>
>
>
> > The issue with creating a table when the option "if_exists='append'" is
> > set
> > in the df.to_sql() call is a pandas problem.
> > 
> > Thank you for your help. 
> > 
> > Best, 
> > DB 
> > 
> > On Wednesday, September 13, 2017 at 11:45:15 AM UTC-7, Mike Bayer wrote: 
> >> 
> >> On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger 
> >>  wrote: 
> >> > I don't get why the table is getting created in the first place. A 
> table 
> >> > with this name exists, and the option "if_exists='append'" should 
> append 
> >> > the 
> >> > dataframe to the existing table. 
> >> > There should not be a dropping of the table (which I have not seen) 
> nor 
> >> > creation of the table. 
> >> > 
> >> > And in case of creating the table, I think it should be possible to 
> >> > define 
> >> > the length of the field, so varchar([variable_to_be_submitted]). 
> >> > In my case I expect this particular table to grow to several hundred 
> >> > million 
> >> > rows, so assigned storage space is a factor. 
> >> > 
> >> > the existing table was created like this: 
> >> > 
> >> > CREATE TABLE dbo.DSI 
> >> > ( 
> >> > a datetime, 
> >> > b varchar(10) null, 
> >> > c varchar(100) null, 
> >> > d varchar(10) null, 
> >> > e varchar(100) null, 
> >> > f decimal (8,6) null, 
> >> > g decimal (8,6) null 
> >> > ) 
> >> > 
> >> > If I read and understand the error stack correctly, the cause is the 
> >> > create 
> >> > table statement, which I would very strongly hope to cause an error, 
> as 
> >> > the 
> >> > table exists. 
> >> > Should the create table statement not be omitted if the option
> >> > "if_exists='append'" is set?
> >> 
> >> This is all on the Pandas side so you'd need to talk to them. 
> >> 
> >> 
> >> > 
> >> > On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer 
> wrote: 
> >> >> 
> >> >> OK so... don't use VARCHAR(max)?  What datatype would you like?  We 
> >> >> have most of them and you can make new ones too. 
> >> >> 
> >> >> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger 
> >> >>  wrote: 
> >> >> > Mike, 
> >> >> > 
> >> >> > here's the error stack (I had to mask some details): 
> >> >> > The columns (dataformats) in the create table statement are wrong. 
> >> >> > Also, 
> >> >> > this table does not have an index. 
> >> >> > 
> >> >> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT 
> >> >> > SERVERPROPERTY('ProductVersion') 
> >> >> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine () 
> >> >> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT 
> >> >> > schema_name() 
> >> >> > 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine () 
> >> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT 
> >> >> > CAST('test 
> >> >> > plain returns' AS VARCHAR(60)) AS anon_1 
> >> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine () 
> >> >> > 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT 
> >> >> > CAST('test 
> >> >> > unicode returns' AS NVARCHAR(60)) AS anon_1 
> >> >> > 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine () 
> >> >> > 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], 
> >> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME] 
> >> >> > FROM [INFORMATION_SCHEMA].[COLUMNS] 
> >> >> > WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS 
> >> >> > NVARCHAR(max)) 
> >> >> > AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? AS 
> >> >> > NVARCHAR(max)) 
> >> >> > 2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine 
> >> >> > ('dbo.MSODS_DSI', 
> >> >> > 'dbo') 
> >> >> > 2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine 
> >> >> > CREATE TABLE [dbo.MSODS_DSI] ( 
> 

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger
 wrote:
> Got ya,
>
> so we could solve the issue on the sqlalchemy end with the alteration of the
> pyodbc.py file.
> I assume you'll include this in the next release?

um.

can you just confirm for me this makes the error?


connection = pyodbc.connect()
connection.autocommit = 0
connection.rollback()



> The issue with creating a table when the option "if_exists='append'" is set
> in the df.to_sql() call is a pandas problem.
>
> Thank you for your help.
>
> Best,
> DB
>
> On Wednesday, September 13, 2017 at 11:45:15 AM UTC-7, Mike Bayer wrote:
>>
>> On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger
>>  wrote:
>> > I don't get why the table is getting created in the first place. A table
>> > with this name exists, and the option "if_exists='append'" should append
>> > the
>> > dataframe to the existing table.
>> > There should not be a dropping of the table (which I have not seen) nor
>> > creation of the table.
>> >
>> > And in case of creating the table, I think it should be possible to
>> > define
>> > the length of the field, so varchar([variable_to_be_submitted]).
>> > In my case I expect this particular table to grow to several hundred
>> > million
>> > rows, so assigned storage space is a factor.
>> >
>> > the existing table was created like this:
>> >
>> > CREATE TABLE dbo.DSI
>> > (
>> > a datetime, 
>> > b varchar(10) null,
>> > c varchar(100) null,
>> > d varchar(10) null,
>> > e varchar(100) null,
>> > f decimal (8,6) null,
>> > g decimal (8,6) null
>> > )
>> >
>> > If I read and understand the error stack correctly, the cause is the 
>> > create
>> > table statement, which I would very strongly hope to cause an error, as
>> > the
>> > table exists.
>> > Should the create table statement not be omitted if the option
>> > "if_exists='append'" is set?
>>
>> This is all on the Pandas side so you'd need to talk to them.
>>
>>
>> >
>> > On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer wrote:
>> >>
>> >> OK so... don't use VARCHAR(max)?  What datatype would you like?  We
>> >> have most of them and you can make new ones too.
>> >>
>> >> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger
>> >>  wrote:
>> >> > Mike,
>> >> >
>> >> > here's the error stack (I had to mask some details):
>> >> > The columns (dataformats) in the create table statement are wrong.
>> >> > Also,
>> >> > this table does not have an index.
>> >> >
>> >> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT
>> >> > SERVERPROPERTY('ProductVersion')
>> >> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine ()
>> >> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT
>> >> > schema_name()
>> >> > 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine ()
>> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT
>> >> > CAST('test
>> >> > plain returns' AS VARCHAR(60)) AS anon_1
>> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine ()
>> >> > 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT
>> >> > CAST('test
>> >> > unicode returns' AS NVARCHAR(60)) AS anon_1
>> >> > 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine ()
>> >> > 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT],
>> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME]
>> >> > FROM [INFORMATION_SCHEMA].[COLUMNS]
>> >> > WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS
>> >> > NVARCHAR(max))
>> >> > AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? AS
>> >> > NVARCHAR(max))
>> >> > 2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine
>> >> > ('dbo.MSODS_DSI',
>> >> > 'dbo')
>> >> > 2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine
>> >> > CREATE TABLE [dbo.MSODS_DSI] (
>> >> > [a] DATETIME NULL,
>> >> > [b] VARCHAR(max) NULL,
>> >> > [c] VARCHAR(max) NULL,
>> >> > [d] VARCHAR(max) NULL,
>> >> > [e] VARCHAR(max) NULL,
>> >> > [f] FLOAT(53) NULL,
>> >> > [g] FLOAT(53) NULL
>> >> > )
>> >> >
>> >> >
>> >> > 2017-09-13 15:07:52,152 INFO sqlalchemy.engine.base.Engine ()
>> >> > 2017-09-13 15:07:54,374 INFO sqlalchemy.engine.base.Engine ROLLBACK
>> >> >
>> >> >
>> >> > ---
>> >> > ProgrammingError  Traceback (most recent call
>> >> > last)
>> 

Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
Got ya,

so we could solve the issue on the sqlalchemy end with the alteration of 
the pyodbc.py file.
I assume you'll include this in the next release?
The issue with creating a table when the option "if_exists='append'" is set 
in the df.to_sql() call is a pandas problem.

Thank you for your help.

Best,
DB
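Both halves of this exchange can be sketched in one place: `if_exists='append'` reuses an existing table rather than emitting CREATE TABLE, and the `dtype=` argument to `to_sql()` pins the DDL column types instead of pandas' VARCHAR(max) / FLOAT guesses seen in the error stack. A minimal sketch using an in-memory SQLite engine as a stand-in for the `mssql+pyodbc` URL (column names borrowed from the thread's dbo.DSI table):

```python
import pandas as pd
from sqlalchemy import create_engine, text, types

# Stand-in engine; against the warehouse this would be the
# mssql+pyodbc:// URL from the thread.
engine = create_engine("sqlite://")

df = pd.DataFrame({"b": ["x"], "f": [0.123456]})

# dtype= pins the types pandas would otherwise guess; with
# if_exists="append", an already-existing table is reused as-is.
df.to_sql(
    "DSI",
    engine,
    index=False,
    if_exists="append",
    dtype={"b": types.VARCHAR(10), "f": types.DECIMAL(8, 6)},
)

with engine.connect() as conn:
    rows = conn.execute(text("SELECT b, f FROM DSI")).fetchall()
print(rows)
```

Whether append mode still emits CREATE TABLE against a pre-existing table is, as noted above, pandas-side behavior.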

On Wednesday, September 13, 2017 at 11:45:15 AM UTC-7, Mike Bayer wrote:
>
> On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger 
>  wrote: 
> > I don't get why the table is getting created in the first place. A table 
> > with this name exists, and the option "if_exists='append'" should append 
> the 
> > dataframe to the existing table. 
> > There should not be a dropping of the table (which I have not seen) nor 
> > creation of the table. 
> > 
> > And in case of creating the table, I think it should be possible to 
> define 
> > the length of the field, so varchar([variable_to_be_submitted]). 
> > In my case I expect this particular table to grow to several hundred 
> million 
> > rows, so assigned storage space is a factor. 
> > 
> > the existing table was created like this: 
> > 
> > CREATE TABLE dbo.DSI 
> > ( 
> > a datetime, 
> > b varchar(10) null, 
> > c varchar(100) null, 
> > d varchar(10) null, 
> > e varchar(100) null, 
> > f decimal (8,6) null, 
> > g decimal (8,6) null 
> > ) 
> > 
> > If I read and understand the error stack correctly, the cause is the 
> create 
> > table statement, which I would very strongly hope to cause an error, as 
> the 
> > table exists. 
> > Should the create table statement not be omitted if the option
> > "if_exists='append'" is set?
>
> This is all on the Pandas side so you'd need to talk to them. 
>
>
> > 
> > On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer wrote: 
> >> 
> >> OK so... don't use VARCHAR(max)?  What datatype would you like?  We 
> >> have most of them and you can make new ones too. 
> >> 
> >> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger 
> >>  wrote: 
> >> > Mike, 
> >> > 
> >> > here's the error stack (I had to mask some details): 
> >> > The columns (dataformats) in the create table statement are wrong. 
> Also, 
> >> > this table does not have an index. 
> >> > 
> >> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT 
> >> > SERVERPROPERTY('ProductVersion') 
> >> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine () 
> >> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT 
> >> > schema_name() 
> >> > 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine () 
> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT 
> >> > CAST('test 
> >> > plain returns' AS VARCHAR(60)) AS anon_1 
> >> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine () 
> >> > 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT 
> >> > CAST('test 
> >> > unicode returns' AS NVARCHAR(60)) AS anon_1 
> >> > 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine () 
> >> > 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], 
> >> > [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME] 
> >> > FROM [INFORMATION_SCHEMA].[COLUMNS] 
> >> > WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS 
> >> > NVARCHAR(max)) 
> >> > AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? AS 
> >> > NVARCHAR(max)) 
> >> > 2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine 
> >> > ('dbo.MSODS_DSI', 
> >> > 'dbo') 
> >> > 2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine 
> >> > CREATE TABLE [dbo.MSODS_DSI] ( 
> >> > [a] DATETIME NULL, 
> >> > [b] VARCHAR(max) NULL, 
> >> > [c] VARCHAR(max) NULL, 
> >> > [d] VARCHAR(max) NULL, 
> >> > [e] VARCHAR(max) NULL, 
> >> > [f] FLOAT(53) NULL, 
> >> > [g] FLOAT(53) NULL 
> >> > ) 
> >> > 
> >> > 
> >> > 2017-09-13 15:07:52,152 INFO sqlalchemy.engine.base.Engine () 
> >> > 2017-09-13 15:07:54,374 INFO sqlalchemy.engine.base.Engine ROLLBACK 
> >> > 
> >> > 
> >> > --- 
> >> > ProgrammingError  Traceback (most recent call last) 
> >> > 
> >> > 
> >> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py 
> >> > in _execute_context(self, dialect, constructor, statement, parameters, *args) 
> >> >1181 parameters, 
> >> > -> 1182 

Re: [sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Mike Bayer
https://bitbucket.org/zzzeek/sqlalchemy/issues/4076/oracle-8-non-ansi-joins-needs-to-apply-to

On Wed, Sep 13, 2017 at 2:41 PM, Mike Bayer  wrote:
> this is also a bug.   the operator here shouldn't matter for the "(+)"
> operator to be added, that's easy to fix.  (doing it for 1.2 for now).
>
> On Wed, Sep 13, 2017 at 2:26 PM, Mike Bayer  wrote:
>> On Wed, Sep 13, 2017 at 12:01 PM, Kent  wrote:
>>> I've got a strange relationship on a legacy Oracle 8i database which I need
>>> to support (whether I like it or not).
>>
>> dude! it is 2017.   get on this client!   :)   I literally have to
>> maintain this feature for you personally :).
>>
>>
>>>
>>> The cleanest approach is specifying that the 'primaryjoin' to the
>>> relationship in the mapper should include an extra join clause.  I hate
>>> doing this, but after many other approaches, I've found this is by far the
>>> cleanest approach due to bad database design (which I can't control --
>>> legacy).
>>>
>>> Anyway, the attached script shows a simplified, analogous mock-up, which
>>> works correctly when joins are ANSI and incorrectly with use_ansi=False.
>>>
>>> The script demonstrates an inconsistency in use_ansi True vs. False on
>>> sqlalchemy version 1.1.14 (although my sqlalchemy is older).
>>>
>>> In the use_ansi=False SQL, the correct "fix" would be changing the rendered:
>>>
>>> AND bugs_1.deathdate IS NULL
>>>
>>> into
>>>
>>> AND bugs_1.deathdate(+) IS NULL
>>>
>>> This then matches the ANSI join and works on 8i (I've tested it).
>>>
>>> Is this something we can fix?  Since the column is on the remote table and
>>> specified in the join condition, it really needs "(+)" after the column name
>>> in SQL.  This accomplishes the same thing as the ANSI version placing this
>>> join condition in the "ON ..." clause instead of the "WHERE".
>>>
>>> Alternatively, is there a hack I could use to fix the rendered SQL on
>>> joinedloads for this particular relationship?
>>
>> the miracle of Docker means that I now have easy to run Oracle, SQL
>> Server, etc. databases anywhere I need them so I can quickly confirm
>> that this works with ansi or not:
>>
>> mapper(Rock, rocks_table,
>>     properties={
>>         'livingbugs': relationship(Bug,
>>             primaryjoin=and_(
>>                 bugs_table.c.rockid == rocks_table.c.id,
>>                 bugs_table.c.deathdate.op("(+)=")(null()),
>>             )),
>>     })
>>
>>
>>
>>
>>
>>>
>>> Thanks very much in advance!
>>> Kent
>>>
>>> --
>>> SQLAlchemy -
>>> The Python SQL Toolkit and Object Relational Mapper
>>>
>>> http://www.sqlalchemy.org/
>>>
>>> To post example code, please provide an MCVE: Minimal, Complete, and
>>> Verifiable Example. See http://stackoverflow.com/help/mcve for a full
>>> description.
>>> ---
>>> You received this message because you are subscribed to the Google Groups
>>> "sqlalchemy" group.
>>> To unsubscribe from this group and stop receiving emails from it, send an
>>> email to sqlalchemy+unsubscr...@googlegroups.com.
>>> To post to this group, send email to sqlalchemy@googlegroups.com.
>>> Visit this group at https://groups.google.com/group/sqlalchemy.
>>> For more options, visit https://groups.google.com/d/optout.

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.


Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger
 wrote:
> I don't get why the table is getting created in the first place. A table
> with this name exists, and the option "if_exists='append'" should append the
> dataframe to the existing table.
> There should not be a dropping of the table (which I have not seen) nor
> creation of the table.
>
> And in case of creating the table, I think it should be possible to define
> the length of the field, so varchar([variable_to_be_submitted]).
> In my case I expect this particular table to grow to several hundred million
> rows, so assigned storage space is a factor.
>
> the existing table was created like this:
>
> CREATE TABLE dbo.DSI
> (
> a datetime,
> b varchar(10) null,
> c varchar(100) null,
> d varchar(10) null,
> e varchar(100) null,
> f decimal (8,6) null,
> g decimal (8,6) null
> )
>
> If I read and understand the error stack correctly, the cause is the create
> table statement, which I would strongly expect to cause an error, as the
> table exists.
> Should the CREATE TABLE statement not be omitted when "if_exists='append'" is set?

This is all on the Pandas side so you'd need to talk to them.


>
> On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer wrote:
>>
>> OK so... don't use VARCHAR(max)?  What datatype would you like?  We
>> have most of them and you can make new ones too.
>>
>> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger
>>  wrote:
>> > Mike,
>> >
>> > here's the error stack (I had to mask some details):
>> > The columns (dataformats) in the create table statement are wrong. Also,
>> > this table does not have an index.
>> >
>> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT
>> > SERVERPROPERTY('ProductVersion')
>> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine ()
>> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT
>> > schema_name()
>> > 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine ()
>> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT
>> > CAST('test
>> > plain returns' AS VARCHAR(60)) AS anon_1
>> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine ()
>> > 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT
>> > CAST('test
>> > unicode returns' AS NVARCHAR(60)) AS anon_1
>> > 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine ()
>> > 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT
>> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA],
>> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME],
>> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME],
>> > [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE],
>> > [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE],
>> > [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION],
>> > [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH],
>> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION],
>> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE],
>> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT],
>> > [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME]
>> > FROM [INFORMATION_SCHEMA].[COLUMNS]
>> > WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS
>> > NVARCHAR(max))
>> > AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? AS
>> > NVARCHAR(max))
>> > 2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine
>> > ('dbo.MSODS_DSI',
>> > 'dbo')
>> > 2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine
>> > CREATE TABLE [dbo.MSODS_DSI] (
>> > [a] DATETIME NULL,
>> > [b] VARCHAR(max) NULL,
>> > [c] VARCHAR(max) NULL,
>> > [d] VARCHAR(max) NULL,
>> > [e] VARCHAR(max) NULL,
>> > [f] FLOAT(53) NULL,
>> > [g] FLOAT(53) NULL
>> > )
>> >
>> >
>> > 2017-09-13 15:07:52,152 INFO sqlalchemy.engine.base.Engine ()
>> > 2017-09-13 15:07:54,374 INFO sqlalchemy.engine.base.Engine ROLLBACK
>> >
>> > ---
>> > ProgrammingError  Traceback (most recent call
>> > last)
>> >
>> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py
>> > in _execute_context(self, dialect, constructor, statement, parameters,
>> > *args)
>> >1181 parameters,
>> > -> 1182 context)
>> >1183 except BaseException as e:
>> >
>> >
>> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py
>> > in do_execute(self, cursor, statement, parameters, context)
>> > 469 def do_execute(self, cursor, statement, parameters,
>> > context=None):
>> > --> 470 cursor.execute(statement, parameters)
>> > 471
>> >
>> > ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL
>> > Server][SQL Server]The statement failed. Column 'b' has a data type that
>> > cannot participate in a columnstore index.\r\nOperation cancelled by
>> > user.
>> > (35343) (SQLExecDirectW)")
>> >
>> > The above exception was the direct cause of the 

Re: [sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Mike Bayer
this is also a bug.   the operator here shouldn't matter for the "(+)"
operator to be added, that's easy to fix.  (doing it for 1.2 for now).

On Wed, Sep 13, 2017 at 2:26 PM, Mike Bayer  wrote:
> On Wed, Sep 13, 2017 at 12:01 PM, Kent  wrote:
>> I've got a strange relationship on a legacy Oracle 8i database which I need
>> to support (whether I like it or not).
>
> dude! it is 2017.   get on this client!   :)   I literally have to
> maintain this feature for you personally :).
>
>
>>
>> The cleanest approach is specifying that the 'primaryjoin' to the
>> relationship in the mapper should include an extra join clause.  I hate
>> doing this, but after many other approaches, I've found this is by far the
>> cleanest approach due to bad database design (which I can't control --
>> legacy).
>>
>> Anyway, the attached script shows a simplified, analogous mock-up, which
>> works correctly when joins are ANSI and incorrectly with use_ansi=False.
>>
>> The script demonstrates an inconsistency in use_ansi True vs. False on
>> sqlalchemy version 1.1.14 (although my sqlalchemy is older).
>>
>> In the use_ansi=False SQL, the correct "fix" would be changing the rendered:
>>
>> AND bugs_1.deathdate IS NULL
>>
>> into
>>
>> AND bugs_1.deathdate(+) IS NULL
>>
>> This then matches the ANSI join and works on 8i (I've tested it).
>>
>> Is this something we can fix?  Since the column is on the remote table and
>> specified in the join condition, it really needs "(+)" after the column name
>> in SQL.  This accomplishes the same thing as the ANSI version placing this
>> join condition in the "ON ..." clause instead of the "WHERE".
>>
>> Alternatively, is there a hack I could use to fix the rendered SQL on
>> joinedloads for this particular relationship?
>
> the miracle of Docker means that I now have easy to run Oracle, SQL
> Server, etc. databases anywhere I need them so I can quickly confirm
> that this works with ansi or not:
>
> mapper(Rock, rocks_table,
>     properties={
>         'livingbugs': relationship(Bug,
>             primaryjoin=and_(
>                 bugs_table.c.rockid == rocks_table.c.id,
>                 bugs_table.c.deathdate.op("(+)=")(null()),
>             )),
>     })
>
>
>
>
>
>>
>> Thanks very much in advance!
>> Kent
>>



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
I don't get why the table is getting created in the first place. A table 
with this name exists, and the option "if_exists='append'" should append 
the dataframe to the existing table.
There should not be a dropping of the table (which I have not seen) nor 
creation of the table.

And in case of creating the table, I think it should be possible to define 
the length of the field, so varchar([variable_to_be_submitted]).
In my case I expect this particular table to grow to several hundred 
million rows, so assigned storage space is a factor.

the existing table was created like this:

CREATE TABLE dbo.DSI
(
a datetime,
b varchar(10) null,
c varchar(100) null,
d varchar(10) null,
e varchar(100) null,
f decimal (8,6) null,
g decimal (8,6) null
)
 
If I read and understand the error stack correctly, the cause is the create 
table statement, which I would strongly expect to cause an error, as the 
table exists.
Should the CREATE TABLE statement not be omitted when "if_exists='append'" is set?
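One detail in the logged output below offers a likely explanation: the reflection query runs with TABLE_NAME = 'dbo.MSODS_DSI' AND TABLE_SCHEMA = 'dbo', which suggests the schema was embedded in the table name passed to to_sql(). The existence check then never matches the real table, and pandas falls through to CREATE TABLE. A minimal runnable sketch of that behavior (SQLite stands in for the warehouse engine so the example runs; the table name mirrors the log):

```python
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")  # stand-in for the mssql+pyodbc engine
df = pd.DataFrame({"a": [1]})

# Passing "dbo.MSODS_DSI" as the *name* makes pandas treat the whole dotted
# string as the table name, so it never matches an existing MSODS_DSI in
# schema dbo, and a fresh table literally named "dbo.MSODS_DSI" is created.
df.to_sql("dbo.MSODS_DSI", engine, if_exists="append", index=False)
print(sa.inspect(engine).get_table_names())

# On SQL Server, the schema belongs in its own argument instead:
# df.to_sql("MSODS_DSI", engine, schema="dbo", if_exists="append", index=False)
```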

On Wednesday, September 13, 2017 at 11:15:27 AM UTC-7, Mike Bayer wrote:
>
> OK so... don't use VARCHAR(max)?  What datatype would you like?  We 
> have most of them and you can make new ones too. 
>
> On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger 
>  wrote: 
> > Mike, 
> > 
> > here's the error stack (I had to mask some details): 
> > The columns (dataformats) in the create table statement are wrong. Also, 
> > this table does not have an index. 
> > 
> > 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT 
> > SERVERPROPERTY('ProductVersion') 
> > 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine () 
> > 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT 
> > schema_name() 
> > 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine () 
> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT CAST('test 
> > plain returns' AS VARCHAR(60)) AS anon_1 
> > 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine () 
> > 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT CAST('test 
> > unicode returns' AS NVARCHAR(60)) AS anon_1 
> > 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine () 
> > 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT 
> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], 
> > [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], 
> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], 
> > [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], 
> > [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], 
> > [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], 
> > [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], 
> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], 
> > [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], 
> > [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], 
> > [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME] 
> > FROM [INFORMATION_SCHEMA].[COLUMNS] 
> > WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) 
> > AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max)) 
> > 2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine ('dbo.MSODS_DSI', 'dbo') 
> > 2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine 
> > CREATE TABLE [dbo.MSODS_DSI] ( 
> > [a] DATETIME NULL, 
> > [b] VARCHAR(max) NULL, 
> > [c] VARCHAR(max) NULL, 
> > [d] VARCHAR(max) NULL, 
> > [e] VARCHAR(max) NULL, 
> > [f] FLOAT(53) NULL, 
> > [g] FLOAT(53) NULL 
> > ) 
> > 
> > 
> > 2017-09-13 15:07:52,152 INFO sqlalchemy.engine.base.Engine () 
> > 2017-09-13 15:07:54,374 INFO sqlalchemy.engine.base.Engine ROLLBACK 
> > 
> > --- 
> > ProgrammingError  Traceback (most recent call last) 
> > 
> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py 
> > in _execute_context(self, dialect, constructor, statement, parameters, *args) 
> >1181 parameters, 
> > -> 1182 context) 
> >1183 except BaseException as e: 
> > 
> > 
> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py 
> > in do_execute(self, cursor, statement, parameters, context) 
> > 469 def do_execute(self, cursor, statement, parameters, 
> > context=None): 
> > --> 470 cursor.execute(statement, parameters) 
> > 471 
> > 
> > ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL 
> > Server][SQL Server]The statement failed. Column 'b' has a data type that 
> > cannot participate in a columnstore index.\r\nOperation cancelled by user. 
> > (35343) (SQLExecDirectW)") 
> > 
> > The above exception was the direct cause of the following exception: 
> > 
> > ProgrammingError  Traceback (most recent call last) 
> >  in () 
> >  17 #cnxn = pyodbc.connect(connection_str) 
> >  18 #engn.connect() 
> > 

Re: [sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 12:01 PM, Kent  wrote:
> I've got a strange relationship on a legacy Oracle 8i database which I need
> to support (whether I like it or not).

dude! it is 2017.   get on this client!   :)   I literally have to
maintain this feature for you personally :).


>
> The cleanest approach is specifying that the 'primaryjoin' to the
> relationship in the mapper should include an extra join clause.  I hate
> doing this, but after many other approaches, I've found this is by far the
> cleanest approach due to bad database design (which I can't control --
> legacy).
>
>> Anyway, the attached script shows a simplified, analogous mock-up, which
> works correctly when joins are ANSI and incorrectly with use_ansi=False.
>
> The script demonstrates an inconsistency in use_ansi True vs. False on
> sqlalchemy version 1.1.14 (although my sqlalchemy is older).
>
> In the use_ansi=False SQL, the correct "fix" would be changing the rendered:
>
> AND bugs_1.deathdate IS NULL
>
> into
>
> AND bugs_1.deathdate(+) IS NULL
>
> This then matches the ANSI join and works on 8i (I've tested it).
>
> Is this something we can fix?  Since the column is on the remote table and
> specified in the join condition, it really needs "(+)" after the column name
> in SQL.  This accomplishes the same thing as the ANSI version placing this
> join condition in the "ON ..." clause instead of the "WHERE".
>
> Alternatively, is there a hack I could use to fix the rendered SQL on
> joinedloads for this particular relationship?

the miracle of Docker means that I now have easy-to-run Oracle, SQL
Server, etc. databases anywhere I need them, so I can quickly confirm
that this works with ANSI joins or not:

mapper(Rock, rocks_table,
    properties={
        'livingbugs': relationship(Bug,
            primaryjoin=and_(
                bugs_table.c.rockid == rocks_table.c.id,
                bugs_table.c.deathdate.op("(+)=")(null()),
            )),
    })
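As a quick standalone check of what that custom operator renders, the expression can be compiled on its own, without any database (a sketch; the table and column names mirror the mock-up above):

```python
import sqlalchemy as sa

# Minimal stand-in for the bugs table from the mock-up.
bugs_table = sa.table("bugs", sa.column("deathdate"))

# .op("(+)=") builds a custom binary operator, so the old-style Oracle
# outer-join marker lands right next to the column in the rendered SQL.
expr = bugs_table.c.deathdate.op("(+)=")(sa.null())
print(expr)
```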





>
> Thanks very much in advance!
> Kent
>



Re: [sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
OK so... don't use VARCHAR(max)?  What datatype would you like?  We
have most of them and you can make new ones too.
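For to_sql() specifically, pandas accepts a dtype mapping of column names to SQLAlchemy types, which keeps the generated DDL away from VARCHAR(max). A sketch matching the column widths of the hand-written dbo.DSI table quoted below (SQLite is used here only so the example runs; the real engine would be mssql+pyodbc, with schema="dbo" passed as well):

```python
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("sqlite://")  # stand-in for the mssql+pyodbc engine

df = pd.DataFrame({
    "a": [pd.Timestamp("2017-09-13 15:07:50")],
    "b": ["x"], "c": ["y"], "d": ["z"], "e": ["w"],
    "f": [0.123456], "g": [0.654321],
})

# With explicit SQLAlchemy types, object columns no longer default to
# VARCHAR(max) in the CREATE TABLE that pandas emits.
df.to_sql(
    "DSI", engine, if_exists="append", index=False,
    dtype={
        "a": sa.DateTime(),
        "b": sa.VARCHAR(10), "c": sa.VARCHAR(100),
        "d": sa.VARCHAR(10), "e": sa.VARCHAR(100),
        "f": sa.Numeric(8, 6), "g": sa.Numeric(8, 6),
    },
)
```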

On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger
 wrote:
> Mike,
>
> here's the error stack (I had to mask some details):
> The columns (dataformats) in the create table statement are wrong. Also,
> this table does not have an index.
>
> 2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT
> SERVERPROPERTY('ProductVersion')
> 2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine ()
> 2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT
> schema_name()
> 2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine ()
> 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT CAST('test
> plain returns' AS VARCHAR(60)) AS anon_1
> 2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine ()
> 2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT CAST('test
> unicode returns' AS NVARCHAR(60)) AS anon_1
> 2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine ()
> 2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT
> [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA],
> [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME],
> [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME],
> [INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE],
> [INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE],
> [INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION],
> [INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH],
> [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION],
> [INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE],
> [INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT],
> [INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME]
> FROM [INFORMATION_SCHEMA].[COLUMNS]
> WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS NVARCHAR(max))
> AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
> 2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine ('dbo.MSODS_DSI',
> 'dbo')
> 2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine
> CREATE TABLE [dbo.MSODS_DSI] (
> [a] DATETIME NULL,
> [b] VARCHAR(max) NULL,
> [c] VARCHAR(max) NULL,
> [d] VARCHAR(max) NULL,
> [e] VARCHAR(max) NULL,
> [f] FLOAT(53) NULL,
> [g] FLOAT(53) NULL
> )
>
>
> 2017-09-13 15:07:52,152 INFO sqlalchemy.engine.base.Engine ()
> 2017-09-13 15:07:54,374 INFO sqlalchemy.engine.base.Engine ROLLBACK
> ---
> ProgrammingError  Traceback (most recent call last)
> /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py
> in _execute_context(self, dialect, constructor, statement, parameters,
> *args)
>1181 parameters,
> -> 1182 context)
>1183 except BaseException as e:
>
> /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py
> in do_execute(self, cursor, statement, parameters, context)
> 469 def do_execute(self, cursor, statement, parameters,
> context=None):
> --> 470 cursor.execute(statement, parameters)
> 471
>
> ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL
> Server][SQL Server]The statement failed. Column 'b' has a data type that
> cannot participate in a columnstore index.\r\nOperation cancelled by user.
> (35343) (SQLExecDirectW)")
>
> The above exception was the direct cause of the following exception:
>
> ProgrammingError  Traceback (most recent call last)
>  in ()
>  17 #cnxn = pyodbc.connect(connection_str)
>  18 #engn.connect()
> ---> 19 df.to_sql(tbl_server_out, engn, if_exists='append', index=False)
>
> /home/saravji/anaconda3/lib/python3.6/site-packages/pandas/core/generic.py
> in to_sql(self, name, con, flavor, schema, if_exists, index, index_label,
> chunksize, dtype)
>1343 sql.to_sql(self, name, con, flavor=flavor, schema=schema,
>1344if_exists=if_exists, index=index,
> index_label=index_label,
> -> 1345chunksize=chunksize, dtype=dtype)
>1346
>1347 def to_pickle(self, path, compression='infer'):
>
> /home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in
> to_sql(frame, name, con, flavor, schema, if_exists, index, index_label,
> chunksize, dtype)
> 469 pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
> 470   index_label=index_label, schema=schema,
> --> 471   chunksize=chunksize, dtype=dtype)
> 472
> 473
>
> /home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in
> to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize,
> dtype)
>1148  if_exists=if_exists,
> index_label=index_label,
>1149  schema=schema, dtype=dtype)
> -> 1150 table.create()
>1151 table.insert(chunksize)
>1152 if (not 

[sqlalchemy] Re: pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread dirk.biesinger
Mike,

here's the error stack (I had to mask some details):
The columns (dataformats) in the create table statement are wrong. Also, 
this table does not have an index.

2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT 
 SERVERPROPERTY('ProductVersion')
2017-09-13 15:07:50,202 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,246 INFO sqlalchemy.engine.base.Engine SELECT 
schema_name()
2017-09-13 15:07:50,247 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2017-09-13 15:07:50,522 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,562 INFO sqlalchemy.engine.base.Engine SELECT 
CAST('test unicode returns' AS NVARCHAR(60)) AS anon_1
2017-09-13 15:07:50,563 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:50,604 INFO sqlalchemy.engine.base.Engine SELECT 
[INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA], 
[INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME], 
[INFORMATION_SCHEMA].[COLUMNS].[COLUMN_NAME], 
[INFORMATION_SCHEMA].[COLUMNS].[IS_NULLABLE], 
[INFORMATION_SCHEMA].[COLUMNS].[DATA_TYPE], 
[INFORMATION_SCHEMA].[COLUMNS].[ORDINAL_POSITION], 
[INFORMATION_SCHEMA].[COLUMNS].[CHARACTER_MAXIMUM_LENGTH], 
[INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_PRECISION], 
[INFORMATION_SCHEMA].[COLUMNS].[NUMERIC_SCALE], 
[INFORMATION_SCHEMA].[COLUMNS].[COLUMN_DEFAULT], 
[INFORMATION_SCHEMA].[COLUMNS].[COLLATION_NAME] 
FROM [INFORMATION_SCHEMA].[COLUMNS] 
WHERE [INFORMATION_SCHEMA].[COLUMNS].[TABLE_NAME] = CAST(? AS 
NVARCHAR(max)) AND [INFORMATION_SCHEMA].[COLUMNS].[TABLE_SCHEMA] = CAST(? 
AS NVARCHAR(max))
2017-09-13 15:07:50,605 INFO sqlalchemy.engine.base.Engine 
('dbo.MSODS_DSI', 'dbo')
2017-09-13 15:07:52,151 INFO sqlalchemy.engine.base.Engine 
CREATE TABLE [dbo.MSODS_DSI] (
[a] DATETIME NULL, 
[b] VARCHAR(max) NULL, 
[c] VARCHAR(max) NULL, 
[d] VARCHAR(max) NULL, 
[e] VARCHAR(max) NULL, 
[f] FLOAT(53) NULL, 
[g] FLOAT(53) NULL
)


2017-09-13 15:07:52,152 INFO sqlalchemy.engine.base.Engine ()
2017-09-13 15:07:54,374 INFO sqlalchemy.engine.base.Engine ROLLBACK
---
ProgrammingError  Traceback (most recent call last)
/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py 
in _execute_context(self, dialect, constructor, statement, parameters, 
*args)
   1181 parameters,
-> 1182 context)
   1183 except BaseException as e:

/home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py
 
in do_execute(self, cursor, statement, parameters, context)
469 def do_execute(self, cursor, statement, parameters, 
context=None):
--> 470 cursor.execute(statement, parameters)
471 

ProgrammingError: ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL 
Server][SQL Server]The statement failed. Column 'b' has a data type that 
cannot participate in a columnstore index.\r\nOperation cancelled by user. 
(35343) (SQLExecDirectW)")

The above exception was the direct cause of the following exception:

ProgrammingError  Traceback (most recent call last)
 in ()
 17 #cnxn = pyodbc.connect(connection_str)
 18 #engn.connect()
---> 19 df.to_sql(tbl_server_out, engn, if_exists='append', index=False)

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/core/generic.py 
in to_sql(self, name, con, flavor, schema, if_exists, index, index_label, 
chunksize, dtype)
   1343 sql.to_sql(self, name, con, flavor=flavor, schema=schema,
   1344if_exists=if_exists, index=index, 
index_label=index_label,
-> 1345chunksize=chunksize, dtype=dtype)
   1346 
   1347 def to_pickle(self, path, compression='infer'):

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in 
to_sql(frame, name, con, flavor, schema, if_exists, index, index_label, 
chunksize, dtype)
469 pandas_sql.to_sql(frame, name, if_exists=if_exists, index=index,
470   index_label=index_label, schema=schema,
--> 471   chunksize=chunksize, dtype=dtype)
472 
473 

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in 
to_sql(self, frame, name, if_exists, index, index_label, schema, chunksize, 
dtype)
   1148  if_exists=if_exists, 
index_label=index_label,
   1149  schema=schema, dtype=dtype)
-> 1150 table.create()
   1151 table.insert(chunksize)
   1152 if (not name.isdigit() and not name.islower()):

/home/saravji/anaconda3/lib/python3.6/site-packages/pandas/io/sql.py in 
create(self)
596 "'{0}' is not valid for 
if_exists".format(self.if_exists))
597 else:
--> 598 self._execute_create()
599 
600 def insert_statement(self):


[sqlalchemy] use_ansi inconsistency with relationship's primaryjoin

2017-09-13 Thread Kent
I've got a strange relationship on a legacy Oracle 8i database which I need 
to support (whether I like it or not).

The cleanest approach is specifying that the 'primaryjoin' of the 
relationship in the mapper should include an extra join clause.  I hate 
doing this, but after trying many other approaches, I've found this is by 
far the cleanest, due to bad database design (which I can't control -- 
it's legacy).

Anyway, the attached script shows a simplified, analogous mock-up, which 
works *correctly* when joins are ANSI and *incorrectly* with use_ansi=False.

The script demonstrates an inconsistency between use_ansi=True and False on 
sqlalchemy version 1.1.14 (although my own sqlalchemy is older). 

In the use_ansi=False SQL, the correct "fix" would be changing the rendered:

AND bugs_1.deathdate IS NULL

into

AND bugs_1.deathdate(+) IS NULL

This then matches the ANSI join and works on 8i (I've tested it).

Is this something we can fix?  Since the column is on the *remote table* and 
is *specified in the join condition*, it really needs "(+)" after the column 
name in the SQL.  This accomplishes the same thing as the ANSI version, which 
places this join condition in the "ON ..." clause instead of the "WHERE".

Alternatively, is there a hack I could use to fix the rendered SQL on 
joinedloads for this particular relationship?

Thanks very much in advance!
Kent

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable 
Example.  See  http://stackoverflow.com/help/mcve for a full description.
--- 
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
from sqlalchemy import *
from sqlalchemy.orm import *
from datetime import date


# if use_ansi=True, this script succeeds
# if False, this script fails

use_ansi = False
#use_ansi = True

engine = create_engine('oracle://kent:kent@localhost:1521/xe', use_ansi=use_ansi, echo=True)

metadata = MetaData(engine)
Session = sessionmaker(bind=engine)

# a rock has many bugs

rocks_table = Table("rocks", metadata,
    Column("id", Integer, primary_key=True),
)

bugs_table = Table("bugs", metadata,
    Column("id", Integer, primary_key=True),
    Column("rockid", Integer, ForeignKey('rocks.id')),
    Column("deathdate", Date),
)

class Rock(object):
    pass

class Bug(object):
    pass

mapper(Rock, rocks_table,
    properties={
        'livingbugs': relationship(Bug,
            primaryjoin=and_(
                bugs_table.c.rockid == rocks_table.c.id,
                bugs_table.c.deathdate == None,
            )),
    })

mapper(Bug, bugs_table)

metadata.create_all()
try:
    s = Session()

    r = Rock()
    r.id = 55

    b = Bug()
    b.id = 1
    b.rockid = 55
    b.deathdate = date.today()

    s.add(r)
    s.add(b)
    s.commit()

    s = Session()

    rocks = s.query(Rock).options(joinedload('livingbugs')).all()
    if not rocks:
        # When not using ANSI, if
        # AND bugs_1.deathdate IS NULL
        # is changed to:
        # AND bugs_1.deathdate(+) IS NULL
        # then the join is consistent with the ANSI join and doesn't fail
        raise Exception("Rock not selected")

finally:
    metadata.drop_all()
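On the "is there a hack" question at the end of the message: one possible (hedged) approach is to post-process the rendered SQL before it reaches the cursor. The alias/column name "bugs_1.deathdate" is taken from the thread's example; everything else below is an assumption-level sketch, not a tested 8i fix:

```python
import re

def add_outer_join_marker(statement, column="bugs_1.deathdate"):
    """Rewrite 'AND bugs_1.deathdate IS NULL' as 'AND bugs_1.deathdate(+) IS NULL'
    so the non-ANSI Oracle join treats the condition as part of the outer join."""
    return re.sub(re.escape(column) + r"(\s+IS\s+NULL)",
                  column + r"(+)\1", statement)

print(add_outer_join_marker("AND bugs_1.deathdate IS NULL"))
# -> AND bugs_1.deathdate(+) IS NULL

# With a real engine, this could be wired in via SQLAlchemy's (real)
# before_cursor_execute event; retval=True lets the hook replace the SQL:
#
# from sqlalchemy import event
#
# @event.listens_for(engine, "before_cursor_execute", retval=True)
# def _rewrite(conn, cursor, statement, parameters, context, executemany):
#     return add_outer_join_marker(statement), parameters
```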




Re: [sqlalchemy] pandas to MS SQL DataWarehouse (to_sql)

2017-09-13 Thread Mike Bayer
We will need to add a new flag to the SQLAlchemy dialect that disallows
transaction calls of any kind, including rollbacks, on an autocommit
connection, since pyodbc is being buggy here.

On Wed, Sep 13, 2017 at 11:34 AM, Dirk Biesinger
 wrote:
> Actually Mike,
>
> the error is with the create table statement that gets executed by
> sqlalchemy.engine.base.Engine (the df.to_sql function has option
> if_exists='append' set):
> The table that is being created has different dataformat than the one that
> exists.
> This sql statement seems to be going thru, as the output continues
> afterwards.
>
> I have to check on the datawarehouse server once I am in the office if the
> create table statement did result in a new table or alterations to the
> existing table. (I don't have the correct ip address range from home to be
> able to connect to the server)
>
>
>
> On Wed, Sep 13, 2017 at 8:12 AM, Dirk Biesinger 
> wrote:
>>
>> Mike,
>>
>> for whatever reason the importlib.reload() did not actually reload the
>> patched file.
>> A fresh kernel did the trick.
>> When using the engine.connect() function, I have a connection.
>> When using the df.to_sql function I get a error message related to
>> incorrect datatypes.
>> I'd say this is success.
>> I'll fix the datatypes issue and post an update.
>>
>> Thanks.
>>
>> Dirk Biesinger
>>
>> dirk.biesin...@gmail.com  |  206.349.9769
>>
>> "Simplicity is the Mastery of Complexity"
>>
>> On Tue, Sep 12, 2017 at 10:46 PM, Mike Bayer 
>> wrote:
>>>
>>> On Tue, Sep 12, 2017 at 5:03 PM, Dirk Biesinger
>>>  wrote:
>>> >
>>> >
>>> > Here's the new error stack:
>>>
>>> no this is wrong:
>>>
>>>
>>> /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/dialects/mssql/base.py
>>> in initialize(self, connection)
>>>1741 def initialize(self, connection):
>>> -> 1742 super(MSDialect, self).initialize(connection)
>>>1743 self._setup_version_attributes()
>>>
>>>
>>> /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py
>>> in initialize(self, connection)
>>> 265
>>> --> 266 self.do_rollback(connection.connection)
>>> 267
>>>
>>>
>>> /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/default.py
>>> in do_rollback(self, dbapi_connection)
>>> 439 def do_rollback(self, dbapi_connection):
>>> --> 440 dbapi_connection.rollback()
>>> 441
>>>
>>>
>>> do_rollback() is overridden by the patch, so it can't be called in
>>> default.py.  Please make sure the patch applies an empty
>>> do_rollback() method to MSDialect_pyodbc.  You're definitely using an
>>> "mssql+pyodbc://" style URL, right?
>>>
>>>
>>>
>>>
>>>
>>> >
>>> > 2017-09-12 21:01:39,990 INFO sqlalchemy.engine.base.Engine SELECT
>>> > SERVERPROPERTY('ProductVersion')
>>> > 2017-09-12 21:01:39,991 INFO sqlalchemy.engine.base.Engine ()
>>> > 2017-09-12 21:01:40,032 INFO sqlalchemy.engine.base.Engine SELECT
>>> > schema_name()
>>> > 2017-09-12 21:01:40,033 INFO sqlalchemy.engine.base.Engine ()
>>> > 2017-09-12 21:01:40,422 INFO sqlalchemy.engine.base.Engine SELECT
>>> > CAST('test
>>> > plain returns' AS VARCHAR(60)) AS anon_1
>>> > 2017-09-12 21:01:40,423 INFO sqlalchemy.engine.base.Engine ()
>>> > 2017-09-12 21:01:40,463 INFO sqlalchemy.engine.base.Engine SELECT
>>> > CAST('test
>>> > unicode returns' AS NVARCHAR(60)) AS anon_1
>>> > 2017-09-12 21:01:40,465 INFO sqlalchemy.engine.base.Engine ()
>>> >
>>> >
>>> > ---
>>> > Empty Traceback (most recent call
>>> > last)
>>> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/pool.py
>>> > in
>>> > _do_get(self)
>>> >1121 wait = use_overflow and self._overflow >=
>>> > self._max_overflow
>>> > -> 1122 return self._pool.get(wait, self._timeout)
>>> >1123 except sqla_queue.Empty:
>>> >
>>> >
>>> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/util/queue.py
>>> > in get(self, block, timeout)
>>> > 144 if self._empty():
>>> > --> 145 raise Empty
>>> > 146 elif timeout is None:
>>> >
>>> > Empty:
>>> >
>>> > During handling of the above exception, another exception occurred:
>>> >
>>> > ProgrammingError  Traceback (most recent call
>>> > last)
>>> >
>>> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/engine/base.py
>>> > in _wrap_pool_connect(self, fn, connection)
>>> >2146 try:
>>> > -> 2147 return fn()
>>> >2148 except dialect.dbapi.Error as e:
>>> >
>>> > /home/saravji/anaconda3/lib/python3.6/site-packages/sqlalchemy/pool.py
>>> > in
>>> > unique_connection(self)
>>> > 327 """
>>> > --> 328 return _ConnectionFairy._checkout(self)
>>> > 329
>>> >
>>> > 

Re: [sqlalchemy] Non-server-side defaults for insert query with multiple values

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 7:42 AM, Colin Deasy  wrote:
> Hi Mike,
>
> Thanks for the detailed response.
> The workaround looks fine but I'm cautious of relying on the internal
> behaviour; in particular the naming convention "_m" and modifying the
> current params dictionary rather than returning a value.
>
> A concern I have with executemany is that we render the SQL only for the
> driver (mysql-python in this case) to parse it again.
> This seems like a needless and error prone operation since we have enough
> information present in SQLAlchemy to generate
> the correct SQL and parameters for a multi-row insert statement.

it is definitely not error prone; otherwise those errors would have been
reported as bugs and fixed.  If you look at how mysqlclient (which I
hope you are using, as mysql-python hasn't had a release or commit in
three years) does it, it invokes the statement multiple times in chunks
of values based on a fixed maximum statement length, thereby preventing
the statement from being too large for the MySQL protocol as well as
using fewer memory resources on the server (parsing a dozen 64K strings
instead of a 1M string).  Using executemany() in this regard is *less*
error prone than working up a multi-values statement outside of the
driver, unless you are taking similar measures.
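The chunking described above can be illustrated as follows; this is a toy sketch, not mysqlclient's actual code:

```python
# Pack rendered VALUES tuples into batches whose total size stays under
# a maximum statement length, so the driver can emit one INSERT per batch.
def chunk_rows(rows, render, max_len=64 * 1024):
    batch, size = [], 0
    for row in rows:
        rendered = render(row)
        if batch and size + len(rendered) + 1 > max_len:
            yield batch
            batch, size = [], 0
        batch.append(row)
        size += len(rendered) + 1  # +1 for the joining comma

    if batch:
        yield batch

rows = [{"user_id": i} for i in range(100)]
render = lambda r: "(%d)" % r["user_id"]
batches = list(chunk_rows(rows, render, max_len=24))
print(sum(len(b) for b in batches))  # 100: no rows lost across batches
```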

as far as "modifying the params dictionary instead of returning a
value", it was not clear in what context you are looking at
context.current_parameters; if it is a column default generator, as I
had assumed, then yes, you are given the same column repeatedly and
have no way to know which one you've landed on.  A patch to provide
this context and a helper method is at
https://bitbucket.org/zzzeek/sqlalchemy/issues/4075/multi-valued-insert-invokes-default-for.

To solve the problem for MySQL, though, I'd definitely stick with
letting the driver do it.  The overhead of the patch looks comparable
to the driver doing a minor regexp pass on the statement.  Or, play
with the patch and do some profiling to see how the difference works
out in terms of Python call counts.
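The helper being proposed would essentially split the munged multi-values dict back into per-row dicts. The function name below is hypothetical (not the bitbucket patch's actual API), and the "_m0" suffix convention is taken from this thread's description:

```python
# Given the munged parameter dict {'user_id_m0': 1, 'user_id_m1': 2, ...},
# recover one small dict per row so a default callable can work row-by-row.
def split_multiparams(parameters, cols):
    rows = []
    i = 0
    while ("%s_m%d" % (cols[0], i)) in parameters:
        rows.append({c: parameters["%s_m%d" % (c, i)] for c in cols})
        i += 1
    return rows

params = {"user_id_m0": 1, "user_id_m1": 2, "user_id_m2": 3}
print(split_multiparams(params, ["user_id"]))
# -> [{'user_id': 1}, {'user_id': 2}, {'user_id': 3}]
```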




>
> Cheers
> Colin
>
> On Thu, Sep 7, 2017 at 4:38 PM, Mike Bayer  wrote:
>>
>> On Thu, Sep 7, 2017 at 11:10 AM, Colin Deasy 
>> wrote:
>> > Hi,
>> >
>> > I have a use-case where I need to set the value of a column based on the
>> > values of other columns during an insert.
>> > This works fine for an insert for a single row where
>> > `context.current_parameters` reflects the values of the other columns I
>> > care
>> > about.
>> > When I have a multi-row insert, `context.current_parameters` is no
>> > longer
>> > usable since the column names are munged (e.g. assuming a `user_id`
>> > column,
>> > and an insert like `insert(table).values([{'user_id': 1}, {'user_id':
>> > 2},
>> > {'user_id': 3}])` you might get something like `{'user_id_m0': 1,
>> > 'user_id_m1': 2, 'user_id_m2': 3}`. It doesn't seem possible to tell
>> > which
>> > row we are operating on just by looking at `context` so we cannot
>> > determine
>> > which munged name we should look at. Ideally the current parameters
>> > would
>> > only reflect the parameters for a single row.
>> >
>> > Is this a known limitation of defaults usage? Any workarounds?
>>
>> normally you would use standard executemany() style:
>>
>> conn.execute(insert(table), [{"user_id": 1}, {"user_id": 2}, {"user_id":
>> 3}])
>>
>> In this form, you would receive each dictionary individually.
>>
>> The insert.values(< multiparams > ) syntax is specifically a single
>> SQL statement:
>>
>>   INSERT INTO table (user_id) VALUES (:user_id_m0) (:user_id_m1)
>> (:user_id_m2) (:user_id_m3) ...
>>
>> in this form, you are getting the parameters that are passed with the
>> statement, and it is a single dictionary with many values in it.
>> this statement form is unlike any others in SQL so no special steps
>> were taken within the context system to make it more palatable for the
>> calling program.
>>
>> This is easy to work around, and I'm also not sure if integrating the
>> workaround into the API emitted towards the default callable is the
>> right thing to do - perhaps some folks expect to see the full set of
>> parameters or have their own workarounds which would be impacted.
>> Workaround is:
>>
>> def my_callable(parameters):
>>
>>     if "user_id" in parameters:
>>         pass  # do the default thing
>>     else:
>>         cols_we_care_about = ('user_id', 'some_id')
>>         i = 0
>>         while "user_id_m%d" % i in parameters:
>>             d = dict((key, parameters["%s_m%d" % (key, i)])
>>                      for key in cols_we_care_about)
>>             my_callable(d)
>>             parameters.update(("%s_m%d" % (key, i), d[key]) for key in d)
>>             i += 1
>>
>>
>>
>> >
>> > Cheers
>> > Colin
>> >

Re: [sqlalchemy] Create table with IF NOT EXISTS clause in MySQL

2017-09-13 Thread Mike Bayer
On Wed, Sep 13, 2017 at 1:18 AM, sqlalchemy_mysql  wrote:
> Noticed that the generated 'create table' statement is missing the 'IF NOT
> EXISTS' clause, so when the code is run in parallel it throws
> OperationalError 1050 (Table already exists).  We call the
> metadata.create_all method with checkfirst set to True, as is the default.
> Is there any workaround or fix for this issue?

you should never run multiple create_all() calls in parallel.  Use a
mutex to ensure only one runs.
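A minimal sketch of that advice for threads within a single process follows; cross-process or multi-host deployments would need a database advisory lock or an external mutex instead, and the SQLite engine and helper name here are illustrative assumptions:

```python
import threading

from sqlalchemy import Column, Integer, MetaData, Table, create_engine, inspect

_ddl_lock = threading.Lock()

def safe_create_all(metadata, engine):
    # serialize DDL so only one thread runs CREATE TABLE at a time;
    # checkfirst=True then safely skips tables that already exist
    with _ddl_lock:
        metadata.create_all(engine, checkfirst=True)

engine = create_engine("sqlite://")
md = MetaData()
Table("t", md, Column("id", Integer, primary_key=True))

safe_create_all(md, engine)
safe_create_all(md, engine)  # second call is a harmless no-op
print(inspect(engine).has_table("t"))  # True
```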





Re: [sqlalchemy] Non-server-side defaults for insert query with multiple values

2017-09-13 Thread Colin Deasy
Hi Mike,

Thanks for the detailed response.
The workaround looks fine but I'm cautious of relying on the internal
behaviour; in particular the naming convention "_m" and modifying
the current params dictionary rather than returning a value.

A concern I have with executemany is that we render the SQL only for the
driver (mysql-python in this case) to parse it again.
This seems like a needless and error prone operation since we have enough
information present in SQLAlchemy to generate
the correct SQL and parameters for a multi-row insert statement.

Cheers
Colin

On Thu, Sep 7, 2017 at 4:38 PM, Mike Bayer  wrote:

> On Thu, Sep 7, 2017 at 11:10 AM, Colin Deasy 
> wrote:
> > Hi,
> >
> > I have a use-case where I need to set the value of a column based on the
> > values of other columns during an insert.
> > This works fine for an insert for a single row where
> > `context.current_parameters` reflects the values of the other columns I
> care
> > about.
> > When I have a multi-row insert, `context.current_parameters` is no longer
> > usable since the column names are munged (e.g. assuming a `user_id`
> column,
> > and an insert like `insert(table).values([{'user_id': 1}, {'user_id':
> 2},
> > {'user_id': 3}])` you might get something like `{'user_id_m0': 1,
> > 'user_id_m1': 2, 'user_id_m2': 3}`. It doesn't seem possible to tell
> which
> > row we are operating on just by looking at `context` so we cannot
> determine
> > which munged name we should look at. Ideally the current parameters would
> > only reflect the parameters for a single row.
> >
> > Is this a known limitation of defaults usage? Any workarounds?
>
> normally you would use standard executemany() style:
>
> conn.execute(insert(table), [{"user_id": 1}, {"user_id": 2}, {"user_id":
> 3}])
>
> In this form, you would receive each dictionary individually.
>
> The insert.values(< multiparams > ) syntax is specifically a single
> SQL statement:
>
>   INSERT INTO table (user_id) VALUES (:user_id_m0) (:user_id_m1)
> (:user_id_m2) (:user_id_m3) ...
>
> in this form, you are getting the parameters that are passed with the
> statement, and it is a single dictionary with many values in it.
> this statement form is unlike any others in SQL so no special steps
> were taken within the context system to make it more palatable for the
> calling program.

> This is easy to work around, and I'm also not sure if integrating the
> workaround into the API emitted towards the default callable is the
> right thing to do - perhaps some folks expect to see the full set of
> parameters or have their own workarounds which would be impacted.
> Workaround is:
>
> def my_callable(parameters):
>
>     if "user_id" in parameters:
>         pass  # do the default thing
>     else:
>         cols_we_care_about = ('user_id', 'some_id')
>         i = 0
>         while "user_id_m%d" % i in parameters:
>             d = dict((key, parameters["%s_m%d" % (key, i)])
>                      for key in cols_we_care_about)
>             my_callable(d)
>             parameters.update(("%s_m%d" % (key, i), d[key]) for key in d)
>             i += 1
>
>
>
> >
> > Cheers
> > Colin
> >
>

You received this message