Add ?autocommit=true to your pyodbc connection string
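A minimal sketch of what that looks like (server, database, and credentials below are placeholders, not from this thread):

```python
# Sketch: appending autocommit=true to an mssql+pyodbc SQLAlchemy URL.
# All connection details here are made-up placeholders.
from urllib.parse import quote_plus

odbc = quote_plus(
    "DRIVER={ODBC Driver 13 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=mydw;UID=user;PWD=secret"
)
url = f"mssql+pyodbc:///?odbc_connect={odbc}&autocommit=true"

# engine = sqlalchemy.create_engine(url)   # then df.to_sql(..., con=engine)
```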
On Thu, Jan 31, 2019, 12:48 PM Mark Pearl wrote:
No the error related to this:
sqlalchemy ProgrammingError: (pyodbc.ProgrammingError) ('42000', '[42000]
[Microsoft][ODBC Driver 13 for SQL Server][SQL Server]111214;An attempt to
complete a transaction has failed. No corresponding transaction found.
(111214) (SQLEndTran)') (Background on this error ...)
for "dm_exec_sessions" ? that's an old SQLAlchemy bug that was fixed
long ago. see https://github.com/sqlalchemy/sqlalchemy/issues/3994
please upgrade.
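If you're not sure whether the environment predates that fix, a quick stdlib check of the installed version (helper name is mine):

```python
# Report the installed SQLAlchemy version without importing the package.
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("SQLAlchemy"))
```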
On Wed, Jan 30, 2019 at 10:52 PM wrote:
Any solution for this?
On Monday, September 11, 2017 at 6:34:47 PM UTC-4, dirk.biesinger wrote:
Hi Dirk,
Happy to report that there are more projects using dw. I have the same
issues here. Using Azure SQL DW at the moment and building a serverless
function app that reads and sends data back to the SQL DW.
Did you eventually find a solution other than looping through the dataframe?
Cheers
There is definitely interest in this platform and we will need to support
it. It's just that it falls under the category of a subscriber database, like
Redshift or Google whatever it was called, so it needs some folks with access
to get it working and possibly support an external overlay project -
sqlalchemy
yeah, I ran into some 'nice' 'features' in this project.
the azure datawarehouse does not behave or work like a typical old-school
sql server.
There might be some potential there, just not sure how many projects would
be using the datawarehouse
On Wednesday, September 13, 2017 at 3:33:20 PM
On Wed, Sep 13, 2017 at 6:13 PM, dirk.biesinger
wrote:
unfortunately not, azure seems to offer free trials if you are willing
to give them a credit card
"oh that is very interesting." he says and then it's getting eerily quiet.
I guess Mike and Dilly are somewhere in the depth of code and docs...
On Wednesday, September 13, 2017 at 2:05:22 PM UTC-7, Mike Bayer wrote:
On Wed, Sep 13, 2017 at 4:29 PM, dirk.biesinger
wrote:
>
> as for the rollback call,
> adding it after the print command, does not change anything.
> When I insert it between cursor.execute() and rows = cursor.fetchall(), I
> get an error as expected.
oh that is very interesting.
I hear your pain,
unfortunately we have to have very tight security on our server instances;
I can only log into the server when I am in the office, and I can't log in
with my work laptop from home (it has a non-authorized IP address).
If it were possible, I would have given you access to a
On Wed, Sep 13, 2017 at 3:58 PM, dirk.biesinger
wrote:
using
connection = pyodbc.connect()
connection.autocommit = 0
cursor = connection.cursor()
cursor.execute([proper sql statement that references a table])
rows = cursor.fetchall()
print(rows)
cursor.close()
gives me the output out of the table that I expect.
So if you were wondering if a raw pyodbc connection works: it does.
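That raw DB-API sequence can be reproduced end to end against an in-memory SQLite database (table and rows here are invented; against the warehouse you'd swap in pyodbc.connect(...), which follows the same DB-API 2.0 shape):

```python
import sqlite3

# Stand-in for pyodbc.connect(...); same connect/cursor/execute/fetchall flow.
connection = sqlite3.connect(":memory:")
cursor = connection.cursor()
cursor.execute("CREATE TABLE t (id INTEGER, name TEXT)")
cursor.execute("INSERT INTO t VALUES (1, 'a'), (2, 'b')")
cursor.execute("SELECT id, name FROM t ORDER BY id")
rows = cursor.fetchall()
print(rows)          # [(1, 'a'), (2, 'b')]
cursor.close()
connection.close()
```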
On Wed, Sep 13, 2017 at 3:27 PM, dirk.biesinger
wrote:
I have the 'patched' pyodbc.py file active.
Executing your code snippet does NOT produce an error or any output for
that matter.
On Wednesday, September 13, 2017 at 12:22:30 PM UTC-7, Mike Bayer wrote:
On Wed, Sep 13, 2017 at 3:14 PM, dirk.biesinger
wrote:
um.
can you just confirm for me this makes the error?
connection = pyodbc.conn
Got ya,
so we could solve the issue on the sqlalchemy end with the alteration of
the pyodbc.py file.
I assume you'll include this in the next release?
The issue with creating a table when the option "if_exists='append'" is set
in the df.to_sql() call is a pandas problem.
Thank you for your help.
On Wed, Sep 13, 2017 at 2:41 PM, dirk.biesinger
wrote:
I don't get why the table is getting created in the first place. A table
with this name exists, and the option "if_exists='append'" should append
the dataframe to the existing table.
There should not be a dropping of the table (which I have not seen) nor
creation of the table.
And in case of cr
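The 'append' semantics being described can be sketched in plain DB-API terms: create only if missing, insert, and never drop (SQLite stand-in; pandas' actual implementation differs, this just shows the expected behavior):

```python
import sqlite3

def append_rows(conn, table, rows):
    """Mimic if_exists='append': create the table only if missing, never drop it."""
    cur = conn.cursor()
    cur.execute(f"CREATE TABLE IF NOT EXISTS {table} (id INTEGER, val TEXT)")
    cur.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
append_rows(conn, "events", [(1, "x")])
append_rows(conn, "events", [(2, "y")])   # second call appends; no drop/create
count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(count)   # 2
```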
OK so don't use VARCHAR(max)? What datatype would you like? We
have most of them and you can make new ones too.
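One way to act on that is to pick explicit column types up front instead of letting string columns default to VARCHAR(max). A hypothetical helper (the function name and the 4000-character NVARCHAR cap are my choices, not from the thread):

```python
def sql_type_for(dtype: str, max_len: int) -> str:
    """Map a dataframe dtype name to an explicit SQL Server type, avoiding VARCHAR(max)."""
    if dtype == "int64":
        return "BIGINT"
    if dtype == "float64":
        return "FLOAT"
    # Cap string columns at a concrete length instead of (max).
    return f"NVARCHAR({max(1, min(max_len, 4000))})"

print(sql_type_for("object", 120))   # NVARCHAR(120)
```

In practice, pandas' to_sql(dtype=...) takes SQLAlchemy type objects (e.g. sqlalchemy.types.NVARCHAR(120)) rather than raw strings; the helper just illustrates the sizing decision.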
On Wed, Sep 13, 2017 at 1:07 PM, dirk.biesinger
wrote:
Mike,
here's the error stack (I had to mask some details):
The columns (data formats) in the create table statement are wrong. Also,
this table does not have an index.
2017-09-13 15:07:50,200 INFO sqlalchemy.engine.base.Engine SELECT
SERVERPROPERTY('ProductVersion')
2017-09-13 15:07:50,202 INFO
re-attaching the files
On Monday, September 11, 2017 at 3:34:47 PM UTC-7, dirk.biesinger wrote:
>
> I am encountering errors when trying to use the pd.to_sql function to
> write a dataframe to MS SQL Data Warehouse.
> The connection works when NOT using sqlalchemy engines.
> I can read dataframes