Oops, I was trying engine.connect().execution_options(autocommit=True).execute()
as well, and didn't notice that the echo output differed between the two
versions when editing the pastebin. There's of course no COMMIT at the
end with eng.execute().

I've updated pastebin with both versions.

The behaviour doesn't change, though. In both cases, if the file
doesn't exist beforehand, it still doesn't exist after the BACKUP
finishes. If an empty file already exists, the server still writes 2KB
to it, _even without the COMMIT_.

As for the autocommit error, the server complains about doing a backup
within a transaction. I'm guessing sqlalchemy's engine.execute()
doesn't automatically start a transaction, while pyodbc's
conn.execute() without autocommit does.
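
To illustrate what I mean (a rough sketch, not the exact pastebin code;
the connection string here is made up, the real one is in the pastebin):

import pyodbc

# made-up connection string -- the real one is in the pastebin
connStr = 'DRIVER={SQL Server};SERVER=%s;UID=%s;PWD=%s' % (server, uid, pwd)

# default autocommit=False: pyodbc wraps the statement in an implicit
# transaction, and the server complains that BACKUP can't run inside one
conn = pyodbc.connect(connStr)
conn.execute('BACKUP DATABASE test TO DISK=?', backupFilePath)

# with autocommit=True there's no implicit transaction and no complaint
conn = pyodbc.connect(connStr, autocommit=True)
conn.execute('BACKUP DATABASE test TO DISK=?', backupFilePath)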

I originally got the backup code from:
http://ryepup.unwashedmeme.com/blog/2010/08/26/making-sql-server-backups-using-python-and-pyodbc/
The author seems to know more about how SQL Server handles backups. To
quote:

> If you try to run a BACKUP via pyodbc, the cursor.execute() call starts and 
> finishes with no error, but the backup doesn’t get made.  With help from 
> CubicWeb's post MS SQL Server Backuping gotcha, I learned that BACKUP and 
> RESTOREs over ODBC trigger some kind of asynchronous / multiple result set 
> mode.


I'm guessing sqlalchemy closes the cursor before the server can finish
the backup?
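
If that's the case, maybe dropping down to the raw pyodbc connection and
draining the result sets manually would work -- something along these
lines (untested sketch; I'm not sure whether a commit() is needed, or
even allowed, afterwards):

from sqlalchemy import create_engine

eng = create_engine("mssql+pyodbc://%s:%s@%s" % (uid, pwd, server))

# borrow the underlying pyodbc connection, so sqlalchemy never gets a
# chance to close the cursor before the server is done
rawConn = eng.raw_connection()
cursor = rawConn.cursor()
cursor.execute('BACKUP DATABASE test TO DISK=?', backupFilePath)

# same loop as in the bare pyodbc version: keep consuming result sets
# until the backup actually finishes
while cursor.nextset():
    pass

cursor.close()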


On Jul 12, 4:25 pm, Michael Bayer <mike...@zzzcomputing.com> wrote:
> On Jul 12, 2011, at 8:27 AM, Maciej Filip Szkodziński wrote:
>
> > The database is part of a desktop accounting application running on
> > Windows. I'm writing a program to interface with it, to automate
> > adding documents.
>
> > The idea was that it would be run once a month to add a bunch of
> > invoices. The backup would run before this operation, in case
> > something unexpected goes wrong.
>
> > Reducing the scope of the program, and dealing with backups separately
> > does sound like a better idea.
>
> engine.execute() is equivalent to:
>
> conn = dbapi.connect(...)
> cursor = conn.cursor()
> cursor.execute(<statement>)
> < possibly call conn.commit() here if the statement qualifies for autocommit>
> return cursor via ResultProxy()
> cursor.close()/conn.close() when ResultProxy is closed, automatic if no 
> cursor.description was present.
>
> your examples with pyodbc indicate pyodbc is a little conflicted how to 
> handle this.  The nonstandard conn.execute() call seems to fail if Pyodbc's 
> "autocommit" (note this is entirely distinct from the SQLAlchemy engine 
> autocommit feature) is not enabled, yet your SQLAlchemy script didn't throw 
> this error when using cursor.execute() (where again the pyodbc autocommit 
> flag remains at false unless you modified it yourself).
>
> If the "commit" call is required then you'd need to be using 
> engine.connect().execution_options(autocommit=True).execute(my statement).   
> The BACKUP command doesn't trigger autocommit, so I'm not sure how your SQL 
> log ended up generating a COMMIT statement.
>
>
>
> > On Jul 12, 7:44 am, Eric Ongerth <ericonge...@gmail.com> wrote:
> >> I think more common practice is just to use shell scripts (whether in
> >> scheduled tasks / cron jobs or manually) for backup.  But I don't know
> >> MSSQL specifically.  I just have rarely heard of anyone trying to
> >> accomplish their backup with SQLAlchemy as part of the chain of
> >> command.
>
> >> On Jul 11, 2:06 pm, Maciej Filip Szkodziński <maciej.szkodzin...@gmail.com> wrote:
> >>> Hello,
>
> >>> I'm trying to backup a database running on SQL Server 2008 Express.
> >>> I'm using pyodbc as the driver.
>
> >>> This backup code executes happily, however no file is written to the
> >>> provided path. I've also tried placing an empty file in the path, and
> >>> only 2KB of data gets written to it.
>
> >>> eng = create_engine("mssql+pyodbc://%s:%s@%s" % (uid, pwd, server))
> >>> eng.execute('BACKUP DATABASE test TO DISK=?',
> >>>                      backupFilePath)
> >>> # <sqlalchemy.engine.base.ResultProxy object at 0x015868F0>
>
> >>> os.path.isfile(backupFilePath)
> >>> # False
>
> >>> I am able to backup the database with the same parameters in 'bare'
> >>> pyodbc.
>
> >>> Here's a more verbose version of both the sqlalchemy and pyodbc backup
> >>> code: http://pastebin.com/6x1RRTqz
>
> >>> I've also tried restoring an existing backup with sqlalchemy. Again, I
> >>> get the ResultProxy, but the newly 'restored' database is stuck in
> >>> perpetual 'Restoring...' state, and trying to use it results in:
> >>> # Database 'test' cannot be opened. It is in the middle of a restore.
>
> >>> I had a similar problem with bare pyodbc, and googling suggested that
> >>> this loop is required for the backup/restore operation to continue and
> >>> finish:
>
> >>> while backupCursor.nextset():
> >>>     pass
>
> >>> where backupCursor is the one returned by execute('BACKUP...').
>
> >>> Is there a different way of doing backups via sqlalchemy, or some way
> >>> around this, or should I stick with bare pyodbc for backups?
>
