On Jul 12, 2011, at 12:44 PM, Maciej Filip Szkodziński wrote:

> Oops, I was trying it with
> engine.connect().execution_options(autocommit=True).execute as well,
> and didn't notice there was a difference in the echo when editing the
> pastebin. There's of course no COMMIT at the end with eng.execute().
> 
> I've updated pastebin with both versions.
> 
> The behaviour doesn't change though. In both cases, if the file
> doesn't exist, it doesn't exist after finishing the BACKUP. If an
> empty file already exists, _even without the COMMIT_, the server still
> writes 2KB to it.
> 
> As for the autocommit error, the server complains about doing a backup
> within a transaction. I'm guessing sqlalchemy's engine.execute()
> doesn't automatically start a transaction, while pyodbc's
> conn.execute() without autocommit does start a transaction.

The DBAPI doesn't have a BEGIN; a connection is always in a transaction unless a 
non-standard "autocommit" flag, like PyODBC's, is used.

PyODBC should be starting a transaction the moment the first statement is 
emitted.   There should not be a behavioral difference between conn.execute() 
(a non-standard convenience method specific to PyODBC) and cursor.execute() 
(the DBAPI method), though such a difference is certainly possible.

In any case this is well within the range of PyODBC quirks, and you should check 
out the pyodbc site/mailing list/tracker for more details.
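
For reference, the pyodbc-level behavior that seems to be in play looks roughly 
like this (just a sketch, untested; the connection string and backup path below 
are placeholders):

import pyodbc

# placeholder connection string - adjust driver/server/credentials as needed
conn_str = "DRIVER={SQL Server};SERVER=localhost;UID=sa;PWD=secret"

# with the default autocommit=False, pyodbc implicitly opens a transaction on
# the first execute(), and SQL Server refuses to run BACKUP inside one;
# autocommit=True skips the implicit transaction entirely
conn = pyodbc.connect(conn_str, autocommit=True)
cursor = conn.cursor()
cursor.execute("BACKUP DATABASE test TO DISK=?", r"C:\backups\test.bak")
while cursor.nextset():   # drain the extra result sets so the backup can finish
    pass
cursor.close()
conn.close()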

> 
>> If you try to run a BACKUP via pyodbc, the cursor.execute() call starts and 
>> finishes with no error, but the backup doesn’t get made.  With help from 
>> CubicWeb‘s post MS SQL Server Backuping gotcha, I learned that BACKUP and 
>> RESTOREs over ODBC trigger some kind of asynchronous / multiple result set 
>> mode.
> 
> 
> I'm guessing sqlalchemy closes the cursor before the server can finish
> the backup?

The two terms "asynchronous" and "multiple result set" don't really go together 
here.  "Multiple result sets" refers to a cursor that returns multiple results via 
cursor.nextset().   SQLA currently doesn't support this mode of usage.  If 
SQLA's closing of the cursor is the issue here, then just use the PyODBC 
connection directly - engine.raw_connection() or engine.connect().connection 
will give you one from the pool.   If you flip any "autocommit" flags, you'd 
want to set them back when complete, or alternatively remove the connection 
from the pool using connection.detach() (a SQLAlchemy method added to the DBAPI 
connection).
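
Roughly, that approach would look like this (a sketch only, untested; it reuses 
the uid/pwd/server/backupFilePath names from your pastebin and assumes pyodbc's 
connection-level autocommit attribute is the flag you want to flip):

from sqlalchemy import create_engine

eng = create_engine("mssql+pyodbc://%s:%s@%s" % (uid, pwd, server))

raw = eng.raw_connection()          # checked-out DBAPI (pyodbc) connection proxy
raw.detach()                        # remove it from the pool so the autocommit
                                    # change can't leak back to other checkouts
raw.connection.autocommit = True    # flip pyodbc's flag on the actual connection
cursor = raw.cursor()
cursor.execute("BACKUP DATABASE test TO DISK=?", backupFilePath)
while cursor.nextset():             # consume result sets until the backup is done
    pass
cursor.close()
raw.close()                         # since it's detached, this closes for real

If you skip the detach(), set raw.connection.autocommit back to False before 
returning the connection to the pool.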

the "asynchronous background process" concept sounds a little more sketchy 
here, if that were the case I would think the BACKUP command has some way to 
check on the "status" of the operation before commit() occurs, but that's 
really not how SQL commands usually work - standard behavior is to block until 
complete.



> 
> 
> On Jul 12, 4:25 pm, Michael Bayer <mike...@zzzcomputing.com> wrote:
>> On Jul 12, 2011, at 8:27 AM, Maciej Filip Szkodziński wrote:
>> 
>>> The database is part of a desktop accounting application running on
>>> Windows. I'm writing a program to interface with it, to automate
>>> adding documents.
>> 
>>> The idea was that it would be run once a month to add a bunch of
>>> invoices. The backup would run before this operation, in case
>>> something unexpected goes wrong.
>> 
>>> Reducing the scope of the program, and dealing with backups separately
>>> does sound like a better idea.
>> 
>> engine.execute() is equivalent to:
>> 
>> conn = dbapi.connect(...)
>> cursor = conn.cursor()
>> cursor.execute(<statement>)
>> < possibly call conn.commit() here if the statement qualifies for autocommit>
>> return cursor via ResultProxy()
>> cursor.close()/conn.close() when ResultProxy is closed, automatic if no 
>> cursor.description was present.
>> 
>> your examples with pyodbc indicate pyodbc is a little conflicted about how to 
>> handle this.  The nonstandard conn.execute() call seems to fail if PyODBC's 
>> "autocommit" (note this is entirely distinct from the SQLAlchemy engine 
>> autocommit feature) is not enabled, yet your SQLAlchemy script didn't throw 
>> this error when using cursor.execute() (where again the pyodbc autocommit 
>> flag remains at false unless you modified it yourself).
>> 
>> If the "commit" call is required then you'd need to be using 
>> engine.connect().execution_options(autocommit=True).execute(my statement).   
>> The BACKUP command doesn't trigger autocommit, so I'm not sure how your SQL 
>> log ended up generating a COMMIT statement.
>> 
>> 
>> 
>>> On Jul 12, 7:44 am, Eric Ongerth <ericonge...@gmail.com> wrote:
>>>> I think more common practice is just to use shell scripts (whether in
>>>> scheduled tasks / cron jobs or manually) for backup.  But I don't know
>>>> MSSQL specifically.  I just have rarely heard of anyone trying to
>>>> accomplish their backup with SQLAlchemy as part of the chain of
>>>> command.
>> 
>>>> On Jul 11, 2:06 pm, Maciej Filip Szkodziński
>> 
>>>> <maciej.szkodzin...@gmail.com> wrote:
>>>>> Hello,
>> 
>>>>> I'm trying to backup a database running on SQL Server 2008 Express.
>>>>> I'm using pyodbc as the driver.
>> 
>>>>> This backup code executes happily, however no file is written to the
>>>>> provided path. I've also tried placing an empty file in the path, and
>>>>> only 2KB of data gets written to it.
>> 
>>>>> eng = create_engine("mssql+pyodbc://%s:%s@%s" % (uid, pwd, server))
>>>>> eng.execute('BACKUP DATABASE test TO DISK=?',
>>>>>                      backupFilePath)
>>>>> # <sqlalchemy.engine.base.ResultProxy object at 0x015868F0>
>> 
>>>>> os.path.isfile(backupFilePath)
>>>>> # False
>> 
>>>>> I am able to backup the database with the same parameters in 'bare'
>>>>> pyodbc.
>> 
>>>>> Here's a more verbose version of both, sqlalchemy and pyodbc, backup
>>>>> code: http://pastebin.com/6x1RRTqz
>> 
>>>>> I've also tried restoring an existing backup with sqlalchemy. Again, I
>>>>> get the ResultProxy, but the newly 'restored' database is stuck in
>>>>> perpetual 'Restoring...' state, and trying to use it results in:
>>>>> # Database 'test' cannot be opened. It is in the middle of a restore.
>> 
>>>>> I had a similar problem with bare pyodbc, and googling suggested that
>>>>> this loop is required for the backup/restore operation to continue and
>>>>> finish:
>> 
>>>>> while backupCursor.nextset():
>>>>>     pass
>> 
>>>>> where backupCursor is the one returned by execute('BACKUP...').
>> 
>>>>> Is there a different way of doing backups via sqlalchemy, or some way
>>>>> around this, or should I stick with bare pyodbc for backups?
>> 
>> 
>> 
> 
> 

