Hi all!
I'm looking for a programming pattern to handle radio-button-like
behavior: in a group of rows, all with the same group_id, only one row
may have a flag column set to true. I'd like to do this without table
triggers that clear the flag on all rows in the group whenever the
updating row carries it.
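One trigger-free pattern, where the backend supports it, is a partial unique index plus a two-step update inside one transaction. A minimal sketch using stdlib sqlite3 (the `items` table and column names are made up; PostgreSQL accepts the same `CREATE UNIQUE INDEX ... WHERE` syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE items (
        id INTEGER PRIMARY KEY,
        group_id INTEGER NOT NULL,
        flag INTEGER NOT NULL DEFAULT 0
    );
    -- at most one flag = 1 row per group; a second one fails the index
    CREATE UNIQUE INDEX one_flag_per_group ON items (group_id) WHERE flag = 1;
    INSERT INTO items (group_id, flag) VALUES (1, 1), (1, 0), (1, 0), (2, 1);
""")

def select_flag(conn, item_id, group_id):
    """Move the flag to item_id: clear the old holder first, then set."""
    with conn:  # one transaction; clear-then-set never violates the index
        conn.execute(
            "UPDATE items SET flag = 0 WHERE group_id = ? AND flag = 1",
            (group_id,))
        conn.execute("UPDATE items SET flag = 1 WHERE id = ?", (item_id,))

select_flag(conn, 3, 1)
rows = conn.execute(
    "SELECT id FROM items WHERE group_id = 1 AND flag = 1").fetchall()
```

The index enforces the invariant at the database level, so any code path that forgets the clear step gets an integrity error instead of silently leaving two flags set.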
I never for a moment thought that your change was thoughtless. On the
contrary, I have huge respect for SQLAlchemy. I will try to test the
drop_all and your pyodbc issue with my setup and report back here later
today.
Meanwhile, I can tell you how to build the stack because it is a bit
tricky given
On Sep 8, 2011, at 9:37 AM, Victor Olex wrote:
I never for a moment thought that your change was thoughtless. On the
contrary, I have huge respect for SQLAlchemy. I will try to test the
drop_all and your pyodbc issue with my setup and report back here later
today.
thanks! Unfortunately
On Sep 8, 2011, at 9:32 AM, Vlad K. wrote:
As a by the way to this question, I've noticed that the order of queries
given before flush() is not preserved for the flush(). Any way to enforce the
order?
Trying to parse what this means. Suppose you did a single SELECT, loaded five
Pyodbc issue 209 works fine in my setup. I think the key thing is
matching the SQL Server version with the correct TDS protocol version and
the correct FreeTDS version. Also, with regard to your Mac testing, check
whether you have libiconv installed and that FreeTDS is built with it.
On Sep 8, 2011, at 11:37 AM, Victor Olex wrote:
Pyodbc issue 209 works fine in my setup.
that is very strange. There are files missing from the .zip. If you
installed from the zip I don't see how it built for you. Here's the original
issue:
I know of those issues with the pyodbc package. Michael, please read my
first response, where I wrote how to build the unixODBC, FreeTDS and
pyodbc stack. I gave this detail for a reason - so that you can
replicate my build.
By the way, I did SQLAlchemy-level testing as promised. Predictably,
the DDL
Since you are effectively overwriting the table with the new file
contents, the fastest approach may well be to truncate the table and then
insert all the contents. If you were to just append and update, then
session.merge() is a convenient way to do it, though I am not sure it is
the fastest.
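A sketch of the merge() path for the append-and-update case (the Model class and columns are made up, standing in for one CSV row's mapping):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Model(Base):
    __tablename__ = "model"
    id = Column(Integer, primary_key=True)
    some_field = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Model(id=5, some_field=1))
session.commit()

# merge() SELECTs by primary key, then UPDATEs an existing row or
# INSERTs a new one -- convenient, but it costs one SELECT per row.
session.merge(Model(id=5, some_field=7))   # existing pk -> UPDATE
session.merge(Model(id=6, some_field=2))   # new pk -> INSERT
session.commit()

values = [m.some_field for m in session.query(Model).order_by(Model.id)]
```

The per-row SELECT is why merge() tends to lose to truncate-and-bulk-insert when the whole table is being replaced anyway.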
On Sep 7, 5:53 pm, Vlad K.
Is there a method available to unregister an event listener? Can't think of
a specific use case right now, but it would go something like this.
define a listener with some complex logic
do stuff that uses that listener
unregister the listener because the complex stuff is done
continue on without
There's a remove() method that isn't documented, since there are event-specific
implementations that need to be built out, and there is also no test coverage.
Ticket 2268 is the placeholder for the task:
http://www.sqlalchemy.org/trac/ticket/2268
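In modern SQLAlchemy the remove() mentioned above exists as the public sqlalchemy.event.remove(). A sketch of the listen/remove pairing, with a made-up listener standing in for the "complex logic":

```python
from sqlalchemy import create_engine, event

seen = []

def record_statements(*args):
    # made-up listener; for before_cursor_execute, args[2] is the SQL text
    seen.append(args[2])

engine = create_engine("sqlite://")

# 1. register the listener
event.listen(engine, "before_cursor_execute", record_statements)

# 2. do stuff that uses it
with engine.connect() as conn:
    conn.exec_driver_sql("SELECT 1")
fired_while_listening = len(seen)

# 3. unregister it once the complex stuff is done
event.remove(engine, "before_cursor_execute", record_statements)

# 4. continue on without it
with engine.connect() as conn:
    conn.exec_driver_sql("SELECT 1")
fired_after_remove = len(seen) - fired_while_listening
```

After the remove() call the second connection's statements no longer reach the listener.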
On Sep 8, 2011, at 12:49 PM, Mike Conley wrote:
Hi Victor -
Since you're there, have you had any luck actually running the unit tests? The
test in particular here is:
./sqla_nose.py -v test.sql.test_types:UnicodeTest
--dburi=mssql+pyodbc://user:pass@dsn
Also, on the Mac, iODBC is the standard ODBC product. unixODBC can be built
though in
Unfortunately I don't have access to a blank database, so I took a
chance and ran your tests on a non-empty database. The tests are mostly
good: 5/7 pass. You should know that I used current trunk and simply
commented out the line which resets supports_unicode_binds, but it
should be equivalent
No, I can't truncate the table for other reasons, as I mentioned in my
original question. :)
The issue here was not how to sync the data, but whether processed rows
stay in the session even though the objects (model instances) are discarded
at the end of each iteration (each CSV row), or in
For example the following:
row = session.query(Model).filter_by(pkey=pkey_value).first() or Model()
row.some_field = 123
...
session.query(Model).filter_by(nonprimary_key=some_value).update({...},
synchronize_session=False)
session.merge(row)
session.flush()
When flush() gets called, the merge() is executed
On Sep 8, 2011, at 3:32 PM, Vlad K. wrote:
For example the following:
row = session.query(Model).filter_by(pkey=pkey_value).first() or Model()
row.some_field = 123
...
session.query(Model).filter_by(nonprimary_key=some_value).update({...}, synchronize_session=False)
session.merge(row)
You're welcome. As for the lack of response from pyodbc, that is indeed
sloppy, as is the fact that the PyPI package does not work. Hats off to
you for always being responsive (AFAIK). I often wonder what keeps you
so motivated, but that's off topic.
On Sep 8, 4:07 pm, Michael Bayer mike...@zzzcomputing.com
well I'm visibly upset about this one because i really *don't* have the
motivation to go down another FreeTDS hole :)
On Sep 8, 2011, at 4:13 PM, Victor Olex wrote:
You're welcome. As for the lack of response from pyodbc, that is indeed
sloppy, as is the fact that the PyPI package does not work. Hats
Ok, because the totalDue and totalPaid attributes are also SQLAlchemy
declarative model object properties, I converted them (and all other
similar property dependencies) to hybrid_properties and created the
associated @[property].expression methods for every hybrid_property
(though I think those
On Sep 8, 2011, at 5:32 PM, Tim Black wrote:
Ok, because the totalDue and totalPaid attributes are also SQLAlchemy
declarative model object properties, I converted them (and all other
similar property dependencies) to hybrid_properties and created the
associated @[property].expression
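A minimal sketch of that conversion (the Invoice model and its amount/paid columns are made-up stand-ins; the actual totalDue/totalPaid model is not shown in the thread):

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Invoice(Base):
    __tablename__ = "invoice"
    id = Column(Integer, primary_key=True)
    amount = Column(Integer)
    paid = Column(Integer)

    @hybrid_property
    def total_due(self):
        # Python-side evaluation on a loaded instance
        return self.amount - self.paid

    @total_due.expression
    def total_due(cls):
        # SQL-side expression, usable inside query criteria
        return cls.amount - cls.paid

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add_all([Invoice(amount=100, paid=40), Invoice(amount=50, paid=50)])
session.commit()

# the same attribute works both as a column expression and per instance
inst = session.query(Invoice).filter(Invoice.total_due > 0).one()
```

The .expression method is what lets the property appear in WHERE/ORDER BY clauses instead of only on loaded objects.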
Yes, that's how I know the order of events. I just checked the logs again
and put some sleep() between update() and merge(). It appears that
update() does some kind of implicit flush, because that commits the
dirtied properties of the row instance BEFORE the update is issued, so
that when
On Sep 8, 2011, at 6:00 PM, Vlad K. wrote:
Yes that's how I know the order of events. I just checked the logs again and
put some sleep() between update() and merge(). It appears that the update()
does some kind of implicit flush because that commits the dirtied
properties of the row
Yes, I earlier said it was merge() that took effect before update()
because that's how it looked (I didn't know about autoflush). Putting
a sleep before update() and merge() showed that merge() issued no SQL,
because the autoflush (as you say) triggered by the update() had
practically synced the session
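That behavior can be sketched as follows (the Model class and values are made up): the pending attribute change is autoflushed before Query.update() emits its UPDATE, so both writes reach the database in statement order.

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Model(Base):
    __tablename__ = "model"
    id = Column(Integer, primary_key=True)
    some_field = Column(Integer)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
session.add(Model(id=1, some_field=0))
session.commit()

row = session.get(Model, 1)
row.some_field = 123          # pending change, no SQL emitted yet

# Query.update() autoflushes first: UPDATE ... SET some_field = 123,
# then its own UPDATE ... SET some_field = some_field + 1
session.query(Model).filter_by(id=1).update(
    {"some_field": Model.some_field + 1}, synchronize_session=False)
session.commit()

value = session.get(Model, 1).some_field
```

Because the autoflush already wrote the dirtied attributes, a subsequent merge() of the same instance finds nothing left to emit.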
I was getting some strange transaction isolation behavior with
SQLAlchemy (0.7.2), psycopg2 (2.4.2), and PostgreSQL 8.4. In order to
investigate I wrote up a usage sequence that does this:
1. starts a transaction with a session (s1)
2. starts another transaction with session (s2)
3. updates a