Re: Using alembic with two projects, one with foreign keys referencing the other

2016-09-29 Thread Mike Bayer



On 09/29/2016 07:52 PM, Wesley Weber wrote:

I have an uncommon use case that might or might not be supported (or
even a good idea):

I have a main, independent project with its own tables. I would like to
keep track of migrations of its database. I will also have several
"plugin" projects, which can add functionality to the main software.
These may create and manage their own tables, and the main project need
not know about them. They may not even share the same source tree.


this is actually a common use case.


However, I would like to support referential integrity in the tables
created by the plugin projects (e.g., there might be a "users" table in
the main project, and the plugin might add its own "users_extra_info"
table referencing the "users" table).

I know how to have the plugin projects use their own alembic folder and
use a different alembic_version table, but what happens when the main
project undergoes a migration that breaks referential integrity with the
plugin projects? (the "users" table changes, violating foreign keys in
"users_extra_info" from another project, for example). Is there a way to
handle this? Alternatively, is this not a good design pattern?


that's actually up to how you advertise your model to plugin authors. 
A foreign key constraint typically refers to the primary key of a target 
table, or in less common cases to columns that are inside of a 
UNIQUE constraint.   You'd not want to change those columns around if 
your plugin authors are making FKs towards them, or if you do want to 
change them, your plugin authors would have to know about it.   You might 
want to use a semantic versioning scheme where you don't make any such 
changes within minor versions, and have your plugin authors target a 
ceiling version so that their plugins aren't broken by such changes.
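
To make that concrete, here is a minimal sketch of what a plugin's own 
migration might look like under that scheme, assuming the main project 
exposes a "users" table with a stable integer primary key "id" (the names 
here are illustrative, not from the thread):

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        # plugin-owned table; the FK targets the main project's stable PK
        op.create_table(
            'users_extra_info',
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('user_id', sa.Integer,
                      sa.ForeignKey('users.id'), nullable=False),
            sa.Column('extra', sa.String(255)),
        )

    def downgrade():
        op.drop_table('users_extra_info')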







Thanks



Re: [sqlalchemy] two-phase for pyodbc dialect

2016-09-29 Thread Mike Bayer



On 09/29/2016 03:50 AM, Andrea Cassioli wrote:

Hi,
I am working with SQLAlchemy to insert data into an MSSQL database.
Typically we need to make regular inserts of, say, half a million rows into
a few tables.

Right now I am using the Core flavour to make batch insertions via pyodbc,
and I have noticed that in the DB trace I have an entry per row

|
declare @p1 int
set @p1=12996
exec sp_prepexec @p1 output,N'@P1 nvarchar(36),@P2 bigint,@P3 float,@P4
float,@P5 float,@P6 float,@P7 nvarchar(5),@P8 nvarchar(5),@P9
nvarchar(3),@P10 nvarchar(3),@P11 nvarchar(9),@P12 nvarchar(1),@P13
nvarchar(7),@P14 float,@P15 float,@P16 float,@P17 float,@P18 float,@P19
float,@P20 float,@P21 float,@P22 int,@P23 float,@P24 varchar(1),@P25
nvarchar(2),@P26 varchar(1),@P27 float,@P28 int,@P29 float',N'INSERT
INTO tmp.guest.[PythonAccess] ([RunID], [DemandID], [FFEPerWeek],
[WeightPerFFE], [MXNPerFFE], [TimeLimit], [PortFrom], [PortTo],
[CargoType], [CtrType], [StringCode], [Direction], [TradeLane],
[20fPct], [40fPct], [45fPct], [MaxTransshipments], [CostLimit],
[FFEPerWeekSource], [WeightPerFFESource], [TimeLimitSource],
[MaxTransshipmentsSource], [CostLimitSource], [SourceChanges],
[RouteCode], [Source], [TimeLimitSlack], [MaxTransshipmentsSlack],
[CostLimitSlack]) VALUES (@P1, @P2, @P3, @P4, @P5, @P6, @P7, @P8, @P9,
@P10, @P11, @P12, @P13, @P14, @P15, @P16, @P17, @P18, @P19, @P20, @P21,
@P22, @P23, @P24, @P25, @P26, @P27, @P28,
@P29)',N'737f1545-a49d-4120-81d4-47fa642d4e0b',12988,0,2759662062713899,36343,73611109,5,50,625,N'LVRIX',N'CMDLA',N'DRY',N'DRY',N'WAFEUR999',N'S',N'EUR/WAF',0,83302701910413379,0,16697298089586618,0,1,803,52160288929122,NULL,NULL,NULL,NULL,NULL,NULL,N'W3',NULL,NULL,NULL,NULL
select @p1
|


What surprises me is that there is a prepare per row.


OK, I think you are referring to a "prepared statement", not a two-phase 
PREPARE.   Those are two different concepts.



A similar C++
implementation shows only an initial prepare and then only execs, and
indeed it is much faster.

I then saw that I can force a prepare on the transaction

|
with engine.connect() as conn:
    trans = conn.begin_twophase()
    trans.prepare()
|




But the two-phase transaction is not supported for pyodbc.

Why is that? Any suggestions/alternatives?


So assuming you mean "prepared statements", the Python DBAPI does not 
have an explicit "prepared statements" feature.  A DBAPI like Pyodbc can 
choose to use prepared statements internally, either for some statements 
based on statement type, or sometimes as the implementation for the 
executemany() call that runs multiple parameter sets.  It may or may not 
do so.  If you are actually looking to invoke this stored procedure 
yourself, you would do that via the DBAPI cursor.callproc() method.  This 
method is not part of SQLAlchemy; you would access the cursor directly.  
An example of this is at 
http://docs.sqlalchemy.org/en/rel_1_0/core/connections.html?highlight=callproc#calling-stored-procedures 
.
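
A rough sketch of that pattern, adapted from the linked doc (the procedure 
name "my_proc" and its arguments are placeholders):

    # drop below SQLAlchemy to the raw DBAPI connection and cursor
    connection = engine.raw_connection()
    try:
        cursor = connection.cursor()
        cursor.callproc("my_proc", ["x", "y", "z"])
        results = list(cursor.fetchall())
        cursor.close()
        connection.commit()
    finally:
        connection.close()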


As for two-phase, that feature isn't supported for SQL Server either; 
overall, just about every database driver in Python can barely get 
reliable behavior out of 2PC, with the exception of Postgresql, so this 
is basically something nobody ever uses.  It's not related to what you 
are illustrating above.









Thanks a lot.



Re: [sqlalchemy] on_conflict_do_update (ON CONFLICT ... Postgresql 9.5) WHERE clause problem

2016-09-29 Thread Mike Bayer


I just pushed that and it should close the bitbucket issue.
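
For reference, a minimal sketch of the intended usage (the index_elements 
value and the literal values are assumptions for illustration); with the 
fix, the WHERE clause should render with the EXCLUDED alias, i.e. 
"WHERE foo.bar < excluded.bar":

    from sqlalchemy.dialects.postgresql import insert

    insert_stmt = insert(Foo.__table__).values(id=1, bar=10)
    on_update_stmt = insert_stmt.on_conflict_do_update(
        index_elements=['id'],
        set_=dict(bar=insert_stmt.excluded.bar),
        where=(Foo.__table__.c.bar < insert_stmt.excluded.bar),
    )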


On 09/29/2016 03:07 AM, pszynk wrote:

I see you already looked into it. Thanks!

On Wednesday, September 28, 2016 at 20:55:05 UTC+2, Mike Bayer wrote:

this is likely a use case that has been untested; if you can file this
w/ a complete test case as a bug report on bitbucket we can start
looking into it.


On 09/28/2016 12:05 PM, Paweł Szynkiewicz wrote:
> Hello all
>
> SA: 1.1.0b3
> db: postgresql 9.5
>
> I have a problem with the method on_conflict_do_update for the
> pg-specific insert, precisely with the where argument. It looks like
> the wrong SQL statement is generated.
>
> example:
>
> class Foo(Base):
>   ...
>   bar = Column(Integer)
>
> insert_stmt = insert(Foo)
>
> on_update_stmt = insert_stmt.on_conflict_do_update(
>     set_=dict(
>         bar=insert_stmt.excluded.bar,
>     ),
>     where=(Foo.bar < insert_stmt.excluded.bar)
> )
>
> session.execute(on_update_stmt, data)
>
> it gives an error, and rightly so:
>> column reference "bar" is ambiguous
>
> SQL looks like that:
>
> SQL: 'INSERT INTO foo (...) VALUES (...) ON CONFLICT (...) DO UPDATE SET
> bar = excluded.bar WHERE bar < bar'
>
> The WHERE clause is not expanded properly; the alias EXCLUDED is
> omitted. Is this a bug, or am I doing something wrong?
>
> the workaround I use is:
>
> ...
> where=(text('foo.bar < EXCLUDED.bar'))
> ...
>
> Cheers
>
>
>



Re: [sqlalchemy] Session / declarative_base scope

2016-09-29 Thread Mike Bayer



On 09/29/2016 01:38 AM, Warwick Prince wrote:

Hi Mike

I would like a little insight into the session object, and the
declarative_base class.

I have a process running many threads, where each thread may be
connected to potentially a different engine/database.  If the database
connection between 2 or more threads is the same, then they will share
the same engine.  However, they each have their own MetaData objects.

There is a global sessionmaker() that has no binding at that time.
When each thread creates its OWN session, it does mySession =
Session(bind=myThreadsEngine).

The Engines and MetaData part has worked perfectly for years, using
basic queries like Table('some_table', threadMetaData,
autoload=True).select().execute().fetchall(), etc.

I’ve started to use the ORM more now, and am using the relationships
between the objects.  However, I’m hitting an issue that appears to
centre around some shared registry or class variables or something that
is causing a conflict.

I’ve made it so each THREAD has its own Base =
declarative_base(metadata=theSessionsMetaData)

Then, classes are mapped dynamically based on this new Base, and the
columns are autoload’ed.  Again, this is working - sometimes.   There’s
some random-like problem that mostly means it does not work when I do a
mySession.query(myMappedClassWithRelationships) and I get the following
exception being raised;


so generating new classes in threads can be problematic because the 
registry of mappers is essentially global state.   Initialization of 
mappers against each other, which is where your error here is, is 
mutexed and is overall thread-safe, but still, you need to make sure 
that all the things your class needs in order to be used actually exist. 
Here, somewhere in your program you have a class called 
SalesDocumentLine, and that class has not been seen by your Python 
interpreter yet.   That the problem only happens randomly in threads 
implies some kind of race condition, which will make this harder to 
diagnose, but basically that name has to exist if your mapping refers to 
it.   You might want to play with the configure_mappers() call, which 
will cause this initialization to occur at the point you tell it to.
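
A minimal sketch of that suggestion (exactly where to call it in your 
thread setup is an assumption on my part):

    from sqlalchemy.orm import configure_mappers

    # after all mapped classes for this thread's Base have been defined,
    # force relationship initialization at a predictable point; a missing
    # name like "SalesDocumentLine" will raise here rather than later
    # inside an arbitrary Query
    configure_mappers()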







  File
"C:\Python27\lib\site-packages\dap-2.1.2-py2.7.egg\dap\db\dbutils.py",
line 323, in DAPDB_SetColumns
query = session.query(mappedClass).filter_by(**whereCriteria)
  File "build\bdist.win32\egg\sqlalchemy\orm\session.py", line 1260, in
query
return self._query_cls(entities, self, **kwargs)
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 110, in
__init__
self._set_entities(entities)
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 120, in
_set_entities
self._set_entity_selectables(self._entities)
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 150, in
_set_entity_selectables
ent.setup_entity(*d[entity])
  File "build\bdist.win32\egg\sqlalchemy\orm\query.py", line 3446, in
setup_entity
self._with_polymorphic = ext_info.with_polymorphic_mappers
  File "build\bdist.win32\egg\sqlalchemy\util\langhelpers.py", line 754,
in __get__
obj.__dict__[self.__name__] = result = self.fget(obj)
  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 1891, in
_with_polymorphic_mappers
configure_mappers()
  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 2768, in
configure_mappers
mapper._post_configure_properties()
  File "build\bdist.win32\egg\sqlalchemy\orm\mapper.py", line 1708, in
_post_configure_properties
prop.init()
  File "build\bdist.win32\egg\sqlalchemy\orm\interfaces.py", line 183,
in init
self.do_init()
  File "build\bdist.win32\egg\sqlalchemy\orm\relationships.py", line
1628, in do_init
self._process_dependent_arguments()
  File "build\bdist.win32\egg\sqlalchemy\orm\relationships.py", line
1653, in _process_dependent_arguments
setattr(self, attr, attr_value())
  File
"build\bdist.win32\egg\sqlalchemy\ext\declarative\clsregistry.py", line
293, in __call__
(self.prop.parent, self.arg, n.args[0], self.cls)
InvalidRequestError: When initializing mapper
Mapper|SalesDocument|rm_dt_documents, expression
'SalesDocumentLine.parentID==SalesDocument.id' failed to locate a name
("name 'SalesDocumentLine' is not defined"). If this is a class name,
consider adding this relationship() to the  class after both dependent classes
have been defined.

I understand what this is trying to tell me, however, the classes ARE
defined.  Sometimes the code works perfectly, but mostly not.  If I have
ONE Thread working and then start up another using exactly the same
code, then it will probably NOT work but more importantly, the one that
WAS working then dies with the same error.  Clearly something somewhere
is shared - I just can’t find out what it is, or how I can separate the
code further.

In summary:

one global sessionmaker()
global Session = sessionmaker()
each thread (for the example here) shares an Engine
each thread has its OWN 

Re: [sqlalchemy] Feedback appreciated

2016-09-29 Thread Mike Bayer



On 09/28/2016 06:48 PM, Seth P wrote:

On Wednesday, September 28, 2016 at 5:43:04 PM UTC-4, Mike Bayer wrote:

looks incredibly difficult.   I'm not really going to have the resources
to work with a type that awkward anytime soon, unfortunately.   If it
could be made to be a drop-in for 1.1's ARRAY feature, that would be
helpful, but it at least needs bound parameter support to be solid.


Would it be possible to add read-only support? It looks like cx_Oracle
returns selected varray values in a pretty straightforward form.
That would still be very useful (at least in my case, where I would be
populating the database using SQL*Loader anyway).


you can add your own types to do these things also, especially for 
read-only use; just make any subclass of UserDefinedType and apply 
whatever result-row handling is needed for how cx_Oracle is returning 
the data.
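
A rough sketch of such a read-only type, assuming (as your message 
suggests) that cx_Oracle hands back varray values as plain Python 
sequences; the type name "MY_VARRAY_TYPE" is a placeholder:

    from sqlalchemy.types import UserDefinedType

    class ReadOnlyVarray(UserDefinedType):
        def get_col_spec(self, **kw):
            return "MY_VARRAY_TYPE"

        def result_processor(self, dialect, coltype):
            # normalize whatever sequence cx_Oracle returns into a list
            def process(value):
                return list(value) if value is not None else None
            return process

        def bind_processor(self, dialect):
            # read-only: refuse to send values back to the database
            def process(value):
                raise NotImplementedError("write support not implemented")
            return process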


The hard part about types is the elaborate expression support (e.g. 
JSON's foo ->> bar vs. foo -> bar in PG).   Reading and writing a value 
is not that hard, and especially if the type is just specific to what 
you need right now, you don't have the burden of making sure your type 
works for all versions / flags / settings of Oracle / cx_Oracle etc.









Re: [sqlalchemy] How can I programmatically give a hybrid_property its name?

2016-09-29 Thread Simon King
On Thu, Sep 29, 2016 at 6:32 AM, Jinghui Niu  wrote:
> The documentation shows that hybrid_property should be used as a decorator,
> like:
>
> @hybrid_property
> def my_property(self):
>     pass
>
> What if I wanted to give this hybrid property a name by referring to a
> variable at runtime? Is this allowed? Thanks.
>

I'm not entirely sure what you mean - I can't imagine a situation
where this would be useful. But you ought to be able to add hybrid
properties to a class after the class has been defined, and you can
use setattr if the name is stored in a variable. So I imagine
something like this ought to work:

from sqlalchemy.ext.hybrid import hybrid_property

def fget(self):
    # code when the property is accessed on an instance
    pass

def fset(self, value):
    # code when the property is assigned to
    pass

def expr(cls):
    # code when the property is accessed on the class
    pass

prop = hybrid_property(fget, fset=fset, expr=expr)

# Add the hybrid property to a class under the name "propertyname"
setattr(SomeClass, 'propertyname', prop)

Hope that helps,

Simon




[sqlalchemy] Re: SQLAlchemy - Bulk update using sqlalchemy core table.update() expects all columns in the values data

2016-09-29 Thread Rajesh Rolo
Mike,

Thanx for your reply. I'll try out the 3rd option. It would still be better 
than updating record by record at object level.

Thanx,
Rajesh


On Wednesday, September 28, 2016 at 1:59:38 PM UTC+5:30, Rajesh Rolo wrote:
>
> I'm trying to do a bulk update using core SQLAlchemy to a postgres 
> database. bulk_update_mappings does not work (it reports StaleDataError), so 
> I'm trying to use core functions to do a bulk update. This works fine when 
> the update data passed to the values has all the columns in the db, but 
> fails to work when we update only certain columns. In my application, 
> during periodic syncs between the server and the client, only a few of the 
> columns will get updated most of the time.
>
> The code snippet I have for update is :
>
> conn = session.connection()
> table = table_dict[table_key].__table__
> stmt = table.update().where(and_(table.c.user_id == bindparam('uid'),
>                                  tbl_pk[table_key] == bindparam('pkey'))).values()
> conn.execute(stmt, update_list)
>
> Since I update multiple tables on every sync, table names and primary keys 
> are indexed through an array. For the example below table_dict[table_key] 
> would translate to the table 'nwork' and tbl_pk[table_key] would translate 
> to 'table.c.nwid' which would be 'nwork.nwid'.
>
> The update_list is a list of records (that need to get updated) as 
> python dictionaries. When a record has values for all the columns it works 
> fine, but when only some of the columns are getting updated it throws the 
> following error:
>
> StatementError: (sqlalchemy.exc.InvalidRequestError) A value is required 
> for bind parameter 'last_sync', in parameter group 1 
> [SQL: u'UPDATE nwork SET type=%(type)s, name=%(name)s, 
> last_sync=%(last_sync)s, 
> update_time=%(update_time)s, status=%(status)s, 
> total_contacts=%(total_contacts)s, 
> import_type=%(import_type)s, cur_index=%(cur_index)s WHERE 
> nwork.user_id = %(uid)s AND nwork.nwid = %(pkey)s']
>
> In this case the error was happening for a record where the column 
> 'last_sync' was not getting updated.
>
> What's the way of doing a bulk update where the records may not all have 
> the same set of columns getting updated?
>
>
> I'm running SQLAlchemy 1.0.14.
>
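
One illustrative workaround for the heterogeneous-columns case (a sketch of 
my own, not necessarily the "3rd option" referenced above, and assuming the 
compiled UPDATE's SET clause is derived from the keys of the first parameter 
dict in an executemany() batch): group the records by the set of columns 
they carry, so every dict in a batch binds the same parameters.

    from itertools import groupby

    def colset(rec):
        # records sort/group by the (sorted) set of keys they contain
        return tuple(sorted(rec))

    # each batch now updates a uniform set of columns
    for _, batch in groupby(sorted(update_list, key=colset), key=colset):
        conn.execute(stmt, list(batch))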
