>     Where would the desired data get lost on the way to the database table
>     when a background process is used?
>
> When using a background process you must first call engine.dispose()
> when the process starts, then begin a new transaction and work from there.
> You cannot transfer the work of an ongoing transaction to another
> process; a new transaction must be used.

I admit that I have not taken such a data processing requirement into account
in my recent development of SmPL scripts.
It seems that I cannot add customised process preparation code for the
execution environments which the current Coccinelle software supports.

See also:
Complete support for fork-join work flows
https://github.com/coccinelle/coccinelle/issues/50

Other software architectures can support parallelisation better, can't they?

Regards,
Markus

-- 
SQLAlchemy - 
The Python SQL Toolkit and Object Relational Mapper

http://www.sqlalchemy.org/
