On Jan 20, 2014, at 3:59 PM, pr64 <pierrerot...@gmail.com> wrote:

> OK, sorry, my earlier explanation was not quite right.
> 
> I launch two separate processes from the command line. Each imports my API
> and therefore creates its own connection to the sqlite database. Committing
> in process 1 should be visible from process 2. The problem I have is that
> the change is only seen seconds later in process 2. No session is shared;
> each process has its own. What I can observe is that if I close the session
> just after committing in process 1, then process 2 sees the change as
> expected. If the session is not closed in process 1 (just committing), then
> the change is not seen in process 2... :-(

this sounds like basic isolation behavior.  If process 2 is still running in a 
transaction that was started before process 1 committed, then the database 
isolation behavior would determine what can be seen in process 2 within that 
transaction.
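
As a rough illustration (a sketch only, not your exact code), if process 2 ends
its current transaction before querying again, the next query runs in a new
transaction and can see rows committed by process 1 in the meantime.  Here
"orm_mgr" is your OrmManager instance and "Widget" stands in for one of your
mapped classes (a hypothetical name):

    # process 2
    session = orm_mgr.get_session()

    rows = session.query(Widget).all()    # runs inside a transaction

    # ... later, when polling for data written by process 1 ...
    session.rollback()                    # or session.commit(); ends the transaction
    rows = session.query(Widget).all()    # fresh transaction, sees the new commit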

SQLite’s concurrency behavior is not something I have a deep understanding of.
As it is file-based and simpler than a modern MVCC-style database, it has some
complicated locking and journaling behavior, described at
http://sqlite.org/lang_transaction.html.   The answer to your question is
likely described within that documentation.
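
If it does turn out to be related to SQLite's locking/journaling, one approach
people often suggest (a sketch only, not something I've verified against your
setup) is to switch the database to write-ahead logging, so readers and writers
block each other less, via a connect-time event on the Engine:

    from sqlalchemy import event

    # "engine" here is assumed to be the Engine created inside your OrmManager
    @event.listens_for(engine, "connect")
    def _set_sqlite_wal(dbapi_connection, connection_record):
        cursor = dbapi_connection.cursor()
        cursor.execute("PRAGMA journal_mode=WAL")
        cursor.close()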


> 
> I've got an OrmManager class:
> 
> import sqlite3
> 
> from sqlalchemy import create_engine
> from sqlalchemy.orm import scoped_session, sessionmaker
> from sqlalchemy.pool import NullPool
> 
> class OrmManager:
> 
>     def __init__(self, database, metadata, echo=False):
>         self.database = database
> 
>         engine = create_engine('sqlite:///' + database,
>                                echo=echo,
>                                connect_args={'detect_types':
>                                              sqlite3.PARSE_DECLTYPES |
>                                              sqlite3.PARSE_COLNAMES},
>                                native_datetime=True,
>                                poolclass=NullPool,
>                                convert_unicode=True)
> 
>         metadata.create_all(engine)
> 
>         # this factory is thread safe: a session object is returned (always
>         # the same) to the caller. If called from another thread, the
>         # returned session object will be different
>         session_factory = sessionmaker(bind=engine, expire_on_commit=False)
>         self.session = scoped_session(session_factory)
> 
>     def get_session(self):
>         session = self.session()
>         return session
> and in P1 and P2, I instantiate it:
> 
> orm_mgr = OrmManager(database=<path/to/my/.sqlite/file>, metadata=METADATA)
> 
> session = orm_mgr.get_session()
> 
> # do some stuff here
> 
> session.commit()
> 
> On Monday, January 20, 2014 5:12:58 PM UTC+1, pr64 wrote:
> Hi,
> 
> Strange behaviour with SQA in a multi-process environment... I already posted
> on StackOverflow for a web app, but I'm still missing some understanding, so
> I'm posting here too.
> 
> I've created an application where my "sqa calls" are encapsulated: my API's
> methods always do the same kind of stuff:
>     1- request a session
>     2- do my stuff - mainly querying the db through session.query(...)
>     3- possibly:
>        - modify some mapped object attributes
>        - and write to the db (session.commit()).
> 
> Several consecutive calls to my API's methods return the same session object
> as long as I'm in the same thread, which is the required behaviour. Some
> commits happen, but the session is always the same (expire_on_commit=False).
> 
>         # this factory is thread safe: a session object is returned (always
>         # the same) to the caller. If called from another thread, the
>         # returned session object will be different
>         session_factory = sessionmaker(bind=engine, expire_on_commit=False)
>         session_maker = scoped_session(session_factory)
> 
>         # to get a session:
>         session = session_maker()
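> 
> An API method then looks roughly like this (a simplified sketch; "Widget" and
> "update_widget" are illustrative names, not my real code):
> 
>         def update_widget(widget_id, new_name):
>             session = session_maker()                      # 1- request a session
>             widget = session.query(Widget).get(widget_id)  # 2- query the db
>             widget.name = new_name                         # 3- modify a mapped attribute
>             session.commit()                               #    and write to db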
> 
> If I fork another thread, using the API is still safe, since the first step
> (getting a session) will return a new session instance (due to scoped_session).
> 
> Here's my problem:
> 
> I have two processes with such an architecture (based on this API). While P1
> is writing to the DB (session.commit()), P2 is reading from it
> (session.query(...)). The problem is that P2 does not see the DB change and
> has to poll several times before getting it. I'm actually requesting the DB
> through session.query(...) calls from P2, but nothing happens. Sometimes it
> works, though! :-( I don't know why (the code did not change) -> out of 10
> tries, it works about 3 times...
> 
> If I change my API to close the session after every commit, P2 sees the
> change, as if committing was not actually writing to disk and closing the
> session did it. I do not understand this behaviour, as the sqa documentation
> encourages creating a global session per thread and working with it (which
> is what the scoped_session mechanism does).
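> 
> The workaround looks like this (a sketch of what I mean, not the exact code):
> 
>         session = session_maker()
>         # ... modify some mapped objects ...
>         session.commit()
>         session.close()    # only with this extra close() does P2 see the change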
> 
> My configuration is: Linux OS, SQA 0.8.4 and a RAM-based db
> (/dev/shm/my_db.sqlite, but the problem is still there with a regular
> disk-based db at ~/my_db.sqlite).
> 
> Thanks a lot for your time,
> 
> Pierre
> 