Re: AttributeError: 'MetaData' object has no attribute 'get'

2015-06-17 Thread Reece Hart


 this line here:

  File "alembic/env.py", line 87, in run_migrations_online
    target_metadata=target_metadata.get(name)
  

 is part of something that was coded on your end, because it is in 
 env.py.   The MetaData object doesn't have a get() method.   It does have a 
 .tables dictionary from which you can call get(); however, that would 
 return a Table object, not a MetaData object.

 So you'd need to fix your env.py here.


I got here by following the alembic tutorial. Specifically, I did `alembic 
init alembic --template multidb`, which generated the env.py. Then, 
following along from 
https://alembic.readthedocs.org/en/latest/autogenerate.html, I added

from myapp.mymodel import Base
target_metadata = Base.metadata


The expression `target_metadata=target_metadata.get(name),` comes from the 
env.py provided during alembic init.

What am I missing?
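
For context, the multidb template's env.py treats target_metadata as a dict 
keyed by the engine names configured in alembic.ini, which is why it calls 
target_metadata.get(name). A minimal sketch of what the assignment would need 
to look like instead, assuming the template's default engine names engine1 
and engine2:

from myapp.mymodel import Base

# one MetaData per database, keyed by the engine names in alembic.ini;
# target_metadata.get(name) in run_migrations_online() then picks the
# right one for each database
target_metadata = {
    'engine1': Base.metadata,
    'engine2': Base.metadata,  # or a second declarative Base's metadata
}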



Re: [sqlalchemy] How to do a revision table or How to get a 1:1 and 1:n relationship between the same tables to work properly

2015-06-17 Thread Mike Bayer



On 6/17/15 12:00 PM, Adrian wrote:
I'm trying to store old versions of (some of) the data in one of my 
tables.
To do so, I'm thinking about models like this (not including anything 
not relevant to the case):


class EventNote(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    latest_revision = db.relationship(
        'EventNoteRevision',
        lazy=False,
        uselist=False,
        primaryjoin=lambda: ((EventNote.id == EventNoteRevision.note_id)
                             & EventNoteRevision.is_latest),
        back_populates='note'
    )
    revisions = db.relationship(
        'EventNoteRevision',
        lazy=True,
        cascade='all, delete-orphan',
        primaryjoin=lambda: EventNote.id == EventNoteRevision.note_id,
        order_by=lambda: EventNoteRevision.created_dt.desc(),
        back_populates='note'
    )


class EventNoteRevision(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    note_id = db.Column(db.Integer, db.ForeignKey('events.notes.id'),
                        nullable=False, index=True)
    is_latest = db.Column(db.Boolean, nullable=False, default=False)
    # ...and some columns for the actual data of that revision
    note = db.relationship(
        'EventNote',
        lazy=False,
        back_populates='revisions'
    )



However, it always breaks somewhere (I tried some variations in the 
relationship configurations)...

These are the problems I've encountered so far:

- A newly created revision assigned to `latest_revision` is flushed 
with a null `note_id`
- Assigning a new revision to `latest_revision` (i.e. with another 
revision already existing) results in the old one being DELETEd or its 
note_id being NULLed out (neither should happen)

Well the concept of back_populates pointing in three directions like 
that is not how it was intended to be used.  back_populates is intended 
to point two relationships to each other mutually.  I'm not sure why 
these effects are happening but it probably has something to do with 
that.  I could dig into what's going on and I may do so, but I'm sure 
whatever I see will lead to the same immediate solution anyway.


If keeping this model, I would keep EventNote.revisions and 
EventNoteRevision.note as the two relationships here with a traditional 
back_populates between them.  The latest_revision relationship here at 
most should just be a viewonly=True; revisions/note should be used as 
the persistence channel exclusively.
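
A minimal sketch of that arrangement, reusing the model above; only 
EventNote changes, and latest_revision drops back_populates along with 
gaining viewonly=True so it never participates in flush:

class EventNote(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    # read-only view of the current revision; never a persistence path
    latest_revision = db.relationship(
        'EventNoteRevision',
        lazy=False,
        uselist=False,
        viewonly=True,
        primaryjoin=lambda: ((EventNote.id == EventNoteRevision.note_id)
                             & EventNoteRevision.is_latest),
    )
    # revisions / EventNoteRevision.note stay as the one persistence
    # channel with a traditional back_populates between them
    revisions = db.relationship(
        'EventNoteRevision',
        lazy=True,
        cascade='all, delete-orphan',
        primaryjoin=lambda: EventNote.id == EventNoteRevision.note_id,
        order_by=lambda: EventNoteRevision.created_dt.desc(),
        back_populates='note'
    )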




I could really use some help on how to do this properly. The model 
posted above can be changed in any way. For example, I wouldn't mind 
having a `latest_revision_id` column in `EventNote`,
but when I tried that (including `use_alter` and `post_update`) I also 
ended up with tons of different errors, including some that showed up 
every other time I started my application (seems like
something doesn't happen in a deterministic order during mapper 
configuration).
I'd definitely advise doing it that way; that's the supported way to do 
a "favorite id" approach and is more relationally correct (e.g. it's not 
possible to have multiple favorites).    It seems like you read the 
docs at 
http://docs.sqlalchemy.org/en/rel_1_0/orm/relationship_persistence.html#rows-that-point-to-themselves-mutually-dependent-rows, 
so I'd give that a revisit and feel free to share the errors from that case.
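
For reference, a rough sketch of that pattern adapted from the linked docs; 
the revision table name and constraint name below are assumptions, not taken 
from the original models:

class EventNote(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    # use_alter emits this FK as a separate ALTER TABLE so the two
    # CREATE TABLE statements don't form a dependency cycle
    latest_revision_id = db.Column(
        db.Integer,
        db.ForeignKey('events.note_revisions.id', use_alter=True,
                      name='fk_event_note_latest_revision'),
        nullable=True)
    # post_update makes the ORM populate this FK with a second UPDATE
    # after both rows exist, rather than trying to order the INSERTs
    latest_revision = db.relationship(
        'EventNoteRevision',
        foreign_keys=latest_revision_id,
        post_update=True)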






[sqlalchemy] Concurrent use of `bulk_insert_mappings` and `bulk_update_mappings`

2015-06-17 Thread Soeren Medard
Hi,
I have had an issue with bulk operations arising from the following use 
case:
I was looping over an iterator of mappings, which are either already in the 
DB (updates) or not (inserts). As I did not want to loop over it twice and 
wanted to use only one transaction, I used greenlets to split the iterator 
and concurrently run `bulk_insert_mappings` and `bulk_update_mappings` on 
the same session object. That looks like this:


Insert some greenlet magic into generators

from greenlet import greenlet


def greenletify_gen(mapping):
    """This generator will be passed to bulk operations.

    Greenlets allow us to get out of the bulk methods
    and thus run them concurrently.
    """
    if mapping is None:
        raise StopIteration()
    yield mapping
    for mapping in iter(greenlet.getcurrent().parent.switch, None):
        yield mapping



Concurrently run bulk operations

from test_model import TestModel
from session_ctx_mgr import session_ctx_mgr
from sqlalchemy.exc import ResourceClosedError


with session_ctx_mgr() as session:
    insert_greenlet = greenlet(lambda mapping:
        session.bulk_insert_mappings(TestModel, greenletify_gen(mapping)))
    update_greenlet = greenlet(lambda mapping:
        session.bulk_update_mappings(TestModel, greenletify_gen(mapping)))
    insert_greenlet.switch({'id': 2, 'value': 2})
    update_greenlet.switch({'id': 1, 'value': 2})
    insert_greenlet.switch(None)
    update_greenlet.switch(None)

However, the aforementioned example raises this error:

Traceback (most recent call last):
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 2332, in _bulk_save_mappings
    isstates, update_changed_only)
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/persistence.py", line 100, in _bulk_update
    if session_transaction.session.connection_callable:
AttributeError: 'NoneType' object has no attribute 'connection_callable'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "drunken-octo-dubstep.py", line 43, in <module>
    update_greenlet.switch(None)
  File "drunken-octo-dubstep.py", line 37, in <lambda>
    update_greenlet = greenlet(lambda mapping:
        session.bulk_update_mappings(TestModel, greenletify_gen(mapping)))
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/scoping.py", line 150, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 2318, in bulk_update_mappings
    self._bulk_save_mappings(mapper, mappings, True, False, False, False)
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 2340, in _bulk_save_mappings
    transaction.rollback(_capture_exception=True)
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/util/langhelpers.py", line 63, in __exit__
    compat.reraise(type_, value, traceback)
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/util/compat.py", line 182, in reraise
    raise value
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 2340, in _bulk_save_mappings
    transaction.rollback(_capture_exception=True)
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 408, in rollback
    self._assert_active(prepared_ok=True, rollback_ok=True)
  File "/mnt/vendor/lib/python3.4/site-packages/sqlalchemy/orm/session.py", line 223, in _assert_active
    raise sa_exc.ResourceClosedError(closed_msg)
sqlalchemy.exc.ResourceClosedError: This transaction is closed

Now, what's funny is that when you invert the last 2 lines, the exception 
disappears (i.e., `bulk_update_mappings` ends before `bulk_insert_mappings`).
I managed to generalize the behavior, and it seems it all depends on the 
1st mapping of the loop. If it was an insert, `bulk_update_mappings` must 
end first. If it was an update, `bulk_insert_mappings` must end first.

The whole source code for the example is available 
here: https://github.com/Loamhoof/drunken-octo-dubstep, in the file 
`drunken-octo-dubstep.py`.

Now, I posted here and not as an issue because:
1) I'm not sure there isn't another, more legitimate way to do what I want
2) I'm not sure such a way of using the API is meant to be supported

So, my questions:
1) Is there indeed a better way to run both bulk operations while looping 
over an iterator only once?
2) Should this be considered a bug?

Versions used:
SQLAlchemy==1.0.5
greenlet==0.4.7
psycopg2==2.6.1
and Postgres 9.4.1, Python 3.4.3

Thanks for your time!

Regards,
Soeren



Re: [sqlalchemy] Concurrent use of `bulk_insert_mappings` and `bulk_update_mappings`

2015-06-17 Thread Mike Bayer



On 6/17/15 4:05 AM, Soeren Medard wrote:

Hi,
I have had an issue with bulk operations arising from the following 
use case:
I was looping over an iterator of mappings, which are either already in the 
DB (updates) or not (inserts). As I did not want to loop over it twice 
and wanted to use only one transaction, I used greenlets to split the 
iterator and concurrently run `bulk_insert_mappings` and 
`bulk_update_mappings` on the same session object.


that won't work, because it means you are running multiple greenlets 
against a single database connection, and that is not thread/greenlet 
safe, assuming you are using a greenlet-monkeypatched DBAPI.   You need 
to synchronize the work of each greenlet, or use individual connections.


Additionally, as the stack trace you've passed on indicates, the bulk 
methods on Session still make use of a subtransaction internally, which 
is stateful.    So that's the specific error you're seeing: multiple 
greenlets are competing for the state of the session.transaction object, 
which is a linked list of transaction nests.  It absolutely is not 
threadsafe (which means greenlets also).    The Session doesn't have 
much need to support concurrent threads/greenlets doing things on it 
without locking, because the DBAPI connections it refers to are virtually 
never safe to use this way in any case.





So, my questions:
1) Is there indeed a better way to run both bulk operations while 
looping over an iterator only once?

2) Should this be considered a bug?

Versions used:
SQLAlchemy==1.0.5
greenlet==0.4.7
psycopg2==2.6.1
and Postgres 9.4.1, Python 3.4.3


So the async support for psycopg2 ultimately uses a single 
PostgreSQL connection with async=1; this is documented at 
http://initd.org/psycopg/docs/advanced.html#asynchronous-support. We can 
see there that this use is not supported: "Two cursors can’t execute 
concurrent queries on the same asynchronous connection."


So your options are to either synchronize the work of the multiple 
greenlets, which will pretty much eliminate any point in doing it that 
way, or to use a connection/session per greenlet.    At the end of the 
day you're communicating on a TCP socket where an INSERT/UPDATE string 
is sent along the wire and a response is waited for, and multiple 
statements cannot be simultaneously multiplexed on a single connection.
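
A minimal sketch of the synchronized approach, which still loops over the 
iterator only once: partition the mappings as they stream by, then run the 
two bulk calls one after the other on the same session. `existing_ids` (a 
set of primary keys already present in the table) is a hypothetical stand-in 
for however inserts are distinguished from updates:

with session_ctx_mgr() as session:
    inserts, updates = [], []
    for mapping in mappings:  # the original single-pass iterator
        if mapping['id'] in existing_ids:
            updates.append(mapping)
        else:
            inserts.append(mapping)
    # both bulk calls run sequentially within the same transaction
    session.bulk_insert_mappings(TestModel, inserts)
    session.bulk_update_mappings(TestModel, updates)

The other option is a Session (and therefore a connection) per greenlet, at 
which point the two bulk operations really do run as separate transactions.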





Re: [sqlalchemy] Append AND NULL condition on joins

2015-06-17 Thread Mike Bayer



On 6/17/15 12:12 PM, david.ceresu...@gmail.com wrote:

Hi all,

I was giving the SQLAlchemy 1.0 series a try but I have found some 
problems along the way. There are some queries that used to work in 
version 0.9.9, but they do not work as expected anymore. An example of 
one of those is:

feeds = DBSession.query(Feed, Client, ClientPro).outerjoin(
    Client, Client.id == Feed.clientid).outerjoin(
    ClientPro, ClientPro.clientid == Client.id)

and it used to return:

SELECT feed.id AS feed_id, feed.clientid AS feed_clientid ...
FROM feed
LEFT OUTER JOIN client ON client.id = feed.clientid
LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id


But since I changed to the 1.0 series it returns:

SELECT feed.id AS feed_id, feed.clientid ...
FROM feed
LEFT OUTER JOIN client ON client.id = feed.clientid AND NULL
LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id AND NULL


Obviously SQLAlchemy isn't doing this without some direction from your 
own application.   The example code you've given illustrates nothing 
that would cause this effect, so I would need an actual reproducing test case.


I have tested it from version 1.0.0 to 1.0.5 and it returns the same 
SQL query

in all of them.

The relevant part of the models.py file is:

class Feed(Base, ModelBase):
    __tablename__ = 'feed'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id'), nullable=False)
    ...

class Client(Base, ModelBase):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    ...

class ClientPro(Base, ModelBase):
    __tablename__ = 'clientpro'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id', ondelete='CASCADE'),
                      nullable=False)
    ...



These fragments of code illustrate nothing that would cause such an effect:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Feed(Base):
    __tablename__ = 'feed'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id'), nullable=False)


class Client(Base):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)


class ClientPro(Base):
    __tablename__ = 'clientpro'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id', ondelete='CASCADE'),
                      nullable=False)


e = create_engine("sqlite://", echo=True)
Base.metadata.create_all(e)

sess = Session(e)

q = sess.query(Feed, Client, ClientPro).outerjoin(
    Client, Client.id == Feed.clientid).outerjoin(
    ClientPro, ClientPro.clientid == Client.id)

print(q)

result = q.all()


output:

SELECT feed.id AS feed_id, feed.clientid AS feed_clientid, client.id AS 
client_id, clientpro.id AS clientpro_id, clientpro.clientid AS 
clientpro_clientid
FROM feed LEFT OUTER JOIN client ON client.id = feed.clientid LEFT OUTER 
JOIN clientpro ON clientpro.clientid = client.id

2015-06-17 13:03:18,335 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2015-06-17 13:03:18,336 INFO sqlalchemy.engine.base.Engine SELECT 
feed.id AS feed_id, feed.clientid AS feed_clientid, client.id AS 
client_id, clientpro.id AS clientpro_id, clientpro.clientid AS 
clientpro_clientid
FROM feed LEFT OUTER JOIN client ON client.id = feed.clientid LEFT OUTER 
JOIN clientpro ON clientpro.clientid = client.id




[sqlalchemy] How to do a revision table or How to get a 1:1 and 1:n relationship between the same tables to work properly

2015-06-17 Thread Adrian
I'm trying to store old versions of (some of) the data in one of my tables.
To do so, I'm thinking about models like this (not including anything not 
relevant to the case):

class EventNote(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    latest_revision = db.relationship(
        'EventNoteRevision',
        lazy=False,
        uselist=False,
        primaryjoin=lambda: ((EventNote.id == EventNoteRevision.note_id)
                             & EventNoteRevision.is_latest),
        back_populates='note'
    )
    revisions = db.relationship(
        'EventNoteRevision',
        lazy=True,
        cascade='all, delete-orphan',
        primaryjoin=lambda: EventNote.id == EventNoteRevision.note_id,
        order_by=lambda: EventNoteRevision.created_dt.desc(),
        back_populates='note'
    )


class EventNoteRevision(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    note_id = db.Column(db.Integer, db.ForeignKey('events.notes.id'),
                        nullable=False, index=True)
    is_latest = db.Column(db.Boolean, nullable=False, default=False)
    # ...and some columns for the actual data of that revision
    note = db.relationship(
        'EventNote',
        lazy=False,
        back_populates='revisions'
    )



However, it always breaks somewhere (I tried some variations in the 
relationship configurations)...
These are the problems I've encountered so far:

- A newly created revision assigned to `latest_revision` is flushed with a 
null `note_id`
- Assigning a new revision to `latest_revision` (i.e. with another revision 
already existing) results in the old one being DELETEd or its note_id being 
NULLed out (neither should happen)


I could really use some help on how to do this properly. The model posted 
above can be changed in any way. For example, I wouldn't mind having a 
`latest_revision_id` column in `EventNote`,
but when I tried that (including `use_alter` and `post_update`) I also 
ended up with tons of different errors, including some that showed up every 
other time I started my application (seems like
something doesn't happen in a deterministic order during mapper 
configuration).

One option to avoid all these problems could be using the revision table 
only for OLD data, i.e. keeping all the latest data inside `EventNote` and 
only adding a new revision when something changes.
That way I would avoid having two relationships, and all the problems would 
go away. I know at least one big site doing it like this (Stack Overflow), 
so maybe it's not the worst option... even though they probably had other 
reasons, since they aren't using SQLAlchemy. But after having spent half the 
afternoon trying to get the two-relationship solution working, I'm really 
tempted to do it like this... especially since I wouldn't have to worry 
about allowing only one `is_latest` revision per `note_id` (easy with a 
conditional unique index, but it needs extra code to mark the old ones as 
no longer being the latest).
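
A sketch of that conditional unique index, assuming PostgreSQL (partial 
indexes are dialect-specific, and the index name here is made up):

from sqlalchemy import Index

# at most one revision per note_id may have is_latest = true
Index(
    'uq_note_revision_latest',
    EventNoteRevision.note_id,
    unique=True,
    postgresql_where=EventNoteRevision.is_latest,
)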



[sqlalchemy] Append AND NULL condition on joins

2015-06-17 Thread david . ceresuela
Hi all,

I was giving the SQLAlchemy 1.0 series a try but I have found some problems 
along
the way. There are some queries that used to work in version 0.9.9, but
they do not work as expected anymore. An example of one of those is:

feeds = DBSession.query(Feed, Client, ClientPro).outerjoin(
    Client, Client.id == Feed.clientid).outerjoin(
    ClientPro, ClientPro.clientid == Client.id)

and it used to return:

SELECT feed.id AS feed_id, feed.clientid AS feed_clientid ... 
FROM feed
LEFT OUTER JOIN client ON client.id = feed.clientid
LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id


But since I changed to the 1.0 series it returns:

SELECT feed.id AS feed_id, feed.clientid ...
FROM feed
LEFT OUTER JOIN client ON client.id = feed.clientid AND NULL
LEFT OUTER JOIN clientpro ON clientpro.clientid = client.id AND NULL


As you can see, it adds the 'AND NULL' condition to the joins so the columns
corresponding to the client and clientpro are NULL.

I have tested it from version 1.0.0 to 1.0.5 and it returns the same SQL 
query
in all of them.

The relevant part of the models.py file is:

class Feed(Base, ModelBase):
    __tablename__ = 'feed'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id'), nullable=False)
    ...

class Client(Base, ModelBase):
    __tablename__ = 'client'
    id = Column(Integer, primary_key=True)
    ...

class ClientPro(Base, ModelBase):
    __tablename__ = 'clientpro'
    id = Column(Integer, primary_key=True)
    clientid = Column(Integer, ForeignKey('client.id', ondelete='CASCADE'),
                      nullable=False)
    ...


And finally, the versions I am using:
- PostgreSQL 9.3
- Pyramid 1.5.7 (zope.sqlalchemy 0.7.6)
- psycopg2 2.6

What is it that I am missing?

Thanks!
