Re: [sqlalchemy] SQLAlchemy 0.9.2 UnicodeEncodeErrors with PG 9.3.2 on Amazon RDS
On Tue, Feb 18, 2014 at 4:08 PM, Michael Bayer mike...@zzzcomputing.com wrote:

> OK some questions:
>
> 1. what options are you passing to create_engine()?

sqlalchemy.default.url = postgresql://DBNAME:DBPASS@DBADDRESS/DBNAME
sqlalchemy.default.client_encoding = utf8
sqlalchemy.default.pool_recycle = 3600
sqlalchemy.default.pool_size = 20
sqlalchemy.default.max_overflow = 30

> 2. what kind of column types and data do you have in the table that is mapped to the "users" mapping denoted by: self.organization.users?

Lots of types, mostly Unicode() column fields, plus DateTime, Integer, Boolean.

> 3. are you running with the C extensions installed? what happens differently if you run without the C extensions? (depends on how reproducible this is) the stack trace doesn't say much here as it seems like C code is where things are going wrong.

We're running with the C extensions; we haven't tried without, as it's a bit cumbersome to update our machines to run without them.

> 4. is this error only under load? it may be the issue noted by Jeff in another email, or not.

No, this issue happens all the time, with or without load. We applied Jeff's fix and so far the errors have gone away, though it's early to say since it's a race condition. If they don't come back over the next couple of days we can consider it fixed.

> 5. Potential workaround, assuming the issue is the one Jeff refers to. Do this in your application startup, *before* any requests are served:
>
>     # create engine normally
>     eng = create_engine(…)
>
>     # initialize the dialect
>     conn = eng.connect()
>     conn.close()

Oh, is that enough? We're running an eng.execute("select 1; rollback;").

> specifically, do the above anytime you are calling create_engine(). Note that the app would ideally be calling create_engine() only once per URL. If your app is using an anti-pattern like "create_engine per operation" or something like that, that could cause more problems (and even if not, it is very non-performant).

No, we're calling create_engine() once per app. :)

> 6. more extreme workaround, if Amazon RDS is being weird here:
>
>     # create engine normally
>     eng = create_engine(…)
>
>     # initialize the dialect
>     conn = eng.connect()
>     conn.close()
>
>     # override the unicode-returns detection to force SQLAlchemy to check for
>     # decode in all cases (will perform poorly if no C extensions are in use or pre-0.9.2 is in use)
>     eng.dialect.returns_unicode_strings = "conditional"

We've also used RDS with SQLAlchemy 0.7.9 and it didn't give any issues. I cannot stress enough that as soon as we deployed 0.9.2 these errors started, while with 0.7.9 there were none, and that's essentially the only change we made.

-- 
Valentino Volonghi | Chief Architect
P: +1-877-723-7655 | F: +1-415-738-8026
AdRoll.com  Increase Sales. Raise Awareness.

-- 
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com. To post to this group, send email to sqlalchemy@googlegroups.com. Visit this group at http://groups.google.com/group/sqlalchemy. For more options, visit https://groups.google.com/groups/opt_out.
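[Editorial sketch] The warm-up workaround discussed in this thread can be put together end-to-end roughly as follows. This is a minimal sketch, not the poster's actual code: an in-memory sqlite URL stands in for the PostgreSQL/RDS URL, and `returns_unicode_strings` is the pre-1.x dialect flag being discussed.

```python
from sqlalchemy import create_engine

# Stand-in URL; the thread concerns PostgreSQL on RDS, e.g.
# "postgresql://DBNAME:DBPASS@DBADDRESS/DBNAME".
eng = create_engine("sqlite://")

# Initialize the dialect once at startup, *before* any requests are
# served: the first connection is where SQLAlchemy probes how the DBAPI
# returns strings, so forcing it here avoids racing concurrent threads.
conn = eng.connect()
conn.close()

# More extreme workaround: force SQLAlchemy to check every string value
# for whether decoding is needed (slower without the C extensions).
eng.dialect.returns_unicode_strings = "conditional"
```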
Re: [sqlalchemy] Re: SQLAlchemy 0.9.2 UnicodeEncodeErrors with PG 9.3.2 on Amazon RDS
On Tue, Feb 18, 2014 at 4:11 PM, Jonathan Vanasco jonat...@findmeon.com wrote:

> I just want to bring this up, because the Pylons/Pyramid error reporting -- and the SQLAlchemy reporting -- can often make an error look to be in a different spot than it really is.

It looks like it might be what Jeff was talking about.

> Sidenote: How are you liking the pg RDS support?

We like a lot of AWS products and use many of them heavily; most work flawlessly, with DynamoDB being the best of the bunch. PG RDS is fairly bad, though. It's really unusable for us, so we're going to move off of it and build our own replacement. If you enable Multi-AZ support you'll see a ton of unexplained latency (multiple seconds on each query or connection), and write latencies will sometimes spike to multiple seconds. We ran a PG 8.4 database on our own EC2 instance with EBS for 5 years and never hit these issues once; we've had more problems with PG 9.3.2 in RDS in 4 weeks than with 8.4 on our own hardware in 5 years. Still vastly immature IMHO, and that's with PIOPS and EBS-optimized instances. The other issue is that there are no c3 instances in RDS yet, so you get a much faster machine (5x) by running your own c3s instead of the RDS m2s, for about 50% more cost.

-- 
Valentino Volonghi | Chief Architect
P: +1-877-723-7655 | F: +1-415-738-8026
AdRoll.com  Increase Sales. Raise Awareness.
Re: [sqlalchemy] Issue with cx_oracle
Am 18.02.2014 23:20, schrieb Michael Bayer:

> good catch, I've rewritten the docs here: http://docs.sqlalchemy.org/en/latest/dialects/oracle.html#unicode. The more common approach of using text() is included.

Thanks a lot for the quick response and explanations. Your recipe of specifying columns explicitly as Unicode worked well for me. The only gotcha was that I needed to explicitly specify the encoding as an argument to create_engine(), while that had not been necessary when the unicode outputtypehandler was installed (I guess cx_Oracle derived the proper encoding automatically from NLS_LANG). I'll try to create a pull request for the coerce_to_unicode feature (turned off by default) over the weekend anyway.

-- 
Christoph
[sqlalchemy] Is there a way to examine how many objects a session currently contains?
I've looked through the documentation on Session, but I wasn't able to locate a method that lists all the objects a session is currently holding. Could someone offer a tip? Many thanks.
Re: [sqlalchemy] Is there a way to examine how many objects a session currently contains?
Hi, read the section "Session Attributes": http://docs.sqlalchemy.org/en/latest/orm/session.html#sqlalchemy.orm.session.Session.identity_map

You might just need something like len(session.identity_map) + len(session.new) ... but there are some caveats, depending on exactly what you mean by the session "holding" an object (i.e. what you need this set for):

- That assumes you want to include objects that have been added to the session but haven't yet been flushed to the database, so they do not yet have an identity.
- It also assumes you don't want to *exclude* objects which have been deleted in the session but whose deletion has not yet been flushed to the database; the session is still holding those objects.
- The identity_map is a weak-referencing map, so it may or may not still reference objects to which your application no longer holds a reference, depending on garbage collection. You may get a different set before and after calling gc.collect().

Cheers, Gulli
[sqlalchemy] So session.merge() does not automatically do .add()?
I thought I understood the behaviour of session.merge() from reading the documentation, but I'm wrong. Here is an example. I have the code below:

class User(Base):
    __tablename__ = 'user'
    id = Column(Integer, primary_key=True)
    name = Column(String(50), nullable=False)
    addresses = relationship(Address, backref="user")

This class has a corresponding table 'user' which has already been populated with one row, whose primary key equals 1. Now I run the following instructions from python -i:

u = User(id=1)
session.merge(u)
session.commit()

I expected a sqlalchemy.orm.exc.FlushError, because both the merged object u and the row already in the table have their primary key equal to 1. However, nothing happened; SQLAlchemy just ignored the merged u! If instead I do:

session.add(u)
session.commit()

then I get the expected sqlalchemy.orm.exc.FlushError. So, session.merge() does not include .add()? From the documentation it seems that it does.
Re: [sqlalchemy] So session.merge() does not automatically do .add()?
On Feb 19, 2014, at 6:08 AM, Bao Niu niuba...@gmail.com wrote:

> [...] So, session.merge() does not include .add()? From the documentation it seems that it does.

So first, the object you pass to merge() is *never* itself added to the session. The object you pass in is unmodified. The return value of merge() is the object that is actually in the Session. This object will either have been newly created, in which case it was added using add(), or it was loaded from the database. But that object is never the one you passed in. So in your test, if primary key "1" already exists, then merge() returned the User object it had for that primary key.
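[Editorial sketch] The merge() behavior described above can be seen directly with a hypothetical minimal model on in-memory sqlite: the instance handed to merge() stays outside the Session, and attributes never set on it are not copied onto the persistent object.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)

session.add(User(id=1, name="existing"))
session.commit()

u = User(id=1)              # transient object with an existing primary key
merged = session.merge(u)   # returns the object the Session actually holds

assert merged is not u      # u itself was never added to the session
assert u not in session
assert merged.name == "existing"  # attributes unset on u were not wiped out
```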
Re: [sqlalchemy] SQLAlchemy 0.9.2 UnicodeEncodeErrors with PG 9.3.2 on Amazon RDS
On Feb 19, 2014, at 3:03 AM, Valentino Volonghi dialt...@adroll.com wrote:

> > 4. is this error only under load? it may be the issue noted by Jeff in another email, or not.
>
> No, this issue happens all the time, with or without load. We applied Jeff's fix and so far the errors have gone away, though it's early to say since it's a race condition. If they don't come back over the next couple of days we can consider it fixed.

OK, so we'll put that patch in today for 0.8/0.9, though I'm not sure why you don't see this in 0.7, as the logic to determine if strings are decoded is the same; it tests the returns on the first connection. The mechanism in 0.7 for testing the first connection is the same as that of 0.9, using an event with "exec once", which doesn't have any thread sync on it. Also, it's odd you'd get this under no load, because it really needs the system to create and use multiple connections immediately out of the gate in order to trip the condition. Going to make some test cases now.
Re: [sqlalchemy] So session.merge() does not automatically do .add()?
Thanks for replying. Isn't it the case that when I just provide a primary key for the source object, the attributes should be marked expired on the target instance? In my example I only give the object u a primary key value, so the newly created object should have only a primary key value and wipe out the other attributes originally in that row of the table. However, that row stays unchanged after all my instructions. Is there something I missed here?

On Feb 19, 2014 6:47 AM, Michael Bayer mike...@zzzcomputing.com wrote:

> So first, the object you pass to merge() is *never* itself added to the session. The object you pass in is unmodified. The return value of merge() is the object that is actually in the Session. [...] So in your test, if primary key "1" already exists, then merge() returned the User object it had for that primary key.
Re: [sqlalchemy] SQLAlchemy 0.9.2 UnicodeEncodeErrors with PG 9.3.2 on Amazon RDS
FWIW I'm seeing non-reproducible UnicodeEncodeErrors in my production systems too, alas only on the ones with heavier load, and also with SQLAlchemy 0.7.9.

On Wed, Feb 19, 2014 at 3:45 PM, Michael Bayer mike...@zzzcomputing.com wrote:

> OK, so we'll put that patch in today for 0.8/0.9, though I'm not sure why you don't see this in 0.7, as the logic to determine if strings are decoded is the same; it tests the returns on the first connection. [...] Going to make some test cases now.
Re: [sqlalchemy] So session.merge() does not automatically do .add()?
On Feb 19, 2014, at 10:01 AM, Bao Niu niuba...@gmail.com wrote:

> Thanks for replying. Isn't it the case that when I just provide a primary key for the source object, the attributes should be marked expired on the target instance?

No, because the target instance is a product of either:

a. essentially being loaded by session.query(User).get(1), so not expired, or
b. primary key identity 1 doesn't exist in the database yet, in which case the new object is only pending.

> In my example I only give the object u a primary key value, so the newly created object should have only a primary key value and wipe out the other attributes originally in that row of the table.

Attributes that are not specified on the source object are not copied. This allows one to set just a subset of attributes on the target object.
Re: [sqlalchemy] SQLAlchemy 0.9.2 UnicodeEncodeErrors with PG 9.3.2 on Amazon RDS
I got it, this is a regression due to #2880.

On Feb 19, 2014, at 10:15 AM, Robert Forkel xrotw...@googlemail.com wrote:

> FWIW I'm seeing non-reproducible UnicodeEncodeErrors in my production systems too, alas only on the ones with heavier load, and also with SQLAlchemy 0.7.9.
[sqlalchemy] Sharding without the ORM
Hi, I have a legacy application that doesn't use the ORM, but I need to implement horizontal sharding on MySQL. Has anybody done something similar, or have any ideas? thanks! -silviu

ps: the examples I've seen all use the ORM (examples/sharding/attribute_shard.py)
Re: [sqlalchemy] Sharding without the ORM
On Wed, Feb 19, 2014 at 1:03 PM, Silviu Dicu silviud...@gmail.com wrote:

> Hi, I have a legacy application that doesn't use the ORM, but I need to implement horizontal sharding on MySQL. Has anybody done something similar, or have any ideas?

You mean besides manually computing a consistent hash and binding the query to one engine or another? (which btw isn't hard at all)
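[Editorial sketch] The manual approach suggested above can be sketched without the ORM: hash the shard key deterministically and run the Core query on the selected engine. The shard URLs are assumptions (in-memory sqlite stands in for the per-shard MySQL engines).

```python
import zlib

from sqlalchemy import create_engine, text

# Stand-ins for per-shard MySQL engines, e.g.
# create_engine("mysql://user:pass@shard0/db").
shards = [create_engine("sqlite://"), create_engine("sqlite://")]

def engine_for(shard_key: str):
    # zlib.crc32 is deterministic across processes, unlike the built-in
    # hash(), which is randomized by PYTHONHASHSEED.
    return shards[zlib.crc32(shard_key.encode("utf-8")) % len(shards)]

# Bind a plain Core query to whichever shard owns this key.
with engine_for("user:42").connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```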
[sqlalchemy] Relationship issue with composite foreign key
Hi everyone, I am hitting an error while defining a relationship to a parent table with a composite primary key (my child class has two attributes that point to the same parent class, which has a composite pk, and both relationships are defined with the 'foreign_keys' parameter). I am not sure why I get an AmbiguousForeignKeysError; I have followed the docs and tried several alternatives to no avail. Probably some typo or a silly mistake like that, but I am unable to see it. I would appreciate a pointer.

First, here is the error message I am getting:

File "/home/mariano/Code/n-p/env/lib/python3.3/site-packages/SQLAlchemy-0.9.2-py3.3-linux-i686.egg/sqlalchemy/orm/relationships.py", line 1868, in _determine_joins
    % self.prop)
sqlalchemy.exc.AmbiguousForeignKeysError: Could not determine join condition between parent/child tables on relationship Bet.runner - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.

And here is a complete, self-contained example to reproduce the problem:

from sqlalchemy import (
    Table, Column, ForeignKey, Index, CheckConstraint, ForeignKeyConstraint,
    Integer, Text, String, Boolean, Date, Enum, Numeric, inspect,
    create_engine, and_
)
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import scoped_session, sessionmaker, relationship

DBSession = scoped_session(sessionmaker())
Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    uid = Column(String(600), primary_key=True)
    oid = Column(String(100), primary_key=True)
    name = Column(String(600), nullable=False)
    lastname = Column(String(600), nullable=False)

class Bet(Base):
    __tablename__ = "bet"
    __table_args__ = (
        ForeignKeyConstraint(("uid", "oid"), ("user.uid", "user.oid")),
        ForeignKeyConstraint(("loader_uid", "oid"), ("user.uid", "user.oid")),
    )
    id = Column(Integer, autoincrement=True, primary_key=True)
    uid = Column(String(600), nullable=False)
    oid = Column(String(100), nullable=False)
    number = Column(Integer, CheckConstraint("number>1"))
    loader_uid = Column(String(600), nullable=False)

    # relationships
    runner = relationship(User, foreign_keys=[uid, oid],
                          #primaryjoin=and_(Bet.uid==User.uid, Bet.oid==User.oid),
                          )
    loader = relationship(User, foreign_keys=[loader_uid, oid])

Index("runner_idx", Bet.__table__.c.uid, Bet.__table__.c.oid)

if __name__ == "__main__":
    engine = create_engine('sqlite://')
    DBSession.configure(bind=engine)
    Base.metadata.create_all(engine)
    u1 = User(uid='example1', oid='MDQ', name='Mariano', lastname='Mara')
    u2 = User(uid='example2', oid='MDQ', name='Mariano', lastname='Mara')
    DBSession.add_all([u1, u2])
    DBSession.flush()
    b = Bet(uid='example1', oid='MDQ', number=20, loader_uid='example2')
    DBSession.add(b)
    DBSession.commit()

Thanks in advance
Re: [sqlalchemy] Relationship issue with composite foreign key
On Feb 19, 2014, at 1:14 PM, Mariano Mara mariano.m...@gmail.com wrote:

> # relationships
> runner = relationship(User, foreign_keys=[uid, oid],
>                       #primaryjoin=and_(Bet.uid==User.uid, Bet.oid==User.oid),
>                       )
> loader = relationship(User, foreign_keys=[loader_uid, oid])

you've got the quoting off on that commented-out primaryjoin; here's the most direct way to set it up:

runner = relationship(User,
                      primaryjoin=and_(foreign(Bet.uid) == User.uid,
                                       foreign(Bet.oid) == User.oid))

loader = relationship(User,
                      primaryjoin=and_(foreign(Bet.loader_uid) == User.uid,
                                       foreign(Bet.oid) == User.oid))
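[Editorial sketch] The suggested primaryjoin setup runs end-to-end in a trimmed, self-contained version of the original example. Assumptions: sqlite stands in for the real database, lambdas defer the primaryjoin so the class body can reference Bet, and viewonly=True is added here only because the two relationships share the oid column (newer SQLAlchemy warns about the overlapping writes otherwise).

```python
from sqlalchemy import (Column, ForeignKeyConstraint, Integer, String,
                        and_, create_engine)
from sqlalchemy.orm import Session, declarative_base, foreign, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    uid = Column(String(60), primary_key=True)
    oid = Column(String(60), primary_key=True)

class Bet(Base):
    __tablename__ = "bet"
    __table_args__ = (
        ForeignKeyConstraint(("uid", "oid"), ("user.uid", "user.oid")),
        ForeignKeyConstraint(("loader_uid", "oid"), ("user.uid", "user.oid")),
    )
    id = Column(Integer, primary_key=True)
    uid = Column(String(60), nullable=False)
    oid = Column(String(60), nullable=False)
    loader_uid = Column(String(60), nullable=False)

    # foreign() marks which side of each comparison is the foreign key,
    # resolving the ambiguity between the two composite FK paths.
    runner = relationship(
        User, viewonly=True,
        primaryjoin=lambda: and_(foreign(Bet.uid) == User.uid,
                                 foreign(Bet.oid) == User.oid))
    loader = relationship(
        User, viewonly=True,
        primaryjoin=lambda: and_(foreign(Bet.loader_uid) == User.uid,
                                 foreign(Bet.oid) == User.oid))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([User(uid="example1", oid="MDQ"),
                 User(uid="example2", oid="MDQ")])
session.add(Bet(uid="example1", oid="MDQ", loader_uid="example2"))
session.commit()

bet = session.query(Bet).one()
```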
[sqlalchemy] HSTORE intermittent bug
Hi there, I've been hitting an intermittent bug with SQLAlchemy/PostgreSQL using an HSTORE column. Sometimes, when I restart my process (pyramid over gevent over chaussette), I get a spew of such errors; other times everything goes just fine. It seems this bug hasn't been documented anywhere, or at least the error I get wasn't reported. If you guys have any insight at all that would help me out in my search for the bug, I'd be eternally grateful. Here is the traceback:

File "/home/abourget/build/bitc/Bitc/bitc/api.py", line 126, in website_from_origin
    website = Website.get_by_host(origin)
File "/home/abourget/build/bitc/Bitc/bitc/models.py", line 299, in get_by_host
    Website.deleted == False).first()
File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2320, in first
    ret = list(self[0:1])
File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2187, in __getitem__
    return list(res)
File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 72, in instances
    rows = [process[0](row, None) for row in fetch]
File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 451, in _instance
    populate_state(state, dict_, row, isnew, only_load_props)
File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 305, in populate_state
    populator(state, dict_, row)
File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/strategies.py", line 154, in fetch_col
    dict_[key] = row[col]
File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/hstore.py", line 291, in process
    return _parse_hstore(value.decode(encoding))
AttributeError: 'dict' object has no attribute 'decode'

Just for a little context, models.py line 299 in full is:

website = db.query(Website).filter(Website.domain == host,
                                   Website.deleted == False).first()

There is one field on Website, declared as such:

data_apis = Column(MutableDict.as_mutable(HSTORE), default=dict)

Perhaps that last default=dict could induce such a problem? I'll try investigating that path. Thanks for any help!

Alexandre
Re: [sqlalchemy] Relationship issue with composite foreign key
On 19/02/14 15:28, Michael Bayer wrote:

> you've got the quoting off on that commented-out primaryjoin; here's the most direct way to set it up:
>
> runner = relationship(User,
>                       primaryjoin=and_(foreign(Bet.uid) == User.uid,
>                                        foreign(Bet.oid) == User.oid))
>
> loader = relationship(User,
>                       primaryjoin=and_(foreign(Bet.loader_uid) == User.uid,
>                                        foreign(Bet.oid) == User.oid))

I had tried primaryjoin too (although without including foreign()) and hit a different error. Your suggestion, of course, works. I am confused now: when do I use primaryjoin and/or foreign_keys and/or foreign()? Thanks in advance!
[sqlalchemy] tying multiple sessions together with one transaction
Hi All, My other usual question now ;-)

I have multiple databases that I'm connecting to from my application. I need to commit or roll back a single transaction across all of them. So, two-phase commit, right? Okay, but how do I tie the sessions together? What do I call commit on, and how? Is there anything better than zope.transaction? (which does address this need...)

Chris

-- 
Simplistix - Content Management, Batch Processing  Python Consulting
 - http://www.simplistix.co.uk
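[Editorial sketch] One built-in option for the question above is the Session's own two-phase support: sessionmaker(twophase=True) together with per-Base binds. A minimal sketch under stated assumptions: real two-phase commit needs backends with PREPARE TRANSACTION / XA support (e.g. PostgreSQL, MySQL), so the sqlite engines and the Base1/Base2 names here are placeholders only.

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Placeholder engines: sqlite is used only so the snippet is
# self-contained; a real setup would point at two-phase-capable
# databases, e.g. "postgresql://user:pass@host1/db1".
engine1 = create_engine("sqlite://")
engine2 = create_engine("sqlite://")

# twophase=True makes session.commit() PREPARE the transaction on every
# participating engine before issuing the final COMMIT on each, so the
# databases commit or roll back together.
Session = sessionmaker(twophase=True)

# binds routes mapped classes (or Table objects / declarative Bases) to
# their engines; Base1 / Base2 are hypothetical declarative bases.
# session = Session(binds={Base1: engine1, Base2: engine2})
session = Session()
```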
Re: [sqlalchemy] Sharding without the ORM
yes - if something is built in already. On Wednesday, 19 February 2014 12:46:48 UTC-5, Klauss wrote: On Wed, Feb 19, 2014 at 1:03 PM, Silviu Dicu silvi...@gmail.com wrote: Hi, I have a legacy application that doesn't use the ORM but I need to implement horizontal sharding on mysql. Has anybody done something similar, or does anyone have any ideas? You mean besides manually computing a consistent hash and binding the query to one engine or another? (which btw isn't hard at all)
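The "manually computing a hash and binding the query to one engine or another" approach can be sketched in a few lines of Core code. Names and URLs below are made up, and note that plain modulo hashing (used here for brevity) redistributes most keys if the shard count changes — a production setup wanting true consistent hashing would use a hash ring:

```python
# Sketch: pick an engine per sharding key, no ORM required.
import zlib
from sqlalchemy import create_engine, text

shards = [
    create_engine("sqlite://"),  # stand-in for e.g. mysql://shard0/app
    create_engine("sqlite://"),  # stand-in for e.g. mysql://shard1/app
]

def engine_for(key):
    """Deterministically map a sharding key to one of the shard engines."""
    # zlib.crc32 is stable across processes and runs, unlike builtin hash()
    return shards[zlib.crc32(str(key).encode("utf8")) % len(shards)]

def query_shard(user_id):
    """Bind a query to whichever shard the key hashes to."""
    eng = engine_for(user_id)
    with eng.connect() as conn:
        # the real per-shard query goes here; SELECT 1 keeps this runnable
        return conn.execute(text("SELECT 1")).scalar()
```

The same `engine_for()` lookup works for writes as well, as long as every statement touching a given key consistently goes through it.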
Re: [sqlalchemy] HSTORE intermittent bug
On Feb 19, 2014, at 1:28 PM, a...@bitcredits.io wrote: Hi there, I've been hitting an intermittent bug with SQLAlchemy/PostgreSQL using an HSTORE column. Sometimes, when I restart my process (pyramid over gevent over chaussette), I get a spew of such errors... some other times, everything goes just fine. It seems this bug hasn't been documented anywhere, or at least the error I get wasn't reported. If you guys have any insight at all that would help me out in my search for the bug, I'd be eternally grateful. Here is the traceback: if you're on 0.8.4 or any 0.9, there is a bug fix going out today that is very likely part of this; it has to do with the engine determining if HSTORE support is turned on. Assuming this is your issue, a workaround would be:

eng = create_engine(…)
conn = eng.connect()
conn.close()

before any other use of that engine occurs. This will ensure that the correct info about HSTORE is determined up front.

  File "/home/abourget/build/bitc/Bitc/bitc/api.py", line 126, in website_from_origin
    website = Website.get_by_host(origin)
  File "/home/abourget/build/bitc/Bitc/bitc/models.py", line 299, in get_by_host
    Website.deleted == False).first()
  File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2320, in first
    ret = list(self[0:1])
  File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/query.py", line 2187, in __getitem__
    return list(res)
  File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 72, in instances
    rows = [process[0](row, None) for row in fetch]
  File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 451, in _instance
    populate_state(state, dict_, row, isnew, only_load_props)
  File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/loading.py", line 305, in populate_state
    populator(state, dict_, row)
  File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/orm/strategies.py", line 154, in fetch_col
    dict_[key] = row[col]
  File "/home/abourget/build/bitc/env/local/lib/python2.7/site-packages/sqlalchemy/dialects/postgresql/hstore.py", line 291, in process
    return _parse_hstore(value.decode(encoding))
AttributeError: 'dict' object has no attribute 'decode'

Just for a little context, models.py:299 in full is:

website = db.query(Website).filter(Website.domain == host,
                                   Website.deleted == False).first()

There is one field on Website, declared as such:

data_apis = Column(MutableDict.as_mutable(HSTORE), default=dict)

perhaps that last default=dict would induce such a problem? I'll try investigating that path. thanks for any help! Alexandre
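Spelled out, the suggested workaround is just a throwaway connect at application startup, which forces dialect initialization (the step where HSTORE support is detected) before any worker or greenlet touches the engine. A runnable sketch, with SQLite standing in for the real Postgresql URL so the snippet executes anywhere:

```python
# Workaround sketch: initialize the dialect once, up front.
from sqlalchemy import create_engine, text

# in the real app this would be create_engine("postgresql://...")
eng = create_engine("sqlite://")

# A single connect/close is enough -- no "SELECT 1; ROLLBACK" needed,
# since dialect initialization happens on the first connection.
conn = eng.connect()
conn.close()

# later, normal use proceeds against a fully-initialized dialect
with eng.connect() as conn:
    result = conn.execute(text("SELECT 1")).scalar()
```

The key point is doing this once per engine, immediately after `create_engine()`, before the process starts serving concurrent requests.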
Re: [sqlalchemy] Relationship issue with composite foreign key
On 19/02/14 16:01, Michael Bayer wrote: On Feb 19, 2014, at 1:34 PM, Mariano Mara mariano.m...@gmail.com wrote: On 19/02/14 15:28, Michael Bayer wrote: On Feb 19, 2014, at 1:14 PM, Mariano Mara mariano.m...@gmail.com wrote: # relationships runner = relationship(User, foreign_keys=[uid, oid], #primaryjoin=and_(Bet.uid==User.uid, Bet.oid==User.oid), ) loader = relationship(User, foreign_keys=[loader_uid, oid]) you've got the quoting off on that commented-out primaryjoin; here's the most direct way to set it up: runner = relationship(User, primaryjoin=and_(foreign(Bet.uid)==User.uid, foreign(Bet.oid)==User.oid), ) loader = relationship(User, primaryjoin=and_(foreign(Bet.loader_uid)==User.uid, foreign(Bet.oid)==User.oid), ) I have tried with the primaryjoin too (although not including foreign()) and I hit a different error. Your suggestion -of course- works. I am confused now: when do I use primaryjoin and/or foreign_keys and/or foreign()? well these approaches are largely interchangeable in the vast majority of cases. your schema here is actually throwing two curveballs at once: one is the usual "more than one foreign key constraint" problem, but the other is that two of these constraints actually share the same column. For example I can also set foreign_keys=[loader_uid] / foreign_keys=[uid], and it configures, because the 'oid' column isn't there to confuse things, but then oid isn't taken into account. Really, SQLA's "here's how I figure things out based on the foreign keys" logic is not quite ready for that specific set of inputs. I'll add a bug report for it. the approach of primaryjoin with foreign() is the most specific method and leaves the fewest questions for SQLAlchemy to figure out. Crystal clear. Thanks for your answer and time. I really appreciate it. Mariano
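For anyone finding this thread later, here is a runnable sketch of the working configuration. The model shapes are guessed from the fragments in the thread; the string form of primaryjoin is used so each class can reference the other, and foreign() marks the referring side of each comparison so SQLAlchemy doesn't have to guess even though both relationships share the oid column:

```python
# Sketch of composite-key relationships disambiguated with foreign().
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    uid = Column(Integer, primary_key=True)
    oid = Column(Integer, primary_key=True)
    name = Column(String)

class Bet(Base):
    __tablename__ = "bets"
    bid = Column(Integer, primary_key=True)
    uid = Column(Integer)
    loader_uid = Column(Integer)
    oid = Column(Integer)

    # both joins use oid; foreign() says which side acts as the FK
    runner = relationship(
        "User",
        primaryjoin="and_(foreign(Bet.uid) == User.uid, "
                    "foreign(Bet.oid) == User.oid)",
    )
    loader = relationship(
        "User",
        primaryjoin="and_(foreign(Bet.loader_uid) == User.uid, "
                    "foreign(Bet.oid) == User.oid)",
    )

eng = create_engine("sqlite://")
Base.metadata.create_all(eng)
session = Session(eng)
session.add_all([
    User(uid=1, oid=1, name="alice"),
    User(uid=2, oid=1, name="bob"),
    Bet(bid=1, uid=1, loader_uid=2, oid=1),
])
session.commit()

bet = session.get(Bet, 1)
```

With this in place, `bet.runner` and `bet.loader` each load the correct User row via their composite join.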
Re: [sqlalchemy] Re: SQLAlchemy 0.9.2 UnicodeEncodeErrors with PG 9.3.2 on Amazon RDS
Wow! Thanks. This is by far the best-written and most informative stuff I've read on this. I've been considering switching to AWS sooner-than-later for the RDS (we bootstrapped on Linode with a plan to redeploy on AWS in the future), but it looks like we shouldn't pursue that yet! On Wednesday, February 19, 2014 3:08:15 AM UTC-5, Valentino Volonghi wrote: PG RDS is fairly bad; we're going to move off of it because it's really unusable, so we're creating our own replacement for it. But if you enable Multi-AZ support you'll have a ton of latency that's unexplained (multiple seconds of latency on each query or connection). Sometimes write latencies will spike to multiple seconds and so on. We ran a pg 8.4 database in EC2 on our own instance with EBS and everything and never had these issues a single time in 5 years. We've had more problems with PG 9.3.2 in RDS in 4 weeks than with 8.4 on our hardware in 5 years. Still vastly immature IMHO, and we're using PIOPS and EBS-optimized instances. The other issue is that there are no c3s in RDS yet, so you get a much faster machine (5x) by running your own c3s instead of the m2s, for 50% more cost. -- Valentino Volonghi | Chief Architect P: +1-877-723-7655 | F: +1-415-738-8026 AdRoll.com Increase Sales. Raise Awareness.
[sqlalchemy] Measure time to checkout a connection from the pool
Hello, I'd like to measure how much time my code spends waiting to check out a connection from the pool. Why? Because I suspect that I have too many workers and not enough connections in the pool, and I'd like to gather some hard numbers to prove my theory one way or the other. I see that there are events I can register for: http://docs.sqlalchemy.org/en/rel_0_9/core/events.html#connection-pool-events. However, offhand I don't see a documented way to get the time spent waiting in checkout. I took a peek at pool.py (I'm using sqlalchemy version 0.9.1). I see _ConnectionRecord doing the following:

def __connect(self):
    try:
        self.starttime = time.time()
        connection = self.__pool._creator()
        self.__pool.logger.debug("Created new connection %r", connection)
        return connection
    except Exception as e:
        self.__pool.logger.debug("Error on connect(): %s", e)
        raise

Is it safe to use this starttime? Does it measure the time-to-checkout? Or time-to-establish-new-connection-in-pool? Or something else? Is there a better way to get the info I'm looking for? thanks! --Shahaf
Re: [sqlalchemy] Measure time to checkout a connection from the pool
On Feb 19, 2014, at 9:27 PM, Shahaf Abileah sha...@gmail.com wrote: Hello, I'd like to measure how much time my code spends waiting to check out a connection from the pool. Why? Because I suspect that I have too many workers and not enough connections in the pool, and I'd like to gather some hard numbers to prove my theory one way or the other. I see that there are events I can register for: http://docs.sqlalchemy.org/en/rel_0_9/core/events.html#connection-pool-events. However, offhand I don't see a documented way to get the time spent waiting in checkout. I took a peek at pool.py (I'm using sqlalchemy version 0.9.1). I see _ConnectionRecord doing the following:

def __connect(self):
    try:
        self.starttime = time.time()
        connection = self.__pool._creator()
        self.__pool.logger.debug("Created new connection %r", connection)
        return connection
    except Exception as e:
        self.__pool.logger.debug("Error on connect(): %s", e)
        raise

Is it safe to use this starttime? Does it measure the time-to-checkout? Or time-to-establish-new-connection-in-pool? Or something else? Is there a better way to get the info I'm looking for? for profiling code you should use the Python profiler module. See http://stackoverflow.com/questions/1171166/how-can-i-profile-a-sqlalchemy-powered-application/1175677#1175677 for an introduction to using Python profiling with specifics regarding SQLAlchemy.
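If a full profiler run is more than you need, one lighter-weight option (a sketch, not an official API — the wrapper name here is made up) is to time `engine.connect()` itself. That measures exactly the caller-visible checkout wait, whatever its cause: establishing a brand-new connection versus waiting on a saturated pool:

```python
# Sketch: measure how long each pool checkout takes, as seen by the caller.
import time
from sqlalchemy import create_engine, text

checkout_times = []  # collect durations; aggregate however you like

def timed_connect(engine):
    """engine.connect(), recording the elapsed checkout time in seconds."""
    start = time.monotonic()
    conn = engine.connect()
    checkout_times.append(time.monotonic() - start)
    return conn

eng = create_engine("sqlite://")  # stand-in for the real database URL
conn = timed_connect(eng)
conn.execute(text("SELECT 1"))
conn.close()
```

If the recorded times cluster near your pool `timeout` under load, that supports the "too many workers, too few connections" theory; near-zero times point elsewhere.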
[sqlalchemy] SQLAlchemy 0.8.5 Released
Hey list - SQLAlchemy release 0.8.5 is now available. Release 0.8.5 is a bugfix release which includes a wide array of fixes backported from the 0.9 series. Both 0.8.5 and 0.9.3 are being released simultaneously to resolve a concurrency-related regression introduced in version 0.8.4, which can cause a dialect to sporadically be used in a mis-configured state when it first connects. For this reason, 0.8.5 is recommended for all users on 0.8.4. Other improvements include updates to the Postgresql dialect to better handle schema reflection when working with old versions of Postgresql, namely pre-8.1 versions, which should also have a positive effect on those users using Amazon Redshift. Users should carefully review the Changelog at http://www.sqlalchemy.org/changelog/CHANGES_0_8_5 to note which behaviors and issues are affected. We'd like to thank the many contributors who helped with this release. SQLAlchemy 0.8.5 is available via the download page at: http://www.sqlalchemy.org/download.html
[sqlalchemy] SQLAlchemy 0.9.3 Released
Hey list - SQLAlchemy release 0.9.3 is now available. Release 0.9.3 includes a wide array of bugfixes and a few new features. A critical subset of these fixes are also backported to release 0.8.5. Both 0.9.3 and 0.8.5 are being released simultaneously to resolve a concurrency-related regression introduced in version 0.8.4, which can cause a dialect to sporadically be used in a mis-configured state when it first connects. For this reason, 0.9.3 is recommended for all users of the 0.9 series. Other improvements include updates to the Postgresql dialect to better handle schema reflection when working with old versions of Postgresql, namely pre-8.1 versions, which should also have a positive effect on those users using Amazon Redshift. 0.9.3 also includes improved support for SQLite type reflection, using SQLite's type affinity rules to unconditionally reflect all possible datatypes declared within a database. Users should carefully review the Changelog at http://www.sqlalchemy.org/changelog/CHANGES_0_9_3 to note which behaviors and issues are affected. We'd like to thank the many contributors who helped with this release. SQLAlchemy 0.9.3 is available via the download page at: http://www.sqlalchemy.org/download.html
Re: [sqlalchemy] So session.merge() does not automatically do .add()?
Please allow me to clarify here. So if attributes on source object u are different from attributes loaded from the database, the database will always prevail? I thought the source object u should represent the newer version of the data, shouldn't it? On Wed, Feb 19, 2014 at 7:21 AM, Michael Bayer mike...@zzzcomputing.com wrote: On Feb 19, 2014, at 10:01 AM, Bao Niu niuba...@gmail.com wrote: Thanks for replying. Isn't it the case that when I just provide a primary key for the source object, the attribute should be marked expired on the target instance? no, because the target instance is a product of either: a. essentially loaded by session.query(User).get(1), so not expired, or b. primary key identity 1 doesn't exist in the database yet, in which case the new object is only pending.
Re: [sqlalchemy] So session.merge() does not automatically do .add()?
On Feb 19, 2014, at 11:57 PM, Bao Niu niuba...@gmail.com wrote: Please allow me to clarify here. So if attributes on source object u are different from attributes loaded from the database, the database will always prevail? I thought the source object u should represent the newer version of the data, shouldn't it? All attributes that are present on your source object are copied to the database object. All attributes that are not present, are not. There is a difference between an attribute that is None and an attribute that is missing. In Python, this corresponds to whether or not you'd see the attribute actually present in the object's __dict__. If I make an object like this:

obj = User(id=1)

the only attribute that object has is "id". All other attributes are non-present. If my user table has a column like "name", it is only "present" if I say this:

obj = User(id=1, name='some name')

or if I want it to be None:

obj = User(id=1, name=None)

The merge process doesn't do anything with attributes that are missing. You can quickly see which attributes are present and which are not by just looking at the __dict__ of your object:

print obj.__dict__

If you'd like to make an object that has all attributes populated with None, assuming you're using declarative constructors you can do something like this:

from sqlalchemy import inspect
obj = User(id=1, **dict((attr.key, None)
                        for attr in inspect(User).attrs
                        if not attr.uselist and attr.key != 'id'))

Though that doesn't account for collections, which typically need to be initialized with an empty collection of the appropriate type. That line is kind of awkward, and what should be used instead sort of depends on what you're trying to do.
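The present-versus-missing distinction is easy to verify in a quick runnable example (the model and data are invented for illustration):

```python
# Demonstrates that merge() copies only attributes present in the
# source object's __dict__, leaving missing attributes untouched.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

eng = create_engine("sqlite://")
Base.metadata.create_all(eng)

session = Session(eng)
session.add(User(id=1, name="alice"))
session.commit()

# "name" is missing from this source object's __dict__,
# so merge() leaves the database value alone:
merged = session.merge(User(id=1))
assert merged.name == "alice"

# here "name" is present (and None), so merge() copies the None across:
merged = session.merge(User(id=1, name=None))
assert merged.name is None
```

Printing `User(id=1).__dict__` versus `User(id=1, name=None).__dict__` shows exactly the difference merge() is keying off.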