Re: [sqlalchemy] Issue with relations on postgres timestamps with the value "-infinity"
Thanks for the quick update. I did not have the reflex to look into psycopg2 for this. But indeed, a custom SA type would help here too.

On Thursday, 26 October 2017 17:46:29 UTC+2, Mike Bayer wrote:

> On Thu, Oct 26, 2017 at 11:16 AM, Michel Albert <exh...@gmail.com> wrote:
>
> > I've run into an issue with tables containing the special "-infinity"
> > value in postgres. More precisely, when resolving relations on such
> > tables. SQLAlchemy will convert the value "-infinity" into
> > "datetime(1, 1, 1, 0, 0)" and try to find rows in the related table
> > with that value.
>
> SQLAlchemy does not apply conversions of any kind to date values except
> in the case of the SQLite dialect. In this case, you are seeing the
> database driver's documented behavior:
>
> http://initd.org/psycopg/docs/usage.html#infinite-dates-handling
>
> The workaround they provide allows your case to work; note you need to
> adapt to datetime, not date:
>
>     import datetime
>
>     import psycopg2.extensions
>
>     class InfDateAdapter:
>         def __init__(self, wrapped):
>             self.wrapped = wrapped
>
>         def getquoted(self):
>             if self.wrapped == datetime.datetime.max:
>                 return b"'infinity'::date"
>             elif self.wrapped == datetime.datetime.min:
>                 return b"'-infinity'::date"
>             else:
>                 return psycopg2.extensions.TimestampFromPy(self.wrapped).getquoted()
>
>     psycopg2.extensions.register_adapter(datetime.datetime, InfDateAdapter)
>
> This same adaptation can be applied at the SQLAlchemy level using
> TypeDecorator.
>
> > This will obviously not match and return no rows. Using a "joinedload"
> > on such relations will work, but then the backreference will have the
> > same issue.
> >
> > I've created a small runnable example here:
> > https://gist.github.com/exhuma/3e7d0c7d176b41b208af684e55bcf020
> >
> > It should be self-explanatory.
> >
> > I have not tested this with positive "infinity" but I assume it will be
> > converted into "datetime.datetime.max" and thus fall victim to the same
> > issue.
> > Any ideas for a workaround which does not involve modifying the values
> > in the DB?

--
SQLAlchemy - The Python SQL Toolkit and Object Relational Mapper
http://www.sqlalchemy.org/

To post example code, please provide an MCVE: Minimal, Complete, and Verifiable Example. See http://stackoverflow.com/help/mcve for a full description.
---
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at https://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
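Mike's closing suggestion, doing the same adaptation with TypeDecorator, could look roughly like this. This is a sketch, not code from the thread: the class name `InfinityDateTime` is made up, and it assumes PostgreSQL will coerce the literal strings `'infinity'`/`'-infinity'` when they arrive as bound parameters for a timestamp column (psycopg2 already maps infinite timestamps to `datetime.max`/`datetime.min` on the result side, so only the bind side needs work):

```python
import datetime

from sqlalchemy import DateTime
from sqlalchemy.types import TypeDecorator


class InfinityDateTime(TypeDecorator):
    """Send datetime.min/max as PostgreSQL's '-infinity'/'infinity'.

    Illustrative name -- not part of SQLAlchemy or the thread.
    """

    impl = DateTime
    cache_ok = True  # harmless on older SQLAlchemy, required by 1.4+ caching

    def process_bind_param(self, value, dialect):
        # Replace the sentinel datetimes with the special literals
        # PostgreSQL understands; everything else passes through.
        if value == datetime.datetime.max:
            return "infinity"
        if value == datetime.datetime.min:
            return "-infinity"
        return value

    def process_result_value(self, value, dialect):
        # psycopg2 already returns datetime.max/min for infinite
        # timestamps, so nothing to do on the way out.
        return value
```

Columns declared with this type (instead of plain `DateTime`) would then produce matching relation lookups for rows holding "-infinity".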
[sqlalchemy] Issue with relations on postgres timestamps with the value "-infinity"
I've run into an issue with tables containing the special "-infinity" value in postgres. More precisely, when resolving relations on such tables.

SQLAlchemy will convert the value "-infinity" into "datetime(1, 1, 1, 0, 0)" and try to find rows in the related table with that value. This will obviously not match and return no rows. Using a "joinedload" on such relations will work, but then the backreference will have the same issue.

I've created a small runnable example here:

https://gist.github.com/exhuma/3e7d0c7d176b41b208af684e55bcf020

It should be self-explanatory.

I have not tested this with positive "infinity" but I assume it will be converted into "datetime.datetime.max" and thus fall victim to the same issue.

Any ideas for a workaround which does not involve modifying the values in the DB?
[sqlalchemy] Unit testing, mocking and dependency injection with SA Declarative.
I am trying to wrap my head around how to do dependency injection with SQLAlchemy and I am walking in circles.

I want to be able to mock out SA for most of my tests. I trust SA and don't want to test serialisation into the DB. I just want to test my own code. So I was thinking to use dependency injection and mock out SA during testing. But I don't know what to mock out, or how and when to set up the session properly, without doing it at the module level (which causes unwanted side-effects just by importing the module).

The only solution which comes to mind is to have one singleton which deals with that. But that feels very unpythonic to me and I am wondering if there's a better solution.

I also saw that create_engine has an optional "module" kwarg, which I could mock out. But then SA begins complaining that the return types are not correct. And I don't want to specify return values for every possible db-module call. That's way out of scope of my tests. I am not calling anything on the db-module. That's SA's job, and, as said, I already trust SA.

Whenever I work on this I always run into the sessionmaker initialisation as well. The examples do this at the module level, which really makes me feel uneasy.

Any tips? Just prodding me in the right direction might help me out enough.
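One common shape for this, sketched below and not taken from any reply in the thread (`UserRepo`, `email_for` and `make_repo` are invented names), is to pass the session factory in as a constructor argument and do the real sessionmaker wiring in a single function called from your entry point, never at import time. Tests then inject a fake factory and no database or SA internals need to be mocked:

```python
from unittest import mock

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:  # declarative_base moved in SQLAlchemy 1.4
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    email = Column(String)


class UserRepo:
    """Receives a session factory instead of importing a module-level
    Session -- this constructor argument is the injection point."""

    def __init__(self, session_factory):
        self._session_factory = session_factory

    def email_for(self, user_id):
        session = self._session_factory()
        try:
            user = session.query(User).filter_by(id=user_id).first()
            return user.email if user is not None else None
        finally:
            session.close()


def make_repo(url):
    # Production wiring lives here, called from main() -- importing
    # this module has no side effects.
    engine = create_engine(url)
    Base.metadata.create_all(engine)
    return UserRepo(sessionmaker(bind=engine))


# In a test, hand in a factory returning a mock; no database involved:
fake_session = mock.Mock()
fake_session.query.return_value.filter_by.return_value.first.return_value = (
    mock.Mock(email="alice@example.com"))
repo = UserRepo(lambda: fake_session)
print(repo.email_for(1))  # alice@example.com
```

The same factory argument also accepts a sessionmaker bound to an in-memory SQLite engine, for tests that do want to exercise real queries.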
[sqlalchemy] Do not add entities automatically to the session?
Hi,

I realised that SA adds an instance to the session as soon as I instantiate it. How/where can I disable this?

For example, currently I have this behaviour:

    session = get_session()
    my_user = User(email='f...@example.com')
    session.query(User).count()  # 1

but instead I would like to have the following:

    session = get_session()
    my_user = User(email='f...@example.com')
    session.query(User).count()  # 0
    session.add(my_user)
    session.query(User).count()  # 1

-- mich.
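For comparison, a stock Session with plain declarative classes already behaves the way the second listing asks for: a freshly constructed instance is transient and belongs to no session until explicitly added. Auto-adding normally comes from something else in the setup (for example a scoped session wired into the model constructor). A runnable sketch with an in-memory SQLite engine and an illustrative model:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import sessionmaker
try:  # declarative_base moved in SQLAlchemy 1.4
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    email = Column(String)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

my_user = User(email="alice@example.com")  # transient: no session involved
print(session.query(User).count())         # 0
session.add(my_user)                       # now pending in this session
print(session.query(User).count())         # 1 (autoflush ran the INSERT)
```

If instances appear in the session without an explicit add(), the thing to look for is whatever `get_session()` or the base class is doing behind the scenes.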
[sqlalchemy] Re: strange attributeerror module object has no attribute exc when using sqlalchemy from a mod_wsgi handle
On Tuesday, 27 March 2012 23:39:11 UTC+2, alonn wrote:

> This is what I got from tailing the mod_wsgi error stack:
>
> [Tue Mar 27 23:14:16 2012] [error] [client 127.0.0.1] from sqlalchemy import create_engine, String, Unicode, Integer, Column, func, distinct, desc
> [Tue Mar 27 23:14:16 2012] [error] [client 127.0.0.1] File "/path/to/virtualenv/app/data/virtenv/lib/python2.6/site-packages/sqlalchemy/__init__.py", line 10, in <module>
> [Tue Mar 27 23:35:50 2012] [error] [client 127.0.0.1] AttributeError: 'module' object has no attribute 'exc'
>
> Actually, when I run the file directly from python without mod_wsgi the error doesn't show up... strange. I'll be glad of any help/experience with this strange problem!

I have the exact same problem... Did you find a solution yet on your end?
[sqlalchemy] Re: strange attributeerror module object has no attribute exc when using sqlalchemy from a mod_wsgi handle
On Tuesday, 12 June 2012 15:31:26 UTC+2, Michel Albert wrote:

> On Tuesday, 27 March 2012 23:39:11 UTC+2, alonn wrote:
>
> > [same mod_wsgi traceback as above, ending in: AttributeError: 'module' object has no attribute 'exc']
>
> I have the exact same problem... Did you find a solution yet on your end?

Upon closer inspection it was an access-rights problem: mod_wsgi could not write into its egg cache. The following section of the docs explains how to fix this:

http://code.google.com/p/modwsgi/wiki/ApplicationIssues#Access_Rights_Of_Apache_User

After that I had to restart the server. Not only touch the WSGI file!
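For reference, in daemon mode the fix boils down to pointing mod_wsgi at an egg-cache directory the Apache user can write to. A sketch with illustrative paths and process-group names (the `python-eggs` option of `WSGIDaemonProcess` and the `WSGIPythonEggs` directive for embedded mode are both documented mod_wsgi settings):

```apache
# First create a directory writable by the Apache user, e.g.:
#   mkdir -p /var/cache/apache2/python-eggs
#   chown www-data /var/cache/apache2/python-eggs

# Daemon mode: per-process egg cache
WSGIDaemonProcess myapp python-eggs=/var/cache/apache2/python-eggs
WSGIProcessGroup myapp

# Embedded mode instead uses the server-wide directive:
# WSGIPythonEggs /var/cache/apache2/python-eggs
```

As the poster notes, a full Apache restart is needed afterwards; touching the WSGI file only reloads the application, not the interpreter environment.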
[sqlalchemy] Re: Referential integrity actions are not doing what I want
On Jun 22, 4:07 pm, Michael Bayer <mike...@zzzcomputing.com> wrote:

> On Jun 22, 2010, at 9:27 AM, exhuma.twn wrote:
>
> > Hi,
> >
> > I have a table of items, where each item can be owned by one person,
> > and held by someone else. I want the owner to be compulsory (not
> > nullable), and the holder to be optional (nullable).
> >
> > To model this I have two tables, one for contacts and one for items.
> > The item table has two fields, owner_id and holder_id. Both are
> > references to the contact table and have the "on delete" rule set to
> > RESTRICT and SET NULL respectively.
> >
> > The problem is that when I want to delete the contact attached to the
> > holder_id column, it seems that SA tries to set *both* references to
> > null. It should not do this! For example: if you have an item which
> > has an owner_id 1 and a holder_id 2, then deleting the contact with
> > ID 2 will cause the following query:
> >
> >     'UPDATE item SET owner_id=%(owner_id)s, holder_id=%(holder_id)s WHERE item.item_id = %(item_item_id)s'
> >     {'holder_id': None, 'item_item_id': 10, 'owner_id': None}
> >
> > First of all, *why* is SA issuing this query at all?
>
> I can't answer very specifically what the case here is since you didn't
> provide mappings or example code. I'm assuming you're using the ORM and
> we're talking about relationship() here. It appears that holder_id and
> owner_id are handled by different relationships() altogether. Assuming
> that's the case, affecting the "holder" relationship would have no
> effect on owner_id.
>
> So if the case is that "owner" is being detached and you'd like the item
> to be deleted, you need to be using "delete, delete-orphan" cascade so
> that the row is deleted, rather than dereferenced from the parent:
>
> http://www.sqlalchemy.org/docs/session.html#cascades
>
> However, I see you have RESTRICT set on owner_id. If you would like to
> instead entirely prevent the modification of owner_id when the parent
> object is deleted, and allow referential integrity to raise an error,
> you should set passive_deletes to 'all', which will disable the nulling
> out on delete of the parent. This is documented under relationship() at
> http://www.sqlalchemy.org/docs/reference/orm/mapping.html?#sqlalchemy
>
> > A delete query would suffice. The ref. integrity should be handled by
> > the DB, shouldn't it?
> >
> > More importantly, it updates both owner_id and holder_id. But as
> > previously said, owner_id=1 and holder_id=2. So deleting contact #2
> > should only trigger - if at all - an update query to set holder_id to
> > null.
> >
> > Any ideas as to what I am doing wrong here? Here's my model for those
> > two tables:
> >
> >     contact = Table(
> >         'contact', metadata,
> >         Column('contact_id', Integer, primary_key=True, nullable=False),
> >         Column('label', Unicode(64), nullable=False),
> >     )
> >
> >     item = Table(
> >         'item', metadata,
> >         Column('item_id', Integer, nullable=False, primary_key=True),
> >         Column('label', Unicode(64)),
> >         Column('barcode', String(64)),
> >         Column('comment', UnicodeText()),
> >         Column('owner_id', Integer,
> >                ForeignKey('contact.contact_id',
> >                           onupdate="CASCADE", ondelete="RESTRICT"),
> >                nullable=False),
> >         Column('holder_id', Integer,
> >                ForeignKey('contact.contact_id',
> >                           onupdate="CASCADE", ondelete="SET NULL"),
> >                nullable=True),
> >     )

Thanks for the answer. From the docs, this seems to be what I am looking for. But it still isn't quite giving me what I want.
For reference, here's the mapper:

    mapper(Item, item, properties={
        'owner': relation(
            Contact,
            backref='owned_items',
            primaryjoin=item.c.owner_id == contact.c.contact_id,
            passive_deletes='all'),
        'holder': relation(
            Contact,
            backref='held_items',
            primaryjoin=item.c.holder_id == contact.c.contact_id,
            passive_deletes=True),
    })

In the case of my application I want "owner" to be a compulsory relation, and "holder" an optional one. So, if an owner gets deleted, I want all items to be removed as well. For that I have the ON DELETE CASCADE rule in the DB. On the other hand however, as holders are optional, they should simply be set to NULL if the related contact disappears. This is why I used the two different values for passive_deletes.

The way I understand the docs, having passive_deletes set to True or 'all' lets the DB handle referential integrity. SA will not touch it, which will render a cascade rule on the ORM relation useless. As I have the rules set in the DB itself, the above mapper config seems okay to me.

But still, if I have an Item owned by contact #9 (and held by another one for example) and delete that contact, SA issues an update query nevertheless. It still tries to set
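The SET NULL half of this setup can be shown working end to end in a self-contained script. This is a sketch rather than the poster's code: it uses SQLite (which honours ON DELETE actions once foreign-key enforcement is switched on) and the declarative/relationship spelling instead of the classical mapper() above, with the model cut down to the holder side only:

```python
import sqlalchemy as sa
from sqlalchemy import event
from sqlalchemy.orm import relationship, sessionmaker
try:  # declarative_base moved in SQLAlchemy 1.4
    from sqlalchemy.orm import declarative_base
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Contact(Base):
    __tablename__ = "contact"
    contact_id = sa.Column(sa.Integer, primary_key=True)
    # passive_deletes=True: don't load held items when the contact is
    # deleted; let the database's ON DELETE SET NULL do the work
    held_items = relationship("Item", passive_deletes=True)


class Item(Base):
    __tablename__ = "item"
    item_id = sa.Column(sa.Integer, primary_key=True)
    holder_id = sa.Column(
        sa.Integer,
        sa.ForeignKey("contact.contact_id", ondelete="SET NULL"),
        nullable=True,
    )


engine = sa.create_engine("sqlite://")


@event.listens_for(engine, "connect")
def _enable_fks(dbapi_conn, record):
    # SQLite only enforces ON DELETE actions with this pragma on
    dbapi_conn.execute("PRAGMA foreign_keys=ON")


Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

holder = Contact()
session.add(holder)
session.flush()                 # assigns holder.contact_id
session.add(Item(holder_id=holder.contact_id))
session.commit()

session.delete(holder)
session.commit()                # the DB, not SA, sets holder_id to NULL

print(session.query(Item).one().holder_id)  # None
```

Because the held_items collection is never loaded here, SA emits no UPDATE of its own; whether that still holds when the collection has already been loaded into the session is exactly the distinction between passive_deletes=True and 'all' that the thread turns on.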