[sqlalchemy] Re: Error when trying to use a dict
BEES INC wrote:
> phwoar, ok. I think I understand.
> [...]
> Have I understood this correctly, and is this the right way to do it?
> Thank you for your patience, I am still somewhat new to SQLAlchemy
> and ORMs in general.

you're good to go, nice job!
[sqlalchemy] Re: Custom ID Generator
nkhalasi wrote:
> However with this I am getting unwanted commits. Essentially, when the
> newid(ctx) function executes the update, it also does a commit, which
> results in my data being committed when I would have expected it to be
> committed at some other point. I am trying to figure out how this
> update of the next ID can be done along with my regular
> application-logic commit.

if you are doing something like engine.execute(statement), you'd need to use a transaction, i.e.:

    conn = engine.connect()
    trans = conn.begin()
    conn.execute(statement)
    trans.commit()

if you were executing the INSERT via the ORM (i.e. Session.commit()), the connection you receive is within a transaction and no autocommit will occur.

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
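The same no-commit-until-you-say-so behavior can be demonstrated with the standard library alone. This is a hedged sketch using Python's built-in sqlite3 module (not SQLAlchemy's API) of the connect / begin / execute / commit sequence described above:

```python
import sqlite3

# sqlite3 opens an implicit transaction on the first modifying DML
# statement; nothing is durable until commit() is called, mirroring the
# conn.begin() / trans.commit() pattern above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t (x) VALUES (1)")   # inside the transaction
# A conn.rollback() here would discard the inserted row.
conn.commit()                                  # the trans.commit() step
value = conn.execute("SELECT x FROM t").fetchone()[0]
print(value)  # -> 1
```

The key point carries over directly: group the statements that must succeed or fail together on one connection, and commit once at the end.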
[sqlalchemy] Re: Error when trying to use a dict
phwoar, ok. I think I understand. To break it down:

I have a class Foo which, among other things, contains a dict called vals with keys of dates and values of floats. When I store an instance of Foo, it goes to the foo table, and any items in the vals dict should go into the vals table.

For SQLAlchemy to understand how to store the items from the vals dict, I need an ORM(?) object that describes a single row in the vals table. This manifests as a Python object called Val, which for my case has a datetime and a float scalar. This maps to the datatypes of the vals table, i.e. one row in the vals table will have a datetime and a float.

    class Val(object):
        def __init__(self, key, value):
            self.date = key
            self.val = value

Then, in the mapping of the Foo object, I create a relation with the Val object called vals_dict, which is a column_mapped_collection keyed on Val.date. This manifests as Foo.vals_dict, a dict with datetime keys and Val objects as values.

    mapper(Foo, foo, properties=dict(
        vals_dict=relation(Val, secondary=foo_vals,
                           collection_class=column_mapped_collection(vals.c.date))
    ))

Since a dict with a datetime key giving a Val object is somewhat unwieldy to work with, I create an association_proxy called Foo.vals, which, when accessed by a given key, returns the 'val' attribute of the corresponding Val object.

    Foo.vals = association_proxy('vals_dict', 'val')

Which seems to do what I want :)

Have I understood this correctly, and is this the right way to do it? Thank you for your patience, I am still somewhat new to SQLAlchemy and ORMs in general.
    #!/usr/bin/env python2.6

    from sqlalchemy import *
    from sqlalchemy.orm import sessionmaker, mapper, relation
    from sqlalchemy.orm.collections import column_mapped_collection
    from sqlalchemy.ext.associationproxy import association_proxy

    from datetime import datetime

    class Val(object):
        def __init__(self, key, value):
            self.date = key
            self.val = value

    class Foo(object):
        def __init__(self, name):
            self.name = name

    engine = create_engine('sqlite:///:memory:', echo=True)
    meta = MetaData(bind=engine)

    vals = Table('vals', meta,
        Column('id', Integer, primary_key=True),
        Column('date', DateTime),
        Column('val', Float),
    )

    foo = Table('foo', meta,
        Column('id', Integer, primary_key=True),
        Column('name', String(50))
    )

    foo_vals = Table('foo_vals', meta,
        Column('foo_id', Integer, ForeignKey('foo.id'), primary_key=True),
        Column('val_id', Integer, ForeignKey('vals.id'), primary_key=True)
    )

    mapper(Val, vals)

    mapper(Foo, foo, properties=dict(
        vals_dict=relation(Val, secondary=foo_vals,
                           collection_class=column_mapped_collection(vals.c.date))
    ))

    Foo.vals = association_proxy('vals_dict', 'val')

    meta.create_all()

    Session = sessionmaker(bind=engine)
    session = Session()

    f = Foo('hello')
    f.vals[datetime(2009, 9, 28)] = 1.0

    f2 = Foo('hello again')
    f2.vals[datetime(2008, 8, 28)] = 2.0

    session.add(f)
    session.add(f2)
    session.commit()

    print "%s -> %s" % (f.name, f.vals)
    print "%s -> %s" % (f2.name, f2.vals)

    print f.vals_dict
    for x in f.vals_dict:
        print "x = %s" % x
        y = f.vals_dict[x]
        print "y = %s" % y
        print "y.date = %s, y.val = %s" % (y.date, y.val)

On Sun, Sep 27, 2009 at 5:59 AM, Michael Bayer wrote:
>
> On Sep 26, 2009, at 1:25 PM, BEES INC wrote:
>
>>     class Vals(dict): pass
>>
>>     class Foo(object):
>>
>>         def __init__(self, name):
>>             self.vals = Vals()
>>             self.name = name
>>
>>         def add(self, key, value):
>>             self.vals[key] = value
>
> you would need to say self.vals[key] = Vals() here.
> column_mapped_collection uses full ORM instances as values. In any
> case I don't think you mean to map the "vals" table to a "dict"
> subclass - the mapping of a table to a class implies how to represent
> a single *row*, not the full table as a whole.
>
> If you are attempting to map from scalar keys to scalar (non-object)
> values, you should look into using the AssociationProxy to accomplish
> that.
>
>>     engine = create_engine('sqlite:///:memory:', echo=True)
>>     meta = MetaData(bind=engine)
>>     Session = sessionmaker(bind=engine)
>>
>>     vals = Table('vals', meta,
>>         Column('id', Integer, primary_key=True),
>>         Column('date', DateTime),
>>         Column('val', Integer),
>>         Column('foo_id', Integer, ForeignKey('foo.id'))
>>     )
>>
>>     mapper(Vals, vals)
>>
>>     foo = Table('foo', meta,
>>         Column('id', Integer, primary_key=True),
>>         Column('name', String(50))
>>     )
>>
>>     mapper(Foo, foo, properties=dict(
>>         vals=relation(Vals, collection_class=column_mapped_collection(vals.c.date))
>>     ))
>>
>>     sess = Session()
>>     meta.create_all()
>>
>>     f = Foo('hello')
>>     f.add(datetime.now(), 1.0)
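What the association_proxy in this thread does can be sketched ORM-free in plain Python: it forwards dict access on Foo.vals through to the `.val` attribute of the underlying Val objects, wrapping scalars in Val rows on assignment. All names below are hypothetical stand-ins for the mapped classes, not SQLAlchemy internals:

```python
from datetime import datetime

class Val(object):
    """Stand-in for the mapped Val class: one row of the vals table."""
    def __init__(self, key, value):
        self.date = key
        self.val = value

class ValsProxy(object):
    """Minimal dict-like view mimicking association_proxy('vals_dict', 'val')."""
    def __init__(self, owner):
        self._owner = owner

    def __getitem__(self, key):
        # Reads unwrap the Val object down to its scalar .val
        return self._owner.vals_dict[key].val

    def __setitem__(self, key, value):
        # Writes mirror the proxy's "creator": wrap the scalar in a Val row
        self._owner.vals_dict[key] = Val(key, value)

class Foo(object):
    def __init__(self, name):
        self.name = name
        self.vals_dict = {}   # what column_mapped_collection maintains

    @property
    def vals(self):
        return ValsProxy(self)

f = Foo('hello')
f.vals[datetime(2009, 9, 28)] = 1.0
print(f.vals[datetime(2009, 9, 28)])            # -> 1.0 (scalar view)
print(f.vals_dict[datetime(2009, 9, 28)].val)   # -> 1.0 (underlying row)
```

The real proxy does considerably more (collection events, creator functions), but the scalar-in, row-stored, scalar-out shape is the same.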
[sqlalchemy] Re: How to get the instance back from a PropComparator?
Ah ha, so SQL expression operations are all translated directly to their corresponding SQL clauses. This makes sense, I guess. Too bad this means I have to implement the same function twice, once in Python and once in SQL. It would be nice if there were some magic to let custom properties also operate on the Python side, but I guess that would be much slower.

Right now I solved this problem with this new comparator:

    class AgeComparator(PropComparator):
        def __clause_element__(self):
            return (func.datediff(func.curdate(),
                                  self.mapper.c.date_of_birth) / 365)

        def operate(self, op, *args, **kwargs):
            return op(self.__clause_element__(), *args, **kwargs)

Thanks a lot!

On Sep 27, 1:14 pm, Michael Bayer wrote:
> [...]
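The Python-side half of the "implement it twice" complaint is plain stdlib date arithmetic. This is a hedged sketch of what the `age` property in the thread computes (the function name is hypothetical; where the original wrote `today + 1`, a one-day timedelta is assumed, since a bare int can't be added to a date):

```python
from datetime import date, timedelta

def age_in_years(date_of_birth, today=None):
    """Mirror of the thread's `age` property:
    floor((today - date_of_birth).days / 365).  A missing birth date
    is replaced by tomorrow, yielding a negative age as in the original."""
    today = today or date.today()
    born = date_of_birth or (today + timedelta(days=1))
    return (today - born).days // 365

print(age_in_years(date(2000, 1, 1), today=date(2009, 9, 28)))   # -> 9
print(age_in_years(date(2009, 9, 28), today=date(2009, 9, 28)))  # -> 0
print(age_in_years(None, today=date(2009, 9, 28)))               # -> -1
```

The SQL-side twin (`func.datediff(...) / 365` above) must express exactly this computation in the database's own functions, which is why the two implementations have to be kept in sync by hand.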
[sqlalchemy] Re: Custom ID Generator
Comments below On Sep 27, 6:47 pm, Michael Bayer wrote: > On Sep 27, 2009, at 4:11 AM, nkhalasi wrote: > in theory, you want to say session.flush() so that "n.next_id" is > persisted but the transaction is not committed. > > However the approach you have above wont work in any case assuming > the INSERT itself takes place within Session.flush() - the reentrant > call to flush should be raising an error immediately. maybe you're > on an older SQLA version that doesn't raise this assertion which is > why it chokes later on the double commits (which is again, an > assertion that something isnt being called in the proper sequence). > Yes, flush() or commit() raises an error immediately. I am using 0.5.4 version of SQLAlchemy. --~--~-~--~~~---~--~~ You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to sqlalchemy+unsubscr...@googlegroups.com For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en -~--~~~~--~~--~--~---
[sqlalchemy] Re: Custom ID Generator
Thanks Mike for the response. Comments below.

On Sep 27, 6:47 pm, Michael Bayer wrote:
> On Sep 27, 2009, at 4:11 AM, nkhalasi wrote:
>
> in theory, you want to say session.flush() so that "n.next_id" is
> persisted but the transaction is not committed.
>
> However the approach you have above won't work in any case, assuming
> the INSERT itself takes place within Session.flush() - the reentrant
> call to flush should be raising an error immediately. maybe you're
> on an older SQLA version that doesn't raise this assertion, which is
> why it chokes later on the double commits (which is, again, an
> assertion that something isn't being called in the proper sequence).
>
> you don't want to be using the ORM layer within a Column default
> generator. Just the inefficiency alone of retrieving an entire
> object and doing a whole flush for every row in an enclosing series
> of inserts makes it a bad idea. do this instead:
>
>     def get_next_id(context):
>         r = context.connection.execute(
>             select([pk_generator_table]).where(
>                 pk_generator_table.c.table_name == 'principals'))
>         row = r.fetchone()
>         if row:
>             id = row['next_id']
>             if id > row['end_id']:
>                 raise Exception(...)
>             context.connection.execute(
>                 pk_generator_table.update().where(
>                     pk_generator_table.c.table_name == 'principals').values(
>                         next_id=pk_generator_table.c.next_id + 1))
>             return id
>         else:
>             raise Exception(...)
>
> the above will get you out the door but still does a lot of work if
> you have a large volume of INSERTs. Ideally you'd use a MySQL stored
> procedure or trigger to generate the new values; SQLAlchemy would
> just fetch the new values after the fact.

I had tried this (which is similar to what you described above):

    def newid(ctx):
        log.debug('generator.newid() invoked with context = %s' % ctx)
        id = ctx.connection.execute(
            select([pkgt.c.next_id],
                   pkgt.c.table_name == 'principals',
                   for_update=True)).scalar()
        log.debug('Compiled? %s, Autocommit? %s'
                  % (ctx.compiled, ctx.should_autocommit))
        ctx.connection.execute(
            pkgt.update(values={pkgt.c.next_id: pkgt.c.next_id + 1}))
        log.debug('Finished writing back the next id')
        return id

However, with this I am getting unwanted commits. Essentially, when the newid(ctx) function executes the update, it also does a commit, which results in my data being committed when I would have expected it to be committed at some other point. I am trying to figure out how this update of the next ID can be done along with my regular application-logic commit. Also, what I am doing here is just a dummy hook, so my final version will be a lot different.

Regards,
Naresh
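The read-check-increment at the heart of this thread stays inside the caller's transaction as long as every statement runs on the caller's connection and nothing in the generator commits. A hedged, stdlib-only sqlite3 sketch of the same pk_generator logic (table and column names taken from the thread; the function itself is hypothetical):

```python
import sqlite3

def next_id(conn, table_name):
    """Fetch next_id for table_name, bump it, and raise if the range is
    exhausted -- all on the caller's connection, inside the caller's
    open transaction (nothing here commits)."""
    row = conn.execute(
        "SELECT next_id, end_id FROM pk_generator WHERE table_name = ?",
        (table_name,)).fetchone()
    if row is None:
        raise Exception('Next primary key could not be found')
    nxid, end_id = row
    if nxid > end_id:
        raise Exception('Primary key range has been exhausted')
    conn.execute(
        "UPDATE pk_generator SET next_id = next_id + 1 "
        "WHERE table_name = ?", (table_name,))
    return nxid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pk_generator (table_name TEXT PRIMARY KEY, "
             "next_id INTEGER, end_id INTEGER)")
conn.execute("INSERT INTO pk_generator VALUES ('principals', 100, 200)")
print(next_id(conn, 'principals'))  # -> 100
print(next_id(conn, 'principals'))  # -> 101
# The generator rows and the caller's own INSERTs now commit together.
conn.commit()
```

Whether the uncommitted UPDATE on pk_generator blocks concurrent writers until commit depends on the backend's locking, which is exactly the trade-off the thread goes on to discuss.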
[sqlalchemy] Re: How to get the instance back from a PropComparator?
On Sep 27, 2009, at 12:49 PM, Yuen Ho Wong wrote:
>
> So I have this following code:
>
>     class User(Base):
>
>         class AgeComparator(PropComparator):
>             def __lt__(self, other):
>                 pass
>
>             def __gt__(self, other):
>                 pass
>
>             def __eq__(self, other):
>                 pass
>
>             def __ne__(self, other):
>                 return not (self == other)
>
>             def __le__(self, other):
>                 return self < other or self == other
>
>             def __ge__(self, other):
>                 return self > other or self == other
>
>         @comparable_using(AgeComparator)
>         @property
>         def age(self):
>             today = date.today()
>             age = today - (self.date_of_birth or (today + 1))
>             return age.days / 365
>
> All I want to do is this:
>
>     user = User(date_of_birth=date.today())
>     session.add(user)
>     new_borns = session.query(User).filter(User.age == 0).all()
>
> The doc for comparable_property() suggests that this is possible, but
> I'm lost finding my way to call the descriptor bound on this instance.

The age(self) function is only called when you actually have an instance, such as:

    user = sess.query(User).get(10)
    print "age is: ", user.age

The point of @comparable_using is to associate the behavior of AgeComparator with the "age" attribute on the User class - no instance:

    User.age == 0

User.age == 0 is going to invoke the __eq__() method on the AgeComparator you created. There is no instance within the query() call here. __eq__() needs to return a clause expression of your choosing, which must be expressed in terms of SQL functions, since you're rendering a SQL statement. That's a little tricky here since there's a lot of date arithmetic, but using hypothetical functions it would look something like:

    def __eq__(self, other):
        return func.count_days(func.date_diff(
            func.now() - mapped_table.c.date_of_birth)) / 365

The comparable_using example should probably include a short PropComparator to give more context. A sample comparator is at:

http://www.sqlalchemy.org/docs/05/mappers.html#custom-comparators

> The problem I have is that the ComparableProperty only gives itself and
> the mapper to the comparator, but the user instance is nowhere to be
> found inside ComparableProperty, PropComparator, or the mapper. I'd
> appreciate some help here if this is at all possible. The documentation
> on this is a little too sparse, IMO.
[sqlalchemy] How to get the instance back from a PropComparator?
So I have this following code:

    class User(Base):

        class AgeComparator(PropComparator):
            def __lt__(self, other):
                pass

            def __gt__(self, other):
                pass

            def __eq__(self, other):
                pass

            def __ne__(self, other):
                return not (self == other)

            def __le__(self, other):
                return self < other or self == other

            def __ge__(self, other):
                return self > other or self == other

        @comparable_using(AgeComparator)
        @property
        def age(self):
            today = date.today()
            age = today - (self.date_of_birth or (today + 1))
            return age.days / 365

All I want to do is this:

    user = User(date_of_birth=date.today())
    session.add(user)
    new_borns = session.query(User).filter(User.age == 0).all()

The doc for comparable_property() suggests that this is possible, but I'm lost finding my way to call the descriptor bound on this instance. The problem I have is that the ComparableProperty only gives itself and the mapper to the comparator, but the user instance is nowhere to be found inside ComparableProperty, PropComparator, or the mapper. I'd appreciate some help here if this is at all possible. The documentation on this is a little too sparse, IMO.
[sqlalchemy] Re: Oracle and the Table Operator
On Sep 25, 2009, at 4:11 PM, Andrew wrote:
>
> So in a nutshell, it's a function that uses the table operator to
> generate an *actual* table, with five named columns. While this is
> not great Oracle behavior, the PL/SQL cannot be changed at this point
> in time. Now, after discussing this with Michael, he suggested using
> the compiler extension, so I came up with the following (basic)
> construct:

ok, this is going to be awesome. To map to a selectable needs a few things in what you're selecting from - namely a column collection and a primary key. So we can instead map your serialized object to a TableClause subclass, where in addition to the serialized payload you'd give it information on what the column names and primary key cols would be. the mapper also wants things it selects from to have a name, so we just give it a name which is used as an Oracle "alias name", i.e. "select foo.* from table(...) foo". Also, I know you wanted to be able to change the thing being deserialized ad hoc, so I've used the approach of an "alias()" of the "table" to provide that effect, which you can see below using the "using_source()" method and the ORM "aliased" function to create ad-hoc mapped classes:

    from sqlalchemy import *
    from sqlalchemy.orm import *
    from sqlalchemy.ext.compiler import compiles
    from sqlalchemy.sql import expression

    class TableOper(expression.TableClause):
        def __init__(self, name, source, pk, *columns):
            super(TableOper, self).__init__(name, *columns)
            for pk_name in pk:
                c = self.c[pk_name]
                c.primary_key = True  # for the mapper.  less than ideal
                self.primary_key.add(c)
            self.source = source

        def using_source(self, source):
            return TableOper(self.name, source,
                             [c.name for c in self.primary_key], *self.c)

    @compiles(TableOper)
    def compile_tableoper(element, compiler, **kw):
        return "table(%s) %s" % (element.source, element.name)

    device = TableOper('myname', 'some deserialization', ['id'],
        expression.column('id'),
        expression.column('x'),
        expression.column('y')
    )

    class MyClass(object):
        pass

    mapper(MyClass, device)

    sess = sessionmaker()()
    print sess.query(MyClass).filter(MyClass.x == 2)

    myclass_alias = aliased(MyClass,
        device.using_source('some other deserialization'))
    print sess.query(myclass_alias).filter(myclass_alias.x == 2)

The "myclass_alias" approach returns instances of MyClass as rows. I'm going to add this to the examples/ in 0.6, since the basic idea here can be used for a huge variety of situations (I just need to look up the official syntax for Oracle table()).
[sqlalchemy] Re: Custom ID Generator
On Sep 27, 2009, at 4:11 AM, nkhalasi wrote:
>
> My comments below.
>
> On Sep 25, 9:05 pm, Conor wrote:
>> On Sep 25, 2:11 am, nkhalasi wrote:
>
>>>     def get_next_id():
>>>         session = meta.Session()
>>>         query = session.query(PKGenerator).filter_by(
>>>             table_name='principals')
>>>         nxpkgen = query.first()
>>>         if nxpkgen:
>>>             nxid = nxpkgen.next_id
>>>             if nxid > nxpkgen.end_id:
>>>                 raise Exception('Primary Key range has been exhausted')
>>>             nxpkgen.next_id += 1
>>>             session.commit()
>>>             return nxid
>>>         else:
>>>             raise Exception('Next Primary Key could not be found')
>>
>> Leave off the () after get_next_id. You want to pass the function
>> itself as the default, not the result of calling it once.
>
> In the above function:
> 1) if I do a commit, it commits the main transaction and subsequent
> commit()s fail, complaining that the transaction is already closed.
> 2) if I do not commit, the next_id value is not updated, resulting in
> the next run using IDs that were already used in the last run.
> 3) I tried session.begin_nested as well, but that also did not commit
> my next_id and only did a savepoint release. I guess this should be
> attributed to MySQL's behavior.

in theory, you want to say session.flush() so that "n.next_id" is persisted but the transaction is not committed.

However the approach you have above won't work in any case, assuming the INSERT itself takes place within Session.flush() - the reentrant call to flush should be raising an error immediately. maybe you're on an older SQLA version that doesn't raise this assertion, which is why it chokes later on the double commits (which is, again, an assertion that something isn't being called in the proper sequence).

you don't want to be using the ORM layer within a Column default generator. Just the inefficiency alone of retrieving an entire object and doing a whole flush for every row in an enclosing series of inserts makes it a bad idea. do this instead:

    def get_next_id(context):
        r = context.connection.execute(
            select([pk_generator_table]).where(
                pk_generator_table.c.table_name == 'principals'))
        row = r.fetchone()
        if row:
            id = row['next_id']
            if id > row['end_id']:
                raise Exception(...)
            context.connection.execute(
                pk_generator_table.update().where(
                    pk_generator_table.c.table_name == 'principals').values(
                        next_id=pk_generator_table.c.next_id + 1))
            return id
        else:
            raise Exception(...)

the above will get you out the door but still does a lot of work if you have a large volume of INSERTs. Ideally you'd use a MySQL stored procedure or trigger to generate the new values; SQLAlchemy would just fetch the new values after the fact.

> I am wondering how I would commit my ID range/next_id independently of
> my regular application object commits. This is important for me
> because I ultimately want to implement a variation of an ID generator
> like Sybase's, so that autoincrement happens in memory for the duration
> of the key cache size and then the in-memory counters reset to the next
> key cache range.
>
> Here is my table:
>
>     pk_generator_table = Table('pk_generator', meta.metadata,
>         Column('table_name', types.CHAR(30), primary_key=True,
>                nullable=False),
>         Column('next_id', bigint, nullable=False),
>         Column('last_id', bigint, nullable=False),
>         Column('cache_size', types.Integer, nullable=False),
>         mysql_engine='InnoDB',
>         mysql_row_format='DYNAMIC',
>         mysql_charset='utf8'
>     )
>
> Regards,
> Naresh
[sqlalchemy] Re: Declarative base - Joined Table Inheritence
I might be getting a bit ambitious here, but is this possible? I'm using a different polymorphic_on for the second level of inheritance. I tried it, but it only seems to polymorphically return records of type _UtConfReconcilerActions:

    class _UtConfActions(Base):
        __tablename__ = 'tblActions'

        # 1-to-many relationship to the managed
        id = Column(Integer, primary_key=True)
        managed_id = Column(Integer, ForeignKey('tblManagedDetails.id'))
        component = Column(String)

        __mapper_args__ = {'polymorphic_on': component,
                           'with_polymorphic': '*'}

    class _UtConfReconcilerActions(_UtConfActions):
        __tablename__ = 'tblReconcilerActions'

        # joined-table inheritance
        id = Column(Integer, ForeignKey('tblActions.id'), primary_key=True)
        action = Column(String)

        __mapper_args__ = {'polymorphic_identity': 'RECONCILER',
                           'polymorphic_on': action,
                           'with_polymorphic': '*'}

    class _UtConfReconcilerActionSnapshot(_UtConfReconcilerActions):
        __tablename__ = 'tblReconcilerActionSnapshot'
        __mapper_args__ = {'polymorphic_identity': 'SNAPSHOT'}

        # joined-table inheritance
        id = Column(Integer, ForeignKey('tblReconcilerActions.id'),
                    primary_key=True)
        revision = Column(String)
        comment = Column(String)

On Sep 17, 10:48 am, Jarrod Chesney wrote:
> That worked, thanks, IT'S AWESOME :-)
>
> On Sep 17, 6:03 am, Conor wrote:
>> On Sep 15, 11:03 pm, Jarrod Chesney wrote:
>>
>>> Hi all,
>>> I've been reading the documentation for ages and I can't figure out
>>> why, when I print the results of a query from my inherited table, it
>>> just prints them as the base type.
>>>
>>> I was hoping someone here would be nice enough to help me solve this
>>> problem. I thought the last print statement would print an instance
>>> of the _UtConfReconcilerActionSnapshot class, but it doesn't.
>>> I've got one record in both tables, and 'id' = 1 in each table.
>>> What am I doing wrong?
>>
>> You are missing "polymorphic_on" in
>> _UtConfReconcilerActions.__mapper_args__. Without this, SQLAlchemy
>> cannot do polymorphic loads. Try this as your __mapper_args__:
>>
>>     {'polymorphic_on': 'object_type', 'with_polymorphic': '*'}
>>
>> Hope it helps,
>> -Conor
>>
>>> <<< Begin code
>>>
>>>     from sqlalchemy import Table, Column, Integer, String, MetaData, \
>>>         ForeignKey, CheckConstraint
>>>     from sqlalchemy.orm import relation
>>>     from sqlalchemy.ext.declarative import declarative_base
>>>     from sqlalchemy import create_engine
>>>     from sqlalchemy.orm import sessionmaker
>>>
>>>     __DATABASE_NAME__ = 'UtConfSom.sqlite'
>>>
>>>     Base = declarative_base()
>>>
>>>     # == Reconciler Actions ==
>>>
>>>     class _UtConfReconcilerActions(Base):
>>>         __tablename__ = 'tblReconcilerActions'
>>>         __mapper_args__ = {'with_polymorphic': '*'}
>>>
>>>         id = Column(Integer, primary_key=True)
>>>         action = Column(String, CheckConstraint(
>>>             "action in ('SNAPSHOT', 'COMPARE', 'UPGRADE')"))
>>>         object_type = Column(String, CheckConstraint(
>>>             "object_type in ('ALL', 'SCHEMA', 'TABLE', 'COLUMN', 'INDEX')"))
>>>
>>>         def __repr__(self):
>>>             return ("'%s'" % _UtConfReconcilerActions.__name__
>>>                     + "\n  id='%i'" % self.id
>>>                     + "\n  managed_id='%i'" % self.managed_id
>>>                     + "\n  action='%s'" % self.action
>>>                     + "\n  object_type='%s'" % self.object_type)
>>>
>>>     class _UtConfReconcilerActionSnapshot(_UtConfReconcilerActions):
>>>         __tablename__ = 'tblReconcilerActionSnapshot'
>>>         __mapper_args__ = {'polymorphic_identity': 'snapshot',
>>>                            'with_polymorphic': '*'}
>>>
>>>         # joined-table inheritance
>>>         id = Column(Integer, ForeignKey('tblReconcilerActions.id'),
>>>                     primary_key=True)
>>>         revision = Column(String)
>>>         comment = Column(String)
>>>
>>>         def __repr__(self):
>>>             return (_UtConfReconcilerActions.__repr__(self)
>>>                     + "\n  '%s'" % _UtConfReconcilerActionSnapshot.__name__
>>>                     + "\n  id='%s'" % self.revision
>>>                     + "\n  revision='%s'" % self.revision)
>>>
>>>     __db_exists = os.path.exists(__DATABASE_NAME__)
>>>
>>>     engine = create_engine('sqlite:///' + __DATABASE_NAME__)
>>>
>>>     # New database, create the tables
>>>     if not __db_exists:
>>>         Base.metadata.create_all(engine)
>>>         print >> sys.stderr, ("WARNING - Creating empty '%s' database" %
>>>             __DATABASE
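What polymorphic_identity buys you at load time can be sketched without the ORM: the mapper keeps a registry from the discriminator column's value to a class, and instantiates the matching subclass for each row. Everything below is a hypothetical plain-Python stand-in for that mechanism, not SQLAlchemy's implementation:

```python
# Registry mapping discriminator values to classes, like the mapper's
# polymorphic map built from polymorphic_identity declarations.
registry = {}

def polymorphic_identity(value):
    """Class decorator standing in for __mapper_args__['polymorphic_identity']."""
    def register(cls):
        registry[value] = cls
        return cls
    return register

class ReconcilerAction(object):
    """Base class, like _UtConfReconcilerActions."""
    def __init__(self, row):
        self.id = row['id']

@polymorphic_identity('SNAPSHOT')
class ReconcilerActionSnapshot(ReconcilerAction):
    """Subclass selected when the 'action' discriminator is 'SNAPSHOT'."""
    def __init__(self, row):
        super(ReconcilerActionSnapshot, self).__init__(row)
        self.revision = row['revision']

def load(row):
    """Pick the subclass from the discriminator; fall back to the base."""
    cls = registry.get(row['action'], ReconcilerAction)
    return cls(row)

obj = load({'id': 1, 'action': 'SNAPSHOT', 'revision': 'r42'})
print(type(obj).__name__)  # -> ReconcilerActionSnapshot
```

The thread's symptom (rows coming back as the base type) corresponds to the registry lookup never happening, which is why a missing polymorphic_on at the right inheritance level silently falls back to the base class.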
[sqlalchemy] Re: Custom ID Generator
My comments below.

On Sep 25, 9:05 pm, Conor wrote:
> On Sep 25, 2:11 am, nkhalasi wrote:
>
>>     def get_next_id():
>>         session = meta.Session()
>>         query = session.query(PKGenerator).filter_by(
>>             table_name='principals')
>>         nxpkgen = query.first()
>>         if nxpkgen:
>>             nxid = nxpkgen.next_id
>>             if nxid > nxpkgen.end_id:
>>                 raise Exception('Primary Key range has been exhausted')
>>             nxpkgen.next_id += 1
>>             session.commit()
>>             return nxid
>>         else:
>>             raise Exception('Next Primary Key could not be found')
>
> Leave off the () after get_next_id. You want to pass the function
> itself as the default, not the result of calling it once.

In the above function:
1) If I do a commit, it commits the main transaction, and subsequent commit()s fail, complaining that the transaction is already closed.
2) If I do not commit, the next_id value is not updated, resulting in the next run using IDs that were already used in the last run.
3) I tried session.begin_nested as well, but that also did not commit my next_id and only did a savepoint release. I guess this should be attributed to MySQL's behavior.

I am wondering how I would commit my ID range/next_id independently of my regular application object commits. This is important for me because I ultimately want to implement a variation of an ID generator like Sybase's, so that autoincrement happens in memory for the duration of the key cache size and then the in-memory counters reset to the next key cache range.

Here is my table:

    pk_generator_table = Table('pk_generator', meta.metadata,
        Column('table_name', types.CHAR(30), primary_key=True,
               nullable=False),
        Column('next_id', bigint, nullable=False),
        Column('last_id', bigint, nullable=False),
        Column('cache_size', types.Integer, nullable=False),
        mysql_engine='InnoDB',
        mysql_row_format='DYNAMIC',
        mysql_charset='utf8'
    )

Regards,
Naresh
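The Sybase-style "key cache" described above is essentially the hi/lo pattern: reserve a block of IDs with one database round trip, then hand them out from memory until the block is exhausted. A hedged, plain-Python sketch of that allocator (all names hypothetical; the database is faked with a dict):

```python
class KeyCacheGenerator(object):
    """Hi/lo-style allocator: reserve_block() stands in for the single
    UPDATE that bumps next_id by cache_size in the pk_generator table.
    IDs inside the reserved block are handed out with no database work."""

    def __init__(self, reserve_block, cache_size):
        self._reserve_block = reserve_block  # callable returning block start
        self._cache_size = cache_size
        self._next = None
        self._end = None   # exclusive upper bound of the current block

    def next_id(self):
        if self._next is None or self._next >= self._end:
            # One round trip reserves the whole block
            start = self._reserve_block(self._cache_size)
            self._next, self._end = start, start + self._cache_size
        nxid = self._next
        self._next += 1
        return nxid

# Fake "pk_generator row": a counter bumped once per reserved block.
state = {'next_id': 100}

def reserve_block(size):
    start = state['next_id']
    state['next_id'] += size
    return start

gen = KeyCacheGenerator(reserve_block, cache_size=3)
ids = [gen.next_id() for _ in range(5)]
print(ids)               # -> [100, 101, 102, 103, 104]
print(state['next_id'])  # -> 106 (two blocks of 3 reserved)
```

Because the reservation is a single short UPDATE, it can run (and commit) on its own connection, independent of the application's main transaction - which is exactly the independence the poster is asking for. The cost is that IDs left unused in a block when the process exits are simply lost, a standard trade-off of the pattern.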