Re: [sqlalchemy] running parallel migrations using sharded/partitioned/spaced queries?
If you can partition the rows numerically, this is trivially easy to implement using redis as the orchestrator. For example, if you have integer PKs, you might have a loop like:

    offset = 0
    while offset < tablesize:
        for row in query[offset:offset + batchsize]:
            migrate(row)
        commit()
        offset += batchsize

With redis orchestrating, you use a key in redis and INCRBY to reliably distribute batches to an arbitrary number of workers on an arbitrary number of hosts:

    while True:
        offset = redis.incrby('migration-offset', batchsize)
        rows = query[offset:offset + batchsize]
        if not rows:
            break
        for row in rows:
            migrate(row)
        commit()

INCRBY is atomic and returns the adjusted value, so every invocation of this script that calls into redis and INCRBYs by, say, 1000, has its own chunk of 1000 rows to work on. For a starting value of -1000 and four invocations, you'd see 0, 1000, 2000 and 3000. I'll typically start with one invocation, see that it's running well and that I chose a performant batch size, and then spin up additional workers on more cores until the migration hits the overall throughput required.

On Wed, Oct 14, 2015 at 9:32 AM, Jonathan Vanasco wrote:
> I have to run a script on 2MM objects to update the database. Not really a schema migration, more like changing the internal data representation in the fields.
>
> There's a bit of post-processing and bottlenecks involved, so doing everything one-at-a-time will take a few days.
>
> I'd like to split this out into 5-10 'task runners' that are each responsible for a section of the database (i.e., every 5th record). That should considerably drop the runtime.
>
> I thought I had seen a recipe for this somewhere, but checked and couldn't find anything. That leads me to question if this is a good idea or not. Anyone have thoughts/pointers?
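For reference, the INCRBY hand-off above can be sketched end-to-end. Everything here is illustrative: `fetch_batch` and `migrate` are hypothetical stand-ins for your query and per-row work, and the counter is injected as a callable so the same loop runs against a real redis client (`r.incrby('migration-offset', batchsize)`) or a plain in-memory fake:

```python
def run_worker(incrby, fetch_batch, migrate, commit, batchsize=1000):
    """Claim disjoint [offset, offset + batchsize) chunks via an atomic
    counter until no rows remain; returns the number of rows processed."""
    processed = 0
    while True:
        # INCRBY returns the post-increment value; seed the counter at
        # -batchsize so the first claim comes back as offset 0.
        offset = incrby(batchsize)
        rows = fetch_batch(offset, batchsize)
        if not rows:
            break
        for row in rows:
            migrate(row)
        commit()
        processed += len(rows)
    return processed

# In-memory stand-in for redis.incrby, for demonstration only:
counter = {'v': -1000}
def fake_incrby(n):
    counter['v'] += n
    return counter['v']

data = list(range(2500))   # pretend table of 2500 integer PKs
migrated = []
done = run_worker(fake_incrby,
                  lambda off, n: data[off:off + n],
                  migrated.append,
                  lambda: None)
```

Several invocations of `run_worker` sharing one counter each receive disjoint chunks, which is the whole point of the INCRBY scheme.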
--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group.
To unsubscribe from this group and stop receiving emails from it, send an email to sqlalchemy+unsubscr...@googlegroups.com.
To post to this group, send email to sqlalchemy@googlegroups.com.
Visit this group at http://groups.google.com/group/sqlalchemy.
For more options, visit https://groups.google.com/d/optout.
Re: [sqlalchemy] Sending queue messages/emails after model commit
Hi Alex,

I have a similar use case, and fixed it by buffering the signals until the session transaction completes. On rollback, the buffered signals are discarded; on successful commit, the signals are truly emitted.

Cheers,
Jason

On Mon, Sep 22, 2014 at 2:20 AM, Alex Michael alex...@tictail.com wrote:
> Hey,
>
> From my understanding it's recommended that the business logic does not commit the session and that the application itself handles the session lifecycle. Following that, I have all the session handling logic in my controllers, so the business logic just changes the objects as necessary and then the controllers call .commit() when needed. When a model is committed and, say, property X has changed, I need to send a queue message. My problem is that I'm not sure where the logic for emitting such signals should live in order to avoid duplicating logic all over the place.
>
> An example: I have an order which I take a payment for. If the payment is successful, I mark the order as paid. At this point I need to emit a signal. If the order is pending, I wait for a notification to come in from the payment gateway and then mark the order as paid. My business logic has a `mark_as_paid` function which changes the status of the order. Ideally I would like to emit the signal in the `mark_as_paid` method, but I don't know at that point in time whether the session commit will succeed or not. The alternative would be to emit the signal manually after the session was committed, but that would (1) lead to duplicated logic since `mark_as_paid` can be triggered from many code paths, and (2) not always work since the status of the order is determined dynamically, so the caller doesn't actually know what changed in order to emit the correct signal.
>
> Am I missing something here? I'd appreciate any help. Thanks!
>
> -- alex
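A minimal plain-Python sketch of that buffering approach (class and method names are mine; in a SQLAlchemy app you would call the on_commit/on_rollback hooks from session events such as after_commit and after_rollback):

```python
class SignalBuffer:
    """Buffer domain signals during a transaction: emit them on commit,
    drop them on rollback.  Illustrative sketch only."""

    def __init__(self, send):
        self.send = send      # the real emitter, e.g. a queue publisher
        self.pending = []

    def emit(self, signal):
        # Called from business logic such as mark_as_paid(); nothing
        # leaves the process yet.
        self.pending.append(signal)

    def on_commit(self):
        # Transaction succeeded: really emit everything buffered.
        for signal in self.pending:
            self.send(signal)
        self.pending = []

    def on_rollback(self):
        # Transaction failed: discard the buffered signals.
        self.pending = []

sent = []
buf = SignalBuffer(sent.append)
buf.emit('order-paid')
buf.on_rollback()        # failed transaction: signal discarded
buf.emit('order-paid')
buf.on_commit()          # successful commit: signal really goes out
```

This keeps `mark_as_paid` free to emit unconditionally; the buffer decides whether the outside world ever sees it.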
Re: [sqlalchemy] matches_any: an extension to the Query object, and a HSTORE attribute access property
On Tue, Jan 7, 2014 at 11:14 AM, Philip Scott safetyfirstp...@gmail.com wrote:
> Hi folks,
>
> SQLAlchemy is at the heart of what we do at work (UK based hedge fund), so first of all a big thank you; I am not quite sure where we'd be without you. We would like to give back to the community as much as we can (I tried to get some of the developers on the company xmas present list this year but was too late.. cross your fingers for next year). We have extended SQLAlchemy in a few places, and though it is quite intermingled with our domain specific stuff, I keep an eye out for little snippets that might be useful to others.
>
> So here's a trivial one; take it or leave it (and feel free to think of a better name). Knowing my luck it already exists, though I have looked hard through the docs!
>
>     class QueryEnhanced(Query):
>         ''' Add a few extra bells and whistles to the standard Query object '''
>         def matches_any(self):
>             ''' Returns true if your query would return 1 or more rows; false otherwise.
>
>             The following two statements ask the same question, but matches_any
>             is _much_ quicker on large tables:
>
>                 my_query.matches_any()
>                 my_query.count() != 0
>             '''
>             return self.session.scalar(select([self.exists()]))
>
> The other bit of technology we have that could be unpicked without _too_ much trouble is a sort of reverse CompositeProperty: many attributes of different types, including collections, out of one HSTORE column (with a sort of side-loaded instrumentation for mutation tracking that I think could have been done in a more idiomatic way).
> Paraphrasing a bit, but you can do things like:
>
>     class Animal(Base):
>         data = Column(MutableDict.as_mutable(HSTORE))
>         colour = HsProperty(data, String)
>         legs = HsProperty(data, Integer)
>         discovered = HsProperty(data, Date)
>         fun_facts = HsProperty(data, JSONEncoded(list))
>
> 'colour', 'legs', 'discovered', and 'fun_facts' end up as keys in the HSTORE, and the values are strings, integers, dates and lists on the Python side, but stored as strings in the HSTORE in such a way that they can be CAST-ed in a server query [where possible]:
>
>     session().query(Animal).filter(Animal.legs > 2)
>
> and get a query like:
>
>     SELECT ... FROM animal WHERE CAST(animal.data -> 'legs' AS INTEGER) > 2
>
> You can also put an arbitrary JSON-encodable object in there too. Collections get converted to Mutable counterparts for change-tracking. In many ways it is similar to ColumnProperty, except that the properties are writable (and when written only trigger the relevant bits of the hstore to be updated). Also, on object instances the values in HsProperties are fetched as part of the query; we lazily de-serialise them directly from the hstore dictionary.
>
> Before spending a couple of days removing our corporate clutter from that, getting permission to license it etc., and posting it either as a patch or extension, I thought I would see if there is any interest (or if someone has already done it better?). It's implemented as a custom metaclass right now, but I think I might be able to do it fully with events.

I would be very interested in this work. At my org we have a subset of the same idea that we're depending upon, but it's tied to an ancient SQLAlchemy version and we never took it all the way into the query space like that. That looks absolutely fabulous!
Re: [sqlalchemy] testing for an association proxy (possible bug and patch included)
On Fri, Aug 23, 2013 at 2:31 PM, Gombas, Gabor (IT) gabor.gom...@morganstanley.com wrote:
> On Fri, Aug 23, 2013 at 12:11:39PM -0700, Jonathan Vanasco wrote:
>> I think a simple fix could be something like this (line 240, sqlalchemy/ext/associationproxy.py):
>>
>>      if self.scalar:
>>     -    return self._scalar_get(getattr(obj, self.target_collection))
>>     +    proxied = getattr(obj, self.target_collection)
>>     +    if not proxied:
>>     +        return None
>>     +    return self._scalar_get(proxied)
>>      else:
>
> We're monkey-patching AssociationProxy.__get__ with the same change since SQLA 0.5.x, so it would be nice to get it applied upstream... Maybe in 0.9?

The patch seems like surprising Python behavior to me. Traversing across a None is almost certainly a bug in regular code, and quashing that error by default feels dangerous. I would want this to raise by default (and I have found bugs because it did). I think you could opt into this behavior by supplying an alternate, custom getter function that quashed None when creating the proxy.
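For illustration only, here are the traversal semantics the patch asks for, written as a standalone descriptor rather than a change to AssociationProxy (the class and attribute names are hypothetical, and this is not the AssociationProxy API):

```python
class NoneSafeScalarProxy:
    """Descriptor sketch of the opt-in 'quash None' behavior: traverse
    obj.<target>.<attr>, returning None instead of raising when the
    target relation is not set."""

    def __init__(self, target, attr):
        self.target, self.attr = target, attr

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        proxied = getattr(obj, self.target)
        if proxied is None:
            return None               # quash instead of AttributeError
        return getattr(proxied, self.attr)

# Hypothetical demonstration classes:
class Keyword:
    def __init__(self, name):
        self.name = name

class UserKeyword:
    keyword_name = NoneSafeScalarProxy('keyword', 'name')
    def __init__(self, keyword=None):
        self.keyword = keyword
```

The default-raising behavior argued for above is exactly what this sketch gives up; making it a separate, explicitly chosen descriptor (or custom getter) keeps the quashing opt-in.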
Re: [sqlalchemy] association_proxy as property?
On Tue, Nov 16, 2010 at 9:05 AM, A.M. age...@themactionfaction.com wrote:
> On Nov 16, 2010, at 11:43 AM, Michael Bayer wrote:
>> On Nov 16, 2010, at 11:14 AM, A.M. wrote:
>>> To generate json from our SQLAlchemy model objects, we are using iterate_properties to determine how to dictify the object. One of our objects uses association_proxy which we would like to represent in the JSON. Unfortunately, because it is not a property, the dictification misses this property. I feel like I am missing something simple here. How can I make an association_proxy appear to be a property which appears in iterate_properties of the mapper?
>>
>> 2. implicit conversion to JSON and such is a little sloppy. You'd be better off using a structured approach like Colander: http://docs.repoze.org/colander/
>
> It looks like I would have to either re-define all objects using the Colander syntax or implement a method which converts existing SQLAlchemy models to Colander schema objects. Even if the latter function already exists, I still have the problem of determining automatically which properties to encode, no?

You may find you'll need to do even further work to determine which properties to encode. I do the same (using Flatland for serialization), and part of that challenge was determining where the edges of the business objects were. If you have relations, maybe some of them are part of the object (a user's email addresses) and some of them aren't (a User-to-Users relation listing the user's friends). In the end I went with a combination of class annotation and heuristics based on iterating mapper properties. This allowed me to traverse the mappings to reliably find the edges and also include the occasional transient attribute or other oddball that needed to be in the serialized form.
Re: [sqlalchemy] Inheriting custom collection to create another custom collection. Issues with the appenders/parents
Hi Hector,

On Fri, Nov 12, 2010 at 7:46 AM, Hector Blanco white.li...@gmail.com wrote:
> Hello everyone. I was wondering if it's possible to inherit a custom collection to create another custom collection.
>
> A few days ago I was trying to use my own class as a custom collection (http://groups.google.com/group/sqlalchemy/msg/ba1c64c3d227f586). Thanks to Michael Bayer I was able to do it, but now I would like to go one step further, and inherit my custom collection to create another custom collection. To simplify a little what I asked in the other message, let's say I have a:
>
>     def ClassA(declarativeBase):
>         __tablename__ = "aes"
>         id = Column("id", Integer, primary_key=True)
>         _whatever = Column("type", String(64))
>         def __init__(self):
>             self._whatever = "whatever"
>
> Then I have my custom collection for instances of ClassA:
>
>     def ContainerOfA(dict):
>         __emulates__ = set
>         def __init__(self):
>             self._field = "I'm a great... awesom! container"
>         # I also defined the appender, remover and iterator
>         @collection.iterator
>         def __iter__(self):
>             return self.itervalues()
>         @collection.appender
>         def append(self, item):
>             self[item.getUniqueHash()] = item
>         @collection.remover
>         def remove(self, item):
>             if item.getUniqueHash() in self.keys():
>                 del self[item.getUniqueHash()]
>
> And then I was happily able to use it in any relationships:
>
>     def YetAnotherClass(declarativeBase):
>         id = Column("id", Integer, primary_key=True)
>         classesA = relationship("ClassA",
>             uselist=True,
>             secondary=intermediate_table,
>             collection_class=lambda: ContainerOfA(),
>             cascade="all, delete, delete-orphan",
>             single_parent=True)
>
> Now I needed to extend ClassA in a ClassB and ContainerOfA in ContainerOfB. I added the polymorphic stuff to ClassA and ClassB to create a joined table inheritance, as detailed in http://www.sqlalchemy.org/docs/orm/inheritance.html#joined-table-inheritance.
> (It seems to be working fine; that's why I am not completely detailing it here.)
>
>     def ClassB(ClassA):
>         __tablename__ = "bs"  # Sorry for that
>         __mapper_args__ = {'polymorphic_identity': 'ClassB'}
>         id = Column("id", Integer, ForeignKey('aes.id'), primary_key=True)
>         def __init__(self):
>             self._anotherWhatever = "another whatever"
>
>     def ContainerOfB(ContainerOfA):
>         def __init__(self):
>             super(ContainerOfB, self).__init__()
>         def anotherMethodOnlyForBInstances(self):
>             # do interesting stuff for B classes
>
> Then I tried to use it in a relationship:
>
>     def YetYetAnotherClass(declarativeBase):
>         id = Column("id", Integer, primary_key=True)
>         classesB = relationship("ClassB",
>             uselist=True,
>             secondary=another_intermediate_table,
>             collection_class=lambda: ContainerOfB(),
>             cascade="all, delete, delete-orphan",
>             single_parent=True)
>
> But when I tried to append a ClassB instance through the relationship detailed above, I got this exception:
>
>     Type ContainerOfB must elect an appender method to be a collection class

I haven't been able to replicate this behavior. When testing your code I did notice that you are using 'def' to declare your classes, which won't actually create the type. I make that same typo myself periodically and it can be quite tricky to track down the one def'd class that's causing seemingly unrelated errors.

Anyhow, I've attached the working test case I put together. If you can modify this to replicate your behavior, we can track down any bugs that might be present in the collection API's appender metadata bookkeeping. You definitely should not have to re-declare an @appender on a subclass; the collection mechanics should be sweeping over your inherited class and transparently picking up the methods. This is definitely working for the cases in the SQLA unit tests, but it's definitely possible you've found some corner case with that dict that's declared to be emulating a set.

Cheers,
Jason

> I thought... ok, ok... let's just explicitly add the 'appender' to the ContainerOfB class...
> The only thing I need to do is calling the appender of the super class anyway... no biggie, and so I did:
>
>     def ContainerOfB(ContainerOfA):
>         # [ . . . ]
>         @collection.appender
>         def append(self, classBInstance):
>             return super(ContainerOfB, self).append(classBInstance)
>
> But then... another exception when I tried to add an instance of ClassB():
>
>     InvalidRequestError: Instance ClassB at 0xba9726c is already associated with an instance of class 'mylibraries.classesTest.YetYetAnotherClass' via its YetYetAnotherClass.classesB attribute, and is only allowed a single parent.
>
> Well... I need
[sqlalchemy] Slides from the Advanced SQLAlchemy Customization tutorial at EuroPython
Fellow Alchemers,

I've posted the slides and code from the Advanced SQLAlchemy Customization tutorial I presented yesterday at EuroPython 2010 in Birmingham. Enjoy!

http://discorporate.us/jek/talks/#d2010-07-18

Talk description: http://www.europython.eu/talks/talk_abstracts/#talk67

Cheers,
Jason
Re: [sqlalchemy] Re: blocking on Base.metadata.drop_all() in test
Your scoped session still has an active connection, which is blocking the drop. Call session.remove() before the drop, or configure the session with expire_on_commit=False so it won't issue SELECTs to fetch object state after the final commit().

On Thu, Jul 8, 2010 at 9:27 AM, zende mtam...@gmail.com wrote:
> I reproduced the issue with the script below: http://gist.github.com/468199
>
> Sorry for the weak explanation before. This has little to do with being in tests, except that's the only code that drops and creates the db for any reason. Ctrl-C does nothing when it blocks.
>
> Chris, try running the script in the link and let me know if you are able to reproduce the issue.
Re: [sqlalchemy] session lifecycle and wsgi
On Wed, Apr 28, 2010 at 7:52 AM, Chris Withers ch...@simplistix.co.uk wrote:
> Diana Clarke wrote:
>> Finally, we're using pylons and are removing the contextual session in the finally clause of the base controller's __call__ method.
>>
>>     class BaseController(WSGIController):
>>         def __call__(self, environ, start_response):
>>             try:
>>                 ...
>>             finally:
>>                 session.remove()
>
> Yeah, I'm trying to find out if this .remove() is actually necessary.

A .remove() as the final operation in a request ensures that no session state leaks from one web request to another. The next request in that thread or scoping context will get an entirely fresh session to work with.

If finishing with a .remove() is a big deal in your environment, which it seems like it is, you could do a .remove() at the start of the request instead.
Re: [sqlalchemy] session lifecycle and wsgi
On Wed, Apr 28, 2010 at 8:55 AM, Chris Withers ch...@simplistix.co.uk wrote:
> jason kirtland wrote:
>> .remove() as the final operation in a request ensures that no session state leaks from one web request to another. The next request in that thread or scoping context will get an entirely fresh session to work with.
>
> Okay, would .close() be equivalent here?

Not really; .close() is a Session method. See below.

>> If finishing with a .remove() is a big deal in your environment, which it seems like it is, you could do a .remove() at the start of the request instead.
>
> What happens if you call .remove() on a virgin session?

.remove() is specific to the ScopedSession container; it's not a Session method. It will .close() the session for the current scope, if any (which is effectively a no-op if there is a session but it hasn't performed any work), then remove that session from the scope. The next access to the ScopedSession container will produce a fresh session.
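The start-of-request alternative can be sketched as WSGI middleware; the wiring here is illustrative, and `session` is anything exposing a .remove() method (e.g. a scoped_session):

```python
class FreshSessionMiddleware:
    """WSGI sketch: reset the scoped session at the *start* of each
    request rather than in a finally block."""

    def __init__(self, app, session):
        self.app, self.session = app, session

    def __call__(self, environ, start_response):
        # Discard any session state a previous request may have left behind.
        self.session.remove()
        return self.app(environ, start_response)

# Demonstration with stand-ins for the session and the wrapped app:
class FakeScopedSession:
    def __init__(self):
        self.removes = 0
    def remove(self):
        self.removes += 1

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok']

session = FakeScopedSession()
wrapped = FreshSessionMiddleware(app, session)
body = wrapped({}, lambda status, headers: None)
```

The trade-off is that the previous request's session state lingers until the next request arrives, rather than being released immediately at request end.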
Re: [sqlalchemy] Mapping dictionaries with string keys and record values
On Mon, Apr 26, 2010 at 8:24 AM, Michael Bayer mike...@zzzcomputing.com wrote:
> Torsten Landschoff wrote:
>> Thanks for your reply and the remarks! Do you plan to extend attribute_mapped_collection to update the key like in my example?
>
> here's some things to note on that:
>
> 1. I'm not sure why it wasn't that way already, and I'd want to hear from Jason Kirtland, its author, on if we are missing something or otherwise what's up. I have a vague notion that there was a reason for this, or maybe not.

It's not something that can be easily solved in the general case with the current API. The mapped collections use a 'keyfunc' to figure out the dictionary key for loaded instances, for example operator.attrgetter('name') for attribute_mapped_collection('name'). Mechanically reversing that logic in a setting operation sounds pretty hard to me, but perhaps if we allowed an 'assignfunc' function to be supplied, that would do the trick. Internally, the collection code would call it during a dict['key'] = instance assignment operation, maybe passing just the key value and the instance:

    def assignfunc(key, instance):
        instance.name = key

For highly constrained types like the attribute- and column-mapped collections, these functions would be easy to generate. A good test for the feature would be a mapped collection that maps a tuple of attributes, such as one created by attribute_mapped_collection(('x', 'y')). Assigning collection[1, 2] = instance should assign both instance.x and instance.y in that case.

> 2. I wonder if there's a way to make this happen more deeply than within setattr(). Like the collection internals would include an event to operate upon the target object that includes the other args from the collection decorator.

I have a hunch this is only meaningful for mapped collections; mutations like list_collection[2:5] would be difficult to translate, and I'm not sure what information one would want to capture there. Worth a look though.
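The keyfunc/assignfunc pairing could look roughly like this as a plain dict subclass; this is a hypothetical API sketch, not something SQLAlchemy's collections package provides:

```python
import operator

class AssigningDict(dict):
    """On d[key] = obj, write the key back onto the object via
    'assignfunc' so the key and the object's attribute stay in sync."""

    def __init__(self, keyfunc, assignfunc):
        super().__init__()
        self.keyfunc = keyfunc
        self.assignfunc = assignfunc

    def __setitem__(self, key, value):
        self.assignfunc(key, value)
        # The pair must round-trip: keyfunc recovers the assigned key.
        assert self.keyfunc(value) == key
        super().__setitem__(key, value)

# Demonstration with a hypothetical mapped class:
class Item:
    name = None

d = AssigningDict(operator.attrgetter('name'),
                  lambda key, obj: setattr(obj, 'name', key))
item = Item()
d['widget'] = item
```

The tuple-key test case described above falls out naturally: an assignfunc for ('x', 'y') would setattr both attributes from the key tuple.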
Re: [sqlalchemy] error handling for sessionmaker function
On Mon, Jan 11, 2010 at 4:07 PM, Manlio Perillo manlio.peri...@gmail.com wrote:
> Hi.
>
> I'm updating some of my code to SQLAlchemy 0.6, and I have noted a problem with the sessionmaker function. The problem is a compatibility one: old versions use the transactional parameter, new ones the autocommit parameter. Usually, to handle these problems I use the try/except method:
>
>     try:
>         return orm.sessionmaker(bind=bind, autocommit=autocommit)
>     except TypeError:
>         # COMPAT: SQLAlchemy 0.4.x, deprecated in 0.5.x
>         transactional = not autocommit
>         return orm.sessionmaker(bind=bind, transactional=transactional)
>
> However this does not work, since the error is raised only when the actual Session instance is created. As far as I can understand, the sessionmaker function accepts arbitrary keyword arguments since the user can specify a custom session class to use. Can error handling be improved?

How about:

    try:
        orm.create_session(autocommit=autocommit)
    except TypeError:
        # COMPAT: SQLAlchemy 0.4.x, deprecated in 0.5.x
        transactional = not autocommit
        return orm.sessionmaker(bind=bind, transactional=transactional)
    else:
        return orm.sessionmaker(bind=bind, autocommit=autocommit)

Creating and disposing a session via create_session() in this way isn't particularly expensive and won't initiate any database connections or activity.
[sqlalchemy] Re: MySQL has gone away (again)
Kamil Gorlo wrote:
> On Thu, Jun 4, 2009 at 4:20 PM, Michael Bayer mike...@zzzcomputing.com wrote:
>> the connection went from good to dead within a few seconds (assuming SQL was successfully emitted on the previous checkout). Your database was restarted or a network failure occurred.
>
> There is no other option? I'm pretty sure that DB was not restarted; network failure is of course possible but still... (this is the same LAN).

Another cause of "went away" messages is a query that exceeds the configured memory resources on the server. Taking a look at MySQL's logs may shed more light and give hints for which buffers need tuning, if that's the problem.

> But, assuming this is an external problem - is there any way to tell SQLAlchemy to try another connection for the same request (instead of returning HTTP 500 for user), or maybe another pooling strategy or even something else?

Yes, with a simple pool event listener you can ensure the liveliness of connections before the pool hands them out for use. Usage example is attached.

Cheers,
Jason

    from sqlalchemy import exc

    class LookLively(object):
        """Ensures that MySQL connections checked out of the pool are alive.

        Specific to the MySQLdb DB-API.  Note that this can not totally
        guarantee live connections- the remote side can drop the connection
        in the time between ping and the connection reaching user code.

        This is a simplistic implementation.  If there's a lot of pool churn
        (i.e. implicit connections checking in and out all the time), one
        possible and easy optimization would be to add a timer check:

          1) On check-in, record the current time (integer part) into the
             connection record's .properties
          2) On check-out, compare the current integer time to the (possibly
             empty) record in .properties.  If it is still the same second as
             when the connection was last checked in, skip the ping.  The
             connection is probably fine.

        Something much like this logic will go into the SQLAlchemy core
        eventually.

        -jek
        """

        def checkout(self, dbapi_con, con_record, con_proxy):
            try:
                try:
                    dbapi_con.ping(False)
                except TypeError:
                    dbapi_con.ping()
            except dbapi_con.OperationalError, ex:
                if ex.args[0] in (2006, 2013, 2014, 2045, 2055):
                    # caught by pool, which will retry with a new connection
                    raise exc.DisconnectionError()
                else:
                    raise

    # To see a connection die post-ping, take the sleep out of reap()
    # below and run this in a tight loop.  It should happen eventually on
    # a fast machine.
    #
    # $ while thisscript.py; do echo; done

    if __name__ == '__main__':
        import sys, time
        if len(sys.argv) > 1:
            from pkg_resources import require
            require('mysql-python==%s' % sys.argv[1])

        from sqlalchemy import *

        e = create_engine('mysql:///test?unix_socket=/var/tmp/mysql.sock',
                          max_overflow=0,
                          pool_size=2,  # constrain our test
                          listeners=[LookLively()])

        # reserve a connection.
        reaper = e.connect()

        def reap(id):
            reaper.execute(text('KILL :thread'), thread=id)
            time.sleep(0.15)  # give the thread a chance to die

        c2 = e.connect()
        c2_threadid = c2.scalar('SELECT CONNECTION_ID()')
        print "Connection c2 id=%s" % c2_threadid

        # return c2 to the pool.  (the db-api connection will remain open)
        c2.close()
        del c2

        reap(c2_threadid)

        c2 = e.connect()
        new_threadid = c2.scalar('SELECT CONNECTION_ID()')
        print "Connection c2 now has id=%s" % new_threadid

        try:
            # connection is still alive, kill it mid-stream
            reap(new_threadid)
            c2.execute('SELECT 1')
            assert False
        except Exception, ex:
            print "Expected: Did not reconnect mid-transaction, exception:", ex

        c2 = e.connect()
        final_threadid = c2.scalar('SELECT CONNECTION_ID()')
        print "Connection c2 now has id=%s" % final_threadid
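The timer optimization described in the attachment's docstring can be sketched in isolation (plain modern Python; the clock is injected so the "same second" check is deterministic in a test, and `ping` stands in for dbapi_con.ping):

```python
import time

class PingThrottle:
    """Skip the liveness ping when the connection was checked in within
    the current second, per the check-in timer idea above."""

    def __init__(self, ping, clock=time.time):
        self.ping = ping
        self.clock = clock
        self.checked_in_at = {}

    def on_checkin(self, conn_id):
        # Record the integer part of the current time at check-in.
        self.checked_in_at[conn_id] = int(self.clock())

    def on_checkout(self, conn_id):
        # Same second as the last check-in: the connection is probably
        # fine, so skip the ping entirely.
        if self.checked_in_at.get(conn_id) == int(self.clock()):
            return False
        self.ping(conn_id)   # otherwise verify liveness as usual
        return True

# Demonstration with a fake clock:
pinged = []
now = [100.0]
throttle = PingThrottle(pinged.append, clock=lambda: now[0])
throttle.on_checkin(7)
skipped = throttle.on_checkout(7)    # same second: ping skipped
now[0] = 105.0
verified = throttle.on_checkout(7)   # later: ping issued
```

As the docstring notes, this trades a small window of staleness (up to a second) for far fewer pings under heavy pool churn.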
[sqlalchemy] Re: moving an object
jean-philippe dutreve wrote:
> Hi all,
>
> I wonder if SA can handle this use case: An Account can contain Entries ordered by the 'position' attribute.
>
>     mapper(Account, table_accounts, properties=dict(
>         entries=relation(Entry,
>             lazy=True,
>             collection_class=ordering_list('position'),
>             order_by=[table_entries.c.position],
>             passive_deletes='all',
>             cascade='save-update',
>             backref=backref('account', lazy=False)),
>     ))
>
> I'd like to move an entry from accountA to accountB and let SA remove the link between the entry and accountA:
>
>     entry = accountA.entries[0]
>     insort_right(accountB.entries, entry)
>     assert not entry in accountA.entries  # false, entry is still in accountA
>
> Is it possible?

Try removing the entry from accountA:

    entry = accountA.entries.pop(0)
    ...

Also beware that bisect.insort has a bug that prevents it from working properly with list subclasses like ordering_list (or any SA list-based collection). I think it's fixed in Python 3.0; not sure if the fix was backported to 2.x.
[sqlalchemy] Re: Suggestions on using a dialect outside of the Alchemy installation tree
It'd look like this: http://code.google.com/p/ibm-db/source/browse/trunk/IBM_DB/ibm_db_sa/setup.py Your dialect will be available to SA after you 'python setup.py install' or 'python setup.py develop' in your -ase distribution. phrrn...@googlemail.com wrote: Thanks Mike. This sounds great, although I have to admit that I don't follow it completely as I have not authored anything via setuptools. If this is trivial for you, could you sketch out what this would look like? pjjH On Mar 10, 11:25 am, Michael Bayer mike...@zzzcomputing.com wrote: you can install the dialect using a setuptools entry point. SQLAlchemy looks for dialect modules using the sqlalchemy.databases entry point name, so in this case you might name it sqlalchemy.databases.sybase-ase. phrrn...@googlemail.com wrote: Hi, I have done a fair bit of work on a Sybase ASE dialect for Alchemy and it is now in a primitive but usable condition for simple applications. My employers are fine with contributing the code back to the project and I intend to coordinate with Mike Bayer about this shortly. In the meantime, we would like to deploy the driver locally and work out some of the bugs. Ideally, we would like to do this separately from our centralized SQLAlchemy installation, as the release cycles for production s/w are much longer than the anticipated cycles for the Sybase dialect. Is it possible to use a dialect located outside the main installation by something as simple as the connection URI? Have any of you had similar situations? Have you any suggestions on ways to address this issue? pjjH
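A minimal setup.py along the lines of the one linked above might look like the following sketch. The project and module names (`sybase-ase-dialect`, `sybase_ase`) are invented for illustration; `sqlalchemy.databases` is the entry point group named in the thread:

```python
# setup.py -- packaging sketch; distribution and module names are invented
from setuptools import setup

setup(
    name='sybase-ase-dialect',
    version='0.1',
    py_modules=['sybase_ase'],
    entry_points={
        # SQLAlchemy scans this entry point group for dialect modules
        'sqlalchemy.databases': [
            'sybase_ase = sybase_ase',
        ],
    },
)
```

After `python setup.py develop`, the dialect resolves from the connection URI without touching the central SQLAlchemy installation.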
[sqlalchemy] Re: Problems/Bug in ordering_list (UNIQUE KEY violation)
Michael Bayer wrote: On Feb 19, 2009, at 4:33 PM, oberger wrote: Thank you Michael, but I am not able to bring this to work. Even with a flush and a commit after every statement. I understand the problem with dependent UPDATEs/DELETEs. But how is the ordering_list supposed to work? When I delete one entry with: del short_trip.trip_stops[1] and then flush() and commit(), the ordering_list has to do some work in the corresponding database table. I'm not sure, perhaps Jason can chime in on this. For this constraint configuration you might try making the DB constraint initially deferred. Given the ordering of statement execution in the unit of work, no other ideas are coming to mind. The ordering_list itself is totally ignorant of the ORM. It doesn't issue any flushes or deletions, though one could make an implementation that did embed that level of control over the unit of work. -j
[sqlalchemy] Re: how to handle Date values of the form YYYY-MM-00
rdmur...@bitdance.com wrote: I have an existing MySQL database (that I do not control) with schema fields defined using the 'Date' type. The values that occur in these fields often have a 'day' of '00', and sometimes a month of '00', and sometimes the field's value is 0000-00-00. The zeros are used to indicate don't know (or, sometimes, don't care). Since '00' is invalid for the fields in a Python DateTime, it seems as though I can't actually use DateTime to manage these values. My application should be able to use them as strings, but how do I arrange to do that? The conversion to DateTime is presumably taking place at the DBAPI level. Check out the MySQLdb docs for the 'conv' type mapping option to connect(). I think you should be able to override the default datetime converter with your own that falls back to a string or whatever you'd like it to do.
[sqlalchemy] Re: how to handle Date values of the form YYYY-MM-00
rdmur...@bitdance.com wrote: Quoth jason kirtland j...@discorporate.us: rdmur...@bitdance.com wrote: I have an existing MySQL database (that I do not control) with schema fields defined using the 'Date' type. The values that occur in these fields often have a 'day' of '00', and sometimes a month of '00', and sometimes the field's value is 0000-00-00. The zeros are used to indicate don't know (or, sometimes, don't care). Since '00' is invalid for the fields in a Python DateTime, it seems as though I can't actually use DateTime to manage these values. My application should be able to use them as strings, but how do I arrange to do that? The conversion to DateTime is presumably taking place at the DBAPI level. Check out the MySQLdb docs for the 'conv' type mapping option to connect(). I think you should be able to override the default datetime converter with your own that falls back to a string or whatever you'd like it to do. That sounds promising, and I doubt I would have found that just by googling, so thanks! Now, how do I get SQLAlchemy to pass that dictionary into the MySQLdb 'connect'? :) You can pass it in via create_engine's connect_args: http://www.sqlalchemy.org/docs/05/dbengine.html#custom-dbapi-connect-arguments Cheers, Jason
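The converter-override idea can be sketched without MySQLdb itself. The fallback function below is illustrative; in practice it would be registered in a copy of MySQLdb's `conv` map (keyed by `FIELD_TYPE.DATE`) and handed to `create_engine(..., connect_args={'conv': conv_map})` as described above:

```python
from datetime import date

def date_or_string(value):
    """Illustrative DATE converter: return a datetime.date for real
    dates, falling back to the raw string for zero-month/zero-day
    values like '2008-00-00' or '0000-00-00'."""
    try:
        y, m, d = map(int, value.split('-'))
        return date(y, m, d)  # raises ValueError when month/day is 0
    except ValueError:
        return value
```

The don't-know sentinels survive as plain strings while valid dates still arrive as `date` objects.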
[sqlalchemy] Re: find only loaded objects in relation collections
GHZ wrote: Hi, I have a Subscriber and an Address table. A Subscriber can have many Addresses:

mapper(Subscriber, subscriber_table, properties={
    'addresses': relation(Address, collection_class=Addresses,
                          backref='customer')})

From a Subscriber object, I want to inspect all loaded objects in any collections, but do it quietly - without causing any more to load.

class MyBase(object):
    @reconstructor
    def __my_init__(self):
        self.rules = []

    def get_all_rules_on_all_loaded_related_objects(self):
        for collection in (p for p in object_mapper(self).iterate_properties
                           if type(p) is RelationProperty):
            # How to access this collection without causing it to load?
            # I want to look at the 'rules' property on all loaded objects

The collection will be present in the instance's __dict__ if it has been loaded. So something like:

if 'addresses' in self.__dict__:
    # loaded, can access self.addresses without triggering db access

-j
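The `__dict__` check works because an unloaded lazy attribute has no instance-level entry yet; loading populates it. With a plain class standing in for the mapped Subscriber:

```python
class Subscriber(object):
    """Stand-in for the mapped class; no DB involved in this sketch."""
    pass

s = Subscriber()

# before the relation has loaded, the key is absent from the instance dict
assert 'addresses' not in s.__dict__

# simulate the attribute having been populated by a load
s.addresses = ['address-1']
assert 'addresses' in s.__dict__  # now safe to inspect without a query
```

Accessing `s.addresses` directly on a real mapped instance would trigger the lazy load, which is exactly what the `in self.__dict__` guard avoids.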
[sqlalchemy] Re: INSERT ... ON DUPLICATE KEY UPDATE
camlost wrote: Thank you for the reply. However, this solution (though I'm ready to use it) would create a lot of SQL queries compared with a simple INSERT ... ON DUPLICATE KEY UPDATE. On the other hand, I admit that INSERT ... ON DUPLICATE KEY UPDATE might not be available in other DBs, and I would like the application to be independent of the database engine below. So... is there some way to achieve this while keeping the number of SQL queries low? :-) (The number of objects handled this way is about 20,000.) Sure, if your process will be the only one inserting and changing these rows. Working through your 20k Python objects in batches of 1000 or whatever size you like, collect the key values from the Python objects. Run a database select to see which of those keys are present in the database, and then divide your batch into two parts: data needing insert and data needing update. If you've got write contention for this data you'd need to work more granularly (likely row by row) instead, keeping in mind the database engine's transaction model and ideally taking advantage of any tools the db engine provides (like ON DUPLICATE or SQL's MERGE). Performance and engine agnosticism may be mutually exclusive here.
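The batch-splitting step above can be sketched in plain Python. `partition_batch`, the dict-shaped rows, and the `existing_keys` set are all illustrative; in practice the set would come from a SELECT over the batch's key values:

```python
def partition_batch(rows, existing_keys):
    """Split one batch into rows needing INSERT vs UPDATE, given the
    set of primary keys a SELECT found already present in the table."""
    to_insert, to_update = [], []
    for row in rows:
        target = to_update if row['id'] in existing_keys else to_insert
        target.append(row)
    return to_insert, to_update

batch = [{'id': 1}, {'id': 2}, {'id': 3}]
inserts, updates = partition_batch(batch, existing_keys={2})
```

Each batch then costs one SELECT plus one bulk INSERT and one bulk UPDATE, instead of a round trip per object.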
[sqlalchemy] Re: passing tuple argument into sqlalchemy.sql.text string
Faheem Mitha wrote: On Wed, 21 Jan 2009 08:26:13 -0800, jason kirtland j...@discorporate.us wrote: Faheem Mitha wrote: Hi, I've got a query as follows:

from sqlalchemy.sql import text
gq = text("""
    SELECT decode_genotype(cell.snpval_id, snp.allelea_id, snp.alleleb_id)
    FROM cell JOIN snp ON snp.fid = cell.snp_id
    WHERE cell.patient_chipid IN ('DUKE1_plateA_A10.CEL', 'DUKE1_plateA_A11.CEL')
""")

I want to pass in the tuple as an argument, and was wondering how to do it. So, I'm looking for something conceptually like

gq = text("""
    SELECT decode_genotype(cell.snpval_id, snp.allelea_id, snp.alleleb_id)
    FROM cell JOIN snp ON snp.fid = cell.snp_id
    WHERE cell.patient_chipid IN :plist
""")
gq = conn.execute(gq, plist=('DUKE1_plateA_A10.CEL', 'DUKE1_plateA_A11.CEL'))

Note, I want to pass in a tuple of arbitrary length, so changing this to pass two string arguments would not do. Perhaps I'm supposed to pass in some bindparams too, but I don't know what type I should be using. IN takes a list of scalars, each of which requires its own :bind parameter. On Postgresql you might find it more convenient to use ANY, which takes a single array argument:

WHERE cell.patient_chipid = ANY (:plist)

Thanks for the suggestion. Can such an array argument be passed in from Python? Give it a try and let us know how it goes. Cheers, Jason
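Generating one bind parameter per scalar, as described above, can be automated with a small helper. `in_params` is illustrative, not SQLAlchemy API; it builds the `:name_N` placeholder text and the matching parameter dict to pass to `execute()`:

```python
def in_params(name, values):
    """Build placeholder text plus a bind dict for an IN clause,
    one named parameter per scalar value."""
    keys = ['%s_%d' % (name, i) for i in range(len(values))]
    clause = ', '.join(':' + k for k in keys)
    return clause, dict(zip(keys, values))

clause, params = in_params('plist',
                           ['DUKE1_plateA_A10.CEL', 'DUKE1_plateA_A11.CEL'])
sql = "... WHERE cell.patient_chipid IN (%s)" % clause
```

The resulting `clause` is `:plist_0, :plist_1`, and `params` maps each placeholder to its value, so the query stays fully parameterized for tuples of any length.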
[sqlalchemy] Re: default=0.0 on Float Column produces `col` float default NULL
Simon wrote: Hi all, I'm using SA 0.5.0rc3 and MySQL 5.0.51a on Mac OS X 10.4.11. I have a table with a float column and would like to have a default value of 0:

Column('col', Float(), default=0.0)

However, executing metadata.create_all(engine) yields

CREATE TABLE `Table` (
    ...
    `col` float default NULL,
    ...
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Is that a bug, or am I erring somewhere? default= is purely a client-side default executed in Python. For a server-side (DDL) default, you want Column(..., server_default='0.0')
[sqlalchemy] Re: default=0.0 on Float Column produces `col` float default NULL
With 0.4 it's a positional argument to Column:

Column('col', Float(), PassiveDefault('0.0'))

Simon wrote: Thanks Jason! Is there any way of doing this in SA 0.4 as well? On 10 Nov., 16:42, jason kirtland [EMAIL PROTECTED] wrote: Simon wrote: Hi all, I'm using SA 0.5.0rc3 and MySQL 5.0.51a on Mac OS X 10.4.11. I have a table with a float column and would like to have a default value of 0:

Column('col', Float(), default=0.0)

However, executing metadata.create_all(engine) yields

CREATE TABLE `Table` (
    ...
    `col` float default NULL,
    ...
) ENGINE=InnoDB DEFAULT CHARSET=latin1

Is that a bug, or am I erring somewhere? default= is purely a client-side default executed in Python. For a server-side (DDL) default, you want Column(..., server_default='0.0')
[sqlalchemy] Re: foreign key problem when using reflection and schemas
That should be working now in r5203. The reflection code was missing an edge case where an explicit schema= is the same as the connection's schema. Switching those to schema=None should work as intended if you need a workaround on a released version. Cheers, Jason

Martijn Faassen wrote: Hi there, I have a problem with foreign keys that seems to occur when I combine reflection and explicit schemas, in the context of MySQL. I've confirmed this problem with both rc2 and the trunk. It's best demonstrated with some failing code. Imagine the following MySQL database 'somedb':

CREATE TABLE somedb.a (
    id int PRIMARY KEY auto_increment NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE somedb.b (
    id int PRIMARY KEY auto_increment NOT NULL,
    a_id int NOT NULL,
    FOREIGN KEY (a_id) REFERENCES somedb.a(id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

And the following code:

from sqlalchemy import *
from sqlalchemy.orm import mapper, relation, sessionmaker

engine = create_engine('mysql:///somedb')
meta = MetaData()
meta.bind = engine

a_table = Table('a', meta, schema='somedb', autoload=True)
b_table = Table('b', meta, schema='somedb', autoload=True)

class A(object):
    pass

class B(object):
    pass

mapper(A, a_table, properties={'bs': relation(B)})
mapper(B, b_table)

Session = sessionmaker(bind=engine)
session = Session()
print session.query(A).all()

When executing this code, the last line fails with the following error:

Traceback (most recent call last):
  File "bin/devpython", line 138, in ?
    execfile(sys.argv[0])
  File "experiment.py", line 33, in ?
    print session.query(A).all()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/session.py", line 914, in query
    return self._query_cls(entities, self, **kwargs)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 95, in __init__
    self.__setup_aliasizers(self._entities)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/query.py", line 109, in __setup_aliasizers
    mapper, selectable, is_aliased_class = _entity_info(entity)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 454, in _entity_info
    mapper = class_mapper(entity, compile)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/util.py", line 531, in class_mapper
    mapper = mapper.compile()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 371, in compile
    mapper.__initialize_properties()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/mapper.py", line 393, in __initialize_properties
    prop.init(key, self)
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/interfaces.py", line 384, in init
    self.do_init()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 531, in do_init
    self._determine_joins()
  File "/home/faassen/.buildout/eggs/SQLAlchemy-0.5.0rc2-py2.4.egg/sqlalchemy/orm/properties.py", line 604, in _determine_joins
    raise sa_exc.ArgumentError("Could not determine join condition between
sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation A.bs. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well.

This code *only* fails if I designate an explicit 'schema' in the table statements. If I leave these out, things work as expected.
Since I'm interested in working with reflected tables that reside in multiple schemas, this is a problem. Digging around indicates that _search_for_join, defined in _determine_joins, does not actually find the join clause. Going deeper traces the failure down to the Join class in sqlalchemy.sql.expression, which fails in self._match_primaries in its __init__ method. This in turn brings us to sqlalchemy.sql.util.join_condition, which has fk.get_referent() return None if schemas are explicitly specified, and work fine if not. fk.get_referent() uses corresponding_column, and this in turn tries to use contains_column(), which returns False in the schema case, but True if 'schema' is not explicitly specified. Why, I don't know. The repr of the column passed into contains_column looks the same as the repr of the column in the table, but apparently it's not exactly the same instance. Something somewhere is making the column different. Is this a bug? If so, how would we go about solving it? Regards, Martijn
[sqlalchemy] Re: in_( bindparam(list) ) ?
Michael Bayer wrote: On Oct 13, 2008, at 10:58 AM, [EMAIL PROTECTED] wrote: On Monday 13 October 2008 17:21:31 Michael Bayer wrote: On Oct 13, 2008, at 6:41 AM, [EMAIL PROTECTED] wrote: why can't I give in_() a bindparam?

q.filter(x.in_(somelistorset))          # works
q.filter(x.in_(bindparam('somename')))  # fails
...
File sqlalchemy/sql/expression.py, line 1368, in _in_impl
    for o in seq_or_selectable:
TypeError: '_BindParamClause' object is not iterable

is this possible or not? or is sending a list/tuple/iterable as a bindparam value not supported? is this a resend? I answered a week ago (hm, GG didn't deliver?) it didn't... should be x.in_([bindparam('somename')]) but that is a list containing one param. i want the whole list to be a parameter. yeah, does PG even support that? i'm assuming PG, as in ANY(array[])
[sqlalchemy] Re: Aw: [sqlalchemy] Re: 0.4: can not append objects to instrumentedlists
[EMAIL PROTECTED] wrote: I spent some time migrating to sqlalchemy 0.4 and it's too late to go back to 0.3. What can I do to add objects to properties (InstrumentedLists) in sqlalchemy 0.4 (with different mappers)? I suspect that case will work if you add the user to the session under the desired entity_name before appending to the collection.
[sqlalchemy] Re: reflection unavailable for mysql temporary tables?
Andy Davidoff wrote: On Oct 11, 1:29 pm, Michael Bayer [EMAIL PROTECTED] wrote: On Oct 11, 2008, at 12:49 PM, Andy Davidoff wrote: This fixes the first part of this problem, but unfortunately the `show create table` is performed in the connection, not the session in which the temporary table was created. MySQL doesn't expose temporary tables between sessions, so the `show create table` raises a MySQL exception due to a non-existent table. you can reflect any table on a specific connection using autoload_with=someconnection. if by Session you mean ORM session, get the current connection using session.connection(). Thanks, but MySQL's temporary tables are invisible to connection objects; the reflection would need to occur via queries issued in the actual Session (ORM session) in which the tables were created. I doubt this'll be easy to elegantly hack into SQLA, though.

No hacking needed, it works just as Mike described:

from sqlalchemy import *
from sqlalchemy.orm import create_session

session = create_session()
session.bind = create_engine('mysql:///test')
session.begin()
session.execute('CREATE TEMPORARY TABLE foo (x INT)')
session.execute('INSERT INTO foo VALUES (1)')

m = MetaData()
tt = Table('foo', m, autoload=True, autoload_with=session.connection())
print session.execute(tt.select()).fetchall()
session.commit()
[sqlalchemy] Re: SA 0.5 rc1 - Mysql Unicode(1) decode error
Martijn Moeling wrote: Hi, I needed a Unicode(1) column in one of my tables. It was translated into a char(1) column in MySQL. When querying the table, I get: AttributeError: 'Set' object has no attribute 'decode' which disappears if I make the column a Unicode(2), so there might be a small bug in the MySQL code translating Unicode(1) to char(1)...? Try upgrading your MySQL-python library. I've seen it do that on older versions. -j
[sqlalchemy] Re: OrderingList and list.sort()
Adam Dziendziel wrote: Hi, It seems that sorting of an ordering list doesn't work. Attribute object.items is an OrderingList:

object.items.sort(cmp=my_cmp)

The list is sorted, but the ordering column is not updated. I need to call explicitly:

object.items._reorder()

Maybe override sort() in OrderingList to invoke self._reorder() after sorting? Sure, makes sense. reverse() too. Interested in making a patch for the implementation in sqlalchemy/ext/ and adding a new test? -j
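The suggested override can be sketched with a plain list subclass. `ReorderingList` and its hard-coded `position` attribute are simplified stand-ins for the real ordering_list, which tracks a configurable ordering attribute:

```python
class ReorderingList(list):
    """List that renumbers each item's 'position' attribute after
    sort() and reverse(), like ordering_list's _reorder()."""

    def _reorder(self):
        for index, item in enumerate(self):
            item.position = index

    def sort(self, **kw):
        list.sort(self, **kw)
        self._reorder()

    def reverse(self):
        list.reverse(self)
        self._reorder()

class Item(object):
    """Illustrative mapped-object stand-in."""
    def __init__(self, name):
        self.name = name
        self.position = None

items = ReorderingList(Item(n) for n in ['c', 'a', 'b'])
items.sort(key=lambda i: i.name)  # positions become 0, 1, 2 in sorted order
```

After the sort, each item's `position` matches its list index, so a flush would write the new ordering to the database.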
[sqlalchemy] Re: mysql utf8 encoding problem
Michael Bayer wrote: On Oct 8, 2008, at 8:54 PM, jason kirtland wrote: Most likely you just need to configure the db-api's client encoding by adding ?charset=utf8 onto your connection URL. Enough folks have hit this recently that I'm (again) considering passing through the engine encoding= parameter to the MySQLdb connection setup. I've resisted the urge for a while because we don't, to my knowledge, re-configure any db-apis in any of the backends. But this keeps coming up despite being documented in the mysql section of the docs, and last time I traced through it, it seemed like MySQLdb was ignoring the server's configured connection_encoding, so a little assist from the SA side would probably be useful. I'll look at sneaking that into the upcoming rc2 unless the implementation is untenable for some reason or there's an outcry. since I'm a total dumdum, why have I never had this issue in my own dealings with MySQL and Unicode? I use the Unicode type, I don't use any charset= on my URL, and things work fine, including all of our unit tests. Is it actually storing the data incorrectly and we just see the same info at the SQLA round trip level? i don't know about your env, but the unit tests under mysql use testing.engines.utf8_engine to configure a connection that can send unicode across the wire without encoding failures.
[sqlalchemy] Re: mysql utf8 encoding problem
Michael Bayer wrote: On Oct 9, 2008, at 3:39 AM, jason kirtland wrote: i don't know about your env, but the unit tests under mysql use testing.engines.utf8_engine to configure a connection that can send unicode across the wire without encoding failures. I can run python test/sql/testtypes.py --db mysql --verbose UnicodeTest.testbasic with a raise or pdb.set_trace() inside of utf8_engine, and engines.utf8_engine is never called. Might be something about the data being tested in that test method. It's needed in the tests where it's used.
[sqlalchemy] Re: mysql utf8 encoding problem
joelanman wrote: Hi, Firstly - I'm hugely impressed with SQLAlchemy - it's really helped me a lot with my new project. I'm having problems storing international characters in MySQL using SQLAlchemy. For example: école—school looks like this in MySQL: Ã©coleâ€”school. I'm using the following engine call:

engine = create_engine(config.db, encoding='utf-8')

and using Unicode as the column type:

Column('content', UnicodeText),

and utf8 for the table:

mysql_charset='utf8'

I'm pretty sure all my MySQL options are set to utf8. This looks really similar to a 'double encoding' issue I found while searching the group, but it seems that was fixed in python-mysql 1.2.2, which is what I'm using. Any help would be much appreciated. Most likely you just need to configure the db-api's client encoding by adding ?charset=utf8 onto your connection URL. Enough folks have hit this recently that I'm (again) considering passing through the engine encoding= parameter to the MySQLdb connection setup. I've resisted the urge for a while because we don't, to my knowledge, re-configure any db-apis in any of the backends. But this keeps coming up despite being documented in the mysql section of the docs, and last time I traced through it, it seemed like MySQLdb was ignoring the server's configured connection_encoding, so a little assist from the SA side would probably be useful. I'll look at sneaking that into the upcoming rc2 unless the implementation is untenable for some reason or there's an outcry.
[sqlalchemy] Re: adding children to ORM object using property causes problems, maybe a bug?
Randy Syring wrote: After some work with Gedd on #sqlalchemy, it seems that adding children to a parent object using a custom property() doesn't work as we expected it would. A test case is here: http://paste.pocoo.org/show/86848/ The error is triggered by line #53. Are we doing something wrong or is this a bug in SA? The only error I see in that test is: external_link.url may not be NULL u'INSERT INTO external_link (url) VALUES (?)' [None] which is expected from the test setup. (No .url is ever assigned.)
[sqlalchemy] Re: reflection unavailable for mysql temporary tables?
Andy Davidoff wrote: Reflection of temporary tables under MySQL works around revision 4000 and probably as recently as May but doesn't seem to work under revision 5000 or later; I get a NoSuchTableError. Is this change intentional? If the only change I make to my code is to create and reflect a normal table versus a temporary table, then SQLA works as expected. Unfortunately, temporary tables offer me an easy concurrency crutch, so I deeply regret this missing functionality. I could dig deeper into when and where this stopped working, but perhaps I'm the one missing something. What is it? :-) Give it a try with the trunk, r5129. -j
[sqlalchemy] Re: Unicode Results from SQL functions
Shawn Church wrote: On Wed, Sep 24, 2008 at 10:45 PM, jason kirtland [EMAIL PROTECTED] wrote: Adding ?charset=utf8&use_unicode=1 to your MySQL connection URL is a much easier way to get Unicode back from all DB access. Ok, that works. I thought that create_engine(uri, encoding='latin1', convert_unicode=True) would do this. I am guessing from this that the create_engine arguments are NOT being passed along to the dbapi connector? No. I believe both of those are specifying the treatment of string data going _to_ the DB-API only, not bidirectional behavior.
[sqlalchemy] Re: Fwd: Support for ordered lists of child items
Emmett Lazich wrote: Thank you Jason. orderinglist looks like what I am after! Is your orderinglist plugin fully functional in 0.4.7p1? Yep. Before I attempt it, pls advise if there is any technical reason preventing the integration of orderinglist into the basic_tree.py (adjacency list) example? See http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/adjacencytree/basic_tree.py That example uses a dict-based collection, so it's not a drop-in. But I don't see any obstacle to using the orderinglist on adjacency lists in general. jason kirtland wrote: Yep, orderinglist handles that case. Michael Bayer wrote: forwarded from pvt email orderinglist? Begin forwarded message: From: Emmett [EMAIL PROTECTED] Date: September 22, 2008 9:51:31 AM EDT To: Michael Bayer [EMAIL PROTECTED] Subject: Re: Support for ordered lists of child items Hello Michael, 18 months later, would your answer to Aaron still be the same? I have a problem fitting what Aaron described, i.e. save+restore of child order after re-ordering on the Python side. Re-ordering child list elements would obviously be ideal, but I could cope with updating an extra integer node attribute instead. I'm completely new to SA and at this stage skimming documentation and looking at the tree examples. Found this thread, so wondering if some newer SA magic can solve this, or if a custom collection class or something else is the best solution. I also looked at the ElementTree examples, but they don't appear to guarantee child order either - correct? Thanks in advance. I looked at your activity in this group. Amazing! On Apr 10 2007, 4:02 am, Michael Bayer [EMAIL PROTECTED] wrote: we dont have the capability to automatically update ordering columns when the elements of a list are moved around. if you move the elements around, you need to execute some step that will update the index columns (or create a custom collection class that does this for you).
On Apr 9, 2007, at 12:42 PM, Aaron Digulla wrote:
> Hello, I'm looking for a feature but couldn't find it in the docs. I have a tree-like structure where the user can specify the order of the children of a node. In DB lingo, I have a parentId and an index column. When I load children, they should be ordered by the index. This seems to be supported. Can SA also update the index column when I move children in the list around? Like:
>
>     # ... parent has three children A, B, C
>     item = parent.children[0]
>     del parent.children[0]
>     parent.children.insert(1, item)
>     # now, parent has three children B, A, C
>
> Regards,

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups sqlalchemy group. To post to this group, send email to sqlalchemy@googlegroups.com To unsubscribe from this group, send email to [EMAIL PROTECTED] For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---
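The behavior Aaron asks for (renumbering an index column as the list is mutated) is exactly what orderinglist automates. A minimal plain-Python sketch of the idea, using hypothetical names rather than SQLAlchemy's actual implementation:

```python
class OrderingList(list):
    """Toy list that keeps each item's 'position' in sync with list order.

    Illustrative only; SQLAlchemy's orderinglist is far more thorough.
    """
    def _renumber(self):
        for index, item in enumerate(self):
            item.position = index

    def append(self, item):
        list.append(self, item)
        item.position = len(self) - 1

    def insert(self, index, item):
        list.insert(self, index, item)
        self._renumber()

    def __delitem__(self, index):
        list.__delitem__(self, index)
        self._renumber()


class Node(object):
    def __init__(self, name):
        self.name = name
        self.position = None

children = OrderingList()
for name in 'ABC':
    children.append(Node(name))

# Aaron's reordering example: move A after B.
item = children[0]
del children[0]
children.insert(1, item)
# order is now B, A, C and each node's .position matches its index
```

With a real mapping, the position attribute would be the mapped index column, so a flush after the reorder persists the new ordering.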
[sqlalchemy] Re: Unicode Results from SQL functions
Shawn Church wrote:
> On Wed, Sep 24, 2008 at 7:37 PM, Michael Bayer [EMAIL PROTECTED] wrote:
>> we can of course add more functions to the list of known functions such as ifnull() (it would be best if ifnull() is a SQL standard function, I'm not sure if it is).
>
> Not sure this will work for IFNULL since its type depends upon the runtime arguments. I missed the func type_ argument when I read the documentation. That is a good solution for the general case of specifying the type when it cannot be determined from the function or the function arguments. In fact I'm going to use it any time the type is not obvious. For what it is worth, the following patch modifies ResultProxy to convert strings to unicode if convert_unicode == True. It 'fixes' my example and test/testall.py still passes.

Adding ?charset=utf8&use_unicode=1 to your MySQL connection URL is a much easier way to get Unicode back from all DB access.
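The convert_unicode idea in Shawn's patch amounts to post-processing driver rows, decoding byte strings into text as they come back. A rough standalone sketch of that row conversion (a hypothetical helper, not the actual ResultProxy patch):

```python
def decode_row(row, encoding='utf-8', convert_unicode=True):
    """Return a row tuple with byte strings decoded to text.

    Loosely mimics what a convert_unicode result-processing step would
    do; non-string values pass through untouched.
    """
    if not convert_unicode:
        return tuple(row)
    return tuple(
        value.decode(encoding) if isinstance(value, bytes) else value
        for value in row)

row = (1, b'caf\xc3\xa9', None)
decoded = decode_row(row)
# decoded[1] is the text string 'café'
```

The per-column type approach (func's type_ argument) is still preferable where it applies, since it only converts the columns that need it.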
[sqlalchemy] Re: Fwd: Support for ordered lists of child items
Yep, orderinglist handles that case.

Michael Bayer wrote:
> forwarded from pvt email... orderinglist?
> [forwarded message and earlier thread quoted in full in the reply above; snipped]
[sqlalchemy] Re: ordering_list performance
I'm sure there is potential for improvement on the current orderinglist code; please feel free to send a patch with optimizations you've found to the SA trac. The orderinglist hasn't changed much since 0.3, but with 0.5 there may be entirely new implementations possible. For example, I could imagine one that defers calculation and manipulation of the positioning information until a before_flush hook. That may perform better, with the trade-off that the position attribute can't be trusted to be in sync with the list order.

jean-philippe dutreve wrote:
> Below is the profiling of code that added 1200 items into an ordering_list relation. I had to bypass the ordering_list stuff for bulk additions in order to have better performance (down to 2 seconds). Hope this post helps to improve this part (using 0.5.0rc1, python 2.5, linux i686, 1.5Go RAM). SA is rocking!
>
> jean-philippe
>
> Time elapsed: 48.4475638866 s
> 8875046 function calls (8869157 primitive calls) in 48.443 CPU seconds
>
> Ordered by: internal time, call count
> List reduced from 390 to 10 due to restriction <10>
>
> ncalls            tottime  percall  cumtime  percall  filename:lineno(function)
> 1292937/1292250   7.879    0.000    12.134   0.000    attributes.py:132(__get__)
> 1241013           7.662    0.000    39.836   0.000    orderinglist.py:221(_order_entity)
> 1241013           5.870    0.000    16.916   0.000    orderinglist.py:202(_get_order_value)
> 440809            4.522    0.000    9.527    0.000    attributes.py:394(set)
> 1236              4.198    0.003    44.025   0.036    orderinglist.py:208(reorder)
> 1299736/1299048   3.752    0.000    4.373    0.000    attributes.py:310(get)
> 448225            3.337    0.000    5.157    0.000    identity.py:208(modified_event)
> 437061            2.704    0.000    14.331   0.000    orderinglist.py:205(_set_order_value)
> 440809            2.225    0.000    11.752   0.000    attributes.py:126(__set__)
> 448225            1.775    0.000    1.812    0.000    attributes.py:958(modified_event)
>
> Function was called by...
> attributes.py:132(__get__) <- domain.py:200(addEntry) (1236) 46.741, domain.py:248(__init__) (1236) 47.832, domain.py:272(get) (49452) 0.609, orderinglist.py:202(_get_order_value) (1241013) 16.916
> orderinglist.py:221(_order_entity) <- orderinglist.py:208(reorder) (1240326) 44.025, orderinglist.py:232(append) (687) 0.013
> orderinglist.py:202(_get_order_value) <- orderinglist.py:221(_order_entity) (1241013) 39.836
> attributes.py:394(set) <- attributes.py:126(__set__) (440809) 11.752
> orderinglist.py:208(reorder) <- orderinglist.py:266(__setslice__) (1236) 44.061
> attributes.py:310(get) <- attributes.py:132(__get__) (1292937) 12.134, attributes.py:347(get_committed_value) (1) 0.000, attributes.py:500(set) (3708) 0.367, attributes.py:837(value_as_iterable) (3090) 0.108
> identity.py:208(modified_event) <- attributes.py:394(set) (440809) 9.527, attributes.py:525(fire_replace_event) (3708) 0.236, attributes.py:579(fire_append_event) (3708) 1.960
> orderinglist.py:205(_set_order_value) <- orderinglist.py:221(_order_entity) (437061) 39.836
> attributes.py:126(__set__) <- domain.py:237(_set_attributes) (1276) 0.079, domain.py:255(update) (2472) 0.089, orderinglist.py:205(_set_order_value) (437061) 14.331
> attributes.py:958(modified_event) <- identity.py:208(modified_event) (448225) 5.157
[sqlalchemy] Re: ordering_list performance
A warning: that depends on a bug in the C version of bisect. When given a list subclass, it mistakenly ignores the subclass method implementations. The below will break if and when that's fixed to match the pure Python implementation in the standard lib. Calling list.extend(account_entries, new_entries) is probably a safe alternative.

* http://bugs.python.org/issue3935

jean-philippe dutreve wrote:
> What I've done is something like this:
>
>     account_entries = self.entries[:]
>     for entry in new_entries:
>         insort_right(account_entries, entry)
>     for i, entry in enumerate(account_entries):
>         entry.position = i
>     self.entries = account_entries
>
> Don't know if it's the right way to do it but it's much faster.
>
> On 22 sep, 18:41, jason kirtland [EMAIL PROTECTED] wrote:
>> I'm sure there is potential for improvement on the current orderinglist code
>> [remainder of the message, including the profiling output, quoted in full above; snipped]
[sqlalchemy] Re: ordering_list performance
Ah, looking more closely I see you're replacing self.entries with a list, not insorting into a SA list collection; that's totally ok. It might squeeze a little more speed out to do:

    updated_entries = list(self.entries) + new_entries
    base = len(self.entries)
    for idx, entry in enumerate(new_entries):
        entry.position = base + idx
    self.entries = updated_entries

orderinglist's extend method could be made to do something much like the above quite efficiently.

jason kirtland wrote:
> A warning: that depends on a bug in the C version of bisect.
> [remainder of the thread, including the profiling output, quoted in full above; snipped]
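The bulk-append approach sketched above can be wrapped up as a small helper. A plain-Python illustration (the Entry class here is hypothetical; positions are assigned in one pass rather than re-numbering the whole list per insert, which is the quadratic cost the profile shows):

```python
class Entry(object):
    def __init__(self, amount):
        self.amount = amount
        self.position = None

def bulk_extend(entries, new_entries):
    """Return entries + new_entries with positions assigned in one pass.

    Avoids triggering a full reorder for every single append.
    """
    base = len(entries)
    for idx, entry in enumerate(new_entries):
        entry.position = base + idx
    return list(entries) + list(new_entries)

# Seed an 'existing' collection with positions already set.
existing = []
for amount in (10, 20):
    entry = Entry(amount)
    entry.position = len(existing)
    existing.append(entry)

updated = bulk_extend(existing, [Entry(30), Entry(40)])
# positions run 0..3 in list order
```

In the mapped case, `self.entries = updated` (as in the messages above) replaces the collection wholesale, so the ORM sees one set operation instead of thousands of mutations.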
[sqlalchemy] Re: Problem with coverage and sqlalchemy declarative synonym?
Doug Latornell wrote:
> Over on the TurboGears list a TG2 user pointed out a problem that arises when nosetests --with-coverage is run on a project with a sqlalchemy identity model: http://groups.google.com/group/turbogears/t/7fd3639a5a4d4b8c
>
> I dug into it and have reproduced the problem outside of TurboGears 2 and without nose: http://paste.turbogears.org/paste/7051
>
> I've also shown that the problem isn't there for a plain Python property (in contrast to a sqlalchemy synonym): http://paste.turbogears.org/paste/7052
>
> So, it looks like a problem maybe with the metaclass that sqlalchemy uses to build properties, or with coverage not understanding what that metaclass produces, or something. Or is there something else I'm missing?
>
> Doug

Seems to be a problem in coverage. This narrows down a problem that was first seen only when running inside Bitten.

    $ cat c.py
    def foo(somearg):
        class Bar(object):
            def __getattr__(self, attribute):
                return getattr(somearg, attribute)

        assert 'somearg' not in Bar.__dict__

    foo('hi')
    $ python c.py
    $ coverage -e
    $ coverage -x c.py
    Traceback (most recent call last):
      File "/Users/jek/bin/coverage", line 8, in <module>
        load_entry_point('coverage==2.80', 'console_scripts', 'coverage')()
      File "build/bdist.macosx-10.5-i386/egg/coverage.py", line 978, in main
      File "build/bdist.macosx-10.5-i386/egg/coverage.py", line 398, in command_line
      File "c.py", line 8, in <module>
        foo('hi')
      File "c.py", line 6, in foo
        assert 'somearg' not in Bar.__dict__
    AssertionError
[sqlalchemy] Re: multiple statements in a ddl construct
[EMAIL PROTECTED] wrote:
> On Monday 08 September 2008 18:45:17 jason kirtland wrote:
>> alex bodnaru wrote:
>>> hello friends, i wanted to do a few sql commands in a ddl construct, but i failed with: pysqlite2:dbapi2 warning: you can execute one statement at a time. i'm not very familiar with python db layer, but i know sqlite may be invoked specifically to execute one, or many statements divided by ';', so i suspect it's specifically invoked for one statement. while this is a good security measure for sql from untrusted sources, like user input, it's quite annoying for a situation where free sql should be specifically added. as for my case, i had a batch of inserts based on an external file, and i couldn't invoke ddl.execute_at in a loop, so i had to switch to inserting a batch of unioned selects in one insert, which was nice to learn :).
>>
>> The use case behind the DDL() construct is a single statement. You can fire multiple statements by using multiple DDL()s. But for inserts, I've found it more useful to write an 'after-create' event listener from scratch. Here's one that I use in pretty much every project, in some form or another:
>>
>>     def fixture(table, column_names, *rows):
>>         """Insert data into table after creation."""
>>         def onload(event, schema_item, connection):
>>             insert = table.insert()
>>             connection.execute(
>>                 insert,
>>                 [dict(zip(column_names, column_values))
>>                  for column_values in rows])
>>         table.append_ddl_listener('after-create', onload)
>>
>> Looks like this in use:
>>
>>     fixture(some_table,
>>             ('x', 'y'),
>>             (1, 2),
>>             (3, 4),
>>             (5, 6))
>
> hmm.. interesting. how would u do an initial insert of batch of objects (orm-mapped to whatever entangled bunch of tables)? any possible optimization?
>
>     for obj in objfactory(somedicts):
>         sess.save(obj); sess.flush()
>
> isn't very fast thing... any needed gymnastics with the objects is possible (grouping by type or whatever)

I don't see any optimizations offhand for ORM-driven loads. The general case is going to have dependencies on the niceties that the ORM provides, like assoc proxies, mapper extension actions on insert, etc.
[sqlalchemy] Re: multiple statements in a ddl construct
alex bodnaru wrote:
> hi jason,
>
> On Mon, Sep 8, 2008 at 6:45 PM, jason kirtland [EMAIL PROTECTED] wrote:
>> The use case behind the DDL() construct is a single statement. You can fire multiple statements by using multiple DDL()s.
>> [fixture example quoted in full in the previous message; snipped]
>
> thanks for your idea. it looks cool. i understand this will be triggered after all DDL end. but i wanted to do arbitrary sql as DDL provides, and the insert was just an example for a series of statements.

The DDL() function is just some sugar for the DDL event interface shown above. If you want to perform more than one statement, the API is in place for any customization you'd like.

> just wondered why would this be the place to limit to one statement: it isn't parsed by SA in any way.

SA doesn't put any limits on what SQL gets pushed through. It's passed through directly to the DB-API execute() method. I'd guess that most DB-API implementations will probably reject multiple statements in a single execution.
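That one-statement-at-a-time behavior is easy to see with the stdlib sqlite3 module (the same DB-API restriction alex hit through pysqlite2). A small sketch; the exception class raised for multi-statement strings has varied across Python versions, so both candidates are caught:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

script = """
CREATE TABLE t (x INTEGER);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);
"""

# A single DB-API execute() rejects a multi-statement string.
try:
    conn.execute(script)
    multi_ok = True
except (sqlite3.Warning, sqlite3.ProgrammingError):
    multi_ok = False

# So run the statements one at a time, here by naively splitting on ';'
# (the sqlite3-specific executescript() helper also handles full scripts).
for statement in filter(None, (s.strip() for s in script.split(";"))):
    conn.execute(statement)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# count == 2
```

Splitting on ';' is only safe for trusted scripts without string literals containing semicolons; that caveat is exactly why DB-API drivers leave multi-statement handling to dedicated helpers.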
[sqlalchemy] Re: using the native c implementation of ordereddict
Michael Bayer wrote:
> On Sep 4, 2008, at 12:40 AM, gniquil wrote:
>> Hi All, I am doing some work with xmlrpc. One thing I realize is that whenever I pass dict(row) through xmlrpc, I get a key-ordered struct. But this isn't what I really want. What I want is ordered by insertion or the original list order. This led me to look at the util.ordereddict implementation, which is pure python, which is slow. I looked around and found this: http://www.xs4all.nl/~anthon/Python/ordereddict/ which is a C implementation. At the bottom of the page, there are performance tests. It's much faster. I've got some pretty gigantic tables to pass around, which I think this would really help. Hopefully this could somehow find itself into the next official Python. But before that, we can use this or we can just incorporate it somehow in sqlalchemy... as a suggestion.
>
> the problem with saying utility class X is slow, therefore use Y is that you haven't evaluated if the slowness of X is really impacting the performance of SQLAlchemy overall in a negative way. I think if you ran some profiling results you'd see that OrderedDict calls make up a minuscule portion of time spent for doing all operations, so an external dependency is not necessarily worth it in this case (though it may be). I have some vague recollection that our own ODict does some things the native one does not but I'd have to dig back into the code to remember what they were. If our own ODict could be swappable with ordereddict, we could at least try to import it then fall back to our own (this is what it would look like if ordereddict were introduced into python core anyway).

fwiw i spiked this out a while back (just before 0.4.0, maybe), and swapping in a native ordered dict was a very marginal speed improvement, and most of it was in metadata setup rather than runtime speed. as svil said, it's easy to try this out by monkeypatching in alternate implementations and then hitting the various profiling and speed tests in the test suite.
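The import-with-fallback arrangement Michael describes is the usual pattern. A sketch, where the module name is the third-party C implementation linked above and the fallback is a deliberately tiny toy (in modern Python this is moot, since plain dicts preserve insertion order):

```python
try:
    # Third-party C implementation, if installed.
    from ordereddict import ordereddict as OrderedDict
except ImportError:
    # Minimal pure-Python fallback that remembers insertion order.
    class OrderedDict(dict):
        def __init__(self):
            dict.__init__(self)
            self._order = []

        def __setitem__(self, key, value):
            if key not in self:
                self._order.append(key)
            dict.__setitem__(self, key, value)

        def keys(self):
            return list(self._order)

od = OrderedDict()
od['b'] = 1
od['a'] = 2
# keys come back in insertion order, not sorted order
```

Code written against the common dict surface works unchanged whichever implementation the import picks, which is what makes the swap-in experiment cheap to run.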
[sqlalchemy] Re: SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
DDL() has some simple templating capabilities that can help out a bit here, but I'd suggest taking the ForeignKey code Mike provided as a start and putting together an after-create listener using Table.append_ddl_listener directly: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_sqlalchemy.schema_Table

It would look something like:

    def fks_for_sqlite(event, table, bind):
        for c in table.c:
            for fk in c.foreign_keys:
                sql = your_code_to_make_trigger_for_fk(fk)
                bind.execute(sql)

    table.append_ddl_listener('after-create', fks_for_sqlite)

Michael Bayer wrote:
> you can build this functionality using the DDL() construct provided by SQLAlchemy: http://www.sqlalchemy.org/docs/05/sqlalchemy_schema.html#docstrings_sqlalchemy.schema_DDL
>
> the ForeignKey objects on the table can be pulled out using:
>
>     for c in table.c:
>         for fk in c.foreign_keys:
>             do_something_with_fk(fk)
>
> On Aug 22, 2008, at 11:19 AM, Randy Syring wrote:
>> I would like sqlalchemy to generate triggers on an SQLite database to enforce foreign key relationships. The method is documented here: http://www.sqlite.org/cvstrac/wiki?p=ForeignKeyTriggers and I have written a foreign key trigger generator here: http://rcs-comp.com/site/index.php/view/Utilities-SQLite_foreign_key_trigger_generator Although it does not recognize the kind of references sqlalchemy generates in the CREATE TABLE statements. Anyway, the point of this post is that I would like to know how I should go about extending sqlalchemy so that when I use ForeignKey constructs in the metadata, create statements on the tables would also create the triggers and drop statements on the tables would drop the said triggers to enforce the Foreign Key relationship. Thanks.
[sqlalchemy] Re: SQLite Foreign Key Triggers: how to extend sqlalchemy to generate them
Yep, though possibly you'd want it on before-drop. You can actually handle both tasks in the same function if you like: the event name will be passed in as the first argument.

Randy Syring wrote:
> Jason, Thank you for the response. Using the method you suggest, am I understanding correctly that fks_for_sqlite() would only be run when a create() was processed for that table? Also, I am assuming I would need to create a complementary function for handling the 'after-drop' event.
>
> On Aug 22, 1:25 pm, jason kirtland [EMAIL PROTECTED] wrote:
>> [fks_for_sqlite listener example quoted in full in the previous message; snipped]
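Dispatching on the event name could be sketched like this. It is only the plain-Python shape of such a listener: the trigger SQL is a placeholder, and the Table and Connection arguments are stubbed with fakes so the flow is visible without a database:

```python
def fk_triggers(event, table, bind):
    """Create FK-enforcing triggers after create, drop them before drop.

    The event name ('after-create', 'before-drop', ...) arrives as the
    first argument, so one listener can serve both hooks.
    """
    for fk in getattr(table, 'foreign_keys', []):
        if event == 'after-create':
            bind.execute('CREATE TRIGGER fki_%s ...' % fk)  # placeholder SQL
        elif event == 'before-drop':
            bind.execute('DROP TRIGGER fki_%s' % fk)

# Stand-ins for a Table and a Connection, just to exercise the flow:
class FakeTable(object):
    foreign_keys = ['child_parent_id']

class FakeBind(object):
    def __init__(self):
        self.statements = []
    def execute(self, sql):
        self.statements.append(sql)

bind = FakeBind()
fk_triggers('after-create', FakeTable(), bind)
fk_triggers('before-drop', FakeTable(), bind)
# bind.statements now holds one CREATE TRIGGER and one DROP TRIGGER
```

The same function would then be attached under both event names, e.g. with append_ddl_listener for 'after-create' and 'before-drop'.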
[sqlalchemy] Re: Code working on SQLA 0.4.6 is breaking on SQLA 0.5beta3
Thanks for the traceback. Give r5050 a try.

Cheers, Jason

Harish K Vishwanath wrote:
> Hello Michael, Thanks for your input! I got the latest trunk from SVN. However, if my app specific baseclass is an old style class, it still breaks, this time in orm\attributes.py
>
>     Traceback (most recent call last):
>       File "SqlDB_Test.py", line 9, in ?
>         dbo = SqlDB(SqlDB_Test.Sqlite, "c:\\testdbs")
>       File "d:\recogsys\src\python\RSITerm\SQLConvert\SqlDB.py", line 77, in __init__
>         self.initSetup(echo)
>       File "d:\recogsys\src\python\RSITerm\SQLConvert\SqlDB.py", line 115, in initSetup
>         elixir.setup_all(True)
>       File "d:\recogsys\src\python\rsiterm\__init__.py", line 145, in setup_all
>       File "c:\Python24\lib\site-packages\elixir-0.6.1-py2.4.egg\elixir\entity.py", line 816, in setup_entities
>       File "c:\Python24\lib\site-packages\elixir-0.6.1-py2.4.egg\elixir\entity.py", line 409, in setup_mapper
>       File "c:\python24\lib\site-packages\SQLAlchemy-0.5.0beta4dev_r5048-py2.4.egg\sqlalchemy\orm\__init__.py", line 643, in mapper
>         return Mapper(class_, local_table, *args, **params)
>       File "c:\python24\lib\site-packages\SQLAlchemy-0.5.0beta4dev_r5048-py2.4.egg\sqlalchemy\orm\mapper.py", line 197, in __init__
>         self.__compile_class()
>       File "c:\python24\lib\site-packages\SQLAlchemy-0.5.0beta4dev_r5048-py2.4.egg\sqlalchemy\orm\mapper.py", line 852, in __compile_class
>         manager = attributes.create_manager_for_cls(self.class_)
>       File "c:\python24\lib\site-packages\SQLAlchemy-0.5.0beta4dev_r5048-py2.4.egg\sqlalchemy\orm\attributes.py", line 1494, in create_manager_for_cls
>         manager = factory(class_)
>       File "c:\python24\lib\site-packages\SQLAlchemy-0.5.0beta4dev_r5048-py2.4.egg\sqlalchemy\orm\attributes.py", line 1047, in __init__
>         cls_state = manager_of_class(base)
>       File "c:\python24\lib\site-packages\SQLAlchemy-0.5.0beta4dev_r5048-py2.4.egg\sqlalchemy\orm\attributes.py", line 1511, in manager_of_class
>         finder = self.manager_finders[cls]
>       File "C:\Python24\lib\weakref.py", line 219, in __getitem__
>         return self.data[ref(key)]
>     TypeError: cannot create weak reference to 'classobj' object
>
> This error goes away when I make my app baseclass a new style class. It is not giving any problems with the Exceptions class in Py2.4 though. Could this be a possible issue?
>
> On Wed, Aug 20, 2008 at 9:28 PM, Michael Bayer [EMAIL PROTECTED] wrote:
>> On Aug 20, 2008, at 11:52 AM, Harish K Vishwanath wrote:
>>> Hello, Sorry if I am being stupid. I saw the ticket and r0535 and 537 changeset. Which version of SQLA incorporates these changesets?
>>
>> no worries... use the latest trunk, which will ultimately be released either as 0.5beta4 or 0.5.0.
>
> --
> Regards, Harish
[sqlalchemy] Re: Can't shake off the 'MySQL server has gone away' Error
Alen Ribic wrote: I still seem to get the 'MySQL server has gone away' error. I suspect the problem is in my use of FCGI in production. That's the only difference between my development environment and production, and it works 100% without the error in the development env. I guess I'll have to put some more debugging into my application in the production setup.

MySQL will also throw that error when a query needs more resources than the configuration allows. If there's more data in your production environment or the my.cnf differs, that could be it. In any case, enabling error logging for the MySQL server process may shed some light on the root cause.
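Beyond FCGI and server-side resource limits, a frequent trigger for "MySQL server has gone away" is the server silently dropping pooled connections that sit idle past wait_timeout. A hedged sketch of the usual SQLAlchemy-side guard, pool_recycle, follows; the in-memory SQLite URL is a stand-in so the snippet runs without a MySQL server, and a real deployment would use the MySQL URL instead:

```python
from sqlalchemy import create_engine, text

# pool_recycle=3600 discards and reopens any pooled connection older
# than an hour, so the pool never hands out a connection the server
# has already timed out. (sqlite:// is a runnable stand-in here.)
engine = create_engine('sqlite://', pool_recycle=3600)

with engine.connect() as conn:
    result = conn.execute(text('SELECT 1')).scalar()
print(result)
```

The recycle threshold should simply be set below the server's wait_timeout.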
[sqlalchemy] Re: MySQL encoding problems
Raoul Snyman wrote: Hi, I'm writing a Pylons app, connecting to an existing oldish database. Connecting from my Mac desktop everything is fine, but when I connect from our dev server, I get the following error:

    LookupError: unknown encoding: latin1_swedish_ci

I've done some Googling and found a couple of posts here as well as elsewhere, and I'm not sure what they're talking about in those posts (specifically, I don't see how they solved the problem).

Desktop versions:
  Mac OS X 10.4
  Python 2.5
  SQLAlchemy: 0.5.0beta2
  MySQLdb: 1.2.2 final
  Pylons: 0.9.6.2

Dev server versions:
  Linux Server: Gentoo 3.3.5.20050130-r1
  MySQL Server: 4.1.9-max-log
  Python: 2.4.4
  SQLAlchemy: 0.5.0beta2
  MySQLdb: 1.2.2 final
  Pylons: 0.9.6.2

Unfortunately I can't change the db in any way, as this app is simply pulling a subsection of data out of an already existing system. Any ideas? Do you need more info? A stack trace?

LookupError is pretty general... Would need to see a stack trace.
[sqlalchemy] Re: Problems with query of single column in a table
Venkatesh wrote: Hello, I'm using Elixir with SQLAlchemy, and I'm having trouble with querying a single column in the database. Here is my class and the error that it throws up when I access a column:

    import elixir
    from sqlalchemy import orm, create_engine, MetaData
    from sqlalchemy.orm import scoped_session, sessionmaker
    from sqlalchemy.types import *

    class dateTest(elixir.Entity):
        UserID = elixir.Field(elixir.Integer, primary_key=True, autoincrement=True)
        Date1 = elixir.Field(elixir.TIMESTAMP(timezone=True))
        Date2 = elixir.Field(elixir.DateTime)

        def __init__(self, uId=None, date1=None, date2=None):
            self.UserID = uId
            self.Date1 = date1
            self.Date2 = date2

        def __repr__(self):
            return '%s %s %s' % (repr(self.UserID), self.Date1, self.Date2)

    >>> elixir.session.query(dateTest.Date1).all()
    Traceback (most recent call last):
      File "stdin", line 1, in ?
      File "d:\python24\lib\site-packages\sqlalchemy-0.4.6-py2.4.egg\sqlalchemy\orm\scoping.py", line 98, in do
        return getattr(self.registry(), name)(*args, **kwargs)
      File "d:\python24\lib\site-packages\sqlalchemy-0.4.6-py2.4.egg\sqlalchemy\orm\session.py", line 760, in query
        q = self._query_cls(mapper_or_class, self, **kwargs)
      File "d:\python24\lib\site-packages\sqlalchemy-0.4.6-py2.4.egg\sqlalchemy\orm\query.py", line 68, in __init__
        self.__init_mapper(_class_to_mapper(class_or_mapper, entity_name=entity_name))
      File "d:\python24\lib\site-packages\sqlalchemy-0.4.6-py2.4.egg\sqlalchemy\orm\util.py", line 399, in _class_to_mapper
        return class_or_mapper.compile()
    AttributeError: 'InstrumentedAttribute' object has no attribute 'compile'

When I query for the entire object, I can get it without any problems:

    >>> elixir.session.query(dateTest).all()
    2008-07-16 16:06:12,010 INFO sqlalchemy.engine.base.Engine.0x..10 SELECT datetest_datetest.UserID AS datetest_datetest_UserID, datetest_datetest.Date1 AS datetest_datetest_Date1, datetest_datetest.Date2 AS datetest_datetest_Date2 FROM datetest_datetest ORDER BY datetest_datetest.oid
    2008-07-16 16:06:12,010 INFO sqlalchemy.engine.base.Engine.0x..10 []
    [1 2008-07-16 14:17:22 None, 2 2008-07-16 14:19:24 2008-07-16 14:19:24, 3 2008-07-16 14:19:27 2008-07-16 14:19:27, 4 2008-07-16 14:19:29 2008-07-16 14:19:29, 5 2008-07-16 15:03:04 2008-07-16 15:03:04, 6 2008-07-16 15:03:07 2008-07-16 15:03:07, 7 2008/07/16 15:03:09.390 GMT-7 2008/07/16 15:03:09.390 GMT-7, 8 2008/07/16 15:03:11.315 GMT-7 2008/07/16 15:03:11.315 GMT-7, 100 2008-07-16 00:00:00 2008-07-17 15:07:10, 101 2008/07/16 16:04:28.346 GMT-7 2008/07/16 16:04:28.346 GMT-7, 102 2008/07/16 16:04:32.681 GMT-7 2008/07/16 16:04:32.681 GMT-7]

Any ideas what could be wrong with a query to get a single column from the database?

Columns and scalars via .query() is a SQLAlchemy 0.5 feature; 0.4 doesn't support that usage. Cheers, Jason
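For readers on a modern SQLAlchemy, the 0.5-style single-column query works exactly as the original poster attempted. A runnable sketch with a hypothetical declarative model standing in for the Elixir entity (Elixir itself is long unmaintained):

```python
import datetime
from sqlalchemy import Column, Integer, DateTime, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class DateTest(Base):  # hypothetical stand-in for the Elixir entity
    __tablename__ = 'datetest'
    user_id = Column(Integer, primary_key=True)
    date1 = Column(DateTime)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(DateTest(user_id=1, date1=datetime.datetime(2008, 7, 16)))
session.commit()

# single-column query: returns a list of one-element row tuples
rows = session.query(DateTest.date1).all()
print(rows)
```

Note the result is a list of row tuples, not bare scalars, which is why `rows[0][0]` is needed to reach the value.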
[sqlalchemy] Re: srid autodiscovery mechanism
Michael Bayer wrote: On Jul 13, 2008, at 5:42 PM, Eric Lemoine wrote: So far, so good; the user can do:

    wifi_table = Table('wifi', metadata,
                       Column('the_geom', Geometry(4326)),
                       autoload=True)

But ultimately I'd like my users to be able to do:

    wifi_table = Table('wifi', metadata, autoload=True)

I tried this:

    from sqlalchemy.databases import postgres
    postgres.ischema_names['geometry'] = Geometry

This is ok, but during reflection, when SQLA creates Geometry objects, it obviously passes no srid argument to the Geometry constructor, so the Geometry objects all end up with the srid property set to -1. The proper srid value to pass to the Geometry constructor is actually in a PostGIS table (geometry_columns). So if a geometry column is discovered, the table's srid value could be read from that table and passed to the Geometry constructor. I thought about doing something like this:

    from sqlalchemy.databases import postgres

    def geometry_factory():
        # go read srid associated with table from geometry_columns
        srid =
        return Geometry(srid)

    postgres.ischema_names['geometry'] = geometry_factory

but geometry_factory doesn't have any connection object with which to go read the srid value. My question is simple: do you see solutions to my problem?

like before with asdecimal=False, we don't have a standard API for the ischema_names dict, and again here is a place where you're looking for one. Such an API might look like:

    def create_postgis_type(table, connection):
        srid = connection.execute("select whatever you need to figure out SRID value").scalar()
        return Geometry(srid=srid)

    engine = create_engine('postgres://...', type_reflectors={
        'numeric': PGFloat,
        'PostGIS': create_postgis_type
    })

where reflecttable() distinguishes between a TypeEngine class and a plain callable, which is assumed to implement a particular API. But that's just a guess. I wouldn't implement such an API casually, since while it's very easy to add little features like this, it's much harder to change them or take them away after you've observed they're a bad idea or were not well thought out (additionally this one's a pretty big job to implement across every dialect). Any opinions from Jason/Rick/other?

Would be pretty useful. Would the mapping have to go deeper, and control the resolution of (e.g.) String to PGString across the board for the dialect? The reflection factories would probably want some *args and **kw to pass along column/type metadata snarfed up in the first phase of reflection.
[sqlalchemy] Re: Connecting to MySQL
Lukasz Szybalski wrote: On Thu, Jul 10, 2008 at 11:59 AM, jason kirtland [EMAIL PROTECTED] wrote: Lukasz Szybalski wrote: On Thu, Jul 10, 2008 at 11:26 AM, Heston James - Cold Beans [EMAIL PROTECTED] wrote: Session.add is a version 0.5 method; you're maybe running 0.4.6? In the 0.4.x series, it's going to be: Session.save() for objects that are to be newly added to the session, Session.update() for objects that are already in the session, or Session.save_or_update() to have the library figure it out, as it does for Session.add in v0.5.x.

Hi Rick, That's exactly what the problem was :-) Is there any reason I should avoid using 0.5? I'm running Python 2.4 at the moment; are they compatible? Next quick question: I have a habit of using 'created' and 'modified' columns on my tables. Is there any way in which I can have the ORM update the dates for me when creating and modifying rows? From the link I sent you previously:

    sqlalchemy.Column('CreatedDate', sqlalchemy.Date, default=datetime.now().date()),
    sqlalchemy.Column('CreatedTime', sqlalchemy.Time, default=datetime.now().time())

Not so much. That'll stamp every inserted row with the same time - whatever time it was when Python evaluated the Table definition. Here's a cross-db way to get timestamps:

    from sqlalchemy import Table, Column, DateTime, func

    Table('abc', metadata,
          ...
          Column('created', DateTime, default=func.now()),
          Column('updated', DateTime, onupdate=func.now()))

What exactly is func? Is that a function that just gets the time, or? Can I use onupdate=func.now().time() for time and onupdate=func.now().date() for date? I don't really prefer to have both date and time mixed in a datetime field.

func is a SQL function expression builder: func.now() emits the SQL function NOW() as the column value in the insert, moving responsibility for timestamp calculation to the database. func can build any function the database supports, like current_date or current_time.

http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_everythingelse_functions
[sqlalchemy] Re: Connecting to MySQL
Lukasz Szybalski wrote: On Thu, Jul 10, 2008 at 11:26 AM, Heston James - Cold Beans [EMAIL PROTECTED] wrote: Session.add is a version 0.5 method; you're maybe running 0.4.6? In the 0.4.x series, it's going to be: Session.save() for objects that are to be newly added to the session, Session.update() for objects that are already in the session, or Session.save_or_update() to have the library figure it out, as it does for Session.add in v0.5.x.

Hi Rick, That's exactly what the problem was :-) Is there any reason I should avoid using 0.5? I'm running Python 2.4 at the moment; are they compatible? Next quick question: I have a habit of using 'created' and 'modified' columns on my tables. Is there any way in which I can have the ORM update the dates for me when creating and modifying rows? From the link I sent you previously:

    sqlalchemy.Column('CreatedDate', sqlalchemy.Date, default=datetime.now().date()),
    sqlalchemy.Column('CreatedTime', sqlalchemy.Time, default=datetime.now().time())

Not so much. That'll stamp every inserted row with the same time - whatever time it was when Python evaluated the Table definition. Here's a cross-db way to get timestamps:

    from sqlalchemy import Table, Column, DateTime, func

    Table('abc', metadata,
          ...
          Column('created', DateTime, default=func.now()),
          Column('updated', DateTime, onupdate=func.now()))

You can set both default= and onupdate= on the same Column if you want 'updated' to be non-NULL on insert.
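The default=/onupdate= pattern above can be exercised end-to-end. A runnable sketch against in-memory SQLite using the modern Core API; the 'abc' table mirrors the example, with default= also set on 'updated' so it is non-NULL on insert, as suggested:

```python
from sqlalchemy import (Table, Column, Integer, DateTime, MetaData,
                        create_engine, func, select)

metadata = MetaData()
abc = Table('abc', metadata,
            Column('id', Integer, primary_key=True),
            # stamped by the database at INSERT time
            Column('created', DateTime, default=func.now()),
            # stamped on insert too, and restamped on every UPDATE
            Column('updated', DateTime, default=func.now(),
                   onupdate=func.now()))

engine = create_engine('sqlite://')
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(abc.insert().values(id=1))
    row = conn.execute(select(abc)).one()
print(row.created, row.updated)
```

On SQLite, func.now() compiles to CURRENT_TIMESTAMP; on MySQL or PostgreSQL it emits NOW(), which is the cross-db point of the recipe.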
[sqlalchemy] Re: Printing the SQL generated by table.create()
Aaron Torres wrote: Hey all, I've been looking through the documentation and searching Google for answers to this, but I can't seem to find a solution. If I set meta.bind.echo=True, I can see the SQL statement that is being generated when I call table.create(). Is there any way I can easily grab this SQL statement and store it in a variable (as a string)? I know you can easily print the SQL generated for insert() commands etc., but I can't find an easy way to do this with creates. Sorry if I missed something obvious! Any help would be greatly appreciated.

There is a recipe in the FAQ: http://www.sqlalchemy.org/trac/wiki/FAQ#HowcanIgettheCREATETABLEDROPTABLEoutputasastring

Cheers, Jason
[sqlalchemy] Re: register dml to be triggerred after create_all
alex bodnaru wrote: hi friends, i wish to insert some initial data in a few management tables (like application groups, roles etc). is there a way to register dml to be done after create_all ends? i'd specifically like it to happen after the entire ddl dust reaches the ground.

MetaData and Tables emit DDL events that you can listen for with .append_ddl_listener.

http://www.sqlalchemy.org/docs/04/sqlalchemy_schema.html#docstrings_sqlalchemy.schema_MetaData

Here's an example insert-after-CREATE function from the SA test suite:

    def fixture(table, columns, *rows):
        """Insert data into table after creation."""
        def onload(event, schema_item, connection):
            insert = table.insert()
            column_names = [col.key for col in columns]
            connection.execute(insert, [dict(zip(column_names, column_values))
                                        for column_values in rows])
        table.append_ddl_listener('after-create', onload)
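append_ddl_listener was the 0.4-era API; in current SQLAlchemy the same insert-after-CREATE trick is spelled with the event system. A hedged, runnable sketch (the table and seed rows are invented for illustration):

```python
from sqlalchemy import (Table, Column, Integer, String, MetaData,
                        create_engine, event, select)

metadata = MetaData()
groups = Table('groups', metadata,
               Column('id', Integer, primary_key=True),
               Column('name', String(50)))

# Modern equivalent of the 'after-create' DDL listener: seed rows
# are inserted as soon as this table's CREATE TABLE has executed,
# inside the same connection create_all() is using.
@event.listens_for(groups, 'after_create')
def seed_groups(target, connection, **kw):
    connection.execute(target.insert(),
                       [{'id': 1, 'name': 'admins'},
                        {'id': 2, 'name': 'users'}])

engine = create_engine('sqlite://')
metadata.create_all(engine)
with engine.connect() as conn:
    names = [r.name for r in conn.execute(select(groups).order_by(groups.c.id))]
print(names)
```

Because the listener fires per table, the data lands after that table's DDL but still "after the ddl dust reaches the ground" for its own CREATE.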
[sqlalchemy] Re: MySQL 5.0 DateTime initialization during Table creation is failing.
Gloria W wrote: Hi All, Looking back in these posts, I tried several older variants of MySQL datetime column initialization discussed here, and they're not working. This works in PostgreSQL:

    sqlalchemy.Column('date_created', sqlalchemy.DateTime,
                      sqlalchemy.PassiveDefault(sqlalchemy.sql.func.now()),
                      nullable=False)

But the MySQL equivalent fails:

    sqlalchemy.Column('date_created', sqlalchemy.DateTime,
                      sqlalchemy.PassiveDefault(text("CURRENT_TIMESTAMP")),
                      nullable=False)

What is the valid syntax? Is it failing for other reasons?

The MySQL TIMESTAMP type is required for that default:

    from sqlalchemy.databases import mysql

    sqlalchemy.Column('date_created', mysql.MSDateTime,
                      sqlalchemy.PassiveDefault(text("CURRENT_TIMESTAMP")),
                      nullable=False)
[sqlalchemy] Re: MySQL 5.0 DateTime initialization during Table creation is failing.
jason kirtland wrote: Gloria W wrote: Hi All, Looking back in these posts, I tried several older variants of MySQL datetime column initialization discussed here, and they're not working. This works in PostgreSQL:

    sqlalchemy.Column('date_created', sqlalchemy.DateTime,
                      sqlalchemy.PassiveDefault(sqlalchemy.sql.func.now()),
                      nullable=False)

But the MySQL equivalent fails:

    sqlalchemy.Column('date_created', sqlalchemy.DateTime,
                      sqlalchemy.PassiveDefault(text("CURRENT_TIMESTAMP")),
                      nullable=False)

What is the valid syntax? Is it failing for other reasons?

The MySQL TIMESTAMP type is required for that default:

    from sqlalchemy.databases import mysql

    sqlalchemy.Column('date_created', mysql.MSDateTime,
                      sqlalchemy.PassiveDefault(text("CURRENT_TIMESTAMP")),
                      nullable=False)

err, mysql.MSTimeStamp
[sqlalchemy] Re: sa0.5 __init__ replacement
[EMAIL PROTECTED] wrote: hi, i have:

    class X(object): ...
    X.__init__ = setattr_kargs

where

    def setattr_kargs(*args, **kargs):
        assert len(args) == 1
        x = args[0]
        for k, v in kargs.iteritems():
            setattr(x, k, v)

when SA comes to play, it fails to find a 'self' in the __init__ arguments.

r4880 now considers args[0] as 'self' when introspecting def(*args): ...
[sqlalchemy] Re: MySQL 5.0 DateTime initialization during Table creation is failing.
Gloria W wrote: This gives me an error:

    sqlalchemy.Column('date_created', mysql.MSTimeStamp,
                      sqlalchemy.PassiveDefault(text("CURRENT_TIMESTAMP")),
                      nullable=False)

    NameError: global name 'text' is not defined

from sqlalchemy import text
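In modern SQLAlchemy, PassiveDefault has been replaced by server_default, which renders the DEFAULT clause into the CREATE TABLE so the database itself stamps the column. A sketch of the same column; it happens to run on SQLite too, since SQLite also accepts DEFAULT CURRENT_TIMESTAMP:

```python
from sqlalchemy import (Table, Column, Integer, DateTime, MetaData,
                        create_engine, text, select)

metadata = MetaData()
log = Table('log', metadata,
            Column('id', Integer, primary_key=True),
            # server_default is the modern spelling of PassiveDefault:
            # the DEFAULT clause goes into the DDL and the database
            # fills in the value when the INSERT omits the column.
            Column('date_created', DateTime,
                   server_default=text('CURRENT_TIMESTAMP'),
                   nullable=False))

engine = create_engine('sqlite://')
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(log.insert().values(id=1))  # date_created omitted on purpose
    stamp = conn.execute(select(log.c.date_created)).scalar()
print(stamp)
```

On MySQL the thread's advice still applies: CURRENT_TIMESTAMP as a column default historically required the TIMESTAMP column type rather than DATETIME (older MySQL versions reject it on DATETIME).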
[sqlalchemy] Re: Multiple encodings in my database
my understanding is that mysql works a little differently here. the column-level character sets are storage encodings only. all data to and from the database is encoded in the database connection's configured encoding. that can either be left as-is or converted to Unicode for you.

Bobby Impollonia wrote: If I am using the mysql-specific Column constructs with the charset option, will things be automatically encoded/decoded by SA using that charset? Or is the charset option only used for CREATE TABLE?

On Thu, Jun 26, 2008 at 7:20 PM, Michael Bayer [EMAIL PROTECTED] wrote: first of all, the stack trace suggests you have not set the encoding parameter on create_engine(), as it's still using UTF-8. If you mean that a single database column may have different encodings in different rows, you want to do your own encoding/decoding with encoding errors set to something liberal like 'ignore'. You also need to use your own custom type, as below:

    from sqlalchemy import types

    class MyEncodedType(types.TypeDecorator):
        impl = String

        def process_bind_param(self, value, dialect):
            assert isinstance(value, unicode)
            return value.encode('latin-1')

        def process_result_value(self, value, dialect):
            return value.decode('latin-1', 'ignore')

then use MyEncodedType() as the type for all your columns which contain random encodings. No convert_unicode setting should be used on your engine, as this type replaces that usage.

On Jun 26, 2008, at 6:55 PM, Hermann Himmelbauer wrote: Hi, I'm trying to access a database via SA which contains varchars with different, arbitrary encodings. Most of them are ascii or ISO-8859-2 encoded; however, many are windows-1252 encoded and there are also some other weird ones. In my engine setup, I set the encoding to latin1 and set convert_unicode to True, as my application requires the database values in unicode format. If SA now tries to retrieve such a key, the following traceback occurs:

    File "/home/dusty/prog/python_modules/sqlalchemy/engine/base.py", line 1605, in _get_col
      return processor(row[index])
    File "/home/dusty/prog/python_modules/sqlalchemy/databases/maxdb.py", line 112, in process
      return value.decode(dialect.encoding)
    File "/local/home/dusty/python/Python-2.4.4/lib/python2.4/encodings/utf_8.py", line 16, in decode
      return codecs.utf_8_decode(input, errors, True)
    UnicodeDecodeError: 'utf8' codec can't decode bytes in position 3-6: invalid data

What can I do? It's not so important that all characters are correctly displayed, but it's vital that such improper encodings do not crash my application. Perhaps there's some universal encoding that is able to deal with such problems?

Best Regards, Hermann

-- [EMAIL PROTECTED] GPG key ID: 299893C7 (on keyservers) FP: 0124 2584 8809 EF2A DBF9 4902 64B4 D16B 2998 93C7
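The TypeDecorator recipe above is Python 2 / SA 0.4 era. A runnable modern equivalent of the same idea (encode on the way in, decode leniently on the way out) might look like this; the type and table names are invented for the sketch:

```python
from sqlalchemy import (types, Table, Column, Integer, MetaData,
                        create_engine, select)

class LooselyEncodedType(types.TypeDecorator):
    """Store text as latin-1 bytes; ignore bad bytes in both directions."""
    impl = types.LargeBinary
    cache_ok = True  # required to silence a caching warning on SA 1.4+

    def process_bind_param(self, value, dialect):
        return None if value is None else value.encode('latin-1', 'ignore')

    def process_result_value(self, value, dialect):
        return None if value is None else value.decode('latin-1', 'ignore')

metadata = MetaData()
t = Table('t', metadata,
          Column('id', Integer, primary_key=True),
          Column('val', LooselyEncodedType))

engine = create_engine('sqlite://')
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(t.insert().values(id=1, val='café'))
    out = conn.execute(select(t.c.val)).scalar()
print(out)
```

The 'ignore' error handler is what keeps rows with improper encodings from crashing the application, at the cost of silently dropping unrepresentable characters.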
[sqlalchemy] Re: overzealous check breaks doctesting
Martijn Faassen wrote: jason kirtland wrote: [snip] Could the check somehow be modified to still find true builtins but not those defined in a doctest? Sure. Any suggestions for an alternate check? Heh, no. It's quite difficult to come up with any alternative. I wonder why doctest.DocFileSuite makes these classes appear as __builtin__. I just went digging in doctest, but unfortunately this seems to be an unavoidable side effect of the behavior of the 'exec' statement, which doctest uses. I've just done some experiments, but whatever I do, any class definition I exec ends up with a __module__ set to __builtin__.

I think that comes from __name__ in the exec globals context:

    >>> d = {'__name__': 'foo'}
    >>> exec 'class Quux(object): pass' in d
    >>> d['Quux'].__module__
    'foo'

or

    >>> __name__ = 'bar'
    >>> class O(object): pass
    ...
    >>> O.__module__
    'bar'
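The interpreter transcripts above are Python 2 (statement-form exec). They translate directly to Python 3, where exec is a function; the runnable sketch below shows that a class created under exec() takes __module__ from the __name__ key of the globals dict, which is exactly how doctest-defined classes come to look like builtins:

```python
# __module__ of an exec'd class comes from __name__ in its globals.
d = {'__name__': 'foo'}
exec('class Quux(object): pass', d)
print(d['Quux'].__module__)  # -> 'foo'

# Supplying '__builtin__' reproduces the doctest symptom from the thread:
# the class masquerades as a builtin even though it isn't one.
d2 = {'__name__': '__builtin__'}
exec('class Fake(object): pass', d2)
print(d2['Fake'].__module__)  # -> '__builtin__'
```

This is also why the workaround of reassigning Foo.__module__ after definition makes the instrumentation check pass.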
[sqlalchemy] Re: Moving On
Paul Johnston wrote: Hi, I've had fun over the last 18 months doing odd bits of work on SQLAlchemy. It works pretty damn well on MSSQL now, although I never did quite get all the unit tests nailed. It's been great seeing the library continue to evolve, and particularly satisfying to see things I've started (e.g. AutoCode) being taken forward. Just of late, I've been reassessing priorities in my life, and open source development isn't going to be a big one going forward. In fact, I may even be giving up the computer completely for a year or two and going travelling. I'll be unsubscribing from the mailing list in a couple of days, although I'm happy to receive SA related emails at my personal address, for the next couple of months at least. Thanks for the interesting times, Paul

Hi Paul, Thanks for all of your great work on SA and best of luck with the new road ahead. All the best, Jason
[sqlalchemy] Re: overzealous check breaks doctesting
Martijn Faassen wrote: Hi there, I'm writing a doctest in which I include a MappedCollection subclass. In my doctest, I create such a subclass::

    class Foo(MappedCollection):
        pass

Unfortunately, later on sqlalchemy.orm.collections has a check to determine whether Foo is really a builtin, and if so, it fails to instrument, here::

    def _instrument_class(cls):
        ...
        # In the normal call flow, a request for any of the 3 basic collection
        # types is transformed into one of our trivial subclasses
        # (e.g. InstrumentedList).  Catch anything else that sneaks in here...
        if cls.__module__ == '__builtin__':
            raise sa_exc.ArgumentError(
                "Can not instrument a built-in type. Use a subclass, even a trivial one.")

Unfortunately, when Foo is 'cls', it will have __module__ set to '__builtin__' even though Foo is not a builtin. I can work around this issue in the doctest with something evil like::

    Foo.__module__ = 'foo'

Things then seem to work. Could the check somehow be modified to still find true builtins but not those defined in a doctest?

Sure. Any suggestions for an alternate check? I can also see this as being a doctest problem; perhaps the __module__ should really be set to '__main__' in them, but it might be easier to get it fixed here...
[sqlalchemy] Re: Py 2.3 supported in SA 0.5?
Christoph Zwerschke wrote: Will Python 2.3 still be supported by SA 0.5? I noticed that sqlalchemy.ext.orderinglist uses the new decorator syntax.

Oops, that snuck in accidentally and is fixed in the trunk for the time being. However, 2.3 will probably not be supported in the final 0.5.0 release. We've had a couple of threads on 2.3 support and no one came to 2.3's defense, so it's on the chopping block. If the migration from list comprehensions to generators negatively impacts performance, then perhaps 2.3 support will stay, but barring that...
[sqlalchemy] Re: How can I execute alter session commands at application initialization?
Dr.T wrote: I want to make my Oracle 10g queries case-insensitive. To do this, I need to execute:

    alter session set NLS_SORT=BINARY_CI;
    alter session set NLS_COMP=LINGUISTIC;

at application initialization. How might I do this via SQLAlchemy? Thanks for your help.

    class MySetup:
        def connect(self, dbapi_con, con_record):
            dbapi_con.execute('alter session set NLS_SORT=BINARY_CI')
            dbapi_con.execute('alter session set NLS_COMP=LINGUISTIC')

    engine = create_engine('oracle:...', listeners=[MySetup()])
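The listeners= argument shown is the 0.4/0.5 spelling; in current SQLAlchemy the same per-connection session setup is done with a 'connect' event. A hedged, runnable sketch, with a SQLite PRAGMA standing in for the Oracle ALTER SESSION statements so the example actually executes:

```python
from sqlalchemy import create_engine, event, text

engine = create_engine('sqlite://')

# Modern replacement for the PoolListener class: this fires once for
# each new DBAPI connection the pool creates, which is the right spot
# for session-level settings such as the Oracle NLS_SORT/NLS_COMP
# statements from the question.
@event.listens_for(engine, 'connect')
def set_session_options(dbapi_con, con_record):
    cur = dbapi_con.cursor()
    cur.execute('PRAGMA foreign_keys=ON')  # ALTER SESSION ... would go here
    cur.close()

with engine.connect() as conn:
    fk = conn.execute(text('PRAGMA foreign_keys')).scalar()
print(fk)
```

Because the listener runs on the raw DBAPI connection, the statements apply to every connection the pool hands out, including recycled ones.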
[sqlalchemy] Re: mySQL force index?
Geoff wrote: Does SQLA have any mechanism to use FORCE INDEX? Not in generated SQL. There is a ticket to add hinting support, but currently you'd need to use text() selects or join conditions to get the hints in. http://www.sqlalchemy.org/trac/ticket/921
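Until hint support lands, the text() escape hatch means writing the hinted statement as raw SQL. A minimal runnable sketch of textual execution with bound parameters; a plain SELECT on SQLite stands in for the MySQL query, which would instead read something like SELECT ... FROM t FORCE INDEX (ix_name) WHERE ...:

```python
from sqlalchemy import create_engine, text

engine = create_engine('sqlite://')

# text() passes the SQL through verbatim, so dialect-specific syntax
# such as MySQL's FORCE INDEX survives untouched, while :name-style
# bind parameters still work portably.
with engine.begin() as conn:
    conn.execute(text('CREATE TABLE t (x INTEGER)'))
    conn.execute(text('INSERT INTO t (x) VALUES (:x)'), {'x': 7})
    value = conn.execute(text('SELECT x FROM t WHERE x = :x'),
                         {'x': 7}).scalar()
print(value)
```

The trade-off is that text() statements bypass SQL generation entirely, so they must be maintained by hand per dialect.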
[sqlalchemy] Re: proxied attribute access
[EMAIL PROTECTED] wrote: g'day. i need to have an attribute with setter/getter, say foo, such that the underlying DB/SA attribute (_foo) is completely hidden from users of the class - to avoid someone setting/getting it by mistake. is this possible within SA - via the new AttributeAccess layer, or something else?

Yep. There are examples in examples/custom_attributes, plus more in test/orm/extendedattr.py and test/orm/instrumentation.py. Also doc in sqlalchemy.orm.attributes. (You'll need to consult the source on that one until we get a doc generator that can extract attribute docstrings.)
[sqlalchemy] Re: 64-bit postgres produces: This SchemaItem is not connected to any Engine
robert rottermann wrote: Hi there, I am building a zope/plone site that uses sqlalchemy (collective.lead). On the two systems I am developing on, everything works fine; on a third one that has 64-bit linux installed (all systems use SuSE 10.3) I get a traceback:

    Module ruagaero.intradevcontent.db.db, line 22, in _setup_tables
    Module sqlalchemy.schema, line 166, in __call__
    Module sqlalchemy.schema, line 70, in get_engine
    InvalidRequestError: This SchemaItem is not connected to any Engine

0.3.x? Could be an attempt in '_setup_tables' to autoload tables without a database configured to load from.
[sqlalchemy] Re: 64-bit postgres produces: This SchemaItem is not connected to any Engine
robert rottermann wrote: thanks jason,

jason kirtland schrieb: robert rottermann wrote: Hi there, I am building a zope/plone site that uses sqlalchemy (collective.lead). On the two systems I am developing on, everything works fine; on a third one that has 64-bit linux installed (all systems use SuSE 10.3) I get a traceback:

    Module ruagaero.intradevcontent.db.db, line 22, in _setup_tables
    Module sqlalchemy.schema, line 166, in __call__
    Module sqlalchemy.schema, line 70, in get_engine
    InvalidRequestError: This SchemaItem is not connected to any Engine

0.3.x? Could be an attempt in '_setup_tables' to autoload tables without a database configured to load from.

I am using 0.4.6; the same configuration works on 32-bit linux.

The get_engine function in that traceback doesn't exist in the 0.4 series. Is it possible you're picking up another (maybe system) SA installation instead of 0.4.6?
[sqlalchemy] Re: Is inserting new data with column_mapped_collection inconsistent?
Allen Bierbaum wrote: On Fri, May 16, 2008 at 4:54 PM, jason kirtland [EMAIL PROTECTED] wrote: [..] Anyway, I think this is a bit non-intuitive. What I propose instead is that SA could automatically set the 'keyword' attribute of the Note object as part of the process of assigning it to the mapped collection dictionary. This way the insert could look more like:

  item.notes['not-color'] = Note(value='blue')

and behind the scenes SA would set the new Note's key attribute:

  note.keyword = 'not-color'

Any thoughts on this? Has anyone tried this in the past?

MappedCollection doesn't currently have a mismatch guard on __setitem__ (d[key] = val) or setdefault(), but easily could. There *is* a guard protecting against item.notes = {'not-color': Note('color', 'blue')}, so that machinery is available and applying it to the other setters is straightforward. Automatically setting the value for the attribute_ and column_mapped dict collections would be pretty convenient and DRY. This is a great time to integrate that feature, if you want to try your hand at putting together a patch and tests. If it's not too disruptive to existing users it could slide right in as a new feature of 0.5.

I would be more than happy to look into this (I already have), but I think my skills aren't quite up to the challenge. Could you point me in the general direction?

You might start looking at _convert here: http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/lib/sqlalchemy/orm/collections.py#L1402 It could be the case that that logic can be combined with the proposed value-setting logic and used for @converter, __setitem__, etc. The basic MappedCollection would probably have a default implementation that does no attribute setting, since there's no reasonable way to intuit a reverse operation given only an arbitrary keying function lambda. The attribute_ and column_mapped_ front ends would set up their own implementations of the function that does key checking plus attribute setting behavior.
On a related note, I think it would be good to make this behavior come through a user-customizable callback that takes the index value and the newly assigned item as arguments. This would allow users to add more automatic behavior as needed. For example, my current relationship is actually like this:

  'input_vars': relation(Var,
      primaryjoin=and_(script_table.c.id == var_table.c.script_id,
                       var_table.c.input_output_type == 0),
      collection_class=column_mapped_collection(var_table.c.name)),

So I would want to not only set the name automatically based on the key, but also set input_output_type to 0 in this case. Something like this would be good:

  def input_cb(key, item):
      item.name = key
      item.input_output_type = 0

If the setup I described above works out, this kind of thing could be had pretty much for free.
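Outside of SA, the proposed behavior is just a dict whose __setitem__ pushes the key onto the value through a user-supplied callback. A minimal standalone sketch of the idea (KeyedDict, Var and input_cb are invented names for illustration, not SA API):

```python
class KeyedDict(dict):
    """Dict that hands each (key, value) pair to a callback before
    storing, so the value's keying attribute stays in sync with the
    dictionary key it was assigned under."""
    def __init__(self, on_set):
        dict.__init__(self)
        self.on_set = on_set          # callback(key, value)

    def __setitem__(self, key, value):
        self.on_set(key, value)       # sync the value with its key
        dict.__setitem__(self, key, value)

class Var(object):
    def __init__(self, value):
        self.name = None
        self.input_output_type = None
        self.value = value

def input_cb(key, item):
    item.name = key                   # key -> mapped attribute
    item.input_output_type = 0        # extra automatic behavior

input_vars = KeyedDict(input_cb)
input_vars['x'] = Var(value=42)
assert input_vars['x'].name == 'x'
assert input_vars['x'].input_output_type == 0
```

In SA terms, the column_mapped_ and attribute_mapped_ front ends would supply the equivalent of input_cb automatically, while the base MappedCollection would leave it as a no-op.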
[sqlalchemy] Re: Is inserting new data with column_mapped_collection inconsistent?
Allen Bierbaum wrote: I have just started using column_mapped_collections (http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_relation_collections_dictcollections). I must say, these are very powerful and extremely nice when reading data. But I have run into one thing that seems confusing when it comes to creating new objects in a session. Namely, it is possible to add data to the mapped dictionary in such a way that the data structure is inconsistent with what it would be when reading the same data back. Using the example from the documentation as a start:

  mapper(Item, items_table, properties={
      'notes': relation(Note,
          collection_class=column_mapped_collection(notes_table.c.keyword)),
  })
  # ...
  item = Item()
  item.notes['color'] = Note('color', 'blue')  # keyword attribute is 'color'
  print item.notes['color']

Everything is good here, but what if I did it this way instead:

  item.notes['not-color'] = Note('color', 'blue')

This last line is the problem because it has inserted a link to a new Note that has a keyword of 'color' but is showing up in the dictionary as 'not-color'. If we flush all of this and reload from the database using a query, there will be no 'not-color' entry in the dictionary. Anyway, I think this is a bit non-intuitive. What I propose instead is that SA could automatically set the 'keyword' attribute of the Note object as part of the process of assigning it to the mapped collection dictionary. This way the insert could look more like:

  item.notes['not-color'] = Note(value='blue')

and behind the scenes SA would set the new Note's key attribute:

  note.keyword = 'not-color'

Any thoughts on this? Has anyone tried this in the past?

MappedCollection doesn't currently have a mismatch guard on __setitem__ (d[key] = val) or setdefault(), but easily could. There *is* a guard protecting against item.notes = {'not-color': Note('color', 'blue')}, so that machinery is available and applying it to the other setters is straightforward.
Automatically setting the value for the attribute_ and column_mapped dict collections would be pretty convenient and DRY. This is a great time to integrate that feature, if you want to try your hand at putting together a patch and tests. If it's not too disruptive to existing users it could slide right in as a new feature of 0.5.
[sqlalchemy] Re: Default collection class for unordered relations
Nick Murphy wrote:
> mmh. between db's - maybe you're right. But the order will also change
> depending on current hash values between two runs on an otherwise
> identical system. There are plenty of difficulties in getting a
> repeatable flow for tests etc. already.

That's exactly my point, in fact -- unless order_by is specified, a collection with a defined order is illogical. Using an unordered multiset instead would ensure that users don't accidentally rely on this behavior. Of course, if order_by is given, a list makes perfect sense.

Logic that depends on any ordering from a non-ORDER BY result is a bug, but I don't know that the impact of presenting all users with a new, non-standard, non-native collection type and injecting some kind of __eq__ into mapped classes to satisfy a multiset contract is worth it for what amounts to nannying. Not to mention that unless the implementation did something really silly like rand() its internal ordering for each __iter__ call, it doesn't offer a huge safety improvement for the common case of 'for x in obj.collection: ...'
[sqlalchemy] Re: Default collection class for unordered relations
Nick Murphy wrote:
> Logic that depends on any ordering from a non-ORDER BY result is a bug,
> but I don't know that the impact of presenting all users with a new,
> non-standard, non-native collection type and injecting some kind of
> __eq__ into mapped classes to satisfy a multiset contract is worth it
> for what amounts to nannying. [...]

I have to disagree: it's hardly nannying as much as it is representing the underlying reality with greater fidelity. Relations in SQL are by definition unordered, so there's something of a logical mismatch in representing them with a type for which order is defined.

There's no disagreement from me on that. I just don't see purity winning out over practicality here.
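The point that only ORDER BY grants a dependable order can be seen directly at the SQL level. A small illustration using the stdlib sqlite3 module (purely for demonstration; nothing SA-specific):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(3,), (1,), (2,)])

# Without ORDER BY the row order is whatever the engine happens to
# return -- often stable-looking in practice, but nothing in SQL
# guarantees it, so code relying on it is buggy.
unordered = [r[0] for r in conn.execute("SELECT x FROM t")]

# Only an explicit ORDER BY gives an order you may rely on.
ordered = [r[0] for r in conn.execute("SELECT x FROM t ORDER BY x")]
assert ordered == [1, 2, 3]
assert sorted(unordered) == ordered
```

The same contents come back either way; only the ORDER BY query makes a promise about their sequence, which is why an order_by on the relation is the prerequisite for a meaningfully ordered list collection.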
[sqlalchemy] Re: Case insensitive queries
GK wrote: I've been digging around the archives, web and source code to figure out case-insensitive queries using SA, and am posting these notes (a) for review and comment, and (b) to provide some possible hints for others who may follow.

For me, a big benefit of using SA is that it insulates me from database specifics. Ideally, I wanted a way to handle case-insensitive queries that would work across all supported databases, or at least SQLite, MySQL and Postgres. Previous messages on this list [1][2] suggest that won't happen before SA 0.5 (with no guarantee of that, I guess) [3]. There was a possible answer at [4], but I didn't like the potential implications for query performance.

[1] http://groups.google.com/group/sqlalchemy/browse_frm/thread/ce87bd74fa4311ba/9c06644a94b358b6?lnk=gstq=case+insensitive+query#9c06644a94b358b6
[2] http://groups.google.com/group/sqlalchemy/browse_frm/thread/cc8c775dd54b7b9c/18e059753d819455?lnk=gstq=case+insensitive+query#18e059753d819455
[3] http://www.sqlalchemy.org/trac/ticket/487
[4] http://groups.google.com/group/sqlalchemy/browse_frm/thread/a7b1c325c267bba8/6ef14c5957b59395?lnk=gstq=case+insensitive+query#6ef14c5957b59395

So the solution I've adopted, and tested with SQLite, is to use a TypeDecorator class, thus:

[[
class CI_String(sqlalchemy.types.TypeDecorator):
    """Defines a case insensitive string type using SQLite dialect options.

    TODO: extend case insensitive specification to support other databases
    """
    impl = sqlalchemy.String

    def get_col_spec(self):
        return super(CI_String, self).get_col_spec() + " COLLATE NOCASE"
]]

Which I use in a Table definition thus:

[[
def defineAffyLocalTable(tablename, metadata):
    """Return SQLAlchemy table for Affymetrix local data entry."""
    table = Table(tablename, metadata,
        Column('recordid', Integer, Sequence('recordid_seq'), primary_key=True),
        Column('probeid', CI_String(25), index=True),
        Column('aly_mean', Float),
        Column('can_mean', Float),
        Column('comr_mean', Float),
        Column('topi_mean', Float),
        Column('zaa_mean', Float),
        Column('red_e_mean', Float),
        Column('description', Text),
    )
    return table
]]

Might something like this be a basis for a fairly simple SA-common type that can be implemented appropriately for each database?

The generic character types in types.py should accept collate and character set options. The only dialect with implementations for these options currently is mysql, and some code and docstrings can be cherry-picked from mysql for use in types.py. DDL generation implementations can go in for the other databases with collation support.

But that only gets partway to something like CI_String. SQLite is the only database I know of with a workable across-the-board 'lower' collation. I'm used to seeing collation tied to the character set of the column storage, with more options than a simple binary upper/lower:

  Latin1_General_BIN
  Latin1_General_CI_AI_KS
  Latin1_General_CS_AS_KS_WS
  SQL_Latin1_General_CP1_CI_AS
  utf8_general_ci
  utf8_bin
  utf8_unicode_ci
  utf8_spanish_ci
  utf8_swedish_ci
  utf8_turkish_ci

The default character set used for column storage is usually unknown to SA at DDL time, and, even if known, which collation to pick as insensitive? Collations are database specific and don't always have a 'general' variant. And then there are databases such as Postgres that don't yet have collation support and would need queries to be rewritten to use lower(col). I'd love to see easy insensitive query support in SA but I don't see an obvious path forward.
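For databases without usable collation support (the Postgres case mentioned above), the usual portable fallback is the lower(col) rewrite: normalize both sides of the comparison. A quick illustration with the stdlib sqlite3 module (table and values are made up; the same WHERE clause shape applies on any backend):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE probes (probeid TEXT)")
conn.execute("INSERT INTO probes VALUES ('AFFX-001')")

# Portable case-insensitive match: lower() both the column and the
# parameter, rather than relying on a column-level collation.
row = conn.execute(
    "SELECT probeid FROM probes WHERE lower(probeid) = lower(?)",
    ('affx-001',)).fetchone()
assert row == ('AFFX-001',)
```

The performance concern noted in [4] applies here: wrapping the column in lower() generally defeats a plain index on that column, which is exactly why a collation-based solution like CI_String is attractive where the backend supports it.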
[sqlalchemy] Re: trunk is now on 0.5
Steve Zatz wrote: Trunk rev 4726. Note the following:

  Python 2.5.2 (r252:60911, May 7 2008, 15:19:09)
  [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> from sqlalchemy import *
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
    File ".../sqlalchemy/__init__.py", line 34, in <module>
      from sqlalchemy.engine import create_engine, engine_from_config
    File ".../sqlalchemy/engine/__init__.py", line 54, in <module>
      from sqlalchemy.engine.base import Dialect, ExecutionContext, Compiled, \
    File ".../sqlalchemy/engine/base.py", line 16, in <module>
      from sqlalchemy import exc, schema, util, types, log
    File ".../sqlalchemy/log.py", line 35, in <module>
      rootlogger = logging.getLogger('sqlalchemy')
  AttributeError: 'module' object has no attribute 'getLogger'

Some modules have moved around for 0.5. If you're updating an existing trunk checkout, be sure to clean out the .pyc files.
[sqlalchemy] Re: Problem with non-null fields
TP wrote: Hi, I have a model with a field called 'name' that is set to be non-null. When I look at the actual table created in MySQL, the field really does say it cannot be null. However, when I try to set it to None and commit() the changes, I get a warning printed:

  /Users/tp/sw/python-extensions/lib/python2.5/site-packages/SQLAlchemy-0.4.5-py2.5.egg/sqlalchemy/databases/mysql.py:1488: Warning: Column 'name' cannot be null

But the underlying database IS changed, with the field being set to the empty string ''. This seems wrong, doesn't it? Am I doing something wrong? I'm using SQLAlchemy 0.4.5 on Mac OS X with MySQL 5.1. Thanks for any help!

It's a server configuration issue: the server is using a legacy compatibility mode. Check the MySQL docs for configuring sql_mode to a modern, strict setting.
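As a sketch of what that configuration change looks like, strict behavior is typically enabled server-wide in my.cnf (it can also be set per session with SET sql_mode = ...). The exact mode names available vary by MySQL version, so check the docs for your server:

```ini
# my.cnf -- make NULL-into-NOT-NULL an error instead of a warning
# plus a silent coercion to ''
[mysqld]
sql_mode = STRICT_ALL_TABLES
```

With strict mode on, the INSERT/UPDATE above fails with an error rather than storing the empty string, which is what the NOT NULL declaration was asking for all along.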
[sqlalchemy] Re: Properly closing all database connections
Karlo Lozovina wrote: I'm using SA with SQLite, and after executing session.close() and clear_mappers(), on Linux, lsof still says my SQLite file is open. Running engine.dispose() seems to finally close it, but I'm not sure if that's the proper way?

That's correct. The engine holds the connection pool and dispose() will close the connections it manages.
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Rick Morrison wrote: Sounds nice, thanks for the heads-up.

> There'll be opportunities for dialects to set up pool events as well.

One of the things I'm looking to see is better reconnect support for dead database connections from network partitions, SQL Server restarts, etc. Is that going to be fully Dialect controlled, or is there some coming support for auto-reconnect as well?

I was thinking of a user-level option for liveliness checking on pool checkout, with dialect-specific implementations (e.g. execute a 'SELECT 1', or something more efficient if the driver allows). Is that in line with what you were thinking?
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Rick Morrison wrote:
> I was thinking of a user-level option for liveliness checking on pool
> checkout, with dialect-specific implementations (e.g. execute a
> 'SELECT 1', or something more efficient if the driver allows). Is that
> in line with what you were thinking?

I had in mind something more of an optimistic / reactive nature, like a retry on a cursor failure. But this could work equally well and could be much simpler, albeit at some round-trip cost on every pool checkout. What's the recovery strategy if the connection is found to be dead? An auto-reconnect with some retry count limit, or would it just notify the dialect and that's it?

I believe right now it's a limited # of retries. (The basic support for ping-on-checkout is already in the pool as of 0.4, but no dialect hooks yet.)
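The checkout-time liveliness check being discussed can be sketched in a few lines of plain Python. All of the names below are invented for illustration (SA's real pool and its hooks differ); ping() stands in for the dialect-specific 'SELECT 1':

```python
class StaleConnection(Exception):
    """Raised when the liveliness check finds a dead connection."""
    pass

class FakeConnection(object):
    def __init__(self):
        self.alive = True
    def ping(self):                  # stands in for 'SELECT 1'
        if not self.alive:
            raise StaleConnection()

class Pool(object):
    """Minimal checkout-with-liveliness-check sketch."""
    def __init__(self, factory):
        self.factory = factory
        self.idle = []

    def checkout(self):
        while self.idle:
            conn = self.idle.pop()
            try:
                conn.ping()          # dialect-specific liveliness check
                return conn
            except StaleConnection:
                pass                 # discard the dead connection, try next
        return self.factory()        # all idle connections dead: reconnect

    def checkin(self, conn):
        self.idle.append(conn)

pool = Pool(FakeConnection)
c1 = pool.checkout()
pool.checkin(c1)
c1.alive = False                     # simulate a network partition
c2 = pool.checkout()
assert c2 is not c1                  # the dead connection was replaced
```

This is the pessimistic flavor (pay a ping on every checkout); the optimistic alternative mentioned above would skip the ping and instead recycle the connection only after a cursor failure.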
[sqlalchemy] Re: expensive .rollback() in pool implementation?
Michael Bayer wrote:
> thanks for the effort. Please use r4640 and specify
> rollback_returned=False to the Pool constructor.

I changed that name in r4643 to 'reset_on_return'. I'm hoping to be able to support database-specific methods for state reset in 0.5, and the new option name is a little more future-proof in that regard.
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Rick Morrison wrote: There's a Dialect refactor underway for 0.5.0 that will likely change the way that options are fed to db engines: currently the munging of url params and connect_args into a connect() lambda is happening outside of the dialect's control. In 0.5 that's all moving into dialect-land and the dialects will be able to receive connect_args (e.g. processing odbc_options, if specified there). There'll be opportunities for dialects to set up pool events as well.
[sqlalchemy] Re: branch:user_defined_state questions
Michael Bayer wrote: On Apr 30, 2008, at 8:50 AM, [EMAIL PROTECTED] wrote:
> - replacing __init__(...) - i see that some effort is taken to keep the
> original signature. But the result won't be debuggable IMO. Can't it be
> some_init(*a, **kw) doing whatever(*a, **kw) and/or calling
> original_init(*a, **kw)? What's inside is not changing as sequence/logic
> anyway... OR, maybe fix/hack the co_filename and co_firstlineno
> code-attributes or whatever so inspect.getsource(damnedclass.__init__)
> works...

are we talking about the __init__ placed on instances? how is that not debuggable? I know that pdb is forced to illustrate one line in the trace as being part of a string but that's not such a big deal.

Also, the __init__ decorator is optional in UDS/0.5. The class instrumentor will receive an install_member('__init__', SA's default generated function) call and can do whatever it likes with that. The toolkit is in place for building and substituting your own non-exec'd __init__ that does the setup work SA wants done on init.
[sqlalchemy] Re: association proxy error: stale association proxy
Paul K wrote: The following code duplicates a situation I'm seeing with the association proxy. There are at least two ways I can avoid having the error happen, but since I wasn't sure whether the error is a usage error, I wanted to post here first before implementing my workaround. One workaround for the test code is to delete/comment out line 77 (commented as such). I understand why I'm seeing the error, but should the user really be required to keep the parent around in a variable? I would have thought that the session would be tracking each successive change.

This is fixed in the trunk @ r4593. The issue was in the association proxy's handling of a stale cache attribute it stashes on instances, and was fundamentally:

  p_copy = copy.copy(parent)
  del parent
  p_copy.kids.append(a_kid)  # previously, boom

That's similar to what was going on under the orm hood with the modified instances coming in and out of scope in have_a_kid. The patch in the trunk is pretty small, but if that's not an option you can work around the issue somewhat painfully by removing the cache attribute from instances:

  for attr in dir(p_copy):
      if attr.startswith('_AssociationProxy_kid_associations_'):
          delattr(p_copy, attr)
[sqlalchemy] Re: SQL for (col1, col2) NOT IN (SELECT ...)
This could be expanded slightly to include 'prefixes=[]' support a la select() and insert(). SQLite could use that for creating full-text tables, e.g. 'CREATE VIRTUAL TABLE foo USING ...'. I haven't thought about this extensively, but I think I'd prefer prefixes=['TEMPORARY'] to a temporary=True flag until such a time as we can guarantee that the temp tables are cleaned up when the defining connection is returned to the pool.

Matthew Zwier wrote: Thanks for the quick reply! I've always been quite impressed with the quality of SA and its support. I'm a bit swamped at work at the moment, but I'll see about putting a 'CREATE TEMPORARY TABLE' patch together. MZ

On Mon, Apr 28, 2008 at 4:09 PM, Michael Bayer [EMAIL PROTECTED] wrote:
> that's pretty good you came up with that. We haven't placed explicit
> support for multiple items as the subject of an IN statement. You can
> do what you have there without _Grouping by saying
> ClauseList(...).self_group(). I think you're safe with that for now,
> though we should add a public function for this purpose at some point.
> if it's just a matter of saying CREATE TEMPORARY TABLE instead of
> CREATE TABLE, we can accept a patch for temporary=True, sure.
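The prefixes=[] idea amounts to splicing extra keywords between CREATE and TABLE at DDL-compile time. A toy sketch of the string assembly (compile_create is a made-up stand-in for illustration, not SA's actual compiler):

```python
def compile_create(table_name, columns, prefixes=()):
    """Sketch of how a prefixes=[...] option can slot arbitrary
    keywords into the CREATE statement without a dedicated flag
    for each one (TEMPORARY, VIRTUAL, IF NOT EXISTS variants, ...)."""
    words = ['CREATE'] + list(prefixes) + ['TABLE', table_name]
    return '%s (%s)' % (' '.join(words), ', '.join(columns))

assert (compile_create('foo', ['id INTEGER'], prefixes=['TEMPORARY'])
        == 'CREATE TEMPORARY TABLE foo (id INTEGER)')
```

This is why prefixes=[] generalizes better than temporary=True: the same hook covers any dialect-specific keyword that belongs in that slot, with no new API per keyword.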
[sqlalchemy] Re: How to lock tables in mysql with SqlAlchemy?
Ting Zhou wrote: Dear All, I would like to lock a table, like LOCK TABLES table_name in a mysql command. How can I do that with SqlAlchemy? I have defined a class:

  class Pointer(Entity):
      using_options(tablename='Pointer', autosetup=True)
      id = Field(MSInteger, primary_key=True)

I need to lock table 'Pointer'.

You can lock the tables by executing the SQL directly. I'm not sure what that looks like in Elixir, but in plain SA it'd be something like:

  conn = engine.connect()
  conn.execute("LOCK TABLES Pointer WRITE")
  # ... do stuff with conn
  conn.execute("UNLOCK TABLES")
[sqlalchemy] Re: mssql, Linux, unixODBC - Could not locate column in row for column
Rick Morrison wrote: Yeah, I was under the impression that config args passed in via the create_engine() ctor and via the dburi were treated the same, but looking over engine/strategies.py, it looks as if they have two separate injection points. I'll see if I can get it to allow either, stay tuned.

  create_engine('mssql://h/db', connect_args=dict(odbc_options='bar'))
  create_engine('mssql://h/db?odbc_options=bar')
[sqlalchemy] Re: Session binding to existing table: what am I missing?
Gloria W wrote: Hi all, I am trying to use the session syntax to bind to an existing table, but I am apparently missing something. I want my session to be bound to this:

  my_table = sqlalchemy.Table('my_table', meta, autoload=True, autoload_with=engine)

and if I use the same engine here:

  session = sqlalchemy.orm.sessionmaker(bind=engine, autoflush=True, transactional=True)

isn't the session bound to the table? But when I run this:

  all_records = session.query(my_table).all()

I get this error:

  Traceback (most recent call last):
  [..snip..]
  AttributeError: 'PGCompiler' object has no attribute 'mapped_table'

The docs I've seen only show session bindings using Table classes, so please point me to a good example or give me the quick hint.

The ORM maps your Python classes to tables rather than working with tables directly. You're missing a step in the middle, like:

  class MyClass(object):
      def my_stuff(self):
          self.yadayada

  sqlalchemy.orm.mapper(MyClass, my_table)
  all_records = session.query(MyClass).all()

The ORM tutorial lays out the steps in more detail: http://www.sqlalchemy.org/docs/04/ormtutorial.html
[sqlalchemy] Re: sqlalchemy in virtualenv Instructions
Lukasz Szybalski wrote: Hello, Below you can find instructions on how to set up sqlalchemy in a virtual environment. http://lucasmanual.com/mywiki/TurboGears#head-36fb4094da01b8c28e8bdca803c0f05774eb13b8

Installing SQLAlchemy in a virtualenv is the same as for any package listed on PyPI:

  $ source myenv/bin/activate
  $ easy_install SQLAlchemy
[sqlalchemy] Re: sqlalchemy in virtualenv Instructions
Lukasz Szybalski wrote: On Mon, Apr 14, 2008 at 11:30 AM, jason kirtland [EMAIL PROTECTED] wrote:
> Installing SQLAlchemy in a virtualenv is the same as for any package
> listed on PyPI.

What does this line do?

  $ source myenv/bin/activate

http://pypi.python.org/pypi/virtualenv#activate-script

Which version of sqlalchemy does it install? Current stable? Trunk? Or?

  $ easy_install SQLAlchemy

The latest on PyPI. You can also do:

  $ easy_install SQLAlchemy==dev    # for svn trunk
  $ easy_install SQLAlchemy==0.4.5  # whatever version

http://peak.telecommunity.com/DevCenter/EasyInstall
[sqlalchemy] Re: Does SQLAlchemy ORM use column indexes to optimize queries?
GK wrote: Michael, Thank you for your response - it was very helpful for me. It turns out my main problem was that I was importing an order of magnitude or so more data than I realized, but you were also right about using flush(). You were also right about the overhead of creating extra indexes. In the spirit of putting some data in a public space: starting with an empty database, with a test dataset of 1200 values (about 1150 unique insertions) and flushing after every insertion, I have the following timings:

  No extra indexes:     2:00
  Three extra indexes:  2:15

This is using SQLite with a flat file on a 1.8GHz laptop. The records are each nearly 1KB. There's an overhead of about 5 seconds for reading the data, so most of the above time is loading the database. I haven't yet had time to judge how the performance varies with larger datasets.

2:00 seems very high -- is that 2 minutes? Below are two similar bulk table loads. The first uses the same insert-or-update methodology and only the relational layer (no ORM); that clocks in at 1.25 seconds on my laptop. The second is an ORM implementation with a different duplicate-detection methodology; that clocks in at 2.0 seconds.
---
##
## Relational version
##
import os
import time
import random
from sqlalchemy import *
from sqlalchemy.exceptions import IntegrityError

data_cols = (
    'sendadr', 'fromadr', 'toadr', 'sendnam', 'fromnam', 'tonam',
    'subject', 'received', 'spam', 'folderid')

chunk = lambda: '%x' % random.getrandbits(400)
dataset = [dict((col, chunk()) for col in data_cols)
           for _ in xrange(1200)]
dupes = random.sample(dataset, 50)

db = '1krows.db'
if os.path.exists(db):
    os.unlink(db)

engine = create_engine('sqlite:///%s' % db)
metadata = MetaData(engine)
table = Table('t', metadata,
              Column('id', Integer, primary_key=True),
              Column('occurs', Integer, default=1),
              *(Column(col, Text) for col in data_cols))
table.append_constraint(UniqueConstraint(*data_cols))
metadata.create_all()

table.insert().execute(dupes)
assert table.select().count().scalar() == 50

start = time.time()
insert = table.insert()
update = (table.update().
          where(and_(*((table.c[col] == bindparam(col))
                       for col in data_cols))).
          values({'occurs': table.c.occurs + 1}))

conn = engine.connect()
tx = conn.begin()
for row in dataset:
    try:
        conn.execute(insert, row)
    except IntegrityError:
        conn.execute(update, row)
tx.commit()
end = time.time()

assert table.select().count().scalar() == 1200
assert select([func.count(table.c.id)], table.c.occurs==2).scalar() == 50
print "elapsed: %04f" % (end - start)

##
## ORM version
##
import hashlib
import os
import time
import random
from sqlalchemy import *
from sqlalchemy.orm import *

data_cols = (
    'sendadr', 'fromadr', 'toadr', 'sendnam', 'fromnam', 'tonam',
    'subject', 'received', 'spam', 'folderid')

chunk = lambda: '%x' % random.getrandbits(400)
dataset = [dict((col, chunk()) for col in data_cols)
           for _ in xrange(1200)]

def hashrow(row):
    return hashlib.sha1(
        ','.join(row[c] for c in data_cols)).hexdigest()

dupes = []
for row in random.sample(dataset, 50):
    dupe = row.copy()
    dupe['hash'] = hashrow(dupe)
    dupes.append(dupe)

db = '1krows.db'
if os.path.exists(db):
    os.unlink(db)

engine = create_engine('sqlite:///%s' % db)
metadata = MetaData(engine)
table = Table('t', metadata,
              Column('id', Integer, primary_key=True),
              Column('occurs', Integer, default=1),
              Column('hash', String(40), unique=True),
              *(Column(col, Text) for col in data_cols))
metadata.create_all()

table.insert().execute(dupes)
assert table.select().count().scalar() == 50

class Email(object):
    def __init__(self, **kw):
        for key, value in kw.items():
            setattr(self, key, value)
    def hashval(self):
        return hashrow(dict((col, getattr(self, col))
                            for col in data_cols))

mapper(Email, table)

start = time.time()
session = create_session()
session.begin()

data = [Email(**row) for row in dataset]
chunk, remaining = [], [(e.hashval(), e) for e in data]
while remaining:
    chunk, remaining = remaining[:100], remaining[100:]
    by_hash = dict(chunk)
    dupes = (session.query(Email).
             filter(Email.hash.in_(by_hash.keys())).all())
    for dupe in dupes:
        dupe.occurs += 1
        by_hash.pop(dupe.hash)
    for hashval, email in by_hash.items():
        email.hash = hashval
        session.save(email)
    session.flush()

session.commit()
end = time.time()

assert table.select().count().scalar() == 1200
assert select([func.count(table.c.id)], table.c.occurs==2).scalar() == 50
print "elapsed: %04f" % (end - start)
[sqlalchemy] Re: Use of new_instance() in the user-defined-state branch?
Phillip J. Eby wrote:
> I just noticed that in the latest version of the branch, there's a new_instance() call that is using a class' __new__ method in order to create a new instance, rather than using 'class()'. What I'd like to find out is how to get around this, because Trellis objects will not be properly initialized unless 'class()' is called, with any initialization taking place inside __new__ and/or __init__. Trellis doesn't override __new__ or __init__, and doesn't care what they do. But the creation of an instance *must* be wrapped by the class' __call__ (i.e. class()), as there is a try/finally involved that must execute. Any thoughts on how this might be refactored? What is new_instance() used for?

new_instance creates an instance without invoking __init__. The ORM uses it to recreate instances when loading from the database. new_instance can be added to InstrumentationManager as an extension method... The ORM doesn't care how empty instances are manufactured so long as they can be created without initialization arguments, e.g. a no-arg constructor.

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
[sqlalchemy] Re: Use of new_instance() in the user-defined-state branch?
Phillip J. Eby wrote:
> At 12:02 PM 3/27/2008 -0700, jason kirtland wrote:
>> new_instance creates an instance without invoking __init__. The ORM uses it to recreate instances when loading from the database. new_instance can be added to InstrumentationManager as an extension method... The ORM doesn't care how empty instances are manufactured so long as they can be created without initialization arguments, e.g. a no-arg constructor.
>
> Does that mean that no attributes must be set from new_instance(), either?

You should be able to set whatever you like there.

> On a separate note, I noticed that the class manager machinery allowed one to just directly subclass ClassManager instead of making an InstrumentationManager. Was that intentional? I preserved this behavior when I corrected the staticmethod failure problem, but the tests don't appear to test for that.

Yes, that's an intentional capability. I'm actively refactoring the entire mechanism to work with the MapperExtension-driven instrumentation election that's desired for the Zope security proxy integration; covering tests will be committed shortly as the whole starts to take a usable shape.
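The distinction driving this thread (cls() runs a metaclass __call__ wrapper, while cls.__new__(cls) bypasses it entirely, which is why a Trellis-style class needs a custom new_instance) can be pictured in a few lines of plain Python. The names below are invented for illustration; this is not SQLAlchemy or Trellis code:

```python
class Wrapping(type):
    """Toy metaclass: wraps every instance creation in try/finally,
    loosely like the Trellis behavior described above."""
    def __call__(cls, *args, **kw):
        cls.wrapped_creations += 1
        try:
            return super(Wrapping, cls).__call__(*args, **kw)
        finally:
            pass  # rule-recalculation / cleanup would run here

# Build the class through the metaclass (a spelling that works the
# same way on Python 2 and 3).
Cell = Wrapping('Cell', (object,), {'wrapped_creations': 0})

a = Cell()              # routed through Wrapping.__call__
b = Cell.__new__(Cell)  # bypasses it, like a default new_instance()
```

After this runs, Cell.wrapped_creations is 1, not 2: the instance built via __new__ never touched the wrapper, which is exactly the hole a new_instance() extension method calling cls() would close.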
[sqlalchemy] Re: Integrating the ORM with Trellis
Michael Bayer wrote:
> On Feb 27, 2008, at 12:19 PM, jason kirtland wrote:
>> Michael Bayer wrote:
>>> You also get a hook that receives the collection_class argument in the case of a collection-based attribute, and you get to return your own class, in which case all the collections API stuff can kick in on that side.
>>
>> This can be opened up a bit to allow custom collection event systems as well. We'd move the 'collections._prepare_instrumentation(typecallable)' out of the Attribute and into SA's default implementation of 'instrument_collection_class'. If custom extenders want SA instrumentation, they can call that themselves. The _prepare_instrumentation itself can become a public function; it's stable.
>
> go for it

Ok, that's in. At the extreme end, collections can route events to anything that quacks like a CollectionAdapter. The moderate path is to make the collection's events compatible with the built-in CollectionAdapter. And of course the easy path is to just use the existing collections instrumentation toolkit; it's already plenty flexible. The sample script has an example of the extreme route, and the moderate is in test/.
[sqlalchemy] Re: SQLError: (OperationalError) (2013, 'Lost connection to MySQL server during query')
Ross Vandegrift wrote:
> Hello everyone, I've got a particularly perplexing case of SQLAlchemy losing SQL connections to a MySQL 4.1 database. Before you roll your eyes, I am familiar with both wait_timeout and pool_recycle! On the MySQL server, wait_timeout is set to eight hours:
>
>     mysql> show variables like 'wait_timeout';
>     +---------------+-------+
>     | Variable_name | Value |
>     +---------------+-------+
>     | wait_timeout  | 28800 |
>     +---------------+-------+
>
> In my production.ini on the web server, I have tried a few settings:
>
>     sqlalchemy.default.pool_recycle = 3600
>
> and even:
>
>     sqlalchemy.default.pool_recycle = 600
>
> Neither seems to help, though. Anyone have any ideas? Perhaps I'm doing something wrong in configuring production.ini?

MySQL can throw a 2013 and hang up if the resources needed by a SELECT exceed the server's configuration. I'd check the server logs for clues, then adjust the key_buffer, sort_buffer, etc. as required. The ORM can generate some pretty intense SQL with extreme ease.
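To make the pool_recycle knob concrete: this hypothetical toy pool (not SQLAlchemy's implementation, all names invented) captures its semantics. A connection older than the recycle threshold is discarded at checkout instead of being handed back out, which is why setting pool_recycle below the server's wait_timeout normally prevents stale-connection errors:

```python
import time

class RecyclingPool(object):
    """Toy sketch of pool_recycle semantics; not SQLAlchemy's code."""
    def __init__(self, creator, recycle=3600):
        self.creator = creator  # zero-arg function returning a fresh connection
        self.recycle = recycle  # max connection age in seconds
        self.idle = []          # checked-in (created_at, connection) pairs

    def checkout(self):
        now = time.time()
        while self.idle:
            created, conn = self.idle.pop()
            if now - created < self.recycle:
                return created, conn
            # too old: the server (e.g. MySQL wait_timeout) may have
            # already hung up, so drop it and try the next one
        return now, self.creator()

    def checkin(self, created, conn):
        self.idle.append((created, conn))
```

With recycle below the server's idle cutoff, a checkout can never return a connection the server has already timed out. If the 2013 errors persist with a low pool_recycle, as in the report above, the hangup is happening mid-query rather than between requests, pointing at server-side resource limits.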
[sqlalchemy] Re: weird SA + PostgreSQL with two schemas issue
David Gardner wrote:
> Ran across a weird issue this morning; not sure if it's even an SA issue, it may just be PostgreSQL (8.3) being weird. I recently merged two pgsql databases into one database with two schemas, and a foreign key connecting two of the tables. The first schema is in the search path; the second is not. The problem occurred when I specified schema='schemaA' in my Table() calls: SA wasn't able to see any foreign keys referencing it, but things worked just fine when I removed them. I suspect the problem is that even though I am specifying the schema name in my "add constraint ... foreign key ... references schema_name.table.column", PostgreSQL only seems to store the table name. I believe this is because that schema is in the search path.

Yeah, that seems to be the case. The allegedly fully qualified paths we're reading during reflection are sensitive to the search path. There are a couple of other options:

1) Don't use a schema= for the Tables on the search_path. SQLAlchemy interprets schema=None to mean "anything that can be referenced without a schema qualifier" - could be 'public', could be anything in the path.

2) Remove the other schemas from the search path prior to reflection, and restore them after:

    con = engine.connect()
    con.execute('set search_path to public')
    tbl_a = Table('nodehierarchy', metadata,
                  autoload=True, autoload_with=con)
    tbl_b = Table(...)
    con.execute('set search_path to public,asset')

3) Provide a column override for that foreign key:

    tbl_b = Table('job', metadata,
                  Column('outdir_assetuid', Integer,
                         ForeignKey('alt_schema.nodehierarchy.uid')),
                  schema='alt_schema_2',
                  autoload=True)

It may be that there's an improvement that can be made to the cross-schema reflection, but I think the methodology of #1 above usually works out pretty well.
[sqlalchemy] Re: Reflection + override error with SA 0.3.10 and MySQL
Tim Lesher wrote:
> I'm using SQLAlchemy, reflecting from an existing MySQL database. I want to override two DateTime columns to provide proper created and updated timestamps (since MySQL can't handle auto-updating two TIMESTAMP columns in the same row). According to the SA docs, this should work; however, when I autoload my Table objects, I get the error:
>
>     <class 'sqlalchemy.exceptions.ArgumentError'>: Table 'tablename' is already defined for this MetaData instance.
>
> This short example illustrates the issue; the test_users table fails to load. The error goes away if I either remove the foreign key constraints in the 'test_pets' table, or remove the Column overrides from the 'test_users' table. It seems as if SA is instantiating the users table first (because the pets table refers to it), but not paying attention to the override; it then tries to instantiate the users table again to effect the override, but fails.

You just need to swap the order of the two autoloads here: reflect 'test_users' with its overrides before 'test_pets', so the overridden Table is already in the MetaData when the foreign key pulls it in.
[sqlalchemy] Re: Behavior question with association proxy with custom collection class
Ronald Lew wrote:
> I wanted to use a mapped collection or a customized collection instead of the default list for the collection class, and have the keyword's id as the key. In addition, I wanted to keep using append, but that seems impossible now since I cannot fool the association dict into using it. I wanted to not have to change the below code:
>
>     for kw in (Keyword('1', 'one'), Keyword('2', 'two'),
>                Keyword('3', 'three')):
>         user.keywords.append(kw)  # cannot use this any more

The association proxy accepts the proxy implementation type as an optional argument. You can subclass the proxy dict type and add an append method. See:
http://www.sqlalchemy.org/docs/04/sqlalchemy_ext_associationproxy.html#docstrings_sqlalchemy.ext.associationproxy_AssociationProxy
and:
http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/lib/sqlalchemy/ext/associationproxy.py#L227
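A minimal sketch of that suggestion, with invented names (KeyedByIdDict is hypothetical, and Keyword here is a bare stand-in for the mapped class; see the linked docs for the actual proxy argument): a dict subclass keyed by the keyword's id that also accepts list-style append(), so the loop above stays unchanged:

```python
class KeyedByIdDict(dict):
    """Hypothetical proxy collection: a dict keyed by each
    keyword's id that also supports list-style append()."""
    def append(self, keyword):
        self[keyword.id] = keyword

class Keyword(object):
    # minimal stand-in for the mapped Keyword class in the question
    def __init__(self, id, name):
        self.id, self.name = id, name

keywords = KeyedByIdDict()
for kw in (Keyword('1', 'one'), Keyword('2', 'two'),
           Keyword('3', 'three')):
    keywords.append(kw)   # the original calling code works unchanged
```

Dict-style access then comes for free: keywords['2'] returns the Keyword with id '2'.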
[sqlalchemy] Re: Many to many self referential relationship.
[EMAIL PROTECTED] wrote:
> slightly OT...
> http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/examples/vertical/dictlike-polymorphic.py
> hmmm, figuratively speaking, what you describe above is a sort of single-table-inheritance approach over single-value classes, one per value type, right? So you have objects which have attributes of any name and a value of one of the listed types; same-name attributes may have different types of values. All right. If a class/value has more than just the value (e.g. measurement, units, scale, etc.), and there are lots and lots of them, the single-table approach would be overkill, I suppose. Any specific recipe to follow here?

That example is more of a mapping recipe than a scratch design recipe: I've seen that properties table over and over in all sorts of legacy schemas. Sometimes it's just like that, sometimes it's shared storage for lots of entities and the setup is more like vertical.py. Sometimes it's worse and has foreign keys. If you *are* doing a scratch design of a triple store, then I'd personally look outside of the RDBMS. Perhaps the RDFAlchemy project, which I've been meaning to check out...
[sqlalchemy] Re: How to load a complex data set into an SA aware object?
Jorge Vargas wrote:
> On Feb 14, 2008 10:51 AM, jason kirtland [EMAIL PROTECTED] wrote:
>> Jorge Vargas wrote:
>>> Hi, I'm working on a project where I have several read-only tables that are dependent on a third party. I have several vanilla SQL queries to get the data I need out of them, and I was wondering what would be the best way to load them up into SA. The queries themselves are quite complex, with several inner joins and other nasty SQL; the queries don't change except for one parameter I need to pass in, which is the root item I'm looking for. I was wondering if:
>>> - I could create a class with no Table object that will be populated from the resulting query,
>>> - or if I should go with a db view and/or stored procedure (how will I call that from SA?),
>>> - or if I should translate the raw query into SA's sql expressions,
>>> - or if I should just bypass SA and do a raw dbapi call?
>>> Which would be the best way to handle this situation? Keep in mind this data is read-only, so the only function I need is getInfo(itemId), which will execute the query and return a Table-like object.
>>
>> If you've already got the complex SQL and it's working for you, might as well use it:
>>
>>     query = text('SELECT foo, bar FROM baz WHERE root_item = :root')
>>     resultset = connection.execute(query, root=123)
>
> ok, I tried that and it works standalone, but when I try to map it to a table
> (http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_selects)
> it is still asking me for a Table object:
>
>     <class 'sqlalchemy.exceptions.InvalidRequestError'>: Could not find any Table objects in mapped table
>
> I poked into SA's code and found the following: the text() func creates an instance of _TextClause, which inherits from ClauseElement, which comes from Object. On the other hand, Selectable extends ClauseElement too, so they belong to different inheritance trees, since Selectable is just a marker class
> (http://www.sqlalchemy.org/trac/browser/sqlalchemy/trunk/lib/sqlalchemy/sql/expression.py#L1559).
> I guess making _TextClause extend Selectable should make this work and have no side effects. Is this correct?

To my knowledge you can't define a primary mapper against a free-form text SQL statement: without knowing what the columns will be, the mapper can't set up the attribute mappings in the target class. There are probably other ways to set this up, but what I've done in the past for a read-only query mapping is similar to how I've mapped some views: first, spell out a placeholder Table in SA with all the columns and types that the custom query returns. Then map against the Table, but use the hand-written SQL for queries:

    baz_table = Table('baz_placeholder', metadata, Column(...), ...)
    mapper(Baz, baz_table)

    sql = text('SELECT foo, bar FROM baz WHERE root_item = :root')
    xyz_filtered_bazzes = \
        session.query(Baz).from_statement(sql).params(root='xyz')