[sqlalchemy] Re: Unicode Results from SQL functions
On Wed, Sep 24, 2008 at 10:45 PM, jason kirtland [EMAIL PROTECTED] wrote:

> Adding ?charset=utf8&use_unicode=1 to your MySQL connection URL is a much easier way to get Unicode back from all DB access.

Ok, that works. I thought that create_engine(uri, encoding='latin1', convert_unicode=True) would do this. I am guessing from this that the create_engine arguments are NOT being passed along to the DB-API connector?

--
You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
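A minimal sketch of building such a URL (the credentials are placeholders; `charset` and `use_unicode` are MySQLdb driver options passed through the URL's query string, not SQLAlchemy arguments):

```python
def mysql_url(user, password, host, db):
    """Build a MySQL connection URL that asks the MySQLdb driver to
    decode results as utf-8 and return unicode strings."""
    return "mysql://%s:%s@%s/%s?charset=utf8&use_unicode=1" % (
        user, password, host, db)

# e.g. create_engine(mysql_url("scott", "tiger", "localhost", "test"))
```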
[sqlalchemy] Re: Unicode Results from SQL functions
Shawn Church wrote:
> On Wed, Sep 24, 2008 at 10:45 PM, jason kirtland [EMAIL PROTECTED] wrote:
>> Adding ?charset=utf8&use_unicode=1 to your MySQL connection URL is a much easier way to get Unicode back from all DB access.
>
> Ok, that works. I thought that create_engine(uri, encoding='latin1', convert_unicode=True) would do this. I am guessing from this that the create_engine arguments are NOT being passed along to the DB-API connector?

No. I believe both of those specify the treatment of string data going _to_ the DB-API only, not bidirectional behavior.
[sqlalchemy] Re: Unicode Results from SQL functions
On Wed, Sep 24, 2008 at 11:04 PM, jason kirtland [EMAIL PROTECTED] wrote:
> No. I believe both of those specify the treatment of string data going _to_ the DB-API only, not bidirectional behavior.

OK, let's see: check database encoding, table encoding, column encoding, connection encoding/convert to unicode, sqlalchemy encoding/convert to unicode, and client encoding -- and if they all match up I should be good to go :-) Please don't take that as a criticism of SQLAlchemy, which is an excellent package; it just always amazes me how a simple (YES, Unicode is SIMPLE) idea can get so complicated.

Thanks again for the help,

Shawn
[sqlalchemy] Re: column renaming and adding
Thanks Michael! That's a good thing to know!

On Thu, Sep 25, 2008 at 3:51 AM, Michael Bayer [EMAIL PROTECTED] wrote:
> On Sep 24, 2008, at 2:34 PM, Wim Verhavert wrote:
>> How is this possible? I thought that by saying:
>>     properties={'achternaam': entity_tabel.c.naam}
>> you actually rename the column, but it seems it creates another attribute, which would lead to unexpected results in my case.
>
> Normally this is the case, but since your Persoon object inherits from Entity, the naam attribute is inherited from Entity and is still present. The case of a subclass trying to move a column which is named differently on the base class has not been addressed as of yet (which basically means it doesn't work). It would also be a little tricky to make it work within SA, since if you set different data on Persoon.naam and Persoon.achternaam, that would be kind of ambiguous. There's no real way for Persoon to not have naam at all, since it's a subclass of Entity and from a Python point of view will inherit all attributes.
[sqlalchemy] Re: cascade defaults
On Thursday 25 September 2008 04:30:19 Michael Bayer wrote:
> On Sep 24, 2008, at 6:23 AM, [EMAIL PROTECTED] wrote:
>> hi. i have the cascade= option of relation() switched automatically between 'all', other values, and nothing. so i used cascade=None as "nothing", hoping it would get the default behaviour. the result was that some query got 6x slower (actually that's why i went hunting it)... nope, it happens that source-wise, exactly cascade=False triggers the default behavior (whatever that is) and nothing else. not really intuitive. well, fine, i'll fix my code, but the question still remains: why does cascade=None or cascade='' or whatever empty thing make queries (sqlite) sooo slow? i've compared the sql of the query - it's the same. did something in the schema go wrong? or result-processing? or what? is it worth chasing, or is there a known reason?
>
> wild guess: something to do with objects that normally would have been cascaded into the session were not, and are being loaded instead. but otherwise no clue, you'd have to provide specifics.

i dug further... it's an implicit m2m relation, and with cascade=None it makes/pulls into the session some parasite incomplete m2m records (with one link being null) - one per each normal record. and it seems that gives sqlite headaches.

and a related suggestion: why not use a symbol, e.g. some singleton called DefaultValue, instead of hard-to-guess default values (be they False, None, '', whatever)? the actual default values are mostly set up later, so the "if something is DefaultValue: something = actual-default-value" is there anyway.

> i think the foo=False as a default might have been some habit I picked up when looking at the source code to SQLObject (or maybe SQLObject2). I didn't think we had too much public API with the default=False thing going on, but I haven't taken a poll. Usually these defaults should just be documented. A symbol like DefaultValue might be nice, except I don't see how that's any more obvious for someone who isn't reading docstrings.

well, it can go in interfaces.py and be documented with one sentence somewhere at the top of the docs, and be used everywhere a None cannot - e.g. in the lazy= argument, or in this cascade= - i.e. where None has another meaning or is meaningless.
[sqlalchemy] Re: Unicode Results from SQL functions
On Sep 25, 2008, at 2:19 AM, Shawn Church wrote:
> OK, let's see: check database encoding, table encoding, column encoding, connection encoding/convert to unicode, sqlalchemy encoding/convert to unicode, and client encoding -- and if they all match up I should be good to go :-) Please don't take that as a criticism of SQLAlchemy, which is an excellent package; it just always amazes me how a simple (YES, Unicode is SIMPLE) idea can get so complicated.

use sqlite, and everything is unicode instantly ;)
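The quip holds for the stdlib sqlite3 driver that SQLAlchemy's sqlite support sits on: text round-trips as unicode with no charset or use_unicode flags. A standalone sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")
# insert a non-ASCII value ("Grüße") with no driver options at all
conn.execute("INSERT INTO t VALUES (?)", ("Gr\xfc\xdfe",))
value = conn.execute("SELECT name FROM t").fetchone()[0]
# value comes back as a unicode string, no conversion configured anywhere
```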
[sqlalchemy] Re: cascade defaults
On Sep 25, 2008, at 3:03 AM, [EMAIL PROTECTED] wrote:
>> I didn't think we had too much public API with the default=False thing going on, but I haven't taken a poll. Usually these defaults should just be documented. A symbol like DefaultValue might be nice, except I don't see how that's any more obvious for someone who isn't reading docstrings.
>
> well, it can go in interfaces.py and be documented with one sentence somewhere at the top of the docs, and be used everywhere a None cannot - e.g. in the lazy= argument, or in this cascade= - i.e. where None has another meaning or is meaningless.

it's fine by me once we get Jason's input on it.
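A sketch of the sentinel being proposed, in plain Python; the name `DefaultValue`, the `relation_stub` function, and the example default string are all hypothetical illustrations, not SQLAlchemy API:

```python
class _DefaultValue(object):
    """Singleton marker meaning 'argument not passed', distinct from
    None, False and '', which may each be legal explicit values."""
    def __repr__(self):
        return "DefaultValue"

DefaultValue = _DefaultValue()

def relation_stub(cascade=DefaultValue):
    # Resolve the real default only when the caller passed nothing,
    # so an explicit cascade=None keeps its own meaning.
    if cascade is DefaultValue:
        cascade = "save-update, merge"  # stand-in for the actual default
    return cascade
```

Identity comparison (`is DefaultValue`) is what makes this unambiguous: no falsy value a caller might legitimately pass can collide with the marker.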
[sqlalchemy] Re: Newbie many-to-many using Declarative question
Michael,

    class PressRoutingPress(Base):
        '''This class defines the many-to-many join table between
        press and press_routing.'''
        __tablename__ = 'press_routing_press'
        __table_args__ = {'autoload': True}
        press_id = Column(Integer, ForeignKey('press.id'), primary_key=True)
        press_routing_id = Column(Integer, ForeignKey('press_routing.id'),
                                  primary_key=True)

    class PressRouting(Base):
        '''This class defines the press_routing table information.'''
        __tablename__ = 'press_routing'
        __table_args__ = {'autoload': True}

    class Press(Base):
        '''This class defines the press table information.'''
        __tablename__ = 'press'
        __table_args__ = {'autoload': True}
        # many-to-many Press <-> PressRouting
        press_routing = relation('PressRouting',
            secondary=PressRoutingPress,
            primaryjoin=id == PressRoutingPress.press_id,
            secondaryjoin=PressRouting.id == PressRoutingPress.press_routing_id,
            foreign_keys=[PressRoutingPress.press_id,
                          PressRoutingPress.press_routing_id],
            uselist=False)
            # backref=backref('press'))
            # viewonly=True)

This all works till I try to instantiate an instance of a Press() object; then I get the following exception:

> when you use the secondary argument on relation(), that should be a plain Table object and should not be mapped (i.e. there should be no separate class for it):
>
>     press_routing_press = Table('press_routing_press', Base.metadata,
>         Column('press_id', Integer, ForeignKey('press.id'),
>                primary_key=True),
>         Column('press_routing_id', Integer,
>                ForeignKey('press_routing.id'), primary_key=True)
>     )
>
>     class Press(Base):
>         ...
>         press_routing = relation(PressRouting, secondary=press_routing_press)
>
> no other arguments to relation() are needed. If you do want PressRoutingPress to be mapped, you use the association object pattern, which means you aren't using the secondary keyword. The non-declarative version is here: http://www.sqlalchemy.org/docs/05/mappers.html#advdatamapping_relation_patterns_association

I implemented your suggestion and that cleared things right up, and it made the set-up code much simpler. Thanks for your help; considering how much time you spend answering questions, I really appreciate your attention to my issues!

Thanks again,
Doug
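For later readers, the suggested shape can be sketched end to end in a declarative setup. This is written against a current SQLAlchemy, so `relationship` stands in for 0.5's `relation`; the table and class names follow the thread, and the single-column tables are simplified placeholders:

```python
from sqlalchemy import Column, ForeignKey, Integer, Table, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

# Plain Table object for the join table -- it is NOT given a mapped class.
press_routing_press = Table(
    "press_routing_press", Base.metadata,
    Column("press_id", Integer, ForeignKey("press.id"), primary_key=True),
    Column("press_routing_id", Integer,
           ForeignKey("press_routing.id"), primary_key=True))

class PressRouting(Base):
    __tablename__ = "press_routing"
    id = Column(Integer, primary_key=True)

class Press(Base):
    __tablename__ = "press"
    id = Column(Integer, primary_key=True)
    # the join table goes in via 'secondary'; the joins are derived
    # from the foreign keys, so no other arguments are needed
    press_routing = relationship(PressRouting, secondary=press_routing_press)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Press(press_routing=[PressRouting(), PressRouting()]))
session.commit()
loaded = session.query(Press).one()
```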
[sqlalchemy] Re: Problems with join query, can't figure out what's wrong. Noob alert!
OK, I'm back again. I played with Elixir and was able to re-implement your example and got the desired query. However, I found a couple of strange parts in your code.

1. Why do you mix mapper(), Table and Elixir? Elixir creates tables and maps everything for you; this is what it was designed for. If you want total control, use direct mapping as in the previous example. I think this may be the root cause.

2. using_options(shortnames=True) is not necessary; this option is ON by default.

3. The Elixir naming convention for relations is member name + parent key name. So for 'Prosjekt', the line p_prosjektid = ManyToOne('Prosjekt') gives you a p_prosjektid_prosjektid key, which can be OK, but the p_prosjektid name itself is confusing, since in your case p_prosjektid is a relation property, not an id as the reader would think.

OK, and here is the working thing:

    metadata.bind = app.session.connection()
    metadata.bind.echo = True

    class Prosjekt(Entity):
        using_options(shortnames=True)
        prosjektid = Field(Integer, primary_key=True)
        kundeid = Field(Integer)
        p_prosjekt = ManyToOne('Prosjekt')
        sak = OneToMany('Sak')

    class Sak(Entity):
        using_options(shortnames=True)
        saksnr = Field(Integer, primary_key=True)
        prosjektid = ManyToOne('Prosjekt')

    setup_all()
    create_all()

    session.query(Sak).join((Prosjekt, Sak.prosjektid)).filter(
        Prosjekt.kundeid == 1532).all()  # works

Hope this helps,
Alex
[sqlalchemy] Oracle BFILE methods
Perhaps someone can comment on this question about Oracle BFILEs. (Oracle BFILEs are like BLOBs, but the data is kept outside the database tables, in regular files on disk. Each BFILE has a directory and filename entry in the table that points to the file on disk.)

SQLAlchemy handles BFILEs in the same way as BLOBs, and they work nicely, with one exception. BFILEs have several methods that can be called on the LOB handle, and there doesn't appear to be a way in SQLAlchemy to call these methods. In particular, it would be very useful to be able to call getfilename() and fileexists(). These are implemented in cx_Oracle, so the underlying facility is available. SQLAlchemy always gets the data with read() and returns the data.

In the particular application I'm working on, the user can upload various files to a database, and these are stored as BFILEs. The user can also browse through the database in various ways, and what he wants to see at first is the name of the file he uploaded (that is, from lob.getfilename()), rather than the contents of the file. Is there a way to call getfilename() on a BFILE column currently, or could that be implemented?

Thanks, Matthew
[sqlalchemy] Re: Oracle BFILE methods
On Sep 25, 2008, at 2:37 PM, MHC wrote:
> Is there a way to call getfilename() on a BFILE column currently, or could that be implemented?

SQLAlchemy's default handling of all streamed Oracle objects like BLOB, BFILE, etc. is to pre-read their contents into the result set. The rationale is that cursor methods like fetchmany() and fetchall() can then be used without the cursors associated with the individual column objects getting lost if more than cursor.arraysize rows have been read (we currently set cursor.arraysize to 50). It is also so that the binary data returned by the result row is directly compatible with that of all other DBAPIs.

Recognizing the usefulness of cx_Oracle's LOB object, SQLAlchemy allows one to disable this behavior by sending the parameter auto_convert_lobs=False to your create_engine() call. You'll then get the LOB objects directly present in your result rows. These need to be consumed before the next page of data is fetched from the database (i.e., within every group of arraysize rows, which is 50 by default but can also be configured using the arraysize parameter to create_engine()).

Going to add a note about this now to http://www.sqlalchemy.org/trac/wiki/DatabaseNotes .
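Concretely, the engine configuration described above looks like this (the DSN and credentials are placeholders; both keyword arguments are cx_Oracle dialect options):

```python
from sqlalchemy import create_engine

# Hand cx_Oracle LOB objects back unchanged instead of pre-reading them;
# arraysize controls how many rows are fetched per round trip, i.e. the
# window within which the LOB handles must be consumed.
engine = create_engine(
    "oracle://scott:tiger@localhost:1521/orcl",  # placeholder DSN
    auto_convert_lobs=False,
    arraysize=50,
)
```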
[sqlalchemy] Re: Oracle BFILE methods
On Sep 25, 2008, at 2:51 PM, Michael Bayer wrote:
> [snip]

sigh... I just checked the source, and I didn't quite finish the story. The OracleBinary type is what's actually calling LOB.read() here. auto_convert_lobs is the part which invokes OracleBinary for all result sets, whether OracleBinary was specified or not. But if you are in fact reading from a column that you've stated has a Binary type (either directly or via table reflection), it's still going to LOB.read(). So you'd additionally have to forgo the usage of the Binary type and use an agnostic type such as NullType to get the raw cx_Oracle LOB object in those cases.

Putting on my thinking cap to see if there's a way to smooth this out, given that the unicode question from earlier in the day has some overlap here.
[sqlalchemy] Re: Oracle BFILE methods
Here's a patch against the current 0.5 trunk which I am going to add some test coverage for; feel free to try it out. This will instruct the OracleBinary type to not read() the LOBs if the flag is set. It makes no sense for it to do so otherwise.

Index: lib/sqlalchemy/databases/oracle.py
===================================================================
--- lib/sqlalchemy/databases/oracle.py (revision 5124)
+++ lib/sqlalchemy/databases/oracle.py (working copy)
@@ -93,6 +93,9 @@
     def result_processor(self, dialect):
         super_process = super(OracleText, self).result_processor(dialect)
+        if not dialect.auto_convert_lobs:
+            return super_process
+
         lob = dialect.dbapi.LOB
         def process(value):
             if isinstance(value, lob):
@@ -123,6 +126,9 @@
         return None

     def result_processor(self, dialect):
+        if not dialect.auto_convert_lobs:
+            return None
+
         lob = dialect.dbapi.LOB
         def process(value):
             if isinstance(value, lob):
Index: CHANGES
===================================================================
--- CHANGES (revision 5124)
+++ CHANGES (working copy)
@@ -33,6 +33,10 @@
       than strptime/strftime, to generically support pre-1900
       dates, dates with microseconds. [ticket:968]

+- oracle
+    - Setting the auto_convert_lobs flag to False on
+      create_engine() will also instruct the OracleBinary type
+      to return the cx_oracle LOB object unchanged.

 0.5.0rc1

On Sep 25, 2008, at 3:09 PM, Michael Bayer wrote:
> [snip]
[sqlalchemy] Re: Oracle BFILE methods
Michael,

Thanks very much for your quick replies. I'll try out the patch.

Matthew

On Sep 25, 3:15 pm, Michael Bayer [EMAIL PROTECTED] wrote:
> Here's a patch against the current 0.5 trunk which I am going to add some test coverage for; feel free to try it out. This will instruct the OracleBinary type to not read() the LOBs if the flag is set. It makes no sense for it to do so otherwise.
>
> [attachment: oracle_no_lobs_patch.txt, 1K]
>
> [snip]
[sqlalchemy] 1919 year problem
Try this code. The problem: datetimes with year = 1919 cannot be read back from the db.

    def testVeryOldDate(self):
        class DateHolder(object):
            def __init__(self, name, date):
                self.name = name
                self.date = date

        users_table = Table('users', metadata,
            Column('id', Integer, primary_key=True),

        dhAC = AlchemyClass(DateHolder)
        dhAC.AddField(PrimitiveAlchemyFieldInfo("name", String()))
        dhAC.AddField(PrimitiveAlchemyFieldInfo("date",
            DateTime(timezone=True), nullable=True))

        postgresSQLEngine = sqlalchemy.create_engine(
            "%(protocol)s://%(user)s:%(password)s@:%(port)d/%(dbName)s" % {
                'protocol': Configuration.db.Protocol,
                'user': Configuration.db.User,
                'password': Configuration.db.Password,
                'port': Configuration.db.Port,
                'dbName': Configuration.db.DBName})

        metadata = MetaData()
        dhAC.RegistrTablesInMetadata(metadata)
        dhAC.RegistrMappersInMetadata(metadata)
        metadata.drop_all(bind=postgresSQLEngine)
        metadata.create_all(bind=postgresSQLEngine)
        Session = sessionmaker(bind=postgresSQLEngine)
        session = Session()

        beforMatrixRevolution = datetime.date(year=1919, month=1, day=1)
        afterMatrixRevolution = datetime.date(year=1920, month=1, day=1)
        tmp = datetime.datetime.now().time()
        beforMatrixRevolution = datetime.datetime.combine(beforMatrixRevolution, tmp)
        afterMatrixRevolution = datetime.datetime.combine(afterMatrixRevolution, tmp)

        session.add(DateHolder("First", afterMatrixRevolution))
        session.commit()

        session = Session()
        amrHld = [dh for dh in session.query(DateHolder)]
        amrHld = amrHld[0]
        self.assertEqual(1920, amrHld.date.year)

        session.add(DateHolder("First", beforMatrixRevolution))
        session.commit()

        session = Session()
        cought = False
        try:
            dHld = [dh for dh in session.query(DateHolder)]
            self.assertEqual(2, len(dHld))
            years = [h.date.year for h in dHld]
            years.sort()
            self.assertEqual([1919, 1920], years)
        except:
            cought = True
        self.assertTrue(cought)
        session.commit()
        metadata.drop_all(bind=postgresSQLEngine)
[sqlalchemy] Re: 1919 year problem
Sorry, in the previous message I pressed Enter accidentally:

    def testVeryOldDate(self):
        class DateHolder(object):
            def __init__(self, name, date):
                self.name = name
                self.date = date

        metadata = MetaData()
        dateTable = Table('DateHolder', metadata,
            Column('id', Integer, primary_key=True),
            Column('date', DateTime(timezone=True), nullable=True),
            Column('name', String()))

        postgresSQLEngine = sqlalchemy.create_engine(
            "%(protocol)s://%(user)s:%(password)s@:%(port)d/%(dbName)s" % {
                'protocol': Configuration.db.Protocol,
                'user': Configuration.db.User,
                'password': Configuration.db.Password,
                'port': Configuration.db.Port,
                'dbName': Configuration.db.DBName})

        mapper(DateHolder, dateTable)
        metadata.drop_all(bind=postgresSQLEngine)
        metadata.create_all(bind=postgresSQLEngine)
        Session = sessionmaker(bind=postgresSQLEngine)
        session = Session()

        beforMatrixRevolution = datetime.date(year=1919, month=1, day=1)
        afterMatrixRevolution = datetime.date(year=1920, month=1, day=1)
        tmp = datetime.datetime.now().time()
        beforMatrixRevolution = datetime.datetime.combine(beforMatrixRevolution, tmp)
        afterMatrixRevolution = datetime.datetime.combine(afterMatrixRevolution, tmp)

        session.add(DateHolder("First", afterMatrixRevolution))
        session.commit()

        session = Session()
        amrHld = [dh for dh in session.query(DateHolder)]
        amrHld = amrHld[0]
        self.assertEqual(1920, amrHld.date.year)

        session.add(DateHolder("First", beforMatrixRevolution))
        session.commit()

        session = Session()
        cought = False
        try:
            dHld = [dh for dh in session.query(DateHolder)]
            self.assertEqual(2, len(dHld))
            years = [h.date.year for h in dHld]
            years.sort()
            self.assertEqual([1919, 1920], years)
        except:
            cought = True
        self.assertTrue(cought)
        session.commit()
        metadata.drop_all(bind=postgresSQLEngine)
[sqlalchemy] from as400 via sqlalchemy to ?
Hello, A while back somebody mentioned they were able to connect to an AS/400. Are there any instructions on how to do that? Could you email them to me? My final goal is to move the data from the AS/400 to MySQL or PostgreSQL, while keeping the table layout and, hopefully, the keys. Any info on what needs to happen before I can do that? Thanks, Lucas -- Python and OpenOffice documents and templates http://lucasmanual.com/mywiki/OpenOffice Fast and Easy Backup solution with Bacula http://lucasmanual.com/mywiki/Bacula
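A schema-and-data copy along the lines the question describes can be sketched with SQLAlchemy's reflection support. This is a minimal sketch, assuming modern SQLAlchemy (1.4+) syntax rather than the era of this thread, and using two in-memory SQLite databases as hypothetical stand-ins for the AS/400 source and the MySQL/PostgreSQL target; connecting to a real AS/400 additionally needs a working DB-API driver (e.g. via ODBC), which this sketch does not cover.

```python
from sqlalchemy import MetaData, create_engine, select, text

# Stand-ins: in practice `src` would point at the AS/400 and `dst` at
# MySQL or PostgreSQL; both URLs here are placeholders.
src = create_engine("sqlite:///:memory:")
dst = create_engine("sqlite:///:memory:")

# Give the source something to copy (illustrative table only).
with src.begin() as conn:
    conn.execute(text("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO item (id, name) VALUES (1, 'widget')"))

# Reflect the source schema and re-create it on the target.
src_meta = MetaData()
src_meta.reflect(bind=src)
dst_meta = MetaData()
for table in src_meta.sorted_tables:
    table.to_metadata(dst_meta)   # carries over columns, PKs, and FKs
dst_meta.create_all(dst)

# Copy the rows table by table, in foreign-key-safe order.
for table in src_meta.sorted_tables:
    with src.connect() as sc, dst.begin() as dc:
        rows = [dict(r._mapping) for r in sc.execute(select(table))]
        if rows:
            dc.execute(dst_meta.tables[table.name].insert(), rows)

with dst.connect() as conn:
    print(conn.execute(text("SELECT name FROM item")).scalar())  # widget
```

Type mapping is the usual sticking point in practice: reflected AS/400 types may not translate one-to-one to MySQL/PostgreSQL, so the reflected `Column` types may need adjusting before `create_all()`.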
[sqlalchemy] Re: Unicode Results from SQL functions
On Thu, Sep 25, 2008 at 6:47 AM, Michael Bayer [EMAIL PROTECTED] wrote:

    Converting *all* strings to unicode would then lead to havoc as soon as someone adds a Binary column to their schema. Another solution, which we have support for, would be for ResultProxy to attempt to match TypeEngine objects to the DBAPI types in cursor.description. We already do this for Oracle binary types, so perhaps this functionality could be enabled for all types. It's backwards-incompatible, though, so it would need to be a create_engine() option (I'd call it detect_result_types). A technical issue with this option is that it creates issues with table reflection which would have to be worked around somehow.

The ?charset=utf8&use_unicode=1 parameter produces the expected behavior. I DID check the documentation and mailing list archives (many posts) before raising this issue, but I thought use_unicode and convert_unicode were different forms of the same thing. I in fact DID try ?convert_unicode=1 as a URI parameter, which of course was incorrect. I'll try to write up a wiki page on Unicode. I'm assuming that the '?uri_params' are specific to each driver (i.e. MySQLdb), so the syntax for unicode support may be different for each? I'm also planning on using binary strings, so I need to write some more tests, although I THINK I understand what is going on now. Thanks again, Shawn
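For reference, the working URL shape can be checked without connecting to anything: SQLAlchemy leaves the query string to the driver, and `make_url` shows what will be handed over. The credentials and database name below are hypothetical placeholders.

```python
from sqlalchemy.engine.url import make_url

# Query-string parameters pass through to the MySQLdb driver rather than
# being interpreted by SQLAlchemy itself.
url = make_url("mysql://user:password@localhost/mydb?charset=utf8&use_unicode=1")
print(url.query["charset"], url.query["use_unicode"])  # utf8 1
```

This also illustrates why `?convert_unicode=1` had no effect: `convert_unicode` is a create_engine() keyword, not a MySQLdb connect argument, so putting it in the URL just sends an unknown parameter to the driver.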
[sqlalchemy] Re: Oracle BFILE methods
On Sep 25, 2008, at 4:12 PM, MHC wrote:

    Michael, I installed 0.5.0rc1 and the patch. It works great -- I can set auto_convert_lobs to False and call the getfilename, fileexists, and size methods on the bfile columns, and I can set it to True and get the default behavior. Thanks very much. A couple of questions... Can I change this setting on the fly if I want to vary the behavior, or does it need to be set once at engine creation? And, would you expect the patch to go into the trunk at some point?

You can change the setting on the fly to some degree by varying the types in use - this means you'd create your own Binary type which either does or does not do the conversion. If you look at the source to OracleBinary, it's pretty easy to do this. But this is not entirely on the fly, since it's associated with the type of construct in use, not a flag on the connection or execution context. If your app is sticking with the SQL construction language and not the ORM, you'd have to send expressions to your select() objects that affect the type.

To really allow it to happen on the fly we'd probably have to include ExecutionContext as an argument to TypeEngine.result_processor() and allow some other flags to be passed to ExecutionContext somehow... it's basically a lot of new API to add. For the time being there's not a really spectacular way to do this except for the boring ways (basically, run result rows through a function). If you're integrating with the ORM there are other ways to get hooks in there; it's sort of use-case-specific how you'd go about it.

Another complication to keep in mind is the arraysize boundary - unless you turn the auto_convert_lobs flag back on, the ResultProxy you get back is not going to pre-fetch columns, which means a fetchmany() or fetchall() might leave unread LOB objects which no longer have a cursor.
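The "boring way" mentioned above - running result rows through a function - can be sketched in a few lines. `FakeLOB` below is a dummy stand-in for a cx_Oracle LOB handle (only its `read()` method matters here) and is not part of any real API; the point is simply to materialize LOB values eagerly, before the cursor moves past them.

```python
class FakeLOB:
    """Hypothetical stand-in for a cx_Oracle LOB handle."""
    def __init__(self, data):
        self._data = data

    def read(self):
        return self._data


def materialize_lobs(rows):
    """Return rows with any LOB-like value replaced by its contents.

    Duck-types on read(): anything with a read() method is drained
    while the cursor that produced it is still valid.
    """
    return [
        tuple(v.read() if hasattr(v, "read") else v for v in row)
        for row in rows
    ]


rows = [(1, FakeLOB(b"payload")), (2, None)]
print(materialize_lobs(rows))  # [(1, b'payload'), (2, None)]
```

In real code this would wrap iteration over the ResultProxy, consuming each row's LOBs before fetching the next batch, which sidesteps the arraysize boundary issue described above.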
I have a bunch of trac tickets that are all in "patch, but no tests yet" format, so I hope to go through these within the next seven days, hopefully by the weekend.
[sqlalchemy] How to perform inner joins
Hi all, Well, I'm still stumped by SqlAlchemy syntax, but it's getting better. I've got some tables built this way using SqlAlchemy 0.5:

    press_table = Table('press', metadata, autoload=True)
    press_routing_table = Table('press_routing', metadata, autoload=True)
    press_routing_press_table = Table('press_routing_press', metadata,
        Column('press_id', Integer, ForeignKey('press.id'), primary_key=True),
        Column('press_routing_id', Integer, ForeignKey('press_routing.id'), primary_key=True),
        Column('type', MSEnum),
        autoload=True)

    class Press(object):
        pass

    class PressRouting(object):
        pass

    mapper(Press, press_table,
           properties=dict(routes=relation(PressRouting,
                                           secondary=press_routing_press_table,
                                           backref='presses')))
    mapper(PressRouting, press_routing_table)

I'm trying to represent a many-to-many relationship between the press table and the press_routing table using the linking table, press_routing_press. I think I've got the table structure and mapping set up, but now I need some help to build a query using SqlAlchemy that does the same thing as this MySQL query:

    select p.id, p.code
    from press p
    inner join press_routing_press prp on p.id = prp.press_id
    inner join press_routing pr on pr.id = prp.press_routing_id
    where pr.code = 'A'

This gives me the results I want from the MySQL command line against the existing tables in the database, but I can't figure out how to construct an equivalent SqlAlchemy version to do the same thing, passing in 'A' as the parameter. I know I'm being dense about this; thanks in advance for the help, Doug
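An equivalent of that two-hop inner join can be built with the SQL expression language. This is a self-contained sketch in modern SQLAlchemy (1.4+) syntax rather than the 0.5 API from the thread; the columns are guessed from the query above, and SQLite stands in for MySQL so the example can run anywhere. Because the link table carries ForeignKey definitions, `join()` can infer both ON clauses.

```python
from sqlalchemy import (Column, ForeignKey, Integer, MetaData, String, Table,
                        create_engine, select)

metadata = MetaData()
press = Table('press', metadata,
              Column('id', Integer, primary_key=True),
              Column('code', String(10)))
press_routing = Table('press_routing', metadata,
                      Column('id', Integer, primary_key=True),
                      Column('code', String(10)))
press_routing_press = Table('press_routing_press', metadata,
    Column('press_id', Integer, ForeignKey('press.id'), primary_key=True),
    Column('press_routing_id', Integer, ForeignKey('press_routing.id'),
           primary_key=True))

engine = create_engine('sqlite:///:memory:')
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(press.insert(),
                 [{'id': 1, 'code': 'P1'}, {'id': 2, 'code': 'P2'}])
    conn.execute(press_routing.insert(),
                 [{'id': 10, 'code': 'A'}, {'id': 11, 'code': 'B'}])
    conn.execute(press_routing_press.insert(),
                 [{'press_id': 1, 'press_routing_id': 10},
                  {'press_id': 2, 'press_routing_id': 11}])

# select p.id, p.code from press p
#   inner join press_routing_press prp on p.id = prp.press_id
#   inner join press_routing pr on pr.id = prp.press_routing_id
#   where pr.code = 'A'
stmt = (select(press.c.id, press.c.code)
        .select_from(press
                     .join(press_routing_press)   # ON inferred from the FKs
                     .join(press_routing))
        .where(press_routing.c.code == 'A'))

with engine.connect() as conn:
    result = conn.execute(stmt).fetchall()
print(result)
```

With the ORM mapping above, the same rows would come back from a query against `Press` joined through the `routes` relation; the expression-language form keeps the correspondence to the raw SQL most obvious.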
[sqlalchemy] Re: Unicode Results from SQL functions
A proposed patch which addresses the issue at the SQLAlchemy ResultProxy level is at: http://www.sqlalchemy.org/trac/ticket/1179
[sqlalchemy] Two foreign keys to the same table, rendering a join error.
Hello, I'm having a bit of trouble with the following case: http://paste.turbogears.org/paste/8177 This is the error I'm getting:

    sqlalchemy.exc.ArgumentError: Could not determine join condition between parent/child tables on relation MappedPolicy.state. Specify a 'primaryjoin' expression. If this is a many-to-many relation, 'secondaryjoin' is needed as well.

Now I know it has to do with SA not being able to tell which column state.id should join to, but how do I fix it? PS: in case you were wondering: for convenience I need those two values present in the table, and since there are always going to be two of them it isn't such a bad design decision... although I may optimize the state table into a Python list or dict in the future. I want to know how to fix this.
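The paste is no longer visible, so the table and column names below are hypothetical, but the shape of the fix is standard: when a child table has two foreign keys pointing at the same parent, each relation must be told which columns it uses - via a primaryjoin expression (as the error message asks), or via foreign_keys. A minimal sketch in modern declarative syntax (SQLAlchemy 1.4+), rather than the classic mapper() style of this thread:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class State(Base):
    __tablename__ = "state"
    id = Column(Integer, primary_key=True)
    name = Column(String(20))

class Policy(Base):
    # Hypothetical stand-in for the MappedPolicy class from the paste.
    __tablename__ = "policy"
    id = Column(Integer, primary_key=True)
    state_id = Column(Integer, ForeignKey("state.id"))
    prev_state_id = Column(Integer, ForeignKey("state.id"))
    # Two FKs point at `state`, so each relation must be disambiguated:
    state = relationship(State, foreign_keys=[state_id])
    prev_state = relationship(State, foreign_keys=[prev_state_id])
    # Equivalent with an explicit primaryjoin, as the error message suggests:
    #   state = relationship(State, primaryjoin=state_id == State.id)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

on, off = State(name="on"), State(name="off")
session.add(Policy(state=on, prev_state=off))
session.commit()

p = session.query(Policy).one()
print(p.state.name, p.prev_state.name)  # on off
```

Without the `foreign_keys`/`primaryjoin` hints, configuring the mappers raises exactly the ArgumentError quoted above, because SQLAlchemy finds two candidate join paths and refuses to guess.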