[sqlalchemy] Re: Postgres and pg_largeobject
* On Wednesday 17 January 2007, at 09:30, Michael Bayer wrote:

>     print "Content-type: text/pdf\n\n"  # whatever the header is for pdf
>     for chunk in res.pdf:
>         sys.stdout.write(chunk.data)
>
> I would agree that the approach taken by pg_largeobject is a useful approach in that you can read just chunked sections.

I've converted the column to bytea ... I've realized that I get no benefit from using BLOB and pg_largeobject ... and the .pdf is much more usable in this form ... thanks a lot ...

-- 
#include <stdio.h>
int main(void){char c[]={10,65,110,116,111,110,105,111,32,98,97,114,98,111,110,101,32,60,104,105,110,100,101,109,105,116,64,116,105,115,99,97,108,105,110,101,116,46,105,116,62,10,10,0};printf("%s",c);return 0;}

You received this message because you are subscribed to the Google Groups "sqlalchemy" group. To post to this group, send email to sqlalchemy@googlegroups.com. To unsubscribe from this group, send email to [EMAIL PROTECTED]. For more options, visit this group at http://groups.google.com/group/sqlalchemy?hl=en
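With the column converted to bytea, the data comes back as ordinary bytes and can still be streamed out in chunks. A minimal sketch of that, using the sqlite3 module's BLOB support as a stand-in for the Postgres bytea column (the table and column names here are made up):

```python
import io
import sqlite3

# Stand-in for a Postgres table with a bytea column; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pdfs (id INTEGER PRIMARY KEY, data BLOB)")
payload = b"%PDF-1.4 fake pdf content " * 200
conn.execute("INSERT INTO pdfs (id, data) VALUES (1, ?)", (payload,))

# Fetch the blob and stream it out in fixed-size chunks, like the
# CGI-style loop over pg_largeobject chunks in the quoted reply.
(blob,) = conn.execute("SELECT data FROM pdfs WHERE id = 1").fetchone()
out = io.BytesIO()
for i in range(0, len(blob), 4096):
    out.write(blob[i:i + 4096])

print(out.getvalue() == payload)  # True — the streamed copy is intact
```

The trade-off the thread settles on: bytea loses pg_largeobject's server-side seek/partial-read, but for a whole-file fetch the chunking can happen client-side as above.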
[sqlalchemy] Re: ORM ramblings 2 - and announcing some o2r wrapper
From that point, we have a category of issue that comes up all the time, where what you want to do is possible, but SA never expected exactly what you're doing and ... This hinted at another way of looking at what this wrapper of mine is: a storage for all the knowledge about how to use SA properly, for that particular field-area - easy, transparent O2R mapping. For example, I don't know SQL - you do. And SA encapsulates that knowledge. My wrapper will essentially encapsulate the 'how I use SA' knowledge. Anything outside this scope would again be done manually, one way or another. Anyway, to me all software is just a (machine-interpretable) storage of human knowledge about something.

bye
svil
[sqlalchemy] Re: Test data apparently not saving
After upgrading sqlite to the most recent version (3.3.10), my problem went away and everything works as expected. Thanks for the help, and sorry for the noise.
[sqlalchemy] Re: [Fwd: [sqlalchemy] Re: scalar select: wrong result types]
without seeing a full example, it's looking like a mysql bug. SA's datetime implementation for MySQL doesn't do any conversion of the return value, since MySQLdb handles that task, whereas the sqlite dialect in SA *does* do conversion, since sqlite doesn't handle that.

Below is my small example (it doesn't clarify where the problem is). I hope someone else will check it, with other databases too (postgres, for example). I hope it can help.

My results:

    sqlite_engine (True, True)
    mysql_engine (False, False)

My configuration: debian etch, sqlite 3, sqlalchemy trunk revision 2212, mysql 5.0.30, python 2.4.4

Alessandro

PS: I hope this email will keep indentation. I have also put this test in http://pastebin.com/861936

    import datetime
    from sqlalchemy import *

    def check(engine):
        mytest = Table('mytest', engine,
                       Column('date', DateTime, primary_key=True,
                              default=datetime.datetime.now))
        try:
            mytest.drop()
        except:
            pass
        mytest.create()
        mytest.insert().execute()
        tt = mytest.alias('tt')
        selTagged = select([tt.c.date,
                            select([tt.c.date], tt.c.date == mytest.c.date,
                                   from_obj=[mytest], limit=1, offset=0,
                                   scalar=True).label('last_mod')])
        # Note: 'text' needs a typemap
        selText = text(str(selTagged), engine,
                       typemap={'last_mod': types.DateTime})

        def isdatetime(sel):
            return isinstance(sel.execute().fetchone().last_mod,
                              datetime.datetime)

        return isdatetime(selTagged), isdatetime(selText)

    sqlite_engine = create_engine('sqlite:///database_test.db')
    mysql_engine = create_engine('mysql://[EMAIL PROTECTED]/xxx')
    print sqlite_engine, check(sqlite_engine)
    print mysql_engine, check(mysql_engine)
[sqlalchemy] Re: Questions about polymorphic mappers
Michael Bayer wrote:
> Simon King wrote:
> > [requirements for instances returned from MapperExtension.create_instance]
>
> at this point the entity_name should get set after your custom create_instance is called (at least that's in the trunk). init_attr is not required; it pre-sets attributes on the object that are otherwise auto-created later (but the autocreation step throws a single AttributeError per attribute, which hits performance a little bit).

Thanks a lot for explaining that. It looks to me like I would be better off simply using this method to load my class hierarchy, rather than trying to twist polymorphic_identity into something it was never meant to do. Also, adding get_polymorphic_identity as a MapperExtension method would add overhead to every single object load for what is probably a very infrequently used feature - I'd hate to be responsible for that! Yet again, SQLAlchemy is already able to do exactly what I want - sorry it's taken a while for me to realise it.

Cheers,
Simon
[sqlalchemy] Full Text Search using PostgreSQL and tsearch2
Hi All,

Trying to implement full text search with PostgreSQL and tsearch2, being a beginner, I am facing some basic hurdles:

1. How do I declare the table in SA - that is, what SA datatype should tsvector correspond to?
2. Can the index be defined using SA, or do we need to do it in the backend?
3. Can the trigger for auto-update be defined using SA, or do we need to do it in the backend?
4. How do I frame the query in SA?

A reference, tutorial, recipe or at least some key guidance could really help me. Dennis has mentioned in a post - "I'm currently doing that with Postgresql (tsearch2) for a project and it works quite well." - but I am not able to work out the pattern.

thanks
sanjay
[sqlalchemy] Re: Simple transactions
On 1/17/07, Miki [EMAIL PROTECTED] wrote:
> # Notice we didn't commit yet
>
> This means that if someone is querying the table in the middle of the transaction they'll get wrong results. (I'm not a DB expert, this might be total nonsense)

no, it just means that queries run in the same uncommitted transaction can see the uncommitted data. this is the way transactions are supposed to work; only connections _outside_ the transaction shouldn't be able to see it.
[sqlalchemy] Re: Full Text Search using PostgreSQL and tsearch2
On Thursday 18 January 2007 07:11, Sanjay wrote:
> Trying to implement full text search with PostgreSQL and tsearch2,
> being a beginner, I am facing some basic hurdles: [...]

SA doesn't support tsvectors, as far as I know. The simplest thing in my mind is to build one table that contains the tsvector (and is not mapped in SA) and a key to the original record. Then you can do whatever queries you like (using the underlying database engine) to get keys to the data in the source table, and use SA to grab those data. Not ideal, but I think it will work. Alternatively, you can create a new type for SA; there is a recent message in the archives about doing that (not with tsvector, but ...). As for querying, I'm not sure how that would best be implemented in SA.

Sean
[sqlalchemy] Re: Simple transactions
just a note: when you use SQLite, by default you get the same connection every time within a single thread, since it's using SingletonThreadPool. read the docs for options on how to change this.
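The idea behind SingletonThreadPool can be imitated in a few lines - a toy sketch (the class name `ThreadLocalPool` is made up here, not SA's implementation) that caches one connection per thread and hands the same object back on every checkout:

```python
import sqlite3
import threading

class ThreadLocalPool:
    """Toy stand-in for SingletonThreadPool: one cached connection per thread."""
    def __init__(self, creator):
        self._creator = creator          # zero-arg callable producing a connection
        self._local = threading.local()  # per-thread storage

    def connect(self):
        # First checkout in a thread creates the connection; later
        # checkouts in the same thread return the very same object.
        if not hasattr(self._local, "conn"):
            self._local.conn = self._creator()
        return self._local.conn

pool = ThreadLocalPool(lambda: sqlite3.connect(":memory:"))
c1 = pool.connect()
c2 = pool.connect()
print(c1 is c2)  # True — repeated checkouts in one thread share a connection
```

This is why, within one thread, an uncommitted transaction is visible to every "new" connection you check out: they are all the same DBAPI connection.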
[sqlalchemy] Re: Simple transactions
FAQ updated with this issue, since it's all too frequent.
[sqlalchemy] Re: Full Text Search using PostgreSQL and tsearch2
i don't know much about tsvector, but as to the various "can the trigger/index be defined in SA" questions: no, you have to use literal postgres commands to do that (which you can of course issue from SA as textual statements). since they are pg-specific anyway, there's not much advantage to SA supporting expressions for those. as to the datatype, create your own datatype. as to the query, i don't know what a tsvector query looks like so you'd have to show me that. if it involves special operators, use the op() method on the column.
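Both suggestions - a custom datatype and op() for the tsearch2 match operator - can be sketched roughly as follows. Note the type-extension API has shifted between SQLAlchemy versions; this sketch uses `UserDefinedType` from recent releases, and the table and column names are made up:

```python
from sqlalchemy import Column, Integer, MetaData, Table, func
from sqlalchemy.schema import CreateTable
from sqlalchemy.types import UserDefinedType

class TsVector(UserDefinedType):
    """Custom column type that emits the Postgres 'tsvector' DDL keyword."""
    cache_ok = True

    def get_col_spec(self, **kw):
        return "tsvector"

metadata = MetaData()
docs = Table(
    "docs", metadata,
    Column("id", Integer, primary_key=True),
    Column("searchable", TsVector()),
)

# The column renders with the custom type in CREATE TABLE ...
print(CreateTable(docs))

# ... and op() builds the @@ match operator for queries.
print(docs.c.searchable.op("@@")(func.to_tsquery("cat & hat")))
```

The index and trigger would still be issued as literal Postgres DDL, per the reply above; only the column type and the query operator get expressed through SA.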
[sqlalchemy] Re: Full Text Search using PostgreSQL and tsearch2
I'd be interested in how you work this out, as I want to do something similar. Would you be willing to write it up and perhaps post it to the wiki? I was able to get MySQL's fulltext search working more quickly than PostgreSQL's, and that's what my customer is used to, so that's what I'm going to use for them. I haven't yet gotten to integrating it with SA. Down the road, I'd like to know how to do the same with PG. Thanks.
[sqlalchemy] Re: Full Text Search using PostgreSQL and tsearch2
On Thursday 18 January 2007 14:19, Chris Shenton wrote:
> I'd be interested in how you work this out as I want to do something similar. [...]

See here for at least an overview: http://www.devx.com/opensource/Article/21674 There are other similar sites available via google.

Sean
[sqlalchemy] Accessing DBAPI cursor object from connection
I would like to access the underlying psycopg2 connection to get at a DBAPI2 cursor, with the ultimate goal of using the copy_from/copy_to protocol for moving large amounts of data to/from the database. I can't seem to find a way to do that from a db engine or connection object. Is there a way (assuming that I am using postgres)?

Thanks,
Sean
[sqlalchemy] Re: Accessing DBAPI cursor object from connection
engine.raw_connection().cursor()

On 1/18/07, Sean Davis [EMAIL PROTECTED] wrote:
> I would like to access the underlying psycopg2 connection to get at a DBAPI2 cursor [...]
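In context, the one-liner looks like this - a sketch against a throwaway SQLite engine, since the same `raw_connection()` call works for any backend; the psycopg2-specific copy_from/copy_to calls are shown only as comments because they need a live Postgres connection:

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")  # stand-in for a postgres:// URL

raw = engine.raw_connection()  # the pooled, raw DBAPI connection
cursor = raw.cursor()          # a plain DBAPI2 cursor, bypassing SA's layer

cursor.execute("SELECT 1 + 1")
print(cursor.fetchone()[0])  # 2

# On a psycopg2 connection, this same cursor exposes the COPY protocol:
#   cursor.copy_from(fileobj, "tablename")
#   cursor.copy_to(fileobj, "tablename")
cursor.close()
raw.close()
```

Because this bypasses SA's result-proxying entirely, none of SA's type conversion applies to data moved this way, which is exactly what bulk COPY wants.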
[sqlalchemy] Re: Accessing DBAPI cursor object from connection
On Thursday 18 January 2007 17:42, Jonathan Ellis wrote:
> engine.raw_connection().cursor()

Ah, yes! Thanks.

Sean
[sqlalchemy] Re: ORM ramblings 2 - and announcing some o2r wrapper
in both test cases, the stack trace reveals that the error occurs during the lazy load operation of the child items:

    File "test_case2b.py", line 57, in ?
      print a.manager
    ...
    File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 220, in lazyload

the query being issued is:

    SELECT p_union.atype AS p_union_atype, p_union.manager_id AS p_union_manager_id,
           p_union.name AS p_union_name, p_union.id AS p_union_id
    FROM (SELECT anon_e513.manager_id AS manager_id, anon_e513.atype AS atype,
                 anon_e513.id AS id, anon_e513.name AS name
          FROM (SELECT Employee.id AS id, Employee.name AS name,
                       Employee.atype AS atype, Employee.manager_id AS manager_id
                WHERE Employee.atype = ?) AS anon_e513) AS p_union, Employee
    WHERE Employee.manager_id = ? ORDER BY p_union.oid

if you read closely, you can see that the embedded query for selecting the employee is wrong; it has no FROM clause:

    SELECT Employee.id AS id, Employee.name AS name,
           Employee.atype AS atype, Employee.manager_id AS manager_id
    WHERE Employee.atype = ?

SQLite is a little dumb in that it doesn't give a reasonable error message for this condition (other DBs do). when you see a SELECT that has no FROM, it usually means SA is trying to correlate the select as a subquery to the enclosing query. this will occur any time a select involving the employee table occurs within another select involving the employee table. in this case it's clearly wrong. so the fix is just to insure that the subquery doesn't get correlated:

    ajoin = {
        'Employee': employee_table.select(
            employee_table.c.atype == 'Employee', correlate=False),
        'Manager': join(
            employee_table, manager_table,
            manager_table.c.id == employee_table.c.id),
    }

and all is well again. the polymorphic union thing is one of the most ambitious queries SQLAlchemy's ORM produces, which is why I still have not made it automatic, i.e. you have to explicitly create it yourself (even though there's a helper function). it's also why it's still plastered with all those alpha feature warnings. I think if there's a change to be made to SA, it's that I have to try a little harder to insure that the auto-correlation doesn't result in a SELECT that has no FROM, meaning this little confusion wouldn't occur.
[sqlalchemy] Multiple joins and cascading delete
Consider the following model:

    class Comment(object):
        def __init__(self, text):
            self.text = text

    CommentTable = Table('Comment', metadata,
        Column('id', Integer, primary_key=True),
        Column('text', Unicode(512)),
    )

    CommentMapper = mapper(Comment, CommentTable)

    class Image(object):
        def __init__(self, name):
            self.name = name

    ImageTable = Table('Image', metadata,
        Column('id', Integer, primary_key=True),
        Column('name', Unicode(128)),
    )

    # relationship between Images and Comments
    ImageCommentTable = Table('ImageComment', metadata,
        Column('imageID', Integer, ForeignKey('Image.id'), primary_key=True),
        Column('commentID', Integer, ForeignKey('Comment.id'), primary_key=True),
    )

    class ImageComment(object):
        pass

    mapper(ImageComment, ImageCommentTable, properties={
        'comments': relation(Comment, lazy=False, cascade="all")
    })

    imageMapper = mapper(Image, ImageTable, properties={
        'comments': relation(Comment, secondary=ImageCommentTable, lazy=False)
    })

and the following code:

    i = Image("new")
    session.save(i)
    session.flush()

    c = Comment("new comment")
    session.save(c)
    session.flush()

    i.comments.append(c)
    session.save(i)
    session.flush()

OK, so that should make an entry in all three tables. Now, I want to remove the comment:

    c = session.query(Comment).get(c.id)
    session.delete(c)
    session.flush()
    session.clear()

Now, if you run the sql command:

    select * from imagecomment

you still see the relationship to the comment that is no longer there. Perhaps I do not understand cascade properly, but delete-orphan causes a problem on object creation.

help?
-chris
[sqlalchemy] Re: ORM ramblings 2 - and announcing some o2r wrapper
and that change is in rev 2214; your two test scripts run unmodified now.
[sqlalchemy] Re: Multiple joins and cascading delete
the ImageComment mapper is superfluous to the example. you should generally not map a class to a table and then also use that table as the secondary join in another mapping, since changes to one won't get reflected in the other. If you want to use ImageComment, then you need to use the association object pattern and not the secondary join argument; the two patterns are mutually exclusive. for this email, I'll use the secondary table pattern.

SA does not keep track of all the collections that an entity belongs to. so when you say session.delete(c), it has no idea that c is part of a comments collection on an Image, so no operation occurs on the association table. the solution is to remove the comment from the comments collection:

    i = session.query(Image).get(i.id)
    c = session.query(Comment).get(c.id)
    i.comments.remove(c)
    session.delete(c)
    session.flush()

if you'd like to skip the separate remove() and delete() operations, you can use delete-orphan. for some reason, using delete-orphan with a many-to-many relationship is requiring a bi-directional reference for it to work properly... I've just added ticket #427 for this, which will be fixed by tomorrow, and the "images" backref you see below will no longer be required.

    imageMapper = mapper(Image, ImageTable, properties={
        'comments': relation(Comment, secondary=ImageCommentTable, lazy=False,
                             cascade="all, delete-orphan", backref="images")
    })

    session = create_session()

    i = Image("new")
    session.save(i)
    session.flush()

    c = Comment("new comment")
    i.comments.append(c)
    session.flush()

    i = session.query(Image).get(i.id)
    c = session.query(Comment).get(c.id)
    i.comments.remove(c)
    session.flush()
    session.clear()

notice above that the Comment is automatically added to the session when it's appended to the Image; this is due to the save-update cascade present on the comments relation. If you try to save() the Comment by itself in the session, without it being attached to any Image, and then flush(), that's an invalid operation, since it's an orphan (not attached to anything). then we remove the comment from the parent Image object, and the delete of both the Comment and the many-to-many record succeeds.
[sqlalchemy] Re: Multiple joins and cascading delete
ticket #427 is fixed in rev 2216. so using the trunk, or 0.3.4 when I get around to releasing it, this is the program:

    imageMapper = mapper(Image, ImageTable, properties={
        'comments': relation(Comment, secondary=ImageCommentTable, lazy=False,
                             cascade="all, delete-orphan")
    })

    session = create_session()

    i = Image("new")
    session.save(i)
    session.flush()

    c = Comment("new comment")
    session.save(c)
    i.comments.append(c)
    session.flush()

    i = session.query(Image).get(i.id)
    c = session.query(Comment).get(c.id)
    i.comments.remove(c)
    session.flush()
    session.clear()