* On Wednesday, 17 January 2007, at 09:30, Michael Bayer wrote:
print "Content-type: application/pdf\n"   # whatever the right header is for pdf
for chunk in res.pdf:
    sys.stdout.write(chunk.data)
I would agree that the approach taken by pg_largeobject is useful
in that you can read just
From that point, we have a category of issue that comes up all the
time, where what you want to do is possible, but SA never expected
exactly what you're doing and ...
This hinted at another way of viewing what this wrapper of mine
is: a store of all the knowledge about how to use SA
After upgrading sqlite to the most recent version (3.3.10), my problem
went away and everything works as expected.
Thanks for the help, and sorry for the noise.
--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups
without seeing a full example, it's looking like a MySQL bug. SA's
datetime implementation for MySQL doesn't do any conversion of return
values, since MySQLdb handles that task, whereas the sqlite dialect in
SA *does* do conversion, since sqlite doesn't handle that.
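To illustrate the kind of conversion the sqlite dialect has to perform (a sketch, not SA's actual code; the function name is made up): sqlite hands DATETIME column values back as plain strings, so they must be parsed into datetime objects on the way out, while MySQLdb already returns real datetime objects.

```python
import datetime

def convert_result_value(value):
    # sqlite returns a DATETIME column as a string such as
    # "2007-01-17 09:30:00", which must be parsed manually;
    # MySQLdb hands back datetime objects directly, so MySQL
    # needs no such step.
    if value is None:
        return None
    return datetime.datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
```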
Below is my small example (It
Michael Bayer wrote:
Simon King wrote:
[requirements for instances returned from
MapperExtension.create_instance]
at this point the entity_name should get set after your
custom create_instance is called (at least that's in the
trunk). init_attr is not required, it pre-sets attributes on
Hi All,
Trying to implement full text search with PostgreSQL and tsearch2,
being a beginner, I am facing some basic hurdles:
1. How do I declare the table in SA? That is, what SA datatype should
tsvector correspond to?
2. Can the index be defined using SA, or do we need to do it in the
backend?
3. Can
On 1/17/07, Miki [EMAIL PROTECTED] wrote:
# Notice we didn't commit yet
This means that if someone is querying the table in the middle of the
transaction it'll get wrong results.
(I'm not a DB expert, this might be total nonsense)
no, it just means that queries run in the same uncommitted
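The visibility point can be seen with nothing but the standard library; here sqlite3 stands in for any DBAPI, and the file and table names are made up for the demo. A second connection does not see the first connection's uncommitted insert.

```python
import os
import sqlite3
import tempfile

# Two separate connections to the same on-disk database.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

writer.execute("CREATE TABLE t (x INTEGER)")
writer.commit()

writer.execute("INSERT INTO t VALUES (1)")  # transaction open, not yet committed

# The other connection cannot see the uncommitted row.
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

writer.commit()

# After the commit, the row is visible to every connection.
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```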
On Thursday 18 January 2007 07:11, Sanjay wrote:
Hi All,
Trying to implement full text search with PostgreSQL and tsearch2,
being a beginner, I am facing some basic hurdles:
1. How to declare a table in SA, I mean, what SA datatype should
tsvector correspond to?
2. Can the index be defined
just a note: when you use SQLite, by default you get the same
connection every time within a single thread, since it's using
SingletonThreadPool. read the docs for options on how to change this.
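As one example of changing it (a sketch using a current SQLAlchemy; the filename is made up), the pool class can be swapped at engine creation time:

```python
from sqlalchemy import create_engine
from sqlalchemy.pool import NullPool

# NullPool opens a fresh DBAPI connection for each checkout instead
# of the per-thread connection reuse described above.
engine = create_engine("sqlite:///demo.db", poolclass=NullPool)
```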
FAQ updated with this issue, since it's all too frequent.
i don't know much about tsvector, but as to the various "can the
trigger/index be defined in SA" questions: no, you have to use literal
postgres commands to do that (which you can of course issue from SA as
textual statements). since they are pg-specific anyway, there's not much
advantage to SA supporting
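One way to issue such a literal statement from SA is the text() construct (a sketch; the table, column, and index names are hypothetical, and the DDL is plain tsearch2-era PostgreSQL):

```python
from sqlalchemy import text

# A pg-specific index statement wrapped as a textual construct;
# "comments" and "tsv" are made-up names for this sketch.
create_idx = text("CREATE INDEX comment_tsv_idx ON comments USING gist (tsv)")
# conn.execute(create_idx)  # would run against a live PostgreSQL connection
```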
I'd be interested in how you work this out as I want to do something
similar. Would you be willing to write it up and perhaps post it to
the wiki?
I was able to get MySQL's fulltext search working more quickly than
PostgreSQL's and that's what my customer is used to so that's what I'm
going to
On Thursday 18 January 2007 14:19, Chris Shenton wrote:
I'd be interested in how you work this out as I want to do something
similar. Would you be willing to write it up and perhaps post it to
the wiki?
I was able to get MySQL's fulltext search working more quickly than
PostgreSQL's and
I would like to access the underlying psycopg2 connection to get at a DBAPI2
cursor with the ultimate goal of using the copy_from/copy_to protocol for
moving large amounts of data to/from the database. I can't seem to find a
way to do that from a db engine or connection object. Is there a way
engine.raw_connection().cursor()
On 1/18/07, Sean Davis [EMAIL PROTECTED] wrote:
I would like to access the underlying psycopg2 connection to get at a DBAPI2
cursor with the ultimate goal of using the copy_from/copy_to protocol for
moving large amounts of data to/from the database. I can't
On Thursday 18 January 2007 17:42, Jonathan Ellis wrote:
engine.raw_connection().cursor()
Ah, yes! Thanks.
Sean
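For reference, the same raw_connection() path shown with an in-memory SQLite engine (psycopg2's copy_from/copy_to is PostgreSQL-specific, so this sketch only demonstrates reaching the DBAPI2 cursor):

```python
from sqlalchemy import create_engine

engine = create_engine("sqlite://")

# raw_connection() hands back the pool's underlying DBAPI-level
# connection, from which a plain DBAPI2 cursor is available.
raw = engine.raw_connection()
cur = raw.cursor()
cur.execute("SELECT 40 + 2")
value = cur.fetchone()[0]
cur.close()
raw.close()
```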
On 1/18/07, Sean Davis [EMAIL PROTECTED] wrote:
I would like to access the underlying psycopg2 connection to get at a
DBAPI2 cursor with the ultimate goal of using the
in both test cases, the stack trace reveals that the error occurs
during the lazy load operation of the child items:
  File "test_case2b.py", line 57, in ?
    print a.manager
  ...
  File "/Users/classic/dev/sqlalchemy/lib/sqlalchemy/orm/strategies.py", line 220, in lazyload
the query being issued
Consider the following model:
class Comment(object):
    def __init__(self, text):
        self.text = text

CommentTable = Table('Comment', metadata,
    Column('id', Integer, primary_key=True),
    Column('text', Unicode(512)),
)
and that change is in rev 2214, your two test scripts run unmodified
now.
the ImageComment mapper is superfluous to the example. you should
generally not map a class to a table and then also use that table as the
secondary join in another mapping, since changes to one won't be
reflected in the other. If you want to use ImageComment, then you need
to use the association
ticket 427 is fixed in rev 2216. so using the trunk, or 0.3.4 when I
get around to releasing it, this is the program:
imageMapper = mapper(Image, ImageTable,
    properties={'comments': relation(Comment,
        secondary=ImageCommentTable, lazy=False,
        cascade="all, delete-orphan")})