t = Table('mytable', meta,
    Column(...)
)
someothermeta = MetaData()
t2 = Table('mytable', someothermeta, autoload=True,
    autoload_with=connection)
assert t.compare(t2)
I believe this should somehow be done automatically, because everyone
needs it.
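The comparison the thread asks for can be illustrated outside SQLAlchemy with the standard library alone: declare the schema you expect, reflect the live table with SQLite's `PRAGMA table_info`, and compare. This is only a sketch of the idea; the table and column names are invented to match the thread's examples.

```python
import sqlite3

# Declared schema we expect: column name -> declared type (illustrative).
expected = {"DocID": "INTEGER", "Path": "VARCHAR(120)"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (DocID INTEGER PRIMARY KEY, Path VARCHAR(120))")

# Reflect the live table: PRAGMA table_info yields one row per column,
# shaped (cid, name, type, notnull, dflt_value, pk).
reflected = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(docs)")}

assert reflected == expected, f"schema drift: {reflected} != {expected}"
print("schemas match")
```

The same declared-vs-reflected diff is what a migration tool would build its change script from.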
Thanks again for your enlightening answer.
I had no understanding of these different scopes.
What I haven't managed is to call a db function from a session/query
object.
I had the following line in my code, but it doesn't work anymore.
select([func.max(table.c.DocID)]).execute().scalar()
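The usual cause of that line failing is that the select has nothing to execute against (no bound engine). Setting API versions aside, the SQL it compiles to can be shown with a plain `sqlite3` sketch; the table and data here are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (DocID INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO docs VALUES (?)", [(1,), (2,), (7,)])

# SELECT max(DocID) FROM docs -- roughly what select([func.max(...)])
# compiles to; .scalar() just means "first column of the first row".
max_id = conn.execute("SELECT MAX(DocID) FROM docs").fetchone()[0]
print(max_id)  # 7
```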
How
Guys,
I am getting this error on reading data from a MySQL table. I have specified
the charset of the table as utf-8 and the collation as utf8_general_ci. Do I need
to do anything else? Will convert_unicode=True help?
--
Cheers,
- A
this is along the lines of the recent threads about metadata consistency
between code and DB, and about DB migration. Both of these require a full
metadata reflection from the database.
Here is a version of autocode.py, hacked up in a couple of hours.
It takes a more systematic approach, replacing column types with SA
Hello:
How do I get only a few columns from a query object?
q = session.query(Document).select_by(Complete=False)
would give me a list of rows (all columns) where Complete == False.
I have seen the select([doc.c.name]) notation but it doesn't work for
me.
select([docs.c.DocID]).execute()
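The difference between fetching whole objects and fetching a couple of columns comes down to the column list in the generated SELECT. A stdlib `sqlite3` sketch of the two shapes of result (schema and data invented to match the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE docs (DocID INTEGER PRIMARY KEY,"
    " Path VARCHAR(120), Complete BOOLEAN)"
)
conn.executemany("INSERT INTO docs VALUES (?, ?, ?)",
                 [(1, "/a", 0), (2, "/b", 1)])

# All columns -- what loading full mapped objects does:
full = conn.execute("SELECT * FROM docs WHERE Complete = 0").fetchall()

# Only the column you need -- what select([docs.c.DocID]) aims at:
ids = [r[0] for r in conn.execute("SELECT DocID FROM docs WHERE Complete = 0")]

print(full)  # [(1, '/a', 0)]
print(ids)   # [1]
```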
I see the difference between getting an object (with all the columns)
and doing an SQL statement.
As far as I can see, yes, it is bound.
The code is:
from sqlalchemy import *
metadata = MetaData()
docs = Table('docs', metadata)
docs.append_column(Column('DocID', Integer, primary_key=True))
from sqlalchemy import *
metadata = MetaData()
docs = Table('docs', metadata)
docs.append_column(Column('DocID', Integer, primary_key=True))
docs.append_column(Column('Path', String(120)))
docs.append_column(Column('Complete', Boolean))
class Doc(object):
def __init__(self, id,
Here is another version, which separates autoload from code generation;
the code generation is now the __main__ test.
http://dbcook.svn.sourceforge.net/viewvc/*checkout*/dbcook/trunk/autoload.py
now it is possible to do something like:
$ python autoload.py postgres://[EMAIL PROTECTED]/db1 | python - sqlite:///db2
On Jul 25, 2007, at 4:57 AM, Anton V. Belyaev wrote:
I believe this should somehow be done automatically, because everyone
needs it. There should be two separate, disjoint options:
1) Autoload, when working with a previously created database for a rapid
start. Columns aren't specified at all
On Jul 25, 2007, at 8:51 AM, alex.schenkman wrote:
I see the difference between getting an object (with all the columns)
and doing an SQL statement.
As far as I can see, yes, it is bound.
The code is:
from sqlalchemy import *
metadata = MetaData()
docs = Table('docs', metadata)
seems like convert_unicode=True (or the Unicode type) is already
happening there. It's not going to play well with the MySQL client
encoding if you have use_unicode=1 turned on in my.cnf or in MySQLdb.
On Jul 25, 2007, at 9:12 AM, Arun Kumar PG wrote:
the stack trace is as follows:
File
On Jul 25, 2007, at 12:02 PM, Anton V. Belyaev wrote:
again, I'm not opposed to this feature and I'll patch in an adequate
(and fully unit-tested) implementation. But have you actually ever
*had* this problem, or is it just hypothetical?
For example, a developer modifies the metadata and
Anton V. Belyaev wrote:
again, I'm not opposed to this feature and I'll patch in an adequate
(and fully unit-tested) implementation. But have you actually ever
*had* this problem, or is it just hypothetical?
For example, a developer modifies the metadata and checks in. Another
On Wednesday 25 July 2007 19:34:50 Marco Mariani wrote:
Anton V. Belyaev wrote:
again, I'm not opposed to this feature and I'll patch in an
adequate (and fully unit-tested) implementation. But have you
actually ever *had* this problem, or is it just hypothetical?
For example, a
Here is some theory on comparing data trees in order to produce the
changeset edit scripts:
http://www.pri.univie.ac.at/Publications/2005/Eder_DAWAK2005_A_Tree_Comparison_Approach_to_Detect.pdf
Of course, full automation is neither possible nor needed, but why not
provide the maximum effect/help with
The trouble (and, sorry, this is getting beyond SQLAlchemy-specific)
is that this leaves me without any good ideas for how to
distinguish sequence types (lists, tuples, and user-defined objects
resembling them) from mappings (dicts).
I usually distinguish between sequences/generators and dicts by whether
Well, the docs you link to say slicing should be done via __getitem__
now, which is of course also present in dicts.
Why not approach the problem from the other direction?
try:
    maybedict.keys()
except AttributeError:
    ismapping = False
else:
    ismapping = True
?
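A more robust variant of the same duck-typing check uses the abstract base classes from the standard library rather than probing for .keys(). This is a sketch of an alternative, not what the thread's code used:

```python
from collections.abc import Mapping

def ismapping(obj):
    """True for dict-like objects, False for sequences and other types."""
    # Mapping is satisfied by dict and by user-defined mapping classes
    # that register against or inherit from the ABC.
    return isinstance(obj, Mapping)

print(ismapping({"a": 1}))   # True
print(ismapping([1, 2, 3]))  # False
print(ismapping((1, 2)))     # False
```

Unlike the try/except probe, this will not misclassify a sequence class that happens to define an unrelated method named keys().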
On 7/25/07, Catherine [EMAIL
Yes, that's good! Still, I couldn't help myself and submitted a patch
to support slicing:
http://www.sqlalchemy.org/trac/ticket/686
It's all of four lines. I love Python.
My need is to verify that something *is* a sequence but *is not* a
dict, so without the patch, I'd have to check that it
Whoops! Never mind; deprecation is a really good reason.
http://docs.python.org/ref/sequence-methods.html
__getslice__(self, i, j)
Deprecated since release 2.0.
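__getitem__ receives the slice object directly, which is why __getslice__ could be deprecated: one method covers both integer indexing and slicing. A minimal sketch of a row-like class in that style (the class name is invented, not SQLAlchemy's RowProxy):

```python
class Row:
    """Row-like wrapper supporting ints and slices via __getitem__ alone."""
    def __init__(self, values):
        self._values = tuple(values)

    def __getitem__(self, key):
        # key is either an int or a slice object; tuple indexing handles both.
        return self._values[key]

row = Row([10, 20, 30, 40, 50])
print(row[2])    # 30
print(row[2:4])  # (30, 40)
```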
The trouble (and, sorry, this is getting beyond SQLAlchemy-specific)
is that this leaves me without any good ideas for how to
Hi! Does anyone know if there's a reason
sqlalchemy.engine.base.RowProxy doesn't define __getslice__?
If I do:
resultrow = mytable.select().execute().fetchone()
then I can access resultrow[2] or resultrow[4], but not
resultrow[2:4].
I ask primarily because checking for __getslice__ is useful
why not just pass model_instance.__dict__?
On 7/23/07, HiTekElvis [EMAIL PROTECTED] wrote:
Anybody know a way to change a model into a dictionary?
For those to whom it means anything, I'm hoping to pass that
dictionary into a formencode.Schema.from_python method.
Any ideas?
-Josh
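For a plain Python object, the instance __dict__ (or equivalently vars()) is exactly that dictionary. A sketch with an invented model class; note that real ORM instances may carry extra bookkeeping keys (e.g. SQLAlchemy's _sa_instance_state), so filtering underscored keys is a reasonable precaution:

```python
class Doc:
    def __init__(self, doc_id, path, complete):
        self.doc_id = doc_id
        self.path = path
        self.complete = complete

doc = Doc(1, "/tmp/a.txt", False)

# vars(doc) is equivalent to doc.__dict__ for ordinary instances;
# drop underscored keys in case the object carries internal state.
as_dict = {k: v for k, v in vars(doc).items() if not k.startswith("_")}
print(as_dict)  # {'doc_id': 1, 'path': '/tmp/a.txt', 'complete': False}
```

A dict produced this way can then be handed to something like formencode's from_python.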