[sqlalchemy] Re: Get default value

2008-01-10 Thread Rick Morrison
Joined inheritance is a bit of a new one to me, so there may be more possible
than I thought. But in that scheme, it looks to me as if the resultant
object is a composite of fetches from a number of tables. The core of that
object, though, seems to be a single table, a "root" table. It is that table
that holds the discriminator.

In the scheme you posted, you're trying to use *two* roots, each with its
own discriminator column. I think that is the, um, "root" of your
problem.
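A plain-SQL sketch of that single-root layout (sqlite3 here; table names borrowed from the thread, everything else assumed): only the root table carries the discriminator, and the child tables just join on the primary key.

```python
import sqlite3

# Single-root joined inheritance: one discriminator, on the root table only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resource (
        id INTEGER PRIMARY KEY,
        poly TEXT NOT NULL            -- the single discriminator column
    );
    CREATE TABLE person (
        id INTEGER PRIMARY KEY REFERENCES resource(id)
    );
    CREATE TABLE employee (
        id INTEGER PRIMARY KEY REFERENCES person(id)
    );
""")
# Inserting an employee touches all three tables, but the discriminator
# is written exactly once, at the root.
conn.execute("INSERT INTO resource (id, poly) VALUES (1, 'employee')")
conn.execute("INSERT INTO person (id) VALUES (1)")
conn.execute("INSERT INTO employee (id) VALUES (1)")
# Loading walks the joins and reads the type from the root:
row = conn.execute("""
    SELECT r.id, r.poly FROM resource r
    JOIN person p ON p.id = r.id
    JOIN employee e ON e.id = r.id
""").fetchone()
print(row)  # (1, 'employee')
```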



On Jan 10, 2008 10:08 PM, Alexandre da Silva <[EMAIL PROTECTED]> wrote:

>
>
> Em Qui, 2008-01-10 às 21:20 -0500, Rick Morrison escreveu:
> > You're mixing single-table inheritance (using the discriminator
> > column), with concrete inheritance (using multiple tables).
> >
> > You have to pick one scheme or the other. Either use a single
> > inheritance chain, or separate the two class hierarchies into two
> > separate chains that don't inherit from each other. In the separate
> > scheme, each chain can use its own discriminator column, but you
> > cannot combine two class hierarchies that each use a different
> > discriminator column.
> >
>
> In fact, I am doing what is suggested here:
>
> http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_inheritance_joined
> It works fine with one level of inheritance (class -> subclass), but the
> third level of subclassing doesn't work. It was because objects were in the
> cache... saving to the DB and trying to load again, the identity is lost.
>
> How do you suggest I do this? Or can't I?
>
> --
> Alexandre da Silva
> Analista de Sistemas - Bacharel em Sistemas de Informação (2003-2007)
>
>
> >
>

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Exclude Autogenerated Timestamp Column

2008-01-10 Thread Rick Morrison
I'm not sure I understand what you're looking for... you want the column to
remain NULL after an insert?

Then take off the default from the column definition and make it a datetime
field instead of a timestamp.
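A sketch of what that buys you (sqlite3 here, hypothetical table): with no client-side default on the column and the column omitted from the INSERT, the row simply keeps NULL there.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A datetime-ish column with no default of any kind:
conn.execute("CREATE TABLE work (id INTEGER PRIMARY KEY, now TIMESTAMP)")
# The INSERT never mentions 'now', so nothing is sent for it...
conn.execute("INSERT INTO work (id) VALUES (1)")
# ...and the stored value stays NULL.
value = conn.execute("SELECT now FROM work WHERE id = 1").fetchone()[0]
print(value)  # None
```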


On Jan 10, 2008 9:15 PM, deanH <[EMAIL PROTECTED]> wrote:

>
> Hello,
>
> I am having a problem inserting an object into an MS SQL table that
> contains a timestamp field (now) that is generated automatically:
> sqlalchemy is defaulting this column to None when it is generating
> the SQL insert.  Is there a way to configure the mapper so that it
> ignores specific columns?
>
> I looked at the related topic below, but that is resolved by using a
> sqlalchemy construct specific to primary keys, and i have not seen one
> that is designated for timestamps.
>
> http://groups.google.com/group/sqlalchemy/browse_thread/thread/749c55a835b7458/51b38f4b08d31d6f?lnk=gst&q=column+exclude#51b38f4b08d31d6f
>
> I am new to sqlalchemy so I may be going about this the wrong way, but
> my attempts at overriding with a reflected column were similarly
> unsuccessful.
>
> Column('now', MSTimeStamp, nullable=False)
>
> Any thoughts on how to exclude columns from generated inserts?
>
> cheers
> dean
>
> >
>




[sqlalchemy] Re: Get default value

2008-01-10 Thread Rick Morrison
You're mixing single-table inheritance (using the discriminator column),
with concrete inheritance (using multiple tables).

You have to pick one scheme or the other. Either use a single inheritance
chain, or separate the two class hierarchies into two separate chains that
don't inherit from each other. In the separate scheme, each chain can use
its own discriminator column, but you cannot combine two class hierarchies
that each use a different discriminator column.


On Jan 10, 2008 8:46 PM, Alexandre da Silva <[EMAIL PROTECTED]> wrote:

>
>
> Em Qui, 2008-01-10 às 17:13 -0800, jason kirtland escreveu:
> > col.default.arg
>
> Yes, exactly this.
>
> another question
>
> How many levels can I inherit classes/tables without something going
> wrong?
>
> let me show a simple sample hierarchy:
>
>          resource
>          /      \
>     person      material
>     /    \
> employee  customer
>
>
> now, I am creating a type column on resource to map persons and
> materials; then I am creating another type column on person to map
> the various types of persons.
>
> This way I could get an employee instance just by selecting a resource,
> but something is going wrong: the type column on resource is not
> filled with the correct value, and I am getting a NOT NULL constraint
> error when inserting an employee.
>
>
> the tables
>
> resource_table = Table('resource', metadata,
>     Column('id', Integer, primary_key=True),
>     Column('poly', String(31), nullable=False)
> )
>
> person_table = Table('person', metadata,
>     Column('id', Integer, ForeignKey('resource.id'), primary_key=True),
>     Column('poly', String(31), nullable=False)
> )
>
> employee_table = Table('employee', metadata,
>     Column('id', Integer, ForeignKey('person.id'), primary_key=True),
> )
>
> the classes
>
> class Resource(object):
>pass
>
> class Person(Resource):
>pass
>
> class Employee(Person):
>pass
>
>
> mappers
>
> mapper(Resource, resource_table,
>polymorphic_on=resource_table.c.poly,
>polymorphic_identity='resource'
>)
>
> mapper(Person, person_table,
>polymorphic_on=person_table.c.poly,
>inherits=Resource, polymorphic_identity='person'
>)
>
>
> mapper(Employee, employee_table,
>inherits=Person, polymorphic_identity='employee',
>)
>
>
> That is all. Now, when I create an instance of Employee and try to save it,
> I get back an integrity error that resource.poly cannot be null.
>
> any suggestion?
>
> thanks for the previous replies.
>
>
> --
> Alexandre da Silva
> Analista de Sistemas - Bacharel em Sistemas de Informação (2003-2007)
>
>
> >
>




[sqlalchemy] Re: Get default value

2008-01-10 Thread Rick Morrison
Ah, I read too fast, you are getting back the ColumnDefault object

   try column.default.arg




[sqlalchemy] Re: Get default value

2008-01-10 Thread Rick Morrison
Isn't it just

   column.default ?




[sqlalchemy] Re: Doing a dynamic Update

2008-01-10 Thread Rick Morrison
For a stepwise migration from raw SQL, it will probably be easier to get
your mind around the SQL-expression side of the library first, and then adopt
ORM features as you feel comfortable with them.

On the SQL-expression side of the library, you'll find that your Table()
object has a collection called "c" (for Columns). It's a dict-like
collection that supports retrieving columns by name.


Table.update() takes a dictionary of updates, so the name-based access is
already in there:

  tbl.update(tbl.c.dateworked == mydate).execute(dict(columnname=newvalue))

or using sessions:

  S.execute(tbl.update(tbl.c.dateworked == mydate), dict(columnname=newvalue))

HTH,
Rick
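Either form compiles down to an ordinary parameterized UPDATE. A plain-SQL sketch of the equivalent statement (sqlite3 here, with the thread's hypothetical column names and made-up values):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (dateworked TEXT, columnname TEXT)")
conn.execute("INSERT INTO tbl VALUES ('2008-01-10', 'old')")
# Roughly what tbl.update(tbl.c.dateworked == mydate) executes: the SET
# clause comes from the dict, the WHERE clause from the criterion.
conn.execute(
    "UPDATE tbl SET columnname = :newvalue WHERE dateworked = :mydate",
    {"newvalue": "new", "mydate": "2008-01-10"},
)
result = conn.execute("SELECT columnname FROM tbl").fetchone()[0]
print(result)  # new
```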




[sqlalchemy] Re: can everyone try r4032 please (was: string->text message is broke)

2008-01-09 Thread Rick Morrison
Runs clean here now -- thanks!




[sqlalchemy] Re: "String type with no length" deprecation warnings in 0.4.2b

2008-01-09 Thread Rick Morrison
> actually that explanation makes no sense.  the warning is only raised
> when that "_for_ddl" flag is True which should *only* occur during
> CREATE TABLE.
>
> the issue is only with CREATE TABLE.


Well, here's the traceback:

-> main.metadata.drop_all(checkfirst=True)
  /home/xram/Projects/oss/satrunk/lib/sqlalchemy/schema.py(1234)drop_all()
-> bind.drop(self, checkfirst=checkfirst, tables=tables)
  /home/xram/Projects/oss/satrunk/lib/sqlalchemy/engine/base.py(1136)drop()
-> self._run_visitor(self.dialect.schemadropper, entity,
connection=connection, **kwargs)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/engine/base.py(1160)_run_visitor()
-> visitorcallable(self.dialect, conn, **kwargs).traverse(element)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/sql/visitors.py(76)traverse()
-> meth(target)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/sql/compiler.py(892)visit_metadata()
-> collection = [t for t in metadata.table_iterator(reverse=True, tables=
self.tables) if (not self.checkfirst or  self.dialect.has_table(
self.connection, t.name, schema=t.schema))]

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/databases/mssql.py(539)has_table()
-> c = connection.execute(s)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/engine/base.py(844)execute()
-> return Connection.executors[c](self, object, multiparams, params)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/engine/base.py(895)execute_clauseelement()
-> return self._execute_compiled(elem.compile(dialect=self.dialect,
column_keys=keys, inline=len(params) > 1), distilled_params=params)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/engine/base.py(904)_execute_compiled()
-> context = self.__create_execution_context(compiled=compiled,
parameters=distilled_params)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/engine/base.py(941)__create_execution_context()
-> return self.engine.dialect.create_execution_context(connection=self,
**kwargs)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/databases/mssql.py(459)create_execution_context()
-> return MSSQLExecutionContext(self, *args, **kwargs)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/databases/mssql.py(287)__init__()
-> super(MSSQLExecutionContext, self).__init__(*args, **kwargs)

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/engine/default.py(147)__init__()
-> self.processors = dict([

/home/xram/Projects/oss/satrunk/lib/sqlalchemy/sql/expression.py(1753)bind_processor()
-> return self.type.dialect_impl(dialect).bind_processor(dialect)
> /home/xram/Projects/oss/satrunk/lib/sqlalchemy/types.py(398)dialect_impl()
-> return TypeEngine.dialect_impl(self, dialect, **kwargs)




[sqlalchemy] Re: "String type with no length" deprecation warnings in 0.4.2b

2008-01-09 Thread Rick Morrison
Alright, I found this annoying little ?&!#$*&

Happens when an MSSQL metadata.drop_all(checkfirst=True) is executed. That
causes a query to the INFORMATION_SCHEMA to be run.

If you look at databases/information_schema.py, you'll see all the nice
non-length strings that are generating the problem.  I'm not sure it makes
sense to change the String columns to TEXT, though -- they have very
different semantics in MSSQL.

What exactly is so horrible about a non-length string again?



On Jan 8, 2008 11:09 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:

> its line 289 of compiler.py
> def visit_typeclause(self, typeclause, **kwargs):
> return typeclause.type.dialect_impl(self.dialect,
> _for_ddl=True).get_col_spec()
>
> if "type" subclasses Text, there should be no warning
>
> On Jan 8, 2008, at 10:51 PM, Rick Morrison wrote:
>
> I guess I'm being dense: I don't see it.
>
> Here is what I think is doing the inheritance. From mssql.py, line 191-ish:
>
> class MSText(sqltypes.Text):
> def get_col_spec(self):
> if self.dialect.text_as_varchar:
> return "VARCHAR(max)"
> else:
> return "TEXT"
>
> it currently uses no dialect_impl() override.
>
> and then on line 366:
>
> class MSSQLDialect( default.DefaultDialect):
> colspecs = {
> sqltypes.Unicode : MSNVarchar,
>...
> sqltypes.String : MSString,
> sqltypes.Text : MSText,
> ...
> }
>
>
>
>
> On Jan 8, 2008 10:27 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> > it has to do with what the string/text types in mssql.py inherit - if
> > you inherit String you get the warning, if Text/TEXT, you dont.  just look
> > inside of dialect_impl().
> >
> >
> > On Jan 8, 2008, at 8:28 PM, Rick Morrison wrote:
> >
> >  I still get this on r4031 with MSSQL/pymssql. Are there changes that
> > need to be made in the database module, maybe? Far as I can see all my
> > Table() defs use the TEXT type identifier.
> >
> > On Jan 8, 2008 4:36 AM, Felix Schwarz < [EMAIL PROTECTED]> wrote:
> >
> > >
> > >
> > > Michael Bayer schrieb:
> > > > can you try r4030 please
> > >
> > > Thank you very much, r4030 fixes the described problems. :-)
> > >
> > > fs
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
>
>
>
>
>
> >
>




[sqlalchemy] Re: "String type with no length" deprecation warnings in 0.4.2b

2008-01-08 Thread Rick Morrison
I guess I'm being dense: I don't see it.

Here is what I think is doing the inheritance. From mssql.py, line 191-ish:

class MSText(sqltypes.Text):
def get_col_spec(self):
if self.dialect.text_as_varchar:
return "VARCHAR(max)"
else:
return "TEXT"

it currently uses no dialect_impl() override.

and then on line 366:

class MSSQLDialect(default.DefaultDialect):
colspecs = {
sqltypes.Unicode : MSNVarchar,
   ...
sqltypes.String : MSString,
sqltypes.Text : MSText,
    ...
}




On Jan 8, 2008 10:27 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:

> it has to do with what the string/text types in mssql.py inherit - if you
> inherit String you get the warning, if Text/TEXT, you dont.  just look
> inside of dialect_impl().
>
>
> On Jan 8, 2008, at 8:28 PM, Rick Morrison wrote:
>
>  I still get this on r4031 with MSSQL/pymssql. Are there changes that need
> to be made in the database module, maybe? Far as I can see all my Table()
> defs use the TEXT type identifier.
>
> On Jan 8, 2008 4:36 AM, Felix Schwarz < [EMAIL PROTECTED]> wrote:
>
> >
> >
> > Michael Bayer schrieb:
> > > can you try r4030 please
> >
> > Thank you very much, r4030 fixes the described problems. :-)
> >
> > fs
> >
> >
> >
> >
> >
>
> >
>




[sqlalchemy] Re: "String type with no length" deprecation warnings in 0.4.2b

2008-01-08 Thread Rick Morrison
 I still get this on r4031 with MSSQL/pymssql. Are there changes that need
to be made in the database module, maybe? Far as I can see all my Table()
defs use the TEXT type identifier.

On Jan 8, 2008 4:36 AM, Felix Schwarz <[EMAIL PROTECTED]> wrote:

>
>
> Michael Bayer schrieb:
> > can you try r4030 please
>
> Thank you very much, r4030 fixes the described problems. :-)
>
> fs
>
> >
>




[sqlalchemy] Re: sqlalchemy 0.4.2 released

2008-01-07 Thread Rick Morrison
Hey I've been busy and haven't had a chance to comment on the 0.4.2 release
yet.

Just wanted to give a big congrats on this release. I know it's one of the
few releases that have spawned an "a" (and, it looks like, a forthcoming "b")
interim release, but aside from those issues, it's a really nice release.
Getting rid of lazy loads on backref operations was a huge win in my app: I
saw speed-ups of upwards of 30% nearly across the board, with no
regressions. The library continues to impress, and I'm really glad to see
this kind of development pace maintained even years into the project. It
makes me feel like I climbed on the right horse way back in the pre-0.1 days.

Thanks yet again to Mike and to the rest of the team!


On 1/2/08, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
>
> hey all and happy new year -
>
> I've just put out 0.4.2, a release that's a lot bigger than a point
> release would usually be.  I've been kind of procrastinating with it,
> since we're in this phase where the source to SA is changing somewhat
> dramatically and a lot of old crufty things are being replaced with
> simpler ideas that do more and work better...while this was the main
> idea when we went from 0.3 to 0.4, it still continues.  With this
> release we kept going and going... and there's still a bunch more
> things I want to do!
>
> This release has a serious chunk of new features and other less
> noticeable things intended to increase efficiency and/or improve
> behavior.  Of the less noticeable things, the methodology used for
> attribute history tracking has changed from copy-on-load to copy-on-
> write (cuts down on ORM load times by about 25%), a new "generic
> functions" mechanism has been added which, while its only used a
> little bit so far, will eventually lead to better behavior of func.XXX
> constructs with regards to typing behavior and such, and the way
> mappers are linked to their classes has changed to be simpler and more
> efficient, as well as self-dereferencing.  Another big one is that
> backref operations don't trigger lazy loads anymore; the changes to
> the reverse side of a relation are queued up if and when the reverse
> side actually loads.  Still more, reconnect support was still a little
> flaky in 0.4.1 and has been fixed and unit tested like mad, adding the
> ability for a Connection to reconnect on its own; reconnect support
> was also added to Oracle.  For the caching-enthused, you can
> pickle.dumps() any mapped instance, loads() it back, and all of its
> deferred/lazy/expired attribute loaders, even if they were set at the
> instance level via query.options() will remain intact; all the loaders
> are now serializable (plus unit tests galore).  The unit of work's
> internals have been refactored significantly for clarity and decreased
> complexity.
>
> Big noticeable things include, mutable primary keys - change the
> primary key of an instance and flush().  There's some complexity to
> deal with if you have foreign keys referencing that PK, namely that
> you need to configure "ON UPDATE" cascades for a database that
> enforces FK integrity.  For a non-FK enforcing database like sqlite or
> mysql MyISAM, you can set "cascade_updates=True" on relation() and
> SQLAlchemy will cascade the primary key change on its own, along
> mapped relationships.
>
> Another noticeable thing is that the Unicode type will issue a warning
> if you send a plain string to it; note that the Unicode type as well
> as flags like convert_unicode=True are meant to shuttle your Python
> unicode objects to the database using the db's native encoding; they
> are not intended to decode your application's raw Python
> bytestrings !  So now the assert_unicode flag, which is configurable
> as an error, warning (the default on Unicode), or neither (the default
> on String/engine), will let you know about that.
>
> synonym() has been totally overhauled to be way more usable.  Check
> the docs on the site for this; you can now just set up your class-
> level property(), and just set up a synonym() on the mapper which will
> move the column out of the way for you in one step, and will also add
> the SQL-clause instrumentation to your property(), so saying
> MyClass.myprop == "foo" works just fine.
>
> Query.select_from() is a whole new animal now; you can literally put
> *any select statement* into it (now as a scalar argument), and
> continue to filter(), join(), and order_by(); all the criterion will
> be properly aliased against the selectable as though it were the
> mapped table.  I wanted to go further with this, i.e. using the same
> methodology for polymorphic loads and also creating "sub-queries" of
> queries, but that stuff will be out in 0.4.3 and subsequent
> releases.   Query has been refined in this release, particularly the
> instances() method (the main method that loads instances from a
> cursor), but still has some major things to be done (including fixing
> up count(), max(), min(), etc., yes I know the
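The "ON UPDATE" cascade mentioned above for FK-enforcing databases is plain SQL; a minimal sketch with sqlite3 (foreign-key enforcement switched on; tables invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # sqlite enforces FKs only on request
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES parent(id) ON UPDATE CASCADE
    );
    INSERT INTO parent VALUES (1);
    INSERT INTO child VALUES (10, 1);
""")
# Mutating the parent's primary key cascades into the child's foreign key:
conn.execute("UPDATE parent SET id = 2 WHERE id = 1")
fk = conn.execute("SELECT parent_id FROM child").fetchone()[0]
print(fk)  # 2
```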

[sqlalchemy] Re: Is there a way to replace object in DB?

2007-12-27 Thread Rick Morrison
I think you're thinking of .load()

   .get() does not throw on not found.



On 12/27/07, braydon fuller <[EMAIL PROTECTED]> wrote:
>
>
> you can also use 'try' to avoid error messages:
>
> def ensure_object(db, id):
> try:
> o = db.Query(ModelObject).get(id)
> except:
> o = ModelObject(1, u'title')
> db.save(o)
> db.commit()
> return o
>
> -Braydon
>
>
> Rick Morrison wrote:
> >
> > ...if you're just checking to see if something exists in the database,
> > why not just try to .load() it, and then construct it afresh if you
> > don't find it?
> >
> > This kind of operation is sometimes called an "upsert" ...some
> > database engines support it, some don't. Most don't. But what all
> > database engines DO support is a query, followed by either an insert,
> > or an update as appropriate.
> >
> > Here's the idiom that should work:
> >
> > def ensure_object(sess, id):
> >     o = sess.query(ModelObject).get(id)  # if found, o is now loaded
> >                                          # into the session
> >     if not o:
> >         o = ModelObject(1, u'title')
> >         sess.save(o)
> >         sess.flush()
> >     return o
> >
>
>
>
>
> >
>




[sqlalchemy] Re: Is there a way to replace object in DB?

2007-12-27 Thread Rick Morrison
...if you're just checking to see if something exists in the database, why
not just try to .load() it, and then construct it afresh if you don't find
it?

This kind of operation is sometimes called an "upsert" ...some database
engines support it, some don't. Most don't. But what all database engines DO
support is a query, followed by either an insert, or an update as
appropriate.

Here's the idiom that should work:

def ensure_object(sess, id):
    o = sess.query(ModelObject).get(id)  # if found, o is now loaded into the session
    if not o:
        o = ModelObject(1, u'title')
        sess.save(o)
        sess.flush()
    return o
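For engines that do support a native upsert, the query-then-write idiom collapses to one statement. A sketch using SQLite's INSERT OR REPLACE (table name invented; note that REPLACE deletes and re-inserts the conflicting row rather than updating it in place):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE model_object (id INTEGER PRIMARY KEY, title TEXT)")
# Run it twice: the first call inserts, the second replaces the same row.
for title in ("title", "new title"):
    conn.execute(
        "INSERT OR REPLACE INTO model_object (id, title) VALUES (1, ?)",
        (title,),
    )
rows = conn.execute("SELECT id, title FROM model_object").fetchall()
print(rows)  # [(1, 'new title')]
```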



On 12/27/07, Denis S. Otkidach <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 26, 2007 10:38 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
> > if you have an instance which you are unsure if it already exists, you
> > can add it to a session using session.save_or_update(instance).  The
> > decision between INSERT and UPDATE is ultimately decided by the
> > presence of an attribute on the instance called "_instance_key".
>
> I'd like mapper to use UPDATE for newly constructed object (i.e.
> object without _instance_key attribute) when a record with the same
> primary key already exists in DB.
>
> > In most cases, this attribute is not something you need to worry about;
> > if an instance has been flushed or loaded from a session, it will have
> > the attribute, or if you've just constructed it and not yet persisted
> > it, the attribute will not be there.  If you think you need to
> > manually manipulate this attribute, perhaps you can describe your
> > specific use case so that we can recommend the best way to accomplish
> > it.
>
> OK, below is a use/test case:
>
> --->8---
> import sqlalchemy as sa, logging
> from sqlalchemy.orm import mapper, sessionmaker
>
> logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
> logging.basicConfig()
>
> class ModelObject(object):
>
> def __init__(self, id, title):
> self.id = id
> self.title = title
>
> metadata = sa.MetaData()
>
> objectTable = sa.Table(
> 'Objects', metadata,
> sa.Column('id', sa.Integer, primary_key=True),
> sa.Column('title', sa.String(255), nullable=False),
> )
>
> objectsMapper = mapper(ModelObject, objectTable)
>
> engine = sa.create_engine('sqlite://')
> metadata.create_all(engine, checkfirst=True)
>
> session = sessionmaker(bind=engine)()
>
> obj = ModelObject(1, u'title')
> session.save(obj)
> session.commit()
>
> session.clear()
>
> # Another program. We have to ensure that an object with id=1 exists in
> # the DB and has certain properties.
> obj = ModelObject(1, u'title')
> session.save_or_update(obj)
> session.commit()
> --->8---
>
> >
>




[sqlalchemy] Re: Objects are stored in DB when nobody asks to do so

2007-12-27 Thread Rick Morrison
Yah, that's a surprise to me too. I usually .flush() anyway (it's just good
hygiene :), but I never knew that session.commit() implied session.flush().

On 12/26/07, Denis S. Otkidach <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 26, 2007 6:29 PM, Michael Bayer <[EMAIL PROTECTED]> wrote:
> > yet another scenario, you want to use transactions that are
> > independent of session flushes.  To accomplish this, use engine- or
> > connection-level transactions, as described in the second half of
> > http://www.sqlalchemy.org/docs/04/session.html#unitofwork_sql .  in
> > this case you control the transactions independently of any session
> > behavior, yet the session still participates (at your choosing) in the
> > overall transaction.
> >
> > Hope this helps.
>
> Sure, this is what I needed. Thanks! I believe SA documentation should
> explicitly state that session.commit() always flushes all modified
> objects independent on autoflush option.
>
> >
>




[sqlalchemy] Re: Readonly objects/protecting objects from modifications

2007-12-21 Thread Rick Morrison
Something like this is available on a roll-your-own basis via Python
properties along with some mapper tricks:


http://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_overriding

I would be +1 for such a feature implemented on mapped instances; it could be
useful for detecting those hard-to-find bugs, but I can't think of a nice
and simple API for it. For mapped instances via Query(), it could be an
.option(), but I can't see a good way to use it on relation()s. I'm also not
sure whether such a feature should throw an exception on attribute setting,
or whether the instance ought to simply be ignored during a flush (IOW, have
its "dirty" flag locked down to False).

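The roll-your-own route via Python properties can be as simple as leaving out the setter; a minimal sketch (class and attribute names invented, with the mapped column imagined behind the underscore name):

```python
class Account(object):
    def __init__(self, balance):
        self._balance = balance        # a mapped column would sit behind this

    @property
    def balance(self):                 # readable...
        return self._balance
    # ...but no setter is defined, so assignment raises AttributeError

acct = Account(100)
try:
    acct.balance = 200
    raised = False
except AttributeError:
    raised = True
print(acct.balance, raised)  # 100 True
```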



[sqlalchemy] Re: sqlalchemy object serialization / deserialization

2007-12-20 Thread Rick Morrison
Hey Matt.

Class hinting was a json-rpc 1.0 feature, subsequently dropped in 1.1. But
that's a nit.

The real problem, though, is reconstituting those hints: the machinery for
reinstantiating those objects has to come from somewhere, and that somewhere
is going to be your code. There is no magic "just add water and reconstitute
my object" built into Python.

If you really want to get into writing your own deserialization, you're
already heading in the right direction: simplejson and cjson (and maybe
others) -- have hooks to plug in your own serialization and deserialization
code. I've used them both to do something almost exactly like this to
serialize and deserialize JS dates, which normally can't be sent in JSON.
You'll find that the built-in serialization works for *most* Python objects,
especially for basic types like str(), int(), etc., but most certainly not
*all* Python objects.
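Those plug-in hooks look like this with the standard-library json module (simplejson and cjson expose the same idea; the "__datetime__" tag here is invented for the sketch):

```python
import json
from datetime import datetime

# Serialization hook: called for any object json can't encode natively.
def encode_extra(obj):
    if isinstance(obj, datetime):
        # "__datetime__" is an ad-hoc tag made up for this example
        return {"__datetime__": obj.strftime("%Y-%m-%dT%H:%M:%S")}
    raise TypeError("not serializable: %r" % (obj,))

# Deserialization hook: called for every decoded JSON object (dict).
def decode_extra(d):
    if "__datetime__" in d:
        return datetime.strptime(d["__datetime__"], "%Y-%m-%dT%H:%M:%S")
    return d

payload = {"when": datetime(2007, 12, 19, 23, 3, 2)}
text = json.dumps(payload, default=encode_extra)
roundtrip = json.loads(text, object_hook=decode_extra)
print(roundtrip)  # {'when': datetime.datetime(2007, 12, 19, 23, 3, 2)}
```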

> I don't really want to store JSON in the DB, but just use it as a
> serialization format for sqlalchemy objects. We want our frontend to
> render data from the same type of object with a couple different
> possible backend sources. One being the database through the
> sqlalchemy ORM and another being some sort of JSON/XML interface that
> we can backend from whatever...

Your best bet is going to be to deserialize the JSON to some intermediate
object and then construct the SQLAlchemy instance. Re-implementing pickle
with a JSON storage format is going to be a huge job, and really won't give
you anything that you don't already get from pickle. What other app
besides your own is ever going to understand those heavily class-hinted JSON
files?


On 12/20/07, Matt <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 20, 5:04pm, "Rick Morrison" <[EMAIL PROTECTED]> wrote:
> > You're not going to be able to serialize Python class instances in JSON:
> > json strings are simple object literals limited to basic Javascript
> types.
> > Pickle does some pretty heavy lifting to serialize and reconstitute
> class
> > instances.
>
> I've already figured this out -- you can use class "hinting" as
> outlined in this page:
>
> http://json-rpc.org/wiki/specification
>
> So a datetime using this format and converted to JSON might look like:
>
> "modification_time": {"__jsonclass__": ["datetime.datetime", [2007,
> 12, 19, 23, 3, 2, 2]]}
>
> My deserialization routine loads the datetime module, then gets the
> datetime attribute (which in this case is a type, but could be a
> function too), and calls that with the arguments in the list.  This
> sort of thing works for generic classes too using the pickle
> __reduce__ hooks.  The JSON part I'm actually handling through cjson
> or simplejson -- my serialization/deserialization is working all on
> python objects.
>
> > Easiest way to store JSON in the database is to limit the type of data
> you
> > store in JSON strings to straightforward objects that only use primitive
> JS
> > types, and then serialize back and forth to Python dictionaries. That's
> what
> > libraries like SimpleJSON or cJSON do. Using Sqlalchemy, you can then
> store
> > those JSON strings in database VARCHARS, TEXT and so on fields.
>
> I don't really want to store JSON in the DB, but just use it as a
> serialization format for sqlalchemy objects.  We want our frontend to
> render data from the same type of object with a couple different
> possible backend sources.  One being the database through the
> sqlalchemy ORM and another being some sort of JSON/XML interface that
> we can backend from whatever...
> >
>

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: sqlalchemy object serialization / deserialization

2007-12-20 Thread Rick Morrison
You're not going to be able to serialize Python class instances in JSON:
json strings are simple object literals limited to basic Javascript types.
Pickle does some pretty heavy lifting to serialize and reconstitute class
instances.

Easiest way to store JSON in the database is to limit the type of data you
store in JSON strings to straightforward objects that only use primitive JS
types, and then serialize back and forth to Python dictionaries. That's what
libraries like SimpleJSON or cJSON do. Using Sqlalchemy, you can then store
those JSON strings in database VARCHARS, TEXT and so on fields.




[sqlalchemy] Re: Conventions for creating SQL alchemy apps

2007-12-20 Thread Rick Morrison
Agreed, it's easier to start with a single file, and then split from there
as maintaining it becomes more difficult.

You'll find that the table defs, mappers, and mapped classes are pretty
interrelated, and you'll drive yourself nuts if you have them in 20
different places.




[sqlalchemy] Re: query.filter_or() ?

2007-12-20 Thread Rick Morrison
How would you specify the logical "parentheses" to control evaluation order
in complex expressions?
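
For reference, explicit grouping is already possible with the or_() and
and_() conjunction functions; a minimal sketch (modern top-level imports,
column names illustrative):

```python
from sqlalchemy import and_, column, or_

a, b, c = column('a'), column('b'), column('c')

# Nesting the conjunction functions makes the grouping explicit:
# logically (a = 1 AND b = 2) OR c = 3
expr = or_(and_(a == 1, b == 2), c == 3)
print(expr)
```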

On Dec 20, 2007 5:14 AM, svilen <[EMAIL PROTECTED]> wrote:

>
> query.filter() does criterion = criterion & new
> why not having one that does criterion = criterion | new ?
> its useful to have some query.this.that.filter.whatever.filter_or(...)
>
> >
>




[sqlalchemy] Re: Syntax for IN (1,2,3)

2007-12-19 Thread Rick Morrison
I believe that as of 0.4.0, that's now:

column.in_([1,2,3,4])
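
A quick sketch of that call and roughly what it renders (modern top-level
imports; exact bind-parameter formatting varies across versions):

```python
from sqlalchemy import column

# IN criterion built from a Python list
crit = column('x').in_([1, 2, 3, 4])
print(crit)  # renders as an IN clause against the column
```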

On Dec 19, 2007 6:23 AM, Bertrand Croq <[EMAIL PROTECTED]> wrote:

>
> Le mercredi 19 décembre 2007 12:06, Marcin Kasperski a écrit:
> > Maybe I missed something but can't find... Does there exist
> > SQLExpression syntax for
> >
> >  WHERE column IN (1,2,3,4)
>
> column.in_(1,2,3,4)
>
> --
> Bertrand Croq
> ___
> Net-ng  Tel   : +33 (0)223 21 21 53
> 14, rue Patis Tatelin   Fax   : +33 (0)223 21 21 60
> Bâtiment G  Web   : http://www.net-ng.com
> 35000 RENNESe-mail: [EMAIL PROTECTED]
> FRANCE
>
>
> >
>




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-19 Thread Rick Morrison
OK, I checked to make sure the updates were being fired (and from the looks
of the log, they are).

But I now see that missing update executions weren't the problem all along;
rather, those updates are not finding their row... I never checked that
part.

I'm offsite right now and can't look at the code, but I suspect that the
milliseconds are the problem -- MSSQL rounds milliseconds to some multiple,
so what you put in is not always what you get back.

Since the program saves the initial date PK as the result of a datetime.now()
call, I'll bet that it doesn't match the DB-stored value. Here are a couple
of things you can do to work around that:

  a) Truncate the milliseconds from the datetime.now() call before you write
the initial job object

  b) Fetch the job object back after the first flush() to get the DB stored
value.
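
Option (a) is a one-liner with datetime.replace(); a sketch (helper name is
made up):

```python
import datetime

def truncate_ms(dt):
    # Drop sub-second precision so the Python-side PK value matches
    # what MSSQL's rounded DATETIME column actually stores.
    return dt.replace(microsecond=0)

start = truncate_ms(datetime.datetime.now())
```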

See if one of those fixes your issue.

Rick




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-18 Thread Rick Morrison
Same here on pymssql.

I tried it with 'start' as the only PK, and with both 'identifier' and
'start' as PK. Both work fine.

Are you sure your in-database tabledef matches your declared schema?

I've attached a script that works here. This one has both 'identifier' and
'start' set as PK.

  ***---WARNING ---***:
I've added a table.drop() to the script to simplify testing and make
sure the schemas match


import sqlalchemy as sa
import datetime, time
from sqlalchemy.orm import sessionmaker
#dburi = 'mssql://driveulator:[EMAIL PROTECTED]/d2test'

sa_engine = sa.create_engine("mssql://:[EMAIL PROTECTED]/xxx", echo=True)
metadata = sa.MetaData(sa_engine)
Session = sessionmaker(bind=sa_engine, autoflush=True, transactional=True)
sa_session = Session()


jobs = sa.Table('jobs', metadata,
    sa.Column('identifier', sa.Numeric(18), primary_key=True),
    sa.Column('section', sa.VARCHAR(20)),
    sa.Column('start', sa.DateTime, primary_key=True),
    sa.Column('stop', sa.DateTime),
    sa.Column('station', sa.VARCHAR(20)),
    autoload=False)

class Job(object):
    def __init__(self, identifier, start, station="TCHUKI"):
        self.identifier, self.start, self.station = identifier, start, station

sa.orm.mapper(Job, jobs)

# drop and re-create so the in-database schema matches the declared one
try:
    jobs.drop()
except Exception:
    pass
jobs.create()

j = Job(22, datetime.datetime.now())
sa_session.save(j)
sa_session.commit()
sa_session.clear()
time.sleep(1)
j1 = sa_session.query(Job).all()[0]

while True:
    j1.stop = datetime.datetime.now()
    sa_session.save_or_update(j1)
    sa_session.commit()
    time.sleep(2)









[sqlalchemy] Fetchall() in 0.4x

2007-12-13 Thread Rick Morrison
Just noticed that ResultProxy.fetchall() is a bit broken in 0.4x (I think
it's for queries that do not populate the DBAPI cursor.description). In my
case, it's executing a stored procedure that returns data:

S.execute('exec schema.storedproc 1234').fetchall()

Traceback (most recent call last):
  File "msh.py", line 49, in ?
print S.execute('schema.storedproc 1234').fetchall()
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 1249, in __repr__
return repr(tuple(self))
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 1241, in __iter__
yield self.__parent._get_col(self.__row, i)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 1500, in _get_col
type_, processor, index = self._key_cache[key]
AttributeError: 'ResultProxy' object has no attribute '_key_cache'









[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-12 Thread Rick Morrison
Hey Fabio, would you please post a full non-working copy with the new schema
and all the PKs that you want set up? There are a few too many variants in
this thread to see what's going on now. Your earlier versions didn't include
'station' as a PK, but did include 'start', while this one's the opposite.




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-10 Thread Rick Morrison
> I did not get any exception...  doh! :)  What kind of exception did
> you get?

The traceback I get is below. If you're not getting one, it may be a pyodbc
issue; I don't have pyodbc installed right now.




Traceback (most recent call last):
  File "test.py", line 31, in ?
sa_session.commit()
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/session.py", line 484, in commit
self.transaction = self.transaction.commit()
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/session.py", line 211, in commit
self.session.flush()
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/session.py", line 684, in flush
self.uow.flush(self, objects)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 207, in flush
flush_context.execute()
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 434, in execute
UOWExecutor().execute(self, head)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 1053, in
execute
self.execute_save_steps(trans, task)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 1067, in
execute_save_steps
self.save_objects(trans, task)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/unitofwork.py", line 1058, in
save_objects
task.mapper.save_obj(task.polymorphic_tosave_objects, trans)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/orm/mapper.py", line 1129, in save_obj
c = connection.execute(statement.values(value_params), params)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 796, in execute
return Connection.executors[c](self, object, multiparams, params)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 847, in
execute_clauseelement
return self._execute_compiled(elem.compile(dialect=self.dialect,
column_keys=keys, inline=len(params) > 1), distilled_params=params)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 859, in
_execute_compiled
self.__execute_raw(context)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 871, in
__execute_raw
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File "/usr/lib/python2.4/site-packages/SQLAlchemy-
0.4.2dev_r3844-py2.4.egg/sqlalchemy/engine/base.py", line 887, in
_cursor_execute
raise exceptions.DBAPIError.instance(statement, parameters, e)
sqlalchemy.exceptions.DatabaseError: (DatabaseError) internal error: SQL
Server message 245, severity 16, state 1, line 1:
Conversion failed when converting the varchar value 'TEST1' to data type
int.
DB-Lib error message 20018, severity 5:
General SQL Server error: Check messages from the SQL Server.
 'INSERT INTO jobs (identifier, section, start, stop, station) VALUES
(%(identifier)s, %(section)s, %(start)s, %(stop)s, %(station)s)' {'start':
datetime.datetime(2007, 12, 10, 18, 15, 23, 170889), 'section': None,
'station': None, 'stop': None, 'identifier': 'TEST1'}




[sqlalchemy] Re: Lazy ID Fetching/generation

2007-12-10 Thread Rick Morrison
> But another thing, is that the whole idea of "save/update/save-or-
> update", which we obviously got from hibernate, is something ive been
> considering ditching, in favor of something more oriented towards a
> "container" like add().  since i think even hibernate's original idea
> of save/update has proven to be naive (for example, this is why they
> had to implement saveOrUpdate()).  we like to keep things explicit as
> much as possible since thats a central philosophical tenet of Python.

Hmm, that sounds interesting. Would it have flush() semantics similar to
.save(), or would it be a kind of auto-flush thing? The issues with any
implicit kind of flush() are tricky. Maybe not so much for the instance
being .add()ed or .save()ed; those are usually somewhat straightforward.
The tricky parts are the related instances: would relation()-based instances
also be auto-flushed(), etc.?




[sqlalchemy] Re: Lazy ID Fetching/generation

2007-12-10 Thread Rick Morrison
Wouldn't a flavor of .save() that always flush()'ed work for this case?

say, Session.persist(obj)

Which would then chase down the relational references and persist the object
graph of that object...and then add the now-persisted object to the identity
map.

...something like a 'mini-flush'.




[sqlalchemy] Re: Lazy ID Fetching/generation

2007-12-10 Thread Rick Morrison
> Having to
> call flush before doing anything that might require the ID seems
> excessive and too low-level for code like that.

Why? To me, having to work around the implications of an implicit persisting
of the object for nothing more than a simple attribute access is much worse.
For example, I have code that examines the instance id attribute of such
objects to determine, for example, whether an item discount needs to be
recalculated (not saved) or a differential discount needs to be calculated
(item already saved). It's natural to simply examine the PK attribute to see
if the item is persisted to make the decision. In this case, an
object.is_saved() method could stand in, but imagine passing your object
instance to some other Python function that just happened to sniff around
attributes -- such behavior could cause the item to flush() without explicit
permission? Yuck.

> "As far as the application is concerned, objects in the Pending and
> Persistent states should function identically."

To me, this is a fallacy of how ORMs work, and ignores the particulars of
what happens, or what could happen, during a database save round-trip. You
could have default columns. You could have triggers. Your could have DRI
violations. The database engine could do implicit type conversion. You
simply cannot expect an unchanged object on a round-trip to a relational
database in a real-world case. To always expect this is to invite huge
complexity issues that have been the downfall of other ORM attempts. For
pure unchanged round-trip behavior, you want a real OO database, not an ORM.




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-10 Thread Rick Morrison
I did get an exception -- that's how I knew to change the type!

On 12/10/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
>
>
> On Dec 10, 2007, at 12:47 PM, Rick Morrison wrote:
>
> > This works here on MSSQL/pymssql with a small change:
> >
> > -- j = Job("TEST1", datetime.datetime.now())
> >
> > ++ j = Job(1, datetime.datetime.now())
> >
> > MSSQL (and most other db engines) are going to enforce type on the
> > 'identifier' column. In the new code, it's an int, so...no strings
> > allowed. The original example user "uniqueidentifier", which is a
> > rather odd duck, and I'm not sure would support an arbitrary string
> > as a key. Unless you need real GUID keys for some reason, I would
> > suggest using a normal string or int surrogate key like the new
> > example does.
> >
> how come no exception is thrown ?  silent failure is the party pooper.
>
>
>
>
>
> >
>




[sqlalchemy] Re: save_or_update and composit Primary Keys... MSSQL / pyodbc issue ?

2007-12-10 Thread Rick Morrison
This works here on MSSQL/pymssql with a small change:

-- j = Job("TEST1", datetime.datetime.now())

++ j = Job(1, datetime.datetime.now())

MSSQL (and most other db engines) are going to enforce type on the
'identifier' column. In the new code, it's an int, so...no strings allowed.
The original example used "uniqueidentifier", which is a rather odd duck,
and I'm not sure it would support an arbitrary string as a key. Unless you need
real GUID keys for some reason, I would suggest using a normal string or int
surrogate key like the new example does.




[sqlalchemy] Re: Matching a DateTime-field

2007-12-10 Thread Rick Morrison
Yeah, it was a "for instance" answer; you'll need to use the correct MySQL
syntax, of course.
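
For a dialect-neutral spelling, SQLAlchemy's extract() construct is worth a
look: I believe it compiles to each backend's own date-part syntax. A sketch
(column name borrowed from the original query; generic compilation shown):

```python
from sqlalchemy import and_, column, extract

expire = column('expire')  # stands in for the mapped List.expire column

# extract() compiles per-dialect to the backend's date-part function,
# rather than hard-coding MSSQL's datepart()
crit = and_(extract('year', expire) == 2007,
            extract('month', expire) == 12)
print(crit)
```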

On 12/10/07, Adam B <[EMAIL PROTECTED]> wrote:
>
>
> On Dec 10, 1:16 am, "Rick Morrison" <[EMAIL PROTECTED]> wrote:
> > Any query using sql expressions is going to want to use correctly typed
> data
> > -- you're trying to query a date column with a string value. The LIKE
> > operator is for string data.
> >
> > I'm not up on my mssql date expressions, but the answer is going to
> resemble
> > something like this:
> >
> > .filter(and_(func.datepart('year', List.expire) == 2007,
> > func.datepart('month', List.expire) == the_month_number))
> >
>
> Ok, isnt this mssql specifik?  I only find datepart in various
> VB / .net documentation/solutions.
>
>
>
> >
>




[sqlalchemy] Re: Matching a DateTime-field

2007-12-09 Thread Rick Morrison
Any query using sql expressions is going to want to use correctly typed data
-- you're trying to query a date column with a string value. The LIKE
operator is for string data.

I'm not up on my mssql date expressions, but the answer is going to resemble
something like this:

.filter(and_(func.datepart('year', List.expire) == 2007,
func.datepart('month', List.expire) == the_month_number))

On 12/9/07, Adam B <[EMAIL PROTECTED]> wrote:
>
>
> Hello!
>
> I'm trying to do a query that gets all lists within a specific month,
> but
> SQLAlchemy whines. :/
>
>
> The error:
> /usr/local/lib/python2.4/site-packages/SQLAlchemy-0.4.1-py2.4.egg/
> sqlalchemy/databases/mysql.py:1475: Warning: Incorrect datetime value:
> '"2007-"+str(month)+"%"' for column 'expire' at row 1
>   cursor.execute(statement, parameters)
>
> The code:
> L =
> session.query(List).join('friends').filter(
> Friend.username==identity.current.user_name).filter(List.expire.like
> ('"2007-"+str(month)
> +"%"')).all()
>
>
> columnt "expire" is DateTime in the model.
>
> Any ideas how to do this?
>
> br
>
> Adam
>
> >
>




[sqlalchemy] Re: SA 0.4.1 and MS-SQL problem at create time

2007-12-06 Thread Rick Morrison
> I still need to sort out a way to have MSSQL unit test periodically

I'm still planning on hosting a buildbot as I promised some months (how
embarrassing) ago. The first one will be Linux + pymssql, but once I get the
new VMware host provisioned out here, I can put up a Windows + pyodbc host
too.

If there are any buildbot wiz kids out there, I could use some setup
help/tips.




[sqlalchemy] Re: SA 0.4.1 and MS-SQL problem at create time

2007-12-06 Thread Rick Morrison
column.foreign_key is now a list: foreign_keys[]. Trunk looks correct and
should work. Works here.




[sqlalchemy] Re: IMPORTANT: Does SA caches objects in memory forever?

2007-12-06 Thread Rick Morrison
If I recall, your application uses the localthread strategy and
scoped_session(), doesn't it? Doesn't scoped_session() collect references
from otherwise transient Session()s and hold on to them between calls?




[sqlalchemy] mapper relations with "foreign_keys="

2007-11-29 Thread Rick Morrison
Just noticed that for mappers that use tables that do not define foreign
keys, specifying only 'primaryjoin='  plus 'foreign_keys=' doesn't seem to
be sufficient to define the relationship, adding 'remote_side=' fixed it.

Also for such mappers, if there is a 'backref', the backref doesn't seem to
be able to use the foreign key relationships for the mapper, it wants the
'primaryjoin=', 'foreign_keys=' and 'remote_side=' to be specified all over
again.

Quick example:

t = Table('test', meta,
    Column('id', INT, primary_key=True, nullable=False),
    # FK omitted: workaround for a MySQL/MSSQL self-referential FK
    # delete-cascade bug
    Column('idparent', INT)
)

class Foo(object): pass

mapper(Foo, t,
    properties={'children': relation(Foo,
        primaryjoin=(t.c.idparent == t.c.id),
        foreign_keys=[t.c.idparent],
        backref='parent')
    })

won't work until you add more attributes as per above. Shouldn't this
definition be enough?




[sqlalchemy] Re: SQLAlchemy denormalization.

2007-11-29 Thread Rick Morrison
Another technique to think about would be to enumerate the child addresses
with a small integer value, say "sequence". You can then add the "order_by"
to the relation to fetch the addresses in sequence order, and by convention
the first address in the result list -- the one with min(sequence) in the
group -- is the default address. If you then use 'dynamic' on the relation,
you can fetch only the first (and default) address by using .first() on the
resulting Query object, or .all() to get all the addresses. A small side
benefit is that if the default address is then deleted, the next in line
will become the new default.

Of course the price of this behavior is to maintain the sequences, but
that's pretty straightforward.
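
A sketch of that scheme in modern declarative spelling (0.4-era code would
use relation() and classical mappers; all table, class, and column names
here are hypothetical):

```python
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customers'
    id = sa.Column(sa.Integer, primary_key=True)
    # lazy='dynamic' exposes the collection as a Query object;
    # order_by puts the lowest sequence first -- the default address
    addresses = relationship('Address', lazy='dynamic',
                             order_by='Address.sequence')

class Address(Base):
    __tablename__ = 'addresses'
    id = sa.Column(sa.Integer, primary_key=True)
    customer_id = sa.Column(sa.Integer, sa.ForeignKey('customers.id'))
    sequence = sa.Column(sa.Integer)
    city = sa.Column(sa.String(50))

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Customer(id=1))
    session.add_all([Address(customer_id=1, sequence=2, city='Boston'),
                     Address(customer_id=1, sequence=1, city='Rennes')])
    session.commit()

    customer = session.get(Customer, 1)
    default = customer.addresses.first()   # min(sequence): the default
    everything = customer.addresses.all()  # all addresses, in order
```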




[sqlalchemy] Re: New assert_unicode flag

2007-11-27 Thread Rick Morrison
Running much better this way...

>I think this will make the next release and adoption of the feature a
>lot easier.
Me too, many thanks for the quick turnaround!




[sqlalchemy] Re: New assert_unicode flag

2007-11-27 Thread Rick Morrison
> the assert_unicode thing only happens with python SQL expressions,
>usually DDL is issued using textual SQL.  if youre using a SQL
> expression for your DDL

The creates/drops are via Metadata.create_all() / .drop_all(), so perhaps
there's an expression in that path? Didn't trace it yet (as I thought I
might still have a chance to talk you out of it).




[sqlalchemy] Re: New assert_unicode flag

2007-11-27 Thread Rick Morrison
> also, what im considering doing, is having assert_unicode only be
> implicitly turned on for the Unicode type specifically.   the engine-
> wide convert_unicode and String convert_unicode would not implicitly
> set the assert_unicode flag.

That makes more sense to me, and I would prefer it. The contract for
'convert_unicode' was pretty simple (but maybe wrong) in my mind - "any
str/unicode() put in will be converted according to Dialect.encoding when
going to the database, and decoded from Dialect.encoding when coming from
the database".

> in this case you get the inconsistent
> round trips.

That's expected and fine by me. I don't really understand the insistence on
consistent round trips to a database. Databases convert types, sometimes
implicitly. That insistence on consistent round trips is a fixation on on
the "O" portion of ORM, while not taking into account the "R".




[sqlalchemy] New assert_unicode flag

2007-11-27 Thread Rick Morrison
What's the point of the new 'assert_unicode' flag? It's causing failures
with pymssql DDL (in my case, CREATE / DROP).

Is there a good reason to not only allow unicode, but to *enforce* it for
DDL?

I'd like to have one set of tabledefs, preferably in ASCII for DB
portability.




[sqlalchemy] Re: unicode support for MSSQL

2007-11-25 Thread Rick Morrison
If memory serves, there is already an 'encoding' attribute on each Dialect
instance that is normally used in conjunction with another Dialect flag,
'convert_unicode'. Not sure if it dovetails with your plans, though.

On 11/25/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
>
> Hi Florent,
>
> Just realised we'd gone quiet on this thread...
>
> >Hmm. What bothers me is that I already get this behavior when
> >running my query program from a Linux host (using pyodbc, same version)
> >but need the above mentioned patch on a windows host so there is
> >definitely a different behavior.
> >
> >
> Is this a difference in PyODBC or SQLAlchemy? I suspect the former, but
> good if you can confirm.
>
> >>From my point of view I am responsible to give the engine the right
> >encoding when I instantiate it. At the moment I have a master database
> >that provides me this info and so I feed it to the constructor at
> >engine creation time.
> >
> >
> That sounds ok. I'd be happy to add a "string_charset" option or
> something that defaults to None which means no decoding. In fact, this
> wouldn't have to be MSSQL specific, it could apply to any DB.
>
> Paul
>
>
>
> >
>




[sqlalchemy] Configuring logging

2007-11-23 Thread Rick Morrison
Is there any way to configure logging on an engine instance after the engine
has been instantiated?

It looks to me as if the engine init checks the module logger status and
sets a couple of flags, "_should_log_info" and "_should_log_debug" (I'm
guessing these are there to keep the logging function call count down).

The issue is that there seems to be no way to make the engine instance
re-evaluate those flags, with the end effect being that once the engine is
instantiated, calling logging.getLogger('sqlalchemy.engine').setLevel(whatever)
has no effect.
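
A minimal pure-Python model of the problem. `CachingEngine` and `refresh_logging` are invented for illustration; they are not SA API, but they show both the flag-caching behavior and the kind of re-evaluation hook that would fix it.

```python
import logging

class CachingEngine:
    """Hypothetical stand-in for SA's Engine: the log-level check is
    evaluated once, at construction time, and cached in a flag."""
    def __init__(self):
        self.logger = logging.getLogger('sqlalchemy.engine')
        self._should_log_info = self.logger.isEnabledFor(logging.INFO)

    def refresh_logging(self):
        # the missing piece: re-evaluate the cached flag on demand
        self._should_log_info = self.logger.isEnabledFor(logging.INFO)

logging.getLogger('sqlalchemy.engine').setLevel(logging.WARNING)
engine = CachingEngine()          # flag cached as False
logging.getLogger('sqlalchemy.engine').setLevel(logging.INFO)
print(engine._should_log_info)    # still False: setLevel() had no effect
engine.refresh_logging()
print(engine._should_log_info)    # True
```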




[sqlalchemy] Re: features: database drivers and ssl

2007-11-14 Thread Rick Morrison
Hi Marco,

There is a DB2 driver in the works, but I haven't heard much noise about it
lately, so I don't know what kind of progress is being made.

As for supported drivers, the three engines you mention are all supported. I
think PG probably has better test coverage than either Oracle or MSSQL, and
Oracle probably more than MSSQL. The pymssql dialect for MSSQL has been
working for me, but it has somewhat poor test coverage. For the best test
coverage, I would stick with SQLite, MySQL and Postgresql - I think those
are the current set of 'gold standard' tests that are checked before
releases.

As regards SSL, there is no direct support in SA; that's going to happen
via the DBAPI module. Note that pymssql (or any MSSQL driver) is not going
to work with SSL on *nix -- it's not supported by the underlying FreeTDS
library. I have no idea what the story is with Oracle or PG -- it's going to
depend on SSL support in their associated DBAPI modules.

Rick




[sqlalchemy] Re: threadlocal transactions, engine, and the Session

2007-11-13 Thread Rick Morrison
I use a similar technique with a Pylons controller, but instead of
engine.begin(), I use session.begin(). Then by passing around the session
for all calls made by that controller, I can use Session.execute() for
expression-based and text-based SQL mixed with ORM ops, and it all commits
in one shot on one thread without using threadlocal. This allows the freedom
to open another session in a separate transaction for those odd places where
it's needed, and have arbitrary expression-based SQL execute in that other
transaction -- something I don't think
you'll be able to do with threadlocal + implicit execution.

Rick




[sqlalchemy] Re: 2 questions

2007-11-13 Thread Rick Morrison
>in the after_*() there are (mapper, connection, instance) arguments -
>but there's no session. Any way to get to that? mapext.get_session()
>does not look like one

http://www.sqlalchemy.org/docs/04/sqlalchemy_orm.html#docstrings_sqlalchemy.orm_modfunc_object_session




[sqlalchemy] Re: 2 questions

2007-11-13 Thread Rick Morrison
>i have non-ORM updates happening inside
>ORM transaction (in after_insert() etc). How to make them use the
>parent transaction? i have a connection there.

You can pass in the Session and use Session.execute() to reuse the session
connection.
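
The effect can be modeled with the stdlib sqlite3 module: any SQL issued on the same connection participates in the same open transaction, which is what reusing the session's connection via Session.execute() gives you. Here `after_insert` is a stand-in for a mapper-extension hook, not SA API.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE audit (note TEXT)')
conn.execute("INSERT INTO audit VALUES ('orm insert')")  # opens a transaction

def after_insert(connection):
    # non-ORM SQL on the *same* connection joins the open transaction,
    # analogous to calling Session.execute() inside the hook
    connection.execute("INSERT INTO audit VALUES ('hook insert')")

after_insert(conn)
conn.rollback()  # both statements roll back together: one transaction
count = conn.execute('SELECT count(*) FROM audit').fetchone()[0]
print(count)  # 0
```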




[sqlalchemy] Re: Oracle date/datetime oddities

2007-11-12 Thread Rick Morrison
The issue is that in essence, both answers are right. ANSI SQL specifies
different data types for DATE fields and DATETIME fields, where DATE fields
do not hold the time portion. Oracle, SQL Server and other database engines
have their own ideas about how best to handle dates / datetimes.

SA splits the difference sometimes. All the various dialects inherit from an
ANSI dialect which treats the two types as distinct. On the other hand, most
dialects take the philosophy of "just hand whatever value you got off to the
DBAPI and what happens, happens".

At least one of SA's purported benefits is that it helps to abstract the
various behaviors of its supported database engines, and thereby makes SA
code at least theoretically a bit easier to run on multiple database
engines. This is kind of in direct conflict with the laissez-faire "let the
DBAPI decide" philosophy, which is why you sometimes see some of this
arguably schizophrenic behavior. If we give you DBAPI-neutral types today,
tomorrow somebody on the list will be complaining that it doesn't work the
other way around.

For the short term, you should look into the OracleDateTime type. But in
general, we need to know more about what users are looking for. I'm guessing
we can put you in the "DBAPI-neutral" camp?


On 11/12/07, Michael Schlenker <[EMAIL PROTECTED]> wrote:
>
>
> Hi all,
>
> I'm not sure if it's a bug or an intended feature, but the default
> behaviour
> of sqlalchemy when reading Oracle DATE fields is annoying. cx_Oracle
> rightfully returns datetime.datetime objects, but Sqlalchemy truncates
> this
> to datetime.date objects.
>
> Why is it done like this (in lib/sqlalchemy/databases/oracle.py:34-60)?
> Wouldn't it be a better choice to default to OracleDateTime instead of
> OracleDate
> for queries without bound metadata?
>
> Its not a (major) problem when querying via a table object, where i can
> override
> the column type with a sane version (OracleDateTime), but for queries
> directly
> using conn.execute() its ugly.
>
> Basically this throws up:
>
> import sqlalchemy as sa
> import datetime
> engine = sa.create_engine('oracle://scott:[EMAIL PROTECTED]')
> conn = engine.connect()
> conn.execute('create table dtest (a DATE)')
> # insert a row with date and time
> now = datetime.datetime(2007,12,11,13,11,00)
> conn.execute('insert into dtest values (:dt)', {'dt':now})
> # check its there
> rows = conn.execute('select a from dtest where a=:dt', {'dt': now})
> for r in rows:
>     if r[0] == now:
>         print "Found"
>     else:
>         print "Not Found"
>
> This prints 'Not Found' even though the row is there and is returned
> correctly
> by cx_Oracle.
>
> I would expect to get at least identity for this.
>
> So is this a bug and should i add a report or is it a 'feature' of some
> kind and will
> not change even if i report a bug?
>
> Michael
>
> --
> Michael Schlenker
> Software Engineer
>
> CONTACT Software GmbH   Tel.:   +49 (421) 20153-80
> Wiener Straße 1-3   Fax:+49 (421) 20153-41
> 28359 Bremen
> http://www.contact.de/  E-Mail: [EMAIL PROTECTED]
>
> Sitz der Gesellschaft: Bremen | Geschäftsführer: Karl Heinz Zachries
> Eingetragen im Handelsregister des Amtsgerichts Bremen unter HRB 13215
>
>
> >
>




[sqlalchemy] Re: Deprecation error raised for query on singly inherited table query ?

2007-11-11 Thread Rick Morrison
Ah good thanks: I had noticed that too and was just ignoring it until it
bugged me enough. Laziness pays off again!




[sqlalchemy] Re: ensuring all connection-related file descriptors are closed?

2007-11-11 Thread Rick Morrison
Also depending on how you are starting the subprocess, you may have control
over open file handle inheritance. The subprocess module, for example,
allows you to close all open inherited files, which would include open
sockets for DBAPI connections (or a file handle in the case of SQLite). You could then
use dispose() to close up the buffer side and start recreating the
connections.
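
A sketch of the subprocess side with the stdlib subprocess module (the child command here is just a placeholder):

```python
import subprocess
import sys

# start a child with close_fds=True so it does not inherit open file
# handles -- DBAPI sockets, SQLite file handles, and the like
proc = subprocess.Popen(
    [sys.executable, '-c', 'print("child ok")'],
    close_fds=True,
    stdout=subprocess.PIPE,
)
out, _ = proc.communicate()
print(out.decode().strip())  # child ok
# back in the parent, engine.dispose() would then rebuild the pool
```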

Please report back to the list if you get anything working, you're in rather
uncharted territory and it would be interesting to hear how it works out.




[sqlalchemy] Re: Save the default function into the database

2007-11-11 Thread Rick Morrison
I guess you know that storing the actual bytecodes (or the source) of a
Python function in the database itself is not going to buy you much:

Since the function bytecodes or source
would be in Python, only a Python interpreter could run it to produce
the function result, and if you know you're going to be accessing
the database via a Python interpreter (most likely via SA), then you may as
well just bundle the function in a module, import it and use it there like
the rest of us do.

If you're searching for more "portable" database logic that can be run
from both a Python interpreter and other types of tools, you want database
user-defined functions (UDFs) and stored procedures. SA supports several
databases that sport one or both of these tools:

Oracle       both
SQL Server   both
MySQL        both
Postgresql   functions only, but can modify data like stored procedures

All of these will run the UDF or the stored procedure in the process of the
database server, not in the context of your Python program. That means you
won't be able to access variables in your Python program from the UDF or
procedure, you'll need to supply them as parameters.

If you really have to have the function be a Python function, the PLPython
language for Postgresql allows you to run a real Python interpreter in the
context of the server itself. There was a guy named James Pye who was
working on a full-blown Python programming environment for Postgresql in
which Python would be a full stored procedure language and could share
variables with SQL and lots of other interesting stuff. Might be worth
checking into -- check PGforge or Postgres contrib.
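
As an aside, SQLite is the one engine where this is trivial from Python: the stdlib sqlite3 module lets you register a plain Python callable as a database-side function. A sketch (the `tax` function is invented for illustration):

```python
import sqlite3

# register a plain Python function as a database-side UDF; values
# cross the boundary only as parameters, as described above
def tax(amount):
    return round(amount * 1.07, 2)

conn = sqlite3.connect(':memory:')
conn.create_function('tax', 1, tax)
conn.execute('CREATE TABLE orders (amount REAL)')
conn.execute('INSERT INTO orders VALUES (100.0)')
total = conn.execute('SELECT tax(amount) FROM orders').fetchone()[0]
print(total)  # 107.0
```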

Good luck with this,

Rick

On 11/11/07, luyang.han <[EMAIL PROTECTED]> wrote:
>
>
> Hello all. The problem I want to solve is a little tricky. .I have
> created a table with some columns, which use other python functions to
> generate the default values (default = _some_python_function_). The
> problem is that such a function is not saved in the database itself. By
> simply reading from the database one cannot get this function
> out. So the problem is how to save such a construction into the database
> as well as how to retrieve these.
>
> I have some ideas but have not yet experimented.
>
> 1. One can save the corresponding function as a separate py file and
> refer to this file in some metadata for the table, and the program
> tries to get this py file.
>
> 2. Or one can pickle this default python function and save it to the
> metadata for the table.
>
> 3. Or one can simple pickle the Table or Column object and save it to
> somewhere in the database?
>
> Does anyone have similar experience? Any advice?
>
>
> >
>




[sqlalchemy] Re: unicode support for MSSQL

2007-11-08 Thread Rick Morrison
This is going to be messy, as support for unicode varies among the
various MSSQL DBAPIs (which is in large part why multiple DBAPI support is
needed by the MSSQL driver). ODBC looks to me to tell the best story:

The newer ODBC drivers have an "autotranslate" feature that somehow
retrieves the codepage setting of the server, and will translate from
unicode on the client side to whatever the database encoding is. There may
be a way to get the code page from that, or perhaps if the ODBC connection
can be set to autotranslate, the code page fetch might not even be needed.

In addition to Florent's case, I would lay odds that there are a fair number
of legacy databases out there that store utf-8 data in varchar, binary and
varbinary fields, and it would be great to support them too. If there was a
way to set the autotranslate feature to utf8 on the database side, it might
be possible to support both utf-8 data on any server and native codepage on
those other servers.

The question is whether pyodbc can do that or not. Any idea?



On 11/8/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> >I have isolated the problem a little bit more: my column is defined in
> >the MSSQL server as a user defined type which is based on VARCHAR.
> >
> >
> Ok, so in this case you'd like SA to return a python unicode object when
> a VARCHAR is fetched, by decoding using the database's encoding? While I
> understand your requirement, this seems to me to be a special case. I
> think most people would expect a normal string in this case. I wonder if
> you should define a MyString class in your app and use that.
>
> Rick - do you have a feel on this one?
>
> If we do decide to implement this, does anyone know how python can find
> out what database encoding MSSQL is using?
>
> Paul
>
> >
>




[sqlalchemy] Re: how can I do such a sorted query?

2007-11-08 Thread Rick Morrison
Most database engines support a couple of  SQL functions that help in cases
like this, read your database docs for either the ISNULL or the COALESCE
function.

Another technique is to use an SQL CASE statement.

For all three methods the idea is to supply a default value to substitute
when the value in question is NULL.

In your case that substituted value would be the value of the 'f_date'
column, so in essence the logic would be "use the value of 's_date'
if it's NOT NULL, otherwise use the value of 'f_date'".

SA has constructs for both the SQL function method and the CASE statement
method; both are documented (well, I think the CASE is).
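
A plain-SQL sketch of the COALESCE approach, using the stdlib sqlite3 module and the sample rows from the question. In SA, the equivalent would be an order_by on something like func.coalesce(t.c.s_date, t.c.f_date).

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (name TEXT, f_date TEXT, s_date TEXT)')
conn.executemany('INSERT INTO t VALUES (?, ?, ?)', [
    ('name1', '2007-10-01', None),
    ('name2', '2007-06-03', '2007-10-07'),
    ('name3', '2007-09-20', None),
])
# COALESCE substitutes f_date wherever s_date is NULL, so every row
# sorts on a single, non-NULL date expression
rows = conn.execute(
    'SELECT name FROM t ORDER BY COALESCE(s_date, f_date)'
).fetchall()
print([r[0] for r in rows])  # ['name3', 'name1', 'name2']
```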

Rick


On 11/8/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
> a table, say like this:
> name   f_date       s_date
> ---------------------------------
> name1  2007-10-01   null
> name2  2007-06-03   2007-10-07
> name3  2007-09-20   null
> name4  ...          ...
>
> the 'f_date' column always contains a value but the 's_date' can
> sometimes be null.
> now I want to get a collection -- all items will be in it and if the
> 's_date' is not null it must
> be ordered with 'f_date' of those the 's_date' is null. Just like
> this:
>
> name   f_date       s_date
> ---------------------------------
> name3  2007-09-20   null
> name1  2007-10-01   null
> name2  2007-06-03   2007-10-07
>
> any idea? use a "select" query, maybe with 'UNION'?
> thanks in advance!
>
> artman
>
>
> >
>




[sqlalchemy] Re: Select entire column

2007-11-07 Thread Rick Morrison
One of the reasons that Query.select() is deprecated is that the way it was
named led to this kind of confusion.

The Query() class is used for ORM operations, and when it's mapped
against a table, it's going to give you all the columns from the table by
default. There are ways of defining the table mapper to only get one column,
but I suspect that you are really barking up the wrong tree, and want to use
the non-ORM or "SQL expression" part of SqlAlchemy.

It's that library that Barry was referring to.
Using that library, it's going to be really easy to specify only the
columns that you want, and apply things like DISTINCT.
Read the docs on SQL expressions and try what he's suggesting.
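
For illustration, here is the SQL such an expression would boil down to, run through the stdlib sqlite3 module (the table and data are invented): in SA's expression layer, something like select([messages.c.author], distinct=True) generates this kind of statement.

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE messages (author TEXT, body TEXT)')
conn.executemany('INSERT INTO messages VALUES (?, ?)',
                 [('alice', 'x'), ('alice', 'y'), ('bob', 'z')])
# one column only, duplicates removed
rows = conn.execute(
    'SELECT DISTINCT author FROM messages ORDER BY author'
).fetchall()
print([r[0] for r in rows])  # ['alice', 'bob']
```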




[sqlalchemy] Re: access mapped object attributes

2007-11-06 Thread Rick Morrison
All of them? Same as any Python object:

obj_attributes = [k for k in obj.__dict__]



On 11/6/07, Christophe Alexandre <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> I am also interested in retrieving all the attributes resulting from the
> ORM.
>
> The loop on '.c' will list only the database columns. Is there a way to
> list the object attributes?
>
> Thanks a lot for your help,
>
> Chris
>
> -Original Message-
> From: sqlalchemy@googlegroups.com [mailto:[EMAIL PROTECTED]
> On Behalf Of Paul Johnston
> Sent: Monday, November 05, 2007 8:45 PM
> To: sqlalchemy@googlegroups.com
> Subject: [sqlalchemy] Re: access mapped object attributes
>
>
> Hi,
>
> >Given a Message object, do I have a way to retrieve all the attributes
> >that result from the database mapping?
> >
> Try this:
>
> for col in Message.c:
> ...
>
> Paul
>
>
> >
>




[sqlalchemy] Re: Multiple mapper extensions broken in 0.4

2007-11-05 Thread Rick Morrison
Ah. OK, thanks!

I checked in a small update to the 3.x -> 4.0 migration guide in the docs to
note this.

On 11/5/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> you need to return EXT_CONTINUE for your TimestampExtension methods.
>


Rick




[sqlalchemy] Multiple mapper extensions broken in 0.4

2007-11-05 Thread Rick Morrison
Seems that when multiple mapper extensions are used, only the first is run.
Testcase attached


from sqlalchemy import *
from sqlalchemy.orm import *
import datetime
import time

class TimestampExtension(MapperExtension):
def _decorate_instance(self, instance):
print 'timestamp.exec'
if hasattr(instance, 'audit_ts'):
instance.audit_ts = datetime.datetime.now()

def before_insert(self, mapper, connection, instance):
self._decorate_instance(instance)

def before_update(self, mapper, connection, instance):
self._decorate_instance(instance)


class InstanceEvent(MapperExtension):
def before_insert(self, mapper, connection, instance):
print 'instanceevent.oninsert.exec'
if hasattr(instance, 'oninsert'):
instance.oninsert()
return EXT_CONTINUE

def before_update(self, mapper, connection, instance):
print 'instanceevent.onupdate.exec'
if hasattr(instance, 'onupdate'):
instance.onupdate()
return EXT_CONTINUE

def populate_instance(self, mapper, context, row, instance, instancekey, isnew):
print 'instanceevent.onload.exec'
if isnew and hasattr(instance, 'onload'):
mapper.populate_instance(context, instance, row, instancekey=instancekey, isnew=isnew)
instance.onload()
return None
return EXT_CONTINUE


engine = create_engine('sqlite:///:memory:', echo=True)
#engine = create_engine('sqlite:///:memory:')
meta = MetaData(bind=engine)

a = Table('a', meta,
  Column('id',   INT, nullable=False, primary_key=True),
  Column('nm',   VARCHAR(8)),
  Column('typ',  VARCHAR(8)),
  Column('audit_ts', DATETIME)
  )

meta.create_all()

class K(object):
def __init__(self, **kw):
for k,v in kw.items():
setattr(self, k,v)

class A(K):
def oninsert(self):
self.typ = 'new'

def onupdate(self):
self.typ = 'mod'

mapper(A, a, extension = [TimestampExtension(), InstanceEvent()])

S = create_session()
x = A(nm='one')
S.save(x)
S.flush()

assert x.audit_ts is not None
assert x.typ == 'new'
old_ts = x.audit_ts

time.sleep(1) # ensure timestamps are different

x.nm = 'one-1'
S.flush()
assert x.typ == 'mod'
assert x.audit_ts > old_ts


[sqlalchemy] Re: Polymorphic relations

2007-11-03 Thread Rick Morrison
Yeah, I see that now, thanks. I checked in a small clarification to the
mapper documentation in r3718 about this.
It would be easier if you would open the ticket for me if you don't mind.
Trac seems to hate me and won't let me stay logged in long enough to save a
ticket (does anybody else run into that?).

On 11/2/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
>
> you need to have "None" for the tables in all the subclass mappers
> (SecUser, SecRole, SecFilter, etc.) .  this is one of those cases
> where SQLAlchemy should really be raising some kind of error, im
> surprised its trying to chug along with that (also that its not trying
> to make some bizarre self-referential join on the table).  id
> actually want to file a bug report that this didn't raise a whole
> bunch of errors and alarms...
>
> On Nov 2, 5:23 pm, "Rick Morrison" <[EMAIL PROTECTED]> wrote:
> > Please have a look at the attached testcase that mixes single-table
> > polymorphic inheritance, primaryjoin overrides on the relation() and a
> > secondary table.
> >
> > SA seems to add a WHERE clause to filter for the polymorphic type on
> > relations that it calculates, but forgets to add it for the test case.
> Is
> > this supposed to work, or is it necessary to specify the filter in the
> > relation()?
> >
> > Thx,
> > Rick
> >
> >  poly1m.py
> > 4KDownload
>
>
> >
>




[sqlalchemy] Polymorphic relations

2007-11-02 Thread Rick Morrison
Please have a look at the attached testcase that mixes single-table
polymorphic inheritance, primaryjoin overrides on the relation() and a
secondary table.

SA seems to add a WHERE clause to filter for the polymorphic type on
relations that it calculates, but forgets to add it for the test case. Is
this supposed to work, or is it necessary to specify the filter in the
relation()?

Thx,
Rick


from sqlalchemy import *
from sqlalchemy.orm import *

#engine = create_engine('sqlite:///:memory:', echo=True)
engine = create_engine('sqlite:///:memory:')
meta = MetaData(bind=engine)

secent = Table('secent', meta,
   Column('id',   INT, nullable=False, primary_key=True),
   Column('typ',  VARCHAR(8), nullable=False),
   Column('nm',   VARCHAR(20))
   )
secrel = Table('secrel', meta,
   Column('idparent', INT, ForeignKey('secent.id'), primary_key=True),
   Column('idchild',  INT, ForeignKey('secent.id'), primary_key=True)
   )

meta.create_all()

class K(object):
def __init__(self, **kw):
for k,v in kw.items():
setattr(self, k,v)
class Secent(K):
pass
class SecUser(Secent):
pass
class SecRole(Secent):
pass
class SecFilter(Secent):
pass
class SecGroup(Secent):
pass


m_secent = mapper(Secent, secent, polymorphic_on = secent.c.typ, polymorphic_identity = '?',
  properties = {'roles': relation(SecRole,  primaryjoin = secrel.c.idparent == secent.c.id,
  secondaryjoin = secrel.c.idchild == secent.c.id, secondary = secrel),
'filters': relation(SecFilter,  primaryjoin = secrel.c.idparent == secent.c.id,
secondaryjoin = secrel.c.idchild == secent.c.id, secondary = secrel),
'groups': relation(SecGroup, secondary=secrel, primaryjoin = secrel.c.idchild == secent.c.id,
   secondaryjoin = secrel.c.idparent == secent.c.id)
}
  )

mapper(SecUser,  secent, inherits = m_secent, polymorphic_identity = 'U')
mapper(SecRole,  secent, inherits = m_secent, polymorphic_identity = 'R')
mapper(SecFilter,secent, inherits = m_secent, polymorphic_identity = 'F')
mapper(SecGroup, secent, inherits = m_secent, polymorphic_identity = 'G',
   properties = {'members': relation(SecUser, secondary=secrel, primaryjoin = secrel.c.idparent == secent.c.id,
 secondaryjoin = secrel.c.idchild == secent.c.id),
 'subgroups': relation(SecGroup, secondary=secrel, primaryjoin = secrel.c.idparent == secent.c.id,
   secondaryjoin = secrel.c.idchild == secent.c.id, backref='parent_groups')
 }
   )



S = create_session()

u1 = SecUser(nm='user1')
u2 = SecUser(nm='user2')
u3 = SecUser(nm='user3')
map(S.save, (u1,u2,u3))

r1 = SecRole(nm='r1')
r2 = SecRole(nm='r2')
r3 = SecRole(nm='r3')
map(S.save, (r1,r2,r3))

f1 = SecFilter(nm='f1')
f2 = SecFilter(nm='f2')
f3 = SecFilter(nm='f3')
map(S.save, (f1,f2,f3))

g1 = SecGroup(nm='g1')
g2 = SecGroup(nm='g2')
g3 = SecGroup(nm='g3')
map(S.save, (g1,g2,g3))

u1.roles = [r1, r2]
u2.roles = [r1, r3]
u3.roles = [r3]

u1.filters = [f2, f3]
u2.filters = [f2, f3]
u3.filters = [f1]

u1.groups = [g1, g2, g3]
u2.groups = [g1, g2]
u3.groups = [g2, g3]

S.flush()
S.clear()

u = S.query(SecUser).filter(SecUser.nm == 'user1').first()
assert len(u.roles) == 2
assert len(u.filters) == 2
assert len(u.groups) == 3



[sqlalchemy] Re: Add arbitrary information to some classes

2007-11-02 Thread Rick Morrison
I don't think that two dicts would be that "out of hand", for an average
schema, say 50 tables x 10 cols, with 8-16 bytes each for each dict
(how big is an empty dict anyway?), you'd be looking at 50 * ((10*2)  + 1) *
16 = about 16.8K of memory. Not that big of a deal.

But you're right, you don't need two, or even one. There is already a perfectly
good dict to use for this stuff -- the __dict__ of the instance itself. How
about something like these methods on Table() and Column():

class Table(object):

    def userdata_get(self, key):
        return self.__dict__[('userdata', key)]
    def userdata_set(self, key, v):
        self.__dict__[('userdata', key)] = v
    ...

These are a bit hacky, but something like this, along with similar methods
for extension data, and you're there.


On 11/1/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
> sorry, i havent been following.  two++ dicts ?!   this is getting out of
> hand.   if we have to have any dicts at all, it would be just one dict.  and
> also, it should be proxied through a property so that if you dont access it,
> its never even created.  we have this on ConnectionFairy right now and its
> called "properties".   I'd vote for some extremely neutral word like "attr",
> and we put it on Table/Column/ConnectionFairy, and we're done.  i think the
> total usage for these dicts is very low. If we are concerned about
> extensions colliding with user data in these dicts, then we'd also be
> concerned about extensions colliding with other extensions, and multiple
> dicts aren't helping in that case anyway.   keys can be placed as tuples
> (such as ('myext', 'somekey')) if namespace collisions are a concern, but
> that kind of thing has to be done by conventions regardless.
> On Nov 1, 2007, at 10:40 AM, Rick Morrison wrote:
>
> That sounds reasonable to me; my knee-jerk thought was that we might need
> to worry about memory usage, but these references are only on low-count
> instances like tables, columns, sessions and mappers, not ORM object
> instances.
>
>
>
> On 10/31/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
> >
> > Hi,
> >
> > Ah sure, so it's to be a namespace for namespaces, a shared dict()
> > > parking lot. Got it.
> > >
> >
> > How about having two dicts? One is purely for user data, libraries and
> > such never touch it. I suggest "userdata".
> >
> > The other is for use in extensions and stuff, say "extdata".
> >
> > Paul
> >

--~--~-~--~~~---~--~~
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalchemy@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en
-~--~~~~--~~--~--~---



[sqlalchemy] Re: Foreign key error

2007-11-01 Thread Rick Morrison
Autoloading and foreign key detection are problematic on some DB engines (I
think SQlite falls in this category as it doesn't directly support FKs), on
other engines they are simply hard to detect.

I would think that your override of the FK in the t_incident table should be
all that's needed to pick up the relationship, but for a workaround you can
spell out the columns, as you've seen, or if the relationships are your only
issue, you can specify the join keys in the relationship itself:

mapper(Incident, t_incident, properties={
   'entries': relation(Entry, backref="incident", primaryjoin =
t_entry.c.orr_id == t_incident.c.orr_id),
})




[sqlalchemy] Re: Add arbitrary information to some classes

2007-11-01 Thread Rick Morrison
That sounds reasonable to me; my knee-jerk thought was that we might need to
worry about memory usage, but these references are only on low-count
instances like tables, columns, sessions and mappers, not ORM object
instances.



On 10/31/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> Ah sure, so it's to be a namespace for namespaces, a shared dict() parking
> > lot. Got it.
> >
>
> How about having two dicts? One is purely for user data, libraries and
> such never touch it. I suggest "userdata".
>
> The other is for use in extensions and stuff, say "extdata".
>
> Paul
>
> >
>




[sqlalchemy] Re: SQLalchemy coding style

2007-10-30 Thread Rick Morrison
If you don't want to pollute the current namespace, then make an indirect
reference.

1) Make a module of your own, say db.py

2) In db.py:
 ...
  from sqlalchemy import *
  from sqlalchemy.orm import *
 ...
  table1 = Table('footable', ...)
 ...
  # other table defs, mappers, classes, etc.

3) In your non-polluted code:
 ...
  import db    # your indirect reference
 ...
 ...

  qry = db.select([db.table1.c.col], db.table1.c.col2 == 'foo')

So now you have most of the convenience of all of the SqlAlchemy names all
loaded, but your namespace only contains the 'db' indirect reference. You
still need to worry about the 'db' namespace being polluted, but at least
it's only in that one module.



On 10/29/07, McA <[EMAIL PROTECTED]> wrote:
>
>
> Hi all,
>
> I'm interested in using sqlalchemy and started to read the manuals.
> I didn't find a hint for my question, so I'm asking here. I hope it's
> not too annoying.
>
> Most code examples in the documentation use something like this
> >>> from sqlalchemy.<module> import <something>
>
> My question is: Is this the good/proper way to import the sqlalchemy
> stuff.
> I'm concerned about polluting the current namespace. I could assume
> that
> class names like 'Table' are too common to reserve them for the
> sqlalchemy
> classes.
>
> What would you recommend? How are the gurus out there using
> sqlalchemy?
>
> Thanks in advance.
>
> Best regards
> Andreas Mock
>
>
> >
>




[sqlalchemy] Re: Add arbitrary information to some classes

2007-10-30 Thread Rick Morrison
Ah sure, so it's to be a namespace for namespaces, a shared dict() parking
lot. Got it.

So then, how about

"aux"
"etc"
"other"

or maybe

"miscdata"
"extra"
"more"
"additional"
"supplemental"
"auxiliary"
"adjunct"
"appendix"
"surplus"
"spare"
"augment"




[sqlalchemy] Re: Add arbitrary information to some classes

2007-10-30 Thread Rick Morrison
>
> > The core can (and does) use these buckets too, so I'm not sure about the
> > user-y moniker.


Hold it. I thought the whole point of this was to separate core usage from
user usage? To create a "safe-zone" for library user's private data.


>  But if that were it, I'd only be +1 on a spelled out
> > version like 'userdata' that can be grokked without consulting the docs.


Sure, agreed here.




[sqlalchemy] Re: Add arbitrary information to some classes

2007-10-30 Thread Rick Morrison
personal opinion: I'm not wild about either 'attributes' or 'properties',
  (a) they seem too long, and
  (b) yes, they are too similar to generic ORM terms

many many moons ago (pre Windows-1.0) I used an Ascii-GUI thing called
C-scape (I think it's called "vermont views" now).

anyway, most of its objects had a space for a pointer to arbitrary user
data, and they consistently used something like "udata" for
the name of the pointer.

So I'm +1 on a short, non-generic and uniquely "user-y" kind of name like
"udata". I know it sounds ugly, but we're dealing with database and ORM
terminology. Just about every generic name you can think of is bound to be
confused with something database-oriented.

Rick




[sqlalchemy] Re: Add arbitrary information to some classes

2007-10-30 Thread Rick Morrison
I often use Session as a context placeholder, and have felt a bit uneasy
about this as you never know when some new release is going to stake a claim
on the name you've used. I know I'd feel better if there was a name that
would be kept aside.



On 10/26/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> Ticket #573 mentions adding a field for arbitrary application-specific
> information to tables. I now have a need for this, so I'm prepared to do the
> work to make it happen.
>
> The main consideration is the name of the field, with the ticket
> suggesting "attributes". Personally I'd prefer "info", but I'm fine with
> "attributes".
>
> The other decision is what objects to add this to. I need it on column,
> and table seems sensible. Query, session, metadata have been mentioned,
> although I'd expect the requirement there is less common.
>
> So, I propose adding "attributes" to Table and Column. Any thoughts?
>
> Paul
>
> >
>




[sqlalchemy] Re: SQLAlchemy 0.4.0 Released

2007-10-17 Thread Rick Morrison
..coincidentally released on the self-same day when I am finally taking the
wraps off 0.4.0 for a spin on a new project.

Congrats on this huge release, everybody!




On 10/17/07, Michael Bayer <[EMAIL PROTECTED]> wrote:
>
>
> Hey list -
>
> I'm very happy to announce that we've put out 0.4.0 final.   Thanks
> to all the contributors as well as all the beta testers who have
> helped us move through six beta releases, just to make sure we've got
> everything right (or as much right as we can).  For those still
> working with 0.3, its time to upgrade ! :)  Lots of folks have
> already done it and it's not so hard.  I think this is the most well
> documented and easy to use SQLAlchemy yet, and its definitely the
> fastest by a wide margin...many kinds of operations are 50% faster
> and the large majority of applications should be at least 20-30%
> faster.   We now have hotspot profiling tests as part of our
> testsuite so that performance-reducing changes immediately raise red
> flags (and of course, performance-increasing changes unleash a shower
> of balloons).
>
> Of course, 0.4 is a lot more than just an internal refactoring
> release - the public facing side also shifts the paradigms up another
> notch or two.  The Whats New document (http://www.sqlalchemy.org/trac/
> wiki/WhatsNewIn04 ) has been tracking all the enhancements and
> changes.  The emphasis is on reduced complexity and increased
> flexibility, including a very consistent Query object as well as a
> generative select() construct, far better integration of explicit
> transactions with engines and sessions, and mappers that are much
> smarter about multi-table and inheritance mappings.  We've also
> addressed a lot of the framework integration confusion and produced
> patterns and helpers that standardize web framework integration..not
> as plugins but as core features.
>
> The changelog documents 0.4 on a beta-by-beta basis.  Big changes
> since 0.4.0beta6 include an experimental Sybase driver, as well as a
> new in_() syntax which standardizes on being passed a list rather
> than *args (i.e. in_([1,2,3]) instead of in_(1,2,3)).  The old way
> still works of course but is deprecated.
>
> 0.4 download:  http://www.sqlalchemy.org/download.html
> documentation/migration overview (worth a read):  http://
> www.sqlalchemy.org/docs/04/intro.html
>
> - mike
>
> >
>




[sqlalchemy] Re: LIMIT in update()

2007-10-16 Thread Rick Morrison
Yes, that would be the only thing that has a hope of working across database
engines. LIMIT with UPDATE is MySQL-only AFAIK.

For SA, joins in updates can be tricky, so the correlated query would best
be a IN() or EXISTS() query that has the limit you want. To get a
deterministic set of records that will be updated, you'll probably need an
ORDER BY on that subquery too.
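
A runnable sketch of that shape, using plain sqlite3 rather than SA so it can actually execute (the table and column names are made up for the example): the ORDER BY and LIMIT live in the IN() subquery, so the outer UPDATE touches a deterministic set of rows on any engine.

```python
import sqlite3

# Demonstrate "UPDATE ... LIMIT" portably: push the LIMIT into an
# ordered IN() subquery instead of onto the UPDATE itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO jobs (status) VALUES (?)",
                 [("pending",)] * 5)

# Only the two lowest-id pending rows are updated.
conn.execute("""
    UPDATE jobs SET status = 'claimed'
    WHERE id IN (SELECT id FROM jobs
                 WHERE status = 'pending'
                 ORDER BY id LIMIT 2)
""")
claimed = conn.execute(
    "SELECT id FROM jobs WHERE status = 'claimed' ORDER BY id").fetchall()
print(claimed)  # -> [(1,), (2,)]
```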

Rick


On 10/16/07, Jim Musil <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> I'm trying to determine  a way to append  a LIMIT to an update()
> object.
>
> Is the solution a correlated update on the same table with the LIMIT
> on the select() object?
>
> Cheers,
> Jim
>
>
> >
>




[sqlalchemy] Re: Can't insert records into a MS SQL database

2007-10-07 Thread Rick Morrison
BTW the "assume autoincrement for integer PK" behavior is a hangover from
the early days of the module, where I originally copied from the PG
module. Since PG doesn't have the funky "it's ok to explicit ID insert now"
mode, the problem doesn't surface there.

I think the behavior was meant more for explicit table definition than
for reflected tables, but there's a bit of muddle between the two.


There was some new MSSQL reflection code posted a couple of months ago that
might do a better job of sniffing the actual state of
the IDENTITY column. Instead of reading from the INFORMATION_SCHEMA views, I
think it did some lower-level system table queries.

The point of the code was for performance reasons, not correctness,
but it may be worth a second look.


(Hey what is with the weird text wrap in Gmail and OSX Opera 9? It's
annoying.
Forces
you
to
make
new
lines
)

Rick

On 10/7/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
>
> >>Try adding autoincrement=False to the t_year column.
> >>
> >>
> >why would this fix the issue exactly ?
> >
> >
> Fair question. The explanation is a bit convoluted, not as nice and
> simple as the fix.
>
> MSSQL's equivalent of SERIAL/autoincrement is an "identity" flag on a
> column. To insert an explicit value into that, you have to issue "set
> identity_insert  on". The MSSQL dialect does this for you
> automatically, and that was the statement failing in Paulino's example.
> The MSSQL dialect also automatically gives a column the identity flag if
> it's an integer primary key, and autoincrement is true. So it would have
> for t_year. Paulino is using table reflection, and I reckoned the real
> db table does not have the identity flag - it would certainly be odd to
> put that on a year column.
>
> Regards,
>
> Paul
>
> >
>




[sqlalchemy] Re: Nested selects

2007-09-20 Thread Rick Morrison
Yes, those are called subqueries; they're fully supported by SA. Your query
above has a couple of items of note:

 a) it's a correlated subquery: the inner query references items in the
outer query. (supported by SA)
 b) it's a self-join: the inner query and outer query reference the same
table. (also supported by SA)

your query roughly translates into:

  p2 = p.alias()
  select([p.c.id],
         p.c.number == select([func.max(p2.c.number)],
                              p2.c.attribute == p.c.attribute))
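
For reference, here is the same correlated self-join pattern as runnable raw SQL, using sqlite3 (the table and column names are made up for the example) rather than SA: the inner query, aliased `p2`, correlates on the outer row's `attribute`, keeping only the max-`number` row per group.

```python
import sqlite3

# Max-per-group via a correlated subquery against an alias of the
# same table -- the raw-SQL equivalent of the SA construct above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE p (id INTEGER PRIMARY KEY, attribute TEXT, number INTEGER)")
conn.executemany("INSERT INTO p (attribute, number) VALUES (?, ?)",
                 [("a", 1), ("a", 3), ("b", 2), ("b", 5)])

rows = conn.execute("""
    SELECT p.id FROM p
    WHERE p.number = (SELECT MAX(p2.number) FROM p AS p2
                      WHERE p2.attribute = p.attribute)
    ORDER BY p.id
""").fetchall()
print(rows)  # -> [(2,), (4,)]
```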




[sqlalchemy] Re: Feature suggestion: Description attribute in Tables/Columns

2007-09-20 Thread Rick Morrison
This is Python, after all, and it would be trivial to simply put whatever
attribute you want on a Table, Column or any SA object.

SA would just need to stay out of the way and agree not to use a certain
attribute like "description" or "userdata", or whatever.




[sqlalchemy] Re: Insert select results

2007-09-20 Thread Rick Morrison
It's on the to-do. This would be a great place to start hacking on SA if
you're interested, it's a feature that's been requested a few times now.




[sqlalchemy] Re: SQLAlchemy: like and security (sql injection attacks)

2007-09-20 Thread Rick Morrison
Don't build SQL strings up from fragments that contain user input -- it's
what makes the application subject to SQL injection in the first place.

Safest would be to use a bound parameter for the literal. See here for
details:

http://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_everythingelse_bindparams
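
A runnable contrast of the two approaches, sketched with plain sqlite3 (table and data invented for the demo): string-built SQL lets hostile input rewrite the statement, while a bound parameter keeps it inert.

```python
import sqlite3

# Unsafe string building vs. a bound parameter for a LIKE literal.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "al%' OR '1'='1"          # hostile input

# Unsafe: the input becomes part of the SQL text, so the OR clause
# is parsed as SQL and matches every row.
unsafe_sql = "SELECT name FROM users WHERE name LIKE '%s'" % user_input

# Safe: the driver sends the value separately from the SQL text, so
# the whole string is just a (non-matching) LIKE pattern.
safe = conn.execute("SELECT name FROM users WHERE name LIKE ?",
                    (user_input,)).fetchall()
print(safe)  # -> []
```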




[sqlalchemy] Re: MSSQL connection url format?

2007-09-17 Thread Rick Morrison
Hi Scott,

> ...to develop on Windows and deploy on Linux. It sounds
> like pyodbc is the best option

pyodbc works well in Windows, but I've heard on Linux, not so much. pymssql
is your best bet for Linux. Note that pymssql does not work with unicode,
limits SQL identifiers to 30 characters, and will only support up to 255
chars in VARCHAR columns. We could use some help getting pyodbc on Linux in
good shape, I'm far too busy these days to assist much.

> I saw an archived email that said there was better support for
> MSSQL in 0.3 than in 0.4. Is that still the case?

yeah, sadly MSSQL is pretty lightly tested on 0.4 right now. AFAIK, it's not
part of the regular test cycle.

> I'll be piggy backing on an existing ERP system and I'm trying to
> decide what would be the best way to store new tables

Well if you want to join against those other tables, and I'm guessing you
do, you'd be far better off using the same DB engine as the current system.
Doesn't have to be the same database, MSSQL works fine in cross-database
joins.

SQLite is a great little database engine, but beware trying to make it scale
up for highly concurrent situations. It's fast as hell as long as there are
only readers, or only writers, but mix the two and it will fall over pretty
fast. We use it here very successfully as a "static" database repository,
for geocode caches, user settings, archive data, that kind of stuff. If
you're going to have lots of simultaneous users and hitting the tables hard,
I would go with something designed for concurrent access.

HTH,
Rick




[sqlalchemy] Re: Profiling support

2007-09-14 Thread Rick Morrison
I believe the 0.4 unit tests have profiling support, have a look there.



On 9/14/07, Hermann Himmelbauer <[EMAIL PROTECTED]> wrote:
>
>
> Hi,
> I'd like to know if there is some profiling support in SQLAlchemy. It
> would be
> nice if there would be some data accompanying SQL statements in the
> logfiles
> that can be used to optimize the RDB-part of the application (e.g.
> lazy/eager
> loading, specifying queries, creating indexes etc.).
>
> How do you handle this?
>
> Best Regards,
> Hermann
>
> --
> [EMAIL PROTECTED]
> GPG key ID: 299893C7 (on keyservers)
> FP: 0124 2584 8809 EF2A DBF9  4902 64B4 D16B 2998 93C7
>
> >
>




[sqlalchemy] Re: Why is explicit 'and_' required for filter but not filter_by?

2007-09-14 Thread Rick Morrison
I think it might be more historical than anything else. What is now
filter() was once a single argument to the select() call on the SQL-API
side, and there it couldn't take any additional arguments, as the select()
call was already pretty heavy with keyword arguments and it was easy to get
things mixed up. On top of that, the historical SelectResults() extension,
on which this whole API is modeled, only took a single expression in its
filter() method.

Now that filter() is its own construct, I can't see any reason why it
couldn't take a list of expressions that were assumed to be AND'ed. Well,
no good reason except that it's also pretty easy -- and arguably more
readable -- to just chain multiple .filter() calls using the generative
style:

.filter(and-expr-1).filter(and-expr-2)




[sqlalchemy] Re: ForeignKey + schema

2007-09-14 Thread Rick Morrison
> Im not sure about creation, but I've not had any problems using
> cross-schema foreign keys, relations, joins and so forth using
> SQLAlchemy 0.3 and PostgreSQL.

..and of course the test case I wrote up to show the problem worked fine.

Turns out the issue was in the PK declaration for the table in the alternate
schema...I was assuming that the sequence for that table should be created
in the schema as well, so I was prefixing the schema name on the Sequence()
object. SA creates the sequence in the public schema, but with
the prefix on the sequence name, and then can't find it during the join.
Removing the bogus prefix makes everything play nice.

Pilot error, sorry for the noise..




[sqlalchemy] ForeignKey + schema

2007-09-13 Thread Rick Morrison
SA 0.3* doesn't seem to handle relationships between tables in different
schemas very well:  it seems to think that

  schema.A -> public.B

   is:

  schema.A -> schema.B

and even specifying "primaryjoin=" in the mapper won't help it. There seems
to be no directive to say "use the default / primary schema"

(I'm using MSSQL, but I've seen very similar q's regarding both PG and
Oracle, so I suspect it's not a db-specific issue)


Just curious if there's been any work in this area on 0.4? I've
been avoiding jumping in to 0.4 so far, but this could be the issue that
gives me the push.

Thx,
Rick




[sqlalchemy] Re: db autogenerated pks?

2007-09-12 Thread Rick Morrison
SQL Server provides no facilities for retrieving a GUID key after an insert
-- it's not a true autoincrementing key. The MSSQL driver for SA uses either
@@IDENTITY or SCOPE_IDENTITY() to retrieve the most-recently inserted
autoincrement value, but there is no such facility for getting GUID keys.

SA provides a mechanism called "passive default" to handle these kinds of
things. What it does under the covers, or what you can do explicitly without
it is:

 a) first call newid() to get the new GUID
 b) then do the insert using the GUID value as a normal attribute


My personal opinion is that GUID keys are over-utilized, and there are
usually better alternatives that will perform better overall, and will not
make you swim upstream with SA. Here's a link to an article about an
alternate scheme to get rid of GUID keys that talks about performance
implications, I'm sure you can find more if you look:

http://www.sql-server-performance.com/articles/per/guid_performance_p1.aspx

Rick


On 9/12/07, Smoke <[EMAIL PROTECTED]> wrote:
>
>
> Still have problems...
>
> if i change:
>
> t = sa.Table("Machines", metadata, autoload=True)
>
> to:
>
> t = sa.Table("Rpm_MacchineConfig", metadata,
>  sa.Column('uId', sa.databases.mssql.MSUniqueIdentifier,
> primary_key=True),
>  autoload=False)
>
> i have:
>
> sqlalchemy.exceptions.DBAPIError: (ProgrammingError) ('42000',
> "[42000] [Microso
> ft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near
> ')'. (102)") u'INSERT INTO [Rpm_MacchineConfig] () VALUES ()' []
>
> and if i put autoload= True, like this:
>
> t = sa.Table("Rpm_MacchineConfig", metadata,
>  sa.Column('uId', sa.databases.mssql.MSUniqueIdentifier,
> primary_key=True),
>  autoload=True)
>
> i have this:
>
> sqlalchemy.exceptions.DBAPIError: (IntegrityError) ('23000', "[23000]
> [Microsoft
> ][ODBC SQL Server Driver][SQL Server]Impossible to insert NULL value
> into column 'uId' on table 'dbtests.dbo.Rpm_MachineConfig'.The column
> does not support NULL values (515); [01000] [Microsoft][O
> DBC SQL Server Driver][SQL Server]Instruction interrupted. (3621)") u
> 'INSERT INTO [Rpm_MacchineConfig] (nome, nodo, descrizione) VALUES
> (?, ?, ?)' ['MARIO', 'FI', None ]
>
> Any hint before i start changing my table design? I'm a newbie on
> sqlalchemy so i'm probably missing something...
>
> thanks
>
> FP
>
> On 12 Set, 02:18, KyleJ <[EMAIL PROTECTED]> wrote:
> > You probably need to override the autoloaded primary key column:
> http://www.sqlalchemy.org/docs/04/metadata.html#metadata_tables_refle...
> >
> > Specify the type with MSUniqueIdentifier from
> > sqlalchemy.databases.mssql
> >
> > On Sep 11, 9:01 am, Smoke <[EMAIL PROTECTED]> wrote:
> >
> > > Hi All,
> >
> > > I'm new to sqlalchemy and was checking if i can introduce it into some
> > > of my projects... I have some database ( sql server ) tables with
> > > MSUniqueidenfier columns set as PK and with newid() as default
> > > values...
> > > So, if i try to map this table into a class,save a new record and
> > > flush, then i have errors because it says that column doesn't support
> > > NULL values..
> > > Is there any option i'm missing that can make me exclude this PK from
> > > the INSERT query or somehow tell sqlalchemy that this pk column value
> > > is autogenerated by the database?
> >
> > > thanks,
> > > FP
> >
> > > P.S.:
> >
> > > My code is something very simple... like this:
> >
> > > t = sa.Table("Machines", metadata, autoload=True)
> > > Session = sessionmaker(bind=db, autoflush=False, transactional=False)
> > > class Machine(object):
> > > pass
> >
> > > session = Session()
> > > sa.orm.mapper(Machine, t)#, exclude_properties=['uId'])
> > > m = Machine()
> >
> > > nm = Machine()
> > > nm.name, nm.node = "Mac1", "P"
> > > session.save(nm)
> > > session.flush()
>
>
> >
>




[sqlalchemy] Re: How to get ID back from database after save?

2007-09-11 Thread Rick Morrison
You should find it at z.USER_SID after the flush.

Not sure about your save() and flush() calls, however; they should be
session.save_or_update(z) and session.flush()
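
The underlying mechanism can be illustrated with raw sqlite3 (column names invented for the demo): once the INSERT has actually executed, which is what SA's flush does, the driver exposes the generated key.

```python
import sqlite3

# After an INSERT executes, the cursor reports the autoincremented
# primary key -- SA populates the mapped attribute the same way at flush.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_sid INTEGER PRIMARY KEY, "
             "lastname TEXT, firstname TEXT)")

cur = conn.execute("INSERT INTO users (lastname, firstname) VALUES (?, ?)",
                   ("smith", "joe"))
print(cur.lastrowid)  # -> 1
```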



On 9/11/07, Lukasz Szybalski <[EMAIL PROTECTED]> wrote:
>
>
> Hello,
>
> I am saving to my column in this way. How do I get the primary key id
> back from database?
>
> >>> z=User()
> >>> z.LASTNAME='smith'
> >>> z.FIRSTNAME='joe'
> >>> z.save_or_update()
> >>> z.flush()
>
> How do I get my z.USER_SID which is my primary auto increment key in
> database.
> Lucas
>
> >
>




[sqlalchemy] Re: MSSQL & default_schema

2007-08-15 Thread Rick Morrison
> there's just a few odd places, and I wonder if "drop table" is one of
> them, resulting in the original cause of this thread.

I don't think that DROP is a special case. Look upthread. The incorrect DROP
happened in the same wrong schema as the incorrect CREATE. The problem is
that the check-table correctly looked in the *right* schema, and so didn't
find the table. If the default schema hadn't been monkeyed with along the
way, the behavior would be correct.

> I think MSSQL's set_default_schema is a bad idea from the start. People
> expect it to work as a shortcut for specifying a schema on every table,
> and it doesn't work like that. I think we should remove it for now.

I agree, it's been a fiddly pain in the neck since inception. Unless someone
yells pretty soon, consider it gone.

One thing left unresolved here is determining whether pyodbc is the culprit
in adding that spurious schema name. Would someone with a working pyodbc
verify that it is by trying some direct-text SQL against a pyodbc DBAPI
cursor? I would think that

   create table foo(id int)

would do the trick. Watch the SQL that actually gets sent over the wire, and
where the table ends up.




[sqlalchemy] Re: MSSQL & default_schema

2007-08-14 Thread Rick Morrison
> Should ansisql recognize and use default schemas,
> or should the DB dialect somehow override the construction of the table
name?

The more I think about this, the more I'm becoming convinced that specifying
an implicit default schema in all generated SQL is a pretty bad idea. The
reason is that it would break a feature in a few database engines that might
be called "schema cascade", or Postgres explicitly calls the "schema path".

The basics of the feature are that during execution of an SQL statement, the
current schema (that is, the schema associated with the current connection)
is searched for the objects specified in the query, and if the search for
those items fails, the search continues in the default schema, or along an
explicit schema path.

This allows for the construction of "local", or "override" tables/objects
that would be seen by particular user, or role, while others would see the
other, underlying table. For example, consider the following schema / table
layout.

schema 'public':
   table 'a':
   table 'b':
   table 'c'

schema 'archive':
   table 'a'


and the query:

select a.*, b.* from a, b where b.id_a = a.id


user "x" might see the following underlying query plan:
select a.*, b.*  from public.a as a, public.b as b where.

while user "archive" might instead see:
select a.*, b.* from archive.a as a, public.b as b where.

If SA were to specify the implicit default schema in all queries, this
behavior obviously breaks.
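The resolution rule described above can be modeled in a few lines of Python (a toy sketch of the lookup order only, not SA or Postgres code; the `resolve` helper and `catalog` dict are made up for illustration):

```python
def resolve(table, search_path, catalog):
    """Toy model of Postgres-style search-path resolution: return the
    first schema on the path that actually contains the table."""
    for schema in search_path:
        if table in catalog.get(schema, ()):
            return "%s.%s" % (schema, table)
    raise LookupError(table)

# the schema/table layout from the example above
catalog = {"public": {"a", "b", "c"}, "archive": {"a"}}

# user "x" searches only 'public'; user "archive" searches 'archive' first
resolve("a", ["public"], catalog)             # -> 'public.a'
resolve("a", ["archive", "public"], catalog)  # -> 'archive.a'
resolve("b", ["archive", "public"], catalog)  # -> 'public.b'
```

An SA-injected explicit schema would bypass exactly this fallback.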

I think the SQL that SA currently generates is actually the 'correct' SQL in
Christophe's situation:

create table foo(...)

which pyodbc then changes into

create table user.foo(...)

where 'user' is the user name of the logged-in user. While an explicit
schema specification from SA would stop that, it breaks the schema
cascade behavior.

So I think the bug is really in pyodbc, for adding the explicit schema where
none was requested. Thoughts?




[sqlalchemy] Re: MSSQL & default_schema

2007-08-14 Thread Rick Morrison
>> Sure, but the drop is being issued in the correct default schema (dbo).

>No it's not. If I don't enable checkfirst the table is dropped, which
>means both statements are issued on the wrong schema (considering that
>the check is right).

Ah, OK, I didn't get that from the previous messages. Then it sounds like
the check is looking at the default schema, but the issued statements aren't
using it.

Looking at the code, I can see the explicit use of the default schema for
table existence checks and table reflects, but no special treatment for
normal SQL ops like SELECT/UPDATE/DELETE. The code assumes that these
details will be handled by the default ansisql.py Dialect, which would then
issue its own checks for the default schema via callbacks. But at least in
the 0.3 branch, those calls are never made; the SQL generator just looks at
table.schema, which is presumably an explicit schema set in the metadata.

Mike, are you watching this thread? Should ansisql recognize and use default
schemas, or should the DB dialect somehow override the construction of the
table name?

>I'm going back and forth between both implementations, and am willing
>to help a bit on mssql support. From time to time I try to see why a
>unittest wouldn't pass and if I can see why I propose a patch.

That is a very much appreciated effort; making unit tests pass for MSSQL is
a big job.

Rick




[sqlalchemy] Re: MSSQL & default_schema

2007-08-14 Thread Rick Morrison
> It's for the delete (which then does not happen because the table is not
> found)

Sure, but the drop is being issued in the correct default schema (dbo). The
error is not that the drop is being issued in the wrong schema, it is that
the table was *created* in the wrong schema, and so is not where it ought to
be.

> pyodbc on a linux platform

That is a problematic combination right now; pymssql is currently a better
bet on Linux unless you're working with unicode columns. We're working on
better support for pyodbc + *nix, but we first need to get through the
0.4beta cycle, as MSSQL is currently broken on the 0.4 trunk. A stable
short-term platform for you would be the 0.3.10 release or the 0.3 branch
tip + pymssql.

Rick






[sqlalchemy] Re: Using SA to move data between databases

2007-08-13 Thread Rick Morrison
> but it would be more sane
> if power(connect_args) >= power(url_args), that is, connect_args is
> the more general/powerful/ way, and url allows a subset (or all) of
> connect_args items; and not vice versa -- connect_args is the
> programatical way, so that should do _everything_..

If we ever get around to getting SA options in the db-url, this makes
perfect sense, at least to me.
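A sketch of what pulling engine options out of the db-url could look like (hypothetical helper, not the eventual SA API; `text_as_varchar` is just the example option from this thread):

```python
from urllib.parse import urlsplit, parse_qs

def split_engine_url(url):
    """Split a db-url into the plain connect URL and a dict of extra
    options carried in the query string."""
    parts = urlsplit(url)
    options = {k: v[0] for k, v in parse_qs(parts.query).items()}
    base = parts._replace(query="").geturl()
    return base, options

# split_engine_url("mssql://u:p@host/db?text_as_varchar=1")
# -> ('mssql://u:p@host/db', {'text_as_varchar': '1'})
```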




[sqlalchemy] Re: Using SA to move data between databases

2007-08-13 Thread Rick Morrison
It's an SQLite-ism. See the recent thread on the type system. I've had
exactly this issue with SQLite vs. MSSQL.

On 8/11/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
> > >i'm Wondering if all the unicode strings (at least table/column
> > > names) should be converted back into plain strings as they have
> > > been before autoload reflecting them from database.
> >
> > Well, some databases do support unicode identifier names, some
> > don't. I'd say don't do any conversion for now; if someone is faced
> > with migrating tables with unicode names to a database that doesn't
> > support it, well, let them sweat that one :-)
>
> hmmm. i'll probably put that as some option, as my model's
> table/column names are never unicode, but once they go into db, all
> gets unicoded there. so i'm not sure if after some migration the
> model will match the database...
> e.g. sqlite turns everything into unicode and hence does not care if
> unicode or not - so it's all ok there; but once db-structure migrates
> into something that _does_ care about unicode or not, trouble
> trouble..
> is this unicodeing everything a sqlite specific behaviour?
> de-unicoding it then should go into sqlite-dialect specific
> reflection then.




[sqlalchemy] Re: MSSQL & default_schema

2007-08-13 Thread Rick Morrison
That SQL log is from the table existence check.

Although it's unclear from the trace and log whether the check is for the
table create or the table drop, it is correctly using the default schema,
which is 'dbo' on all MSSQL platforms.


So, the table check and the drop are working correctly. It's the table
create that is incorrectly creating the table in the old "owner" schema,
rather than using the default schema.

I believe that's a pyodbc-ism, as I routinely use the implicit (default)
schema with pymssql, and all tables are created in the default schema
correctly.
Paul's workaround for this will work fine for now.

Christophe, are you using pyodbc, or some other DBAPI?




[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Rick Morrison
Yeah, of course date formats vary; it's one of the trickier issues in type
adaptation, can be computationally expensive, etc. A full-on date parser is
probably just way out of scope for SA (the excellent dateutil package
already handles that pretty well). I'm of the opinion that it would not be
so horrible to restrict string-to-date conversions to the ISO format, and
fail otherwise.

On 8/8/07, Michael Bayer < [EMAIL PROTECTED]> wrote:
>
> this particular feature is easily implemented right now as an end user
> recipe, using TypeDecorator. no new type system is needed for this one
> (although there is a ticket for such).
> the advantage to TypeDecorator is that you get to define what kind of date
> representation you'd like to convert from/to, i.e. a particular string
> representation, mxDateTime, etc. that the string format of "MM/DD/
> HH:MM:SS" just happened to work doesnt seem to me to be terribly useful,
> considering that if you were in some other country the date would be entered
> as "DD/MM/" or such. hardcoding in a "-MM-DD HH:MM:SS" format into
> the code doesnt seem like such a "solution".
>
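A minimal sketch of the bind-side coercion such a TypeDecorator recipe would wrap, restricted to the ISO format as suggested above (illustrative only; the 0.3-era hook was named convert_bind_param, and `coerce_datetime` is a made-up name):

```python
from datetime import datetime

def coerce_datetime(value):
    """Accept datetime objects as-is, accept strict ISO
    'YYYY-MM-DD HH:MM:SS' strings, and reject everything else."""
    if isinstance(value, datetime):
        return value
    if isinstance(value, str):
        # raises ValueError on any non-ISO format, per the "else fail" rule
        return datetime.strptime(value, "%Y-%m-%d %H:%M:%S")
    raise TypeError("expected datetime or ISO string, got %r" % (value,))
```

A TypeDecorator would call this from its bind-parameter conversion hook, so locale-dependent formats like "MM/DD/YYYY" fail loudly instead of silently depending on server configuration.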




[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Rick Morrison
> I think there's something a little simpler we need - some
> documentation. For all the SA types we should document the type that
> convert_result_value returns, and that convert_bind_param expects, and
> check that all the DBAPIs stick to this (probably with unit tests). I'm
> pretty sure there's some inconsistency at the minute.

Hi Paul,

As long as you're working with only one database, everything is fine. The SA
Dialect specifies the type conversion, and handles any discrepancies between
what the DBAPI gives and what the SA published type is via
convert_bind_param() and convert_result_value().

The issue comes up when working with multiple databases. I first noticed the
problem when I was surprised to see all strings coming back from SQLite as
unicode(), and not str(). I had to convert the SQLite output to MSSQL input
when working with the two databases. I would have expected SA, as a database
abstraction layer, to handle stuff like this. But for the most part, the
various Dialects just return what the DBAPI gives back.


And the various DBAPIs each have their own idea of what the "proper" Python
type is for the various SQL types. We've seen this before in various
flavors: String vs. Unicode, float vs. Decimal, String vs. date (esp.
thinking of SQLite here), etc. If you want to shard data across, say, SQLite
and Postgresql, it would be nice not to have to worry about the data source
being the determinant of the type of data coming back. Likewise for a data
migration from one to the other. If you were to migrate from SQLite to
MSSQL, you'd end up with unicode data in text columns unless you took
specific steps to convert it.
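In later Python terms, the normalization being asked for here boils down to picking one canonical string type on the way out of the driver (a sketch, shown with Python 3's bytes/str standing in for the era's str/unicode; `normalize_result_string` is a made-up name):

```python
def normalize_result_string(value, encoding="utf-8"):
    """Result-value normalization sketch: whatever string flavor the
    DBAPI hands back, return one canonical Python string type, so the
    data source no longer determines the type the application sees."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value

# both driver behaviors collapse to the same application-level type
normalize_result_string(b"hello")  # -> 'hello'
normalize_result_string("hello")   # -> 'hello'
```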

Whether the type mappings are determined by convention and unit tests as you
suggest, or by an adaptation layer as I imagine, is of course up for grabs.
Certainly the convention method has inertia on its side. But is it really
the best way?




[sqlalchemy] Re: DateTime version issues

2007-08-08 Thread Rick Morrison
FYI I believe there is a ticket to make improvements in the type system that
would allow strings to be given as date input (among other conveniences),
and I don't think it's a bad thing. Lots of databases make the conversion
anyway, and it's ultimately a pretty confusing thing to have the various
Dialects treat data in different ways.

I've been bugging Mike for a long, long time about a better type system, and
I think I may have oversold it and made it sound too big and grandiose. All
it needs to be is a layer that adapts the incoming type to the desired
column type. This case would be a good example: the incoming string would be
converted into a datetime (or whatever the Dialect's DBAPI needed) and sent
to the database that way. Conversely, when fetching data, the Dialect could
adapt the data back to the standard Python type for the column (as is
pretty much done already).
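Such a layer could be as small as a per-column-type converter table applied on the bind side (a hypothetical sketch; the type keys, `BIND_ADAPTERS` table, and `adapt_bind` helper are invented for illustration, not SA internals):

```python
from datetime import datetime, date

# one bind-side converter per column type, applied before the value
# reaches the Dialect's DBAPI
BIND_ADAPTERS = {
    "DateTime": lambda v: v if isinstance(v, datetime)
                else datetime.strptime(v, "%Y-%m-%d %H:%M:%S"),
    "Date": lambda v: v if isinstance(v, date)
            else datetime.strptime(v, "%Y-%m-%d").date(),
}

def adapt_bind(column_type, value):
    """Adapt an incoming value to the canonical type for its column;
    pass through anything with no registered adapter."""
    adapter = BIND_ADAPTERS.get(column_type)
    return adapter(value) if adapter else value
```

The result side would be the mirror image: adapt whatever the DBAPI returns back to the published type for the column.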

IMO this is going to become increasingly important with SA being used for
data migration as we've recently seen (thanks to you) and with upcoming
sharding support. Type differences in those cases can make life hell.

Rick


On 8/8/07, Paul Johnston <[EMAIL PROTECTED]> wrote:
>
>
> Dave,
>
> >I recently upgraded from SQLAlchemy 0.3.8 to 0.3.10.  The only
> >problem I ran into is that 0.3.10 no longer allows you to set datetime
> >columns using strings.
> >
> >
> What database are you using? I did some rework around MSSQL and dates
> between those versions.
>
> You now need to use datetime.* objects; it was a bug that it previously
> allowed strings. The problem with string is that it makes you dependent
> on the date format configured in the database. This had caused me a
> problem is live app - when the damn database got rebuilt and configured
> a little differently.
>
> Paul
>




[sqlalchemy] Re: autoload'ing metadata

2007-07-27 Thread Rick Morrison
MSSQL is case-sensitive, and wants to see queries to INFORMATION_SCHEMA in
UPPER CASE.

See uppercase_table() in mssql.py for the gory details, or rather, THE GORY
DETAILS ;-)
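A stand-in for the kind of rewrite uppercase_table() presumably performs (hypothetical regex, not the actual mssql.py code): fold any lower-case INFORMATION_SCHEMA reference to upper case before the query is sent.

```python
import re

def uppercase_information_schema(sql):
    """Rewrite information_schema.<view> references to the upper-case
    form that a case-sensitive MSSQL install expects."""
    return re.sub(
        r"information_schema\.([a-z_]+)",
        lambda m: "INFORMATION_SCHEMA." + m.group(1).upper(),
        sql,
        flags=re.IGNORECASE,
    )

# uppercase_information_schema("select * from information_schema.tables")
# -> 'select * from INFORMATION_SCHEMA.TABLES'
```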

On 7/27/07, Christophe de VIENNE <[EMAIL PROTECTED]> wrote:
>
>
> Hi svil,
>
> Still no luck. I don't know if the information_schema module is
> supposed to work well with pymssql. Anyway :
>
> Traceback (most recent call last):
>   File "autoload.py", line 233, in ?
> autoloader = AutoLoader( engine)
>   File "autoload.py", line 100, in __init__
> me.table_names = engine.execute( sqltext)
>   File "/home/cdevienne/prog/sqlalchemy/lib/sqlalchemy/engine/base.py",
> line 773, in execute
> return connection.execute(statement, *multiparams, **params)
>   File "/home/cdevienne/prog/sqlalchemy/lib/sqlalchemy/engine/base.py",
> line 517, in execute
> return Connection.executors[c](self, object, *multiparams, **params)
>   File "/home/cdevienne/prog/sqlalchemy/lib/sqlalchemy/engine/base.py",
> line 557, in execute_clauseelement
> return self.execute_compiled(elem.compile(dialect=self.dialect,
> parameters=param), *multiparams, **params)
>   File "/home/cdevienne/prog/sqlalchemy/lib/sqlalchemy/engine/base.py",
> line 568, in execute_compiled
> self._execute_raw(context)
>   File "/home/cdevienne/prog/sqlalchemy/lib/sqlalchemy/engine/base.py",
> line 581, in _execute_raw
> self._execute(context)
>   File "/home/cdevienne/prog/sqlalchemy/lib/sqlalchemy/engine/base.py",
> line 599, in _execute
> raise exceptions.SQLError(context.statement, context.parameters, e)
> sqlalchemy.exceptions.SQLError: (DatabaseError) internal error (SQL
> Server message 208, severity 16, state 1, line 1:
> Invalid object name 'information_schema.tables'.
> DB-Lib error message 20018, severity 5:
> General SQL Server error: Check messages from the SQL Server.
> ): SQL Server message 208, severity 16, state 1, line 1:
> Invalid object name 'information_schema.tables'.
> DB-Lib error message 20018, severity 5:
> General SQL Server error: Check messages from the SQL Server.
> 'SELECT tables_a3c4.table_name, tables_a3c4.table_schema \nFROM
> information_schema.tables AS tables_a3c4 \nWHERE
> tables_a3c4.table_schema = %(tables_table_schema)s'
> {'tables_table_schema': 'sf_tmp'}
>




[sqlalchemy] Re: MSSQL & Time

2007-07-26 Thread Rick Morrison
> what needs to be modified?
> can it be done by a src-patch? maybe i can apply it runtime (;-)


I think the patch has been out for a few weeks now, so it will likely be in
the next release of pyodbc.

There is also an easy workaround, pass use_scope_identity = False to the
Engine ctor, and it will use @@IDENTITY instead of SCOPE_IDENTITY().

Rick






[sqlalchemy] Re: MSSQL & Time

2007-07-26 Thread Rick Morrison
The dependency chains look like this (more or less):


pymssql ==> DBlib ==> FreeTDS ==> wire

pyodbc (Unix)  ==> iodbc / unixodbc ==> FreeTDS ==> wire

pyodbc (Win)  ==> ms-odbc ==> OLEdb  ==> wire

The unicode problem for pymssql is in DBlib, not FreeTDS.


Rick

On 7/26/07, Christophe de VIENNE <[EMAIL PROTECTED]> wrote:
>
>
> 2007/7/26, Paul Johnston <[EMAIL PROTECTED]>:
> > One thing PyMSSQL will never support is Unicode - the MSSQL interface it
> > uses is deprecated and just doesn't do unicode. For that reason, you may
> > do better to focus your efforts on getting PyODBC working better on
> > Unix. Right at the minute though it's more problematic than PyMSSQL.
>
> I could see that...
> One thing though :
> - pymssql, on unix, is based on freetds
> - pyODBC, at least on my system, use an ODBC driver based on freetds
> The question is : if pymssql does not support unicode, does it mean
> that freetds doesn't ?
> Sub-Question : if FreeTDS supports unicode, is it realistic to try and
> modify pymssql to make it more specific to freetds and support unicode
> ?
>
> Christophe
>




[sqlalchemy] Re: Dialect & default schema

2007-07-26 Thread Rick Morrison
I'm going to reply here, as I can't seem to log in to Trac again. I did
manage to get comments in for #685.

#679 - committed in r3050
#684 - committed in r3051
#685 - needs more discussion; see the Trac comments

Thanks for the patches!
Rick



On 7/26/07, Christophe de VIENNE <[EMAIL PROTECTED]> wrote:
>
>
> 2007/7/26, Christophe de VIENNE <[EMAIL PROTECTED]>:
> > Last thing : could someone tell me if the patches I posted for mssql
> > looks 'commitable', because I'd prefer fixing those if needed before
> > continuing my failing unit-tests slow (one per day) review.
>
> Forgot the ticket numbers, sorry :
> http://www.sqlalchemy.org/trac/ticket/679
> http://www.sqlalchemy.org/trac/ticket/684
> http://www.sqlalchemy.org/trac/ticket/685




[sqlalchemy] Re: MSSQL & Time

2007-07-26 Thread Rick Morrison
> you may do better to focus your efforts on getting PyODBC working better
> on Unix.

Agreed here. One stable and supportable DBAPI for MSSQL would be really
nice.




[sqlalchemy] Re: Dialect & default schema

2007-07-26 Thread Rick Morrison
Thanks for having a look at the tests, Christophe.

Responses below:

The reason is that MSSQLDialect.get_default_schema_name takes no
> argument, while the unittest gives it an engine. The Mssql dialect,
> for example, does take an argument.



It looks to me like get_default_schema_name() has recently been refactored a
bit: it previously took no parameters. For some reason, the MSSQL module
(and from the looks of things, the mysql module as well) were not updated.

Not sure I follow your second sentence here.


Having a closer look at MSSQLDialect.get_default_schema_name , it
> simply returns self.schema_name, which is initialised in __init__ with
> the value "dbo". This is where I don't know what to think. Isn't the
> default schema depending on the connection ? If so why is it
> initialized to "dbo" ?



No, the *current* schema is dependent on connection. The default
schema is what you get if you don't specify anything else.

For MSSQL the default schema is "dbo" (short for "database owner"). More on
that below.


Another strange thing : MSSQLDialect defines a
> set_default_schema_name, and it's the only dialect to do that.


That was by user request, see the archives.


Before patching this in the wrong way, I'd like to have a little bit
> more details on how the default_shema stuffs are supposed to work.
> Sub-question : shouldn't the set_default_schema_name issue a "USE
> <database>" ?


No, you're confusing schemas and databases. USE <database> sets the current
database name, not the current schema. Search the archives again for previous
discussion of schemas on MSSQL. There's been confusion in the past between
the terms "owner" and "schema". MSSQL and (some) other databases support a
three-level namespace:

   select * from <database>.<schema>.<object>

Confusingly, in earlier (pre-2005) MSSQL documentation, Microsoft would
refer to the middle name as an "owner":

   select * from <database>.<owner>.<object>

That middle name, whatever you want to call it, has a system default if not
explicitly specified. That's what get_default_schema_name() is supposed to
return, or at least that's my understanding of it.
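The name-building rule can be summarized in a small helper (illustrative only, not SA API; `qualify` is a made-up name):

```python
def qualify(obj, schema=None, database=None):
    """Build an MSSQL-style qualified name: [database.][schema.]object.
    A three-part name needs the middle slot filled; dbo is the
    server-side default when nothing else is specified."""
    if database:
        return "%s.%s.%s" % (database, schema or "dbo", obj)
    if schema:
        return "%s.%s" % (schema, obj)
    return obj

# qualify("foo")                      -> 'foo'
# qualify("foo", "archive")           -> 'archive.foo'
# qualify("foo", database="mydb")     -> 'mydb.dbo.foo'
```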


Last thing : could someone tell me if the patches I posted for mssql
> looks 'commitable', because I'd prefer fixing those if needed before
> continuing my failing unit-tests slow (one per day) review.


I'll have a look and reply in the tickets themselves.




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Rick Morrison
The list is useful only for a hacker on the MSSQL module, not for general
users, but FWIW, I've added it to
http://www.sqlalchemy.org/trac/wiki/DatabaseNotes

I generally try to respond directly to help encourage anyone willing to
offer a hand on the MSSQL module, as I don't have the time these days
to get all the tests fixed, so sorry if the repeated postings are bugging you.


On 7/24/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>
>
> > The reasons for the failures that I've had time to look into have
> > so far had as much to do with the tests as with the MSSQL module.
> > They are mostly fixable within the constraints that the various
> > supported MSSQL DBAPI modules impose:
> >
> > -- DBLib (required by pymssql) does not like Unicode, and has a
> > 30 character identifier limit
> > -- pymssql seems to
> > have troubles with multiple cursors on a single connection and
> > doesn't properly implement rowcounts on executemany()
> > -- Pyodbc doesn't work with multiple result sets (I believe a
> > patch to fix this is about)
> > -- Pyodbc has troubles with the SQL 2000 syntax of "select
> > scope_identity()"
> > -- There are issues on all DBAPI modules (really, an issue with
> > MSSQL) comparing Python strings to MSSQL TEXT columns
> > -- Probably a half-dozen more issues that I've missed here.
>
> one suggestion - why not make one "organisational" ticket and list all
> these there? compiled from other mails too? and any other issues
> about mssql? and keep it up-to-date?
> otherwise it could be 20 separate tickets... with little (if any)
> relation to SA.
> or it can be a wiki page. probably better, easier to update.
>
> i see such list for n-th time last 2-3 months, IMO its a waste of time
> to recall all this, lets put it into some place...
>
> ciao
> svilen
>




[sqlalchemy] Re: MSSQL & Time

2007-07-24 Thread Rick Morrison
Hi Christophe,

> I see. Are the reasons for those failures well known ? fixable ? If
> it's not too tricky I could spend a bit of time on it in a little
> while.


The reasons for the failures that I've had time to look into have so far had
as much to do with the tests as with the MSSQL module. They are mostly
fixable within the constraints that the various supported MSSQL DBAPI
modules impose:

-- DBLib (required by pymssql) does not like Unicode, and has a 30
character identifier limit
-- pymssql seems to
have troubles with multiple cursors on a single connection and doesn't
properly implement rowcounts on executemany()
-- Pyodbc doesn't work with multiple result sets (I believe a patch to
fix this is about)
-- Pyodbc has troubles with the SQL 2000 syntax of "select
scope_identity()"
-- There are issues on all DBAPI modules (really, an issue with MSSQL)
comparing Python strings to MSSQL TEXT columns
-- Probably a half-dozen more issues that I've missed here.

The failing tests were largely either written before the MSSQL module became
part of SA, or were written by someone without access to a MSSQL server,
hence a lot of the issues. Writing tests that run across database servers
with different SQL syntax and capabilities is hard, so it may be easier in
some cases to provide a test that is specific to the database server than to
try to get a single test to work with all servers, especially for the more
esoteric features.


Paul has already fixed quite a few; a continuing effort to fix more
(especially on *nix) would be *greatly* welcomed, as "passes all unit tests"
is often a required checklist item for a lot of users before they'll
continue with the software.

Rick




On 7/24/07, Christophe de VIENNE <[EMAIL PROTECTED]> wrote:
>
>
> 2007/7/24, Paul Johnston <[EMAIL PROTECTED]>:
> >
> > Hi,
> >
> > >Is there any special condition for the unittests to run on mssql ?
> > >
> > >
> > A few... you really need to be running on Windows and using PyODBC. And
> > then append ?text_as_varchar=1 to your DBURI.
>
> I'll try that :-)
>
> > Looking at your command line, it looks like you're running from Unix.
> > You can use PyMSSQL from Unix, and the basic features work fine, but a
> > number of edge cases don't, so you get quite a lot of unit test
> failures.
>
> I see. Are the reasons for those failures well known ? fixable ? If
> it's not too tricky I could spend a bit of time on it in a little
> while.
>
> Thanks,
>
> Christophe
>



