Re: [sqlalchemy] Re: joining sessions / two phase commit

2010-02-09 Thread Chris Withers

Ants Aasma wrote:

On Feb 4, 12:41 am, Chris Withers ch...@simplistix.co.uk wrote:

The problem is that session2 (and its engine) is only created in a
small part of the code, while session1 is created in a much wider
encompassing framework. As such, there's no obvious way to get session1
to the piece of code that calls commit on session2.

(an aside: what happens here, assuming the first of your possibilities:
session1.commit()
raise RuntimeError('something goes bang')
session2.commit())


This is the reason why you need a transaction manager when using two-
phase transactions.


Okay, but what does the transaction manager do that's different from 
calling commit on session1 and session2 in order?



The second transaction will remain in a prepared
state (modifications not visible to other transactions, still holding
any locks), even after a database crash and restart. 


So how do you un-f?$k it then? ;-)


The transaction
manager needs to ensure that all transactions in a group either get
committed or are rolled back. This should preferably be an automatic
process, as any prepared transactions left hanging will grind your
database to a halt pretty quickly.


I know that zope's transaction package aims to do just this; I wonder if 
anyone's used that, or anything else, with SA to solve this problem?


cheers,

Chris

--
You received this message because you are subscribed to the Google Groups 
"sqlalchemy" group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



Re: [sqlalchemy] Re: Another tutorial!

2010-02-09 Thread Chris Withers

Mike Driscoll wrote:


On Feb 4, 10:36 am, John Trammell jo...@holmescorp.com wrote:

You made some serious blunders.  Check your comments on the blog post.



It looks like Werner found the same issue that Simon already told me
about. This has been fixed per Simon's notes. I also found that I
forgot to import ForeignKey in the first part of the series. This was
also fixed. Thanks for being so nice.


Writing in ReST and running things through as doctests (or using a tool 
like Manuel) are great ways of getting around this ;-)


Chris




Re: [sqlalchemy] Re: joining sessions / two phase commit

2010-02-09 Thread Wichert Akkerman

On 2010-2-9 09:48, Chris Withers wrote:

I know that zope's transaction package aims to do just this, I wonder if
anyone's used that, or anything else, with SA to solve this problem?


You mean ZODB's transaction package? :). I use that all the time to get 
transactions working across multiple storage systems. Most commonly 
using repoze.tm2 to integrate with a WSGI stack, zope.sqlalchemy to 
integrate SQLAlchemy with transaction and repoze.filesafe to do 
transaction-safe file creation.


Wichert.

--
Wichert Akkerman wich...@wiggy.net   It is simple to make things.
http://www.wiggy.net/  It is hard to make things simple.




Re: [sqlalchemy] factoring out when using declarative

2010-02-09 Thread Chris Withers

Michael Bayer wrote:

Suppose you had a joined table setup, and you placed a Constraint in the
__table_args__ of the base class.   You certainly don't want that feeding
into the table of the subclass.  So like the case with __mapper_args__,
unconditional inheritance of the attribute is a bad idea.


Because SA has chosen to model table inheritance with python 
inheritance, I'd agree. However, when __mixin__ has been specified on a 
class, it should be treated as just that: a mixin that behaves exactly 
like any normal python class, inheriting everything from its bases, etc...


As an aside, for me it's a shame that declarative chose to model table 
inheritance with python inheritance. I think it's confusing that, when 
using declarative, inheritance suddenly behaves differently than it does 
in any other use of python. I think it's also a shame that you end up 
having to do metaclass programming just to get what is, everywhere else, 
a normal feature of python. For me, it would have been better to have a 
special attribute to say "this inheritance is table inheritance" rather 
than having to do that just to say "please make this class behave like a 
normal python class"...



So again, and at this point you should just consider this research towards
an actual feature being added to SQLAlchemy, you need to roll this into
the DeclarativeMixin recipe at
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DeclarativeMixins .

If/when we add this to SQLA, it will be: any class that has
__declarative_mixin__ = True on it will be used to propagate everything to
subclasses.   You can of course stick that on the base you send to
declarative_base() too for the same effect.

What's not yet clear - should this mixin affect an existing
__mapper_args__ or __table_args__ on the subclass, i.e. add its own values
into the collections ?   At the moment I think it should do it for all the
keyword arguments.


Why make work here? Just let these behave as normal python classes would 
treat them...


class A:
  something = {'foo':1}

class B(A):
  something = {'bazz':2}

A python programmer would expect B's something to be {'bazz':2} not 
{'foo':1,'bazz':2}.


If they wanted that, they'd do:

# this should be in the python core :-S
class classproperty(property):
def __get__(desc, self, cls):
return desc.fget(cls)

from unittest import TestCase

class Tests(TestCase):

def test_class_property(self):

class A(object):
something = {'foo':1}

class B(A):

@classproperty
def something(cls):
d = dict(super(B,cls).something)
d.update({'bazz':2})
return d

self.assertEqual(B.something,{
'foo':1,
'bazz':2,
})

cheers,

Chris




Re: [sqlalchemy] draft SQL schema support in SQLAlchemy

2010-02-09 Thread Manlio Perillo

Michael Bayer wrote:
 Manlio Perillo wrote:
 Hi.
 
 I have written a module to implement support for SQL Schema in SQLAlchemy.
 
 The code is available here:
 http://paste.pocoo.org/show/17/
 
 It requires this patch:
 http://paste.pocoo.org/show/175556/
 
 obviously we can't accept the patch, if that's what you're proposing,
 since it removes necessary functionality.   I'm not really sure why
 breaking SQLA core should be necessary.
 

It is necessary because it prevents the before-create MetaData listeners
from doing things that affect how tables are handled.

What is the reason why the entire tables collection should be passed to
the before-create MetaData listeners?

It is not even documented:
http://www.sqlalchemy.org/docs/reference/sqlalchemy/schema.html#sqlalchemy.schema.MetaData


Regards, Manlio




Re: [sqlalchemy] suggestions about SQL SCHEMA handling in SQLAlchemy

2010-02-09 Thread Manlio Perillo

Michael Bayer wrote:
 Manlio Perillo wrote:
 When writing this code I found some problems with SQLAlchemy:

 1) There is no support for SQL Schema at all.

It is ok, for me, if there is no direct support for SQL Schema in
SQLAlchemy, but this is a standard feature, so SQLAlchemy dialects
should:

* define a `supports_schemas` attribute, as a boolean
* define a `has_schema` method, to check if a schema exists

Can I file a ticket with this feature request?
 
 can you describe this for me?   We support schemas from the perspective
 that you can define a table that is in a particular schema, so I don't
 understand the statement "no support for SQL schema at all".
 

The "at all" was unnecessary, sorry.
What I meant was that there is no support to check whether a dialect
supports schemas, or whether a schema is already defined.

 [...]
 
 `has_schema()` is a totally fine method for which we'd accept a patch to
 the Inspector class as well as dialects.
 

Ok, thanks.
I can only provide a patch for PostgreSQL, but it's a starting point.

 
 2) I would like to emulate SQL Schema support in SQLite, using secondary
databases, attached to the main database.

 [...]
The problem is that SQLAlchemy executes these queries **before** my
DDL listener is called, so the query fails because I haven't yet
attached the secondary database.
 
 if your database requires special configuration in order for schemas to
 be present, you would do that before issuing anything with metadata
 creation.   I'd advise using a PoolListener to handle this configuration. 
 It should not be the job of create_all to attach to a particular schema,
 since create_all is not a "configure the database" command - many
 applications don't ever call create_all but still may be using sqlite
 schemas.
 

Please note that the code I have posted is only meant to support my
current usage pattern of SQLAlchemy.

PoolListener is rather low level; if I use one I need to know in
advance which secondary databases to attach.

 
I think that this behaviour is incorrect.
The before-create listeners of a MetaData object should be called
before checking if the tables exist.
 
 I can see the awkwardness here but the current contract of before-create
 is that it means "before I create this list of Tables that I have found
 need creation, and here they are".    It's something we may have to revisit
 if there's really a reason for a "before I do anything" hook - but I don't
 buy the case of "I need to attach schemas in the create_all()" as a
 necessary use case.

Ok, fine for me.
Auto-attaching databases for SQLite is not something I really need, and it
can still be implemented using a PoolListener.

See my reply to the more recent thread about Schema in SQLAlchemy about
the reason why I was assuming that changing current behaviour was fine.



Regards, Manlio




Re: [sqlalchemy] factoring out when using declarative

2010-02-09 Thread Chris Withers

Michael Bayer wrote:

Suppose you had a joined table setup, and you placed a Constraint in the
__table_args__ of the base class.   You certainly don't want that feeding
into the table of the subclass.  So like the case with __mapper_args__,
unconditional inheritance of the attribute is a bad idea.

So again, and at this point you should just consider this research towards
an actual feature being added to SQLAlchemy, you need to roll this into
the DeclarativeMixin recipe at
http://www.sqlalchemy.org/trac/wiki/UsageRecipes/DeclarativeMixins .


To this end, please find attached the mixable.py I'm working on and its 
unit tests.


All the tests bar test_mapper_args_composite pass, and that currently 
blows up with a rather bizarre:


Traceback (most recent call last):
  File test_mixable.py, line 177, in test_mapper_args_composite
class MyModel(Base,MyMixin1,MyMixin2):
  File mixable.py, line 27, in __init__
return DeclarativeMeta.__init__(cls, classname, bases, dict_)
  File 
/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/ext/declarative.py, 
line 561, in __init__

_as_declarative(cls, classname, dict_)
  File 
/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/ext/declarative.py, 
line 554, in _as_declarative
cls.__mapper__ = mapper_cls(cls, table, properties=our_stuff, 
**mapper_args)
  File 
/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/__init__.py, 
line 751, in mapper

return Mapper(class_, local_table, *args, **params)
  File 
/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py, 
line 198, in __init__

self._configure_properties()
  File 
/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py, 
line 514, in _configure_properties

if self._should_exclude(col.key, col.key, local=False):
  File 
/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py, 
line 985, in _should_exclude

if getattr(self.class_, assigned_name, None)\
TypeError: Error when calling the metaclass bases
getattr(): attribute name must be string

Any idea what's causing that? If I move the type_ column and the 
__mapper_args__ to MyModel, the test passes...


(also note evilness required because declarative gets __table_args__ 
from dict_ rather than the cls, where it should ;-) )


cheers,

Chris
import unittest

from mixable import declarative_base
from decorators import classproperty
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.schema import Column
from sqlalchemy.types import Integer, String


class Test(unittest.TestCase):

def setUp(self):
        self.engine = create_engine("sqlite://")
self.Session = sessionmaker(
bind=self.engine,
autoflush=True,
autocommit=False
)

def test_simple(self):

Base = declarative_base()

class MyMixin:
__mixin__ = True
id =  Column(Integer, primary_key=True)

def foo(self):
return 'bar'+str(self.id)

class MyModel(Base,MyMixin):
__tablename__='test'
name = Column(String(1000), nullable=False, index=True)

Base.metadata.create_all(self.engine)

session = self.Session()
session.add(MyModel(name='testing'))
session.commit()

session = self.Session()
obj = session.query(MyModel).one()
self.assertEqual(obj.id,1)
self.assertEqual(obj.name,'testing')
self.assertEqual(obj.foo(),'bar1')


def test_hierarchical_bases(self):

Base = declarative_base()

class MyMixinParent:
__mixin__ = True
id =  Column(Integer, primary_key=True)

def foo(self):
return 'bar'+str(self.id)

class MyMixin(MyMixinParent):
baz = Column(String(1000), nullable=False, index=True)

class MyModel(Base,MyMixin):
__tablename__='test'
name = Column(String(1000), nullable=False, index=True)

Base.metadata.create_all(self.engine)

session = self.Session()
session.add(MyModel(name='testing',baz='fu'))
session.commit()

session = self.Session()
obj = session.query(MyModel).one()
self.assertEqual(obj.id,1)
self.assertEqual(obj.name,'testing')
self.assertEqual(obj.foo(),'bar1')
self.assertEqual(obj.baz,'fu')

def test_table_args_inherited(self):

Base = declarative_base()

class MyMixin:
__mixin__ = True
__table_args__ = {'mysql_engine':'InnoDB'} 

class MyModel(Base,MyMixin):
__tablename__='test'
id =  Column(Integer, primary_key=True)

self.assertEqual(MyModel.__table__.kwargs,{'mysql_engine': 'InnoDB'})

def 

[sqlalchemy] Re: joining sessions / two phase commit

2010-02-09 Thread Ants Aasma
On Feb 9, 10:48 am, Chris Withers ch...@simplistix.co.uk wrote:
 Okay, but what does the transaction manager do that's different from
 calling commit on session1 and session2 in order?

A TM should write to durable storage when a transaction group is
prepared before committing any transaction. When doing crash recovery
this information must be used to decide whether to commit or rollback
the rest of the prepared transactions.
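The ordering described above can be sketched in plain Python. This is a toy illustration of what a transaction manager does differently from calling commit on each session in turn (prepare everything, durably log the decision, then commit everything); none of these class or method names are real SQLAlchemy or zope.transaction API.

```python
# Toy two-phase commit coordinator.  "Resource" stands in for a
# database session; "log" stands in for durable storage.
class Resource(object):
    def __init__(self):
        self.state = 'active'

    def prepare(self):
        # phase 1: flush work, hold locks, promise to be able to commit
        self.state = 'prepared'

    def commit(self):
        assert self.state == 'prepared'
        self.state = 'committed'

    def rollback(self):
        self.state = 'rolled back'


class ToyTransactionManager(object):
    def __init__(self):
        self.log = []  # must survive a crash in a real TM

    def commit(self, resources):
        for r in resources:          # phase 1: prepare everything
            r.prepare()
        self.log.append('commit')    # the durable decision point
        for r in resources:          # phase 2: commit everything
            r.commit()

    def recover(self, resources):
        # After a crash, transactions stuck in the prepared state are
        # resolved according to the logged decision.
        decision = self.log[-1] if self.log else 'rollback'
        for r in resources:
            if r.state == 'prepared':
                if decision == 'commit':
                    r.commit()
                else:
                    r.rollback()
```

Calling session1.commit() then session2.commit() by hand skips the logged decision point, which is exactly what makes crash recovery impossible in that scheme.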

  The second transaction will remain in a prepared
  state (modifications not visible to other transactions, still holding
  any locks), even after a database crash and restart.

 So how do you un-f?$k it then? ;-)

For MySQL you can obtain the list of prepared transactions with the XA
RECOVER command. You can then use XA COMMIT or XA ROLLBACK commands as
appropriate to handle them.
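The recovery decision itself is mechanical once the TM's log is available. A hypothetical helper (the function and its argument names are made up, not part of any library) might pair each xid reported by XA RECOVER with the statement to issue:

```python
# Given the xids reported by MySQL's "XA RECOVER" and the set of xids
# the transaction manager's durable log marked as decided-commit,
# return the statement to issue for each stuck transaction.
def recovery_actions(recovered_xids, committed_xids):
    return [
        (xid, "XA COMMIT" if xid in committed_xids else "XA ROLLBACK")
        for xid in recovered_xids
    ]
```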

 I know that zope's transaction package aims to do just this, I wonder if
 anyone's used that, or anything else, with SA to solve this problem?

I've only used two phase for the relatively trivial case of doing
filesystem updates atomically along with metadata updates in the
database. The zope transaction package doesn't seem to have any
disaster recovery story, but maybe I'm missing something. Depending on
your exact environment and requirements you might also find an easier
way, but be very, very careful with distributed transactions. It's a
really hard problem to get 100% correct in the face of arbitrary
software, network and hardware failures.

Ants




Re: [sqlalchemy] factoring out when using declarative

2010-02-09 Thread Michael Bayer
Chris Withers wrote:
 Michael Bayer wrote:
 Suppose you had a joined table setup, and you placed a Constraint in the
 __table_args__ of the base class.   You certainly don't want that
 feeding
 into the table of the subclass.  So like the case with __mapper_args__,
 unconditional inheritance of the attribute is a bad idea.

 Because SA has chosen to model table inheritance with python
 inheritance, I'd agree. However, when __mixin__ has been specified on a
 class, it should be treated as just that; a mixin, that behaves exactly
 like any normal python class, inheriting everything from its bases, etc...

yes agreed that's the opposite of unconditional ;)



 As an aside, for me it's a shame that declarative chose to model table
 inheritance with python inheritance.

Um, so if I wanted to model table inheritance in my class hierarchy, I
*shouldn't* use subclassing?   How would that work?

 I think it's confusing that, when
 using declarative, inheritance suddenly behaves differently than it does
 in any other use of python.

Or are you saying it's a shame that you need a __marker__ to indicate one
kind of inheritance and not the other?   Don't you think if it were built
the other way around, I'd be fielding just as many complaints for the
other direction?

 I think it's also a shame that you end up
 having to do metaclass programming just to get what is, everywhere else,
 a normal feature of python.

This is a supremely unfair complaint.  I have stated on several occasions,
including in my previous email:

 So again, and at this point you should just consider this research
 towards
 an actual feature being added to SQLAlchemy

... not to mention have updated the very top of the actual wiki page on
this recipe, that this is a feature under consideration for implementation,
so that you could be spared the great expense of having to cut and paste a
ten line metaclass into your application.   I hope you can bear the
"shame" until I have the time to implement your latest request.


 For me, it would have been better to have a
 special attribute to say this inheritance is table inheritance rather
 than having to do that just to say please make this class behave like a
 normal python class...

again, __marker__ here, __marker__ there, it's got to be one place or the
other, or both.   Keep in mind that what you call "normal" behavior means
that Column-based attributes would magically copy themselves onto a
subclass.   The simple fact is that declarative is a metaclass-based
approach.  The whole point is that the class is not normal.




Re: [sqlalchemy] suggestions about SQL SCHEMA handling in SQLAlchemy

2010-02-09 Thread Michael Bayer
Manlio Perillo wrote:

 PoolListener is rather low level; if I use them I need to know in
 advance the secondary databases to attach.

What precisely is the use case for an application that will use a database
X, will be using remote schemas P, Q, and R, but it's unreasonable for
those schemas P, Q and R to be configured along with the database
connectivity ?  I see an easy feature add to the SQLite dialect that would
allow:

create_engine('sqlite:///X.db', schemas=['p', 'q', 'r'])

Alternatively, I can even see using connection options for this, if you
want to attach to those schemas per connection:

conn = engine.connect().execution_options(sqlite_attach_schemas=['p', 'q',
'r'])
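The attach-per-connection idea can be demonstrated with the stdlib sqlite3 module alone; a PoolListener's connect() hook would run something like the attach_schemas() helper below on every new DBAPI connection (the helper name and the schema name "p" are made up for illustration):

```python
import os
import sqlite3
import tempfile

def attach_schemas(conn, schemas):
    # What a per-connection hook would run: attach each secondary
    # database file under a schema-like name.  The schema name cannot
    # be a bound parameter, so it is interpolated directly.
    for name, path in schemas.items():
        conn.execute("ATTACH DATABASE ? AS %s" % name, (path,))

# demo: a main database plus one "schema" p living in its own file
main = sqlite3.connect(":memory:")
tmpdir = tempfile.mkdtemp()
p_path = os.path.join(tmpdir, "p.db")
sqlite3.connect(p_path).close()          # create the secondary file
attach_schemas(main, {"p": p_path})
main.execute("CREATE TABLE p.t (x INTEGER)")
main.execute("INSERT INTO p.t VALUES (1)")
row = main.execute("SELECT x FROM p.t").fetchone()
```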















Re: [sqlalchemy] factoring out when using declarative

2010-02-09 Thread Michael Bayer
Chris Withers wrote:
 All the tests bar test_mapper_args_composite pass, and that currently
 blows up with a rather bizarre:

There's a Column there getting sent to the mapper that doesn't yet have a
key.  It's None, so you get that error.   Declarative sets up those
keys/names on the columns in the metaclass so somehow this test is messing
up the order of initialization.  I'll try to have a deeper look.







Re: [sqlalchemy] factoring out when using declarative

2010-02-09 Thread Michael Bayer
Michael Bayer wrote:
 Chris Withers wrote:
 All the tests bar test_mapper_args_composite pass, and that currently
 blows up with a rather bizarre:

 There's a Column there getting sent to the mapper that doesn't yet have a
 key.  It's None, so you get that error.   Declarative sets up those
 keys/names on the columns in the metaclass so somehow this test is messing
 up the order of initialization.  I'll try to have a deeper look.

I think the likely cause is this:

class MyMixin1:
type_ = Column(String(50))
__mapper_args__=dict(polymorphic_on=type_)
__mixin__ = True


the metaclass makes a copy of type_, but that doesn't access type_
inside of __mapper_args__.   So a full feature here would need to dig into
 __mapper_args__, identify all Column objects that are on the mixin class,
and ensure the copies are placed in those collections.   Probably as we go
through each column, store old/new in a lookup dictionary, then do a copy
of __mapper_args__ using that lookup for column objects located.
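The old/new substitution step can be sketched generically: copy each column once, then rebuild the args dict swapping originals for their copies. "Col" below stands in for a sqlalchemy Column; none of these names are real SQLAlchemy API.

```python
# Illustrative sketch of remapping __mapper_args__ through an
# old -> new column lookup.
class Col(object):
    def copy(self):
        return Col()

def remap_mapper_args(mapper_args, column_copies):
    # rebuild the dict, swapping any original column for its copy
    return dict(
        (k, column_copies.get(v, v))
        for k, v in mapper_args.items()
    )

type_ = Col()
copies = {type_: type_.copy()}           # old -> new lookup
new_args = remap_mapper_args({'polymorphic_on': type_}, copies)
```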

this is the kind of messy scenario that makes it very time consuming for
some recipes to become supported features.









Re: [sqlalchemy] factoring out when using declarative

2010-02-09 Thread Michael Bayer
Michael Bayer wrote:
 Chris Withers wrote:
 All the tests bar test_mapper_args_composite pass, and that currently
 blows up with a rather bizarre:

 There's a Column there getting sent to the mapper that doesn't yet have a
 key.  It's None, so you get that error.   Declarative sets up those
 keys/names on the columns in the metaclass so somehow this test is messing
 up the order of initialization.  I'll try to have a deeper look.

this fixes it:

class MixinMeta(DeclarativeMeta):
def __init__(cls, classname, bases, dict_):
for base in bases:
names = dir(base)
            if '__mixin__' in names:
                to_mix = []
                column_copies = dict(
                    (c, c.copy())
                    for c in [getattr(base, name) for name in names]
                    if isinstance(c, Column)
                )

for name in names:
obj = getattr(base,name)
if isinstance(obj, Column):
to_mix.append((name,column_copies[obj]))

elif name=='__table_args__' and name not in cls.__dict__:
setattr(cls,name,obj)

elif name == '__mapper_args__':
d = {}
for k, v in base.__mapper_args__.iteritems():
if v in column_copies:
v = column_copies[v]
d[k] = v
cls.__mapper_args__ = d
dict_.update(to_mix)

# work around declarative evilness
# ..with more evilness :-(
if '__table_args__' in cls.__dict__:
cls.__table_args__=cls.__table_args__

return DeclarativeMeta.__init__(cls, classname, bases, dict_)


the __mapper_args__ behavior still needs to be fixed in core as per #1393
as well, not sure if that changes the requirements for the recipe.
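The metaclass key-assignment behavior described above ("Declarative sets up those keys/names on the columns in the metaclass") can be observed in isolation. A minimal sketch, using the modern `declarative_base` import; the class and attribute names are illustrative:

```python
from sqlalchemy import Column, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

c = Column(Integer)   # no name given: .key and .name are still None
print(c.key)          # None

class T(Base):
    __tablename__ = 't'
    id = Column(Integer, primary_key=True)
    val = c           # the declarative metaclass assigns key/name 'val'

print(c.key)          # 'val'
```

A Column that reaches the mapper before this assignment happens has a key of None, which is what produces the `getattr(): attribute name must be string` error below.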







 Traceback (most recent call last):
   File "test_mixable.py", line 177, in test_mapper_args_composite
     class MyModel(Base, MyMixin1, MyMixin2):
   File "mixable.py", line 27, in __init__
     return DeclarativeMeta.__init__(cls, classname, bases, dict_)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/ext/declarative.py", line 561, in __init__
     _as_declarative(cls, classname, dict_)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/ext/declarative.py", line 554, in _as_declarative
     cls.__mapper__ = mapper_cls(cls, table, properties=our_stuff, **mapper_args)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/__init__.py", line 751, in mapper
     return Mapper(class_, local_table, *args, **params)
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py", line 198, in __init__
     self._configure_properties()
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py", line 514, in _configure_properties
     if self._should_exclude(col.key, col.key, local=False):
   File "/usr/lib/python2.5/site-packages/SQLAlchemy-0.5.8-py2.5.egg/sqlalchemy/orm/mapper.py", line 985, in _should_exclude
     if getattr(self.class_, assigned_name, None)\
 TypeError: Error when calling the metaclass bases
     getattr(): attribute name must be string

 Any idea what's causing that? If I move the type_ columns and the
 __mapper_args__ to MyModel, the test passes...

 (also note evilness required because declarative gets __table_args__
 from dict_ rather than the cls, where it should ;-) )

 cheers,

 Chris
 --
 You received this message because you are subscribed to the Google
 Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



 --
 You received this message because you are subscribed to the Google Groups
 sqlalchemy group.
 To post to this group, send email to sqlalch...@googlegroups.com.
 To unsubscribe from this group, send email to
 sqlalchemy+unsubscr...@googlegroups.com.
 For more options, visit this group at
 http://groups.google.com/group/sqlalchemy?hl=en.



-- 
You received this message because you are subscribed to the Google Groups 
sqlalchemy group.
To post to this group, send email to sqlalch...@googlegroups.com.
To unsubscribe from this group, send email to 
sqlalchemy+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/sqlalchemy?hl=en.



[sqlalchemy] Problem with many-to-many relation on the same table

2010-02-09 Thread drakkan
Hi all,
my class Clients inherits from BaseApparati,
and a client can have multiple relations with other clients.
I use the class ClientCrossRef as the association table for the relation.


class BaseApparati(Base, Dictionaryable):
    __tablename__ = 'baseapparati'
    id = Column(Integer, Sequence((__tablename__ + 'id_seq')[-30:]),
                autoincrement=True, primary_key=True)
    realtype = Column(Unicode(30), nullable=False, index=True)

    __mapper_args__ = {'polymorphic_on': realtype,
                       'polymorphic_identity': 'baseapparati'}


class Clients(BaseApparati):
    __tablename__ = 'clients'
    __mapper_args__ = {'polymorphic_identity': 'client'}

    id = Column(Integer, ForeignKey('baseapparati.id'), primary_key=True)


class ClientCrossRef(Base):
    __tablename__ = 'clientcrossref'

    id = Column(Integer, Sequence((__tablename__ + 'id_seq')[-30:]),
                autoincrement=True, primary_key=True)
    master_id = Column(Integer, ForeignKey('clients.id'))
    slave_id = Column(Integer, ForeignKey('clients.id'))

    master = relation(Clients, uselist=False,
                      primaryjoin=master_id == Clients.id,
                      foreign_keys=Clients.id)
    slave = relation(Clients, uselist=False,
                     primaryjoin=slave_id == Clients.id,
                     foreign_keys=Clients.id)

    def __init__(self, master, slave):
        self.master = master
        self.slave = slave



When I try to commit a new ClientCrossRef, SQLAlchemy does not seem to
know about the objects I passed to the constructor.

Example:

>>> v = sa.Session.query(sa.Clients).all()
>>> v[0]
Client('client_1')
>>> v[1]
Client('client_2')

so v[0] and v[1] are instances of the Clients model.


>>> sa.Session.add(sa.ClientCrossRef(v[0], v[1]))
>>> sa.Session.commit()

this is the SQL generated:

INSERT INTO clientcrossref (id, master_id, slave_id) VALUES
(:id, :master_id, :slave_id)
{'master_id': None, 'id': 3, 'slave_id': None}

As you can see, master_id and slave_id are None, so an
exception is raised.


If I change the constructor of ClientCrossRef to accept the ids of the
objects (not the objects themselves!)
it obviously works:

def __init__(self, master_id, slave_id):
self.master_id = master_id
self.slave_id = slave_id

>>> sa.Session.add(sa.ClientCrossRef(v[0].id, v[1].id))
>>> sa.Session.commit()

INSERT INTO clientcrossref (id, master_id, slave_id) VALUES
(:id, :master_id, :slave_id)
{'master_id': 1, 'id': 3, 'slave_id': 2}



So I think the problem is the definition of the relations 'master' and
'slave'.
Can anyone help me? I need to pass objects to the constructor, not
their ids.

Thanks
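For reference, a minimal sketch of the usual pattern for two relations pointing at the same table: each relation names its *local* foreign-key column in `foreign_keys`, so the flush knows which column to populate from which assigned object. This is an assumption about the fix, not something stated in the thread, and it is written against the modern declarative API with illustrative names:

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Client(Base):
    __tablename__ = 'clients'
    id = Column(Integer, primary_key=True)

class ClientCrossRef(Base):
    __tablename__ = 'clientcrossref'
    id = Column(Integer, primary_key=True)
    master_id = Column(Integer, ForeignKey('clients.id'))
    slave_id = Column(Integer, ForeignKey('clients.id'))

    # Each relation names its local FK column; without this, SQLAlchemy
    # cannot tell which of the two FKs belongs to which relation.
    master = relationship(Client, foreign_keys=[master_id])
    slave = relationship(Client, foreign_keys=[slave_id])

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    a, b = Client(), Client()
    session.add_all([a, b])
    session.add(ClientCrossRef(master=a, slave=b))
    session.commit()
    xref = session.query(ClientCrossRef).one()
    # both FK columns were populated from the assigned objects
    master_id, slave_id = xref.master_id, xref.slave_id

print(master_id is not None and slave_id is not None)  # True
```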





Re: [sqlalchemy] suggestions about SQL SCHEMA handling in SQLAlchemy

2010-02-09 Thread Manlio Perillo

Michael Bayer ha scritto:
 Manlio Perillo wrote:
 PoolListener is rather low level; if I use them I need to know in
 advance the secondary databases to attach.
 
 What precisely is the use case for an application that will use a database
 X, will be using remote schemas P, Q, and R, but its unreasonable for
 those schemas P, Q and R to be configured along with the database
 connectivity ? 

The reason is that I'm abusing the MetaData object to implement
namespace (namespace = SQL schema) support.

So it is natural (for me) to create the schemas as a DDL attached to the
MetaData.
Note that I'm also abusing SQLite attach database support, in order to
emulate schemas.
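The attach-based emulation can be sketched at the plain DBAPI level; once a second database is attached, its tables are addressable under a schema-like prefix. The schema name `p` and table `t` below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
# attach a second (in-memory) database under the name 'p';
# it now behaves like a schema prefix
conn.execute("ATTACH DATABASE ':memory:' AS p")
conn.execute("CREATE TABLE p.t (x INTEGER)")
conn.execute("INSERT INTO p.t VALUES (1)")
print(conn.execute("SELECT x FROM p.t").fetchone()[0])  # 1
```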

 I see an easy feature add to the SQLite dialect that would
 allow:
 
 create_engine('sqlite:///X.db', schemas=['p', 'q', 'r'])
 

I'm not sure SQLAlchemy should implement a high-level interface (schemas)
on top of a low-level SQLite feature.
You would have to impose an algorithm that associates a schema name with
a database file name.

 Alternatively, I can even see using connection options for this, if you
 want to attach to those schemas per connection:
 
 conn = engine.connect().execution_options(sqlite_attach_schemas=['p', 'q',
 'r'])
 


Regards  Manlio




[sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-09 Thread Kent
I am on version 0.5.8.

As far as how upset it is making me: well, I certainly have no right
to demand this very nice, free software be enhanced or changed: I'm
just grateful for it.

We will be supporting clients on webservers that are removed by a long
distance from the database server, so I would like to limit the round
trips as much as is feasible...

I've stripped out almost everything and left the logic in a simple case
that reproduces the behavior.  Here is the script that demonstrates it:

=

from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('postgres://dbuser:dbu...@localhost:5444/
dbuser',echo=True)
metadata = MetaData()
Session = sessionmaker(bind=engine)
session = Session()

order_table = Table("orders", metadata,
    Column("orderid", Unicode, primary_key=True)
)

orderdetail_table = Table("orderdetails", metadata,
    Column("orderid", Unicode, ForeignKey('orders.orderid'),
           primary_key=True),
    Column("lineid", Integer, primary_key=True),
    Column("saleprice", Numeric, nullable=False)
)

class Order(object):
    pass

class OrderDetail(object):
    pass

order_mapper = mapper(Order, order_table,
        properties=dict(orderdetails=relation(OrderDetail,
                        cascade='all,delete-orphan',
                        single_parent=True,
                        lazy=False,
                        backref=backref('parentorder',
                                cascade='refresh-expire,expunge'))))

orderdetail_mapper = mapper(OrderDetail, orderdetail_table)

metadata.create_all(engine)

o = Order()
o.orderid = '0206001A134'  # this order exists in the database - you'll
                           # need to add it to the DB first

line1 = OrderDetail()      # line exists in database - you'll need to
                           # add it to the DB first
line1.orderid = '0206001A134'
line1.lineid = '15'

line2 = OrderDetail()      # new line does not exist in database
line2.orderid = '0206001A134'

o.orderdetails = [line1, line2]

#
#
# Question a above - the following merge results in 3 SELECT
statements, but the first
# is an eagerly loaded query joined with orderdetails.  So, unless the
JOIN returned fewer rows
# (for example, an inner join instead of outer was used), all the
orderdetails should
# already be in existence as persistent objects:
merged=session.merge(o)


merged in session.new#this order exists in the database

merged.orderdetails[0]
merged.orderdetails[0] in session.new  # already in database (in new =
False)

merged.orderdetails[1]
merged.orderdetails[1] in session.new  # not yet in database (in new =
True)

#
# Question b:
# Why does this issue another select?  The object should have been
eagerly loaded,
# but even if not that, it was later reSELECTED during the merge()
merged.orderdetails[0].saleprice


#
# Question c:
# Are there databases that allow part of a primary key to be undefined
(None)?
# That is a foreign concept to me, so I expected this object would
realize it
# needn't query the database.
merged.orderdetails[1].saleprice

=


Thanks in advance,

Kent





On Feb 8, 9:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 8, 2010, at 4:12 PM, Kent wrote:

  Ok, here are my questions:

  a) The merge eagerloads the order, along with its lines, but then,
  directly afterward, issues two additional SELECT statements for the
  two lines, even though these were already eagerly-loaded.  That
  surprised me.  Why is that occurring?

 I dont know.  I would need more than code fragments to reproduce your 
 behavior.   (nor do I know what version you're on).   It doesn't reproduce 
 with a simple test.



  b) When I ask for the property .saleprice on the order line, another
  SELECT statement is issued.  Why does that occur when it was eagerly
  loaded already?

 Same.  If the value is in __dict__ it would not issue another load.



  c) In the case of line2, can SQLAlchemy be made to realize that part
  of the primary key is not set and therefore there is no reason to
  attempt a fetch from the database?  It already detected this was a new
  record during the merge.

 the fetch for None, meaning issuing a fetch when the primary key was
 completely None, was resolved in a recent 0.5 version, probably 0.5.8.
 However, a partial primary key is considered to be valid.   There is a flag
 on the mapper() called allow_null_pks, which in 0.5 is set to False by
 default - it means that a partial primary key is not valid.    That flag is
 not in fact checked by merge() in this case, because the flag was
 already being removed in 0.6 by the time this fix went into place.    The
 flag only led to confusion over and over again when users mapped to
 outerjoins and didn't receive rows.   Whereas nobody ever complained about
 merge issuing a load for None as a key - the issue was fixed because I
 noticed it myself.   So you're the first person to ever

Re: [sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-09 Thread Michael Bayer
Kent wrote:
 I am on version 0.5.8.

part of your issue is this:

line1.lineid = '15'

should be this:

line1.lineid = 15

This is because the value comes back from the DB as a numeric, not a
string, producing the wrong identity key ((<class '__main__.OrderDetail'>,
('0206001A134', '15')) vs (<class '__main__.OrderDetail'>,
(u'0206001A134', 15))).


The merge then issues the eager load for the lead order + 1 detail, and a
second select for the additional detail.
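A simplified sketch of the mismatch in plain Python, with no database involved (the real identity key carries more detail than this, but the comparison failure is the same):

```python
# The ORM identity map keys on (mapped class, primary-key tuple).
class OrderDetail(object):  # stand-in for the mapped class
    pass

# The user-supplied key holds str '15'; the key of the row loaded from
# the DB holds int 15.  Python compares them as unequal, so the merged
# object and the loaded row land under different identity keys.
key_from_user = (OrderDetail, ('0206001A134', '15'))
key_from_db = (OrderDetail, (u'0206001A134', 15))
print(key_from_user == key_from_db)  # False
```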






 As far as how upset it is making me: well, I certainly have no right
 to demand this very nice, free software be enhanced or changed: I'm
 just grateful for it.

 We will be supporting clients on webservers that are removed by a long
 distance from the database server, so I would like to limit the round
 trips as much as is feasible...

 I've taken out most everything and left the logic in a simple case to
 create the behavior.  Here is the script that will demonstrate:

 =

 from sqlalchemy import *
 from sqlalchemy.orm import *

 engine = create_engine('postgres://dbuser:dbu...@localhost:5444/
 dbuser',echo=True)
 metadata = MetaData()
 Session = sessionmaker(bind=engine)
 session = Session()

 order_table = Table(orders, metadata,
 Column(orderid, Unicode, primary_key=True)
 )

 orderdetail_table = Table(orderdetails,metadata,
 Column(orderid, Unicode, ForeignKey('orders.orderid'),
 primary_key=True),
 Column(lineid, Integer, primary_key=True),
 Column(saleprice, Numeric, nullable=False)
 )

 class Order(object):
 pass

 class OrderDetail(object):
 pass

 order_mapper = mapper(Order, order_table,
 properties=dict(orderdetails=relation(OrderDetail,
 cascade='all,delete-orphan',
 single_parent=True,
 lazy=False,
 backref=backref('parentorder',
 cascade='refresh-expire,expunge'

 orderdetail_mapper = mapper(OrderDetail, orderdetail_table)

 metadata.create_all(engine)

 o=Order()
 o.orderid = '0206001A134'  #this order exists in the database - You'll
 need to set add it to the DB

 line1=OrderDetail() #line exists in database - You'll need to
 set add it to the DB
 line1.orderid = '0206001A134'
 line1.lineid = '15'

 line2=OrderDetail() #new line does not exist in database
 line2.orderid = '0206001A134'

 o.orderdetails = [line1, line2]

 #
 #
 # Question a above - the following merge results in 3 SELECT
 statements, but the first
 # is an eagerly loaded query joined with orderdetails.  So, unless the
 JOIN returned fewer rows
 # (for example, an inner join instead of outer was used), all the
 orderdetails should
 # already be in existence as persistent objects:
 merged=session.merge(o)


 merged in session.new#this order exists in the database

 merged.orderdetails[0]
 merged.orderdetails[0] in session.new  # already in database (in new =
 False)

 merged.orderdetails[1]
 merged.orderdetails[1] in session.new  # not yet in database (in new =
 True)

 #
 # Question b:
 # Why does this issue another select?  The object should have been
 eagerly loaded,
 # but even if not that, it was later reSELECTED during the merge()
 merged.orderdetails[0].saleprice


 #
 # Question c:
 # Are there databases that allow part of a primary key to be undefined
 (None)?
 # That is a foreign concept to me, so I expected this object would
 realize it
 # needn't query the database.
 merged.orderdetails[1].saleprice

 =


 Thanks in advance,

 Kent





 On Feb 8, 9:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Feb 8, 2010, at 4:12 PM, Kent wrote:

  Ok, here are my questions:

  a) The merge eagerloads the order, along with its lines, but then,
  directly afterward, issues two additional SELECT statements for the
  two lines, even though these were already eagerly-loaded.  That
  surprised me.  Why is that occurring?

 I dont know.  I would need more than code fragments to reproduce your
 behavior.   (nor do I know what version you're on).   It doesn't
 reproduce with a simple test.



  b) When I ask for the property .saleprice on the order line, another
  SELECT statement is issued.  Why does that occur when it was eagerly
  loaded already?

 Same.  If the value is in __dict__ it would not issue another load.



  c) In the case of line2, can SQLAlchemy be made to realize that part
  of the primary key is not set and therefore there is no reason to
  attempt a fetch from the database?  It already detected this was a new
  record during the merge.

 the fetch for None, meaning issuing a fetch when the primary key was
 completely None, was resolved in a recent 0.5 version, probably 0.5.8.
  However, a partial primary key is considered to be valid.   There is a
 flag on the mapper() called allow_null_pks=True which in 0.5 is set 

[sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-09 Thread Kent
Ah ha.  Thanks for tracking that down, makes sense.

On Feb 9, 2:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Kent wrote:
  I am on version 0.5.8.

 part of your issue is this:

 line1.lineid = '15'

 should be this:

 line1.lineid = 15

 This because the value comes back from the DB as a numeric, not a string,
 producing the wrong identity key ( (class '__main__.OrderDetail',
 ('0206001A134', '15')) vs (class '__main__.OrderDetail',
 (u'0206001A134', 15)) ).

 The merge then issues the eager load for the lead order + 1 detail, and a
 second select for the additional detail.



  As far as how upset it is making me: well, I certainly have no right
  to demand this very nice, free software be enhanced or changed: I'm
  just grateful for it.

  We will be supporting clients on webservers that are removed by a long
  distance from the database server, so I would like to limit the round
  trips as much as is feasible...

  I've taken out most everything and left the logic in a simple case to
  create the behavior.  Here is the script that will demonstrate:

  =

  from sqlalchemy import *
  from sqlalchemy.orm import *

  engine = create_engine('postgres://dbuser:dbu...@localhost:5444/
  dbuser',echo=True)
  metadata = MetaData()
  Session = sessionmaker(bind=engine)
  session = Session()

  order_table = Table(orders, metadata,
      Column(orderid, Unicode, primary_key=True)
  )

  orderdetail_table = Table(orderdetails,metadata,
      Column(orderid, Unicode, ForeignKey('orders.orderid'),
  primary_key=True),
      Column(lineid, Integer, primary_key=True),
      Column(saleprice, Numeric, nullable=False)
  )

  class Order(object):
      pass

  class OrderDetail(object):
      pass

  order_mapper = mapper(Order, order_table,
          properties=dict(orderdetails=relation(OrderDetail,
                          cascade='all,delete-orphan',
                          single_parent=True,
                          lazy=False,
                          backref=backref('parentorder',
                                  cascade='refresh-expire,expunge'

  orderdetail_mapper = mapper(OrderDetail, orderdetail_table)

  metadata.create_all(engine)

  o=Order()
  o.orderid = '0206001A134'  #this order exists in the database - You'll
  need to set add it to the DB

  line1=OrderDetail()         #line exists in database - You'll need to
  set add it to the DB
  line1.orderid = '0206001A134'
  line1.lineid = '15'

  line2=OrderDetail()         #new line does not exist in database
  line2.orderid = '0206001A134'

  o.orderdetails = [line1, line2]

  #
  #
  # Question a above - the following merge results in 3 SELECT
  statements, but the first
  # is an eagerly loaded query joined with orderdetails.  So, unless the
  JOIN returned fewer rows
  # (for example, an inner join instead of outer was used), all the
  orderdetails should
  # already be in existence as persistent objects:
  merged=session.merge(o)

  merged in session.new    #this order exists in the database

  merged.orderdetails[0]
  merged.orderdetails[0] in session.new  # already in database (in new =
  False)

  merged.orderdetails[1]
  merged.orderdetails[1] in session.new  # not yet in database (in new =
  True)

  #
  # Question b:
  # Why does this issue another select?  The object should have been
  eagerly loaded,
  # but even if not that, it was later reSELECTED during the merge()
  merged.orderdetails[0].saleprice

  #
  # Question c:
  # Are there databases that allow part of a primary key to be undefined
  (None)?
  # That is a foreign concept to me, so I expected this object would
  realize it
  # needn't query the database.
  merged.orderdetails[1].saleprice

  =

  Thanks in advance,

  Kent

  On Feb 8, 9:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Feb 8, 2010, at 4:12 PM, Kent wrote:

   Ok, here are my questions:

   a) The merge eagerloads the order, along with its lines, but then,
   directly afterward, issues two additional SELECT statements for the
   two lines, even though these were already eagerly-loaded. That
   surprised me. Why is that occurring?

  I dont know. I would need more than code fragments to reproduce your
  behavior. (nor do I know what version you're on). It doesn't
  reproduce with a simple test.

   b) When I ask for the property .saleprice on the order line, another
   SELECT statement is issued. Why does that occur when it was eagerly
   loaded already?

  Same. If the value is in __dict__ it would not issue another load.

   c) In the case of line2, can SQLAlchemy be made to realize that part
   of the primary key is not set and therefore there is no reason to
   attempt a fetch from the database? It already detected this was a new
   record during the merge.

  the fetch for None, meaning issuing a fetch when the primary key was
  

[sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-09 Thread Kent
Maybe you're still looking into that, but I still don't understand why
this:

merged.orderdetails[0].saleprice

causes a new issue to the database.

(Also, wondering if some databases allow a primary key to be null...)

Thanks again.


On Feb 9, 2:50 pm, Kent k...@retailarchitects.com wrote:
 Ah ha.  Thanks for tracking that down, makes sense.

 On Feb 9, 2:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  Kent wrote:
   I am on version 0.5.8.

  part of your issue is this:

  line1.lineid = '15'

  should be this:

  line1.lineid = 15

  This because the value comes back from the DB as a numeric, not a string,
  producing the wrong identity key ( (class '__main__.OrderDetail',
  ('0206001A134', '15')) vs (class '__main__.OrderDetail',
  (u'0206001A134', 15)) ).

  The merge then issues the eager load for the lead order + 1 detail, and a
  second select for the additional detail.

   As far as how upset it is making me: well, I certainly have no right
   to demand this very nice, free software be enhanced or changed: I'm
   just grateful for it.

   We will be supporting clients on webservers that are removed by a long
   distance from the database server, so I would like to limit the round
   trips as much as is feasible...

   I've taken out most everything and left the logic in a simple case to
   create the behavior.  Here is the script that will demonstrate:

   =

   from sqlalchemy import *
   from sqlalchemy.orm import *

   engine = create_engine('postgres://dbuser:dbu...@localhost:5444/
   dbuser',echo=True)
   metadata = MetaData()
   Session = sessionmaker(bind=engine)
   session = Session()

   order_table = Table(orders, metadata,
       Column(orderid, Unicode, primary_key=True)
   )

   orderdetail_table = Table(orderdetails,metadata,
       Column(orderid, Unicode, ForeignKey('orders.orderid'),
   primary_key=True),
       Column(lineid, Integer, primary_key=True),
       Column(saleprice, Numeric, nullable=False)
   )

   class Order(object):
       pass

   class OrderDetail(object):
       pass

   order_mapper = mapper(Order, order_table,
           properties=dict(orderdetails=relation(OrderDetail,
                           cascade='all,delete-orphan',
                           single_parent=True,
                           lazy=False,
                           backref=backref('parentorder',
                                   cascade='refresh-expire,expunge'

   orderdetail_mapper = mapper(OrderDetail, orderdetail_table)

   metadata.create_all(engine)

   o=Order()
   o.orderid = '0206001A134'  #this order exists in the database - You'll
   need to set add it to the DB

   line1=OrderDetail()         #line exists in database - You'll need to
   set add it to the DB
   line1.orderid = '0206001A134'
   line1.lineid = '15'

   line2=OrderDetail()         #new line does not exist in database
   line2.orderid = '0206001A134'

   o.orderdetails = [line1, line2]

   #
   #
   # Question a above - the following merge results in 3 SELECT
   statements, but the first
   # is an eagerly loaded query joined with orderdetails.  So, unless the
   JOIN returned fewer rows
   # (for example, an inner join instead of outer was used), all the
   orderdetails should
   # already be in existence as persistent objects:
   merged=session.merge(o)

   merged in session.new    #this order exists in the database

   merged.orderdetails[0]
   merged.orderdetails[0] in session.new  # already in database (in new =
   False)

   merged.orderdetails[1]
   merged.orderdetails[1] in session.new  # not yet in database (in new =
   True)

   #
   # Question b:
   # Why does this issue another select?  The object should have been
   eagerly loaded,
   # but even if not that, it was later reSELECTED during the merge()
   merged.orderdetails[0].saleprice

   #
   # Question c:
   # Are there databases that allow part of a primary key to be undefined
   (None)?
   # That is a foreign concept to me, so I expected this object would
   realize it
   # needn't query the database.
   merged.orderdetails[1].saleprice

   =

   Thanks in advance,

   Kent

   On Feb 8, 9:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
   On Feb 8, 2010, at 4:12 PM, Kent wrote:

Ok, here are my questions:

a) The merge eagerloads the order, along with its lines, but then,
directly afterward, issues two additional SELECT statements for the
two lines, even though these were already eagerly-loaded. That
surprised me. Why is that occurring?

   I dont know. I would need more than code fragments to reproduce your
   behavior. (nor do I know what version you're on). It doesn't
   reproduce with a simple test.

b) When I ask for the property .saleprice on the order line, another
SELECT statement is issued. Why does that occur when it was eagerly
loaded 

[sqlalchemy] SqlSoup and joined-table inheritance

2010-02-09 Thread Rob
Hi,

I have a question regarding the most efficient way of solving what
should be a fairly simple task.
I have (on a server) a database set up using SQLAlchemy to define a
joined-table (parent/child) inheritance structure (using a
discriminator field as per the documentation etc)

On a client machine, I want to use SqlSoup to insert data (I don't
want to replicate the object model on the client machine); however, the
only way I can see to do this is along the following lines:

parentRecord = db.parent.insert( [fields] )
db.flush()
childRecord = db.child.insert(id=parentRecord.id, [fields])
db.flush()

A flush is required (at some point) to commit the data to the table,
but for multiple inserts the method above is horribly slow.
Am I missing something fundamental?  Is there a faster/better
(possibly correct!) way to do this?

Many thanks,
Rob




Re: [sqlalchemy] SqlSoup and joined-table inheritance

2010-02-09 Thread Michael Bayer
Rob wrote:
 Hi,

 I have a question regarding the most efficient way of solving what
 should be a fairly simple task.
 I have (on a server) a database set up using SQLAlchemy to define a
 joined-table (parent/child) inheritance structure (using a
 discriminator field as per the documentation etc)

 On a client machine, I want to use SqlSoup to insert data (I don't
 want to replicate the object model on the client machine) however, the
 only way I can see to do this is along the following lines:

 parentRecord = db.parent.insert( [fields] )
 db.flush()
 childRecord = db.child.insert(id=parentRecord.id, [fields])
 db.flush()

 A flush is required (at some point) to commit the data to the table,
 but for multiple inserts the method above is horribly slow.
 Am I missing something fundamental?  Is there a faster/better
 (possibly correct!) way to do this?

the fastest way to insert many rows is to execute a table.insert() using
executemany syntax.

Above, I'm fairly certain SqlSoup can also map to a join, which is
probably what you'd want to do here.  The inserts occur in a batch where
it sends the id of the parent table into the child row before inserting
that one.

Alternatively, you could create the list of parentRecords first, do a
single flush(), then get the list of all the child ids and populate those.

If it were me I'd just use a simple joined-table inheritance model with
declarative.  Using reflected tables, it's barely any more typing than what
SqlSoup requires.  SqlSoup is really just a typing saver; in any case it's
still mapping classes to tables, just in a very rigid and
difficult-to-customize way.
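The executemany suggestion above can be sketched as follows. This is a minimal, self-contained example using an in-memory SQLite database and an illustrative `parent` table, written in modern SQLAlchemy syntax (the 0.5 API of the time differed slightly):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, func, select)

engine = create_engine('sqlite://')
metadata = MetaData()
parent = Table('parent', metadata,
               Column('id', Integer, primary_key=True),
               Column('name', String(50)))
metadata.create_all(engine)

# Passing a list of parameter dicts to a single execute() call lets the
# DBAPI use executemany(), sending all rows in one batch instead of one
# statement per row interleaved with flushes.
with engine.begin() as conn:
    conn.execute(parent.insert(),
                 [{'name': 'a'}, {'name': 'b'}, {'name': 'c'}])

with engine.connect() as conn:
    count = conn.execute(select(func.count()).select_from(parent)).scalar()
print(count)  # 3
```

For parent/child inserts, the same batching applies per table: insert all parent rows first, then all child rows carrying the parent ids.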


 Many thanks,
 Rob




[sqlalchemy] Re: SqlSoup and joined-table inheritance

2010-02-09 Thread Rob
Thanks for your super-quick response Michael !!

I have a feeling that (especially given that there are a number of
foreign keys involved in this) I may be best off, as you suggest,
using the mappings/reflected tables.

Many thanks again,
Rob


On Feb 9, 11:17 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Rob wrote:
  Hi,

  I have a question regarding the most efficient way of solving what
  should be a fairly simple task.
  I have (on a server) a database set up using SQLAlchemy to define a
  joined-table (parent/child) inheritance structure (using a
  discriminator field as per the documentation etc)

  On a client machine, I want to use SqlSoup to insert data (I don't
  want to replicate the object model on the client machine) however, the
  only way I can see to do this is along the following lines:

  parentRecord = db.parent.insert( [fields] )
  db.flush()
  childRecord = db.child.insert(id=parentRecord.id, [fields])
  db.flush()

  A flush is required (at some point) to commit the data to the table,
  but for multiple inserts the method above is horribly slow.
  Am I missing something fundamental?  Is there a faster/better
  (possibly correct!) way to do this?

 the fastest way to insert many rows is to execute a table.insert() using
 executemany syntax.

 Above, I'm fairly certain SqlSoup can also map to a join, which is
 probably what you'd want to do here.  The inserts occur in a batch where
 it sends the id of the parent table into the child row before inserting
 that one.

 Alternatively, you could create the list of parentRecords first, do a
 single flush(), then get the list of all the child ids and populate those.

 If it were me I'd just use a simple joined-table inheritance model with
 declarative.  Using reflected tables, it's barely any more typing than what
 SqlSoup requires.  SqlSoup is really just a typing saver; in any case it's
 still mapping classes to tables, just in a very rigid and difficult-to-customize way.





  Many thanks,
  Rob





Re: [sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-09 Thread Michael Bayer
Kent wrote:
 Maybe you're still looking into that, but I still don't understand why
 this:

 merged.orderdetails[0].saleprice

 causes a new query to be issued to the database.

as I mentioned earlier, if the value isn't in __dict__ on a persistent
instance, it will be loaded when accessed.   Your example doesn't set this
field on the object to be merged - so the merge operation actually expires
the value on the loaded object.   At the moment that's because the
attribute missing from __dict__ is assumed to have been expired, so it
expires it on the to-be-merged side as well (otherwise, what would it
merge it to?  particularly if the load=False flag is set).



 (Also, wondering if some databases allow a primary key to be null...)

I've researched this in the past and they don't.   I will look into
re-introducing allow_null_pks as a new flag, allow_partial_pks, which
defaults to True and will be honored by merge(); you would set yours to False.
This is 0.6 only.




 Thanks again.


 On Feb 9, 2:50 pm, Kent k...@retailarchitects.com wrote:
 Ah ha.  Thanks for tracking that down, makes sense.

 On Feb 9, 2:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:

  Kent wrote:
   I am on version 0.5.8.

  part of your issue is this:

  line1.lineid = '15'

  should be this:

  line1.lineid = 15

  This is because the value comes back from the DB as a numeric, not a string,
  producing the wrong identity key ( (<class '__main__.OrderDetail'>,
  ('0206001A134', '15')) vs (<class '__main__.OrderDetail'>, (u'0206001A134', 15)) ).

  The merge then issues the eager load for the lead order + 1 detail, and a
  second select for the additional detail.

   As far as how upset it is making me: well, I certainly have no right
   to demand this very nice, free software be enhanced or changed: I'm
   just grateful for it.

   We will be supporting clients on webservers that are removed by a long
   distance from the database server, so I would like to limit the round
   trips as much as is feasible...

   I've taken out most everything and left the logic in a simple case to
   create the behavior.  Here is the script that will demonstrate:

   =

   from sqlalchemy import *
   from sqlalchemy.orm import *

   engine = create_engine('postgres://dbuser:dbu...@localhost:5444/dbuser', echo=True)
   metadata = MetaData()
   Session = sessionmaker(bind=engine)
   session = Session()

   order_table = Table("orders", metadata,
       Column("orderid", Unicode, primary_key=True)
   )

   orderdetail_table = Table("orderdetails", metadata,
       Column("orderid", Unicode, ForeignKey('orders.orderid'), primary_key=True),
       Column("lineid", Integer, primary_key=True),
       Column("saleprice", Numeric, nullable=False)
   )

   class Order(object):
       pass

   class OrderDetail(object):
       pass

   order_mapper = mapper(Order, order_table,
           properties=dict(orderdetails=relation(OrderDetail,
                           cascade='all,delete-orphan',
                           single_parent=True,
                           lazy=False,
                           backref=backref('parentorder',
                                   cascade='refresh-expire,expunge'))))

   orderdetail_mapper = mapper(OrderDetail, orderdetail_table)

   metadata.create_all(engine)

   o = Order()
   o.orderid = '0206001A134'   # this order exists in the database - you'll need to add it to the DB

   line1 = OrderDetail()       # line exists in database - you'll need to add it to the DB
   line1.orderid = '0206001A134'
   line1.lineid = '15'

   line2 = OrderDetail()       # new line does not exist in database
   line2.orderid = '0206001A134'

   o.orderdetails = [line1, line2]

   #
   # Question a above - the following merge results in 3 SELECT statements, but
   # the first is an eagerly loaded query joined with orderdetails.  So, unless
   # the JOIN returned fewer rows (for example, an inner join instead of outer
   # was used), all the orderdetails should already be in existence as
   # persistent objects:
   merged = session.merge(o)

   merged in session.new    # this order exists in the database

   merged.orderdetails[0]
   merged.orderdetails[0] in session.new  # already in database (in new = False)

   merged.orderdetails[1]
   merged.orderdetails[1] in session.new  # not yet in database (in new = True)

   #
   # Question b:
   # Why does this issue another select?  The object should have been eagerly
   # loaded, but even if not that, it was later reSELECTED during the merge()
   merged.orderdetails[0].saleprice

   #
   # Question c:
   # Are there databases that allow part of a primary key to be undefined (None)?
   # That is a foreign concept to me, so I expected this object would realize it
   # needn't query the database.
   merged.orderdetails[1].saleprice

   =

   Thanks 

[sqlalchemy] SQLAlchemy ForeignKey relation via an intermediate table

2010-02-09 Thread Boda Cydo
Hello everyone!

Please help me with this difficult problem. I can't find a solution
myself:

Suppose that I have a table `Articles`, which has fields `article_id`,
`content` and it contains one article with id `1`.

I also have a table `Categories`, which has fields `category_id`
(primary key), `category_name`, and it contains one category with id
`10`.

Now suppose that I have a table `ArticleProperties`, that adds
properties to `Articles`. This table has fields `article_id`,
`property_name`, `property_value`.

Suppose that I want to create a mapping from `Categories` to
`Articles` via `ArticleProperties` table.

I do this by inserting the following values in the `ArticleProperties`
table: (article_id=1, property_name=category, property_value=10).

Is there any way in SQLAlchemy to express that rows in table
`ArticleProperties` with `property_name` category are actually
FOREIGN KEYS of table `Articles` to table `Categories`?

Any help appreciated!

Thanks, Boda Cydo.




[sqlalchemy] newbie to sqlalchemy :not null constraint

2010-02-09 Thread anusha kadambala
hello all,

I am a newbie in SQLAlchemy and I am thrilled by its features, but I got
stuck on how to write the NOT NULL for the following:

create table organisation(orgcode varchar(30) not null, orgname text not
null, primary key(orgcode));

I have written it using the declarative base as follows, but I don't know how
to impose the NOT NULL constraint:

from sqlalchemy import create_engine
from sqlalchemy import Table, Column, Integer, String, MetaData, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker,scoped_session

engine = create_engine('postgresql:///gkanusha', echo=False)
Base = declarative_base()
class Organisation(Base):
    __tablename__ = 'Organisation'
    orgcode = Column(String, primary_key=True)
    orgname = Column(String)

    def __init__(self, orgcode, orgname):
        self.orgcode = orgcode
        self.orgname = orgname

organisation_table = Organisation.__table__

metadata = Base.metadata
metadata.create_all(engine)
Session = scoped_session(sessionmaker(bind=engine))
Session.commit()

There are other columns also but i have taken two columns for simplicity.

Thanks in advance
-- 


Njoy the share of Freedom :)
Anusha Kadambala




Re: [sqlalchemy] newbie to sqlalchemy :not null constraint

2010-02-09 Thread Michael Trier

 hello all,
 
 I am a newbie in SQLAlchemy and I am thrilled by its features, but I got
 stuck on how to write the NOT NULL for the following:
 
 create table organisation(orgcode varchar(30) not null, orgname text not
 null, primary key(orgcode));
 
 I have written it using the declarative base as follows, but I don't know how
 to impose the NOT NULL constraint:

You want to use the nullable=False argument.

http://www.sqlalchemy.org/docs/reference/sqlalchemy/schema.html?highlight=nullable

Thanks,

Michael




Re: [sqlalchemy] SQLAlchemy ForeignKey relation via an intermediate table

2010-02-09 Thread Michael Bayer

On Feb 9, 2010, at 9:43 PM, Boda Cydo wrote:

 Hello everyone!
 
 Please help me with this difficult problem. I can't find a solution
 myself:
 
 Suppose that I have a table `Articles`, which has fields `article_id`,
 `content` and it contains one article with id `1`.
 
 I also have a table `Categories`, which has fields `category_id`
 (primary key), `category_name`, and it contains one category with id
 `10`.
 
 Now suppose that I have a table `ArticleProperties`, that adds
 properties to `Articles`. This table has fields `article_id`,
 `property_name`, `property_value`.


this sounds like joined table inheritance.   Have you looked over 
http://www.sqlalchemy.org/docs/mappers.html#joined-table-inheritance ?

Otherwise you may consider building a relation() from Article to
ArticleProperties and a second from ArticleProperties to Categories.



 
 Suppose that I want to create a mapping from `Categories` to
 `Articles` via `ArticleProperties` table.
 
 I do this by inserting the following values in the `ArticleProperties`
 table: (article_id=1, property_name=category, property_value=10).
 
 Is there any way in SQLAlchemy to express that rows in table
 `ArticleProperties` with `property_name` category are actually
 FOREIGN KEYS of table `Articles` to table `Categories`?
 
 Any help appreciated!
 
 Thanks, Boda Cydo.
 
 




Re: [sqlalchemy] Re: SQLAlchemy goes back to database when I wouldn't expect it to...

2010-02-09 Thread Michael Bayer

On Feb 9, 2010, at 7:09 PM, Michael Bayer wrote:

 Kent wrote:
 Maybe you're still looking into that, but I still don't understand why
 this:
 
 merged.orderdetails[0].saleprice
 
  causes a new query to be issued to the database.
 
 as I mentioned earlier, if the value isn't in __dict__ on a persistent
 instance, it will be loaded when accessed.   Your example doesn't set this
 field on the object to be merged - so the merge operation actually expires
 the value on the loaded object.   At the moment that's because the
 attribute missing from __dict__ is assumed to have been expired, so it
 expires it on the to-be-merged side as well (otherwise, what would it
 merge it to?  particularly if the load=False flag is set).

if the load flag is set to True, though, I'm not sure this is really ideal
behavior; it seems like resetting any pending state on the
we-know-to-be-loaded attribute might be the better way to go, so ticket 1681 is
a reminder for me to think about this before the 0.6.0 release.



 
 
 
 (Also, wondering if some databases allow a primary key to be null...)
 
 I've researched this in the past and they don't.   I will look into
 re-introducing allow_null_pks as a new flag allow_partial_pks,
 defaults to True, will be honored by merge(), you set yours to False. 
 this is 0.6 only.
 
 
 
 
 Thanks again.
 
 
 On Feb 9, 2:50 pm, Kent k...@retailarchitects.com wrote:
 Ah ha.  Thanks for tracking that down, makes sense.
 
 On Feb 9, 2:25 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 
 Kent wrote:
 I am on version 0.5.8.
 
 part of your issue is this:
 
 line1.lineid = '15'
 
 should be this:
 
 line1.lineid = 15
 
  This is because the value comes back from the DB as a numeric, not a string,
  producing the wrong identity key ( (<class '__main__.OrderDetail'>,
  ('0206001A134', '15')) vs (<class '__main__.OrderDetail'>, (u'0206001A134', 15)) ).
 
  The merge then issues the eager load for the lead order + 1 detail, and a
  second select for the additional detail.
 
 As far as how upset it is making me: well, I certainly have no right
 to demand this very nice, free software be enhanced or changed: I'm
 just grateful for it.
 
  We will be supporting clients on webservers that are removed by a long
  distance from the database server, so I would like to limit the round
  trips as much as is feasible...
 
  I've taken out most everything and left the logic in a simple case to
  create the behavior.  Here is the script that will demonstrate:
 
 =
 
  from sqlalchemy import *
  from sqlalchemy.orm import *

  engine = create_engine('postgres://dbuser:dbu...@localhost:5444/dbuser', echo=True)
  metadata = MetaData()
  Session = sessionmaker(bind=engine)
  session = Session()

  order_table = Table("orders", metadata,
      Column("orderid", Unicode, primary_key=True)
  )

  orderdetail_table = Table("orderdetails", metadata,
      Column("orderid", Unicode, ForeignKey('orders.orderid'), primary_key=True),
      Column("lineid", Integer, primary_key=True),
      Column("saleprice", Numeric, nullable=False)
  )

  class Order(object):
      pass

  class OrderDetail(object):
      pass

  order_mapper = mapper(Order, order_table,
      properties=dict(orderdetails=relation(OrderDetail,
          cascade='all,delete-orphan',
          single_parent=True,
          lazy=False,
          backref=backref('parentorder',
              cascade='refresh-expire,expunge'))))

  orderdetail_mapper = mapper(OrderDetail, orderdetail_table)

  metadata.create_all(engine)

  o = Order()
  o.orderid = '0206001A134'   # this order exists in the database - you'll need to add it to the DB

  line1 = OrderDetail()       # line exists in database - you'll need to add it to the DB
  line1.orderid = '0206001A134'
  line1.lineid = '15'

  line2 = OrderDetail()       # new line does not exist in database
  line2.orderid = '0206001A134'

  o.orderdetails = [line1, line2]

  #
  # Question a above - the following merge results in 3 SELECT statements, but
  # the first is an eagerly loaded query joined with orderdetails.  So, unless
  # the JOIN returned fewer rows (for example, an inner join instead of outer
  # was used), all the orderdetails should already be in existence as
  # persistent objects:
  merged = session.merge(o)

  merged in session.new    # this order exists in the database

  merged.orderdetails[0]
  merged.orderdetails[0] in session.new  # already in database (in new = False)

  merged.orderdetails[1]
  merged.orderdetails[1] in session.new  # not yet in database (in new = True)

  #
  # Question b:
  # Why does this issue another select?  The object should have been eagerly
  # loaded, but even if not that, it was later reSELECTED during the merge()
  merged.orderdetails[0].saleprice

  #
  # Question c:
  # Are there databases that allow part of a primary key to be undefined (None)?
  # That is a foreign concept to me, so 

[sqlalchemy] Problems with 0.6beta1

2010-02-09 Thread Chris
Hi All

Have just upgraded to this version and am having the following issue.

I use the execute method on a ScopedSession to run generic SQL
statements in a TextClause.  The resulting BufferedRowResultProxy
object used to have a list of field keys with all the names of the
fields in the result set.  This appears to have been removed.
How do I get a list of the field names for my query?

Chris
