[sqlalchemy] Sane rowcount

2007-12-19 Thread Lele Gaifax
Hi all,

I'm trying to better understand the rowcount issue, starting from
ticket:370.

Currently Firebird has both 'supports_sane_rowcount' and
'supports_sane_multi_rowcount' set to False. I don't exactly get what
"sane" means there, given the results of my tests (below).

Maybe something changed since my last checks (in fact, AFAIK the multi
version of the flag was introduced after that...), but the strange
thing is that setting the first one, i.e. 'supports_sane_rowcount', to
True no longer triggers any test failure, while there are still
failures when the second flag is set to True.

I wrote the attached test/orm/rowcount.py script, mimicking
test/sql/rowcount.py in a cascading scenario, but again I must be
misunderstanding the meaning of "sane", because both tests succeed on
Firebird...

Can you point me to a more correct way of testing the expected
behaviour of sane_multi_rowcount? And given the new results on
sane_rowcount, is it right to turn it to True on Firebird too?

thank you,
ciao, lele.
-- 
nickname: Lele Gaifax| Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas| comincerò ad aver paura di chi mi copia.
[EMAIL PROTECTED] | -- Fortunato Depero, 1929.


import testbase

from sqlalchemy import *
from sqlalchemy.orm import *
from testlib import *

class MultiRowCountTest(AssertMixin):
    """Test rowcount in a cascading context"""

    def setUpAll(self):
        global metadata, Author, Book, Account, authors_table, books_table, accounts_table

        metadata = MetaData(testbase.db)

        authors_table = Table('authors', metadata,
            Column('id', Integer, Sequence('s_authors_id', optional=True), primary_key=True),
            Column('name', String(100), unique=True, nullable=False))
        books_table = Table('books', metadata,
            Column('id', Integer, Sequence('s_books_id', optional=True), primary_key=True),
            Column('author_id', Integer),
            Column('title', String(50), nullable=False),
            ForeignKeyConstraint(['author_id'], ['authors.id'], ondelete='SET NULL', onupdate='CASCADE'))
        accounts_table = Table('accounts', metadata,
            Column('author_id', Integer, primary_key=True),
            Column('amount', Float()),
            ForeignKeyConstraint(['author_id'], ['authors.id'], ondelete='CASCADE', onupdate='CASCADE'))

        metadata.create_all()

        class Author(object):
            def __init__(self, name):
                self.name = name

        class Book(object):
            def __init__(self, title):
                self.title = title

        class Account(object):
            def __init__(self, money):
                self.amount = money

        mapper(Author, authors_table, properties={
            'books': relation(Book, backref='author', lazy=True,
                              cascade='all, delete-orphan', passive_deletes=True),
            'account': relation(Account, backref='author', lazy=True,
                                uselist=False, cascade='all, delete-orphan'),
        })
        mapper(Book, books_table)
        mapper(Account, accounts_table)

    def setUp(self):
        session = create_session()

        # Some authors
        t = Author('Tolkien')
        b = Author('Baricco')
        a = Author('Asimov')

        # Some books
        lotr = Book('Lord of the rings')
        ts = Book('The silmarillion')

        ss = Book('Senza sangue')
        om = Book('Oceano mare')

        f = Book('Foundation')
        ir = Book('I, Robot')

        t.books.append(lotr)
        t.books.append(ts)

        b.books.append(ss)
        b.books.append(om)

        a.books.append(f)
        a.books.append(ir)

        # Some money
        t.account = Account(100)
        b.account = Account(50)
        a.account = Account(66)

        session.save(t)
        session.save(b)
        session.save(a)
        session.flush()

    def tearDown(self):
        accounts_table.delete().execute()
        books_table.delete().execute()
        authors_table.delete().execute()

    def tearDownAll(self):
        metadata.drop_all()

    def testbasic(self):
        session = create_session()

        t = session.query(Author).filter(Author.name=='Baricco').one()
        assert len(t.books) == 2

    def test_ticket_370(self):
        session = create_session()

        t = session.query(Author).filter(Author.name=='Asimov').one()

[sqlalchemy] Re: save_or_update and composite Primary Keys... MSSQL / pyodbc issue?

2007-12-19 Thread Smoke

On 19 Dic, 01:37, Rick Morrison [EMAIL PROTECTED] wrote:
 Same here on pymssql.

 I tried it with 'start' as the only PK, and with both 'identifier' and
 'start' as PK. Both work fine.

 Are you sure your in-database tabledef matches your declared schema?

 I've attached a script that works here. This one has both 'identifier' and
 'start' set as PK.

   ***---WARNING ---***:
 I've added a table.drop() to the script to simplify testing and make
 sure the schemas match

I understand it could seem impossible, Rick, but if I run your script
it doesn't update the row!!! (I swear!!). I'm really confused about
what's going on... maybe pyodbc?

Here's the log:

2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0
DROP TABLE jobs
2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0
COMMIT
2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0
CREATE TABLE jobs (
identifier NUMERIC(18, 2) NOT NULL,
section VARCHAR(20),
start DATETIME NOT NULL,
stop DATETIME,
station VARCHAR(20),
PRIMARY KEY (identifier, start)
)


2007-12-19 10:18:29,421 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0
COMMIT
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0
BEGIN
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 SET nocount ON

2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0
INSERT INTO jobs (identifier, section, start, stop, station) VALUES (?, ?, ?, ?, ?)
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0
['22', None, datetime.datetime(2007, 12, 19, 10, 18, 29, 437000), None, 'TCHUKI']
2007-12-19 10:18:29,437 INFO sqlalchemy.engine.base.Engine.0x..d0
COMMIT
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0
BEGIN
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 SET nocount ON

2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0
SELECT jobs.identifier AS jobs_identifier, jobs.section AS jobs_section, jobs.start AS jobs_start, jobs.stop AS jobs_stop, jobs.station AS jobs_station
FROM jobs ORDER BY jobs.identifier
2007-12-19 10:18:30,437 INFO sqlalchemy.engine.base.Engine.0x..d0 []
2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0 SET nocount ON

2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0 {}
2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0
UPDATE jobs SET stop=? WHERE jobs.identifier = ? AND jobs.start = ?
2007-12-19 10:18:30,453 INFO sqlalchemy.engine.base.Engine.0x..d0
[datetime.datetime(2007, 12, 19, 10, 18, 30, 453000), '22.00', datetime.datetime(2007, 12, 19, 10, 18, 29)]
2007-12-19 10:18:30,467 INFO sqlalchemy.engine.base.Engine.0x..d0
COMMIT





[sqlalchemy] Syntax for IN (1,2,3)

2007-12-19 Thread Marcin Kasperski


Maybe I missed something, but I can't find it... Is there SQL
expression syntax for

 WHERE column IN (1,2,3,4)

?

-- 
--
| Marcin Kasperski   | Software is not released,
| http://mekk.waw.pl |  it is allowed to escape.
||
--





[sqlalchemy] Re: Syntax for IN (1,2,3)

2007-12-19 Thread Bertrand Croq

On Wednesday 19 December 2007 12:06, Marcin Kasperski wrote:
 Maybe I missed something but can't find... Does there exist
 SQLExpression syntax for

  WHERE column IN (1,2,3,4)

column.in_(1,2,3,4)

-- 
Bertrand Croq
___
Net-ng  Tel   : +33 (0)223 21 21 53
14, rue Patis Tatelin   Fax   : +33 (0)223 21 21 60
Bâtiment G  Web   : http://www.net-ng.com
35000 RENNESe-mail: [EMAIL PROTECTED]
FRANCE





[sqlalchemy] Re: chunks of blob data...

2007-12-19 Thread Chris Withers

Michael Bayer wrote:
 
 cx_oracle's blob object is able to stream out blobs like filehandles,  
 since thats how OCI wants it to be done.  Im not sure if it only  
 allows sequential access or access to any range.  As far as other  
 DBAPIs like psycopg2 and MySQLDB, you'd have to dig into the docs/ 
 mailing lists/source code of those to see what resources are  
 available.  Another way might be some SQL function available within  
 the database that truncates the binary stream to a certain portion of  
 itself before selecting, but Im not aware of any function which does  
 that (havent checked tho).

Ah well, this is better than I was expecting :-)

 Current SA behavior is to abstract away the streamingness of  
 cx_oracles binary objects, but theres no reason we couldnt expose this  
 functionality through something like a StreamingBinary type or similar.

Cool, if I run into it, I'd be interested in helping to work on this!

cheers,

Chris

-- 
Simplistix - Content Management, Zope & Python Consulting
- http://www.simplistix.co.uk




[sqlalchemy] Re: Syntax for IN (1,2,3)

2007-12-19 Thread Rick Morrison
I believe that as of 0.4.0, that's now:

column.in_([1,2,3,4])
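
For illustration, a minimal sketch of that list form inside a full
expression (the table and column here are hypothetical, not from the
thread):

from sqlalchemy import Table, Column, Integer, MetaData, select

metadata = MetaData()
items = Table('items', metadata, Column('id', Integer, primary_key=True))

# 0.4-style: in_() takes a sequence of values
stmt = select([items.c.id], items.c.id.in_([1, 2, 3, 4]))
# renders roughly: SELECT items.id FROM items WHERE items.id IN (...)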

On Dec 19, 2007 6:23 AM, Bertrand Croq [EMAIL PROTECTED] wrote:


 On Wednesday 19 December 2007 12:06, Marcin Kasperski wrote:
  Maybe I missed something but can't find... Does there exist
  SQLExpression syntax for
 
   WHERE column IN (1,2,3,4)

 column.in_(1,2,3,4)

 --
 Bertrand Croq
 ___
 Net-ng  Tel   : +33 (0)223 21 21 53
 14, rue Patis Tatelin   Fax   : +33 (0)223 21 21 60
 Bâtiment G  Web   : http://www.net-ng.com
 35000 RENNESe-mail: [EMAIL PROTECTED]
 FRANCE


 





[sqlalchemy] Re: save_or_update and composite Primary Keys... MSSQL / pyodbc issue?

2007-12-19 Thread Rick Morrison
OK, I checked to make sure the updates were being fired (and from the looks
of the log, they are).

But I think I see now that the lack of update executions hasn't been the
problem all along, but rather that those updates are not finding their
row... I never checked that part.

I'm offsite right now and can't look at the code, but I suspect that the
milliseconds are the problem -- MSSQL rounds milliseconds to some multiple,
so what you put in is not always what you get back.

Since the program saves the initial date PK as the result of a datetime.now()
call, I'll bet that it doesn't match the DB stored value. Here are a couple of
things you can do to work around that:

  a) Truncate the milliseconds from the datetime.now() call before you write
the initial job object

  b) Fetch the job object back after the first flush() to get the DB stored
value.

See if one of those fixes your issue.
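
As a sketch of option (a), the sub-second part can be dropped before the
value is used as the primary key; the Job class, session, and field names
below are hypothetical stand-ins for the original script:

import datetime

def now_truncated():
    # drop sub-second precision so the value round-trips unchanged
    # through MSSQL's DATETIME rounding
    return datetime.datetime.now().replace(microsecond=0)

job = Job(identifier=22, start=now_truncated(), station='TCHUKI')
session.save(job)
session.flush()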

Rick




[sqlalchemy] Virtual column-attribute

2007-12-19 Thread maxi

Hi,
I hope somebody can help me with this.

I have the following table, for instance:

customer = Table('customer', metadata,
 Column('id', Integer, primary_key=True),
 Column('name', String(50), nullable=False)
)

and the class

class Customer(object):
  pass

and.. the mapper

mapper(Customer, customer)

Now, I want to add a virtual attribute to my class which is not part of
my table, for example:

class Customer(object):
    def __init__(self):
        self.selected = False

Then, I do a simple query:

cust = session.query(Customer).get(1)

but, when I try to use

cust.selected = True

an exception occurs: the cust object has no 'selected' attribute.


How can I solve this?
Is it possible to do it?

Thanks in advance.




[sqlalchemy] Re: Virtual column-attribute

2007-12-19 Thread Bertrand Croq

On Wednesday 19 December 2007 15:32, maxi wrote:
 Then, I do a simple query:

 cust = session.query(Customer).get(1)

 but, when I want use

 cust.selected = True

 An exception ocurr, the cust object have not 'selected' attribute

__init__ is not called when objects are fetched from the DB.

http://www.sqlalchemy.org/trac/wiki/FAQ#whyisntmy__init__calledwhenIloadobjects
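
One simple workaround (a sketch, not necessarily the one the FAQ
suggests) is a class-level default, so instances loaded from the DB --
which skip __init__ -- still see the attribute:

class Customer(object):
    # class-level default; loaded instances get this even though
    # __init__ is never called for them
    selected = False

    def __init__(self):
        self.selected = False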

-- 
Bertrand Croq
___
Net-ng  Tel   : +33 (0)223 21 21 53
14, rue Patis Tatelin   Fax   : +33 (0)223 21 21 60
Bâtiment G  Web   : http://www.net-ng.com
35000 RENNESe-mail: [EMAIL PROTECTED]
FRANCE





[sqlalchemy] Re: Sane rowcount

2007-12-19 Thread Michael Bayer


On Dec 19, 2007, at 4:19 AM, Lele Gaifax wrote:


 Can you point me to a more correct way of testing the expected
 behaviour of sane_multi_rowcount? And given new results on
 sane_rowcount, is it right turning it to True on Firebird too?


sane_multi_rowcount is specifically for an executemany, like:

table.update().execute([{params1}, {params2}, {params3}])

At the DBAPI level, it means that if you say cursor.executemany(statement,
[params]), cursor.rowcount should be the sum of all rows updated
across all sets of parameters.  I think some DBAPIs actually do that.
Even though technically it's a decision only the database itself can
make, i.e. some of the parameter sets may be overlapping... we haven't
gotten into it that deeply as of yet since we use non-overlapping sets
of parameters with ORM updates/deletes.
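
A rough DBAPI-level sketch of what that means (the connection and table
here are hypothetical, and the paramstyle varies by driver):

cursor = dbapi_connection.cursor()
cursor.executemany(
    "UPDATE authors SET name = ? WHERE id = ?",
    [('a', 1), ('b', 2), ('c', 3)])
# with a "sane" multi rowcount, this is the total matched across
# all three parameter sets
assert cursor.rowcount == 3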


sane_rowcount is set to False on FB probably because someone's version
of FB did not support it correctly.  It might be worth tracking it
down in svn blame to see why it was changed.
  
  




[sqlalchemy] Implementing ranking

2007-12-19 Thread voltron

Could someone tell me how to simulate the rank() function? I am using
PostgreSQL, which does not have this as a native function.

Thanks
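
Not from this thread, but one common way to emulate rank() before
PostgreSQL grew window functions is a correlated subquery that counts
the rows sorting ahead of each row (the table and column names here
are hypothetical):

rank_sql = """
    SELECT s.id, s.points,
           (SELECT COUNT(*) + 1
              FROM scores s2
             WHERE s2.points > s.points) AS rank
      FROM scores s
     ORDER BY rank
"""
result = engine.execute(rank_sql)  # engine: a bound Engine, hypothetical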



[sqlalchemy] Caching

2007-12-19 Thread Anton V. Belyaev

Hello,

Several people have already written something about memcached + SQLAlchemy.

As you may remember, Mike Nelson wrote a mapper extension; it is available at:
http://www.ajaxlive.com/repo/mcmapper.py
http://www.ajaxlive.com/repo/mcache.py

I've rewritten it a bit to fit the 0.4 release of SA.

Any responses and comments are welcome, since I am not sure I am doing
the right things in the code :) I don't like the dirty tricks with
deleting _state, etc. Maybe it could be done better?

But it works somehow. It manages to cache query get operations.

It has some problems with deferred fetches on inherited mappers because
of some SA issues (I've found them in Trac).

import memcache as mc

# assumed imports for the 0.4-era API used below; gen_cache_key and the
# module-level log come from the original mcmapper.py / mcache.py helpers
from sqlalchemy.orm import MapperExtension, EXT_PASS
from sqlalchemy.orm.attributes import InstanceState

class MCachedMapper(MapperExtension):
    def get(self, query, ident, *args, **kwargs):
        key = query.mapper.identity_key_from_primary_key(ident)
        obj = query.session.identity_map.get(key)
        if not obj:
            mkey = gen_cache_key(key)
            log.debug("Checking cache for %s", mkey)
            obj = mc.get(mkey)
            if obj is not None:
                obj.__dict__['_state'] = InstanceState(obj)
                obj.__dict__['_entity_name'] = None
                log.debug("Found in cache for %s : %s", mkey, obj)
                query.session.update(obj)
            else:
                obj = query._get(key, ident, **kwargs)
                if obj is None:
                    return None
                _state = obj._state
                del obj.__dict__['_state']
                del obj.__dict__['_entity_name']
                mc.set(mkey, obj)
                obj.__dict__['_state'] = _state
                obj.__dict__['_entity_name'] = None
        return obj

    def before_update(self, mapper, connection, instance):
        mkey = gen_cache_key(mapper.identity_key_from_instance(instance))
        log.debug("Clearing cache for %s because of update", mkey)
        mc.delete(mkey)
        return EXT_PASS

    def before_delete(self, mapper, connection, instance):
        mkey = gen_cache_key(mapper.identity_key_from_instance(instance))
        log.debug("Clearing cache for %s because of delete", mkey)
        mc.delete(mkey)
        return EXT_PASS

The mapper can be used like this:

mapper(User, users_table, extension=MCachedMapper())
session = create_session()
user_1234 = session.query(User).get(1234) # this one loads from the DB
session.clear()
user_1234 = session.query(User).get(1234) # this one fetches from Memcached




[sqlalchemy] Re: Sane rowcount

2007-12-19 Thread Lele Gaifax

On Wed, 19 Dec 2007 10:21:15 -0500
Michael Bayer [EMAIL PROTECTED] wrote:

 sane_rowcount is set to False on FB probably because someones
 version of FB did not support it correctly.  it might be worth
 tracking it down in svn blame to see why it was changed.

I did that already, having changed it myself a few times in the
past through patches I sent. More recently it was changed by
Roger, I think as a tentative fix for #370.

And I think that at the time, I did the test on the very same
FB2 engine. I'm sure I remember seeing the unmatched rowcount error
message that now appears *only* when activating sane_multi_rowcount.

thank you,
ciao, lele.
-- 
nickname: Lele Gaifax| Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas| comincerò ad aver paura di chi mi copia.
[EMAIL PROTECTED] | -- Fortunato Depero, 1929.




[sqlalchemy] association object to associate table to itself

2007-12-19 Thread Brett
Hello,

I'm trying to create an association between two objects of the same
type.  For example I have table A and then I have an association table
that has two foreign keys to table A. 

What I'm looking for is to be able to say:

one_typeA.append(two_typeA) 

but the association_proxy seems only to be able to set the associated
column and doesn't fill in both foreign keys based on the relation.

sqlalchemy.exceptions.IntegrityError: (IntegrityError)
species_synonym.synonym_id may not be NULL u'INSERT INTO species_synonym
(species_id, synonym_id) VALUES (?, ?)' [1, None]


I've also tried without using the association_proxy but I get an error
telling me that the collection I'm appending to expects the type of the
association table and not the type of the table I'm trying to associate.

sqlalchemy.exceptions.FlushError: Attempting to flush an item of type
class '__main__.Species' on collection 'Species.synonyms
(SpeciesSynonym)', which is handled by mapper 'Mapper|SpeciesSynonym|
species_synonym' and does not load items of that type.  Did you mean to
use a polymorphic mapper for this relationship ?  Set
'enable_typechecks=False' on the relation() to disable this exception.
Mismatched typeloading may cause bi-directional relationships (backrefs)
to not function properly.


Has anyone tried this or gotten it to work? 

See the attachment for the code.



from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.associationproxy import association_proxy

uri = 'sqlite:///:memory:'
metadata = MetaData()

species_table = Table('species', metadata,
  Column('id', Integer, primary_key=True),
  Column('sp', String(64)))

species_synonym_table = Table('species_synonym', metadata,
Column('id', Integer, primary_key=True),
Column('species_id', Integer, ForeignKey('species.id'),
   nullable=False),
Column('synonym_id', Integer, ForeignKey('species.id'),
   nullable=False))


class Species(object):
synonyms = association_proxy('_synonyms', 'synonym')
pass

class SpeciesSynonym(object):
pass

mapper(Species, species_table,
properties = \
{'_synonyms':
 relation(SpeciesSynonym,
primaryjoin=species_table.c.id==species_synonym_table.c.species_id,
cascade='all, delete-orphan', uselist=True
  )})


mapper(SpeciesSynonym, species_synonym_table,
properties = \
   {'synonym':
relation(Species, uselist=False,
primaryjoin=species_synonym_table.c.synonym_id==species_table.c.id),
'species':
relation(Species, uselist=False,
primaryjoin=species_synonym_table.c.species_id==species_table.c.id)})


engine = create_engine(uri)
engine.connect()
metadata.bind = engine
metadata.create_all()
session = create_session()

species_table.insert().execute({'id': 1, 'sp': 'test species 1'})
species_table.insert().execute({'id': 2, 'sp': 'test species 2'})

s = session.load(Species, 1)
sp2 = session.load(Species, 2)
s.synonyms.append(sp2)
session.flush()



[sqlalchemy] Re: Sane rowcount

2007-12-19 Thread Roger Demetrescu

Hi Lele -

On Dec 19, 2007 9:08 PM, Lele Gaifax [EMAIL PROTECTED] wrote:

 On Wed, 19 Dec 2007 10:21:15 -0500
 Michael Bayer [EMAIL PROTECTED] wrote:

  sane_rowcount is set to False on FB probably because someones
  version of FB did not support it correctly.  it might be worth
  tracking it down in svn blame to see why it was changed.

 I did that already, having changed that by myself a few time in the
 past thru patches I sent in the past. More recently it was changed by
 Roger, I think as a tentative fix to #370.

Yeah...  I was running into the same problem described in #370.

If I recall correctly, I was running FB 1.5 with SA 0.3.10. There are some
mailing-list thread URLs in that ticket's comments which you may find
interesting: [1] and [2].


At that time, the test script was also failing with 0.4 (unreleased),
and turning off `supports_sane_multi_rowcount` fixed it.


 And I think that at the time, I did the test on the very same
 FB2 engine. I'm sure I remember seeing unmatched rowcount error
 message, that now appears *only* activating sane_multi_rowcount.

Hmmm... I don't have FB on this machine now... but would you mind testing
the script from [2] in the scenario you described above?


 thank you,
 ciao, lele.

Hei, thanks for the great job you are doing with firebird.py...   :)


[]s
Roger


[1] - http://tinyurl.com/2gsr8t
[2] - 
http://groups.google.com/group/sqlalchemy/browse_thread/thread/49aaa4945721b15a




[sqlalchemy] Re: Sane rowcount

2007-12-19 Thread Lele Gaifax

On Wed, 19 Dec 2007 22:00:54 -0300
Roger Demetrescu [EMAIL PROTECTED] wrote:

 At that time, the test script was also failing with 0.4 (unreleased),
 and turning off
 `supports_sane_multi_rowcount` fixed it.

Yes, in fact. As said, `supports_sane_multi_rowcount` should stay at
False, while I'm arguing about the other one, `supports_sane_rowcount`,
also currently set to False under FB (IIRC, you switched both to False
in a single patch a few months ago), for which I'm unable to notice
any difference in test results when changing its setting.

 Hmmm... I don't have FB in this machine now... but would you mind test
 the script
 from [2] in the scenario you described above ?

I'll do my best at reboot, I'm way too sleepy right now |-)

And, BTW, any chance of a review of the testcase I attached at the
beginning of this thread? I'll try to revisit it in the light of what
Michael said... the intention was to cover that ticket as well.

 Hei, thanks for the great job you are doing with firebird.py...   :)

I have been waiting for months, very busy on other things (well, if
SA over M$SQL is another thing... ;-) but I'm finally having a lot
of fun with it [almost as usual, when it comes to Python]!  You're
welcome anyway!!

ciao, lele.
-- 
nickname: Lele Gaifax| Quando vivrò di quello che ho pensato ieri
real: Emanuele Gaifas| comincerò ad aver paura di chi mi copia.
[EMAIL PROTECTED] | -- Fortunato Depero, 1929.




[sqlalchemy] Re: Caching

2007-12-19 Thread Michael Bayer


On Dec 19, 2007, at 5:19 PM, Anton V. Belyaev wrote:


 Hello,

 Several people already wrote something about memcached + SqlAlchemy.

 Remember, Mike Nelson wrote a mapper extention, it is available at:
 http://www.ajaxlive.com/repo/mcmapper.py
 http://www.ajaxlive.com/repo/mcache.py

 I've rewritten it a bit to fit 0.4 release of SA.

 Any response and comments are welcome, since I am not sure I am doing
 right things in the code :) I dont like that dirty tricks with
 deleting _state, etc. Maybe it could be done better?

What happens if you just leave _state alone?  There shouldn't be any
need to mess with _state (nor _entity_name).  The only attribute
worth deleting for the cache operation is _sa_session_id, so that the
instance isn't associated with any particular session when it gets
cached.  I'd also consider using session.merge(dont_load=True), which is
designed for use with caches (also watch out for that log.debug();
debug() calls using the standard logging module are notoriously slow).
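
A rough standalone sketch of the merge(dont_load=True) approach being
suggested (gen_cache_key and mc are the same helpers used in the
extension above; this is an illustration, not a drop-in replacement):

def cached_get(session, query, ident):
    key = query.mapper.identity_key_from_primary_key(ident)
    mkey = gen_cache_key(key)
    obj = mc.get(mkey)
    if obj is None:
        obj = query.get(ident)
        if obj is None:
            return None
        # detach the instance from its session before caching it
        obj.__dict__.pop('_sa_session_id', None)
        mc.set(mkey, obj)
        return obj
    # copy the cached state into the current session without re-querying
    return session.merge(obj, dont_load=True)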

 It has some problems with deferred fetch on inherited mapper because
 of some issues of SA (I've found them in Trac).

The only Trac ticket for this is #490, which with our current
extension architecture is pretty easy to fix, so it's resolved in 3967 -
MapperExtensions are now fully inherited.  If you apply the same
MapperExtension explicitly to a base mapper and a subclass mapper,
using the same ME instance will have the effect of it being applied
only once (and using two different ME instances will have the effect
of both being applied to the subclass separately).





[sqlalchemy] Re: association object to associate table to itself

2007-12-19 Thread Michael Bayer

Two tricks here - set up SpeciesSynonym as:

class SpeciesSynonym(object):
    def __init__(self, species):
        self.synonym = species

The fact that it wasn't raising an exception for the missing constructor
is a bug - ticket #908 added.

The other thing that helps here is to set up your bidirectional  
relation using a backref, so that the opposite side is set  
automatically:

mapper(Species, species_table,
    properties = {
        '_synonyms': relation(SpeciesSynonym,
            primaryjoin=species_table.c.id==species_synonym_table.c.species_id,
            cascade='all, delete-orphan', uselist=True,
            backref='species')
    })

mapper(SpeciesSynonym, species_synonym_table,
    properties = {
        'synonym': relation(Species, uselist=False,
            primaryjoin=species_synonym_table.c.synonym_id==species_table.c.id),
    })
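
With the constructor and backref in place, the append from the original
script should then go through as intended:

s = session.load(Species, 1)
sp2 = session.load(Species, 2)
s.synonyms.append(sp2)   # the proxy now builds SpeciesSynonym(sp2)
session.flush()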






[sqlalchemy] Conventions for creating SQLAlchemy apps

2007-12-19 Thread Morgan

Hi Guys,

This may be a stupid question, so flame away, I don't care, but I have
been wondering: is there a better way to lay out your SQLAlchemy files
than my random method? Does anyone have a convention that works well for
them?

I'm only asking this because I cannot decide how I want to lay out the 
SQLAlchemy component of my application.

I'm thinking of putting it all in files like engines.py, mapping.py,
metadata.py etc., or should I just shove it all in one file?

Let me know if I have had too much coffee or not.
Morgan




[sqlalchemy] 0.4 deprecation warnings, what is the new get_by/select_by?

2007-12-19 Thread iain duncan

Sorry if this seems a stupid question, but I thought Mike had said
that in SA 0.4, if you use session_context, then

User.query.get_by(name='john')

is the replacement for the old assign_mapper convenience call.
But I'm getting deprecation warnings. What should I be doing instead of
the (almost as) convenient:

User.query.get_by( **kwargs )
User.query.select_by( **kwargs )
User.query.select()
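
For reference (these 0.4-style equivalents are not from the message
itself, just the usual migration path as I understand it):

User.query.filter_by(name='john').first()   # was get_by(name='john')
User.query.filter_by(**kwargs).all()        # was select_by(**kwargs)
User.query.all()                            # was select()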

Thanks
iain

