[sqlalchemy] Re: remote nondirect access to DB

2006-12-08 Thread ml

By client application I don't mean a web application, and SQLAlchemy would be
too big a hammer for a SQL console (in my opinion). It will be an
information system written with wxPython, running on multiple computers.
I don't want users to see the information needed for direct access to the database.
I was trying to expose the DBAPI (connections, cursors) via Pyro, but it is
failing, sometimes with a segfault.

David



Michael Bayer wrote on Thu 07. 12. 2006 at 15:36 -0800:
> um, web interface ?  SQL console ?  it would help to know what kind of
> client youre talking about.





[sqlalchemy] guessing sql joins from object level

2006-12-08 Thread che

Hi,
I am trying to translate my queries to sqlalchemy. At the object level they
have clauses like a.b.c.d == some_value, where a is an instance of
class A, b is an instance of class B, and so on.
How can such queries be expressed in sqlalchemy code (supposing that
every class is mapped to its own table)? Is there some way for SA
to automagically guess the needed joins when generating the appropriate SQL
select?
Thanks in advance
StefanB





[sqlalchemy] Re: ForeignKey and onupdate/ondelete

2006-12-08 Thread Manlio Perillo

Michael Bayer wrote:
> use ForeignKeyConstraint.

Yes, I have used it.
But why does ForeignKey not allow this?



Regards  Manlio Perillo




[sqlalchemy] Re: SA 0.3.x: performance issues => proposal

2006-12-08 Thread Sébastien LELONG
As an attachment, here's the patch (against rev 2132). It's local to
sqlalchemy/orm/, so:

cd /sqlalchemy/orm
patch -p0 < attribute_cache.patch


About the patch itself:
> 1. the cache should be a WeakKeyDictionary
OK, done. Performance is still OK; the difference from a built-in dict is
negligible.

> 2. the cache needs to be cleared for a particular cache (class ?)
If I understand correctly, when register_attribute() is called, the cache will be
cleared for the given class, if that class has been cached. Is there anywhere
else the cache needs to be cleared?

> 3. the cache should *probably* be at the module level, and not within
> the AttributeManager itself; SA uses only a single AttributeManager per
> application so it probably doesnt matter
I left the cache within the AttributeManager... Things can be moved to the
module level, but if it's not necessary right now, it might not be a good
idea to do it yet (by the time it becomes necessary, the design may have
changed and the modifications may no longer be valid).

> 4. the raise should use an exception class of some kind
I've made it raise a TypeError... Yes, string-based exceptions are a very bad
thing!


Attributes are cached when using the managed_attributes() *and*
noninherited_managed_attributes() funcs. Those are very similar. I hesitated
to refactor them, but centralizing cache management is probably a good thing.
That is the purpose of the second patch, which includes this refactoring. As
expected, performance decreases. Here are the traditional benchmark results:

SA 0.3.1, rev 2127:
total time 4.29376721382

real    0m5.420s
user    0m4.088s
sys     0m0.108s

SA with attr. cache, no refactoring (~2X faster, back close to 0.2.8 speed):
total time 2.34819602966

real    0m3.013s
user    0m2.344s
sys     0m0.088s

SA with attr. cache, with refactoring (30% slower than without refactoring):
total time 3.05679416656

real    0m3.747s
user    0m2.988s
sys     0m0.068s


It's up to you to choose which patch! IMO, I'm *not* in favor of the refactoring
in this case :).

Finally, I've added a clear_attribute_cache func which, ... clears the attribute
cache. While client code may not have to worry about caching, it may need to
clear it...


> thanks much, this is the kind of user involvement i like.
Well, you're very welcome! Glad to help!


Cheers,

Seb
-- 
Sébastien LELONG
sebastien.lelong[at]sirloon.net


P.S:

> im beginning
> to suspect that yield introduces overhead vs. a straight list ?
> (python interpreter storing stack frames ?  dunno).
dunno too :)


Index: attributes.py
===================================================================
--- attributes.py	(revision 2132)
+++ attributes.py	(working copy)
@@ -609,8 +609,18 @@
 
 class AttributeManager(object):
     """allows the instrumentation of object attributes.  AttributeManager is stateless, but can be
-    overridden by subclasses to redefine some of its factory operations."""
+    overridden by subclasses to redefine some of its factory operations. Also be aware AttributeManager
+    will cache attributes for a given class, allowing not to determine those for each objects (used
+    in managed_attributes() and noninherited_managed_attributes()). This cache is cleared for a given class
+    while calling register_attribute(), and can be cleared using clear_attribute_cache()"""
 
+    def __init__(self):
+        # will cache attributes, indexed by class objects
+        self._attribute_cache = weakref.WeakKeyDictionary()
+
+    def clear_attribute_cache(self):
+        self._attribute_cache.clear()
+
     def rollback(self, *obj):
         """retrieves the committed history for each object in the given list, and rolls back the attributes
         each instance to their original value."""
@@ -639,21 +649,31 @@
 
     def managed_attributes(self, class_):
         """returns an iterator of all InstrumentedAttribute objects associated with the given class."""
+        if self._attribute_cache.has_key(class_):
+            return self._attribute_cache[class_]
+
+        self._attribute_cache[class_] = []
         if not isinstance(class_, type):
-            raise repr(class_) + " is not a type"
+            raise TypeError(repr(class_) + " is not a type")
         for key in dir(class_):
             value = getattr(class_, key, None)
             if 

[sqlalchemy] Re: Using pickle to save reflected metadata

2006-12-08 Thread shday

I still get the error:

Traceback (most recent call last):
  File U:/ADA_tools/pickle_tables.py, line 27, in ?
pickle.dump(metadata_to_pickle,pfile)
  File C:\Python24\lib\pickle.py, line 1382, in dump
Pickler(file, protocol, bin).dump(obj)
  File C:\Python24\lib\pickle.py, line 231, in dump
self.save(obj)
  File C:\Python24\lib\pickle.py, line 338, in save
self.save_reduce(obj=obj, *rv)
  File C:\Python24\lib\pickle.py, line 433, in save_reduce
save(state)
  File C:\Python24\lib\pickle.py, line 293, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python24\lib\pickle.py, line 663, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python24\lib\pickle.py, line 677, in _batch_setitems
save(v)
  File C:\Python24\lib\pickle.py, line 293, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python24\lib\pickle.py, line 663, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python24\lib\pickle.py, line 677, in _batch_setitems
save(v)
  File C:\Python24\lib\pickle.py, line 338, in save
self.save_reduce(obj=obj, *rv)
  File C:\Python24\lib\pickle.py, line 433, in save_reduce
save(state)
  File C:\Python24\lib\pickle.py, line 293, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python24\lib\pickle.py, line 663, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python24\lib\pickle.py, line 677, in _batch_setitems
save(v)
  File C:\Python24\lib\pickle.py, line 338, in save
self.save_reduce(obj=obj, *rv)
  File C:\Python24\lib\pickle.py, line 433, in save_reduce
save(state)
  File C:\Python24\lib\pickle.py, line 293, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python24\lib\pickle.py, line 663, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python24\lib\pickle.py, line 677, in _batch_setitems
save(v)
  File C:\Python24\lib\pickle.py, line 338, in save
self.save_reduce(obj=obj, *rv)
  File C:\Python24\lib\pickle.py, line 433, in save_reduce
save(state)
  File C:\Python24\lib\pickle.py, line 293, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python24\lib\pickle.py, line 663, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python24\lib\pickle.py, line 677, in _batch_setitems
save(v)
  File C:\Python24\lib\pickle.py, line 293, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python24\lib\pickle.py, line 663, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python24\lib\pickle.py, line 676, in _batch_setitems
save(k)
  File C:\Python24\lib\pickle.py, line 338, in save
self.save_reduce(obj=obj, *rv)
  File C:\Python24\lib\pickle.py, line 433, in save_reduce
save(state)
  File C:\Python24\lib\pickle.py, line 293, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python24\lib\pickle.py, line 663, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python24\lib\pickle.py, line 677, in _batch_setitems
save(v)
  File C:\Python24\lib\pickle.py, line 313, in save
rv = reduce(self.proto)
  File C:\Python24\lib\copy_reg.py, line 69, in _reduce_ex
raise TypeError, can't pickle %s objects % base.__name__
TypeError: can't pickle module objects





[sqlalchemy] Re: guessing sql joins from object level

2006-12-08 Thread Michael Bayer

SA can form joins between tables automatically if the tables express
the proper foreign key relationship between each other, and if there is
no ambiguity in that relationship; i.e. table A and table B have only
one ForeignKeyConstraint (or single ForeignKey) between each other.

if you have table A and table B, a join is just:

A.join(B)

with regards to integrating those joins with mapper queries, see
http://www.sqlalchemy.org/docs/datamapping.myt#datamapping_selectrelations_queryjoins
.  the main keyword argument to select() here is the from_obj
parameter.
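
As a rough sketch (the a/b tables, the A class, meta and session are all
made up here; the select() keywords follow the 0.3-era query API):

a_table = Table('a', meta,
    Column('id', Integer, primary_key=True),
    Column('name', String(50)))

b_table = Table('b', meta,
    Column('id', Integer, primary_key=True),
    Column('a_id', Integer, ForeignKey('a.id')),
    Column('data', String(50)))

# only one ForeignKey between the two tables, so SA can derive the ON clause
j = a_table.join(b_table)

# hand the join to a mapper query through from_obj
result = session.query(A).select(b_table.c.data == 'some_value', from_obj=[j])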





[sqlalchemy] Re: ForeignKey and onupdate/ondelete

2006-12-08 Thread Michael Bayer

it was modeled after SQL CREATE TABLE syntax itself, where the
ForeignKey keyword looks like "column REFERENCES
sometable(othercolumn)" and does not allow any ON UPDATE or ON DELETE
clauses at that level; they would be declared at the bottom of the
table def inside the "CONSTRAINT foo FOREIGN KEY..." line.

I suppose I could put onupdate/ondelete into ForeignKey but I am trying
to minimize the number of ways to do the same thing.  I'll try to
remember to add this in the next release.
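
For reference, the table-level spelling that does accept those options looks
roughly like this (table and column names made up):

child = Table('child', meta,
    Column('id', Integer, primary_key=True),
    Column('parent_id', Integer),
    # the equivalent of "CONSTRAINT ... FOREIGN KEY (parent_id) REFERENCES parent (id)
    # ON UPDATE CASCADE ON DELETE CASCADE" at the bottom of the table definition
    ForeignKeyConstraint(['parent_id'], ['parent.id'],
                         onupdate='CASCADE', ondelete='CASCADE'))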





[sqlalchemy] Re: Using pickle to save reflected metadata

2006-12-08 Thread Michael Bayer

latest trunk, this test works for me:

from sqlalchemy import *
meta = BoundMetaData('mysql://user:[EMAIL PROTECTED]/dbname')

table = Table('foo', meta, autoload=True)

meta2 = MetaData()

t2 = table.tometadata(meta2)

import pickle
pickle.dumps(meta2)

post your test case (with some sample table DDLs) and we'll try it out





[sqlalchemy] Re: problem with mapper relationship

2006-12-08 Thread Michael Bayer

the "test" object you are loading in the "update" function is local to
the "update" function itself.  when the function completes, "test"
falls out of scope and is removed from the session (since it is weakly
referenced in the session).

a change needs to be made to session whereby dirty objects don't get
de-referenced from the session (clean objects have to; that's a
long-accepted behavior).  since this is the second email in three days
about this issue, ticket #388 is added.
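
Until that change lands, a workaround is to hold a strong reference to the
instance for as long as you need it flushed, e.g. (a rough sketch; the Test
class and its column are made up):

def update(session, test_id, new_value):
    test = session.query(Test).get(test_id)
    test.value = new_value
    # keeping "test" bound to a local name until flush() means the dirty
    # instance stays strongly referenced and cannot drop out of the session
    session.flush()
    return test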





[sqlalchemy] count()

2006-12-08 Thread Alexandre CONRAD

Hello,

I would like to query a big table and display the information. But at
the top of the list I'm going to display, I need a little counter that
tells how many items were returned.

What would be the most efficient way to count the results?

client_list = session.query(Client).select()

Should I:

1/ Use the python len() function:

 len(client_list)


2/ Use the count() query method. But doesn't it re-query the database?

Thanks for your advice.

Regards,
-- 
Alexandre CONRAD





[sqlalchemy] assign_mapper breaks polymorphic multi-table inheritance in 3.1?

2006-12-08 Thread Ken Kuhlman

I'm trying to use assign_mapper with polymorphic multiple table
inheritance, and running into problems with the primary key sequencing.
 Is this a supported use of sqlalchemy?  I'm running version 0.3.1.

Sample code below.  If use_assign_mapper is false, then the script
works with both postgres & sqlite.  If it's true, then sqlite fails on
the assertion that the manager & employee ids be different, and
postgres throws an exception that 'managers_person_id_seq does not
exist'.

Thanks in advance for any help, and thanks for an otherwise awesome
product!
-Ken


#!/usr/bin/env python
# The setup here is identical to the example at:
# http://sqlalchemy.org/docs/adv_datamapping.myt#advdatamapping_inheritance_multiple
# Except that it conditionally uses assignmapper instead of the default mapper
from sqlalchemy import *
from sqlalchemy.ext.assignmapper import assign_mapper
from sqlalchemy.ext.sessioncontext import SessionContext

use_postgres = False
use_assign_mapper = True

if not use_postgres:
    url = 'sqlite:///:memory:'
else:
    url = 'postgres://ken:[EMAIL PROTECTED]/tutorial'

engine = create_engine(url)
metadata = BoundMetaData(engine)
context = SessionContext(create_session)

employees = Table('employees', metadata,
   Column('person_id', Integer, primary_key=True),
   Column('name', String(50)),
   Column('type', String(30)))

engineers = Table('engineers', metadata,
    Column('person_id', Integer, ForeignKey('employees.person_id'), primary_key=True),
    Column('engineer_info', String(50)),
    )

managers = Table('managers', metadata,
    Column('person_id', Integer, ForeignKey('employees.person_id'), primary_key=True),
    Column('manager_data', String(50)),
    )

class Employee(object):
    pass
class Engineer(Employee):
    pass
class Manager(Employee):
    pass

person_join = polymorphic_union(
{   'engineer':employees.join(engineers),
'manager':employees.join(managers),
'person':employees.select(employees.c.type=='person'),
}, None, 'pjoin')

metadata.create_all()

###
if not use_assign_mapper:
    # This works:
    person_mapper = mapper(Employee, employees, select_table=person_join,
        polymorphic_on=person_join.c.type, polymorphic_identity='person')
    mapper(Engineer, engineers, inherits=person_mapper, polymorphic_identity='engineer')
    mapper(Manager, managers, inherits=person_mapper, polymorphic_identity='manager')
else:
    # This doesn't:
    person_mapper = assign_mapper(context, Employee, employees, select_table=person_join,
        polymorphic_on=person_join.c.type, polymorphic_identity='person')
    assign_mapper(context, Engineer, engineers, inherits=person_mapper, polymorphic_identity='engineer')
    assign_mapper(context, Manager, managers, inherits=person_mapper, polymorphic_identity='manager')

context.current.echo_uow=True

eng = Engineer()
mgr = Manager()
context.current.save(eng)
context.current.save(mgr)
context.current.flush()

print "eng: %d mgr: %d" % (eng.person_id, mgr.person_id)
assert(eng.person_id != mgr.person_id)





[sqlalchemy] Re: problem with mapper relationship

2006-12-08 Thread Manlio Perillo

Michael Bayer wrote:
> the "test" object you are loading in the "update" function is local to
> the "update" function itself.  when the function completes, "test"
> falls out of scope and is removed from the session (since it is weakly
> referenced in the session).
>
> a change needs to be made to session whereby dirty objects don't get
> de-referenced from the session (clean objects have to; that's a
> long-accepted behavior).  since this is the second email in three days
> about this issue, ticket #388 is added.



Thanks.
This is a crucial feature for me, since I use this pattern in Twisted
(where, in a separate thread, I create a transaction and a session,
passing the connection and session objects to a user-defined function).



Regards  Manlio Perillo




[sqlalchemy] Re: count()

2006-12-08 Thread Michael Bayer

if you are going to fetch all results in all cases, then len(result) is
fine.  if the result list is enormous and you're only going to fetch a
small portion of it within a request, the count() query as a separate
operation is better.
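
Roughly, the two options look like this (a sketch; the limit/offset values
are arbitrary):

# option 1: fetch everything anyway, one round trip, count in Python
clients = session.query(Client).select()
total = len(clients)

# option 2: huge table, only a page displayed; issue a separate count query
total = session.query(Client).count()
page = session.query(Client).select(limit=20, offset=0)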





[sqlalchemy] Re: Using pickle to save reflected metadata

2006-12-08 Thread shday

whoops...

I've been using IDLE to test this and I guess it has to be reopened in
order for imports to be redone (I was just closing the shell window).
pickle.dump(metadata) works fine now.

But now I get a (recursive?) error when I try to unpickle (and I have
to kill the shell):

>>> import pickle
>>> pfile = file('pickled_tables','rb')
>>> metadata = pickle.load(pfile)

Traceback (most recent call last):
  File "<pyshell#7>", line 1, in -toplevel-
    metadata = pickle.load(pfile)
  File "C:\Python24\lib\pickle.py", line 1390, in load
    return Unpickler(file).load()
  File "C:\Python24\lib\pickle.py", line 872, in load
    dispatch[key](self)
  File "C:\Python24\lib\pickle.py", line 1235, in load_build
    setstate = getattr(inst, "__setstate__", None)
  File "C:\Python24\lib\site-packages\sqlalchemy\util.py", line 113, in __getattr__
    return self.__data[key]
  File "C:\Python24\lib\site-packages\sqlalchemy\util.py", line 113, in __getattr__
    return self.__data[key]
  File "C:\Python24\lib\site-packages\sqlalchemy\util.py", line 113, in __getattr__
.
.
.
keeps on going





[sqlalchemy] Re: PickleType with custom pickler

2006-12-08 Thread dmiller


On Dec 8, 2006, at 1:30 PM, Michael Bayer wrote:

> the reason that get_history calls attribute_manager.get_history is
> because all awareness of attribute history is handled by the
> attribute module.  the reason attribute_manager then calls the
> InstrumentedAttribute off of the class is because thats where
> AttributeManager reprents the behavior for a specific class attribute.
> ColumnProperty knows nothing about InstrumentedAttribute.

Hmm...I guess ColumnProperty is not really what I'm looking for then-- 
it looks more like a layer between user code and the attribute  
manager. I want to be able to control the communication between the  
attribute manager and the mapped object. Currently AttributeManager  
and friends just access object.__dict__ directly, which means I can't  
get between them...too bad.

I guess what I'm really looking to customize is  
InstrumentedAttribute, which is even more formidable than  
ColumnProperty. Is there an easy way to tell SA to use a custom  
InstrumentedAttribute class?


> if you look at the constructor for InstrumentedAttribute itself, youll
> see it takes two callables which provide the compare function and the
> copy function.  these are provided by the TypeEngine object via the
> ColumnLoader, which is off the ColumnProperty.  So i would propose that
> these two arguments become optional keyword arguments to ColumnProperty
> which ColumnLoader/DeferredLoader pick up on and send to their
> init_class_attribute method, so that you can send in whatever compare
> and copy callables you want.

I'm not sure if that would actually solve my problem since compare  
and copy do not allow me to mutate the data as it passes from SA to  
the object. Could we create a mapper extension point that would get  
called to check if an object is modified? It would need be hooked  
into the logic inside locate_dirty(). That would allow me to do this  
whole thing with a mapper extension.

~ Daniel




[sqlalchemy] Re: PickleType with custom pickler

2006-12-08 Thread Michael Bayer

while you're free to make your own InstrumentedAttribute, i think that
is a long uphill battle which also might be incompatible with new
releases.

I think you might want to go with sticking some hidden attributes on
your pickleable object so that your custom PickleType can just use
those to identify the information it needs (a good choice might be the
id of the session, so that you can get the session back by saying
session._sessions[id]).  or, perhaps the item gets a reference back to
its parent so it can find the session via object_session(parent).  or
you could decouple it by using a weak-keyed dictionary somewhere so
that your pickle instances can be mapped back to parent and therefore
session.

this might be something you can set up within a MapperExtension using
populate_instance() when instances are loaded and before_insert() when
instances are about to be first saved.

parent_dict = weakref.WeakValueDictionary()

class MyExt(MapperExtension):
    def populate_instance(self, mapper, selectcontext, row, instance, identitykey, isnew):
        if not isnew:
            return EXT_PASS
        mapper.populate_instance(selectcontext, instance, row, identitykey, isnew)
        parent_dict[instance.my_pickled_thing] = instance
        return None

class MyPickleType(MapperExtension):
    def convert_bind_param(self, value, dialect):
        return MyPickler(session=object_session(parent_dict[value])).dumps(value)
    ...





[sqlalchemy] Re: SQL Alchemy

2006-12-08 Thread Robin Munn

On 12/4/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

> Can you use real numbers and date into SQL Alchemy database?

In a word: yes. For real numbers, you'd use the Float() column type,
and for dates, you'd use the Date() column type. (Or the DateTime()
column type if you wanted both a date and time value).

http://www.sqlalchemy.org/docs/types.myt has a list of the basic data
types available, and an example of how to write your own types should
one of those not be sufficient.

http://www.sqlalchemy.org/docs/metadata.myt shows how to declare a table.
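
For example, a table using those types might be declared like this (a minimal
sketch; the table and column names are invented):

from sqlalchemy import *

meta = BoundMetaData('sqlite://')

measurements = Table('measurements', meta,
    Column('id', Integer, primary_key=True),
    Column('value', Float),           # a real number
    Column('taken_on', Date),         # a date only
    Column('recorded_at', DateTime))  # date and time together

meta.create_all()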

Hopefully this will answer your question.

-- 
Robin Munn
[EMAIL PROTECTED]
GPG key 0x4543D577




[sqlalchemy] Re: remote nondirect access to DB

2006-12-08 Thread ml

Jonathan Ellis wrote on Fri 08. 12. 2006 at 11:05 -0700:
> On 12/7/06, ml [EMAIL PROTECTED] wrote:
> > I want to have a client application accessing a remote Postgres database
> > but I don't want to distribute the user/password to the DB. I want
> > clients to authenticate against some other table of passwords so there
> > must be a stub between client and DB which will know the DB user/pwd.
> > How can I manage this in SA?
>
> You can write your stub in SA easily; your client should interface
> with the stub not via SQL but via a simple protocol you define that
> limits it to predetermined actions.

How? What protocol? Any clues or examples are welcome.

DS





[sqlalchemy] Re: remote nondirect access to DB

2006-12-08 Thread Jonathan Ellis

On 12/8/06, ml [EMAIL PROTECTED] wrote:

> Jonathan Ellis wrote on Fri 08. 12. 2006 at 11:05 -0700:
> > On 12/7/06, ml [EMAIL PROTECTED] wrote:
> > > I want to have a client application accessing a remote Postgres database
> > > but I don't want to distribute the user/password to the DB. I want
> > > clients to authenticate against some other table of passwords so there
> > > must be a stub between client and DB which will know the DB user/pwd.
> > > How can I manage this in SA?
> >
> > You can write your stub in SA easily; your client should interface
> > with the stub not via SQL but via a simple protocol you define that
> > limits it to predetermined actions.
>
> How? What protocol? Any clues or examples are welcome.

You get to make one up!  It's your app!

The point is, if you trust the client to send valid SQL you are
screwed whether or not the client logs into the database directly.
So instead of "insert into users (name) values (...)" your
client would send "NEWUSER ..." or something.
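
A very rough sketch of that idea, using XML-RPC from the standard library as
the transport (all names, the port, and the schema are invented):

# server process: owns the real DB credentials, exposes only predetermined actions
from SimpleXMLRPCServer import SimpleXMLRPCServer
from sqlalchemy import *

engine = create_engine('postgres://dbuser:secret@dbhost/appdb')
users = Table('users', BoundMetaData(engine), autoload=True)

def new_user(name):
    users.insert().execute(name=name)
    return True

server = SimpleXMLRPCServer(('0.0.0.0', 8000))
server.register_function(new_user, 'new_user')
server.serve_forever()

A client process then never sees SQL or the database password:

import xmlrpclib
proxy = xmlrpclib.ServerProxy('http://appserver:8000/')
proxy.new_user('alice')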

If this sounds intimidating, I would suggest picking up a book
covering client/server programming.  I believe Programming Python and
Core Python both include chapters on socket programming.

-- 
Jonathan Ellis
http://spyced.blogspot.com




[sqlalchemy] Re: PickleType with custom pickler

2006-12-08 Thread Michael Bayer

this is a simple example of what you're trying to do.  it's not even as
fancy as what i described above and just uses a threadlocal to keep
track of the current Session:

from sqlalchemy import *
from sqlalchemy.orm.session import object_session
from cStringIO import StringIO
from pickle import Pickler, Unpickler
import threading

meta = BoundMetaData('sqlite://', echo=True)
class MyExt(MapperExtension):
    def populate_instance(self, mapper, selectcontext, row, instance, identitykey, isnew):
        # if you access the row before this step, it fails
        MyPickler.sessions.current = selectcontext.session
        return EXT_PASS
    def before_insert(self, mapper, connection, instance):
        MyPickler.sessions.current = object_session(instance)
        return EXT_PASS
    def before_update(self, mapper, connection, instance):
        MyPickler.sessions.current = object_session(instance)
        return EXT_PASS

class MyPickler(object):
    sessions = threading.local()

    def persistent_id(self, obj):
        if hasattr(obj, "id"):
            key = "%s:%s" % (type(obj).__name__, obj.id)
            return key
        return None

    def persistent_load(self, key):
        print "KEY", key
        name, ident = key.split(":")
        sess = MyPickler.sessions.current
        return sess.query(Bar).get(ident)

    def dumps(self, graph, protocol):
        src = StringIO()
        pickler = Pickler(src)
        pickler.persistent_id = self.persistent_id
        pickler.dump(graph)
        return src.getvalue()

    def loads(self, data):
        dst = StringIO(data)
        unpickler = Unpickler(dst)
        unpickler.persistent_load = self.persistent_load
        return unpickler.load()

foo_table = Table('foo', meta,
    Column('id', Integer, primary_key=True),
    Column('bar', PickleType(pickler=MyPickler()), nullable=False))

bar_table = Table('bar', meta,
    Column('id', Integer, primary_key=True),
    Column('data', String(40)))

meta.create_all()

class Foo(object):
    pass

class Bar(object):
    def __init__(self, value):
        self.data = value

mapper(Foo, foo_table, extension=MyExt())
mapper(Bar, bar_table)

sess = create_session()
b = Bar('some bar')
sess.save(b)
sess.flush()
sess.clear()

sess = create_session()
f = Foo()
f.bar = b
sess.save(f)
sess.flush()
sess.clear()

# delete current session to make sure its populated
del MyPickler.sessions.current

f = sess.query(Foo).get(f.id)
assert f.bar.data == 'some bar'





[sqlalchemy] Re: remote nondirect access to DB

2006-12-08 Thread ml

Jonathan Ellis wrote on Fri 08. 12. 2006 at 15:40 -0700:
> On 12/8/06, ml [EMAIL PROTECTED] wrote:
>
> > Jonathan Ellis wrote on Fri 08. 12. 2006 at 11:05 -0700:
> > > On 12/7/06, ml [EMAIL PROTECTED] wrote:
> > > > I want to have a client application accessing a remote Postgres database
> > > > but I don't want to distribute the user/password to the DB. I want
> > > > clients to authenticate against some other table of passwords so there
> > > > must be a stub between client and DB which will know the DB user/pwd.
> > > > How can I manage this in SA?
> > >
> > > You can write your stub in SA easily; your client should interface
> > > with the stub not via SQL but via a simple protocol you define that
> > > limits it to predetermined actions.
> >
> > How? What protocol? Any clues or examples are welcome.
>
> You get to make one up!  It's your app!
>
> The point is, if you trust the client to send valid SQL you are
> screwed whether or not the client logs into the database directly.
> So instead of "insert into users (name) values (...)" your
> client would send "NEWUSER ..." or something.
>
> If this sounds intimidating, I would suggest picking up a book
> covering client/server programming.  I believe Programming Python and
> Core Python both include chapters on socket programming.

Now I see where you are heading. So the idea is to put all the SA code into the
stub, and each action will be exposed via a single method. It is not what
I wanted, but it seems I have no other choice. Thanks.

DS





[sqlalchemy] Re: Proposal: session identity_map no longer weak referencing

2006-12-08 Thread Jonathan Ellis

On 12/8/06, Michael Bayer [EMAIL PROTECTED] wrote:
> when SA was first released, someone immediately suggested that the
> identity map of Session be weak referencing, which appeared to be an
> obvious improvement, so that you could load as many objects as you want
> from the session and whatever you didnt use would just go away.  but
> now it appears that the more intuitive operation for a Session is that
> things that get loaded into it, stay there, until you say otherwise.
>
> So id like to make the session act this way, again like
> hibernate's...more explicit.  what you load into the session stays
> there until you clean it out.  that behavior is more predictable than
> what we have now.

+1

> Of course we can always add an option weak_identity_map if people
> really do want the old behavior.

-1, it's only a matter of time before this would bite you
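
A rough sketch of the kind of situation that bites (assuming a mapped class
Thing and the current weak-referencing session):

import gc

sess = create_session()
thing = sess.query(Thing).get(1)
thing.name = 'changed'

del thing        # no strong reference left anywhere in user code
gc.collect()     # the weakly-referenced dirty instance can be collected here...
sess.flush()     # ...and the pending change may never reach the database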

-- 
Jonathan Ellis
http://spyced.blogspot.com
