[sqlalchemy] ResourceClosedError

2011-11-06 Thread AZ (Wei-Ning Huang)
Hi list,

I've been getting ResourceClosedError with a MySQL server, while the same
code worked fine with SQLite.
I'm using a Celery queue, and those errors always happen within the
Celery tasks.

@celery.task
def do_something():
  records = SomeTable.pop_timeout()
  

and the pop_timeout method of SomeTable is as follows:

class SomeTable(DeclarativeBase):
    ...
    def pop_timeout(klass):
        stamp = str(time())
        now = time()
        klass.query(klass).filter((klass.transtamp == None) &
                                  (now - klass.timestamp > TIMEOUT)).update({'transtamp': stamp})
        klass.commit()
        records = klass.get_by(transtamp=stamp, order_by=order_by,
                               eager=eager)
        klass.delete_by(transtamp=stamp)
        klass.commit()
        return records

And the traceback is:
caches = ImageCache.pop_timeout()
  File /home/aitjcize/Work/shotwill/shotwill/backend/database.py, line 1081, in pop_timeout
    now - ImageCache.timestamp >= app.config['IMAGE_CACHE_TIMEOUT']
  File /home/aitjcize/Work/shotwill/shotwill/backend/database.py, line 226, in pop
    records = klass.get_by(transtamp=stamp, order_by=order_by, eager=eager)
  File /home/aitjcize/Work/shotwill/shotwill/backend/database.py, line 204, in get_by
    return query_object.all()
  File /home/aitjcize/Work/shotwill/python/SQLAlchemy-0.7.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py, line 1729, in all
    return list(self)
  File /home/aitjcize/Work/shotwill/python/SQLAlchemy-0.7.2-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py, line 1953, in instances
    fetch = cursor.fetchall()
  File /home/aitjcize/Work/shotwill/python/SQLAlchemy-0.7.2-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py, line 2979, in fetchall
    l = self.process_rows(self._fetchall_impl())
  File /home/aitjcize/Work/shotwill/python/SQLAlchemy-0.7.2-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py, line 2948, in _fetchall_impl
    self._non_result()
  File /home/aitjcize/Work/shotwill/python/SQLAlchemy-0.7.2-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py, line 2953, in _non_result
    This result object does not return rows.
ResourceClosedError: This result object does not return rows. It has
been closed automatically.

Does anyone have any idea what is going on? It only happens when using
MySQL.
Another thing: because there may be multiple workers executing the
same Celery task at the same time, the pop_timeout() function above is
what I came up with. Is there a better solution for popping records
from a database?

Thanks in advance.
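
On the last question, here is a minimal sketch (not code from this thread) of one way to do the claim-fetch-delete with plain SQLAlchemy ORM calls, where each worker claims rows under its own random token so concurrent workers never fetch or delete each other's rows. SomeTable here is a stand-in model and TIMEOUT mirrors the post; the real application would use the MySQL URL.

import time
import uuid

from sqlalchemy import Column, Float, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()
TIMEOUT = 300  # seconds, assumed value

class SomeTable(Base):
    # minimal stand-in for the real model in the post
    __tablename__ = 'some_table'
    id = Column(Integer, primary_key=True)
    timestamp = Column(Float)        # epoch seconds, as in the post
    transtamp = Column(String(64))   # claim token; NULL means unclaimed

# sqlite:// keeps the sketch runnable; use the MySQL URL in the real app.
engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
# expire_on_commit=False keeps the returned objects usable after their rows
# have been deleted and the transaction committed.
Session = sessionmaker(bind=engine, expire_on_commit=False)

def pop_timeout():
    session = Session()
    try:
        claim = uuid.uuid4().hex          # unique per worker
        cutoff = time.time() - TIMEOUT

        # Claim expired, unclaimed rows in a single UPDATE ...
        session.query(SomeTable).\
            filter(SomeTable.transtamp == None).\
            filter(SomeTable.timestamp < cutoff).\
            update({'transtamp': claim}, synchronize_session=False)
        session.commit()

        # ... then fetch and delete only the rows this worker claimed.
        records = session.query(SomeTable).\
            filter(SomeTable.transtamp == claim).all()
        session.query(SomeTable).\
            filter(SomeTable.transtamp == claim).\
            delete(synchronize_session=False)
        session.commit()
        return records
    finally:
        session.close()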




[sqlalchemy] Using update()

2010-06-15 Thread Az
Hi there,

Can someone give me an example of using update()?

Thanks.
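
In case it helps, here is a small self-contained sketch (the User model and data are made up for illustration) showing the two common forms: Query.update() on the ORM side and table.update() on the Core/expression side.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    status = Column(String(20))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([User(name='alice', status='new'),
                 User(name='bob', status='new')])
session.commit()

# 1) ORM bulk update: issues a single UPDATE statement.
session.query(User).filter(User.status == 'new').\
    update({'status': 'active'}, synchronize_session=False)
session.commit()

# 2) Core expression update on the underlying table.
conn = engine.connect()
conn.execute(User.__table__.update().
             where(User.__table__.c.name == 'bob').
             values(status='inactive'))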




[sqlalchemy] Primary key not unique?

2010-06-14 Thread Az
I've got an error that says the primary key is not unique. This
shouldn't be possible since my primary keys are unique IDs :S

This is only happening after I switched to a physical sqlite3 database
from the :memory: one


#BEGIN#

File Main.py, line 97, in <module>
MCS.addToTable()
  File /XXX/MonteCarloSimulation.py, line 77, in addToTable
session.flush()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1354, in flush
self._flush(objects)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1432, in _flush
flush_context.execute()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 261, in execute
UOWExecutor().execute(self, tasks)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 753, in execute
self.execute_save_steps(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 768, in execute_save_steps
self.save_objects(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 759, in save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
mapper.py, line 1428, in _save_obj
c = connection.execute(statement.values(value_params), params)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 824, in execute
return Connection.executors[c](self, object, multiparams, params)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 874, in _execute_clauseelement
return self.__execute_context(context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 896, in __execute_context
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 950, in _cursor_execute
self._handle_dbapi_exception(e, statement, parameters, cursor,
context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 931, in _handle_dbapi_exception
raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exc.IntegrityError: (IntegrityError) PRIMARY KEY must be
unique u'INSERT INTO projs (proj_id, proj_allocated, proj_blocked,
proj_sup, presim_pop) VALUES (?, ?, ?, ?, ?)' [111, None, None,
44, 0]

#END#
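
One possibility, offered only as a guess: unlike :memory:, a file database keeps the rows from previous runs, so inserting a Project with a proj_id that is already in projs raises exactly this error. A minimal sketch (the Project model below is a stand-in, not the poster's real mapping) of sidestepping it with session.merge:

from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Project(Base):
    # stand-in for the real projs mapping
    __tablename__ = 'projs'
    proj_id = Column(Integer, primary_key=True)
    proj_sup = Column(Integer)
    presim_pop = Column(Integer)

engine = create_engine('sqlite:///spalloc.sqlite3')  # a file DB persists between runs
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# A second session.add(Project(proj_id=111, ...)) across runs would fail with
# "PRIMARY KEY must be unique"; merge() looks the row up by primary key and
# updates it if it already exists, inserting only when it does not.
session.merge(Project(proj_id=111, proj_sup=44, presim_pop=0))
session.commit()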




[sqlalchemy] Re: Questions about session

2010-06-12 Thread Az
Hi Conor,

Many apologies for being pushy, but since I'm pretty much in the
process of finishing up my code (due in two days), I wonder if you
could just take a look at the last three posts of mine---these
constitute the final hurdle and I'll be done :)

Cheers,

Az

On Jun 9, 9:46 pm, Conor conor.edward.da...@gmail.com wrote:
 On 06/09/2010 02:45 PM, Az wrote:



  Expected: students, supervisors, projects, dictionaries of said objects,
  and other attribute values (strings, ints, lists, etc.). Unexpected:
  anything else, especially sessions, InstanceState objects, or other ORM
  support objects.

  Actually got some stuff like the following (copy-pasting bits from my
  print output):

  (<class 'sqlalchemy.orm.state.InstanceState'>,)
  {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at
  0x2d5beb0>, 'proj_id': 1100034, 'postsim_probs': [], 'proj_sup': 1291,
  'presim_pop': 0, 'own_project': False, 'allocated': False,
  'proj_name': 'MPC on a Chip', 'blocked': False}

  Stuff like that :S

 I'm not sure what that printout indicates. Try this as your debug printout:

 def get_memo_type_count(memo):
     retval = {}
     for obj in memo.itervalues():
         type_ = obj.__class__
         retval[type_] = retval.get(type_, 0) + 1
     return retval

 [perform deep copies]
 type_count = get_memo_type_count(memo)
 import pprint
 pprint.pprint(type_count)

 This will tell you, e.g. how many Student objects were copied, how many
 InstanceState objects were copied, etc. Remember that you will have to
 override __deepcopy__ on your mapped classes or use the
 use-case-specific copy function to prevent ORM attributes (such as
 _sa_instance_state) from being copied.



  [...]
  The most likely cause is if you call session.add(temp_alloc) after
  calling session.merge(temp_alloc) for the same temp_alloc object. I
  noticed your original monteCarloBasic had two calls to
  session.add(temp_alloc); did both get changed to
  session.merge(temp_alloc)? If that doesn't work, can you verify that
  SQLAlchemy's primary key for SimAllocation matches the database's
  primary key for sim_alloc? What column type are you using for uid? Which
  call to session.merge is failing (line 163 according to your traceback),
  the one inside your for rank in ranks loop or the one outside?

  Oh yeah good point, they're separate calls. Basically for the one in
  for rank in ranks
  adds for a student getting a project, the other adds if a student
  doesn't get a project since we want
  to track all students (allocated or not, since the state of being
  unallocated is what gives
  us motivation to optimise the results).

 Your original monteCarloBasic definition had this:

 for rank in ranks:
     proj = random.choice(list(student.preferences[rank]))
     if not (proj.allocated or proj.blocked or proj.own_project):
         [...]
         session.add(temp_alloc) # #1
         break

 ident += 1
 session.add(temp_alloc) # #2

 session.add #1 is redundant since #2 gets called regardless of whether
 the student gets allocated a project or not (ignoring exceptions). Just
 a minor nitpick.

  Anyway, session.merge() is for overwriting previously existing values
  right? Now thanks to the UUID I can add multiple calls to
  monteCarloBasic() to my physical database :)

 session.merge gives you find or create behavior: look for an existing
 object in the database, or create a new one if no existing object is
 found. Note that session.merge requires you to completely fill in the
 object's primary key whereas session.add does not.
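
A minimal sketch of that find-or-create behaviour, with a stand-in two-column primary key (the Item model below is illustrative, not from this thread):

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'items'
    run_id = Column(Integer, primary_key=True)
    stud_id = Column(Integer, primary_key=True)
    note = Column(String(50))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.merge(Item(run_id=1, stud_id=5796, note='first'))   # no such PK yet: INSERT
session.commit()
session.merge(Item(run_id=1, stud_id=5796, note='second'))  # PK exists: UPDATE same row
session.commit()

# session.add(Item(run_id=1, stud_id=5796, ...)) here would instead try a
# second INSERT and hit the duplicate-key error seen earlier in the thread.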

  I basically wrote a small function that, for everytime the
  monteCarloBasic() is called, will append the UUID, the number of
  trials ran and the date-time to a text file. My supervisor would have
  to copy paste that into a GUI text field or the command line but it's
  not that much of a hassle, given the usefulness of the database.

 Sounds pretty ugly. What if you add extra tables to represent runs
 and/or trials?

 class Run(Base):
     # Having a separate table here gives you nice auto-incrementing run ids
     # and lets you attach additional information to a run, such as timestamp,
     # human-supplied comment, etc.
     __tablename__ = 'run'
     id = Column(Integer, primary_key=True)
     timestamp = Column(DateTime, nullable=False)
     # comment = Column(UnicodeText(100), nullable=False)

     trials = relationship('Trial',
                           back_populates='run',
                           order_by=lambda: Trial.id.asc())

 class Trial(Base):
     # Having a separate table here is of dubious value, but hey it makes the
     # relationships a bit nicer!
     __tablename__ = 'trial'
     __table_args__ = (PrimaryKeyConstraint('run_id', 'id'), {})
     run_id = Column(Integer, ForeignKey('run.id'))
     id = Column(Integer)

     run = relationship('Run', back_populates='trials')
     sim_allocs = relationship('SimAllocation', back_populates='trial')

 class SimAllocation(Base

[sqlalchemy] Re: Questions about session

2010-06-10 Thread Az
The pprintout was:

{<type 'collections.defaultdict'>: 156,
 <type 'bool'>: 2,
 <type 'float'>: 1,
 <type 'int'>: 538,
 <type 'list'>: 1130,
 <type 'dict'>: 867,
 <type 'NoneType'>: 1,
 <type 'set'>: 932,
 <type 'str'>: 577,
 <type 'tuple'>: 1717,
 <type 'type'>: 5,
 <class 'sqlalchemy.util.symbol'>: 1,
 <class 'sqlalchemy.orm.state.InstanceState'>: 236,
 <class 'ProjectParties.Student'>: 156,
 <class 'ProjectParties.Supervisor'>: 39,
 <class 'ProjectParties.Project'>: 197}

I think the InstanceStates come from the Supervisor and Project
classes (197+39 = 236)

 Sounds pretty ugly. What if you add extra tables to represent runs
 and/or trials?

 class Run(Base):
 # Having a separate table here gives you nice auto-incrementing run ids
 # and lets you attach additional information to a run, such as timestamp,
 # human-supplied comment, etc.
 __tablename__ = 'run'
 id = Column(Integer, primary_key=True)
 timestamp = Column(DateTime, nullable=False)
 # comment = Column(UnicodeText(100), nullable=False)

 trials = relationship('Trial',
   back_populates='run',
   order_by=lambda: Trial.id.asc())

 class Trial(Base):
 # Having a separate table here is of dubious value, but hey it makes the
 # relationships a bit nicer!
 __tablename__ = 'trial'
 __table_args__ = (PrimaryKeyConstraint('run_id', 'id'), {})
 run_id = Column(Integer, ForeignKey('run.id'))
 id = Column(Integer)

 run = relationship('Run', back_populates='trials')
 sim_allocs = relationship('SimAllocation', back_populates='trial')

 class SimAllocation(Base):
 ...
 __table_args__ = (PrimaryKeyConstraint('run_id', 'trial_id', 'stud_id'),
   ForeignKeyConstraint(['run_id', 'trial_id'],
['trial.run_id', 'trial.id']),
   {})

 run_id = Column(Integer)
 trial_id = Column(Integer)
 stud_id = Column(Integer)

 trial = relationship('Trial', back_populates='sim_allocs')

Ah true, my solution was rather hacky and not very elegant.

Your class definitions... are you defining both the table and the class
in one go? Would I have to change the way my monteCarloBasic creates
instances of SimAllocation?
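
For reference, a minimal side-by-side sketch (names assumed) of the two styles: the declarative classes above define the class and its table in one go, while the classical Table + mapper() form used elsewhere in this thread keeps them separate. Instances are created the same way in both cases, so monteCarloBasic would only need to pass whatever new constructor arguments the class expects.

from sqlalchemy import Column, DateTime, Integer, MetaData, Table
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import mapper

# Declarative: class and table defined together.
Base = declarative_base()

class Run(Base):
    __tablename__ = 'run'
    id = Column(Integer, primary_key=True)
    timestamp = Column(DateTime, nullable=False)

# Classical: a separate Table object plus an explicit mapper() call.
metadata = MetaData()
trial_table = Table('trial', metadata,
                    Column('id', Integer, primary_key=True),
                    Column('run_id', Integer))

class Trial(object):
    pass

mapper(Trial, trial_table)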



On Jun 9, 9:46 pm, Conor conor.edward.da...@gmail.com wrote:
 On 06/09/2010 02:45 PM, Az wrote:



  Expected: students, supervisors, projects, dictionaries of said objects,
  and other attribute values (strings, ints, lists, etc.). Unexpected:
  anything else, especially sessions, InstanceState objects, or other ORM
  support objects.

  Actually got some stuff like the following (copy-pasting bits from my
  print output):

  (<class 'sqlalchemy.orm.state.InstanceState'>,)
  {'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at
  0x2d5beb0>, 'proj_id': 1100034, 'postsim_probs': [], 'proj_sup': 1291,
  'presim_pop': 0, 'own_project': False, 'allocated': False,
  'proj_name': 'MPC on a Chip', 'blocked': False}

  Stuff like that :S

 I'm not sure what that printout indicates. Try this as your debug printout:

 def get_memo_type_count(memo):
     retval = {}
     for obj in memo.itervalues():
         type_ = obj.__class__
         retval[type_] = retval.get(type_, 0) + 1
     return retval

 [perform deep copies]
 type_count = get_memo_type_count(memo)
 import pprint
 pprint.pprint(type_count)

 This will tell you, e.g. how many Student objects were copied, how many
 InstanceState objects were copied, etc. Remember that you will have to
 override __deepcopy__ on your mapped classes or use the
 use-case-specific copy function to prevent ORM attributes (such as
 _sa_instance_state) from being copied.



  [...]
  The most likely cause is if you call session.add(temp_alloc) after
  calling session.merge(temp_alloc) for the same temp_alloc object. I
  noticed your original monteCarloBasic had two calls to
  session.add(temp_alloc); did both get changed to
  session.merge(temp_alloc)? If that doesn't work, can you verify that
  SQLAlchemy's primary key for SimAllocation matches the database's
  primary key for sim_alloc? What column type are you using for uid? Which
  call to session.merge is failing (line 163 according to your traceback),
  the one inside your for rank in ranks loop or the one outside?

  Oh yeah good point, they're separate calls. Basically for the one in
  for rank in ranks
  adds for a student getting a project, the other adds if a student
  doesn't get a project since we want
  to track all students (allocated or not, since the state of being
  unallocated is what gives
  us motivation to optimise the results).

 Your original monteCarloBasic definition had this:

 for rank in ranks:
     proj = random.choice(list(student.preferences[rank]))
     if not (proj.allocated or proj.blocked or proj.own_project):
         [...]
         session.add(temp_alloc) # #1
         break

 ident += 1
 session.add(temp_alloc) # #2

 session.add #1 is redundant since #2 gets

[sqlalchemy] Re: Questions about session

2010-06-10 Thread Az
So I laid them out like this:

class Run(Base):
    # For autoincrementing run IDs
    # Allows addition of more information to a run
    __tablename__ = 'run'
    id = Column(Integer, primary_key=True)
    timestamp = Column(DateTime, nullable=False)
    # comment = Column(UnicodeText(100), nullable=False)

    trials = relationship('Trial',
                          back_populates='run',
                          order_by=lambda: Trial.id.asc())

class Trial(Base):
    # Having a separate table here is of dubious value, but hey it makes the
    # relationships a bit nicer!
    __tablename__ = 'trial'
    __table_args__ = (PrimaryKeyConstraint('run_id', 'id'), {})
    run_id = Column(Integer, ForeignKey('run.id'))
    id = Column(Integer)

    run = relationship('Run', back_populates='trials')
    sim_allocs = relationship('SimAllocation', back_populates='trial')

class SimAllocation(Base):
    #
    __tablename__ = 'sim_alloc'
    __table_args__ = (PrimaryKeyConstraint('run_id', 'trial_id', 'stud_id'),
                      ForeignKeyConstraint(['run_id', 'trial_id'],
                                           ['trial.run_id', 'trial.id']),
                      {})

    run_id = Column(Integer)
    trial_id = Column(Integer)
    stud_id = Column(Integer)

    trial = relationship('Trial', back_populates='sim_allocs')

    def __init__(self, ident, uid, session_id, stud_id, alloc_proj_rank):
        self.ident = ident
        self.uid = uid
        self.session_id = session_id
        self.stud_id = stud_id
        self.alloc_proj = None
        self.alloc_proj_ref = None
        self.alloc_proj_rank = alloc_proj_rank

    def __repr__(self):
        return str(self)

    def __str__(self):
        return "Row: %s UID: %s - %s: Student: %s (Project: %s - Rank: %s)" % \
            (self.ident, self.uid, self.session_id, self.stud_id, self.alloc_proj,
             self.alloc_proj_rank)

#

The original mapping was:

simulation_allocation = Table('sim_alloc', metadata,
    Column('ident', Integer),
    Column('uid', String, primary_key=True),
    Column('session_id', Integer, primary_key=True),
    Column('stud_id', Integer, ForeignKey('studs.ee_id'), primary_key=True),
    Column('alloc_proj', Integer, ForeignKey('projs.proj_id')),
    Column('alloc_proj_rank', Integer)
)

mapper(SimAllocation, simulation_allocation, properties={'stud' :
relation(StudentDBRecord), 'proj' : relation(Project)})

Of course, I'd get rid of the project relationship, since allocated_project
and allocated_proj_ref *can* be NoneType (realised that just now!).

Additionally, I'd like to maintain the ForeignKey relationship with
the StudentDBRecord table for pulling in info about a student.

Also, I've not got rid of ident, because I don't know how else to map
SimAllocation to a dictionary as well. The only thing I could use for
keys was the ident before, but now that we have a composite key, what
happens to the dictionary? However, the dictionary will just hold
information for the current run, really.



On Jun 10, 6:33 pm, Az azfarul.is...@gmail.com wrote:
 The pprintout was:

 {<type 'collections.defaultdict'>: 156,
  <type 'bool'>: 2,
  <type 'float'>: 1,
  <type 'int'>: 538,
  <type 'list'>: 1130,
  <type 'dict'>: 867,
  <type 'NoneType'>: 1,
  <type 'set'>: 932,
  <type 'str'>: 577,
  <type 'tuple'>: 1717,
  <type 'type'>: 5,
  <class 'sqlalchemy.util.symbol'>: 1,
  <class 'sqlalchemy.orm.state.InstanceState'>: 236,
  <class 'ProjectParties.Student'>: 156,
  <class 'ProjectParties.Supervisor'>: 39,
  <class 'ProjectParties.Project'>: 197}

 I think the InstanceStates come from the Supervisor and Project
 classes (197+39 = 236)



  Sounds pretty ugly. What if you add extra tables to represent runs
  and/or trials?

  class Run(Base):
      # Having a separate table here gives you nice auto-incrementing run ids
      # and lets you attach additional information to a run, such as 
  timestamp,
      # human-supplied comment, etc.
      __tablename__ = 'run'
      id = Column(Integer, primary_key=True)
      timestamp = Column(DateTime, nullable=False)
      # comment = Column(UnicodeText(100), nullable=False)

      trials = relationship('Trial',
                            back_populates='run',
                            order_by=lambda: Trial.id.asc())

  class Trial(Base):
      # Having a separate table here is of dubious value, but hey it makes the
      # relationships a bit nicer!
      __tablename__ = 'trial'
      __table_args__ = (PrimaryKeyConstraint('run_id', 'id'), {})
      run_id = Column(Integer, ForeignKey('run.id'))
      id = Column(Integer)

      run = relationship('Run', back_populates='trials')
      sim_allocs = relationship('SimAllocation', back_populates='trial')

  class SimAllocation(Base):
      ...
      __table_args__ = (PrimaryKeyConstraint('run_id', 'trial_id', 'stud_id

[sqlalchemy] Re: Questions about session

2010-06-10 Thread Az
Let me take a guess:

class Supervisor(object):
    def __init__(self, ee_id, name, original_quota, loading_limit):
        self.ee_id = ee_id
        self.name = name
        self.original_quota = original_quota
        self.loading_limit = loading_limit
        self.predecr_quota = 0
        self.offered_proj = set()
        self.total_prealloc_pop = 0
        self.total_postalloc_pop = 0

    def __repr__(self):
        return str(self)

    def __str__(self):
        return self.name
        return "%s %s %s (Offered projects: %s)" % (self.ee_id, self.name,
                                                    self.predecr_quota, self.offered_proj)

So *inside* the Supervisor class would I define it like this (trying
to have a go at it)?

    def __deepcopy__(self, memo):
        dc = type(self)()
        dc.__dict__.update(self.__dict__)
        for attr in dir(supervisor):
            if not attr.startswith('__'):
                self.attr = deepcopy(self.attr, memo)

So this only overrides __deepcopy__ when I call it for a Supervisor
and not for any of the other classes right?
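
For comparison, here is a sketch of what an explicit __deepcopy__ on Supervisor might look like, following the earlier advice to leave _sa_instance_state out of the copy. The exact attribute handling is an assumption on my part, not something from this thread; defining the method here only changes how Supervisor instances are copied, the other classes keep the default behaviour.

import copy

class Supervisor(object):
    # ... existing __init__ / __repr__ / __str__ as above ...

    def __deepcopy__(self, memo):
        cls = self.__class__
        dc = cls.__new__(cls)          # skip __init__, which needs arguments
        memo[id(self)] = dc            # register early so reference cycles resolve
        for key, value in self.__dict__.items():
            if key == '_sa_instance_state':
                continue               # leave the ORM bookkeeping out of the copy
            setattr(dc, key, copy.deepcopy(value, memo))
        return dc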



On Jun 10, 6:56 pm, Az azfarul.is...@gmail.com wrote:
 So I laid them out like this:

 class Run(Base):
     # For autoincrementing run IDs
         # Allows addition of more information to a run
     __tablename__ = 'run'
     id = Column(Integer, primary_key=True)
     timestamp = Column(DateTime, nullable=False)
     # comment = Column(UnicodeText(100), nullable=False)

     trials = relationship('Trial',
                           back_populates='run',
                           order_by=lambda: Trial.id.asc())

 class Trial(Base):
     # Having a separate table here is of dubious value, but hey it
 makes the
     # relationships a bit nicer!
     __tablename__ = 'trial'
     __table_args__ = (PrimaryKeyConstraint('run_id', 'id'), {})
     run_id = Column(Integer, ForeignKey('run.id'))
     id = Column(Integer)

     run = relationship('Run', back_populates='trials')
     sim_allocs = relationship('SimAllocation', back_populates='trial')

 class SimAllocation(Base):
 #
         __tablename__ = 'sim_alloc'
         __table_args__ = (PrimaryKeyConstraint('run_id', 'trial_id',
 'stud_id'),
                              ForeignKeyConstraint(['run_id', 'trial_id'],
                                                   ['trial.run_id',
 'trial.id']),
                              {})

    run_id = Column(Integer)
    trial_id = Column(Integer)
    stud_id = Column(Integer)

    trial = relationship('Trial', back_populates='sim_allocs')

         def __init__(self, ident, uid, session_id, stud_id, alloc_proj_rank):
                 self.ident = ident
                 self.uid = uid
                 self.session_id = session_id
                 self.stud_id = stud_id
                 self.alloc_proj = None
                 self.alloc_proj_ref = None
                 self.alloc_proj_rank = alloc_proj_rank

         def __repr__(self):
                 return str(self)

         def __str__(self):
          return "Row: %s UID: %s - %s: Student: %s (Project: %s - Rank: %s)" %
 (self.ident, self.uid, self.session_id, self.stud_id, self.alloc_proj,
 self.alloc_proj_rank)

 #

 The original mapping was:

 simulation_allocation = Table('sim_alloc', metadata,
         Column('ident', Integer),
         Column('uid', String, primary_key=True),
         Column('session_id', Integer, primary_key=True),
         Column('stud_id', Integer, ForeignKey('studs.ee_id'),
 primary_key=True),
         Column('alloc_proj', Integer, ForeignKey('projs.proj_id')),
         Column('alloc_proj_rank', Integer)
 )

 mapper(SimAllocation, simulation_allocation, properties={'stud' :
 relation(StudentDBRecord), 'proj' : relation(Project)})

 Of course, I'd get rid of the project relationship since an
 allocated_project and allocated_proj_ref *can* be a NoneType
 (realised that right now!)

 Additionally, I'd like to maintain the ForeignKey relationship with
 the StudentDRRecord table for pulling in info about a student.

 Also, I've not got rid of ident because I don't know how else to map
 SimAllocation to a dictionary as well. The only thing I could use for
 keys was the IDENT before but now that we have a composite key, what
 happens to the dictionary? However, the dictionary will just hold
 information for the current run really.

 On Jun 10, 6:33 pm, Az azfarul.is...@gmail.com wrote:

  The pprintout was:

  {<type 'collections.defaultdict'>: 156,
   <type 'bool'>: 2,
   <type 'float'>: 1,
   <type 'int'>: 538,
   <type 'list'>: 1130,
   <type 'dict'>: 867,
   <type 'NoneType'>: 1,
   <type 'set'>: 932,
   <type 'str'>: 577,
   <type 'tuple'>: 1717,
   <type 'type'>: 5,
   <class 'sqlalchemy.util.symbol'>: 1,
   <class 'sqlalchemy.orm.state.InstanceState'>: 236,
   <class 'ProjectParties.Student'>: 156,
   <class 'ProjectParties.Supervisor'>: 39,
   <class 'ProjectParties.Project'>: 197}

  I think the InstanceStates come from the Supervisor

[sqlalchemy] Re: Questions about session

2010-06-09 Thread Az
 Expected: students, supervisors, projects, dictionaries of said objects,
 and other attribute values (strings, ints, lists, etc.). Unexpected:
 anything else, especially sessions, InstanceState objects, or other ORM
 support objects.

Actually got some stuff like the following (copy-pasting bits from my
print output):

(<class 'sqlalchemy.orm.state.InstanceState'>,)
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at
0x2d5beb0>, 'proj_id': 1100034, 'postsim_probs': [], 'proj_sup': 1291,
'presim_pop': 0, 'own_project': False, 'allocated': False,
'proj_name': 'MPC on a Chip', 'blocked': False}

Stuff like that :S

 Calling session.close() is not necessary if you have a single global
 session like you do. You only need it if you are worried that the
 database might get modified concurrently by another transaction (from a
 different process, session, etc.). Having said this, session.close()
 does not prevent you from using the session later on: it just closes out
 any pending transaction and expunges all object instances (including any
 student, supervisor, and project instances you may have added/loaded).
 This ensures that it sees fresh data for any future queries.

 In conclusion, using session.query(Student)... should work whether you
 have run monteCarloBasic or not.

Excellent :)

 The most likely cause is if you call session.add(temp_alloc) after
 calling session.merge(temp_alloc) for the same temp_alloc object. I
 noticed your original monteCarloBasic had two calls to
 session.add(temp_alloc); did both get changed to
 session.merge(temp_alloc)? If that doesn't work, can you verify that
 SQLAlchemy's primary key for SimAllocation matches the database's
 primary key for sim_alloc? What column type are you using for uid? Which
 call to session.merge is failing (line 163 according to your traceback),
 the one inside your for rank in ranks loop or the one outside?

Oh yeah, good point, they're separate calls. Basically, the one inside
"for rank in ranks" adds a record when a student gets a project, and the
other adds one when a student doesn't get a project, since we want to
track all students (allocated or not, since the state of being
unallocated is what gives us the motivation to optimise the results).

Anyway, session.merge() is for overwriting previously existing values,
right? Now, thanks to the UUID, I can add the results of multiple
monteCarloBasic() calls to my physical database :)

I basically wrote a small function that, every time monteCarloBasic()
is called, will append the UUID, the number of trials run and the
date-time to a text file. My supervisor would have to copy-paste that
into a GUI text field or the command line, but it's not that much of a
hassle, given the usefulness of the database.

On Jun 9, 4:25 pm, Conor conor.edward.da...@gmail.com wrote:
 On 06/09/2010 12:44 AM, Az wrote:



  Traceback (most recent call last):
    File Main.py, line 39, in <module>
      MCS.monteCarloBasic(trials)
    File //MonteCarloSimulation.py, line 163, in monteCarloBasic
      session.merge(temp_alloc)
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  session.py, line 1158, in merge
      self._autoflush()
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  session.py, line 897, in _autoflush
      self.flush()
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  session.py, line 1354, in flush
      self._flush(objects)
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  session.py, line 1432, in _flush
      flush_context.execute()
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  unitofwork.py, line 261, in execute
      UOWExecutor().execute(self, tasks)
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  unitofwork.py, line 753, in execute
      self.execute_save_steps(trans, task)
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  unitofwork.py, line 768, in execute_save_steps
      self.save_objects(trans, task)
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  unitofwork.py, line 759, in save_objects
      task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
    File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
  lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
  mapper.py, line 1428, in _save_obj
      c = connection.execute

[sqlalchemy] Re: Questions about session

2010-06-08 Thread Az
 memo = {}
 copied_students = copy.deepcopy(students, memo)
 copied_supervisors = copy.deepcopy(supervisors, memo)
 copied_projects = copy.deepcopy(projects, memo)

 After you do this, memo will contain a record of all copied objects. You
 should examine memo.values() to see if it is copying more than you
 expected. If it did copy just what you expected, then my worries were
 unfounded.

I'll let you know how that turns out soonish. While I know it's my
data, is there anything you can suggest from your experience that you
consider to be unexpected?

 Yes, session_id/trial_id and stud_id can repeat, and you can still group
 things together by run_id. Alternatively, you could add an
 autoincrementing primary key to SimAllocation, but I believe it is
 redundant since the combination (run_id, session_id/trial_id, stud_id)
 should be unique anyway. run_id can definitely be a datetime, but I'm
 not sure how well sqlite (it sounds like you're using sqlite) supports
 datetimes in queries 
 (see http://www.sqlalchemy.org/docs/reference/dialects/sqlite.html#date-an...).
 A GUID (or UUID) is just a 128-bit value (usually random); the benefit
 here is you can generate it on the client side and be confident that it
 will be unique on the server (to avoid duplicate primary key errors).
 Using datetimes or database sequences would also work. You can
 definitely pass the run_id as an argument to monteCarloBasic, or to each
 object's create_db_record method.

Also, I get why you mention three keys: run_id/guid/uuid and session_id/
trial_id alone won't suffice... but we know the students are unique
within each single allocation run. So I can get rid of the ident then?
It serves no other purpose really if I can get a key combo that's
unique and works.

I am indeed using SQLite3. I take it my physical database has to be
created with something like:
engine = create_engine('sqlite:///Database/spalloc.sqlite3',
echo=False)?

Also, I take it I should generate the UUID (http://docs.python.org/
library/uuid.html) when I call the MonteCarloBasic function, right?
Since it should stay the same throughout each call, I take it I'll have
to generate it before the loop. Additionally, how would I actually query
a 128-bit value? Say I have a bit in my GUI where the supervisor can
put in a UUID to pull the data off the database. How would he actually
know which UUID to put in? Any ideas?
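
A rough sketch of that idea (the SimAllocation/session lines are left as comments since they depend on the rest of the code): generate the UUID once before the trial loop, store it as an ordinary string column, and query it back by plain equality.

import uuid

def monteCarloBasic(trials):
    run_id = str(uuid.uuid4())        # one value for the whole run
    for trial in range(trials):
        # temp_alloc = SimAllocation(uid=run_id, session_id=trial, ...)
        # session.merge(temp_alloc)
        pass
    # session.commit()
    return run_id                     # hand this back to the GUI / a log file

run_id = monteCarloBasic(100)
# The supervisor can later paste the same 36-character string into the GUI,
# and it is queried like any other column, e.g.:
#   session.query(SimAllocation).filter_by(uid=run_id).all()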

Also, once I've got my stuff in the physical database and my program is
done, I'd call session.close(), right? How do I access the DB data then?
Would I have to write some separate functions that allow me to access
the data without using (for example) 'session.query(Student)...'? That
way the user (i.e. my supervisor) won't have to keep running the
readData, monteCarloBasic, etc. functions just to access the DB (that
would be poor indeed!).
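
For the last question, a small standalone sketch (assuming the mapped classes can be imported from the existing module) of how the data could be browsed later with just an engine and a fresh session, without rerunning readData or monteCarloBasic:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# from ProjectParties import Student   # the existing mapped class

engine = create_engine('sqlite:///Database/spalloc.sqlite3', echo=False)
Session = sessionmaker(bind=engine)
session = Session()

# students = session.query(Student).order_by(Student.ee_id).all()
# for s in students:
#     print s

session.close()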


On Jun 8, 3:07 pm, Conor conor.edward.da...@gmail.com wrote:
 On 06/07/2010 07:27 PM, Az wrote:



  By default, deepcopy will make one copy of everything in the object
  graph reachable by the object you feed it. The scary part is that,
  unless you also pass in a /memo/ argument to each call to deepcopy, it
  will copy the entire graph /every single call/. So if you deepcopy the
  students dictionary and then deepcopy the projects dictionary, each
  student's allocated_proj attribute will not match any instance in the
  projects dictionary. This is why a use-case-specific copy function is
  recommended: it is a lot easier to predict which objects will get copied
  and which objects will be shared.

  Shouldn't it match? I mean the student can only get allocated a
  project if it exists in the projects dictionary... or is that not the
  point?

  By use-case-specific, you mean I'll have to redefine deepcopy inside
  each class like this: def __deepcopy__(self): something, something?

  The only two places where this is an issue is for Supervisor's
  offered_proj attribute (a set) where, naturally, each project is an
  object and in Project where proj_sup is naturally a supervisor
  object :D

  The usefulness of my data structures comes back to bite me now...

 In theory, the following will work, ignoring ORM deepcopy issues
 discussed at the beginning of this thread:

 memo = {}
 copied_students = copy.deepcopy(students, memo)
 copied_supervisors = copy.deepcopy(supervisors, memo)
 copied_projects = copy.deepcopy(projects, memo)

 After you do this, memo will contain a record of all copied objects. You
 should examine memo.values() to see if it is copying more than you
 expected. If it did copy just what you expected, then my worries were
 unfounded.

 By use-case-specific, I meant define your own copy_objects function that
 explicitly specifies what is copied:

 def copy_objects(students, supervisors, projects):
     memo = {}
     copied_students = {}
     copied_supervisors = {}
     copied_projects = {}

     def copy_student(student):
         student_id = id(student)
         if student_id in memo

[sqlalchemy] Re: Questions about session

2010-06-08 Thread Az
Also:

Traceback (most recent call last):
  File Main.py, line 39, in <module>
MCS.monteCarloBasic(trials)
  File /Users/Azfar/Dropbox/Final Year Project/SPAllocation/
MonteCarloSimulation.py, line 163, in monteCarloBasic
session.merge(temp_alloc)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1158, in merge
self._autoflush()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 897, in _autoflush
self.flush()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1354, in flush
self._flush(objects)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1432, in _flush
flush_context.execute()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 261, in execute
UOWExecutor().execute(self, tasks)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 753, in execute
self.execute_save_steps(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 768, in execute_save_steps
self.save_objects(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 759, in save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
mapper.py, line 1428, in _save_obj
c = connection.execute(statement.values(value_params), params)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 824, in execute
return Connection.executors[c](self, object, multiparams, params)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 874, in _execute_clauseelement
return self.__execute_context(context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 896, in __execute_context
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 950, in _cursor_execute
self._handle_dbapi_exception(e, statement, parameters, cursor,
context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 931, in _handle_dbapi_exception
raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exc.IntegrityError: (IntegrityError) columns uid,
session_id, stud_id are not unique u'INSERT INTO sim_alloc (ident,
uid, session_id, stud_id, alloc_proj, alloc_proj_rank) VALUES
(?, ?, ?, ?, ?, ?)' [1, '1d295f48-7386-11df-8e87-00264a052efc', 1,
5796, 119, 1]

Good news: Got the UUID working in a snap.
Bad news: See error :(

On Jun 9, 4:54 am, Az azfarul.is...@gmail.com wrote:
  memo = {}
  copied_students = copy.deepcopy(students, memo)
  copied_supervisors = copy.deepcopy(supervisors, memo)
  copied_projects = copy.deepcopy(projects, memo)

  After you do this, memo will contain a record of all copied objects. You
  should examine memo.values() to see if it is copying more than you
  expected. If it did copy just what you expected, then my worries were
  unfounded.

 I'll let you know how that turns out soonish. While I know it's my
 data, is there anything you can suggest from your experience that you
 consider to be unexpected?

  Yes, session_id/trial_id and stud_id can repeat, and you can still group
  things together by run_id. Alternatively, you could add an
  autoincrementing primary key to SimAllocation, but I believe it is
  redundant since the combination (run_id, session_id/trial_id, stud_id)
  should be unique anyway. run_id can definitely be a datetime, but I'm
  not sure how well sqlite (it sounds like you're using sqlite) supports
  datetimes in queries 
  (see http://www.sqlalchemy.org/docs/reference/dialects/sqlite.html#date-an...).
  A GUID (or UUID) is just a 128-bit

[sqlalchemy] Re: Questions about session

2010-06-08 Thread Az
Traceback (most recent call last):
  File Main.py, line 39, in <module>
MCS.monteCarloBasic(trials)
  File //MonteCarloSimulation.py, line 163, in monteCarloBasic
session.merge(temp_alloc)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1158, in merge
self._autoflush()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 897, in _autoflush
self.flush()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1354, in flush
self._flush(objects)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1432, in _flush
flush_context.execute()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 261, in execute
UOWExecutor().execute(self, tasks)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 753, in execute
self.execute_save_steps(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 768, in execute_save_steps
self.save_objects(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 759, in save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
mapper.py, line 1428, in _save_obj
c = connection.execute(statement.values(value_params), params)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 824, in execute
return Connection.executors[c](self, object, multiparams, params)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 874, in _execute_clauseelement
return self.__execute_context(context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 896, in __execute_context
self._cursor_execute(context.cursor, context.statement,
context.parameters[0], context=context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 950, in _cursor_execute
self._handle_dbapi_exception(e, statement, parameters, cursor,
context)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/
engine/base.py, line 931, in _handle_dbapi_exception
raise exc.DBAPIError.instance(statement, parameters, e,
connection_invalidated=is_disconnect)
sqlalchemy.exc.IntegrityError: (IntegrityError) columns uid,
session_id, stud_id are not unique u'INSERT INTO sim_alloc (ident,
uid, session_id, stud_id, alloc_proj, alloc_proj_rank) VALUES
(?, ?, ?, ?, ?, ?)' [1, '1d295f48-7386-11df-8e87-00264a052efc', 1,
5796, 119, 1]



Good news: Got the UUID working in a snap.
Bad news: See error :(


Note: This happened when I started using 'session.merge(temp_alloc)'
instead of 'session.add'



On Jun 9, 4:54 am, Az azfarul.is...@gmail.com wrote:
  memo = {}
  copied_students = copy.deepcopy(students, memo)
  copied_supervisors = copy.deepcopy(supervisors, memo)
  copied_projects = copy.deepcopy(projects, memo)

  After you do this, memo will contain a record of all copied objects. You
  should examine memo.values() to see if it is copying more than you
  expected. If it did copy just what you expected, then my worries were
  unfounded.

 I'll let you know how that turns out soonish. While I know it's my
 data, is there anything you can suggest from your experience that you
 consider to be unexpected?

  Yes, session_id/trial_id and stud_id can repeat, and you can still group
  things together by run_id. Alternatively, you could add an
  autoincrementing primary key to SimAllocation, but I believe it is
  redundant since the combination (run_id, session_id/trial_id, stud_id)
  should be unique anyway. run_id can definitely be a datetime, but I'm
  not sure how well sqlite (it sounds like you're using sqlite) supports
  datetimes in queries

[sqlalchemy] Re: Questions about session

2010-06-07 Thread Az
 would recommend using a database
 sequence or GUIDs to ensure that each call to monteCarloBasic gets a
 unique value for this column.

As another key sequence different from the simple ident ==
row_number I'm currently using right? I'll look into that.

+++

The thread business is indeed going over my head :S.

 In this way, monteCarloBasic returns its
  results as a set of objects that are not attached to any session
  (either because they are unmapped or are transient
  
 http://www.sqlalchemy.org/docs/reference/orm/sessions.html#sqlalchemy...
  instances), which the UI thread uses to update the database. How
  you pass data from worker threads to the UI thread is dependent on
  your GUI toolkit.

My GUI toolkit is Tkinter?
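
For what it's worth, with Tkinter the usual equivalent of posting a callback to the UI thread is a Queue polled via after(); below is a minimal Python 2-style sketch (monteCarloBasic is stubbed out and the widget names are made up), in the spirit of the quoted advice rather than code from this thread.

import Queue
import threading
import Tkinter

result_queue = Queue.Queue()

def monteCarloBasic(trials):
    return 'pretend-results for %d trials' % trials   # stand-in for the real work

def worker():
    # Runs in a background thread; never touches the session or the widgets.
    result_queue.put(monteCarloBasic(100))

def poll_queue():
    try:
        result = result_queue.get_nowait()
    except Queue.Empty:
        root.after(100, poll_queue)    # nothing yet, check again in 100 ms
    else:
        # Safe to use the UI-thread session here, e.g. update_database(result).
        label.config(text=result)

root = Tkinter.Tk()
label = Tkinter.Label(root, text='running...')
label.pack()
threading.Thread(target=worker).start()
root.after(100, poll_queue)
root.mainloop()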

On Jun 7, 4:01 pm, Conor conor.edward.da...@gmail.com wrote:
 On 06/06/2010 02:58 PM, Az wrote:

  Hi Conor,

  Basically I sat down and made some decisions and changes.

  I've created an actual copy of the Student class as in I've now got
  two classes, Student and StudentUnmapped. The Unmapped one has the
  same attributes as the mapped one, except for being... well, unmapped.
  Now I can a) use deepcopy and b) change the objects without worry.
  resetData() will act on the unmapped dictionary as well so the mapped
  object remains safe and unchanged.

 Sounds good. Just beware that deepcopy will try to make copies of all
 the objects referenced by your StudentUnmapped objects (assuming you
 didn't define __deepcopy__), so you may end up copying projects,
 supervisors, etc.



  Sorry for beating around the bush with questions that were a bit non-
  SQLA.

  Let's get back to some SQLA questions:

  1. The only changes I'd push onto the mapped object would be... after
  running my MC, I get a bunch of probabilities -- those I want to
  persist. How do I modify the field in a table I've already
  session.commit()-ed using the following function. This happens
  pretty much after I've finished reading in the dictionaries
  completely. After that I just add each thing to the relevant table.
  But I'd want to update some attributes of student because I want to be
  able to have in the database for access later.

   def addToTable():
           """Very simple SQLAlchemy function that populates the Student, Project
           and Supervisor tables."""

          for student in students.itervalues():
                  session.add(student)
                  session.flush()

          for project in projects.itervalues():
                  session.add(project)
                  session.flush()

          for supervisor in supervisors.itervalues():
                  session.add(supervisor)
                  session.flush()

          session.commit()

 It sounds like you want to a) INSERT students/projects/supervisors that
 don't yet exist in the database, and b) UPDATE
 students/projects/supervisors that do exist in the database. If so, I
 think you want to use session.merge instead of session.add.

  2. Say I've now got a physical database and I've run my Monte-Carlo
  multiple times. I think I'd either want to a) have the original M-C
  sessions be overwritten or b) create another set of data, perhaps
  using the data to differentiate the two. How can I do this? Can I
  query each one separately? Or am I better off just with an overwrite?

 You can indeed append the new set of data to the existing data. You
 would just need another column in SimAllocation to distinguish between
 different calls to monteCarloBasic. I would recommend using a database
 sequence or GUIDs to ensure that each call to monteCarloBasic gets a
 unique value for this column.

  3. Finally, regarding the GUI. If each function indicates a separate
  thread, then in that case, yes with my GUI I'd be passing the
  session from thread to thread since I'm no longer just running Main.py
  but rather, the constituent functions one by one. How do I deal with
  this? The reason I used the database was because of persistence and I
  definitely want my data to persist between threads (and after I've
  closed my program) so I can use them for all manner of useful
  calculations, queries and output.

 Just to be clear, by thread I mean actual system threads spawned by
 the thread or threading module. If this is indeed what you want,
 then you probably have a UI thread and a worker thread that runs
 monteCarloBasic. Since you should not share a single session object
 between threads, you can:

    1. Change monteCarloBasic to not rely on sessions (including their
       persistent
       
 http://www.sqlalchemy.org/docs/reference/orm/sessions.html#sqlalchemy...
       objects) at all (you would have to make copies of your students,
       projects, and supervisors before hading them over to
       monteCarloBasic). You are already sort-of on this track by using
       StudentUnmapped objects. In this way, monteCarloBasic returns its
       results as a set of objects that are not attached to any session

[sqlalchemy] Re: Questions about session

2010-06-07 Thread Az
 By default, deepcopy will make one copy of everything in the object
 graph reachable by the object you feed it. The scary part is that,
 unless you also pass in a /memo/ argument to each call to deepcopy, it
 will copy the entire graph /every single call/. So if you deepcopy the
 students dictionary and then deepcopy the projects dictionary, each
 student's allocated_proj attribute will not match any instance in the
 projects dictionary. This is why a use-case-specific copy function is
 recommended: it is a lot easier to predict which objects will get copied
 and which objects will be shared.

Shouldn't it match? I mean the student can only get allocated a
project if it exists in the projects dictionary... or is that not the
point?

By use-case-specific, you mean I'll have to redefine deepcopy inside
each class like this: def __deepcopy__(self): something, something?

The only two places where this is an issue are Supervisor's
offered_proj attribute (a set), where naturally each project is an
object, and Project, where proj_sup is naturally a supervisor
object :D

The usefulness of my data structures comes back to bite me now...

 class Student(object):
     [existing definitions]

     def create_db_record(self):
         result = StudentDBRecord()
         result.ee_id = self.ee_id
         [copy over other attributes]
         return result

 class StudentDBRecord(object):
     pass

The create_db_record function... does it have to be called explicitly
somewhere, or does it run automatically?

If I now had to commit my Student data to the database... what would I
do?
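
A small sketch of how that might look (the class bodies below are trimmed stand-ins, and the session lines are commented out since they depend on the rest of the code): create_db_record is called explicitly for each unmapped object, and the mapped records it returns are what get added or merged and committed.

class StudentDBRecord(object):
    pass

class Student(object):
    def __init__(self, ee_id, name):
        self.ee_id = ee_id
        self.name = name

    def create_db_record(self):
        result = StudentDBRecord()
        result.ee_id = self.ee_id
        result.name = self.name        # copy over the other attributes too
        return result

students = {5796: Student(5796, 'Example Student')}

# for student in students.itervalues():
#     session.merge(student.create_db_record())   # or session.add(...)
# session.commit()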

 I think a primary key of
 (run_id, session_id/trial_id, stud_id) would be good

If I make them all primary keys I get a composite key right? Within an
entire M-C simulation the stud_id's would repeat in groups -- so if
there are 100 simulations, each stud_id appears 100 times in that
commit.

Run_id is a fantastic idea! I'd probably have it be the date and time?
Given that the simulation takes a while to run... the time will have
changed sufficiently for uniqueness. However, then querying becomes a
pain because of whatever format the date and time data will be in...
so in that case, what is a GUID and is that something we could give to
the Monte-Carlo ourselves before the run as some sort of argument? It
would be the same for an entire run but different from run to run (so
not unique from row to row, but unique from one run set to the other).
Any thoughts on this?

 Never used it, sorry. In general, every UI toolkit has a message/event
 queue to which you can post messages from any thread. So you could do
 something like:

 result = monteCarloBasic(...)

 def runs_in_ui_thread():
     update_database(result)

 ui_toolkit.post_callback(runs_in_ui_thread)

Thanks for that. Now I know what to search for (message, event queue,
callback) :)


On Jun 7, 10:50 pm, Conor conor.edward.da...@gmail.com wrote:
 On 06/07/2010 02:56 PM, Az wrote:

  Sounds good. Just beware that deepcopy will try to make copies of all
  the objects referenced by your StudentUnmapped objects (assuming you
  didn't define __deepcopy__), so you may end up copying projects,
  supervisors, etc.

  Good point. I'm deepcopying my students, projects and supervisors
  dictionaries. But yes you're right, all of them have a reference to
  other objects.

  [Q1:] How will deepcopying the objects referenced by my
  StudentUnmapped object affect me?

 By default, deepcopy will make one copy of everything in the object
 graph reachable by the object you feed it. The scary part is that,
 unless you also pass in a /memo/ argument to each call to deepcopy, it
 will copy the entire graph /every single call/. So if you deepcopy the
 students dictionary and then deepcopy the projects dictionary, each
 student's allocated_proj attribute will not match any instance in the
 projects dictionary. This is why a use-case-specific copy function is
 recommended: it is a lot easier to predict which objects will get copied
 and which objects will be shared.



  I also tried another structure for elegance...

  class Student(object):
     def __init__(self, ee_id, name, stream_id, overall_proby):
             self.ee_id = ee_id
             self.name = name
             self.stream_id = stream_id
             self.preferences = collections.defaultdict(set)
             self.allocated_project = None
             self.allocated_proj_ref = None
             self.allocated_rank = None
             self.own_project_id = None
             self.own_project_sup = None
             self.overall_proby = overall_proby

     def __repr__(self):
             return str(self)

     def __str__(self):
             return "%s %s %s: %s (OP: %s)" % (self.ee_id, self.name,
  self.allocated_rank, self.allocated_project, self.overall_proby)

  class StudentDBRecord(Student):
     def __init__(self, student):
             super(StudentDBRecord, self).__init__(student.ee_id

[sqlalchemy] Re: Questions about session

2010-06-07 Thread Az
Additionally, before I tried out the create_db_record...

However whenever I try to commit:

#[§§0§]

student_records = best_node
for rec in student_records.itervalues():
    MCS.session.add(rec)
MCS.session.commit()

I get:

sqlalchemy.orm.exc.UnmappedInstanceError: Class
'ProjectParties.Student' is not mapped

What is best_node?

Well it starts from here:

students_preopt = copy.deepcopy(DR.students)

This is passed to a function that spits out something called preopt
(this is where I take one of my best sessions and replicate what the
students dictionary looked like at the time)

And then I pass this to my simulated annealing algorithm which:
best_node = preopt
studOld = best_node
studNew = copy.deepcopy(studOld)

Now if studNew is better than studOld (by some weighting function):

Then best_node = studNew and we find ourselves at #[§§01]

But in short, studNew is the same as students which is mapped to
Student...

I need to get studNew into something that is mapped to
StudentDBRecord... is that where the create_db_record function
comes in?
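
If create_db_record (from the suggestion quoted further down) ends up defined
on Student, then that is indeed the hand-off point; something along these
lines, assuming best_node maps student ids to Student objects (untested
sketch):

student_records = {}
for sid, student in best_node.iteritems():
    student_records[sid] = student.create_db_record()   # plain Student -> mapped StudentDBRecord

for rec in student_records.itervalues():
    MCS.session.add(rec)
MCS.session.commit()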

On Jun 8, 1:27 am, Az azfarul.is...@gmail.com wrote:
  By default, deepcopy will make one copy of everything in the object
  graph reachable by the object you feed it. The scary part is that,
  unless you also pass in a /memo/ argument to each call to deepcopy, it
  will copy the entire graph /every single call/. So if you deepcopy the
  students dictionary and then deepcopy the projects dictionary, each
  student's allocated_proj attribute will not match any instance in the
  projects dictionary. This is why a use-case-specific copy function is
  recommended: it is a lot easier to predict which objects will get copied
  and which objects will be shared.

 Shouldn't it match? I mean the student can only get allocated a
 project if it exists in the projects dictionary... or is that not the
 point?

 By use-case-specific, you mean I'll have to redefine deepcopy inside
 each class like this: def __deepcopy__(self): something, something?

 The only two places where this is an issue are Supervisor's
 offered_proj attribute (a set where each project is an object) and
 Project, where proj_sup is naturally a supervisor object :D

 The usefulness of my data structures comes back to bite me now...

  class Student(object):
      [existing definitions]

      def create_db_record(self):
          result = StudentDBRecord()
          result.ee_id = self.ee_id
          [copy over other attributes]
          return result

  class StudentDBRecord(object):
      pass

 The create_db_record function... does it have to be called explicitly
 somewhere or does it run automatically?

 If I now had to commit my Student data to the database... what would I
 do?

  I think a primary key of
  (run_id, session_id/trial_id, stud_id) would be good

 If I make them all primary keys I get a composite key right? Within an
 entire M-C simulation the stud_id's would repeat in groups -- so if
 there are 100 simulations, each stud_id appears 100 times in that
 commit.
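
On the composite-key question just above: a composite key over those three
columns would look roughly like this, in the same Table style used elsewhere
in this thread (the run_id column is an assumption; repeated stud_ids are fine
as long as the combination stays unique):

from sqlalchemy import Table, Column, Integer, String, ForeignKey

sim_alloc_table = Table('sim_alloc', metadata,
    Column('run_id', String, primary_key=True),       # same for a whole M-C run
    Column('session_id', Integer, primary_key=True),  # trial number within the run
    Column('stud_id', Integer, primary_key=True),     # repeats across trials, never within one
    Column('alloc_proj', Integer, ForeignKey('projs.proj_id')),
    Column('alloc_proj_rank', Integer)
)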

 Run_id is a fantastic idea! I'd probably have it be the date and time?
 Given that the simulation takes a while to run... the time will have
 changed sufficiently for uniqueness. However, then querying becomes a
 pain because of whatever format the date and time data will be in...
 so in that case, what is a GUID and is that something we could give to
 the Monte-Carlo ourselves before the run as some sort of argument? It
 would be the same for an entire run but different from run to run (so
 not unique from row to row, but unique from one run set to the other).
 Any thoughts on this?

  Never used it, sorry. In general, every UI toolkit has a message/event
  queue to which you can post messages from any thread. So you could do
  something like:

  result = monteCarloBasic(...)

  def runs_in_ui_thread():
      update_database(result)

  ui_toolkit.post_callback(runs_in_ui_thread)

 Thanks for that. Now I know what to search for (message, event queue,
 callback) :)

 On Jun 7, 10:50 pm, Conor conor.edward.da...@gmail.com wrote:

  On 06/07/2010 02:56 PM, Az wrote:

   Sounds good. Just beware that deepcopy will try to make copies of all
   the objects referenced by your StudentUnmapped objects (assuming you
   didn't define __deepcopy__), so you may end up copying projects,
   supervisors, etc.

   Good point. I'm deepcopying my students, projects and supervisors
   dictionaries. But yes you're right, all of them have a reference to
   other objects.

   [Q1:] How will deepcopying the objects referenced by my
   StudentUnmapped object affect me?

  By default, deepcopy will make one copy of everything in the object
  graph reachable by the object you feed it. The scary part is that,
  unless you also pass in a /memo/ argument to each call to deepcopy, it
  will copy the entire graph /every single call/. So if you

[sqlalchemy] Re: Questions about session

2010-06-06 Thread Az
Hi Conor,

Basically I sat down and made some decisions and changes.

I've created an actual copy of the Student class as in I've now got
two classes, Student and StudentUnmapped. The Unmapped one has the
same attributes as the mapped one, except for being... well, unmapped.
Now I can a) use deepcopy and b) change the objects without worry.
resetData() will act on the unmapped dictionary as well so the mapped
object remains safe and unchanged.

Sorry for beating around the bush with questions that were a bit non-
SQLA.

Let's get back to some SQLA questions:

1. The only changes I'd push onto the mapped object would be... after
running my MC, I get a bunch of probabilities -- those I want to
persist. How do I modify a field in a table I've already
session.commit()-ed using the following function? This happens
pretty much after I've finished reading in the dictionaries
completely. After that I just add each thing to the relevant table.
But I'd want to update some attributes of each student because I want
them in the database for access later.

def addToTable():
    """Very simple SQLAlchemy function that populates the Student,
    Project and Supervisor tables."""

    for student in students.itervalues():
        session.add(student)
        session.flush()

    for project in projects.itervalues():
        session.add(project)
        session.flush()

    for supervisor in supervisors.itervalues():
        session.add(supervisor)
        session.flush()

    session.commit()
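
On question 1: an object that has already been committed can simply have its
attribute changed and committed again; the session tracks the change and
issues an UPDATE. Rough sketch (the id and value here are made up):

student = session.query(Student).filter(Student.ee_id == 4545).one()
student.overall_proby = 0.73   # value computed from the M-C statistics
session.commit()               # flushes an UPDATE for the changed column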

2. Say I've now got a physical database and I've run my Monte-Carlo
multiple times. I think I'd either want to a) have the original M-C
sessions be overwritten or b) create another set of data, perhaps
using the date to differentiate the two. How can I do this? Can I
query each one separately? Or am I better off just with an overwrite?
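
A sketch of one approach to question 2: if each row carries a run identifier
(a run_id column is assumed here, as discussed elsewhere in this archive),
every run can be kept and queried on its own, and "overwrite" is just a delete
of the old rows:

# pull back only one particular run
rows = session.query(SimAllocation).filter(SimAllocation.run_id == some_run_id).all()

# or, to overwrite instead: clear out the previous run before re-running
session.query(SimAllocation).filter(SimAllocation.run_id == old_run_id).delete()
session.commit()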

3. Finally, regarding the GUI. If each function runs in a separate
thread, then yes, with my GUI I'd be passing the session from thread
to thread, since I'm no longer just running Main.py but rather the
constituent functions one by one. How do I deal with this? The reason
I used the database was persistence, and I definitely want my data to
persist between threads (and after I've closed my program) so I can
use it for all manner of useful calculations, queries and output.
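
On question 3: the persistence lives in the database file, not in the session,
so nothing is lost by giving each thread its own short-lived session. One
common pattern, sketched here with scoped_session (which hands out a
thread-local Session); how monteCarloBasic picks up the session is assumed,
not taken from the code in this thread:

from sqlalchemy.orm import scoped_session, sessionmaker

Session = scoped_session(sessionmaker(bind=engine))

def run_in_worker_thread(trials):
    session = Session()          # this thread gets (or reuses) its own session
    try:
        monteCarloBasic(trials)  # assuming the M-C code uses this thread's session
        session.commit()
    finally:
        Session.remove()         # throw the thread-local session away when done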

If I can sort these three things out -- this entire project is wrapped
up :)

Thanks in advance!

Az

On Jun 6, 4:39 am, Conor conor.edward.da...@gmail.com wrote:
 On 06/05/2010 08:06 PM, Az wrote:



  Cheers!

  Creating a new instance of my mapped class and settings, manually.
  Gotcha. I think this will be an easier solution for me.

  Nah, I'm not in a web framework.

  Additional Q:

  +++

  Currently, my database is being stored in memory and it's fine like
  that since a) my data isn't very expansive and b) I'm running the
  program (python Main.py) from a command line where I can just comment
  out various functions in my Main file. However, I'm now designing a
  GUI for my code where I want to be able to call each function
  manually. Now, all functions that reference the session will reference
  session I defined as:

  Session = sessionmaker(bind=engine)
  session = Session()

  Thus after I run my finish my monteCarloBasic (defined way up at the
  top), I'd probably hit another button that would do what I did for
  CODE 2, i.e.

  def checkFor4545(trials):
      sid = 4545
      print sid
      project_id_list = list(students[sid].preferences)
      for project_id in project_id_list
          gotcha =
  session.query(SimAllocation).filter(SimAllocation.student_id ==
  sid).filter(PP.SimAllocation.alloc_proj == project_id).count()
          print project_id, gotcha/trials

  This basically counts the number of times student 4545 got each of his
  projects for entire Monte-Carlo simulation and prints the average over
  the trials.

  So basically when I'm in command line mode and go python Main.py my
  Main looks like:

  trials = 100
  monteCarloBasic(trials)
  checkFor4545(trials)

  So those will run one after the other.

  Now when I've got a GUI, I'll have the Monte-Carlo run separately and
  then afterwards, hit a button that corresponds to
  'checkFor4545(trials)'. Now, the reason I used SQLAlchemy in the first
  place (besides the general usefulness of SQL) is for data persistence.
  Will 'checkFor4545(trials)' display the same output as it would if it
  were run serially from Main.py? Will it reference the same session?
  (Probably a question you'll want to slap your forehead over, but I
  just want to verify I've got my understanding correct).

 I see nothing that indicates that they would NOT see the same session,
 but I do have some comments:

     * GUIs usually run long tasks in background threads to keep the UI
       responsive. If you were to do this, you would not want to use a
       single global

[sqlalchemy] Re: Questions about session

2010-06-05 Thread Az
Cheers!

Creating a new instance of my mapped class and settings, manually.
Gotcha. I think this will be an easier solution for me.

Nah, I'm not in a web framework.

Additional Q:

+++

Currently, my database is being stored in memory and it's fine like
that since a) my data isn't very expansive and b) I'm running the
program (python Main.py) from a command line where I can just comment
out various functions in my Main file. However, I'm now designing a
GUI for my code where I want to be able to call each function
manually. Now, all functions that reference the session will reference
session I defined as:

Session = sessionmaker(bind=engine)
session = Session()

Thus after I finish running my monteCarloBasic (defined way up at the
top), I'd probably hit another button that would do what I did for
CODE 2, i.e.

def checkFor4545(trials):
    sid = 4545
    print sid
    project_id_list = list(students[sid].preferences)
    for project_id in project_id_list:
        gotcha = session.query(SimAllocation).\
            filter(SimAllocation.student_id == sid).\
            filter(PP.SimAllocation.alloc_proj == project_id).count()
        print project_id, float(gotcha) / trials   # float division so the average is not truncated

This basically counts the number of times student 4545 got each of his
projects for the entire Monte-Carlo simulation and prints the average
over the trials.

So basically when I'm in command line mode and go python Main.py my
Main looks like:

trials = 100
monteCarloBasic(trials)
checkFor4545(trials)

So those will run one after the other.

Now when I've got a GUI, I'll have the Monte-Carlo run separately and
then afterwards, hit a button that corresponds to
'checkFor4545(trials)'. Now, the reason I used SQLAlchemy in the first
place (besides the general usefulness of SQL) is for data persistence.
Will 'checkFor4545(trials)' display the same output as it would if it
were run serially from Main.py? Will it reference the same session?
(Probably a question you'll want to slap your forehead over, but I
just want to verify I've got my understanding correct).

Additionally, when I save to a physical database file, what happens
every time I run monteCarloBasic(trials) (since it writes to the
database)? Will it rewrite it every time, or will it keep appending to
it?
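
For reference on the last point: with a file-backed engine the tables and
rows survive between program runs, and each later flush appends; nothing gets
rewritten unless rows are deleted or tables dropped explicitly. Sketch (the
filename is just an example):

engine = create_engine('sqlite:///montecarlo.db', echo=False)  # file instead of :memory:
metadata.create_all(engine)   # only creates tables that do not exist yet

# every subsequent monteCarloBasic(trials) call then adds new rows on top of
# whatever is already in the file; it never rewrites the earlier results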

++

Az


On Jun 4, 10:17 pm, Conor conor.edward.da...@gmail.com wrote:
 On 06/03/2010 02:33 PM, Az wrote:



  Firstly, apologies if I'm demanding too much but basically I'm quite a
  beginner at Python programming and this is for a University project,
  which is why I'm keen to get this done (due in a few days!). So I hope
  you won't mind me asking some questions that may seem really basic.

  deepcopy has issues because SQLAlchemy places extra information on your 
  objects, i.e. an _sa_instance_state attribute, that you dont want in your
  copy.  You *do* however need one to exist on your object.  Therefore 
  deepcopy is not supported right now by SQLAlchemy ORM objects.

  There are ways to manually blow away the old _sa_instance_state and put a 
  new one on the object, but the most straightforward is to make a new
  object with __init__() and set up the attributes that are significant, 
  instead of doing a full deep copy.

  Could you explain what you mean by creating a new object with
  __init__() and setting up the attributes? Would this be a new class
  that isn't mapped using SQLA?

 He just means creating a new instance of your mapped class and settings
 its attributes manually, e.g.:

 def copy(self):
    copy = MyMappedClass()
    copy.attr1 = self.attr1
    copy.attr2 = self.attr2
    return copy

  if you do really want to use deepcopy, you'd have to implement 
  __deepcopy__() on your objects and ensure that a new _sa_instance_state is 
  set up,
  there are functions in sqlalchemy.orm.attributes which can help with that. 
     This *should* be made an official SQLA recipe, but we haven't gotten
  around to it.

  Could you please explain what you mean by that? Would it be possible
  to give me an idea or an example of how such would work?

 In theory you can use a generic __deepcopy__ implementation for ORM
 classes. A very simple version might be:

 def orm_deepcopy(self, memo):
     mapper = class_mapper(self.__class__)
     result = self.__class__()
     memo[id(self)] = result
     for prop in mapper.iterate_properties:
         value = getattr(self, prop.key)
         setattr(result, prop.key, deepcopy(value, memo))
     return result

 class MyMappedClass(...):
     __deepcopy__ = orm_deepcopy

 Beware that this implementation does not handle overlapping properties
 well (e.g. relations and their corresponding foreign key columns),
 lazy-loading properties, read-only properties, clearing out
 auto-incrementing primary keys, etc. I would not recommend this
 approach, as a use-case-specific copy() method will be much easier to
 tailor to your needs.

  How can I stop it from closing the
  sessions?

  nothing in SQLA closes sessions.  Your program is doing that.

  I'm not issuing

[sqlalchemy] Re: Questions about session

2010-06-05 Thread Az
This will probably help:

def addToTable():
    """Very simple SQLAlchemy function that populates the Student,
    Project and Supervisor tables."""
    for student in students.itervalues():
        session.add(student)
        session.flush()

    for project in projects.itervalues():
        session.add(project)
        session.flush()

    for supervisor in supervisors.itervalues():
        session.add(supervisor)
        session.flush()

    session.commit()


def resetData():
    """An essential helper function to help reset all student, project
    and supervisor data to their pre-allocation values."""

    for student in students.itervalues():
        student.allocated_project = None
        student.allocated_proj_ref = None
        student.allocated_rank = penalty

    for supervisor in supervisors.itervalues():
        supervisor.loading_limit = supervisor.original_quota - supervisor.predecr_quota

    for project in projects.itervalues():
        project.allocated = False
        project.blocked = False

So yes, my resetData function will revert _certain_ attributes for
only the Student, Supervisor and Project classes back to the state
they were in when I parsed them in from the text files. My students,
supervisors and projects dictionaries -- at the end of the entire M-C
-- will basically look like that. But if there are other attributes I
have not reverted, they won't change. For example my student has an
attribute called overall_proby which I get after calculating the
statistics from the M-C simulation. This won't change until I run the
entire simulation from start to end or terminate. However, his
allocated_project will remain None.

 If it resets
 student-project associations, then could it end up deleting the
 temp_allocs you just added in that trial?

temp_alloc belongs to the SimAllocation class so resetData() shouldn't
affect it.


 If you were to do this, you would not want to use a
 single global session, because sharing a session between threads
 is a big no-no.

Why is that a bad thing? And how can I work around this single global
session? Basically, say in my GUI, I hit RUN for the Monte-Carlo
simulation and it does its work. And then I accidentally hit RUN
again. It'll recalculate everything right? Now when I click STATS to
get my statistics, I'll want to get the statistics for the second time
I ran the simulation. Thus how can I work around the single global
session problem?

 It seems like you would get duplicate primary keys on SimAllocation
 rows after the 1st call.

Once again, any ways of avoiding this? A unique key of some sort for
the Monte-Carlo runs, plus the incrementing row number, was the only
thing I could think of, since student IDs will repeat after every
individual trial is done.
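
One way around the duplicate keys is to stop assigning ident by hand and let
SQLite generate it: a single Integer primary key column is auto-incremented
whenever it is left unset on insert. Sketch only (the two-argument
SimAllocation constructor is hypothetical):

session_allocation = Table('sesh_alloc', metadata,
    Column('ident', Integer, primary_key=True),   # never set explicitly -> auto-assigned
    Column('sesh_id', Integer),
    Column('ee_id', Integer),
    Column('allocated_proj_ref', Integer, ForeignKey('projs.proj_id')),
    Column('allocated_rank', Integer)
)

# inside the Monte-Carlo loop, leave ident out entirely:
temp_alloc = SimAllocation(session_id, stud_id)   # hypothetical 2-argument constructor
session.add(temp_alloc)                           # ident is filled in at flush time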

Oh also: a BIG thank you for taking the time to help me out! I'm
almost at my deadline :)


On Jun 6, 4:39 am, Conor conor.edward.da...@gmail.com wrote:
 On 06/05/2010 08:06 PM, Az wrote:



  Cheers!

  Creating a new instance of my mapped class and settings, manually.
  Gotcha. I think this will be an easier solution for me.

  Nah, I'm not in a web framework.

  Additional Q:

  +++

  Currently, my database is being stored in memory and it's fine like
  that since a) my data isn't very expansive and b) I'm running the
  program (python Main.py) from a command line where I can just comment
  out various functions in my Main file. However, I'm now designing a
  GUI for my code where I want to be able to call each function
  manually. Now, all functions that reference the session will reference
  session I defined as:

  Session = sessionmaker(bind=engine)
  session = Session()

  Thus after I run my finish my monteCarloBasic (defined way up at the
  top), I'd probably hit another button that would do what I did for
  CODE 2, i.e.

  def checkFor4545(trials):
      sid = 4545
      print sid
      project_id_list = list(students[sid].preferences)
      for project_id in project_id_list
          gotcha =
  session.query(SimAllocation).filter(SimAllocation.student_id ==
  sid).filter(PP.SimAllocation.alloc_proj == project_id).count()
          print project_id, gotcha/trials

  This basically counts the number of times student 4545 got each of his
  projects for entire Monte-Carlo simulation and prints the average over
  the trials.

  So basically when I'm in command line mode and go python Main.py my
  Main looks like:

  trials = 100
  monteCarloBasic(trials)
  checkFor4545(trials)

  So those will run one after the other.

  Now when I've got a GUI, I'll have the Monte-Carlo run separately and
  then afterwards, hit a button that corresponds to
  'checkFor4545(trials)'. Now, the reason I used SQLAlchemy in the first
  place (besides the general usefulness of SQL) is for data persistence.
  Will 'checkFor4545(trials)' display the same output as it would

[sqlalchemy] Re: Questions about session

2010-06-05 Thread Az
Also adding a bit here:

My Student (this one is mapped) class looks like this:

class Student(object):
    def __init__(self, ee_id, name, stream_id, overall_proby):
        self.ee_id = ee_id
        self.name = name
        self.stream_id = stream_id
        self.preferences = collections.defaultdict(set)
        self.allocated_project = None
        self.allocated_proj_ref = None
        self.allocated_rank = None
        self.own_project_id = None
        self.own_project_sup = None
        self.overall_proby = overall_proby

students_table = Table('studs', metadata,
Column('ee_id', Integer, primary_key=True),
Column('name', String),
Column('stream_id', String),
Column('allocated_rank', Integer),
Column('allocated_proj_ref', Integer, ForeignKey('projs.proj_id')),
Column('overall_proby', Float)
)

mapper(Student, students_table)

Now the reason I wanted to use copy was because I was going to be
changing certain data around -- specifically, self.allocated_project,
self.allocated_proj_ref and self.allocated_rank.

You suggested defining, or rather redefining copy such that:

 def copy(self):
   copy = MyMappedClass()
   copy.attr1 = self.attr1
   copy.attr2 = self.attr2
   return copy

Now does this look okay?

def copied(self):
    # Student.__init__ needs these arguments (see the class above)
    copied = Student(self.ee_id, self.name, self.stream_id, self.overall_proby)
    copied.allocated_project = self.allocated_project
    copied.allocated_proj_ref = self.allocated_proj_ref
    copied.allocated_rank = self.allocated_rank
    return copied

Basically what I'm copying are dictionaries.

Thus the students dictionary looks like this:

students[ee_id] = Student(ee_id, name...)

So BEFORE I was just doing
students_copied = copy.copy(students)

How would I use the copy function I created?

students_copied = students.copied()?

And so if I used this version... my objects won't change (which was
why I wanted to use deepcopy) when I make changes to the copied
dictionary?
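
One note on usage: copied() as defined copies a single Student, so copying the
whole dictionary still needs a loop over it (rough sketch):

students_copied = {}
for sid, student in students.iteritems():
    students_copied[sid] = student.copied()
# edits to students_copied[sid] now leave the original objects in students alone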

Thanks

Az


On Jun 6, 4:39 am, Conor conor.edward.da...@gmail.com wrote:
 On 06/05/2010 08:06 PM, Az wrote:



  Cheers!

  Creating a new instance of my mapped class and settings, manually.
  Gotcha. I think this will be an easier solution for me.

  Nah, I'm not in a web framework.

  Additional Q:

  +++

  Currently, my database is being stored in memory and it's fine like
  that since a) my data isn't very expansive and b) I'm running the
  program (python Main.py) from a command line where I can just comment
  out various functions in my Main file. However, I'm now designing a
  GUI for my code where I want to be able to call each function
  manually. Now, all functions that reference the session will reference
  session I defined as:

  Session = sessionmaker(bind=engine)
  session = Session()

  Thus after I run my finish my monteCarloBasic (defined way up at the
  top), I'd probably hit another button that would do what I did for
  CODE 2, i.e.

  def checkFor4545(trials):
      sid = 4545
      print sid
      project_id_list = list(students[sid].preferences)
      for project_id in project_id_list
          gotcha =
  session.query(SimAllocation).filter(SimAllocation.student_id ==
  sid).filter(PP.SimAllocation.alloc_proj == project_id).count()
          print project_id, gotcha/trials

  This basically counts the number of times student 4545 got each of his
  projects for entire Monte-Carlo simulation and prints the average over
  the trials.

  So basically when I'm in command line mode and go python Main.py my
  Main looks like:

  trials = 100
  monteCarloBasic(trials)
  checkFor4545(trials)

  So those will run one after the other.

  Now when I've got a GUI, I'll have the Monte-Carlo run separately and
  then afterwards, hit a button that corresponds to
  'checkFor4545(trials)'. Now, the reason I used SQLAlchemy in the first
  place (besides the general usefulness of SQL) is for data persistence.
  Will 'checkFor4545(trials)' display the same output as it would if it
  were run serially from Main.py? Will it reference the same session?
  (Probably a question you'll want to slap your forehead over, but I
  just want to verify I've got my understanding correct).

 I see nothing that indicates that they would NOT see the same session,
 but I do have some comments:

     * GUIs usually run long tasks in background threads to keep the UI
       responsive. If you were to do this, you would not want to use a
       single global session, because sharing a session between threads
       is a big no-no.
     * I'm concerned what the call to resetData does. If it resets
       student-project associations, then could it end up deleting the
       temp_allocs you just added in that trial?
     * What should happen if you run monteCarloBasic multiple times? It
       seems like you would get duplicate primary keys on SimAllocation
       rows after the 1st call.

  Additionally, when I save to a physical database file, what

[sqlalchemy] Re: Questions about session

2010-06-04 Thread Az
Sorry, meant to reply to you Michael... ended up replying to myself!
---

Firstly, apologies if I'm demanding too much but basically I'm quite a
beginner at Python programming and this is for a University project,
which is why I'm keen to get this done (due in a few days!). So I hope
you won't mind me asking some questions that may seem really basic.

 deepcopy has issues because SQLAlchemy places extra information on your 
 objects, i.e. an _sa_instance_state attribute, that you dont want in your
 copy.  You *do* however need one to exist on your object.  Therefore deepcopy 
 is not supported right now by SQLAlchemy ORM objects.
 There are ways to manually blow away the old _sa_instance_state and put a new 
 one on the object, but the most straightforward is to make a new
 object with __init__() and set up the attributes that are significant, 
 instead of doing a full deep copy.

Could you explain what you mean by creating a new object with
__init__() and setting up the attributes? Would this be a new class
that isn't mapped using SQLA?

 if you do really want to use deepcopy, you'd have to implement __deepcopy__() 
 on your objects and ensure that a new _sa_instance_state is set up,
 there are functions in sqlalchemy.orm.attributes which can help with that.
 This *should* be made an official SQLA recipe, but we haven't gotten
 around to it.

Could you please explain what you mean by that? Would it be possible
to give me an idea or an example of how such would work?

 How can I stop it from closing the
 sessions?
 nothing in SQLA closes sessions.  Your program is doing that.

I'm not issuing a session.close() anywhere (I checked). Are there any
other ways of closing a session besides that? (If the answer is
Plenty, don't worry about it... I'll try to track it down then)

On Jun 3, 7:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 3, 2010, at 1:58 PM, Az wrote:

  Owning session has been closed? Can I still use deepcopy if the
  session has not been closed?

 deepcopy has issues because SQLAlchemy places extra information on your 
 objects, i.e. an _sa_instance_state attribute, that you dont want in your 
 copy.  You *do* however need one to exist on your object.  Therefore deepcopy 
 is not supported right now by SQLAlchemy ORM objects.   There are ways to 
 manually blow away the old _sa_instance_state and put a new one on the 
 object, but the most straightforward is to make a new object with __init__() 
 and set up the attributes that are significant, instead of doing a full deep 
 copy.

 if you do really want to use deepcopy, you'd have to implement __deepcopy__() 
 on your objects and ensure that a new _sa_instance_state is set up, there are 
 functions in sqlalchemy.orm.attributes which can help with that.    This 
 *should* be made an official SQLA recipe, but we haven't gotten around to it.

  How can I stop it from closing the
  sessions?

 nothing in SQLA closes sessions.  Your program is doing that.

  The problem is that if I change my shallow copied
  dictionary, the objects are changed.

  Basically, I'm trying to do this state change thing where I'll take a
  dictionary (let's call it Node 1), make changes to it (thereby making
  changes to the objects it references) and then save those changes as
  Node 2. Then I'll take Node 2 and make some changes to that. So on and
  so forth for a certain number of changes. Everytime I do so, I want to
  retain the information from the previous Node as well as a best node
  which can be any of the Nodes. If my operations change the objects, is
  that even possible?

  That was my motivation to use deepcopy but I don't want to stop using
  SQLAlchemy because of it :(

  On Jun 3, 4:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Jun 3, 2010, at 1:24 AM, Az wrote:

  +++ Questions +++

  1. Is this the correct way to use sessions or am I sort of abusing
  them?

  I dont see any poor patterns of use above.

  2. When should I close a session?

  when you no longer need the usage of any of the objects associated with 
  it, or any remaining objects are in a state which you will re-merge them 
  into a new session before you next use them.    The session in its default 
  state of autocommit=False is just like going to your database and starting 
  a transaction, doing some work - when you're done with the work, you close 
  the transaction, and all the data associated with that trans (i.e. your 
  ORM objects) is essentially invalid; other transactions can be modifying 
  that data.   Your objects are an extension of the Session, which should be 
  considered as an object-oriented window onto a database transaction.

  3. I got the following error after trying to use copy.deepcopy() on
  one of my dictionaries.

     attribute refresh operation cannot proceed % (state_str(state)))
  sqlalchemy.exc.UnboundExecutionError: Instance Project at 0x24c5c50
  is not bound to a Session; attribute refresh operation cannot proceed

[sqlalchemy] Re: Questions about session

2010-06-03 Thread Az
Owning session has been closed? Can I still use deepcopy if the
session has not been closed? How can I stop it from closing the
sessions? The problem is that if I change my shallow copied
dictionary, the objects are changed.

Basically, I'm trying to do this state change thing where I'll take a
dictionary (let's call it Node 1), make changes to it (thereby making
changes to the objects it references) and then save those changes as
Node 2. Then I'll take Node 2 and make some changes to that. So on and
so forth for a certain number of changes. Everytime I do so, I want to
retain the information from the previous Node as well as a best node
which can be any of the Nodes. If my operations change the objects, is
that even possible?

That was my motivation to use deepcopy but I don't want to stop using
SQLAlchemy because of it :(

On Jun 3, 4:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 3, 2010, at 1:24 AM, Az wrote:

  +++ Questions +++

  1. Is this the correct way to use sessions or am I sort of abusing
  them?

 I dont see any poor patterns of use above.

  2. When should I close a session?

 when you no longer need the usage of any of the objects associated with it, 
 or any remaining objects are in a state which you will re-merge them into a 
 new session before you next use them.    The session in its default state of 
 autocommit=False is just like going to your database and starting a 
 transaction, doing some work - when you're done with the work, you close the 
 transaction, and all the data associated with that trans (i.e. your ORM 
 objects) is essentially invalid; other transactions can be modifying that 
 data.   Your objects are an extension of the Session, which should be 
 considered as an object-oriented window onto a database transaction.

  3. I got the following error after trying to use copy.deepcopy() on
  one of my dictionaries.

     attribute refresh operation cannot proceed % (state_str(state)))
  sqlalchemy.exc.UnboundExecutionError: Instance Project at 0x24c5c50
  is not bound to a Session; attribute refresh operation cannot proceed

 don't do deepcopy() on a structure that contains ORM objects if their owning 
 session has been closed.  deepcopy on ORM objects probably has issues that 
 prevent it from working as you'd expect.   You'd be better off building copy 
 constructors, i.e. def copy(self):  return FooBar().




[sqlalchemy] Re: Questions about session

2010-06-03 Thread Az
I think I get what you mean. In the meantime, another error popped
up:

Traceback (most recent call last):
  File Main.py, line 32, in module
MCS.addToTable()
  File XXX/MonteCarloSimulation.py, line 138, in addToTable
### --- This function is described in the first post
session.flush()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1354, in flush
self._flush(objects)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
session.py, line 1432, in _flush
flush_context.execute()
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 261, in execute
UOWExecutor().execute(self, tasks)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 753, in execute
self.execute_save_steps(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 768, in execute_save_steps
self.save_objects(trans, task)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
unitofwork.py, line 759, in save_objects
task.mapper._save_obj(task.polymorphic_tosave_objects, trans)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
mapper.py, line 1303, in _save_obj
(state_str(state), instance_key, state_str(existing)))
sqlalchemy.orm.exc.FlushError: New instance Student at 0x24fe7b0
with identity key (class 'ProjectParties.Student', (7713,))
conflicts with persistent instance Student at 0x140f090

What's going on here with the flush error?

I've gone back to using just copy.copy(some_dictionary) right after
I read in my dictionaries and before I start doing stuff like
monteCarloSimulation (above) and addToTable (above). Even if I stick
the copy.copy() after addToTable(), it still manages to give me the
same error.

On Jun 3, 7:41 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 On Jun 3, 2010, at 1:58 PM, Az wrote:

  Owning session has been closed? Can I still use deepcopy if the
  session has not been closed?

 deepcopy has issues because SQLAlchemy places extra information on your 
 objects, i.e. an _sa_instance_state attribute, that you dont want in your 
 copy.  You *do* however need one to exist on your object.  Therefore deepcopy 
 is not supported right now by SQLAlchemy ORM objects.   There are ways to 
 manually blow away the old _sa_instance_state and put a new one on the 
 object, but the most straightforward is to make a new object with __init__() 
 and set up the attributes that are significant, instead of doing a full deep 
 copy.

 if you do really want to use deepcopy, you'd have to implement __deepcopy__() 
 on your objects and ensure that a new _sa_instance_state is set up, there are 
 functions in sqlalchemy.orm.attributes which can help with that.    This 
 *should* be made an official SQLA recipe, but we haven't gotten around to it.

  How can I stop it from closing the
  sessions?

 nothing in SQLA closes sessions.  Your program is doing that.

  The problem is that if I change my shallow copied
  dictionary, the objects are changed.

  Basically, I'm trying to do this state change thing where I'll take a
  dictionary (let's call it Node 1), make changes to it (thereby making
  changes to the objects it references) and then save those changes as
  Node 2. Then I'll take Node 2 and make some changes to that. So on and
  so forth for a certain number of changes. Everytime I do so, I want to
  retain the information from the previous Node as well as a best node
  which can be any of the Nodes. If my operations change the objects, is
  that even possible?

  That was my motivation to use deepcopy but I don't want to stop using
  SQLAlchemy because of it :(

  On Jun 3, 4:57 pm, Michael Bayer mike...@zzzcomputing.com wrote:
  On Jun 3, 2010, at 1:24 AM, Az wrote:

  +++ Questions +++

  1. Is this the correct way to use sessions or am I sort of abusing
  them?

  I dont see any poor patterns of use above.

  2. When should I close a session?

  when you no longer need the usage of any of the objects associated with 
  it, or any remaining objects are in a state which you will re-merge them 
  into a new session before you next use them.    The session in its default 
  state of autocommit=False is just like going to your database and starting 
  a transaction, doing some work - when you're done with the work, you close 
  the transaction, and all the data associated with that trans (i.e. your 
  ORM

[sqlalchemy] Questions about session

2010-06-02 Thread Az
 the sessions?


---


Thanks,

Az




[sqlalchemy] Re: Questions about session

2010-06-02 Thread Az
Also:

I'm using [Python 2.6.5] and [SQLAlchemy 0.5.8]

Previously I was just shallow copying my dictionaries, and there were
no issues then with my simulations.
My dictionaries contain objects such that my students dictionary is
basically:

students[stud_id] = Student(stud_id, name, preferences,...)

Student is mapped to an SQLAlchemy table.

This is similar for many of my objects.

I was trying to run some code to optimise my allocations and there was
no real involvement with SQLAlchemy -- in that I wasn't actually
dealing with any SQLAlchemy code.

I understand that the shallow copy in Python just copies the
references whereas deepcopy copies the entire object. Does that mean
the deepcopied object is outside the session or something?

Some help would be much appreciated. I have a feeling that the answer
lies somewhere within the way deepcopy and session work but my head
just can't put two-and-two together right now :(

On Jun 3, 6:24 am, Az azfarul.is...@gmail.com wrote:
 In my code, I am currently adding to the session in various modules
 (this is the same session since I'm importing it from my most
 prominent module).

 Some sample code would be:

 ## BEGIN CODE 1 ###

 Session = sessionmaker(bind=engine)
 session = Session()

 def addToTable():
         """Very simple SQLAlchemy function that populates the Student,
         Project and Supervisor tables."""

         for student in students.itervalues():
                 session.add(student)
                 session.flush()

         for project in projects.itervalues():
                 session.add(project)
                 session.flush()

         for supervisor in supervisors.itervalues():
                 session.add(supervisor)
                 session.flush()

         session.commit()

 And then again in a function in the same module:

 def monteCarloBasic(trials):
         """The Monte-Carlo simulation will generate allocations for the
         list of students by randomly arranging the order for each trial.

         In the case of a student having more than one project for a given rank,
         the algorithm will randomly select one of them since it is given that
         all such projects are equally desirable to the student."""

         session_id = 1
         ident = 1
         for trial in xrange(trials):

                 for id in randomiseStudentKeys(True):
                         stud_id = id
                         student = students[id]

                         if student.preferences:
                                 temp_alloc = SimAllocation(ident, session_id, 
 stud_id)

                                 ranks = sorted(student.preferences.keys())

                                 for rank in ranks:
                                 # Now we begin to try giving him/her a project
                                         proj = 
 random.choice(list(student.preferences[rank]))

                                         if not (proj.allocated or 
 proj.blocked or proj.own_project):

                                                 student.allocated_project = 
 proj
                                                 student.allocated_proj_ref = 
 proj.proj_id
                                                 student.allocated_rank = rank
                                                 allocSuccessActions(proj)

                                                 temp_alloc.alloc_proj = 
 proj.proj_id    # ... we can set the
 allocated project details
                                                 temp_alloc.alloc_proj_rank = 
 rank

                                                 session.add(temp_alloc)

                                                 break

                                 ident += 1      # Increment the primary key
                                 session.add(temp_alloc)

                 session.flush()
                 session_id += 1

                 resetData() # Resets all data

         session.commit()

 ### END CODE 1 ###

 Later on I'm using this session to run some calculations on my data.
 For example:

 ### BEGIN CODE 2 

 sid = 4545
 project_id_list = list(students[sid].preferences)
  for project_id in project_id_list:
     gotcha =
 session.query(SimAllocation).filter(SimAllocation.student_id ==
 sid).filter(PP.SimAllocation.alloc_proj == project_id).count()

 ## END CODE 2 ###

 Simply, this line counts how many times a certain student was
 allocated each project from his list when using the Monte-Carlo
 simulation from ### CODE 1 ### above.

 +++ Questions +++

 1. Is this the correct way to use sessions or am I sort of abusing
 them?
 2. When should I close a session?
 3. I got the following error after trying to use copy.deepcopy() on
 one of my dictionaries.

 File Main.py, line 106, in module
     OPTM.preoptAlloc(some_best)
   File /Optimisation.py, line 48, in preoptAlloc
     sid = projs_preopt[sim.alloc_proj_ref].proj_sup
   File /opt/local/Library

[sqlalchemy] Use a foreign key mapping to get data from the other table using Python and SQLAlchemy.

2010-05-26 Thread Az
Basically, I've got these simple classes mapped to tables, using
SQLAlchemy. I know they're missing a few items but those aren't
essential for highlighting the problem.

class Customer(object):
    def __init__(self, uid, name, email):
        self.uid = uid
        self.name = name
        self.email = email

    def __repr__(self):
        return str(self)

    def __str__(self):
        return "Cust: %s, Name: %s (Email: %s)" % (self.uid,
            self.name, self.email)

The above is basically a simple customer with an id, name and an email
address.

class Order(object):
    def __init__(self, item_id, item_name, customer):
        self.item_id = item_id
        self.item_name = item_name
        self.customer = None

    def __repr__(self):
        return str(self)

    def __str__(self):
        return "Item ID %s: %s, has been ordered by customer no. %s" % (
            self.item_id, self.item_name, self.customer)

This is the `Order` class that just holds the order information: an
id, a name and a reference to a customer. The reference is initialised
to `None` to indicate that this item doesn't have a customer yet; the
code's job will be to assign the item a customer.

The following code maps these classes to respective database tables.


# SQLAlchemy database transmutation
engine = create_engine('sqlite:///:memory:', echo=False)
metadata = MetaData()

customers_table = Table('customers', metadata,
Column('uid', Integer, primary_key=True),
Column('name', String),
Column('email', String)
)


orders_table = Table('orders', metadata,
Column('item_id', Integer, primary_key=True),
Column('item_name', String),
Column('customer', Integer, ForeignKey('customers.uid'))
)

metadata.create_all(engine)
mapper(Customer, customers_table)
mapper(Order, orders_table)


Now if I do something like:

for order in session.query(Order):
    print order

I can get a list of orders in this form:

Item ID 1001: MX4000 Laser Mouse, has been ordered by customer no.
12

=

What I want to do is find out customer 12's name and email address
(which is why I used the ForeignKey into the Customer table). How
would I go about it?
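
One common way (a sketch only, and it replaces the mapper(Order, orders_table)
call above) is to add a relation() so each Order exposes its Customer object
directly; the property gets a different name here to avoid clashing with the
existing 'customer' column:

from sqlalchemy.orm import mapper, relation

mapper(Order, orders_table, properties={
    'customer_obj': relation(Customer)   # follows orders.customer -> customers.uid
})

for order in session.query(Order):
    cust = order.customer_obj
    if cust is not None:
        print cust.name, cust.email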

=




[sqlalchemy] Re: sqlalchemy 0.5.8 'NoneType' object has no attribute 'get' error

2010-04-12 Thread Az
Ah, sorry, I think I solved my own problem. I was doing something silly
where I was creating Student instances before mapping the class. Following
advice, I put the code creating Student instances after calling
mapper(Student, students_table) and it worked out.

Many thanks for trying to replicate the scenario and I apologise for
that!
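
For anyone finding this later, the ordering that works is roughly this
(sketch; the reader function name is made up):

mapper(Student, students_table)            # map the class first

students = {}
for sid, name in read_student_rows():      # hypothetical reader of the text database
    students[sid] = Student(name, sid)     # instances created after mapping are instrumented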

Az


On Apr 9, 4:16 pm, Michael Bayer mike...@zzzcomputing.com wrote:
 Az wrote:
  instance_dict(instance))
  AttributeError: 'NoneType' object has no attribute 'get'
  +++

  I'm at a loss as to how to resolve this issue, if it is an issue. If
  more information is required, please ask and I will provide it.

 you'd have to illustrate a full test example which replicates this
 behavior.  I have re-built a test case from your fragments and cannot
 reproduce:

 from sqlalchemy import *
 from sqlalchemy.orm import *
 import collections

 class Student(object):
     def __init__(self, name, id):
         self.id = id
         self.name = name
         self.preferences = collections.defaultdict(set)
         self.own_proj_id = None
         self.own_proj_sup = None
         self.allocated_project = None
         self.allocated_rank = 0

     def __repr__(self):
         return str(self)

     def __str__(self):
         return "%s %s" % (self.id, self.name)

 engine = create_engine('sqlite:///:memory:', echo=False)
 metadata = MetaData()
 students_table = Table('studs', metadata,
     Column('id', Integer, primary_key=True),
     Column('name', String)
 )
 metadata.create_all(engine)
 mapper(Student, students_table)

 Session = sessionmaker(bind=engine)
 sesh = Session()

 students = {}
 for id in xrange(10):
     students[id] = Student("student %d" % id, id)

 for student in students.itervalues():
     print student

 metadata.create_all(engine)

 sesh.add_all(students.values())
 sesh.flush()

 for student in students.itervalues():
     print student

 sesh.commit()

 for student in students.itervalues():
     print student

 zzzeek-3:sqlalchemy classic$ python test.py
 0 student 0
 1 student 1
 2 student 2
 3 student 3
 4 student 4
 5 student 5
 6 student 6
 7 student 7
 8 student 8
 9 student 9
 0 student 0
 1 student 1
 2 student 2
 3 student 3
 4 student 4
 5 student 5
 6 student 6
 7 student 7
 8 student 8
 9 student 9
 0 student 0
 1 student 1
 2 student 2
 3 student 3
 4 student 4
 5 student 5
 6 student 6
 7 student 7
 8 student 8
 9 student 9



  Kind regards,

  Az




[sqlalchemy] Surrogate keys, multiple sessions

2010-04-12 Thread Az
Hi there,

I'm just trying to sort out some code implementation utilising
SQLAlchemy that should store some output for a Monte-Carlo simulation
I'm running.

The simulation works thus:

I have a dictionary of students, with their preferred projects and a
Monte-Carlo simulation that takes said dictionary, randomises the
students and tries to assign them their best choice project (i.e. ones
with the highest preference rank on their list). That constitutes a
single session.

This is repeated for say, N sessions. Thus the session_id will start
from 0/1 and remain static until the next session wherein it will
increment itself until it hits N-1/N (which is the last session), and
the simulation terminates.

I'm not sure how I'm going to insert the rows but two ways are:

1. At the end of each session, for each student in the dictionary
(loop), add the necessary info (shown below) and insert row
2. For each project assignment (or non-assignment: this is a valid
case), insert row.

I've got two tables (students_table and session_allocation) which have
very similar mappings (code below). The students_table was fairly easy
but is only effective for a single session, which is why I created the
session_allocation table to take into account a session identifier.

The kind of output I'm looking for is in the form:

———
Unique/Surrogate ID || Session_id || Student_id || Assigned_Project_id
|| Assigned_Project_Rank
———

So, questions:

1. How does one get an auto incrementing unique key going?

2. The session_id is not part of the Student class (where the
Student_id, Assigned_Project_id and Assigned_Project_rank mappings
come from). This should be a parameter of the Monte-Carlo class (yet
to be created). Is it possible to map just the session_id from that
and the other details from the student class? As you can guess, with
each session, certain parts of the Student object are overwritten
(namely the ones mentioned as columns).



# Begin code
class Student(object):
    def __init__(self, sid, name, stream_id):
        self.sid = sid
        self.name = name
        self.preferences = collections.defaultdict(set)
        self.allocated_proj_ref = None
        self.allocated_rank = 0

    def __repr__(self):
        return str(self)

    def __str__(self):
        return "%s %s %s %s" % (self.sid, self.name,
            self.allocated_proj_ref, self.allocated_project)


# SQLAlchemy database code
engine = create_engine('sqlite:///:memory:', echo=False)
metadata = MetaData()

students_table = Table('studs', metadata,
Column('sid', Integer, primary_key=True),
Column('name', String),
Column('allocated_rank', Integer),
Column('allocated_proj_ref', Integer, ForeignKey('projs.proj_id'))
)

session_allocation = Table('sesh_alloc', metadata,
Column('ident', Integer, primary_key=True),
Column('sesh_id', Integer),
Column('ee_id', Integer),
Column('allocated_proj_ref', Integer,
ForeignKey('projs.proj_id')),
Column('allocated_rank', Integer)
)

metadata.create_all(engine)
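
On questions 1 and 2: with the session_allocation table above, the 'ident'
Integer primary key auto-increments on SQLite as long as it is never set by
hand, and the session id can simply be passed in by the Monte-Carlo loop while
the rest is copied off the Student. A sketch (the SessionAllocation class name
and its constructor are made up, not part of the code above):

class SessionAllocation(object):
    def __init__(self, sesh_id, student):
        self.sesh_id = sesh_id                  # comes from the Monte-Carlo loop
        self.ee_id = student.sid                # the rest is copied off the Student
        self.allocated_proj_ref = student.allocated_proj_ref
        self.allocated_rank = student.allocated_rank
        # 'ident' is deliberately not set here; SQLite assigns it at flush time

mapper(SessionAllocation, session_allocation)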






[sqlalchemy] sqlalchemy 0.5.8 'NoneType' object has no attribute 'get' error

2010-04-09 Thread Az
Hi,

This is my first time posting so I'll try my best not to mess up!
First let me provide everyone with the necessary information.

I'm a beginner to both Python and SQLAlchemy.

Version Info
=
OS: Mac OSX 10.5.8
Python: 2.6.4
SQLAlchemy: 0.5.8
=

My Class
===
+++
class Student(object):
    def __init__(self, name, id):
        self.id = id
        self.name = name
        self.preferences = collections.defaultdict(set)
        self.own_proj_id = None
        self.own_proj_sup = None
        self.allocated_project = None
        self.allocated_rank = 0

    def __repr__(self):
        return str(self)

    def __str__(self):
        return "%s %s" % (self.id, self.name)
+++

Background
=
Now, I've got a function that reads in the necessary information from
a text database into these objects. The function more or less works
and I can easily access the information from the objects.

Before the SQLAlchemy code runs, the function will read in the
necessary info and store it into the Class. There is a dictionary
called `students` which stores this as such:

+++
students = {}
students[id] = Student(all the info from the various reader
functions)
+++

Afterwards, there is an allocation algorithm that will allocate
projects to students. It does that well enough. If a student already
has his/her own project, the own_proj_id and own_proj_sup fields have
numeric info. The allocated_project remains as None (but I suppose I
can fix that to reflect the own_proj_id). Now if a student is
unsuccessful in getting a project the following fields remain as None:
own_proj_id, own_proj_sup and allocated_project.



SQLAlchemy bit
=

So after all this happens, I'd like to map my object to a database
table. Using the documentation, I've used the following code:

+++
from sqlalchemy import *
from sqlalchemy.orm import *

engine = create_engine('sqlite:///:memory:', echo=False)
metadata = MetaData()
students_table = Table('studs', metadata,
Column('id', Integer, primary_key=True),
Column('name', String)
)
metadata.create_all(engine)
mapper(Student, students_table)

Session = sessionmaker(bind=engine)
sesh = Session()
+++

Now after that, I was curious to see if I could print out all the
students from my students dictionary.

+++
for student in students.itervalues():
    print student
+++

What do I get but an error:

+++
Traceback (most recent call last):
  File ~/FYP_Tests/FYP_Tests.py, line 140, in module
print student
  File /~FYP_Tests/Parties.py, line 30, in __str__
return %s %s %(self.id, self.name)
  File /opt/local/Library/Frameworks/Python.framework/Versions/2.6/
lib/python2.6/site-packages/SQLAlchemy-0.5.8-py2.6.egg/sqlalchemy/orm/
attributes.py, line 158, in __get__
return self.impl.get(instance_state(instance),
instance_dict(instance))
AttributeError: 'NoneType' object has no attribute 'get'
+++


I'm at a loss as to how to resolve this issue, if it is an issue. If
more information is required, please ask and I will provide it.

Kind regards,

Az




[sqlalchemy] Re: Creating a Dialect for Amazon SimpleDB

2009-06-04 Thread az

sorry to tune in late, but how different is AmazonDB from googleDB?

googleDB seems like a plain non-relational DB like berkeleyDB/btrieve 
kind of thing.

to map relational and non-relational schemas in the same way you need a
higher level of abstraction - sqlalchemy is only about sql.
so i had an idea to make a backend for googledb and similar stores for my
http://dbcook.sf.net, which uses a higher level of abstraction - but
the idea hasn't progressed into anything real. Now it translates
your declarative schema only into sqlalchemy... but that isn't the
only possibility.
That is to say - if dbcook.sf.net can be of help, use it.
Mail me if anyone is interested in digging further.

ciao
svilen

On Thursday 04 June 2009 23:42:59 Laurence Rowe wrote:
 Without Foreign Keys or joins, I don't see how putting all tables
 in the same domain buys you anything?

 Laurence

 On Jun 4, 12:27 am, enj enjah...@gmail.com wrote:
  The way Simpledb works you don't want to make each domain a
  table, you want to put all related tables in one domain. This way
  you can do relational queries on the set of objects. One thing
  that might help conceptualize it would be to set a table
  attribute for each item that contains what the table name would
  be called. That way you could quickly get a resultset of all the
  objects that should be in that table. One can do any kind of
  relational operation on the set of objects in the domain, so
  sqlalchemy functionality can be mapped to it. Since transactions
  aren't supported that functionality wont be supported, but one
  can still make useful applications.
 
  Ian
 
  On Jun 3, 6:00 pm, Laurence Rowe laurencer...@gmail.com wrote:
   It would be interesting to see if this could be made to work.
   The SimpleDB model is rather different from the relational
   model, so it would only be useful if your application does not
   use any advanced features - no joins etc, each 'domain' might
   map to one big (albeit sparse) table.
  
   Laurence
  
   On Jun 3, 8:38 pm, enj enjah...@gmail.com wrote:
Hello all,
   
I am new to sqlalchemy and was introduced to it by a project
(cjklib) that uses it. I want to migrate this project to
Amazon SimpleDB and since it makes extensive use of
sqlalchemy I thought the best course of action might be to
make a SimpleDB dialect in sqlalchemy which could possibly
benefit other projects trying to move to AWS.
   
The purpose of my post is to see if there is any such effort
out there or any other interest in this. Also I was wondering
if there is any good docs/tutorials on implementing a
Dialect.
From my current understanding I plan to implement
engine.defaulthttp://www.sqlalchemy.org/trac/browser/sqlalche
   my/trunk/lib/sqlalchem... which will use the boto library to
interface with simpledbhttp://code.google.com/p/boto/
   
Any thoughts?
   
--
Ian Johnson

 





[sqlalchemy] Re: Schema compare utility

2009-05-29 Thread az

On Friday 29 May 2009 20:53:04 Mike Conley wrote:
 I need to compare database schemas (tables, column names, types,
 constraints, etc.). Is there anything out there that uses
 SQLAlchemy metadata to compare database schemas? or should I start
 working on one?

Here is mine, just for a diff:
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/metadata/

Apart from that, check migrate:
http://code.google.com/p/sqlalchemy-migrate/

And there was one more, miruku - trac.ollix.org/miruku/





[sqlalchemy] Re: orm querying: where with a subquery on self

2009-05-12 Thread az

Try labelling the column in q2, say as maxgrade, then use it through the
subquery's .c collection:

print s.query(A).filter(A.grade == q2.c.maxgrade)...
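
A rough, untested sketch of that (A being the Attendance class and s the
session, as in your mail; the exact spelling may differ a bit between SA
versions):

    from sqlalchemy import func

    # per-student maximum grade, labelled so the outer query can reference
    # it through the subquery's .c collection
    maxq = (s.query(A.student_id.label('student_id'),
                    func.max(A.grade).label('maxgrade'))
              .group_by(A.student_id)
              .subquery())

    rows = (s.query(A)
              .filter(A.student_id == maxq.c.student_id)
              .filter(A.grade == maxq.c.maxgrade)
              .filter(A.school_id == 112)
              .all())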


On Tuesday 12 May 2009 10:17:11 Max Ischenko wrote:
 I'm trying to express the following SQL:

 SELECT  * FROM attendances a WHERE grade = (SELECT MAX(grade) FROM
 attendances WHERE student_id=a.student_id) and school_id=112;

  A2 = aliased(A) # A is Attendance class
  q2 =
  s.query(max_grade).filter(A.student_id==A2.student_id).subquery
 () print q2 # looks OK so far

 SELECT max(attendances.grade) AS max_1
 FROM attendances, attendances AS attendances_1
 WHERE attendances.student_id = attendances_1.student_id

  print s.query(A).filter(and_(A.grade==q2, A.school_id==112))

 SELECT attendances.id AS attendances_id, attendances.student_id AS
 attendances_student_id, attendances.school_id AS
 attendances_school_id, attendances.school_year_id AS
 attendances_school_year_id, attendances.grade AS attendances_grade
 FROM attendances, (SELECT max(attendances.grade) AS max_1
 FROM attendances, attendances AS attendances_1
 WHERE attendances.student_id = attendances_1.student_id) AS anon_1
 WHERE attendances.grade = SELECT max(attendances.grade) AS max_1
 FROM attendances, attendances AS attendances_1
 WHERE attendances.student_id = attendances_1.student_id AND
 attendances.school_id = %(school_id_1)s
 # doesn't work:
 LINE 8: WHERE attendances.grade = SELECT max(attendances.grade) AS
 m...

 If I do not use .subquery() it doesn't do correct thing either:
  q3 = s.query(max_grade).filter(A.student_id==A2.student_id)
  print s.query(A).filter(and_(A.grade==q3, A.school_id==112))

 SELECT attendances.id AS attendances_id, attendances.student_id AS
 attendances_student_id, attendances.school_id AS
 attendances_school_id, attendances.school_year_id AS
 attendances_school_year_id, attendances.grade AS attendances_grade
 FROM attendances
 WHERE attendances.grade = %(grade_1)s AND attendances.school_id = %
 (school_id_1)s

 Pls help,
 Max.
 





[sqlalchemy] Re: Replacing existing object with a changed copy

2009-05-06 Thread az

You'd better edit a new copy and, on save, copy everything back into the
original and commit that one; on cancel, abandon the new copy (but beware
of m2m relations if you have them).
Anything else isn't safe/nice IMO.
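
A rough sketch of that idea, using a plain dict as the throwaway copy
(assumes a declarative class so the columns can be read off obj.__table__;
nothing here is your actual model):

    def begin_edit(obj):
        # snapshot the column values of a mapped object into a plain dict
        cols = obj.__table__.columns.keys()
        return dict((c, getattr(obj, c)) for c in cols)

    def save_edit(session, obj, draft):
        # copy the edited values back onto the persistent object and commit
        for name, value in draft.items():
            setattr(obj, name, value)
        session.commit()

    # on Cancel: just throw the draft dict away; the persistent object is
    # untouched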

On Wednesday 06 May 2009 17:25:47 Marcin Krol wrote:
 Hello,

 I would like to implement typical Save / Cancel dialog operating on
 normal SQLA objects.

 For that purpose, the most convenient way is to make a shallow copy
 of an object using copy.copy(obj), let the user edit this object,
 and on user pressing OK in dialog replace it in SQLA, e.g. expunge
 existing object and add the edited object as replacement (and
 obviously drop the edited copy of object if user pressed Cancel
 instead).

 The reason I'm trying to do this instead of just doing
 session.commit() or session.close() on the changed object is that
 editing in my app is pretty complicated and it is done across many
 http requests, so I obviously need to save the state of the object
 in between them.

 Are there any problems with such approach? Risks? Is this safe?

 There obviously have to be some issues, such as enforcing the same
 PK in a new obj as in the old object, which is the first issue that
 comes to my mind.


 Regards,
 mk




 





[sqlalchemy] Re: Replacing existing object with a changed copy

2009-05-06 Thread az

The copy can be a dummy, non-DB-aware object - if that is OK in your case.

There's yet another option: you can just edit the original object directly,
and on cancel do a rollback and restore any changed attributes by hand
(possibly by looking at the object state's history). But this assumes a
short editing session and a single user.

If the editing session is really long, you need a user transaction, that
is, a copy saved to the db etc.; later, on OK, it is retrieved, removed and
used to replace the original object (or to upgrade it, if you keep
versions).

m2m relations are OK for copying, but they have caused me some leftovers
when creating new objects and then destroying them without saving.

one2many relations are the tricky ones - there is no copy semantics there,
only move.

AFAIK there's no perfect solution for all cases; you have to decide which
one is the least painful for your setup.


On Wednesday 06 May 2009 18:00:33 Marcin Krol wrote:
 a...@svilendobrev.com wrote:
  u'd better edit a new copy and on save copy all back into
  original then commit that one, on cancel abandon the new one (but
  beware of m2m relations if u have them).
  all else isn't safe/nice IMO.

 To make it specific, should I do smth like:

 1. on beginning of edit, make a new instance of an object, then
 copy all the attributes from the original object, save the 'under
 editing' object

 2. on user save, copy all the attributes from the 'under editing'
 object one by one into the original object, expunge the 'under
 editing' object, do session.save()?

 I'm not even sure this would be safe, as I indeed have many to many
 relation, Reservation having many Hosts, with hosts being
 added/removed in Reservation.

 So I would be moving or copying the Hosts collection from one
 Reservation object to another Reservation object and back -- Mike,
 is this safe?

 Regards,
 mk

  On Wednesday 06 May 2009 17:25:47 Marcin Krol wrote:
  Hello,
 
  I would like to implement typical Save / Cancel dialog operating
  on normal SQLA objects.
 
  For that purpose, the most convenient way is to make a shallow
  copy of an object using copy.copy(obj), let the user edit this
  object, and on user pressing OK in dialog replace it in SQLA,
  e.g. expunge existing object and add the edited object as
  replacement (and obviously drop the edited copy of object if
  user pressed Cancel instead).
 
  The reason I'm trying to do this instead of just doing
  session.commit() or session.close() on the changed object is
  that editing in my app is pretty complicated and it is done
  across many http requests, so I obviously need to save the state
  of the object in between them.
 
  Are there any problems with such approach? Risks? Is this safe?
 
  There obviously have to be some issues, such as enforcing the
  same PK in a new obj as in the old object, which is the first
  issue that comes to my mind.
 
 
  Regards,
  mk

 





[sqlalchemy] Re: Replacing existing object with a changed copy

2009-05-06 Thread az

  one2many are the tricky ones - there's no copy as semantics,
  there's move.

 Say again? I can't (shallow) copy one-to-many object to another? Or
 do you mean: I can't copy it to another object, modify it and then
 copy it back?
Shallow?
If A points to B1, copying B1 to a new B2 is OK, but you lose the B2-A
link...
You can make many copies of A, each pointing to B1, but not the other way
around...

Ah, never mind.




[sqlalchemy] Re: Ambiguous column name in SQL literal query

2009-04-27 Thread az

As the SQL is a literal string, you have to apply that advice by hand:
SELECT crfItem.idCrf, class.name AS clsname, attribute.name AS attrname ...
or something of that sort. The use_labels option applies to statements
built with the expression language, not to plain strings passed to
execute().
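
Roughly (table/column names taken from your query; untested):

    query = (
        "SELECT crfItem.idCrf, crfItem.idClass, "
        "       class.name AS class_name, "
        "       attribute.id AS attribute_id, "
        "       attribute.name AS attribute_name "
        "  FROM crfItem "
        "  JOIN class ON crfItem.idClass = class.id "
        "  JOIN attribute ON class.id = attribute.idClass "
        " WHERE crfItem.idCrf = 1"
    )
    result = connection.execute(query)
    for row in result:
        print row['class_name'], row['attribute_name']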

On Monday 27 April 2009 19:00:24 Mauro wrote:
 Dear group,

 I'm having the following issue with Ambiguous column name, in
 this situation:

 #code example begin

 connString = 'sqlite:///data/prosafelocal.sqlite'
 engine = create_engine(connString)
 connection = engine.connect()
 query = 'SELECT crfItem.idCrf, crfItem.idClass, class.name,
 attribute.id, attribute.name from crfItem JOIN class ON
 crfItem.idClass = class.id JOIN attribute on class.id =
 attribute.idClass WHERE crfItem.idCrf = 1'

 result = self.connection.execute(query, use_labels=True)

 #code end


 I'm using connection.execute(query) directly with literal sql
 syntax. I get the following error:

 try 'use_labels' option on select statement. % colname)
 InvalidRequestError: Ambiguous column name 'name' in result set!
 try 'use_labels' option on select statement.

 The name column is present in two joint tables, and it generates
 the ambiguity.
 Anyway, it seems like the option use_labels=True does is not
 affecting my query behaviour.
 Any help would be really appreciated.

 Thanks,
 Mauro





 





[sqlalchemy] Re: Changing Python type associated with temporal datatypes without using custom TypeDecorator?

2009-04-25 Thread az

This direct yourmodels.DateType replacement would work if you had it in
some separate file and all your models imported from that one. But as
there's probably a lot of other code, and the DateTime import is probably
everywhere, I don't think you can know which DateTime's should be replaced
and which not.
If you want to replace _all_ of them, just name the BananaType module
datetime and put that module at the beginning of PYTHONPATH. Or replace the
original datetime.DateType. These would work only if BananaType really
emulates DateType, so that other non-DB things don't break - which I doubt,
say if you have XMLRPC, pickle/unpickle or similar materializations.

So I guess explicit is better than implicit - you generate the models
anyway. But this won't save you from checking the other usages of DateTime.

On Saturday 25 April 2009 01:18, phrrn...@googlemail.com wrote:
 My manager came up with a rather clever -- if not devious --
 suggestion: implement the type adapators as usual but then diddle the
 namespace of the package where the SA model is defined. I tried it and
 it works but is sufficiently confusing that I am now in favor of
 changing the models so that the columns explicitly reference the
 appropriate decorator. The main objection was the use of a specialized
 type in such an indiscriminate fashion. However, since most of the
 models are code-generated, I can write it off as yet another
 convention in place in the code.

 pjjH


 # Here are the decorators
 class BananaDate(types.TypeDecorator):
     impl = types.Date

     def process_result_value(self, value, dialect):
         from deshaw.datetime.date import Date
         return Date(value)

 class BananaTimestamp(types.TypeDecorator):
     impl = types.DateTime

     def process_result_value(self, value, dialect):
         from deshaw.datetime import timestamp
         ts = timestamp.Timestamp()
         return ts.from_parts(value.year, value.month, value.day,
                              value.hour, value.minute, value.second,
                              value.microsecond)


 === in the model file ===
 # here in the model file, import the type decorators but aliased to
 the corrosponding SA types

 from banana.dbo import BananaDate as Date
 from banana.dbo import BananaTimestamp as DateTime


 On Apr 24, 5:00 pm, phrrn...@googlemail.com

 phrrn...@googlemail.com wrote:
  My employers have a custom Python type (derived from datetime) for
  dealing with dates and datetimes.Let's call it 'BananaDate'. They
  would like BananaDate be used by SQL Alchemy. The standard way of
  doing this appears to be with a TypeDecorator:
 
  class BananaDate(types.TypeDecorator):
      from   banana.date import Date
      impl = types.Date
 
      def process_result_value(self, value, dialect):
          return banana(value)
 
  I understand that the Column definitions would change from:
 
      Column('first_reset_dt', DateTime, nullable=True)
 
  to
      Column('first_reset_dt', BananaDate, nullable=True)
 
  These seems to imply that I have to explicitly change the models (and
  I would prefer not to do that)
 
  Is there some neat 'hook-based' approach that would allow me to leave
  the metadata models intact and yet work with BananaDates? Would it be
  very rude to monkey-patch the relevent base types in sqlalchemy.types
  with new definition of process_result_value?
 
  pjjH

 




[sqlalchemy] Re: after a week or so of working fine, my app starts experiencing sqlalchemy recursion issues

2009-04-23 Thread az

And what is the query that goes plop? And whereabouts in the code?
You can use sys.setrecursionlimit(x) to lower the threshold and make it
die in other places too, which can help locate the culprit.

On Thursday 23 April 2009 18:55:11 Jonathan Vanasco wrote:
 It starts with the code below, and the last two lines in
 visit_select and in process repeat indefinitely until the
 recursion
 error is generated

 i can't wrap around why this would happen -- i thought I had bad
 sql or something similar, but restarting the app completely fixes
 this issue.  like clockwork, every tuesday morning i get this issue
 on a single page.

 i've tried benching my app to recreate, and i can't.  It just
 happens on Tuesdays.


 File '/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.8-py2.5.egg/
 sqlalchemy/orm/query.py', line 878 in all
   return list(self)
 File '/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.8-py2.5.egg/
 sqlalchemy/orm/query.py', line 938 in __iter__
   return self._execute_and_instances(context)
 File '/usr/lib/python2.5/site-packages/SQLAlchemy-0.4.8-py2.5.egg/
 sqlalchemy/orm/query.py', line 941 in _execute_and_instances
   result = self.session.execute(querycontext.statement,
 params=self._params, mapper=self.mapper,
 instance=self._refresh_instance)
...




[sqlalchemy] Re: help with a tricky property mapping

2009-04-14 Thread az

On Tuesday 14 April 2009 16:56:41 Dusty Phillips wrote:
 On Apr 13, 5:16 pm, a...@svilendobrev.com wrote:
   mapper(Document, documents, properties={
       'document_id': documents.c.id,   # document_id ORM property
   In the past, I have successfully mapped these properties using
   synonym, but this time I'm confused because I'm not sure how to
   define the synonym to a different column name. How do I change
   my 'document_id': declaration in the mapper call to set up the
   above descriptor when accessing the id column on the table?
 
  if u use '_document_id' in above mapper(..) would that be enough?
  no synonim, just diff.property-name

 Wouldn't that mean that any session queries would have to be on
 '_document_id' too? 
yes
 I'd rather have the queries on the property 
 name -- 'document_id'.
Well, then the above plus a synonym, e.g. document_id = synonym('_document_id', ...)
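
i.e. something like (names from your first post; untested):

    from sqlalchemy.orm import mapper, synonym

    mapper(Document, documents, properties={
        '_document_id': documents.c.id,           # the real column property
        'document_id': synonym('_document_id'),   # public name for queries
    })

    # session.query(Document).filter(Document.document_id == 5)
    #
    # if you also have a python descriptor named document_id on the class,
    # hand it to the synonym instead:
    #   'document_id': synonym('_document_id', descriptor=Document.document_id)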






[sqlalchemy] Re: help with a tricky property mapping

2009-04-14 Thread az

On Tuesday 14 April 2009 17:50:06 Dusty Phillips wrote:
 On Apr 14, 10:33 am, a...@svilendobrev.com wrote:
  On Tuesday 14 April 2009 16:56:41 Dusty Phillips wrote:
   On Apr 13, 5:16 pm, a...@svilendobrev.com wrote:
 mapper(Document, documents, properties={
     'document_id': documents.c.id,   # document_id ORM
 property In the past, I have successfully mapped these
 properties using synonym, but this time I'm confused
 because I'm not sure how to define the synonym to a
 different column name. How do I change my 'document_id':
 declaration in the mapper call to set up the above
 descriptor when accessing the id column on the table?
   
if u use '_document_id' in above mapper(..) would that be
enough? no synonim, just diff.property-name
  
   Wouldn't that mean that any session queries would have to be on
   '_document_id' too?
 
  yes
 
   I'd rather have the queries on the property
   name -- 'document_id'.
 
  well, then the above and doc=synonym( _doc, ...)

 so there would actually be two properties in the mapper?
Frankly, no idea - try it and see.





[sqlalchemy] Re: Metadata - Inherited Tables, Columns

2009-04-13 Thread az

 Is there a way of telling if a table is inherited from another
 table and which tables it inherits from in the Metadata?

Inheritance is not really an SQL notion... so:
 a) look at mapper.inherits (towards the root) and/or
mapper.polymorphic_iterator() (towards the leaves);
 b) see whether the table's primary key is also a foreign key to another
table - that may mean joined-table inheritance.

 Further to this, Is there a way of telling which column definitions
 come from which table?
Look at the column's .table attribute, i.e. the_column.table.
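
For example (Employee / employees_table are just placeholder names):

    from sqlalchemy.orm import class_mapper

    m = class_mapper(Employee)             # any mapped class in the hierarchy
    print m.inherits                       # the parent mapper, or None at the root
    for sub in m.polymorphic_iterator():   # this mapper plus all mapped subclasses
        print sub.class_

    print employees_table.c.name.table     # the Table owning a given Column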




[sqlalchemy] Re: Should we separate business logic and ORM mapping classes.

2009-04-13 Thread az

I would bundle them into one... but would probably split on another level.
Adding items to a user may well be possessions... so it depends. That might
well be a model of User with related stuff, which is mapped to the database
in one way (one2many, a Relator object/m2m, or whatever), and is shown in
the UI in yet another way.

Or, using your examples, str.to_int is the same as int.from_str and can be
implemented in _one_ place - and be reachable from wherever - but they
belong to another concept: conversion. So if it goes extremely OO, it would
be convertor.str(something). Python-wise the convertor is the default
namespace - so you get len(), int(), str() etc., the various protocols.

And anyway, all these Dogmas are just ways to do it. None matches reality
in principle. One has to match the complexity of the _model_ with the
complexity and importance of the task/concept. As long as you're
consistent, use whatever you like / whatever is better suited - but always
think of someone else reading it after one year (it may well be you, but in
quite a different mindset).

ciao
svilen
www.svilendobrev.com

On Monday 13 April 2009 17:59:46 Nebur wrote:
 A good question. I think we agree that both ways can work well...
 Usually, I do prefer the less OO style. In your example, a pure
 mapping class and business logic in a user manager.

 The main reason is that what you call OO style seems to enforce a
 lot of arbitrary decisions.
 For example, when you bind an EMail object instance to a User
 object instance.
 Where to put the addXXX() method - into the User, or into the EMail
 ? (I see that ugly question appear repeatedly, e.g. in the language
 design. Do we want str.to_int() or Str.from_int() or int.to_str()
 ... etc...)
 Also notice that the OO style is not necessarily a pythonic style
 (which may also be desirable).
 Let's look out for examples:
 Python design prefers even to use a global len() function instead
 of str.len() etc. (hm, I don't like especially that case, but I
 think GvR also may have had its good considerations.) Or consider
 sqlalchemy-ORM where most functionality is in the orm-session.
 Other orms (SqlObject etc.) prefer to put methods into the mapped
 classes (i.e.Active Records).
 The sqlalchemy design may also have other reasons (e.g.unit-of-work
 is wanted). But indepently from collecting actions in a
 unit-of-work, the design with methods in the session -not in the
 objects- feels well. (Sadly, this is more gut feeling than
 rational.)

 I think these design question is fundamental, and a lot of gurus
 already have written about OO for decades now.  Should there really
 be no guru article, and no forum discussion about that ?? I'd also
 appreciate anyone showing me what knowledge already exists here.





[sqlalchemy] Re: Metadata - Inherited Tables, Columns

2009-04-13 Thread az

On Tuesday 14 April 2009 00:13:10 Jarrod Chesney wrote:
 I'll look into a) do the mappers pick up foreign key constraints
 and the polymorphic/inherited details from the metadata when their
 created or do you have to specify them?
Both: the mappers pick them up where they can, but if it is ambiguous you
have to specify them manually.
I don't know about the rest; AFAIK these PostgreSQL specifics aren't
automatically supported.

 - As a last resort i can 
 query the data dictionary.

 Question 1
 From the sqlalchemy mapper configuration, PostGreSQLs version of
 inheritence sounds like the concrete variary, as it automatically
 joins the tables together with a select unless you use FROM ONLY
 table_name
 Concrete table inheritance : where each type of class is stored in
 its
 own table
 joined table inheritance : where the parent/child classes are
 stored in their own tables that are joined together in a select
 Can someone confirm which category postgresql falls into?

 Question 2
 Does the metadata.reflect stuff reflect inherited table definitions
 from PostGreSQL?

 Question 3
 The inheritence i was refering to is present in postgresql
 CREATE TABLE table1 (
 col1,
 col2
 );
 CREATE TABLE table2(
 col2,
 col3
 ) INHERITS (table1);
 I want to find out that :
 * table2 is inherited from table1
 * table2.col2 definition is from table1 and from table2 (postgres
 merges them)
 * table2.col1 definition comes from the inhertance of table1.col1
 (much the same as the line above)
 Does sqlalchemy support this distinction? Can i get the details
 from somewhere?
 





[sqlalchemy] Re: help with a tricky property mapping

2009-04-13 Thread az

 mapper(Document, documents, properties={
 'document_id': documents.c.id,   # document_id ORM property

 In the past, I have successfully mapped these properties using
 synonym, but this time I'm confused because I'm not sure how to
 define the synonym to a different column name. How do I change my
 'document_id': declaration in the mapper call to set up the above
 descriptor when accessing the id column on the table?

If you use '_document_id' in the above mapper(...), would that be enough?
No synonym, just a different property name.





[sqlalchemy] Re: Query with outer join and function

2009-04-10 Thread az

That's what I have in bitemporal queries:
you need a GROUP BY and a subquery (or several).

Something along these lines:

subs = select(
    [C.d_id.label('did'),
     func.max(C.value).label('cvalue')]
    ).group_by(C.d_id)

giving the max value per D, and then somehow joining the Ds with that:

D.query(...).filter(D.id == subs.c.did)

It depends; you may be able to approach it the other way around, like
query(C)... .join(D).

Don't take the above as a solution, but rather as an example of the
direction...
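
A fuller, still untested sketch in that direction (C, D and a session named
session assumed; newer SA versions take an explicit ON clause in
outerjoin(), older ones want the (target, onclause) tuple form):

    from sqlalchemy import func, and_

    # per-D maximum of C.value
    maxc = (session.query(C.d_id.label('d_id'),
                          func.max(C.value).label('maxvalue'))
                   .group_by(C.d_id)
                   .subquery())

    # D left-outer-joined to its best C; Ds without any C give (d.id, None)
    rows = (session.query(D.id, C.id)
                   .outerjoin(maxc, maxc.c.d_id == D.id)
                   .outerjoin(C, and_(C.d_id == maxc.c.d_id,
                                      C.value == maxc.c.maxvalue))
                   .all())
    # expected: [(1, 2), (2, 3), (3, None)]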

http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/timed/
www.svilendobrev.com

ciao
svil

On Friday 10 April 2009 13:13:26 Joril wrote:
 Hi everyone!
 I'm trying to concoct a somewhat complicated query via Query API..
 The situation is the following:

 - A many-to-one relation between classes C and D
 - Class C has an attribute value

 My objective is to retrieve the ids of a left outer join between D
 and C (so, all the Ds and eventually the related Cs) where C.value
 has the maximum value, given the same D.

 So for example, given a table D made of
 [id, otherfield]
 (1, A)
 (2, B)
 (3, C)

 and a table C made of
 [id, d_id, value]
 (1, 1, 0)
 (2, 1, 1) - max value for d_id 1
 (3, 2, 3) - max value for d_id 2
 (4, 2, 2)

 I'd like my query to return
 [d.id, c.id]
 (1, 2)
 (2, 3)
 (3, None)

 Is this possible with a single query?
 Many thanks for your time!
 





[sqlalchemy] Re: Complicated query

2009-04-06 Thread az


On Monday 06 April 2009 18:26:06 Marcin Krol wrote:
 Hello everyone,

 I've got this query:

 selectexpr = session.query(Host, Architecture, OS_Kind, OS_version,
 Virtualization, Project, Reservation)

 ...

 selectexpr = selectexpr.filter(or_(Host.earliest_reservation_id ==
 None, and_(Host.earliest_reservation_id == Reservation.id,
 Reservation.start_date  sd)))

 The problem with this query is that SQLA/Postgres does a join on
 Host.earliest_reservation_id == None (producing each combination of
 every Reservation and every host with Host.earliest_reservation_id
 = NULL).

 Now I could make a second query like session.query(Host,
 Architecture, OS_Kind, OS_version, Virtualization, Project), but do
 I have to?

 Is there some way to account for condition
 Host.earliest_reservation_id == None and produce None / NULL as
 Reservation in resulting row tuple as result of a single query?
left outer join?
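
i.e. something along these lines (untested; only Host and Reservation
shown, and the direction of the start_date comparison is assumed):

    from sqlalchemy import or_

    # the outer join puts None in the Reservation slot instead of pairing
    # every unreserved Host with every Reservation
    q = (session.query(Host, Reservation)
                .outerjoin(Reservation,
                           Host.earliest_reservation_id == Reservation.id)
                .filter(or_(Reservation.id == None,
                            Reservation.start_date > sd)))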





[sqlalchemy] Re: Generating a _BinaryExpression

2009-04-03 Thread az

Well... the thing is an AST of a very simple language - python-like
expressions (and that is not SQL! e.g. a.b.c == x.y.z & whateverelse). You
can do with that AST whatever you want - save/load it, or translate it into
something else, that is, into another grammar.
You can build a GUI builder/editor over the tree... just like over any
other tree.
Start from dbcook/util/expr.py and see if you can handle
(load/save/edit/...) that one. Then pick whatever you want from
dbcook/expression.py - that's a translator only - say, you can omit the
a.b.c.d joins, they don't work 100% anyway.

We did need such a GUI query-editor, and we had made one, but there was no
time to make it abstract enough / tree-like, so it has only one level of
either ORs or ANDs and handles just a few known object types (like kmail's
filters). This SA-GUI-query-builder question comes up over and over, and
still I haven't seen a good _abstract_, usable, programmable one - one you
feed the classes and their attributes. I think KDE (or was it OpenOffice)
has a tree-visualizing regular-expression builder, but it's hardcoded for
REs, and I doubt it's in Python. This can be a big effort; think well --
but it can be rewarding...

Ah, a warning: the number of possible test combinations for this is awfully
huge. There is test/other/expression.py, which tries to make 40-something
cases multiplied by, say, 10 different scenarios of inheritance /
relationships...
Make your set of tests first - the spec - and stick to that only.

ciao
svil

On Friday 03 April 2009 20:35:05 Christiaan Putter wrote:
 Hi Svil,

 My integration between traits and sqlalchemy is working rather well
 (at least it's passing the unit tests I could think of).  Thanks
 for all your help on that.

 Now I'm trying to do something similar to what you did in dbcook
 with expressions.  I'd like to construct a gui for the user to
 build queries with, save those queries etc.

 I started off simply using the filter() and order_by() functions,
 though those are attached to a particular session which isn't very
 useful.  The way you did it is much better obviously, so something
 like

 results = session.query( klas).select( expr.walk( Translator(
 context_Vars, context_classes

 is exactly what I need.  Can the expression be saved and modified
 again at a later stage?

 Though I don't know if I require all the functionality you have in
 your expression modules.  Or perhaps I do.

 Basically I'd just like the user to be able to select those columns
 on the class that's being queried, and any other classes with
 foreign key constraints.  That's simple to setup, the trick will be
 with all the different operators and parameter bindings.

 Is their a subset of things I should go through in your expression
 module to understand what I need to do something similar with a
 gui, or is everything very tightly integrated?

 Hope you're having a good weekend.

 Regards,
 Christian

 2009/4/1  a...@svilendobrev.com:
  i have similar thing, but by keeping an expression in my own
  terms and rendering into different things, like text, SQL,
  SA-expresion, etc. u define what is a variable, const, functor,
  overload all the operators, etc etc; then have a visitor that
  walks and translates (interprets) it into whatever one needs.
 
  see the generic one
  http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/utik
 /expr.py and the above-into-SA-translator (automaticaly doing
  other stuff)
  http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/expr
 ession.py
 
  ciao
  svil
 
  On Wednesday 01 April 2009 21:18:23 Hinrich Winther wrote:
  Maybe there is a better way of doing it. To be exact:
 
  I want to disassemble a given _BinaryExpression into basic
  elements (a list of strings, arrays, integers ...) and
  reassemble it somewhere else out of this basic elements.
 
  On 01.04.2009, at 18:14, Michael Bayer wrote:
   Hinrich Winther wrote:
   Hi,
  
   I need to generate a
   sqlalchemy.sql.expression._BinaryExpression by hand. So I
   looked into the docu and found that I need a left, right and
   a operator. I am stuck at the left :) type() says it is a
   sqlalchemy.sql.util.AnnotatedColumn. But I can't find the
   class definition anywhere in sqlalchemy. Where is it defined?
  
   if you just say column('foo') == column('bar') (column is
   sqlalchemy.sql.column), you've got a _BinaryExpression.  
   there shouldn't
   be a need to access _BinaryExpression directly.

 





[sqlalchemy] Re: Using SQLAlchemy to build sql files (without execution)

2009-04-02 Thread az

For example, see copyall.py and copydata.py in
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/metadata/
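
For rendering MySQL-flavoured statements as plain strings without any
connection, something along these lines may also help (module paths as in
later SQLAlchemy releases; the 0.5-era layout differs):

    from sqlalchemy import MetaData, Table, Column, Integer, String
    from sqlalchemy.schema import CreateTable
    from sqlalchemy.dialects import mysql

    meta = MetaData()
    users = Table('users', meta,
                  Column('id', Integer, primary_key=True),
                  Column('name', String(50)))

    # DDL rendered for the MySQL dialect, no engine/connection involved
    print(CreateTable(users).compile(dialect=mysql.dialect()))

    # DML with the parameter values inlined into the string
    ins = users.insert().values(name='max')
    print(ins.compile(dialect=mysql.dialect(),
                      compile_kwargs={'literal_binds': True}))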


On Thursday 02 April 2009 18:24:07 Maxim Khitrov wrote:
 On Tue, Mar 31, 2009 at 4:20 PM, Yassen Damyanov 
yassen@gmail.com wrote:
  On Mar 31, 2009, Maxim Khitrov mkhit...@gmail.com wrote:
  I should also mention that I'm
  using Python 2.6 on Windows. The latest stable MySQL-Python
  extension doesn't support 2.6, and there doesn't seem to be a
  test version available for Windows.
 
  The 1.2.2 Windows version of mysql-python should work flawlessly
  under Python 2.6. (I haven't tested that however, my Python is
  2.5.) Y.

 Yes, you're right. I searched through the forums and found an
 installation package for 2.6 that someone else put together.

 If using SQLAlchemy to create these dump files directly is not
 possible, what about using reflection to load the schema for a
 given database, recreating that schema in a temporary database,
 loading required data, and then generating the SQL dump?

 I can do the first part of that like so:

 engine =
 create_engine('mysql://root:password@localhost/caffeine_gum2')
 meta = MetaData()
 meta.reflect(bind=engine)

 Once I have the metadata, how can I create a temporary database and
 reload the table structure there? I suppose I'll need to use some
 other tool for generating the dump once the data is inserted, but
 that's fine. As long as I'm not modifying the live data I'll be
 happy.

 - Max

 





[sqlalchemy] Re: Operators in many-to-many relation queries

2009-04-02 Thread az

If the m2m is implicit (mapped with a secondary table), you can use it as a
plain relation for joins etc.; the difference is that .has() becomes .any()
and == becomes .contains().

See class RelComparator in
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/usage/query.py

which I use to make the relation operators look the same for m2m and
one2many.
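
The usual ones, using the BlogPost/Keyword classes from the ORM tutorial
(some_keyword being an already-loaded Keyword instance):

    # posts having at least one keyword that satisfies a criterion
    session.query(BlogPost).filter(
        BlogPost.keywords.any(Keyword.keyword == 'firstpost'))

    # posts whose collection contains a specific Keyword instance
    session.query(BlogPost).filter(BlogPost.keywords.contains(some_keyword))

    # posts with no keywords at all
    session.query(BlogPost).filter(~BlogPost.keywords.any())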


On Thursday 02 April 2009 19:50:11 Marcin Krol wrote:
 Hello everyone,

 For the life of me I can't find it in the docs: what operators are
 available for querying many to many relations?

 ORM tutorial shows literally 1 example of 'any' operator
 (BlogPost.keywords.any), which I used successfully, but would like
 to know all the operators I can use for querying many to many rels.


 dir(BlogPost.keywords) is no good:

 (snip internal methods)
 '_yield_per', 'add_column', 'add_entity', 'all', 'autoflush',
 'correlate', 'count', 'delete', 'distinct', 'enable_eagerloads',
 'except_', 'except_all', 'filter', 'filter_by', 'first',
 'from_self', 'from_statement', 'get', 'group_by', 'having',
 'instances', 'intersect', 'intersect_all', 'iterate_instances',
 'join', 'limit', 'logger', 'offset', 'one', 'options', 'order_by',
 'outerjoin', 'params', 'populate_existing', 'query_from_parent',
 'reset_joinpoint', 'scalar', 'select_from', 'session', 'slice',
 'statement', 'subquery', 'union', 'union_all', 'update', 'value',
 'values', 'whereclause', 'with_labels', 'with_lockmode',
 'with_parent', 'with_polymorphic', 'yield_per']


 Regards,
 mk

 





[sqlalchemy] Re: Generating a _BinaryExpression

2009-04-01 Thread az

I have a similar thing, done by keeping the expression in my own terms and
rendering it into different things: text, SQL, SA expressions, etc.
You define what a variable, a constant and a functor are, overload all the
operators, etc.; then you have a visitor that walks the tree and translates
(interprets) it into whatever is needed.

see the generic one
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/utik/expr.py
and the above-into-SA-translator (automaticaly doing other stuff)
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/expression.py

ciao
svil

On Wednesday 01 April 2009 21:18:23 Hinrich Winther wrote:
 Maybe there is a better way of doing it. To be exact:

 I want to disassemble a given _BinaryExpression into basic elements
 (a list of strings, arrays, integers ...) and reassemble it
 somewhere else out of this basic elements.

 On 01.04.2009, at 18:14, Michael Bayer wrote:
  Hinrich Winther wrote:
  Hi,
 
  I need to generate a sqlalchemy.sql.expression._BinaryExpression
  by hand. So I looked into the docu and found that I need a left,
  right and a operator. I am stuck at the left :) type() says it
  is a sqlalchemy.sql.util.AnnotatedColumn. But I can't find the
  class definition anywhere in sqlalchemy. Where is it defined?
 
  if you just say column('foo') == column('bar') (column is
  sqlalchemy.sql.column), you've got a _BinaryExpression.   there
  shouldn't
  be a need to access _BinaryExpression directly.
 




[sqlalchemy] Re: Session and pre/post callable methods

2009-04-01 Thread az

You have MapperExtension, per class, and SessionExtension, per session.
Both expose various events at different points of the object/session life
cycle. I think there are also some decorators for declarative...
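
A rough MapperExtension sketch of such save/delete hooks (audit_table,
MyModel and my_table are placeholders, not your schema):

    from sqlalchemy.orm import mapper, EXT_CONTINUE
    from sqlalchemy.orm.interfaces import MapperExtension

    class AuditExtension(MapperExtension):
        # runs after the row for `instance` has been INSERTed
        def after_insert(self, mapper, connection, instance):
            connection.execute(audit_table.insert().values(
                action='insert', target_id=instance.id))
            return EXT_CONTINUE

        # runs after the row has been DELETEd
        def after_delete(self, mapper, connection, instance):
            connection.execute(audit_table.insert().values(
                action='delete', target_id=instance.id))
            return EXT_CONTINUE

    mapper(MyModel, my_table, extension=AuditExtension())
    # or, with declarative:  __mapper_args__ = {'extension': AuditExtension()}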

On Wednesday 01 April 2009 19:26:35 Laurent Rahuel wrote:
 Hi,

 I'm currently trying to port a Django based application using
 sqlalchemy and I'm a bit confused with some Django's orm features
 I'm not able to find in sqlalchemy.
 To avoid triggers I used to redefine the save() and delete()
 methods from Django's models to be able to update some other tables
 not in relation with the object currently manipulated.

 Using sqlalchemy, every database querying is done via the session
 object and thus, save() and delete() are not linked to the
 manipulated object anymore.

 So my question is : Is there any kind of _pre or _post method I
 can define in my Declarative models to be able to do the same ?

 Regards,

 Laurent




[sqlalchemy] Re: Using SQLAlchemy to build sql files (without execution)

2009-03-31 Thread az

You can use echo=True on the engine and/or logging, to some extent.
There was also a recipe about how to print a query's actual SQL without
issuing it; see the recipes section on the site.
But I'm not sure you can really do all you want without a responding DB-API
(which will at some point require a server).
Try it, YMMV.

svil
www.svilendobrev.com

On Tuesday 31 March 2009 22:24:16 Yassen Damyanov wrote:
 Max,

 I am an SA newbie so you should not take my words too seriously.

 So here's what I think:

 First, the only way to know and be sure that your code runs fine is
 to run it against a real database server. MySQL is not that hard to
 set up. (If you need help for that, let me know, I can help.)

 Second, if you mean that you want to generate the DDL (the tables
 creation statements), then how about having SA run against a real
 MySQL server and logging the SQL activity? (I guess this may sound
 ugly but it will work with sure.)

 If you want to generate anything more than simply DDL, it looks to
 me that a different approach will do a better job.

 (Advice from real experts is more than appreciated.)
 Yassen

 On Tue, Mar 31, 2009 at 6:30 PM, Maxim Khitrov mkhit...@gmail.com 
wrote:
  Hello all,
 
  I'm working on a project for which I need to create sql files
  (the sort that you get from a database dump) without actually
  connecting to any database. The resulting code must be compatible
  with MySQL.
 
  My first inclination was to write a bunch of functions that
  contain sql code templates and substitute dynamic values where
  needed. I then came across SQLAlchemy and wanted to see if could
  use it to make this task a little easier.
 
  So far I've been reading the documentation on meta data and
  running some tests. My guess is that I need to define the
  statements and compile them with MySQLDialect. This, however,
  doesn't seem to be working. I tried calling compile on a Table
  object with dialect set to MySQLDialect(). When the result is
  printed there is nothing there (probably because I have no idea
  what I'm doing :)).
 
  Can anyone provide some tips on whether what I'm trying to do is
  possible, and if so, what is the best way to go about it?
  Basically, the output files will be a combination of create
  database, create table, insert, and update statements. I should
  also mention that I'm using Python 2.6 on Windows. The latest
  stable MySQL-Python extension doesn't support 2.6, and there
  doesn't seem to be a test version available for Windows.
 
  Even if the database API was available, there is no database for
  me to connect to. The goal is to generate these files on the
  local machine without depending on any external resources.
  Eventually, it may be possible to execute this code directly on
  the database server, which is why I think using SQLAlchemy from
  the beginning would make that transition much easier. Thanks for
  any help.
 
  - Max

 





[sqlalchemy] Re: Mapping an existing database

2009-03-29 Thread az

See SqlSoup and/or MetaData reflection (metadata.reflect(bind=engine)).
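
Roughly (connection URL and table name are placeholders):

    from sqlalchemy import create_engine, MetaData

    engine = create_engine('postgres://user:pass@localhost/existing_db')

    # reflect the existing schema instead of declaring it in python
    meta = MetaData()
    meta.reflect(bind=engine)
    users_table = meta.tables['users']

    # or, one level up, let SqlSoup build mapped classes on the fly
    from sqlalchemy.ext.sqlsoup import SqlSoup
    db = SqlSoup(engine)
    print db.users.first()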

 I'm new to SQLAlchemy. I am trying to use Pylons to expose an
 existing database in a browser. All the introductory material
 assumes I am going to create the database from python object
 definitions, but I want to create the python object definitions
 from the database. Am I on my own, or is there a well-worn path
 through these woods?

 Thanks for any advice you may have for me.





[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-25 Thread az

Ah. When you don't have an explicit spec, the testcases are the real spec,
so make sure you really cover all the funny cases (-:

I had the default_values problem too, and to solve it I split the attribute
access into two layers: one that sits below SA (as a fake dict), and a thin
one that sits on top of it. The first one handles the basic stuff / data
storage; the latter does default_values and lazily auto-sets them (in terms
of static_type/). Look for AutoSetter and related code in
dbcook/usage/static_type/sa2static.py

There is a subtle difference between the traits descriptor auto-setting the
default value and the SA/SQL server doing it - make sure you understand it.
In the first case SA/SQL would never know the value was missing.

As for precedence... I have one declaration and I would not care who sets
the values as long as it is what's in the declaration.

You choose yours.
svilen

On Wednesday 25 March 2009 14:47:35 Christiaan Putter wrote:
 Hi,

 Right now I'm really only using multiple threads for speeding up
 downloads over the internet (which I need to update the data).
 Actually commiting the data is still done via the gui's thread
 session.  At some point speeding up the algorithms would be nice, a
 concurrent solution would be ideal seeing as how the calculations
 for each record are independent.  Though I might look into RPC for
 that instead to get actual speed ups.  That will probably solve the
 thread / session problem in any case.

 The unit testing has revealed some serious problems.  Mainly todo
 with traits and the declarative extension not knowing how to
 cooperate (eg. traits provides a mechanism to set up default
 values, though the instrumented attribute obviously overrides that
 and returns what sql says it should).  I really don't want to set
 up the mapper from a class definition myself, declarative makes
 that much simpler so i don't want to give up on that.  So I'll have
 to create my own intrumented attribute / traits subclass.  Either a
 trait that wraps an instrumented attribute, or an instrumented
 attribute that wraps a trait.  I need to somehow provide both
 traits and SA with the functionality they need in a single
 attribute on the instance.  What do you think will be easiest?

 That of course raises the question of what should take precedence.
 The value provided by traits or the one from the database.  Or
 whichever is not None.

 I'll look into the mapper extensions.

 I never thought some simple unit testing would cause so many
 headaches :-)

 I'm talking to someone at enthought to maybe integrate that into
 their sandbox for now.  I'm sure once someone else with more
 experience with the traits framework looks at it we'll be able to
 figure out what the simplest way to use sql with traits would be.  
 And then the implementation should become clearer as well.

 Hope you're having a nice day,
 Crhistian

 





[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-23 Thread az

Let me think about it.
As I see it, you want to have runtime-invented, data-only extensions to
some object - or is it data plus some algorithms about it? Would the
algorithms be new and reside in the new thing, or are they just there to be
used?

IMO this should be done via joins, a la PyProtocols; inheritance is more
about constant structures, while you want the same base object to grow and
grow. I think SA.query(A, B1, B2, ...).join(A.b1).join(A.b2)... gives you
such a notion (made with A.b1 pointing to B1, etc.) - try it? You'll get
tuples of (A, B1, B2, B3, ...) objects there; see the sketch below.

Hmm, one suggestion - try doing all this while forgetting about traits and
meta-stuff first. Once you get the dynamic-structure persistency working,
add the meta-stuff on top.
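
i.e. a sketch, with A.b1 / A.b2 assumed to be relations pointing to B1 / B2:

    # each row comes back as a tuple of the requested entities
    for a, b1, b2 in session.query(A, B1, B2).join(A.b1).join(A.b2).all():
        print a, b1, b2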

ciao
svil

On Sunday 22 March 2009 23:31:49 Christiaan Putter wrote:
 Hi,

 Maybe some better explanation is required from my side for you to
 be able to help me better.

 My app is basically for doing some plotting, analysis and filtering
 of securities on the stock market.

 The main class that does most of the work is
 Security(HasTraitsORM), where HasTraitsORM is my version of
 declarative's Base class which takes care of turning traits
 (attributes) into correctly typed Columns for declarative to set up
 the mapping.

 The Security class also handles retrieving historical data from an
 hdf5 file which in my opinion is better suited to storing such data
 in an hierarchical structure (with added transparent compression).

 SQLAlchemy comes into the picture for running queries mostly.  And
 storing precomputed values.

 For instance, using the above setup, I can add a child class
 SMAFields at runtime which computes some simple moving averages of
 closing prices.  Once computed this gets stored in its own table
 with columns sma10, sma20, sma50, etc. for the different periods.

 Using a combination of traits and SA any Security instance then has
 some new attributes, sec.sma10 for instance.  Which is either
 retrieved from the SMAFields table via SA or computed from methods
 on the SMAFields class if needed.

 Though the really useful bit is querying:
 sess.query(Security).filter(Security.sma10 >= Security.sma20),
 which behaves as one would expect.

 The idea being that the user can easily add new functions for
 computing values from a security's data, and after the new table is
 created and filled with the results, he can run normal sql queries
 on it.  Any other child classes that get added also have access to
 the SMAFields class's fields through it's Security relation.  Thus
 a user can add as many child classes with as many columns as they
 want, and have access to all the fields they've already set up. 
 I'm not sure something like that is possible through inheritance?

 The new fields can then be added to the table viewer in the gui and
 basically be used as if they were part of the Security class from
 the beginning, running sql queries being the most important
 feature.

 It mostly works, though the really ugly hacking of SA I've done to
 get it to work might not be perfect.

 Is there a more elegant way to this?  Is this even the best
 approach at allowing a user to add new behaviour to the
 application?

 I'm new to SA and have never used any SQL before either so I'm sure
 there must be better design patterns for something like this
 already.

 I hope it's clear what my intent is now.

 Thanks again for you help svilen.

 Regards,
 Christian

 





[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-23 Thread az

On Monday 23 March 2009 23:51:26 Christiaan Putter wrote:
 Hi,

 You're quite right, the join was the tricky bit at first. 
...

 The 'dynamically' added classes look something like this:

 class SMAFields(SecurityFields):

     sma10 = Float(sqldb=True)

     def _get_sma10(self):
         ##  some algo ###
         return res


 That code just resides in some text editor provided for the user
 where he can change it, add new class etc.  Then from within the
 main app I use exec to compile it. The base class sets up the
 mapping, foreign keys, creates the table and also adds the sma10
 column to the Security class.  Which is why I can then:

 data = q.filter(Security.sma10 >= 5)[:]

 So even though the sma10 column isn't on the Security table itself,
 through the foreign key constraint and the outer join, it behaves
 as one would expect.

 And the query only returns Security instances because I use
 sess.query(Security), without any of the other added classes.

 That all works quite well.  The problems I'm having right now:

 1. figuring out what the best practice is for working with data
 using multiple threads
 - only have one session in the gui thread? other threads call
 commit on this session using a lock?
 - expunge instances from gui sess before working on then, then
 commit changes in new session?  then add them back to gui's
 session?

 2. Creation of SMAFields records if they don't exist.  Right now
 I'm just creating them by hand.  Though automatic creation when
 loading a Security would be nice.
 - tried with orm.reconstruct decorator on Security class,
 doesn't seem to work.

Can't really help you here...
For multithreading, try whichever is easier to make work _now_, and think
about redoing it for real later... why multiple threads, though?
reconstruct fires when loading from the db into memory. IMO you need an
on-save-like hook - see mapper extensions / session extensions.

 Maybe a bit too complicated for what it achieves, but the ability
 to define new fields with custom algorithms on the fly can be quite
 useful for users.
The dynamic-ORM road is not really travelled much... keep going.
Maybe put what you did somewhere visible, with a _good_ description of what
you need from it...

ciao
svil




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-22 Thread az

good on yer!
one thing that i sorta failed is making a clean set of tests for this.
if u aren't too far in the usage, save some major future headaches and 
do yourself a favour, make such test set. All the things u've tried 
has to be (simple) testcases - it will be the _spec_ of how the thing 
works / what u want from it. Anyone else on the same path could also 
use it then ;-)

i hope that once u do that u'll get an idea of where exactly u have 
failed and fix it.

as of expiration, it also did me some trouble. i have not stepped on 
that session-gone one but i can see how it happens. The (expected) 
life-times of SA-touched things aren't that obvious, and aren't 
documented. e.g. Objects living longer than session or in an extreme 
case, the mappers, might be troublesome.

what i can suggest is, put debug statements and run with echo=True and 
watch what goes on behind and how SA-activities intertwist with 
traits' /your ones.

ciao
svilen

On Sunday 22 March 2009 04:00:27 Christiaan Putter wrote:
 Hi svilen,

 Thanks for your advice, going through your code helped a lot in
 understanding how SA works.

 I've gotten traited classes to behave like classes generated by
 SA's declarative extension.  After a lot of stepping through code I
 realised the main problem was that SA removes values from an
 instance __dict__ once they have been expired.  Traits smartly (or
 not so smartly) picks this up and on the next access to those
 attributes and returns their default values.  Whereas SA would
 normally refresh the values from SQL.  This caused some serious
 headaches, and really strange behaviour.

 My solution was to the overwrite the class's __getattribute__ with:

 def __getattribute__(self, key):
     try:
         dict = super(HasTraitsORM,
                      self).__getattribute__('__dict__')

         if ('_sa_state' in dict):
             state = dict['_sa_state']
             if key in state.manager.keys():
                 return state.get_impl(key).get(state)

     except Exception, e:
         print "Exception in SA getattr for %s: %s" % (key, e)
         pass

     return super(HasTraitsORM, self).__getattribute__(key)


 Which tries to first get an attribute from the instance's state
 (which will issue SQL if needed), and if that fails pass the
 request along to its super class where Traits will do it's magic if
 needed.

 This seems to work in the tests I've been trying.  Most of the work
 is done by SA declarative, and I've just added a thin layer on top
 to get it to behave well with Traited classes and map traits to SA
 columns.

 One issue I'm still having though is that after commits all
 attributes get expired.  If you close the session the instance was
 attached to, the data you just committed can't be accessed any more. 
 This is also the behaviour of plain declarative classes.  This is
 rather unintuitive I believe since once you've committed an instance
 and closed the session you can't use the data you just set any
 more. In my case this means records being displayed in a gui get
 corrupted.

 Though what's strange is that if you access the attributes after
 the commit and before you close the session, and only then close
 the session, the values remain in the instance's __dict__ and the
 class behaves normally again.  Is this intended for some reason? 
 Is there a workaround?  Personally I don't think quantum physics
 should apply to the bits of my programs and thus merely observing
 them should not change their behaviour. From going through the
 source I found a 'dont_expire_missing' attribute on attribute
 implementations, though this does unfortunately not do what I'd
 like.

 So what I'm doing now is keeping a session open on the gui's
 thread. Which causes some problems when trying to modify said
 instances from other threads and committing the new values.

 How's this usually done is SA?  Right now I've put a lock on the
 gui thread's session and allow other threads to issue a commit on
 that main thread if need be.  I can't imagine this is the best way
 to do it...

 Hope you're all having a great weekend,

 Christian

 2009/2/8  a...@svilendobrev.com:
  afaiknow these traits are like my own static_types, i.e.
  descriptors holding metadata and applying it to attribute access.
  i have been combining SA with static_type for more than 2 years
  now, since SA 3.0.
 
  The approach i did in the beginning was to replace the object's
  __dict__ by something smart that is static_type-aware. When
  InstrumentionManager framework came into place, i did not find it
  any different, as SA still uses the __dict__ for data access. The
  difference is that now my  __dict__ replacement is created once
  and not at every attr.access.
 
  i did suggest one patch about replacing the SA's
  obj.__dict__.whatever usage with an explicit set of methods (so
  one knows what access to mime), and that was making SA 1-2%
  faster, but it wasn't accepted.
 
  basicaly now there's 

[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-03-22 Thread az

On Sunday 22 March 2009 21:17:15 Christiaan Putter wrote:
 Hi Svilen,

 Setting up tests is a very good idea, I'll start on some unit
 testing immediately.  Definitely the best way to ensure behaviour
 remains constant as you're working on the implementation, was just
 too lazy to do so up till now.
i'm sure if u do the tests first trying to cover as many use cases as 
u can, u may even better understand what u really need... TDD of 
sorts as first feedback to requirements-analysis.

 Speaking of laziness, I've noticed that setting lazy=False on a
 relation does not effect refresh operations on the relation, it
 only loads on reconstruction.  Having cascade='all' doesn't change
 this either.  Is their some setting that I'm missing?
dunno.

 Another little issue I'm trying to get fixed out is extending a
 class's attributes at run time, let's call it Parent.  The need for
 this is allowing users to add custom fields (in a seperate table as
 the parent) at run time to Parent.  My approach is letting the user
 set up a new class (ChildX), which inherits  Base (not parent), 
 with the attributes and methods that compute said attributes during
 run time.  Then I compile that using exec, and and do some magic in
 its metaclass.  Basically I add a relation to Parent with backref
 to ChildX and uselist=False on both sides.  Enthought then allows
 me to add new traits to the Parent class that delegate their value
 the ChildX instance through the backref on Parent.  This is
 basically just mapping the Parent class over several tables, with
 the ability to extend the class on the fly.

 It's mostly working.  One problem though is that on reconstructing
 a Parent instance (or creating a new one) I have to have a
 mechanism that checks if all the Child classes have instances
 related to this record of Parent.  Not a big deal though.

 Is their a better way to do this?  I'm doing some really ugly
 hacking to the Parent's mapper at run time, seeing as Mapper
 doesn't have a delete_property method.

 Should I try using normal inheritance instead? Will I be able to
 access ChildX.x from ChildY for instance?  And what about changing
 ChildX's definition and running exec on it again?

i don't really get the reason why u do all this. 
wanna extend existing class runtime or want to add new class?
IMO just subclassing on-the-fly would be easier. 


svil

 Perhaps someone has done something similar before and can give me
 some pointers.

 Thanks for all the advice up till now, it's been really useful.

 Have a great day,
 Christian

 2009/3/22  a...@svilendobrev.com:
  good on yer!
  one thing that i sorta failed is making a clean set of tests for
  this. if u aren't too far in the usage, save some major future
  headaches and do yourself a favour, make such test set. All the
  things u've tried has to be (simple) testcases - it will be the
  _spec_ of how the thing works / what u want from it. Anyone else
  on the same path could also use it then ;-)
 
  i hope that once u do that u'll get an idea of where exactly u
  have failed and fix it.
 
  as of expiration, it also did me some trouble. i have not stepped
  on that session-gone one but i can see how it happens. The
  (expected) life-times of SA-touched things aren't that obvious,
  and aren't documented. e.g. Objects living longer than session or
  in an extreme case, the mappers, might be troublesome.
 
  what i can suggest is, put debug statements and run with
  echo=True and watch what goes on behind and how SA-activities
  intertwist with traits' /your ones.
 
  ciao
  svilen
 
  On Sunday 22 March 2009 04:00:27 Christiaan Putter wrote:
  Hi svilen,
 
  Thanks for your advice, going through your code helped a lot in
  understanding how SA works.
 
  I've gotten traited classes to behave like classes generated by
  SA's declarative extension.  After a lot of stepping through
  code I realised the main problem was that SA removes values from
  an instance __dict__ once they have been expired.  Traits
  smartly (or not so smartly) picks this up and on the next access
  to those attributes and returns their default values.  Whereas
  SA would normally refresh the values from SQL.  This caused some
  serious headaches, and really strange behaviour.
 
  My solution was to the overwrite the class's __getattribute__
  with:
 
      def __getattribute__(self, key):
          try:
              dict = super(HasTraitsORM,
  self).__getattribute__('__dict__')
 
              if ('_sa_state' in dict):
                  state = dict['_sa_state']
                  if key in state.manager.keys():
                      return state.get_impl(key).get(state)
 
          except Exception, e:
              print "Exception in SA getattr for %s: %s" % (key, e)
              pass
 
          return super(HasTraitsORM, self).__getattribute__(key)
 
 
  Which tries to first get an attribute from the instance's state
  (which will issue SQL if needed), and if that fails pass the
  request along to its 

[sqlalchemy] Re: null() and notnull()

2009-03-19 Thread az

 def search( self, **kw  ):
   by_where_clause = {}
   for k,v in kw.items():
  by_where_clause[ k ] = v
i guess u want to do query.filter_by(**by_where_clause) after that?

it's just a syntax sugar over .filter(). so

   by_where_clause = []
   for k,v in kw.items():
      col = getattr(self.c, k)
      if v == 'nn':   # I use 'nn' to generate a NOT NULL
         by_where_clause.append( col != None )
      else:
         by_where_clause.append( col == v )
   query.filter(and_(*by_where_clause))
that's 3 lines more ..

svil
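
For reference, the same idea as a small standalone helper - a sketch only,
with cls standing for whatever mapped class is being queried, and the 'nn'
convention taken from jo's code below:

from sqlalchemy import and_

def search(session, cls, **kw):
    clauses = []
    for k, v in kw.items():
        col = getattr(cls, k)            # mapped column attribute
        if v == 'nn':                    # 'nn' means "IS NOT NULL"
            clauses.append(col != None)
        else:
            clauses.append(col == v)
    return session.query(cls).filter(and_(*clauses)).all()

# e.g.  search(session, Mytb, myfield1='nn', myfield2='x')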

On Thursday 19 March 2009 17:43:22 jo wrote:
 Well, MIchael, in my case a notnull() function could be very
 interesting because I'm using it in a function, and would like pass
 values as parameters in such way:


 def search( self, **kw ):
   by_where_clause = []
   for k, v in kw.items():
     if k == 'myfield1':
       if v == 'nn':   # I use 'nn' to generate a NOT NULL because we don't have a notnull() function
         by_where_clause.append( self.c.field1 != None )
       else:
         by_where_clause.append( self.c.field1 == v )

     elif k == 'myfield2':
       if v == 'nn':   # I use 'nn' to generate a NOT NULL because we don't have a notnull() function
         by_where_clause.append( self.c.field2 != None )
       else:
         by_where_clause.append( self.c.field2 == v )

     elif k == 'myfield3':
       if v == 'nn':   # I use 'nn' to generate a NOT NULL because we don't have a notnull() function
         by_where_clause.append( self.c.field3 != None )
       else:
         by_where_clause.append( self.c.field3 == v )
   ...




 Mytb.search(myfield=None) --> generates WHERE myfield IS NULL
 Mytb.search(myfield=null()) --> generates WHERE myfield IS NULL
 Mytb.search(myfield='nn') --> generates WHERE myfield IS NOT NULL



 if we have a notnull() function these thing could be easier:

 def search( self, **kw  ):
   by_where_clause = {}
   for k,v in kw.items():
  by_where_clause[ k ] = v


 Mytb.search(myfield=None) --> generates WHERE myfield IS NULL
 Mytb.search(myfield=null()) --> generates WHERE myfield IS NULL
 Mytb.search(myfield=notnull()) --> generates WHERE myfield IS NOT NULL

 Michael Bayer wrote:
  well usually null() and not_(null()) aren't needed as explicit
  constructs.comparisons like somecol == None and somecol !=
  None will generate the appropriate NULL/NOT NULL expression.
 
  On Mar 19, 2009, at 4:48 AM, jo wrote:
  Hi all,
 
  I would like to know if there's a notnull() function in
  sqlalchemy similar to null()
  to avoid things like not_(null()) ?
 
  thank you
 
  j

 





[sqlalchemy] Re: interval tree table

2009-03-18 Thread az

On Wednesday 18 March 2009 22:31:37 bard stéphane wrote:
 is there an easy way to represent an interval tree table in
 sqlAlchemy? Interval Tree is a very common way to represent a tree,
 with right, left, level indices, where a node is a leaf when (right - left = 1)?

FYI see this thread with various tree representations (may or may not 
be there):
Adjacency List tree - inefficient reading
From: Kless jonas@googlemail.com
To: sqlalchemy 
Date: 2009-01-18 20:33
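
Not from that thread - just a minimal sketch of the left/right/level columns
the question mentions (the usual nested-set layout) with the classic
descendants query; all names here are illustrative:

from sqlalchemy import Column, Integer, String, and_
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = 'nodes'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    lft = Column(Integer, nullable=False)       # left index
    rgt = Column(Integer, nullable=False)       # right index
    level = Column(Integer, nullable=False, default=0)

    @property
    def is_leaf(self):
        return self.rgt - self.lft == 1         # nothing nested in between

def descendants(session, node):
    return (session.query(Node)
            .filter(and_(Node.lft > node.lft, Node.rgt < node.rgt))
            .order_by(Node.lft)
            .all())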




[sqlalchemy] Re: Generating set of AND_ clauses dynamicly

2009-03-06 Thread az

or_( *list_of..)
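
Spelled out against the query quoted below (User, LangPair and dbsession come
from there; the pair list is made up, and the second column is written as
b_lg_id on the assumption that the doubled a_lg_id was a typo):

from sqlalchemy import and_, or_

pairs = [('L1', 'L2'), ('L3', 'L4'), ('L5', 'L6')]   # built from the user's search
ands = [and_(LangPair.a_lg_id == a, LangPair.b_lg_id == b) for a, b in pairs]

q = (dbsession().query(User)
     .join(['lang_pairs'])
     .filter(or_(*ands)))    # note the *; or_(ands) is what raises the ArgumentError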

On Friday 06 March 2009 20:30:08 Tomasz Nazar wrote:
 Hi there,

 I have small issue and don't know how to solve ..
 I need to have this kind of query:

 q = dbsession().query(User).
 options(eagerload_all('lang_pairs.lang_a'),
 eagerload_all('lang_pairs.lang_a')).

 join(['lang_pairs']).
 filter(
   or_(
 and_(LangPair.a_lg_id == 'L1', LangPair.a_lg_id ==
 'L2'), and_(LangPair.a_lg_id == 'L3', LangPair.a_lg_id == 'L4'),
 and_(LangPair.a_lg_id == 'L5', LangPair.a_lg_id == 'L6'), 
   )
 )

 Obviously the part with many 'and_' clauses needs to be dynamicly
 created based on particular user search.
 How to achieve (generate dynamicaly) that?

 When I tried to do a list of 'and_' and put it into
 '...filter(or_(list_of_generated_ands))' SQLA told me:
 ArgumentError: filter() argument must be of type
 sqlalchemy.sql.ClauseElement or string


 Tomasz






[sqlalchemy] Re: Modifiable collections on a many-to-many, same table mapping

2009-03-05 Thread az

as i see u have normal mapping for the Connection, and still use it as 
secondary table; once i did use such (semi-legal?:) thing but the relation 
was readonly.
try not giving secondary* args ?

On Thursday 05 March 2009 18:37, Kevin Dangoor wrote:
 I have a many-to-many mapping that joins against a single table. The
 relation works fine, except when I try to add a new mapping via the
 list created by relation(). To make things extra interesting, I'm
 using declarative (which I really like, actually, because it maps in a
 fairly straightforward manner to standard SA-isms). Here's the setup:

 class Connection(Base):
 __tablename__ = connections
 followed_id = Column(Integer, ForeignKey('users.id',
 ondelete='cascade'), primary_key=True)
 followed = relation('User',
 primaryjoin='User.id==Connection.followed_id')

 following_id = Column(Integer, ForeignKey('users.id',
 ondelete='cascade'), primary_key=True)
 following = relation('User',
 primaryjoin='User.id==Connection.following_id')

 followed_viewable = Column(Boolean, default=False)

 class User(Base):
 __tablename__ = users

 id = Column(Integer, primary_key=True)
 username = Column(String(128), unique=True)
 email = Column(String(128))
 password = Column(String(20))
 settings = Column(PickleType())
 projects = relation('Project', backref='owner')
 quota = Column(Integer, default=10)
 amount_used = Column(Integer, default=0)

 # this is the interesting part
 users_i_follow = association_proxy('i_follow', 'followed')
 i_follow = relation(Connection,
 primaryjoin=Connection.following_id==id,
 secondary=Connection.__table__,
 secondaryjoin=id==Connection.followed_id)

 users_following_me = association_proxy('following_me', 'following')
 following_me = relation(Connection,
 primaryjoin=Connection.followed_id==id,
 secondary=Connection.__table__,
 secondaryjoin=id==Connection.following_id)

 def follow(self, ou):
 self.i_follow.append(Connection(following=self, followed=ou))

 So, everything works just fine when I add the connection to the session:
  u1 = User(kevin, foo, bar)
  u2 = User(joe, foo, bar)
  s.add(u1)
  s.add(u2)
  s.add(Connection(following=u1, followed=u2))
  s.flush()
  u1.users_i_follow

 [<bespin.model.User object at 0x78ee30>]

  u1.i_follow

 [<bespin.model.Connection object at 0x7d0c10>]

  u2.users_following_me

 [<bespin.model.User object at 0x78ed50>]

 But when I try to use that follow method defined up there, I get an
 exception:

   File
 /Users/admin/projects/bespin/lib/python2.5/site-packages/sqlalchemy/orm/sy
nc.py, line 81, in _raise_col_to_prop
 raise exc.UnmappedColumnError(Can't execute sync rule for source
 column '%s'; mapper '%s' does not map this column.  Try using an
 explicit `foreign_keys` collection which does not include destination
 column '%s' (or use a viewonly=True relation). % (source_column,
 source_mapper, dest_column))
 sqlalchemy.orm.exc.UnmappedColumnError: Can't execute sync rule for
 source column 'users.id'; mapper 'Mapper|Connection|connections' does
 not map this column.  Try using an explicit `foreign_keys` collection
 which does not include destination column 'connections.followed_id'
 (or use a viewonly=True relation).

 users.id *is* mapped in Connection (twice, in fact, which may be the
 problem...)  I'm hoping that there's some simple flag I'm missing
 here.

 Kevin




[sqlalchemy] Re: Self Join via Join Table ?

2009-02-25 Thread az

u mean, the Bar is an association table of Foo to Foo?
u have to use secondary_table and/or secondary_join in the relation 
setup. And probably specify remote_side or it may not know which Foo 
is what.
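
A minimal self-contained sketch of that setup (the keyword names are
secondary / primaryjoin / secondaryjoin); here Bar is written as a plain
association Table with foreign keys, which is the simplest way to get the
join conditions resolved:

from sqlalchemy import Column, Integer, ForeignKey, Table, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation, sessionmaker

Base = declarative_base()

bar = Table('bar', Base.metadata,
    Column('parent_id', Integer, ForeignKey('foo.id'), primary_key=True),
    Column('child_id', Integer, ForeignKey('foo.id'), primary_key=True))

class Foo(Base):
    __tablename__ = 'foo'
    id = Column(Integer, primary_key=True)

    children = relation('Foo',
        secondary=bar,
        primaryjoin=id == bar.c.parent_id,
        secondaryjoin=id == bar.c.child_id,
        backref='parents')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

root, kid = Foo(), Foo()
root.children.append(kid)
session.add(root)
session.commit()
print kid.parents      # [<Foo ...>]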

On Wednesday 25 February 2009 03:39:20 Stef wrote:
 Hello Everyone,
First of all, kudos on SQLAlchemy.. the speed is pretty amazing
 - I am coming from the SQLObject world and there is a definite
 difference. Excellent work. I am also getting to grips with it
 pretty quickly, using object_session and all that good stuff. This
 said, I have hit that 20% problem, and am hoping someone can shine
 a light on it.

I have a table, lets call it Foo and another table Bar. Foo
 should be able to get a list of its parents via Bar or its
 children via Bar. I am also using the declarative_base system
 rather than table/ mapper defined seperately.

class Foo(Base):
 id = Column(Integer, primary_key=True)

class Bar(Base):
 parent_id = Column(Integer, default=0)
 child_id = Column(Integer, default=0)

So, I thought something like ; children = relation(Foo,
 backref=backref('parents'), primaryjoin=and_(Foo.id==Bar.parent_id)

But that's where I hit the 'wall' as it were, is there a way to
 setup a synonym for Foo in the primaryjoin clause ? Am I missing
 something stupid ? (I am okay with that ;)

Regards
Stef





[sqlalchemy] Re: dynamic relation filtering

2009-02-25 Thread az

the query().with_polymorphic( <list of classes> ) ?

On Wednesday 25 February 2009 15:43:54 Vladimir Iliev wrote:
 hi,

 how can i filter dynamic relation's query to instances of a given
 class, including the subclasses ?

 i tried something like

 part.documents.filter(Part.documents.of_type(CADDocument))

 but i'm getting

 ArgumentError: filter() argument must be of type
 sqlalchemy.sql.ClauseElement or string

 thanks in advance






[sqlalchemy] Re: Events Undo/Redo Functionality?

2009-02-24 Thread az

these are two general patterns, observer/listener and undo/redo 
(command-pattern), which have nothing to do with DB.
i dont know about 2) DB-observers which would react on somebody else 
changing the DB, AFAIK that is possible on certain servers only, but 
the rest should be done independly on higher level than DB-stuff. 
read about Model-View-Controller.

On Tuesday 24 February 2009 15:00:28 paniq303 wrote:
 Hi everyone,

 I am developing an offline music synthesizing and sequencing
 application in Python, using the SQLite backend to store song data.
 At the moment, I am using my own object relational mapper between
 SQL and Python, which is fairly minimal. I feel compelled to use
 SQLAlchemy, but there are two areas where I would like to assess my
 options before I go forward and embed SQLAlchemy into my
 application.

 1) Is it possible/planned to connect signal handlers to database
 changes? In my app, changing object properties will call connected
 event handlers, which in turn refresh UI views. How could I do this
 with SQLA?

 Can you think of another way to update the UI from DB changes?
 Another way could be to have a hook listen to transactions and
 generate journal records from which events can be signalled - this
 would even enable multiple processes to handle events. Does that
 sound more plausible? Is it possible?

 2) I see that SQLA supports rollback for pending transactions,
 which is a great feature. But as far as I understand, it is not
 possible to rollback transactions that have already been committed
 (undo). How would I go about and implement such a feature into my
 application best, using SQLA as a backend?

 I understand that SQL is traditionally used in web services, where
 event handling is virtually impossible and extensive undo/redo
 functionality is rarely needed. But I believe it makes perfect
 sense to use databases for document formats as well.

 Looking forward to your answer.





[sqlalchemy] Re: [sqlalchemy]

2009-02-21 Thread az

what u mean by working?

if the object's __init__ is not called when loaded from db (like 
pickle) - see docs for other hooks.

http://www.sqlalchemy.org/docs/05/mappers.html#constructors-and-object-initialization

e.g. the @orm.reconstructor decorator
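
roughly like this, using the model from the message below (a sketch only;
the point is that __init__ runs only for objects you construct yourself,
while the @reconstructor-marked method runs when the ORM loads rows back):

from sqlalchemy import Column, Integer, Unicode, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import reconstructor, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'user'
    user_id = Column(Integer, primary_key=True)
    fname = Column(Unicode(50), default=u'')
    lname = Column(Unicode(50), default=u'')

    def __init__(self, dic):
        self.dic = dic                     # only runs for new objects
        for key, value in dic.items():
            setattr(self, key, value)

    @reconstructor
    def init_on_load(self):
        # rebuild what __init__ would have set, from the loaded columns
        self.dic = {'fname': self.fname, 'lname': self.lname}

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

session = Session()
session.add(User({'fname': u'value1', 'lname': u'value2'}))
session.commit()
session.close()

session = Session()
print [u.dic for u in session.query(User).all()]   # .dic exists on loaded objects too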


On Saturday 21 February 2009 10:28:23 laurent FRANCOIS wrote:
 Hello

 Is it obvious that this model:


 %
 ## Class Table
 user_table = Table(
     'user', metadata,
     Column('user_id', Integer, primary_key=True),
     Column('fname', Unicode(50), default=''),
     Column('lname', Unicode(50), default='') )

 ## Class ORM
 class User(object):
     u"""User model"""
     def __init__(self, dic):
         """Set instance attributes with a dictionary:
             dico = {'fname': 'value1', 'lname': 'value2'}
             instance = User(dico)
         """
         self.listfieldname = dic.keys()
         self.dic = dic
         for key in dic:
             setattr(self, key, dic[key])

     def __repr__(self):
         return '%s %s' % (self.dic[self.listfieldname[0]],
                           self.dic[self.listfieldname[1]])

 mapper(User, user_table)

 %%

 doesn't work for a session.query(User).all()
 but works fine for session.add(User(dico))
 ???

 Thanks





 





[sqlalchemy] Re: How to write a self 2 self relationship?

2009-02-16 Thread az

put it as text, it will be eval()'uated later

On Monday 16 February 2009 10:57:11 一首诗 wrote:
 I tried to write :

 #----------------------------------------------------------------------
 class User(Base):
     __tablename__ = 'users'

     id = Column(Integer, primary_key=True)
     name = Column(String)
     fullname = Column(String)
     password = Column(String)
     sons = relation(User, order_by=User.id, backref="parent")
 #----------------------------------------------------------------------

 But the as 'User' is not defined at the line ... relation ...  is
 processed, the code above doesn't work.

 So, does sqlalchemy support self 2 self relationship ?
 If the answer is YES, how to do it?




[sqlalchemy] Re: How to write a self 2 self relationship?

2009-02-16 Thread az

thats diff. thing, see self-ref. relations
http://www.sqlalchemy.org/docs/05/mappers.html#adjacency-list-relationships
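
a minimal sketch of what that page describes: the class in the question has
no self-referencing foreign key at all, which is why SQLAlchemy cannot work
out the join; adding one plus remote_side on the backref makes the adjacency
list work:

from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation, backref, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    parent_id = Column(Integer, ForeignKey('users.id'))   # the missing piece

    sons = relation('User',
                    backref=backref('parent', remote_side=[id]))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

dad = User(name='dad', sons=[User(name='a'), User(name='b')])
session.add(dad)
session.commit()
print [s.name for s in dad.sons], dad.sons[0].parent.name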


On Monday 16 February 2009 11:18:59 一首诗 wrote:
 Like this ?
 --------------------------------------------------------------------
 class User(Base):
     __tablename__ = 'users'

     id = Column(Integer, primary_key=True)
     name = Column(String)
     fullname = Column(String)
     password = Column(String)
     sons = relation('User', order_by='User.id', backref="parent")

 --------------------------------------------------------------------

 I got an Exception:

 sqlalchemy.exc.ArgumentError: Could not determine join condition
 between parent/child tables on relation User.sons.  Specify a
 'primaryjoin' expression.  If this is a many-to-many relation,
 'secondaryjoin' is needed as well.

 On Feb 16, 5:08 pm, a...@svilendobrev.com wrote:
  put it as text, it will be eval()'uated later




[sqlalchemy] Re: A question about best practices after the Object Relational Tutorial

2009-02-15 Thread az

 I know how to query the database, and to add a new database entry.
 But I really wanna know where my helper code goes. My idea is that
 there is a function addCustomer() in the Customer class, but I
 tried this out and it didnt work. Any suggestions or examples how
 to proceed?
if in the class, it ought to be a classmethod, alternative constructor 
of sorts. 
helper code - when would that be executed? 
if at creation, but not at load-from-db, just put in the __init__...
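
a tiny sketch of the classmethod-as-alternative-constructor idea; Customer
and its column are made up here, and the declarative setup is the usual one:

from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Customer(Base):
    __tablename__ = 'customers'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

    @classmethod
    def add_customer(cls, session, name):
        """Build the object, add it to the session, let the caller commit."""
        customer = cls(name=name)
        session.add(customer)
        return customer

# usage, given a configured Session:
#   c = Customer.add_customer(session, 'Alice')
#   session.commit()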




[sqlalchemy] Re: Foreign Key Constraint between seperate metadatas

2009-02-14 Thread az

use one decl-base for all?

On Saturday 14 February 2009 18:30:14 dasacc22 wrote:
 Hi,

 Say I have two different files that I import that establish tables
 via declarative_base, so

 import a, b

 where there exists a.Group and b.User and then I create my engine

 engine = create_engine(...)

 and then I ..

 a.metadata.bind = engine
 b.metadata.bind = engine

 or I could call create_all(engine) or whatever. But what I want to
 do is create a many-to-many table called group_users and set ..

 a.User.groups = relation('Group', secondary=group_users,
 backref='user')
 b.Group.users = relation('User', secondary=group_users,
 backref='group')

 and right now i do this by creating a third metadata that i bind to
 engine. But unfortunately its not aware of either tables from the
 first two metadatas that ive imported.

 Is there a way that I can merge all metadata's? Or make my
 relational metadata aware of the tables in the first two?

 Thanks,
 Daniel
 





[sqlalchemy] Re: Foreign Key Constraint between seperate metadatas

2009-02-14 Thread az

put the decl-base only in separate file, and import it in all usages?
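
i.e. something along these lines (file and class names are only an
illustration): the Base and its MetaData live in one small module, a.py and
b.py both import it, and every table lands in the same MetaData so the
secondary table and its ForeignKeys resolve.

# meta.py
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
metadata = Base.metadata

# a.py
from sqlalchemy import Column, Integer, String
from meta import Base

class Group(Base):
    __tablename__ = 'groups'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# b.py
from sqlalchemy import Column, Integer, String, Table, ForeignKey
from sqlalchemy.orm import relation
from meta import Base, metadata

group_users = Table('group_users', metadata,
    Column('group_id', Integer, ForeignKey('groups.id')),
    Column('user_id', Integer, ForeignKey('users.id')))

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    groups = relation('Group', secondary=group_users, backref='users')

# main.py
from sqlalchemy import create_engine
from meta import metadata
import a, b

engine = create_engine('sqlite://')
metadata.create_all(engine)       # one metadata now knows all the tables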

On Saturday 14 February 2009 19:08:58 dasacc22 wrote:
 I dont know how I can do that. Right now my a.py and b.py declare
 their own declarative_base that get subclassed by my table
 definations. The main goal here is that a is not aware of b and
 vice versa and I cant pass Base to an import. So I need to combine
 the metadata of multiple declarative bases together.

 I was just thinking maybe I can create sqlalchemy.MetaData() and
 then iterate over a.metadata['tables'] and b's and then tack on my
 relationship and create_all from this. But then I guess i would
 have to use mapper to connect a.Group and b.User to the new
 metadata that i just generated. I dont know if this would work or
 much less, be an optimal solution.

 On Feb 14, 11:38 am, a...@svilendobrev.com wrote:
  use one decl-base for all?
 
  On Saturday 14 February 2009 18:30:14 dasacc22 wrote:
   Hi,
  
   Say I have two different files that I import that establish
   tables via declarative_base, so
  
   import a, b
  
   where there exists a.Group and b.User and then I create my
   engine
  
   engine = create_engine(...)
  
   and then I ..
  
   a.metadata.bind = engine
   b.metadata.bind = engine
  
   or I could call create_all(engine) or whatever. But what I want
   to do is create a many-to-many table called group_users and set
   ..
  
   a.User.groups = relation('Group', secondary=group_users,
   backref='user')
   b.Group.users = relation('User', secondary=group_users,
   backref='group')
  
   and right now i do this by creating a third metadata that i
   bind to engine. But unfortunately its not aware of either
   tables from the first two metadatas that ive imported.
  
   Is there a way that I can merge all metadata's? Or make my
   relational metadata aware of the tables in the first two?
  
   Thanks,
   Daniel

 





[sqlalchemy] Re: Foreign Key Constraint between seperate metadatas

2009-02-14 Thread az

look up the group, there was once or twice a discussion on directory 
structure

On Saturday 14 February 2009 19:28:37 dasacc22 wrote:
 I guess so, I originally gave that idea a pass b/c of the folder
 structure as the import would have to come from ../../ which means
 i would have had to modify the searched path of modules in each
 file. But eventually this will be a minor framework of utils which
 can reside in site-packages basically negating my original
 assumptions.

 Thanks for stating the obvious! Im working on my own and tend to
 overlook things.

 On Feb 14, 12:22 pm, a...@svilendobrev.com wrote:
  put the decl-base only in separate file, and import it in all
  usages?
 




[sqlalchemy] Re: Finding the latest matching object from a relationship?

2009-02-13 Thread az

ah, it has to be the other way around:
unit = session.query(Unit).filter(Unit.id==3).first()
rec = sess.query( Record).filter( Record.theunit == unit)...
where theunit is the backref of Unit.records.

see 
http://www.sqlalchemy.org/docs/05/ormtutorial.html#common-relation-operators

or just wait for Mike (-:)

ciao
svil
svilendobrev.com
http://www.linkedin.com/in/svilendobrev
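
put together, that is (Unit, Record and its date column are the names from
this thread):

unit = session.query(Unit).filter(Unit.id == 3).first()
rec = (session.query(Record)
       .filter(Record.theunit == unit)     # 'theunit' = backref of Unit.records
       .order_by(Record.date.desc())
       .first())                           # newest Record for that Unit, or None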

On Friday 13 February 2009 10:30:24 Chris Withers wrote:
 a...@svilendobrev.com wrote:
  use the relation as a join path, and then whatever
  filtering/ordering
 
  try:
unit.join(Unit.records).order_by(Record.date.desc()).first()
  or
unit.join('records').order_by(Record.date.desc()).first()

 That gives me:

 AttributeError: 'Unit' object has no attribute 'join'

 cheers,

 Chris






[sqlalchemy] Re: Finding the latest matching object from a relationship?

2009-02-13 Thread az

On Friday 13 February 2009 11:02:13 Chris Withers wrote:
 a...@svilendobrev.com wrote:
  rec = sess.query( Record).filter( Record.theunit == unit)...
 Is this as nice as I can get it or is their a slicker solution?
this might work too
rec = sess.query( Record).filter_by( theunit = unit)...





[sqlalchemy] Re: Reg: Question on SqlAlchemy Speed

2009-02-13 Thread az

On Friday 13 February 2009 11:07:29 koranthala wrote:
 Hi,
 I am working on a Python application which does approximately
 1000 updates  at a time every 3 seconds in a Sqlite File Database
 using SqlAlchemy. On timing it in a 2GHz machine, I found that it
 is taking ~1.01 seconds to do the same.
 Is it possible to increase the speed of the same? My
 application is quite a complex one, and taking 1 second for the
 updates is slowing the overall performance considerably.
 I checked Sqlite page - and they mention that 25000 text
 UPDATEs with an index (which is what I am doing) takes 2.4 seconds
 (this is Sqlite2 data - sqlite3 should be faster). I guess that should
 mean 1000 text UPDATEs would take about 0.1 seconds.

 The inserts I do is as follows:
 BEGIN:
 2009-02-13 14:34:40,703 INFO sqlalchemy.engine.base.Engine.0x..10
 UPDATE data SET status=? WHERE data.index = ?
 2009-02-13 14:34:40,703 INFO sqlalchemy.engine.base.Engine.0x..10
 ['True', 68762]

 ...
 (1000)
 ...

 COMMIT

 Please note that the SET values are constant and only index
 changes.
can u use 
... where data.index in (...) ?
or if not, 
... where data.index == 1 or data.index == 2 or ...
group them by say 10 or 20, and see how faster that is?
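
for example, at the SQL-expression level one statement can cover a whole
batch; the table layout below is reconstructed from the UPDATE in the log,
and the in-memory engine is only there to make the sketch self-contained:

from sqlalchemy import MetaData, Table, Column, Integer, String, create_engine

metadata = MetaData()
data = Table('data', metadata,
             Column('index', Integer, primary_key=True),
             Column('status', String(10)))

engine = create_engine('sqlite://')
metadata.create_all(engine)
conn = engine.connect()
conn.execute(data.insert(), [{'index': i, 'status': 'False'} for i in range(1000)])

batch = range(0, 1000, 2)          # whatever subset needs updating
conn.execute(data.update(data.c['index'].in_(batch), {'status': 'True'}))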





[sqlalchemy] Re: Reg: Question on SqlAlchemy Speed

2009-02-13 Thread az

   The inserts I do is as follows:
   BEGIN:
   2009-02-13 14:34:40,703 INFO
   sqlalchemy.engine.base.Engine.0x..10 UPDATE data SET status=?
   WHERE data.index = ?
   2009-02-13 14:34:40,703 INFO
   sqlalchemy.engine.base.Engine.0x..10 ['True', 68762]
  
   ...
   (1000)
   ...
  
   COMMIT
  
   Please note that the SET values are constant and only index
   changes.
 
  can u use
  ... where data.index in (...) ?
  or if not,
  ... where data.index == 1 or data.index == 2 or ...
  group them by say 10 or 20, and see how faster that is?

 Thank you for the quick reply. Your point seems very good.

 Actually, I use SQLAlchemy's Identity Mapper to do the same.
 The code I use is as follows:

 for o in self.objects:
   o.status = txt
   self.session.add(o)
 self.session.commit()

 This acutally causes the SQLAlchemy to write separate UPDATE
 statements.
 I went through SQLAlchemy tutorial - and I couldnt find a mechanism
 to cause it to update DB for multiple indexes in a single
 statement.

 Is it possible to do it via Identity Mapper?
 If not, if I modify it using SELECT statements, wont this session
 have erroneous values?

hmm... group ORM updates... no idea, wait for Mike.
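
one option worth testing is the ORM-level bulk update - a sketch against the
attributes named in this thread, with the mapped class called Data purely for
illustration.  Query.update() issues a single UPDATE ... WHERE index IN (...)
instead of one statement per object; with synchronize_session=False the
in-memory objects are not reconciled, so expire or reload them afterwards
(the exact synchronize_session choices depend on the SQLAlchemy version):

ids = [o.index for o in self.objects]
(self.session.query(Data)
     .filter(Data.index.in_(ids))
     .update({'status': txt}, synchronize_session=False))
self.session.commit()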




[sqlalchemy] Re: text binding for sql queries

2009-02-13 Thread az

i guess s.execute(stmt, params=dict(codeword=codeword) )
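
i.e. the bind values go in as a dict through params, not as loose keyword
arguments (stmt, s and codeword as in the message below):

from sqlalchemy import text

stmt = text("select * from salary_data_new where codeword=:codeword "
            "union "
            "select * from salary_data_clean where codeword=:codeword")
rec = s.execute(stmt, params={'codeword': codeword}).fetchone()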


On Friday 13 February 2009 18:42:26 Max Ischenko wrote:
 I get an error when I'm trying tu run this:

 stmt = text("select * from salary_data_new "
             "where codeword=:codeword union "
             "select * from salary_data_clean where "
             "codeword=:codeword")
 # s is instance of Session() class factory
 c.rec = s.execute(stmt, codeword=codeword).fetchone()

 I get:
 File
 '/home/max/projects/site-baseline/doupy/doupy/controllers/salarydb.
py', line 420 in record
   c.rec = s.execute(stmt, codeword=codeword).fetchone()
 File
 '/home/max/projects/site-baseline/py/lib/python2.5/site-packages/SQ
LAlchemy-0.5.2-py2.5.egg/sqlalchemy/orm/session.py', line 752 in
 execute
   engine = self.get_bind(mapper, clause=clause, **kw)
 TypeError: get_bind() got an unexpected keyword argument 'codeword'

 What am I doing wrong?






[sqlalchemy] Re: single table inheritance : query a subtype and its subtypes

2009-02-13 Thread az

session.query(Child).filter( Child.type != 'childtypenamehere') ?

On Friday 13 February 2009 19:05:52 GustaV wrote:
 Hello!

 In a configuration looking like this:
 class Parent:
pass

 class Child(Parent):
pass

 class ChildChild(Child):
pass

 I would like to query the Childs and all classes that inherits it
 (here: ChildChild), but not Parent instances.
 session.query(Child).all() returns Child type only ( SQL : WHERE
 type IN (2) ), and I'd like to avoid to manually list the classes
 that inherit Child class.

 Thanks

 GustaV
 





[sqlalchemy] Re: single table inheritance : query a subtype and its subtypes

2009-02-13 Thread az

ah sorry, singleinh, ignore my post

On Friday 13 February 2009 19:05:52 GustaV wrote:
 Hello!

 In a configuration looking like this:
 class Parent:
pass

 class Child(Parent):
pass

 class ChildChild(Child):
pass

 I would like to query the Childs and all classes that inherits it
 (here: ChildChild), but not Parent instances.
 session.query(Child).all() returns Child type only ( SQL : WHERE
 type IN (2) ), and I'd like to avoid to manually list the classes
 that inherit Child class.

 Thanks

 GustaV




[sqlalchemy] Re: Problem with inheritance and cross relations (was: Unable to model cross relations (using ext.declarative))

2009-02-12 Thread az

is this joined inheritance or concrete?
IMO if Product inherits Node, they has to have same PK?

On Thursday 12 February 2009 23:56:12 Bruce van der Kooij wrote:
 Many thanks Michael, your instructions were spot-on. In the process
 of following your instructions I decided to switch from using
 ext.declarative to a non-declarative style (define tables, define
 Python objects, setting up the mapping, the works :-).

 Regretfully while trying to work out the rest of the model (see
 ASCII diagram below) I hit another wall. I'm currently trying to
 make the model Product inherit from Node, I'm having issues with
 this because the situation I'm faced with deviates from the
 excellent examples in the documentation.

  +--------------+          +--------------+
  | Node         |          | User         |
  +--------------+ 0..*   1 +--------------+
  | nid (PK, AI) |--------->| uid (PK, AI) |
  | vid          |          +--------------+
  | uid          |
  +--------------+
         ^
         | is subclass of
  +--------------+
  | Product      |
  +--------------+
  | vid (PK)     |
  | nid          |
  +--------------+

 The User relation is irrelevant, but I added it to present the
 full picture. I also excluded the relation with the NodeRevision
 model since I already posted that above.

 Creating a Product instance, adding it to the session and
 committing it results in a node, a revision and a product. However
 the problem that arises is that the row for the Product table will
 not have the correct vid (it will remain on the default value, not
 with the vid of the NodeRevision associated with the Node).
 Subsequent INSERTS will then fail (duplicate entry). I'm led to
 believe that the problem stems from the fact that the primary key
 for the products table differs from that of the node table (vid
 compared to nid).

 I haven't yet managed to figure out a solution to this predicament,
 I'll probably look into multi-table mapping next to see if that
 might help.

 Here's what I've managed to get so far:

 # tables.py:
 node_table = Table("node", metadata,
     Column("vid", Integer, ForeignKey("node_revisions.vid")),
     Column("uid", Integer, ForeignKey("users.uid")),
     autoload=True
 )

 node_revisions_table = Table("node_revisions", metadata,
     Column("nid", Integer, ForeignKey("node.nid")),
     Column("uid", Integer, ForeignKey("users.uid")),
     autoload=True
 )

 uc_products_table = Table("uc_products", metadata,
     Column("vid", Integer, ForeignKey("node_revisions.vid"),
            primary_key=True),
     Column("nid", Integer, ForeignKey("node.nid")),
     autoload=True
 )

 # models.py (simplified):
 class Node(object):
     pass

 class NodeRevision(object):
     pass

 class Product(Node):
     pass

 # database.py (mapping):
 node_mapper = mapper(Node, node_table,
     polymorphic_on=node_table.c.type,
     properties={
         "user": relation(User),
         "latest_revision": relation(NodeRevision,
             primaryjoin=node_table.c.vid == node_revisions_table.c.vid,
             post_update=True
         ),
         "revisions": relation(NodeRevision,
             primaryjoin=node_table.c.nid == node_revisions_table.c.nid,
             backref=backref("node",
                 primaryjoin=node_revisions_table.c.nid == node_table.c.nid
             )
         )
     }
 )

 node_revision_mapper = mapper(NodeRevision, node_revisions_table,
     properties={
         "user": relation(User)
     })

 uc_products_mapper = mapper(Product, uc_products_table,
     polymorphic_identity="product"
 )

 # Example of creating a product (user relation left out for simplicity):
 product = Product()
 product.latest_revision = NodeRevision(node=product)

 Thanks for all your help.

 Best regards,

 Bruce
 





[sqlalchemy] Re: Finding the latest matching object from a relationship?

2009-02-12 Thread az

use the relation as a join path, and then whatever 
filtering/ordering

try:
  unit.join(Unit.records).order_by(Record.date.desc()).first()
or
  unit.join('records').order_by(Record.date.desc()).first()






[sqlalchemy] Re: Pros and cons of flushing subset of session's objects

2009-02-11 Thread az

On Wednesday 11 February 2009 23:07:29 Angri wrote:
 Michael, I see you dont like the ability to flush only specific
 objects 
how about a group of objects? say whole island in the dependency 
graph?





[sqlalchemy] Re: copying data from one DB to another

2009-02-11 Thread az

any2any: see files in
http://dbcook.svn.sourceforge.net/viewvc/dbcook/trunk/dbcook/misc/metadata/

copyall, copydata
i haven't touched it for a while, see if still works.

On Thursday 12 February 2009 01:05:47 Piotr Ozarowski wrote:
 Hi,

 Here's what I wrote to create PostgreSQL's data backup in SQLite:
 (yes, I know pg_dump, I just want to create/use it in SQLAlchemy)

 | new_engine = sqlalchemy.create_engine(sqlite:///offline.db)
 | metadata.create_all(bind=new_engine)
 |
 | # SQLite doesn't care about foreign keys much so we can just copy the data
 | for table in metadata.sorted_tables:
 | if table.name == 'posters':
 | continue # see below
 | data = table.select(bind=session.bind).execute().fetchall()
 | if data:
 | table.insert(bind=new_engine).execute(data)
 |
 | # posters table - data column is BLOB!
 | for poster in
 | metadata.tables['posters'].select(bind=session.bind).execute():
 | metadata.tables['posters'].insert(bind=new_engine).\
 | execute(md5sum=poster.md5sum, data=StringIO(poster.data).read())

 (metadata and session are already created in app. and are pointing
 to PostgreSQL's stuff)


 Questions:
 1) can it be done nicer? Especially the part with tables containing
 BLOB columns (posters in above example)
 2) is there an easy way to do this the opposite way (from SQLite to
PostgreSQL)?






[sqlalchemy] Re: resultset print keys values

2009-02-10 Thread az

object-query or plain query?
 - objects are .. whatever class it is; 
   print the i.__dict__ or str(i) or whatever
 - plain-sql-query ones are RowProxy, they
   have i.keys() i.items() i.values()
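
a quick illustration of the two cases (session, User and the table name are
stand-ins for whatever is being queried):

for obj in session.query(User).all():                  # ORM query: your class
    print obj.__dict__                                 # includes the _sa_ state key

for row in session.execute("select * from users"):     # plain SQL: RowProxy
    print row.keys()
    print dict(row.items())                            # column name -> value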

On Tuesday 10 February 2009 21:27:09 Lukasz Szybalski wrote:
 Hello,
 Could somebody tell me how can I print the object data in my result
 set without knowing the column names?

 myresult=session.query(...).all()

 for i in myresult:
  print 


 I need to debug some data and its hard to print the object keys and
 values (column names and its values) .

 i.keys() ?
 i.items()?  some dictionary like functions would be nice.




[sqlalchemy] Re: Declarative with Enthoughts Traits framework.

2009-02-08 Thread az

afaiknow these traits are like my own static_types, i.e. descriptors 
holding metadata and applying it to attribute access.
i have been combining SA with static_type for more than 2 years now, 
since SA 3.0.

The approach i did in the beginning was to replace the object's 
__dict__ by something smart that is static_type-aware. When 
InstrumentionManager framework came into place, i did not find it any 
different, as SA still uses the __dict__ for data access. The 
difference is that now my  __dict__ replacement is created once and 
not at every attr.access. 

i did suggest one patch about replacing the SA's 
obj.__dict__.whatever usage with an explicit set of methods (so one 
knows what access to mime), and that was making SA 1-2% faster, but 
it wasn't accepted.

basicaly now there's a thin layer on top of SA, then SA itself, then a 
thick layer underneath managing the data (the fake __dict__).

declarative+traits... u'll end up where i was. dbcook.sf.net is doing 
that - since beginning. and it's switchable on/off.
It all works well and stable, in project with 250-300 classes, 
although about 15% slower than without it (-:)

The sa2static code:
svn co 
http://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk/dbcook/usage/static_type/

The static_type itself:
svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/static_type

whole dbcook: 
svn co https://dbcook.svn.sourceforge.net/svnroot/dbcook/trunk

i have quite some experience fighting this field, ask if u want.

ciao
svilen
www.svilendobrev.com

On Sunday 08 February 2009 01:51:20 cputter wrote:
 Hi guys and girls,

 I've recently discovered the joys of using sqlalchemy and would
 love to using it together with Traits.  A few months back there was
 an attempt to integrate sqlalchemy into traits, though it wasn't
 really comprehensive in exploiting all of sqlalchemy's potential.

 So I'm trying to work on that and combine ext.Declarative with
 traits. The basic idea is to use the DeclarativeMeta type to
 generate Columns from Traits and pass those on for the Declarative
 extension to do its magic.  This would allow mixing of sqlalchemy
 attributes and trait attributes in a single class so that we could
 still make use of all the relational setup sqlalchemy does in any
 case.

 Reading through several threads and looking at Elixir's SA
 integration helped me a bit though I couldn't find any
 documentation on how to implement the InstrumentationManager
 interface.  I'm assuming this would be essential for letting Traits
 and SQLAlchemy play well together.

 There's still a lot of work to do, and I'm not really sure what
 needs to be done for everything to work properly.  Would really
 appreciate it if someone could help me out.


 Here's an example of how it's working at the moment, I'll add the
 actual implementation at the end.

 #
 class User(HasTraitsORM):
     __tablename__ = 'users'

     id = Column('id', Integer, primary_key=True)
     name = Str(sqldb=True)

     def _name_changed(self, old, new):
         print 'Changed name from %s to %s.' % (old, new)

     def __repr__(self):
         return 'User(%s, %s)' % (self.id, self.name)

 people = ['John', 'Charls','Steve','Smith','Jane']

 for per in people:
     obj = User(name=per)
     sess = sqlservice.Session()
     sess.add(obj)
     sess.commit()
     sess.close()
     print obj

 session = sqlservice.Session()
 print '\nQuery all users\n'
 for user in session.query(User).order_by(User.name).all():
     print user

 session.close()


 

 Which spits out:
 ###
 Changed name from  to John.
 User(users.id, )
 Changed name from  to Charls.
 User(users.id, )
 Changed name from  to Steve.
 User(users.id, Steve)
 Changed name from  to Smith.
 User(users.id, Smith)
 Changed name from  to Jane.
 User(users.id, Jane)

 Query all users

 User(2, Charls)
 User(1, John)

 ##

 Which is really strange behaviour.  There's obviously something
 wrong in my implementation of HasTraitsORM but why the different
 results within the same loop???  Why add only two instances?

 Totally baffles me.

 Here's the rest of my code, hope somehow can help me out.  It's
 very messy, I've been hacking at it like crazy with no success :-)

 Hope you're all having a great weekend.
 -Chris


 ##

 # Standard library imports.
 import logging

 # Enthought library imports
 from enthought.preferences.api import Preferences
 from enthought.traits.api import \
 HasTraits, MetaHasTraits, Int, Str, Bool, Float, Any,\
 String, Enum, Python, \
 on_trait_change, TraitListObject

 # Package imports
 import sqlalchemy
 from sqlalchemy import Column, Integer
 from sqlalchemy.schema import MetaData
 from sqlalchemy.orm.interfaces import MapperProperty,
 InstrumentationManager
 from sqlalchemy.orm.attributes import get_attribute, set_attribute,
 is_instrumented
 from 

[sqlalchemy] Re: Need SqlAlchemy Model Advice for a table with many foreign keys.

2009-02-03 Thread az

so u have a member, pointing to member_profile, pointing to all its 
attributes into separate tables?
one way IMO is to map all other 50 tables into simple classes, then 
have member profile reference each of them, i.e. relation( .. 
uselist=False). Then, if u need all of them at once, request a 
eagerload - by default in the relation(), or keep that lazy and only 
specify eagerload'ing at query time. it depends on the usage 
patterns.

so far u'll have member.profile.gender.name or whatever.
maybe u can make a property over gender.name to get that directly, 
i.e. member.profile.gender which is just shortcut to 
member.profile._gender.name (note _ in the start).

i guess this can be slow - in worst case (all eagerloading) u'll be 
instantiating 50+objects on each member_profile.

u can avoid all the object-per-attribute stuff via column_property() 
selectable-attributes, but i dont know if these can scale into 
eagerload.

see if anyone has better suggestion.

ciao
svilen
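
roughly like this, as a sketch (the table and column names are guesses at the
legacy schema, not its real layout): the relation is mapped under a leading
underscore and a plain property exposes just the descriptor text.

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relation

Base = declarative_base()

class MemberGender(Base):
    __tablename__ = 'member_gender'
    genderID = Column(Integer, primary_key=True)
    name = Column(String(20))          # 'Male' / 'Female'

class MemberProfile(Base):
    __tablename__ = 'member_profiles'
    memberID = Column(Integer, primary_key=True)
    genderID = Column(Integer, ForeignKey('member_gender.genderID'))

    _gender = relation(MemberGender, lazy=False)   # lazy=False = eagerload by default

    @property
    def gender(self):
        return self._gender.name if self._gender is not None else None

# usage:  member.profile.gender  ->  u'Female'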

On Tuesday 03 February 2009 17:29:58 Gloria W wrote:
 Hi all, thanks for this really helpful list.

 My question now involves trying to figure out the correct way to
 map this legacy database to SqlAlchemy objects using the
 declarative_base model.

 This database has two main tables, members, and member_profiles,
 where a memberID is the foreign key from members to
 member_profiles.

 It then has about thirty other tables such as this one, called
 member_gender. member_gender contains a genderID, and a string
 representing 'Male' or 'Female'.
 genderID is a foreign key in member_profiles, pointing to
 member_gender.

 So the member_profiles table contains many of these xxxID fields
 which are foreign keys into many other tables which act as
 descriptor tables, abstracting (to the point of overkill) just
 about every bit of data in this database.

 The Left Join on member_profiles, to get all of the data I need in
 one fell swoop, is over 50 lines long.

 I want to use the declarative_base class model to represent this
 database in SqlAlchemy, but here are problems I face:

 1: Multiple inheritance is not allowed, so I cannot create a class
 for every table and inherit them all into one master class.
 2: If I create three classes, for example, say Members,
 MemberProfiles, and Gender, and use Members as the base class for
 MemberProfiles, and MemberProfiles as the base class for Gender,
 the Members 'class members' are not visible via Gender. The
 inheritance seems to only go two objects deep.

 I want to be able to access the gender 'descriptor' field
 containing the string 'Female' from one master object, instead of
 accessing 'genderID'. I want to be able to do this for all foreign
 keys in this member_profiles table.

 Is there a way to properly make such a model using the
 declarative_base methodology, which allows for three or four levels
 of foreign key resolution in some 'master' object, containing all
 of the fields I want?

 Many many thanks in advance,
 Gloria

 





[sqlalchemy] Re: Understanding polymorphism

2009-02-01 Thread az

Only the class that owns the polymorphic identity sets it, internally.
In your case Input has no separate identity, and that does not give you
the right to set the identity manually. If it had one, a = Input(whatever)
would set it.

Or that's how I understand it.
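
A small illustration, reusing the Input/SRC2 classes and the session from
the quoted post below:

inp_ok = SRC2(value='set via the subclass')
session.add(inp_ok)
session.commit()
# inp_ok.kind is now 'SRC2' - the SRC2 mapper fills in the discriminator at flush

inp_bad = Input(kind='SRC2', value='set by hand')
session.add(inp_bad)
session.commit()
# inp_bad.kind ends up None - the base Input mapper has no polymorphic_identity,
# so its (empty) identity is written over the manually assigned value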
 
On Monday 02 February 2009 04:15, MikeCo wrote:
 I have a single table polymorphism question. It looks like I can't
 directly set the value of the polymorphic discriminator when creating
 records; it is only set correctly when you instantiate objects of the
 derived class. Is that true? Why, or what is wrong here?

 Here is the scenario:

 Jobs have Steps, Steps have Inputs of different types (say SRC1 and
 SRC2)

 Use declarative to set up these classes:

 class Job(Base):
 __tablename__ = 'job'
 id = Column(Integer, primary_key=True, autoincrement=True)
 title = Column(String(20))
 jobnum = Column(Integer)
 steps = relation('Step', backref=('jobs'))
 def __repr__(s):
 return 'Job %s title=%s'%(s.id,s.title)

 class Step(Base):
 __tablename__ = 'step'
 id = Column(Integer, primary_key=True, autoincrement=True)
 name = Column(String(20))
 job_id = Column(Integer, ForeignKey('job.id'))
 inputs = relation('Input', backref=('step'))
 def __repr__(s):
 return 'Step %s name=%s'%(s.id,s.name)

 class Input(Base):
 __tablename__ = 'input'
 id = Column(Integer, primary_key=True, autoincrement=True)
 kind = Column(String(10))
 value = Column(String(30))
 step_id = Column(Integer, ForeignKey('step.id'))
 __mapper_args__ = {'polymorphic_on': kind}
 def __repr__(s):
 return 'Input %s kind=%s value=%s'%(s.id,s.kind,s.value)

 class SRC1(Input):
 __mapper_args__ = {'polymorphic_identity': 'SRC1'}
 src1file = Column(String(100))

 class SRC2(Input):
 __mapper_args__ = {'polymorphic_identity': 'SRC2'}
 src2file = Column(String(100))


 insert some data like this

 j1 = Job(title='Job#201',jobnum=201)
 step = Step(name='job201-step1')
 j1.steps.append(step)

 inp1 = SRC1(value='this one works')
 step.inputs.append(inp1)

 inp2 = Input(kind='SRC2',value='why is kind = NULL here')
 step.inputs.append(inp2)

 inp3 = Input(value='kind is null here also')
 inp3.kind='SRC2'
 step.inputs.append(inp3)

 session.add(j1)
 session.commit()


 when i retrieve the data notice that the 2nd and 3rd records have no
 kind value, even though I tried to explicitly set it.

 for i in session.query(Input):
 print i

 Input 1 kind=SRC1 value=this one works
 Input 2 kind=None value=why is kind = NULL here
 Input 3 kind=None value=kind is null here also


 So what happened with inp2 and inp3 where I tried to assign a value to
 kind?

 




[sqlalchemy] Re: Adding extra data to column objects

2009-01-31 Thread az

There have been a number of apps announced that build HTML forms from an
SA schema - look them up in the list archives.
AFAIK Column() takes an 'info=...' dict with whatever stuff you put in it.
The same goes for Table().
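
A tiny sketch of the info= idea (assuming an SA version whose Column/Table
accept info=...; the 'visible' key is just an example):

from sqlalchemy import MetaData, Table, Column, String

metadata = MetaData()
users = Table('users', metadata,
    Column('whatever', String(128), info={'visible': True}),
    Column('internal_notes', String(128), info={'visible': False}))

# in the form-generating code:
for col in users.columns:
    if col.info.get('visible'):
        print col.name   # only 'whatever' gets rendered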


 I have been trying to figure out a way to easily create HTML forms
 and some other stuff from my schema without repeating myself all
 over. I have looked through the docs and, to an extent, the list, but
 haven't found anything.

 Ideally I would be able to do something like
 Column('whatever', String(128), metadata={'visible': True})

 then in my form generating code do something like

 if obj.c.whatever.metadata['visible']:
  print obj.whatever

 Basically, what I am looking for is the ability to associate extra
 attributes to these columns in my code without having to redefine
 them. It would be awesome to do something like

 ${formfor(obj)}

 in my templates instead of hardcoding it then worrying about the
 form changing or a column growing later. I apologize if this sounds
 stupid or like a bad idea.




[sqlalchemy] Re: Correct way of moving relation

2009-01-29 Thread az

That sess.flush() in the middle there... if you move it up/down/out, does
the behaviour change? E.g. if you print the things in itemB.purchase just
_after_ that flush - is 80 there or not?

On Thursday 29 January 2009 20:19:59 Werner F. Bruhin wrote:
 Michael,

 I ran the following script and initially had either my
 application and/or the IB Expert database tool (for Firebird SQL v
 2.1) open at the same time.  Now the following tests are done
 without any other task accessing the database.

 script:
 engine = db.sa.create_engine(dburl, encoding='utf8', echo=False)
 Session = db.sao.sessionmaker()
 Session.configure(bind=engine)
 session = Session()

 keyA = 174
 keyB = 175

 itemB = session.query(db.Cbbottle).get(keyB)
 print 'before move from B to A'
 print 'itemB id: %s' % itemB.cbbottleid

 for purch in itemB.purchase:
 print 'purchasid: %s' % purch.purchaseid
 print 'fk_cbbottleid: %s' % purch.fk_cbbottleid

 session.flush()

 itemA = session.query(db.Cbbottle).get(keyA)
 itemB = session.query(db.Cbbottle).get(keyB)

 print 'start to move from B to A'
 print 'itemA id: %s' % itemA.cbbottleid

 for purch in itemB.purchase:
 print 'purchasid: %s' % purch.purchaseid
 print 'fk_cbbottleid: %s' % purch.fk_cbbottleid
 purch.cbbottle = itemA

 session.commit()

 itemA = session.query(db.Cbbottle).get(keyA)
 print 'after move from B to A'
 print 'itemA id: %s' % itemA.cbbottleid

 for purch in itemA.purchase:
 print 'purchasid: %s' % purch.purchaseid
 print 'fk_cbbottleid: %s' % purch.fk_cbbottleid

 The following is the output; note that purchasid 80 is not being
 moved.

 before move from B to A
 itemB id: 175
 purchasid: 79
 fk_cbbottleid: 175
 purchasid: 80
 fk_cbbottleid: 175
 purchasid: 81
 fk_cbbottleid: 175

 start to move from B to A
 itemA id: 174
 purchasid: 79
 fk_cbbottleid: 175
 purchasid: 81
 fk_cbbottleid: 175

 after move from B to A
 itemA id: 174
 purchasid: 79
 fk_cbbottleid: 174
 purchasid: 81
 fk_cbbottleid: 174

 Without doing other tasks on the database I run the same script
 again, and get this output (80 is now moved).
 before move from B to A
 itemB id: 175
 purchasid: 80
 fk_cbbottleid: 175

 start to move from B to A
 itemA id: 174
 purchasid: 80
 fk_cbbottleid: 175

 after move from B to A
 itemA id: 174
 purchasid: 79
 fk_cbbottleid: 174
 purchasid: 80
 fk_cbbottleid: 174
 purchasid: 81
 fk_cbbottleid: 174

 If I run the script again nothing is moved (which is obviously
 correct) and everything shows under fk_cbbottleid 174. Now I change the
 keyA and keyB variables and reverse their values as follows:
 keyA = 175
 keyB = 174

 Now I get this and again 80 is not moved.
 before move from B to A
 itemB id: 174
 purchasid: 79
 fk_cbbottleid: 174
 purchasid: 80
 fk_cbbottleid: 174
 purchasid: 81
 fk_cbbottleid: 174

 start to move from B to A
 itemA id: 175
 purchasid: 79
 fk_cbbottleid: 174
 purchasid: 81
 fk_cbbottleid: 174

 after move from B to A
 itemA id: 175
 purchasid: 79
 fk_cbbottleid: 175
 purchasid: 81
 fk_cbbottleid: 175

 I can repeat this again and again and it is always 80 which does
 not move the first time.

 Any ideas on what I can do to find out what is causing this row not
 to move would be very much appreciated.

 Werner

 





[sqlalchemy] Re: Lazy load issue when using declarative_base methodology?

2009-01-28 Thread az

Typo - 81017 vs. 18087?
Maybe run with echo=True and see what goes on?
Does the test file use the same engine/.. setup as below?
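
(For reference, echoing is just a flag on the engine - a one-liner using
the connection string from the quoted code, with create_engine imported
as usual:)

engine = create_engine(config.db_conn, echo=True)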

On Wednesday 28 January 2009 19:21:54 Gloria W wrote:
 This is a strange problem. I'd appreciate any assistance.
 I have a class set up using the declarative_bass model, set up in
 this way:

   member_profile_table = MemberProfile.__table__
   metadata = Base.metadata
   engine = create_engine(config.db_conn)
   Session = sessionmaker(bind=engine)
   session = Session()

 It works perfectly, as I expect, in the unit tests in the same
 file. I get the expected data from the unit test:

  x.GET(81017)

 But in this test file, which creates an instance of my wrapper
 in its constructor:

 class Dispatch:
   def __init__(self):
 self.methods = ('OPTIONS','GET','HEAD','POST',
   'PUT','DELETE','TRACE','CONNECT')
 self.mp = MemberInfo.MemberInfo()

 I call the get method on the exact same query parameters.

   def test(self):
 result=self.mp.GET(18087)
 return result.memberID

 A pdb trace reveals that I get a valid query object from
 SqlAlchemy. But my data set is empty.

 My colleague thinks it's a lazy loading issue. My model code for my
 tables appears below.

 Thank you in advance,
 Gloria


 import sys
 import pprint
 from sqlalchemy import *
 from sqlalchemy.ext.declarative import declarative_base
 from sqlalchemy.orm import *

 sys.path.append('../config')
 import config

 from Members import *

 Base = declarative_base()
 class MemberProfile(Member):
   __tablename__ = 'member_profiles'

   memberID = Column(Integer, ForeignKey('members.memberID'),
 primary_key=True)
   SSN = Column(String)
   DOB = Column(Date)
   industryID = Column(Integer)
   primarysectorID = Column(Integer)
   address1 = Column(String)
   address2 = Column(String)
   city = Column(String)
   state = Column(String)
   zip = Column(String)
   howhearID = Column(Integer)
   affiliationID = Column(Integer)
   incomeID = Column(Integer)
   worksituationID = Column(Integer)
   currentinsuranceID = Column(Integer)
   genderID = Column(Integer)
   referemail = Column(String)
   occupation = Column(String)
   phonehome = Column(String)
   phonework = Column(String)
   phonecell = Column(String)
   phonefax = Column(String)
   occupationID = Column(Integer)
   occupationother = Column(String)
   billing_address1 = Column(String)
   billing_address2 = Column(String)
   billing_city = Column(String)
   billing_state = Column(String)
   billing_zip = Column(String)
   member = relation(Member,lazy=False,backref=backref
 ('members',order_by=memberID))

   def __init__(self, memberID, SSN=None, DOB=None, industryID=None,
 primarysectorID=None,
   address1=None, address2=None, city=None, state=None,
 zip=None, howhearID=None, affiliationID=None,
   incomeID=None, worksituationID=None, currentinsuranceID=None,
 genderID=None, referemail=None,
   occupation=None, phonehome=None, phonework=None,
 phonecell=None, phonefax=None, occupationID=None,
   occupationother=None, billing_address1=None,
 billing_address2=None, billing_city=None,
   billing_state=None, billing_zip=None, member=None):
 self.memberID = memberID
 self.SSN = SSN
 self.DOB = DOB
 self.industryID = industryID
 self.primarysectorID = primarysectorID
 self.address1 = address1
 self.address2 = address2
 self.city = city
 self.state = state
 self.zip = zip
 self.howhearID = howhearID
 self.affiliationID = affiliationID
 self.incomeID = incomeID
 self.worksituationID = worksituationID
 self.currentinsuranceID = currentinsuranceID
 self.genderID = genderID
 self.referemail = referemail
 self.occupation = occupation
 self.phonehome = phonehome
 self.phonework = phonework
 self.phonecell = phonecell
 self.phonefax = phonefax
 self.occupationID = occupationID
 self.occupationother = occupationother
 self.billing_address1 = billing_address1
 self.billing_address2 = billing_address2
 self.billing_city = billing_city
 self.billing_state = billing_state
 self.billing_zip = billing_zip
 self.member = member

   def __repr__(self):
     return ("MemberProfile('%s','%s','%s','%s','%s','%s','%s','%s','%s','%s',"
             "'%s','%s','%s','%s','%s','%s','%s','%s','%s','%s',"
             "'%s','%s','%s','%s','%s','%s','%s','%s','%s')"
             % (self.memberID, self.SSN, self.DOB, self.industryID,
                self.primarysectorID, self.address1, self.address2,
                self.city, self.state, self.zip, self.howhearID,
                self.affiliationID, self.incomeID, self.worksituationID,
                self.currentinsuranceID, self.genderID, self.referemail,
                self.occupation, self.phonehome, self.phonework,
                self.phonecell, self.phonefax, self.occupationID,
                self.occupationother, self.billing_address1,
                self.billing_address2, self.billing_city,
                self.billing_state, self.billing_zip))

 if __name__ == "__main__":
   member_profile_table = MemberProfile.__table__
   metdata 

[sqlalchemy] Re: the return type of conn.execute(text())

2009-01-27 Thread az

I recently stumbled on something similar - the RowProxy's __hash__ was
missing, so I had to tuple() the rows before using them.
Then there was Mike's question: what should the RowProxy emulate -
the tuple of the row, or something else?
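
A minimal sketch of that workaround, assuming 'conn' is an open connection
and the query text is just an example:

import cPickle
from sqlalchemy import text

result = conn.execute(text("select id, name from users"))
rows = [tuple(row) for row in result]   # RowProxy objects -> plain tuples
data = cPickle.dumps(rows)              # now picklable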

 Today I attempted to serialize the return value of the form

 result = conn.execute(text())

 Till now I thought that the return type was a list of tuples, while
 in fact it is a list of objects of type <class
 'sqlalchemy.engine.base.RowProxy'>. Hence cPickle refused to
 serialize till I did some conversion.

 Just wondering what the reason for this is.

 Regards, Faheem.




[sqlalchemy] Re: and_ and or_ precedence

2009-01-27 Thread az

Theoretically this might make a+b+c+d look like a+(b+(c+d)), which
isn't that bad, except that SA does not level-indent parenthesised
expressions, so the SQL is going to look like a Lisp program... But
yes, you're right: SA relies on some Python precedence and associativity
rules being the same as the SQL ones, which may or may not be OK.
Since one can put brackets explicitly - (a+b)+(c+d) - where needed, maybe
this should only apply to SA-generated things, like implicit joins.
But then again, if it's just for readability, indenting the parentheses
and subqueries by level helps much, much more. I had a patch for this
long ago but it doesn't work anymore... sigh.
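
For illustration, the two spellings the thread is talking about (a
throwaway table, just to have columns to compare):

from sqlalchemy import MetaData, Table, Column, Integer, and_, or_

t = Table('t', MetaData(),
          Column('a', Integer), Column('b', Integer), Column('c', Integer))

# relies on & binding tighter than |, mirroring SQL's AND-over-OR precedence:
crit = (t.c.a == 1) | (t.c.b == 2) & (t.c.c == 3)
# the same criterion, with the grouping explicit for the human reader:
crit = or_(t.c.a == 1, and_(t.c.b == 2, t.c.c == 3))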


On Wednesday 28 January 2009 01:14:16 Jon Nelson wrote:
 I recently ran into an issue. Is it a bug? No. However, it made my
 brain hurt for a little bit until I remembered the SQL precedence
 rules for AND and OR.

 The operator precedence of AND and OR is known (AND takes
 precedence). However, it can make humans hurt their brains a bit to
 see SQL without (perhaps unnecessary) parens, explicitly noting the
 desired order of operation. Perhaps a suggestion might be to always
 use parens?

 Approx. line 2168 of sql/expression.py:

 if self.group and self.operator is not against and
 operators.is_precedent(self.operator, against):

 What would the harm be in removing the final test (and
 operators.is_predecent...) ?

 It seems to me that SQLAlchemy might group things a bit more
 explicitly instead of relying on the precedence rules. Are there
 database which have precedence rules that are not the same as
 others (or are buggy)?






[sqlalchemy] Re: the return type of conn.execute(text())

2009-01-27 Thread az

On Wednesday 28 January 2009 01:34:30 Michael Bayer wrote:
 On Jan 27, 2009, at 6:28 PM, a...@svilendobrev.com wrote:
  i have recently stumbled on similar - the rowproxy's __hash__ was
  missing. so i have to tuple() them before usage.

 that doesnt strike me as a similar issue.   we should apply that
 patch someone had to detect python  2.6 and place some kind of
 callable for __hash__.

  Then there was Mike's question, what should the RowProxy emulate?
  the tuple of the row, or something else?

 RowProxy is not a buffered object in the general sense so im not
 thrilled making it evaluate itself for every tuple() type of
 access.

Ah, similar in the sense of some missing implicit protocol - in this
case __getstate__.
Never mind; maybe just document that it's not a good thing to play
with...




[sqlalchemy] is there any work around GQL / google-app/datastore ?

2009-01-26 Thread az

G'day,
I'm asking out of sheer curiosity, although it may turn more serious:
is there any known work on somehow linking SQLAlchemy and the
Google stuff?

I looked at the Google API/language and they seem somewhat similar to
SQLAlchemy's (well, in the way rdf-Alchemy is).

I might bite on the idea of having dbcook work over these, so that
declaring a schema once would be translatable into 3 different db backends.

Any ideas?

svil




[sqlalchemy] Re: It's possible to access a read-only table(without primarykey) in ORM?

2009-01-22 Thread az

Use the key=... argument of Column.
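
A small sketch of what that looks like, reusing the table from the quoted
post below (the key= name is arbitrary):

from sqlalchemy import MetaData, Table, Column, VARCHAR

metadata = MetaData()
user = Table('tbl_user', metadata,
    Column('name', VARCHAR(20)),
    Column('phone number', VARCHAR(20), key='phone_number'))

# select:
user.select(user.c.phone_number == '12345678')
# insert:
user.insert().values(phone_number='7890')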

On Friday 23 January 2009 04:52:07 Ikuta wrote:
 And I've got another question now.
 The column name has a space in it, like below.

 user = Table('tbl_user', metadata,\
 Column('name', VARCHAR(20)),\
 Column('phone number', VARCHAR(20)))

 1. when selecting:
 user.c.get('phone number') == '12345678'
 2. when inserting:
 {user.c.get('phone number'): '7890'}


 I have to get at the phone number column in this ugly way. Is there any
 function in SQLAlchemy that could alias the column name as below?

 1. when select
 user.c.name =='nick'
 2. when insert
 name = 'lisa'




[sqlalchemy] Re: Saving the results of a sqlalchemy join using Python

2009-01-20 Thread az

Which join are you referring to, and which result do you want to save -
the query itself, or the objects it returns after execution?

q = query(Line).join( Fiber, ...condition ).filter()
if ...: return q.filter( 123...)
else:   return q.filter( 456...)

Building that q is not expensive;
saving it is fine, but beware of when sessions close/open etc.
svilen

On Tuesday 20 January 2009 16:33:16 Boudewijn Ector wrote:
 On Tue, Jan 20, 2009 at 8:24 AM, a...@svilendobrev.com wrote:
  mmh, cant really grasp what u want to do.
  what are Line.* attributes?
 
  .filter does and(), so this can be simplified:
  q = query(Line).filter( models.Line.LineDiscarded ==
  LineDiscarded ) if ...:
   return q.filter( sites...)
  else:
   return q.filter( sites...)

 That will probably work fine, but I'd like to join Fiber and Line
 together, and save the result of the join itself.
 The definition of Line:

 fiberdb=# \d Line;
 Table public.Line
            Column           |            Type             |                       Modifiers
 ----------------------------+-----------------------------+-------------------------------------------------------
  LineName                   | character varying(50)       | not null
  ProviderName               | character varying(50)       |
  Length                     | integer                     | not null default nextval('Line_Length_seq'::regclass)
  Type                       | character varying(50)       |
  MeasurementReport          | text                        |
  Comment                    | text                        |
  LineDiscarded              | character(1)                |
  LineNamesReplacingThisLine | text                        |
  EndOfContractDate          | timestamp without time zone |
  LineNameSURFnet            | character varying(50)       |
  SiteLocationCodeA          | character varying(50)       |
  SiteLocationCodeB          | character varying(50)       |
  last_updated               | timestamp with time zone    |
  last_updated_by            | character varying(50)       |
  GeoObjectID                | integer                     |
  importedFromWKT            | character(1)                |
  ThirdPartyName             | character varying(50)       |
  GeoData                    | geometry                    |
 Indexes:
 PRI_Line PRIMARY KEY, btree (LineName)
 FKI_Line_GeoObject btree (GeoObjectID)
 FKI_Line_Provider btree (ProviderName)
 FKI_Line_Site btree (SiteLocationCodeA)
 FKI_Line_SiteA btree (SiteLocationCodeA)
 FKI_Line_SiteB btree (SiteLocationCodeB)
 Check constraints:
 enforce_dims_Data CHECK (ndims(GeoData) = 3)
 enforce_srid_Data CHECK (srid(GeoData) = 4326)
 Foreign-key constraints:
 FK_Line_GeoObject FOREIGN KEY (GeoObjectID) REFERENCES
 GeoObject(GeoObjectID) ON UPDATE CASCADE ON DELETE CASCADE
 FK_Line_Provider FOREIGN KEY (ProviderName) REFERENCES
 Provider(ProviderName) ON UPDATE CASCADE ON DELETE RESTRICT
 FK_Line_SiteA FOREIGN KEY (SiteLocationCodeA) REFERENCES
 Site(SiteLocationCode) ON UPDATE CASCADE ON DELETE RESTRICT
 FK_Line_SiteB FOREIGN KEY (SiteLocationCodeB) REFERENCES
 Site(SiteLocationCode) ON UPDATE CASCADE ON DELETE RESTRICT



 So, I get the SiteLocationCodeA and SiteLocationCodeB. I need to
 retrieve all fibers between them (because I'd like to have a look
 at the FiberUsed property between two SiteLocationCodes).
 So I join Fiber and Line using the LineName as the key.
 Now I should be able to look at which Lines between A and B
 have Fibers that are not in use.

 But, more interesting things and filters might come in handy, in my
 search function. So I'd prefer to save the result of the join into
 a variable, so I don't need to do the join in every  case of my
 search function.


 That's why.

 Cheers,

 Boudewijn Ector

 





[sqlalchemy] Re: Saving the results of a sqlalchemy join using Python

2009-01-19 Thread az

Mmh, I can't really grasp what you want to do.
What are the Line.* attributes?

Successive .filter() calls are and()-ed together, so this can be simplified:
q = query(Line).filter( models.Line.LineDiscarded == LineDiscarded )
if ...:
 return q.filter( sites...)
else:
 return q.filter( sites...)

On Monday 19 January 2009 23:28:55 boudewijnector wrote:
 Hi Guys,


 Currently I'm maintaining a Python 2.5 (Pylons) web application,
 which uses SQLAlchemy as its ORM.
 I've been messing around with this (quite stupid) problem for some time,
 but still can't get it to work the way I want it to.

 Situation:

 There are 2 tables, Fiber and Line. A Line consists of one (or maybe, in
 the future, more) Fiber.
 My definitions:

 fiberdb=# \d Fiber;
  Table public.Fiber
    Column    |         Type          |                       Modifiers
 -------------+-----------------------+----------------------------------------------------------
  FiberID     | integer               | not null default nextval('Fiber_FiberID_seq'::regclass)
  LineName    | character varying(50) |
  FiberUsed   | character(1)          |
 Indexes:
 PRI_Fiber PRIMARY KEY, btree (FiberID)
 FKI_Fiber_Line btree (LineName)
 fki_ btree (LineName)
 Foreign-key constraints:
 FK_Fiber_Line FOREIGN KEY (LineName) REFERENCES
 Line(LineName) ON UPDATE CASCADE ON DELETE RESTRICT

 fiberdb=# \d Line;
 Table public.Line
    Column   |         Type          | Modifiers
 ------------+-----------------------+-----------
  LineName   | character varying(50) | not null

 PRI_Line PRIMARY KEY, btree (LineName)



 As you can see, pretty straightforward.


 But in my code I'd like to search for all Fibers which belong to a
 group of Lines. The Lines are chosen by the user, and I'd like to
 filter these Fibers depending on some conditions (like FiberUsed =
 True).

 Because there might be a lot of branches for this search (as we add
 options to the search engine, more branches will be added to the code),
 I'd like to do the join separately at the head of the function
 and save the result of the join in some variable.

 This variable should be used in the branches of my code. This
 allows me to easily change the join conditions without  changing
 the join in 10 branches of my code.

 My current code is like this:

 def _get_list_Lines_between_Sites(self, site1, site2, LineDiscarded):
     query = models.Session.query(models.Line)
     if site2 == None:
         return query.filter(and_(
             models.Line.LineDiscarded == LineDiscarded,
             or_(models.Line.SiteLocationCodeA == site1,
                 models.Line.SiteLocationCodeB == site1)))
     else:
         return query.filter(and_(
             models.Line.LineDiscarded == LineDiscarded,
             or_(and_(models.Line.SiteLocationCodeA == site1,
                      models.Line.SiteLocationCodeB == site2),
                 and_(models.Line.SiteLocationCodeA == site2,
                      models.Line.SiteLocationCodeB == site1))))



 The query = ... line seems to be pretty invalid, and although I've tried
 about 10 alternatives (3 parameters for the join, using quotes, not
 using quotes, etc.), I can't get this to work.

 Would anybody mind giving me a hand on this one?


 Yours sincerely,

 Boudewijn Ector




[sqlalchemy] Re: Testing SQLAlchemy applications without the database

2009-01-18 Thread az

You want to simulate what SQLAlchemy is doing?
I'd put a facade layer between the app and SQLAlchemy, and then replace
SQLAlchemy with something else behind it. That needs a) tests that assure
the two facade implementations behave equivalently, and b) app tests that
do not rely on any particular persistency flavour.
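
A minimal sketch of such a facade (the class names, the methods and the
User mapped class are all made up for illustration):

class UserStore(object):
    """The interface the application codes against."""
    def add(self, user): raise NotImplementedError
    def find_by_name(self, name): raise NotImplementedError

class MemoryUserStore(UserStore):
    """In-memory implementation, for fast tests."""
    def __init__(self):
        self.users = []
    def add(self, user):
        self.users.append(user)
    def find_by_name(self, name):
        return [u for u in self.users if u.name == name]

class SAUserStore(UserStore):
    """SQLAlchemy-backed implementation, used for real."""
    def __init__(self, session):
        self.session = session
    def add(self, user):
        self.session.add(user)
    def find_by_name(self, name):
        return self.session.query(User).filter_by(name=name).all()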

In my bitemporal lib I have a set of tests that use a facade, and then I
plug in either the in-memory [plain lists] implementation or the
db implementation behind it. Each implementation has its own tests as
well, assuring the basic ops work. Still, getting all of these combined
to work was not very easy.


Another, unrelated way is to have pre-built sqlite db files on which to
run the tests (instead of building the db from scratch each time). This
may or may not be faster; it depends on which takes longer - the initial
insertions or the actual tests.

just some ideas..
svil

On Sunday 18 January 2009 21:00:12 Adam Dziendziel wrote:
 Hello,

 I'm working on a larger project which is using SQLAlchemy. We
 started with writing automated tests using an in-memory SQLite
 database and inserting test data in setUp() then emptying tables in
 tearDown(). Even though the in-memory SQLite is pretty fast, the
 process of inserting test data (sometimes a hundred of records)
 takes significant amount some time. Eventually, a test which should
 take 30ms, takes 300ms.

 One solution is probably to create a DAO layer with find_by..()
 methods on top of the SQLAlchemy layer and then write a DAO mock
 which stores records in an array. Another one would be to create a
 fake session which stores records in an array. The latter would
 probably be better because we are not creating another unnecessary
 layer, but harder to implement, because of the complex query API.

 How do you cope with this situation?

 Best regards,
 Adam
 





[sqlalchemy] Re: Selecting from a self-referential mapper: recursive joins?

2009-01-17 Thread az

You can look at my posts on the topic and all the related threads too (-:)
Search for recursive/recursion/graph/sets.

In short:

 - storage:
   - store the graph in pure form (nodes/edges only) - easy to
maintain/update, very difficult to query;
   - store some preprocessed (cached) denormalized form (all paths per
node, or all reachable nodes, or ...) - hard/expensive to
maintain/update, easy for some queries and hard for others - it depends
on the form used;

 - retrieving:
   - try making SQL do the recursive stuff for you - not very
scalable unless you use recursive SQL (??);
   - do the recursion yourself and load one level at a time - can be slow,
i.e. 10 levels = 10 queries;
   - load the whole thing and do all the math in memory - may or may not
be feasible.

10 nodes deep is a lot. I mean, if you store only pure nodes and edges
and need to find the paths/stuff related to A1, the query would be like:
 A1 and stuff-there
or
 A1 - A1' and stuff-there
or
 A1 - A1 - A1' and stuff-there
or ...
which makes all of these appear in the FROM clause...; you can make a
union of them to avoid the FROM explosion, but it still has to be expanded
beforehand, as plain SQL is not recursive (well, some SQL dialects are).

So you have to store the graph in a denormalized/cached form - nested sets
or whatever the other way was called; i.e. you do the recursive graph math
(using kjbuckets or similar) before saving, and save the intermediate
non-recursive (and possibly repeating) results. Depending on what queries
you need, you can/should store different things - e.g. storing all
reachable nodes per node enables a certain class of queries, and other
representations enable other queries - think of it as being able to move
in one direction but not another. Choosing a representation also depends
on the cost of updating it - adding one node or one edge may mean updating
99% of the data.

You can also store the graph in pure form and use SQL only as storage,
i.e. always load the whole graph, do the recursive math yourself, then
query the db for extra data according to the results. I have one working
case where this is much faster/easier than any other approach
(users/roles/modules/funcs and the permissions across each/all of these).
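
A tiny sketch of that last approach - a pure edge table, with the
reachability math done in Python (the table and column names are made up):

from sqlalchemy import MetaData, Table, Column, Integer, ForeignKey

metadata = MetaData()
nodes = Table('nodes', metadata, Column('id', Integer, primary_key=True))
edges = Table('edges', metadata,
    Column('parent_id', Integer, ForeignKey('nodes.id')),
    Column('child_id', Integer, ForeignKey('nodes.id')))

def reachable_from(conn, start_id):
    # one query loads the whole edge set; the recursion happens in memory
    children = {}
    for parent, child in conn.execute(edges.select()):
        children.setdefault(parent, set()).add(child)
    seen, stack = set(), [start_id]
    while stack:
        for child in children.get(stack.pop(), ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen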

Although SQL is about set math internally, that part of it isn't very
popular or well developed - apart from having union, intersection,
subtract etc. operators. At least I could not find a reasonable way
to make it do my set arithmetic... well, I'm not an SQL guru at all,
I just use it as the least evil ;-).

ciao
svilen
www.svilendobrev.com

On Saturday 17 January 2009 06:15:00 rich wrote:
 Eric,

 Any additional conclusions to recursively querying your dependency
 graph?  I just found SA, and have been wondering how feasible it
 would be to manage and query a graph structure somewhat similar to
 the one you've described.  My problem is that I have a number of
 classes that build expensive data attributes, each of which is one
 step to producing many possible different solutions.  The classes
 are organized so that all attributes built by that class depend on
 the same set of parameters, so that if I know the set of parameter
 values, I know that all of the data attributes built by that class
 are congruent.  As each class instance performs one or more
 transformations, it needs to be able to retrieve the correct data
 structures, i.e. those that have been built with the same parameter
 values.

 Currently I keep the data attributes in python shelves, which I
 plan on continue doing, as they are too large to stick in sql
 (numpy arrays of up to a Gb or two), and manage a half-baked sql db
 to track the class instances, their parameter values, and shelf
 file names.  I want to use SA to manage the shelf file names and
 directories by mirroring the dependency structure of the classes.

 The problem is that there are a bunch of parameters, and the more
 dependent the classes become (deep children in a graph), the more
 parameter values are needed to exactly define the data attributes. 
 In addition, I don't know the complete dependency structure until
 run- time, as some classes can depend on an arbitrary number of
 other classes (these other classes have different methods to
 produce the needed data).

 If SA could manage a dependency graph as I build the data
 attributes, I could then query it to check whether a needed data
 attribute with the right specifications already existed, and in
 which shelf in which directory (one shelf per class instance
 holding the keyed data attributes).  Each class instance would not
 need to store the parameter values of all the class instances it
 depended on (there may be scores of such parameter values), but
 only a small subset directly used by its methods, as a query could
 traverse the graph and check which parameter values had been used
 at each step.

 I thought if I made a simple class structure that mirrored my
 working classes that all inherited from a class that would serve as
 nodes in the graph, that I might be able 

[sqlalchemy] Re: Migrating objects across sessions

2009-01-17 Thread az

Read the various threads about copy/deep-copy.
In any case you'll have to do the hierarchy copying yourself - even if
moving one object (alone) from session to session somehow succeeds.

On Saturday 17 January 2009 14:47:41 Darren Govoni wrote:
 After some further experimenting with this, it seems I cannot take
 a mapped object I retrieved from one session and save it in
 another.

 Using expunge then adding the object to a second session on a
 different database does not work. I thought maybe it would.

 This is a really useful thing to do, especially for replication at
 the object layer.

 So is it true that this cannot currently be done with sqlalchemy? I
 have been reading the docs, but no clues yet.

 On Fri, 2009-01-16 at 16:53 -0500, Darren Govoni wrote:
  Hi,
   I have an object (with references) I get from a session on
  database A and I want to copy it (deep copy) to database B. I
  tried expunging it from the first session and adding it to the
  second. What's the best practice to do this?
 
  thanks!
  Darren




[sqlalchemy] Re: Migrating objects across sessions

2009-01-17 Thread az

 Yes, I'd like to do that, but because the code injection seems
 to carry state from the old session, the new session has trouble
 with even the copy.
Then separate the issues:
 a) make a working deep copy within the same session (this is not at all
trivial, except for very simple schemas);
 b) make a working move-across-sessions for a single object;
and only then combine the two.
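
A minimal sketch of (b), assuming both sessions map the same classes;
merge() copies the object's state into the target session (related objects
follow only per the relation cascades and what is already loaded):

obj = session_a.query(SomeClass).get(some_id)
session_a.expunge(obj)        # detach from the source session
copy = session_b.merge(obj)   # a pending copy in the target session
session_b.commit()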

 On Sat, 2009-01-17 at 15:23 +0200, a...@svilendobrev.com wrote:
  read various threads about copy/deep-copy.
  in any case u'll have to do the hierarchy-copying yourself - even
  if moving one object (alone) from session to session succeeds
  somehow.
 
  On Saturday 17 January 2009 14:47:41 Darren Govoni wrote:
   After some further experimenting with this, it seems I cannot
   take a mapped object I retrieved from one session and save it
   in another.
  
   Using expunge then adding the object to a second session on a
   different database does not work. I thought maybe it would.
  
   This is a really useful thing to do, especially for replication
   at the object layer.
  
   So is it true that this cannot currently be done with
   sqlalchemy? I have been reading the docs, but no clues yet.
  
   On Fri, 2009-01-16 at 16:53 -0500, Darren Govoni wrote:
Hi,
 I have an object (with references) I get from a session on
database A and I want to copy it (deep copy) to database B. I
tried expunging it from the first session and adding it to
the second. What's the best practice to do this?
   
thanks!
Darren

 





[sqlalchemy] Re: mysql udfs

2009-01-15 Thread az

Try func.yourfuncname(..)?
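
For example (a sketch; 'myengine' and the UDF name come from the quoted
post below):

from sqlalchemy import select, func

result = myengine.execute(select([func.get_fav_id('input')]))
print result.scalar()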

On Thursday 15 January 2009 03:18:51 clb wrote:
 Hi, I have been scouring the sqlalchemy 0.5 documentation and can't
 seem to find information on how to execute UDFs. I am dealing with
 the following scenario:

 1. I create a UDF (for example, get_fav_id, which requires a
 varchar input). I create it directly in the database with mysql's
 CREATE FUNCTION.

 2. I wish to execute it via sqlalchemy. I can execute statements
 such as:
 myengine.execute("select * from tablename;")

 Yet when I try to use the UDF (which is valid sql as far as mysql
 is concerned), nothing is returned:
 myengine.execute("select get_fav_id('input');")

 Any help would be appreciated.
 Thanks.



